Nov 29 01:15:55 np0005539550 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 01:15:55 np0005539550 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 01:15:55 np0005539550 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:55 np0005539550 kernel: BIOS-provided physical RAM map:
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 01:15:55 np0005539550 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 29 01:15:55 np0005539550 kernel: NX (Execute Disable) protection: active
Nov 29 01:15:55 np0005539550 kernel: APIC: Static calls initialized
Nov 29 01:15:55 np0005539550 kernel: SMBIOS 2.8 present.
Nov 29 01:15:55 np0005539550 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 01:15:55 np0005539550 kernel: Hypervisor detected: KVM
Nov 29 01:15:55 np0005539550 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 01:15:55 np0005539550 kernel: kvm-clock: using sched offset of 4066296510 cycles
Nov 29 01:15:55 np0005539550 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 01:15:55 np0005539550 kernel: tsc: Detected 2800.000 MHz processor
Nov 29 01:15:55 np0005539550 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 01:15:55 np0005539550 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 01:15:55 np0005539550 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 01:15:55 np0005539550 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 01:15:55 np0005539550 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 01:15:55 np0005539550 kernel: Using GB pages for direct mapping
Nov 29 01:15:55 np0005539550 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 01:15:55 np0005539550 kernel: ACPI: Early table checksum verification disabled
Nov 29 01:15:55 np0005539550 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 01:15:55 np0005539550 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539550 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539550 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539550 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 01:15:55 np0005539550 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539550 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 01:15:55 np0005539550 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 01:15:55 np0005539550 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 01:15:55 np0005539550 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 01:15:55 np0005539550 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 01:15:55 np0005539550 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 01:15:55 np0005539550 kernel: No NUMA configuration found
Nov 29 01:15:55 np0005539550 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 01:15:55 np0005539550 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 29 01:15:55 np0005539550 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 29 01:15:55 np0005539550 kernel: Zone ranges:
Nov 29 01:15:55 np0005539550 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 01:15:55 np0005539550 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 01:15:55 np0005539550 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:15:55 np0005539550 kernel:  Device   empty
Nov 29 01:15:55 np0005539550 kernel: Movable zone start for each node
Nov 29 01:15:55 np0005539550 kernel: Early memory node ranges
Nov 29 01:15:55 np0005539550 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 01:15:55 np0005539550 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 01:15:55 np0005539550 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 01:15:55 np0005539550 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 01:15:55 np0005539550 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 01:15:55 np0005539550 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 01:15:55 np0005539550 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 01:15:55 np0005539550 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 01:15:55 np0005539550 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 01:15:55 np0005539550 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 01:15:55 np0005539550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 01:15:55 np0005539550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 01:15:55 np0005539550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 01:15:55 np0005539550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 01:15:55 np0005539550 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 01:15:55 np0005539550 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 01:15:55 np0005539550 kernel: TSC deadline timer available
Nov 29 01:15:55 np0005539550 kernel: CPU topo: Max. logical packages:   8
Nov 29 01:15:55 np0005539550 kernel: CPU topo: Max. logical dies:       8
Nov 29 01:15:55 np0005539550 kernel: CPU topo: Max. dies per package:   1
Nov 29 01:15:55 np0005539550 kernel: CPU topo: Max. threads per core:   1
Nov 29 01:15:55 np0005539550 kernel: CPU topo: Num. cores per package:     1
Nov 29 01:15:55 np0005539550 kernel: CPU topo: Num. threads per package:   1
Nov 29 01:15:55 np0005539550 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 01:15:55 np0005539550 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 01:15:55 np0005539550 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 01:15:55 np0005539550 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 01:15:55 np0005539550 kernel: Booting paravirtualized kernel on KVM
Nov 29 01:15:55 np0005539550 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 01:15:55 np0005539550 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 01:15:55 np0005539550 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 01:15:55 np0005539550 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 01:15:55 np0005539550 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:55 np0005539550 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 01:15:55 np0005539550 kernel: random: crng init done
Nov 29 01:15:55 np0005539550 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: Fallback order for Node 0: 0 
Nov 29 01:15:55 np0005539550 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 01:15:55 np0005539550 kernel: Policy zone: Normal
Nov 29 01:15:55 np0005539550 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 01:15:55 np0005539550 kernel: software IO TLB: area num 8.
Nov 29 01:15:55 np0005539550 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 01:15:55 np0005539550 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 01:15:55 np0005539550 kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 01:15:55 np0005539550 kernel: Dynamic Preempt: voluntary
Nov 29 01:15:55 np0005539550 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 01:15:55 np0005539550 kernel: rcu: 	RCU event tracing is enabled.
Nov 29 01:15:55 np0005539550 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 01:15:55 np0005539550 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 29 01:15:55 np0005539550 kernel: 	Rude variant of Tasks RCU enabled.
Nov 29 01:15:55 np0005539550 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 29 01:15:55 np0005539550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 01:15:55 np0005539550 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 01:15:55 np0005539550 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:55 np0005539550 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:55 np0005539550 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 01:15:55 np0005539550 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 01:15:55 np0005539550 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 01:15:55 np0005539550 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 01:15:55 np0005539550 kernel: Console: colour VGA+ 80x25
Nov 29 01:15:55 np0005539550 kernel: printk: console [ttyS0] enabled
Nov 29 01:15:55 np0005539550 kernel: ACPI: Core revision 20230331
Nov 29 01:15:55 np0005539550 kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 01:15:55 np0005539550 kernel: x2apic enabled
Nov 29 01:15:55 np0005539550 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 01:15:55 np0005539550 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 01:15:55 np0005539550 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 29 01:15:55 np0005539550 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 01:15:55 np0005539550 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 01:15:55 np0005539550 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 01:15:55 np0005539550 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 01:15:55 np0005539550 kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 01:15:55 np0005539550 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 01:15:55 np0005539550 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 01:15:55 np0005539550 kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 01:15:55 np0005539550 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 01:15:55 np0005539550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 01:15:55 np0005539550 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 01:15:55 np0005539550 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 01:15:55 np0005539550 kernel: x86/bugs: return thunk changed
Nov 29 01:15:55 np0005539550 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 01:15:55 np0005539550 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 01:15:55 np0005539550 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 01:15:55 np0005539550 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 01:15:55 np0005539550 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 01:15:55 np0005539550 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 01:15:55 np0005539550 kernel: Freeing SMP alternatives memory: 40K
Nov 29 01:15:55 np0005539550 kernel: pid_max: default: 32768 minimum: 301
Nov 29 01:15:55 np0005539550 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 01:15:55 np0005539550 kernel: landlock: Up and running.
Nov 29 01:15:55 np0005539550 kernel: Yama: becoming mindful.
Nov 29 01:15:55 np0005539550 kernel: SELinux:  Initializing.
Nov 29 01:15:55 np0005539550 kernel: LSM support for eBPF active
Nov 29 01:15:55 np0005539550 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 01:15:55 np0005539550 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 01:15:55 np0005539550 kernel: ... version:                0
Nov 29 01:15:55 np0005539550 kernel: ... bit width:              48
Nov 29 01:15:55 np0005539550 kernel: ... generic registers:      6
Nov 29 01:15:55 np0005539550 kernel: ... value mask:             0000ffffffffffff
Nov 29 01:15:55 np0005539550 kernel: ... max period:             00007fffffffffff
Nov 29 01:15:55 np0005539550 kernel: ... fixed-purpose events:   0
Nov 29 01:15:55 np0005539550 kernel: ... event mask:             000000000000003f
Nov 29 01:15:55 np0005539550 kernel: signal: max sigframe size: 1776
Nov 29 01:15:55 np0005539550 kernel: rcu: Hierarchical SRCU implementation.
Nov 29 01:15:55 np0005539550 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 29 01:15:55 np0005539550 kernel: smp: Bringing up secondary CPUs ...
Nov 29 01:15:55 np0005539550 kernel: smpboot: x86: Booting SMP configuration:
Nov 29 01:15:55 np0005539550 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 01:15:55 np0005539550 kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 01:15:55 np0005539550 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 29 01:15:55 np0005539550 kernel: node 0 deferred pages initialised in 32ms
Nov 29 01:15:55 np0005539550 kernel: Memory: 7765832K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616272K reserved, 0K cma-reserved)
Nov 29 01:15:55 np0005539550 kernel: devtmpfs: initialized
Nov 29 01:15:55 np0005539550 kernel: x86/mm: Memory block size: 128MB
Nov 29 01:15:55 np0005539550 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 01:15:55 np0005539550 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 01:15:55 np0005539550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 01:15:55 np0005539550 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 01:15:55 np0005539550 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 01:15:55 np0005539550 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 01:15:55 np0005539550 kernel: audit: initializing netlink subsys (disabled)
Nov 29 01:15:55 np0005539550 kernel: audit: type=2000 audit(1764396952.068:1): state=initialized audit_enabled=0 res=1
Nov 29 01:15:55 np0005539550 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 01:15:55 np0005539550 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 01:15:55 np0005539550 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 01:15:55 np0005539550 kernel: cpuidle: using governor menu
Nov 29 01:15:55 np0005539550 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 01:15:55 np0005539550 kernel: PCI: Using configuration type 1 for base access
Nov 29 01:15:55 np0005539550 kernel: PCI: Using configuration type 1 for extended access
Nov 29 01:15:55 np0005539550 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 01:15:55 np0005539550 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 01:15:55 np0005539550 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 01:15:55 np0005539550 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 01:15:55 np0005539550 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 29 01:15:55 np0005539550 kernel: Demotion targets for Node 0: null
Nov 29 01:15:55 np0005539550 kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 01:15:55 np0005539550 kernel: ACPI: Added _OSI(Module Device)
Nov 29 01:15:55 np0005539550 kernel: ACPI: Added _OSI(Processor Device)
Nov 29 01:15:55 np0005539550 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 01:15:55 np0005539550 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 01:15:55 np0005539550 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 01:15:55 np0005539550 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 01:15:55 np0005539550 kernel: ACPI: Interpreter enabled
Nov 29 01:15:55 np0005539550 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 01:15:55 np0005539550 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 01:15:55 np0005539550 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 01:15:55 np0005539550 kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 01:15:55 np0005539550 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 01:15:55 np0005539550 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 01:15:55 np0005539550 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [3] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [4] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [5] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [6] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [7] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [8] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [9] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [10] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [11] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [12] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [13] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [14] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [15] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [16] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [17] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [18] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [19] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [20] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [21] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [22] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [23] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [24] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [25] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [26] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [27] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [28] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [29] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [30] registered
Nov 29 01:15:55 np0005539550 kernel: acpiphp: Slot [31] registered
Nov 29 01:15:55 np0005539550 kernel: PCI host bridge to bus 0000:00
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 01:15:55 np0005539550 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 01:15:55 np0005539550 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 01:15:55 np0005539550 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 01:15:55 np0005539550 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 01:15:55 np0005539550 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 01:15:55 np0005539550 kernel: iommu: Default domain type: Translated
Nov 29 01:15:55 np0005539550 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 01:15:55 np0005539550 kernel: SCSI subsystem initialized
Nov 29 01:15:55 np0005539550 kernel: ACPI: bus type USB registered
Nov 29 01:15:55 np0005539550 kernel: usbcore: registered new interface driver usbfs
Nov 29 01:15:55 np0005539550 kernel: usbcore: registered new interface driver hub
Nov 29 01:15:55 np0005539550 kernel: usbcore: registered new device driver usb
Nov 29 01:15:55 np0005539550 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 01:15:55 np0005539550 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 01:15:55 np0005539550 kernel: PTP clock support registered
Nov 29 01:15:55 np0005539550 kernel: EDAC MC: Ver: 3.0.0
Nov 29 01:15:55 np0005539550 kernel: NetLabel: Initializing
Nov 29 01:15:55 np0005539550 kernel: NetLabel:  domain hash size = 128
Nov 29 01:15:55 np0005539550 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 01:15:55 np0005539550 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 01:15:55 np0005539550 kernel: PCI: Using ACPI for IRQ routing
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 01:15:55 np0005539550 kernel: vgaarb: loaded
Nov 29 01:15:55 np0005539550 kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 01:15:55 np0005539550 kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 01:15:55 np0005539550 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 01:15:55 np0005539550 kernel: pnp: PnP ACPI init
Nov 29 01:15:55 np0005539550 kernel: pnp: PnP ACPI: found 5 devices
Nov 29 01:15:55 np0005539550 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 01:15:55 np0005539550 kernel: NET: Registered PF_INET protocol family
Nov 29 01:15:55 np0005539550 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 01:15:55 np0005539550 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 01:15:55 np0005539550 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 01:15:55 np0005539550 kernel: NET: Registered PF_XDP protocol family
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 01:15:55 np0005539550 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 01:15:55 np0005539550 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 01:15:55 np0005539550 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74089 usecs
Nov 29 01:15:55 np0005539550 kernel: PCI: CLS 0 bytes, default 64
Nov 29 01:15:55 np0005539550 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 01:15:55 np0005539550 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 01:15:55 np0005539550 kernel: Trying to unpack rootfs image as initramfs...
Nov 29 01:15:55 np0005539550 kernel: ACPI: bus type thunderbolt registered
Nov 29 01:15:55 np0005539550 kernel: Initialise system trusted keyrings
Nov 29 01:15:55 np0005539550 kernel: Key type blacklist registered
Nov 29 01:15:55 np0005539550 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 01:15:55 np0005539550 kernel: zbud: loaded
Nov 29 01:15:55 np0005539550 kernel: integrity: Platform Keyring initialized
Nov 29 01:15:55 np0005539550 kernel: integrity: Machine keyring initialized
Nov 29 01:15:55 np0005539550 kernel: Freeing initrd memory: 85868K
Nov 29 01:15:55 np0005539550 kernel: NET: Registered PF_ALG protocol family
Nov 29 01:15:55 np0005539550 kernel: xor: automatically using best checksumming function   avx       
Nov 29 01:15:55 np0005539550 kernel: Key type asymmetric registered
Nov 29 01:15:55 np0005539550 kernel: Asymmetric key parser 'x509' registered
Nov 29 01:15:55 np0005539550 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 01:15:55 np0005539550 kernel: io scheduler mq-deadline registered
Nov 29 01:15:55 np0005539550 kernel: io scheduler kyber registered
Nov 29 01:15:55 np0005539550 kernel: io scheduler bfq registered
Nov 29 01:15:55 np0005539550 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 01:15:55 np0005539550 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 01:15:55 np0005539550 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 01:15:55 np0005539550 kernel: ACPI: button: Power Button [PWRF]
Nov 29 01:15:55 np0005539550 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 01:15:55 np0005539550 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 01:15:55 np0005539550 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 01:15:55 np0005539550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 01:15:55 np0005539550 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 01:15:55 np0005539550 kernel: Non-volatile memory driver v1.3
Nov 29 01:15:55 np0005539550 kernel: rdac: device handler registered
Nov 29 01:15:55 np0005539550 kernel: hp_sw: device handler registered
Nov 29 01:15:55 np0005539550 kernel: emc: device handler registered
Nov 29 01:15:55 np0005539550 kernel: alua: device handler registered
Nov 29 01:15:55 np0005539550 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 01:15:55 np0005539550 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 01:15:55 np0005539550 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 01:15:55 np0005539550 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 01:15:55 np0005539550 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 01:15:55 np0005539550 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 01:15:55 np0005539550 kernel: usb usb1: Product: UHCI Host Controller
Nov 29 01:15:55 np0005539550 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 01:15:55 np0005539550 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 01:15:55 np0005539550 kernel: hub 1-0:1.0: USB hub found
Nov 29 01:15:55 np0005539550 kernel: hub 1-0:1.0: 2 ports detected
Nov 29 01:15:55 np0005539550 kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 01:15:55 np0005539550 kernel: usbserial: USB Serial support registered for generic
Nov 29 01:15:55 np0005539550 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 01:15:55 np0005539550 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 01:15:55 np0005539550 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 01:15:55 np0005539550 kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 01:15:55 np0005539550 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 01:15:55 np0005539550 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 01:15:55 np0005539550 kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 01:15:55 np0005539550 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T06:15:54 UTC (1764396954)
Nov 29 01:15:55 np0005539550 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 29 01:15:55 np0005539550 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 01:15:55 np0005539550 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 01:15:55 np0005539550 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 01:15:55 np0005539550 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 01:15:55 np0005539550 kernel: usbcore: registered new interface driver usbhid
Nov 29 01:15:55 np0005539550 kernel: usbhid: USB HID core driver
Nov 29 01:15:55 np0005539550 kernel: drop_monitor: Initializing network drop monitor service
Nov 29 01:15:55 np0005539550 kernel: Initializing XFRM netlink socket
Nov 29 01:15:55 np0005539550 kernel: NET: Registered PF_INET6 protocol family
Nov 29 01:15:55 np0005539550 kernel: Segment Routing with IPv6
Nov 29 01:15:55 np0005539550 kernel: NET: Registered PF_PACKET protocol family
Nov 29 01:15:55 np0005539550 kernel: mpls_gso: MPLS GSO support
Nov 29 01:15:55 np0005539550 kernel: IPI shorthand broadcast: enabled
Nov 29 01:15:55 np0005539550 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 01:15:55 np0005539550 kernel: AES CTR mode by8 optimization enabled
Nov 29 01:15:55 np0005539550 kernel: sched_clock: Marking stable (2681001610, 157556510)->(3050211940, -211653820)
Nov 29 01:15:55 np0005539550 kernel: registered taskstats version 1
Nov 29 01:15:55 np0005539550 kernel: Loading compiled-in X.509 certificates
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 01:15:55 np0005539550 kernel: Demotion targets for Node 0: null
Nov 29 01:15:55 np0005539550 kernel: page_owner is disabled
Nov 29 01:15:55 np0005539550 kernel: Key type .fscrypt registered
Nov 29 01:15:55 np0005539550 kernel: Key type fscrypt-provisioning registered
Nov 29 01:15:55 np0005539550 kernel: Key type big_key registered
Nov 29 01:15:55 np0005539550 kernel: Key type encrypted registered
Nov 29 01:15:55 np0005539550 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 01:15:55 np0005539550 kernel: Loading compiled-in module X.509 certificates
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 01:15:55 np0005539550 kernel: ima: Allocated hash algorithm: sha256
Nov 29 01:15:55 np0005539550 kernel: ima: No architecture policies found
Nov 29 01:15:55 np0005539550 kernel: evm: Initialising EVM extended attributes:
Nov 29 01:15:55 np0005539550 kernel: evm: security.selinux
Nov 29 01:15:55 np0005539550 kernel: evm: security.SMACK64 (disabled)
Nov 29 01:15:55 np0005539550 kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 01:15:55 np0005539550 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 01:15:55 np0005539550 kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 01:15:55 np0005539550 kernel: evm: security.apparmor (disabled)
Nov 29 01:15:55 np0005539550 kernel: evm: security.ima
Nov 29 01:15:55 np0005539550 kernel: evm: security.capability
Nov 29 01:15:55 np0005539550 kernel: evm: HMAC attrs: 0x1
Nov 29 01:15:55 np0005539550 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 01:15:55 np0005539550 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 01:15:55 np0005539550 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 01:15:55 np0005539550 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 01:15:55 np0005539550 kernel: usb 1-1: Manufacturer: QEMU
Nov 29 01:15:55 np0005539550 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 01:15:55 np0005539550 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 01:15:55 np0005539550 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 01:15:55 np0005539550 kernel: Running certificate verification RSA selftest
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 01:15:55 np0005539550 kernel: Running certificate verification ECDSA selftest
Nov 29 01:15:55 np0005539550 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 01:15:55 np0005539550 kernel: clk: Disabling unused clocks
Nov 29 01:15:55 np0005539550 kernel: Freeing unused decrypted memory: 2028K
Nov 29 01:15:55 np0005539550 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 01:15:55 np0005539550 kernel: Write protecting the kernel read-only data: 30720k
Nov 29 01:15:55 np0005539550 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 01:15:55 np0005539550 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 01:15:55 np0005539550 kernel: Run /init as init process
Nov 29 01:15:55 np0005539550 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:15:55 np0005539550 systemd: Detected virtualization kvm.
Nov 29 01:15:55 np0005539550 systemd: Detected architecture x86-64.
Nov 29 01:15:55 np0005539550 systemd: Running in initrd.
Nov 29 01:15:55 np0005539550 systemd: No hostname configured, using default hostname.
Nov 29 01:15:55 np0005539550 systemd: Hostname set to <localhost>.
Nov 29 01:15:55 np0005539550 systemd: Initializing machine ID from VM UUID.
Nov 29 01:15:55 np0005539550 systemd: Queued start job for default target Initrd Default Target.
Nov 29 01:15:55 np0005539550 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:55 np0005539550 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:15:55 np0005539550 systemd: Reached target Initrd /usr File System.
Nov 29 01:15:55 np0005539550 systemd: Reached target Local File Systems.
Nov 29 01:15:55 np0005539550 systemd: Reached target Path Units.
Nov 29 01:15:55 np0005539550 systemd: Reached target Slice Units.
Nov 29 01:15:55 np0005539550 systemd: Reached target Swaps.
Nov 29 01:15:55 np0005539550 systemd: Reached target Timer Units.
Nov 29 01:15:55 np0005539550 systemd: Listening on D-Bus System Message Bus Socket.
Nov 29 01:15:55 np0005539550 systemd: Listening on Journal Socket (/dev/log).
Nov 29 01:15:55 np0005539550 systemd: Listening on Journal Socket.
Nov 29 01:15:55 np0005539550 systemd: Listening on udev Control Socket.
Nov 29 01:15:55 np0005539550 systemd: Listening on udev Kernel Socket.
Nov 29 01:15:55 np0005539550 systemd: Reached target Socket Units.
Nov 29 01:15:55 np0005539550 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:15:55 np0005539550 systemd: Starting Journal Service...
Nov 29 01:15:55 np0005539550 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:15:55 np0005539550 systemd: Starting Apply Kernel Variables...
Nov 29 01:15:55 np0005539550 systemd: Starting Create System Users...
Nov 29 01:15:55 np0005539550 systemd: Starting Setup Virtual Console...
Nov 29 01:15:55 np0005539550 systemd: Finished Create List of Static Device Nodes.
Nov 29 01:15:55 np0005539550 systemd: Finished Create System Users.
Nov 29 01:15:55 np0005539550 systemd-journald[313]: Journal started
Nov 29 01:15:55 np0005539550 systemd-journald[313]: Runtime Journal (/run/log/journal/9851e351ef5d4a0c9f85d561f6a4210f) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:15:55 np0005539550 systemd-sysusers[318]: Creating group 'users' with GID 100.
Nov 29 01:15:55 np0005539550 systemd-sysusers[318]: Creating group 'dbus' with GID 81.
Nov 29 01:15:55 np0005539550 systemd-sysusers[318]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 01:15:55 np0005539550 systemd: Started Journal Service.
Nov 29 01:15:55 np0005539550 systemd[1]: Finished Apply Kernel Variables.
Nov 29 01:15:55 np0005539550 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:15:55 np0005539550 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:15:55 np0005539550 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:15:55 np0005539550 systemd[1]: Finished Setup Virtual Console.
Nov 29 01:15:55 np0005539550 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 01:15:55 np0005539550 systemd[1]: Starting dracut cmdline hook...
Nov 29 01:15:55 np0005539550 dracut-cmdline[334]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 01:15:55 np0005539550 dracut-cmdline[334]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 01:15:55 np0005539550 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:15:55 np0005539550 systemd[1]: Finished dracut cmdline hook.
Nov 29 01:15:55 np0005539550 systemd[1]: Starting dracut pre-udev hook...
Nov 29 01:15:56 np0005539550 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 01:15:56 np0005539550 kernel: device-mapper: uevent: version 1.0.3
Nov 29 01:15:56 np0005539550 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 01:15:56 np0005539550 kernel: RPC: Registered named UNIX socket transport module.
Nov 29 01:15:56 np0005539550 kernel: RPC: Registered udp transport module.
Nov 29 01:15:56 np0005539550 kernel: RPC: Registered tcp transport module.
Nov 29 01:15:56 np0005539550 kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 01:15:56 np0005539550 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 01:15:56 np0005539550 rpc.statd[451]: Version 2.5.4 starting
Nov 29 01:15:56 np0005539550 rpc.statd[451]: Initializing NSM state
Nov 29 01:15:56 np0005539550 rpc.idmapd[456]: Setting log level to 0
Nov 29 01:15:56 np0005539550 systemd[1]: Finished dracut pre-udev hook.
Nov 29 01:15:56 np0005539550 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:15:56 np0005539550 systemd-udevd[469]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:15:56 np0005539550 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:15:56 np0005539550 systemd[1]: Starting dracut pre-trigger hook...
Nov 29 01:15:56 np0005539550 systemd[1]: Finished dracut pre-trigger hook.
Nov 29 01:15:56 np0005539550 systemd[1]: Starting Coldplug All udev Devices...
Nov 29 01:15:56 np0005539550 systemd[1]: Created slice Slice /system/modprobe.
Nov 29 01:15:56 np0005539550 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:15:56 np0005539550 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:15:56 np0005539550 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:15:56 np0005539550 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:15:56 np0005539550 systemd[1]: Mounting Kernel Configuration File System...
Nov 29 01:15:56 np0005539550 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:15:56 np0005539550 systemd[1]: Reached target Network.
Nov 29 01:15:56 np0005539550 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 01:15:56 np0005539550 systemd[1]: Starting dracut initqueue hook...
Nov 29 01:15:56 np0005539550 systemd[1]: Mounted Kernel Configuration File System.
Nov 29 01:15:56 np0005539550 systemd[1]: Reached target System Initialization.
Nov 29 01:15:56 np0005539550 systemd[1]: Reached target Basic System.
Nov 29 01:15:56 np0005539550 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 01:15:56 np0005539550 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 01:15:56 np0005539550 kernel: vda: vda1
Nov 29 01:15:56 np0005539550 kernel: scsi host0: ata_piix
Nov 29 01:15:56 np0005539550 kernel: scsi host1: ata_piix
Nov 29 01:15:56 np0005539550 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 01:15:56 np0005539550 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 01:15:56 np0005539550 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:15:56 np0005539550 systemd[1]: Reached target Initrd Root Device.
Nov 29 01:15:56 np0005539550 kernel: ata1: found unknown device (class 0)
Nov 29 01:15:56 np0005539550 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 01:15:56 np0005539550 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 01:15:56 np0005539550 systemd-udevd[483]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:15:56 np0005539550 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 01:15:56 np0005539550 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 01:15:56 np0005539550 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 01:15:57 np0005539550 systemd[1]: Finished dracut initqueue hook.
Nov 29 01:15:57 np0005539550 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:15:57 np0005539550 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 01:15:57 np0005539550 systemd[1]: Reached target Remote File Systems.
Nov 29 01:15:57 np0005539550 systemd[1]: Starting dracut pre-mount hook...
Nov 29 01:15:57 np0005539550 systemd[1]: Finished dracut pre-mount hook.
Nov 29 01:15:57 np0005539550 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 01:15:57 np0005539550 systemd-fsck[561]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 01:15:57 np0005539550 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 01:15:57 np0005539550 systemd[1]: Mounting /sysroot...
Nov 29 01:15:57 np0005539550 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 01:15:57 np0005539550 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 01:15:58 np0005539550 kernel: XFS (vda1): Ending clean mount
Nov 29 01:15:58 np0005539550 systemd[1]: Mounted /sysroot.
Nov 29 01:15:58 np0005539550 systemd[1]: Reached target Initrd Root File System.
Nov 29 01:15:58 np0005539550 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 01:15:58 np0005539550 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 01:15:58 np0005539550 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 01:15:58 np0005539550 systemd[1]: Reached target Initrd File Systems.
Nov 29 01:15:58 np0005539550 systemd[1]: Reached target Initrd Default Target.
Nov 29 01:15:58 np0005539550 systemd[1]: Starting dracut mount hook...
Nov 29 01:15:59 np0005539550 systemd[1]: Finished dracut mount hook.
Nov 29 01:15:59 np0005539550 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 01:15:59 np0005539550 rpc.idmapd[456]: exiting on signal 15
Nov 29 01:15:59 np0005539550 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 01:15:59 np0005539550 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Network.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Timer Units.
Nov 29 01:15:59 np0005539550 systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 01:15:59 np0005539550 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Initrd Default Target.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Basic System.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Initrd Root Device.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Initrd /usr File System.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Path Units.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Remote File Systems.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Slice Units.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Socket Units.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target System Initialization.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Local File Systems.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Swaps.
Nov 29 01:15:59 np0005539550 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped dracut mount hook.
Nov 29 01:15:59 np0005539550 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped dracut pre-mount hook.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 01:15:59 np0005539550 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped dracut initqueue hook.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 01:15:59 np0005539550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Setup Virtual Console.
Nov 29 01:15:59 np0005539550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Closed udev Control Socket.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Closed udev Kernel Socket.
Nov 29 01:15:59 np0005539550 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped dracut pre-udev hook.
Nov 29 01:15:59 np0005539550 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped dracut cmdline hook.
Nov 29 01:15:59 np0005539550 systemd[1]: Starting Cleanup udev Database...
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 01:15:59 np0005539550 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 01:15:59 np0005539550 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Stopped Create System Users.
Nov 29 01:15:59 np0005539550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 01:15:59 np0005539550 systemd[1]: Finished Cleanup udev Database.
Nov 29 01:15:59 np0005539550 systemd[1]: Reached target Switch Root.
Nov 29 01:15:59 np0005539550 systemd[1]: Starting Switch Root...
Nov 29 01:15:59 np0005539550 systemd[1]: Switching root.
Nov 29 01:15:59 np0005539550 systemd-journald[313]: Journal stopped
Nov 29 01:16:00 np0005539550 systemd-journald: Received SIGTERM from PID 1 (systemd).
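
These two lines mark the initramfs-to-real-root handoff: PID 1 stops the initrd's journald and switches root, after which logging resumes from the installed system (journald comes back as PID 682 further down). If the handoff needs reviewing after the fact, monotonic timestamps make it easy to locate; a quick sketch:

    # this boot's log with seconds-since-boot timestamps; the handoff sits
    # around the "Switching root." message
    journalctl -b -o short-monotonic | grep -n -C2 "Switching root"
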
Nov 29 01:16:00 np0005539550 kernel: audit: type=1404 audit(1764396959.413:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 01:16:00 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:16:00 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:16:00 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:16:00 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:16:00 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:16:00 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:16:00 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:16:00 np0005539550 kernel: audit: type=1403 audit(1764396959.552:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 29 01:16:00 np0005539550 systemd: Successfully loaded SELinux policy in 142.620ms.
Nov 29 01:16:00 np0005539550 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 36.639ms.
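
The audit records above (type=1404 with enforcing=1, then type=1403 for the policy load) show SELinux entering enforcing mode before any ordinary service starts, followed by systemd's relabel of the early mounts. Assuming shell access to the host, the resulting state is verifiable with the stock tools:

    getenforce    # expected: Enforcing
    sestatus      # loaded policy name, current mode, policy version
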
Nov 29 01:16:00 np0005539550 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 01:16:00 np0005539550 systemd: Detected virtualization kvm.
Nov 29 01:16:00 np0005539550 systemd: Detected architecture x86-64.
Nov 29 01:16:00 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:16:00 np0005539550 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 29 01:16:00 np0005539550 systemd: Stopped Switch Root.
Nov 29 01:16:00 np0005539550 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 01:16:00 np0005539550 systemd: Created slice Slice /system/getty.
Nov 29 01:16:00 np0005539550 systemd: Created slice Slice /system/serial-getty.
Nov 29 01:16:00 np0005539550 systemd: Created slice Slice /system/sshd-keygen.
Nov 29 01:16:00 np0005539550 systemd: Created slice User and Session Slice.
Nov 29 01:16:00 np0005539550 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 01:16:00 np0005539550 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 29 01:16:00 np0005539550 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 01:16:00 np0005539550 systemd: Reached target Local Encrypted Volumes.
Nov 29 01:16:00 np0005539550 systemd: Stopped target Switch Root.
Nov 29 01:16:00 np0005539550 systemd: Stopped target Initrd File Systems.
Nov 29 01:16:00 np0005539550 systemd: Stopped target Initrd Root File System.
Nov 29 01:16:00 np0005539550 systemd: Reached target Local Integrity Protected Volumes.
Nov 29 01:16:00 np0005539550 systemd: Reached target Path Units.
Nov 29 01:16:00 np0005539550 systemd: Reached target rpc_pipefs.target.
Nov 29 01:16:00 np0005539550 systemd: Reached target Slice Units.
Nov 29 01:16:00 np0005539550 systemd: Reached target Swaps.
Nov 29 01:16:00 np0005539550 systemd: Reached target Local Verity Protected Volumes.
Nov 29 01:16:00 np0005539550 systemd: Listening on RPCbind Server Activation Socket.
Nov 29 01:16:00 np0005539550 systemd: Reached target RPC Port Mapper.
Nov 29 01:16:00 np0005539550 systemd: Listening on Process Core Dump Socket.
Nov 29 01:16:00 np0005539550 systemd: Listening on initctl Compatibility Named Pipe.
Nov 29 01:16:00 np0005539550 systemd: Listening on udev Control Socket.
Nov 29 01:16:00 np0005539550 systemd: Listening on udev Kernel Socket.
Nov 29 01:16:00 np0005539550 systemd: Mounting Huge Pages File System...
Nov 29 01:16:00 np0005539550 systemd: Mounting POSIX Message Queue File System...
Nov 29 01:16:00 np0005539550 systemd: Mounting Kernel Debug File System...
Nov 29 01:16:00 np0005539550 systemd: Mounting Kernel Trace File System...
Nov 29 01:16:00 np0005539550 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:16:00 np0005539550 systemd: Starting Create List of Static Device Nodes...
Nov 29 01:16:00 np0005539550 systemd: Starting Load Kernel Module configfs...
Nov 29 01:16:00 np0005539550 systemd: Starting Load Kernel Module drm...
Nov 29 01:16:00 np0005539550 systemd: Starting Load Kernel Module efi_pstore...
Nov 29 01:16:00 np0005539550 systemd: Starting Load Kernel Module fuse...
Nov 29 01:16:00 np0005539550 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 01:16:00 np0005539550 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 29 01:16:00 np0005539550 systemd: Stopped File System Check on Root Device.
Nov 29 01:16:00 np0005539550 systemd: Stopped Journal Service.
Nov 29 01:16:00 np0005539550 kernel: fuse: init (API version 7.37)
Nov 29 01:16:00 np0005539550 systemd: Starting Journal Service...
Nov 29 01:16:00 np0005539550 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 01:16:00 np0005539550 systemd: Starting Generate network units from Kernel command line...
Nov 29 01:16:00 np0005539550 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:16:00 np0005539550 systemd: Starting Remount Root and Kernel File Systems...
Nov 29 01:16:00 np0005539550 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 01:16:00 np0005539550 systemd: Starting Apply Kernel Variables...
Nov 29 01:16:00 np0005539550 systemd: Starting Coldplug All udev Devices...
Nov 29 01:16:00 np0005539550 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 01:16:00 np0005539550 systemd-journald[682]: Journal started
Nov 29 01:16:00 np0005539550 systemd-journald[682]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:16:00 np0005539550 systemd[1]: Queued start job for default target Multi-User System.
Nov 29 01:16:00 np0005539550 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 01:16:00 np0005539550 systemd: Mounted Huge Pages File System.
Nov 29 01:16:00 np0005539550 systemd: Started Journal Service.
Nov 29 01:16:00 np0005539550 systemd[1]: Mounted POSIX Message Queue File System.
Nov 29 01:16:00 np0005539550 systemd[1]: Mounted Kernel Debug File System.
Nov 29 01:16:00 np0005539550 systemd[1]: Mounted Kernel Trace File System.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 01:16:00 np0005539550 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:16:00 np0005539550 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 01:16:00 np0005539550 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Load Kernel Module fuse.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 01:16:00 np0005539550 kernel: ACPI: bus type drm_connector registered
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 01:16:00 np0005539550 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Load Kernel Module drm.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Apply Kernel Variables.
Nov 29 01:16:00 np0005539550 systemd[1]: Mounting FUSE Control File System...
Nov 29 01:16:00 np0005539550 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Rebuild Hardware Database...
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 01:16:00 np0005539550 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Create System Users...
Nov 29 01:16:00 np0005539550 systemd[1]: Mounted FUSE Control File System.
Nov 29 01:16:00 np0005539550 systemd-journald[682]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 01:16:00 np0005539550 systemd-journald[682]: Received client request to flush runtime journal.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Create System Users.
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 01:16:00 np0005539550 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 01:16:00 np0005539550 systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 01:16:00 np0005539550 systemd[1]: Reached target Local File Systems.
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 01:16:00 np0005539550 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 01:16:00 np0005539550 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 01:16:00 np0005539550 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 01:16:00 np0005539550 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 01:16:00 np0005539550 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 01:16:00 np0005539550 bootctl[700]: Couldn't find EFI system partition, skipping.
Nov 29 01:16:00 np0005539550 systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 01:16:01 np0005539550 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 01:16:01 np0005539550 systemd[1]: Starting Security Auditing Service...
Nov 29 01:16:01 np0005539550 systemd[1]: Starting RPC Bind...
Nov 29 01:16:01 np0005539550 systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 01:16:01 np0005539550 auditd[706]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 01:16:01 np0005539550 auditd[706]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 01:16:01 np0005539550 systemd[1]: Started RPC Bind.
Nov 29 01:16:01 np0005539550 systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 01:16:01 np0005539550 augenrules[711]: /sbin/augenrules: No change
Nov 29 01:16:01 np0005539550 augenrules[726]: No rules
Nov 29 01:16:01 np0005539550 augenrules[726]: enabled 1
Nov 29 01:16:01 np0005539550 augenrules[726]: failure 1
Nov 29 01:16:01 np0005539550 augenrules[726]: pid 706
Nov 29 01:16:01 np0005539550 augenrules[726]: rate_limit 0
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_limit 8192
Nov 29 01:16:01 np0005539550 augenrules[726]: lost 0
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog 4
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_wait_time 60000
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_wait_time_actual 0
Nov 29 01:16:01 np0005539550 augenrules[726]: enabled 1
Nov 29 01:16:01 np0005539550 augenrules[726]: failure 1
Nov 29 01:16:01 np0005539550 augenrules[726]: pid 706
Nov 29 01:16:01 np0005539550 augenrules[726]: rate_limit 0
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_limit 8192
Nov 29 01:16:01 np0005539550 augenrules[726]: lost 0
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog 8
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_wait_time 60000
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_wait_time_actual 0
Nov 29 01:16:01 np0005539550 augenrules[726]: enabled 1
Nov 29 01:16:01 np0005539550 augenrules[726]: failure 1
Nov 29 01:16:01 np0005539550 augenrules[726]: pid 706
Nov 29 01:16:01 np0005539550 augenrules[726]: rate_limit 0
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_limit 8192
Nov 29 01:16:01 np0005539550 augenrules[726]: lost 0
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog 12
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_wait_time 60000
Nov 29 01:16:01 np0005539550 augenrules[726]: backlog_wait_time_actual 0
Nov 29 01:16:01 np0005539550 systemd[1]: Started Security Auditing Service.
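
The augenrules output above is the kernel's audit status, printed once per rules pass; note the backlog counter climbing (4, 8, 12) as events queue while auditd finishes starting. The same fields can be read back at any time:

    auditctl -s   # live audit status: enabled, failure, pid, backlog, ...
    auditctl -l   # loaded rules ("No rules" on this boot, per the log)
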
Nov 29 01:16:01 np0005539550 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 01:16:01 np0005539550 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 01:16:01 np0005539550 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 01:16:02 np0005539550 systemd[1]: Finished Rebuild Hardware Database.
Nov 29 01:16:02 np0005539550 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 01:16:02 np0005539550 systemd[1]: Starting Update is Completed...
Nov 29 01:16:02 np0005539550 systemd[1]: Finished Update is Completed.
Nov 29 01:16:02 np0005539550 systemd-udevd[734]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 01:16:02 np0005539550 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 01:16:02 np0005539550 systemd[1]: Reached target System Initialization.
Nov 29 01:16:02 np0005539550 systemd[1]: Started dnf makecache --timer.
Nov 29 01:16:02 np0005539550 systemd[1]: Started Daily rotation of log files.
Nov 29 01:16:02 np0005539550 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 01:16:02 np0005539550 systemd[1]: Reached target Timer Units.
Nov 29 01:16:02 np0005539550 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 01:16:02 np0005539550 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 01:16:02 np0005539550 systemd[1]: Reached target Socket Units.
Nov 29 01:16:02 np0005539550 systemd[1]: Starting D-Bus System Message Bus...
Nov 29 01:16:02 np0005539550 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:16:02 np0005539550 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 01:16:02 np0005539550 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 01:16:02 np0005539550 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 01:16:02 np0005539550 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 01:16:02 np0005539550 systemd[1]: Started D-Bus System Message Bus.
Nov 29 01:16:02 np0005539550 systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:16:02 np0005539550 systemd[1]: Reached target Basic System.
Nov 29 01:16:02 np0005539550 dbus-broker-lau[751]: Ready
Nov 29 01:16:02 np0005539550 systemd[1]: Starting NTP client/server...
Nov 29 01:16:02 np0005539550 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 01:16:02 np0005539550 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 01:16:02 np0005539550 systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 01:16:02 np0005539550 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 01:16:02 np0005539550 systemd[1]: Started irqbalance daemon.
Nov 29 01:16:02 np0005539550 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 01:16:02 np0005539550 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:16:02 np0005539550 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:16:02 np0005539550 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 01:16:02 np0005539550 systemd[1]: Reached target sshd-keygen.target.
Nov 29 01:16:02 np0005539550 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 01:16:02 np0005539550 systemd[1]: Reached target User and Group Name Lookups.
Nov 29 01:16:02 np0005539550 systemd[1]: Starting User Login Management...
Nov 29 01:16:02 np0005539550 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 01:16:02 np0005539550 chronyd[794]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 01:16:02 np0005539550 chronyd[794]: Loaded 0 symmetric keys
Nov 29 01:16:02 np0005539550 chronyd[794]: Using right/UTC timezone to obtain leap second data
Nov 29 01:16:02 np0005539550 chronyd[794]: Loaded seccomp filter (level 2)
Nov 29 01:16:02 np0005539550 systemd[1]: Started NTP client/server.
Nov 29 01:16:02 np0005539550 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 01:16:02 np0005539550 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 01:16:02 np0005539550 systemd-logind[788]: New seat seat0.
Nov 29 01:16:02 np0005539550 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 01:16:02 np0005539550 systemd[1]: Started User Login Management.
Nov 29 01:16:02 np0005539550 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 01:16:02 np0005539550 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 01:16:02 np0005539550 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 01:16:02 np0005539550 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 01:16:02 np0005539550 kernel: kvm_amd: TSC scaling supported
Nov 29 01:16:02 np0005539550 kernel: kvm_amd: Nested Virtualization enabled
Nov 29 01:16:02 np0005539550 kernel: kvm_amd: Nested Paging enabled
Nov 29 01:16:02 np0005539550 kernel: kvm_amd: LBR virtualization supported
Nov 29 01:16:02 np0005539550 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 01:16:02 np0005539550 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 01:16:02 np0005539550 kernel: Console: switching to colour dummy device 80x25
Nov 29 01:16:02 np0005539550 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 01:16:02 np0005539550 kernel: [drm] features: -context_init
Nov 29 01:16:02 np0005539550 kernel: [drm] number of scanouts: 1
Nov 29 01:16:02 np0005539550 kernel: [drm] number of cap sets: 0
Nov 29 01:16:02 np0005539550 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 01:16:02 np0005539550 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 01:16:02 np0005539550 kernel: Console: switching to colour frame buffer device 128x48
Nov 29 01:16:02 np0005539550 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 01:16:02 np0005539550 iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Nov 29 01:16:02 np0005539550 systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 01:16:03 np0005539550 cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 06:16:03 +0000. Up 11.13 seconds.
Nov 29 01:16:03 np0005539550 systemd[1]: run-cloud\x2dinit-tmp-tmpk356445x.mount: Deactivated successfully.
Nov 29 01:16:03 np0005539550 systemd[1]: Starting Hostname Service...
Nov 29 01:16:03 np0005539550 systemd[1]: Started Hostname Service.
Nov 29 01:16:03 np0005539550 systemd-hostnamed[857]: Hostname set to <np0005539550.novalocal> (static)
Nov 29 01:16:03 np0005539550 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 01:16:03 np0005539550 systemd[1]: Reached target Preparation for Network.
Nov 29 01:16:03 np0005539550 systemd[1]: Starting Network Manager...
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6366] NetworkManager (version 1.54.1-1.el9) is starting... (boot:c483fbf3-82cb-4f31-8932-5eb81827298b)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6370] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6457] manager[0x55eb3314e080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6509] hostname: hostname: using hostnamed
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6509] hostname: static hostname changed from (none) to "np0005539550.novalocal"
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6514] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6708] manager[0x55eb3314e080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6709] manager[0x55eb3314e080]: rfkill: WWAN hardware radio set enabled
Nov 29 01:16:03 np0005539550 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6772] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6773] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6773] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6774] manager: Networking is enabled by state file
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6775] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6784] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6802] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6812] dhcp: init: Using DHCP client 'internal'
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6814] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6826] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6832] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6839] device (lo): Activation: starting connection 'lo' (10063a61-047c-420e-af96-644d75bb9c9d)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6848] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6852] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6886] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6889] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6891] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6893] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6894] device (eth0): carrier: link connected
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6895] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6900] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6906] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6912] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6913] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6915] manager: NetworkManager state is now CONNECTING
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6916] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6924] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6926] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6974] dhcp4 (eth0): state changed new lease, address=38.102.83.190
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6981] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.6999] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539550 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:16:03 np0005539550 systemd[1]: Started Network Manager.
Nov 29 01:16:03 np0005539550 systemd[1]: Reached target Network.
Nov 29 01:16:03 np0005539550 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:16:03 np0005539550 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 01:16:03 np0005539550 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7296] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7299] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7300] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7306] device (lo): Activation: successful, device activated.
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7311] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7314] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7317] device (eth0): Activation: successful, device activated.
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7326] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 01:16:03 np0005539550 NetworkManager[861]: <info>  [1764396963.7330] manager: startup complete
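
NetworkManager walks eth0 through the full state machine (unavailable -> disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated) and, per the bracketed epoch timestamps, gets a DHCP lease within a few milliseconds of opening the transaction. The warning higher up also matters operationally: the deprecated ifcfg-rh plugin is still serving the 'System eth0' profile, and the log itself names the fix. Assuming nmcli is available:

    nmcli device show eth0               # lease address, gateway, DNS for the device
    nmcli connection show "System eth0"  # the profile auto-activated above
    nmcli connection migrate             # convert ifcfg profiles to keyfile, as the warning suggests
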
Nov 29 01:16:03 np0005539550 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 01:16:03 np0005539550 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 01:16:03 np0005539550 systemd[1]: Reached target NFS client services.
Nov 29 01:16:03 np0005539550 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 01:16:03 np0005539550 systemd[1]: Reached target Remote File Systems.
Nov 29 01:16:03 np0005539550 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 01:16:03 np0005539550 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:16:03 np0005539550 systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 01:16:04 np0005539550 cloud-init[924]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 06:16:04 +0000. Up 12.21 seconds.
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |  eth0  | True |        38.102.83.190        | 255.255.255.0 | global | fa:16:3e:66:04:00 |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |  eth0  | True | fe80::f816:3eff:fe66:400/64 |       .       |  link  | fa:16:3e:66:04:00 |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 01:16:04 np0005539550 cloud-init[924]: ci-info: +-------+-------------+---------+-----------+-------+
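
The ci-info tables are cloud-init's snapshot of addressing and routing at the network stage; the third IPv4 route is the host route to the 169.254.169.254 metadata service, typically pushed to OpenStack guests via DHCP. The same view comes straight from iproute2:

    ip -br addr show   # per-device addresses in the brief format
    ip route           # IPv4 table, including the metadata host route
    ip -6 route        # IPv6 table
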
Nov 29 01:16:08 np0005539550 chronyd[794]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 29 01:16:08 np0005539550 chronyd[794]: System clock TAI offset set to 37 seconds
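
chronyd has selected an upstream from the centos pool and set the kernel's TAI offset; assuming chronyc is present, sync quality can be confirmed with:

    chronyc tracking     # offset, stratum, and reference ID (149.56.19.163 here)
    chronyc sources -v   # candidate servers and their selection flags
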
Nov 29 01:16:12 np0005539550 cloud-init[924]: Generating public/private rsa key pair.
Nov 29 01:16:12 np0005539550 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 01:16:12 np0005539550 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 01:16:12 np0005539550 cloud-init[924]: The key fingerprint is:
Nov 29 01:16:12 np0005539550 cloud-init[924]: SHA256:mTblhLZb3dwpY1lmWjuE56JoJgey6rSlI9eYHFzng3Y root@np0005539550.novalocal
Nov 29 01:16:12 np0005539550 cloud-init[924]: The key's randomart image is:
Nov 29 01:16:12 np0005539550 cloud-init[924]: +---[RSA 3072]----+
Nov 29 01:16:12 np0005539550 cloud-init[924]: |                 |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |         .    .  |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |        o o  . B |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |     . o B . o@.o|
Nov 29 01:16:12 np0005539550 cloud-init[924]: |  . ..+.S o .Bo=.|
Nov 29 01:16:12 np0005539550 cloud-init[924]: |   o ooE.+. o + .|
Nov 29 01:16:12 np0005539550 cloud-init[924]: |  ..*o..o= .     |
Nov 29 01:16:12 np0005539550 cloud-init[924]: | ..*=.  =        |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |  +=.            |
Nov 29 01:16:12 np0005539550 cloud-init[924]: +----[SHA256]-----+
Nov 29 01:16:12 np0005539550 cloud-init[924]: Generating public/private ecdsa key pair.
Nov 29 01:16:12 np0005539550 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 01:16:12 np0005539550 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 01:16:12 np0005539550 cloud-init[924]: The key fingerprint is:
Nov 29 01:16:12 np0005539550 cloud-init[924]: SHA256:0YmyR6hBG+7KCZiotFFlhqeOlwPtFqXWjIFbXUFnIWc root@np0005539550.novalocal
Nov 29 01:16:12 np0005539550 cloud-init[924]: The key's randomart image is:
Nov 29 01:16:12 np0005539550 cloud-init[924]: +---[ECDSA 256]---+
Nov 29 01:16:12 np0005539550 cloud-init[924]: |  . =+o=.E.      |
Nov 29 01:16:12 np0005539550 cloud-init[924]: | . *+* .*o .     |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |  +.& o + o      |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |o+.O = + .       |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |=o* = . S        |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |o+oX   .         |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |..* .            |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |                 |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |                 |
Nov 29 01:16:12 np0005539550 cloud-init[924]: +----[SHA256]-----+
Nov 29 01:16:12 np0005539550 cloud-init[924]: Generating public/private ed25519 key pair.
Nov 29 01:16:12 np0005539550 cloud-init[924]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 01:16:12 np0005539550 cloud-init[924]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 01:16:12 np0005539550 cloud-init[924]: The key fingerprint is:
Nov 29 01:16:12 np0005539550 cloud-init[924]: SHA256:FPa/OGF9ylMhFI1pwtEv4q53NkIBUcCi/Kt/fcdW9+Y root@np0005539550.novalocal
Nov 29 01:16:12 np0005539550 cloud-init[924]: The key's randomart image is:
Nov 29 01:16:12 np0005539550 cloud-init[924]: +--[ED25519 256]--+
Nov 29 01:16:12 np0005539550 cloud-init[924]: |       .==ooo=   |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |      ..ooo.= .  |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |   . . ....o...  |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |    o  .  oo.... |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |     .  S.ooo.o  |
Nov 29 01:16:12 np0005539550 cloud-init[924]: |      .  .o+ =  o|
Nov 29 01:16:12 np0005539550 cloud-init[924]: |       . +o =. .o|
Nov 29 01:16:12 np0005539550 cloud-init[924]: |      . . =.=.+ o|
Nov 29 01:16:12 np0005539550 cloud-init[924]: |    .o...o = + oE|
Nov 29 01:16:12 np0005539550 cloud-init[924]: +----[SHA256]-----+
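
cloud-init regenerates all three host key pairs on first boot, and the SHA256 fingerprints logged here reappear in the console banner further down; they are what a first-time SSH client should be asked to verify. They can be re-derived on the host:

    ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub    # fingerprint line
    ssh-keygen -lvf /etc/ssh/ssh_host_ed25519_key.pub   # plus the randomart image
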
Nov 29 01:16:12 np0005539550 systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 01:16:12 np0005539550 systemd[1]: Reached target Cloud-config availability.
Nov 29 01:16:12 np0005539550 systemd[1]: Reached target Network is Online.
Nov 29 01:16:12 np0005539550 systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 01:16:12 np0005539550 systemd[1]: Starting Crash recovery kernel arming...
Nov 29 01:16:12 np0005539550 systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 01:16:12 np0005539550 systemd[1]: Starting System Logging Service...
Nov 29 01:16:12 np0005539550 systemd[1]: Starting OpenSSH server daemon...
Nov 29 01:16:12 np0005539550 sm-notify[1007]: Version 2.5.4 starting
Nov 29 01:16:12 np0005539550 systemd[1]: Starting Permit User Sessions...
Nov 29 01:16:12 np0005539550 systemd[1]: Started OpenSSH server daemon.
Nov 29 01:16:12 np0005539550 systemd[1]: Started Notify NFS peers of a restart.
Nov 29 01:16:12 np0005539550 systemd[1]: Finished Permit User Sessions.
Nov 29 01:16:12 np0005539550 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Nov 29 01:16:12 np0005539550 rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 01:16:12 np0005539550 systemd[1]: Started Command Scheduler.
Nov 29 01:16:12 np0005539550 systemd[1]: Started Getty on tty1.
Nov 29 01:16:12 np0005539550 systemd[1]: Started Serial Getty on ttyS0.
Nov 29 01:16:12 np0005539550 systemd[1]: Reached target Login Prompts.
Nov 29 01:16:12 np0005539550 systemd[1]: Started System Logging Service.
Nov 29 01:16:12 np0005539550 systemd[1]: Reached target Multi-User System.
Nov 29 01:16:12 np0005539550 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 01:16:12 np0005539550 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 01:16:12 np0005539550 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 01:16:12 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 01:16:12 np0005539550 kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Nov 29 01:16:12 np0005539550 kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 01:16:12 np0005539550 irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 01:16:12 np0005539550 irqbalance[786]: IRQ 25 affinity is now unmanaged
Nov 29 01:16:12 np0005539550 irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 01:16:12 np0005539550 irqbalance[786]: IRQ 31 affinity is now unmanaged
Nov 29 01:16:12 np0005539550 irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 01:16:12 np0005539550 irqbalance[786]: IRQ 28 affinity is now unmanaged
Nov 29 01:16:12 np0005539550 irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 01:16:12 np0005539550 irqbalance[786]: IRQ 32 affinity is now unmanaged
Nov 29 01:16:12 np0005539550 irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 01:16:12 np0005539550 irqbalance[786]: IRQ 30 affinity is now unmanaged
Nov 29 01:16:12 np0005539550 irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 01:16:12 np0005539550 irqbalance[786]: IRQ 29 affinity is now unmanaged
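
These irqbalance errors are common in KVM guests: the affinity of some interrupts cannot be changed from userspace (an assumption here: likely kernel-managed virtio vectors), so irqbalance marks them unmanaged and continues; noisy but usually harmless. To see what those IRQs are, taking IRQs 25-32 from the log:

    grep -E '^ *(25|28|29|30|31|32):' /proc/interrupts
    cat /proc/irq/25/smp_affinity   # current CPU mask for one of them
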
Nov 29 01:16:12 np0005539550 cloud-init[1148]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 06:16:12 +0000. Up 21.00 seconds.
Nov 29 01:16:12 np0005539550 systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 01:16:12 np0005539550 systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 01:16:13 np0005539550 dracut[1273]: dracut-057-102.git20250818.el9
Nov 29 01:16:13 np0005539550 cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 06:16:13 +0000. Up 21.34 seconds.
Nov 29 01:16:13 np0005539550 dracut[1276]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 01:16:13 np0005539550 cloud-init[1328]: #############################################################
Nov 29 01:16:13 np0005539550 cloud-init[1331]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 01:16:13 np0005539550 cloud-init[1337]: 256 SHA256:0YmyR6hBG+7KCZiotFFlhqeOlwPtFqXWjIFbXUFnIWc root@np0005539550.novalocal (ECDSA)
Nov 29 01:16:13 np0005539550 cloud-init[1342]: 256 SHA256:FPa/OGF9ylMhFI1pwtEv4q53NkIBUcCi/Kt/fcdW9+Y root@np0005539550.novalocal (ED25519)
Nov 29 01:16:13 np0005539550 cloud-init[1348]: 3072 SHA256:mTblhLZb3dwpY1lmWjuE56JoJgey6rSlI9eYHFzng3Y root@np0005539550.novalocal (RSA)
Nov 29 01:16:13 np0005539550 cloud-init[1353]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 01:16:13 np0005539550 cloud-init[1359]: #############################################################
Nov 29 01:16:13 np0005539550 cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 06:16:13 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 21.59 seconds
Nov 29 01:16:13 np0005539550 systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 01:16:13 np0005539550 systemd[1]: Reached target Cloud-init target.
Nov 29 01:16:13 np0005539550 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:16:13 np0005539550 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: memstrack is not available
Nov 29 01:16:14 np0005539550 dracut[1276]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 01:16:14 np0005539550 dracut[1276]: memstrack is not available
Nov 29 01:16:14 np0005539550 dracut[1276]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 01:16:15 np0005539550 dracut[1276]: *** Including module: systemd ***
Nov 29 01:16:15 np0005539550 dracut[1276]: *** Including module: fips ***
Nov 29 01:16:15 np0005539550 dracut[1276]: *** Including module: systemd-initrd ***
Nov 29 01:16:15 np0005539550 dracut[1276]: *** Including module: i18n ***
Nov 29 01:16:15 np0005539550 dracut[1276]: *** Including module: drm ***
Nov 29 01:16:16 np0005539550 dracut[1276]: *** Including module: prefixdevname ***
Nov 29 01:16:16 np0005539550 dracut[1276]: *** Including module: kernel-modules ***
Nov 29 01:16:16 np0005539550 kernel: block vda: the capability attribute has been deprecated.
Nov 29 01:16:17 np0005539550 dracut[1276]: *** Including module: kernel-modules-extra ***
Nov 29 01:16:17 np0005539550 dracut[1276]: *** Including module: qemu ***
Nov 29 01:16:17 np0005539550 dracut[1276]: *** Including module: fstab-sys ***
Nov 29 01:16:17 np0005539550 dracut[1276]: *** Including module: rootfs-block ***
Nov 29 01:16:17 np0005539550 dracut[1276]: *** Including module: terminfo ***
Nov 29 01:16:17 np0005539550 dracut[1276]: *** Including module: udev-rules ***
Nov 29 01:16:17 np0005539550 dracut[1276]: Skipping udev rule: 91-permissions.rules
Nov 29 01:16:17 np0005539550 dracut[1276]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 01:16:18 np0005539550 dracut[1276]: *** Including module: virtiofs ***
Nov 29 01:16:18 np0005539550 dracut[1276]: *** Including module: dracut-systemd ***
Nov 29 01:16:18 np0005539550 dracut[1276]: *** Including module: usrmount ***
Nov 29 01:16:18 np0005539550 dracut[1276]: *** Including module: base ***
Nov 29 01:16:18 np0005539550 dracut[1276]: *** Including module: fs-lib ***
Nov 29 01:16:18 np0005539550 dracut[1276]: *** Including module: kdumpbase ***
Nov 29 01:16:18 np0005539550 dracut[1276]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 01:16:18 np0005539550 dracut[1276]:  microcode_ctl module: mangling fw_dir
Nov 29 01:16:18 np0005539550 dracut[1276]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 01:16:18 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 01:16:18 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel" is ignored
Nov 29 01:16:18 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 01:16:18 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 01:16:18 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 01:16:19 np0005539550 dracut[1276]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
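The microcode_ctl dracut module checks each caveat directory under /usr/share/microcode_ctl/ucode_with_caveats against the running CPU; on this KVM guest none of the intel-* configurations match, so every one is reported as ignored and fw_dir falls back to the stock search path. A minimal sketch for rebuilding an initramfs by hand with the same per-module output (assuming stock dracut defaults; the kdump service normally drives this build itself):

    # Rebuild the initramfs for the running kernel, verbosely listing
    # each included module (the "Including module" lines above).
    dracut --force --verbose /boot/initramfs-$(uname -r).img $(uname -r)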
Nov 29 01:16:19 np0005539550 dracut[1276]: *** Including module: openssl ***
Nov 29 01:16:19 np0005539550 dracut[1276]: *** Including module: shutdown ***
Nov 29 01:16:19 np0005539550 dracut[1276]: *** Including module: squash ***
Nov 29 01:16:19 np0005539550 dracut[1276]: *** Including modules done ***
Nov 29 01:16:19 np0005539550 dracut[1276]: *** Installing kernel module dependencies ***
Nov 29 01:16:20 np0005539550 dracut[1276]: *** Installing kernel module dependencies done ***
Nov 29 01:16:20 np0005539550 dracut[1276]: *** Resolving executable dependencies ***
Nov 29 01:16:21 np0005539550 dracut[1276]: *** Resolving executable dependencies done ***
Nov 29 01:16:21 np0005539550 dracut[1276]: *** Generating early-microcode cpio image ***
Nov 29 01:16:21 np0005539550 dracut[1276]: *** Store current command line parameters ***
Nov 29 01:16:21 np0005539550 dracut[1276]: Stored kernel commandline:
Nov 29 01:16:21 np0005539550 dracut[1276]: No dracut internal kernel commandline stored in the initramfs
Nov 29 01:16:22 np0005539550 dracut[1276]: *** Install squash loader ***
Nov 29 01:16:23 np0005539550 dracut[1276]: *** Squashing the files inside the initramfs ***
Nov 29 01:16:24 np0005539550 dracut[1276]: *** Squashing the files inside the initramfs done ***
Nov 29 01:16:24 np0005539550 dracut[1276]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 01:16:24 np0005539550 dracut[1276]: *** Hardlinking files ***
Nov 29 01:16:24 np0005539550 dracut[1276]: *** Hardlinking files done ***
Nov 29 01:16:24 np0005539550 dracut[1276]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 01:16:25 np0005539550 kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Nov 29 01:16:25 np0005539550 kdumpctl[1018]: kdump: Starting kdump: [OK]
Nov 29 01:16:25 np0005539550 systemd[1]: Finished Crash recovery kernel arming.
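kexec has loaded the crash kernel together with the freshly built /boot/initramfs-5.14.0-642.el9.x86_64kdump.img, so a panic would now boot into the capture environment. Standard kdumpctl subcommands to confirm or refresh this state:

    kdumpctl status     # report whether the crash kernel is loaded
    kdumpctl rebuild    # regenerate the kdump initramfs after
                        # changing /etc/kdump.conf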
Nov 29 01:16:25 np0005539550 systemd[1]: Startup finished in 3.250s (kernel) + 4.280s (initrd) + 26.383s (userspace) = 33.914s.
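The startup summary aggregates the kernel, initrd and userspace phases; systemd-analyze reproduces the same totals, and its blame view shows which units dominate the 26.383s userspace phase:

    systemd-analyze          # kernel + initrd + userspace totals
    systemd-analyze blame    # per-unit activation times, slowest first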
Nov 29 01:16:27 np0005539550 systemd[1]: Created slice User Slice of UID 1000.
Nov 29 01:16:27 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 01:16:27 np0005539550 systemd-logind[788]: New session 1 of user zuul.
Nov 29 01:16:27 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 01:16:27 np0005539550 systemd[1]: Starting User Manager for UID 1000...
Nov 29 01:16:28 np0005539550 systemd[4303]: Queued start job for default target Main User Target.
Nov 29 01:16:28 np0005539550 systemd[4303]: Created slice User Application Slice.
Nov 29 01:16:28 np0005539550 systemd[4303]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 01:16:28 np0005539550 systemd[4303]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 01:16:28 np0005539550 systemd[4303]: Reached target Paths.
Nov 29 01:16:28 np0005539550 systemd[4303]: Reached target Timers.
Nov 29 01:16:28 np0005539550 systemd[4303]: Starting D-Bus User Message Bus Socket...
Nov 29 01:16:28 np0005539550 systemd[4303]: Starting Create User's Volatile Files and Directories...
Nov 29 01:16:28 np0005539550 systemd[4303]: Finished Create User's Volatile Files and Directories.
Nov 29 01:16:28 np0005539550 systemd[4303]: Listening on D-Bus User Message Bus Socket.
Nov 29 01:16:28 np0005539550 systemd[4303]: Reached target Sockets.
Nov 29 01:16:28 np0005539550 systemd[4303]: Reached target Basic System.
Nov 29 01:16:28 np0005539550 systemd[4303]: Reached target Main User Target.
Nov 29 01:16:28 np0005539550 systemd[4303]: Startup finished in 109ms.
Nov 29 01:16:28 np0005539550 systemd[1]: Started User Manager for UID 1000.
Nov 29 01:16:28 np0005539550 systemd[1]: Started Session 1 of User zuul.
Nov 29 01:16:28 np0005539550 python3[4385]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:31 np0005539550 python3[4413]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:33 np0005539550 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:16:41 np0005539550 python3[4473]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:42 np0005539550 python3[4513]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 01:16:44 np0005539550 python3[4539]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCudHL3tHiIrGUdr3CZx/jgOAB+sTyj6z0B6SJoPEhADY63ZPdbzjdQhpgyhpnTdwlh6+Z4xPQ+DxOd+FPfH9ETfjTAZyODPGBr+U3/aWYFrr1YsSqkwWe+DI0V25XzOJIl8WeH42Z3m8/2jQ1VE7oAtX0LFpiSM33D5G6jGs1zirRd3I823HIkLEWTOQev3Et6zuPF/J/lkKUMsa94htQ/yvthrhpk7+QsWEk8T5uet2LZvnIsjZFIgfCCgTeGtE4eqcC9tdVxfYIwVhUeu3eCkwwBkVi0t0HhAh3qbiXsTIErO5yg2fPPye0mC6UjHMgSqc5crO5b4VU6uuoKLqXHXfoyrjf1PG1bb3S1A7UO9fs+mG8UJ2N53kHSyQ5YcQ+hZyyXqVeKPIQFPvwTxYMEb+rxzq5f56DdR8ruRmocVTqpGu1VTEdGIWkU8IaEB9kOEu7t8oFgiym6LUXBbbd9a6AkVCauAPe7Kq0Q4VHZVxtWSFjTAEi5x3CFG2qzn0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:45 np0005539550 python3[4563]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:45 np0005539550 python3[4662]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:46 np0005539550 python3[4733]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397005.3738394-251-238763865874088/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=e891a9c3f1e64b45bc756f48ef3ae3aa_id_rsa follow=False checksum=03e35f16cd901f940500378f2e2f2ebf2de0be9d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:46 np0005539550 python3[4856]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:47 np0005539550 python3[4927]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397006.3502169-306-85057225686164/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=e891a9c3f1e64b45bc756f48ef3ae3aa_id_rsa.pub follow=False checksum=b1e1e3a6e20142e56b32029e5e58b508f3db73ab backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:48 np0005539550 python3[4975]: ansible-ping Invoked with data=pong
Nov 29 01:16:49 np0005539550 python3[4999]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:16:52 np0005539550 python3[5057]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 01:16:53 np0005539550 python3[5089]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:53 np0005539550 python3[5113]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:53 np0005539550 python3[5137]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539550 python3[5161]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539550 python3[5185]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:54 np0005539550 python3[5209]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:56 np0005539550 python3[5235]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
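Ansible logs the mode argument as a decimal integer when the playbook passes it as a bare number: 448 is octal 0700, 384 is 0600, 420 is 0644 and 493 is 0755 (string-quoted modes appear literally, as in the mode=0755 entries later in this log). A quick translation:

    # Decimal modes from the entries above, printed in octal:
    printf '%o %o %o %o\n' 448 384 420 493    # -> 700 600 644 755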
Nov 29 01:16:57 np0005539550 python3[5313]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:16:57 np0005539550 python3[5386]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397016.5397499-31-118287364298136/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:16:58 np0005539550 python3[5434]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:58 np0005539550 python3[5458]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:58 np0005539550 python3[5482]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539550 python3[5506]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539550 python3[5530]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539550 python3[5554]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:16:59 np0005539550 python3[5578]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539550 python3[5602]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539550 python3[5626]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:00 np0005539550 python3[5650]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539550 python3[5674]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539550 python3[5698]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539550 python3[5722]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:01 np0005539550 python3[5746]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539550 python3[5770]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539550 python3[5794]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:02 np0005539550 python3[5818]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539550 python3[5842]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539550 python3[5866]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:03 np0005539550 python3[5890]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539550 python3[5914]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539550 python3[5938]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539550 python3[5962]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:04 np0005539550 python3[5986]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539550 python3[6010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:17:05 np0005539550 python3[6034]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
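Each ansible-authorized_key call above adds one developer key to /home/zuul/.ssh/authorized_keys; state=present with exclusive=False keeps keys that are already there, so the whole batch is idempotent. The effect is roughly this shell sketch (illustrative only; the real module also validates the key and manages directory permissions via manage_dir):

    # Hypothetical equivalent of a single authorized_key task:
    key='ssh-ed25519 AAAA... user@example'    # placeholder key
    install -d -m 0700 ~zuul/.ssh
    grep -qxF "$key" ~zuul/.ssh/authorized_keys 2>/dev/null \
        || echo "$key" >> ~zuul/.ssh/authorized_keys
    chmod 0600 ~zuul/.ssh/authorized_keys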
Nov 29 01:17:08 np0005539550 python3[6060]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 01:17:08 np0005539550 systemd[1]: Starting Time & Date Service...
Nov 29 01:17:08 np0005539550 systemd[1]: Started Time & Date Service.
Nov 29 01:17:08 np0005539550 systemd-timedated[6062]: Changed time zone to 'UTC' (UTC).
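community.general.timezone talks to systemd-timedated over D-Bus, which is why the Time & Date Service is started on demand just before the change; the manual equivalent is a single command:

    timedatectl set-timezone UTC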
Nov 29 01:17:08 np0005539550 python3[6091]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:09 np0005539550 python3[6167]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:09 np0005539550 python3[6238]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764397028.882375-251-158737903208195/source _original_basename=tmpibnor4nf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:10 np0005539550 python3[6338]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:10 np0005539550 python3[6409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397029.784494-301-162146466474281/source _original_basename=tmpk4ar8qkf follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:11 np0005539550 python3[6511]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:11 np0005539550 python3[6584]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764397030.853513-381-53142263522933/source _original_basename=tmpt6e45tth follow=False checksum=0278a60fa9fcb701d8ddd2d2d748895769827669 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:12 np0005539550 python3[6632]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:12 np0005539550 python3[6658]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:12 np0005539550 python3[6738]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:17:13 np0005539550 python3[6811]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397032.6167161-451-202411813291453/source _original_basename=tmpsx2kq50z follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:14 np0005539550 python3[6862]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2a81-8810-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:17:14 np0005539550 python3[6890]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2a81-8810-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
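The #012 sequences in these _raw_params values are syslog-style octal escapes for embedded newlines (012 = \n), not part of the commands themselves. visudo -c is the standard syntax check after installing a drop-in under /etc/sudoers.d:

    visudo -c                                   # parse-check all sudoers files
    visudo -cf /etc/sudoers.d/zuul-sudo-grep    # check just the new drop-in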
Nov 29 01:17:16 np0005539550 python3[6918]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:17:38 np0005539550 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 01:18:02 np0005539550 python3[6946]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:29 np0005539550 systemd[4303]: Starting Mark boot as successful...
Nov 29 01:18:29 np0005539550 systemd[4303]: Finished Mark boot as successful.
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 01:18:44 np0005539550 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 01:18:44 np0005539550 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.0816] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:18:44 np0005539550 systemd-udevd[6948]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1062] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1112] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1116] device (eth1): carrier: link connected
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1117] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1123] policy: auto-activating connection 'Wired connection 1' (65c55b91-2eaa-3af4-816f-ae4a114df54c)
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1127] device (eth1): Activation: starting connection 'Wired connection 1' (65c55b91-2eaa-3af4-816f-ae4a114df54c)
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1128] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1131] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1134] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:18:44 np0005539550 NetworkManager[861]: <info>  [1764397124.1138] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:18:45 np0005539550 python3[6975]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-37f5-2ec2-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
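ip -j link prints the interface table as JSON, which the playbook can parse to locate the eth1 that was just hot-plugged as virtio device 0000:00:07.0. For a manual look (assuming jq is installed on the node):

    ip -j link | jq -r '.[].ifname'    # interface names from the JSON output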
Nov 29 01:18:55 np0005539550 python3[7055]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:18:55 np0005539550 python3[7128]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764397135.0000005-104-191732779463180/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=c7e695621d6b0e1bc1e894ebf6f986a7a62973a3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:18:56 np0005539550 python3[7178]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
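The file copied into /etc/NetworkManager/system-connections/ is a NetworkManager keyfile profile; its contents are not logged (content=NOT_LOGGING_PARAMETER), but the mode=0600 owner=root arguments match NM's requirement that keyfiles be root-owned and not group- or world-readable. A profile of roughly this shape would explain the later 'ci-private-network' activation on eth1 (every value below is an illustrative placeholder, not taken from the log):

    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=manual
    address1=192.0.2.10/24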
Nov 29 01:18:56 np0005539550 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 01:18:56 np0005539550 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 01:18:56 np0005539550 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 01:18:56 np0005539550 systemd[1]: Stopping Network Manager...
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.6348] caught SIGTERM, shutting down normally.
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.6369] dhcp4 (eth0): canceled DHCP transaction
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.6370] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.6371] dhcp4 (eth0): state changed no lease
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.6375] manager: NetworkManager state is now CONNECTING
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.6500] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.6502] dhcp4 (eth1): state changed no lease
Nov 29 01:18:56 np0005539550 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:18:56 np0005539550 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:18:56 np0005539550 NetworkManager[861]: <info>  [1764397136.7476] exiting (success)
Nov 29 01:18:56 np0005539550 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 01:18:56 np0005539550 systemd[1]: Stopped Network Manager.
Nov 29 01:18:56 np0005539550 systemd[1]: NetworkManager.service: Consumed 1.389s CPU time, 10.2M memory peak.
Nov 29 01:18:56 np0005539550 systemd[1]: Starting Network Manager...
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.8299] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c483fbf3-82cb-4f31-8932-5eb81827298b)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.8302] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.8376] manager[0x563f74f15070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 01:18:56 np0005539550 systemd[1]: Starting Hostname Service...
Nov 29 01:18:56 np0005539550 systemd[1]: Started Hostname Service.
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9339] hostname: hostname: using hostnamed
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9340] hostname: static hostname changed from (none) to "np0005539550.novalocal"
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9348] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9355] manager[0x563f74f15070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9355] manager[0x563f74f15070]: rfkill: WWAN hardware radio set enabled
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9404] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9404] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9405] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9406] manager: Networking is enabled by state file
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9411] settings: Loaded settings plugin: keyfile (internal)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9417] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9465] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9482] dhcp: init: Using DHCP client 'internal'
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9488] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9499] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9515] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9533] device (lo): Activation: starting connection 'lo' (10063a61-047c-420e-af96-644d75bb9c9d)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9546] device (eth0): carrier: link connected
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9554] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9564] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9566] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9578] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9591] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9601] device (eth1): carrier: link connected
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9609] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9620] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (65c55b91-2eaa-3af4-816f-ae4a114df54c) (indicated)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9622] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9633] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9646] device (eth1): Activation: starting connection 'Wired connection 1' (65c55b91-2eaa-3af4-816f-ae4a114df54c)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9656] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 01:18:56 np0005539550 systemd[1]: Started Network Manager.
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9664] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9670] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9674] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9677] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9683] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9687] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9694] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9699] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9712] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9716] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9731] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9738] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9771] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9780] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9791] device (lo): Activation: successful, device activated.
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9803] dhcp4 (eth0): state changed new lease, address=38.102.83.190
Nov 29 01:18:56 np0005539550 NetworkManager[7195]: <info>  [1764397136.9815] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 01:18:56 np0005539550 systemd[1]: Starting Network Manager Wait Online...
Nov 29 01:18:57 np0005539550 NetworkManager[7195]: <info>  [1764397137.0045] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:18:57 np0005539550 NetworkManager[7195]: <info>  [1764397137.0079] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:18:57 np0005539550 NetworkManager[7195]: <info>  [1764397137.0106] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:18:57 np0005539550 NetworkManager[7195]: <info>  [1764397137.0112] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 01:18:57 np0005539550 NetworkManager[7195]: <info>  [1764397137.0116] device (eth0): Activation: successful, device activated.
Nov 29 01:18:57 np0005539550 NetworkManager[7195]: <info>  [1764397137.0122] manager: NetworkManager state is now CONNECTED_GLOBAL
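Because the restarted daemon assumes the interfaces that were already configured (the managed-type 'assume' and 'external' transitions above), eth0 keeps its lease and default route across the restart instead of being torn down; only eth1 still has to finish DHCP. The same state machine can be watched interactively:

    nmcli -f DEVICE,STATE,CONNECTION device status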
Nov 29 01:18:57 np0005539550 python3[7262]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-37f5-2ec2-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:19:07 np0005539550 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:19:26 np0005539550 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.8814] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 01:19:41 np0005539550 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:19:41 np0005539550 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9346] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9349] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9359] device (eth1): Activation: successful, device activated.
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9368] manager: startup complete
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9370] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <warn>  [1764397181.9378] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9387] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 01:19:41 np0005539550 systemd[1]: Finished Network Manager Wait Online.
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9449] dhcp4 (eth1): canceled DHCP transaction
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9450] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9450] dhcp4 (eth1): state changed no lease
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9473] policy: auto-activating connection 'ci-private-network' (7af235b7-0485-5af2-945d-b5fcc188a690)
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9481] device (eth1): Activation: starting connection 'ci-private-network' (7af235b7-0485-5af2-945d-b5fcc188a690)
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9483] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9492] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9502] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9516] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9561] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9564] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 01:19:41 np0005539550 NetworkManager[7195]: <info>  [1764397181.9572] device (eth1): Activation: successful, device activated.
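This is the intended fallback sequence: the auto-created DHCP profile 'Wired connection 1' exhausts its 45-second transaction on eth1 with no lease, the activation fails with reason 'ip-config-unavailable', and NetworkManager immediately auto-activates the static 'ci-private-network' profile installed earlier, which succeeds because it needs no DHCP. To confirm which profile ended up bound to the device:

    nmcli -f GENERAL.CONNECTION,IP4.ADDRESS device show eth1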
Nov 29 01:19:52 np0005539550 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:19:57 np0005539550 systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Nov 29 01:21:05 np0005539550 systemd-logind[788]: New session 3 of user zuul.
Nov 29 01:21:05 np0005539550 systemd[1]: Started Session 3 of User zuul.
Nov 29 01:21:06 np0005539550 python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:21:06 np0005539550 python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397266.069818-373-220552272635210/source _original_basename=tmpwfr_8i1q follow=False checksum=8b8510474e05b0641525b6d1881e4eebcca5fe7b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:21:11 np0005539550 systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Nov 29 01:21:11 np0005539550 systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 01:21:11 np0005539550 systemd-logind[788]: Removed session 3.
Nov 29 01:21:29 np0005539550 systemd[4303]: Created slice User Background Tasks Slice.
Nov 29 01:21:29 np0005539550 systemd[4303]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 01:21:29 np0005539550 systemd[4303]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 01:27:16 np0005539550 systemd-logind[788]: New session 4 of user zuul.
Nov 29 01:27:16 np0005539550 systemd[1]: Started Session 4 of User zuul.
Nov 29 01:27:16 np0005539550 python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dbf6-85dc-000000000ca8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
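lsblk -nd prints the device itself (-d) without a header (-n), so this returns the disk's major:minor pair; for the virtio-blk root disk it is 252:0, the key used in the io.max writes below:

    lsblk -nd -o MAJ:MIN /dev/vda    # -> 252:0 on this node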
Nov 29 01:27:17 np0005539550 python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:18 np0005539550 python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:19 np0005539550 python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:19 np0005539550 python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:19 np0005539550 python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:20 np0005539550 python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:27:20 np0005539550 python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397640.0722651-366-127680917648611/source _original_basename=tmpt3dik4_q follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:27:21 np0005539550 python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 01:27:21 np0005539550 systemd[1]: Reloading.
Nov 29 01:27:21 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:27:23 np0005539550 python3[7893]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 01:27:23 np0005539550 python3[7919]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539550 python3[7947]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539550 python3[7975]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:24 np0005539550 python3[8003]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:25 np0005539550 python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dbf6-85dc-000000000caf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:27:25 np0005539550 python3[8060]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
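The wait_for/echo sequence above programs the cgroup v2 io controller. Each io.max line follows the format <MAJ>:<MIN> riops=N wiops=N rbps=BYTES wbps=BYTES; 252:0 is presumably the virtio-blk root disk (an assumption worth verifying):

    lsblk -d -o NAME,MAJ:MIN    # confirm which device 252:0 actually is
    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
        > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max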
Nov 29 01:27:28 np0005539550 systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 01:27:28 np0005539550 systemd[1]: session-4.scope: Consumed 4.578s CPU time.
Nov 29 01:27:28 np0005539550 systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Nov 29 01:27:28 np0005539550 systemd-logind[788]: Removed session 4.
Nov 29 01:27:30 np0005539550 systemd-logind[788]: New session 5 of user zuul.
Nov 29 01:27:30 np0005539550 systemd[1]: Started Session 5 of User zuul.
Nov 29 01:27:30 np0005539550 python3[8093]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 01:27:52 np0005539550 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:27:52 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:27:52 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:27:52 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:27:52 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:27:52 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:27:52 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:27:52 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:03 np0005539550 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:28:03 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:03 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:03 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:03 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:03 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:03 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:03 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:13 np0005539550 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 01:28:13 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:13 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:13 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:13 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:13 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:13 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:13 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:15 np0005539550 setsebool[8159]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 01:28:15 np0005539550 setsebool[8159]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
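The two boolean flips above can be reproduced by hand as below; the log does not record whether the change was made persistent, so the -P flag is an assumption:

    setsebool -P virt_use_nfs 1
    setsebool -P virt_sandbox_use_all_caps 1
    getsebool virt_use_nfs virt_sandbox_use_all_caps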
Nov 29 01:28:27 np0005539550 kernel: SELinux:  Converting 388 SID table entries...
Nov 29 01:28:27 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 01:28:27 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 01:28:27 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 01:28:27 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 01:28:27 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 01:28:27 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 01:28:27 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 01:28:49 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:28:49 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 01:28:49 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 01:28:49 np0005539550 systemd[1]: Reloading.
Nov 29 01:28:49 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 01:28:49 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 01:28:52 np0005539550 irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 01:28:52 np0005539550 irqbalance[786]: IRQ 27 affinity is now unmanaged
Nov 29 01:29:41 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 01:29:41 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 01:29:41 np0005539550 systemd[1]: man-db-cache-update.service: Consumed 56.414s CPU time.
Nov 29 01:29:41 np0005539550 systemd[1]: run-r8b479c831f1649aeb5e294bdbc8f490d.service: Deactivated successfully.
Nov 29 01:29:51 np0005539550 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 01:29:51 np0005539550 systemd[1]: session-5.scope: Consumed 1min 5.454s CPU time.
Nov 29 01:29:51 np0005539550 systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Nov 29 01:29:51 np0005539550 systemd-logind[788]: Removed session 5.
Nov 29 01:30:32 np0005539550 systemd-logind[788]: New session 6 of user zuul.
Nov 29 01:30:32 np0005539550 systemd[1]: Started Session 6 of User zuul.
Nov 29 01:30:32 np0005539550 python3[29482]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-fce3-968d-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:30:33 np0005539550 kernel: evm: overlay not supported
Nov 29 01:30:33 np0005539550 systemd[4303]: Starting D-Bus User Message Bus...
Nov 29 01:30:33 np0005539550 dbus-broker-launch[29541]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 01:30:33 np0005539550 dbus-broker-launch[29541]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 01:30:33 np0005539550 systemd[4303]: Started D-Bus User Message Bus.
Nov 29 01:30:33 np0005539550 dbus-broker-launch[29541]: Ready
Nov 29 01:30:34 np0005539550 systemd[4303]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 01:30:34 np0005539550 systemd[4303]: Created slice Slice /user.
Nov 29 01:30:34 np0005539550 systemd[4303]: podman-29521.scope: unit configures an IP firewall, but not running as root.
Nov 29 01:30:34 np0005539550 systemd[4303]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 01:30:34 np0005539550 systemd[4303]: Started podman-29521.scope.
Nov 29 01:30:34 np0005539550 systemd[4303]: Started podman-pause-12fe3c51.scope.
Nov 29 01:30:36 np0005539550 python3[29568]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.89:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:30:36 np0005539550 python3[29568]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
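The blockinfile task above appends a marker-delimited TOML stanza to /etc/containers/registries.conf; a sketch of the same append, with the markers and stanza taken from the parameters logged above:

    printf '%s\n' \
      '# BEGIN ANSIBLE MANAGED BLOCK' \
      '[[registry]]' \
      'location = "38.102.83.89:5001"' \
      'insecure = true' \
      '# END ANSIBLE MANAGED BLOCK' \
      >> /etc/containers/registries.conf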
Nov 29 01:30:37 np0005539550 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 01:30:37 np0005539550 systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Nov 29 01:30:37 np0005539550 systemd-logind[788]: Removed session 6.
Nov 29 01:31:02 np0005539550 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 01:31:02 np0005539550 systemd-logind[788]: New session 7 of user zuul.
Nov 29 01:31:02 np0005539550 systemd[1]: Started Session 7 of User zuul.
Nov 29 01:31:02 np0005539550 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 01:31:02 np0005539550 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 01:31:02 np0005539550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 01:31:02 np0005539550 python3[29609]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:03 np0005539550 python3[29635]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 01:31:04 np0005539550 python3[29661]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539550.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 01:31:04 np0005539550 python3[29695]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJrjhnp5BPERL248LVqF1CJ8EP1wq/Z56Tsyt80ETMnfUDtvWUz47K0wXbpz5P79ut5MVJjWHtBnsg3Wj8zK7v0= zuul@np0005539549.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
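Each authorized_key invocation above amounts to appending the ECDSA key to that user's authorized_keys with safe permissions. A rough sketch for cloud-admin (key abbreviated here, full value is in the entries above; the module additionally deduplicates and manages ~/.ssh itself):

    install -d -m 0700 -o cloud-admin -g cloud-admin ~cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAAE2Vj...zK7v0= zuul@np0005539549.novalocal' \
        >> ~cloud-admin/.ssh/authorized_keys
    chown cloud-admin:cloud-admin ~cloud-admin/.ssh/authorized_keys
    chmod 0600 ~cloud-admin/.ssh/authorized_keys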
Nov 29 01:31:05 np0005539550 python3[29773]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:31:05 np0005539550 python3[29846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764397864.7349114-167-30265317071927/source _original_basename=tmp_i7ce_6h follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:31:06 np0005539550 python3[29896]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 01:31:06 np0005539550 systemd[1]: Starting Hostname Service...
Nov 29 01:31:06 np0005539550 systemd[1]: Started Hostname Service.
Nov 29 01:31:06 np0005539550 systemd-hostnamed[29900]: Changed pretty hostname to 'compute-0'
Nov 29 01:31:06 np0005539550 systemd-hostnamed[29900]: Hostname set to <compute-0> (static)
Nov 29 01:31:06 np0005539550 NetworkManager[7195]: <info>  [1764397866.6825] hostname: static hostname changed from "np0005539550.novalocal" to "compute-0"
Nov 29 01:31:06 np0005539550 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 01:31:06 np0005539550 systemd[1]: Started Network Manager Script Dispatcher Service.
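ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, which is why the Hostname Service starts above; the manual equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl status    # static hostname should now read compute-0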
Nov 29 01:31:07 np0005539550 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 01:31:07 np0005539550 systemd[1]: session-7.scope: Consumed 2.441s CPU time.
Nov 29 01:31:07 np0005539550 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Nov 29 01:31:07 np0005539550 systemd-logind[788]: Removed session 7.
Nov 29 01:31:16 np0005539550 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 01:31:36 np0005539550 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 01:36:12 np0005539550 systemd-logind[788]: New session 8 of user zuul.
Nov 29 01:36:12 np0005539550 systemd[1]: Started Session 8 of User zuul.
Nov 29 01:36:12 np0005539550 python3[29999]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 01:36:15 np0005539550 python3[30115]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:16 np0005539550 python3[30188]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.5798988-34056-29084682106675/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:16 np0005539550 python3[30214]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:17 np0005539550 python3[30287]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.5798988-34056-29084682106675/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:17 np0005539550 python3[30313]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:17 np0005539550 python3[30386]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.5798988-34056-29084682106675/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:17 np0005539550 python3[30412]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:18 np0005539550 python3[30485]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.5798988-34056-29084682106675/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:18 np0005539550 python3[30511]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:19 np0005539550 python3[30584]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.5798988-34056-29084682106675/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:19 np0005539550 python3[30610]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:19 np0005539550 python3[30683]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.5798988-34056-29084682106675/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 01:36:20 np0005539550 python3[30709]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 01:36:20 np0005539550 python3[30782]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764398175.5798988-34056-29084682106675/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
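With the .repo files and the delorean.repo.md5 checksum in place, a quick sanity check that the new repositories resolve (sketch):

    dnf repolist --enabled | grep -Ei 'delorean|repo-setup'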
Nov 29 01:36:32 np0005539550 python3[30840]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 01:41:32 np0005539550 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 01:41:32 np0005539550 systemd[1]: session-8.scope: Consumed 5.504s CPU time.
Nov 29 01:41:32 np0005539550 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Nov 29 01:41:32 np0005539550 systemd-logind[788]: Removed session 8.
Nov 29 01:43:29 np0005539550 systemd[1]: Starting dnf makecache...
Nov 29 01:43:29 np0005539550 dnf[30846]: Failed determining last makecache time.
Nov 29 01:43:29 np0005539550 dnf[30846]: delorean-openstack-barbican-42b4c41831408a8e323 312 kB/s |  13 kB     00:00
Nov 29 01:43:29 np0005539550 dnf[30846]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.7 MB/s |  65 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-cinder-1c00d6490d88e436f26ef 895 kB/s |  32 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-python-stevedore-c4acc5639fd2329372142 3.6 MB/s | 131 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.3 MB/s |  32 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-os-net-config-9758ab42364673d01bc5014e  12 MB/s | 349 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 234 kB/s |  42 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-python-designate-tests-tempest-347fdbc 688 kB/s |  18 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-glance-1fd12c29b339f30fe823e 621 kB/s |  18 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 967 kB/s |  29 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-manila-3c01b7181572c95dac462 693 kB/s |  25 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-python-whitebox-neutron-tests-tempest- 4.1 MB/s | 154 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-octavia-ba397f07a7331190208c 760 kB/s |  26 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-watcher-c014f81a8647287f6dcc 625 kB/s |  16 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-python-tcib-1124124ec06aadbac34f0d340b 281 kB/s | 7.4 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 5.0 MB/s | 144 kB     00:00
Nov 29 01:43:30 np0005539550 dnf[30846]: delorean-openstack-swift-dc98a8463506ac520c469a 579 kB/s |  14 kB     00:00
Nov 29 01:43:31 np0005539550 dnf[30846]: delorean-python-tempestconf-8515371b7cceebd4282 2.0 MB/s |  53 kB     00:00
Nov 29 01:43:31 np0005539550 dnf[30846]: delorean-openstack-heat-ui-013accbfd179753bc3f0 3.6 MB/s |  96 kB     00:00
Nov 29 01:43:31 np0005539550 dnf[30846]: CentOS Stream 9 - BaseOS                         41 kB/s | 7.3 kB     00:00
Nov 29 01:43:31 np0005539550 dnf[30846]: CentOS Stream 9 - AppStream                      27 kB/s | 7.4 kB     00:00
Nov 29 01:43:31 np0005539550 dnf[30846]: CentOS Stream 9 - CRB                            32 kB/s | 7.2 kB     00:00
Nov 29 01:43:32 np0005539550 dnf[30846]: CentOS Stream 9 - Extras packages                86 kB/s | 8.3 kB     00:00
Nov 29 01:43:32 np0005539550 dnf[30846]: dlrn-antelope-testing                            17 MB/s | 1.1 MB     00:00
Nov 29 01:43:32 np0005539550 dnf[30846]: dlrn-antelope-build-deps                         12 MB/s | 461 kB     00:00
Nov 29 01:43:32 np0005539550 dnf[30846]: centos9-rabbitmq                                1.4 MB/s | 123 kB     00:00
Nov 29 01:43:32 np0005539550 dnf[30846]: centos9-storage                                  21 MB/s | 415 kB     00:00
Nov 29 01:43:33 np0005539550 dnf[30846]: centos9-opstools                                3.6 MB/s |  51 kB     00:00
Nov 29 01:43:33 np0005539550 dnf[30846]: NFV SIG OpenvSwitch                              11 MB/s | 456 kB     00:00
Nov 29 01:43:33 np0005539550 dnf[30846]: repo-setup-centos-appstream                      48 MB/s |  25 MB     00:00
Nov 29 01:43:40 np0005539550 dnf[30846]: repo-setup-centos-baseos                         14 MB/s | 8.8 MB     00:00
Nov 29 01:43:42 np0005539550 dnf[30846]: repo-setup-centos-highavailability               16 MB/s | 744 kB     00:00
Nov 29 01:43:43 np0005539550 dnf[30846]: repo-setup-centos-powertools                    5.3 MB/s | 7.3 MB     00:01
Nov 29 01:43:46 np0005539550 dnf[30846]: Extra Packages for Enterprise Linux 9 - x86_64   14 MB/s |  20 MB     00:01
Nov 29 01:44:02 np0005539550 dnf[30846]: Metadata cache created.
Nov 29 01:44:03 np0005539550 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 01:44:03 np0005539550 systemd[1]: Finished dnf makecache.
Nov 29 01:44:03 np0005539550 systemd[1]: dnf-makecache.service: Consumed 25.735s CPU time.
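dnf-makecache.service is the timer-driven metadata refresh seen above; the same refresh can be forced by hand:

    dnf makecache --timer
    systemctl list-timers dnf-makecache.timer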
Nov 29 02:00:15 np0005539550 systemd-logind[788]: New session 9 of user zuul.
Nov 29 02:00:15 np0005539550 systemd[1]: Started Session 9 of User zuul.
Nov 29 02:00:16 np0005539550 python3.9[31110]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:17 np0005539550 python3.9[31291]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
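For readability, the _raw_params above with journald's #012 newline escapes expanded:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main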
Nov 29 02:00:32 np0005539550 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 02:00:32 np0005539550 systemd[1]: session-9.scope: Consumed 9.057s CPU time.
Nov 29 02:00:32 np0005539550 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Nov 29 02:00:32 np0005539550 systemd-logind[788]: Removed session 9.
Nov 29 02:00:48 np0005539550 systemd-logind[788]: New session 10 of user zuul.
Nov 29 02:00:48 np0005539550 systemd[1]: Started Session 10 of User zuul.
Nov 29 02:00:49 np0005539550 python3.9[31501]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:00:50 np0005539550 python3.9[31675]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:51 np0005539550 python3.9[31827]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:00:52 np0005539550 python3.9[31980]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:00:53 np0005539550 python3.9[32132]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:00:54 np0005539550 python3.9[32284]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:00:54 np0005539550 python3.9[32407]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764399653.4296825-182-120993788297346/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:00:55 np0005539550 python3.9[32559]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:00:56 np0005539550 python3.9[32715]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:00:57 np0005539550 python3.9[32867]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:00:58 np0005539550 python3.9[33017]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:01:02 np0005539550 python3.9[33285]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
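/proc/cmdline is read-only, so a lineinfile with state=present and create=False can only succeed when the token is already there; the task is effectively an assertion that cloud-init was disabled on the kernel command line. Manual form of the same check:

    grep -qw 'cloud-init=disabled' /proc/cmdline && echo present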
Nov 29 02:01:03 np0005539550 python3.9[33435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:01:04 np0005539550 python3.9[33589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:01:06 np0005539550 python3.9[33747]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:01:07 np0005539550 python3.9[33831]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:02:13 np0005539550 systemd[1]: Reloading.
Nov 29 02:02:13 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:13 np0005539550 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 02:02:16 np0005539550 systemd[1]: Reloading.
Nov 29 02:02:16 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:16 np0005539550 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 02:02:16 np0005539550 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 02:02:16 np0005539550 systemd[1]: Reloading.
Nov 29 02:02:16 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:02:16 np0005539550 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 02:02:20 np0005539550 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Nov 29 02:02:20 np0005539550 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Nov 29 02:04:03 np0005539550 kernel: SELinux:  Converting 2719 SID table entries...
Nov 29 02:04:03 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:04:03 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:04:03 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:04:03 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:04:03 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:04:03 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:04:03 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:04:04 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 02:04:04 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:04:04 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:04:04 np0005539550 systemd[1]: Reloading.
Nov 29 02:04:04 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:04:04 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:04:09 np0005539550 python3.9[35349]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:11 np0005539550 python3.9[35630]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:04:12 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:04:12 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:04:12 np0005539550 systemd[1]: man-db-cache-update.service: Consumed 1.296s CPU time.
Nov 29 02:04:12 np0005539550 systemd[1]: run-rae197a04c5b74e05acbda244b377fdf4.service: Deactivated successfully.
Nov 29 02:04:12 np0005539550 python3.9[35783]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:04:15 np0005539550 python3.9[35936]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:16 np0005539550 python3.9[36088]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
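The ansible.posix.mount task with state=present only records the swap entry in /etc/fstab (activation comes later via mkswap/swapon); from the logged parameters the rendered line should be:

    grep '^/swap' /etc/fstab
    # expected: /swap none swap sw 0 0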
Nov 29 02:04:18 np0005539550 python3.9[36240]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:21 np0005539550 python3.9[36392]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:31 np0005539550 python3.9[36516]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764399858.744252-671-193158824251208/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:04:32 np0005539550 python3.9[36668]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:04:33 np0005539550 python3.9[36820]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:04:34 np0005539550 python3.9[36973]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
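The pair of tasks above seeds the LVM devices file: vgimportdevices --all imports any visible VGs into /etc/lvm/devices/system.devices, and the touch guarantees the file exists even with no VGs, switching LVM to devices-file filtering. To inspect the result:

    /usr/sbin/vgimportdevices --all
    lvmdevices    # lists entries in /etc/lvm/devices/system.devices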
Nov 29 02:04:35 np0005539550 python3.9[37125]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:04:35 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:04:36 np0005539550 python3.9[37279]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:04:39 np0005539550 python3.9[37437]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:04:41 np0005539550 python3.9[37597]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:04:42 np0005539550 python3.9[37750]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:04:44 np0005539550 python3.9[37908]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:04:45 np0005539550 python3.9[38060]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:50 np0005539550 python3.9[38213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:51 np0005539550 python3.9[38365]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:51 np0005539550 python3.9[38488]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399890.5463283-1028-114414358375943/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:52 np0005539550 python3.9[38640]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:04:52 np0005539550 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:04:52 np0005539550 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 02:04:52 np0005539550 kernel: Bridge firewalling registered
Nov 29 02:04:52 np0005539550 systemd-modules-load[38644]: Inserted module 'br_netfilter'
Nov 29 02:04:52 np0005539550 systemd[1]: Finished Load Kernel Modules.
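Judging by the 'Inserted module' line, /etc/modules-load.d/99-edpm.conf lists br_netfilter (the full file content is not logged, so other modules may also be present). A sketch of the manual equivalent:

    printf 'br_netfilter\n' >> /etc/modules-load.d/99-edpm.conf   # sketch only
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter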
Nov 29 02:04:53 np0005539550 python3.9[38799]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:04:54 np0005539550 python3.9[38922]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764399893.2872488-1097-271500411002613/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:04:55 np0005539550 python3.9[39074]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:04:59 np0005539550 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Nov 29 02:04:59 np0005539550 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Nov 29 02:04:59 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:04:59 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:04:59 np0005539550 systemd[1]: Reloading.
Nov 29 02:04:59 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:04:59 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:05:05 np0005539550 python3.9[42848]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:05:05 np0005539550 python3.9[43000]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:05:06 np0005539550 python3.9[43150]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:05:07 np0005539550 python3.9[43302]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:08 np0005539550 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:05:08 np0005539550 systemd[1]: Starting Authorization Manager...
Nov 29 02:05:08 np0005539550 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 02:05:08 np0005539550 polkitd[43519]: Started polkitd version 0.117
Nov 29 02:05:08 np0005539550 systemd[1]: Started Authorization Manager.
Nov 29 02:05:09 np0005539550 python3.9[43689]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:09 np0005539550 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:05:09 np0005539550 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:05:09 np0005539550 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:05:09 np0005539550 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:05:10 np0005539550 systemd[1]: Started Dynamic System Tuning Daemon.
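The tuned sequence above selects the throughput-performance profile and then enables and restarts the daemon; by hand:

    tuned-adm profile throughput-performance
    systemctl enable --now tuned
    tuned-adm active    # should report throughput-performance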
Nov 29 02:05:11 np0005539550 python3.9[43851]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:05:11 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:05:11 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:05:11 np0005539550 systemd[1]: man-db-cache-update.service: Consumed 5.323s CPU time.
Nov 29 02:05:11 np0005539550 systemd[1]: run-r728d5b2bb2e8466faf1a46d0bb090d24.service: Deactivated successfully.
Nov 29 02:05:14 np0005539550 python3.9[44004]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:14 np0005539550 systemd[1]: Reloading.
Nov 29 02:05:14 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:15 np0005539550 python3.9[44193]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:05:15 np0005539550 systemd[1]: Reloading.
Nov 29 02:05:16 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:05:18 np0005539550 python3.9[44382]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:19 np0005539550 python3.9[44535]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:19 np0005539550 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
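Pulling the swap tasks together (the dd at 02:04:12, the chmod 0600 via the file task, then the two commands above), the complete sequence for the 1 GiB swap file is:

    dd if=/dev/zero of=/swap count=1024 bs=1M
    chmod 0600 /swap
    mkswap /swap
    swapon /swap
    swapon --show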
Nov 29 02:05:20 np0005539550 python3.9[44688]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:05:22 np0005539550 python3.9[44850]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
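Writing 2 to /sys/kernel/mm/ksm/run stops kernel samepage merging and unmerges already-merged pages, complementing the ksm.service/ksmtuned.service disablement above:

    echo 2 > /sys/kernel/mm/ksm/run
    grep -H . /sys/kernel/mm/ksm/run /sys/kernel/mm/ksm/pages_shared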
Nov 29 02:05:23 np0005539550 python3.9[45003]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:05:23 np0005539550 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 02:05:23 np0005539550 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 02:05:23 np0005539550 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 02:05:23 np0005539550 systemd[1]: Starting Apply Kernel Variables...
Nov 29 02:05:23 np0005539550 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 02:05:23 np0005539550 systemd[1]: Finished Apply Kernel Variables.
Nov 29 02:05:24 np0005539550 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 02:05:24 np0005539550 systemd[1]: session-10.scope: Consumed 2min 23.082s CPU time.
Nov 29 02:05:24 np0005539550 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Nov 29 02:05:24 np0005539550 systemd-logind[788]: Removed session 10.
Nov 29 02:05:29 np0005539550 systemd-logind[788]: New session 11 of user zuul.
Nov 29 02:05:29 np0005539550 systemd[1]: Started Session 11 of User zuul.
Nov 29 02:05:30 np0005539550 python3.9[45187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:05:33 np0005539550 python3.9[45343]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:05:33 np0005539550 python3.9[45496]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:05:35 np0005539550 python3.9[45654]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:05:37 np0005539550 python3.9[45814]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:05:37 np0005539550 python3.9[45898]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:05:42 np0005539550 python3.9[46063]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:05:58 np0005539550 kernel: SELinux:  Converting 2731 SID table entries...
Nov 29 02:05:58 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:05:58 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:05:58 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:05:58 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:05:58 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:05:58 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:05:58 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:05:58 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 02:05:58 np0005539550 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 02:06:00 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:00 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:00 np0005539550 systemd[1]: Reloading.
Nov 29 02:06:00 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:00 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:00 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:06:01 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:06:01 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:06:01 np0005539550 systemd[1]: run-rc62d02f6a096487581ad276e84686c3e.service: Deactivated successfully.
Nov 29 02:06:18 np0005539550 python3.9[47160]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:06:18 np0005539550 systemd[1]: Reloading.
Nov 29 02:06:18 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:18 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:19 np0005539550 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 02:06:19 np0005539550 chown[47201]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 02:06:19 np0005539550 ovs-ctl[47207]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 02:06:19 np0005539550 ovs-ctl[47207]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 02:06:19 np0005539550 ovs-ctl[47207]: Starting ovsdb-server [  OK  ]
Nov 29 02:06:19 np0005539550 ovs-vsctl[47256]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 02:06:20 np0005539550 ovs-vsctl[47272]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 02:06:20 np0005539550 ovs-ctl[47207]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 02:06:20 np0005539550 ovs-ctl[47207]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:06:20 np0005539550 ovs-vsctl[47282]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 02:06:20 np0005539550 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 02:06:20 np0005539550 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 02:06:20 np0005539550 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 02:06:20 np0005539550 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 02:06:20 np0005539550 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 02:06:20 np0005539550 ovs-ctl[47326]: Inserting openvswitch module [  OK  ]
Nov 29 02:06:20 np0005539550 ovs-ctl[47295]: Starting ovs-vswitchd [  OK  ]
Nov 29 02:06:20 np0005539550 ovs-vsctl[47344]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 02:06:20 np0005539550 ovs-ctl[47295]: Enabling remote OVSDB managers [  OK  ]
Nov 29 02:06:20 np0005539550 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 02:06:20 np0005539550 systemd[1]: Starting Open vSwitch...
Nov 29 02:06:20 np0005539550 systemd[1]: Finished Open vSwitch.
Nov 29 02:06:21 np0005539550 python3.9[47495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:22 np0005539550 python3.9[47647]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:06:24 np0005539550 kernel: SELinux:  Converting 2745 SID table entries...
Nov 29 02:06:24 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:06:24 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:06:24 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:06:24 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:06:24 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:06:24 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:06:24 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:06:26 np0005539550 python3.9[47802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:06:27 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 02:06:27 np0005539550 python3.9[47960]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:30 np0005539550 python3.9[48113]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:06:31 np0005539550 python3.9[48400]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:06:32 np0005539550 python3.9[48550]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:06:33 np0005539550 python3.9[48704]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:36 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:36 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:36 np0005539550 systemd[1]: Reloading.
Nov 29 02:06:36 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:36 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:36 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:06:41 np0005539550 python3.9[49021]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:06:41 np0005539550 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 02:06:41 np0005539550 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 02:06:41 np0005539550 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 02:06:41 np0005539550 systemd[1]: Stopping Network Manager...
Nov 29 02:06:41 np0005539550 NetworkManager[7195]: <info>  [1764400001.2444] caught SIGTERM, shutting down normally.
Nov 29 02:06:41 np0005539550 NetworkManager[7195]: <info>  [1764400001.2460] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:06:41 np0005539550 NetworkManager[7195]: <info>  [1764400001.2460] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:06:41 np0005539550 NetworkManager[7195]: <info>  [1764400001.2460] dhcp4 (eth0): state changed no lease
Nov 29 02:06:41 np0005539550 NetworkManager[7195]: <info>  [1764400001.2462] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:06:41 np0005539550 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:06:41 np0005539550 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:06:42 np0005539550 NetworkManager[7195]: <info>  [1764400002.2524] exiting (success)
Nov 29 02:06:42 np0005539550 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 02:06:42 np0005539550 systemd[1]: Stopped Network Manager.
Nov 29 02:06:42 np0005539550 systemd[1]: NetworkManager.service: Consumed 20.821s CPU time, 4.1M memory peak, read 0B from disk, written 33.0K to disk.
Nov 29 02:06:42 np0005539550 systemd[1]: Starting Network Manager...
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.3144] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:c483fbf3-82cb-4f31-8932-5eb81827298b)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.3146] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.3213] manager[0x55fa5f9a7090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 02:06:42 np0005539550 systemd[1]: Starting Hostname Service...
Nov 29 02:06:42 np0005539550 systemd[1]: Started Hostname Service.
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4190] hostname: hostname: using hostnamed
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4191] hostname: static hostname changed from (none) to "compute-0"
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4194] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4201] manager[0x55fa5f9a7090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4201] manager[0x55fa5f9a7090]: rfkill: WWAN hardware radio set enabled
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4226] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4236] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4236] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4236] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4237] manager: Networking is enabled by state file
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4239] settings: Loaded settings plugin: keyfile (internal)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4243] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4290] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4304] dhcp: init: Using DHCP client 'internal'
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4308] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4316] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4324] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4332] device (lo): Activation: starting connection 'lo' (10063a61-047c-420e-af96-644d75bb9c9d)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4341] device (eth0): carrier: link connected
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4345] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4351] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4352] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4361] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4369] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4376] device (eth1): carrier: link connected
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4380] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4383] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (7af235b7-0485-5af2-945d-b5fcc188a690) (indicated)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4384] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4389] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4396] device (eth1): Activation: starting connection 'ci-private-network' (7af235b7-0485-5af2-945d-b5fcc188a690)
Nov 29 02:06:42 np0005539550 systemd[1]: Started Network Manager.
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4406] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4414] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4417] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4419] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4421] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4426] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4429] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4433] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4437] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4444] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4448] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4459] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4473] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4506] dhcp4 (eth0): state changed new lease, address=38.102.83.190
Nov 29 02:06:42 np0005539550 NetworkManager[49039]: <info>  [1764400002.4514] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 02:06:42 np0005539550 systemd[1]: Starting Network Manager Wait Online...
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0390] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0394] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0408] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0415] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0422] device (lo): Activation: successful, device activated.
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0426] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0429] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0432] device (eth1): Activation: successful, device activated.
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0655] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0658] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0662] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0666] device (eth0): Activation: successful, device activated.
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0671] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 02:06:43 np0005539550 NetworkManager[49039]: <info>  [1764400003.0673] manager: startup complete
Nov 29 02:06:43 np0005539550 systemd[1]: Finished Network Manager Wait Online.
Nov 29 02:06:43 np0005539550 python3.9[49209]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:06:49 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:06:49 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:06:49 np0005539550 systemd[1]: run-rc5170552506c4b17b8e850c8078de776.service: Deactivated successfully.
Nov 29 02:06:53 np0005539550 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:06:57 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:06:57 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:06:57 np0005539550 systemd[1]: Reloading.
Nov 29 02:06:57 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:06:58 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:06:58 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:07:08 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:07:08 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:07:08 np0005539550 systemd[1]: run-re5200f3b0c654483a82babe63d63eca0.service: Deactivated successfully.
Nov 29 02:07:09 np0005539550 python3.9[49706]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:07:11 np0005539550 python3.9[49858]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:12 np0005539550 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 02:07:12 np0005539550 python3.9[50012]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:13 np0005539550 python3.9[50167]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:14 np0005539550 python3.9[50319]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:14 np0005539550 python3.9[50471]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:15 np0005539550 python3.9[50623]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:16 np0005539550 python3.9[50746]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400035.1261616-652-98545538527909/.source _original_basename=.2oemh0ml follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:17 np0005539550 python3.9[50898]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:18 np0005539550 python3.9[51050]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 02:07:19 np0005539550 python3.9[51202]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:22 np0005539550 python3.9[51629]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 02:07:23 np0005539550 ansible-async_wrapper.py[51804]: Invoked with j605467963712 300 /home/zuul/.ansible/tmp/ansible-tmp-1764400042.7832847-850-179812774674395/AnsiballZ_edpm_os_net_config.py _
Nov 29 02:07:23 np0005539550 ansible-async_wrapper.py[51807]: Starting module and watcher
Nov 29 02:07:23 np0005539550 ansible-async_wrapper.py[51807]: Start watching 51808 (300)
Nov 29 02:07:23 np0005539550 ansible-async_wrapper.py[51808]: Start module (51808)
Nov 29 02:07:23 np0005539550 ansible-async_wrapper.py[51804]: Return async_wrapper task started.
Nov 29 02:07:23 np0005539550 python3.9[51809]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 29 02:07:24 np0005539550 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 02:07:24 np0005539550 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 02:07:24 np0005539550 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 02:07:24 np0005539550 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 02:07:24 np0005539550 kernel: cfg80211: failed to load regulatory.db
Nov 29 02:07:25 np0005539550 NetworkManager[49039]: <info>  [1764400045.9857] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51810 uid=0 result="success"
Nov 29 02:07:25 np0005539550 NetworkManager[49039]: <info>  [1764400045.9881] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0550] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0552] audit: op="connection-add" uuid="19b414be-ef17-4ade-bab4-3a93274b6fc7" name="br-ex-br" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0569] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0570] audit: op="connection-add" uuid="cf9d2efb-327b-4f43-909d-ea6b67273b7e" name="br-ex-port" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0590] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0591] audit: op="connection-add" uuid="5871bf38-2ed5-4fe3-a697-99f64001a4a4" name="eth1-port" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0603] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0605] audit: op="connection-add" uuid="f857db0b-238c-4a75-8949-5604ea2a6709" name="vlan20-port" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0615] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0616] audit: op="connection-add" uuid="b367b47b-67b2-4099-945d-73609e7374cd" name="vlan21-port" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0630] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0631] audit: op="connection-add" uuid="4b9ffde1-7721-42b9-9529-427b6ff022fb" name="vlan22-port" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0653] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0655] audit: op="connection-add" uuid="6b284820-f5ab-44bf-9bc1-96f4169a516f" name="vlan23-port" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0679] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0703] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0705] audit: op="connection-add" uuid="f96ec3c2-b26a-4b13-b523-0c193293397e" name="br-ex-if" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0754] audit: op="connection-update" uuid="7af235b7-0485-5af2-945d-b5fcc188a690" name="ci-private-network" args="ipv4.routes,ipv4.routing-rules,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,connection.port-type,connection.controller,connection.timestamp,connection.slave-type,connection.master,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.method,ipv6.routes,ovs-external-ids.data,ovs-interface.type" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0772] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0774] audit: op="connection-add" uuid="ff16cff0-fa7b-4788-976e-574faaecd0d4" name="vlan20-if" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0792] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0794] audit: op="connection-add" uuid="b3f65794-f856-4bc5-a124-58d227d8e102" name="vlan21-if" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0808] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0809] audit: op="connection-add" uuid="c34ac832-3dd4-4a85-a81f-f352e82218c6" name="vlan22-if" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0829] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0831] audit: op="connection-add" uuid="7e0b9890-0499-47e6-8226-a4733fb51907" name="vlan23-if" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0845] audit: op="connection-delete" uuid="65c55b91-2eaa-3af4-816f-ae4a114df54c" name="Wired connection 1" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0859] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0869] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0873] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (19b414be-ef17-4ade-bab4-3a93274b6fc7)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0873] audit: op="connection-activate" uuid="19b414be-ef17-4ade-bab4-3a93274b6fc7" name="br-ex-br" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0875] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0882] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0887] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (cf9d2efb-327b-4f43-909d-ea6b67273b7e)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0889] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0894] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0898] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (5871bf38-2ed5-4fe3-a697-99f64001a4a4)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0900] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0907] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0911] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f857db0b-238c-4a75-8949-5604ea2a6709)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0912] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0919] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0923] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b367b47b-67b2-4099-945d-73609e7374cd)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0924] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0931] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0935] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (4b9ffde1-7721-42b9-9529-427b6ff022fb)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0936] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0943] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0963] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (6b284820-f5ab-44bf-9bc1-96f4169a516f)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0965] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0967] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0970] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0980] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0986] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0992] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (f96ec3c2-b26a-4b13-b523-0c193293397e)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0993] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0996] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.0998] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1000] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1001] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1014] device (eth1): disconnecting for new activation request.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1015] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1019] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1020] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1021] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1024] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1029] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1033] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (ff16cff0-fa7b-4788-976e-574faaecd0d4)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1034] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1036] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1037] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1038] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1040] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1044] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1047] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (b3f65794-f856-4bc5-a124-58d227d8e102)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1047] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1050] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1052] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1053] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1056] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1060] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1064] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (c34ac832-3dd4-4a85-a81f-f352e82218c6)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1065] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1068] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1070] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1071] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1074] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1079] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1085] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7e0b9890-0499-47e6-8226-a4733fb51907)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1086] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1089] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1091] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1092] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1094] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1113] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1116] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1120] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1122] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1130] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1133] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1137] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1140] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1142] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1146] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1150] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1153] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1155] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 kernel: ovs-system: entered promiscuous mode
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1161] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1166] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1170] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1173] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1179] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 systemd-udevd[51815]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:07:26 np0005539550 kernel: Timeout policy base is empty
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1183] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1188] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1190] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1195] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1200] dhcp4 (eth0): canceled DHCP transaction
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1200] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1200] dhcp4 (eth0): state changed no lease
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1202] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1214] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1218] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51810 uid=0 result="fail" reason="Device is not activated"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1266] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1272] dhcp4 (eth0): state changed new lease, address=38.102.83.190
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1280] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 02:07:26 np0005539550 kernel: br-ex: entered promiscuous mode
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1565] device (eth1): disconnecting for new activation request.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1566] audit: op="connection-activate" uuid="7af235b7-0485-5af2-945d-b5fcc188a690" name="ci-private-network" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1572] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1579] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1586] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 02:07:26 np0005539550 kernel: vlan22: entered promiscuous mode
Nov 29 02:07:26 np0005539550 systemd-udevd[51816]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:07:26 np0005539550 kernel: vlan20: entered promiscuous mode
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1759] device (eth1): Activation: starting connection 'ci-private-network' (7af235b7-0485-5af2-945d-b5fcc188a690)
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1764] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1765] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1766] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1767] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1772] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1773] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1774] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1791] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1794] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1804] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 kernel: vlan21: entered promiscuous mode
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1808] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1813] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1816] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1819] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 systemd-udevd[51814]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1822] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1826] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1832] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1835] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1839] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1842] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1847] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1851] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1855] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1862] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1869] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51810 uid=0 result="success"
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1880] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1892] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539550 kernel: vlan23: entered promiscuous mode
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1910] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1911] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1915] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1936] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1948] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1950] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1956] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1963] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1968] device (eth1): Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1973] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1974] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1980] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1988] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1989] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1991] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.1997] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2003] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2010] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2061] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2064] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2087] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2095] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2119] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2122] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2130] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2135] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2139] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 02:07:26 np0005539550 NetworkManager[49039]: <info>  [1764400046.2144] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 02:07:27 np0005539550 NetworkManager[49039]: <info>  [1764400047.4024] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51810 uid=0 result="success"
Nov 29 02:07:27 np0005539550 NetworkManager[49039]: <info>  [1764400047.6103] checkpoint[0x55fa5f97d950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 02:07:27 np0005539550 NetworkManager[49039]: <info>  [1764400047.6106] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51810 uid=0 result="success"
Nov 29 02:07:27 np0005539550 python3.9[52168]: ansible-ansible.legacy.async_status Invoked with jid=j605467963712.51804 mode=status _async_dir=/root/.ansible_async
Nov 29 02:07:27 np0005539550 NetworkManager[49039]: <info>  [1764400047.9414] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51810 uid=0 result="success"
Nov 29 02:07:27 np0005539550 NetworkManager[49039]: <info>  [1764400047.9426] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51810 uid=0 result="success"
Nov 29 02:07:28 np0005539550 NetworkManager[49039]: <info>  [1764400048.2628] audit: op="networking-control" arg="global-dns-configuration" pid=51810 uid=0 result="success"
Nov 29 02:07:28 np0005539550 NetworkManager[49039]: <info>  [1764400048.2691] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 02:07:28 np0005539550 NetworkManager[49039]: <info>  [1764400048.2744] audit: op="networking-control" arg="global-dns-configuration" pid=51810 uid=0 result="success"
Nov 29 02:07:28 np0005539550 NetworkManager[49039]: <info>  [1764400048.2769] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51810 uid=0 result="success"
Nov 29 02:07:28 np0005539550 NetworkManager[49039]: <info>  [1764400048.4373] checkpoint[0x55fa5f97da20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 02:07:28 np0005539550 NetworkManager[49039]: <info>  [1764400048.4382] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51810 uid=0 result="success"
Nov 29 02:07:28 np0005539550 ansible-async_wrapper.py[51808]: Module complete (51808)
Nov 29 02:07:28 np0005539550 ansible-async_wrapper.py[51807]: Done in kid B.
Nov 29 02:07:31 np0005539550 python3.9[52274]: ansible-ansible.legacy.async_status Invoked with jid=j605467963712.51804 mode=status _async_dir=/root/.ansible_async
Nov 29 02:07:31 np0005539550 python3.9[52374]: ansible-ansible.legacy.async_status Invoked with jid=j605467963712.51804 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:07:32 np0005539550 python3.9[52526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:32 np0005539550 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 02:07:32 np0005539550 irqbalance[786]: IRQ 26 affinity is now unmanaged
Nov 29 02:07:33 np0005539550 python3.9[52649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400052.1672285-931-23788521709141/.source.returncode _original_basename=.5f9sw6fr follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:34 np0005539550 python3.9[52801]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:07:34 np0005539550 python3.9[52925]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400053.5870767-979-232029556508176/.source.cfg _original_basename=.op7lvip7 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:07:35 np0005539550 python3.9[53077]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:07:35 np0005539550 systemd[1]: Reloading Network Manager...
Nov 29 02:07:35 np0005539550 NetworkManager[49039]: <info>  [1764400055.7541] audit: op="reload" arg="0" pid=53081 uid=0 result="success"
Nov 29 02:07:35 np0005539550 NetworkManager[49039]: <info>  [1764400055.7552] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 02:07:35 np0005539550 systemd[1]: Reloaded Network Manager.
Nov 29 02:07:36 np0005539550 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 02:07:36 np0005539550 systemd[1]: session-11.scope: Consumed 54.028s CPU time.
Nov 29 02:07:36 np0005539550 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Nov 29 02:07:36 np0005539550 systemd-logind[788]: Removed session 11.
Nov 29 02:07:41 np0005539550 systemd-logind[788]: New session 12 of user zuul.
Nov 29 02:07:41 np0005539550 systemd[1]: Started Session 12 of User zuul.
Nov 29 02:07:42 np0005539550 python3.9[53265]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:43 np0005539550 python3.9[53419]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:45 np0005539550 python3.9[53613]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:07:45 np0005539550 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 02:07:45 np0005539550 systemd[1]: session-12.scope: Consumed 2.301s CPU time.
Nov 29 02:07:45 np0005539550 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Nov 29 02:07:45 np0005539550 systemd-logind[788]: Removed session 12.
Nov 29 02:07:45 np0005539550 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 02:07:50 np0005539550 systemd-logind[788]: New session 13 of user zuul.
Nov 29 02:07:50 np0005539550 systemd[1]: Started Session 13 of User zuul.
Nov 29 02:07:52 np0005539550 python3.9[53795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:53 np0005539550 python3.9[53949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:07:54 np0005539550 python3.9[54106]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:55 np0005539550 python3.9[54190]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:07:57 np0005539550 python3.9[54345]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:07:59 np0005539550 python3.9[54540]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:00 np0005539550 python3.9[54692]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:08:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-compat997399649-merged.mount: Deactivated successfully.
Nov 29 02:08:00 np0005539550 podman[54693]: 2025-11-29 07:08:00.56782656 +0000 UTC m=+0.446751529 system refresh
Nov 29 02:08:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:08:01 np0005539550 python3.9[54855]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:02 np0005539550 python3.9[54978]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400080.767402-202-148993316701101/.source.json follow=False _original_basename=podman_network_config.j2 checksum=285a18d75b0538a1b470912a188ac20dbc565355 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:02 np0005539550 python3.9[55130]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:03 np0005539550 python3.9[55253]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400082.3199322-247-22104087955853/.source.conf follow=False _original_basename=registries.conf.j2 checksum=197bf6e1388aca01b529f5e8d08286f263a7fb81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:04 np0005539550 python3.9[55405]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:05 np0005539550 python3.9[55557]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:05 np0005539550 python3.9[55712]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:06 np0005539550 python3.9[55864]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:07 np0005539550 python3.9[56016]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:08:10 np0005539550 python3.9[56169]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:08:10 np0005539550 python3.9[56323]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:08:11 np0005539550 python3.9[56475]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:08:12 np0005539550 python3.9[56627]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:08:13 np0005539550 python3.9[56780]: ansible-service_facts Invoked
Nov 29 02:08:13 np0005539550 network[56797]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:08:13 np0005539550 network[56798]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:08:13 np0005539550 network[56799]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:08:19 np0005539550 python3.9[57251]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:08:22 np0005539550 python3.9[57404]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:08:24 np0005539550 python3.9[57556]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:24 np0005539550 python3.9[57681]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400103.6148217-679-126740441043660/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:25 np0005539550 python3.9[57835]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:26 np0005539550 python3.9[57960]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400105.077742-724-98632139422687/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:28 np0005539550 python3.9[58114]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:29 np0005539550 python3.9[58268]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:08:31 np0005539550 python3.9[58352]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:08:32 np0005539550 python3.9[58506]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:08:33 np0005539550 python3.9[58590]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:08:33 np0005539550 chronyd[794]: chronyd exiting
Nov 29 02:08:33 np0005539550 systemd[1]: Stopping NTP client/server...
Nov 29 02:08:33 np0005539550 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 02:08:33 np0005539550 systemd[1]: Stopped NTP client/server.
Nov 29 02:08:33 np0005539550 systemd[1]: Starting NTP client/server...
Nov 29 02:08:33 np0005539550 chronyd[58598]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 02:08:33 np0005539550 chronyd[58598]: Frequency -24.976 +/- 0.236 ppm read from /var/lib/chrony/drift
Nov 29 02:08:33 np0005539550 chronyd[58598]: Loaded seccomp filter (level 2)
Nov 29 02:08:33 np0005539550 systemd[1]: Started NTP client/server.
Nov 29 02:08:34 np0005539550 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 02:08:34 np0005539550 systemd[1]: session-13.scope: Consumed 27.407s CPU time.
Nov 29 02:08:34 np0005539550 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Nov 29 02:08:34 np0005539550 systemd-logind[788]: Removed session 13.
Nov 29 02:08:40 np0005539550 systemd-logind[788]: New session 14 of user zuul.
Nov 29 02:08:40 np0005539550 systemd[1]: Started Session 14 of User zuul.
Nov 29 02:08:40 np0005539550 python3.9[58779]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:41 np0005539550 python3.9[58931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:42 np0005539550 python3.9[59054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400121.180688-67-198478355877626/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:43 np0005539550 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 02:08:43 np0005539550 systemd[1]: session-14.scope: Consumed 1.629s CPU time.
Nov 29 02:08:43 np0005539550 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Nov 29 02:08:43 np0005539550 systemd-logind[788]: Removed session 14.
Nov 29 02:08:48 np0005539550 systemd-logind[788]: New session 15 of user zuul.
Nov 29 02:08:48 np0005539550 systemd[1]: Started Session 15 of User zuul.
Nov 29 02:08:49 np0005539550 python3.9[59232]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:08:50 np0005539550 python3.9[59388]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:51 np0005539550 python3.9[59563]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:52 np0005539550 python3.9[59686]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764400131.1190782-88-157558927530955/.source.json _original_basename=.2oi_6ynk follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:53 np0005539550 python3.9[59838]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:54 np0005539550 python3.9[59961]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400132.9933066-157-95897060158692/.source _original_basename=.c6rjp5am follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:55 np0005539550 python3.9[60113]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:55 np0005539550 python3.9[60265]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:56 np0005539550 python3.9[60388]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400135.2300155-229-106133929591995/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:56 np0005539550 python3.9[60540]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:57 np0005539550 python3.9[60663]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764400136.396433-229-78479717542307/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:08:58 np0005539550 python3.9[60815]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:08:59 np0005539550 python3.9[60967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:08:59 np0005539550 python3.9[61090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400138.7289085-340-17270601108799/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:00 np0005539550 python3.9[61242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:01 np0005539550 python3.9[61365]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400140.1254983-385-241906163793355/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:02 np0005539550 python3.9[61517]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:02 np0005539550 systemd[1]: Reloading.
Nov 29 02:09:02 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:02 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:02 np0005539550 systemd[1]: Reloading.
Nov 29 02:09:02 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:02 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:02 np0005539550 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 02:09:02 np0005539550 systemd[1]: Finished EDPM Container Shutdown.
Nov 29 02:09:03 np0005539550 python3.9[61746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:03 np0005539550 python3.9[61869]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400142.9320278-454-194262895805624/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:04 np0005539550 python3.9[62021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:05 np0005539550 python3.9[62144]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400144.1897929-499-7410488352860/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:06 np0005539550 python3.9[62297]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:06 np0005539550 systemd[1]: Reloading.
Nov 29 02:09:06 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:06 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:06 np0005539550 systemd[1]: Reloading.
Nov 29 02:09:06 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:06 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:06 np0005539550 systemd[1]: Starting Create netns directory...
Nov 29 02:09:06 np0005539550 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:09:06 np0005539550 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:09:06 np0005539550 systemd[1]: Finished Create netns directory.
Nov 29 02:09:07 np0005539550 python3.9[62523]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:09:07 np0005539550 network[62540]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:09:07 np0005539550 network[62541]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:09:07 np0005539550 network[62542]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:09:12 np0005539550 python3.9[62804]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:12 np0005539550 systemd[1]: Reloading.
Nov 29 02:09:12 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:12 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:13 np0005539550 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 02:09:13 np0005539550 iptables.init[62843]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 02:09:13 np0005539550 iptables.init[62843]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 02:09:13 np0005539550 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 02:09:13 np0005539550 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 02:09:14 np0005539550 python3.9[63040]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:15 np0005539550 python3.9[63194]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:09:15 np0005539550 systemd[1]: Reloading.
Nov 29 02:09:15 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:09:15 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:09:15 np0005539550 systemd[1]: Starting Netfilter Tables...
Nov 29 02:09:15 np0005539550 systemd[1]: Finished Netfilter Tables.
Nov 29 02:09:16 np0005539550 python3.9[63386]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:18 np0005539550 python3.9[63539]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:18 np0005539550 python3.9[63664]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400157.957004-706-266191907507568/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:19 np0005539550 python3.9[63817]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:09:19 np0005539550 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 02:09:19 np0005539550 systemd[1]: Reloaded OpenSSH server daemon.
Nov 29 02:09:20 np0005539550 python3.9[63973]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:21 np0005539550 python3.9[64125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:21 np0005539550 python3.9[64248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400160.9194095-799-100147971885550/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:23 np0005539550 python3.9[64400]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:09:23 np0005539550 systemd[1]: Starting Time & Date Service...
Nov 29 02:09:23 np0005539550 systemd[1]: Started Time & Date Service.
Nov 29 02:09:24 np0005539550 python3.9[64556]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:24 np0005539550 python3.9[64708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:25 np0005539550 python3.9[64831]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400164.24678-904-170422829813451/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:25 np0005539550 python3.9[64983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:26 np0005539550 python3.9[65106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764400165.4819167-949-221262350844354/.source.yaml _original_basename=.s637dxnc follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:27 np0005539550 python3.9[65258]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:27 np0005539550 python3.9[65381]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400166.6798902-994-251609693019234/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:28 np0005539550 python3.9[65533]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:29 np0005539550 python3.9[65686]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:30 np0005539550 python3[65839]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:09:30 np0005539550 python3.9[65991]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:31 np0005539550 python3.9[66114]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400170.3878994-1111-222885714881836/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:32 np0005539550 python3.9[66266]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:32 np0005539550 python3.9[66389]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400171.7888432-1156-167446121882015/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:33 np0005539550 python3.9[66541]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:34 np0005539550 python3.9[66664]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400173.1381972-1201-168620098818220/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:35 np0005539550 python3.9[66816]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:35 np0005539550 python3.9[66939]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400174.5667188-1246-70749123709777/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:36 np0005539550 python3.9[67091]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:09:37 np0005539550 python3.9[67214]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764400175.971033-1291-249720401667167/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:37 np0005539550 python3.9[67366]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:38 np0005539550 python3.9[67518]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:39 np0005539550 python3.9[67677]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:40 np0005539550 python3.9[67830]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:41 np0005539550 python3.9[67982]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:42 np0005539550 python3.9[68134]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:09:42 np0005539550 python3.9[68287]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:09:43 np0005539550 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Nov 29 02:09:43 np0005539550 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 02:09:43 np0005539550 systemd[1]: session-15.scope: Consumed 34.597s CPU time.
Nov 29 02:09:43 np0005539550 systemd-logind[788]: Removed session 15.
Nov 29 02:09:52 np0005539550 systemd-logind[788]: New session 16 of user zuul.
Nov 29 02:09:52 np0005539550 systemd[1]: Started Session 16 of User zuul.
Nov 29 02:09:53 np0005539550 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:09:54 np0005539550 python3.9[68470]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:09:54 np0005539550 python3.9[68622]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:09:56 np0005539550 python3.9[68774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:09:57 np0005539550 python3.9[68926]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DvicBdqy7dEZlHZpy7m/TwUChtVXFipP55AL4//M7HIh4A4ZWW0M0pb4E4WsXc1Y99eeNf5R+fmafWv5Z2x8Tq9KiRM9wQGSEJo1Sp7Ant8TcIyfbWCUIhmGAfkYUT2iUTjyyBrBL7iGVxJbYtCagodoXoIL4MSkgeZpadFa4XI4DieFBF95zOzXF6Z9RVUiocOG6vaogo3k/wTemQxQ/dlVV7SPrtj+GoZEUpeNlAKRbkAB8PNee/Ne+abzClpRp50s2pAh7smZFmL0O+wDOgWwFImPpxCkh4nR/3IJq6O53KXSl9jR4X/vmJHpFEHC6oZX5/hfwaJTfvvELB5cjzaFh3mzFweGkQq82VhAAxVksDTO2+aUZFGDJbMSvjPTSTEl+qx+GAl7E0KnzST+NMnd5qplw0KIj+BBZgkZtKK8kAsxxRU3zDMDotlvIDG1KYN+wOGRG2Cy2afXmGFIFYdzOFlvkAwmv9yhY5u5OlWxzuiZEOcqJ0dGS1e0hk8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFq0l7tgdUK0C+AqSmZJQ8Y9Z17ynv3L7Gso+BnrUJe7#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWT8H4lhVkE+892UU3HiUydE/Wuy5lmeTLAJzcPPkEmKKDZLorB5daY+peHiUZWU/JHax1i6VTJiGCUcfBK9Vw=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfIZbQlJSY8OFW9gaKZpL5AOJYgHeGcUU4xMMLWNL/xUPPZkDRJ+0oOBxm1GBsA8W/sQVZWDc//tIOaPRg0Ts5mepXlfGs0Url+hpuUxGZNLWaIiPfHq1tUx7zM7eWeUlVhlBayXU+bDoHZDE1TezLFLi49CXlrQuy/1Fb5Ju8aYVVJNoRltLwGKo8JrHv8UnYQ29iZPFO7+AEqgSmsEyz9hjMO7qStFsK0Z4RYJrbTZ/AMj8FNebCRWGtc2weikdIjLid5Z20teORSzpJW4jLDvRkyg92/WdI7iFDyHhslm5uNGHqqE2uRPqQFTZ7tdP6IJzfhJms7WfRdsOS7qJdAeOLzhn/EcmLaKoST1KzKZYzMdAtqrHDPDth+ERDeHtT8CEHNFNgwH4Drtp7YWlKZyVPsv6dK3iVC5WQ4Smet9VXXpZhT8JcQr97oS6/QJ/gT2yzHqH9vE62bRuuVM3lwDNiZkdn1nVbxa8d58RY3T49As7qmlP5Y43puhyXDWU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvBaB2c/CSsrpPIGSKo/yIA8NKQbrk/1m+GY/Ma4/XG#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCX/VzLQPSOCPDMQMb838UxHYaVIDkLBboGMSvw1EX6MmRkAHKbJbJizg3TXu8nfZimb1PW1TRaFLHQkljXQfhA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ3gJW4xxSNpckw2TbtUBxTZruxTxiPlDkOB8Y4ICZA576sHCsss1Ph5y2zOkXYsz9fpf2TwDKPQIVDfUxQL2k42AS2PWqcJCelaMaAxDGDVmytzhvJO+0vO0kZSFoRnDYDxt2IUjJS2VV4xS4L9mRqjK8zsSYyINET0BAxRep9xLeUV0pztWwkopYucpBL9nU+ZMkA5y3nRMxInQNfxZwW5O2P7v+HScnTy2CUe+79l+0TMU0N6uM79jmcAAH5zDqSdRx1VS+lr4cWeNOPxGiXzEepk+MRml6Y0uGKdtdlboqK6kvYfSNkkhFmtXsnvtNQyA8UDSAercKYAeSPfJftqXmHbVvAY+Ky5R22RivRx7jpubqimyS4Tab95yEzsLi6hEQ2OW1pZleLTnr31vNLojOAxtrIY7YgkPSo3yrbURsfLyldLo3LfSlYfkTpkQFE2CajUrAitfcz+uMi9UVw0jCs+cC6uvKZdzu9Flnc8SDq2rMPIHuEP+9CACVSTU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxCaPCuKLUncOQ8c8c4/3OodUXgAR3WjvU4uCVk4XkO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7zHYLiINcKCNo52qkzrmctOgzvnHIchoPMaZyVaf/Aonhb5ntaWhlnHGxOVN+ZUQQOMPIjt7zIO4FB9IYg2xw=#012 create=True mode=0644 path=/tmp/ansible.y5yo44hw state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:09:58 np0005539550 python3.9[69078]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.y5yo44hw' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:09:59 np0005539550 python3.9[69232]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.y5yo44hw state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
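The three tasks above implement a stage-then-publish pattern for /etc/ssh/ssh_known_hosts: blockinfile writes the gathered host keys into a temporary file (the #012 sequences are syslog's escaping of embedded newlines), the shell task copies it over the live file, and the staging file is removed. A minimal shell sketch of the same sequence, with the key material abbreviated:

    tmp=$(mktemp /tmp/ansible.XXXXXXXX)
    {
      echo '# BEGIN ANSIBLE MANAGED BLOCK'
      # one line per host and key type, e.g.:
      echo 'compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...'
      echo '# END ANSIBLE MANAGED BLOCK'
    } > "$tmp"
    chmod 0644 "$tmp"
    cat "$tmp" > /etc/ssh/ssh_known_hosts   # publish the staged file
    rm -f "$tmp"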
Nov 29 02:09:59 np0005539550 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Nov 29 02:09:59 np0005539550 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 02:09:59 np0005539550 systemd[1]: session-16.scope: Consumed 3.548s CPU time.
Nov 29 02:09:59 np0005539550 systemd-logind[788]: Removed session 16.
Nov 29 02:10:06 np0005539550 systemd-logind[788]: New session 17 of user zuul.
Nov 29 02:10:06 np0005539550 systemd[1]: Started Session 17 of User zuul.
Nov 29 02:10:07 np0005539550 python3.9[69410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:08 np0005539550 python3.9[69566]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:10:09 np0005539550 python3.9[69720]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
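The two systemd module calls above map onto plain systemctl operations; sketched:

    systemctl enable sshd   # enabled=True
    systemctl start sshd    # state=started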
Nov 29 02:10:13 np0005539550 python3.9[69873]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:13 np0005539550 python3.9[70026]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:15 np0005539550 python3.9[70180]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:16 np0005539550 python3.9[70335]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
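The lines above show the edpm-nftables reload convention: the chain definitions are loaded unconditionally, while the flush/rules/jump files are concatenated into a single nft -f - run only when the .changed flag file exists, after which the flag is cleared. A sketch of that flow, assuming the same file layout:

    nft -f /etc/nftables/edpm-chains.nft
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi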
Nov 29 02:10:17 np0005539550 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 02:10:17 np0005539550 systemd[1]: session-17.scope: Consumed 4.565s CPU time.
Nov 29 02:10:17 np0005539550 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Nov 29 02:10:17 np0005539550 systemd-logind[788]: Removed session 17.
Nov 29 02:10:22 np0005539550 systemd-logind[788]: New session 18 of user zuul.
Nov 29 02:10:22 np0005539550 systemd[1]: Started Session 18 of User zuul.
Nov 29 02:10:23 np0005539550 python3.9[70513]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:24 np0005539550 python3.9[70669]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:10:25 np0005539550 python3.9[70753]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:10:28 np0005539550 python3.9[70904]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:29 np0005539550 python3.9[71055]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
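Session 18 is a reboot-needed probe: yum-utils supplies needs-restarting, whose -r mode exits non-zero when the running kernel or core userspace is older than what is installed, and /var/lib/openstack/reboot_required/ is scanned for marker files. A sketch, assuming exit status 1 signals a pending reboot:

    dnf -y install yum-utils
    if ! needs-restarting -r; then
        echo 'reboot required (stale kernel or core libraries)'
    fi
    find /var/lib/openstack/reboot_required/ -maxdepth 1 -type f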
Nov 29 02:10:30 np0005539550 python3.9[71205]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:30 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:10:30 np0005539550 python3.9[71356]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:10:31 np0005539550 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 02:10:31 np0005539550 systemd[1]: session-18.scope: Consumed 5.928s CPU time.
Nov 29 02:10:31 np0005539550 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Nov 29 02:10:31 np0005539550 systemd-logind[788]: Removed session 18.
Nov 29 02:10:39 np0005539550 systemd-logind[788]: New session 19 of user zuul.
Nov 29 02:10:39 np0005539550 systemd[1]: Started Session 19 of User zuul.
Nov 29 02:10:43 np0005539550 chronyd[58598]: Selected source 162.159.200.1 (pool.ntp.org)
Nov 29 02:10:47 np0005539550 python3[72122]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:10:49 np0005539550 python3[72218]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 02:10:50 np0005539550 python3[72245]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:10:51 np0005539550 python3[72271]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:51 np0005539550 kernel: loop: module loaded
Nov 29 02:10:51 np0005539550 kernel: loop3: detected capacity change from 0 to 14680064
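Decoded (#012 is a newline), the one-liner above creates a sparse 7 GiB backing file and attaches it to /dev/loop3; the kernel's capacity change to 14680064 sectors (x 512 bytes = 7 GiB) confirms the size:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G   # sparse: no blocks actually written
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk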
Nov 29 02:10:52 np0005539550 python3[72306]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:10:52 np0005539550 lvm[72309]: PV /dev/loop3 not used.
Nov 29 02:10:52 np0005539550 lvm[72311]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:52 np0005539550 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 29 02:10:53 np0005539550 lvm[72313]:  0 logical volume(s) in volume group "ceph_vg0" now active
Nov 29 02:10:53 np0005539550 lvm[72314]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:53 np0005539550 lvm[72314]: VG ceph_vg0 finished
Nov 29 02:10:53 np0005539550 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 29 02:10:53 np0005539550 lvm[72322]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:53 np0005539550 lvm[72322]: VG ceph_vg0 finished
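The follow-up one-liner layers LVM on the loop device; the lvm/systemd autoactivation messages above are the udev fallout of these commands:

    pvcreate /dev/loop3                          # physical volume on the loop device
    vgcreate ceph_vg0 /dev/loop3                 # volume group for the OSD
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # one LV consuming the whole VG
    lvs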
Nov 29 02:10:53 np0005539550 python3[72400]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:10:54 np0005539550 python3[72473]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400253.5160449-36981-17182034261021/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:10:55 np0005539550 python3[72523]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:10:55 np0005539550 systemd[1]: Reloading.
Nov 29 02:10:55 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:10:55 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:10:55 np0005539550 systemd[1]: Starting Ceph OSD losetup...
Nov 29 02:10:55 np0005539550 bash[72563]: /dev/loop3: [64513]:4327942 (/var/lib/ceph-osd-0.img)
Nov 29 02:10:55 np0005539550 systemd[1]: Finished Ceph OSD losetup.
Nov 29 02:10:55 np0005539550 lvm[72565]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:10:55 np0005539550 lvm[72565]: VG ceph_vg0 finished
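The unit file content itself is not logged (content=NOT_LOGGING_PARAMETER), but the surrounding lines show its purpose: a oneshot service that re-attaches the loop device at boot, here verified by losetup reporting the existing mapping. A sketch of the operator-side effect of the systemd task, under that assumption:

    systemctl enable --now ceph-osd-losetup-0.service   # enable triggers the 'Reloading.' above; --now starts it
    losetup /dev/loop3   # the unit's check; matches the bash[72563] output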
Nov 29 02:10:58 np0005539550 python3[72589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:11:00 np0005539550 python3[72682]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 02:11:04 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:11:04 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:11:07 np0005539550 python3[72793]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:11:07 np0005539550 python3[72821]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:11:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:08 np0005539550 python3[72884]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:08 np0005539550 python3[72910]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:09 np0005539550 python3[72988]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:11:09 np0005539550 python3[73061]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400268.8497102-37172-60377758979992/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:10 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:11:10 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:11:10 np0005539550 systemd[1]: run-r65ff9f5347f4415f9d5368c433e2b37e.service: Deactivated successfully.
Nov 29 02:11:10 np0005539550 python3[73163]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:11:10 np0005539550 python3[73237]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400269.9339714-37190-18945431601123/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:11:11 np0005539550 python3[73287]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:11:11 np0005539550 python3[73315]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:11:11 np0005539550 python3[73343]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:11:12 np0005539550 python3[73371]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid b66774a7-56d9-5535-bd8c-681234404870 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
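Unescaped (#012 is a newline; the stray backslash before --skip-monitoring-stack is a harmless leftover from a YAML line continuation), the bootstrap command reads:

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld --skip-prepare-host \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack --skip-dashboard \
        --mon-ip 192.168.122.100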
Nov 29 02:11:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:12 np0005539550 systemd-logind[788]: New session 20 of user ceph-admin.
Nov 29 02:11:12 np0005539550 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 02:11:12 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 02:11:12 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 02:11:12 np0005539550 systemd[1]: Starting User Manager for UID 42477...
Nov 29 02:11:12 np0005539550 systemd[73389]: Queued start job for default target Main User Target.
Nov 29 02:11:12 np0005539550 systemd[73389]: Created slice User Application Slice.
Nov 29 02:11:12 np0005539550 systemd[73389]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:11:12 np0005539550 systemd[73389]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:11:12 np0005539550 systemd[73389]: Reached target Paths.
Nov 29 02:11:12 np0005539550 systemd[73389]: Reached target Timers.
Nov 29 02:11:12 np0005539550 systemd[73389]: Starting D-Bus User Message Bus Socket...
Nov 29 02:11:12 np0005539550 systemd[73389]: Starting Create User's Volatile Files and Directories...
Nov 29 02:11:12 np0005539550 systemd[73389]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:11:12 np0005539550 systemd[73389]: Reached target Sockets.
Nov 29 02:11:12 np0005539550 systemd[73389]: Finished Create User's Volatile Files and Directories.
Nov 29 02:11:12 np0005539550 systemd[73389]: Reached target Basic System.
Nov 29 02:11:12 np0005539550 systemd[73389]: Reached target Main User Target.
Nov 29 02:11:12 np0005539550 systemd[73389]: Startup finished in 118ms.
Nov 29 02:11:12 np0005539550 systemd[1]: Started User Manager for UID 42477.
Nov 29 02:11:12 np0005539550 systemd[1]: Started Session 20 of User ceph-admin.
Nov 29 02:11:12 np0005539550 systemd[1]: session-20.scope: Deactivated successfully.
Nov 29 02:11:12 np0005539550 systemd-logind[788]: Session 20 logged out. Waiting for processes to exit.
Nov 29 02:11:12 np0005539550 systemd-logind[788]: Removed session 20.
Nov 29 02:11:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-compat321579562-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 02:11:22 np0005539550 systemd[1]: Stopping User Manager for UID 42477...
Nov 29 02:11:22 np0005539550 systemd[73389]: Activating special unit Exit the Session...
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped target Main User Target.
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped target Basic System.
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped target Paths.
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped target Sockets.
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped target Timers.
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:11:22 np0005539550 systemd[73389]: Closed D-Bus User Message Bus Socket.
Nov 29 02:11:22 np0005539550 systemd[73389]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:11:22 np0005539550 systemd[73389]: Removed slice User Application Slice.
Nov 29 02:11:22 np0005539550 systemd[73389]: Reached target Shutdown.
Nov 29 02:11:22 np0005539550 systemd[73389]: Finished Exit the Session.
Nov 29 02:11:22 np0005539550 systemd[73389]: Reached target Exit the Session.
Nov 29 02:11:22 np0005539550 systemd[1]: user@42477.service: Deactivated successfully.
Nov 29 02:11:22 np0005539550 systemd[1]: Stopped User Manager for UID 42477.
Nov 29 02:11:23 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 29 02:11:23 np0005539550 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 29 02:11:23 np0005539550 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 29 02:11:23 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 29 02:11:23 np0005539550 systemd[1]: Removed slice User Slice of UID 42477.
Nov 29 02:11:47 np0005539550 podman[73442]: 2025-11-29 07:11:47.607325805 +0000 UTC m=+34.857980132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:47 np0005539550 podman[73556]: 2025-11-29 07:11:47.702357118 +0000 UTC m=+0.064226514 container create b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303 (image=quay.io/ceph/ceph:v18, name=happy_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:11:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1966353896-merged.mount: Deactivated successfully.
Nov 29 02:11:47 np0005539550 podman[73556]: 2025-11-29 07:11:47.666614716 +0000 UTC m=+0.028484122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:47 np0005539550 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 02:11:47 np0005539550 systemd[1]: Started libpod-conmon-b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303.scope.
Nov 29 02:11:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:47 np0005539550 podman[73556]: 2025-11-29 07:11:47.97632569 +0000 UTC m=+0.338195086 container init b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303 (image=quay.io/ceph/ceph:v18, name=happy_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:47 np0005539550 podman[73556]: 2025-11-29 07:11:47.984570566 +0000 UTC m=+0.346439952 container start b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303 (image=quay.io/ceph/ceph:v18, name=happy_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:47 np0005539550 podman[73556]: 2025-11-29 07:11:47.996980186 +0000 UTC m=+0.358849562 container attach b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303 (image=quay.io/ceph/ceph:v18, name=happy_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:11:48 np0005539550 happy_williams[73573]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 29 02:11:48 np0005539550 systemd[1]: libpod-b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303.scope: Deactivated successfully.
Nov 29 02:11:48 np0005539550 podman[73578]: 2025-11-29 07:11:48.437374615 +0000 UTC m=+0.065257230 container died b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303 (image=quay.io/ceph/ceph:v18, name=happy_williams, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:11:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-febff15b1de612a16e66dc9a0979e47c7a42edea17831915145b632b7bbce23a-merged.mount: Deactivated successfully.
Nov 29 02:11:51 np0005539550 podman[73578]: 2025-11-29 07:11:51.122300169 +0000 UTC m=+2.750182694 container remove b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303 (image=quay.io/ceph/ceph:v18, name=happy_williams, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:51 np0005539550 systemd[1]: libpod-conmon-b61ed5bb84ca219623120a61bbdf74fc59b3f12b8e56374dc1e6e92885181303.scope: Deactivated successfully.
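happy_williams is cephadm's first throwaway container: it runs against the quay.io/ceph/ceph:v18 image pulled just above and prints the image's Ceph version before anything is deployed. Roughly equivalent to the following sketch (cephadm's real podman invocation carries more flags):

    podman run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v18 --version
    # -> ceph version 18.2.7 (6b0e9880...) reef (stable)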
Nov 29 02:11:51 np0005539550 podman[73595]: 2025-11-29 07:11:51.220623182 +0000 UTC m=+0.071649316 container create 9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:11:51 np0005539550 systemd[1]: Started libpod-conmon-9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832.scope.
Nov 29 02:11:51 np0005539550 podman[73595]: 2025-11-29 07:11:51.169712176 +0000 UTC m=+0.020738330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:51 np0005539550 podman[73595]: 2025-11-29 07:11:51.293563097 +0000 UTC m=+0.144589261 container init 9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:51 np0005539550 podman[73595]: 2025-11-29 07:11:51.299186188 +0000 UTC m=+0.150212322 container start 9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:51 np0005539550 jovial_brahmagupta[73611]: 167 167
Nov 29 02:11:51 np0005539550 systemd[1]: libpod-9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832.scope: Deactivated successfully.
Nov 29 02:11:51 np0005539550 podman[73595]: 2025-11-29 07:11:51.348325229 +0000 UTC m=+0.199351393 container attach 9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:51 np0005539550 podman[73595]: 2025-11-29 07:11:51.348847392 +0000 UTC m=+0.199873516 container died 9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c874ada26e3d26a1a724deca1472913dbc62a5f85106990659022f549996614e-merged.mount: Deactivated successfully.
Nov 29 02:11:51 np0005539550 podman[73595]: 2025-11-29 07:11:51.463173246 +0000 UTC m=+0.314199380 container remove 9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832 (image=quay.io/ceph/ceph:v18, name=jovial_brahmagupta, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:51 np0005539550 systemd[1]: libpod-conmon-9c44f6f910e4449868e296be34a1da871d719cb7eea443c8cdea455ae3c2a832.scope: Deactivated successfully.
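jovial_brahmagupta's "167 167" output is the uid/gid probe: cephadm stats a ceph-owned path inside the image to learn which numeric IDs the ceph user maps to (167:167 in upstream images), so host-side directories can be chowned to match. A sketch, assuming the conventional /var/lib/ceph path:

    podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph
    # -> 167 167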
Nov 29 02:11:51 np0005539550 podman[73630]: 2025-11-29 07:11:51.533761994 +0000 UTC m=+0.048557368 container create ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a (image=quay.io/ceph/ceph:v18, name=objective_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:51 np0005539550 systemd[1]: Started libpod-conmon-ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a.scope.
Nov 29 02:11:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:51 np0005539550 podman[73630]: 2025-11-29 07:11:51.511582678 +0000 UTC m=+0.026378062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:51 np0005539550 podman[73630]: 2025-11-29 07:11:51.612398513 +0000 UTC m=+0.127193897 container init ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a (image=quay.io/ceph/ceph:v18, name=objective_sammet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:11:51 np0005539550 podman[73630]: 2025-11-29 07:11:51.618097446 +0000 UTC m=+0.132892820 container start ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a (image=quay.io/ceph/ceph:v18, name=objective_sammet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:51 np0005539550 podman[73630]: 2025-11-29 07:11:51.625934172 +0000 UTC m=+0.140729576 container attach ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a (image=quay.io/ceph/ceph:v18, name=objective_sammet, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:51 np0005539550 objective_sammet[73646]: AQC3nCppRIs4JhAA1jtvVzLTv8Q4mu+aNb6RIA==
Nov 29 02:11:51 np0005539550 systemd[1]: libpod-ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a.scope: Deactivated successfully.
Nov 29 02:11:51 np0005539550 podman[73630]: 2025-11-29 07:11:51.645043701 +0000 UTC m=+0.159839065 container died ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a (image=quay.io/ceph/ceph:v18, name=objective_sammet, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:11:51 np0005539550 podman[73630]: 2025-11-29 07:11:51.794780641 +0000 UTC m=+0.309576015 container remove ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a (image=quay.io/ceph/ceph:v18, name=objective_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:11:51 np0005539550 systemd[1]: libpod-conmon-ef780e2bd4333da910d7c7d0b2042208e97b5ce41acab4d2debfd39da904475a.scope: Deactivated successfully.
Nov 29 02:11:51 np0005539550 podman[73664]: 2025-11-29 07:11:51.839560693 +0000 UTC m=+0.024600127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:51 np0005539550 podman[73664]: 2025-11-29 07:11:51.951680722 +0000 UTC m=+0.136720136 container create 7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55 (image=quay.io/ceph/ceph:v18, name=goofy_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:11:52 np0005539550 systemd[1]: Started libpod-conmon-7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55.scope.
Nov 29 02:11:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:52 np0005539550 podman[73664]: 2025-11-29 07:11:52.327519894 +0000 UTC m=+0.512559318 container init 7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55 (image=quay.io/ceph/ceph:v18, name=goofy_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:11:52 np0005539550 podman[73664]: 2025-11-29 07:11:52.33251488 +0000 UTC m=+0.517554284 container start 7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55 (image=quay.io/ceph/ceph:v18, name=goofy_hodgkin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 29 02:11:52 np0005539550 goofy_hodgkin[73679]: AQC4nCppYBnzFBAAoU53kiXzmjAoAvWVcjSXdA==
Nov 29 02:11:52 np0005539550 systemd[1]: libpod-7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55.scope: Deactivated successfully.
Nov 29 02:11:52 np0005539550 podman[73664]: 2025-11-29 07:11:52.455192512 +0000 UTC m=+0.640231946 container attach 7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55 (image=quay.io/ceph/ceph:v18, name=goofy_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:11:52 np0005539550 podman[73664]: 2025-11-29 07:11:52.455708385 +0000 UTC m=+0.640747819 container died 7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55 (image=quay.io/ceph/ceph:v18, name=goofy_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c1d0344f5145c4fdcab61d50ea6afe7f8dd888888fc9c940674ad9906162f126-merged.mount: Deactivated successfully.
Nov 29 02:11:53 np0005539550 podman[73664]: 2025-11-29 07:11:53.261638261 +0000 UTC m=+1.446677675 container remove 7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55 (image=quay.io/ceph/ceph:v18, name=goofy_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:11:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:53 np0005539550 podman[73699]: 2025-11-29 07:11:53.305965091 +0000 UTC m=+0.025194562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:53 np0005539550 podman[73699]: 2025-11-29 07:11:53.429616608 +0000 UTC m=+0.148846049 container create 8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54 (image=quay.io/ceph/ceph:v18, name=youthful_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:11:53 np0005539550 systemd[1]: Started libpod-conmon-8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54.scope.
Nov 29 02:11:53 np0005539550 systemd[1]: libpod-conmon-7dca27aaad66f4cd36524f39c2dca06ff4802b4fbabff34ee62d2a0c3a64be55.scope: Deactivated successfully.
Nov 29 02:11:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:53 np0005539550 podman[73699]: 2025-11-29 07:11:53.543678125 +0000 UTC m=+0.262907596 container init 8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54 (image=quay.io/ceph/ceph:v18, name=youthful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:53 np0005539550 podman[73699]: 2025-11-29 07:11:53.549650354 +0000 UTC m=+0.268879805 container start 8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54 (image=quay.io/ceph/ceph:v18, name=youthful_maxwell, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:11:53 np0005539550 podman[73699]: 2025-11-29 07:11:53.560618259 +0000 UTC m=+0.279847730 container attach 8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54 (image=quay.io/ceph/ceph:v18, name=youthful_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:11:53 np0005539550 youthful_maxwell[73713]: AQC5nCppEfAYIhAA6ugEWApWrYRnuxGIHlqYVA==
Nov 29 02:11:53 np0005539550 systemd[1]: libpod-8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54.scope: Deactivated successfully.
Nov 29 02:11:53 np0005539550 podman[73699]: 2025-11-29 07:11:53.57582444 +0000 UTC m=+0.295053911 container died 8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54 (image=quay.io/ceph/ceph:v18, name=youthful_maxwell, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:11:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2a8e64e0bdcda7e984850721cfef81b204c8f43346c1bf201e41b1941ad986f0-merged.mount: Deactivated successfully.
Nov 29 02:11:53 np0005539550 podman[73699]: 2025-11-29 07:11:53.617015351 +0000 UTC m=+0.336244792 container remove 8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54 (image=quay.io/ceph/ceph:v18, name=youthful_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:53 np0005539550 systemd[1]: libpod-conmon-8469f9f2a8b407e663c881ddef37d26efa59266f784951774adeed236b231d54.scope: Deactivated successfully.
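The three short-lived containers objective_sammet, goofy_hodgkin, and youthful_maxwell each emit one base64 secret: freshly generated cephx keys (mon key, client.admin, and so on) that bootstrap seeds into the new cluster. Each run is most likely a key generator of this shape:

    podman run --rm --entrypoint /usr/bin/ceph-authtool quay.io/ceph/ceph:v18 --gen-print-key
    # -> AQ...== (one cephx secret per invocation)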
Nov 29 02:11:53 np0005539550 podman[73735]: 2025-11-29 07:11:53.683701542 +0000 UTC m=+0.045042070 container create d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2 (image=quay.io/ceph/ceph:v18, name=amazing_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:11:53 np0005539550 systemd[1]: Started libpod-conmon-d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2.scope.
Nov 29 02:11:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/630ad52fa9cf7c09e8c6b43860b864b6c30e28e38aa4cba94d9b05ff306ebb69/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:53 np0005539550 podman[73735]: 2025-11-29 07:11:53.66406896 +0000 UTC m=+0.025409508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:53 np0005539550 podman[73735]: 2025-11-29 07:11:53.978192707 +0000 UTC m=+0.339533285 container init d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2 (image=quay.io/ceph/ceph:v18, name=amazing_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:53 np0005539550 podman[73735]: 2025-11-29 07:11:53.983327296 +0000 UTC m=+0.344667834 container start d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2 (image=quay.io/ceph/ceph:v18, name=amazing_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:54 np0005539550 amazing_galois[73751]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 29 02:11:54 np0005539550 amazing_galois[73751]: setting min_mon_release = pacific
Nov 29 02:11:54 np0005539550 amazing_galois[73751]: /usr/bin/monmaptool: set fsid to b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:11:54 np0005539550 amazing_galois[73751]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
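The four amazing_galois lines above are the output of cephadm's initial monmap build, executed in a short-lived quay.io/ceph/ceph:v18 container. A minimal shell sketch of the equivalent monmaptool invocation follows; the --addv address is an assumption taken from the monitor bind addresses logged further below, not from these lines themselves.

    # Hedged reconstruction of the bootstrap monmap step (not the verbatim
    # cephadm command line; the address is assumed from the mon's bind addrs).
    /usr/bin/monmaptool --create --clobber \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        --set-min-mon-release pacific \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
        /tmp/monmap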
Nov 29 02:11:54 np0005539550 systemd[1]: libpod-d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539550 podman[73735]: 2025-11-29 07:11:54.019467941 +0000 UTC m=+0.380808469 container attach d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2 (image=quay.io/ceph/ceph:v18, name=amazing_galois, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:54 np0005539550 podman[73735]: 2025-11-29 07:11:54.020175779 +0000 UTC m=+0.381516307 container died d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2 (image=quay.io/ceph/ceph:v18, name=amazing_galois, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:54 np0005539550 podman[73735]: 2025-11-29 07:11:54.072091579 +0000 UTC m=+0.433432107 container remove d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2 (image=quay.io/ceph/ceph:v18, name=amazing_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:54 np0005539550 systemd[1]: libpod-conmon-d06ac353a1088ab4f75be81eefe5c9872bae2801aedaac94b28ea51f991898b2.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539550 podman[73770]: 2025-11-29 07:11:54.141460367 +0000 UTC m=+0.048789463 container create df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8 (image=quay.io/ceph/ceph:v18, name=jolly_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:54 np0005539550 systemd[1]: Started libpod-conmon-df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8.scope.
Nov 29 02:11:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:54 np0005539550 podman[73770]: 2025-11-29 07:11:54.120559453 +0000 UTC m=+0.027888569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a787a347c55fe36e8a9e7695b7e211f3478202339223e2ee9c082f888295be/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a787a347c55fe36e8a9e7695b7e211f3478202339223e2ee9c082f888295be/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a787a347c55fe36e8a9e7695b7e211f3478202339223e2ee9c082f888295be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a787a347c55fe36e8a9e7695b7e211f3478202339223e2ee9c082f888295be/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:54 np0005539550 podman[73770]: 2025-11-29 07:11:54.231212735 +0000 UTC m=+0.138541851 container init df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8 (image=quay.io/ceph/ceph:v18, name=jolly_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:54 np0005539550 podman[73770]: 2025-11-29 07:11:54.237387589 +0000 UTC m=+0.144716685 container start df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8 (image=quay.io/ceph/ceph:v18, name=jolly_cori, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:54 np0005539550 podman[73770]: 2025-11-29 07:11:54.242035006 +0000 UTC m=+0.149364102 container attach df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8 (image=quay.io/ceph/ceph:v18, name=jolly_cori, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:11:54 np0005539550 systemd[1]: libpod-df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8.scope: Deactivated successfully.
Nov 29 02:11:54 np0005539550 podman[73770]: 2025-11-29 07:11:54.35319551 +0000 UTC m=+0.260524606 container died df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8 (image=quay.io/ceph/ceph:v18, name=jolly_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-92a787a347c55fe36e8a9e7695b7e211f3478202339223e2ee9c082f888295be-merged.mount: Deactivated successfully.
Nov 29 02:11:54 np0005539550 podman[73770]: 2025-11-29 07:11:54.431185423 +0000 UTC m=+0.338514519 container remove df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8 (image=quay.io/ceph/ceph:v18, name=jolly_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:11:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:54 np0005539550 systemd[1]: libpod-conmon-df3b029e9c00ab496f6d59a450829becd88ef88e63fbab51183f524a4f6911a8.scope: Deactivated successfully.
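Both throwaway containers above (amazing_galois, jolly_cori) show the create, init, start, attach, died, remove sequence that podman run --rm emits. Judging by the paths in the xfs remount messages (/tmp/keyring, /tmp/monmap, /var/log/ceph, /var/lib/ceph/mon/ceph-compute-0), the second container ran the monitor's mkfs step; a hedged sketch, with host-side volume paths assumed since only the in-container paths appear in the log:

    # Sketch only: the host-side -v sources are assumptions.
    podman run --rm --net=host \
        -v /tmp/monmap:/tmp/monmap:z \
        -v /tmp/keyring:/tmp/keyring:z \
        -v /var/log/ceph:/var/log/ceph:z \
        -v /var/lib/ceph/mon/ceph-compute-0:/var/lib/ceph/mon/ceph-compute-0:z \
        quay.io/ceph/ceph:v18 \
        ceph-mon --mkfs -i compute-0 \
            --fsid b66774a7-56d9-5535-bd8c-681234404870 \
            --monmap /tmp/monmap --keyring /tmp/keyring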
Nov 29 02:11:54 np0005539550 systemd[1]: Reloading.
Nov 29 02:11:54 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:54 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:54 np0005539550 systemd[1]: Reloading.
Nov 29 02:11:54 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:54 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:54 np0005539550 systemd[1]: Reached target All Ceph clusters and services.
Nov 29 02:11:54 np0005539550 systemd[1]: Reloading.
Nov 29 02:11:55 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:55 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:55 np0005539550 systemd[1]: Reached target Ceph cluster b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:11:55 np0005539550 systemd[1]: Reloading.
Nov 29 02:11:55 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:55 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:55 np0005539550 systemd[1]: Reloading.
Nov 29 02:11:55 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:11:55 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:11:55 np0005539550 systemd[1]: Created slice Slice /system/ceph-b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:11:55 np0005539550 systemd[1]: Reached target System Time Set.
Nov 29 02:11:55 np0005539550 systemd[1]: Reached target System Time Synchronized.
Nov 29 02:11:55 np0005539550 systemd[1]: Starting Ceph mon.compute-0 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:11:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:11:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
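The repeated "Reloading." entries and the new slice and target units are cephadm persisting the monitor as a systemd service: each unit file it writes is followed by a daemon-reload. The equivalent manual steps would look roughly like this, with the unit name inferred from cephadm's ceph-<fsid>@<daemon>.service convention rather than read from this log:

    # Hedged equivalent of the unit installation the reloads correspond to.
    systemctl daemon-reload
    systemctl enable --now \
        ceph-b66774a7-56d9-5535-bd8c-681234404870@mon.compute-0.service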
Nov 29 02:11:56 np0005539550 podman[74066]: 2025-11-29 07:11:56.016224422 +0000 UTC m=+0.021155821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:56 np0005539550 podman[74066]: 2025-11-29 07:11:56.137611882 +0000 UTC m=+0.142543281 container create a6287d550da372926f8b0d65f8ce34bed81517cda29f7fa3922eaca4d539edfb (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab6da1e6a0b824450a34df387ec65d40fc94440217b3602e06fdddcf2712bf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab6da1e6a0b824450a34df387ec65d40fc94440217b3602e06fdddcf2712bf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab6da1e6a0b824450a34df387ec65d40fc94440217b3602e06fdddcf2712bf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ab6da1e6a0b824450a34df387ec65d40fc94440217b3602e06fdddcf2712bf5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:56 np0005539550 podman[74066]: 2025-11-29 07:11:56.517129717 +0000 UTC m=+0.522061136 container init a6287d550da372926f8b0d65f8ce34bed81517cda29f7fa3922eaca4d539edfb (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:11:56 np0005539550 podman[74066]: 2025-11-29 07:11:56.522340358 +0000 UTC m=+0.527271767 container start a6287d550da372926f8b0d65f8ce34bed81517cda29f7fa3922eaca4d539edfb (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: pidfile_write: ignore empty --pid-file
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: load: jerasure load: lrc 
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Git sha 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: DB SUMMARY
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: DB Session ID:  W9FQTB2XAJ7METHOMF1J
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                                     Options.env: 0x558ed9106c40
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                                Options.info_log: 0x558eda3f8ec0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                                 Options.wal_dir: 
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                    Options.write_buffer_manager: 0x558eda408b40
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                               Options.row_cache: None
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                              Options.wal_filter: None
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.wal_compression: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.max_background_jobs: 2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Compression algorithms supported:
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kZSTD supported: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kXpressCompression supported: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kBZip2Compression supported: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kLZ4Compression supported: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kZlibCompression supported: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         kSnappyCompression supported: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:           Options.merge_operator: 
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:        Options.compaction_filter: None
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558eda3f8aa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x558eda3f11f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:          Options.compression: NoCompression
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.num_levels: 7
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ff4fe731-8e89-4164-b95b-761a6503bf52
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400316566311, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400316731431, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "W9FQTB2XAJ7METHOMF1J", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400316731701, "job": 1, "event": "recovery_finished"}
Nov 29 02:11:56 np0005539550 ceph-mon[74085]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
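The monitor's RocksDB store has now been recovered from MANIFEST-000005 plus WAL 000004.log and a fresh manifest written. For later debugging, the same store can be examined offline with ceph-kvstore-tool; a sketch under two assumptions: the mon is stopped first, and the path shown is the in-container one from the log (cephadm keeps it under /var/lib/ceph/<fsid>/mon.compute-0/ on the host).

    # Offline inspection sketch; never run against a live mon.
    ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-compute-0/store.db list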
Nov 29 02:11:57 np0005539550 bash[74066]: a6287d550da372926f8b0d65f8ce34bed81517cda29f7fa3922eaca4d539edfb
Nov 29 02:11:57 np0005539550 systemd[1]: Started Ceph mon.compute-0 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558eda41ae00
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: rocksdb: DB pointer 0x558eda524000
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.7 total, 0.7 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.165       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.165       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.17              0.00         1    0.165       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.17              0.00         1    0.165       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.7 total, 0.7 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x558eda3f11f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@-1(???) e0 preinit fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-29T07:11:54.286109Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).mds e1 new map
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
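At this point mon.compute-0 has won a standalone election twice (once on the pre-seeded monmap e0, again after monmap e1 committed) and reports an empty osdmap. Quorum can be confirmed from any host holding the admin keyring; a minimal check:

    # Standard status commands, shown as examples rather than taken from this log.
    ceph quorum_status --format json-pretty
    ceph mon dump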
Nov 29 02:11:57 np0005539550 podman[74107]: 2025-11-29 07:11:57.366566762 +0000 UTC m=+0.072679810 container create 08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73 (image=quay.io/ceph/ceph:v18, name=amazing_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mkfs b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 29 02:11:57 np0005539550 podman[74107]: 2025-11-29 07:11:57.323700829 +0000 UTC m=+0.029813897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:11:57 np0005539550 systemd[1]: Started libpod-conmon-08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73.scope.
Nov 29 02:11:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f23f03a8a638987fef1a8f66cf6f0ff63783680a3c502fcc59907bf30e68994/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f23f03a8a638987fef1a8f66cf6f0ff63783680a3c502fcc59907bf30e68994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f23f03a8a638987fef1a8f66cf6f0ff63783680a3c502fcc59907bf30e68994/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:57 np0005539550 podman[74107]: 2025-11-29 07:11:57.496026995 +0000 UTC m=+0.202140073 container init 08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73 (image=quay.io/ceph/ceph:v18, name=amazing_knuth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:57 np0005539550 podman[74107]: 2025-11-29 07:11:57.50341078 +0000 UTC m=+0.209523828 container start 08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73 (image=quay.io/ceph/ceph:v18, name=amazing_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:11:57 np0005539550 podman[74107]: 2025-11-29 07:11:57.506981419 +0000 UTC m=+0.213094467 container attach 08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73 (image=quay.io/ceph/ceph:v18, name=amazing_knuth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 02:11:57 np0005539550 ceph-mon[74085]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824603185' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:  cluster:
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    id:     b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    health: HEALTH_OK
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]: 
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:  services:
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    mon: 1 daemons, quorum compute-0 (age 0.639597s)
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    mgr: no daemons active
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    osd: 0 osds: 0 up, 0 in
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]: 
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:  data:
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    pools:   0 pools, 0 pgs
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    objects: 0 objects, 0 B
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    usage:   0 B used, 0 B / 0 B avail
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]:    pgs:     
Nov 29 02:11:57 np0005539550 amazing_knuth[74141]: 
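The status dump above comes from a throwaway quay.io/ceph/ceph:v18 container running a single `ceph status` against the freshly bootstrapped mon. A minimal Python sketch of the same check, assuming the host has a working /etc/ceph/ceph.conf and admin keyring (field names are from the JSON form of `ceph status`):

```python
# Minimal sketch, not cephadm's code: query cluster health the way the
# one-shot container above does, but via JSON output. Assumes a reachable
# cluster plus /etc/ceph/ceph.conf and an admin keyring on this host.
import json
import subprocess

out = subprocess.run(
    ["ceph", "status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
status = json.loads(out)

print(status["health"]["status"])   # expected: HEALTH_OK, as logged above
print(status["monmap"]["epoch"])    # monmap e1 for this brand-new cluster
```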
Nov 29 02:11:57 np0005539550 systemd[1]: libpod-08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73.scope: Deactivated successfully.
Nov 29 02:11:57 np0005539550 conmon[74141]: conmon 08a89841d9d8c1412486 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73.scope/container/memory.events
Nov 29 02:11:57 np0005539550 podman[74107]: 2025-11-29 07:11:57.982278494 +0000 UTC m=+0.688391542 container died 08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73 (image=quay.io/ceph/ceph:v18, name=amazing_knuth, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:11:58 np0005539550 ceph-mon[74085]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:11:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4f23f03a8a638987fef1a8f66cf6f0ff63783680a3c502fcc59907bf30e68994-merged.mount: Deactivated successfully.
Nov 29 02:11:58 np0005539550 podman[74107]: 2025-11-29 07:11:58.996895715 +0000 UTC m=+1.703008763 container remove 08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73 (image=quay.io/ceph/ceph:v18, name=amazing_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:59 np0005539550 systemd[1]: libpod-conmon-08a89841d9d8c1412486db85ac2f5de043c35e986f2a81c24f67c95617836a73.scope: Deactivated successfully.
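Each of these helper containers follows the same podman event sequence, create, init, start, attach, died, remove, with the conmon and libcrun scopes torn down around it. A sketch that recovers that sequence per container ID from a saved copy of this journal; the file path and regex are illustrative only:

```python
# Sketch: recover each helper container's lifecycle from a saved journal.
# JOURNAL path and the regex are illustrative, not any podman API.
import re

EVENT = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")
JOURNAL = "/tmp/np0005539550.log"  # hypothetical dump of this journal

events: dict[str, list[str]] = {}
with open(JOURNAL, encoding="utf-8") as fh:
    for line in fh:
        m = EVENT.search(line)
        if m:
            events.setdefault(m.group(2)[:12], []).append(m.group(1))

for cid, seq in events.items():
    print(cid, " -> ".join(seq))
# e.g. 08a89841d9d8 create -> init -> start -> attach -> died -> remove
```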
Nov 29 02:11:59 np0005539550 podman[74181]: 2025-11-29 07:11:59.070830977 +0000 UTC m=+0.047272385 container create 14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c (image=quay.io/ceph/ceph:v18, name=interesting_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:11:59 np0005539550 systemd[1]: Started libpod-conmon-14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c.scope.
Nov 29 02:11:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:59 np0005539550 podman[74181]: 2025-11-29 07:11:59.049371379 +0000 UTC m=+0.025812807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e781117d7d9462076f0935aed075564db72c98617a8f8655874db36e4f0fb2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e781117d7d9462076f0935aed075564db72c98617a8f8655874db36e4f0fb2e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e781117d7d9462076f0935aed075564db72c98617a8f8655874db36e4f0fb2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e781117d7d9462076f0935aed075564db72c98617a8f8655874db36e4f0fb2e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 podman[74181]: 2025-11-29 07:11:59.168699548 +0000 UTC m=+0.145140996 container init 14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c (image=quay.io/ceph/ceph:v18, name=interesting_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 02:11:59 np0005539550 podman[74181]: 2025-11-29 07:11:59.176435102 +0000 UTC m=+0.152876510 container start 14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c (image=quay.io/ceph/ceph:v18, name=interesting_kirch, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:59 np0005539550 podman[74181]: 2025-11-29 07:11:59.182482173 +0000 UTC m=+0.158923611 container attach 14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c (image=quay.io/ceph/ceph:v18, name=interesting_kirch, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:11:59 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 02:11:59 np0005539550 ceph-mon[74085]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3146563757' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:11:59 np0005539550 ceph-mon[74085]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3146563757' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:11:59 np0005539550 interesting_kirch[74197]: 
Nov 29 02:11:59 np0005539550 interesting_kirch[74197]: [global]
Nov 29 02:11:59 np0005539550 interesting_kirch[74197]: 	fsid = b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:11:59 np0005539550 interesting_kirch[74197]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
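The two-line [global] section echoed by interesting_kirch (fsid plus mon_host) is the entire minimal client config at this point. A sketch that reads those keys back with the standard library, assuming the file has been installed at the conventional /etc/ceph/ceph.conf path:

```python
# Sketch: parse the minimal conf echoed above. The path is the conventional
# install location (an assumption); the two keys are exactly what the log shows.
import configparser

conf = configparser.ConfigParser()
conf.read("/etc/ceph/ceph.conf")  # assumed location of the minimal conf

print(conf["global"]["fsid"])      # b66774a7-56d9-5535-bd8c-681234404870
print(conf["global"]["mon_host"])  # [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
```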
Nov 29 02:11:59 np0005539550 systemd[1]: libpod-14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c.scope: Deactivated successfully.
Nov 29 02:11:59 np0005539550 podman[74181]: 2025-11-29 07:11:59.669884051 +0000 UTC m=+0.646325469 container died 14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c (image=quay.io/ceph/ceph:v18, name=interesting_kirch, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:11:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1e781117d7d9462076f0935aed075564db72c98617a8f8655874db36e4f0fb2e-merged.mount: Deactivated successfully.
Nov 29 02:11:59 np0005539550 podman[74181]: 2025-11-29 07:11:59.722921419 +0000 UTC m=+0.699362827 container remove 14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c (image=quay.io/ceph/ceph:v18, name=interesting_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:11:59 np0005539550 systemd[1]: libpod-conmon-14f71ccd63a76d2c35fb7e4e9ca2858d35524ef2392ad54127ca6d7ded58411c.scope: Deactivated successfully.
Nov 29 02:11:59 np0005539550 podman[74234]: 2025-11-29 07:11:59.792589544 +0000 UTC m=+0.046772752 container create c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4 (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:11:59 np0005539550 systemd[1]: Started libpod-conmon-c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4.scope.
Nov 29 02:11:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:11:59 np0005539550 podman[74234]: 2025-11-29 07:11:59.768591043 +0000 UTC m=+0.022774271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8a884010da454a59df68e6eb077a6fe2f694b9a069d82449532754b9bb1a9c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8a884010da454a59df68e6eb077a6fe2f694b9a069d82449532754b9bb1a9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8a884010da454a59df68e6eb077a6fe2f694b9a069d82449532754b9bb1a9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8a884010da454a59df68e6eb077a6fe2f694b9a069d82449532754b9bb1a9c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:11:59 np0005539550 podman[74234]: 2025-11-29 07:11:59.890400114 +0000 UTC m=+0.144583332 container init c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4 (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:11:59 np0005539550 podman[74234]: 2025-11-29 07:11:59.896930167 +0000 UTC m=+0.151113365 container start c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4 (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:11:59 np0005539550 podman[74234]: 2025-11-29 07:11:59.90222464 +0000 UTC m=+0.156407838 container attach c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4 (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:11:59 np0005539550 ceph-mon[74085]: from='client.? 192.168.122.100:0/3146563757' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:11:59 np0005539550 ceph-mon[74085]: from='client.? 192.168.122.100:0/3146563757' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:12:00 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:12:00 np0005539550 ceph-mon[74085]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781219672' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:12:00 np0005539550 systemd[1]: libpod-c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4.scope: Deactivated successfully.
Nov 29 02:12:00 np0005539550 podman[74234]: 2025-11-29 07:12:00.382205492 +0000 UTC m=+0.636388720 container died c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4 (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:12:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ff8a884010da454a59df68e6eb077a6fe2f694b9a069d82449532754b9bb1a9c-merged.mount: Deactivated successfully.
Nov 29 02:12:00 np0005539550 podman[74234]: 2025-11-29 07:12:00.443189279 +0000 UTC m=+0.697372477 container remove c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4 (image=quay.io/ceph/ceph:v18, name=mystifying_kalam, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:00 np0005539550 systemd[1]: libpod-conmon-c35fa269b044f9d9952f8e0152a3c054945f24216330491ef09415f04d6562e4.scope: Deactivated successfully.
Nov 29 02:12:00 np0005539550 systemd[1]: Stopping Ceph mon.compute-0 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:12:00 np0005539550 ceph-mon[74085]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 02:12:00 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 02:12:00 np0005539550 ceph-mon[74085]: mon.compute-0@0(leader) e1 shutdown
Nov 29 02:12:00 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0[74081]: 2025-11-29T07:12:00.637+0000 7f985caf7640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 29 02:12:00 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0[74081]: 2025-11-29T07:12:00.637+0000 7f985caf7640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 29 02:12:00 np0005539550 ceph-mon[74085]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 02:12:00 np0005539550 ceph-mon[74085]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 02:12:00 np0005539550 podman[74317]: 2025-11-29 07:12:00.72269109 +0000 UTC m=+0.120523490 container died a6287d550da372926f8b0d65f8ce34bed81517cda29f7fa3922eaca4d539edfb (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:12:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1ab6da1e6a0b824450a34df387ec65d40fc94440217b3602e06fdddcf2712bf5-merged.mount: Deactivated successfully.
Nov 29 02:12:00 np0005539550 podman[74317]: 2025-11-29 07:12:00.763800159 +0000 UTC m=+0.161632559 container remove a6287d550da372926f8b0d65f8ce34bed81517cda29f7fa3922eaca4d539edfb (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:12:00 np0005539550 bash[74317]: ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0
Nov 29 02:12:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:12:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 02:12:00 np0005539550 systemd[1]: ceph-b66774a7-56d9-5535-bd8c-681234404870@mon.compute-0.service: Deactivated successfully.
Nov 29 02:12:00 np0005539550 systemd[1]: Stopped Ceph mon.compute-0 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:12:00 np0005539550 systemd[1]: Starting Ceph mon.compute-0 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:12:01 np0005539550 podman[74416]: 2025-11-29 07:12:01.118423091 +0000 UTC m=+0.045738926 container create 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:12:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a9cc949e9c44d69567345cd257f909b4fc76b5dcec4f12599ede59dc12b86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a9cc949e9c44d69567345cd257f909b4fc76b5dcec4f12599ede59dc12b86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a9cc949e9c44d69567345cd257f909b4fc76b5dcec4f12599ede59dc12b86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de7a9cc949e9c44d69567345cd257f909b4fc76b5dcec4f12599ede59dc12b86/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:01 np0005539550 podman[74416]: 2025-11-29 07:12:01.180353313 +0000 UTC m=+0.107669168 container init 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:12:01 np0005539550 podman[74416]: 2025-11-29 07:12:01.186014004 +0000 UTC m=+0.113329829 container start 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:01 np0005539550 bash[74416]: 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182
Nov 29 02:12:01 np0005539550 podman[74416]: 2025-11-29 07:12:01.098635146 +0000 UTC m=+0.025951001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:01 np0005539550 systemd[1]: Started Ceph mon.compute-0 for b66774a7-56d9-5535-bd8c-681234404870.
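The stop/start cycle above is plain systemd: cephadm-style clusters name each daemon unit ceph-&lt;fsid&gt;@&lt;type&gt;.&lt;id&gt;.service, which is exactly the unit logged here. A sketch that rebuilds that unit name and polls it, assuming systemctl is on PATH and the caller has the needed privileges:

```python
# Sketch: rebuild the systemd unit name logged above and query it.
# Calling systemctl from Python is illustrative, not how cephadm does it.
import subprocess

FSID = "b66774a7-56d9-5535-bd8c-681234404870"

def unit_name(daemon: str) -> str:
    return f"ceph-{FSID}@{daemon}.service"

unit = unit_name("mon.compute-0")
print(unit)  # ceph-b66774a7-...@mon.compute-0.service, as in the log
subprocess.run(["systemctl", "is-active", unit])  # prints "active" once started
```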
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: pidfile_write: ignore empty --pid-file
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: load: jerasure load: lrc 
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Git sha 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: DB SUMMARY
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: DB Session ID:  YJG4JH9P5UPLR9LZG3U4
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54090 ; 
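The DB SUMMARY above lists the whole on-disk state of the mon's RocksDB store: CURRENT, IDENTITY, one MANIFEST, one SST file, and the write-ahead log under store.db. A sketch that inventories that directory (root on the mon host assumed; the path comes straight from the log):

```python
# Sketch: inventory the mon store directory named in the DB SUMMARY above.
# Requires root on the mon host.
from pathlib import Path

store = Path("/var/lib/ceph/mon/ceph-compute-0/store.db")
for entry in sorted(store.iterdir()):
    print(f"{entry.name:<20} {entry.stat().st_size:>10} bytes")
# expect CURRENT, IDENTITY, MANIFEST-000010, 000008.sst, 000009.log, ...
```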
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                                     Options.env: 0x55611d482c40
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                                Options.info_log: 0x55611eccf040
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                                 Options.wal_dir: 
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                    Options.write_buffer_manager: 0x55611ecdeb40
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                               Options.row_cache: None
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                              Options.wal_filter: None
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.wal_compression: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.max_background_jobs: 2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.max_total_wal_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:       Options.compaction_readahead_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Compression algorithms supported:
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kZSTD supported: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:           Options.merge_operator: 
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:        Options.compaction_filter: None
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55611eccec40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55611ecc71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:        Options.write_buffer_size: 33554432
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:  Options.max_write_buffer_number: 2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:          Options.compression: NoCompression
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.num_levels: 7
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ff4fe731-8e89-4164-b95b-761a6503bf52
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400321232528, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400321237709, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 53701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 135, "table_properties": {"data_size": 52256, "index_size": 151, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2906, "raw_average_key_size": 29, "raw_value_size": 49930, "raw_average_value_size": 509, "num_data_blocks": 7, "num_entries": 98, "num_filter_entries": 98, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400321, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400321237952, "job": 1, "event": "recovery_finished"}
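RocksDB's EVENT_LOG_v1 records are single-line JSON after a fixed marker, so the recovery_started / table_file_creation / recovery_finished trail above is machine-readable as-is. A sketch that extracts those payloads from a saved journal file (file path illustrative):

```python
# Sketch: pull the EVENT_LOG_v1 JSON payloads out of a saved journal file.
# The marker string is taken from the log; the path is illustrative.
import json
import re

MARKER = re.compile(r"EVENT_LOG_v1 (\{.*\})")

with open("/tmp/np0005539550.log", encoding="utf-8") as fh:
    for line in fh:
        m = MARKER.search(line)
        if m:
            event = json.loads(m.group(1))
            print(event["time_micros"], event["event"])
# 1764400321232528 recovery_started
# 1764400321237709 table_file_creation
# 1764400321237952 recovery_finished
```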
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55611ecf0e00
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: DB pointer 0x55611ed7a000
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   54.34 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0   54.34 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 2.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 2.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 512.00 MB usage: 0.77 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.34 KB,6.55651e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???) e1 preinit fsid b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???).mds e1 new map
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
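Multi-line records like the two dumps above are written by the daemon as a single journal entry; rsyslog-style exports escape each embedded control character as `#` plus its three-digit octal code, so newlines come through as `#012` and tabs as `#011`. A minimal sketch for undoing that escaping when post-processing a raw export:

```python
import re

def unescape_syslog(message: str) -> str:
    """Expand rsyslog octal escapes: '#012' -> newline, '#011' -> tab, etc."""
    return re.sub(r"#([0-3][0-7]{2})", lambda m: chr(int(m.group(1), 8)), message)

# e.g. a fragment of the print_map record above, as it appears in a raw export:
raw = "e1#012enable_multiple, ever_enabled_multiple: 1,1#012legacy client fscid: -1"
print(unescape_syslog(raw))
```

The pattern is restricted to exactly three octal digits, so incidental text such as `BinnedLRUCache@0x55611ecc71f0#2` in the stats dump is left untouched.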
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
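The mon binds a paired address vector: msgr v2 on port 3300 and legacy v1 on 6789. A small, self-contained parser for that `[v2:ip:port/nonce,v1:ip:port/nonce]` notation (the `parse_addrvec` helper name is ours, not Ceph's):

```python
import re
from typing import List, Tuple

ADDR_RE = re.compile(r"(v[12]):([0-9.]+):(\d+)/(\d+)")

def parse_addrvec(addrvec: str) -> List[Tuple[str, str, int, int]]:
    """Split a Ceph address vector into (protocol, ip, port, nonce) tuples."""
    return [(proto, ip, int(port), int(nonce))
            for proto, ip, port, nonce in ADDR_RE.findall(addrvec)]

print(parse_addrvec("[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]"))
# -> [('v2', '192.168.122.100', 3300, 0), ('v1', '192.168.122.100', 6789, 0)]
```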
Nov 29 02:12:01 np0005539550 podman[74436]: 2025-11-29 07:12:01.261569196 +0000 UTC m=+0.042847844 container create 6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed (image=quay.io/ceph/ceph:v18, name=zealous_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 29 02:12:01 np0005539550 systemd[1]: Started libpod-conmon-6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed.scope.
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
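With a single mon the election is won standalone and quorum is just rank 0 (the election logic bumped the epoch from the last-seen 3; the status output below reports 5). The same state can be read back with `ceph quorum_status`; a minimal sketch, assuming the `ceph` CLI and admin keyring are available as they are to the one-shot containers below:

```python
import json
import subprocess

# Query the mon's view of quorum; the fields match what the log above reports.
out = subprocess.run(
    ["ceph", "quorum_status", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
qs = json.loads(out)
print(qs["quorum_leader_name"])  # 'compute-0'
print(qs["quorum_names"])        # ['compute-0']
print(qs["election_epoch"])      # 5 at this point in the log
```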
Nov 29 02:12:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c755b1b6b407031a121a87a5d2ab992c1fcf4a6f177e084b21607f564f85fe7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c755b1b6b407031a121a87a5d2ab992c1fcf4a6f177e084b21607f564f85fe7a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c755b1b6b407031a121a87a5d2ab992c1fcf4a6f177e084b21607f564f85fe7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:01 np0005539550 podman[74436]: 2025-11-29 07:12:01.245578766 +0000 UTC m=+0.026857444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:01 np0005539550 podman[74436]: 2025-11-29 07:12:01.345825037 +0000 UTC m=+0.127103705 container init 6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed (image=quay.io/ceph/ceph:v18, name=zealous_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:01 np0005539550 podman[74436]: 2025-11-29 07:12:01.353902539 +0000 UTC m=+0.135181187 container start 6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed (image=quay.io/ceph/ceph:v18, name=zealous_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:12:01 np0005539550 podman[74436]: 2025-11-29 07:12:01.357433687 +0000 UTC m=+0.138712335 container attach 6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed (image=quay.io/ceph/ceph:v18, name=zealous_ishizaka, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 29 02:12:01 np0005539550 systemd[1]: libpod-6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed.scope: Deactivated successfully.
Nov 29 02:12:01 np0005539550 podman[74436]: 2025-11-29 07:12:01.863824771 +0000 UTC m=+0.645103419 container died 6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed (image=quay.io/ceph/ceph:v18, name=zealous_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:12:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c755b1b6b407031a121a87a5d2ab992c1fcf4a6f177e084b21607f564f85fe7a-merged.mount: Deactivated successfully.
Nov 29 02:12:01 np0005539550 podman[74436]: 2025-11-29 07:12:01.91929771 +0000 UTC m=+0.700576358 container remove 6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed (image=quay.io/ceph/ceph:v18, name=zealous_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:01 np0005539550 systemd[1]: libpod-conmon-6c0b3d735b46c85377e6555716fdc837fcb374666cef3c2055a6f42016fd61ed.scope: Deactivated successfully.
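The full create → init → start → attach → died → remove cycle above, with `ceph.conf`, the admin keyring and `/var/log/ceph` bind-mounted from the host, is consistent with cephadm dispatching a one-shot `ceph` CLI call in a disposable container. A rough reproduction of the pattern, with image and mounts taken from the log (the exact cephadm command line is not recorded there):

```python
import subprocess

# One-shot container: podman names it at random, attaches for stdout, and with
# --rm removes it as soon as the process exits -- one died/remove pair per call.
subprocess.run(
    ["podman", "run", "--rm",
     "-v", "/etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro",
     "-v", "/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:ro",
     "quay.io/ceph/ceph:v18",
     "ceph", "status", "--format", "json-pretty"],
    check=True,
)
```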
Nov 29 02:12:01 np0005539550 podman[74527]: 2025-11-29 07:12:01.983902038 +0000 UTC m=+0.042552067 container create 3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f (image=quay.io/ceph/ceph:v18, name=priceless_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:12:02 np0005539550 systemd[1]: Started libpod-conmon-3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f.scope.
Nov 29 02:12:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d4629c29b9e28090d8d1271edbb17dac6283d19b6a97687596a82ba1c3fd9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d4629c29b9e28090d8d1271edbb17dac6283d19b6a97687596a82ba1c3fd9d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d4629c29b9e28090d8d1271edbb17dac6283d19b6a97687596a82ba1c3fd9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:02 np0005539550 podman[74527]: 2025-11-29 07:12:01.967074817 +0000 UTC m=+0.025724876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:02 np0005539550 podman[74527]: 2025-11-29 07:12:02.064218329 +0000 UTC m=+0.122868388 container init 3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f (image=quay.io/ceph/ceph:v18, name=priceless_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:02 np0005539550 podman[74527]: 2025-11-29 07:12:02.069981033 +0000 UTC m=+0.128631062 container start 3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f (image=quay.io/ceph/ceph:v18, name=priceless_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:12:02 np0005539550 podman[74527]: 2025-11-29 07:12:02.073097721 +0000 UTC m=+0.131747780 container attach 3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f (image=quay.io/ceph/ceph:v18, name=priceless_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:12:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
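This container, like the previous one, exists only to dispatch a single `config set` (the option value is not echoed in the mon's `handle_command` line). The equivalent direct CLI calls look like the sketch below; `global` is the usual target section during bootstrap, and the CIDR is a placeholder inferred from the mon's own address, not taken from the log:

```python
import subprocess

# 192.168.122.0/24 is an assumed placeholder; the real values are elided above.
for option in ("public_network", "cluster_network"):
    subprocess.run(
        ["ceph", "config", "set", "global", option, "192.168.122.0/24"],
        check=True,
    )
```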
Nov 29 02:12:02 np0005539550 systemd[1]: libpod-3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f.scope: Deactivated successfully.
Nov 29 02:12:02 np0005539550 podman[74527]: 2025-11-29 07:12:02.537662887 +0000 UTC m=+0.596312916 container died 3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f (image=quay.io/ceph/ceph:v18, name=priceless_jang, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d0d4629c29b9e28090d8d1271edbb17dac6283d19b6a97687596a82ba1c3fd9d-merged.mount: Deactivated successfully.
Nov 29 02:12:02 np0005539550 podman[74527]: 2025-11-29 07:12:02.580646733 +0000 UTC m=+0.639296762 container remove 3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f (image=quay.io/ceph/ceph:v18, name=priceless_jang, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:02 np0005539550 systemd[1]: libpod-conmon-3ffdd86a75ff0dbb8603fd71aa6a40b6d8b0c42e135e99a0eca79d46b521059f.scope: Deactivated successfully.
Nov 29 02:12:02 np0005539550 systemd[1]: Reloading.
Nov 29 02:12:02 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:02 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:02 np0005539550 systemd[1]: Reloading.
Nov 29 02:12:02 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:12:02 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:12:03 np0005539550 systemd[1]: Starting Ceph mgr.compute-0.pdhsqi for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:12:03 np0005539550 podman[74707]: 2025-11-29 07:12:03.399353229 +0000 UTC m=+0.049439929 container create df06dba8a7318e0da679793792f3c360a241f93315f9a19f0c8fbbb494447795 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada82efeb08f058e2117c4bcbc92a55377c9520fa49e92d396ef6cf34f146392/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada82efeb08f058e2117c4bcbc92a55377c9520fa49e92d396ef6cf34f146392/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada82efeb08f058e2117c4bcbc92a55377c9520fa49e92d396ef6cf34f146392/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ada82efeb08f058e2117c4bcbc92a55377c9520fa49e92d396ef6cf34f146392/merged/var/lib/ceph/mgr/ceph-compute-0.pdhsqi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:03 np0005539550 podman[74707]: 2025-11-29 07:12:03.457662639 +0000 UTC m=+0.107749339 container init df06dba8a7318e0da679793792f3c360a241f93315f9a19f0c8fbbb494447795 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:12:03 np0005539550 podman[74707]: 2025-11-29 07:12:03.464103891 +0000 UTC m=+0.114190591 container start df06dba8a7318e0da679793792f3c360a241f93315f9a19f0c8fbbb494447795 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:03 np0005539550 bash[74707]: df06dba8a7318e0da679793792f3c360a241f93315f9a19f0c8fbbb494447795
Nov 29 02:12:03 np0005539550 podman[74707]: 2025-11-29 07:12:03.378059855 +0000 UTC m=+0.028146575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:03 np0005539550 systemd[1]: Started Ceph mgr.compute-0.pdhsqi for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:12:03 np0005539550 ceph-mgr[74726]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:12:03 np0005539550 ceph-mgr[74726]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 02:12:03 np0005539550 ceph-mgr[74726]: pidfile_write: ignore empty --pid-file
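Unlike the throwaway CLI containers, the mgr runs in a long-lived service container; it drops to uid:gid 167:167, the fixed `ceph` account in the image, and ignores the empty `--pid-file`. A quick sanity check of that account mapping, run inside the container:

```python
import grp
import pwd

# The ceph container images create the 'ceph' user and group with id 167.
print(pwd.getpwnam("ceph").pw_uid, grp.getgrnam("ceph").gr_gid)  # 167 167
```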
Nov 29 02:12:03 np0005539550 podman[74727]: 2025-11-29 07:12:03.558670699 +0000 UTC m=+0.049984393 container create 04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb (image=quay.io/ceph/ceph:v18, name=upbeat_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:12:03 np0005539550 systemd[1]: Started libpod-conmon-04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb.scope.
Nov 29 02:12:03 np0005539550 podman[74727]: 2025-11-29 07:12:03.539006817 +0000 UTC m=+0.030320531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8482d6b220e0e53b2ace0f9d6ef547f209e208a00ee04a0b2f26beef7ead2412/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8482d6b220e0e53b2ace0f9d6ef547f209e208a00ee04a0b2f26beef7ead2412/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8482d6b220e0e53b2ace0f9d6ef547f209e208a00ee04a0b2f26beef7ead2412/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:03 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'alerts'
Nov 29 02:12:03 np0005539550 podman[74727]: 2025-11-29 07:12:03.679522276 +0000 UTC m=+0.170835990 container init 04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb (image=quay.io/ceph/ceph:v18, name=upbeat_brahmagupta, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:12:03 np0005539550 podman[74727]: 2025-11-29 07:12:03.687432274 +0000 UTC m=+0.178745968 container start 04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb (image=quay.io/ceph/ceph:v18, name=upbeat_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:12:03 np0005539550 podman[74727]: 2025-11-29 07:12:03.701919717 +0000 UTC m=+0.193233441 container attach 04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb (image=quay.io/ceph/ceph:v18, name=upbeat_brahmagupta, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:12:04 np0005539550 ceph-mgr[74726]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:12:04 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'balancer'
Nov 29 02:12:04 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:04.119+0000 7f4f43e42140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
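Each `Module X has missing NOTIFY_TYPES member` warning means the module never declares which cluster notifications its `notify()` hook wants, so the mgr logs it once per module at load time; the warnings recur below for `balancer`, `crash`, `devicehealth` and others. A sketch of the reef-era declaration, written against the in-tree `mgr_module` API (it only runs inside ceph-mgr, and the module name and notify types here are illustrative):

```python
from typing import List

from mgr_module import MgrModule, NotifyType


class Example(MgrModule):
    # Declaring NOTIFY_TYPES both silences the load-time warning and limits
    # which notify() callbacks the mgr delivers to this module.
    NOTIFY_TYPES: List[NotifyType] = [NotifyType.mon_map, NotifyType.health]

    def notify(self, notify_type: NotifyType, notify_id: str) -> None:
        self.log.debug("notification: %s %s", notify_type, notify_id)
```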
Nov 29 02:12:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:12:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974355524' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]: 
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]: {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "health": {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "status": "HEALTH_OK",
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "checks": {},
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "mutes": []
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    },
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "election_epoch": 5,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "quorum": [
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        0
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    ],
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "quorum_names": [
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "compute-0"
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    ],
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "quorum_age": 2,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "monmap": {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "epoch": 1,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "min_mon_release_name": "reef",
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_mons": 1
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    },
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "osdmap": {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "epoch": 1,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_osds": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_up_osds": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "osd_up_since": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_in_osds": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "osd_in_since": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_remapped_pgs": 0
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    },
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "pgmap": {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "pgs_by_state": [],
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_pgs": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_pools": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_objects": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "data_bytes": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "bytes_used": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "bytes_avail": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "bytes_total": 0
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    },
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "fsmap": {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "epoch": 1,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "by_rank": [],
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "up:standby": 0
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    },
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "mgrmap": {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "available": false,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "num_standbys": 0,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "modules": [
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:            "iostat",
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:            "nfs",
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:            "restful"
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        ],
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "services": {}
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    },
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "servicemap": {
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "epoch": 1,
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "modified": "2025-11-29T07:11:57.326155+0000",
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:        "services": {}
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    },
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]:    "progress_events": {}
Nov 29 02:12:04 np0005539550 upbeat_brahmagupta[74767]: }
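This status is consistent with a bootstrap wait loop: quorum exists (`"quorum": [0]`) but no mgr has registered yet (`"available": false` in `mgrmap`) and the osdmap is empty, which is why the same `status` call repeats below until the mgr comes up. A minimal readiness check over that JSON (the `status_json` argument stands in for the captured output):

```python
import json

def bootstrap_ready(status_json: str) -> bool:
    """True once there is mon quorum and an active mgr."""
    s = json.loads(status_json)
    return bool(s["quorum"]) and s["mgrmap"]["available"]

# With the status above: quorum is [0] but mgrmap.available is false -> False.
```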
Nov 29 02:12:04 np0005539550 systemd[1]: libpod-04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb.scope: Deactivated successfully.
Nov 29 02:12:04 np0005539550 podman[74727]: 2025-11-29 07:12:04.218573627 +0000 UTC m=+0.709887331 container died 04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb (image=quay.io/ceph/ceph:v18, name=upbeat_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:12:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8482d6b220e0e53b2ace0f9d6ef547f209e208a00ee04a0b2f26beef7ead2412-merged.mount: Deactivated successfully.
Nov 29 02:12:04 np0005539550 podman[74727]: 2025-11-29 07:12:04.275145594 +0000 UTC m=+0.766459288 container remove 04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb (image=quay.io/ceph/ceph:v18, name=upbeat_brahmagupta, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:12:04 np0005539550 systemd[1]: libpod-conmon-04298a6a172ada553c0a9892ec3c3d22e7f0bf3e9ddcb39dc9a030b52ee478bb.scope: Deactivated successfully.
Nov 29 02:12:04 np0005539550 ceph-mgr[74726]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:12:04 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'cephadm'
Nov 29 02:12:04 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:04.462+0000 7f4f43e42140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:12:06 np0005539550 podman[74815]: 2025-11-29 07:12:06.329364034 +0000 UTC m=+0.027250964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:06 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'crash'
Nov 29 02:12:07 np0005539550 ceph-mgr[74726]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:12:07 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'dashboard'
Nov 29 02:12:07 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:07.137+0000 7f4f43e42140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:12:07 np0005539550 podman[74815]: 2025-11-29 07:12:07.908015423 +0000 UTC m=+1.605902363 container create 528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:07 np0005539550 systemd[1]: Started libpod-conmon-528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3.scope.
Nov 29 02:12:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a7b8de89dde05ab0a748a4a92f09f42d14561929fa795003b8856b90207a7c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a7b8de89dde05ab0a748a4a92f09f42d14561929fa795003b8856b90207a7c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a7b8de89dde05ab0a748a4a92f09f42d14561929fa795003b8856b90207a7c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:08 np0005539550 podman[74815]: 2025-11-29 07:12:08.219453553 +0000 UTC m=+1.917340483 container init 528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:12:08 np0005539550 podman[74815]: 2025-11-29 07:12:08.226867459 +0000 UTC m=+1.924754369 container start 528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:12:08 np0005539550 podman[74815]: 2025-11-29 07:12:08.303443427 +0000 UTC m=+2.001330337 container attach 528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:12:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:12:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4119462029' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]: 
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]: {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "health": {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "status": "HEALTH_OK",
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "checks": {},
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "mutes": []
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    },
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "election_epoch": 5,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "quorum": [
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        0
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    ],
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "quorum_names": [
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "compute-0"
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    ],
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "quorum_age": 7,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "monmap": {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "epoch": 1,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "min_mon_release_name": "reef",
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_mons": 1
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    },
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "osdmap": {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "epoch": 1,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_osds": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_up_osds": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "osd_up_since": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_in_osds": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "osd_in_since": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_remapped_pgs": 0
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    },
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "pgmap": {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "pgs_by_state": [],
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_pgs": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_pools": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_objects": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "data_bytes": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "bytes_used": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "bytes_avail": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "bytes_total": 0
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    },
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "fsmap": {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "epoch": 1,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "by_rank": [],
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "up:standby": 0
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    },
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "mgrmap": {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "available": false,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "num_standbys": 0,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "modules": [
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:            "iostat",
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:            "nfs",
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:            "restful"
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        ],
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "services": {}
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    },
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "servicemap": {
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "epoch": 1,
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "modified": "2025-11-29T07:11:57.326155+0000",
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:        "services": {}
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    },
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]:    "progress_events": {}
Nov 29 02:12:08 np0005539550 dreamy_hermann[74832]: }
Nov 29 02:12:08 np0005539550 systemd[1]: libpod-528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3.scope: Deactivated successfully.
Nov 29 02:12:08 np0005539550 podman[74815]: 2025-11-29 07:12:08.682891501 +0000 UTC m=+2.380778421 container died 528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:08 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'devicehealth'
Nov 29 02:12:09 np0005539550 ceph-mgr[74726]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:12:09 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 02:12:09 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:09.206+0000 7f4f43e42140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:12:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7a7b8de89dde05ab0a748a4a92f09f42d14561929fa795003b8856b90207a7c8-merged.mount: Deactivated successfully.
Nov 29 02:12:09 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 02:12:09 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 02:12:09 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  from numpy import show_config as show_numpy_config
Nov 29 02:12:09 np0005539550 ceph-mgr[74726]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:12:09 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'influx'
Nov 29 02:12:09 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:09.825+0000 7f4f43e42140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:12:10 np0005539550 ceph-mgr[74726]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:12:10 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:10.098+0000 7f4f43e42140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:12:10 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'insights'
Nov 29 02:12:10 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'iostat'
Nov 29 02:12:10 np0005539550 podman[74815]: 2025-11-29 07:12:10.387204626 +0000 UTC m=+4.085091536 container remove 528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3 (image=quay.io/ceph/ceph:v18, name=dreamy_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:12:10 np0005539550 systemd[1]: libpod-conmon-528a428bcbfd45ec18821d89e078f41a08d91e8d5386ad9fbd92dc805b5797a3.scope: Deactivated successfully.
Nov 29 02:12:10 np0005539550 ceph-mgr[74726]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:12:10 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'k8sevents'
Nov 29 02:12:10 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:10.619+0000 7f4f43e42140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:12:12 np0005539550 podman[74871]: 2025-11-29 07:12:12.478028574 +0000 UTC m=+0.068381204 container create 6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e (image=quay.io/ceph/ceph:v18, name=busy_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:12:12 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'localpool'
Nov 29 02:12:12 np0005539550 podman[74871]: 2025-11-29 07:12:12.430009911 +0000 UTC m=+0.020362561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:12 np0005539550 systemd[1]: Started libpod-conmon-6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e.scope.
Nov 29 02:12:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6030e7e0ff7cb53d234f9933ccbeb22d5e3035bb1085ca7fbd24f83ef7305a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6030e7e0ff7cb53d234f9933ccbeb22d5e3035bb1085ca7fbd24f83ef7305a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6030e7e0ff7cb53d234f9933ccbeb22d5e3035bb1085ca7fbd24f83ef7305a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:12 np0005539550 podman[74871]: 2025-11-29 07:12:12.588404808 +0000 UTC m=+0.178757458 container init 6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e (image=quay.io/ceph/ceph:v18, name=busy_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:12:12 np0005539550 podman[74871]: 2025-11-29 07:12:12.593633349 +0000 UTC m=+0.183985979 container start 6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e (image=quay.io/ceph/ceph:v18, name=busy_borg, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:12:12 np0005539550 podman[74871]: 2025-11-29 07:12:12.615752123 +0000 UTC m=+0.206104783 container attach 6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e (image=quay.io/ceph/ceph:v18, name=busy_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:12 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 02:12:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:12:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184241893' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:12:12 np0005539550 busy_borg[74887]: 
Nov 29 02:12:12 np0005539550 busy_borg[74887]: {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "health": {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "status": "HEALTH_OK",
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "checks": {},
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "mutes": []
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    },
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "election_epoch": 5,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "quorum": [
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        0
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    ],
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "quorum_names": [
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "compute-0"
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    ],
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "quorum_age": 11,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "monmap": {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "epoch": 1,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "min_mon_release_name": "reef",
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_mons": 1
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    },
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "osdmap": {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "epoch": 1,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_osds": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_up_osds": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "osd_up_since": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_in_osds": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "osd_in_since": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_remapped_pgs": 0
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    },
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "pgmap": {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "pgs_by_state": [],
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_pgs": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_pools": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_objects": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "data_bytes": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "bytes_used": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "bytes_avail": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "bytes_total": 0
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    },
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "fsmap": {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "epoch": 1,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "by_rank": [],
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "up:standby": 0
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    },
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "mgrmap": {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "available": false,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "num_standbys": 0,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "modules": [
Nov 29 02:12:12 np0005539550 busy_borg[74887]:            "iostat",
Nov 29 02:12:12 np0005539550 busy_borg[74887]:            "nfs",
Nov 29 02:12:12 np0005539550 busy_borg[74887]:            "restful"
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        ],
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "services": {}
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    },
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "servicemap": {
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "epoch": 1,
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "modified": "2025-11-29T07:11:57.326155+0000",
Nov 29 02:12:12 np0005539550 busy_borg[74887]:        "services": {}
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    },
Nov 29 02:12:12 np0005539550 busy_borg[74887]:    "progress_events": {}
Nov 29 02:12:12 np0005539550 busy_borg[74887]: }
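[editor's note] The braces above delimit the full `status` payload at this point in the bring-up: a healthy single-mon quorum on compute-0, zero OSDs, an empty pgmap, and an mgr that is not yet available because module loading is still in progress. A small sketch of consuming that payload, with an abbreviated sample string standing in for the container's stdout:

import json

sample = """
{
    "health": {"status": "HEALTH_OK"},
    "quorum_names": ["compute-0"],
    "osdmap": {"num_osds": 0},
    "mgrmap": {"available": false}
}
"""
status = json.loads(sample)
assert status["health"]["status"] == "HEALTH_OK"
assert status["osdmap"]["num_osds"] == 0        # no OSDs deployed yet
assert status["mgrmap"]["available"] is False   # mgr still loading modules below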
Nov 29 02:12:12 np0005539550 systemd[1]: libpod-6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e.scope: Deactivated successfully.
Nov 29 02:12:12 np0005539550 podman[74871]: 2025-11-29 07:12:12.996104628 +0000 UTC m=+0.586457268 container died 6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e (image=quay.io/ceph/ceph:v18, name=busy_borg, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e6030e7e0ff7cb53d234f9933ccbeb22d5e3035bb1085ca7fbd24f83ef7305a1-merged.mount: Deactivated successfully.
Nov 29 02:12:13 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'mirroring'
Nov 29 02:12:13 np0005539550 podman[74871]: 2025-11-29 07:12:13.779182072 +0000 UTC m=+1.369534692 container remove 6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e (image=quay.io/ceph/ceph:v18, name=busy_borg, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:13 np0005539550 systemd[1]: libpod-conmon-6d0a3782cfac67e4e4788b9d68399b7a53a637ae4d76633e6730309746b8221e.scope: Deactivated successfully.
Nov 29 02:12:13 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'nfs'
Nov 29 02:12:14 np0005539550 ceph-mgr[74726]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:12:14 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'orchestrator'
Nov 29 02:12:14 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:14.582+0000 7f4f43e42140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:12:15 np0005539550 ceph-mgr[74726]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:15 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 02:12:15 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:15.317+0000 7f4f43e42140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:15 np0005539550 ceph-mgr[74726]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:12:15 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'osd_support'
Nov 29 02:12:15 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:15.634+0000 7f4f43e42140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:12:15 np0005539550 ceph-mgr[74726]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:12:15 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 02:12:15 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:15.900+0000 7f4f43e42140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:12:15 np0005539550 podman[74925]: 2025-11-29 07:12:15.823080524 +0000 UTC m=+0.023617422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:16 np0005539550 ceph-mgr[74726]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:12:16 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'progress'
Nov 29 02:12:16 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:16.208+0000 7f4f43e42140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:12:16 np0005539550 ceph-mgr[74726]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:12:16 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'prometheus'
Nov 29 02:12:16 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:16.467+0000 7f4f43e42140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:12:17 np0005539550 ceph-mgr[74726]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:12:17 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:17.608+0000 7f4f43e42140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:12:17 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'rbd_support'
Nov 29 02:12:17 np0005539550 ceph-mgr[74726]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:12:17 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'restful'
Nov 29 02:12:17 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:17.930+0000 7f4f43e42140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
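[editor's note] The recurring "Module X has missing NOTIFY_TYPES member" warnings are the mgr noting that a Python module class does not declare which cluster-map notifications its notify() hook consumes; such modules still load, they just log this complaint once each. A hedged sketch of the convention the warning refers to (attribute values here are assumptions for illustration, not copied from the ceph source):

from typing import List

class ExampleModule:                         # stands in for a MgrModule subclass
    # Assumed values for illustration; reef's mgr_module.py defines the
    # real notification-type constants (osd_map, pg_summary, ...).
    NOTIFY_TYPES: List[str] = ["osd_map", "pg_summary"]

    def notify(self, notify_type: str, notify_id: str) -> None:
        # The mgr only dispatches the kinds listed above; a module without
        # NOTIFY_TYPES triggers the warning seen throughout this log.
        print(f"handling {notify_type} notification {notify_id}")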
Nov 29 02:12:18 np0005539550 podman[74925]: 2025-11-29 07:12:18.280254467 +0000 UTC m=+2.480791375 container create 95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3 (image=quay.io/ceph/ceph:v18, name=vibrant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:18 np0005539550 systemd[1]: Started libpod-conmon-95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3.scope.
Nov 29 02:12:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436e6c48445804083437ca64d169291f33676b98cf45f7882d13e40c5a48d317/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436e6c48445804083437ca64d169291f33676b98cf45f7882d13e40c5a48d317/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436e6c48445804083437ca64d169291f33676b98cf45f7882d13e40c5a48d317/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
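[editor's note] The kernel's "supports timestamps until 2038 (0x7fffffff)" notices flag bind-mounted XFS paths whose on-disk inode timestamps are still signed 32-bit seconds. A quick check of the arithmetic behind that cutoff:

from datetime import datetime, timezone

# 0x7fffffff seconds after the Unix epoch is the signed 32-bit limit.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00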
Nov 29 02:12:18 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'rgw'
Nov 29 02:12:18 np0005539550 podman[74925]: 2025-11-29 07:12:18.774095585 +0000 UTC m=+2.974632483 container init 95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3 (image=quay.io/ceph/ceph:v18, name=vibrant_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:18 np0005539550 podman[74925]: 2025-11-29 07:12:18.779572523 +0000 UTC m=+2.980109401 container start 95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3 (image=quay.io/ceph/ceph:v18, name=vibrant_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:18 np0005539550 podman[74925]: 2025-11-29 07:12:18.991409678 +0000 UTC m=+3.191946606 container attach 95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3 (image=quay.io/ceph/ceph:v18, name=vibrant_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:12:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4032782157' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]: 
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]: {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "health": {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "status": "HEALTH_OK",
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "checks": {},
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "mutes": []
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    },
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "election_epoch": 5,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "quorum": [
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        0
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    ],
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "quorum_names": [
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "compute-0"
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    ],
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "quorum_age": 17,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "monmap": {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "epoch": 1,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "min_mon_release_name": "reef",
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_mons": 1
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    },
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "osdmap": {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "epoch": 1,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_osds": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_up_osds": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "osd_up_since": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_in_osds": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "osd_in_since": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_remapped_pgs": 0
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    },
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "pgmap": {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "pgs_by_state": [],
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_pgs": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_pools": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_objects": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "data_bytes": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "bytes_used": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "bytes_avail": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "bytes_total": 0
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    },
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "fsmap": {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "epoch": 1,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "by_rank": [],
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "up:standby": 0
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    },
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "mgrmap": {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "available": false,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "num_standbys": 0,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "modules": [
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:            "iostat",
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:            "nfs",
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:            "restful"
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        ],
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "services": {}
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    },
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "servicemap": {
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "epoch": 1,
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "modified": "2025-11-29T07:11:57.326155+0000",
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:        "services": {}
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    },
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]:    "progress_events": {}
Nov 29 02:12:19 np0005539550 vibrant_edison[74941]: }
Nov 29 02:12:19 np0005539550 systemd[1]: libpod-95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3.scope: Deactivated successfully.
Nov 29 02:12:19 np0005539550 podman[74925]: 2025-11-29 07:12:19.189633593 +0000 UTC m=+3.390170491 container died 95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3 (image=quay.io/ceph/ceph:v18, name=vibrant_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:12:19 np0005539550 ceph-mgr[74726]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:12:19 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'rook'
Nov 29 02:12:19 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:19.457+0000 7f4f43e42140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:12:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-436e6c48445804083437ca64d169291f33676b98cf45f7882d13e40c5a48d317-merged.mount: Deactivated successfully.
Nov 29 02:12:20 np0005539550 podman[74925]: 2025-11-29 07:12:20.14419075 +0000 UTC m=+4.344727628 container remove 95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3 (image=quay.io/ceph/ceph:v18, name=vibrant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:20 np0005539550 systemd[1]: libpod-conmon-95d10795b9eec6bf887db456417328df7aebafaf3732ea7b586a4b19af9d28c3.scope: Deactivated successfully.
Nov 29 02:12:21 np0005539550 ceph-mgr[74726]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:12:21 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'selftest'
Nov 29 02:12:21 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:21.900+0000 7f4f43e42140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:12:22 np0005539550 ceph-mgr[74726]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:12:22 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:22.243+0000 7f4f43e42140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:12:22 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'snap_schedule'
Nov 29 02:12:22 np0005539550 podman[74979]: 2025-11-29 07:12:22.24669003 +0000 UTC m=+0.081237756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:22 np0005539550 podman[74979]: 2025-11-29 07:12:22.330966091 +0000 UTC m=+0.165513787 container create 148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0 (image=quay.io/ceph/ceph:v18, name=beautiful_brahmagupta, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:22 np0005539550 systemd[1]: Started libpod-conmon-148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0.scope.
Nov 29 02:12:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9d87e848b722cdc4baf52a6f4f7e9bc14f424936856f619d278ddceaa6bcee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9d87e848b722cdc4baf52a6f4f7e9bc14f424936856f619d278ddceaa6bcee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c9d87e848b722cdc4baf52a6f4f7e9bc14f424936856f619d278ddceaa6bcee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:22 np0005539550 podman[74979]: 2025-11-29 07:12:22.455994252 +0000 UTC m=+0.290541978 container init 148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0 (image=quay.io/ceph/ceph:v18, name=beautiful_brahmagupta, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:22 np0005539550 podman[74979]: 2025-11-29 07:12:22.462010873 +0000 UTC m=+0.296558569 container start 148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0 (image=quay.io/ceph/ceph:v18, name=beautiful_brahmagupta, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:12:22 np0005539550 ceph-mgr[74726]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:12:22 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'stats'
Nov 29 02:12:22 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:22.576+0000 7f4f43e42140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:12:22 np0005539550 podman[74979]: 2025-11-29 07:12:22.631095898 +0000 UTC m=+0.465643614 container attach 148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0 (image=quay.io/ceph/ceph:v18, name=beautiful_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:12:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:12:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963769505' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]: 
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]: {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "health": {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "status": "HEALTH_OK",
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "checks": {},
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "mutes": []
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    },
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "election_epoch": 5,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "quorum": [
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        0
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    ],
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "quorum_names": [
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "compute-0"
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    ],
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "quorum_age": 21,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "monmap": {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "epoch": 1,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "min_mon_release_name": "reef",
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_mons": 1
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    },
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "osdmap": {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "epoch": 1,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_osds": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_up_osds": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "osd_up_since": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_in_osds": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "osd_in_since": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_remapped_pgs": 0
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    },
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "pgmap": {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "pgs_by_state": [],
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_pgs": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_pools": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_objects": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "data_bytes": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "bytes_used": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "bytes_avail": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "bytes_total": 0
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    },
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "fsmap": {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "epoch": 1,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "by_rank": [],
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "up:standby": 0
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    },
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "mgrmap": {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "available": false,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "num_standbys": 0,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "modules": [
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:            "iostat",
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:            "nfs",
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:            "restful"
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        ],
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "services": {}
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    },
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "servicemap": {
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "epoch": 1,
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "modified": "2025-11-29T07:11:57.326155+0000",
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:        "services": {}
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    },
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]:    "progress_events": {}
Nov 29 02:12:22 np0005539550 beautiful_brahmagupta[74995]: }
Nov 29 02:12:22 np0005539550 systemd[1]: libpod-148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0.scope: Deactivated successfully.
Nov 29 02:12:22 np0005539550 podman[74979]: 2025-11-29 07:12:22.93620082 +0000 UTC m=+0.770748546 container died 148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0 (image=quay.io/ceph/ceph:v18, name=beautiful_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:22 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'status'
Nov 29 02:12:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1c9d87e848b722cdc4baf52a6f4f7e9bc14f424936856f619d278ddceaa6bcee-merged.mount: Deactivated successfully.
Nov 29 02:12:23 np0005539550 ceph-mgr[74726]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:12:23 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'telegraf'
Nov 29 02:12:23 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:23.280+0000 7f4f43e42140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:12:23 np0005539550 podman[74979]: 2025-11-29 07:12:23.518903653 +0000 UTC m=+1.353451349 container remove 148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0 (image=quay.io/ceph/ceph:v18, name=beautiful_brahmagupta, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:12:23 np0005539550 systemd[1]: libpod-conmon-148cd9b6cd85a58b679336210260b302f572e39ec930478ff6acc6c344e568e0.scope: Deactivated successfully.
Nov 29 02:12:23 np0005539550 ceph-mgr[74726]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:12:23 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:23.564+0000 7f4f43e42140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:12:23 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'telemetry'
Nov 29 02:12:24 np0005539550 ceph-mgr[74726]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:12:24 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:24.258+0000 7f4f43e42140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:12:24 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 02:12:25 np0005539550 ceph-mgr[74726]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:25 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:25.022+0000 7f4f43e42140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:25 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'volumes'
Nov 29 02:12:25 np0005539550 podman[75032]: 2025-11-29 07:12:25.563286837 +0000 UTC m=+0.022942295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:25 np0005539550 podman[75032]: 2025-11-29 07:12:25.69795742 +0000 UTC m=+0.157612858 container create 32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac (image=quay.io/ceph/ceph:v18, name=practical_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:12:25 np0005539550 systemd[1]: Started libpod-conmon-32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac.scope.
Nov 29 02:12:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1bc5b4eca0d9c606fb698811a7643587faf5a2f6a5aa90ace7070d6591af4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1bc5b4eca0d9c606fb698811a7643587faf5a2f6a5aa90ace7070d6591af4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f1bc5b4eca0d9c606fb698811a7643587faf5a2f6a5aa90ace7070d6591af4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:25 np0005539550 podman[75032]: 2025-11-29 07:12:25.829032833 +0000 UTC m=+0.288688291 container init 32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac (image=quay.io/ceph/ceph:v18, name=practical_easley, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:12:25 np0005539550 podman[75032]: 2025-11-29 07:12:25.835545597 +0000 UTC m=+0.295201035 container start 32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac (image=quay.io/ceph/ceph:v18, name=practical_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:12:25 np0005539550 podman[75032]: 2025-11-29 07:12:25.856924812 +0000 UTC m=+0.316580280 container attach 32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac (image=quay.io/ceph/ceph:v18, name=practical_easley, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:25 np0005539550 ceph-mgr[74726]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:12:25 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'zabbix'
Nov 29 02:12:25 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:25.887+0000 7f4f43e42140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:12:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:26.181+0000 7f4f43e42140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: ms_deliver_dispatch: unhandled message 0x560366cc6f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.pdhsqi
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr handle_mgr_map Activating!
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.pdhsqi(active, starting, since 0.0904202s)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr handle_mgr_map I am now activating
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.pdhsqi", "id": "compute-0.pdhsqi"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pdhsqi", "id": "compute-0.pdhsqi"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: balancer
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Manager daemon compute-0.pdhsqi is now available
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: crash
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [balancer INFO root] Starting
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: devicehealth
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:12:26
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [balancer INFO root] No pools available
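[editor's note] The balancer lines above show the module starting, opening plan auto_2025-11-29_07:12:26 in upmap mode with a 0.05 max-misplaced ceiling, and then finding nothing to do because the cluster has no pools yet. An illustrative reduction of that early-out (not the module's actual code):

MAX_MISPLACED = 0.05    # "max misplaced 0.050000" from the log

def do_upmap(pools: list) -> str:
    # Illustrative control flow only: without pools there is nothing
    # for the upmap optimizer to rebalance.
    if not pools:
        return "No pools available"   # the outcome logged above
    return f"computed upmap plan for {len(pools)} pool(s)"

print(do_upmap([]))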
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Starting
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: iostat
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: nfs
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: orchestrator
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: pg_autoscaler
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: progress
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [progress INFO root] Loading...
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [progress INFO root] No stored events to load
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [progress INFO root] Loaded [] historic events
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] recovery thread starting
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] starting setup
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: rbd_support
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: restful
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/mirror_snapshot_schedule"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: status
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [restful WARNING root] server not running: no certificate configured
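[editor's note] The restful module only binds its server (server_port 8003 above) once a TLS certificate is configured; until then it logs this warning and stays down. Per the Ceph documentation, a self-signed certificate can be generated with the command below, wrapped in subprocess here purely for illustration (assumes admin credentials are available on the host):

import subprocess

subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)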
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] PerfHandler: starting
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: telemetry
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TaskHandler: starting
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/trash_purge_schedule"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/trash_purge_schedule"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] setup complete
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: Activating manager daemon compute-0.pdhsqi
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: Manager daemon compute-0.pdhsqi is now available
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:12:26 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: volumes
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' 
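[editor's note] The config-key set commands around here are the telemetry module persisting its report_id, salt, and collection state in the monitor's config-key store (the values themselves are elided by the mon's logging). A hedged sketch of reading one of those keys back with the documented config-key interface:

import subprocess

out = subprocess.run(
    ["ceph", "config-key", "get", "mgr/telemetry/report_id"],
    check=True, capture_output=True, text=True,
).stdout
print(out)  # an opaque identifier generated by the telemetry module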
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390234396' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:12:26 np0005539550 practical_easley[75048]: 
Nov 29 02:12:26 np0005539550 practical_easley[75048]: {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "health": {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "status": "HEALTH_OK",
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "checks": {},
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "mutes": []
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    },
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "election_epoch": 5,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "quorum": [
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        0
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    ],
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "quorum_names": [
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "compute-0"
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    ],
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "quorum_age": 25,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "monmap": {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "epoch": 1,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "min_mon_release_name": "reef",
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_mons": 1
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    },
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "osdmap": {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "epoch": 1,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_osds": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_up_osds": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "osd_up_since": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_in_osds": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "osd_in_since": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_remapped_pgs": 0
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    },
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "pgmap": {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "pgs_by_state": [],
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_pgs": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_pools": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_objects": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "data_bytes": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "bytes_used": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "bytes_avail": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "bytes_total": 0
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    },
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "fsmap": {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "epoch": 1,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "by_rank": [],
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "up:standby": 0
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    },
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "mgrmap": {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "available": false,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "num_standbys": 0,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "modules": [
Nov 29 02:12:26 np0005539550 practical_easley[75048]:            "iostat",
Nov 29 02:12:26 np0005539550 practical_easley[75048]:            "nfs",
Nov 29 02:12:26 np0005539550 practical_easley[75048]:            "restful"
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        ],
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "services": {}
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    },
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "servicemap": {
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "epoch": 1,
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "modified": "2025-11-29T07:11:57.326155+0000",
Nov 29 02:12:26 np0005539550 practical_easley[75048]:        "services": {}
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    },
Nov 29 02:12:26 np0005539550 practical_easley[75048]:    "progress_events": {}
Nov 29 02:12:26 np0005539550 practical_easley[75048]: }
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 29 02:12:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:26 np0005539550 systemd[1]: libpod-32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac.scope: Deactivated successfully.
Nov 29 02:12:26 np0005539550 podman[75032]: 2025-11-29 07:12:26.367301395 +0000 UTC m=+0.826956833 container died 32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac (image=quay.io/ceph/ceph:v18, name=practical_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:12:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2f1bc5b4eca0d9c606fb698811a7643587faf5a2f6a5aa90ace7070d6591af4b-merged.mount: Deactivated successfully.
Nov 29 02:12:26 np0005539550 podman[75032]: 2025-11-29 07:12:26.420284632 +0000 UTC m=+0.879940070 container remove 32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac (image=quay.io/ceph/ceph:v18, name=practical_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:26 np0005539550 systemd[1]: libpod-conmon-32434945018ac3dc486391baed0cb48296b287f8048cf58c5742977a30ee3cac.scope: Deactivated successfully.
Nov 29 02:12:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.pdhsqi(active, since 1.23008s)
Nov 29 02:12:27 np0005539550 ceph-mon[74435]: from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/trash_purge_schedule"}]: dispatch
Nov 29 02:12:27 np0005539550 ceph-mon[74435]: from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:27 np0005539550 ceph-mon[74435]: from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:27 np0005539550 ceph-mon[74435]: from='mgr.14102 192.168.122.100:0/3645319745' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:28 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:12:28 np0005539550 podman[75165]: 2025-11-29 07:12:28.466615204 +0000 UTC m=+0.021041618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:29 np0005539550 podman[75165]: 2025-11-29 07:12:29.113747843 +0000 UTC m=+0.668174237 container create ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a (image=quay.io/ceph/ceph:v18, name=kind_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:12:29 np0005539550 systemd[1]: Started libpod-conmon-ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a.scope.
Nov 29 02:12:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca57e39d4a64adf6a2104f27f82173e552c89e54b9e62d5a130b2e7d0a8581f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca57e39d4a64adf6a2104f27f82173e552c89e54b9e62d5a130b2e7d0a8581f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dca57e39d4a64adf6a2104f27f82173e552c89e54b9e62d5a130b2e7d0a8581f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:30 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.pdhsqi(active, since 3s)
Nov 29 02:12:30 np0005539550 podman[75165]: 2025-11-29 07:12:30.104752543 +0000 UTC m=+1.659178967 container init ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a (image=quay.io/ceph/ceph:v18, name=kind_wright, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:12:30 np0005539550 podman[75165]: 2025-11-29 07:12:30.109663716 +0000 UTC m=+1.664090110 container start ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a (image=quay.io/ceph/ceph:v18, name=kind_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:30 np0005539550 podman[75165]: 2025-11-29 07:12:30.148281613 +0000 UTC m=+1.702708007 container attach ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a (image=quay.io/ceph/ceph:v18, name=kind_wright, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:30 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:12:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 02:12:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170468912' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 02:12:30 np0005539550 kind_wright[75182]: 
Nov 29 02:12:30 np0005539550 kind_wright[75182]: {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "health": {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "status": "HEALTH_OK",
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "checks": {},
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "mutes": []
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    },
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "election_epoch": 5,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "quorum": [
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        0
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    ],
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "quorum_names": [
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "compute-0"
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    ],
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "quorum_age": 29,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "monmap": {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "epoch": 1,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "min_mon_release_name": "reef",
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_mons": 1
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    },
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "osdmap": {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "epoch": 1,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_osds": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_up_osds": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "osd_up_since": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_in_osds": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "osd_in_since": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_remapped_pgs": 0
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    },
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "pgmap": {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "pgs_by_state": [],
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_pgs": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_pools": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_objects": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "data_bytes": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "bytes_used": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "bytes_avail": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "bytes_total": 0
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    },
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "fsmap": {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "epoch": 1,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "by_rank": [],
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "up:standby": 0
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    },
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "mgrmap": {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "available": true,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "num_standbys": 0,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "modules": [
Nov 29 02:12:30 np0005539550 kind_wright[75182]:            "iostat",
Nov 29 02:12:30 np0005539550 kind_wright[75182]:            "nfs",
Nov 29 02:12:30 np0005539550 kind_wright[75182]:            "restful"
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        ],
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "services": {}
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    },
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "servicemap": {
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "epoch": 1,
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "modified": "2025-11-29T07:11:57.326155+0000",
Nov 29 02:12:30 np0005539550 kind_wright[75182]:        "services": {}
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    },
Nov 29 02:12:30 np0005539550 kind_wright[75182]:    "progress_events": {}
Nov 29 02:12:30 np0005539550 kind_wright[75182]: }
Nov 29 02:12:30 np0005539550 systemd[1]: libpod-ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a.scope: Deactivated successfully.
Nov 29 02:12:30 np0005539550 conmon[75182]: conmon ec4cddb4c71a5e505b0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a.scope/container/memory.events
Nov 29 02:12:30 np0005539550 podman[75165]: 2025-11-29 07:12:30.750047634 +0000 UTC m=+2.304474028 container died ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a (image=quay.io/ceph/ceph:v18, name=kind_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:12:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dca57e39d4a64adf6a2104f27f82173e552c89e54b9e62d5a130b2e7d0a8581f-merged.mount: Deactivated successfully.
Nov 29 02:12:31 np0005539550 podman[75165]: 2025-11-29 07:12:31.498305896 +0000 UTC m=+3.052732290 container remove ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a (image=quay.io/ceph/ceph:v18, name=kind_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:12:31 np0005539550 systemd[1]: libpod-conmon-ec4cddb4c71a5e505b0e8d326da012d592615791b344f0e700e5f840300f031a.scope: Deactivated successfully.
Nov 29 02:12:31 np0005539550 podman[75221]: 2025-11-29 07:12:31.60070364 +0000 UTC m=+0.081303157 container create f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871 (image=quay.io/ceph/ceph:v18, name=vigilant_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:12:31 np0005539550 systemd[1]: Started libpod-conmon-f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871.scope.
Nov 29 02:12:31 np0005539550 podman[75221]: 2025-11-29 07:12:31.545703562 +0000 UTC m=+0.026303109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:31 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8683ca7389fa799176bee529b59ea3977c2a0b22791363308116bc26250b0a1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8683ca7389fa799176bee529b59ea3977c2a0b22791363308116bc26250b0a1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8683ca7389fa799176bee529b59ea3977c2a0b22791363308116bc26250b0a1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8683ca7389fa799176bee529b59ea3977c2a0b22791363308116bc26250b0a1f/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:31 np0005539550 podman[75221]: 2025-11-29 07:12:31.679885293 +0000 UTC m=+0.160484840 container init f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871 (image=quay.io/ceph/ceph:v18, name=vigilant_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:12:31 np0005539550 podman[75221]: 2025-11-29 07:12:31.685463103 +0000 UTC m=+0.166062630 container start f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871 (image=quay.io/ceph/ceph:v18, name=vigilant_williams, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:31 np0005539550 podman[75221]: 2025-11-29 07:12:31.693877624 +0000 UTC m=+0.174477171 container attach f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871 (image=quay.io/ceph/ceph:v18, name=vigilant_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:12:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 02:12:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3968923834' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:12:32 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:12:32 np0005539550 systemd[1]: libpod-f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871.scope: Deactivated successfully.
Nov 29 02:12:32 np0005539550 podman[75221]: 2025-11-29 07:12:32.305160534 +0000 UTC m=+0.785760061 container died f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871 (image=quay.io/ceph/ceph:v18, name=vigilant_williams, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:12:32 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3968923834' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:12:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8683ca7389fa799176bee529b59ea3977c2a0b22791363308116bc26250b0a1f-merged.mount: Deactivated successfully.
Nov 29 02:12:32 np0005539550 podman[75221]: 2025-11-29 07:12:32.35692266 +0000 UTC m=+0.837522197 container remove f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871 (image=quay.io/ceph/ceph:v18, name=vigilant_williams, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:12:32 np0005539550 systemd[1]: libpod-conmon-f009e6337edb3a30bede73d47d2a9b6b2d9ffe08a2f3119698cedf2f1da51871.scope: Deactivated successfully.
Nov 29 02:12:32 np0005539550 podman[75278]: 2025-11-29 07:12:32.420142604 +0000 UTC m=+0.041159692 container create 4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d (image=quay.io/ceph/ceph:v18, name=sharp_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:12:32 np0005539550 systemd[1]: Started libpod-conmon-4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d.scope.
Nov 29 02:12:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a9e7d797f25355dced79b3b7749b83817e8d132017081f198f5d452f0acb81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a9e7d797f25355dced79b3b7749b83817e8d132017081f198f5d452f0acb81/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a9e7d797f25355dced79b3b7749b83817e8d132017081f198f5d452f0acb81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:32 np0005539550 podman[75278]: 2025-11-29 07:12:32.401902237 +0000 UTC m=+0.022919345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:32 np0005539550 podman[75278]: 2025-11-29 07:12:32.560763406 +0000 UTC m=+0.181780514 container init 4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d (image=quay.io/ceph/ceph:v18, name=sharp_mccarthy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:32 np0005539550 podman[75278]: 2025-11-29 07:12:32.568737936 +0000 UTC m=+0.189755024 container start 4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d (image=quay.io/ceph/ceph:v18, name=sharp_mccarthy, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:32 np0005539550 podman[75278]: 2025-11-29 07:12:32.621824295 +0000 UTC m=+0.242841403 container attach 4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d (image=quay.io/ceph/ceph:v18, name=sharp_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:12:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 29 02:12:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3696484009' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 02:12:33 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3696484009' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 29 02:12:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3696484009' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 02:12:33 np0005539550 ceph-mgr[74726]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 29 02:12:33 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.pdhsqi(active, since 7s)
Nov 29 02:12:33 np0005539550 systemd[1]: libpod-4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d.scope: Deactivated successfully.
Nov 29 02:12:33 np0005539550 podman[75278]: 2025-11-29 07:12:33.420327755 +0000 UTC m=+1.041344863 container died 4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d (image=quay.io/ceph/ceph:v18, name=sharp_mccarthy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:12:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-11a9e7d797f25355dced79b3b7749b83817e8d132017081f198f5d452f0acb81-merged.mount: Deactivated successfully.
Nov 29 02:12:33 np0005539550 podman[75278]: 2025-11-29 07:12:33.481390794 +0000 UTC m=+1.102407872 container remove 4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d (image=quay.io/ceph/ceph:v18, name=sharp_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:12:33 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: ignoring --setuser ceph since I am not root
Nov 29 02:12:33 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: ignoring --setgroup ceph since I am not root
Nov 29 02:12:33 np0005539550 systemd[1]: libpod-conmon-4c1db1b3a0c7d48929b9e07aa75e547bacd9dab66ea9df2b367aa25ad1ab137d.scope: Deactivated successfully.
Nov 29 02:12:33 np0005539550 ceph-mgr[74726]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 29 02:12:33 np0005539550 ceph-mgr[74726]: pidfile_write: ignore empty --pid-file
Nov 29 02:12:33 np0005539550 podman[75333]: 2025-11-29 07:12:33.547182982 +0000 UTC m=+0.044271910 container create 07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3 (image=quay.io/ceph/ceph:v18, name=magical_wright, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:12:33 np0005539550 systemd[1]: Started libpod-conmon-07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3.scope.
Nov 29 02:12:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7be8c11139c419bd644a37c611fb748b146f43bc1771478d6f6d8483c2b45e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7be8c11139c419bd644a37c611fb748b146f43bc1771478d6f6d8483c2b45e8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7be8c11139c419bd644a37c611fb748b146f43bc1771478d6f6d8483c2b45e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:33 np0005539550 podman[75333]: 2025-11-29 07:12:33.525278483 +0000 UTC m=+0.022367461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:33 np0005539550 podman[75333]: 2025-11-29 07:12:33.626275863 +0000 UTC m=+0.123364831 container init 07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3 (image=quay.io/ceph/ceph:v18, name=magical_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:12:33 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'alerts'
Nov 29 02:12:33 np0005539550 podman[75333]: 2025-11-29 07:12:33.632877118 +0000 UTC m=+0.129966056 container start 07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:33 np0005539550 podman[75333]: 2025-11-29 07:12:33.636345965 +0000 UTC m=+0.133434903 container attach 07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3 (image=quay.io/ceph/ceph:v18, name=magical_wright, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:12:34 np0005539550 ceph-mgr[74726]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:12:34 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'balancer'
Nov 29 02:12:34 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:34.004+0000 7f3242d0d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 29 02:12:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 02:12:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553708846' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 02:12:34 np0005539550 magical_wright[75374]: {
Nov 29 02:12:34 np0005539550 magical_wright[75374]:    "epoch": 5,
Nov 29 02:12:34 np0005539550 magical_wright[75374]:    "available": true,
Nov 29 02:12:34 np0005539550 magical_wright[75374]:    "active_name": "compute-0.pdhsqi",
Nov 29 02:12:34 np0005539550 magical_wright[75374]:    "num_standby": 0
Nov 29 02:12:34 np0005539550 magical_wright[75374]: }
Nov 29 02:12:34 np0005539550 ceph-mgr[74726]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:12:34 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'cephadm'
Nov 29 02:12:34 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:34.313+0000 7f3242d0d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 29 02:12:34 np0005539550 systemd[1]: libpod-07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3.scope: Deactivated successfully.
Nov 29 02:12:34 np0005539550 podman[75333]: 2025-11-29 07:12:34.330746816 +0000 UTC m=+0.827835754 container died 07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3 (image=quay.io/ceph/ceph:v18, name=magical_wright, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:12:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f7be8c11139c419bd644a37c611fb748b146f43bc1771478d6f6d8483c2b45e8-merged.mount: Deactivated successfully.
Nov 29 02:12:34 np0005539550 podman[75333]: 2025-11-29 07:12:34.397394275 +0000 UTC m=+0.894483213 container remove 07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3 (image=quay.io/ceph/ceph:v18, name=magical_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:12:34 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3696484009' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 29 02:12:34 np0005539550 systemd[1]: libpod-conmon-07560620af97d8d3e89672abdff0966e7993fd96f21edb446225eb6a16ce11f3.scope: Deactivated successfully.
Nov 29 02:12:34 np0005539550 podman[75413]: 2025-11-29 07:12:34.468501386 +0000 UTC m=+0.049292065 container create 59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5 (image=quay.io/ceph/ceph:v18, name=gallant_heisenberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:12:34 np0005539550 systemd[1]: Started libpod-conmon-59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5.scope.
Nov 29 02:12:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd6ad54bbccc2058a1c495003b59022fe1410271f2abe61a816eeaa49c77e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd6ad54bbccc2058a1c495003b59022fe1410271f2abe61a816eeaa49c77e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd6ad54bbccc2058a1c495003b59022fe1410271f2abe61a816eeaa49c77e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:34 np0005539550 podman[75413]: 2025-11-29 07:12:34.445297295 +0000 UTC m=+0.026087994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:34 np0005539550 podman[75413]: 2025-11-29 07:12:34.565058035 +0000 UTC m=+0.145848734 container init 59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5 (image=quay.io/ceph/ceph:v18, name=gallant_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:12:34 np0005539550 podman[75413]: 2025-11-29 07:12:34.570823779 +0000 UTC m=+0.151614458 container start 59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5 (image=quay.io/ceph/ceph:v18, name=gallant_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:12:34 np0005539550 podman[75413]: 2025-11-29 07:12:34.575408294 +0000 UTC m=+0.156198983 container attach 59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5 (image=quay.io/ceph/ceph:v18, name=gallant_heisenberg, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:12:36 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'crash'
Nov 29 02:12:37 np0005539550 ceph-mgr[74726]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:12:37 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'dashboard'
Nov 29 02:12:37 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:37.029+0000 7f3242d0d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 29 02:12:38 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'devicehealth'
Nov 29 02:12:38 np0005539550 ceph-mgr[74726]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:12:38 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'diskprediction_local'
Nov 29 02:12:38 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:38.882+0000 7f3242d0d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 29 02:12:39 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 29 02:12:39 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 29 02:12:39 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  from numpy import show_config as show_numpy_config
Nov 29 02:12:39 np0005539550 ceph-mgr[74726]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:12:39 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:39.547+0000 7f3242d0d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 29 02:12:39 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'influx'
Nov 29 02:12:39 np0005539550 ceph-mgr[74726]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:12:39 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'insights'
Nov 29 02:12:39 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:39.828+0000 7f3242d0d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 29 02:12:40 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'iostat'
Nov 29 02:12:40 np0005539550 ceph-mgr[74726]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:12:40 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'k8sevents'
Nov 29 02:12:40 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:40.342+0000 7f3242d0d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 29 02:12:42 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'localpool'
Nov 29 02:12:42 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'mds_autoscaler'
Nov 29 02:12:43 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'mirroring'
Nov 29 02:12:43 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'nfs'
Nov 29 02:12:44 np0005539550 ceph-mgr[74726]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:12:44 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'orchestrator'
Nov 29 02:12:44 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:44.578+0000 7f3242d0d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 29 02:12:45 np0005539550 ceph-mgr[74726]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:45 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'osd_perf_query'
Nov 29 02:12:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:45.369+0000 7f3242d0d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:45 np0005539550 ceph-mgr[74726]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:12:45 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'osd_support'
Nov 29 02:12:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:45.731+0000 7f3242d0d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 29 02:12:46 np0005539550 ceph-mgr[74726]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:12:46 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:46.028+0000 7f3242d0d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 29 02:12:46 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'pg_autoscaler'
Nov 29 02:12:46 np0005539550 ceph-mgr[74726]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:12:46 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'progress'
Nov 29 02:12:46 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:46.385+0000 7f3242d0d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 29 02:12:46 np0005539550 ceph-mgr[74726]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:12:46 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'prometheus'
Nov 29 02:12:46 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:46.696+0000 7f3242d0d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 29 02:12:47 np0005539550 ceph-mgr[74726]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:12:47 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:47.984+0000 7f3242d0d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 29 02:12:47 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'rbd_support'
Nov 29 02:12:48 np0005539550 ceph-mgr[74726]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:12:48 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'restful'
Nov 29 02:12:48 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:48.341+0000 7f3242d0d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 29 02:12:49 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'rgw'
Nov 29 02:12:49 np0005539550 ceph-mgr[74726]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:12:49 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'rook'
Nov 29 02:12:49 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:49.963+0000 7f3242d0d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 29 02:12:52 np0005539550 ceph-mgr[74726]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:12:52 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'selftest'
Nov 29 02:12:52 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:52.413+0000 7f3242d0d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 29 02:12:52 np0005539550 ceph-mgr[74726]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:12:52 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'snap_schedule'
Nov 29 02:12:52 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:52.690+0000 7f3242d0d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 29 02:12:52 np0005539550 ceph-mgr[74726]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:12:52 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'stats'
Nov 29 02:12:52 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:52.979+0000 7f3242d0d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 29 02:12:53 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'status'
Nov 29 02:12:53 np0005539550 ceph-mgr[74726]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:12:53 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'telegraf'
Nov 29 02:12:53 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:53.678+0000 7f3242d0d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 29 02:12:54 np0005539550 ceph-mgr[74726]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:12:54 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'telemetry'
Nov 29 02:12:54 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:54.014+0000 7f3242d0d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 29 02:12:54 np0005539550 ceph-mgr[74726]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:12:54 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:54.749+0000 7f3242d0d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 29 02:12:54 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'test_orchestrator'
Nov 29 02:12:55 np0005539550 ceph-mgr[74726]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:55 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'volumes'
Nov 29 02:12:55 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:55.505+0000 7f3242d0d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 29 02:12:56 np0005539550 ceph-mgr[74726]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:12:56 np0005539550 ceph-mgr[74726]: mgr[py] Loading python module 'zabbix'
Nov 29 02:12:56 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:56.490+0000 7f3242d0d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:12:57 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:12:57.600+0000 7f3242d0d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: ms_deliver_dispatch: unhandled message 0x555bbc76c420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Active manager daemon compute-0.pdhsqi restarted
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.pdhsqi
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr handle_mgr_map Activating!
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr handle_mgr_map I am now activating
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.pdhsqi(active, starting, since 0.0253516s)
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.pdhsqi", "id": "compute-0.pdhsqi"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr metadata", "who": "compute-0.pdhsqi", "id": "compute-0.pdhsqi"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e1 all = 1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: balancer
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Manager daemon compute-0.pdhsqi is now available
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Starting
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:12:57
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] No pools available
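
The balancer module has just started in upmap mode with max misplaced 0.05 and, with no pools yet, its first optimize pass is a no-op. Its state can be queried with "ceph balancer status"; the JSON flag and key names in this sketch are assumptions based on the module's usual output:

    #!/usr/bin/env python3
    # Check the balancer state logged above. Assumes admin CLI access
    # and that "balancer status" honors --format json.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"])
    status = json.loads(out)
    print("mode:", status.get("mode"))      # "upmap" per the log line above
    print("active:", status.get("active"))  # automatic optimization on/off
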
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: Active manager daemon compute-0.pdhsqi restarted
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: Activating manager daemon compute-0.pdhsqi
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: Manager daemon compute-0.pdhsqi is now available
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: cephadm
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: crash
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: devicehealth
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: iostat
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Starting
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: nfs
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: orchestrator
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: pg_autoscaler
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: progress
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [progress INFO root] Loading...
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [progress INFO root] No stored events to load
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [progress INFO root] Loaded [] historic events
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [progress INFO root] Loaded OSDMap, ready.
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] recovery thread starting
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] starting setup
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: rbd_support
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: restful
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/mirror_snapshot_schedule"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [restful INFO root] server_addr: :: server_port: 8003
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [restful WARNING root] server not running: no certificate configured
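
The restful module stays down until a certificate exists for the server_addr/server_port pair it just logged. A minimal sketch of generating one, assuming admin CLI access and that the documented "restful create-self-signed-cert" command is acceptable here:

    #!/usr/bin/env python3
    # Give the restful module a self-signed certificate so it can
    # bind port 8003 instead of logging the warning above.
    import subprocess

    subprocess.check_call(["ceph", "restful", "create-self-signed-cert"])
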
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: status
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: telemetry
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] PerfHandler: starting
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TaskHandler: starting
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/trash_purge_schedule"} v 0) v1
Nov 29 02:12:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/trash_purge_schedule"}]: dispatch
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] setup complete
Nov 29 02:12:57 np0005539550 ceph-mgr[74726]: mgr load Constructed class from module: volumes
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.pdhsqi(active, since 1.03317s)
Nov 29 02:12:58 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 29 02:12:58 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 29 02:12:58 np0005539550 gallant_heisenberg[75429]: {
Nov 29 02:12:58 np0005539550 gallant_heisenberg[75429]:    "mgrmap_epoch": 7,
Nov 29 02:12:58 np0005539550 gallant_heisenberg[75429]:    "initialized": true
Nov 29 02:12:58 np0005539550 gallant_heisenberg[75429]: }
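
The short-lived bootstrap container (gallant_heisenberg) issues "mgr_status" against the mgr and prints the reply above; cephadm's bootstrap uses this to wait until the active mgr is initialized at the expected mgrmap epoch. The command comes from the audit line above; the polling loop itself is an assumption of this sketch:

    #!/usr/bin/env python3
    # Wait for the active mgr to report initialized=true, mirroring
    # the mgr_status exchange logged above. Assumes admin CLI access.
    import json
    import subprocess
    import time

    def mgr_status():
        out = subprocess.check_output(
            ["ceph", "tell", "mgr", "mgr_status", "--format", "json"])
        return json.loads(out)

    while not mgr_status().get("initialized", False):
        time.sleep(1)
    print("mgr initialized at mgrmap epoch", mgr_status()["mgrmap_epoch"])
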
Nov 29 02:12:58 np0005539550 systemd[1]: libpod-59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5.scope: Deactivated successfully.
Nov 29 02:12:58 np0005539550 podman[75413]: 2025-11-29 07:12:58.683554845 +0000 UTC m=+24.264345534 container died 59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5 (image=quay.io/ceph/ceph:v18, name=gallant_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: Found migration_current of "None". Setting to last migration.
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/mirror_snapshot_schedule"}]: dispatch
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.pdhsqi/trash_purge_schedule"}]: dispatch
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:12:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-07cd6ad54bbccc2058a1c495003b59022fe1410271f2abe61a816eeaa49c77e1-merged.mount: Deactivated successfully.
Nov 29 02:12:58 np0005539550 podman[75413]: 2025-11-29 07:12:58.871296885 +0000 UTC m=+24.452087564 container remove 59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5 (image=quay.io/ceph/ceph:v18, name=gallant_heisenberg, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:58 np0005539550 systemd[1]: libpod-conmon-59d2c7d859e51280e0b4076b444e38795120370b4004633f90afda5e684f9db5.scope: Deactivated successfully.
Nov 29 02:12:59 np0005539550 podman[75587]: 2025-11-29 07:12:58.922635463 +0000 UTC m=+0.027561149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:12:59 np0005539550 podman[75587]: 2025-11-29 07:12:59.2403878 +0000 UTC m=+0.345313436 container create cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d (image=quay.io/ceph/ceph:v18, name=strange_cori, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:59] ENGINE Bus STARTING
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:59] ENGINE Bus STARTING
Nov 29 02:12:59 np0005539550 systemd[1]: Started libpod-conmon-cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d.scope.
Nov 29 02:12:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:12:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47d21543c0b97b874cc1870e1c69d65636cfb1c4af48d9ca50bd67902bc59d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47d21543c0b97b874cc1870e1c69d65636cfb1c4af48d9ca50bd67902bc59d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b47d21543c0b97b874cc1870e1c69d65636cfb1c4af48d9ca50bd67902bc59d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:12:59 np0005539550 podman[75587]: 2025-11-29 07:12:59.45812084 +0000 UTC m=+0.563046476 container init cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d (image=quay.io/ceph/ceph:v18, name=strange_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:12:59 np0005539550 podman[75587]: 2025-11-29 07:12:59.464507334 +0000 UTC m=+0.569432970 container start cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d (image=quay.io/ceph/ceph:v18, name=strange_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:59] ENGINE Serving on https://192.168.122.100:7150
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:59] ENGINE Serving on https://192.168.122.100:7150
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:59] ENGINE Client ('192.168.122.100', 57780) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:59] ENGINE Client ('192.168.122.100', 57780) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 02:12:59 np0005539550 podman[75587]: 2025-11-29 07:12:59.572224229 +0000 UTC m=+0.677149855 container attach cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d (image=quay.io/ceph/ceph:v18, name=strange_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:59] ENGINE Serving on http://192.168.122.100:8765
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:59] ENGINE Serving on http://192.168.122.100:8765
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [29/Nov/2025:07:12:59] ENGINE Bus STARTED
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [29/Nov/2025:07:12:59] ENGINE Bus STARTED
Nov 29 02:12:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:12:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:12:59 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:12:59 np0005539550 ceph-mon[74435]: [29/Nov/2025:07:12:59] ENGINE Bus STARTING
Nov 29 02:12:59 np0005539550 ceph-mon[74435]: [29/Nov/2025:07:12:59] ENGINE Serving on https://192.168.122.100:7150
Nov 29 02:12:59 np0005539550 ceph-mon[74435]: [29/Nov/2025:07:12:59] ENGINE Client ('192.168.122.100', 57780) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 29 02:12:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.pdhsqi(active, since 2s)
Nov 29 02:13:00 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 29 02:13:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:13:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
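
The "orch set backend" dispatch above wires the orchestrator CLI to the cephadm module; the mon persists the choice as mgr/orchestrator/orchestrator. The equivalent CLI steps, sketched in Python (the module is already enabled in this log, so the first call is a no-op here):

    #!/usr/bin/env python3
    # Equivalent of the "orch set backend" dispatch above: enable the
    # cephadm module and select it as the orchestrator backend.
    import subprocess

    subprocess.check_call(["ceph", "mgr", "module", "enable", "cephadm"])
    subprocess.check_call(["ceph", "orch", "set", "backend", "cephadm"])
    print(subprocess.check_output(["ceph", "orch", "status"]).decode())
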
Nov 29 02:13:00 np0005539550 systemd[1]: libpod-cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d.scope: Deactivated successfully.
Nov 29 02:13:00 np0005539550 podman[75587]: 2025-11-29 07:13:00.131274061 +0000 UTC m=+1.236199727 container died cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d (image=quay.io/ceph/ceph:v18, name=strange_cori, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2b47d21543c0b97b874cc1870e1c69d65636cfb1c4af48d9ca50bd67902bc59d-merged.mount: Deactivated successfully.
Nov 29 02:13:00 np0005539550 podman[75587]: 2025-11-29 07:13:00.179769896 +0000 UTC m=+1.284695532 container remove cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d (image=quay.io/ceph/ceph:v18, name=strange_cori, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:13:00 np0005539550 systemd[1]: libpod-conmon-cf84699ee53b8c43add7b1672f27f38f8616ac288132ba86cf558623e43cb64d.scope: Deactivated successfully.
Nov 29 02:13:00 np0005539550 podman[75666]: 2025-11-29 07:13:00.22784606 +0000 UTC m=+0.026266755 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:00 np0005539550 podman[75666]: 2025-11-29 07:13:00.340103062 +0000 UTC m=+0.138523737 container create 1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc (image=quay.io/ceph/ceph:v18, name=awesome_shannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:00 np0005539550 systemd[1]: Started libpod-conmon-1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc.scope.
Nov 29 02:13:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e77f5a09888948ec9e4f3bdf856d87ec039d93be6b4a0c644c4b4809a75aa77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e77f5a09888948ec9e4f3bdf856d87ec039d93be6b4a0c644c4b4809a75aa77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e77f5a09888948ec9e4f3bdf856d87ec039d93be6b4a0c644c4b4809a75aa77/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:00 np0005539550 podman[75666]: 2025-11-29 07:13:00.802232026 +0000 UTC m=+0.600652731 container init 1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc (image=quay.io/ceph/ceph:v18, name=awesome_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:00 np0005539550 podman[75666]: 2025-11-29 07:13:00.807964204 +0000 UTC m=+0.606384889 container start 1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc (image=quay.io/ceph/ceph:v18, name=awesome_shannon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:00 np0005539550 podman[75666]: 2025-11-29 07:13:00.948545253 +0000 UTC m=+0.746965938 container attach 1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc (image=quay.io/ceph/ceph:v18, name=awesome_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: [29/Nov/2025:07:12:59] ENGINE Serving on http://192.168.122.100:8765
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: [29/Nov/2025:07:12:59] ENGINE Bus STARTED
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019919748 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_user
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 29 02:13:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_config
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 29 02:13:01 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 29 02:13:01 np0005539550 awesome_shannon[75682]: ssh user set to ceph-admin. sudo will be used
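
Here client.admin ran "cephadm set-user" (see the dispatch above) and the module stored mgr/cephadm/ssh_user, noting that sudo will be used because the user is not root. The same step from a script, assuming admin CLI access:

    #!/usr/bin/env python3
    # The "cephadm set-user" dispatch above, as a script. A non-root
    # user (ceph-admin in this log) makes cephadm prepend sudo on every
    # managed host, exactly as the mgr logs.
    import subprocess

    subprocess.check_call(["ceph", "cephadm", "set-user", "ceph-admin"])
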
Nov 29 02:13:01 np0005539550 systemd[1]: libpod-1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc.scope: Deactivated successfully.
Nov 29 02:13:01 np0005539550 podman[75666]: 2025-11-29 07:13:01.744279341 +0000 UTC m=+1.542700026 container died 1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc (image=quay.io/ceph/ceph:v18, name=awesome_shannon, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8e77f5a09888948ec9e4f3bdf856d87ec039d93be6b4a0c644c4b4809a75aa77-merged.mount: Deactivated successfully.
Nov 29 02:13:01 np0005539550 podman[75666]: 2025-11-29 07:13:01.782088932 +0000 UTC m=+1.580509607 container remove 1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc (image=quay.io/ceph/ceph:v18, name=awesome_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:13:01 np0005539550 systemd[1]: libpod-conmon-1768945a5c18e409de896bf05607d8b117076e89e9bf6d08f537619d1c4ba1dc.scope: Deactivated successfully.
Nov 29 02:13:01 np0005539550 podman[75720]: 2025-11-29 07:13:01.84236125 +0000 UTC m=+0.041428655 container create 27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4 (image=quay.io/ceph/ceph:v18, name=jovial_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:01 np0005539550 systemd[1]: Started libpod-conmon-27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4.scope.
Nov 29 02:13:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c3951df5c1a45e966941cb7c22e85f234396cb19b508b133a753f7aae417d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c3951df5c1a45e966941cb7c22e85f234396cb19b508b133a753f7aae417d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c3951df5c1a45e966941cb7c22e85f234396cb19b508b133a753f7aae417d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c3951df5c1a45e966941cb7c22e85f234396cb19b508b133a753f7aae417d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585c3951df5c1a45e966941cb7c22e85f234396cb19b508b133a753f7aae417d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:01 np0005539550 podman[75720]: 2025-11-29 07:13:01.824578143 +0000 UTC m=+0.023645568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:01 np0005539550 podman[75720]: 2025-11-29 07:13:01.921332627 +0000 UTC m=+0.120400052 container init 27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4 (image=quay.io/ceph/ceph:v18, name=jovial_roentgen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:13:01 np0005539550 podman[75720]: 2025-11-29 07:13:01.929119497 +0000 UTC m=+0.128186932 container start 27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4 (image=quay.io/ceph/ceph:v18, name=jovial_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:13:01 np0005539550 podman[75720]: 2025-11-29 07:13:01.932304319 +0000 UTC m=+0.131371724 container attach 27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4 (image=quay.io/ceph/ceph:v18, name=jovial_roentgen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:02 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 29 02:13:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:02 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 29 02:13:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 29 02:13:02 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Set ssh private key
Nov 29 02:13:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 29 02:13:02 np0005539550 systemd[1]: libpod-27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4.scope: Deactivated successfully.
Nov 29 02:13:02 np0005539550 podman[75720]: 2025-11-29 07:13:02.545294966 +0000 UTC m=+0.744362371 container died 27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4 (image=quay.io/ceph/ceph:v18, name=jovial_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:13:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-585c3951df5c1a45e966941cb7c22e85f234396cb19b508b133a753f7aae417d-merged.mount: Deactivated successfully.
Nov 29 02:13:02 np0005539550 podman[75720]: 2025-11-29 07:13:02.58636666 +0000 UTC m=+0.785434065 container remove 27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4 (image=quay.io/ceph/ceph:v18, name=jovial_roentgen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:02 np0005539550 systemd[1]: libpod-conmon-27fdd6c04db8a1848943417f4ff35e033f7cadb6e92be0b16b23e85e080ebcd4.scope: Deactivated successfully.
Nov 29 02:13:02 np0005539550 podman[75773]: 2025-11-29 07:13:02.648627259 +0000 UTC m=+0.044218267 container create ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8 (image=quay.io/ceph/ceph:v18, name=suspicious_meitner, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:13:02 np0005539550 systemd[1]: Started libpod-conmon-ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8.scope.
Nov 29 02:13:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1331cce1734bf9d660878dcce3c4fbd41b1cea28819e2ab6681df03c4345af0/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1331cce1734bf9d660878dcce3c4fbd41b1cea28819e2ab6681df03c4345af0/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1331cce1734bf9d660878dcce3c4fbd41b1cea28819e2ab6681df03c4345af0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1331cce1734bf9d660878dcce3c4fbd41b1cea28819e2ab6681df03c4345af0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1331cce1734bf9d660878dcce3c4fbd41b1cea28819e2ab6681df03c4345af0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:02 np0005539550 podman[75773]: 2025-11-29 07:13:02.719626061 +0000 UTC m=+0.115217089 container init ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8 (image=quay.io/ceph/ceph:v18, name=suspicious_meitner, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:02 np0005539550 podman[75773]: 2025-11-29 07:13:02.72659445 +0000 UTC m=+0.122185448 container start ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8 (image=quay.io/ceph/ceph:v18, name=suspicious_meitner, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:13:02 np0005539550 podman[75773]: 2025-11-29 07:13:02.632806662 +0000 UTC m=+0.028397690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:02 np0005539550 podman[75773]: 2025-11-29 07:13:02.735313454 +0000 UTC m=+0.130904462 container attach ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8 (image=quay.io/ceph/ceph:v18, name=suspicious_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:13:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:03 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 29 02:13:03 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
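
The two config-key dispatches above store the SSH identity under mgr/cephadm/ssh_identity_key and ssh_identity_pub. From the CLI the pair is loaded with set-priv-key / set-pub-key; a sketch, where the key file paths are placeholders, not paths from this log:

    #!/usr/bin/env python3
    # Load an SSH identity into cephadm, mirroring the set-priv-key /
    # set-pub-key dispatches above. "ceph.key"/"ceph.pub" are placeholder
    # file names; -i feeds the file contents to the command.
    import subprocess

    subprocess.run(["ceph", "cephadm", "set-priv-key", "-i", "ceph.key"], check=True)
    subprocess.run(["ceph", "cephadm", "set-pub-key", "-i", "ceph.pub"], check=True)
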
Nov 29 02:13:03 np0005539550 systemd[1]: libpod-ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8.scope: Deactivated successfully.
Nov 29 02:13:03 np0005539550 podman[75773]: 2025-11-29 07:13:03.342224224 +0000 UTC m=+0.737815232 container died ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8 (image=quay.io/ceph/ceph:v18, name=suspicious_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:03 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: Set ssh ssh_user
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: Set ssh ssh_config
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: ssh user set to ceph-admin. sudo will be used
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: Set ssh ssh_identity_key
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: Set ssh private key
Nov 29 02:13:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d1331cce1734bf9d660878dcce3c4fbd41b1cea28819e2ab6681df03c4345af0-merged.mount: Deactivated successfully.
Nov 29 02:13:03 np0005539550 podman[75773]: 2025-11-29 07:13:03.832369878 +0000 UTC m=+1.227960886 container remove ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8 (image=quay.io/ceph/ceph:v18, name=suspicious_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:03 np0005539550 podman[75826]: 2025-11-29 07:13:03.890996503 +0000 UTC m=+0.038531590 container create 2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19 (image=quay.io/ceph/ceph:v18, name=confident_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:13:03 np0005539550 systemd[1]: libpod-conmon-ea103b5d258a986c9f4f3a2d6dca374a89be2944b67a7ea69ec5b2b8c9ebaef8.scope: Deactivated successfully.
Nov 29 02:13:03 np0005539550 systemd[1]: Started libpod-conmon-2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19.scope.
Nov 29 02:13:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94891c7d57f1dbc1d4e3937554c6236939623c5d2aaf27c3b80df81c50fbe950/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94891c7d57f1dbc1d4e3937554c6236939623c5d2aaf27c3b80df81c50fbe950/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94891c7d57f1dbc1d4e3937554c6236939623c5d2aaf27c3b80df81c50fbe950/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:03 np0005539550 podman[75826]: 2025-11-29 07:13:03.874696804 +0000 UTC m=+0.022231911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:04 np0005539550 podman[75826]: 2025-11-29 07:13:04.08330546 +0000 UTC m=+0.230840567 container init 2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19 (image=quay.io/ceph/ceph:v18, name=confident_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:04 np0005539550 podman[75826]: 2025-11-29 07:13:04.090224428 +0000 UTC m=+0.237759515 container start 2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19 (image=quay.io/ceph/ceph:v18, name=confident_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:13:04 np0005539550 podman[75826]: 2025-11-29 07:13:04.097222807 +0000 UTC m=+0.244757924 container attach 2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19 (image=quay.io/ceph/ceph:v18, name=confident_elgamal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:04 np0005539550 confident_elgamal[75843]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDe/xa2TJ7MA45QtT1Dl3ahtdCcRHYXnpdDlA4YPMLbqtuocR3HFnjB+CYXMNUTkwUlxURsA7VtvtSq5z3OxTbjCW8SJ2aA9use9+dXLqDZeN4kRQbllE71XVGbzL5zVNj5PzDkQWenKt/Wo6s36T+anlu6Cjx+aOYLQrZUXleLNGLxByU8QGOfk+enTMggQl9xh0RQrzB4uMv74kgMc1XEm+FBB/15efaXpLTMi8D9QTvibnNiyac5EIdLj05se8q0ObQH9wFweyOnZBUyz3zCwDsJWQJwNMzIh82iy83W2NZTGggzYEojeu9HN0EVwDYIPrlVjth2AvZu1s4odHy2jX+TktL/VWcKCyVYJHsdaGprJtNFUAn17etFQKHLWqWbeHvLyOIhHpqmGs82Sk7uY5J84yJYIKzM6fzobXMCtFHtVkQ3djwlrDS5EXAjn4iTQ/oJdnbQFsEl1xCsK0PEW/hYy1qb3ECcsL2sUqeGyQsuR8Gy857hZg/f7x6mW8M= zuul@controller
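
"cephadm get-pub-key" returns the public half of that identity (the ssh-rsa line just printed), and it must end up in the ssh user's authorized_keys on every host cephadm is to manage. A sketch of distributing it with ssh-copy-id; the host and user names are the ones appearing in this log, and the copy step is an assumption about how this deployment pushes the key:

    #!/usr/bin/env python3
    # Fetch the cephadm public key (the ssh-rsa line above) and install
    # it for the ssh user on a target host. Adjust host/user per node.
    import pathlib
    import subprocess

    pub = subprocess.check_output(["ceph", "cephadm", "get-pub-key"])
    pathlib.Path("ceph.pub").write_bytes(pub)
    subprocess.check_call(
        ["ssh-copy-id", "-f", "-i", "ceph.pub", "ceph-admin@np0005539550"])
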
Nov 29 02:13:04 np0005539550 systemd[1]: libpod-2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19.scope: Deactivated successfully.
Nov 29 02:13:04 np0005539550 podman[75826]: 2025-11-29 07:13:04.704014975 +0000 UTC m=+0.851550082 container died 2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19 (image=quay.io/ceph/ceph:v18, name=confident_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:13:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-94891c7d57f1dbc1d4e3937554c6236939623c5d2aaf27c3b80df81c50fbe950-merged.mount: Deactivated successfully.
Nov 29 02:13:04 np0005539550 podman[75826]: 2025-11-29 07:13:04.771822216 +0000 UTC m=+0.919357303 container remove 2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19 (image=quay.io/ceph/ceph:v18, name=confident_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:13:04 np0005539550 ceph-mon[74435]: Set ssh ssh_identity_pub
Nov 29 02:13:04 np0005539550 systemd[1]: libpod-conmon-2421b2f89beff4d1e86fdd0d7a56d6257a36153862e0108fd5ad81110b0caa19.scope: Deactivated successfully.
Nov 29 02:13:04 np0005539550 podman[75881]: 2025-11-29 07:13:04.863294535 +0000 UTC m=+0.066646362 container create 01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf (image=quay.io/ceph/ceph:v18, name=competent_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:13:04 np0005539550 podman[75881]: 2025-11-29 07:13:04.821691006 +0000 UTC m=+0.025042833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:04 np0005539550 systemd[1]: Started libpod-conmon-01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf.scope.
Nov 29 02:13:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6eeb67a431acd5e6b06e67251d57cac3c86192770954f9e0b4b419774eda59e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6eeb67a431acd5e6b06e67251d57cac3c86192770954f9e0b4b419774eda59e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6eeb67a431acd5e6b06e67251d57cac3c86192770954f9e0b4b419774eda59e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:05 np0005539550 podman[75881]: 2025-11-29 07:13:05.011779907 +0000 UTC m=+0.215131764 container init 01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf (image=quay.io/ceph/ceph:v18, name=competent_burnell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:13:05 np0005539550 podman[75881]: 2025-11-29 07:13:05.019049473 +0000 UTC m=+0.222401300 container start 01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf (image=quay.io/ceph/ceph:v18, name=competent_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:05 np0005539550 podman[75881]: 2025-11-29 07:13:05.059717247 +0000 UTC m=+0.263069074 container attach 01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf (image=quay.io/ceph/ceph:v18, name=competent_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:13:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:05 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:05 np0005539550 systemd[1]: Created slice User Slice of UID 42477.
Nov 29 02:13:05 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 29 02:13:05 np0005539550 systemd-logind[788]: New session 22 of user ceph-admin.
Nov 29 02:13:05 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 29 02:13:05 np0005539550 systemd[1]: Starting User Manager for UID 42477...
Nov 29 02:13:05 np0005539550 systemd[75927]: Queued start job for default target Main User Target.
Nov 29 02:13:05 np0005539550 systemd[75927]: Created slice User Application Slice.
Nov 29 02:13:05 np0005539550 systemd[75927]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:13:05 np0005539550 systemd[75927]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:13:05 np0005539550 systemd[75927]: Reached target Paths.
Nov 29 02:13:05 np0005539550 systemd[75927]: Reached target Timers.
Nov 29 02:13:05 np0005539550 systemd[75927]: Starting D-Bus User Message Bus Socket...
Nov 29 02:13:05 np0005539550 systemd[75927]: Starting Create User's Volatile Files and Directories...
Nov 29 02:13:05 np0005539550 systemd-logind[788]: New session 24 of user ceph-admin.
Nov 29 02:13:06 np0005539550 systemd[75927]: Finished Create User's Volatile Files and Directories.
Nov 29 02:13:06 np0005539550 systemd[75927]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:13:06 np0005539550 systemd[75927]: Reached target Sockets.
Nov 29 02:13:06 np0005539550 systemd[75927]: Reached target Basic System.
Nov 29 02:13:06 np0005539550 systemd[75927]: Reached target Main User Target.
Nov 29 02:13:06 np0005539550 systemd[75927]: Startup finished in 142ms.
Nov 29 02:13:06 np0005539550 systemd[1]: Started User Manager for UID 42477.
Nov 29 02:13:06 np0005539550 systemd[1]: Started Session 22 of User ceph-admin.
Nov 29 02:13:06 np0005539550 systemd[1]: Started Session 24 of User ceph-admin.
Nov 29 02:13:06 np0005539550 systemd-logind[788]: New session 25 of user ceph-admin.
Nov 29 02:13:06 np0005539550 systemd[1]: Started Session 25 of User ceph-admin.
Nov 29 02:13:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053016 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:06 np0005539550 systemd-logind[788]: New session 26 of user ceph-admin.
Nov 29 02:13:06 np0005539550 systemd[1]: Started Session 26 of User ceph-admin.
Nov 29 02:13:07 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 29 02:13:07 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 29 02:13:07 np0005539550 systemd-logind[788]: New session 27 of user ceph-admin.
Nov 29 02:13:07 np0005539550 systemd[1]: Started Session 27 of User ceph-admin.
Nov 29 02:13:07 np0005539550 systemd-logind[788]: New session 28 of user ceph-admin.
Nov 29 02:13:07 np0005539550 systemd[1]: Started Session 28 of User ceph-admin.
Nov 29 02:13:07 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:07 np0005539550 ceph-mon[74435]: Deploying cephadm binary to compute-0
Nov 29 02:13:07 np0005539550 systemd-logind[788]: New session 29 of user ceph-admin.
Nov 29 02:13:08 np0005539550 systemd[1]: Started Session 29 of User ceph-admin.
Nov 29 02:13:08 np0005539550 systemd-logind[788]: New session 30 of user ceph-admin.
Nov 29 02:13:08 np0005539550 systemd[1]: Started Session 30 of User ceph-admin.
Nov 29 02:13:08 np0005539550 systemd-logind[788]: New session 31 of user ceph-admin.
Nov 29 02:13:08 np0005539550 systemd[1]: Started Session 31 of User ceph-admin.
Nov 29 02:13:09 np0005539550 systemd-logind[788]: New session 32 of user ceph-admin.
Nov 29 02:13:09 np0005539550 systemd[1]: Started Session 32 of User ceph-admin.
Nov 29 02:13:09 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:09 np0005539550 systemd-logind[788]: New session 33 of user ceph-admin.
Nov 29 02:13:09 np0005539550 systemd[1]: Started Session 33 of User ceph-admin.
Nov 29 02:13:10 np0005539550 systemd-logind[788]: New session 34 of user ceph-admin.
Nov 29 02:13:10 np0005539550 systemd[1]: Started Session 34 of User ceph-admin.
Nov 29 02:13:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:10 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Added host compute-0
Nov 29 02:13:10 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 02:13:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:13:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:13:10 np0005539550 competent_burnell[75897]: Added host 'compute-0' with addr '192.168.122.100'
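[annotation] This confirms the "orch host add" dispatched at 02:13:05 above. A sketch of the equivalent CLI call, with the hostname and address taken verbatim from the log:

    # enroll compute-0 in the orchestrator's host inventory
    ceph orch host add compute-0 192.168.122.100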
Nov 29 02:13:10 np0005539550 systemd[1]: libpod-01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf.scope: Deactivated successfully.
Nov 29 02:13:10 np0005539550 podman[75881]: 2025-11-29 07:13:10.510956543 +0000 UTC m=+5.714308380 container died 01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf (image=quay.io/ceph/ceph:v18, name=competent_burnell, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:13:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e6eeb67a431acd5e6b06e67251d57cac3c86192770954f9e0b4b419774eda59e-merged.mount: Deactivated successfully.
Nov 29 02:13:10 np0005539550 podman[75881]: 2025-11-29 07:13:10.602902354 +0000 UTC m=+5.806254181 container remove 01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf (image=quay.io/ceph/ceph:v18, name=competent_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:10 np0005539550 systemd[1]: libpod-conmon-01471f490b431af0d9a3c670a607f1b0ce385132a0e3358ebe8fd48c9dde19cf.scope: Deactivated successfully.
Nov 29 02:13:10 np0005539550 podman[76591]: 2025-11-29 07:13:10.681161223 +0000 UTC m=+0.058337989 container create 89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d (image=quay.io/ceph/ceph:v18, name=optimistic_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:10 np0005539550 systemd[1]: Started libpod-conmon-89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d.scope.
Nov 29 02:13:10 np0005539550 podman[76591]: 2025-11-29 07:13:10.652419525 +0000 UTC m=+0.029596331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55e578758208709c17b860e47a214fa8e4e7a92c0fa144b479ffcab12e07fee/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55e578758208709c17b860e47a214fa8e4e7a92c0fa144b479ffcab12e07fee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55e578758208709c17b860e47a214fa8e4e7a92c0fa144b479ffcab12e07fee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:10 np0005539550 podman[76591]: 2025-11-29 07:13:10.797783187 +0000 UTC m=+0.174959983 container init 89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d (image=quay.io/ceph/ceph:v18, name=optimistic_brahmagupta, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:13:10 np0005539550 podman[76591]: 2025-11-29 07:13:10.805050904 +0000 UTC m=+0.182227680 container start 89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d (image=quay.io/ceph/ceph:v18, name=optimistic_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:10 np0005539550 podman[76591]: 2025-11-29 07:13:10.830931768 +0000 UTC m=+0.208108544 container attach 89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d (image=quay.io/ceph/ceph:v18, name=optimistic_brahmagupta, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:13:11 np0005539550 podman[76691]: 2025-11-29 07:13:11.068947929 +0000 UTC m=+0.085527177 container create 9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880 (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:13:11 np0005539550 systemd[1]: Started libpod-conmon-9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880.scope.
Nov 29 02:13:11 np0005539550 podman[76691]: 2025-11-29 07:13:11.007018219 +0000 UTC m=+0.023597497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:11 np0005539550 podman[76691]: 2025-11-29 07:13:11.131840463 +0000 UTC m=+0.148419731 container init 9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880 (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:13:11 np0005539550 podman[76691]: 2025-11-29 07:13:11.13793395 +0000 UTC m=+0.154513198 container start 9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880 (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:11 np0005539550 podman[76691]: 2025-11-29 07:13:11.141517482 +0000 UTC m=+0.158096760 container attach 9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880 (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:13:11 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:11 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 29 02:13:11 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:11 np0005539550 optimistic_brahmagupta[76655]: Scheduled mon update...
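[annotation] The mgr saved a mon service spec with placement count 5 and scheduled the update. A plausible originating command, assuming the count form of the placement argument (the exact flags used are not in the log):

    # ask the orchestrator to maintain five monitor daemons
    ceph orch apply mon 5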
Nov 29 02:13:11 np0005539550 systemd[1]: libpod-89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d.scope: Deactivated successfully.
Nov 29 02:13:11 np0005539550 podman[76591]: 2025-11-29 07:13:11.422221588 +0000 UTC m=+0.799398384 container died 89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d (image=quay.io/ceph/ceph:v18, name=optimistic_brahmagupta, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a55e578758208709c17b860e47a214fa8e4e7a92c0fa144b479ffcab12e07fee-merged.mount: Deactivated successfully.
Nov 29 02:13:11 np0005539550 podman[76591]: 2025-11-29 07:13:11.488282464 +0000 UTC m=+0.865459240 container remove 89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d (image=quay.io/ceph/ceph:v18, name=optimistic_brahmagupta, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:11 np0005539550 magical_kapitsa[76707]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
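[annotation] The magical_kapitsa container printed the image's release string, consistent with a version probe of quay.io/ceph/ceph:v18. A rough equivalent from the host, assuming cephadm is installed (the log does not show the exact invocation):

    # report the Ceph release bundled in the container image
    cephadm shell -- ceph --version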
Nov 29 02:13:11 np0005539550 systemd[1]: libpod-conmon-89235dcd0cc499ebfac61d3a3e7bbacd164f1a5a88840ee60569c487c49cfe5d.scope: Deactivated successfully.
Nov 29 02:13:11 np0005539550 systemd[1]: libpod-9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880.scope: Deactivated successfully.
Nov 29 02:13:11 np0005539550 podman[76691]: 2025-11-29 07:13:11.509723074 +0000 UTC m=+0.526302322 container died 9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880 (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: Added host compute-0
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-67fd8cf117fd811eee445c190847176861602454422c561064c31bf955ebc940-merged.mount: Deactivated successfully.
Nov 29 02:13:11 np0005539550 podman[76691]: 2025-11-29 07:13:11.580171023 +0000 UTC m=+0.596750271 container remove 9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880 (image=quay.io/ceph/ceph:v18, name=magical_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:13:11 np0005539550 podman[76747]: 2025-11-29 07:13:11.585648804 +0000 UTC m=+0.076970917 container create 0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343 (image=quay.io/ceph/ceph:v18, name=objective_margulis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:11 np0005539550 systemd[1]: libpod-conmon-9563d8d1127e9565c6c892596e6157f37b1a543794db24473b8aa5f297169880.scope: Deactivated successfully.
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 29 02:13:11 np0005539550 systemd[1]: Started libpod-conmon-0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343.scope.
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:11 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceac2ecb3e48785b57d70d6bfb904c7b3ee0cd1a2769040497e01f2931b59bc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceac2ecb3e48785b57d70d6bfb904c7b3ee0cd1a2769040497e01f2931b59bc6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:11 np0005539550 podman[76747]: 2025-11-29 07:13:11.550166093 +0000 UTC m=+0.041488226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceac2ecb3e48785b57d70d6bfb904c7b3ee0cd1a2769040497e01f2931b59bc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:11 np0005539550 podman[76747]: 2025-11-29 07:13:11.655001534 +0000 UTC m=+0.146323667 container init 0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343 (image=quay.io/ceph/ceph:v18, name=objective_margulis, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:11 np0005539550 podman[76747]: 2025-11-29 07:13:11.661432999 +0000 UTC m=+0.152755112 container start 0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343 (image=quay.io/ceph/ceph:v18, name=objective_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:13:11 np0005539550 podman[76747]: 2025-11-29 07:13:11.665104844 +0000 UTC m=+0.156427037 container attach 0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343 (image=quay.io/ceph/ceph:v18, name=objective_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:13:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:12 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:12 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 29 02:13:12 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:12 np0005539550 objective_margulis[76775]: Scheduled mgr update...
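[annotation] Counterpart to the mon spec above: a mgr spec with placement count 2 is saved and scheduled. A plausible originating command, under the same assumption about the placement syntax:

    # ask the orchestrator to maintain two manager daemons
    ceph orch apply mgr 2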
Nov 29 02:13:12 np0005539550 systemd[1]: libpod-0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343.scope: Deactivated successfully.
Nov 29 02:13:12 np0005539550 podman[76747]: 2025-11-29 07:13:12.245566246 +0000 UTC m=+0.736888349 container died 0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343 (image=quay.io/ceph/ceph:v18, name=objective_margulis, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ceac2ecb3e48785b57d70d6bfb904c7b3ee0cd1a2769040497e01f2931b59bc6-merged.mount: Deactivated successfully.
Nov 29 02:13:12 np0005539550 podman[76747]: 2025-11-29 07:13:12.287553624 +0000 UTC m=+0.778875737 container remove 0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343 (image=quay.io/ceph/ceph:v18, name=objective_margulis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:13:12 np0005539550 systemd[1]: libpod-conmon-0c48e7f49e10735b9fb876944329544a605e41c3cee83b143186acdadb62c343.scope: Deactivated successfully.
Nov 29 02:13:12 np0005539550 podman[77005]: 2025-11-29 07:13:12.341809356 +0000 UTC m=+0.036634621 container create 5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af (image=quay.io/ceph/ceph:v18, name=confident_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:12 np0005539550 systemd[1]: Started libpod-conmon-5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af.scope.
Nov 29 02:13:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fbd0a212c8d23d8071b7959b6ceeb412206514e5cfa856259656254b333a84/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fbd0a212c8d23d8071b7959b6ceeb412206514e5cfa856259656254b333a84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41fbd0a212c8d23d8071b7959b6ceeb412206514e5cfa856259656254b333a84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:12 np0005539550 podman[77005]: 2025-11-29 07:13:12.326246257 +0000 UTC m=+0.021071562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:12 np0005539550 podman[77005]: 2025-11-29 07:13:12.426398348 +0000 UTC m=+0.121223653 container init 5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af (image=quay.io/ceph/ceph:v18, name=confident_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:12 np0005539550 podman[77005]: 2025-11-29 07:13:12.432624788 +0000 UTC m=+0.127450073 container start 5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af (image=quay.io/ceph/ceph:v18, name=confident_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:12 np0005539550 podman[77005]: 2025-11-29 07:13:12.435851211 +0000 UTC m=+0.130676486 container attach 5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af (image=quay.io/ceph/ceph:v18, name=confident_keller, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: Saving service mon spec with placement count:5
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:12 np0005539550 podman[77123]: 2025-11-29 07:13:12.796280794 +0000 UTC m=+0.061180692 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:13:12 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:12 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service crash spec with placement *
Nov 29 02:13:12 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:13:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:12 np0005539550 confident_keller[77048]: Scheduled crash update...
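[annotation] The crash spec is saved with placement "*", i.e. every managed host. A plausible originating command, assuming the wildcard placement form:

    # run a crash-report collector on all hosts
    ceph orch apply crash '*'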
Nov 29 02:13:13 np0005539550 systemd[1]: libpod-5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af.scope: Deactivated successfully.
Nov 29 02:13:13 np0005539550 podman[77005]: 2025-11-29 07:13:13.01971448 +0000 UTC m=+0.714539755 container died 5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af (image=quay.io/ceph/ceph:v18, name=confident_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-41fbd0a212c8d23d8071b7959b6ceeb412206514e5cfa856259656254b333a84-merged.mount: Deactivated successfully.
Nov 29 02:13:13 np0005539550 podman[77005]: 2025-11-29 07:13:13.072741471 +0000 UTC m=+0.767566746 container remove 5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af (image=quay.io/ceph/ceph:v18, name=confident_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:13 np0005539550 systemd[1]: libpod-conmon-5a962e7130a6a8e8136b46ebb64ce62f71799db0c435a28536945b5982df88af.scope: Deactivated successfully.
Nov 29 02:13:13 np0005539550 podman[77123]: 2025-11-29 07:13:13.14241873 +0000 UTC m=+0.407318608 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:13:13 np0005539550 podman[77177]: 2025-11-29 07:13:13.166051217 +0000 UTC m=+0.065125483 container create d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:13 np0005539550 systemd[1]: Started libpod-conmon-d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79.scope.
Nov 29 02:13:13 np0005539550 podman[77177]: 2025-11-29 07:13:13.127568179 +0000 UTC m=+0.026642475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a358cc8e8eeb7217b4d1ce583c0b6bc0f9831919ee073270a23a945b9f2b0a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a358cc8e8eeb7217b4d1ce583c0b6bc0f9831919ee073270a23a945b9f2b0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a358cc8e8eeb7217b4d1ce583c0b6bc0f9831919ee073270a23a945b9f2b0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:13 np0005539550 podman[77177]: 2025-11-29 07:13:13.344227581 +0000 UTC m=+0.243301857 container init d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:13:13 np0005539550 podman[77177]: 2025-11-29 07:13:13.352177675 +0000 UTC m=+0.251251941 container start d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:13:13 np0005539550 podman[77177]: 2025-11-29 07:13:13.423322832 +0000 UTC m=+0.322397128 container attach d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:13:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:13 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 29 02:13:14 np0005539550 ceph-mon[74435]: Saving service mgr spec with placement count:2
Nov 29 02:13:14 np0005539550 ceph-mon[74435]: Saving service crash spec with placement *
Nov 29 02:13:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2594415671' entity='client.admin' 
Nov 29 02:13:14 np0005539550 systemd[1]: libpod-d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79.scope: Deactivated successfully.
Nov 29 02:13:14 np0005539550 podman[77177]: 2025-11-29 07:13:14.161456181 +0000 UTC m=+1.060530447 container died d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-46a358cc8e8eeb7217b4d1ce583c0b6bc0f9831919ee073270a23a945b9f2b0a-merged.mount: Deactivated successfully.
Nov 29 02:13:14 np0005539550 podman[77177]: 2025-11-29 07:13:14.241503756 +0000 UTC m=+1.140578022 container remove d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79 (image=quay.io/ceph/ceph:v18, name=hungry_goldberg, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:14 np0005539550 systemd[1]: libpod-conmon-d0d48f72d40c2085cf4e9b44e64b1f4ef4a5d071bc6f3bd7eea4afcb0ae5fc79.scope: Deactivated successfully.
Nov 29 02:13:14 np0005539550 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77384 (sysctl)
Nov 29 02:13:14 np0005539550 podman[77360]: 2025-11-29 07:13:14.303922838 +0000 UTC m=+0.042627245 container create c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28 (image=quay.io/ceph/ceph:v18, name=sad_pare, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:14 np0005539550 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 02:13:14 np0005539550 systemd[1]: Started libpod-conmon-c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28.scope.
Nov 29 02:13:14 np0005539550 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 02:13:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f75515f52f6b2886141ff8ceffbde468e900f7bd60595b92d1b200d159ca2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f75515f52f6b2886141ff8ceffbde468e900f7bd60595b92d1b200d159ca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f75515f52f6b2886141ff8ceffbde468e900f7bd60595b92d1b200d159ca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:14 np0005539550 podman[77360]: 2025-11-29 07:13:14.382490855 +0000 UTC m=+0.121195282 container init c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28 (image=quay.io/ceph/ceph:v18, name=sad_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:14 np0005539550 podman[77360]: 2025-11-29 07:13:14.286668005 +0000 UTC m=+0.025372432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:14 np0005539550 podman[77360]: 2025-11-29 07:13:14.389146776 +0000 UTC m=+0.127851183 container start c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28 (image=quay.io/ceph/ceph:v18, name=sad_pare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:14 np0005539550 podman[77360]: 2025-11-29 07:13:14.394025121 +0000 UTC m=+0.132729528 container attach c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28 (image=quay.io/ceph/ceph:v18, name=sad_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:14 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 29 02:13:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:14 np0005539550 podman[77360]: 2025-11-29 07:13:14.980048116 +0000 UTC m=+0.718752533 container died c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28 (image=quay.io/ceph/ceph:v18, name=sad_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:13:14 np0005539550 systemd[1]: libpod-c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28.scope: Deactivated successfully.
Nov 29 02:13:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e45f75515f52f6b2886141ff8ceffbde468e900f7bd60595b92d1b200d159ca2-merged.mount: Deactivated successfully.
Nov 29 02:13:15 np0005539550 podman[77360]: 2025-11-29 07:13:15.02579854 +0000 UTC m=+0.764502947 container remove c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28 (image=quay.io/ceph/ceph:v18, name=sad_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:13:15 np0005539550 systemd[1]: libpod-conmon-c43b288c87ce264dfadc612fc99f7d8faef523196b8b43c26ff186648cb91c28.scope: Deactivated successfully.
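
The create → init → start → attach → died → remove sequences above (hungry_goldberg, sad_pare, and the runs that follow) are cephadm's normal pattern during bootstrap: each ceph CLI call is executed in a fresh one-shot quay.io/ceph/ceph:v18 container that podman removes on exit. The kernel's "xfs filesystem being remounted ... supports timestamps until 2038" lines are informational, emitted each time a host path on xfs is bind-mounted into one of these containers. A minimal sketch for pairing the lifecycle events per container ID when reading such a journal, assuming only the "container <event> <id>" format visible in the podman lines here:

    #!/usr/bin/env python3
    """Group podman lifecycle events by container ID from journal text
    on stdin. Sketch only; the pattern is taken from the 'podman[...]:
    ... container <event> <64-hex id> ...' entries in this log."""
    import re
    import sys
    from collections import defaultdict

    EVENT_RE = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    events = defaultdict(list)  # container ID -> ordered event names
    for line in sys.stdin:
        m = EVENT_RE.search(line)
        if m:
            events[m.group(2)].append(m.group(1))

    for cid, seq in events.items():
        # A completed one-shot run ends in 'remove'.
        print(cid[:12], "->", " ".join(seq))
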
Nov 29 02:13:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:15 np0005539550 podman[77561]: 2025-11-29 07:13:15.088902381 +0000 UTC m=+0.043589531 container create 8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008 (image=quay.io/ceph/ceph:v18, name=nervous_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:15 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/2594415671' entity='client.admin' 
Nov 29 02:13:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:15 np0005539550 systemd[1]: Started libpod-conmon-8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008.scope.
Nov 29 02:13:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041c6f5d5b654850d47099a199f3be5e8c5bac4aef57027c4c87994df8b7865d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041c6f5d5b654850d47099a199f3be5e8c5bac4aef57027c4c87994df8b7865d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/041c6f5d5b654850d47099a199f3be5e8c5bac4aef57027c4c87994df8b7865d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:15 np0005539550 podman[77561]: 2025-11-29 07:13:15.069333438 +0000 UTC m=+0.024020618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:15 np0005539550 podman[77561]: 2025-11-29 07:13:15.172958238 +0000 UTC m=+0.127645408 container init 8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008 (image=quay.io/ceph/ceph:v18, name=nervous_brown, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:13:15 np0005539550 podman[77561]: 2025-11-29 07:13:15.1788464 +0000 UTC m=+0.133533550 container start 8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008 (image=quay.io/ceph/ceph:v18, name=nervous_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:15 np0005539550 podman[77561]: 2025-11-29 07:13:15.185009618 +0000 UTC m=+0.139696778 container attach 8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008 (image=quay.io/ceph/ceph:v18, name=nervous_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:13:15 np0005539550 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 29 02:13:15 np0005539550 podman[77744]: 2025-11-29 07:13:15.65602943 +0000 UTC m=+0.048886316 container create f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:15 np0005539550 systemd[1]: Started libpod-conmon-f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2.scope.
Nov 29 02:13:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:15 np0005539550 podman[77744]: 2025-11-29 07:13:15.632200779 +0000 UTC m=+0.025057475 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:15 np0005539550 podman[77744]: 2025-11-29 07:13:15.73273772 +0000 UTC m=+0.125594416 container init f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:15 np0005539550 podman[77744]: 2025-11-29 07:13:15.737543033 +0000 UTC m=+0.130399699 container start f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:15 np0005539550 systemd[1]: libpod-f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2.scope: Deactivated successfully.
Nov 29 02:13:15 np0005539550 priceless_yalow[77760]: 167 167
Nov 29 02:13:15 np0005539550 conmon[77760]: conmon f179594d2fa99225c0e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2.scope/container/memory.events
Nov 29 02:13:15 np0005539550 podman[77744]: 2025-11-29 07:13:15.747278523 +0000 UTC m=+0.140135189 container attach f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:13:15 np0005539550 podman[77744]: 2025-11-29 07:13:15.747657513 +0000 UTC m=+0.140514189 container died f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c8061912ca5cafc135987f621135c61c6f9873f9f4e9cae35491d7a4713e187c-merged.mount: Deactivated successfully.
Nov 29 02:13:15 np0005539550 podman[77744]: 2025-11-29 07:13:15.787034064 +0000 UTC m=+0.179890730 container remove f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:13:15 np0005539550 systemd[1]: libpod-conmon-f179594d2fa99225c0e6e6399e6d17cfc4871a69f6671f8db0f0ed230bf3cab2.scope: Deactivated successfully.
Nov 29 02:13:15 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:15 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Added label _admin to host compute-0
Nov 29 02:13:15 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 29 02:13:15 np0005539550 nervous_brown[77601]: Added label _admin to host compute-0
Nov 29 02:13:15 np0005539550 systemd[1]: libpod-8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008.scope: Deactivated successfully.
Nov 29 02:13:15 np0005539550 podman[77561]: 2025-11-29 07:13:15.863330672 +0000 UTC m=+0.818017842 container died 8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008 (image=quay.io/ceph/ceph:v18, name=nervous_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:13:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-041c6f5d5b654850d47099a199f3be5e8c5bac4aef57027c4c87994df8b7865d-merged.mount: Deactivated successfully.
Nov 29 02:13:15 np0005539550 podman[77561]: 2025-11-29 07:13:15.9041452 +0000 UTC m=+0.858832350 container remove 8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008 (image=quay.io/ceph/ceph:v18, name=nervous_brown, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:15 np0005539550 systemd[1]: libpod-conmon-8bb0d092bfe0ce607e78491d359f59ec00c4b60be86cc9a42eec18b7ecdf7008.scope: Deactivated successfully.
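
The nervous_brown run carried the "orch host label add" call audited above: the mgr dispatches cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin"}] and persists the result through the mon's config-key store (mgr/cephadm/inventory). The _admin label tells cephadm to keep /etc/ceph/ceph.conf and the client.admin keyring distributed to that host, which is why an "Updating compute-0:/etc/ceph/ceph.conf" entry appears a few seconds later. The dispatched JSON corresponds to the plain CLI form:

    ceph orch host label add compute-0 _admin
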
Nov 29 02:13:15 np0005539550 podman[77791]: 2025-11-29 07:13:15.984673458 +0000 UTC m=+0.058897214 container create 1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0 (image=quay.io/ceph/ceph:v18, name=tender_visvesvaraya, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:13:16 np0005539550 systemd[1]: Started libpod-conmon-1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0.scope.
Nov 29 02:13:16 np0005539550 podman[77791]: 2025-11-29 07:13:15.953345523 +0000 UTC m=+0.027569279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45acc1482c999d0faf255014b503ebff2969815c98bbdd7483101fab396408ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45acc1482c999d0faf255014b503ebff2969815c98bbdd7483101fab396408ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45acc1482c999d0faf255014b503ebff2969815c98bbdd7483101fab396408ca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:16 np0005539550 podman[77791]: 2025-11-29 07:13:16.079147803 +0000 UTC m=+0.153371539 container init 1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0 (image=quay.io/ceph/ceph:v18, name=tender_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:16 np0005539550 podman[77791]: 2025-11-29 07:13:16.086402989 +0000 UTC m=+0.160626715 container start 1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0 (image=quay.io/ceph/ceph:v18, name=tender_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:16 np0005539550 podman[77791]: 2025-11-29 07:13:16.090440623 +0000 UTC m=+0.164664369 container attach 1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0 (image=quay.io/ceph/ceph:v18, name=tender_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 29 02:13:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3012805440' entity='client.admin' 
Nov 29 02:13:16 np0005539550 systemd[1]: libpod-1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0.scope: Deactivated successfully.
Nov 29 02:13:16 np0005539550 podman[77791]: 2025-11-29 07:13:16.718034045 +0000 UTC m=+0.792257771 container died 1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0 (image=quay.io/ceph/ceph:v18, name=tender_visvesvaraya, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-45acc1482c999d0faf255014b503ebff2969815c98bbdd7483101fab396408ca-merged.mount: Deactivated successfully.
Nov 29 02:13:16 np0005539550 podman[77791]: 2025-11-29 07:13:16.765802631 +0000 UTC m=+0.840026357 container remove 1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0 (image=quay.io/ceph/ceph:v18, name=tender_visvesvaraya, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:16 np0005539550 systemd[1]: libpod-conmon-1b841b6fdbe700532a691a2d05db0190a1e19eae42049154476870ff1844dcc0.scope: Deactivated successfully.
Nov 29 02:13:16 np0005539550 podman[77846]: 2025-11-29 07:13:16.826874619 +0000 UTC m=+0.042393509 container create de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b (image=quay.io/ceph/ceph:v18, name=jovial_dhawan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:16 np0005539550 systemd[1]: Started libpod-conmon-de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b.scope.
Nov 29 02:13:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e61935b2397e6c2ffd341885732576d6586e4117e364c06a19a110cd989d99f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e61935b2397e6c2ffd341885732576d6586e4117e364c06a19a110cd989d99f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e61935b2397e6c2ffd341885732576d6586e4117e364c06a19a110cd989d99f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:16 np0005539550 podman[77846]: 2025-11-29 07:13:16.807878051 +0000 UTC m=+0.023396961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:16 np0005539550 podman[77846]: 2025-11-29 07:13:16.905511928 +0000 UTC m=+0.121030818 container init de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b (image=quay.io/ceph/ceph:v18, name=jovial_dhawan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:13:16 np0005539550 podman[77846]: 2025-11-29 07:13:16.912743023 +0000 UTC m=+0.128261913 container start de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b (image=quay.io/ceph/ceph:v18, name=jovial_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:13:16 np0005539550 podman[77846]: 2025-11-29 07:13:16.938043893 +0000 UTC m=+0.153562783 container attach de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b (image=quay.io/ceph/ceph:v18, name=jovial_dhawan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:13:17 np0005539550 ceph-mon[74435]: Added label _admin to host compute-0
Nov 29 02:13:17 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3012805440' entity='client.admin' 
Nov 29 02:13:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 29 02:13:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2164559028' entity='client.admin' 
Nov 29 02:13:17 np0005539550 jovial_dhawan[77863]: set mgr/dashboard/cluster/status
Nov 29 02:13:17 np0005539550 systemd[1]: libpod-de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b.scope: Deactivated successfully.
Nov 29 02:13:17 np0005539550 podman[77846]: 2025-11-29 07:13:17.63209971 +0000 UTC m=+0.847618620 container died de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b (image=quay.io/ceph/ceph:v18, name=jovial_dhawan, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:17 np0005539550 ceph-mgr[74726]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 29 02:13:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 02:13:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4e61935b2397e6c2ffd341885732576d6586e4117e364c06a19a110cd989d99f-merged.mount: Deactivated successfully.
Nov 29 02:13:17 np0005539550 podman[77846]: 2025-11-29 07:13:17.682374041 +0000 UTC m=+0.897892941 container remove de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b (image=quay.io/ceph/ceph:v18, name=jovial_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:13:17 np0005539550 systemd[1]: libpod-conmon-de56cca4d46e0554dd24e9289de51db4413aef422826c436c1e175744e91ff8b.scope: Deactivated successfully.
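
The jovial_dhawan run records installer progress under the config-key mgr/dashboard/cluster/status, and in between the mon raises its first health check: with no OSDs registered yet, OSD count 0 is below osd_pool_default_size 1, hence TOO_FEW_OSDS. A deployment script can wait for the check to clear with a loop like the following sketch, assuming the ceph CLI (or the podman wrapper the ansible tasks below use) plus an admin keyring are available, and that the checks/summary JSON layout matches what "ceph health detail --format json" returns on Reef:

    #!/usr/bin/env python3
    """Poll cluster health until the TOO_FEW_OSDS check clears."""
    import json
    import subprocess
    import time

    while True:
        out = subprocess.run(
            ["ceph", "health", "detail", "--format", "json"],
            capture_output=True, text=True, check=True).stdout
        checks = json.loads(out).get("checks", {})
        if "TOO_FEW_OSDS" not in checks:
            print("TOO_FEW_OSDS cleared")
            break
        print("waiting:", checks["TOO_FEW_OSDS"]["summary"]["message"])
        time.sleep(10)
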
Nov 29 02:13:17 np0005539550 podman[77907]: 2025-11-29 07:13:17.887003454 +0000 UTC m=+0.040195713 container create 6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:17 np0005539550 systemd[1]: Started libpod-conmon-6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09.scope.
Nov 29 02:13:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d024f1f65a0b79073d6386338188e8c83903f198f67bfba672ebdb527a490ff4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d024f1f65a0b79073d6386338188e8c83903f198f67bfba672ebdb527a490ff4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d024f1f65a0b79073d6386338188e8c83903f198f67bfba672ebdb527a490ff4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d024f1f65a0b79073d6386338188e8c83903f198f67bfba672ebdb527a490ff4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:17 np0005539550 podman[77907]: 2025-11-29 07:13:17.958539761 +0000 UTC m=+0.111731910 container init 6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:13:17 np0005539550 podman[77907]: 2025-11-29 07:13:17.870774948 +0000 UTC m=+0.023967107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:17 np0005539550 podman[77907]: 2025-11-29 07:13:17.967712456 +0000 UTC m=+0.120904605 container start 6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:13:17 np0005539550 podman[77907]: 2025-11-29 07:13:17.971084273 +0000 UTC m=+0.124276422 container attach 6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:13:18 np0005539550 python3[77954]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
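
The _raw_params above, reflowed for readability, show how the playbook drives the cluster without a host-installed ceph CLI; this particular call (executed by the great_poitras container just below) disables cephadm's tag-to-digest pinning so deployments keep using the v18 tag as written rather than a resolved repo digest:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set mgr mgr/cephadm/use_repo_digest false
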
Nov 29 02:13:18 np0005539550 podman[77955]: 2025-11-29 07:13:18.285634358 +0000 UTC m=+0.051700838 container create e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db (image=quay.io/ceph/ceph:v18, name=great_poitras, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:13:18 np0005539550 systemd[1]: Started libpod-conmon-e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db.scope.
Nov 29 02:13:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd58fbb18895d231542b64c13ca3ef66039108d1637f15835b83609034b1ed94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd58fbb18895d231542b64c13ca3ef66039108d1637f15835b83609034b1ed94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:18 np0005539550 podman[77955]: 2025-11-29 07:13:18.266638541 +0000 UTC m=+0.032705031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:18 np0005539550 podman[77955]: 2025-11-29 07:13:18.369655645 +0000 UTC m=+0.135722145 container init e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db (image=quay.io/ceph/ceph:v18, name=great_poitras, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:18 np0005539550 podman[77955]: 2025-11-29 07:13:18.376346117 +0000 UTC m=+0.142412597 container start e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db (image=quay.io/ceph/ceph:v18, name=great_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:13:18 np0005539550 podman[77955]: 2025-11-29 07:13:18.380162775 +0000 UTC m=+0.146229265 container attach e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db (image=quay.io/ceph/ceph:v18, name=great_poitras, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:13:18 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/2164559028' entity='client.admin' 
Nov 29 02:13:18 np0005539550 ceph-mon[74435]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 29 02:13:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 29 02:13:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2085435861' entity='client.admin' 
Nov 29 02:13:19 np0005539550 systemd[1]: libpod-e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db.scope: Deactivated successfully.
Nov 29 02:13:19 np0005539550 conmon[77971]: conmon e6dbb3fbf53e89581a67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db.scope/container/memory.events
Nov 29 02:13:19 np0005539550 podman[77955]: 2025-11-29 07:13:19.022559417 +0000 UTC m=+0.788625897 container died e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db (image=quay.io/ceph/ceph:v18, name=great_poitras, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:13:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dd58fbb18895d231542b64c13ca3ef66039108d1637f15835b83609034b1ed94-merged.mount: Deactivated successfully.
Nov 29 02:13:19 np0005539550 podman[77955]: 2025-11-29 07:13:19.075568108 +0000 UTC m=+0.841634588 container remove e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db (image=quay.io/ceph/ceph:v18, name=great_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:19 np0005539550 systemd[1]: libpod-conmon-e6dbb3fbf53e89581a6784b593f494ba7e3e80d65948dca3e636f88eca3a69db.scope: Deactivated successfully.
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]: [
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:    {
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "available": false,
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "ceph_device": false,
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "lsm_data": {},
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "lvs": [],
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "path": "/dev/sr0",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "rejected_reasons": [
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "Has a FileSystem",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "Insufficient space (<5GB)"
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        ],
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        "sys_api": {
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "actuators": null,
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "device_nodes": "sr0",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "devname": "sr0",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "human_readable_size": "482.00 KB",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "id_bus": "ata",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "model": "QEMU DVD-ROM",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "nr_requests": "2",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "parent": "/dev/sr0",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "partitions": {},
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "path": "/dev/sr0",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "removable": "1",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "rev": "2.5+",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "ro": "0",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "rotational": "1",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "sas_address": "",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "sas_device_handle": "",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "scheduler_mode": "mq-deadline",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "sectors": 0,
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "sectorsize": "2048",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "size": 493568.0,
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "support_discard": "2048",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "type": "disk",
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:            "vendor": "QEMU"
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:        }
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]:    }
Nov 29 02:13:19 np0005539550 zen_ganguly[77924]: ]
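
This JSON block is the device inventory that cephadm's refresh (the zen_ganguly run) gathers via ceph-volume and then stores under mgr/cephadm/host.compute-0.devices.0 a few lines below. The only device on this Nova guest is the QEMU DVD-ROM /dev/sr0, rejected as an OSD candidate ("Has a FileSystem", "Insufficient space (<5GB)"), which is consistent with the TOO_FEW_OSDS warning above: the orchestrator has no usable disk to turn into an OSD yet. A minimal filter over such a dump, assuming only the fields visible in the output above:

    #!/usr/bin/env python3
    """List usable vs. rejected devices from a ceph-volume inventory
    JSON dump (path given as argv[1])."""
    import json
    import sys

    with open(sys.argv[1]) as f:
        devices = json.load(f)

    for dev in devices:
        status = "usable" if dev["available"] else "rejected"
        reasons = ", ".join(dev.get("rejected_reasons", [])) or "-"
        print(f"{dev['path']}: {status} ({reasons})")
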
Nov 29 02:13:19 np0005539550 systemd[1]: libpod-6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09.scope: Deactivated successfully.
Nov 29 02:13:19 np0005539550 systemd[1]: libpod-6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09.scope: Consumed 1.345s CPU time.
Nov 29 02:13:19 np0005539550 podman[79022]: 2025-11-29 07:13:19.374477932 +0000 UTC m=+0.029722414 container died 6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d024f1f65a0b79073d6386338188e8c83903f198f67bfba672ebdb527a490ff4-merged.mount: Deactivated successfully.
Nov 29 02:13:19 np0005539550 podman[79022]: 2025-11-29 07:13:19.42815084 +0000 UTC m=+0.083395302 container remove 6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_ganguly, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:19 np0005539550 systemd[1]: libpod-conmon-6f7462d42ef1542750dff7836c1685ddbc18f73103368aac3bf2310ff2f18c09.scope: Deactivated successfully.
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:19 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:13:19 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:13:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:19 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/2085435861' entity='client.admin' 
Nov 29 02:13:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:13:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:20 np0005539550 ceph-mon[74435]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:13:20 np0005539550 ansible-async_wrapper.py[79337]: Invoked with j210436078283 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400399.4500804-37232-152248551195473/AnsiballZ_command.py _
Nov 29 02:13:20 np0005539550 ansible-async_wrapper.py[79407]: Starting module and watcher
Nov 29 02:13:20 np0005539550 ansible-async_wrapper.py[79407]: Start watching 79411 (30)
Nov 29 02:13:20 np0005539550 ansible-async_wrapper.py[79411]: Start module (79411)
Nov 29 02:13:20 np0005539550 ansible-async_wrapper.py[79337]: Return async_wrapper task started.
Nov 29 02:13:20 np0005539550 python3[79414]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:13:20 np0005539550 podman[79465]: 2025-11-29 07:13:20.38522536 +0000 UTC m=+0.077873800 container create e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3 (image=quay.io/ceph/ceph:v18, name=youthful_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:13:20 np0005539550 podman[79465]: 2025-11-29 07:13:20.336069038 +0000 UTC m=+0.028717498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:20 np0005539550 systemd[1]: Started libpod-conmon-e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3.scope.
Nov 29 02:13:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61836b0625eb0ebcdfa1f433590ea4d26711c4c683b9caf1fcb28d06c86ef460/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61836b0625eb0ebcdfa1f433590ea4d26711c4c683b9caf1fcb28d06c86ef460/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:20 np0005539550 podman[79465]: 2025-11-29 07:13:20.49076065 +0000 UTC m=+0.183409110 container init e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3 (image=quay.io/ceph/ceph:v18, name=youthful_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:13:20 np0005539550 podman[79465]: 2025-11-29 07:13:20.50090141 +0000 UTC m=+0.193549850 container start e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3 (image=quay.io/ceph/ceph:v18, name=youthful_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:20 np0005539550 podman[79465]: 2025-11-29 07:13:20.505501098 +0000 UTC m=+0.198149538 container attach e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3 (image=quay.io/ceph/ceph:v18, name=youthful_volhard, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:13:20 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:13:20 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:13:21 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14170 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:13:21 np0005539550 youthful_volhard[79528]: 
Nov 29 02:13:21 np0005539550 youthful_volhard[79528]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
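The two youthful_volhard lines above are the container's stdout: the ceph CLI prints an empty line followed by the "orch status --format json" document, and the playbook gates on "available" being true before proceeding (the same probe repeats at 02:13:23 as compassionate_dirac). A minimal sketch of that probe, assuming only the command copied verbatim from the ansible-ansible.legacy.command line at 02:13:20:

    import json
    import subprocess

    # Same readiness probe the playbook issued (command reproduced from
    # the ansible-ansible.legacy.command entry at 02:13:20).
    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
        "--fsid", "b66774a7-56d9-5535-bd8c-681234404870",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "orch", "status", "--format", "json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    status = json.loads(out)  # e.g. {"available": true, "backend": "cephadm", ...}
    assert status["available"] and not status["paused"]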
Nov 29 02:13:21 np0005539550 systemd[1]: libpod-e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3.scope: Deactivated successfully.
Nov 29 02:13:21 np0005539550 podman[79465]: 2025-11-29 07:13:21.179746537 +0000 UTC m=+0.872394977 container died e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3 (image=quay.io/ceph/ceph:v18, name=youthful_volhard, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-61836b0625eb0ebcdfa1f433590ea4d26711c4c683b9caf1fcb28d06c86ef460-merged.mount: Deactivated successfully.
Nov 29 02:13:21 np0005539550 podman[79465]: 2025-11-29 07:13:21.332406106 +0000 UTC m=+1.025054546 container remove e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3 (image=quay.io/ceph/ceph:v18, name=youthful_volhard, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:21 np0005539550 systemd[1]: libpod-conmon-e23a9a943b595fe3c58d0a4f3a4bed42cb058ac7d370f601fa7efa11b14c1dd3.scope: Deactivated successfully.
Nov 29 02:13:21 np0005539550 ansible-async_wrapper.py[79411]: Module complete (79411)
Nov 29 02:13:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:21 np0005539550 python3[79955]: ansible-ansible.legacy.async_status Invoked with jid=j210436078283.79337 mode=status _async_dir=/root/.ansible_async
Nov 29 02:13:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
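The _set_new_cache_sizes line reports the monitor's cache autotuning split in raw bytes. Converted for readability, using only the values in the log entry itself:

    # Values from the _set_new_cache_sizes entry above.
    cache_size = 1020054731              # ~973 MiB overall mon cache target
    inc_alloc = full_alloc = 348127232   # exactly 332 MiB each
    kv_alloc = 322961408                 # exactly 308 MiB for the RocksDB cache
    for name, val in [("cache_size", cache_size),
                      ("inc/full_alloc", inc_alloc),
                      ("kv_alloc", kv_alloc)]:
        print(f"{name}: {val / 2**20:.0f} MiB")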
Nov 29 02:13:21 np0005539550 python3[80116]: ansible-ansible.legacy.async_status Invoked with jid=j210436078283.79337 mode=cleanup _async_dir=/root/.ansible_async
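The async lifecycle around this command is now complete: async_wrapper (02:13:20) forked the module, the module finished (79411), and async_status polled with mode=status and then removed the results file with mode=cleanup. A sketch of what those two async_status calls amount to, assuming only the jid and _async_dir shown above plus ansible's standard results-file layout:

    import json
    import os

    # mode=status: read the JSON results file async_wrapper wrote.
    async_dir = "/root/.ansible_async"
    jid = "j210436078283.79337"
    path = os.path.join(async_dir, jid)
    with open(path) as f:
        result = json.load(f)
    print(result.get("finished"), result.get("rc"))

    # mode=cleanup: the same module then simply deletes the file.
    os.remove(path)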
Nov 29 02:13:22 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:13:22 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:13:22 np0005539550 ceph-mon[74435]: Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:13:22 np0005539550 python3[80324]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 02:13:22 np0005539550 python3[80537]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:13:23 np0005539550 podman[80591]: 2025-11-29 07:13:23.122253496 +0000 UTC m=+0.114355027 container create 30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135 (image=quay.io/ceph/ceph:v18, name=compassionate_dirac, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:23 np0005539550 ceph-mon[74435]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:13:23 np0005539550 podman[80591]: 2025-11-29 07:13:23.034868363 +0000 UTC m=+0.026969914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:23 np0005539550 systemd[1]: Started libpod-conmon-30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135.scope.
Nov 29 02:13:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a33f5cd6194082bd273db578fba39ec0efcaa260bbdf61bd48ef6221cab9366/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a33f5cd6194082bd273db578fba39ec0efcaa260bbdf61bd48ef6221cab9366/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a33f5cd6194082bd273db578fba39ec0efcaa260bbdf61bd48ef6221cab9366/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:23 np0005539550 podman[80591]: 2025-11-29 07:13:23.220505119 +0000 UTC m=+0.212606670 container init 30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135 (image=quay.io/ceph/ceph:v18, name=compassionate_dirac, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:23 np0005539550 podman[80591]: 2025-11-29 07:13:23.229748496 +0000 UTC m=+0.221850027 container start 30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135 (image=quay.io/ceph/ceph:v18, name=compassionate_dirac, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:23 np0005539550 podman[80591]: 2025-11-29 07:13:23.235553075 +0000 UTC m=+0.227654636 container attach 30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135 (image=quay.io/ceph/ceph:v18, name=compassionate_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:13:23 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:13:23 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:13:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:23 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:13:23 np0005539550 compassionate_dirac[80658]: 
Nov 29 02:13:23 np0005539550 compassionate_dirac[80658]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 02:13:23 np0005539550 systemd[1]: libpod-30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135.scope: Deactivated successfully.
Nov 29 02:13:23 np0005539550 podman[80591]: 2025-11-29 07:13:23.999150639 +0000 UTC m=+0.991252170 container died 30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135 (image=quay.io/ceph/ceph:v18, name=compassionate_dirac, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:13:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2a33f5cd6194082bd273db578fba39ec0efcaa260bbdf61bd48ef6221cab9366-merged.mount: Deactivated successfully.
Nov 29 02:13:24 np0005539550 podman[80591]: 2025-11-29 07:13:24.060944955 +0000 UTC m=+1.053046486 container remove 30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135 (image=quay.io/ceph/ceph:v18, name=compassionate_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:13:24 np0005539550 systemd[1]: libpod-conmon-30edc4824ea9a32114fe5eab6b07dc1f93ad7af685d3606c9c4a55f474d26135.scope: Deactivated successfully.
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:24 np0005539550 python3[81118]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:24 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 352b4a76-f73b-4d2a-9923-3785d49d3d9e (Updating crash deployment (+1 -> 1))
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:24 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 29 02:13:24 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
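Deploying crash.compute-0 begins with the auth get-or-create dispatched just above, which mints a key limited to the crash profile. The same mon_command can be issued directly with the python-rados binding; the JSON payload below is copied verbatim from the audit line, and the conffile/keyring paths are the ones this log shows on the host:

    import json

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    cmd = {"prefix": "auth get-or-create",
           "entity": "client.crash.compute-0",
           "caps": ["mon", "profile crash", "mgr", "profile crash"]}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outbuf.decode())  # 0 and the client.crash.compute-0 keyring
    cluster.shutdown()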
Nov 29 02:13:24 np0005539550 podman[81171]: 2025-11-29 07:13:24.617247406 +0000 UTC m=+0.044490913 container create 8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db (image=quay.io/ceph/ceph:v18, name=cranky_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:13:24 np0005539550 systemd[1]: Started libpod-conmon-8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db.scope.
Nov 29 02:13:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883c53a2fbb2fdd5406a623ce8970fb08a02c6c59d89b18d3f754622e4216766/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883c53a2fbb2fdd5406a623ce8970fb08a02c6c59d89b18d3f754622e4216766/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/883c53a2fbb2fdd5406a623ce8970fb08a02c6c59d89b18d3f754622e4216766/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:24 np0005539550 podman[81171]: 2025-11-29 07:13:24.693561615 +0000 UTC m=+0.120805152 container init 8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db (image=quay.io/ceph/ceph:v18, name=cranky_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:13:24 np0005539550 podman[81171]: 2025-11-29 07:13:24.600037474 +0000 UTC m=+0.027281001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:24 np0005539550 podman[81171]: 2025-11-29 07:13:24.701995742 +0000 UTC m=+0.129239249 container start 8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db (image=quay.io/ceph/ceph:v18, name=cranky_jackson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:24 np0005539550 podman[81171]: 2025-11-29 07:13:24.705627285 +0000 UTC m=+0.132870792 container attach 8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db (image=quay.io/ceph/ceph:v18, name=cranky_jackson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:13:25 np0005539550 ansible-async_wrapper.py[79407]: Done in kid B.
Nov 29 02:13:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:13:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:13:25 np0005539550 podman[81349]: 2025-11-29 07:13:25.20794041 +0000 UTC m=+0.051044711 container create 23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:13:25 np0005539550 systemd[1]: Started libpod-conmon-23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980.scope.
Nov 29 02:13:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:25 np0005539550 podman[81349]: 2025-11-29 07:13:25.183085942 +0000 UTC m=+0.026190263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:25 np0005539550 podman[81349]: 2025-11-29 07:13:25.280140874 +0000 UTC m=+0.123245205 container init 23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:25 np0005539550 podman[81349]: 2025-11-29 07:13:25.287265557 +0000 UTC m=+0.130369868 container start 23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhaskara, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:13:25 np0005539550 wonderful_bhaskara[81365]: 167 167
Nov 29 02:13:25 np0005539550 systemd[1]: libpod-23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539550 conmon[81365]: conmon 23213751277c54972c3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980.scope/container/memory.events
Nov 29 02:13:25 np0005539550 podman[81349]: 2025-11-29 07:13:25.300084606 +0000 UTC m=+0.143188937 container attach 23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhaskara, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:25 np0005539550 podman[81349]: 2025-11-29 07:13:25.300574439 +0000 UTC m=+0.143678740 container died 23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:13:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 29 02:13:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3341136265' entity='client.admin' 
Nov 29 02:13:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-76236e36b50c5a6c1a77e20bce9afa99634e666e9e6d216aa3af2e51dce6b82a-merged.mount: Deactivated successfully.
Nov 29 02:13:25 np0005539550 podman[81349]: 2025-11-29 07:13:25.3769786 +0000 UTC m=+0.220082911 container remove 23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhaskara, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
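wonderful_bhaskara lived for about 0.1 s and printed only "167 167", the uid/gid of the ceph user in RHEL/CentOS packaging. That is consistent with cephadm probing the image for the ownership it should apply to daemon directories before deploying crash.compute-0. A sketch of an equivalent probe; the stat target path is an assumption, since the log does not show the container's argv:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Ask the image which uid/gid owns its ceph state directory; the
    # deployment uses the answer (167 167 here) when creating host paths.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    uid, gid = map(int, out.split())
    print(uid, gid)  # expected: 167 167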
Nov 29 02:13:25 np0005539550 systemd[1]: libpod-8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539550 podman[81171]: 2025-11-29 07:13:25.379073274 +0000 UTC m=+0.806316781 container died 8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db (image=quay.io/ceph/ceph:v18, name=cranky_jackson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:25 np0005539550 systemd[1]: libpod-conmon-23213751277c54972c3cfa21537012a410f3d3eaa5e3418d2fe1b7cc4b019980.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-883c53a2fbb2fdd5406a623ce8970fb08a02c6c59d89b18d3f754622e4216766-merged.mount: Deactivated successfully.
Nov 29 02:13:25 np0005539550 podman[81171]: 2025-11-29 07:13:25.427874347 +0000 UTC m=+0.855117854 container remove 8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db (image=quay.io/ceph/ceph:v18, name=cranky_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:25 np0005539550 systemd[1]: Reloading.
Nov 29 02:13:25 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:13:25 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:13:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:25 np0005539550 systemd[1]: libpod-conmon-8a2f5a107d6e0b5a545309d887d8e167ac7977b34947dd700f6c700c2b9158db.scope: Deactivated successfully.
Nov 29 02:13:25 np0005539550 systemd[1]: Reloading.
Nov 29 02:13:25 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:13:25 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:13:25 np0005539550 python3[81460]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
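This is the second of two back-to-back logging toggles: the 02:13:24 run set log_to_file and this one sets mon_cluster_log_to_file, both as "config set global ... true" inside the same podman wrapper. Each maps to a mon_command with who/name fields, matching the "config rm" audit entry at 02:13:19; the "value" field below is inferred, since the audit lines here are truncated before the payload. A compact sketch of both, reusing the connection pattern from the earlier note:

    import json

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
    cluster.connect()
    for name in ("log_to_file", "mon_cluster_log_to_file"):
        cmd = {"prefix": "config set", "who": "global",
               "name": name, "value": "true"}
        ret, _, outs = cluster.mon_command(json.dumps(cmd), b"")
        assert ret == 0, outs
    cluster.shutdown()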
Nov 29 02:13:25 np0005539550 podman[81499]: 2025-11-29 07:13:25.916996894 +0000 UTC m=+0.043614531 container create 0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f (image=quay.io/ceph/ceph:v18, name=crazy_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:13:25 np0005539550 podman[81499]: 2025-11-29 07:13:25.897477403 +0000 UTC m=+0.024095070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:26 np0005539550 systemd[1]: Started libpod-conmon-0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f.scope.
Nov 29 02:13:26 np0005539550 systemd[1]: Starting Ceph crash.compute-0 for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:13:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567c3ca80300bfc33b7775d4746452afa397da32f86e85c064121ad59e0fd659/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567c3ca80300bfc33b7775d4746452afa397da32f86e85c064121ad59e0fd659/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/567c3ca80300bfc33b7775d4746452afa397da32f86e85c064121ad59e0fd659/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:26 np0005539550 podman[81499]: 2025-11-29 07:13:26.069021137 +0000 UTC m=+0.195638774 container init 0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f (image=quay.io/ceph/ceph:v18, name=crazy_mayer, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:13:26 np0005539550 podman[81499]: 2025-11-29 07:13:26.077263958 +0000 UTC m=+0.203881585 container start 0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f (image=quay.io/ceph/ceph:v18, name=crazy_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:26 np0005539550 podman[81499]: 2025-11-29 07:13:26.081502477 +0000 UTC m=+0.208120144 container attach 0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f (image=quay.io/ceph/ceph:v18, name=crazy_mayer, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:13:26 np0005539550 podman[81563]: 2025-11-29 07:13:26.261024406 +0000 UTC m=+0.045851968 container create ec39c57f87b0b690c2eec8c6c79d43686058a6aa67839d660c4b0add2b6a1ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:13:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8aec52b7a9e9f0382fd21d07c14ea75990017eeef9c28c142f1df2a0ec4714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8aec52b7a9e9f0382fd21d07c14ea75990017eeef9c28c142f1df2a0ec4714/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8aec52b7a9e9f0382fd21d07c14ea75990017eeef9c28c142f1df2a0ec4714/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f8aec52b7a9e9f0382fd21d07c14ea75990017eeef9c28c142f1df2a0ec4714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:26 np0005539550 podman[81563]: 2025-11-29 07:13:26.321252172 +0000 UTC m=+0.106079744 container init ec39c57f87b0b690c2eec8c6c79d43686058a6aa67839d660c4b0add2b6a1ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:13:26 np0005539550 podman[81563]: 2025-11-29 07:13:26.326693582 +0000 UTC m=+0.111521144 container start ec39c57f87b0b690c2eec8c6c79d43686058a6aa67839d660c4b0add2b6a1ccf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:13:26 np0005539550 bash[81563]: ec39c57f87b0b690c2eec8c6c79d43686058a6aa67839d660c4b0add2b6a1ccf
Nov 29 02:13:26 np0005539550 podman[81563]: 2025-11-29 07:13:26.239580395 +0000 UTC m=+0.024407977 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:26 np0005539550 systemd[1]: Started Ceph crash.compute-0 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: Deploying daemon crash.compute-0 on compute-0
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3341136265' entity='client.admin' 
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:26 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 352b4a76-f73b-4d2a-9923-3785d49d3d9e (Updating crash deployment (+1 -> 1))
Nov 29 02:13:26 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 352b4a76-f73b-4d2a-9923-3785d49d3d9e (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:26 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 67215699-eeaa-45aa-98c9-ab36e3b0e938 does not exist
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:26 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e5232742-0fdd-4818-bcaa-77483ac7ff0d does not exist
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 29 02:13:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3723887170' entity='client.admin' 
Nov 29 02:13:26 np0005539550 systemd[1]: libpod-0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f.scope: Deactivated successfully.
Nov 29 02:13:26 np0005539550 podman[81499]: 2025-11-29 07:13:26.777985678 +0000 UTC m=+0.904603345 container died 0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f (image=quay.io/ceph/ceph:v18, name=crazy_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-567c3ca80300bfc33b7775d4746452afa397da32f86e85c064121ad59e0fd659-merged.mount: Deactivated successfully.
Nov 29 02:13:26 np0005539550 podman[81499]: 2025-11-29 07:13:26.841397616 +0000 UTC m=+0.968015253 container remove 0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f (image=quay.io/ceph/ceph:v18, name=crazy_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: 2025-11-29T07:13:26.841+0000 7f3eb325b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: 2025-11-29T07:13:26.841+0000 7f3eb325b640 -1 AuthRegistry(0x7f3eac067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: 2025-11-29T07:13:26.842+0000 7f3eb325b640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: 2025-11-29T07:13:26.842+0000 7f3eb325b640 -1 AuthRegistry(0x7f3eb325a000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: 2025-11-29T07:13:26.843+0000 7f3eb0fd0640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: 2025-11-29T07:13:26.843+0000 7f3eb325b640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 29 02:13:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-crash-compute-0[81578]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
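The ceph-crash agent above comes up without any of the keyrings it probes (/etc/ceph/ceph.client.admin.keyring, /etc/ceph/ceph.keyring, /etc/ceph/keyring, /etc/ceph/keyring.bin), so it disables cephx; the monitor only allows auth method 2 (cephx) while the client can now offer only 1 (none), hence handle_auth_bad_method and the final EACCES ("RADOS permission denied"). The daemon survives and keeps scanning /var/lib/ceph/crash on its 600 s delay, it just cannot post crash reports. A minimal remediation sketch, assuming a cephadm-managed layout and that client.crash.compute-0 is the intended identity (neither assumption is shown in the log):

    # check whether the crash daemon's own keyring was ever deployed
    # (path assumed from cephadm conventions, not taken from the log)
    ls -l /var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/crash.compute-0/keyring

    # hypothetical fix: (re)create the crash key and redeploy the daemon
    ceph auth get-or-create client.crash.compute-0 mon 'profile crash' mgr 'profile crash'
    ceph orch daemon redeploy crash.compute-0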
Nov 29 02:13:26 np0005539550 systemd[1]: libpod-conmon-0836f30ae485ed61fec34e1aa40a473131069f61e7d5c8fb44e6396ec455c93f.scope: Deactivated successfully.
Nov 29 02:13:27 np0005539550 python3[81839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
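journald escapes the newline embedded in the ansible _raw_params as #012 (octal for \n). Reflowed into plain shell, the task above is simply a throwaway ceph CLI container (note _uses_shell=False: ansible execs podman directly, no shell involved):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd set-require-min-compat-client mimic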
Nov 29 02:13:27 np0005539550 podman[81873]: 2025-11-29 07:13:27.237889394 +0000 UTC m=+0.022184780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3723887170' entity='client.admin' 
Nov 29 02:13:27 np0005539550 podman[81873]: 2025-11-29 07:13:27.388710196 +0000 UTC m=+0.173005542 container create fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62 (image=quay.io/ceph/ceph:v18, name=hardcore_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:13:27 np0005539550 podman[81871]: 2025-11-29 07:13:27.388719606 +0000 UTC m=+0.181390077 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:27 np0005539550 systemd[1]: Started libpod-conmon-fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62.scope.
Nov 29 02:13:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22c69b7624594ff133f7d7c33ba080e4dc74af2245237e70ede2ae6a894fd8b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22c69b7624594ff133f7d7c33ba080e4dc74af2245237e70ede2ae6a894fd8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22c69b7624594ff133f7d7c33ba080e4dc74af2245237e70ede2ae6a894fd8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:27 np0005539550 podman[81873]: 2025-11-29 07:13:27.490602732 +0000 UTC m=+0.274898078 container init fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62 (image=quay.io/ceph/ceph:v18, name=hardcore_perlman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:27 np0005539550 podman[81871]: 2025-11-29 07:13:27.495409535 +0000 UTC m=+0.288080006 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:13:27 np0005539550 podman[81873]: 2025-11-29 07:13:27.498978307 +0000 UTC m=+0.283273653 container start fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62 (image=quay.io/ceph/ceph:v18, name=hardcore_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:27 np0005539550 podman[81873]: 2025-11-29 07:13:27.506922531 +0000 UTC m=+0.291217877 container attach fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62 (image=quay.io/ceph/ceph:v18, name=hardcore_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 1 completed events
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3d97d199-9a20-4b12-b85a-1633878cd882 does not exist
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 23b7f54e-ec52-451d-8236-a6e003b27fe2 does not exist
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 32fe58a7-6a9d-423c-aea2-6003fbd0776f does not exist
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:13:27 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
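The cephadm serve loop has adopted the local mon but has no record of when it was last configured, so it regenerates a minimal ceph.conf (the config generate-minimal-conf dispatches above) plus the matching keyring (auth get) and pushes both to mon.compute-0; the same cycle repeats for the mgr just below. A quick way to observe the outcome, sketched with stock orchestrator commands (run from any admin node; output columns vary by release):

    ceph orch ps --daemon-type mon    # REFRESHED/STATUS columns show the reconfigure landing
    ceph orch ps --daemon-type mgr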
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/449837581' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 02:13:28 np0005539550 podman[82144]: 2025-11-29 07:13:28.391413107 +0000 UTC m=+0.041397303 container create 472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:13:28 np0005539550 systemd[1]: Started libpod-conmon-472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d.scope.
Nov 29 02:13:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:28 np0005539550 podman[82144]: 2025-11-29 07:13:28.464907354 +0000 UTC m=+0.114891570 container init 472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:28 np0005539550 podman[82144]: 2025-11-29 07:13:28.374020931 +0000 UTC m=+0.024005147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:28 np0005539550 podman[82144]: 2025-11-29 07:13:28.4733066 +0000 UTC m=+0.123290816 container start 472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:13:28 np0005539550 vibrant_kapitsa[82161]: 167 167
Nov 29 02:13:28 np0005539550 systemd[1]: libpod-472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d.scope: Deactivated successfully.
Nov 29 02:13:28 np0005539550 podman[82144]: 2025-11-29 07:13:28.47760055 +0000 UTC m=+0.127584776 container attach 472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:13:28 np0005539550 conmon[82161]: conmon 472fe7d3bc75408ad084 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d.scope/container/memory.events
Nov 29 02:13:28 np0005539550 podman[82144]: 2025-11-29 07:13:28.478516793 +0000 UTC m=+0.128500989 container died 472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b1e4fd85cb1f3a961c893bd1f2cf4630c800b9ec79b01da93eba276812c0ca2f-merged.mount: Deactivated successfully.
Nov 29 02:13:28 np0005539550 podman[82144]: 2025-11-29 07:13:28.520594104 +0000 UTC m=+0.170578320 container remove 472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:13:28 np0005539550 systemd[1]: libpod-conmon-472fe7d3bc75408ad084faaba2f73e0d83e1eacd41080665ab6a33c68b1a827d.scope: Deactivated successfully.
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.pdhsqi (unknown last config time)...
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:28 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.pdhsqi (unknown last config time)...
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:28 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:13:28 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/449837581' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/449837581' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 29 02:13:28 np0005539550 hardcore_perlman[81905]: set require_min_compat_client to mimic
Nov 29 02:13:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
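With osdmap e3 committed, the require_min_compat_client=mimic change dispatched at 02:13:28 is final, and the helper container (hardcore_perlman) prints its confirmation. A one-line check afterwards, assuming admin access (a sketch, not part of the log):

    ceph osd dump | grep require_min_compat_client
    # expected: require_min_compat_client mimic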
Nov 29 02:13:28 np0005539550 systemd[1]: libpod-fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62.scope: Deactivated successfully.
Nov 29 02:13:28 np0005539550 podman[81873]: 2025-11-29 07:13:28.937891087 +0000 UTC m=+1.722186433 container died fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62 (image=quay.io/ceph/ceph:v18, name=hardcore_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:13:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c22c69b7624594ff133f7d7c33ba080e4dc74af2245237e70ede2ae6a894fd8b-merged.mount: Deactivated successfully.
Nov 29 02:13:28 np0005539550 podman[81873]: 2025-11-29 07:13:28.989524852 +0000 UTC m=+1.773820198 container remove fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62 (image=quay.io/ceph/ceph:v18, name=hardcore_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 02:13:28 np0005539550 systemd[1]: libpod-conmon-fa9d38850c7a55ada999678911bb97a7b6b50cf84f45bba3db5833ba1c7b4a62.scope: Deactivated successfully.
Nov 29 02:13:29 np0005539550 podman[82311]: 2025-11-29 07:13:29.117868917 +0000 UTC m=+0.045465488 container create 1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:29 np0005539550 systemd[1]: Started libpod-conmon-1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7.scope.
Nov 29 02:13:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:29 np0005539550 podman[82311]: 2025-11-29 07:13:29.098990373 +0000 UTC m=+0.026586964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:13:29 np0005539550 podman[82311]: 2025-11-29 07:13:29.197685526 +0000 UTC m=+0.125282107 container init 1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ellis, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:29 np0005539550 podman[82311]: 2025-11-29 07:13:29.204183473 +0000 UTC m=+0.131780044 container start 1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ellis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:29 np0005539550 podman[82311]: 2025-11-29 07:13:29.208476343 +0000 UTC m=+0.136072944 container attach 1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:29 np0005539550 youthful_ellis[82327]: 167 167
Nov 29 02:13:29 np0005539550 systemd[1]: libpod-1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7.scope: Deactivated successfully.
Nov 29 02:13:29 np0005539550 podman[82311]: 2025-11-29 07:13:29.209756996 +0000 UTC m=+0.137353567 container died 1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ellis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:13:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-200066700fd0aff5f848545b8c05ad2d4da2cdb0afa54c40938875c57266367d-merged.mount: Deactivated successfully.
Nov 29 02:13:29 np0005539550 podman[82311]: 2025-11-29 07:13:29.350547841 +0000 UTC m=+0.278144412 container remove 1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:29 np0005539550 systemd[1]: libpod-conmon-1983253e1648e599aedf6c0057900030170a3f0bf3619047b61abb4dd754cbe7.scope: Deactivated successfully.
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f1d4c71-ea43-49fc-80ed-483ecdf41edc does not exist
Nov 29 02:13:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7a06da86-d12a-4ee4-bea0-11b65729145d does not exist
Nov 29 02:13:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ac1dd35b-bb9f-437f-9fae-026c1d235704 does not exist
Nov 29 02:13:29 np0005539550 python3[82394]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
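Same pattern as the earlier task: a throwaway ceph container, this time feeding the mounted spec to the orchestrator. Reflowed:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        orch apply --in-file /home/ceph_spec.yaml

The "Added host ..." and "Scheduled ... update" lines from xenodochial_northcutt further down are this command's stdout.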
Nov 29 02:13:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:29 np0005539550 podman[82422]: 2025-11-29 07:13:29.683258212 +0000 UTC m=+0.046604237 container create a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212 (image=quay.io/ceph/ceph:v18, name=xenodochial_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:29 np0005539550 systemd[1]: Started libpod-conmon-a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212.scope.
Nov 29 02:13:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e56fca4f90eb4c66ad6475d85b7ad368dacb064d70de76d6ada3152d31786c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e56fca4f90eb4c66ad6475d85b7ad368dacb064d70de76d6ada3152d31786c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81e56fca4f90eb4c66ad6475d85b7ad368dacb064d70de76d6ada3152d31786c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:29 np0005539550 podman[82422]: 2025-11-29 07:13:29.663491475 +0000 UTC m=+0.026837510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:29 np0005539550 podman[82422]: 2025-11-29 07:13:29.770916763 +0000 UTC m=+0.134262808 container init a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212 (image=quay.io/ceph/ceph:v18, name=xenodochial_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:13:29 np0005539550 podman[82422]: 2025-11-29 07:13:29.777478491 +0000 UTC m=+0.140824516 container start a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212 (image=quay.io/ceph/ceph:v18, name=xenodochial_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:13:29 np0005539550 podman[82422]: 2025-11-29 07:13:29.785144148 +0000 UTC m=+0.148490173 container attach a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212 (image=quay.io/ceph/ceph:v18, name=xenodochial_northcutt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: Reconfiguring mgr.compute-0.pdhsqi (unknown last config time)...
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/449837581' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:13:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Added host compute-0
Nov 29 02:13:31 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d20a29a3-2bf7-4c30-b5b1-9a14034311c1 does not exist
Nov 29 02:13:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev baf2ba16-e404-4e3f-a498-4cd04bd1a280 does not exist
Nov 29 02:13:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0f8e4fc8-ec84-40c0-9d3a-9f582a9abe7c does not exist
Nov 29 02:13:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:13:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:32 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 29 02:13:32 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 29 02:13:32 np0005539550 ceph-mon[74435]: Added host compute-0
Nov 29 02:13:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:33 np0005539550 ceph-mon[74435]: Deploying cephadm binary to compute-1
Nov 29 02:13:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:36 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Added host compute-1
Nov 29 02:13:36 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 29 02:13:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:37 np0005539550 ceph-mon[74435]: Added host compute-1
Nov 29 02:13:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:37 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 29 02:13:37 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 29 02:13:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:13:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:39 np0005539550 ceph-mon[74435]: Deploying cephadm binary to compute-2
Nov 29 02:13:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Added host compute-2
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 xenodochial_northcutt[82437]: Added host 'compute-0' with addr '192.168.122.100'
Nov 29 02:13:41 np0005539550 xenodochial_northcutt[82437]: Added host 'compute-1' with addr '192.168.122.101'
Nov 29 02:13:41 np0005539550 xenodochial_northcutt[82437]: Added host 'compute-2' with addr '192.168.122.102'
Nov 29 02:13:41 np0005539550 xenodochial_northcutt[82437]: Scheduled mon update...
Nov 29 02:13:41 np0005539550 xenodochial_northcutt[82437]: Scheduled mgr update...
Nov 29 02:13:41 np0005539550 xenodochial_northcutt[82437]: Scheduled osd.default_drive_group update...
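The spec file itself never appears in the log; a plausible minimal reconstruction, consistent with the three hosts and addresses and the mon, mgr and osd.default_drive_group placements echoed above, might look like the following. Every field below is an assumption except the host names, addresses and service names taken from the log:

    cat <<'EOF' > ceph_spec.yaml    # hypothetical reconstruction, not the deployed file
    service_type: host
    hostname: compute-0
    addr: 192.168.122.100
    ---
    service_type: host
    hostname: compute-1
    addr: 192.168.122.101
    ---
    service_type: host
    hostname: compute-2
    addr: 192.168.122.102
    ---
    service_type: mon
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: mgr
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    spec:
      data_devices:
        all: true
    EOF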
Nov 29 02:13:41 np0005539550 systemd[1]: libpod-a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212.scope: Deactivated successfully.
Nov 29 02:13:41 np0005539550 podman[82422]: 2025-11-29 07:13:41.414463603 +0000 UTC m=+11.777809638 container died a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212 (image=quay.io/ceph/ceph:v18, name=xenodochial_northcutt, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:13:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-81e56fca4f90eb4c66ad6475d85b7ad368dacb064d70de76d6ada3152d31786c-merged.mount: Deactivated successfully.
Nov 29 02:13:41 np0005539550 podman[82422]: 2025-11-29 07:13:41.492436845 +0000 UTC m=+11.855782870 container remove a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212 (image=quay.io/ceph/ceph:v18, name=xenodochial_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:41 np0005539550 systemd[1]: libpod-conmon-a73fa8fae57e5aab4965973f5bf483825e33c0fe9203d4538b86b7a7f958e212.scope: Deactivated successfully.
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:13:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:41 np0005539550 python3[82672]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
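The _raw_params value above is one flattened shell pipeline. Reflowed for readability, with every path, flag, and the fsid copied verbatim from the log line:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        status --format json | jq .osdmap.num_up_osds

Ansible polls this value until the expected number of OSDs is up; the same task runs again at 02:14:13 below.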
Nov 29 02:13:42 np0005539550 podman[82674]: 2025-11-29 07:13:42.045913574 +0000 UTC m=+0.047808098 container create 2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e (image=quay.io/ceph/ceph:v18, name=awesome_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:13:42 np0005539550 systemd[1]: Started libpod-conmon-2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e.scope.
Nov 29 02:13:42 np0005539550 podman[82674]: 2025-11-29 07:13:42.023967291 +0000 UTC m=+0.025861835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:13:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:13:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae51b3a03b4efa3eef0c3a9d87cdd7ae9fd659317b48f0566da0fafe7390876/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae51b3a03b4efa3eef0c3a9d87cdd7ae9fd659317b48f0566da0fafe7390876/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ae51b3a03b4efa3eef0c3a9d87cdd7ae9fd659317b48f0566da0fafe7390876/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:13:42 np0005539550 podman[82674]: 2025-11-29 07:13:42.140232216 +0000 UTC m=+0.142126770 container init 2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e (image=quay.io/ceph/ceph:v18, name=awesome_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:42 np0005539550 podman[82674]: 2025-11-29 07:13:42.148267902 +0000 UTC m=+0.150162426 container start 2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e (image=quay.io/ceph/ceph:v18, name=awesome_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:13:42 np0005539550 podman[82674]: 2025-11-29 07:13:42.151729141 +0000 UTC m=+0.153623685 container attach 2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e (image=quay.io/ceph/ceph:v18, name=awesome_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: Added host compute-2
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:13:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251458141' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:13:42 np0005539550 awesome_buck[82690]: 
Nov 29 02:13:42 np0005539550 awesome_buck[82690]: {"fsid":"b66774a7-56d9-5535-bd8c-681234404870","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":101,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-29T07:11:57.326155+0000","services":{}},"progress_events":{}}
Nov 29 02:13:42 np0005539550 systemd[1]: libpod-2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e.scope: Deactivated successfully.
Nov 29 02:13:42 np0005539550 podman[82674]: 2025-11-29 07:13:42.868010429 +0000 UTC m=+0.869904953 container died 2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e (image=quay.io/ceph/ceph:v18, name=awesome_buck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:13:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1ae51b3a03b4efa3eef0c3a9d87cdd7ae9fd659317b48f0566da0fafe7390876-merged.mount: Deactivated successfully.
Nov 29 02:13:42 np0005539550 podman[82674]: 2025-11-29 07:13:42.914276287 +0000 UTC m=+0.916170811 container remove 2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e (image=quay.io/ceph/ceph:v18, name=awesome_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:13:42 np0005539550 systemd[1]: libpod-conmon-2a1f37e18c588946727c5b880a368ee1fa19023db3a81ad83d3b95464795804e.scope: Deactivated successfully.
Nov 29 02:13:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:13:57
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] No pools available
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
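This block is the mgr's periodic module tick: the balancer wakes, builds plan auto_2025-11-29_07:13:57, and bails out because there are no pools; the pg_autoscaler and the rbd_support schedulers likewise find nothing to do, and the volumes module prunes idle connections. The balancer state behind these messages can be queried directly; a sketch, with the expected fields inferred from the "Mode upmap, max misplaced 0.050000" line:

    ceph balancer status
    # expect "active": true and "mode": "upmap" in the JSON reply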
Nov 29 02:13:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
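The config-key writes above are cephadm persisting its refreshed device inventory for compute-1 under the mgr/cephadm/ prefix. The stored blobs can be inspected with the same facility; key names below are copied from the audit lines:

    ceph config-key ls | grep mgr/cephadm/host
    ceph config-key get mgr/cephadm/host.compute-1.devices.0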
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:14:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:14:07 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:14:07 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:14:08 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:14:08 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
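Here cephadm refreshes client configuration on compute-1: it asks the mon for a minimal ceph.conf and for the admin keyring, then writes both to /etc/ceph/ and to /var/lib/ceph/<fsid>/config/ on the target host (the "Updating compute-1:..." lines around this point). The two mon commands dispatched above map to these CLI calls:

    ceph config generate-minimal-conf   # minimal conf: fsid plus mon addresses
    ceph auth get client.admin          # keyring copied to the managed host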
Nov 29 02:14:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:14:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:14:08 np0005539550 ceph-mon[74435]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:14:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:09 np0005539550 ceph-mon[74435]: Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:14:09 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:14:09 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:14:10 np0005539550 ceph-mon[74435]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:14:10 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:14:10 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:14:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:11 np0005539550 ceph-mon[74435]: Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 0fc1e0e4-b0eb-4d34-8329-cae8de9ade55 (Updating crash deployment (+1 -> 2))
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:14:12.052+0000 7f31d2719640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: service_name: mon
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: placement:
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  hosts:
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  - compute-0
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  - compute-1
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  - compute-2
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:14:12.052+0000 7f31d2719640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: service_name: mgr
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: placement:
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  hosts:
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  - compute-0
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  - compute-1
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]:  - compute-2
Nov 29 02:14:12 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
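Both spec applies fail because the placement lists compute-2 while the orchestrator's host inventory does not contain it, even though an "Added host compute-2" message appeared at 02:13:42; presumably the inventory had not yet been persisted or refreshed when this serve pass ran. The usual way to confirm and repair this, with the address taken from the earlier host-add output:

    ceph orch host ls                              # compute-2 absent here explains the error
    ceph orch host add compute-2 192.168.122.102
    ceph health detail                             # CEPHADM_APPLY_SPEC_FAIL should clear on the next pass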
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:14:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 29 02:14:12 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
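Deploying the crash daemon begins with minting its keyring; the auth get-or-create mon_command above, expressed as a CLI call, is:

    ceph auth get-or-create client.crash.compute-1 mon 'profile crash' mgr 'profile crash'

cephadm then ships that key, together with a minimal conf, into the crash.compute-1 container it is about to start.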
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: Deploying daemon crash.compute-1 on compute-1
Nov 29 02:14:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 02:14:13 np0005539550 python3[82754]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:13 np0005539550 podman[82756]: 2025-11-29 07:14:13.298892321 +0000 UTC m=+0.057819485 container create d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5 (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:14:13 np0005539550 systemd[1]: Started libpod-conmon-d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5.scope.
Nov 29 02:14:13 np0005539550 podman[82756]: 2025-11-29 07:14:13.27804853 +0000 UTC m=+0.036975724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:14:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc056b85e8483c2088eb457eb4e03e8982d675a49d694a2130ff8969762fb452/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc056b85e8483c2088eb457eb4e03e8982d675a49d694a2130ff8969762fb452/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc056b85e8483c2088eb457eb4e03e8982d675a49d694a2130ff8969762fb452/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:13 np0005539550 podman[82756]: 2025-11-29 07:14:13.396706531 +0000 UTC m=+0.155633715 container init d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5 (image=quay.io/ceph/ceph:v18, name=focused_robinson, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:14:13 np0005539550 podman[82756]: 2025-11-29 07:14:13.405226265 +0000 UTC m=+0.164153429 container start d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5 (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:13 np0005539550 podman[82756]: 2025-11-29 07:14:13.409327597 +0000 UTC m=+0.168254791 container attach d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5 (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:14:14 np0005539550 ceph-mon[74435]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 29 02:14:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:14:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683017153' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:14:14 np0005539550 focused_robinson[82772]: 
Nov 29 02:14:14 np0005539550 focused_robinson[82772]: {"fsid":"b66774a7-56d9-5535-bd8c-681234404870","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false},"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":132,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:13:59.659014+0000","services":{}},"progress_events":{}}
Nov 29 02:14:14 np0005539550 systemd[1]: libpod-d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5.scope: Deactivated successfully.
Nov 29 02:14:14 np0005539550 podman[82756]: 2025-11-29 07:14:14.136565113 +0000 UTC m=+0.895492297 container died d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5 (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:14:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cc056b85e8483c2088eb457eb4e03e8982d675a49d694a2130ff8969762fb452-merged.mount: Deactivated successfully.
Nov 29 02:14:14 np0005539550 podman[82756]: 2025-11-29 07:14:14.191653972 +0000 UTC m=+0.950581136 container remove d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5 (image=quay.io/ceph/ceph:v18, name=focused_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:14:14 np0005539550 systemd[1]: libpod-conmon-d6ef5c3a44604a3840240598a4574dbf9f797eeb6a529bf81182f7c3a44d69c5.scope: Deactivated successfully.
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:15 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 0fc1e0e4-b0eb-4d34-8329-cae8de9ade55 (Updating crash deployment (+1 -> 2))
Nov 29 02:14:15 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 0fc1e0e4-b0eb-4d34-8329-cae8de9ade55 (Updating crash deployment (+1 -> 2)) in 3 seconds
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:14:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
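This burst is the OSD-deployment pre-flight: cephadm checks for destroyed OSD ids it could reuse, then fetches the bootstrap-osd keyring and a minimal conf to hand to the ceph-volume container. The same queries from the CLI:

    ceph osd tree destroyed --format json   # destroyed ids eligible for reuse (none here)
    ceph auth get client.bootstrap-osd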
Nov 29 02:14:15 np0005539550 podman[82950]: 2025-11-29 07:14:15.645245898 +0000 UTC m=+0.026393744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:15 np0005539550 podman[82950]: 2025-11-29 07:14:15.762823219 +0000 UTC m=+0.143971065 container create 8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:15 np0005539550 systemd[1]: Started libpod-conmon-8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4.scope.
Nov 29 02:14:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:15 np0005539550 podman[82950]: 2025-11-29 07:14:15.828079838 +0000 UTC m=+0.209227704 container init 8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:14:15 np0005539550 podman[82950]: 2025-11-29 07:14:15.834413871 +0000 UTC m=+0.215561717 container start 8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:14:15 np0005539550 podman[82950]: 2025-11-29 07:14:15.838606416 +0000 UTC m=+0.219754262 container attach 8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:15 np0005539550 priceless_shannon[82967]: 167 167
Nov 29 02:14:15 np0005539550 systemd[1]: libpod-8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4.scope: Deactivated successfully.
Nov 29 02:14:15 np0005539550 podman[82950]: 2025-11-29 07:14:15.84130499 +0000 UTC m=+0.222452846 container died 8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:14:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-16345a730b84739083af245146b1f160ae04fa50599665b725faf1523e47bad5-merged.mount: Deactivated successfully.
Nov 29 02:14:15 np0005539550 podman[82950]: 2025-11-29 07:14:15.883236339 +0000 UTC m=+0.264384175 container remove 8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:15 np0005539550 systemd[1]: libpod-conmon-8f4c352079be0bc47bed254109cf5835f8cbd8815271512e35951546ee8b66e4.scope: Deactivated successfully.
Nov 29 02:14:16 np0005539550 podman[82990]: 2025-11-29 07:14:16.04716655 +0000 UTC m=+0.046021992 container create dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:16 np0005539550 systemd[1]: Started libpod-conmon-dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe.scope.
Nov 29 02:14:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:14:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:14:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e059ed727160c5c0ed5d155788677dee2216fd05b97adfceecfe13dabc15da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e059ed727160c5c0ed5d155788677dee2216fd05b97adfceecfe13dabc15da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e059ed727160c5c0ed5d155788677dee2216fd05b97adfceecfe13dabc15da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e059ed727160c5c0ed5d155788677dee2216fd05b97adfceecfe13dabc15da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3e059ed727160c5c0ed5d155788677dee2216fd05b97adfceecfe13dabc15da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:16 np0005539550 podman[82990]: 2025-11-29 07:14:16.028120658 +0000 UTC m=+0.026976120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:16 np0005539550 podman[82990]: 2025-11-29 07:14:16.127174182 +0000 UTC m=+0.126029644 container init dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:16 np0005539550 podman[82990]: 2025-11-29 07:14:16.137631899 +0000 UTC m=+0.136487341 container start dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:16 np0005539550 podman[82990]: 2025-11-29 07:14:16.141878695 +0000 UTC m=+0.140734137 container attach dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swanson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: --> relative data size: 1.0
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5dd67027-4f06-4800-93bd-47ed1a74c5e6
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"} v 0) v1
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2447561609' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}]: dispatch
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2447561609' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}]': finished
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
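At this point ceph-volume has asked the monitor to allocate an OSD id for a freshly generated fsid: each "osd new" that finishes bumps the osdmap epoch (e3 to e4 here) and records the OSD as "in" but not yet "up", since no daemon exists for it yet. A minimal sketch of the same call (hypothetical uuid; assumes the bootstrap keyring at its default path, as in the log):

$ ceph --name client.bootstrap-osd \
      --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
      osd new "$(uuidgen)"   # prints the allocated OSD id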
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:17 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b9602a40-9939-419b-9b17-98d4b245553e"} v 0) v1
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2710152261' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9602a40-9939-419b-9b17-98d4b245553e"}]: dispatch
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2710152261' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9602a40-9939-419b-9b17-98d4b245553e"}]': finished
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:17 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:17 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
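The "failed to return metadata for osd.0/osd.1" messages are expected at this stage: the mgr polls "osd metadata" as soon as ids appear in the osdmap, but the monitor only holds metadata once each daemon has booted and registered, so it answers ENOENT until then. To re-check once the daemons are up:

$ ceph osd metadata 0   # returns the daemon's JSON metadata once osd.0 has booted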
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 29 02:14:17 np0005539550 lvm[83053]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:14:17 np0005539550 lvm[83053]: VG ceph_vg0 finished
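These lvm messages come from event-driven autoactivation (pvscan): with /dev/loop3 online, ceph_vg0 has all of its PVs and is usable. A quick sketch to confirm the VG and LV that ceph-volume is about to consume (names taken from the log):

$ vgs ceph_vg0
$ lvs ceph_vg0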
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:17 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 29 02:14:17 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 2 completed events
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:14:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/2447561609' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}]: dispatch
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/2447561609' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}]': finished
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.101:0/2710152261' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b9602a40-9939-419b-9b17-98d4b245553e"}]: dispatch
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.101:0/2710152261' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b9602a40-9939-419b-9b17-98d4b245553e"}]': finished
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2969310874' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 02:14:18 np0005539550 confident_swanson[83006]: stderr: got monmap epoch 1
Nov 29 02:14:18 np0005539550 confident_swanson[83006]: --> Creating keyring file for osd.0
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2937418367' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 29 02:14:18 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 29 02:14:18 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 29 02:14:18 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 5dd67027-4f06-4800-93bd-47ed1a74c5e6 --setuser ceph --setgroup ceph
Nov 29 02:14:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 29 02:14:19 np0005539550 ceph-mon[74435]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
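With one OSD now recorded in the map, the TOO_FEW_OSDS check clears (per the cleared message itself, osd_pool_default_size is 1 in this deployment, so a single OSD satisfies it). To inspect any remaining checks:

$ ceph health detail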
Nov 29 02:14:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: stderr: 2025-11-29T07:14:18.324+0000 7f285a1fd740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: stderr: 2025-11-29T07:14:18.324+0000 7f285a1fd740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: stderr: 2025-11-29T07:14:18.324+0000 7f285a1fd740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: stderr: 2025-11-29T07:14:18.324+0000 7f285a1fd740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
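The repeated _read_bdev_label and _read_fsid stderr lines are emitted while mkfs probes a device that carries no bluestore label yet; on a brand-new LV they are expected and harmless, and the "prepare successful" line that follows confirms the mkfs went through. To verify the label that mkfs then wrote (same LV path as in the log):

$ ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0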
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 29 02:14:21 np0005539550 confident_swanson[83006]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
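The prepare and activate phases logged above are the two halves that "ceph-volume lvm create" bundles into one step. A minimal equivalent invocation, run inside the ceph container against the same pre-built LV, would be:

$ ceph-volume lvm create --bluestore --data ceph_vg0/ceph_lv0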
Nov 29 02:14:21 np0005539550 systemd[1]: libpod-dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe.scope: Deactivated successfully.
Nov 29 02:14:21 np0005539550 systemd[1]: libpod-dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe.scope: Consumed 2.755s CPU time.
Nov 29 02:14:21 np0005539550 podman[82990]: 2025-11-29 07:14:21.560411608 +0000 UTC m=+5.559267060 container died dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swanson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:14:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c3e059ed727160c5c0ed5d155788677dee2216fd05b97adfceecfe13dabc15da-merged.mount: Deactivated successfully.
Nov 29 02:14:21 np0005539550 podman[82990]: 2025-11-29 07:14:21.663832131 +0000 UTC m=+5.662687573 container remove dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_swanson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:14:21 np0005539550 systemd[1]: libpod-conmon-dc6c1bd855e98bf2204184b586daca600641028d806e00474ccedfeaa8c1dabe.scope: Deactivated successfully.
Nov 29 02:14:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:22 np0005539550 podman[84110]: 2025-11-29 07:14:22.271184402 +0000 UTC m=+0.069367682 container create e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:14:22 np0005539550 podman[84110]: 2025-11-29 07:14:22.22915288 +0000 UTC m=+0.027336150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:22 np0005539550 systemd[1]: Started libpod-conmon-e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a.scope.
Nov 29 02:14:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:22 np0005539550 podman[84110]: 2025-11-29 07:14:22.38026336 +0000 UTC m=+0.178446640 container init e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:22 np0005539550 podman[84110]: 2025-11-29 07:14:22.389443312 +0000 UTC m=+0.187626562 container start e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:14:22 np0005539550 gifted_montalcini[84126]: 167 167
Nov 29 02:14:22 np0005539550 systemd[1]: libpod-e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a.scope: Deactivated successfully.
Nov 29 02:14:22 np0005539550 podman[84110]: 2025-11-29 07:14:22.395795316 +0000 UTC m=+0.193978586 container attach e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:14:22 np0005539550 podman[84110]: 2025-11-29 07:14:22.396445634 +0000 UTC m=+0.194628884 container died e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 02:14:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f023e5f61c868db899291c1f3bffc774a35c4d8b23d2afd5ca1bbb33bd4d3023-merged.mount: Deactivated successfully.
Nov 29 02:14:22 np0005539550 podman[84110]: 2025-11-29 07:14:22.442177927 +0000 UTC m=+0.240361177 container remove e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_montalcini, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:14:22 np0005539550 systemd[1]: libpod-conmon-e84e211b9f451245ec66b03527fe379deefe767c6f07a11669a130b104bd136a.scope: Deactivated successfully.
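The short-lived gifted_montalcini container above printed only "167 167", which looks like cephadm's uid/gid probe: 167 is the ceph user and group inside the image, and cephadm needs those ids to chown host paths correctly. A hypothetical manual equivalent (the stat entrypoint override is an assumption, not taken from the log):

$ podman run --rm --entrypoint stat \
      quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      -c '%u %g' /var/lib/ceph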
Nov 29 02:14:22 np0005539550 podman[84148]: 2025-11-29 07:14:22.604034472 +0000 UTC m=+0.048527001 container create bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:14:22 np0005539550 systemd[1]: Started libpod-conmon-bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f.scope.
Nov 29 02:14:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:22 np0005539550 podman[84148]: 2025-11-29 07:14:22.581095913 +0000 UTC m=+0.025588462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c30e500f1bfc93a53a163f4b74f45b26a1b913c9fe5ddb2e6e7b3a200c25b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c30e500f1bfc93a53a163f4b74f45b26a1b913c9fe5ddb2e6e7b3a200c25b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c30e500f1bfc93a53a163f4b74f45b26a1b913c9fe5ddb2e6e7b3a200c25b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96c30e500f1bfc93a53a163f4b74f45b26a1b913c9fe5ddb2e6e7b3a200c25b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:22 np0005539550 podman[84148]: 2025-11-29 07:14:22.703049534 +0000 UTC m=+0.147542093 container init bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:22 np0005539550 podman[84148]: 2025-11-29 07:14:22.710985372 +0000 UTC m=+0.155477901 container start bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:14:22 np0005539550 podman[84148]: 2025-11-29 07:14:22.836791599 +0000 UTC m=+0.281284328 container attach bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]: {
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:    "0": [
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:        {
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "devices": [
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "/dev/loop3"
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            ],
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "lv_name": "ceph_lv0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "lv_size": "7511998464",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "name": "ceph_lv0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "tags": {
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.cluster_name": "ceph",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.crush_device_class": "",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.encrypted": "0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.osd_id": "0",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.type": "block",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:                "ceph.vdo": "0"
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            },
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "type": "block",
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:            "vg_name": "ceph_vg0"
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:        }
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]:    ]
Nov 29 02:14:23 np0005539550 angry_lovelace[84165]: }
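This JSON block is the per-OSD inventory for the host, in the format of "ceph-volume lvm list --format json": keyed by OSD id, with the LV tags (ceph.osd_fsid, ceph.cluster_fsid, ceph.type, and so on) that bind ceph_vg0/ceph_lv0 to osd.0. To reproduce it on the host, inside the ceph container:

$ ceph-volume lvm list --format json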
Nov 29 02:14:23 np0005539550 systemd[1]: libpod-bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f.scope: Deactivated successfully.
Nov 29 02:14:23 np0005539550 podman[84148]: 2025-11-29 07:14:23.653375783 +0000 UTC m=+1.097868332 container died bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c96c30e500f1bfc93a53a163f4b74f45b26a1b913c9fe5ddb2e6e7b3a200c25b-merged.mount: Deactivated successfully.
Nov 29 02:14:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:24 np0005539550 podman[84148]: 2025-11-29 07:14:24.106493928 +0000 UTC m=+1.550986487 container remove bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_lovelace, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:14:24 np0005539550 systemd[1]: libpod-conmon-bfed21b11f250f5887396f2dd7e65a418e049c441500ecdb004d4d7a47b9767f.scope: Deactivated successfully.
Nov 29 02:14:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 02:14:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:14:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:14:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:14:24 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 29 02:14:24 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
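cephadm has now committed to deploying osd.0 on compute-0; the container churn that follows is it staging the daemon. A sketch for watching the rollout from the mgr side:

$ ceph orch ps --daemon-type osd --refresh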
Nov 29 02:14:24 np0005539550 podman[84326]: 2025-11-29 07:14:24.716491822 +0000 UTC m=+0.022900919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:25 np0005539550 podman[84326]: 2025-11-29 07:14:25.062759518 +0000 UTC m=+0.369168585 container create 3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lehmann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:14:25 np0005539550 systemd[1]: Started libpod-conmon-3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420.scope.
Nov 29 02:14:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:25 np0005539550 podman[84326]: 2025-11-29 07:14:25.178377646 +0000 UTC m=+0.484786733 container init 3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:14:25 np0005539550 podman[84326]: 2025-11-29 07:14:25.185129831 +0000 UTC m=+0.491538908 container start 3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:14:25 np0005539550 agitated_lehmann[84341]: 167 167
Nov 29 02:14:25 np0005539550 podman[84326]: 2025-11-29 07:14:25.190805397 +0000 UTC m=+0.497214584 container attach 3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:25 np0005539550 systemd[1]: libpod-3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420.scope: Deactivated successfully.
Nov 29 02:14:25 np0005539550 podman[84326]: 2025-11-29 07:14:25.193532541 +0000 UTC m=+0.499941608 container died 3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lehmann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:14:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-766bf73cb35e644df21e0765249e9ce96ea34a55d94860898e256d4751236d74-merged.mount: Deactivated successfully.
Nov 29 02:14:25 np0005539550 podman[84326]: 2025-11-29 07:14:25.250360438 +0000 UTC m=+0.556769505 container remove 3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lehmann, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:14:25 np0005539550 systemd[1]: libpod-conmon-3399c9cc75ddb65f031abfc16fe3bdfc40fa4df755bc7c10ce1716f99ed1a420.scope: Deactivated successfully.
Nov 29 02:14:25 np0005539550 podman[84374]: 2025-11-29 07:14:25.516750177 +0000 UTC m=+0.043298167 container create ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:14:25 np0005539550 systemd[1]: Started libpod-conmon-ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86.scope.
Nov 29 02:14:25 np0005539550 podman[84374]: 2025-11-29 07:14:25.498282291 +0000 UTC m=+0.024830311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e111afe7823daaf549c837a51d3df3f0d22ada6ad1655c6a214d8c638abddd19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e111afe7823daaf549c837a51d3df3f0d22ada6ad1655c6a214d8c638abddd19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e111afe7823daaf549c837a51d3df3f0d22ada6ad1655c6a214d8c638abddd19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e111afe7823daaf549c837a51d3df3f0d22ada6ad1655c6a214d8c638abddd19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e111afe7823daaf549c837a51d3df3f0d22ada6ad1655c6a214d8c638abddd19/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
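The kernel's "supports timestamps until 2038" notices fire for every bind mount entering the container's mount namespace; they only mean these XFS filesystems were made without the bigtime feature, and nothing is failing. To check a filesystem (assuming an xfsprogs new enough to report the flag):

$ xfs_info /var/lib/containers | grep -o 'bigtime=[01]'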
Nov 29 02:14:25 np0005539550 podman[84374]: 2025-11-29 07:14:25.744479136 +0000 UTC m=+0.271027156 container init ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:14:25 np0005539550 podman[84374]: 2025-11-29 07:14:25.754024508 +0000 UTC m=+0.280572498 container start ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:14:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:26 np0005539550 podman[84374]: 2025-11-29 07:14:26.174314744 +0000 UTC m=+0.700862734 container attach ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:14:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test[84390]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 29 02:14:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test[84390]:                            [--no-systemd] [--no-tmpfs]
Nov 29 02:14:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test[84390]: ceph-volume activate: error: unrecognized arguments: --bad-option
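This usage error is deliberate rather than a fault: the container is named ...-osd-0-activate-test, and cephadm appears to pass an invalid --bad-option precisely so the printed usage reveals which "ceph-volume activate" flags (here --no-systemd and --no-tmpfs) this image supports before the real activation runs. A hypothetical manual reproduction (the entrypoint path is an assumption):

$ podman run --rm --entrypoint /usr/sbin/ceph-volume \
      quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
      activate --bad-option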
Nov 29 02:14:26 np0005539550 systemd[1]: libpod-ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86.scope: Deactivated successfully.
Nov 29 02:14:26 np0005539550 podman[84374]: 2025-11-29 07:14:26.539666624 +0000 UTC m=+1.066214634 container died ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:14:26 np0005539550 ceph-mon[74435]: Deploying daemon osd.0 on compute-0
Nov 29 02:14:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e111afe7823daaf549c837a51d3df3f0d22ada6ad1655c6a214d8c638abddd19-merged.mount: Deactivated successfully.
Nov 29 02:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:27 np0005539550 podman[84374]: 2025-11-29 07:14:27.813950028 +0000 UTC m=+2.340498018 container remove ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:14:27 np0005539550 systemd[1]: libpod-conmon-ecb64f4814026f5930e0bc3b0066b3df54ddf24264f365b5b60fd5432db72e86.scope: Deactivated successfully.
Nov 29 02:14:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:28 np0005539550 systemd[1]: Reloading.
Nov 29 02:14:28 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:28 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:28 np0005539550 systemd[1]: Reloading.
Nov 29 02:14:28 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:14:28 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:14:28 np0005539550 systemd[1]: Starting Ceph osd.0 for b66774a7-56d9-5535-bd8c-681234404870...
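The two systemd reloads pick up the unit cephadm just wrote; the SysV and rc.local generator messages are routine reload noise, unrelated to Ceph. cephadm names its units ceph-<fsid>@<daemon>.service, so this OSD can be inspected with:

$ systemctl status ceph-b66774a7-56d9-5535-bd8c-681234404870@osd.0.service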
Nov 29 02:14:29 np0005539550 podman[84549]: 2025-11-29 07:14:29.021640827 +0000 UTC m=+0.027279048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:29 np0005539550 podman[84549]: 2025-11-29 07:14:29.134649604 +0000 UTC m=+0.140287805 container create 579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:14:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c470e25c66614bbf0ef2d484f71533786d9386ea5424cf1ee871d1677ad95a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c470e25c66614bbf0ef2d484f71533786d9386ea5424cf1ee871d1677ad95a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c470e25c66614bbf0ef2d484f71533786d9386ea5424cf1ee871d1677ad95a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c470e25c66614bbf0ef2d484f71533786d9386ea5424cf1ee871d1677ad95a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c470e25c66614bbf0ef2d484f71533786d9386ea5424cf1ee871d1677ad95a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:29 np0005539550 podman[84549]: 2025-11-29 07:14:29.23125154 +0000 UTC m=+0.236889751 container init 579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:14:29 np0005539550 podman[84549]: 2025-11-29 07:14:29.23887877 +0000 UTC m=+0.244516961 container start 579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:14:29 np0005539550 podman[84549]: 2025-11-29 07:14:29.247169017 +0000 UTC m=+0.252807208 container attach 579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:14:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:14:30 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Nov 29 02:14:30 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Nov 29 02:14:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate[84564]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:14:30 np0005539550 bash[84549]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:14:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate[84564]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:30 np0005539550 bash[84549]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate[84564]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:30 np0005539550 bash[84549]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 29 02:14:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate[84564]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:30 np0005539550 bash[84549]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 29 02:14:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate[84564]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:30 np0005539550 bash[84549]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate[84564]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:14:30 np0005539550 bash[84549]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 29 02:14:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate[84564]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 02:14:30 np0005539550 bash[84549]: --> ceph-volume raw activate successful for osd ID: 0
Nov 29 02:14:30 np0005539550 systemd[1]: libpod-579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9.scope: Deactivated successfully.
Nov 29 02:14:30 np0005539550 systemd[1]: libpod-579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9.scope: Consumed 1.132s CPU time.
Nov 29 02:14:30 np0005539550 podman[84674]: 2025-11-29 07:14:30.401679609 +0000 UTC m=+0.029166700 container died 579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:14:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-32c470e25c66614bbf0ef2d484f71533786d9386ea5424cf1ee871d1677ad95a-merged.mount: Deactivated successfully.
Nov 29 02:14:30 np0005539550 podman[84674]: 2025-11-29 07:14:30.464285995 +0000 UTC m=+0.091773056 container remove 579c1c1d15219e6cc260736ee8155e974c4a2af24682933b7dd58e853b3594c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:30 np0005539550 podman[84734]: 2025-11-29 07:14:30.707643533 +0000 UTC m=+0.048616333 container create 0aed04cce255c7114ee3adbdea282a87eb178d4c077df0321861e3f84fce87ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0442fd1c2b2ce636eb7d71100ead1f043695cefdf5192e8623f39e7de0dbec7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0442fd1c2b2ce636eb7d71100ead1f043695cefdf5192e8623f39e7de0dbec7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0442fd1c2b2ce636eb7d71100ead1f043695cefdf5192e8623f39e7de0dbec7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0442fd1c2b2ce636eb7d71100ead1f043695cefdf5192e8623f39e7de0dbec7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0442fd1c2b2ce636eb7d71100ead1f043695cefdf5192e8623f39e7de0dbec7/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:30 np0005539550 podman[84734]: 2025-11-29 07:14:30.771268526 +0000 UTC m=+0.112241346 container init 0aed04cce255c7114ee3adbdea282a87eb178d4c077df0321861e3f84fce87ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:30 np0005539550 podman[84734]: 2025-11-29 07:14:30.77979864 +0000 UTC m=+0.120771450 container start 0aed04cce255c7114ee3adbdea282a87eb178d4c077df0321861e3f84fce87ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:14:30 np0005539550 podman[84734]: 2025-11-29 07:14:30.686631317 +0000 UTC m=+0.027604127 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:30 np0005539550 bash[84734]: 0aed04cce255c7114ee3adbdea282a87eb178d4c077df0321861e3f84fce87ae
Nov 29 02:14:30 np0005539550 systemd[1]: Started Ceph osd.0 for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: pidfile_write: ignore empty --pid-file
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d291cb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d291cb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d291cb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d291cb800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d2a00d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d2a00d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d2a00d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d2a00d800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Nov 29 02:14:30 np0005539550 ceph-osd[84753]: bdev(0x556d2a00d800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:14:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d291cb800 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:14:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:14:31 np0005539550 ceph-mon[74435]: Deploying daemon osd.1 on compute-1
Nov 29 02:14:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: load: jerasure load: lrc 
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:14:31 np0005539550 podman[84912]: 2025-11-29 07:14:31.549274203 +0000 UTC m=+0.052050707 container create b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:14:31 np0005539550 systemd[1]: Started libpod-conmon-b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701.scope.
Nov 29 02:14:31 np0005539550 podman[84912]: 2025-11-29 07:14:31.52509804 +0000 UTC m=+0.027874574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:14:31 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:31 np0005539550 podman[84912]: 2025-11-29 07:14:31.660023407 +0000 UTC m=+0.162799931 container init b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:31 np0005539550 podman[84912]: 2025-11-29 07:14:31.670763581 +0000 UTC m=+0.173540085 container start b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:14:31 np0005539550 podman[84912]: 2025-11-29 07:14:31.676078707 +0000 UTC m=+0.178855211 container attach b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jones, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:14:31 np0005539550 systemd[1]: libpod-b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701.scope: Deactivated successfully.
Nov 29 02:14:31 np0005539550 boring_jones[84928]: 167 167
Nov 29 02:14:31 np0005539550 podman[84912]: 2025-11-29 07:14:31.681567527 +0000 UTC m=+0.184344041 container died b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jones, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:14:31 np0005539550 conmon[84928]: conmon b7c88acaadadb9342287 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701.scope/container/memory.events
Nov 29 02:14:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6ccbc6ade5762c586f8ad62ff179630502c53387623b5e14fe578d43c87788af-merged.mount: Deactivated successfully.
Nov 29 02:14:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08ec00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluefs mount
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluefs mount shared_bdev_used = 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Git sha 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: DB SUMMARY
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: DB Session ID:  FRTJPA98RHXIGU0AN4HW
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                                     Options.env: 0x556d2a05fc70
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                                Options.info_log: 0x556d29248ba0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.write_buffer_manager: 0x556d2a168460
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.row_cache: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                              Options.wal_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.wal_compression: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Compression algorithms supported:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kZSTD supported: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kXpressCompression supported: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kBZip2Compression supported: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kLZ4Compression supported: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kZlibCompression supported: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: 	kSnappyCompression supported: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29248600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29248600)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29248600)
    cache_index_and_filter_blocks: 1
    cache_index_and_filter_blocks_with_high_priority: 0
    pin_l0_filter_and_index_blocks_in_cache: 0
    pin_top_level_index_and_filter: 1
    index_type: 0
    data_block_index_type: 0
    index_shortening: 1
    data_block_hash_table_util_ratio: 0.750000
    checksum: 4
    no_block_cache: 0
    block_cache: 0x556d2923edd0
    block_cache_name: BinnedLRUCache
    block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
    block_cache_compressed: (nil)
    persistent_cache: (nil)
    block_size: 4096
    block_size_deviation: 10
    block_restart_interval: 16
    index_block_restart_interval: 1
    metadata_block_size: 4096
    partition_filters: 0
    use_delta_encoding: 1
    filter_policy: bloomfilter
    whole_key_filtering: 1
    verify_compression: 0
    read_amp_bytes_per_bit: 0
    format_version: 5
    enable_index_compression: 1
    block_align: 0
    max_auto_readahead_size: 262144
    prepopulate_block_cache: 0
    initial_auto_readahead_size: 8192
    num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
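Given max_bytes_for_level_base = 1073741824 and max_bytes_for_level_multiplier = 8 (with level_compaction_dynamic_level_bytes off, as logged, and all addtl[] multipliers at 1), the nominal per-level byte budgets grow geometrically from L1: 1 GiB, 8 GiB, 64 GiB, and so on; L0 is governed by the file-count triggers instead. A throwaway sketch that reproduces the arithmetic:

    #include <cstdint>
    #include <cstdio>

    int main() {
      const double mult = 8.0;              // max_bytes_for_level_multiplier
      uint64_t cap = 1073741824ULL;         // max_bytes_for_level_base -> L1 budget
      for (int level = 1; level <= 6; ++level) {  // num_levels = 7 -> L1..L6
        std::printf("L%d: %llu bytes (%.0f GiB)\n", level,
                    (unsigned long long)cap, cap / 1073741824.0);
        cap = static_cast<uint64_t>(cap * mult);
      }
      return 0;
    }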
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29248600)
    cache_index_and_filter_blocks: 1
    cache_index_and_filter_blocks_with_high_priority: 0
    pin_l0_filter_and_index_blocks_in_cache: 0
    pin_top_level_index_and_filter: 1
    index_type: 0
    data_block_index_type: 0
    index_shortening: 1
    data_block_hash_table_util_ratio: 0.750000
    checksum: 4
    no_block_cache: 0
    block_cache: 0x556d2923edd0
    block_cache_name: BinnedLRUCache
    block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
    block_cache_compressed: (nil)
    persistent_cache: (nil)
    block_size: 4096
    block_size_deviation: 10
    block_restart_interval: 16
    index_block_restart_interval: 1
    metadata_block_size: 4096
    partition_filters: 0
    use_delta_encoding: 1
    filter_policy: bloomfilter
    whole_key_filtering: 1
    verify_compression: 0
    read_amp_bytes_per_bit: 0
    format_version: 5
    enable_index_compression: 1
    block_align: 0
    max_auto_readahead_size: 262144
    prepopulate_block_cache: 0
    initial_auto_readahead_size: 8192
    num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
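The table_factory dump repeated in each of these blocks describes a single BlockBasedTable configuration: 4 KiB blocks, format_version 5, a whole-key bloom filter, index and filter blocks held in the block cache with the top-level index pinned, and one shared cache (the identical block_cache pointer 0x556d2923edd0, capacity 483183820 bytes, about 461 MiB). The checksum value 4 appears to correspond to kXXH3 in recent RocksDB releases. BinnedLRUCache is Ceph's own cache implementation; in the stock-RocksDB sketch below, NewLRUCache stands in for it, the helper name is hypothetical, and the 10 bits/key for the bloom filter is an assumption (the log only says "bloomfilter").

    #include <memory>
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    void ApplyLoggedTableFactory(rocksdb::ColumnFamilyOptions& cf,
                                 const std::shared_ptr<rocksdb::Cache>& shared_cache) {
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;                     // block_size: 4096
      t.format_version = 5;                    // format_version: 5
      t.cache_index_and_filter_blocks = true;  // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true; // pin_top_level_index_and_filter: 1
      t.whole_key_filtering = true;            // whole_key_filtering: 1
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // bits/key assumed
      t.block_cache = shared_cache;            // one cache shared across shards
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
    }

    // e.g.: auto cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);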
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29248600)
    cache_index_and_filter_blocks: 1
    cache_index_and_filter_blocks_with_high_priority: 0
    pin_l0_filter_and_index_blocks_in_cache: 0
    pin_top_level_index_and_filter: 1
    index_type: 0
    data_block_index_type: 0
    index_shortening: 1
    data_block_hash_table_util_ratio: 0.750000
    checksum: 4
    no_block_cache: 0
    block_cache: 0x556d2923edd0
    block_cache_name: BinnedLRUCache
    block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
    block_cache_compressed: (nil)
    persistent_cache: (nil)
    block_size: 4096
    block_size_deviation: 10
    block_restart_interval: 16
    index_block_restart_interval: 1
    metadata_block_size: 4096
    partition_filters: 0
    use_delta_encoding: 1
    filter_policy: bloomfilter
    whole_key_filtering: 1
    verify_compression: 0
    read_amp_bytes_per_bit: 0
    format_version: 5
    enable_index_compression: 1
    block_align: 0
    max_auto_readahead_size: 262144
    prepopulate_block_cache: 0
    initial_auto_readahead_size: 8192
    num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
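Every shard also registers the CompactOnDeletionCollector shown in these dumps (sliding window 32768, deletion trigger 16384, deletion ratio 0): an SST file is flagged for compaction as soon as any window of 32768 consecutive entries contains at least 16384 tombstones, which keeps delete-heavy omap workloads from accumulating dead keys. This collector ships with RocksDB; a minimal sketch of wiring it up with the logged values (function name hypothetical):

    #include <rocksdb/options.h>
    #include <rocksdb/utilities/table_properties_collectors.h>

    void AddDeletionTriggeredCompaction(rocksdb::ColumnFamilyOptions& cf) {
      cf.table_properties_collector_factories.emplace_back(
          rocksdb::NewCompactOnDeletionCollectorFactory(
              /*sliding_window_size=*/32768,
              /*deletion_trigger=*/16384,
              /*deletion_ratio=*/0.0));  // 0 disables the whole-file ratio trigger
    }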
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29248600)
    cache_index_and_filter_blocks: 1
    cache_index_and_filter_blocks_with_high_priority: 0
    pin_l0_filter_and_index_blocks_in_cache: 0
    pin_top_level_index_and_filter: 1
    index_type: 0
    data_block_index_type: 0
    index_shortening: 1
    data_block_hash_table_util_ratio: 0.750000
    checksum: 4
    no_block_cache: 0
    block_cache: 0x556d2923edd0
    block_cache_name: BinnedLRUCache
    block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
    block_cache_compressed: (nil)
    persistent_cache: (nil)
    block_size: 4096
    block_size_deviation: 10
    block_restart_interval: 16
    index_block_restart_interval: 1
    metadata_block_size: 4096
    partition_filters: 0
    use_delta_encoding: 1
    filter_policy: bloomfilter
    whole_key_filtering: 1
    verify_compression: 0
    read_amp_bytes_per_bit: 0
    format_version: 5
    enable_index_compression: 1
    block_align: 0
    max_auto_readahead_size: 262144
    prepopulate_block_cache: 0
    initial_auto_readahead_size: 8192
    num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
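Taken together, the memtable numbers bound each shard's write path: 64 memtables of 16 MiB give a 1 GiB in-memory ceiling per column family, and with min_write_buffer_number_to_merge = 6 a flush typically writes about 96 MiB to L0 at a time. A quick check of that arithmetic:

    #include <cstdio>

    int main() {
      const unsigned long long write_buffer_size = 16777216ULL;  // 16 MiB
      const unsigned long long max_write_buffer_number = 64;
      const unsigned long long min_write_buffer_number_to_merge = 6;
      std::printf("per-CF memtable ceiling: %llu MiB\n",
                  (write_buffer_size * max_write_buffer_number) >> 20);           // 1024
      std::printf("typical flush batch:     %llu MiB\n",
                  (write_buffer_size * min_write_buffer_number_to_merge) >> 20);  // 96
      return 0;
    }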
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29248600)
    cache_index_and_filter_blocks: 1
    cache_index_and_filter_blocks_with_high_priority: 0
    pin_l0_filter_and_index_blocks_in_cache: 0
    pin_top_level_index_and_filter: 1
    index_type: 0
    data_block_index_type: 0
    index_shortening: 1
    data_block_hash_table_util_ratio: 0.750000
    checksum: 4
    no_block_cache: 0
    block_cache: 0x556d2923edd0
    block_cache_name: BinnedLRUCache
    block_cache_options:
        capacity : 483183820
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
    block_cache_compressed: (nil)
    persistent_cache: (nil)
    block_size: 4096
    block_size_deviation: 10
    block_restart_interval: 16
    index_block_restart_interval: 1
    metadata_block_size: 4096
    partition_filters: 0
    use_delta_encoding: 1
    filter_policy: bloomfilter
    whole_key_filtering: 1
    verify_compression: 0
    read_amp_bytes_per_bit: 0
    format_version: 5
    enable_index_compression: 1
    block_align: 0
    max_auto_readahead_size: 262144
    prepopulate_block_cache: 0
    initial_auto_readahead_size: 8192
    num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d292485c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923e430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
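All sizes in the options dump above are raw byte counts. A quick, illustrative Python check (the helper is ours, not part of Ceph or RocksDB) converts the [O-0] values into the binary units they represent:

    def iec(n):
        # Render a byte count in binary (IEC) units.
        units = ["B", "KiB", "MiB", "GiB", "TiB"]
        i, v = 0, float(n)
        while v >= 1024 and i < len(units) - 1:
            v /= 1024.0
            i += 1
        return "%g %s" % (v, units[i])

    opts = {
        "write_buffer_size": 16777216,                        # 16 MiB per memtable
        "target_file_size_base": 67108864,                    # 64 MiB per SST file
        "max_bytes_for_level_base": 1073741824,               # 1 GiB for L1
        "max_compaction_bytes": 1677721600,                   # 1.5625 GiB = 25 x target_file_size_base
        "soft_pending_compaction_bytes_limit": 68719476736,   # 64 GiB
        "hard_pending_compaction_bytes_limit": 274877906944,  # 256 GiB
    }
    for name, value in opts.items():
        print(name, "=", iec(value))

With write_buffer_size at 16 MiB and min_write_buffer_number_to_merge at 6, roughly 96 MiB of memtable data accumulates before a flush reaches L0, which is the usual reading of this tuning.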
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
    [94-line options dump for column family [O-1] elided: byte-for-byte identical to the [O-0] options printed above]
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
    [94-line options dump for column family [O-2] elided: byte-for-byte identical to the [O-0] options printed above]
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:635]        (skipping printing options)
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:635]        (skipping printing options)
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
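The recovered column family names encode BlueStore's RocksDB sharding: three shards each for the m-*, p-* and O-* key prefixes, plus the unsharded default, L and P families. A tiny sketch (the list is copied from the lines above) groups them by prefix:

    from collections import Counter

    cfs = ["default", "m-0", "m-1", "m-2", "p-0", "p-1", "p-2",
           "O-0", "O-1", "O-2", "L", "P"]
    print(Counter(name.split("-")[0] for name in cfs))
    # Counter({'m': 3, 'p': 3, 'O': 3, 'default': 1, 'L': 1, 'P': 1})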
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c6aee85c-5b9e-4101-9489-3128665537aa
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400471950813, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400471951319, "job": 1, "event": "recovery_finished"}
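The EVENT_LOG_v1 records are single-line JSON payloads embedded in the journal text, so the WAL recovery timing can be pulled out with a few lines of Python (an illustrative parser run on the two lines above, not a RocksDB tool):

    import json, re

    lines = [
        'rocksdb: EVENT_LOG_v1 {"time_micros": 1764400471950813, "job": 1, '
        '"event": "recovery_started", "wal_files": [31]}',
        'rocksdb: EVENT_LOG_v1 {"time_micros": 1764400471951319, "job": 1, '
        '"event": "recovery_finished"}',
    ]

    events = {}
    for line in lines:
        m = re.search(r'EVENT_LOG_v1 (\{.*\})', line)
        if m:
            payload = json.loads(m.group(1))
            events[payload["event"]] = payload

    delta = (events["recovery_finished"]["time_micros"]
             - events["recovery_started"]["time_micros"])
    print("recovery of WAL", events["recovery_started"]["wal_files"],
          "took", delta, "us")   # -> recovery of WAL [31] took 506 us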
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
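The option string reported by _open_db is a flat comma-separated key=value list, the same format BlueStore accepts in bluestore_rocksdb_options. Splitting it back into a dict is trivial and useful for cross-checking against the Options.* dump above (the string is copied from the line above; nothing here queries a live daemon):

    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    options = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(options["write_buffer_size"])   # 16777216, matching Options.write_buffer_size

Each pair reappears verbatim in the per-column-family dumps, which is a quick way to confirm a configured override actually took effect.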
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: freelist init
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: freelist _read_cfg
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
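The allocator line mixes hex and human-readable units; the figures are easy to verify (values copied from the line above):

    capacity = 0x1bfc00000                 # 7511998464 bytes
    free = 0x1bfbfd000
    block = 0x1000                         # 4 KiB allocation unit

    print(capacity / 2**30)                # 6.99609375 -> reported as "7.0 GiB"
    print((capacity - free) // block)      # 3 blocks (12 KiB) already allocated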
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bluefs umount
Nov 29 02:14:31 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) close
Nov 29 02:14:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:32 np0005539550 podman[84912]: 2025-11-29 07:14:32.060242643 +0000 UTC m=+0.563019147 container remove b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_jones, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:14:32 np0005539550 systemd[1]: libpod-conmon-b7c88acaadadb9342287244e7ea44d2f76051546d1114df303d3b97095b87701.scope: Deactivated successfully.
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
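The ioctl(F_SET_FILE_RW_HINT) failure is benign: BlueStore tries to attach a write-lifetime hint to the block device, and EINVAL (22) just means the kernel or underlying device rejects such hints, as is common for a virtual disk. A minimal, illustrative reproduction of the same fcntl call follows; Python's fcntl module does not export these constants, so the values below are assumptions taken from <linux/fcntl.h>:

    import fcntl, struct

    F_SET_FILE_RW_HINT = 1038   # assumed: F_LINUX_SPECIFIC_BASE (1024) + 14
    RWH_WRITE_LIFE_SHORT = 2    # assumed: one of the RWH_WRITE_LIFE_* hints

    with open("/var/lib/ceph/osd/ceph-0/block", "rb") as dev:
        try:
            fcntl.fcntl(dev.fileno(), F_SET_FILE_RW_HINT,
                        struct.pack("Q", RWH_WRITE_LIFE_SHORT))
        except OSError as e:
            print(e.errno, e.strerror)   # 22 Invalid argument, as in the log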
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bdev(0x556d2a08f400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bluefs mount
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bluefs mount shared_bdev_used = 4718592
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
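The umount and remount around these lines appear to be BlueStore closing its first probe of the DB and reopening it for service, which is why a second full options dump follows. The two db_paths budgets are derived from the device size: 7136398540 is exactly 95% of the 7511998464-byte device reported at open, i.e. BlueStore hands RocksDB a 95% share per path and keeps 5% headroom:

    capacity = 7511998464            # bdev open size above
    print(int(capacity * 0.95))      # 7136398540, matching db and db.slow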
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: RocksDB version: 7.9.2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Git sha 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: DB SUMMARY
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: DB Session ID:  FRTJPA98RHXIGU0AN4HX
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: CURRENT file:  CURRENT
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: IDENTITY file:  IDENTITY
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                         Options.error_if_exists: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.create_if_missing: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                         Options.paranoid_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                                     Options.env: 0x556d2928a380
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                                Options.info_log: 0x556d29225b00
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_file_opening_threads: 16
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                              Options.statistics: (nil)
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.use_fsync: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.max_log_file_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                         Options.allow_fallocate: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.use_direct_reads: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.create_missing_column_families: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                              Options.db_log_dir: 
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                                 Options.wal_dir: db.wal
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.advise_random_on_open: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.write_buffer_manager: 0x556d2a168460
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                            Options.rate_limiter: (nil)
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.unordered_write: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.row_cache: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                              Options.wal_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.allow_ingest_behind: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.two_write_queues: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.manual_wal_flush: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.wal_compression: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.atomic_flush: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.log_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.allow_data_in_errors: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.db_host_id: __hostname__
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_background_jobs: 4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_background_compactions: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_subcompactions: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.max_open_files: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.bytes_per_sync: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.max_background_flushes: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Compression algorithms supported:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kZSTD supported: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kXpressCompression supported: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kBZip2Compression supported: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kLZ4Compression supported: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kZlibCompression supported: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kLZ4HCCompression supported: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         kSnappyCompression supported: 1
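So this build has LZ4, Zlib, LZ4HC and Snappy compiled in but no ZSTD, consistent with Options.compression: LZ4 above. The flags are easy to collect from the journal text (an illustrative parser, not a RocksDB API):

    import re

    flags = [
        "kZSTD supported: 0",
        "kXpressCompression supported: 0",
        "kBZip2Compression supported: 0",
        "kZSTDNotFinalCompression supported: 0",
        "kLZ4Compression supported: 1",
        "kZlibCompression supported: 1",
        "kLZ4HCCompression supported: 1",
        "kSnappyCompression supported: 1",
    ]
    support = {}
    for line in flags:
        m = re.match(r"(k\w+) supported: ([01])", line)
        support[m.group(1)] = m.group(2) == "1"
    print(sorted(name for name, ok in support.items() if ok))
    # ['kLZ4Compression', 'kLZ4HCCompression', 'kSnappyCompression', 'kZlibCompression']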
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
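For reference, the [default] column-family dump above maps onto the stock RocksDB C++ API roughly as follows. This is a minimal sketch, not Ceph's actual code path: BlueStore assembles these options from its bluestore_rocksdb_options string and supplies its own BinnedLRUCache and int64_array/bitwise_xor merge operator, so the stock NewLRUCache and NewBloomFilterPolicy below are stand-ins, and the bloom bits-per-key (not shown in the dump) is assumed to be 10.

// Minimal sketch of the logged [default] options via the stock RocksDB C++ API.
// Assumptions: NewLRUCache stands in for Ceph's BinnedLRUCache; 10 bits/key for
// the bloom filter; Ceph's custom merge operator is omitted (not in stock RocksDB).
#include <rocksdb/cache.h>
#include <rocksdb/filter_policy.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>

rocksdb::ColumnFamilyOptions MakeLoggedCfOptions() {
  rocksdb::BlockBasedTableOptions table;
  table.block_size = 4096;                               // block_size: 4096
  table.cache_index_and_filter_blocks = true;            // cache_index_and_filter_blocks: 1
  table.pin_top_level_index_and_filter = true;           // pin_top_level_index_and_filter: 1
  table.format_version = 5;                              // format_version: 5
  table.block_cache = rocksdb::NewLRUCache(483183820, 4);        // capacity / num_shard_bits
  table.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // filter_policy: bloomfilter

  rocksdb::ColumnFamilyOptions cf;
  cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
  cf.write_buffer_size = 16777216;                       // 16 MiB memtables
  cf.max_write_buffer_number = 64;
  cf.min_write_buffer_number_to_merge = 6;
  cf.compression = rocksdb::kLZ4Compression;             // Options.compression: LZ4
  cf.num_levels = 7;
  cf.level0_file_num_compaction_trigger = 8;
  cf.level0_slowdown_writes_trigger = 20;
  cf.level0_stop_writes_trigger = 36;
  cf.target_file_size_base = 67108864;                   // 64 MiB target SSTs
  cf.max_bytes_for_level_base = 1073741824;              // 1 GiB at L1
  cf.max_bytes_for_level_multiplier = 8;
  cf.compaction_style = rocksdb::kCompactionStyleLevel;
  cf.compaction_pri = rocksdb::CompactionPri::kMinOverlappingRatio;
  cf.soft_pending_compaction_bytes_limit = 68719476736ULL;
  cf.hard_pending_compaction_bytes_limit = 274877906944ULL;
  cf.force_consistency_checks = true;
  cf.ttl = 2592000;                                      // 30 days
  return cf;
}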
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
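RocksDB prints one of these blocks per column family while it replays db/MANIFEST-000032, and every family recorded in the manifest must be named when the database is opened; the m-* and p-* names here come from BlueStore's RocksDB column-family sharding. Below is a minimal sketch of such an open with the stock API, assuming uniform options and a placeholder "db" path; the real path and per-family options are BlueStore's.

// Minimal sketch: opening an existing DB whose MANIFEST lists the column
// families seen in this log. The path and uniform options are assumptions;
// BlueStore performs this open internally with its sharded per-family options.
#include <rocksdb/db.h>
#include <string>
#include <vector>

rocksdb::Status OpenLoggedDb(rocksdb::DB** db,
                             std::vector<rocksdb::ColumnFamilyHandle*>* handles) {
  rocksdb::DBOptions db_opts;            // per-DB settings (WAL, env, ...)
  rocksdb::ColumnFamilyOptions cf_opts;  // per-family settings as dumped above

  const std::vector<std::string> names = {
      rocksdb::kDefaultColumnFamilyName,  // [default]
      "m-0", "m-1", "m-2",                // sharded families from the dump
      "p-0", "p-1",                       // (plus any others in the manifest)
  };
  std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
  for (const auto& name : names) cfs.emplace_back(name, cf_opts);

  // The open fails unless every column family in the MANIFEST is listed here.
  return rocksdb::DB::Open(db_opts, "db", cfs, handles, db);
}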
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
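With kCompactionStyleLevel and level_compaction_dynamic_level_bytes off, the per-level capacity follows mechanically from the values dumped above for [p-1]: max_bytes_for_level_base 1073741824 grown by max_bytes_for_level_multiplier 8 per level, with all the addtl factors at 1 so they drop out. A short sketch of that arithmetic, assuming this release computes level targets the standard way:

    # Level capacities implied by the [p-1] dump above:
    # base = 1 GiB, multiplier = 8, num_levels = 7, addtl factors all 1.
    base = 1073741824
    multiplier = 8.0
    num_levels = 7

    size = float(base)
    for level in range(1, num_levels):
        print(f"L{level}: {size / 2**30:,.0f} GiB")
        size *= multiplier  # addtl[i] == 1, so a plain multiply

That puts L1 at 1 GiB and L6 at 32,768 GiB, a ceiling an OSD's DB volume will never approach in practice.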
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252040)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f610
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
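Both p-* families above share one BinnedLRUCache (block_cache 0x556d2923f610, capacity 483183820 bytes), while the O-* families that follow use a second cache (0x556d2923f770, 536870912 bytes). Assuming the usual sharded-LRU layout, num_shard_bits 4 means 2^4 = 16 shards splitting each capacity evenly; a back-of-the-envelope sketch, not an exact accounting:

    # Per-shard budgets for the two block caches in these dumps.
    # Assumes 2**num_shard_bits equal shards (standard sharded-cache layout).
    caches = {
        "p-* cache 0x556d2923f610": 483_183_820,
        "O-* cache 0x556d2923f770": 536_870_912,
    }
    shards = 2 ** 4  # num_shard_bits: 4
    for name, capacity in caches.items():
        print(f"{name}: {capacity / 2**20:7.1f} MiB total, "
              f"{capacity / shards / 2**20:5.2f} MiB/shard x {shards}")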
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252020)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252020)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:           Options.merge_operator: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.compaction_filter_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.sst_partitioner_factory: None
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556d29252020)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556d2923f770
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.write_buffer_size: 16777216
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.max_write_buffer_number: 64
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.compression: LZ4
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.num_levels: 7
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.level: 32767
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.compression_opts.strategy: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                  Options.compression_opts.enabled: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.arena_block_size: 1048576
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.disable_auto_compactions: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.inplace_update_support: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.bloom_locality: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                    Options.max_successive_merges: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.paranoid_file_checks: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.force_consistency_checks: 1
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.report_bg_io_stats: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                               Options.ttl: 2592000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                       Options.enable_blob_files: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                           Options.min_blob_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                          Options.blob_file_size: 268435456
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb:                Options.blob_file_starting_level: 0
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: c6aee85c-5b9e-4101-9489-3128665537aa
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400472235443, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 29 02:14:32 np0005539550 podman[85151]: 2025-11-29 07:14:32.205468802 +0000 UTC m=+0.022851637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:32 np0005539550 podman[85151]: 2025-11-29 07:14:32.834836065 +0000 UTC m=+0.652218870 container create ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:32 np0005539550 ceph-osd[84753]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400472835752, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400472, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c6aee85c-5b9e-4101-9489-3128665537aa", "db_session_id": "FRTJPA98RHXIGU0AN4HX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400473581584, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400472, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c6aee85c-5b9e-4101-9489-3128665537aa", "db_session_id": "FRTJPA98RHXIGU0AN4HX", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400473588109, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400473, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "c6aee85c-5b9e-4101-9489-3128665537aa", "db_session_id": "FRTJPA98RHXIGU0AN4HX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400473594568, "job": 1, "event": "recovery_finished"}
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 29 02:14:33 np0005539550 systemd[1]: Started libpod-conmon-ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d.scope.
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556d2a22fc00
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: DB pointer 0x556d2a151a00
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
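The _open_db line carries BlueStore's RocksDB tuning as one comma-separated k=v string; splitting it out makes the individual knobs easy to inspect. A sketch using the exact string from the line above:

    # Sketch: turn the option string logged by _open_db into a dict.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    assert opts["write_buffer_size"] == "16777216"           # 16 MiB memtables
    assert opts["max_bytes_for_level_base"] == "1073741824"  # 1 GiB at L1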
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.4 total, 1.4 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.4 total, 1.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.4 total, 1.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.4 total, 1.4 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 460.80 MB usag
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: _get_class not permitted to load lua
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: _get_class not permitted to load sdk
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: _get_class not permitted to load test_remote_reads
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: osd.0 0 load_pgs
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: osd.0 0 load_pgs opened 0 pgs
Nov 29 02:14:33 np0005539550 ceph-osd[84753]: osd.0 0 log_to_monitors true
Nov 29 02:14:33 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0[84749]: 2025-11-29T07:14:33.644+0000 7f0283e56740 -1 osd.0 0 log_to_monitors true
Nov 29 02:14:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 29 02:14:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 02:14:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e222feec2b959a161c136e65017a8103d7f6a2674f6e0ca9bb3b14e6374e037/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e222feec2b959a161c136e65017a8103d7f6a2674f6e0ca9bb3b14e6374e037/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e222feec2b959a161c136e65017a8103d7f6a2674f6e0ca9bb3b14e6374e037/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e222feec2b959a161c136e65017a8103d7f6a2674f6e0ca9bb3b14e6374e037/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
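The four xfs messages above are the kernel noting that this filesystem uses legacy 32-bit inode timestamps, which run out at epoch 0x7fffffff. The cutoff date is quick to verify:

    # Sketch: the "until 2038 (0x7fffffff)" limit is just a signed 32-bit epoch.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00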
Nov 29 02:14:33 np0005539550 podman[85151]: 2025-11-29 07:14:33.67557521 +0000 UTC m=+1.492958035 container init ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shamir, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:33 np0005539550 podman[85151]: 2025-11-29 07:14:33.684185556 +0000 UTC m=+1.501568361 container start ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shamir, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:14:33 np0005539550 podman[85151]: 2025-11-29 07:14:33.689831041 +0000 UTC m=+1.507213866 container attach ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
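The initial_weight of 0.0068 is consistent with the usual Ceph convention of CRUSH weight = device capacity in TiB (an assumption here, since the LV size itself is not in this excerpt), which would put the backing device at roughly 7 GiB:

    # Sketch, assuming weight = capacity_bytes / 2**40 (Ceph's TiB convention):
    weight_tib = 0.0068
    print(f"{weight_tib * 1024:.2f} GiB")   # ~6.96 GiB backing LV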
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:34 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:34 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
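These "failed to return metadata" messages repeat below until each OSD finishes booting and registers its metadata with the mon; the (2) is ENOENT, not a fault. The same race can be watched from the CLI with the `ceph osd metadata` subcommand visible in the audit lines; a hypothetical retry helper:

    # Sketch: retry `ceph osd metadata <id>` until the mon can answer.
    import json, subprocess, time

    def wait_for_osd_metadata(osd_id, attempts=30, delay=2.0):  # hypothetical
        for _ in range(attempts):
            proc = subprocess.run(
                ["ceph", "osd", "metadata", str(osd_id), "-f", "json"],
                capture_output=True, text=True)
            if proc.returncode == 0:
                return json.loads(proc.stdout)
            time.sleep(delay)   # ENOENT while the OSD is still booting
        raise TimeoutError(f"osd.{osd_id} metadata not available")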
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]: {
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]:        "osd_id": 0,
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]:        "type": "bluestore"
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]:    }
Nov 29 02:14:34 np0005539550 reverent_shamir[85349]: }
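The reverent_shamir records above are one JSON document (it has the shape of ceph-volume's per-OSD listing, though the exact command is not shown in this excerpt) printed one line per journal record. Stripping the journal prefixes recovers it:

    # Sketch: rebuild the JSON that the container printed line by line.
    import json, re

    PREFIX = re.compile(r'^.*reverent_shamir\[\d+\]: ')

    def recover_json(journal_lines):                   # hypothetical helper
        body = "\n".join(PREFIX.sub("", l) for l in journal_lines
                         if "reverent_shamir[" in l)
        return json.loads(body)

    # recover_json(lines)["5dd67027-4f06-4800-93bd-47ed1a74c5e6"]["device"]
    # -> "/dev/mapper/ceph_vg0-ceph_lv0"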
Nov 29 02:14:34 np0005539550 systemd[1]: libpod-ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d.scope: Deactivated successfully.
Nov 29 02:14:34 np0005539550 podman[85151]: 2025-11-29 07:14:34.681369258 +0000 UTC m=+2.498752083 container died ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:14:34 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 29 02:14:34 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 29 02:14:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4e222feec2b959a161c136e65017a8103d7f6a2674f6e0ca9bb3b14e6374e037-merged.mount: Deactivated successfully.
Nov 29 02:14:34 np0005539550 podman[85151]: 2025-11-29 07:14:34.755490969 +0000 UTC m=+2.572873774 container remove ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shamir, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:34 np0005539550 systemd[1]: libpod-conmon-ac0170dce357e0d4e3815b148009f5c3f64d123c29dc19596bda08024fe2356d.scope: Deactivated successfully.
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:14:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 29 02:14:35 np0005539550 ceph-osd[84753]: osd.0 0 done with init, starting boot process
Nov 29 02:14:35 np0005539550 ceph-osd[84753]: osd.0 0 start_boot
Nov 29 02:14:35 np0005539550 ceph-osd[84753]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 29 02:14:35 np0005539550 ceph-osd[84753]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 29 02:14:35 np0005539550 ceph-osd[84753]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 29 02:14:35 np0005539550 ceph-osd[84753]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 29 02:14:35 np0005539550 ceph-osd[84753]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:35 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:35 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:35 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/31126097; not ready for session (expect reconnect)
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:35 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:36 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/31126097; not ready for session (expect reconnect)
Nov 29 02:14:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:36 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:36 np0005539550 ceph-mon[74435]: from='osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Nov 29 02:14:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 02:14:37 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/31126097; not ready for session (expect reconnect)
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:37 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:37 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:37 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v51: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:38 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/31126097; not ready for session (expect reconnect)
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:38 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:38 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:38 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:38 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2487326386; not ready for session (expect reconnect)
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:38 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:39 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/31126097; not ready for session (expect reconnect)
Nov 29 02:14:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:39 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:39 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2487326386; not ready for session (expect reconnect)
Nov 29 02:14:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:14:39 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: from='osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:40 np0005539550 podman[85637]: 2025-11-29 07:14:40.301444843 +0000 UTC m=+0.185138104 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:40 np0005539550 podman[85637]: 2025-11-29 07:14:40.417510063 +0000 UTC m=+0.301203304 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:40 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/31126097; not ready for session (expect reconnect)
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:40 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:40 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2487326386; not ready for session (expect reconnect)
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:40 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:14:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 14.692 iops: 3761.250 elapsed_sec: 0.798
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: log_channel(cluster) log [WRN] : OSD bench result of 3761.249502 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
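The two lines above are self-consistent and worth sanity-checking: 12288000 bytes in 4 KiB IOs is 3000 operations, and at the logged elapsed time that reproduces both the IOPS and bandwidth figures (the logged elapsed_sec is rounded, so the recomputed IOPS lands within a fraction of a percent):

    # Sketch: recompute the bench numbers from "count 12288000 bsize 4 KiB".
    count_bytes, bsize, elapsed = 12288000, 4096, 0.798
    ios = count_bytes / bsize            # 3000 IOs
    iops = ios / elapsed                 # ~3759 (log: 3761.250, elapsed rounded)
    bw_mib = iops * bsize / 2**20        # ~14.69 MiB/s (log: 14.692)
    # 3761 IOPS is far above mclock's 500 IOPS plausibility cap for an hdd-class
    # OSD, so the default 315 IOPS capacity is kept and the warning is emitted.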
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 0 waiting for initial osdmap
Nov 29 02:14:41 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0[84749]: 2025-11-29T07:14:41.142+0000 7f027fdd6640 -1 osd.0 0 waiting for initial osdmap
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 9 set_numa_affinity not setting numa affinity
Nov 29 02:14:41 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-osd-0[84749]: 2025-11-29T07:14:41.548+0000 7f027b3fe640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 29 02:14:41 np0005539550 ceph-osd[84753]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 29 02:14:41 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/31126097; not ready for session (expect reconnect)
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:41 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:41 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2487326386; not ready for session (expect reconnect)
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:41 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:42 np0005539550 ceph-osd[84753]: osd.0 9 tick checking mon for new map
Nov 29 02:14:42 np0005539550 podman[85987]: 2025-11-29 07:14:42.445406966 +0000 UTC m=+0.050799343 container create 98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:14:42 np0005539550 systemd[1]: Started libpod-conmon-98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372.scope.
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:42 np0005539550 podman[85987]: 2025-11-29 07:14:42.423772613 +0000 UTC m=+0.029165020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Nov 29 02:14:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097] boot
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:42 np0005539550 ceph-osd[84753]: osd.0 10 state: booting -> active
Nov 29 02:14:42 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: OSD bench result of 3761.249502 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:42 np0005539550 podman[85987]: 2025-11-29 07:14:42.546942878 +0000 UTC m=+0.152335255 container init 98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:14:42 np0005539550 podman[85987]: 2025-11-29 07:14:42.560251812 +0000 UTC m=+0.165644189 container start 98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:14:42 np0005539550 podman[85987]: 2025-11-29 07:14:42.565318081 +0000 UTC m=+0.170710458 container attach 98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_yonath, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:14:42 np0005539550 hopeful_yonath[86003]: 167 167
Nov 29 02:14:42 np0005539550 systemd[1]: libpod-98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372.scope: Deactivated successfully.
Nov 29 02:14:42 np0005539550 podman[85987]: 2025-11-29 07:14:42.568398416 +0000 UTC m=+0.173790803 container died 98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:14:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4de19dc5364471118221c9bc878e3c4598b78ff0887f3a0e6d96dcc1e8230407-merged.mount: Deactivated successfully.
Nov 29 02:14:42 np0005539550 podman[85987]: 2025-11-29 07:14:42.619640109 +0000 UTC m=+0.225032486 container remove 98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:14:42 np0005539550 systemd[1]: libpod-conmon-98aa1ea84dd8477ab091e9e1c2948b382dd240964d9c74b36bdc625756c00372.scope: Deactivated successfully.
Nov 29 02:14:42 np0005539550 podman[86027]: 2025-11-29 07:14:42.794696026 +0000 UTC m=+0.050077053 container create 663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:14:42 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2487326386; not ready for session (expect reconnect)
Nov 29 02:14:42 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:42 np0005539550 systemd[1]: Started libpod-conmon-663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564.scope.
Nov 29 02:14:42 np0005539550 podman[86027]: 2025-11-29 07:14:42.773386342 +0000 UTC m=+0.028767389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:14:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a075077670387e32403704250be625d2b9c8b93449b4a576a06d52149e371ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a075077670387e32403704250be625d2b9c8b93449b4a576a06d52149e371ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a075077670387e32403704250be625d2b9c8b93449b4a576a06d52149e371ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a075077670387e32403704250be625d2b9c8b93449b4a576a06d52149e371ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:42 np0005539550 podman[86027]: 2025-11-29 07:14:42.896381872 +0000 UTC m=+0.151762929 container init 663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:14:42 np0005539550 podman[86027]: 2025-11-29 07:14:42.903339863 +0000 UTC m=+0.158720900 container start 663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:42 np0005539550 podman[86027]: 2025-11-29 07:14:42.912402971 +0000 UTC m=+0.167784028 container attach 663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:14:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 29 02:14:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:43 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] creating mgr pool
Nov 29 02:14:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 29 02:14:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:14:43 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2487326386; not ready for session (expect reconnect)
Nov 29 02:14:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:43 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v56: 0 pgs: ; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Nov 29 02:14:44 np0005539550 gallant_pare[86044]: [
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:    {
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "available": false,
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "ceph_device": false,
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "lsm_data": {},
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "lvs": [],
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "path": "/dev/sr0",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "rejected_reasons": [
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "Has a FileSystem",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "Insufficient space (<5GB)"
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        ],
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        "sys_api": {
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "actuators": null,
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "device_nodes": "sr0",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "devname": "sr0",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "human_readable_size": "482.00 KB",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "id_bus": "ata",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "model": "QEMU DVD-ROM",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "nr_requests": "2",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "parent": "/dev/sr0",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "partitions": {},
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "path": "/dev/sr0",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "removable": "1",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "rev": "2.5+",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "ro": "0",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "rotational": "1",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "sas_address": "",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "sas_device_handle": "",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "scheduler_mode": "mq-deadline",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "sectors": 0,
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "sectorsize": "2048",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "size": 493568.0,
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "support_discard": "2048",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "type": "disk",
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:            "vendor": "QEMU"
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:        }
Nov 29 02:14:44 np0005539550 gallant_pare[86044]:    }
Nov 29 02:14:44 np0005539550 gallant_pare[86044]: ]
Nov 29 02:14:44 np0005539550 systemd[1]: libpod-663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564.scope: Deactivated successfully.
Nov 29 02:14:44 np0005539550 systemd[1]: libpod-663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564.scope: Consumed 1.313s CPU time.
Nov 29 02:14:44 np0005539550 podman[86027]: 2025-11-29 07:14:44.194113568 +0000 UTC m=+1.449494605 container died 663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:44 np0005539550 ceph-mon[74435]: osd.0 [v2:192.168.122.100:6802/31126097,v1:192.168.122.100:6803/31126097] boot
Nov 29 02:14:44 np0005539550 python3[87241]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:44 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/2487326386; not ready for session (expect reconnect)
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:45 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386] boot
Nov 29 02:14:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1a075077670387e32403704250be625d2b9c8b93449b4a576a06d52149e371ce-merged.mount: Deactivated successfully.
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: OSD bench result of 5934.887354 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:14:45 np0005539550 podman[86027]: 2025-11-29 07:14:45.340797826 +0000 UTC m=+2.596178853 container remove 663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:14:45 np0005539550 systemd[1]: libpod-conmon-663e67593a3c491ad6f6e7b95521fade9ae810c8470acd6d6a86c2d4418e6564.scope: Deactivated successfully.
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:14:45 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 02:14:45 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:14:45 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 02:14:45 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 02:14:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 02:14:45 np0005539550 ceph-mgr[74726]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 02:14:45 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 02:14:45 np0005539550 podman[87243]: 2025-11-29 07:14:45.462304016 +0000 UTC m=+0.884458815 container create 41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879 (image=quay.io/ceph/ceph:v18, name=ecstatic_euclid, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:45 np0005539550 systemd[1]: Started libpod-conmon-41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879.scope.
Nov 29 02:14:45 np0005539550 podman[87243]: 2025-11-29 07:14:45.439533202 +0000 UTC m=+0.861688031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:14:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8691b4b0a317a35f9e3bf080d595890e2865790f38eeed21644f93169c125bc7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8691b4b0a317a35f9e3bf080d595890e2865790f38eeed21644f93169c125bc7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8691b4b0a317a35f9e3bf080d595890e2865790f38eeed21644f93169c125bc7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:45 np0005539550 podman[87243]: 2025-11-29 07:14:45.554950784 +0000 UTC m=+0.977105603 container init 41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879 (image=quay.io/ceph/ceph:v18, name=ecstatic_euclid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:14:45 np0005539550 podman[87243]: 2025-11-29 07:14:45.564851255 +0000 UTC m=+0.987006054 container start 41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879 (image=quay.io/ceph/ceph:v18, name=ecstatic_euclid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:14:45 np0005539550 podman[87243]: 2025-11-29 07:14:45.570442149 +0000 UTC m=+0.992596958 container attach 41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879 (image=quay.io/ceph/ceph:v18, name=ecstatic_euclid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:14:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v58: 0 pgs: ; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: osd.1 [v2:192.168.122.101:6800/2487326386,v1:192.168.122.101:6801/2487326386] boot
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: Adjusting osd_memory_target on compute-1 to  5247M
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552140068' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:14:46 np0005539550 ecstatic_euclid[87266]: 
Nov 29 02:14:46 np0005539550 ecstatic_euclid[87266]: {"fsid":"b66774a7-56d9-5535-bd8c-681234404870","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":165,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1764400483,"num_in_osds":2,"osd_in_since":1764400457,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":446980096,"bytes_avail":7065018368,"bytes_total":7511998464},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-29T07:13:59.659014+0000","services":{}},"progress_events":{}}
Nov 29 02:14:46 np0005539550 systemd[1]: libpod-41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879.scope: Deactivated successfully.
Nov 29 02:14:46 np0005539550 podman[87243]: 2025-11-29 07:14:46.378134758 +0000 UTC m=+1.800289577 container died 41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879 (image=quay.io/ceph/ceph:v18, name=ecstatic_euclid, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8691b4b0a317a35f9e3bf080d595890e2865790f38eeed21644f93169c125bc7-merged.mount: Deactivated successfully.
Nov 29 02:14:46 np0005539550 podman[87243]: 2025-11-29 07:14:46.451122868 +0000 UTC m=+1.873277667 container remove 41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879 (image=quay.io/ceph/ceph:v18, name=ecstatic_euclid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:46 np0005539550 systemd[1]: libpod-conmon-41e1b0463baae67b65ad902f03b2f728f2366fc2491762933faf1ca52856d879.scope: Deactivated successfully.
Nov 29 02:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:46 np0005539550 python3[87327]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:47 np0005539550 podman[87328]: 2025-11-29 07:14:47.04978634 +0000 UTC m=+0.042596828 container create eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b (image=quay.io/ceph/ceph:v18, name=heuristic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:47 np0005539550 systemd[1]: Started libpod-conmon-eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b.scope.
Nov 29 02:14:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97af3a33bca1a56c03ae417387ab90e3899470f2862eed433ba4922f97bd1067/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97af3a33bca1a56c03ae417387ab90e3899470f2862eed433ba4922f97bd1067/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:47 np0005539550 podman[87328]: 2025-11-29 07:14:47.032958209 +0000 UTC m=+0.025768717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:14:47 np0005539550 podman[87328]: 2025-11-29 07:14:47.136847396 +0000 UTC m=+0.129657914 container init eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b (image=quay.io/ceph/ceph:v18, name=heuristic_swartz, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:14:47 np0005539550 podman[87328]: 2025-11-29 07:14:47.144495045 +0000 UTC m=+0.137305533 container start eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b (image=quay.io/ceph/ceph:v18, name=heuristic_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:14:47 np0005539550 podman[87328]: 2025-11-29 07:14:47.148722421 +0000 UTC m=+0.141533159 container attach eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b (image=quay.io/ceph/ceph:v18, name=heuristic_swartz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 29 02:14:47 np0005539550 ceph-osd[84753]: osd.0 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 29 02:14:47 np0005539550 ceph-osd[84753]: osd.0 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 29 02:14:47 np0005539550 ceph-osd[84753]: osd.0 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 29 02:14:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] creating main.db for devicehealth
Nov 29 02:14:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:14:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3917775098' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 unknown; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:14:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 29 02:14:48 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.pdhsqi(active, since 111s)
Nov 29 02:14:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3917775098' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Nov 29 02:14:49 np0005539550 heuristic_swartz[87345]: pool 'vms' created
Nov 29 02:14:49 np0005539550 systemd[1]: libpod-eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b.scope: Deactivated successfully.
Nov 29 02:14:49 np0005539550 podman[87328]: 2025-11-29 07:14:49.031464027 +0000 UTC m=+2.024274525 container died eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b (image=quay.io/ceph/ceph:v18, name=heuristic_swartz, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:49 np0005539550 ceph-mon[74435]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 29 02:14:49 np0005539550 ceph-mon[74435]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 29 02:14:49 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3917775098' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Nov 29 02:14:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-97af3a33bca1a56c03ae417387ab90e3899470f2862eed433ba4922f97bd1067-merged.mount: Deactivated successfully.
Nov 29 02:14:49 np0005539550 podman[87328]: 2025-11-29 07:14:49.202602296 +0000 UTC m=+2.195412784 container remove eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b (image=quay.io/ceph/ceph:v18, name=heuristic_swartz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:14:49 np0005539550 systemd[1]: libpod-conmon-eddce9c95f5d349025210d0a3efe5d2a527510187120f887f9a391f53866c97b.scope: Deactivated successfully.
Nov 29 02:14:49 np0005539550 python3[87425]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:49 np0005539550 podman[87426]: 2025-11-29 07:14:49.565087877 +0000 UTC m=+0.025173730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:14:49 np0005539550 podman[87426]: 2025-11-29 07:14:49.730477919 +0000 UTC m=+0.190563732 container create cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:49 np0005539550 systemd[1]: Started libpod-conmon-cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d.scope.
Nov 29 02:14:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52ea4f51bb842613d1ec02601adeaed04f89c4c582bd06275697bf054a8c544/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52ea4f51bb842613d1ec02601adeaed04f89c4c582bd06275697bf054a8c544/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:49 np0005539550 podman[87426]: 2025-11-29 07:14:49.819901209 +0000 UTC m=+0.279987052 container init cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:49 np0005539550 podman[87426]: 2025-11-29 07:14:49.827991411 +0000 UTC m=+0.288077224 container start cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:14:49 np0005539550 podman[87426]: 2025-11-29 07:14:49.832770632 +0000 UTC m=+0.292856475 container attach cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
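Each of these short-lived helper containers leaves the create/init/start/attach/died/remove event trail seen here. A sketch for replaying that trail from podman's own event log (standard podman syntax, not a command taken from this capture):

    podman events --since 10m \
        --filter type=container --filter event=create --filter event=died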
Nov 29 02:14:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 29 02:14:50 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3917775098' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Nov 29 02:14:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Nov 29 02:14:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 2 unknown; 0 B data, 453 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:14:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:14:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3068283182' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 29 02:14:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:14:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3068283182' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Nov 29 02:14:51 np0005539550 keen_dubinsky[87441]: pool 'volumes' created
Nov 29 02:14:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Nov 29 02:14:51 np0005539550 systemd[1]: libpod-cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d.scope: Deactivated successfully.
Nov 29 02:14:51 np0005539550 podman[87426]: 2025-11-29 07:14:51.343376551 +0000 UTC m=+1.803462384 container died cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:51 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3068283182' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:51 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v66: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:14:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c52ea4f51bb842613d1ec02601adeaed04f89c4c582bd06275697bf054a8c544-merged.mount: Deactivated successfully.
Nov 29 02:14:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 29 02:14:53 np0005539550 ceph-mon[74435]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
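POOL_APP_NOT_ENABLED fires because freshly created pools carry no application tag yet. Clearing the warning is not part of this excerpt; a sketch, assuming the usual OpenStack mapping of these pools to RBD, would be:

    ceph osd pool application enable volumes rbd
    ceph osd pool application enable vms rbd
    ceph osd pool application enable images rbd
    ceph osd pool application enable backups rbd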
Nov 29 02:14:53 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3068283182' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:53 np0005539550 podman[87426]: 2025-11-29 07:14:53.307584669 +0000 UTC m=+3.767670482 container remove cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d (image=quay.io/ceph/ceph:v18, name=keen_dubinsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:53 np0005539550 systemd[1]: libpod-conmon-cb0f4f161f2521b8f0954cd9744d2a2d6e41979fc9e442972056af4fe16aa81d.scope: Deactivated successfully.
Nov 29 02:14:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Nov 29 02:14:53 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Nov 29 02:14:53 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:53 np0005539550 python3[87504]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:53 np0005539550 podman[87505]: 2025-11-29 07:14:53.703919248 +0000 UTC m=+0.039100823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:14:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v68: 3 pgs: 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:14:54 np0005539550 podman[87505]: 2025-11-29 07:14:54.151491659 +0000 UTC m=+0.486673204 container create 757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de (image=quay.io/ceph/ceph:v18, name=zen_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:14:54 np0005539550 systemd[1]: Started libpod-conmon-757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de.scope.
Nov 29 02:14:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce76e3e29f78681b37b6a72c651b8ca8288cd34cc3273b4dd28c94db0873f950/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce76e3e29f78681b37b6a72c651b8ca8288cd34cc3273b4dd28c94db0873f950/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:54 np0005539550 podman[87505]: 2025-11-29 07:14:54.321336893 +0000 UTC m=+0.656518438 container init 757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de (image=quay.io/ceph/ceph:v18, name=zen_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:54 np0005539550 podman[87505]: 2025-11-29 07:14:54.32958976 +0000 UTC m=+0.664771305 container start 757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de (image=quay.io/ceph/ceph:v18, name=zen_boyd, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:14:54 np0005539550 podman[87505]: 2025-11-29 07:14:54.334081493 +0000 UTC m=+0.669263058 container attach 757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de (image=quay.io/ceph/ceph:v18, name=zen_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 29 02:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Nov 29 02:14:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Nov 29 02:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 29 02:14:55 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Nov 29 02:14:55 np0005539550 zen_boyd[87520]: pool 'backups' created
Nov 29 02:14:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Nov 29 02:14:55 np0005539550 systemd[1]: libpod-757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de.scope: Deactivated successfully.
Nov 29 02:14:55 np0005539550 podman[87505]: 2025-11-29 07:14:55.793668124 +0000 UTC m=+2.128849669 container died 757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de (image=quay.io/ceph/ceph:v18, name=zen_boyd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:14:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ce76e3e29f78681b37b6a72c651b8ca8288cd34cc3273b4dd28c94db0873f950-merged.mount: Deactivated successfully.
Nov 29 02:14:55 np0005539550 podman[87505]: 2025-11-29 07:14:55.861191704 +0000 UTC m=+2.196373259 container remove 757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de (image=quay.io/ceph/ceph:v18, name=zen_boyd, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:14:55 np0005539550 systemd[1]: libpod-conmon-757904a9a9196f5f82e687805509889a4865ef7bc9a5eaa39644c443e75262de.scope: Deactivated successfully.
Nov 29 02:14:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 1 unknown, 1 creating+peering, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:14:56 np0005539550 python3[87584]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:56 np0005539550 podman[87585]: 2025-11-29 07:14:56.199512424 +0000 UTC m=+0.024335337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:14:56 np0005539550 podman[87585]: 2025-11-29 07:14:56.417578679 +0000 UTC m=+0.242401572 container create 7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102 (image=quay.io/ceph/ceph:v18, name=hopeful_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:14:56 np0005539550 systemd[1]: Started libpod-conmon-7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102.scope.
Nov 29 02:14:56 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1fa10c1074b90c2c1171dff55b9e8cbb7bb985bab7c399c4a3c0d14f9060dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d1fa10c1074b90c2c1171dff55b9e8cbb7bb985bab7c399c4a3c0d14f9060dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:56 np0005539550 podman[87585]: 2025-11-29 07:14:56.505666693 +0000 UTC m=+0.330489616 container init 7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102 (image=quay.io/ceph/ceph:v18, name=hopeful_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:14:56 np0005539550 podman[87585]: 2025-11-29 07:14:56.512727436 +0000 UTC m=+0.337550329 container start 7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102 (image=quay.io/ceph/ceph:v18, name=hopeful_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:14:56 np0005539550 podman[87585]: 2025-11-29 07:14:56.516391616 +0000 UTC m=+0.341214529 container attach 7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102 (image=quay.io/ceph/ceph:v18, name=hopeful_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:14:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 29 02:14:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:14:56 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1619350831' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Nov 29 02:14:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Nov 29 02:14:56 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:14:57
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Some PGs (0.250000) are unknown; try again later
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
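The pg_autoscaler lines above show each empty pool's space-based pg target (0.0) being quantized up to 32 PGs, after which the mgr dispatches the "osd pool set ... pg_num 32" commands logged below. The per-pool targets can be inspected at any time with:

    ceph osd pool autoscale-status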
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Nov 29 02:14:57 np0005539550 hopeful_wright[87600]: pool 'images' created
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Nov 29 02:14:57 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev b9ffa0e2-af15-4c2a-8e22-b9b018753723 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:14:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:14:57 np0005539550 systemd[1]: libpod-7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102.scope: Deactivated successfully.
Nov 29 02:14:57 np0005539550 podman[87585]: 2025-11-29 07:14:57.813239179 +0000 UTC m=+1.638062062 container died 7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102 (image=quay.io/ceph/ceph:v18, name=hopeful_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:14:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0d1fa10c1074b90c2c1171dff55b9e8cbb7bb985bab7c399c4a3c0d14f9060dc-merged.mount: Deactivated successfully.
Nov 29 02:14:57 np0005539550 podman[87585]: 2025-11-29 07:14:57.879936985 +0000 UTC m=+1.704759878 container remove 7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102 (image=quay.io/ceph/ceph:v18, name=hopeful_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:14:57 np0005539550 systemd[1]: libpod-conmon-7a14841a6fd9f460cd5e0e2376816a4045f3876c09bf346410b0800119a3a102.scope: Deactivated successfully.
Nov 29 02:14:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v74: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:14:58 np0005539550 python3[87663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:14:58 np0005539550 podman[87664]: 2025-11-29 07:14:58.292315237 +0000 UTC m=+0.084133483 container create 6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00 (image=quay.io/ceph/ceph:v18, name=kind_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:14:58 np0005539550 podman[87664]: 2025-11-29 07:14:58.234726299 +0000 UTC m=+0.026544565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:14:58 np0005539550 systemd[1]: Started libpod-conmon-6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00.scope.
Nov 29 02:14:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:14:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6a15ef8bdbcefc89ad68c5e83dd03d050d00932c3e23b6e4eb0fc2822bf236/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6a15ef8bdbcefc89ad68c5e83dd03d050d00932c3e23b6e4eb0fc2822bf236/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:14:58 np0005539550 podman[87664]: 2025-11-29 07:14:58.394298955 +0000 UTC m=+0.186117211 container init 6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00 (image=quay.io/ceph/ceph:v18, name=kind_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:14:58 np0005539550 podman[87664]: 2025-11-29 07:14:58.402027948 +0000 UTC m=+0.193846184 container start 6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00 (image=quay.io/ceph/ceph:v18, name=kind_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:14:58 np0005539550 podman[87664]: 2025-11-29 07:14:58.407652919 +0000 UTC m=+0.199471175 container attach 6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00 (image=quay.io/ceph/ceph:v18, name=kind_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:14:58 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Nov 29 02:14:58 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 687f3a57-dbe2-4119-902f-5a3248c04b4b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1899703371' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:14:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:14:58 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Nov 29 02:14:59 np0005539550 kind_heyrovsky[87680]: pool 'cephfs.cephfs.meta' created
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:14:59 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:14:59 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev a50bdc51-18bb-48aa-b6be-c9c0ff7aa542 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 02:14:59 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev b9ffa0e2-af15-4c2a-8e22-b9b018753723 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 29 02:14:59 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event b9ffa0e2-af15-4c2a-8e22-b9b018753723 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 2 seconds
Nov 29 02:14:59 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 687f3a57-dbe2-4119-902f-5a3248c04b4b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 29 02:14:59 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 687f3a57-dbe2-4119-902f-5a3248c04b4b (PG autoscaler increasing pool 3 PGs from 1 to 32) in 1 seconds
Nov 29 02:14:59 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev a50bdc51-18bb-48aa-b6be-c9c0ff7aa542 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 29 02:14:59 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event a50bdc51-18bb-48aa-b6be-c9c0ff7aa542 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 0 seconds
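The mgr progress module wraps each pg_num ramp in a UUID-tagged event, as above. Listing in-flight and recently completed events should be possible through the module's CLI (not exercised in this log):

    ceph progress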
Nov 29 02:14:59 np0005539550 systemd[1]: libpod-6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00.scope: Deactivated successfully.
Nov 29 02:14:59 np0005539550 podman[87664]: 2025-11-29 07:14:59.840422814 +0000 UTC m=+1.632241050 container died 6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00 (image=quay.io/ceph/ceph:v18, name=kind_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:14:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8b6a15ef8bdbcefc89ad68c5e83dd03d050d00932c3e23b6e4eb0fc2822bf236-merged.mount: Deactivated successfully.
Nov 29 02:14:59 np0005539550 podman[87664]: 2025-11-29 07:14:59.881976522 +0000 UTC m=+1.673794758 container remove 6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00 (image=quay.io/ceph/ceph:v18, name=kind_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 02:14:59 np0005539550 systemd[1]: libpod-conmon-6a8206e6a41a5e9f210cec315dc68f2f1ff5903ae43bc231df85091129ddbe00.scope: Deactivated successfully.
Nov 29 02:15:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v77: 37 pgs: 33 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:00 np0005539550 python3[87744]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:00 np0005539550 podman[87745]: 2025-11-29 07:15:00.254764595 +0000 UTC m=+0.053267772 container create dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a (image=quay.io/ceph/ceph:v18, name=laughing_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:15:00 np0005539550 systemd[1]: Started libpod-conmon-dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a.scope.
Nov 29 02:15:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f54ae296091bc5b90b3032556d5c10085a995b05a8db2adfd80643d56af80fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f54ae296091bc5b90b3032556d5c10085a995b05a8db2adfd80643d56af80fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:00 np0005539550 podman[87745]: 2025-11-29 07:15:00.23574284 +0000 UTC m=+0.034246047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:00 np0005539550 podman[87745]: 2025-11-29 07:15:00.335067381 +0000 UTC m=+0.133570588 container init dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a (image=quay.io/ceph/ceph:v18, name=laughing_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:15:00 np0005539550 podman[87745]: 2025-11-29 07:15:00.341887002 +0000 UTC m=+0.140390209 container start dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a (image=quay.io/ceph/ceph:v18, name=laughing_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:15:00 np0005539550 podman[87745]: 2025-11-29 07:15:00.346880027 +0000 UTC m=+0.145383274 container attach dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a (image=quay.io/ceph/ceph:v18, name=laughing_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3603270334' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=24 pruub=11.947137833s) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active pruub 39.159385681s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 24 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=8.586914062s) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active pruub 35.799293518s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 24 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24 pruub=8.586914062s) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown pruub 35.799293518s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=24 pruub=11.947137833s) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown pruub 39.159385681s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
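Each pg_num bump forces the affected placement groups through a new peering interval (start_peering_interval ... transitioning to Primary) before they return to active+clean. A quick way to watch this settle, using standard ceph commands not present in this excerpt:

    ceph pg stat
    ceph pg ls-by-pool volumes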
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 29 02:15:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:15:01 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:15:01 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Nov 29 02:15:01 np0005539550 laughing_euclid[87760]: pool 'cephfs.cephfs.data' created
Nov 29 02:15:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1e( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1f( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.18( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.19( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.17( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.10( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.16( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.11( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.15( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.12( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.14( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.13( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.13( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.14( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.12( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.15( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.11( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.16( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.10( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.f( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.8( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.e( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.9( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.17( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.d( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.a( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.c( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.b( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.b( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.c( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.d( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.7( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.7( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.6( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.2( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.a( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.5( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.6( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.2( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.5( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.3( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.4( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.4( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.3( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.8( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.f( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1d( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1a( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1c( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1b( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1b( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.9( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1c( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.e( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1d( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.19( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1a( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1e( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1f( empty local-lis/les=16/17 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.18( empty local-lis/les=19/20 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:01 np0005539550 systemd[1]: libpod-dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a.scope: Deactivated successfully.
Nov 29 02:15:01 np0005539550 podman[87745]: 2025-11-29 07:15:01.8908889 +0000 UTC m=+1.689392077 container died dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a (image=quay.io/ceph/ceph:v18, name=laughing_euclid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.19( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.10( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.17( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1e( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.16( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.15( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.11( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.14( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1f( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.12( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.13( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.13( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.12( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.11( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.16( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.10( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.8( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.17( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.b( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.15( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.9( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.0( empty local-lis/les=24/25 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.a( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.0( empty local-lis/les=24/25 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.7( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.7( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.6( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.5( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.3( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.4( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.2( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.f( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.9( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1b( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.e( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[3.1f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=16/16 les/c/f=17/17/0 sis=24) [0] r=0 lpr=24 pi=[16,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.18( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 25 pg[4.1a( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=19/19 les/c/f=20/20/0 sis=24) [0] r=0 lpr=24 pi=[19,24)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6f54ae296091bc5b90b3032556d5c10085a995b05a8db2adfd80643d56af80fe-merged.mount: Deactivated successfully.
Nov 29 02:15:01 np0005539550 podman[87745]: 2025-11-29 07:15:01.952978541 +0000 UTC m=+1.751481718 container remove dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a (image=quay.io/ceph/ceph:v18, name=laughing_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:01 np0005539550 systemd[1]: libpod-conmon-dd55d24c3aab9bb6c7231976f19740de70e99a1a0dd3e9ac55758cea3545486a.scope: Deactivated successfully.
Nov 29 02:15:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v80: 100 pgs: 65 unknown, 35 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:02 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:15:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:15:02 np0005539550 python3[87825]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:02 np0005539550 podman[87826]: 2025-11-29 07:15:02.403005456 +0000 UTC m=+0.055297075 container create e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:02 np0005539550 systemd[1]: Started libpod-conmon-e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2.scope.
Nov 29 02:15:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:02 np0005539550 podman[87826]: 2025-11-29 07:15:02.378499012 +0000 UTC m=+0.030790671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a855a5398cf34c583b11b27dffb227df9d11ccbfd5ef8eedec8bd2299b3872f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a855a5398cf34c583b11b27dffb227df9d11ccbfd5ef8eedec8bd2299b3872f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:02 np0005539550 podman[87826]: 2025-11-29 07:15:02.486247979 +0000 UTC m=+0.138539618 container init e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:02 np0005539550 podman[87826]: 2025-11-29 07:15:02.494161437 +0000 UTC m=+0.146453056 container start e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:15:02 np0005539550 podman[87826]: 2025-11-29 07:15:02.498066445 +0000 UTC m=+0.150358084 container attach e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:15:02 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 5 completed events
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1623730272' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:02 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 02:15:03 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:15:03 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:15:03 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Nov 29 02:15:03 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Nov 29 02:15:03 np0005539550 elated_mccarthy[87841]: enabled application 'rbd' on pool 'vms'
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 29 02:15:03 np0005539550 ceph-mon[74435]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 29 02:15:03 np0005539550 systemd[1]: libpod-e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2.scope: Deactivated successfully.
Nov 29 02:15:03 np0005539550 podman[87826]: 2025-11-29 07:15:03.912072897 +0000 UTC m=+1.564364516 container died e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a855a5398cf34c583b11b27dffb227df9d11ccbfd5ef8eedec8bd2299b3872f5-merged.mount: Deactivated successfully.
Nov 29 02:15:03 np0005539550 podman[87826]: 2025-11-29 07:15:03.953272148 +0000 UTC m=+1.605563767 container remove e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:15:03 np0005539550 systemd[1]: libpod-conmon-e04b6e914a9433095c3939d2072096b7dbdced6f33bef89b09fb5e13c79a1ad2.scope: Deactivated successfully.
Nov 29 02:15:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v83: 100 pgs: 1 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:04 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:15:04 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:15:04 np0005539550 python3[87905]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:04 np0005539550 podman[87906]: 2025-11-29 07:15:04.297119565 +0000 UTC m=+0.038695680 container create 61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a (image=quay.io/ceph/ceph:v18, name=loving_villani, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:04 np0005539550 systemd[1]: Started libpod-conmon-61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a.scope.
Nov 29 02:15:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6ce95310018577c836d08346310280b56cbe791c9907b199efd77bbd6c8609/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6ce95310018577c836d08346310280b56cbe791c9907b199efd77bbd6c8609/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:04 np0005539550 podman[87906]: 2025-11-29 07:15:04.367486026 +0000 UTC m=+0.109062141 container init 61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a (image=quay.io/ceph/ceph:v18, name=loving_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:15:04 np0005539550 podman[87906]: 2025-11-29 07:15:04.37443388 +0000 UTC m=+0.116009995 container start 61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a (image=quay.io/ceph/ceph:v18, name=loving_villani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:04 np0005539550 podman[87906]: 2025-11-29 07:15:04.281513894 +0000 UTC m=+0.023090039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:04 np0005539550 podman[87906]: 2025-11-29 07:15:04.378258345 +0000 UTC m=+0.119834480 container attach 61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a (image=quay.io/ceph/ceph:v18, name=loving_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:04 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.1 deep-scrub starts
Nov 29 02:15:04 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.1 deep-scrub ok
Nov 29 02:15:04 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3234575411' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 29 02:15:04 np0005539550 ceph-mon[74435]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.client.admin.keyring
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v84: 100 pgs: 1 unknown, 99 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:05 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev a5166ccf-0712-485c-9ebf-73728f97e503 (Updating mon deployment (+2 -> 3))
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:05 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 29 02:15:05 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Nov 29 02:15:05 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 29 02:15:05 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: Deploying daemon mon.compute-2 on compute-2
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Nov 29 02:15:05 np0005539550 loving_villani[87921]: enabled application 'rbd' on pool 'volumes'
Nov 29 02:15:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Nov 29 02:15:05 np0005539550 systemd[1]: libpod-61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a.scope: Deactivated successfully.
Nov 29 02:15:05 np0005539550 podman[87906]: 2025-11-29 07:15:05.965268616 +0000 UTC m=+1.706844731 container died 61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a (image=quay.io/ceph/ceph:v18, name=loving_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:15:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bf6ce95310018577c836d08346310280b56cbe791c9907b199efd77bbd6c8609-merged.mount: Deactivated successfully.
Nov 29 02:15:06 np0005539550 systemd[75927]: Starting Mark boot as successful...
Nov 29 02:15:06 np0005539550 systemd[75927]: Finished Mark boot as successful.
Nov 29 02:15:06 np0005539550 podman[87906]: 2025-11-29 07:15:06.164146914 +0000 UTC m=+1.905723029 container remove 61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a (image=quay.io/ceph/ceph:v18, name=loving_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:15:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 02:15:06 np0005539550 systemd[1]: libpod-conmon-61fb48b8558ff74dcf3679aeff9bff12113fa5a57a0bfa4abdd164e4a12aa15a.scope: Deactivated successfully.
Nov 29 02:15:06 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 29 02:15:06 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 29 02:15:06 np0005539550 python3[87983]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:06 np0005539550 podman[87984]: 2025-11-29 07:15:06.533352725 +0000 UTC m=+0.026544785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v86: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:15:07 np0005539550 podman[87984]: 2025-11-29 07:15:07.423370872 +0000 UTC m=+0.916562902 container create d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c (image=quay.io/ceph/ceph:v18, name=romantic_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:15:07 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/86337951' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 29 02:15:07 np0005539550 ceph-mon[74435]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 29 02:15:07 np0005539550 systemd[1]: Started libpod-conmon-d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c.scope.
Nov 29 02:15:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e118858d040b64910d0dbe1cf4011355575ebcc94b9ba4f80d92a33ceb8e77c3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e118858d040b64910d0dbe1cf4011355575ebcc94b9ba4f80d92a33ceb8e77c3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 29 02:15:08 np0005539550 podman[87984]: 2025-11-29 07:15:08.010289272 +0000 UTC m=+1.503481332 container init d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c (image=quay.io/ceph/ceph:v18, name=romantic_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:08 np0005539550 podman[87984]: 2025-11-29 07:15:08.017708108 +0000 UTC m=+1.510900148 container start d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c (image=quay.io/ceph/ceph:v18, name=romantic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539550 podman[87984]: 2025-11-29 07:15:08.043650947 +0000 UTC m=+1.536842987 container attach d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c (image=quay.io/ceph/ceph:v18, name=romantic_ptolemy, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.856090546s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272483826s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1f( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.856025696s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272483826s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855689049s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272205353s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855598450s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272205353s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855285645s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272418976s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855535507s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272701263s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855496407s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272674561s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855470657s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272674561s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.13( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855502129s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272701263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855167389s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272418976s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855865479s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273220062s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855472565s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272827148s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.15( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855813026s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273220062s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855385780s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272827148s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855308533s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272907257s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.8( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855288506s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272907257s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855221748s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272850037s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855190277s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272850037s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855195045s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272872925s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855170250s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272872925s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855497360s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273250580s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.9( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855473518s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273250580s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855319977s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273132324s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855259895s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273136139s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855297089s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273132324s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.a( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855208397s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273136139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855161667s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273181915s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855124474s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273181915s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855417252s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273525238s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855232239s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273353577s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855391502s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273525238s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855206490s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273353577s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855078697s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273273468s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855056763s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273273468s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855101585s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273475647s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855136871s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273529053s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855113983s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273529053s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855072021s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273475647s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855108261s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273540497s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.5( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855080605s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273540497s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855023384s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273555756s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854803085s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272747040s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855004311s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273555756s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854162216s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272747040s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855088234s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273731232s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855207443s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273876190s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855063438s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273731232s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854955673s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273677826s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854937553s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273677826s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.e( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.855187416s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273876190s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854871750s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273735046s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854896545s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273792267s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854810715s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273715973s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1b( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854822159s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273735046s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.1a( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854874611s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273792267s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854786873s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273715973s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854804993s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273826599s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854787827s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273826599s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854816437s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.273929596s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.853143692s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active pruub 44.272274017s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[4.18( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.854795456s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.273929596s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=29 pruub=9.853102684s) [1] r=-1 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 44.272274017s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.15( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.1b( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.13( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.10( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.e( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.d( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.c( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.1( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.4( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.6( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.9( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.a( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.1e( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.1f( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 29 pg[2.19( empty local-lis/les=0/0 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:08 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 29 02:15:08 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 29 02:15:08 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3477575173; not ready for session (expect reconnect)
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:08 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:08 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 29 02:15:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/206725884' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 02:15:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v88: 100 pgs: 1 active+clean+scrubbing, 99 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:09 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3477575173; not ready for session (expect reconnect)
Nov 29 02:15:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:09 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:10 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:10 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:15:10 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3477575173; not ready for session (expect reconnect)
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:10 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:15:10 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Nov 29 02:15:10 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Nov 29 02:15:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 31 peering, 1 active+clean+scrubbing, 68 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:11 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:11 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:15:11 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3477575173; not ready for session (expect reconnect)
Nov 29 02:15:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:11 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:15:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:12 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:12 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:15:12 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3477575173; not ready for session (expect reconnect)
Nov 29 02:15:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:12 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:15:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event,31 pgs not in active + clean state
Nov 29 02:15:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v90: 100 pgs: 46 peering, 54 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:13 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:13 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:15:13 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3477575173; not ready for session (expect reconnect)
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:13 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.pdhsqi(active, since 2m)
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 29 02:15:14 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:14 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 29 02:15:14 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3477575173; not ready for session (expect reconnect)
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/206725884' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Nov 29 02:15:14 np0005539550 romantic_ptolemy[87999]: enabled application 'rbd' on pool 'backups'
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 29 02:15:14 np0005539550 systemd[1]: libpod-d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c.scope: Deactivated successfully.
Nov 29 02:15:14 np0005539550 podman[87984]: 2025-11-29 07:15:14.540297533 +0000 UTC m=+8.033489573 container died d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c (image=quay.io/ceph/ceph:v18, name=romantic_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: Deploying daemon mon.compute-1 on compute-1
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/206725884' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:14 np0005539550 ceph-mon[74435]:    application not enabled on pool 'backups'
Nov 29 02:15:14 np0005539550 ceph-mon[74435]:    application not enabled on pool 'images'
Nov 29 02:15:14 np0005539550 ceph-mon[74435]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:14 np0005539550 ceph-mon[74435]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:14 np0005539550 ceph-mon[74435]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=22/22 les/c/f=24/24/0 sis=29) [0] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 29 02:15:14 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:15:14 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 29 02:15:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e118858d040b64910d0dbe1cf4011355575ebcc94b9ba4f80d92a33ceb8e77c3-merged.mount: Deactivated successfully.
Nov 29 02:15:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:15:15 np0005539550 podman[87984]: 2025-11-29 07:15:15.067361725 +0000 UTC m=+8.560553765 container remove d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c (image=quay.io/ceph/ceph:v18, name=romantic_ptolemy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:15 np0005539550 systemd[1]: libpod-conmon-d4da3d95eb6138a05a38ecf996849dd402aa095ce5e02dee40bfd3ce675ff74c.scope: Deactivated successfully.
Nov 29 02:15:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v92: 100 pgs: 46 peering, 54 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:15 np0005539550 python3[88062]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:15 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:15 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:15 np0005539550 podman[88063]: 2025-11-29 07:15:15.444306679 +0000 UTC m=+0.024845233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:15 np0005539550 podman[88063]: 2025-11-29 07:15:15.81032309 +0000 UTC m=+0.390861614 container create 2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164 (image=quay.io/ceph/ceph:v18, name=dazzling_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 02:15:16 np0005539550 systemd[1]: Started libpod-conmon-2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164.scope.
Nov 29 02:15:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0438750ffe9eb52a292513e3d9c25a572111876caf33f09462619690c8f2787/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0438750ffe9eb52a292513e3d9c25a572111876caf33f09462619690c8f2787/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:16 np0005539550 podman[88063]: 2025-11-29 07:15:16.383742023 +0000 UTC m=+0.964280567 container init 2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164 (image=quay.io/ceph/ceph:v18, name=dazzling_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:16 np0005539550 podman[88063]: 2025-11-29 07:15:16.389474806 +0000 UTC m=+0.970013330 container start 2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164 (image=quay.io/ceph/ceph:v18, name=dazzling_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:16 np0005539550 podman[88063]: 2025-11-29 07:15:16.394896052 +0000 UTC m=+0.975434576 container attach 2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164 (image=quay.io/ceph/ceph:v18, name=dazzling_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:15:16 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:16 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v93: 100 pgs: 15 peering, 85 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:17 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:17 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:17 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 29 02:15:17 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 29 02:15:17 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event aa0bdd21-9348-4f4f-ab54-1bebe7dfd606 (Global Recovery Event) in 5 seconds
Nov 29 02:15:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:18 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:18 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v94: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:19 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:19 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:19 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 29 02:15:19 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 29 02:15:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.pdhsqi(active, since 2m)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2; 4 pool(s) do not have an application enabled
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:20 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev a5166ccf-0712-485c-9ebf-73728f97e503 (Updating mon deployment (+2 -> 3))
Nov 29 02:15:20 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event a5166ccf-0712-485c-9ebf-73728f97e503 (Updating mon deployment (+2 -> 3)) in 15 seconds
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:20 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev ab4ac8e1-2b04-4601-9d51-ce575f4855b0 (Updating mgr deployment (+2 -> 3))
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:20 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.zfrvoq on compute-2
Nov 29 02:15:20 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.zfrvoq on compute-2
Nov 29 02:15:20 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:20 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 02:15:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v95: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:21 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:21 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Nov 29 02:15:21 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Nov 29 02:15:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 02:15:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:21 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 29 02:15:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:15:21 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(13) init, last seen epoch 13, mid-election, bumping
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_auth_request failed to assign global_id
Nov 29 02:15:22 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:22 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:22 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 29 02:15:22 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 29 02:15:22 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 7 completed events
Nov 29 02:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:15:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:23 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:23 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:24 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:24 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v97: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:25 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:25 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:26 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:26 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:26 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 29 02:15:26 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v98: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 29 02:15:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 29 02:15:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2; 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:15:27 np0005539550 ceph-mon[74435]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 29 02:15:27 np0005539550 ceph-mon[74435]:    application not enabled on pool 'backups'
Nov 29 02:15:27 np0005539550 ceph-mon[74435]:    application not enabled on pool 'images'
Nov 29 02:15:27 np0005539550 ceph-mon[74435]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:27 np0005539550 ceph-mon[74435]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:27 np0005539550 ceph-mon[74435]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zfrvoq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: Deploying daemon mgr.compute-2.zfrvoq on compute-2
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap 
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.pdhsqi(active, since 2m)
Nov 29 02:15:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:28 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.fchyan on compute-1
Nov 29 02:15:28 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.fchyan on compute-1
Nov 29 02:15:28 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2127868890; not ready for session (expect reconnect)
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 29 02:15:28 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 29 02:15:28 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 29 02:15:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 29 02:15:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v99: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:29 np0005539550 ceph-mgr[74726]: mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 02:15:29 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T07:15:29.425+0000 7f31e0735640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-1 calling monitor election
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 29 02:15:29 np0005539550 ceph-mon[74435]:    application not enabled on pool 'images'
Nov 29 02:15:29 np0005539550 ceph-mon[74435]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 29 02:15:29 np0005539550 ceph-mon[74435]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 29 02:15:29 np0005539550 ceph-mon[74435]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.fchyan", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: Deploying daemon mgr.compute-1.fchyan on compute-1
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Nov 29 02:15:29 np0005539550 dazzling_borg[88078]: enabled application 'rbd' on pool 'images'
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 systemd[1]: libpod-2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164.scope: Deactivated successfully.
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:15:29 np0005539550 podman[88063]: 2025-11-29 07:15:29.872881265 +0000 UTC m=+14.453419809 container died 2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164 (image=quay.io/ceph/ceph:v18, name=dazzling_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev ab4ac8e1-2b04-4601-9d51-ce575f4855b0 (Updating mgr deployment (+2 -> 3))
Nov 29 02:15:29 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event ab4ac8e1-2b04-4601-9d51-ce575f4855b0 (Updating mgr deployment (+2 -> 3)) in 10 seconds
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:29 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 2dcbce46-fa19-44bc-8c4e-90866a3705f4 (Updating crash deployment (+1 -> 3))
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:15:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b0438750ffe9eb52a292513e3d9c25a572111876caf33f09462619690c8f2787-merged.mount: Deactivated successfully.
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:29 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 29 02:15:29 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 29 02:15:29 np0005539550 podman[88063]: 2025-11-29 07:15:29.934933439 +0000 UTC m=+14.515471953 container remove 2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164 (image=quay.io/ceph/ceph:v18, name=dazzling_borg, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:29 np0005539550 systemd[1]: libpod-conmon-2447150fb45d06afdebb275950461cbbe440ee08ba978ab50873fefe0042d164.scope: Deactivated successfully.
Nov 29 02:15:30 np0005539550 python3[88139]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:30 np0005539550 podman[88140]: 2025-11-29 07:15:30.344042878 +0000 UTC m=+0.050600247 container create c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218 (image=quay.io/ceph/ceph:v18, name=thirsty_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:15:30 np0005539550 systemd[1]: Started libpod-conmon-c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218.scope.
Nov 29 02:15:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:30 np0005539550 podman[88140]: 2025-11-29 07:15:30.319077424 +0000 UTC m=+0.025634803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d003f0e5f950ca7da2477868986b3f1384e863a4da1432d43825595a9cc2b10d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d003f0e5f950ca7da2477868986b3f1384e863a4da1432d43825595a9cc2b10d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:30 np0005539550 podman[88140]: 2025-11-29 07:15:30.446661337 +0000 UTC m=+0.153218716 container init c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218 (image=quay.io/ceph/ceph:v18, name=thirsty_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 02:15:30 np0005539550 podman[88140]: 2025-11-29 07:15:30.455844397 +0000 UTC m=+0.162401766 container start c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218 (image=quay.io/ceph/ceph:v18, name=thirsty_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:15:30 np0005539550 podman[88140]: 2025-11-29 07:15:30.460348989 +0000 UTC m=+0.166906489 container attach c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218 (image=quay.io/ceph/ceph:v18, name=thirsty_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:15:30 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.f scrub starts
Nov 29 02:15:30 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.f scrub ok
Nov 29 02:15:30 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/665227318' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 29 02:15:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:15:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 29 02:15:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 29 02:15:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 02:15:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v101: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Nov 29 02:15:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
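
The 4.f and 4.10 scrub / deep-scrub pairs are routine background scrubs starting and completing cleanly on this host's OSD. If needed, the same checks can be requested manually (pgids taken from the lines above):

    ceph pg scrub 4.f
    ceph pg deep-scrub 4.10
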
Nov 29 02:15:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 29 02:15:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 02:15:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Nov 29 02:15:32 np0005539550 thirsty_mclaren[88155]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 29 02:15:32 np0005539550 ceph-mon[74435]: Deploying daemon crash.compute-2 on compute-2
Nov 29 02:15:32 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 29 02:15:32 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Nov 29 02:15:32 np0005539550 systemd[1]: libpod-c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218.scope: Deactivated successfully.
Nov 29 02:15:32 np0005539550 podman[88140]: 2025-11-29 07:15:32.033119255 +0000 UTC m=+1.739676624 container died c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218 (image=quay.io/ceph/ceph:v18, name=thirsty_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:15:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d003f0e5f950ca7da2477868986b3f1384e863a4da1432d43825595a9cc2b10d-merged.mount: Deactivated successfully.
Nov 29 02:15:32 np0005539550 podman[88140]: 2025-11-29 07:15:32.09325912 +0000 UTC m=+1.799816489 container remove c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218 (image=quay.io/ceph/ceph:v18, name=thirsty_mclaren, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:15:32 np0005539550 systemd[1]: libpod-conmon-c95ab82f82e9193f4081f1409546b8b80b5ccc9d99d200eac07941e29dfc4218.scope: Deactivated successfully.
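
The thirsty_mclaren lifecycle above (create through remove) is the Ansible task from 02:15:30 running a single ceph command in a throwaway container. Re-wrapped verbatim from the _raw_params, with only line breaks added:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool application enable cephfs.cephfs.meta cephfs
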
Nov 29 02:15:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:32 np0005539550 python3[88218]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:32 np0005539550 podman[88219]: 2025-11-29 07:15:32.471315873 +0000 UTC m=+0.049305765 container create 7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e (image=quay.io/ceph/ceph:v18, name=elastic_elgamal, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:15:32 np0005539550 systemd[1]: Started libpod-conmon-7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e.scope.
Nov 29 02:15:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb12f3529495ec83bd21cee8bb0f6ee9030fe7a0b9b1a81d5dcee1603e813f08/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb12f3529495ec83bd21cee8bb0f6ee9030fe7a0b9b1a81d5dcee1603e813f08/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:32 np0005539550 podman[88219]: 2025-11-29 07:15:32.451912857 +0000 UTC m=+0.029902769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:32 np0005539550 podman[88219]: 2025-11-29 07:15:32.551250374 +0000 UTC m=+0.129240266 container init 7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e (image=quay.io/ceph/ceph:v18, name=elastic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:32 np0005539550 podman[88219]: 2025-11-29 07:15:32.557419528 +0000 UTC m=+0.135409410 container start 7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e (image=quay.io/ceph/ceph:v18, name=elastic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:32 np0005539550 podman[88219]: 2025-11-29 07:15:32.56267883 +0000 UTC m=+0.140668742 container attach 7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e (image=quay.io/ceph/ceph:v18, name=elastic_elgamal, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:15:32 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
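
This POOL_APP_NOT_ENABLED update counts the two cephfs pools that have not yet been tagged with an application; it clears at 02:15:35 once both enables finish. The same state can be inspected interactively with the standard CLI, admin keyring assumed:

    ceph health detail
    ceph osd pool application get cephfs.cephfs.meta
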
Nov 29 02:15:33 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 8 completed events
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v103: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Nov 29 02:15:33 np0005539550 elastic_elgamal[88235]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 29 02:15:33 np0005539550 systemd[1]: libpod-7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e.scope: Deactivated successfully.
Nov 29 02:15:33 np0005539550 podman[88219]: 2025-11-29 07:15:33.803749511 +0000 UTC m=+1.381739393 container died 7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e (image=quay.io/ceph/ceph:v18, name=elastic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:15:33 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/790893646' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bb12f3529495ec83bd21cee8bb0f6ee9030fe7a0b9b1a81d5dcee1603e813f08-merged.mount: Deactivated successfully.
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 2dcbce46-fa19-44bc-8c4e-90866a3705f4 (Updating crash deployment (+1 -> 3))
Nov 29 02:15:34 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 2dcbce46-fa19-44bc-8c4e-90866a3705f4 (Updating crash deployment (+1 -> 3)) in 5 seconds
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 29 02:15:34 np0005539550 podman[88219]: 2025-11-29 07:15:34.461258169 +0000 UTC m=+2.039248041 container remove 7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e (image=quay.io/ceph/ceph:v18, name=elastic_elgamal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:34 np0005539550 systemd[1]: libpod-conmon-7cd49ff51cdca2eb6a48e6efc352f0fe530b8c742c4e3914a6b8b13bfb1af74e.scope: Deactivated successfully.
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/1712256818' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:15:35 np0005539550 podman[88413]: 2025-11-29 07:15:35.025294207 +0000 UTC m=+0.023005627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:35 np0005539550 podman[88413]: 2025-11-29 07:15:35.154194683 +0000 UTC m=+0.151906073 container create ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:15:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v105: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:35 np0005539550 systemd[1]: Started libpod-conmon-ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a.scope.
Nov 29 02:15:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:35 np0005539550 podman[88413]: 2025-11-29 07:15:35.247459928 +0000 UTC m=+0.245171338 container init ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:15:35 np0005539550 podman[88413]: 2025-11-29 07:15:35.254271718 +0000 UTC m=+0.251983108 container start ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:35 np0005539550 cranky_dirac[88449]: 167 167
Nov 29 02:15:35 np0005539550 systemd[1]: libpod-ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a.scope: Deactivated successfully.
Nov 29 02:15:35 np0005539550 podman[88413]: 2025-11-29 07:15:35.259462098 +0000 UTC m=+0.257173508 container attach ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:15:35 np0005539550 podman[88413]: 2025-11-29 07:15:35.260529195 +0000 UTC m=+0.258240585 container died ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:15:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-807dacb91dff729c34312d5ef5454f45d59758bf3ab8f1f617e610252b804a45-merged.mount: Deactivated successfully.
Nov 29 02:15:35 np0005539550 podman[88413]: 2025-11-29 07:15:35.303214353 +0000 UTC m=+0.300925743 container remove ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dirac, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:15:35 np0005539550 systemd[1]: libpod-conmon-ce7942593fc3613055ca6ca6e643e65d810028e7275308d5d616ce0da1b4506a.scope: Deactivated successfully.
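
The cranky_dirac container's entire output was "167 167", consistent with cephadm probing the ceph uid and gid inside the image (167:167 in Red Hat-family ceph images) before writing daemon files to disk. The exact entrypoint is not recorded in this journal, so the following manual equivalent is an assumption:

    # assumed probe: stat the ceph-owned directory inside the image
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph
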
Nov 29 02:15:35 np0005539550 podman[88529]: 2025-11-29 07:15:35.484702666 +0000 UTC m=+0.057789608 container create dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:15:35 np0005539550 python3[88523]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:15:35 np0005539550 systemd[1]: Started libpod-conmon-dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87.scope.
Nov 29 02:15:35 np0005539550 podman[88529]: 2025-11-29 07:15:35.463579087 +0000 UTC m=+0.036666059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75ff597e2b91266ad5f1bd2fb50e3a2490cd091e80e1850aada3d4ca17cb5bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75ff597e2b91266ad5f1bd2fb50e3a2490cd091e80e1850aada3d4ca17cb5bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75ff597e2b91266ad5f1bd2fb50e3a2490cd091e80e1850aada3d4ca17cb5bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75ff597e2b91266ad5f1bd2fb50e3a2490cd091e80e1850aada3d4ca17cb5bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e75ff597e2b91266ad5f1bd2fb50e3a2490cd091e80e1850aada3d4ca17cb5bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:35 np0005539550 podman[88529]: 2025-11-29 07:15:35.588572436 +0000 UTC m=+0.161659468 container init dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_agnesi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:15:35 np0005539550 podman[88529]: 2025-11-29 07:15:35.600821132 +0000 UTC m=+0.173908084 container start dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:35 np0005539550 podman[88529]: 2025-11-29 07:15:35.60512221 +0000 UTC m=+0.178209152 container attach dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:15:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:15:35 np0005539550 python3[88620]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400535.2144551-37347-201100525754732/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:36 np0005539550 hungry_agnesi[88545]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:15:36 np0005539550 hungry_agnesi[88545]: --> relative data size: 1.0
Nov 29 02:15:36 np0005539550 hungry_agnesi[88545]: --> All data devices are unavailable
Nov 29 02:15:36 np0005539550 podman[88529]: 2025-11-29 07:15:36.61434439 +0000 UTC m=+1.187431332 container died dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_agnesi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:36 np0005539550 systemd[1]: libpod-dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87.scope: Deactivated successfully.
Nov 29 02:15:36 np0005539550 python3[88728]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:15:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e75ff597e2b91266ad5f1bd2fb50e3a2490cd091e80e1850aada3d4ca17cb5bf-merged.mount: Deactivated successfully.
Nov 29 02:15:36 np0005539550 podman[88529]: 2025-11-29 07:15:36.685341617 +0000 UTC m=+1.258428579 container remove dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:15:36 np0005539550 systemd[1]: libpod-conmon-dbe76ffcade609753a1ae9d999f17c784aed0b83c27f51a0e16d52020f53cf87.scope: Deactivated successfully.
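
The hungry_agnesi output ("passed data devices: 0 physical, 1 LVM ... All data devices are unavailable") matches ceph-volume's lvm batch report, which cephadm runs while evaluating OSD placement; "unavailable" typically means the candidate device is already consumed or filtered out, so nothing is created on this host. The standalone form, with a hypothetical device path for illustration, would be:

    ceph-volume lvm batch --report /dev/vdb
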
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"} v 0) v1
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]': finished
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:36 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
Nov 29 02:15:36 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.102:0/1211735059' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]: dispatch
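
The osd new exchange above is the monitor-side half of OSD creation: ceph-volume, authenticating as client.bootstrap-osd, registers the new OSD's uuid and is assigned id 2. That is why the osdmap jumps to "3 total, 2 up, 3 in" before any daemon starts, and why the mgr's "failed to return metadata for osd.2" at 02:15:36 is an expected transient. The equivalent direct call (uuid copied from the journal):

    ceph osd new 681bc90e-5cd2-4106-9be9-9995623a17e0
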
Nov 29 02:15:37 np0005539550 python3[88894]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400536.2914407-37361-46851948121429/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=e046847c6989badca0bbc73e162103e2c21ce925 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v107: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:37 np0005539550 podman[89012]: 2025-11-29 07:15:37.36310108 +0000 UTC m=+0.049727916 container create 1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:37 np0005539550 systemd[1]: Started libpod-conmon-1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485.scope.
Nov 29 02:15:37 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:37 np0005539550 podman[89012]: 2025-11-29 07:15:37.342965666 +0000 UTC m=+0.029592532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:37 np0005539550 podman[89012]: 2025-11-29 07:15:37.446513488 +0000 UTC m=+0.133140334 container init 1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:15:37 np0005539550 podman[89012]: 2025-11-29 07:15:37.454066067 +0000 UTC m=+0.140692903 container start 1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:15:37 np0005539550 podman[89012]: 2025-11-29 07:15:37.457987495 +0000 UTC m=+0.144614331 container attach 1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:15:37 np0005539550 stoic_wiles[89030]: 167 167
Nov 29 02:15:37 np0005539550 systemd[1]: libpod-1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485.scope: Deactivated successfully.
Nov 29 02:15:37 np0005539550 conmon[89030]: conmon 1df051b1c9f1ec2c9fc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485.scope/container/memory.events
Nov 29 02:15:37 np0005539550 podman[89012]: 2025-11-29 07:15:37.460717223 +0000 UTC m=+0.147344059 container died 1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:37 np0005539550 python3[89015]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
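
The trailing #012 in this _raw_params is the logging pipeline's octal escape for an embedded newline, not part of the command. Unpacked verbatim, the task runs config assimilate-conf, which ingests the ini-style file into the cluster's centralized configuration database:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v18 \
        --fsid b66774a7-56d9-5535-bd8c-681234404870 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config assimilate-conf -i /home/assimilate_ceph.conf
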
Nov 29 02:15:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-eda1d9b3f484977e456641918a4a5d0f59cccc5f26c88d295dbf63ce84103466-merged.mount: Deactivated successfully.
Nov 29 02:15:37 np0005539550 podman[89012]: 2025-11-29 07:15:37.493674928 +0000 UTC m=+0.180301764 container remove 1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:15:37 np0005539550 systemd[1]: libpod-conmon-1df051b1c9f1ec2c9fc47441cd9f1c6acae7cda7fa91a59b8c940897b7e8a485.scope: Deactivated successfully.
Nov 29 02:15:37 np0005539550 podman[89036]: 2025-11-29 07:15:37.530464259 +0000 UTC m=+0.049513650 container create a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd (image=quay.io/ceph/ceph:v18, name=stoic_borg, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:15:37 np0005539550 systemd[1]: Started libpod-conmon-a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd.scope.
Nov 29 02:15:37 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd7472a7ba0cb7e82a4bbbb023f6a2b213666244a5f930c104446a4af2348f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd7472a7ba0cb7e82a4bbbb023f6a2b213666244a5f930c104446a4af2348f8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd7472a7ba0cb7e82a4bbbb023f6a2b213666244a5f930c104446a4af2348f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:37 np0005539550 podman[89036]: 2025-11-29 07:15:37.510702895 +0000 UTC m=+0.029752296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:37 np0005539550 podman[89036]: 2025-11-29 07:15:37.620156194 +0000 UTC m=+0.139205605 container init a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd (image=quay.io/ceph/ceph:v18, name=stoic_borg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:15:37 np0005539550 podman[89036]: 2025-11-29 07:15:37.626492123 +0000 UTC m=+0.145541504 container start a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd (image=quay.io/ceph/ceph:v18, name=stoic_borg, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:37 np0005539550 podman[89036]: 2025-11-29 07:15:37.634616176 +0000 UTC m=+0.153665577 container attach a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd (image=quay.io/ceph/ceph:v18, name=stoic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:37 np0005539550 podman[89071]: 2025-11-29 07:15:37.650014051 +0000 UTC m=+0.046164886 container create fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:37 np0005539550 systemd[1]: Started libpod-conmon-fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b.scope.
Nov 29 02:15:37 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869a28e48b0b3499dea2214226663d3ec4964a2d6c60e1f8a33655b9e5016277/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869a28e48b0b3499dea2214226663d3ec4964a2d6c60e1f8a33655b9e5016277/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869a28e48b0b3499dea2214226663d3ec4964a2d6c60e1f8a33655b9e5016277/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/869a28e48b0b3499dea2214226663d3ec4964a2d6c60e1f8a33655b9e5016277/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:37 np0005539550 podman[89071]: 2025-11-29 07:15:37.631028536 +0000 UTC m=+0.027179401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:37 np0005539550 podman[89071]: 2025-11-29 07:15:37.728722111 +0000 UTC m=+0.124872966 container init fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:15:37 np0005539550 podman[89071]: 2025-11-29 07:15:37.736850805 +0000 UTC m=+0.133001640 container start fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:37 np0005539550 podman[89071]: 2025-11-29 07:15:37.740725172 +0000 UTC m=+0.136876007 container attach fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swirles, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]: {
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:    "0": [
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:        {
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "devices": [
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "/dev/loop3"
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            ],
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "lv_name": "ceph_lv0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "lv_size": "7511998464",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "name": "ceph_lv0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "tags": {
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.cluster_name": "ceph",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.crush_device_class": "",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.encrypted": "0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.osd_id": "0",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.type": "block",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:                "ceph.vdo": "0"
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            },
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "type": "block",
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:            "vg_name": "ceph_vg0"
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:        }
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]:    ]
Nov 29 02:15:38 np0005539550 nifty_swirles[89089]: }
Nov 29 02:15:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 29 02:15:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:15:38 np0005539550 systemd[1]: libpod-fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b.scope: Deactivated successfully.
Nov 29 02:15:38 np0005539550 podman[89071]: 2025-11-29 07:15:38.598613884 +0000 UTC m=+0.994764729 container died fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swirles, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:15:38 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "681bc90e-5cd2-4106-9be9-9995623a17e0"}]': finished
Nov 29 02:15:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-869a28e48b0b3499dea2214226663d3ec4964a2d6c60e1f8a33655b9e5016277-merged.mount: Deactivated successfully.
Nov 29 02:15:38 np0005539550 podman[89071]: 2025-11-29 07:15:38.654524364 +0000 UTC m=+1.050675199 container remove fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:38 np0005539550 systemd[1]: libpod-conmon-fef9534da45f98fa96e70a0409a31738427c432ee595bb31ba351ae3cbb1b78b.scope: Deactivated successfully.
Nov 29 02:15:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:15:38 np0005539550 stoic_borg[89064]: 
Nov 29 02:15:38 np0005539550 stoic_borg[89064]: [global]
Nov 29 02:15:38 np0005539550 stoic_borg[89064]: 	fsid = b66774a7-56d9-5535-bd8c-681234404870
Nov 29 02:15:38 np0005539550 stoic_borg[89064]: 	mon_host = 192.168.122.100
Nov 29 02:15:38 np0005539550 systemd[1]: libpod-a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd.scope: Deactivated successfully.
Nov 29 02:15:38 np0005539550 podman[89036]: 2025-11-29 07:15:38.953778274 +0000 UTC m=+1.472827655 container died a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd (image=quay.io/ceph/ceph:v18, name=stoic_borg, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:15:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4cd7472a7ba0cb7e82a4bbbb023f6a2b213666244a5f930c104446a4af2348f8-merged.mount: Deactivated successfully.
Nov 29 02:15:39 np0005539550 podman[89036]: 2025-11-29 07:15:39.019018837 +0000 UTC m=+1.538068218 container remove a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd (image=quay.io/ceph/ceph:v18, name=stoic_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:39 np0005539550 systemd[1]: libpod-conmon-a0ce1ff8894b3ee5c7a801f11e8bf0afef843a4e3c8e4ce13ece7764628c5cfd.scope: Deactivated successfully.
Nov 29 02:15:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v108: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:39 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 9 completed events
Nov 29 02:15:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:15:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:39 np0005539550 podman[89308]: 2025-11-29 07:15:39.296941093 +0000 UTC m=+0.037554521 container create 1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:39 np0005539550 python3[89291]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:39 np0005539550 systemd[1]: Started libpod-conmon-1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa.scope.
Nov 29 02:15:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:39 np0005539550 podman[89308]: 2025-11-29 07:15:39.280515172 +0000 UTC m=+0.021128620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:39 np0005539550 podman[89308]: 2025-11-29 07:15:39.386528816 +0000 UTC m=+0.127142264 container init 1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chaum, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:39 np0005539550 podman[89325]: 2025-11-29 07:15:39.392818493 +0000 UTC m=+0.045715975 container create 3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853 (image=quay.io/ceph/ceph:v18, name=musing_curran, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:15:39 np0005539550 podman[89308]: 2025-11-29 07:15:39.39310895 +0000 UTC m=+0.133722378 container start 1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:15:39 np0005539550 podman[89308]: 2025-11-29 07:15:39.396940066 +0000 UTC m=+0.137553514 container attach 1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:39 np0005539550 heuristic_chaum[89326]: 167 167
Nov 29 02:15:39 np0005539550 systemd[1]: libpod-1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa.scope: Deactivated successfully.
Nov 29 02:15:39 np0005539550 podman[89308]: 2025-11-29 07:15:39.401717256 +0000 UTC m=+0.142330694 container died 1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chaum, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:15:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-09e31ae38a02216f5aa70748c48bc6fbd22ae0f59bdd99a96834159db2578f7b-merged.mount: Deactivated successfully.
Nov 29 02:15:39 np0005539550 podman[89308]: 2025-11-29 07:15:39.442309022 +0000 UTC m=+0.182922450 container remove 1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_chaum, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:39 np0005539550 systemd[1]: Started libpod-conmon-3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853.scope.
Nov 29 02:15:39 np0005539550 systemd[1]: libpod-conmon-1aac7e1e42c8e6f57d939d970bc48cca5c3ea44ef5930f1096f86a38b12cc0fa.scope: Deactivated successfully.
Nov 29 02:15:39 np0005539550 podman[89325]: 2025-11-29 07:15:39.372695099 +0000 UTC m=+0.025592611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6db544124dd470d3d5c1c5264709a1a4b2330e90a10e2a9eedc81257d4cf33d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6db544124dd470d3d5c1c5264709a1a4b2330e90a10e2a9eedc81257d4cf33d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6db544124dd470d3d5c1c5264709a1a4b2330e90a10e2a9eedc81257d4cf33d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:39 np0005539550 podman[89325]: 2025-11-29 07:15:39.507633607 +0000 UTC m=+0.160531109 container init 3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853 (image=quay.io/ceph/ceph:v18, name=musing_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:15:39 np0005539550 podman[89325]: 2025-11-29 07:15:39.513881903 +0000 UTC m=+0.166779385 container start 3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853 (image=quay.io/ceph/ceph:v18, name=musing_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:15:39 np0005539550 podman[89325]: 2025-11-29 07:15:39.519607186 +0000 UTC m=+0.172504758 container attach 3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853 (image=quay.io/ceph/ceph:v18, name=musing_curran, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:39 np0005539550 podman[89368]: 2025-11-29 07:15:39.608226915 +0000 UTC m=+0.048424203 container create 04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:15:39 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 29 02:15:39 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/534318413' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 29 02:15:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:39 np0005539550 systemd[1]: Started libpod-conmon-04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af.scope.
Nov 29 02:15:39 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 29 02:15:39 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 29 02:15:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cc065dcde4f9b2f25d362119f7bcba1b740bd94519cfdc5dc9ebc2d50a8c28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cc065dcde4f9b2f25d362119f7bcba1b740bd94519cfdc5dc9ebc2d50a8c28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cc065dcde4f9b2f25d362119f7bcba1b740bd94519cfdc5dc9ebc2d50a8c28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0cc065dcde4f9b2f25d362119f7bcba1b740bd94519cfdc5dc9ebc2d50a8c28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:39 np0005539550 podman[89368]: 2025-11-29 07:15:39.587941637 +0000 UTC m=+0.028138945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:15:39 np0005539550 podman[89368]: 2025-11-29 07:15:39.686431592 +0000 UTC m=+0.126628900 container init 04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:39 np0005539550 podman[89368]: 2025-11-29 07:15:39.697179591 +0000 UTC m=+0.137376879 container start 04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:15:39 np0005539550 podman[89368]: 2025-11-29 07:15:39.701667223 +0000 UTC m=+0.141864541 container attach 04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:15:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]: {
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]:        "osd_id": 0,
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]:        "type": "bluestore"
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]:    }
Nov 29 02:15:40 np0005539550 nifty_hoover[89385]: }
Nov 29 02:15:40 np0005539550 systemd[1]: libpod-04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af.scope: Deactivated successfully.
Nov 29 02:15:40 np0005539550 podman[89368]: 2025-11-29 07:15:40.869040241 +0000 UTC m=+1.309237539 container died 04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:40 np0005539550 systemd[1]: libpod-04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af.scope: Consumed 1.176s CPU time.
Nov 29 02:15:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/278235459' entity='client.admin' 
Nov 29 02:15:40 np0005539550 musing_curran[89356]: set ssl_option
Nov 29 02:15:41 np0005539550 systemd[1]: libpod-3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853.scope: Deactivated successfully.
Nov 29 02:15:41 np0005539550 podman[89325]: 2025-11-29 07:15:41.106595917 +0000 UTC m=+1.759493399 container died 3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853 (image=quay.io/ceph/ceph:v18, name=musing_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:15:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c0cc065dcde4f9b2f25d362119f7bcba1b740bd94519cfdc5dc9ebc2d50a8c28-merged.mount: Deactivated successfully.
Nov 29 02:15:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a6db544124dd470d3d5c1c5264709a1a4b2330e90a10e2a9eedc81257d4cf33d-merged.mount: Deactivated successfully.
Nov 29 02:15:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v109: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:41 np0005539550 podman[89325]: 2025-11-29 07:15:41.197524763 +0000 UTC m=+1.850422245 container remove 3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853 (image=quay.io/ceph/ceph:v18, name=musing_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:15:41 np0005539550 podman[89368]: 2025-11-29 07:15:41.192109878 +0000 UTC m=+1.632307166 container remove 04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:41 np0005539550 systemd[1]: libpod-conmon-3d9a4caffc12beccc7bea5b111842c879f02a37591d704352a177a6816d98853.scope: Deactivated successfully.
Nov 29 02:15:41 np0005539550 systemd[1]: libpod-conmon-04e471cfb86e3bb20774e43748449cc1938510d8e78d7fd59f08227494dbd3af.scope: Deactivated successfully.
Nov 29 02:15:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:15:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:15:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:41 np0005539550 python3[89479]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:41 np0005539550 podman[89480]: 2025-11-29 07:15:41.639587737 +0000 UTC m=+0.070646049 container create eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc (image=quay.io/ceph/ceph:v18, name=inspiring_moore, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:41 np0005539550 podman[89480]: 2025-11-29 07:15:41.596107009 +0000 UTC m=+0.027165351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:41 np0005539550 systemd[1]: Started libpod-conmon-eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc.scope.
Nov 29 02:15:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21427c5d78d63b85eb613069cb76c6af4fe6993ecca795132abafabd7e7e534f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21427c5d78d63b85eb613069cb76c6af4fe6993ecca795132abafabd7e7e534f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21427c5d78d63b85eb613069cb76c6af4fe6993ecca795132abafabd7e7e534f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:41 np0005539550 podman[89480]: 2025-11-29 07:15:41.786477574 +0000 UTC m=+0.217535906 container init eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc (image=quay.io/ceph/ceph:v18, name=inspiring_moore, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:41 np0005539550 podman[89480]: 2025-11-29 07:15:41.792759861 +0000 UTC m=+0.223818173 container start eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc (image=quay.io/ceph/ceph:v18, name=inspiring_moore, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:15:41 np0005539550 podman[89480]: 2025-11-29 07:15:41.798476994 +0000 UTC m=+0.229535326 container attach eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc (image=quay.io/ceph/ceph:v18, name=inspiring_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/278235459' entity='client.admin' 
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:42 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:15:42 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:42 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:15:42 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 29 02:15:42 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:15:42 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:15:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:42 np0005539550 inspiring_moore[89496]: Scheduled rgw.rgw update...
Nov 29 02:15:42 np0005539550 inspiring_moore[89496]: Scheduled ingress.rgw.default update...
Nov 29 02:15:42 np0005539550 systemd[1]: libpod-eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc.scope: Deactivated successfully.
Nov 29 02:15:42 np0005539550 podman[89480]: 2025-11-29 07:15:42.83707791 +0000 UTC m=+1.268136222 container died eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc (image=quay.io/ceph/ceph:v18, name=inspiring_moore, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:15:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v110: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:43 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-2.zfrvoq 192.168.122.102:0/3150393083; not ready for session (expect reconnect)
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zfrvoq started
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:15:43 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 29 02:15:43 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-21427c5d78d63b85eb613069cb76c6af4fe6993ecca795132abafabd7e7e534f-merged.mount: Deactivated successfully.
Nov 29 02:15:43 np0005539550 podman[89480]: 2025-11-29 07:15:43.481264384 +0000 UTC m=+1.912322696 container remove eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc (image=quay.io/ceph/ceph:v18, name=inspiring_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:43 np0005539550 systemd[1]: libpod-conmon-eaadacae3fbf4d39492f2ff455c87a6fddd10ab3cb079e357fc121439ad9cbcc.scope: Deactivated successfully.
Nov 29 02:15:43 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 29 02:15:43 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.pdhsqi(active, since 2m), standbys: compute-2.zfrvoq
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zfrvoq", "id": "compute-2.zfrvoq"} v 0) v1
Nov 29 02:15:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zfrvoq", "id": "compute-2.zfrvoq"}]: dispatch
Nov 29 02:15:44 np0005539550 ceph-mon[74435]: Saving service ingress.rgw.default spec with placement count:2
Nov 29 02:15:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 29 02:15:44 np0005539550 ceph-mon[74435]: Deploying daemon osd.2 on compute-2
Nov 29 02:15:44 np0005539550 python3[89608]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:15:45 np0005539550 python3[89679]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400544.5519426-37402-104023832424234/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v111: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:45 np0005539550 python3[89729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '
 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:45 np0005539550 podman[89730]: 2025-11-29 07:15:45.766255055 +0000 UTC m=+0.048088245 container create aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba (image=quay.io/ceph/ceph:v18, name=competent_hermann, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:45 np0005539550 systemd[1]: Started libpod-conmon-aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba.scope.
Nov 29 02:15:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177132a78ec7dddf90daa42af00a2bb3c15c328c7e2ccc0e9d64d462a723b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177132a78ec7dddf90daa42af00a2bb3c15c328c7e2ccc0e9d64d462a723b6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc177132a78ec7dddf90daa42af00a2bb3c15c328c7e2ccc0e9d64d462a723b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:45 np0005539550 podman[89730]: 2025-11-29 07:15:45.74329398 +0000 UTC m=+0.025127220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:45 np0005539550 podman[89730]: 2025-11-29 07:15:45.848178365 +0000 UTC m=+0.130011575 container init aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba (image=quay.io/ceph/ceph:v18, name=competent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:45 np0005539550 podman[89730]: 2025-11-29 07:15:45.854106254 +0000 UTC m=+0.135939474 container start aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba (image=quay.io/ceph/ceph:v18, name=competent_hermann, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:45 np0005539550 podman[89730]: 2025-11-29 07:15:45.858642627 +0000 UTC m=+0.140475827 container attach aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba (image=quay.io/ceph/ceph:v18, name=competent_hermann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:46 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14274 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:15:46 np0005539550 ceph-mgr[74726]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
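Under the hood, the mgr drives the volume creation as the three mon commands audited above; issued by hand they would look like this sketch:

    ceph osd pool create cephfs.cephfs.meta          # metadata pool
    ceph osd pool create cephfs.cephfs.data --bulk   # data pool, bulk-flagged
    ceph fs new cephfs cephfs.cephfs.meta cephfs.cephfs.data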
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 02:15:46 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0[74431]: 2025-11-29T07:15:46.468+0000 7fcbc815b640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e2 new map
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e2 print_map
e2
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	2
flags	12 joinable allow_snaps allow_multimds_snaps
created	2025-11-29T07:15:46.469688+0000
modified	2025-11-29T07:15:46.469738+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	
up	{}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:0
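MDS_ALL_DOWN immediately after "fs new" is expected here: the filesystem exists (fsmap cephfs:0) but no MDS daemon has been deployed yet, so the error should clear once the mds.cephfs service scheduled below brings daemons up. Two quick checks, assuming admin access:

    ceph health detail      # lists MDS_ALL_DOWN until an MDS joins
    ceph fs status cephfs   # rank/standby view of the new filesystem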
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:46 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
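The "failed to return metadata for osd.2: (2) No such file or directory" entries (repeated below) mean the monitor has no metadata recorded for osd.2 yet; an OSD registers its metadata only once it has finished booting and joined. The same query can be run by hand to watch it resolve:

    ceph osd metadata 2   # errors with ENOENT until osd.2 finishes booting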
Nov 29 02:15:46 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:46 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:15:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v113: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 29 02:15:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:15:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 29 02:15:48 np0005539550 ceph-mon[74435]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 29 02:15:48 np0005539550 ceph-mon[74435]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 29 02:15:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:48 np0005539550 ceph-mgr[74726]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 29 02:15:48 np0005539550 systemd[1]: libpod-aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba.scope: Deactivated successfully.
Nov 29 02:15:48 np0005539550 podman[89730]: 2025-11-29 07:15:48.132013168 +0000 UTC m=+2.413846428 container died aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba (image=quay.io/ceph/ceph:v18, name=competent_hermann, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fc177132a78ec7dddf90daa42af00a2bb3c15c328c7e2ccc0e9d64d462a723b6-merged.mount: Deactivated successfully.
Nov 29 02:15:48 np0005539550 podman[89730]: 2025-11-29 07:15:48.187796214 +0000 UTC m=+2.469629394 container remove aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba (image=quay.io/ceph/ceph:v18, name=competent_hermann, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:15:48 np0005539550 systemd[1]: libpod-conmon-aa25639dc70d5c59e8bd1d5ade49cf273349bb2dcef31f3edb108b6bae6ac7ba.scope: Deactivated successfully.
Nov 29 02:15:48 np0005539550 python3[89807]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
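This task runs "ceph orch apply --in-file /home/ceph_spec.yaml" in a one-shot container, with /tmp/ceph_mds.yml bind-mounted in as the spec. The spec itself is never logged; given the "Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2" entries, a plausible reconstruction is (hypothetical content, real file not shown):

    # Hypothetical /tmp/ceph_mds.yml; only its effect is visible in the log.
    cat > /tmp/ceph_mds.yml <<'EOF'
    service_type: mds
    service_id: cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    EOF
    ceph orch apply --in-file /tmp/ceph_mds.yml   # prints "Scheduled mds.cephfs update..."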
Nov 29 02:15:48 np0005539550 podman[89808]: 2025-11-29 07:15:48.601789216 +0000 UTC m=+0.048552366 container create 3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398 (image=quay.io/ceph/ceph:v18, name=nostalgic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:48 np0005539550 systemd[1]: Started libpod-conmon-3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398.scope.
Nov 29 02:15:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa09905be61844dadb3ea04b818dd8ab69501108d060280f0e3060c7a422650f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa09905be61844dadb3ea04b818dd8ab69501108d060280f0e3060c7a422650f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa09905be61844dadb3ea04b818dd8ab69501108d060280f0e3060c7a422650f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:48 np0005539550 podman[89808]: 2025-11-29 07:15:48.578888063 +0000 UTC m=+0.025651223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:48 np0005539550 podman[89808]: 2025-11-29 07:15:48.682082756 +0000 UTC m=+0.128845936 container init 3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398 (image=quay.io/ceph/ceph:v18, name=nostalgic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:48 np0005539550 podman[89808]: 2025-11-29 07:15:48.689584423 +0000 UTC m=+0.136347573 container start 3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398 (image=quay.io/ceph/ceph:v18, name=nostalgic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:15:48 np0005539550 podman[89808]: 2025-11-29 07:15:48.695610744 +0000 UTC m=+0.142373894 container attach 3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398 (image=quay.io/ceph/ceph:v18, name=nostalgic_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 29 02:15:49 np0005539550 ceph-mon[74435]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v114: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:49 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14280 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 02:15:49 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:49 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:15:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:49 np0005539550 nostalgic_meitner[89824]: Scheduled mds.cephfs update...
Nov 29 02:15:49 np0005539550 systemd[1]: libpod-3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398.scope: Deactivated successfully.
Nov 29 02:15:49 np0005539550 podman[89808]: 2025-11-29 07:15:49.488644733 +0000 UTC m=+0.935407883 container died 3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398 (image=quay.io/ceph/ceph:v18, name=nostalgic_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fa09905be61844dadb3ea04b818dd8ab69501108d060280f0e3060c7a422650f-merged.mount: Deactivated successfully.
Nov 29 02:15:49 np0005539550 podman[89808]: 2025-11-29 07:15:49.544013729 +0000 UTC m=+0.990776879 container remove 3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398 (image=quay.io/ceph/ceph:v18, name=nostalgic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:15:49 np0005539550 systemd[1]: libpod-conmon-3b795f7b16164f78d5e6972ffc461e64aa27b7e5359b3cc9b4c5ea41221a4398.scope: Deactivated successfully.
Nov 29 02:15:49 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 29 02:15:49 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 29 02:15:50 np0005539550 python3[89936]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 02:15:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:50 np0005539550 ceph-mon[74435]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 29 02:15:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:50 np0005539550 python3[90009]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764400550.0243554-37432-37189219622692/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=ced193c31d6b83611be924c31eabde34732ad5bc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:15:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v115: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:51 np0005539550 python3[90059]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
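The preceding copy task wrote /etc/ceph/ceph.client.openstack.keyring with mode 0644 and owner/group 167 (the ceph uid/gid in RHEL-family packaging); this task imports it into the cluster's auth database. The file follows the standard keyring format; a sketch with a placeholder key and assumed caps (the actual entries are not in this log):

    # Hypothetical keyring content; the key and caps are placeholders.
    [client.openstack]
        key = AQD...placeholder...==
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"

    ceph auth import -i /etc/ceph/ceph.client.openstack.keyring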
Nov 29 02:15:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.fchyan started
Nov 29 02:15:51 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:15:51 np0005539550 podman[90060]: 2025-11-29 07:15:51.349322654 +0000 UTC m=+0.043026928 container create 7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5 (image=quay.io/ceph/ceph:v18, name=flamboyant_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:15:51 np0005539550 systemd[1]: Started libpod-conmon-7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5.scope.
Nov 29 02:15:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f1455b1b0e4473af2dadde60e8422d47c985a071be648aec31e0f8bc664554/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f1455b1b0e4473af2dadde60e8422d47c985a071be648aec31e0f8bc664554/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:51 np0005539550 podman[90060]: 2025-11-29 07:15:51.331996331 +0000 UTC m=+0.025700615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:51 np0005539550 podman[90060]: 2025-11-29 07:15:51.438881946 +0000 UTC m=+0.132586240 container init 7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5 (image=quay.io/ceph/ceph:v18, name=flamboyant_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:15:51 np0005539550 podman[90060]: 2025-11-29 07:15:51.445562783 +0000 UTC m=+0.139267057 container start 7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5 (image=quay.io/ceph/ceph:v18, name=flamboyant_khorana, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:15:51 np0005539550 podman[90060]: 2025-11-29 07:15:51.450347603 +0000 UTC m=+0.144051887 container attach 7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5 (image=quay.io/ceph/ceph:v18, name=flamboyant_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 29 02:15:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 29 02:15:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 02:15:52 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 2m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:52 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.fchyan", "id": "compute-1.fchyan"} v 0) v1
Nov 29 02:15:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr metadata", "who": "compute-1.fchyan", "id": "compute-1.fchyan"}]: dispatch
Nov 29 02:15:52 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 29 02:15:52 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 29 02:15:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v117: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 02:15:53 np0005539550 systemd[1]: libpod-7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5.scope: Deactivated successfully.
Nov 29 02:15:53 np0005539550 podman[90060]: 2025-11-29 07:15:53.497555694 +0000 UTC m=+2.191259988 container died 7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5 (image=quay.io/ceph/ceph:v18, name=flamboyant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e36 create-or-move crush item name 'osd.2' initial_weight 0.0068 at location {host=compute-2,root=default}
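CRUSH weights are conventionally the device capacity in TiB, so initial_weight 0.0068 corresponds to roughly 0.0068 × 1024 ≈ 7 GiB per OSD, which lines up with the 14 GiB total that pgmap reports while only two of the three OSDs are up. The resulting layout can be inspected with:

    ceph osd crush tree   # shows osd.2 under host=compute-2, weight ≈ 0.0068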
Nov 29 02:15:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e6f1455b1b0e4473af2dadde60e8422d47c985a071be648aec31e0f8bc664554-merged.mount: Deactivated successfully.
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 29 02:15:53 np0005539550 ceph-mon[74435]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 29 02:15:53 np0005539550 podman[90060]: 2025-11-29 07:15:53.644281606 +0000 UTC m=+2.337985880 container remove 7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5 (image=quay.io/ceph/ceph:v18, name=flamboyant_khorana, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:15:53 np0005539550 systemd[1]: libpod-conmon-7ecf6ba78a346af60eb50e1afd531c4322cd821ee01a00e8be09a8e2602e37a5.scope: Deactivated successfully.
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 29 02:15:54 np0005539550 python3[90139]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:54 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:54 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:54 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:54 np0005539550 podman[90191]: 2025-11-29 07:15:54.573283979 +0000 UTC m=+0.048382722 container create ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7 (image=quay.io/ceph/ceph:v18, name=upbeat_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:15:54 np0005539550 systemd[1]: Started libpod-conmon-ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7.scope.
Nov 29 02:15:54 np0005539550 podman[90191]: 2025-11-29 07:15:54.549671558 +0000 UTC m=+0.024770311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34e7a51aca65e3d105f12a6a4216652075b919b71202cac9c17f74463a8a5af7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34e7a51aca65e3d105f12a6a4216652075b919b71202cac9c17f74463a8a5af7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:54 np0005539550 podman[90191]: 2025-11-29 07:15:54.665706501 +0000 UTC m=+0.140805274 container init ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7 (image=quay.io/ceph/ceph:v18, name=upbeat_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: from='osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/927647266' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:54 np0005539550 ceph-mon[74435]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Nov 29 02:15:54 np0005539550 podman[90191]: 2025-11-29 07:15:54.674233954 +0000 UTC m=+0.149332697 container start ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7 (image=quay.io/ceph/ceph:v18, name=upbeat_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:54 np0005539550 podman[90191]: 2025-11-29 07:15:54.678400579 +0000 UTC m=+0.153499352 container attach ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7 (image=quay.io/ceph/ceph:v18, name=upbeat_villani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:15:54 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 29 02:15:54 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.694079399s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177742004s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.790192604s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.273910522s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.694079399s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177742004s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.790192604s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.273910522s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.10( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693737030s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177764893s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.10( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693737030s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177764893s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693554878s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177757263s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.c( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693568230s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177757263s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693554878s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177757263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[3.0( empty local-lis/les=24/25 n=0 ec=16/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789824486s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274101257s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.d( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693364143s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177619934s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.c( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693568230s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177757263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[3.0( empty local-lis/les=24/25 n=0 ec=16/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789824486s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274101257s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.d( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693364143s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177619934s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=15.693207741s) [] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177673340s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=37 pruub=15.693207741s) [] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177673340s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.13( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693195343s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177749634s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789692879s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274345398s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789601326s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274269104s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789692879s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274345398s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789601326s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274269104s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789566040s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274330139s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789735794s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274513245s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789566040s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274330139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789735794s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274513245s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789638519s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274543762s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.1b( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.692702293s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 active pruub 97.177627563s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789638519s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274543762s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.1b( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.692702293s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177627563s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789354324s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274406433s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789538383s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274597168s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789354324s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274406433s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789538383s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274597168s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789499283s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 active pruub 92.274604797s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=37 pruub=10.789499283s) [] r=-1 lpr=37 pi=[24,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274604797s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 37 pg[2.13( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=37 pruub=15.693195343s) [] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177749634s@ mbc={}] state<Start>: transitioning to Stray
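This burst of osd.0 messages is ordinary peering churn from osdmap e37: the CRUSH change remapped these (empty) PGs away from osd.0, so each one restarts its peering interval (up/acting [0] -> []) and osd.0 parks its copy in Stray until a new acting set claims or removes it. The current mappings can be listed with:

    ceph pg dump pgs_brief   # one line per PG with its up/acting sets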
Nov 29 02:15:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v119: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:15:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1230072446' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:15:55 np0005539550 upbeat_villani[90208]: 
Nov 29 02:15:55 np0005539550 upbeat_villani[90208]: {"fsid":"b66774a7-56d9-5535-bd8c-681234404870","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":16,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":33,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":37,"num_osds":3,"num_up_osds":2,"osd_up_since":1764400483,"num_in_osds":3,"osd_in_since":1764400536,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":100}],"num_pgs":100,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":56115200,"bytes_avail":14967881728,"bytes_total":15023996928},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-29T07:15:47.193555+0000","services":{"mgr":{"daemons":{"summary":"","compute-2.zfrvoq":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Nov 29 02:15:55 np0005539550 systemd[1]: libpod-ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7.scope: Deactivated successfully.
Nov 29 02:15:55 np0005539550 conmon[90208]: conmon ac63937c696c091be3c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7.scope/container/memory.events
Nov 29 02:15:55 np0005539550 podman[90191]: 2025-11-29 07:15:55.450556215 +0000 UTC m=+0.925654968 container died ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7 (image=quay.io/ceph/ceph:v18, name=upbeat_villani, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:15:55 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:15:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:55 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-34e7a51aca65e3d105f12a6a4216652075b919b71202cac9c17f74463a8a5af7-merged.mount: Deactivated successfully.
Nov 29 02:15:55 np0005539550 podman[90191]: 2025-11-29 07:15:55.643418003 +0000 UTC m=+1.118516746 container remove ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7 (image=quay.io/ceph/ceph:v18, name=upbeat_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:15:55 np0005539550 systemd[1]: libpod-conmon-ac63937c696c091be3c855905b9e4bbfa875d4986762d6a845421c98f8104be7.scope: Deactivated successfully.
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 29 02:15:55 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 29 02:15:56 np0005539550 python3[90400]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:15:56 np0005539550 podman[90401]: 2025-11-29 07:15:56.110978465 +0000 UTC m=+0.052746441 container create 57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54 (image=quay.io/ceph/ceph:v18, name=suspicious_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:15:56 np0005539550 systemd[1]: Started libpod-conmon-57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54.scope.
Nov 29 02:15:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1d78331a0110d559398df4c1b0cf76b944840255174900efdc63eded9f38ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1d78331a0110d559398df4c1b0cf76b944840255174900efdc63eded9f38ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:56 np0005539550 podman[90401]: 2025-11-29 07:15:56.091254252 +0000 UTC m=+0.033022248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:56 np0005539550 podman[90401]: 2025-11-29 07:15:56.200529267 +0000 UTC m=+0.142297273 container init 57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54 (image=quay.io/ceph/ceph:v18, name=suspicious_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:15:56 np0005539550 podman[90401]: 2025-11-29 07:15:56.210568928 +0000 UTC m=+0.152336914 container start 57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54 (image=quay.io/ceph/ceph:v18, name=suspicious_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:15:56 np0005539550 podman[90401]: 2025-11-29 07:15:56.215778628 +0000 UTC m=+0.157546634 container attach 57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54 (image=quay.io/ceph/ceph:v18, name=suspicious_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:15:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:15:56 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:15:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:56 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:15:56 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 29 02:15:56 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613184922' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:15:57 np0005539550 suspicious_perlman[90416]: {"epoch":3,"fsid":"b66774a7-56d9-5535-bd8c-681234404870","modified":"2025-11-29T07:15:14.418328Z","created":"2025-11-29T07:11:54.010207Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 29 02:15:57 np0005539550 suspicious_perlman[90416]: dumped monmap epoch 3
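The monmap dump above (epoch 3) lists three monitors, each with a v2 address on port 3300 and a v1 address on port 6789. A small sketch, under the assumption that the JSON line was saved to a hypothetical monmap.json, that rebuilds a per-rank address list from it:

    #!/usr/bin/env python3
    import json

    # Parse the `ceph mon dump --format json` output captured above.
    with open("monmap.json") as f:
        monmap = json.load(f)

    for mon in sorted(monmap["mons"], key=lambda m: m["rank"]):
        addrs = ",".join(a["addr"] for a in mon["public_addrs"]["addrvec"])
        print(f"rank {mon['rank']} {mon['name']}: [{addrs}]")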
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v120: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:15:57 np0005539550 systemd[1]: libpod-57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54.scope: Deactivated successfully.
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:57 np0005539550 podman[90441]: 2025-11-29 07:15:57.264314743 +0000 UTC m=+0.030168236 container died 57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54 (image=quay.io/ceph/ceph:v18, name=suspicious_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:15:57
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', '.mgr', 'vms', 'volumes']
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 4/10 changes
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] Executing plan auto_2025-11-29_07:15:57
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] ceph osd pg-upmap-items 3.4 mappings [{'from': 0, 'to': 2}]
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] ceph osd pg-upmap-items 3.d mappings [{'from': 1, 'to': 2}]
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] ceph osd pg-upmap-items 3.18 mappings [{'from': 0, 'to': 2}]
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [balancer INFO root] ceph osd pg-upmap-items 4.10 mappings [{'from': 0, 'to': 2}]
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]} v 0) v1
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]: dispatch
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]} v 0) v1
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]: dispatch
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]} v 0) v1
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]: dispatch
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]} v 0) v1
Nov 29 02:15:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]: dispatch
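The four pg-upmap-items dispatches above are the mon-side half of balancer plan auto_2025-11-29_07:15:57: each command pins one PG's mapping from a source OSD onto OSD 2. For illustration only, a sketch of replaying the same plan by hand through the ceph CLI (the mgr already executed it automatically here; a ceph binary on PATH with admin credentials is assumed):

    #!/usr/bin/env python3
    import subprocess

    # (pgid, from_osd, to_osd) triples copied from the balancer log above.
    plan = [("3.4", 0, 2), ("3.d", 1, 2), ("3.18", 0, 2), ("4.10", 0, 2)]

    for pgid, src, dst in plan:
        # Equivalent CLI: ceph osd pg-upmap-items <pgid> <from> <to>
        subprocess.run(
            ["ceph", "osd", "pg-upmap-items", pgid, str(src), str(dst)],
            check=True,
        )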
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:15:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7a1d78331a0110d559398df4c1b0cf76b944840255174900efdc63eded9f38ff-merged.mount: Deactivated successfully.
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 29 02:15:58 np0005539550 podman[90441]: 2025-11-29 07:15:58.286446535 +0000 UTC m=+1.052299998 container remove 57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54 (image=quay.io/ceph/ceph:v18, name=suspicious_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:15:58 np0005539550 systemd[1]: libpod-conmon-57a01284b09a1a3205a371716f5182d49881156764dab27e8ef4b65fc3aa6e54.scope: Deactivated successfully.
Nov 29 02:15:58 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:58 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]': finished
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]': finished
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]': finished
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]': finished
Nov 29 02:15:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Nov 29 02:15:59 np0005539550 python3[90480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e38 crush map has features 3314933000854323200, adjusting msgr requires
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e38 crush map has features 432629239337189376, adjusting msgr requires
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v122: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:15:59 np0005539550 podman[90481]: 2025-11-29 07:15:59.212525294 +0000 UTC m=+0.076216069 container create 3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05 (image=quay.io/ceph/ceph:v18, name=nervous_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]: dispatch
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]: dispatch
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.4", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.d", "id": [1, 2]}]': finished
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "3.18", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "4.10", "id": [0, 2]}]': finished
Nov 29 02:15:59 np0005539550 podman[90481]: 2025-11-29 07:15:59.166970934 +0000 UTC m=+0.030661729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:15:59 np0005539550 systemd[1]: Started libpod-conmon-3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05.scope.
Nov 29 02:15:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:15:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48fd5a5d08793b5cd457b422eb27c8ae528186265ea518b52954fc1847f6f16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48fd5a5d08793b5cd457b422eb27c8ae528186265ea518b52954fc1847f6f16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:15:59 np0005539550 podman[90481]: 2025-11-29 07:15:59.425099455 +0000 UTC m=+0.288790230 container init 3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05 (image=quay.io/ceph/ceph:v18, name=nervous_buck, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:15:59 np0005539550 podman[90481]: 2025-11-29 07:15:59.43566516 +0000 UTC m=+0.299355935 container start 3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05 (image=quay.io/ceph/ceph:v18, name=nervous_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:15:59 np0005539550 podman[90481]: 2025-11-29 07:15:59.440826949 +0000 UTC m=+0.304517744 container attach 3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05 (image=quay.io/ceph/ceph:v18, name=nervous_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 38 crush map has features 432629239337189376, adjusting msgr requires for clients
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 38 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 38 crush map has features 3314933000854323200, adjusting msgr requires for osds
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.116237640s) [] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active pruub 100.256729126s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 38 pg[3.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.133989334s) [] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active pruub 100.274482727s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.116237640s) [] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.256729126s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 38 pg[3.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.133989334s) [] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.274482727s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 38 pg[3.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.133496284s) [] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 active pruub 100.274833679s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:15:59 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 38 pg[3.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=38 pruub=14.133496284s) [] r=-1 lpr=38 pi=[24,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.274833679s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.009242174413735343 quantized to 1 (current 1)
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Nov 29 02:15:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
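The pg_autoscaler pass above is internally consistent for the '.mgr' pool: its logged pg target equals the usage ratio times 300, which matches the default mon_target_pg_per_osd of 100 across these 3 OSDs, and targets are then quantized to a power of two with a floor of 1. The jumps to 32 and 16 for the empty pools presumably come from floors the log does not print (pool defaults, plus the 4.0 metadata bias on cephfs.cephfs.meta), so the sketch below reproduces only the '.mgr' arithmetic, under those stated assumptions:

    #!/usr/bin/env python3
    # Usage ratio for '.mgr' as logged above.
    ratio = 3.080724804578448e-05

    # Assumed: target = ratio * mon_target_pg_per_osd (100) * num_osds (3).
    target = ratio * 100 * 3
    print(target)     # 0.009242174413735343, matching the log line

    # Quantize up to a power of two, never below 1 PG.
    pg_num = 1
    while pg_num < target:
        pg_num *= 2
    print(pg_num)     # 1, i.e. "quantized to 1 (current 1)"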
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:15:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 29 02:16:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3782324994' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 02:16:00 np0005539550 nervous_buck[90496]: [client.openstack]
Nov 29 02:16:00 np0005539550 nervous_buck[90496]: #011key = AQCDnCppAAAAABAAoofevHYw/FgxyqNnqVNNOg==
Nov 29 02:16:00 np0005539550 nervous_buck[90496]: #011caps mgr = "allow *"
Nov 29 02:16:00 np0005539550 nervous_buck[90496]: #011caps mon = "profile rbd"
Nov 29 02:16:00 np0005539550 nervous_buck[90496]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
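In the keyring that "ceph auth get client.openstack" printed above, the leading #011 is the syslog layer's octal escape for an embedded TAB (011 octal = 9); the caps grant the rbd profile on every pool plus full mgr access. A tiny sketch that undoes the escaping so the lines could be written back out as a normal keyring file (the sample lines are copied from the output above):

    #!/usr/bin/env python3
    # Octal #011 is how the syslog pipeline renders embedded TABs.
    lines = [
        '[client.openstack]',
        '#011key = AQCDnCppAAAAABAAoofevHYw/FgxyqNnqVNNOg==',
        '#011caps mon = "profile rbd"',
    ]
    for line in lines:
        print(line.replace("#011", "\t"))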
Nov 29 02:16:00 np0005539550 systemd[1]: libpod-3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05.scope: Deactivated successfully.
Nov 29 02:16:00 np0005539550 podman[90481]: 2025-11-29 07:16:00.206988805 +0000 UTC m=+1.070679610 container died 3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05 (image=quay.io/ceph/ceph:v18, name=nervous_buck, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 29 02:16:00 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:16:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:16:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:16:00 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:16:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v123: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:16:01 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:16:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:16:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:16:01 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:16:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a48fd5a5d08793b5cd457b422eb27c8ae528186265ea518b52954fc1847f6f16-merged.mount: Deactivated successfully.
Nov 29 02:16:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:16:01 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 29 02:16:01 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 29 02:16:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Nov 29 02:16:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:01 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/3782324994' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 43a5b29e-5d61-4713-9dcf-563b5acb9ade (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 02:16:02 np0005539550 podman[90481]: 2025-11-29 07:16:02.121813351 +0000 UTC m=+2.985504126 container remove 3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05 (image=quay.io/ceph/ceph:v18, name=nervous_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
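The WRN pair above is cephadm's memory autotuner colliding with a hard floor: it computed roughly 128 MiB per OSD for compute-2 (134214860 bytes, logged as 127.9M), but osd_memory_target refuses values below 939524096 bytes, which is exactly 896 MiB, so the config set was rejected and the mgr logged the warning instead. The arithmetic, checked:

    #!/usr/bin/env python3
    MIB = 2 ** 20
    requested = 134_214_860     # what cephadm tried to set on compute-2
    minimum = 939_524_096       # the floor quoted in the error message
    print(requested / MIB)      # ~127.997, logged as "127.9M"
    print(minimum / MIB)        # 896.0
    assert requested < minimum  # hence "below minimum" and the WRN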
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:02 np0005539550 systemd[1]: libpod-conmon-3d37c69394f634f231878653b0062bfda370fc3b5176eac5266625aa6a599a05.scope: Deactivated successfully.
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: Adjusting osd_memory_target on compute-2 to 127.9M
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: Updating compute-0:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: Updating compute-1:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: Updating compute-2:/etc/ceph/ceph.conf
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e40 e40: 3 total, 2 up, 3 in
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 2 up, 3 in
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev d3ccf2c9-eaf4-49a1-a6c1-641635e52eb7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 02:16:02 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:16:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v126: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 14 GiB / 14 GiB avail
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2082573902; not ready for session (expect reconnect)
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902] boot
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=10.028974533s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.274482727s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=10.011162758s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.256729126s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.18( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=10.028863907s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.274482727s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.10( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=10.011096954s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.256729126s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.028080463s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.273910522s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.931861877s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177749634s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.931988716s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177742004s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.10( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.931590557s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177764893s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.10( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.931483269s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177764893s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.c( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.931279182s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177757263s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.13( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.931261539s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177749634s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.c( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.931223869s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177757263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.14( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.027532101s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.273910522s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.d( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.930958271s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177619934s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.d( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.930849075s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177619934s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.930974007s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177757263s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.a( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.930879116s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177757263s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41 pruub=6.930663109s) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177673340s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=16/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.027076244s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274101257s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41 pruub=8.972483635s) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active pruub 99.219558716s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.0( empty local-lis/les=24/25 n=0 ec=16/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026969433s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274101257s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026965618s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274269104s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.027039051s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274345398s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.6( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026988983s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274345398s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=10.027305603s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.274833679s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.4( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=10.027255058s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 100.274833679s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026628494s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274330139s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.3( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026604176s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274330139s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41 pruub=8.972483635s) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown pruub 99.219558716s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026681423s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274513245s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.8( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026661158s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274513245s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.1b( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.929630756s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177627563s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.1b( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.929602623s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177627563s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.2( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026835680s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274269104s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026375532s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274543762s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[2.15( empty local-lis/les=29/30 n=0 ec=22/14 lis/c=29/29 les/c/f=30/30/0 sis=41 pruub=6.930830002s) [2] r=-1 lpr=41 pi=[29,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177742004s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026065826s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274406433s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.1c( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026042938s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274406433s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.1d( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026323557s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274543762s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026054382s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274597168s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[3.1b( empty local-lis/les=24/25 n=0 ec=24/16 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.026027441s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274597168s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.025933743s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274604797s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[4.19( empty local-lis/les=24/25 n=0 ec=24/19 lis/c=24/24 les/c/f=25/25/0 sis=41 pruub=2.025914192s) [2] r=-1 lpr=41 pi=[24,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 92.274604797s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 41 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=41 pruub=6.928646564s) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.177673340s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 9c0cc369-4f8d-4fd3-a164-c3638c29916a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 43a5b29e-5d61-4713-9dcf-563b5acb9ade (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 43a5b29e-5d61-4713-9dcf-563b5acb9ade (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev d3ccf2c9-eaf4-49a1-a6c1-641635e52eb7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event d3ccf2c9-eaf4-49a1-a6c1-641635e52eb7 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 1 seconds
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 9c0cc369-4f8d-4fd3-a164-c3638c29916a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 29 02:16:03 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 9c0cc369-4f8d-4fd3-a164-c3638c29916a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: Updating compute-1:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: Updating compute-2:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: Updating compute-0:/var/lib/ceph/b66774a7-56d9-5535-bd8c-681234404870/config/ceph.conf
Nov 29 02:16:03 np0005539550 ceph-mon[74435]: OSD bench result of 5535.863781 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 29 02:16:04 np0005539550 ansible-async_wrapper.py[91252]: Invoked with j228090854594 30 /home/zuul/.ansible/tmp/ansible-tmp-1764400563.4774938-37505-14629709716790/AnsiballZ_command.py _
Nov 29 02:16:04 np0005539550 ansible-async_wrapper.py[91313]: Starting module and watcher
Nov 29 02:16:04 np0005539550 ansible-async_wrapper.py[91313]: Start watching 91316 (30)
Nov 29 02:16:04 np0005539550 ansible-async_wrapper.py[91316]: Start module (91316)
Nov 29 02:16:04 np0005539550 ansible-async_wrapper.py[91252]: Return async_wrapper task started.
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:16:04 np0005539550 python3[91322]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 12 completed events
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:16:04 np0005539550 podman[91383]: 2025-11-29 07:16:04.280471801 +0000 UTC m=+0.026730980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:16:04 np0005539550 podman[91383]: 2025-11-29 07:16:04.461750539 +0000 UTC m=+0.208009708 container create da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a (image=quay.io/ceph/ceph:v18, name=strange_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:16:04 np0005539550 systemd[1]: Started libpod-conmon-da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a.scope.
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338e4a9b494378dad9577ab4020a5d90db662b5e25e5de56d722738dd724c1ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338e4a9b494378dad9577ab4020a5d90db662b5e25e5de56d722738dd724c1ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:04 np0005539550 podman[91383]: 2025-11-29 07:16:04.545047744 +0000 UTC m=+0.291306923 container init da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a (image=quay.io/ceph/ceph:v18, name=strange_villani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:16:04 np0005539550 podman[91383]: 2025-11-29 07:16:04.551966817 +0000 UTC m=+0.298225976 container start da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a (image=quay.io/ceph/ceph:v18, name=strange_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:16:04 np0005539550 podman[91383]: 2025-11-29 07:16:04.556017158 +0000 UTC m=+0.302276317 container attach da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a (image=quay.io/ceph/ceph:v18, name=strange_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ee7dfe65-5eab-4cc0-ba66-2ba8c75ef34c does not exist
Nov 29 02:16:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e74e2df8-3ae3-4c35-bb88-51e5a4fecc07 does not exist
Nov 29 02:16:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6df44593-6c0a-48a6-8711-aa66610b6819 does not exist
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.1b( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.1a( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.c( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.f( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.8( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.1e( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.11( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.1f( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.10( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.15( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.13( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.12( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.14( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.17( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.16( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.b( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.a( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.9( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.d( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.6( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.1( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.3( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.7( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.4( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.2( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.e( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=23/24 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.5( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.1c( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.1d( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.18( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[5.19( empty local-lis/les=21/22 n=0 ec=41/21 lis/c=21/21 les/c/f=22/22/0 sis=41) [2] r=-1 lpr=41 pi=[21,41)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.c( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.f( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.b( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.9( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.0( empty local-lis/les=41/42 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.4( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.6( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 42 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=23/23 les/c/f=24/24/0 sis=41) [0] r=0 lpr=41 pi=[23,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: osd.2 [v2:192.168.122.102:6800/2082573902,v1:192.168.122.102:6801/2082573902] boot
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:16:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v129: 146 pgs: 21 peering, 46 unknown, 79 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:16:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14310 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:16:05 np0005539550 strange_villani[91497]: 
Nov 29 02:16:05 np0005539550 strange_villani[91497]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 29 02:16:05 np0005539550 systemd[1]: libpod-da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a.scope: Deactivated successfully.
Nov 29 02:16:05 np0005539550 podman[91383]: 2025-11-29 07:16:05.267610149 +0000 UTC m=+1.013869328 container died da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a (image=quay.io/ceph/ceph:v18, name=strange_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:05 np0005539550 python3[91732]: ansible-ansible.legacy.async_status Invoked with jid=j228090854594.91252 mode=status _async_dir=/root/.ansible_async
Nov 29 02:16:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-338e4a9b494378dad9577ab4020a5d90db662b5e25e5de56d722738dd724c1ef-merged.mount: Deactivated successfully.
Nov 29 02:16:05 np0005539550 podman[91383]: 2025-11-29 07:16:05.880133479 +0000 UTC m=+1.626392638 container remove da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a (image=quay.io/ceph/ceph:v18, name=strange_villani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:16:05 np0005539550 ansible-async_wrapper.py[91316]: Module complete (91316)
Nov 29 02:16:05 np0005539550 systemd[1]: libpod-conmon-da7c7c2f968d473c8a4465ea90decdfb07e71042aa86c75c78ca28bd0d76661a.scope: Deactivated successfully.
Nov 29 02:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 29 02:16:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:06 np0005539550 podman[91773]: 2025-11-29 07:16:06.071269263 +0000 UTC m=+0.048062964 container create 6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 29 02:16:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 29 02:16:06 np0005539550 systemd[1]: Started libpod-conmon-6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338.scope.
Nov 29 02:16:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:06 np0005539550 podman[91773]: 2025-11-29 07:16:06.050614406 +0000 UTC m=+0.027408117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:06 np0005539550 podman[91773]: 2025-11-29 07:16:06.283498515 +0000 UTC m=+0.260292236 container init 6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:16:06 np0005539550 podman[91773]: 2025-11-29 07:16:06.292546912 +0000 UTC m=+0.269340613 container start 6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:16:06 np0005539550 podman[91773]: 2025-11-29 07:16:06.297027604 +0000 UTC m=+0.273821335 container attach 6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:06 np0005539550 youthful_kalam[91788]: 167 167
Nov 29 02:16:06 np0005539550 systemd[1]: libpod-6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338.scope: Deactivated successfully.
Nov 29 02:16:06 np0005539550 podman[91773]: 2025-11-29 07:16:06.300536302 +0000 UTC m=+0.277330003 container died 6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9e36e05716f8ef460f21e42d22f03f16d2c38b4193e360d6122ef389f8d6b6ed-merged.mount: Deactivated successfully.
Nov 29 02:16:06 np0005539550 podman[91773]: 2025-11-29 07:16:06.423417797 +0000 UTC m=+0.400211498 container remove 6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:06 np0005539550 systemd[1]: libpod-conmon-6cbaae5e16263c6173d7448da1f073b1abafb7603f0783b9272733e46ca6d338.scope: Deactivated successfully.
Nov 29 02:16:06 np0005539550 podman[91840]: 2025-11-29 07:16:06.592312535 +0000 UTC m=+0.050961397 container create ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:16:06 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 29 02:16:06 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 29 02:16:06 np0005539550 systemd[1]: Started libpod-conmon-ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f.scope.
Nov 29 02:16:06 np0005539550 podman[91840]: 2025-11-29 07:16:06.568407796 +0000 UTC m=+0.027056678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0ed2de1b6c480b201c5fbdce0b126ef86e34adba9a6a836463a92c2cb17c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0ed2de1b6c480b201c5fbdce0b126ef86e34adba9a6a836463a92c2cb17c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0ed2de1b6c480b201c5fbdce0b126ef86e34adba9a6a836463a92c2cb17c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0ed2de1b6c480b201c5fbdce0b126ef86e34adba9a6a836463a92c2cb17c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5b0ed2de1b6c480b201c5fbdce0b126ef86e34adba9a6a836463a92c2cb17c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:06 np0005539550 podman[91840]: 2025-11-29 07:16:06.707031036 +0000 UTC m=+0.165679918 container init ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:06 np0005539550 podman[91840]: 2025-11-29 07:16:06.716816631 +0000 UTC m=+0.175465493 container start ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rosalind, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:16:06 np0005539550 podman[91840]: 2025-11-29 07:16:06.721269962 +0000 UTC m=+0.179918844 container attach ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rosalind, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:16:06 np0005539550 python3[91878]: ansible-ansible.legacy.async_status Invoked with jid=j228090854594.91252 mode=status _async_dir=/root/.ansible_async
Nov 29 02:16:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 29 02:16:07 np0005539550 python3[91935]: ansible-ansible.legacy.async_status Invoked with jid=j228090854594.91252 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 02:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:16:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v131: 177 pgs: 21 peering, 77 unknown, 79 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 29 02:16:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 29 02:16:07 np0005539550 pedantic_rosalind[91882]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:16:07 np0005539550 pedantic_rosalind[91882]: --> relative data size: 1.0
Nov 29 02:16:07 np0005539550 pedantic_rosalind[91882]: --> All data devices are unavailable
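The three pedantic_rosalind lines above are cephadm's ceph-volume batch report: one LVM data device was passed in, and it is reported unavailable because /dev/ceph_vg0/ceph_lv0 already carries an OSD, so no new OSDs will be created. A minimal sketch of checking the same thing programmatically — the podman flags and the --report invocation are assumptions modeled on the commands logged elsewhere in this section, not taken verbatim from these lines:

import json
import subprocess

# Assumed invocation, modeled on the podman commands logged in this section:
# run ceph-volume's batch planner in report-only mode and inspect its JSON.
CMD = [
    "podman", "run", "--rm", "--privileged",
    "--entrypoint", "ceph-volume", "quay.io/ceph/ceph:v18",
    "lvm", "batch", "--report", "--format", "json",
    "/dev/ceph_vg0/ceph_lv0",
]

out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
report = json.loads(out)
# An empty plan corresponds to the "All data devices are unavailable"
# message above: the LV is already consumed by OSD 0, so nothing to create.
print("planned OSDs:", report or "none")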
Nov 29 02:16:07 np0005539550 systemd[1]: libpod-ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f.scope: Deactivated successfully.
Nov 29 02:16:07 np0005539550 podman[91840]: 2025-11-29 07:16:07.711188513 +0000 UTC m=+1.169837375 container died ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rosalind, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c5b0ed2de1b6c480b201c5fbdce0b126ef86e34adba9a6a836463a92c2cb17c0-merged.mount: Deactivated successfully.
Nov 29 02:16:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:16:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:16:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:16:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:16:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:16:07 np0005539550 podman[91840]: 2025-11-29 07:16:07.787954035 +0000 UTC m=+1.246602897 container remove ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:16:07 np0005539550 systemd[1]: libpod-conmon-ff4cd4c1dafe9be4f6dd6685efac52c024d5ff64efde269f79f15cf0e346d93f.scope: Deactivated successfully.
Nov 29 02:16:07 np0005539550 python3[91969]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:07 np0005539550 podman[91983]: 2025-11-29 07:16:07.885710354 +0000 UTC m=+0.048384278 container create 3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0 (image=quay.io/ceph/ceph:v18, name=zen_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:07 np0005539550 systemd[1]: Started libpod-conmon-3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0.scope.
Nov 29 02:16:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3c657b7c79a4d1269e37d584a1bcc940cb24749b31b15cc0b9f73a78a70c58/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3c657b7c79a4d1269e37d584a1bcc940cb24749b31b15cc0b9f73a78a70c58/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:07 np0005539550 podman[91983]: 2025-11-29 07:16:07.865336953 +0000 UTC m=+0.028010907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:07 np0005539550 podman[91983]: 2025-11-29 07:16:07.973752995 +0000 UTC m=+0.136426949 container init 3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0 (image=quay.io/ceph/ceph:v18, name=zen_colden, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:07 np0005539550 podman[91983]: 2025-11-29 07:16:07.9825517 +0000 UTC m=+0.145225634 container start 3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0 (image=quay.io/ceph/ceph:v18, name=zen_colden, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:16:07 np0005539550 podman[91983]: 2025-11-29 07:16:07.987077906 +0000 UTC m=+0.149751840 container attach 3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0 (image=quay.io/ceph/ceph:v18, name=zen_colden, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:08 np0005539550 podman[92158]: 2025-11-29 07:16:08.509906243 +0000 UTC m=+0.048771868 container create 48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:16:08 np0005539550 systemd[1]: Started libpod-conmon-48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c.scope.
Nov 29 02:16:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:08 np0005539550 podman[92158]: 2025-11-29 07:16:08.488317151 +0000 UTC m=+0.027182786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:08 np0005539550 podman[92158]: 2025-11-29 07:16:08.596151198 +0000 UTC m=+0.135016843 container init 48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:08 np0005539550 podman[92158]: 2025-11-29 07:16:08.603376393 +0000 UTC m=+0.142242018 container start 48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:16:08 np0005539550 dazzling_engelbart[92174]: 167 167
Nov 29 02:16:08 np0005539550 systemd[1]: libpod-48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c.scope: Deactivated successfully.
Nov 29 02:16:08 np0005539550 podman[92158]: 2025-11-29 07:16:08.609324645 +0000 UTC m=+0.148190270 container attach 48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:08 np0005539550 podman[92158]: 2025-11-29 07:16:08.609732856 +0000 UTC m=+0.148598481 container died 48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:16:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b8e8b2ad8ca41d1b56641802976e8c377223d399c6a7f714325e19ee2255fb94-merged.mount: Deactivated successfully.
Nov 29 02:16:08 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14316 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:16:08 np0005539550 zen_colden[92030]: 
Nov 29 02:16:08 np0005539550 zen_colden[92030]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
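The zen_colden payload is the reply to the `orch status --format json` call dispatched above; an `available` backend that is not `paused` is what the Ansible task is effectively gating on before issuing further orchestrator commands. A minimal parsing sketch using the payload exactly as printed:

import json

# Sample payload copied from the zen_colden output above.
payload = '{"available": true, "backend": "cephadm", "paused": false, "workers": 10}'

status = json.loads(payload)
# A deployment step would typically require both conditions before
# running further "ceph orch" commands against the cluster.
ready = status["available"] and not status["paused"]
print(f"orchestrator backend={status['backend']} ready={ready}")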
Nov 29 02:16:08 np0005539550 podman[92158]: 2025-11-29 07:16:08.660055872 +0000 UTC m=+0.198921497 container remove 48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:16:08 np0005539550 systemd[1]: libpod-3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0.scope: Deactivated successfully.
Nov 29 02:16:08 np0005539550 podman[91983]: 2025-11-29 07:16:08.670341095 +0000 UTC m=+0.833015029 container died 3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0 (image=quay.io/ceph/ceph:v18, name=zen_colden, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:16:08 np0005539550 systemd[1]: libpod-conmon-48c00a32449b1b6485099ce85ea38953efbb2b020419a3871c8d899d0dacfe0c.scope: Deactivated successfully.
Nov 29 02:16:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cb3c657b7c79a4d1269e37d584a1bcc940cb24749b31b15cc0b9f73a78a70c58-merged.mount: Deactivated successfully.
Nov 29 02:16:08 np0005539550 podman[91983]: 2025-11-29 07:16:08.720798545 +0000 UTC m=+0.883472489 container remove 3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0 (image=quay.io/ceph/ceph:v18, name=zen_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:16:08 np0005539550 systemd[1]: libpod-conmon-3d4f2b41be3cb0b05f35f7c6b96c0d3cb780d398961709694e8f6d046726eca0.scope: Deactivated successfully.
Nov 29 02:16:08 np0005539550 podman[92209]: 2025-11-29 07:16:08.831340601 +0000 UTC m=+0.049844735 container create ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:16:08 np0005539550 systemd[1]: Started libpod-conmon-ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1.scope.
Nov 29 02:16:08 np0005539550 podman[92209]: 2025-11-29 07:16:08.809535274 +0000 UTC m=+0.028039428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8f1b6fafa3f3cf7768b6f56a8a514a55ae0c6c13f8f8c3d327de0c597ec5c19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8f1b6fafa3f3cf7768b6f56a8a514a55ae0c6c13f8f8c3d327de0c597ec5c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8f1b6fafa3f3cf7768b6f56a8a514a55ae0c6c13f8f8c3d327de0c597ec5c19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8f1b6fafa3f3cf7768b6f56a8a514a55ae0c6c13f8f8c3d327de0c597ec5c19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:08 np0005539550 podman[92209]: 2025-11-29 07:16:08.933898913 +0000 UTC m=+0.152403067 container init ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:16:08 np0005539550 podman[92209]: 2025-11-29 07:16:08.940435391 +0000 UTC m=+0.158939525 container start ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:16:08 np0005539550 podman[92209]: 2025-11-29 07:16:08.945200582 +0000 UTC m=+0.163704746 container attach ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:16:09 np0005539550 ansible-async_wrapper.py[91313]: Done in kid B.
Nov 29 02:16:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 21 peering, 62 unknown, 94 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:09 np0005539550 python3[92255]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:09 np0005539550 priceless_greider[92225]: {
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:    "0": [
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:        {
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "devices": [
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "/dev/loop3"
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            ],
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "lv_name": "ceph_lv0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "lv_size": "7511998464",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "name": "ceph_lv0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "tags": {
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.cluster_name": "ceph",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.crush_device_class": "",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.encrypted": "0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.osd_id": "0",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.type": "block",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:                "ceph.vdo": "0"
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            },
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "type": "block",
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:            "vg_name": "ceph_vg0"
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:        }
Nov 29 02:16:09 np0005539550 priceless_greider[92225]:    ]
Nov 29 02:16:09 np0005539550 priceless_greider[92225]: }
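The priceless_greider block above is a ceph-volume lvm listing keyed by OSD id; the LV tags are what tie /dev/ceph_vg0/ceph_lv0 to the cluster fsid b66774a7-56d9-5535-bd8c-681234404870 and OSD fsid 5dd67027-4f06-4800-93bd-47ed1a74c5e6 during adoption. A short sketch that extracts that mapping from a trimmed copy of the JSON above (only the fields used below are kept):

import json

# Trimmed sample of the ceph-volume lvm listing printed above.
listing = json.loads("""
{
  "0": [
    {
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "type": "block",
      "tags": {
        "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "ceph.osd_id": "0"
      }
    }
  ]
}
""")

# Map each OSD id to the block device and OSD fsid recorded in its LV tags.
for osd_id, lvs in listing.items():
    for lv in lvs:
        if lv["type"] == "block":
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])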
Nov 29 02:16:09 np0005539550 podman[92258]: 2025-11-29 07:16:09.7461966 +0000 UTC m=+0.024145788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:09 np0005539550 systemd[1]: libpod-ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1.scope: Deactivated successfully.
Nov 29 02:16:10 np0005539550 podman[92258]: 2025-11-29 07:16:10.049611217 +0000 UTC m=+0.327560375 container create 949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794 (image=quay.io/ceph/ceph:v18, name=naughty_nash, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:10 np0005539550 systemd[1]: Started libpod-conmon-949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794.scope.
Nov 29 02:16:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:10 np0005539550 podman[92209]: 2025-11-29 07:16:10.397576243 +0000 UTC m=+1.616080377 container died ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:16:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bebc0dc95da7b0d276cc72fb0899a92d4db627ef27106c08d7f1ebe5627adee/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bebc0dc95da7b0d276cc72fb0899a92d4db627ef27106c08d7f1ebe5627adee/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d8f1b6fafa3f3cf7768b6f56a8a514a55ae0c6c13f8f8c3d327de0c597ec5c19-merged.mount: Deactivated successfully.
Nov 29 02:16:10 np0005539550 podman[92209]: 2025-11-29 07:16:10.66220974 +0000 UTC m=+1.880713874 container remove ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:16:10 np0005539550 systemd[1]: libpod-conmon-ab7a42c60b0b00bd766caf18777995388054481cf5c23e212e5ecaf570b8a9c1.scope: Deactivated successfully.
Nov 29 02:16:10 np0005539550 podman[92258]: 2025-11-29 07:16:10.753581496 +0000 UTC m=+1.031530684 container init 949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794 (image=quay.io/ceph/ceph:v18, name=naughty_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:10 np0005539550 podman[92258]: 2025-11-29 07:16:10.766757543 +0000 UTC m=+1.044706701 container start 949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794 (image=quay.io/ceph/ceph:v18, name=naughty_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:10 np0005539550 podman[92258]: 2025-11-29 07:16:10.772064418 +0000 UTC m=+1.050013606 container attach 949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794 (image=quay.io/ceph/ceph:v18, name=naughty_nash, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:16:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 02:16:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:16:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:11 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14322 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:16:11 np0005539550 naughty_nash[92284]: 
Nov 29 02:16:11 np0005539550 naughty_nash[92284]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
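The naughty_nash line is the reply to the `orch ls --export -f json` command invoked at 02:16:09: a flat list of cephadm service specs (crash, ingress, mds, mgr, mon, osd, rgw). A sketch that summarizes placements from an abridged copy of that export — cephadm placements carry either a host_pattern or an explicit host list, and the sketch handles both:

import json

# Abridged service-spec export in the shape printed by naughty_nash above.
export = json.loads("""
[
  {"placement": {"host_pattern": "*"}, "service_name": "crash",
   "service_type": "crash"},
  {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]},
   "service_id": "default_drive_group",
   "service_name": "osd.default_drive_group", "service_type": "osd",
   "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]},
            "filter_logic": "AND", "objectstore": "bluestore"}}
]
""")

# Summarize where each service is placed.
for spec in export:
    placement = spec.get("placement", {})
    where = placement.get("host_pattern") or ",".join(placement.get("hosts", []))
    print(f"{spec['service_name']:<24} {spec['service_type']:<8} -> {where}")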
Nov 29 02:16:11 np0005539550 systemd[1]: libpod-949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794.scope: Deactivated successfully.
Nov 29 02:16:11 np0005539550 podman[92447]: 2025-11-29 07:16:11.394881202 +0000 UTC m=+0.026699334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 29 02:16:12 np0005539550 podman[92447]: 2025-11-29 07:16:12.65153957 +0000 UTC m=+1.283357672 container create d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_napier, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:16:12 np0005539550 systemd[1]: Started libpod-conmon-d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e.scope.
Nov 29 02:16:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 29 02:16:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:16:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:16:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:16:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 29 02:16:12 np0005539550 podman[92447]: 2025-11-29 07:16:12.896898703 +0000 UTC m=+1.528716825 container init d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 29 02:16:12 np0005539550 podman[92447]: 2025-11-29 07:16:12.90343574 +0000 UTC m=+1.535253842 container start d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_napier, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.19( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.5( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.3( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.6( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.a( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.17( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.14( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.1e( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 stoic_napier[92478]: 167 167
Nov 29 02:16:12 np0005539550 systemd[1]: libpod-d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e.scope: Deactivated successfully.
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.c( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[5.1d( empty local-lis/les=0/0 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991762161s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264427185s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991547585s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264259338s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.a( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991720200s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264427185s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.8( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991518021s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264259338s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991579056s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264488220s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991783142s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264770508s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.e( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991483688s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264488220s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991540909s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264572144s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.2( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991704941s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264770508s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.5( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991517067s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264572144s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991512299s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264785767s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.3( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991485596s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264785767s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991418839s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264862061s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.1( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991358757s) [2] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264862061s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991418839s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.265098572s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.d( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991395950s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.265098572s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991048813s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active pruub 115.264839172s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:16:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[6.7( empty local-lis/les=41/42 n=0 ec=41/23 lis/c=41/41 les/c/f=42/42/0 sis=45 pruub=15.991021156s) [1] r=-1 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 115.264839172s@ mbc={}] state<Start>: transitioning to Stray
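The burst of osd.0 lines above is the peering churn triggered by the pgp_num_actual changes that finished at 02:16:12: PGs in pools 5 and 6 re-evaluate their acting sets, becoming Primary where osd.0 keeps the PG and Stray where the PG now maps to another OSD. A hedged sketch for tallying such transitions when sifting a log like this one — the regex is an assumption fitted to these lines, not a stable Ceph log format:

import re
from collections import Counter

# Regex fitted to the osd.0 lines above; captures the PG id and the state
# it transitions to (Primary or Stray).
PAT = re.compile(r"pg\[(\d+\.[0-9a-f]+)\(.*state<Start>: transitioning to (\w+)")

sample = [
    "osd.0 pg_epoch: 45 pg[5.19( empty ...)] state<Start>: transitioning to Primary",
    "osd.0 pg_epoch: 45 pg[6.7( empty ...)] state<Start>: transitioning to Stray",
]

# Count transitions per target state across the sampled lines.
counts = Counter(m.group(2) for line in sample if (m := PAT.search(line)))
print(counts)  # e.g. Counter({'Primary': 1, 'Stray': 1})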
Nov 29 02:16:12 np0005539550 podman[92447]: 2025-11-29 07:16:12.912753509 +0000 UTC m=+1.544571641 container attach d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:16:12 np0005539550 podman[92447]: 2025-11-29 07:16:12.913803915 +0000 UTC m=+1.545622017 container died d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_napier, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-459607b2a71a50c6e10685ad1a18d045081df0f618671c322d2581a4df11da20-merged.mount: Deactivated successfully.
Nov 29 02:16:12 np0005539550 podman[92447]: 2025-11-29 07:16:12.992232521 +0000 UTC m=+1.624050623 container remove d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:16:12 np0005539550 systemd[1]: libpod-conmon-d93021736fe8ee4ef77e8e83051c40200bd7edd314e625ff600bdbdb5bac353e.scope: Deactivated successfully.
Nov 29 02:16:13 np0005539550 podman[92258]: 2025-11-29 07:16:13.09115045 +0000 UTC m=+3.369099638 container died 949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794 (image=quay.io/ceph/ceph:v18, name=naughty_nash, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:16:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3bebc0dc95da7b0d276cc72fb0899a92d4db627ef27106c08d7f1ebe5627adee-merged.mount: Deactivated successfully.
Nov 29 02:16:13 np0005539550 podman[92463]: 2025-11-29 07:16:13.146764082 +0000 UTC m=+1.674415400 container remove 949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794 (image=quay.io/ceph/ceph:v18, name=naughty_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:16:13 np0005539550 systemd[1]: libpod-conmon-949438c24fae30ec3804adf0d8daea0e77cd90e6d6bf5bcd080e310025007794.scope: Deactivated successfully.
Nov 29 02:16:13 np0005539550 podman[92504]: 2025-11-29 07:16:13.171940405 +0000 UTC m=+0.054313649 container create 3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:16:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.4( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.10( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.b( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.13( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.2( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.1b( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.18( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 45 pg[7.1e( empty local-lis/les=0/0 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 10 peering, 167 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:13 np0005539550 systemd[1]: Started libpod-conmon-3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a.scope.
Nov 29 02:16:13 np0005539550 podman[92504]: 2025-11-29 07:16:13.146509915 +0000 UTC m=+0.028883189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c58affab5ac2bb904a5f66b3023837a01a9f5f0caa4941e03aeb00cabc5bb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c58affab5ac2bb904a5f66b3023837a01a9f5f0caa4941e03aeb00cabc5bb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c58affab5ac2bb904a5f66b3023837a01a9f5f0caa4941e03aeb00cabc5bb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73c58affab5ac2bb904a5f66b3023837a01a9f5f0caa4941e03aeb00cabc5bb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:13 np0005539550 podman[92504]: 2025-11-29 07:16:13.273709297 +0000 UTC m=+0.156082541 container init 3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:13 np0005539550 podman[92504]: 2025-11-29 07:16:13.285702464 +0000 UTC m=+0.168075708 container start 3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:16:13 np0005539550 podman[92504]: 2025-11-29 07:16:13.291491932 +0000 UTC m=+0.173865176 container attach 3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bardeen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 29 02:16:13 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 29 02:16:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 29 02:16:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 29 02:16:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.18( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.e( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.1e( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.10( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.13( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.17( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.b( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.9( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.8( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.a( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.f( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.4( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.6( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.2( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.3( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.6( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[7.1b( empty local-lis/les=45/46 n=0 ec=43/25 lis/c=43/43 les/c/f=44/44/0 sis=45) [0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=41/21 lis/c=41/41 les/c/f=42/42/0 sis=45) [0] r=0 lpr=45 pi=[41,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:16:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:16:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
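Each of the mgr-issued pool commands above appears twice in the mon log: once as dispatch when the command is queued (02:16:13) and once as finished when the map update commits (02:16:14). A minimal Python sketch that pairs the two entries to measure per-command commit latency (the ceph-mon.log filename and same-day timestamps are assumptions):

    import re
    from datetime import datetime

    # Matches both forms: cmd=[...]: dispatch and cmd='[...]': finished.
    EVENT_RE = re.compile(
        r"^(?P<ts>\w+ +\d+ [\d:]+) .*cmd='?(?P<cmd>\[.*\])'?: (?P<phase>dispatch|finished)"
    )

    pending = {}
    with open("ceph-mon.log") as fh:  # assumed export of the journal above
        for line in fh:
            m = EVENT_RE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S")  # year-less syslog stamp
            if m["phase"] == "dispatch":
                pending[m["cmd"]] = ts
            elif m["cmd"] in pending:
                delta = (ts - pending.pop(m["cmd"])).total_seconds()
                print(f"{delta:4.0f}s  {m['cmd'][:60]}")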
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]: {
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]:        "osd_id": 0,
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]:        "type": "bluestore"
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]:    }
Nov 29 02:16:14 np0005539550 competent_bardeen[92522]: }
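The competent_bardeen output above is a per-OSD inventory in the JSON shape that ceph-volume emits (which subcommand ran is not visible in the log line): records keyed by osd_uuid, carrying the cluster fsid, backing device, and object-store type. A minimal Python sketch that flattens such a blob, using the record above as the sample:

    import json

    # Sample copied verbatim from the container output above.
    blob = '''
    {
       "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
           "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
           "device": "/dev/mapper/ceph_vg0-ceph_lv0",
           "osd_id": 0,
           "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
           "type": "bluestore"
       }
    }
    '''

    for osd_uuid, meta in json.loads(blob).items():
        print(f"osd.{meta['osd_id']} ({meta['type']}) on {meta['device']}")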
Nov 29 02:16:14 np0005539550 systemd[1]: libpod-3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a.scope: Deactivated successfully.
Nov 29 02:16:14 np0005539550 systemd[1]: libpod-3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a.scope: Consumed 1.031s CPU time.
Nov 29 02:16:14 np0005539550 podman[92504]: 2025-11-29 07:16:14.327082369 +0000 UTC m=+1.209455613 container died 3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-73c58affab5ac2bb904a5f66b3023837a01a9f5f0caa4941e03aeb00cabc5bb0-merged.mount: Deactivated successfully.
Nov 29 02:16:14 np0005539550 python3[92560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:14 np0005539550 podman[92504]: 2025-11-29 07:16:14.391232029 +0000 UTC m=+1.273605273 container remove 3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:16:14 np0005539550 systemd[1]: libpod-conmon-3428dcb5ebaec446ef4d5a57e67d93f3a6766f1681b1b6cdba60cc48f14c559a.scope: Deactivated successfully.
Nov 29 02:16:14 np0005539550 podman[92580]: 2025-11-29 07:16:14.425262489 +0000 UTC m=+0.048774658 container create 949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6 (image=quay.io/ceph/ceph:v18, name=peaceful_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:16:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:16:14 np0005539550 systemd[1]: Started libpod-conmon-949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6.scope.
Nov 29 02:16:14 np0005539550 podman[92580]: 2025-11-29 07:16:14.403746279 +0000 UTC m=+0.027258478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2226420223b024031c7503b5176f23b9c26d319596ba974bb458c3fc3d9292d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2226420223b024031c7503b5176f23b9c26d319596ba974bb458c3fc3d9292d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:14 np0005539550 podman[92580]: 2025-11-29 07:16:14.518030041 +0000 UTC m=+0.141542240 container init 949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6 (image=quay.io/ceph/ceph:v18, name=peaceful_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:16:14 np0005539550 podman[92580]: 2025-11-29 07:16:14.525184954 +0000 UTC m=+0.148697123 container start 949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6 (image=quay.io/ceph/ceph:v18, name=peaceful_mclaren, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:16:14 np0005539550 podman[92580]: 2025-11-29 07:16:14.529199836 +0000 UTC m=+0.152712005 container attach 949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6 (image=quay.io/ceph/ceph:v18, name=peaceful_mclaren, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 8c83fc55-2b6d-4712-8c7c-f5d3e57c36ae (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:15 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 29 02:16:15 np0005539550 peaceful_mclaren[92597]: 
Nov 29 02:16:15 np0005539550 peaceful_mclaren[92597]: [{"container_id": "ec39c57f87b0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.68%", "created": "2025-11-29T07:13:26.340679Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-29T07:13:26.390223Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:40.935445Z", "memory_usage": 11806965, "ports": [], "service_name": "crash", "started": "2025-11-29T07:13:26.246498Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-b66774a7-56d9-5535-bd8c-681234404870@crash.compute-0", "version": "18.2.7"}, {"container_id": "90ce4adbab91", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.67%", "created": "2025-11-29T07:14:15.038549Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2025-11-29T07:14:15.092499Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T07:15:56.039129Z", "memory_usage": 11733565, "ports": [], "service_name": "crash", "started": "2025-11-29T07:14:14.947616Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-b66774a7-56d9-5535-bd8c-681234404870@crash.compute-1", "version": "18.2.7"}, {"container_id": "805bc584ce25", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.84%", "created": "2025-11-29T07:15:33.156312Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2025-11-29T07:15:34.299497Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-29T07:15:56.617324Z", "memory_usage": 11670650, "ports": [], "service_name": "crash", "started": "2025-11-29T07:15:32.953895Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-b66774a7-56d9-5535-bd8c-681234404870@crash.compute-2", "version": "18.2.7"}, {"container_id": "df06dba8a731", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": 
"0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "37.78%", "created": "2025-11-29T07:12:03.481583Z", "daemon_id": "compute-0.pdhsqi", "daemon_name": "mgr.compute-0.pdhsqi", "daemon_type": "mgr", "events": ["2025-11-29T07:13:29.422679Z daemon:mgr.compute-0.pdhsqi [INFO] \"Reconfigured mgr.compute-0.pdhsqi on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:40.935357Z", "memory_usage": 546937241, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-29T07:12:03.383577Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-b66774a7-56d9-5535-bd8c-681234404870@mgr.compute-0.pdhsqi", "version": "18.2.7"}, {"container_id": "ea9c94546005", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "84.35%", "created": "2025-11-29T07:15:29.720669Z", "daemon_id": "compute-1.fchyan", "daemon_name": "mgr.compute-1.fchyan", "daemon_type": "mgr", "events": ["2025-11-29T07:15:29.871916Z daemon:mgr.compute-1.fchyan [INFO] \"Deployed mgr.compute-1.fchyan on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-29T07:15:56.039329Z", "memory_usage": 514116812, "ports": [8765], "service_name": "mgr", "started": "2025-11-29T07:15:29.620054Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-b66774a7-56d9-5535-bd8c-681234404870@mgr.compute-1.fchyan", "version": "18.2.7"}, {"container_id": "41f46c9921ed", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "61.63%", "created": "2025-11-29T07:15:22.143027Z", "daemon_id": "compute-2.zfrvoq", "daemon_name": "mgr.compute-2.zfrvoq", "daemon_type": "mgr", "events": ["2025-11-29T07:15:28.047511Z daemon:mgr.compute-2.zfrvoq [INFO] \"Deployed mgr.compute-2.zfrvoq on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-29T07:15:56.617230Z", "memory_usage": 513382809, "ports": [8765], "service_name": "mgr", "started": "2025-11-29T07:15:22.047184Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-b66774a7-56d9-5535-bd8c-681234404870@mgr.compute-2.zfrvoq", "version": "18.2.7"}, {"container_id": "7bc856b2ad58", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.45%", "created": "2025-11-29T07:11:57.065423Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-29T07:13:28.596028Z daemon:mon.compute-0 
[INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-29T07:14:40.935262Z", "memory_request": 2147483648, "memory_usage": 31855738, "ports": [], "service_name": "mon", "started": "2025-11-29T07:12:01.103519Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-b66774a7-56d9-5535-bd8c-681234404870@mon.compute-0", "version": "18.2.7"}, {"container_id": "6ae96226e672", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.33%", "created": "2025-11-29T07:15:10.287092Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "events": ["2025-11-29T07:15:14.901347Z daemon
Nov 29 02:16:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 10 peering, 167 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:15 np0005539550 systemd[1]: libpod-949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6.scope: Deactivated successfully.
Nov 29 02:16:15 np0005539550 podman[92580]: 2025-11-29 07:16:15.20599311 +0000 UTC m=+0.829505289 container died 949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6 (image=quay.io/ceph/ceph:v18, name=peaceful_mclaren, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:16:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f2226420223b024031c7503b5176f23b9c26d319596ba974bb458c3fc3d9292d-merged.mount: Deactivated successfully.
Nov 29 02:16:15 np0005539550 podman[92580]: 2025-11-29 07:16:15.257411464 +0000 UTC m=+0.880923633 container remove 949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6 (image=quay.io/ceph/ceph:v18, name=peaceful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:16:15 np0005539550 systemd[1]: libpod-conmon-949291d1af4b7f53c8e08d6b8530eef8d96ee803c9df1539bca1df345dc0e3c6.scope: Deactivated successfully.
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 02:16:15 np0005539550 rsyslogd[1008]: message too long (12805) with configured size 8096, begin of message is: [{"container_id": "ec39c57f87b0", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
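This rsyslogd complaint explains why the orch ps JSON above ends mid-record: the 12805-byte message exceeds rsyslog's configured 8096-byte cap, so the tail is dropped (raising the $MaxMessageSize directive early in /etc/rsyslog.conf is the usual remedy; see the rsyslog error page linked in the message). A minimal Python sketch for flagging such truncated JSON payloads when post-processing the log:

    import json

    def parse_or_flag_truncated(payload: str):
        # Parse a JSON log payload; None means the payload was cut off
        # mid-document, as rsyslogd does above at its size cap.
        try:
            return json.loads(payload)
        except json.JSONDecodeError:
            return None

    # A cut-off fragment like the one above fails; a complete document parses.
    assert parse_or_flag_truncated('[{"daemon_type": "mon", "events": ["...daemon') is None
    assert parse_or_flag_truncated('[{"daemon_type": "mon"}]') is not None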
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:15 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.gstlru on compute-2
Nov 29 02:16:15 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.gstlru on compute-2
Nov 29 02:16:16 np0005539550 python3[92661]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:16 np0005539550 podman[92662]: 2025-11-29 07:16:16.373960711 +0000 UTC m=+0.045177367 container create 0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a (image=quay.io/ceph/ceph:v18, name=gallant_ardinghelli, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:16 np0005539550 systemd[1]: Started libpod-conmon-0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a.scope.
Nov 29 02:16:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efbf1ed9443b5437579a9887726833e06796191f019aa374a629eeec4a85a68c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efbf1ed9443b5437579a9887726833e06796191f019aa374a629eeec4a85a68c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:16 np0005539550 podman[92662]: 2025-11-29 07:16:16.355422087 +0000 UTC m=+0.026638763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:16 np0005539550 podman[92662]: 2025-11-29 07:16:16.455845964 +0000 UTC m=+0.127062630 container init 0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a (image=quay.io/ceph/ceph:v18, name=gallant_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:16 np0005539550 podman[92662]: 2025-11-29 07:16:16.465108711 +0000 UTC m=+0.136325357 container start 0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a (image=quay.io/ceph/ceph:v18, name=gallant_ardinghelli, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:16:16 np0005539550 podman[92662]: 2025-11-29 07:16:16.468893238 +0000 UTC m=+0.140109904 container attach 0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a (image=quay.io/ceph/ceph:v18, name=gallant_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:16:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gstlru", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:16 np0005539550 ceph-mon[74435]: Deploying daemon rgw.rgw.compute-2.gstlru on compute-2
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2088817193' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 29 02:16:17 np0005539550 gallant_ardinghelli[92677]: 
Nov 29 02:16:17 np0005539550 gallant_ardinghelli[92677]: {"fsid":"b66774a7-56d9-5535-bd8c-681234404870","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":16,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":54,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":46,"num_osds":3,"num_up_osds":3,"osd_up_since":1764400563,"num_in_osds":3,"osd_in_since":1764400536,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":167},{"state_name":"peering","count":10}],"num_pgs":177,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84275200,"bytes_avail":22451720192,"bytes_total":22535995392,"inactive_pgs_ratio":0.056497175246477127},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-11-29T07:15:47.193555+0000","services":{"mgr":{"daemons":{"summary":"","compute-2.zfrvoq":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"8c83fc55-2b6d-4712-8c7c-f5d3e57c36ae":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 29 02:16:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 10 peering, 167 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:17 np0005539550 systemd[1]: libpod-0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a.scope: Deactivated successfully.
Nov 29 02:16:17 np0005539550 podman[92662]: 2025-11-29 07:16:17.230072119 +0000 UTC m=+0.901288765 container died 0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a (image=quay.io/ceph/ceph:v18, name=gallant_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:16:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-efbf1ed9443b5437579a9887726833e06796191f019aa374a629eeec4a85a68c-merged.mount: Deactivated successfully.
Nov 29 02:16:17 np0005539550 podman[92662]: 2025-11-29 07:16:17.538940566 +0000 UTC m=+1.210157242 container remove 0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a (image=quay.io/ceph/ceph:v18, name=gallant_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:16:17 np0005539550 systemd[1]: libpod-conmon-0a034377fdcafb6bf54f854e4d0e98a3954c3cef4c5adda1238aad5b3860649a.scope: Deactivated successfully.
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:17 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.wmgqmg on compute-1
Nov 29 02:16:17 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.wmgqmg on compute-1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 29 02:16:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 47 pg[8.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wmgqmg", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:18 np0005539550 python3[92742]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:18 np0005539550 podman[92743]: 2025-11-29 07:16:18.694921371 +0000 UTC m=+0.050198064 container create 2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251 (image=quay.io/ceph/ceph:v18, name=stoic_snyder, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:18 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 29 02:16:18 np0005539550 systemd[1]: Started libpod-conmon-2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251.scope.
Nov 29 02:16:18 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 29 02:16:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 29 02:16:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0967d3a39739f6d95c3657cbb99b007b9926c00086e71efc30ffe9b60304d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e0967d3a39739f6d95c3657cbb99b007b9926c00086e71efc30ffe9b60304d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 02:16:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 29 02:16:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 29 02:16:18 np0005539550 podman[92743]: 2025-11-29 07:16:18.677071165 +0000 UTC m=+0.032347878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:18 np0005539550 podman[92743]: 2025-11-29 07:16:18.7742853 +0000 UTC m=+0.129562003 container init 2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251 (image=quay.io/ceph/ceph:v18, name=stoic_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:16:18 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 48 pg[8.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:18 np0005539550 podman[92743]: 2025-11-29 07:16:18.78366795 +0000 UTC m=+0.138944633 container start 2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251 (image=quay.io/ceph/ceph:v18, name=stoic_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:16:18 np0005539550 podman[92743]: 2025-11-29 07:16:18.787617381 +0000 UTC m=+0.142894104 container attach 2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251 (image=quay.io/ceph/ceph:v18, name=stoic_snyder, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:16:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v142: 178 pgs: 1 creating+peering, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 29 02:16:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698134652' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 29 02:16:19 np0005539550 stoic_snyder[92758]: 
Nov 29 02:16:19 np0005539550 systemd[1]: libpod-2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251.scope: Deactivated successfully.
Nov 29 02:16:19 np0005539550 stoic_snyder[92758]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_
insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502923980","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-1.wmgqmg","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.gstlru","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 29 02:16:19 np0005539550 podman[92743]: 2025-11-29 07:16:19.404958795 +0000 UTC m=+0.760235478 container died 2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251 (image=quay.io/ceph/ceph:v18, name=stoic_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:16:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Nov 29 02:16:19 np0005539550 ceph-mon[74435]: Deploying daemon rgw.rgw.compute-1.wmgqmg on compute-1
Nov 29 02:16:19 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:19 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 29 02:16:19 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 29 02:16:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1e0967d3a39739f6d95c3657cbb99b007b9926c00086e71efc30ffe9b60304d5-merged.mount: Deactivated successfully.
Nov 29 02:16:19 np0005539550 podman[92743]: 2025-11-29 07:16:19.635953641 +0000 UTC m=+0.991230324 container remove 2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251 (image=quay.io/ceph/ceph:v18, name=stoic_snyder, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:16:19 np0005539550 systemd[1]: libpod-conmon-2091a22d9544e92abe8ae178e545fc1a11405386279ce4d3775a5a6e2236e251.scope: Deactivated successfully.
Nov 29 02:16:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 29 02:16:19 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 29 02:16:19 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 29 02:16:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 29 02:16:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 29 02:16:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 49 pg[9.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:16:20 np0005539550 python3[92823]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:20 np0005539550 podman[92824]: 2025-11-29 07:16:20.827812732 +0000 UTC m=+0.042852887 container create 48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c (image=quay.io/ceph/ceph:v18, name=condescending_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:16:20 np0005539550 systemd[1]: Started libpod-conmon-48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c.scope.
Nov 29 02:16:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965c2de28d1f7c8a44fbd5aca46dda63288db8d3f965bc5859f7bf90e14f1c65/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965c2de28d1f7c8a44fbd5aca46dda63288db8d3f965bc5859f7bf90e14f1c65/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:20 np0005539550 podman[92824]: 2025-11-29 07:16:20.810192941 +0000 UTC m=+0.025233116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:20 np0005539550 podman[92824]: 2025-11-29 07:16:20.912043415 +0000 UTC m=+0.127083590 container init 48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c (image=quay.io/ceph/ceph:v18, name=condescending_chaplygin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:20 np0005539550 podman[92824]: 2025-11-29 07:16:20.922485132 +0000 UTC m=+0.137525297 container start 48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c (image=quay.io/ceph/ceph:v18, name=condescending_chaplygin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:16:20 np0005539550 podman[92824]: 2025-11-29 07:16:20.92670595 +0000 UTC m=+0.141746105 container attach 48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c (image=quay.io/ceph/ceph:v18, name=condescending_chaplygin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 29 02:16:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v144: 179 pgs: 1 unknown, 1 creating+peering, 177 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:16:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 29 02:16:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513307987' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 29 02:16:21 np0005539550 condescending_chaplygin[92840]: mimic
Nov 29 02:16:21 np0005539550 systemd[1]: libpod-48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c.scope: Deactivated successfully.
Nov 29 02:16:21 np0005539550 podman[92824]: 2025-11-29 07:16:21.545359687 +0000 UTC m=+0.760399862 container died 48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c (image=quay.io/ceph/ceph:v18, name=condescending_chaplygin, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-965c2de28d1f7c8a44fbd5aca46dda63288db8d3f965bc5859f7bf90e14f1c65-merged.mount: Deactivated successfully.
Nov 29 02:16:21 np0005539550 podman[92824]: 2025-11-29 07:16:21.602747394 +0000 UTC m=+0.817787549 container remove 48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c (image=quay.io/ceph/ceph:v18, name=condescending_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:16:21 np0005539550 systemd[1]: libpod-conmon-48c2248d6e0eb6d37754a940242b7f4f4ada045bf2bf80db933dc1cb34e10d2c.scope: Deactivated successfully.
Nov 29 02:16:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 29 02:16:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 29 02:16:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 50 pg[9.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [0] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:22 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.lkiqxb on compute-0
Nov 29 02:16:22 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.lkiqxb on compute-0
Nov 29 02:16:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:22 np0005539550 python3[92905]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:16:22 np0005539550 podman[92946]: 2025-11-29 07:16:22.744434444 +0000 UTC m=+0.045189477 container create 9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7 (image=quay.io/ceph/ceph:v18, name=nice_bose, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:22 np0005539550 systemd[1]: Started libpod-conmon-9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7.scope.
Nov 29 02:16:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a375b0ac1b543ccd850d6baefeb6de69313ee89939064cb730ec0b4688736a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a375b0ac1b543ccd850d6baefeb6de69313ee89939064cb730ec0b4688736a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:22 np0005539550 podman[92946]: 2025-11-29 07:16:22.724569416 +0000 UTC m=+0.025324459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:16:22 np0005539550 podman[92946]: 2025-11-29 07:16:22.823019453 +0000 UTC m=+0.123774486 container init 9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7 (image=quay.io/ceph/ceph:v18, name=nice_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:22 np0005539550 podman[92946]: 2025-11-29 07:16:22.830030142 +0000 UTC m=+0.130785165 container start 9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7 (image=quay.io/ceph/ceph:v18, name=nice_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:16:22 np0005539550 podman[92946]: 2025-11-29 07:16:22.834008524 +0000 UTC m=+0.134763577 container attach 9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7 (image=quay.io/ceph/ceph:v18, name=nice_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:16:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v146: 179 pgs: 179 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 751 B/s wr, 2 op/s
Nov 29 02:16:23 np0005539550 podman[93063]: 2025-11-29 07:16:23.214386499 +0000 UTC m=+0.047392603 container create 7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_johnson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 29 02:16:23 np0005539550 systemd[1]: Started libpod-conmon-7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893.scope.
Nov 29 02:16:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:23 np0005539550 podman[93063]: 2025-11-29 07:16:23.192208722 +0000 UTC m=+0.025214836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:23 np0005539550 podman[93063]: 2025-11-29 07:16:23.293618824 +0000 UTC m=+0.126624958 container init 7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:16:23 np0005539550 podman[93063]: 2025-11-29 07:16:23.299513524 +0000 UTC m=+0.132519628 container start 7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:16:23 np0005539550 podman[93063]: 2025-11-29 07:16:23.303618929 +0000 UTC m=+0.136625053 container attach 7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:23 np0005539550 gallant_johnson[93099]: 167 167
Nov 29 02:16:23 np0005539550 systemd[1]: libpod-7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893.scope: Deactivated successfully.
Nov 29 02:16:23 np0005539550 podman[93063]: 2025-11-29 07:16:23.30481494 +0000 UTC m=+0.137821084 container died 7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_johnson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0fac3b55eef0a4b802ff8c8bf43d6ad7473e8671674ca5ae7d42b37e25a4c28a-merged.mount: Deactivated successfully.
Nov 29 02:16:23 np0005539550 podman[93063]: 2025-11-29 07:16:23.34511074 +0000 UTC m=+0.178116854 container remove 7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:16:23 np0005539550 systemd[1]: libpod-conmon-7dd74b5d15b5a04435d7e8054d9a357f3f52bec17c0e59718b3f95350917b893.scope: Deactivated successfully.
Nov 29 02:16:23 np0005539550 systemd[1]: Reloading.
Nov 29 02:16:23 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:23 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 29 02:16:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139918994' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 29 02:16:23 np0005539550 nice_bose[92994]: 
Nov 29 02:16:23 np0005539550 nice_bose[92994]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":9}}
Nov 29 02:16:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:23 np0005539550 podman[93156]: 2025-11-29 07:16:23.604567614 +0000 UTC m=+0.026891949 container died 9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7 (image=quay.io/ceph/ceph:v18, name=nice_bose, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:16:23 np0005539550 systemd[1]: libpod-9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7.scope: Deactivated successfully.
Nov 29 02:16:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2a375b0ac1b543ccd850d6baefeb6de69313ee89939064cb730ec0b4688736a4-merged.mount: Deactivated successfully.
Nov 29 02:16:23 np0005539550 systemd[1]: Reloading.
Nov 29 02:16:23 np0005539550 podman[93156]: 2025-11-29 07:16:23.729938169 +0000 UTC m=+0.152262484 container remove 9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7 (image=quay.io/ceph/ceph:v18, name=nice_bose, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:16:23 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:23 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:23 np0005539550 systemd[1]: libpod-conmon-9aa12cb70437908a6365cb2bb78b5b782fd98e41cbd272ac146fc89a9fbfc0e7.scope: Deactivated successfully.
Nov 29 02:16:23 np0005539550 systemd[1]: Starting Ceph rgw.rgw.compute-0.lkiqxb for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lkiqxb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: Deploying daemon rgw.rgw.compute-0.lkiqxb on compute-0
Nov 29 02:16:24 np0005539550 podman[93259]: 2025-11-29 07:16:24.211962263 +0000 UTC m=+0.043326459 container create 469954a41c521a96f1010db0380a4275e25754a3c286da981734094d01232c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-0-lkiqxb, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:16:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93ac3a022503ff801459c09a62378aa7dd65c8c783ea649edee6661e8ca0795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93ac3a022503ff801459c09a62378aa7dd65c8c783ea649edee6661e8ca0795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93ac3a022503ff801459c09a62378aa7dd65c8c783ea649edee6661e8ca0795/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c93ac3a022503ff801459c09a62378aa7dd65c8c783ea649edee6661e8ca0795/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.lkiqxb supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:24 np0005539550 podman[93259]: 2025-11-29 07:16:24.272572903 +0000 UTC m=+0.103937119 container init 469954a41c521a96f1010db0380a4275e25754a3c286da981734094d01232c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-0-lkiqxb, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:16:24 np0005539550 podman[93259]: 2025-11-29 07:16:24.278253258 +0000 UTC m=+0.109617454 container start 469954a41c521a96f1010db0380a4275e25754a3c286da981734094d01232c59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-0-lkiqxb, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:16:24 np0005539550 bash[93259]: 469954a41c521a96f1010db0380a4275e25754a3c286da981734094d01232c59
Nov 29 02:16:24 np0005539550 podman[93259]: 2025-11-29 07:16:24.194115507 +0000 UTC m=+0.025479733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:24 np0005539550 systemd[1]: Started Ceph rgw.rgw.compute-0.lkiqxb for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:16:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:16:24 np0005539550 radosgw[93278]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:24 np0005539550 radosgw[93278]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 29 02:16:24 np0005539550 radosgw[93278]: framework: beast
Nov 29 02:16:24 np0005539550 radosgw[93278]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 29 02:16:24 np0005539550 radosgw[93278]: init_numa not setting numa affinity
Nov 29 02:16:24 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 2b4b5e9d-2b56-4b50-bb8a-c4a5dc953e21 (Global Recovery Event) in 5 seconds
Nov 29 02:16:24 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 29 02:16:24 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v148: 180 pgs: 1 unknown, 179 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 1 op/s
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 8c83fc55-2b6d-4712-8c7c-f5d3e57c36ae (Updating rgw.rgw deployment (+3 -> 3))
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 8c83fc55-2b6d-4712-8c7c-f5d3e57c36ae (Updating rgw.rgw deployment (+3 -> 3)) in 11 seconds
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 0a0f84a7-fdde-4475-9fd1-fefe339ec183 (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
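For readability: in the auth get-or-create payload above, the "caps" list alternates daemon type and capability string. A minimal Python sketch (illustrative only, not part of the log) that pairs them up:

    # The "caps" list alternates daemon type and capability string;
    # pairing them shows the grant requested for the new MDS key.
    caps = ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]
    caps_map = dict(zip(caps[0::2], caps[1::2]))
    print(caps_map)
    # {'mon': 'profile mds', 'osd': 'allow rw tag cephfs *=*', 'mds': 'allow'}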
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.mmoati on compute-2
Nov 29 02:16:25 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.mmoati on compute-2
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
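The dispatch/finished pairs above are the monitor's audit trail for one "osd pool application enable" command from each RGW daemon. A minimal sketch of issuing the same mon command through the rados Python binding; the conffile path and client credentials are assumptions, not taken from this log:

    import json
    import rados

    # Connect with the local conf/keyring (paths are assumptions).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({
        "prefix": "osd pool application enable",
        "pool": "default.rgw.control",
        "app": "rgw",
    })
    # mon_command ships the JSON to the monitor, matching the
    # handle_command/dispatch/finished entries above; ret == 0 on success.
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)
    cluster.shutdown()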
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.102:0/1509958396' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.101:0/4041558685' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.mmoati", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/4210256520' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 29 02:16:26 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 29 02:16:26 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 29 02:16:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v150: 180 pgs: 1 creating+peering, 179 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 1 op/s
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e3 new map
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e3 print_map
    e3
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:15:46.469688+0000
    modified  2025-11-29T07:15:46.469738+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    Standby daemons:
    [mds.cephfs.compute-2.mmoati{-1:24154} state up:standby seq 1 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: Deploying daemon mds.cephfs.compute-2.mmoati on compute-2
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] up:boot
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] as mds.0
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.mmoati assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.mmoati"} v 0) v1
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mmoati"}]: dispatch
Nov 29 02:16:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e3 all = 0
Nov 29 02:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 53 pg[11.0( empty local-lis/les=0/0 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [0] r=0 lpr=53 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e4 new map
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e4 print_map
    e4
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  4
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:15:46.469688+0000
    modified  2025-11-29T07:16:27.688749+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24154}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    [mds.cephfs.compute-2.mmoati{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
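The print_map record above is the same data "ceph fs dump" reports; a short sketch pulling it as JSON (assumes the ceph CLI and an admin keyring on this host; the JSON key layout follows recent Ceph releases and should be treated as an assumption):

    import json
    import subprocess

    # Fetch the current fsmap, as shown in print_map, in JSON form.
    out = subprocess.check_output(["ceph", "fs", "dump", "--format", "json"])
    dump = json.loads(out)
    for fs in dump["filesystems"]:
        m = fs["mdsmap"]
        print(m["fs_name"], "epoch", m["epoch"], "max_mds", m["max_mds"], "up", m["up"])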
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mmoati=up:creating}
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.mmoati is now active in filesystem cephfs as rank 0
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:28 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.qcwnhf on compute-0
Nov 29 02:16:28 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.qcwnhf on compute-0
Nov 29 02:16:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 29 02:16:28 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.1e deep-scrub starts
Nov 29 02:16:28 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.1e deep-scrub ok
Nov 29 02:16:28 np0005539550 podman[93493]: 2025-11-29 07:16:28.998974113 +0000 UTC m=+0.043092113 container create a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 29 02:16:29 np0005539550 systemd[1]: Started libpod-conmon-a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2.scope.
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:29 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 54 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=0/0 les/c/f=0/0/0 sis=53) [0] r=0 lpr=53 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:16:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:29 np0005539550 podman[93493]: 2025-11-29 07:16:28.979594378 +0000 UTC m=+0.023712398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:29 np0005539550 podman[93493]: 2025-11-29 07:16:29.090211916 +0000 UTC m=+0.134329926 container init a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:16:29 np0005539550 podman[93493]: 2025-11-29 07:16:29.099021621 +0000 UTC m=+0.143139611 container start a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:16:29 np0005539550 podman[93493]: 2025-11-29 07:16:29.104301336 +0000 UTC m=+0.148419326 container attach a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:29 np0005539550 relaxed_pasteur[93510]: 167 167
Nov 29 02:16:29 np0005539550 systemd[1]: libpod-a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2.scope: Deactivated successfully.
Nov 29 02:16:29 np0005539550 podman[93493]: 2025-11-29 07:16:29.107820996 +0000 UTC m=+0.151938996 container died a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:16:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dffbde23c5a3deca32c5395ced86c82dfc9808d30bc57b15706fe9e8631b6bd7-merged.mount: Deactivated successfully.
Nov 29 02:16:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v153: 181 pgs: 2 creating+peering, 179 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 508 B/s wr, 5 op/s
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: daemon mds.cephfs.compute-2.mmoati assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: daemon mds.cephfs.compute-2.mmoati is now active in filesystem cephfs as rank 0
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qcwnhf", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: Deploying daemon mds.cephfs.compute-0.qcwnhf on compute-0
Nov 29 02:16:29 np0005539550 podman[93493]: 2025-11-29 07:16:29.312666403 +0000 UTC m=+0.356784403 container remove a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
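The create/start/died/remove sequence above (all within roughly 300 ms, container name relaxed_pasteur) is cephadm running a one-shot command inside the ceph image; the container printed "167 167", the ceph uid:gid. A sketch of that pattern with podman; the probed path and stat command are assumptions, only the image digest and the "167 167" output come from this log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # One-shot helper container: run, print, exit, auto-remove (--rm).
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())  # e.g. "167 167", as logged above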
Nov 29 02:16:29 np0005539550 systemd[1]: libpod-conmon-a48a263fd562066a531de53c631fb247401dd97539a0d8e4275a8fc447d4b1b2.scope: Deactivated successfully.
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:29 np0005539550 systemd[1]: Reloading.
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e5 new map
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e5 print_map
    e5
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:15:46.469688+0000
    modified  2025-11-29T07:16:29.351992+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24154}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    [mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] up:active
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mmoati=up:active}
Nov 29 02:16:29 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 14 completed events
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:16:29 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:29 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Nov 29 02:16:29 np0005539550 systemd[1]: Reloading.
Nov 29 02:16:29 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:16:29 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 29 02:16:29 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:16:29 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 29 02:16:30 np0005539550 systemd[1]: Starting Ceph mds.cephfs.compute-0.qcwnhf for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 29 02:16:30 np0005539550 podman[93656]: 2025-11-29 07:16:30.231399633 +0000 UTC m=+0.053228552 container create 0c9b4cd57cb8759c3a186c0946f95a313c8477b72fbe80a2b9614fdb9f3f1b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-0-qcwnhf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:16:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab3694f25980531ad7e4f66011bcd1b9f547f46ab5f1f35cab359d5240a96e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab3694f25980531ad7e4f66011bcd1b9f547f46ab5f1f35cab359d5240a96e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab3694f25980531ad7e4f66011bcd1b9f547f46ab5f1f35cab359d5240a96e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ab3694f25980531ad7e4f66011bcd1b9f547f46ab5f1f35cab359d5240a96e/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.qcwnhf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:16:30 np0005539550 podman[93656]: 2025-11-29 07:16:30.202010331 +0000 UTC m=+0.023839270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:16:30 np0005539550 podman[93656]: 2025-11-29 07:16:30.319462854 +0000 UTC m=+0.141291793 container init 0c9b4cd57cb8759c3a186c0946f95a313c8477b72fbe80a2b9614fdb9f3f1b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-0-qcwnhf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:16:30 np0005539550 podman[93656]: 2025-11-29 07:16:30.325292953 +0000 UTC m=+0.147121872 container start 0c9b4cd57cb8759c3a186c0946f95a313c8477b72fbe80a2b9614fdb9f3f1b9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-0-qcwnhf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:16:30 np0005539550 bash[93656]: 0c9b4cd57cb8759c3a186c0946f95a313c8477b72fbe80a2b9614fdb9f3f1b9a
Nov 29 02:16:30 np0005539550 systemd[1]: Started Ceph mds.cephfs.compute-0.qcwnhf for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:16:30 np0005539550 ceph-mds[93677]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:30 np0005539550 ceph-mds[93677]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 29 02:16:30 np0005539550 ceph-mds[93677]: main not setting numa affinity
Nov 29 02:16:30 np0005539550 ceph-mds[93677]: pidfile_write: ignore empty --pid-file
Nov 29 02:16:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mds-cephfs-compute-0-qcwnhf[93673]: starting mds.cephfs.compute-0.qcwnhf at 
Nov 29 02:16:30 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Updating MDS map to version 5 from mon.0
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
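pg_autoscale_bias=4, set above on default.rgw.meta, asks the PG autoscaler to size the pool as if it held roughly four times its capacity share (RGW metadata pools are small but hot). A rough sketch of the sizing rule; the formula is simplified and the numbers are illustrative, only bias=4 and the 3-OSD count come from this log:

    # Simplified shape of the autoscaler's per-pool PG target:
    #   pg_target ~= bias * capacity_ratio * mon_target_pg_per_osd * num_osds
    bias = 4                      # from "osd pool set ... pg_autoscale_bias 4" above
    capacity_ratio = 0.01         # assumed share of cluster capacity
    mon_target_pg_per_osd = 100   # Ceph default for mon_target_pg_per_osd
    num_osds = 3                  # "3 total, 3 up, 3 in" per the osdmap lines
    print(bias * capacity_ratio * mon_target_pg_per_osd * num_osds)  # -> 12.0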
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.102:0/2673003238' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.101:0/4122975246' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 29 02:16:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e6 new map
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e6 print_map
    e6
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:15:46.469688+0000
    modified  2025-11-29T07:16:29.351992+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24154}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    [mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
    Standby daemons:
    [mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:31 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Updating MDS map to version 6 from mon.0
Nov 29 02:16:31 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Monitors have assigned me to become a standby.
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v155: 181 pgs: 1 creating+peering, 180 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 2.3 KiB/s wr, 8 op/s
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] up:boot
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mmoati=up:active} 1 up:standby
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.qcwnhf"} v 0) v1
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qcwnhf"}]: dispatch
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e6 all = 0
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e7 new map
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e7 print_map
    e7
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:15:46.469688+0000
    modified  2025-11-29T07:16:29.351992+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24154}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    [mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
    Standby daemons:
    [mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mmoati=up:active} 1 up:standby
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: from='client.? 192.168.122.100:0/4213096877' entity='client.rgw.rgw.compute-0.lkiqxb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-2.gstlru' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: from='client.? ' entity='client.rgw.rgw.compute-1.wmgqmg' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:16:32 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.ldsugj on compute-1
Nov 29 02:16:32 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.ldsugj on compute-1
Nov 29 02:16:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:32 np0005539550 radosgw[93278]: LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:16:32 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-rgw-rgw-compute-0-lkiqxb[93274]: 2025-11-29T07:16:32.926+0000 7fdc8db57940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 29 02:16:32 np0005539550 radosgw[93278]: framework: beast
Nov 29 02:16:32 np0005539550 radosgw[93278]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 29 02:16:32 np0005539550 radosgw[93278]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 29 02:16:32 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 02:16:32 np0005539550 radosgw[93278]: starting handler: beast
Nov 29 02:16:32 np0005539550 radosgw[93278]: set uid:gid to 167:167 (ceph:ceph)
Nov 29 02:16:33 np0005539550 radosgw[93278]: mgrc service_daemon_register rgw.14367 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.lkiqxb,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=dee0d062-b680-4e01-91de-db2466760c83,zone_name=default,zonegroup_id=135cb529-a24d-489d-8cfe-9e045cfed63b,zonegroup_name=default}
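The service_daemon_register metadata above lands in the mgr service map and can be read back with "ceph service dump". A sketch (assumes the ceph CLI plus an admin keyring; the JSON layout, including the "summary" entry skipped below, follows recent releases and is an assumption):

    import json
    import subprocess

    svc = json.loads(subprocess.check_output(
        ["ceph", "service", "dump", "--format", "json"]))
    daemons = svc.get("services", {}).get("rgw", {}).get("daemons", {})
    for gid, d in daemons.items():
        if not isinstance(d, dict):
            continue  # skip the "summary" string entry
        md = d.get("metadata", {})
        print(gid, md.get("hostname"), md.get("zone_name"), md.get("zonegroup_name"))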
Nov 29 02:16:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v156: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 8.0 KiB/s wr, 46 op/s
Nov 29 02:16:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 29 02:16:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ldsugj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 29 02:16:33 np0005539550 ceph-mon[74435]: Deploying daemon mds.cephfs.compute-1.ldsugj on compute-1
Nov 29 02:16:33 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.19 deep-scrub starts
Nov 29 02:16:33 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 2.19 deep-scrub ok
Nov 29 02:16:34 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 76ddf282-f24b-4a5b-b0aa-0deef0d5c9c8 (Global Recovery Event) in 5 seconds
Nov 29 02:16:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v157: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 7.2 KiB/s rd, 6.3 KiB/s wr, 36 op/s
Nov 29 02:16:36 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 29 02:16:36 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 29 02:16:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v158: 181 pgs: 181 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 4.9 KiB/s rd, 5.6 KiB/s wr, 30 op/s
Nov 29 02:16:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:37 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 29 02:16:38 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 29 02:16:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v159: 181 pgs: 1 active+clean+scrubbing, 180 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 4.3 KiB/s wr, 198 op/s
Nov 29 02:16:39 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 15 completed events
Nov 29 02:16:39 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 29 02:16:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:16:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:16:40 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 29 02:16:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v160: 181 pgs: 1 active+clean+scrubbing, 180 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 102 KiB/s rd, 4.0 KiB/s wr, 184 op/s
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e8 new map
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e8 print_map
    e8
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
    legacy client fscid: 1
    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  5
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2025-11-29T07:15:46.469688+0000
    modified  2025-11-29T07:16:29.351992+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
    max_mds  1
    in  0
    up  {0=24154}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  1
    [mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]
    Standby daemons:
    [mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
    [mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] up:boot
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mmoati=up:active} 2 up:standby
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.ldsugj"} v 0) v1
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.ldsugj"}]: dispatch
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e8 all = 0
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:16:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v161: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 120 KiB/s rd, 2.8 KiB/s wr, 217 op/s
Nov 29 02:16:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:43 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 0a0f84a7-fdde-4475-9fd1-fefe339ec183 (Updating mds.cephfs deployment (+3 -> 3))
Nov 29 02:16:43 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 0a0f84a7-fdde-4475-9fd1-fefe339ec183 (Updating mds.cephfs deployment (+3 -> 3)) in 18 seconds
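
The progress-module entries ("Writing back N completed events", "Completed event ... in 18 seconds") come from the mgr tracking each cephadm deployment as an event. A small sketch to list them, assuming the ceph CLI and that `ceph progress json` returns the module's event dump (key names assumed from recent releases):

    import json, subprocess

    raw = subprocess.run(["ceph", "progress", "json"],
                         check=True, capture_output=True, text=True).stdout
    doc = json.loads(raw)
    for ev in doc.get("completed", []):
        # e.g. "Updating mds.cephfs deployment (+3 -> 3)"
        print(ev.get("id"), ev.get("message"))
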
Nov 29 02:16:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 29 02:16:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:44 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 29 02:16:44 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.b scrub ok
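
Each "scrub starts"/"scrub ok" pair above is one placement group being verified by osd.0. The same scrubs can also be requested by hand; a minimal sketch, assuming the ceph CLI:

    import subprocess

    subprocess.run(["ceph", "pg", "scrub", "6.b"], check=True)       # metadata-level scrub
    subprocess.run(["ceph", "pg", "deep-scrub", "6.f"], check=True)  # also reads all object data
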
Nov 29 02:16:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v162: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 117 KiB/s rd, 0 B/s wr, 197 op/s
Nov 29 02:16:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 29 02:16:45 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 16 completed events
Nov 29 02:16:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:16:45 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 29 02:16:45 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 29 02:16:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v163: 181 pgs: 181 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 117 KiB/s rd, 0 B/s wr, 197 op/s
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:47 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e9 new map
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e9 print_map
  e9
  enable_multiple, ever_enabled_multiple: 1,1
  default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
  legacy client fscid: 1

  Filesystem 'cephfs' (1)
  fs_name  cephfs
  epoch  5
  flags  12 joinable allow_snaps allow_multimds_snaps
  created  2025-11-29T07:15:46.469688+0000
  modified  2025-11-29T07:16:29.351992+0000
  tableserver  0
  root  0
  session_timeout  60
  session_autoclose  300
  max_file_size  1099511627776
  max_xattr_size  65536
  required_client_features  {}
  last_failure  0
  last_failure_osd_epoch  0
  compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
  max_mds  1
  in  0
  up  {0=24154}
  failed
  damaged
  stopped
  data_pools  [7]
  metadata_pool  6
  inline_data  disabled
  balancer
  bal_rank_mask  -1
  standby_count_wanted  1
  [mds.cephfs.compute-2.mmoati{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] compat {c=[1],r=[1],i=[7ff]}]

  Standby daemons:

  [mds.cephfs.compute-0.qcwnhf{-1:14382} state up:standby seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]
  [mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:47 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Updating MDS map to version 9 from mon.0
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:47 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 4e65c482-0997-4925-a038-a52de37cc123 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] up:standby
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Dropping low affinity active daemon mds.cephfs.compute-2.mmoati in favor of higher affinity standby.
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e9  replacing 24154 [v2:192.168.122.102:6804/1598903637,v1:192.168.122.102:6805/1598903637] mds.0.4 up:active with 14382/cephfs.compute-0.qcwnhf [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860]
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.compute-2.mmoati as rank 0 with standby daemon mds.cephfs.compute-0.qcwnhf
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e9 fail_mds_gid 24154 mds.cephfs.compute-2.mmoati role 0
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.mmoati=up:active} 2 up:standby
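
The "Dropping low affinity active daemon" swap above is driven by mds_join_fs: cephadm pinned the standbys to 'cephfs' (the `config set ... mds_join_fs` command at 02:16:43), and the mon then replaces an active daemon that lacks the pin with a pinned standby, briefly degrading the filesystem during the handoff. A sketch of the pinning itself, assuming the ceph CLI:

    import subprocess

    # Pin one MDS daemon to the 'cephfs' filesystem; the mon treats matching
    # daemons as higher-affinity replacements for rank 0.
    subprocess.run(["ceph", "config", "set",
                    "mds.cephfs.compute-0.qcwnhf", "mds_join_fs", "cephfs"],
                   check=True)
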
Nov 29 02:16:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Nov 29 02:16:47 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e10 new map
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e10 print_map
  e10
  enable_multiple, ever_enabled_multiple: 1,1
  default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
  legacy client fscid: 1

  Filesystem 'cephfs' (1)
  fs_name  cephfs
  epoch  10
  flags  12 joinable allow_snaps allow_multimds_snaps
  created  2025-11-29T07:15:46.469688+0000
  modified  2025-11-29T07:16:47.848009+0000
  tableserver  0
  root  0
  session_timeout  60
  session_autoclose  300
  max_file_size  1099511627776
  max_xattr_size  65536
  required_client_features  {}
  last_failure  0
  last_failure_osd_epoch  56
  compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
  max_mds  1
  in  0
  up  {0=14382}
  failed
  damaged
  stopped
  data_pools  [7]
  metadata_pool  6
  inline_data  disabled
  balancer
  bal_rank_mask  -1
  standby_count_wanted  1
  [mds.cephfs.compute-0.qcwnhf{0:14382} state up:replay seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]

  Standby daemons:

  [mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Updating MDS map to version 10 from mon.0
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map i am now mds.0.10
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map state change up:standby --> up:replay
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.10 replay_start
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.10  waiting for osdmap 56 (which blocklists prior instance)
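
Before replaying the journal, the incoming rank waits for osdmap e56, the epoch in which the previous rank-0 instance was blocklisted, so a stale daemon can no longer write to the metadata pool. To see those entries, a minimal sketch assuming the ceph CLI:

    import subprocess

    # Lists addr/expiry pairs; the prior mds.0 instance should appear here.
    subprocess.run(["ceph", "osd", "blocklist", "ls"], check=True)
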
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 29 02:16:48 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.qcwnhf=up:replay} 1 up:standby
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.cache creating system inode with ino:0x100
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.cache creating system inode with ino:0x1
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.10 Finished replaying journal
Nov 29 02:16:48 np0005539550 ceph-mds[93677]: mds.0.10 making mds journal writeable
Nov 29 02:16:48 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.uyfjya on compute-0
Nov 29 02:16:48 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.uyfjya on compute-0
Nov 29 02:16:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v165: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 107 op/s
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: Dropping low affinity active daemon mds.cephfs.compute-2.mmoati in favor of higher affinity standby.
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: Replacing daemon mds.cephfs.compute-2.mmoati as rank 0 with standby daemon mds.cephfs.compute-0.qcwnhf
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e11 new map
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e11 print_map
  e11
  enable_multiple, ever_enabled_multiple: 1,1
  default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
  legacy client fscid: 1

  Filesystem 'cephfs' (1)
  fs_name  cephfs
  epoch  11
  flags  12 joinable allow_snaps allow_multimds_snaps
  created  2025-11-29T07:15:46.469688+0000
  modified  2025-11-29T07:16:49.781453+0000
  tableserver  0
  root  0
  session_timeout  60
  session_autoclose  300
  max_file_size  1099511627776
  max_xattr_size  65536
  required_client_features  {}
  last_failure  0
  last_failure_osd_epoch  56
  compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
  max_mds  1
  in  0
  up  {0=14382}
  failed
  damaged
  stopped
  data_pools  [7]
  metadata_pool  6
  inline_data  disabled
  balancer
  bal_rank_mask  -1
  standby_count_wanted  1
  [mds.cephfs.compute-0.qcwnhf{0:14382} state up:reconnect seq 6 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]

  Standby daemons:

  [mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
  [mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:49 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Updating MDS map to version 11 from mon.0
Nov 29 02:16:49 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map i am now mds.0.10
Nov 29 02:16:49 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map state change up:replay --> up:reconnect
Nov 29 02:16:49 np0005539550 ceph-mds[93677]: mds.0.10 reconnect_start
Nov 29 02:16:49 np0005539550 ceph-mds[93677]: mds.0.10 reopen_log
Nov 29 02:16:49 np0005539550 ceph-mds[93677]: mds.0.10 reconnect_done
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] up:standby
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] up:reconnect
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] up:boot
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.qcwnhf=up:reconnect} 2 up:standby
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.mmoati"} v 0) v1
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.mmoati"}]: dispatch
Nov 29 02:16:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e11 all = 0
Nov 29 02:16:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v166: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 107 op/s
Nov 29 02:16:51 np0005539550 ceph-mon[74435]: Deploying daemon haproxy.rgw.default.compute-0.uyfjya on compute-0
Nov 29 02:16:51 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Nov 29 02:16:51 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Nov 29 02:16:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e12 new map
Nov 29 02:16:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e12 print_map
  e12
  enable_multiple, ever_enabled_multiple: 1,1
  default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
  legacy client fscid: 1

  Filesystem 'cephfs' (1)
  fs_name  cephfs
  epoch  12
  flags  12 joinable allow_snaps allow_multimds_snaps
  created  2025-11-29T07:15:46.469688+0000
  modified  2025-11-29T07:16:50.823341+0000
  tableserver  0
  root  0
  session_timeout  60
  session_autoclose  300
  max_file_size  1099511627776
  max_xattr_size  65536
  required_client_features  {}
  last_failure  0
  last_failure_osd_epoch  56
  compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
  max_mds  1
  in  0
  up  {0=14382}
  failed
  damaged
  stopped
  data_pools  [7]
  metadata_pool  6
  inline_data  disabled
  balancer
  bal_rank_mask  -1
  standby_count_wanted  1
  [mds.cephfs.compute-0.qcwnhf{0:14382} state up:rejoin seq 7 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]

  Standby daemons:

  [mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
  [mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:52 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Updating MDS map to version 12 from mon.0
Nov 29 02:16:52 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map i am now mds.0.10
Nov 29 02:16:52 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map state change up:reconnect --> up:rejoin
Nov 29 02:16:52 np0005539550 ceph-mds[93677]: mds.0.10 rejoin_start
Nov 29 02:16:52 np0005539550 ceph-mds[93677]: mds.0.10 rejoin_joint_start
Nov 29 02:16:52 np0005539550 ceph-mds[93677]: mds.0.10 rejoin_done
Nov 29 02:16:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] up:rejoin
Nov 29 02:16:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.qcwnhf=up:rejoin} 2 up:standby
Nov 29 02:16:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.qcwnhf is now active in filesystem cephfs as rank 0
Nov 29 02:16:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v167: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 44 op/s
Nov 29 02:16:53 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Nov 29 02:16:53 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:16:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 new map
Nov 29 02:16:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 print_map
  e13
  enable_multiple, ever_enabled_multiple: 1,1
  default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
  legacy client fscid: 1

  Filesystem 'cephfs' (1)
  fs_name  cephfs
  epoch  13
  flags  12 joinable allow_snaps allow_multimds_snaps
  created  2025-11-29T07:15:46.469688+0000
  modified  2025-11-29T07:16:53.325309+0000
  tableserver  0
  root  0
  session_timeout  60
  session_autoclose  300
  max_file_size  1099511627776
  max_xattr_size  65536
  required_client_features  {}
  last_failure  0
  last_failure_osd_epoch  56
  compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
  max_mds  1
  in  0
  up  {0=14382}
  failed
  damaged
  stopped
  data_pools  [7]
  metadata_pool  6
  inline_data  disabled
  balancer
  bal_rank_mask  -1
  standby_count_wanted  1
  [mds.cephfs.compute-0.qcwnhf{0:14382} state up:active seq 8 join_fscid=1 addr [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] compat {c=[1],r=[1],i=[7ff]}]

  Standby daemons:

  [mds.cephfs.compute-1.ldsugj{-1:24146} state up:standby seq 3 join_fscid=1 addr [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] compat {c=[1],r=[1],i=[7ff]}]
  [mds.cephfs.compute-2.mmoati{-1:24160} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] compat {c=[1],r=[1],i=[7ff]}]
Nov 29 02:16:53 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf Updating MDS map to version 13 from mon.0
Nov 29 02:16:53 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map i am now mds.0.10
Nov 29 02:16:53 np0005539550 ceph-mds[93677]: mds.0.10 handle_mds_map state change up:rejoin --> up:active
Nov 29 02:16:53 np0005539550 ceph-mds[93677]: mds.0.10 recovery_done -- successful recovery!
Nov 29 02:16:53 np0005539550 ceph-mds[93677]: mds.0.10 active_start
Nov 29 02:16:53 np0005539550 ceph-mds[93677]: mds.0.10 cluster recovered.
Nov 29 02:16:53 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/4251203860,v1:192.168.122.100:6807/4251203860] up:active
Nov 29 02:16:53 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
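
Epochs e10 through e13 above walk the full MDS takeover state machine: up:standby -> up:replay -> up:reconnect -> up:rejoin -> up:active. A hedged sketch that watches the same progression by polling the fsmap, assuming the ceph CLI and the reef-era JSON layout of `ceph fs dump`:

    import json, subprocess, time

    def rank_states():
        dump = json.loads(subprocess.run(
            ["ceph", "fs", "dump", "--format", "json"],
            check=True, capture_output=True, text=True).stdout)
        info = dump["filesystems"][0]["mdsmap"]["info"]
        return {d["name"]: d["state"] for d in info.values()}

    states = rank_states()
    while "up:active" not in states.values():
        print(states)  # e.g. {'cephfs.compute-0.qcwnhf': 'up:replay'}
        time.sleep(2)
        states = rank_states()
    print(states)
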
Nov 29 02:16:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v168: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 44 op/s
Nov 29 02:16:55 np0005539550 ceph-mon[74435]: daemon mds.cephfs.compute-0.qcwnhf is now active in filesystem cephfs as rank 0
Nov 29 02:16:55 np0005539550 ceph-mon[74435]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Nov 29 02:16:55 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:16:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v169: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Nov 29 02:16:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:16:57 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 29 02:16:57 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 29 02:16:58 np0005539550 podman[94392]: 2025-11-29 07:16:58.169734928 +0000 UTC m=+8.789262407 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 02:16:58 np0005539550 podman[94392]: 2025-11-29 07:16:58.33762077 +0000 UTC m=+8.957148249 container create 56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7 (image=quay.io/ceph/haproxy:2.3, name=charming_benz)
Nov 29 02:16:58 np0005539550 systemd[1]: Started libpod-conmon-56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7.scope.
Nov 29 02:16:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:16:58 np0005539550 podman[94392]: 2025-11-29 07:16:58.703421963 +0000 UTC m=+9.322949452 container init 56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7 (image=quay.io/ceph/haproxy:2.3, name=charming_benz)
Nov 29 02:16:58 np0005539550 podman[94392]: 2025-11-29 07:16:58.717732058 +0000 UTC m=+9.337259537 container start 56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7 (image=quay.io/ceph/haproxy:2.3, name=charming_benz)
Nov 29 02:16:58 np0005539550 charming_benz[94521]: 0 0
Nov 29 02:16:58 np0005539550 systemd[1]: libpod-56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7.scope: Deactivated successfully.
Nov 29 02:16:58 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 29 02:16:58 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 29 02:16:58 np0005539550 podman[94392]: 2025-11-29 07:16:58.908323331 +0000 UTC m=+9.527850800 container attach 56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7 (image=quay.io/ceph/haproxy:2.3, name=charming_benz)
Nov 29 02:16:58 np0005539550 podman[94392]: 2025-11-29 07:16:58.909396709 +0000 UTC m=+9.528924178 container died 56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7 (image=quay.io/ceph/haproxy:2.3, name=charming_benz)
Nov 29 02:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:16:59
Nov 29 02:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'vms', 'volumes', 'images', '.mgr']
Nov 29 02:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
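
The balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0 of a possible 10 changes, i.e. the PGs are already balanced. A minimal sketch to check the same state, assuming the ceph CLI (target_max_misplaced_ratio is the mgr option behind the 0.050000 figure):

    import subprocess

    subprocess.run(["ceph", "balancer", "status"], check=True)
    subprocess.run(["ceph", "config", "get", "mgr",
                    "target_max_misplaced_ratio"], check=True)
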
Nov 29 02:16:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v170: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 02:16:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-97355b1f9e6f29a77e9076fb02c19957a1c919cd5662ae25574b7e7a57741dca-merged.mount: Deactivated successfully.
Nov 29 02:16:59 np0005539550 podman[94392]: 2025-11-29 07:16:59.758743103 +0000 UTC m=+10.378270562 container remove 56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7 (image=quay.io/ceph/haproxy:2.3, name=charming_benz)
Nov 29 02:16:59 np0005539550 systemd[1]: libpod-conmon-56ecb24c826c01a6c810c464d250d08b1a2ffaa7d5e51fce7ce820015b55a9d7.scope: Deactivated successfully.
Nov 29 02:17:01 np0005539550 systemd[1]: Reloading.
Nov 29 02:17:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v171: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6.8 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 02:17:01 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:17:01 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:17:01 np0005539550 systemd[1]: Reloading.
Nov 29 02:17:01 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:17:01 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:17:01 np0005539550 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.uyfjya for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:17:02 np0005539550 podman[94669]: 2025-11-29 07:17:02.090733985 +0000 UTC m=+0.022685351 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 29 02:17:02 np0005539550 podman[94669]: 2025-11-29 07:17:02.187183561 +0000 UTC m=+0.119134897 container create 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:17:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ecaeee6b37c5edc5909179b5cde3b5e9628d0506917732ca5a6ab422b509378/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:02 np0005539550 podman[94669]: 2025-11-29 07:17:02.518765929 +0000 UTC m=+0.450717315 container init 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:17:02 np0005539550 podman[94669]: 2025-11-29 07:17:02.524353632 +0000 UTC m=+0.456304968 container start 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:17:02 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya[94684]: [NOTICE] 332/071702 (2) : New worker #1 (4) forked
Nov 29 02:17:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 02:17:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:02.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
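
The beast lines are radosgw's access log; the HEAD / probes arriving every two seconds from 192.168.122.100/102 are the new haproxy instances' health checks. A hedged sketch for pulling fields out of such lines; the regex is fitted to the sample shown here, not an official format:

    import re

    sample = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
              '[29/Nov/2025:07:17:02.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.002000051s')

    pat = re.compile(r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
                     r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
                     r'(?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s')
    print(pat.search(sample).groupdict())
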
Nov 29 02:17:02 np0005539550 bash[94669]: 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c
Nov 29 02:17:02 np0005539550 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.uyfjya for b66774a7-56d9-5535-bd8c-681234404870.
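
The haproxy daemon now running under systemd is one half of the ingress.rgw.default service whose spec and monitor_password the mgr stored above (keepalived being the other half). A hedged sketch of such a spec; field names follow cephadm's documented ingress service, and every value below is a placeholder rather than this cluster's real configuration:

    # Applied with: ceph orch apply -i ingress.yaml
    spec = """\
    service_type: ingress
    service_id: rgw.default
    placement:
      hosts: [compute-0, compute-1, compute-2]
    spec:
      backend_service: rgw.default
      virtual_ip: 192.0.2.10/24   # placeholder VIP
      frontend_port: 8080
      monitor_port: 1967
    """
    print(spec)
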
Nov 29 02:17:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v172: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 6.6 KiB/s rd, 0 B/s wr, 6 op/s
Nov 29 02:17:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:17:03 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
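
Each pg_autoscaler line multiplies a pool's share of raw capacity by its bias to get a fractional PG target, then "quantizes" it. A simplified sketch of that last step: the power-of-two rounding matches upstream behaviour, while the per-pool floor of 32 is an assumption fitted to what the rgw pools show above (the real module also weighs the bulk flag and pg_num_min/max):

    import math

    def quantize_pgs(pg_target: float, pg_floor: int = 1) -> int:
        # Round the fractional target up to a power of two, never below the floor.
        raw = max(pg_target, pg_floor)
        return 1 << max(0, math.ceil(math.log2(raw)))

    print(quantize_pgs(0.0062))               # -> 1, like pool '.mgr'
    print(quantize_pgs(0.0002, pg_floor=32))  # -> 32, like '.rgw.root'
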
Nov 29 02:17:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:17:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 29 02:17:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:04.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:04 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 29 02:17:04 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 29 02:17:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:17:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v173: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:17:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 29 02:17:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 29 02:17:06 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev 93ed0e6c-8b73-4222-9da8-0ba021ff9201 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 02:17:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:17:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:06.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 29 02:17:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.efzvmt on compute-2
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.efzvmt on compute-2
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v175: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 3.9 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:17:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:17:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:17:07 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 29 02:17:07 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 29 02:17:08 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev f10129a2-9685-4284-95e3-45d16bd6443e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:17:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:08.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:08 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 29 02:17:08 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 29 02:17:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v177: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 29 02:17:10 np0005539550 python3[94733]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
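
The Ansible task above shells out to podman so radosgw-admin runs from the quay.io/ceph/ceph:v18 image with the host's /etc/ceph mounted in. A minimal sketch of consuming the same lookup directly, assuming radosgw-admin is installed on the host and the usual JSON fields (user_id, keys[].access_key):

    import json, subprocess

    info = json.loads(subprocess.run(
        ["radosgw-admin", "user", "info", "--uid", "openstack"],
        check=True, capture_output=True, text=True).stdout)
    print(info["user_id"], [k["access_key"] for k in info.get("keys", [])])
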
Nov 29 02:17:10 np0005539550 podman[94735]: 2025-11-29 07:17:10.379728276 +0000 UTC m=+0.025987935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:17:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:17:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:10.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:17:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v178: 181 pgs: 181 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:11.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:12.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: Deploying daemon haproxy.rgw.default.compute-2.efzvmt on compute-2
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:13 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 29 02:17:13 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 29 02:17:13 np0005539550 podman[94735]: 2025-11-29 07:17:13.05038987 +0000 UTC m=+2.696649499 container create bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0 (image=quay.io/ceph/ceph:v18, name=kind_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:17:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v179: 181 pgs: 1 active+clean+scrubbing, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:13 np0005539550 systemd[1]: Started libpod-conmon-bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0.scope.
Nov 29 02:17:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:17:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5d947fc96a9c4c153b8a3d78ab4ba86de44cae13cd2b3b86113820f6037fe6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e5d947fc96a9c4c153b8a3d78ab4ba86de44cae13cd2b3b86113820f6037fe6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 29 02:17:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:13 np0005539550 podman[94735]: 2025-11-29 07:17:13.719967042 +0000 UTC m=+3.366226751 container init bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0 (image=quay.io/ceph/ceph:v18, name=kind_allen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:17:13 np0005539550 podman[94735]: 2025-11-29 07:17:13.730159657 +0000 UTC m=+3.376419286 container start bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0 (image=quay.io/ceph/ceph:v18, name=kind_allen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 29 02:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:17:13 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev ec8b01e2-2ee4-4372-aea7-9529afedfea4 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 02:17:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 29 02:17:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:14 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Nov 29 02:17:14 np0005539550 podman[94735]: 2025-11-29 07:17:14.252764342 +0000 UTC m=+3.899023971 container attach bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0 (image=quay.io/ceph/ceph:v18, name=kind_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:17:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 59 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=47/48 n=4 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=59 pruub=8.410545349s) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 48'3 mlcod 48'3 active pruub 169.143371582s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 59 pg[9.0( v 55'1153 (0'0,55'1153] local-lis/les=49/50 n=177 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=11.879889488s) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 55'1152 mlcod 55'1152 active pruub 172.613845825s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:14 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Nov 29 02:17:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 59 pg[8.0( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=59 pruub=8.410545349s) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 48'3 mlcod 0'0 unknown pruub 169.143371582s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 59 pg[9.0( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59 pruub=11.879889488s) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 55'1152 mlcod 0'0 unknown pruub 172.613845825s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:14.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 29 02:17:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v181: 243 pgs: 62 unknown, 1 active+clean+scrubbing, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:15.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:16.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:17:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v182: 243 pgs: 62 unknown, 1 active+clean+scrubbing, 180 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:17.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:18 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Nov 29 02:17:18 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Nov 29 02:17:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event, 62 pgs not in active + clean state
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 29 02:17:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:18.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] update: starting ev b2dbdd2d-bfe7-4cbe-903f-d6904904a37b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 93ed0e6c-8b73-4222-9da8-0ba021ff9201 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 93ed0e6c-8b73-4222-9da8-0ba021ff9201 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 13 seconds
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev f10129a2-9685-4284-95e3-45d16bd6443e (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event f10129a2-9685-4284-95e3-45d16bd6443e (PG autoscaler increasing pool 9 PGs from 1 to 32) in 11 seconds
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev ec8b01e2-2ee4-4372-aea7-9529afedfea4 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event ec8b01e2-2ee4-4372-aea7-9529afedfea4 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 5 seconds
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev b2dbdd2d-bfe7-4cbe-903f-d6904904a37b (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event b2dbdd2d-bfe7-4cbe-903f-d6904904a37b (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 29 02:17:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v184: 243 pgs: 2 peering, 62 unknown, 179 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.18( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.19( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.17( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.16( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.e( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.11( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.16( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.10( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.17( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1e( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.11( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.10( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.3( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.2( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=47/48 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.5( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.4( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.13( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.12( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.f( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.12( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.13( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1c( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1d( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.18( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.19( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1f( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1b( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.5( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.4( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.7( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.6( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.7( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.6( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.a( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.b( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1a( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.c( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.c( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.d( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.f( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.e( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.a( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.8( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.9( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.9( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.b( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.8( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.d( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.2( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.15( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.3( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.14( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.15( v 55'1153 lc 0'0 (0'0,55'1153] local-lis/les=49/50 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.14( v 48'4 lc 0'0 (0'0,48'4] local-lis/les=47/48 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:19.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.0( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 55'1152 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.4( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.13( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1c( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1d( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1e( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1a( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.7( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.0( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 48'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.c( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.e( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.2( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.14( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=47/47 les/c/f=48/48/0 sis=59) [0] r=0 lpr=59 pi=[47,59)/1 crt=48'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 60 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=49/49 les/c/f=50/50/0 sis=59) [0] r=0 lpr=59 pi=[49,59)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 2 peering, 124 unknown, 179 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 29 02:17:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:21.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:21 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.260725975s) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active pruub 179.412139893s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:21 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 61 pg[11.0( empty local-lis/les=53/54 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61 pruub=11.260725975s) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown pruub 179.412139893s@ mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 29 02:17:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:22 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:17:22 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:17:22 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:17:22 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:17:22 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.jyvvou on compute-0
Nov 29 02:17:22 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.jyvvou on compute-0
Nov 29 02:17:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 29 02:17:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 op/s
Nov 29 02:17:23 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 20 completed events
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 02:17:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
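The progress module's "Writing back 20 completed events" pairs with the config-key set on the mon line above: completed events are persisted in the monitors' config-key store under mgr/progress/completed. A sketch that reads the stored blob back; the value's schema is internal to the progress module and version-dependent, so it is only pretty-printed here:

    import json
    import subprocess

    # Key name taken from the handle_command line above.
    raw = subprocess.check_output(
        ['ceph', 'config-key', 'get', 'mgr/progress/completed'])
    print(json.dumps(json.loads(raw), indent=2)[:500])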
Nov 29 02:17:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:23.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.16( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.9( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.8( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.3( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.5( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.7( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.18( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1d( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1e( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1f( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.11( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.2( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1a( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.13( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.12( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.c( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.15( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1b( empty local-lis/les=53/54 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
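The deep-scrub of 7.4 starts and completes within the same second; the scrubs that follow (7.3, 7.10, 7.b, ...) are osd.0 working through pool 7's placement groups on its own periodic schedule, checking object consistency. The same operation can be queued by hand, a sketch assuming the ceph CLI and an admin keyring on this host:

    import subprocess

    # Manually queue a deep-scrub of one PG, mirroring what the
    # scheduler is doing for 7.4 in the lines above.
    subprocess.check_call(['ceph', 'pg', 'deep-scrub', '7.4'])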
Nov 29 02:17:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.0( empty local-lis/les=61/62 n=0 ec=53/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.9( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.16( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.18( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.7( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.10( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.11( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.6( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.2( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.13( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.5( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.15( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:17:24 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 62 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=53/53 les/c/f=54/54/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
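Each pg[11.x] line above is one placement group of pool 11 repeating the same lifecycle after the pg_num change at epoch 61: start_peering_interval, a Start to Primary transition, then AllReplicasActivated once activation completes, which is why the pgmap's unknown and peering counts drain over the next few ticks. A sketch that watches for stragglers, assuming the ceph CLI on this host (JSON field names follow recent Ceph releases):

    import json
    import subprocess

    # List any PGs that have not yet settled back to active+clean.
    out = subprocess.check_output(['ceph', 'pg', 'ls', '--format', 'json'])
    for pg in json.loads(out)['pg_stats']:
        if pg['state'] != 'active+clean':
            print(pg['pgid'], pg['state'])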
Nov 29 02:17:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 621 B/s rd, 0 op/s
Nov 29 02:17:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:25.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:26.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 29 02:17:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 29 02:17:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 1 peering, 62 unknown, 242 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 0 op/s
Nov 29 02:17:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:28.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 510 B/s rd, 0 op/s
Nov 29 02:17:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:30.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 29 02:17:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 29 02:17:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 409 B/s rd, 0 op/s
Nov 29 02:17:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:31.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:32 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 29 02:17:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:33 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 29 02:17:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:17:34 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:17:34 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(17) init, last seen epoch 17, mid-election, bumping
Nov 29 02:17:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v194: 305 pgs: 1 active+clean+scrubbing, 31 unknown, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:35.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:36 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:17:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:36.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 1 active+clean+scrubbing, 31 unknown, 273 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:37.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:38 np0005539550 podman[94972]: 2025-11-29 07:17:38.406110849 +0000 UTC m=+15.702437223 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 02:17:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:38.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:38 np0005539550 podman[94972]: 2025-11-29 07:17:38.818018238 +0000 UTC m=+16.114344582 container create 1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_tharp, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, distribution-scope=public)
Nov 29 02:17:38 np0005539550 systemd[1]: Started libpod-conmon-1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f.scope.
Nov 29 02:17:38 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Nov 29 02:17:39 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Nov 29 02:17:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:17:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 31 unknown, 274 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:39 np0005539550 podman[94972]: 2025-11-29 07:17:39.58152196 +0000 UTC m=+16.877848324 container init 1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_tharp, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Nov 29 02:17:39 np0005539550 podman[94972]: 2025-11-29 07:17:39.591939141 +0000 UTC m=+16.888265485 container start 1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_tharp, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vendor=Red Hat, Inc.)
Nov 29 02:17:39 np0005539550 nostalgic_tharp[95089]: 0 0
Nov 29 02:17:39 np0005539550 systemd[1]: libpod-1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f.scope: Deactivated successfully.
Nov 29 02:17:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:40 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:17:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:40.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 32 peering, 1 active+clean+scrubbing, 272 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:41 np0005539550 podman[94972]: 2025-11-29 07:17:41.646577701 +0000 UTC m=+18.942904065 container attach 1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_tharp, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64)
Nov 29 02:17:41 np0005539550 podman[94972]: 2025-11-29 07:17:41.647502075 +0000 UTC m=+18.943828419 container died 1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_tharp, release=1793, version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, name=keepalived, distribution-scope=public, description=keepalived for Ceph, vcs-type=git, vendor=Red Hat, Inc.)
Nov 29 02:17:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:17:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:41.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:17:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:17:41 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(21) init, last seen epoch 21, mid-election, bumping
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-60f23fd42f0368c213b6592e406b51b5843ff6ab9c6aa23fe0a584ead9113648-merged.mount: Deactivated successfully.
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:17:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:42.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:42 np0005539550 podman[94972]: 2025-11-29 07:17:42.695242729 +0000 UTC m=+19.991569073 container remove 1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_tharp, io.buildah.version=1.28.2, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20)
Nov 29 02:17:42 np0005539550 systemd[1]: libpod-conmon-1c87d174cb4090d1825ab86816d80491b29e9f0878af9f8c0f78d296c6baf24f.scope: Deactivated successfully.
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 4m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:17:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
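The elections at 02:17:34 and 02:17:41 end with all three monitors back in quorum (ranks 0,1,2) and an overall HEALTH_OK, and the monmap line confirms the v2/v1 addresses for compute-0/1/2. A sketch that queries the same quorum state the log reports:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'quorum_status', '--format', 'json'])
    qs = json.loads(out)
    print(qs['quorum_leader_name'], qs['quorum_names'])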
Nov 29 02:17:43 np0005539550 systemd[1]: Reloading.
Nov 29 02:17:43 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:17:43 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:17:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 32 peering, 1 active+clean+scrubbing, 272 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:43 np0005539550 systemd[1]: Reloading.
Nov 29 02:17:43 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:17:43 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:17:43 np0005539550 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.jyvvou for b66774a7-56d9-5535-bd8c-681234404870...
Nov 29 02:17:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:43.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 29 02:17:44 np0005539550 podman[95239]: 2025-11-29 07:17:43.945061895 +0000 UTC m=+0.025163335 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:44.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:44 np0005539550 podman[95239]: 2025-11-29 07:17:44.612718897 +0000 UTC m=+0.692820317 container create 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1793, architecture=x86_64, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: mon.compute-1 calling monitor election
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:17:44 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:17:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1515f466d77d30402cd1d4389358e7e8ef9c991dd68fe6dbefed680c473a1343/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:17:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v200: 305 pgs: 32 peering, 1 active+clean+scrubbing, 272 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 29 02:17:45 np0005539550 podman[95239]: 2025-11-29 07:17:45.707154515 +0000 UTC m=+1.787255935 container init 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, com.redhat.component=keepalived-container, version=2.2.4, io.openshift.expose-services=, architecture=x86_64, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20)
Nov 29 02:17:45 np0005539550 podman[95239]: 2025-11-29 07:17:45.713236393 +0000 UTC m=+1.793337803 container start 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, description=keepalived for Ceph, vcs-type=git, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64)
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: Starting VRRP child process, pid=4
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: Startup complete
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: (VI_0) Entering BACKUP STATE (init)
Nov 29 02:17:45 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:45 2025: VRRP_Script(check_backend) succeeded
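Keepalived brings its VRRP instance VI_0 up in BACKUP and immediately runs the check_backend script, whose success is the precondition for any later promotion. The script's contents are not shown in this log; below is a hypothetical stand-in that only illustrates the contract, since VRRP check scripts signal health purely through their exit status:

    #!/usr/bin/env python3
    # Hypothetical stand-in for check_backend; the real script cephadm
    # deploys is not reproduced in this log. Exit 0 = healthy.
    import sys
    import http.client

    try:
        # Port is an assumption; probe the local frontend.
        conn = http.client.HTTPConnection('127.0.0.1', 8080, timeout=2)
        conn.request('HEAD', '/')
        ok = conn.getresponse().status == 200
    except OSError:
        ok = False
    sys.exit(0 if ok else 1)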
Nov 29 02:17:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:45.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:45 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 29 02:17:46 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 29 02:17:46 np0005539550 bash[95239]: 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7
Nov 29 02:17:46 np0005539550 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.jyvvou for b66774a7-56d9-5535-bd8c-681234404870.
Nov 29 02:17:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:46.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:17:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:47.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:47 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Nov 29 02:17:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:48.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:48 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 29 02:17:49 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event bcc8c223-1107-4afc-ba87-7a56e07b663b (Global Recovery Event) in 31 seconds
Nov 29 02:17:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:49 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:17:49 2025: (VI_0) Entering MASTER STATE
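Seeing no higher-priority VRRP advertisements, VI_0 promotes from BACKUP to MASTER about four seconds after startup, at which point the ingress VIP from the cephadm lines at 02:17:22 (192.168.122.2 on br-ex) should be plumbed onto this node. A sketch that checks where the VIP currently lives, using iproute2's JSON output:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ip', '-j', 'addr', 'show', 'dev', 'br-ex'])
    addrs = [a['local']
             for iface in json.loads(out)
             for a in iface.get('addr_info', [])]
    print('VIP held here:', '192.168.122.2' in addrs)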
Nov 29 02:17:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:49.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:50 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Nov 29 02:17:50 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 29 02:17:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:50.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 29 02:17:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 0 op/s
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:51 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:51 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 29 02:17:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:51.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:52 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 29 02:17:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:52.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:52 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.884047985s, txc = 0x556d2c6e3500
Nov 29 02:17:52 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 29 02:17:52 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 29 02:17:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 0 op/s
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:17:53 np0005539550 kind_allen[94753]: could not fetch user info: no user info saved
Nov 29 02:17:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:53.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 29 02:17:54 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 21 completed events
Nov 29 02:17:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 29 02:17:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:54.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:17:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 29 02:17:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v206: 305 pgs: 2 active+clean+scrubbing, 303 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 0 op/s
Nov 29 02:17:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 29 02:17:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.14( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.13( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.15( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.18( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.5( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.2( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[10.8( empty local-lis/les=0/0 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.546639442s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.524276733s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.17( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.546607018s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.524276733s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.16( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.643319130s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621017456s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.16( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.643287659s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621017456s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943734169s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921493530s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.15( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943710327s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921493530s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943618774s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921508789s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.3( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943596840s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921508789s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943527222s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921478271s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.8( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943481445s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921478271s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642699242s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.620788574s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642675400s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.620788574s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943188667s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921325684s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.9( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943166733s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921325684s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943086624s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921340942s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.a( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.943050385s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921340942s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642472267s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.620819092s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642442703s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.620819092s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642779350s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621200562s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.f( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642754555s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621200562s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942789078s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921264648s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.d( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942767143s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921264648s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642361641s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.620925903s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942654610s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921234131s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.8( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642334938s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.620925903s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.c( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942630768s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921234131s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942563057s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921218872s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942918777s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921661377s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642440796s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621185303s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.b( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942462921s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921218872s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.3( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642422676s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621185303s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642501831s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621322632s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.4( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642473221s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621322632s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.5( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642280579s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621185303s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942284584s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921203613s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.7( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642263412s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621200562s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.5( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642262459s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621185303s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.6( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942259789s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921203613s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942111015s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921096802s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.7( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642237663s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621200562s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.4( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942086220s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921096802s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642202377s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621322632s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.941647530s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920776367s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.19( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642181396s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621322632s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.1b( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.941626549s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920776367s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642256737s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621536255s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1a( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.642241478s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621536255s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.941452980s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920761108s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.19( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.941433907s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920761108s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641915321s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621322632s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1d( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641896248s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621322632s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641902924s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621368408s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1e( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641880035s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621368408s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940949440s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920578003s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.14( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.942035675s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921661377s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.1c( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940925598s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920578003s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940607071s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920440674s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940567970s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920440674s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.12( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940586090s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920440674s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.5( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940545082s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920440674s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940289497s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920379639s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641353607s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621475220s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.2( v 48'4 (0'0,48'4] local-lis/les=59/60 n=1 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940216064s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920379639s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641299248s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621475220s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.13( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641107559s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621429443s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.13( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641075134s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621429443s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939952850s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920349121s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.10( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939851761s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920349121s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641105652s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621673584s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1c( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.641077042s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621673584s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939704895s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920333862s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.1f( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939668655s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920333862s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.640703201s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621459961s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.14( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.640677452s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621459961s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939397812s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920288086s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.640520096s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621444702s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.17( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939373970s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920288086s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.12( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.640490532s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621444702s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939210892s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920303345s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.11( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939190865s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920303345s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.940030098s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.921173096s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.f( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.939999580s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.921173096s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.640124321s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 active pruub 210.621459961s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.938878059s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920242310s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[11.1b( empty local-lis/les=61/62 n=0 ec=61/53 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=8.640063286s) [1] r=-1 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 210.621459961s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.16( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.938828468s) [2] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920242310s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.938830376s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 active pruub 214.920394897s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:17:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 64 pg[8.18( v 48'4 (0'0,48'4] local-lis/les=59/60 n=0 ec=59/47 lis/c=59/59 les/c/f=60/60/0 sis=64 pruub=12.938812256s) [1] r=-1 lpr=64 pi=[59,64)/1 crt=48'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 214.920394897s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:17:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:55.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 29 02:17:56 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
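The batch above is the mgr stepping pgp_num_actual on the RGW pools as their placement groups split; note the value for default.rgw.log climbs in small steps (2, then 3, 4, 5 later in this log) so only a bounded amount of data is misplaced at a time. As a reference only, a minimal sketch of issuing the same change by hand, assuming admin credentials on the host and using the standard ceph CLI form of the mon_command shown in the audit lines (pool names and values copied from the log):

    # Sketch: replaying the "osd pool set ... pgp_num_actual" mon_commands
    # above via the ceph CLI. Pool names/values are taken from the log;
    # having a working ceph.conf and admin keyring is assumed.
    import subprocess

    POOL_TARGETS = {
        ".rgw.root": 32,
        "default.rgw.control": 32,
        "default.rgw.log": 2,
        "default.rgw.meta": 32,
    }

    def set_pgp_num_actual(pool: str, value: int) -> None:
        # CLI equivalent of the JSON mon_command:
        # {"prefix": "osd pool set", "pool": pool,
        #  "var": "pgp_num_actual", "val": str(value)}
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(value)],
            check=True,
        )

    for pool, value in POOL_TARGETS.items():
        set_pgp_num_actual(pool, value)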
Nov 29 02:17:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
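These beast lines are the radosgw access log for the anonymous HEAD / probes arriving every two seconds from 192.168.122.100 and 192.168.122.102, most likely the health checks from the ingress.rgw.default haproxy instances deployed elsewhere in this log. A sketch of pulling the fields out of one such line; the regex is an assumption fitted to the exact format printed here, not an official parser:

    # Sketch: extract client, timestamp, request, status and latency from a
    # beast access-log line of the shape shown above.
    import re

    BEAST_RE = re.compile(
        r"beast: \S+: (?P<client>\S+) - (?P<user>\S+) "
        r"\[(?P<ts>[^\]]+)\] \"(?P<request>[^\"]+)\" "
        r"(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s"
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:17:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group("client"), m.group("request"),
              m.group("status"), m.group("latency"))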
Nov 29 02:17:56 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 29 02:17:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event, 21 pgs not in active + clean state
Nov 29 02:17:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v208: 305 pgs: 21 peering, 284 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 0 op/s
Nov 29 02:17:57 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 29 02:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:17:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:17:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:17:58.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:17:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 29 02:17:58 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 29 02:17:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:17:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:17:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:17:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:17:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.gntzbr on compute-2
Nov 29 02:17:58 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.gntzbr on compute-2
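Before deploying keepalived for the ingress service, cephadm matches the configured virtual IP against the subnets seen on each candidate host, which is what the two "is in ... on interface br-ex" lines record. A minimal sketch of that membership test, assuming the VIP and the per-interface network exactly as shown above:

    # Sketch of the subnet-membership check behind the cephadm messages:
    # the ingress VIP 192.168.122.2 falls inside 192.168.122.0/24, which is
    # configured on br-ex, so keepalived is placed on that interface.
    import ipaddress

    vip = ipaddress.ip_address("192.168.122.2")
    host_networks = {"br-ex": ipaddress.ip_network("192.168.122.0/24")}

    for iface, net in host_networks.items():
        if vip in net:
            print(f"{vip} is in {net} on interface {iface}")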
Nov 29 02:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:17:59
Nov 29 02:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Some PGs (0.068852) are inactive; try again later
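The balancer's 0.068852 is the fraction of inactive PGs: the pgmap a few lines up reports 21 peering out of 305 total, and 21/305 reproduces the printed value, which is why the upmap pass is deferred until the cluster settles:

    # Worked check of the balancer's inactive fraction against the pgmap.
    peering, total = 21, 305
    inactive_fraction = peering / total
    print(f"{inactive_fraction:.6f}")   # -> 0.068852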
Nov 29 02:17:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 21 peering, 284 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:17:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 02:17:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 29 02:17:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 29 02:17:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:17:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:17:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 29 02:17:59 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.8( v 60'49 (0'0,60'49] local-lis/les=64/66 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.5( v 60'49 (0'0,60'49] local-lis/les=64/66 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.19( v 60'49 (0'0,60'49] local-lis/les=64/66 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.1b( v 60'49 (0'0,60'49] local-lis/les=64/66 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.18( v 60'49 (0'0,60'49] local-lis/les=64/66 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.15( v 63'54 lc 63'53 (0'0,63'54] local-lis/les=64/66 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=63'54 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.13( v 60'49 (0'0,60'49] local-lis/les=64/66 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.14( v 63'54 lc 63'53 (0'0,63'54] local-lis/les=64/66 n=0 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=63'54 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 66 pg[10.2( v 60'49 (0'0,60'49] local-lis/les=64/66 n=1 ec=61/51 lis/c=61/61 les/c/f=63/63/0 sis=64) [0] r=0 lpr=64 pi=[61,64)/1 crt=60'49 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:00.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: Deploying daemon keepalived.rgw.default.compute-2.gntzbr on compute-2
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 29 02:18:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 204 B/s, 1 objects/s recovering
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 02:18:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:01 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 29 02:18:01 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 29 02:18:01 np0005539550 systemd[1]: libpod-bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0.scope: Deactivated successfully.
Nov 29 02:18:01 np0005539550 podman[94735]: 2025-11-29 07:18:01.604694379 +0000 UTC m=+51.250954028 container died bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0 (image=quay.io/ceph/ceph:v18, name=kind_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:01 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 36fc2185-39d4-40db-ac79-d2a2d804fa9f (Global Recovery Event) in 5 seconds
Nov 29 02:18:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 29 02:18:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:02.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8e5d947fc96a9c4c153b8a3d78ab4ba86de44cae13cd2b3b86113820f6037fe6-merged.mount: Deactivated successfully.
Nov 29 02:18:02 np0005539550 podman[94735]: 2025-11-29 07:18:02.23337896 +0000 UTC m=+51.879638589 container remove bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0 (image=quay.io/ceph/ceph:v18, name=kind_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:02 np0005539550 systemd[1]: libpod-conmon-bd9ff84da6de1e8503ecd5436e5d48d8af522bcfd4b4867a341c7c7638873eb0.scope: Deactivated successfully.
Nov 29 02:18:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:02.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:02 np0005539550 python3[95350]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid b66774a7-56d9-5535-bd8c-681234404870 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:18:02 np0005539550 podman[95353]: 2025-11-29 07:18:02.723882721 +0000 UTC m=+0.055603616 container create 06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:18:02 np0005539550 systemd[1]: Started libpod-conmon-06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f.scope.
Nov 29 02:18:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:02 np0005539550 podman[95353]: 2025-11-29 07:18:02.705586985 +0000 UTC m=+0.037307910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 29 02:18:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c227326abc5f1ca6347aed7ef862885cb1b6a3a5e389b051039028cd01a99a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c227326abc5f1ca6347aed7ef862885cb1b6a3a5e389b051039028cd01a99a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
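The two xfs warnings mean these filesystems were created without large-timestamp support, so their inode timestamps top out at 0x7fffffff seconds after the epoch, the familiar year-2038 limit. Converting the hex value gives the exact cutoff instant:

    # 0x7fffffff is the signed 32-bit epoch limit the kernel is warning about.
    from datetime import datetime, timezone

    limit = 0x7fffffff                      # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00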
Nov 29 02:18:02 np0005539550 podman[95353]: 2025-11-29 07:18:02.816105556 +0000 UTC m=+0.147826491 container init 06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:18:02 np0005539550 podman[95353]: 2025-11-29 07:18:02.827188984 +0000 UTC m=+0.158909889 container start 06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:18:02 np0005539550 podman[95353]: 2025-11-29 07:18:02.8339518 +0000 UTC m=+0.165672725 container attach 06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:18:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 173 B/s, 0 objects/s recovering
Nov 29 02:18:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 29 02:18:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 29 02:18:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.807026863s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.921676636s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.806876183s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.921569824s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.806962013s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.921676636s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.806817055s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.921569824s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.806062698s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.921310425s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.806042671s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.921310425s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.805488586s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.921249390s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.805460930s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.921249390s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.805200577s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.921218872s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.805180550s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.921218872s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.804058075s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.920806885s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.804006577s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.920806885s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.804190636s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.921035767s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.804167747s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.921035767s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.803159714s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 222.920623779s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 67 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=67 pruub=12.803134918s) [2] r=-1 lpr=67 pi=[59,67)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.920623779s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:04.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 29 02:18:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:04.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v214: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 173 B/s, 0 objects/s recovering
Nov 29 02:18:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 29 02:18:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 29 02:18:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 29 02:18:05 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 29 02:18:05 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 29 02:18:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 29 02:18:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:06.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:18:06 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Nov 29 02:18:06 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Nov 29 02:18:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:06 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 22 completed events
Nov 29 02:18:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]: {
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "user_id": "openstack",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "display_name": "openstack",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "email": "",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "suspended": 0,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "max_buckets": 1000,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "subusers": [],
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "keys": [
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        {
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:            "user": "openstack",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:            "access_key": "8GE353F8FLMMF8XB9WKK",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:            "secret_key": "aAbbVdUIKsSHO3vPNvXnTODA19KjcHPuFvrsr1SG"
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        }
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    ],
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "swift_keys": [],
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "caps": [],
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "op_mask": "read, write, delete",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "default_placement": "",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "default_storage_class": "",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "placement_tags": [],
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "bucket_quota": {
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "enabled": false,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "check_on_raw": false,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "max_size": -1,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "max_size_kb": 0,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "max_objects": -1
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    },
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "user_quota": {
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "enabled": false,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "check_on_raw": false,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "max_size": -1,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "max_size_kb": 0,
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:        "max_objects": -1
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    },
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "temp_url_keys": [],
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "type": "rgw",
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]:    "mfa_ids": []
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]: }
Nov 29 02:18:06 np0005539550 dreamy_ptolemy[95368]: 
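The block above is radosgw-admin's JSON for the freshly created "openstack" user, including its generated S3 key pair. A sketch of lifting the credentials out of that output, with the structure as printed above and the key values elided here:

    # Sketch: extract the S3 credentials from radosgw-admin "user create" JSON.
    import json

    raw = '''{"user_id": "openstack",
              "keys": [{"user": "openstack",
                        "access_key": "...",
                        "secret_key": "..."}]}'''
    user = json.loads(raw)
    key = user["keys"][0]
    print(user["user_id"], key["access_key"], key["secret_key"])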
Nov 29 02:18:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 29 02:18:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 69 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:18:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:18:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 29 02:18:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:08.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:08.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 29 02:18:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:18:10 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou[95256]: Sat Nov 29 07:18:10 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
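The keepalived message is standard VRRP arbitration: a peer advertised priority 90, this node holds 100, and the higher priority retains the virtual IP after the forced election. A minimal sketch of the comparison, with the priorities taken from the line above:

    # Minimal sketch of the VRRP priority comparison: an advert with a lower
    # priority than ours (90 < 100) cannot preempt, so this node re-asserts
    # mastership of the VIP.
    def keeps_mastership(our_priority: int, advert_priority: int) -> bool:
        return our_priority > advert_priority

    print(keeps_mastership(100, 90))   # -> True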
Nov 29 02:18:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:10.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:10 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 29 02:18:10 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 29 02:18:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:10.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 29 02:18:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 29 02:18:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event, 8 pgs not in active + clean state
Nov 29 02:18:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:12.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:12 np0005539550 systemd[1]: libpod-06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f.scope: Deactivated successfully.
Nov 29 02:18:12 np0005539550 podman[95353]: 2025-11-29 07:18:12.813437209 +0000 UTC m=+10.145158114 container died 06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:18:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:13 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 29 02:18:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:14.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 29 02:18:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:14.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 70 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[59,69)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-32c227326abc5f1ca6347aed7ef862885cb1b6a3a5e389b051039028cd01a99a-merged.mount: Deactivated successfully.
Nov 29 02:18:15 np0005539550 systemd[75927]: Created slice User Background Tasks Slice.
Nov 29 02:18:15 np0005539550 systemd[75927]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 02:18:15 np0005539550 systemd[75927]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 02:18:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 8 unknown, 297 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:15 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 29 02:18:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:18:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 29 02:18:15 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 29 02:18:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:16.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:16 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 29 02:18:16 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 29 02:18:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 29 02:18:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:16.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 6 active+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 767 B/s wr, 70 op/s; 7/212 objects misplaced (3.302%); 109 B/s, 3 objects/s recovering
Nov 29 02:18:17 np0005539550 podman[95353]: 2025-11-29 07:18:17.543508686 +0000 UTC m=+14.875229591 container remove 06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f (image=quay.io/ceph/ceph:v18, name=dreamy_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:18:17 np0005539550 systemd[1]: libpod-conmon-06a2e956dad759e1f2a048ae561908b7ef066f0911a999b9b54a5d7700ec9a2f.scope: Deactivated successfully.
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 29 02:18:17 np0005539550 ceph-mgr[74726]: [progress INFO root] complete: finished ev 4e65c482-0997-4925-a038-a52de37cc123 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 02:18:17 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 4e65c482-0997-4925-a038-a52de37cc123 (Updating ingress.rgw.default deployment (+4 -> 4)) in 90 seconds
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.388860703s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 236.543640137s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966924667s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 237.121765137s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.17( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.388770103s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.543640137s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966348648s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 237.121231079s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966252327s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 237.121322632s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.7( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966189384s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.121322632s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.13( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966588020s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.121765137s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966508865s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 237.121795654s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.965982437s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 237.121292114s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966094971s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 237.121414185s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.f( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966459274s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.121795654s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.3( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.965929985s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.121292114s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.b( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=6 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966058731s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.121414185s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.966233253s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.121231079s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.387910843s) [2] async=[2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 236.543609619s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:17 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 71 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=69/70 n=5 ec=59/49 lis/c=69/59 les/c/f=70/60/0 sis=71 pruub=12.387812614s) [2] r=-1 lpr=71 pi=[59,71)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.543609619s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 29 02:18:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Nov 29 02:18:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 02:18:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 29 02:18:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 29 02:18:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 29 02:18:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:18.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.361378652521869e-06 of space, bias 1.0, pg target 0.0019084135957565607 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:18:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 6 active+remapped, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 921 B/s wr, 99 op/s; 7/212 objects misplaced (3.302%); 131 B/s, 4 objects/s recovering
Nov 29 02:18:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 29 02:18:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 02:18:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 29 02:18:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 29 02:18:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 29 02:18:20 np0005539550 podman[95760]: 2025-11-29 07:18:20.184849129 +0000 UTC m=+0.548403456 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:20.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:20 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 29 02:18:20 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 29 02:18:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:20.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:20 np0005539550 podman[95760]: 2025-11-29 07:18:20.646306427 +0000 UTC m=+1.009860734 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:18:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:18:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 511 B/s wr, 80 op/s; 151 B/s, 4 objects/s recovering
Nov 29 02:18:21 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 29 02:18:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:18:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 02:18:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 29 02:18:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 29 02:18:21 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 29 02:18:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:18:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:22.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.349166870s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 238.921920776s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.349099159s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.921920776s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.348874092s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 238.922073364s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.348814964s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.922073364s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.348054886s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 238.921478271s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.348021507s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.921478271s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.347549438s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 238.921447754s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 73 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=73 pruub=10.347417831s) [2] r=-1 lpr=73 pi=[59,73)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.921447754s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:22 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 23 completed events
Nov 29 02:18:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:22.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:18:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:18:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 8 peering, 297 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 53 B/s, 1 objects/s recovering
Nov 29 02:18:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:23 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 29 02:18:23 np0005539550 podman[95917]: 2025-11-29 07:18:23.519983575 +0000 UTC m=+1.045698380 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:18:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 29 02:18:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:24.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:24 np0005539550 podman[95917]: 2025-11-29 07:18:24.244955894 +0000 UTC m=+1.770670699 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:18:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 4 unknown, 301 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 0 objects/s recovering
Nov 29 02:18:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 29 02:18:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:25 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 29 02:18:25 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 29 02:18:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:26.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:26 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 29 02:18:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:26.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 29 02:18:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 2 active+clean+scrubbing, 4 unknown, 299 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 0 objects/s recovering
Nov 29 02:18:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.8 deep-scrub starts
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 74 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.8 deep-scrub ok
Nov 29 02:18:27 np0005539550 podman[95988]: 2025-11-29 07:18:27.716397132 +0000 UTC m=+0.757588483 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, version=2.2.4, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 02:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:27 np0005539550 podman[96012]: 2025-11-29 07:18:27.838047907 +0000 UTC m=+0.096039064 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.openshift.expose-services=, release=1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 02:18:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:28 np0005539550 podman[95988]: 2025-11-29 07:18:28.126312978 +0000 UTC m=+1.167504319 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 29 02:18:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 29 02:18:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:28.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:18:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 29 02:18:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 29 02:18:28 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 29 02:18:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:28 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 29 02:18:28 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 75 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:28 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 75 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:28 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 75 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:28 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 75 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[59,74)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:18:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 2 active+clean+scrubbing, 4 unknown, 299 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 29 02:18:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:18:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:18:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:18:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:30.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:30 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 29 02:18:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:30.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:30 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 29 02:18:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 29 02:18:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8577c135-3228-465f-b3ed-585aba680135 does not exist
Nov 29 02:18:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c86b3208-8fe8-4411-ab88-09505b719dcf does not exist
Nov 29 02:18:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4f4f0168-00d0-4cc1-b583-58a57c1c1011 does not exist
Nov 29 02:18:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 4 active+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 3 objects/s recovering
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.959820747s) [2] async=[2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 251.239700317s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.959870338s) [2] async=[2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 251.239639282s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.959918976s) [2] async=[2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 251.239654541s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.1d( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.959691048s) [2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.239639282s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.5( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.959679604s) [2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.239654541s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.959017754s) [2] async=[2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 251.239685059s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.d( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=6 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.958908081s) [2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.239700317s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 76 pg[9.15( v 55'1153 (0'0,55'1153] local-lis/les=74/75 n=5 ec=59/49 lis/c=74/59 les/c/f=75/60/0 sis=76 pruub=12.958868027s) [2] r=-1 lpr=76 pi=[59,76)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.239685059s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 29 02:18:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 29 02:18:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:32.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:32 np0005539550 podman[96301]: 2025-11-29 07:18:32.146658234 +0000 UTC m=+0.031550025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:32.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:32 np0005539550 podman[96301]: 2025-11-29 07:18:32.669387636 +0000 UTC m=+0.554279407 container create d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:18:32 np0005539550 systemd[1]: Started libpod-conmon-d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332.scope.
Nov 29 02:18:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:33 np0005539550 podman[96301]: 2025-11-29 07:18:33.006968072 +0000 UTC m=+0.891859873 container init d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cartwright, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:18:33 np0005539550 podman[96301]: 2025-11-29 07:18:33.016619061 +0000 UTC m=+0.901510832 container start d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:18:33 np0005539550 naughty_cartwright[96320]: 167 167
Nov 29 02:18:33 np0005539550 systemd[1]: libpod-d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332.scope: Deactivated successfully.
Nov 29 02:18:33 np0005539550 podman[96301]: 2025-11-29 07:18:33.054643424 +0000 UTC m=+0.939535195 container attach d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cartwright, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:33 np0005539550 podman[96301]: 2025-11-29 07:18:33.055489676 +0000 UTC m=+0.940381447 container died d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:18:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 4 active+remapped, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 98 B/s, 3 objects/s recovering
Nov 29 02:18:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 29 02:18:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-112bac25a816825a02fe0cf4bf72fb4d857d0b3f91c5f249cb170d7b272bedd1-merged.mount: Deactivated successfully.
Nov 29 02:18:33 np0005539550 podman[96301]: 2025-11-29 07:18:33.340031271 +0000 UTC m=+1.224923042 container remove d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:18:33 np0005539550 systemd[1]: libpod-conmon-d485e2642dad3dc3fe215e75605ee21db11f5c5098f4ff963dd47c42a598e332.scope: Deactivated successfully.
Nov 29 02:18:33 np0005539550 podman[96346]: 2025-11-29 07:18:33.49126508 +0000 UTC m=+0.026791323 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:33 np0005539550 podman[96346]: 2025-11-29 07:18:33.616368514 +0000 UTC m=+0.151894777 container create f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:33 np0005539550 systemd[1]: Started libpod-conmon-f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0.scope.
Nov 29 02:18:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3158c009e63b5d6f46e257ce4b6b9391f7a36d145d63080d73368abff813bfde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3158c009e63b5d6f46e257ce4b6b9391f7a36d145d63080d73368abff813bfde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3158c009e63b5d6f46e257ce4b6b9391f7a36d145d63080d73368abff813bfde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3158c009e63b5d6f46e257ce4b6b9391f7a36d145d63080d73368abff813bfde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3158c009e63b5d6f46e257ce4b6b9391f7a36d145d63080d73368abff813bfde/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:33 np0005539550 podman[96346]: 2025-11-29 07:18:33.947611526 +0000 UTC m=+0.483137779 container init f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:33 np0005539550 podman[96346]: 2025-11-29 07:18:33.961278059 +0000 UTC m=+0.496804272 container start f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:18:34 np0005539550 podman[96346]: 2025-11-29 07:18:34.083897968 +0000 UTC m=+0.619424191 container attach f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:18:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:34.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:34.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:34 np0005539550 confident_snyder[96363]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:18:34 np0005539550 confident_snyder[96363]: --> relative data size: 1.0
Nov 29 02:18:34 np0005539550 confident_snyder[96363]: --> All data devices are unavailable
Nov 29 02:18:34 np0005539550 systemd[1]: libpod-f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0.scope: Deactivated successfully.
Nov 29 02:18:34 np0005539550 podman[96346]: 2025-11-29 07:18:34.836277066 +0000 UTC m=+1.371803309 container died f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:18:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3158c009e63b5d6f46e257ce4b6b9391f7a36d145d63080d73368abff813bfde-merged.mount: Deactivated successfully.
Nov 29 02:18:35 np0005539550 podman[96346]: 2025-11-29 07:18:35.106240624 +0000 UTC m=+1.641766847 container remove f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:18:35 np0005539550 systemd[1]: libpod-conmon-f310d2f27beca9128ca817ca6415573efc3f69a3b5d33ba484d551cfcda750c0.scope: Deactivated successfully.
Nov 29 02:18:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:35 np0005539550 podman[96533]: 2025-11-29 07:18:35.71660179 +0000 UTC m=+0.048589316 container create c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:18:35 np0005539550 systemd[1]: Started libpod-conmon-c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c.scope.
Nov 29 02:18:35 np0005539550 podman[96533]: 2025-11-29 07:18:35.690553727 +0000 UTC m=+0.022541273 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:35 np0005539550 podman[96533]: 2025-11-29 07:18:35.887056125 +0000 UTC m=+0.219043651 container init c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:18:35 np0005539550 podman[96533]: 2025-11-29 07:18:35.895524064 +0000 UTC m=+0.227511590 container start c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:18:35 np0005539550 confident_sinoussi[96550]: 167 167
Nov 29 02:18:35 np0005539550 systemd[1]: libpod-c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c.scope: Deactivated successfully.
Nov 29 02:18:35 np0005539550 podman[96533]: 2025-11-29 07:18:35.903075069 +0000 UTC m=+0.235062605 container attach c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:18:35 np0005539550 podman[96533]: 2025-11-29 07:18:35.903744467 +0000 UTC m=+0.235731993 container died c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:18:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-39fabd16fdd22792adc543950842691a9a3d535d382b9478642d3fee12b61954-merged.mount: Deactivated successfully.
Nov 29 02:18:36 np0005539550 podman[96533]: 2025-11-29 07:18:36.151504981 +0000 UTC m=+0.483492507 container remove c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:18:36 np0005539550 systemd[1]: libpod-conmon-c674b347c53f9a97bfa60b049a04e1bf91bd5f94bb5083f2afa21b0c851f7f7c.scope: Deactivated successfully.
Nov 29 02:18:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:36 np0005539550 podman[96576]: 2025-11-29 07:18:36.33911983 +0000 UTC m=+0.066989642 container create c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:18:36 np0005539550 systemd[1]: Started libpod-conmon-c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e.scope.
Nov 29 02:18:36 np0005539550 podman[96576]: 2025-11-29 07:18:36.303062058 +0000 UTC m=+0.030931890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ce5c043a4ef08d03bfea8785e40a6c9451f184c9e2dfb6824861f2db711732/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ce5c043a4ef08d03bfea8785e40a6c9451f184c9e2dfb6824861f2db711732/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ce5c043a4ef08d03bfea8785e40a6c9451f184c9e2dfb6824861f2db711732/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70ce5c043a4ef08d03bfea8785e40a6c9451f184c9e2dfb6824861f2db711732/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:36 np0005539550 podman[96576]: 2025-11-29 07:18:36.587310175 +0000 UTC m=+0.315180007 container init c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:18:36 np0005539550 podman[96576]: 2025-11-29 07:18:36.595330313 +0000 UTC m=+0.323200115 container start c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaum, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:18:36 np0005539550 podman[96576]: 2025-11-29 07:18:36.599932882 +0000 UTC m=+0.327802694 container attach c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:18:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:36.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 4 peering, 301 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:37 np0005539550 practical_chaum[96592]: {
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:    "0": [
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:        {
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "devices": [
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "/dev/loop3"
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            ],
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "lv_name": "ceph_lv0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "lv_size": "7511998464",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "name": "ceph_lv0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "tags": {
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.cluster_name": "ceph",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.crush_device_class": "",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.encrypted": "0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.osd_id": "0",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.type": "block",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:                "ceph.vdo": "0"
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            },
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "type": "block",
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:            "vg_name": "ceph_vg0"
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:        }
Nov 29 02:18:37 np0005539550 practical_chaum[96592]:    ]
Nov 29 02:18:37 np0005539550 practical_chaum[96592]: }
Nov 29 02:18:37 np0005539550 systemd[1]: libpod-c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e.scope: Deactivated successfully.
Nov 29 02:18:37 np0005539550 podman[96576]: 2025-11-29 07:18:37.471089289 +0000 UTC m=+1.198959101 container died c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:18:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-70ce5c043a4ef08d03bfea8785e40a6c9451f184c9e2dfb6824861f2db711732-merged.mount: Deactivated successfully.
Nov 29 02:18:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 29 02:18:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:18:37 np0005539550 podman[96576]: 2025-11-29 07:18:37.926431488 +0000 UTC m=+1.654301290 container remove c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.675482750s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 255.968719482s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.675436020s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.968719482s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.674880981s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 255.968612671s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.674847603s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.968612671s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.674812317s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 255.968627930s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.674786568s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.968627930s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.674750328s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 255.968658447s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:37 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 77 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=77 pruub=11.674722672s) [1] r=-1 lpr=77 pi=[59,77)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 255.968658447s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 29 02:18:37 np0005539550 systemd[1]: libpod-conmon-c2aa445898c5dac141c2d3d6f4f57ef211a23907691e6f63dbdcf093ad8dd35e.scope: Deactivated successfully.
Nov 29 02:18:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:38 np0005539550 podman[96757]: 2025-11-29 07:18:38.579054278 +0000 UTC m=+0.100889449 container create c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:18:38 np0005539550 podman[96757]: 2025-11-29 07:18:38.501698578 +0000 UTC m=+0.023533809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:38 np0005539550 systemd[1]: Started libpod-conmon-c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e.scope.
Nov 29 02:18:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 29 02:18:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:38.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:38 np0005539550 podman[96757]: 2025-11-29 07:18:38.793328836 +0000 UTC m=+0.315164037 container init c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:38 np0005539550 podman[96757]: 2025-11-29 07:18:38.802211246 +0000 UTC m=+0.324046417 container start c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:18:38 np0005539550 angry_neumann[96773]: 167 167
Nov 29 02:18:38 np0005539550 systemd[1]: libpod-c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e.scope: Deactivated successfully.
Nov 29 02:18:38 np0005539550 podman[96757]: 2025-11-29 07:18:38.81320409 +0000 UTC m=+0.335039261 container attach c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:38 np0005539550 podman[96757]: 2025-11-29 07:18:38.81357455 +0000 UTC m=+0.335409721 container died c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:18:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6cc695417dafe007caabdda29a733016f548a62e2e03c14953ad225fde336b80-merged.mount: Deactivated successfully.
Nov 29 02:18:38 np0005539550 podman[96757]: 2025-11-29 07:18:38.858351817 +0000 UTC m=+0.380186988 container remove c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:18:38 np0005539550 systemd[1]: libpod-conmon-c3a47f609c21c42f6402cd2c12e562ffdb3ab85e855146375c4e1ae5fc6e841e.scope: Deactivated successfully.
Nov 29 02:18:39 np0005539550 podman[96848]: 2025-11-29 07:18:38.999062394 +0000 UTC m=+0.026065714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 29 02:18:39 np0005539550 podman[96848]: 2025-11-29 07:18:39.163930926 +0000 UTC m=+0.190934226 container create 6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:39 np0005539550 systemd[1]: Started libpod-conmon-6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100.scope.
Nov 29 02:18:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 1 active+clean+scrubbing, 4 peering, 300 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4828e0441492f2cd7f41e1fdd5637f27a631bcb38837143d374b3bd79b3c2ecf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4828e0441492f2cd7f41e1fdd5637f27a631bcb38837143d374b3bd79b3c2ecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4828e0441492f2cd7f41e1fdd5637f27a631bcb38837143d374b3bd79b3c2ecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4828e0441492f2cd7f41e1fdd5637f27a631bcb38837143d374b3bd79b3c2ecf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:18:39 np0005539550 podman[96848]: 2025-11-29 07:18:39.319386233 +0000 UTC m=+0.346389563 container init 6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:39 np0005539550 podman[96848]: 2025-11-29 07:18:39.327481252 +0000 UTC m=+0.354484562 container start 6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 29 02:18:39 np0005539550 podman[96848]: 2025-11-29 07:18:39.667430909 +0000 UTC m=+0.694434209 container attach 6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:39 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]: {
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]:        "osd_id": 0,
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]:        "type": "bluestore"
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]:    }
Nov 29 02:18:40 np0005539550 nifty_shaw[96866]: }
Nov 29 02:18:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:40 np0005539550 systemd[1]: libpod-6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100.scope: Deactivated successfully.
Nov 29 02:18:40 np0005539550 podman[96848]: 2025-11-29 07:18:40.252356289 +0000 UTC m=+1.279359589 container died 6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:18:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:40.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:40 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 78 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 29 02:18:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 29 02:18:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4828e0441492f2cd7f41e1fdd5637f27a631bcb38837143d374b3bd79b3c2ecf-merged.mount: Deactivated successfully.
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 29 02:18:41 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 79 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:41 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 79 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:41 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 79 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=6 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:41 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 79 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [1]/[0] async=[1] r=0 lpr=78 pi=[59,78)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:41 np0005539550 podman[96848]: 2025-11-29 07:18:41.633525269 +0000 UTC m=+2.660528569 container remove 6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:18:41 np0005539550 systemd[1]: libpod-conmon-6bec4d3cf23f55f630aafd13327ad3ff192bfd4dccfb6663a5d2e3a2ac4f7100.scope: Deactivated successfully.
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:42 np0005539550 systemd-logind[788]: New session 35 of user zuul.
Nov 29 02:18:42 np0005539550 systemd[1]: Started Session 35 of User zuul.
Nov 29 02:18:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:18:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:42.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:18:42 np0005539550 ceph-mgr[74726]: [progress INFO root] Completed event 90694232-e0f6-4717-8892-2c5a55cd3ac6 (Global Recovery Event) in 30 seconds
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:18:42 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Nov 29 02:18:42 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 29 02:18:42 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 80 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=80 pruub=14.907655716s) [1] async=[1] r=-1 lpr=80 pi=[59,80)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 263.912322998s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:42 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 80 pg[9.16( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=80 pruub=14.907586098s) [1] r=-1 lpr=80 pi=[59,80)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 263.912322998s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:42.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:42 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 999d6428-bb55-4704-a658-9d6e70147dfe does not exist
Nov 29 02:18:42 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 00c2880d-e963-4dd2-a814-bff28df7d2bc does not exist
Nov 29 02:18:42 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4e6aa738-b66f-45c2-839d-5db6f4c6efe4 does not exist
Nov 29 02:18:43 np0005539550 python3.9[97060]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:18:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 110 B/s, 4 objects/s recovering
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 02:18:43 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:18:43 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:43 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:18:43 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 29 02:18:43 np0005539550 podman[97291]: 2025-11-29 07:18:43.917910204 +0000 UTC m=+0.055721471 container create 6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 29 02:18:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 29 02:18:43 np0005539550 systemd[1]: Started libpod-conmon-6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4.scope.
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81 pruub=13.584361076s) [1] async=[1] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 263.915771484s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.6( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81 pruub=13.584266663s) [1] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 263.915771484s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=13.637109756s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 263.969055176s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=13.636988640s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 263.969055176s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81 pruub=13.583561897s) [1] async=[1] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 263.915832520s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=5 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81 pruub=13.583514214s) [1] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 263.915832520s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81 pruub=13.579596519s) [1] async=[1] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 263.912322998s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.e( v 55'1153 (0'0,55'1153] local-lis/les=78/79 n=6 ec=59/49 lis/c=78/59 les/c/f=79/60/0 sis=81 pruub=13.579523087s) [1] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 263.912322998s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=13.636197090s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 263.969360352s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 81 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=81 pruub=13.636013985s) [2] r=-1 lpr=81 pi=[59,81)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 263.969360352s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:43 np0005539550 podman[97291]: 2025-11-29 07:18:43.890170037 +0000 UTC m=+0.027981324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:44 np0005539550 podman[97291]: 2025-11-29 07:18:44.025089575 +0000 UTC m=+0.162900932 container init 6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:18:44 np0005539550 podman[97291]: 2025-11-29 07:18:44.035571665 +0000 UTC m=+0.173382932 container start 6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: Reconfiguring mon.compute-0 (monmap changed)...
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 29 02:18:44 np0005539550 serene_mcnulty[97305]: 167 167
Nov 29 02:18:44 np0005539550 systemd[1]: libpod-6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4.scope: Deactivated successfully.
Nov 29 02:18:44 np0005539550 podman[97291]: 2025-11-29 07:18:44.044759693 +0000 UTC m=+0.182570970 container attach 6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:18:44 np0005539550 podman[97291]: 2025-11-29 07:18:44.047077783 +0000 UTC m=+0.184889050 container died 6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:18:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-55d01050394b9dc7ac959b597c20c181560e6c6aa42b1029462df73bf3b13a8e-merged.mount: Deactivated successfully.
Nov 29 02:18:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:44.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:44 np0005539550 podman[97291]: 2025-11-29 07:18:44.275843316 +0000 UTC m=+0.413654573 container remove 6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mcnulty, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:44 np0005539550 systemd[1]: libpod-conmon-6370975a06c7fb2689247054eca34e7d591735cd170aea28b3d10c00a8769eb4.scope: Deactivated successfully.
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:18:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:44.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:44 np0005539550 python3.9[97477]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:18:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 29 02:18:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:18:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 29 02:18:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 29 02:18:45 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 82 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:45 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 82 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:45 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 82 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:45 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 82 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=0 lpr=82 pi=[59,82)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:18:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:46 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.pdhsqi (monmap changed)...
Nov 29 02:18:46 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.pdhsqi (monmap changed)...
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:46 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:18:46 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:18:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:46 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 29 02:18:46 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 29 02:18:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 29 02:18:46 np0005539550 podman[97606]: 2025-11-29 07:18:46.840216449 +0000 UTC m=+0.062517977 container create 83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:18:46 np0005539550 systemd[1]: Started libpod-conmon-83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147.scope.
Nov 29 02:18:46 np0005539550 podman[97606]: 2025-11-29 07:18:46.8062213 +0000 UTC m=+0.028522848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:46 np0005539550 podman[97606]: 2025-11-29 07:18:46.965936568 +0000 UTC m=+0.188238116 container init 83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:46 np0005539550 podman[97606]: 2025-11-29 07:18:46.974276264 +0000 UTC m=+0.196577792 container start 83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:18:46 np0005539550 systemd[1]: libpod-83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147.scope: Deactivated successfully.
Nov 29 02:18:46 np0005539550 hardcore_thompson[97623]: 167 167
Nov 29 02:18:46 np0005539550 conmon[97623]: conmon 83322ff7f048837b7200 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147.scope/container/memory.events
Nov 29 02:18:46 np0005539550 podman[97606]: 2025-11-29 07:18:46.992521466 +0000 UTC m=+0.214823024 container attach 83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:18:46 np0005539550 podman[97606]: 2025-11-29 07:18:46.993172132 +0000 UTC m=+0.215473660 container died 83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:18:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1765a2fd2fec937e592eaf3564b60443cd9729b2144b6b3a264c0395dbfbef27-merged.mount: Deactivated successfully.
Nov 29 02:18:47 np0005539550 podman[97606]: 2025-11-29 07:18:47.120700479 +0000 UTC m=+0.343002007 container remove 83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_thompson, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:47 np0005539550 systemd[1]: libpod-conmon-83322ff7f048837b72004a75efd60e264b8da08dd61c45b60503967d528c1147.scope: Deactivated successfully.
Nov 29 02:18:47 np0005539550 ceph-mgr[74726]: [progress INFO root] Writing back 24 completed events
Nov 29 02:18:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:18:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 29 02:18:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:48.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:48 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 29 02:18:48 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 29 02:18:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:18:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:48.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 33 B/s, 3 objects/s recovering
Nov 29 02:18:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:50.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:50.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.pdhsqi", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 29 02:18:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 2 objects/s recovering
Nov 29 02:18:51 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 29 02:18:51 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 29 02:18:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 29 02:18:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 29 02:18:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:52.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:18:52 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 83 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=82/83 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[59,82)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:52 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 83 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=82/83 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] async=[2] r=0 lpr=82 pi=[59,82)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:18:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:52.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 29 02:18:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 active+clean+scrubbing, 2 unknown, 302 active+clean; 455 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 2 objects/s recovering
Nov 29 02:18:53 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Nov 29 02:18:53 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Nov 29 02:18:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:18:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:54.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:54.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: Reconfiguring mgr.compute-0.pdhsqi (monmap changed)...
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: Reconfiguring daemon mgr.compute-0.pdhsqi on compute-0
Nov 29 02:18:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 84 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=82/83 n=5 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=84 pruub=13.354863167s) [2] async=[2] r=-1 lpr=84 pi=[59,84)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 274.938659668s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:55 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 84 pg[9.18( v 55'1153 (0'0,55'1153] local-lis/les=82/83 n=5 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=84 pruub=13.354741096s) [2] r=-1 lpr=84 pi=[59,84)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 274.938659668s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 29 02:18:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 1 active+clean+scrubbing, 2 unknown, 302 active+clean; 455 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 2 objects/s recovering
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:55 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:18:55 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:55 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:18:55 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:18:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 29 02:18:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 29 02:18:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 29 02:18:56 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 85 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=82/83 n=7 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=85 pruub=12.376668930s) [2] async=[2] r=-1 lpr=85 pi=[59,85)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 274.958526611s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:18:56 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 85 pg[9.8( v 55'1153 (0'0,55'1153] local-lis/les=82/83 n=7 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=85 pruub=12.376571655s) [2] r=-1 lpr=85 pi=[59,85)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 274.958526611s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:18:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:18:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:56.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:56 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Nov 29 02:18:56 np0005539550 podman[97792]: 2025-11-29 07:18:56.331065146 +0000 UTC m=+0.024621877 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:56 np0005539550 podman[97792]: 2025-11-29 07:18:56.433515344 +0000 UTC m=+0.127072055 container create 99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:18:56 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Nov 29 02:18:56 np0005539550 systemd[1]: Started libpod-conmon-99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1.scope.
Nov 29 02:18:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:56 np0005539550 podman[97792]: 2025-11-29 07:18:56.580463902 +0000 UTC m=+0.274020643 container init 99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:18:56 np0005539550 podman[97792]: 2025-11-29 07:18:56.588700215 +0000 UTC m=+0.282256926 container start 99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:18:56 np0005539550 loving_shannon[97810]: 167 167
Nov 29 02:18:56 np0005539550 podman[97792]: 2025-11-29 07:18:56.594208208 +0000 UTC m=+0.287764939 container attach 99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shannon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:56 np0005539550 systemd[1]: libpod-99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1.scope: Deactivated successfully.
Nov 29 02:18:56 np0005539550 podman[97792]: 2025-11-29 07:18:56.594929116 +0000 UTC m=+0.288485857 container died 99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shannon, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:18:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:18:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:18:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b680901d0732e602aaec78d2d380fed1000a9bd34d59becd85dcbad5ca078a2a-merged.mount: Deactivated successfully.
Nov 29 02:18:57 np0005539550 podman[97792]: 2025-11-29 07:18:57.041047778 +0000 UTC m=+0.734604519 container remove 99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:18:57 np0005539550 systemd[1]: libpod-conmon-99ab921115e3a4e104eec6fc45b93386171b20409e93fb359350584a4dafd4b1.scope: Deactivated successfully.
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 1 active+remapped, 1 active+clean+scrubbing, 1 unknown, 302 active+clean; 455 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 02:18:57 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.1c deep-scrub starts
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:57 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 9.1c deep-scrub ok
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: Reconfiguring crash.compute-0 (monmap changed)...
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: Reconfiguring daemon crash.compute-0 on compute-0
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Nov 29 02:18:57 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Nov 29 02:18:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:18:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:18:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:18:58.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:58 np0005539550 podman[97973]: 2025-11-29 07:18:58.297095752 +0000 UTC m=+0.025390687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:18:58 np0005539550 podman[97973]: 2025-11-29 07:18:58.424433354 +0000 UTC m=+0.152728269 container create 2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:18:58 np0005539550 systemd[1]: Started libpod-conmon-2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770.scope.
Nov 29 02:18:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:18:58 np0005539550 podman[97973]: 2025-11-29 07:18:58.540842793 +0000 UTC m=+0.269137738 container init 2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:18:58 np0005539550 podman[97973]: 2025-11-29 07:18:58.54845938 +0000 UTC m=+0.276754295 container start 2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:18:58 np0005539550 podman[97973]: 2025-11-29 07:18:58.553554061 +0000 UTC m=+0.281849006 container attach 2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:58 np0005539550 vibrant_goldstine[97992]: 167 167
Nov 29 02:18:58 np0005539550 systemd[1]: libpod-2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770.scope: Deactivated successfully.
Nov 29 02:18:58 np0005539550 podman[97973]: 2025-11-29 07:18:58.555694667 +0000 UTC m=+0.283989592 container died 2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:18:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-030e34f9825bbe74c90bf641eb4540c15dbd4dfa71d07e2b9d2b4fd9721a6175-merged.mount: Deactivated successfully.
Nov 29 02:18:58 np0005539550 podman[97973]: 2025-11-29 07:18:58.67264697 +0000 UTC m=+0.400941885 container remove 2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldstine, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:18:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:18:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:18:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:18:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:18:58 np0005539550 systemd[1]: libpod-conmon-2042348ecd9cfafef5038b75f99ebb04c0c46216e1aeb54b55d3f782cbc32770.scope: Deactivated successfully.
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: Reconfiguring osd.0 (monmap changed)...
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: Reconfiguring daemon osd.0 on compute-0
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:18:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:18:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 29 02:18:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:18:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:18:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:18:59
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Nov 29 02:18:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 1 active+remapped, 1 unknown, 303 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 1 objects/s recovering
Nov 29 02:18:59 np0005539550 systemd[1]: session-35.scope: Deactivated successfully.
Nov 29 02:18:59 np0005539550 systemd[1]: session-35.scope: Consumed 9.032s CPU time.
Nov 29 02:18:59 np0005539550 systemd-logind[788]: Session 35 logged out. Waiting for processes to exit.
Nov 29 02:18:59 np0005539550 systemd-logind[788]: Removed session 35.
Nov 29 02:19:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:00.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 35 B/s, 1 objects/s recovering
Nov 29 02:19:01 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 29 02:19:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 02:19:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:01 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 29 02:19:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 29 02:19:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:02.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Nov 29 02:19:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 29 02:19:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:04 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Nov 29 02:19:04 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Nov 29 02:19:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:04.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: Reconfiguring crash.compute-1 (monmap changed)...
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: Reconfiguring daemon crash.compute-1 on compute-1
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 29 02:19:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 29 02:19:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 87 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87 pruub=8.692103386s) [2] r=-1 lpr=87 pi=[59,87)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 279.970275879s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 87 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87 pruub=8.692027092s) [2] r=-1 lpr=87 pi=[59,87)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 279.970275879s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 87 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87 pruub=15.284976959s) [2] r=-1 lpr=87 pi=[59,87)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 286.564453125s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 87 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=87 pruub=15.284907341s) [2] r=-1 lpr=87 pi=[59,87)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 286.564453125s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:19:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 29 02:19:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 02:19:05 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 29 02:19:05 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 29 02:19:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:19:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 29 02:19:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 29 02:19:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:06.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=0 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=0 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=0 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] r=0 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=14.580251694s) [1] r=-1 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 287.976745605s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=14.580206871s) [1] r=-1 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 287.976745605s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=14.579883575s) [1] r=-1 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 287.976684570s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 88 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=14.579852104s) [1] r=-1 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 287.976684570s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 89 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=0 lpr=89 pi=[59,89)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 89 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=0 lpr=89 pi=[59,89)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 89 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=0 lpr=89 pi=[59,89)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 89 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] r=0 lpr=89 pi=[59,89)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 89 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=88/89 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] async=[2] r=0 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 89 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=88/89 n=7 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=88) [2]/[0] async=[2] r=0 lpr=88 pi=[59,88)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:07 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:19:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:19:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:08 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 29 02:19:08 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 29 02:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:19:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:09 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 29 02:19:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 29 02:19:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 02:19:09 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 29 02:19:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:10.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:10 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 29 02:19:10 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 29 02:19:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:10.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 29 02:19:10 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 29 02:19:10 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 90 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=88/89 n=5 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=90 pruub=12.432558060s) [2] async=[2] r=-1 lpr=90 pi=[59,90)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 289.731079102s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:10 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 90 pg[9.9( v 55'1153 (0'0,55'1153] local-lis/les=88/89 n=5 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=90 pruub=12.432484627s) [2] r=-1 lpr=90 pi=[59,90)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.731079102s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 2 remapped+peering, 2 active+remapped, 301 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 1 objects/s recovering
Nov 29 02:19:11 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 29 02:19:11 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 29 02:19:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 29 02:19:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:12.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 90 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=89/90 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[59,89)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 90 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=89/90 n=9 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=89) [1]/[0] async=[1] r=0 lpr=89 pi=[59,89)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:12.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:12 np0005539550 ceph-mon[74435]: Reconfiguring osd.1 (monmap changed)...
Nov 29 02:19:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 29 02:19:12 np0005539550 ceph-mon[74435]: Reconfiguring daemon osd.1 on compute-1
Nov 29 02:19:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 29 02:19:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 29 02:19:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:19:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 2 remapped+peering, 2 active+remapped, 301 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 34 B/s, 1 objects/s recovering
Nov 29 02:19:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 02:19:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 29 02:19:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 29 02:19:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 91 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=88/89 n=7 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=91 pruub=9.895577431s) [2] async=[2] r=-1 lpr=91 pi=[59,91)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 289.731170654s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 91 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=88/89 n=7 ec=59/49 lis/c=88/59 les/c/f=89/60/0 sis=91 pruub=9.895351410s) [2] r=-1 lpr=91 pi=[59,91)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.731170654s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:19:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:14.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 29 02:19:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:14.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 29 02:19:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 92 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=89/90 n=9 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92 pruub=13.648133278s) [1] async=[1] r=-1 lpr=92 pi=[59,92)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 294.809509277s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 92 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=89/90 n=4 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92 pruub=13.648007393s) [1] async=[1] r=-1 lpr=92 pi=[59,92)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 294.809478760s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 92 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=89/90 n=4 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92 pruub=13.647922516s) [1] r=-1 lpr=92 pi=[59,92)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 294.809478760s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:14 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 92 pg[9.a( v 55'1153 (0'0,55'1153] local-lis/les=89/90 n=9 ec=59/49 lis/c=89/59 les/c/f=90/60/0 sis=92 pruub=13.647759438s) [1] r=-1 lpr=92 pi=[59,92)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 294.809509277s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:14 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:19:14 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:19:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:19:14 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:19:14 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:19:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 2 remapped+peering, 2 peering, 301 active+clean; 455 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 1 objects/s recovering
Nov 29 02:19:15 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 29 02:19:15 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 29 02:19:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 29 02:19:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:19:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:16.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:16 np0005539550 systemd-logind[788]: New session 36 of user zuul.
Nov 29 02:19:16 np0005539550 systemd[1]: Started Session 36 of User zuul.
Nov 29 02:19:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 29 02:19:16 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 29 02:19:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:16.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 2 remapped+peering, 2 peering, 301 active+clean; 455 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:17 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 29 02:19:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:19:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:17 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 29 02:19:17 np0005539550 python3.9[98252]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 02:19:17 np0005539550 ceph-mon[74435]: Reconfiguring mon.compute-1 (monmap changed)...
Nov 29 02:19:17 np0005539550 ceph-mon[74435]: Reconfiguring daemon mon.compute-1 on compute-1
Nov 29 02:19:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:18 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:19:18 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:19:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 29 02:19:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 29 02:19:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 29 02:19:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:19:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:19:18 np0005539550 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:19:18 np0005539550 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:19:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:18.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:18 np0005539550 python3.9[98427]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:19 np0005539550 ceph-mon[74435]: Reconfiguring mon.compute-2 (monmap changed)...
Nov 29 02:19:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 29 02:19:19 np0005539550 ceph-mon[74435]: Reconfiguring daemon mon.compute-2 on compute-2
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.543132328308208e-06 of space, bias 1.0, pg target 0.001962939698492462 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:19:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 2 peering, 303 active+clean; 455 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 2 objects/s recovering
Nov 29 02:19:20 np0005539550 python3.9[98636]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:19:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:20.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:20.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
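The radosgw "beast" lines are access-log records: two frontends, 192.168.122.100 and 192.168.122.102 (likely the haproxy-rgw instances seen elsewhere in this log), probe the gateway with anonymous "HEAD / HTTP/1.0" requests roughly every two seconds. If you need to pull latencies out of these lines, a regex sketch such as the following works; the group names are my own, not a radosgw-defined schema:

    import re

    # Hypothetical parser for the beast access-log lines above.
    # Use BEAST.search() when feeding full journal lines.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:07:19:20.305 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = BEAST.match(line)
    print(m['client'], m['status'], float(m['latency']))
    # -> 192.168.122.102 200 0.001000026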
Nov 29 02:19:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 455 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 28 B/s, 1 objects/s recovering
Nov 29 02:19:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 29 02:19:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 02:19:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:19:21 np0005539550 python3.9[98789]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:19:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:19:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:22 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Nov 29 02:19:22 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Nov 29 02:19:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:22.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 29 02:19:22 np0005539550 python3.9[99046]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
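The pgp_num_actual commands above appear to be the autoscaler stepping the default.rgw.log pool's placement through the monitor one PG at a time (14, then 15 below); each dispatch is answered by a "finished" audit record and a new osdmap epoch (e94 here). The same mon_command can be issued by hand with the ceph CLI, or, as a rough sketch under assumed connection details (local ceph.conf and client.admin keyring), through the librados Python binding:

    import json
    import rados

    # Rough sketch: issue the same mon_command the mgr sends above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = {"prefix": "osd pool set", "pool": "default.rgw.log",
           "var": "pgp_num_actual", "val": "14"}
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b'')
    print(ret, outs)
    cluster.shutdown()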
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 29 02:19:22 np0005539550 podman[99142]: 2025-11-29 07:19:22.709396662 +0000 UTC m=+0.064086253 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:19:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:22.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 29 02:19:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 29 02:19:22 np0005539550 podman[99142]: 2025-11-29 07:19:22.851238505 +0000 UTC m=+0.205928076 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:19:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 2 unknown, 303 active+clean; 455 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:23 np0005539550 python3.9[99340]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 29 02:19:23 np0005539550 podman[99451]: 2025-11-29 07:19:23.955929476 +0000 UTC m=+0.509403027 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 29 02:19:23 np0005539550 podman[99451]: 2025-11-29 07:19:23.991218695 +0000 UTC m=+0.544692216 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:24 np0005539550 podman[99639]: 2025-11-29 07:19:24.203615216 +0000 UTC m=+0.053825142 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, name=keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 02:19:24 np0005539550 podman[99639]: 2025-11-29 07:19:24.214022831 +0000 UTC m=+0.064232737 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 02:19:24 np0005539550 python3.9[99615]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:19:24 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Nov 29 02:19:24 np0005539550 network[99685]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:19:24 np0005539550 network[99686]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:19:24 np0005539550 network[99687]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:19:24 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Nov 29 02:19:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 02:19:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 60b5d13e-a348-419c-8f4a-cd1b5658a6c1 does not exist
Nov 29 02:19:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c352d220-492d-48a5-bdd4-ac80b73ca1e9 does not exist
Nov 29 02:19:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8840583f-b703-4938-af49-d9bde06f81e4 does not exist
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:19:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:19:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 29 02:19:25 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 29 02:19:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 2 active+remapped, 303 active+clean; 455 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 1 objects/s recovering
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:25 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 29 02:19:25 np0005539550 podman[99856]: 2025-11-29 07:19:25.527170045 +0000 UTC m=+0.022740071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:25 np0005539550 podman[99856]: 2025-11-29 07:19:25.676197461 +0000 UTC m=+0.171767467 container create 91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermat, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:19:25 np0005539550 systemd[1]: Started libpod-conmon-91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d.scope.
Nov 29 02:19:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:19:25 np0005539550 podman[99856]: 2025-11-29 07:19:25.778801855 +0000 UTC m=+0.274371891 container init 91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermat, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:19:25 np0005539550 podman[99856]: 2025-11-29 07:19:25.788059261 +0000 UTC m=+0.283629267 container start 91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:19:25 np0005539550 podman[99856]: 2025-11-29 07:19:25.791608951 +0000 UTC m=+0.287178977 container attach 91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:19:25 np0005539550 confident_fermat[99884]: 167 167
Nov 29 02:19:25 np0005539550 podman[99856]: 2025-11-29 07:19:25.795820639 +0000 UTC m=+0.291390645 container died 91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermat, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:19:25 np0005539550 systemd[1]: libpod-91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d.scope: Deactivated successfully.
Nov 29 02:19:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-905923b0e561cf98b48d8257f023ea5d798ad14d355f2fd9a8534a418eb67c18-merged.mount: Deactivated successfully.
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:19:25 np0005539550 podman[99856]: 2025-11-29 07:19:25.875429486 +0000 UTC m=+0.370999492 container remove 91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermat, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:19:25 np0005539550 systemd[1]: libpod-conmon-91dfd78b64a3605274361d34858c7c7e46677dbb5acaf84b302ed0aa9612435d.scope: Deactivated successfully.
Nov 29 02:19:26 np0005539550 podman[99921]: 2025-11-29 07:19:26.026826533 +0000 UTC m=+0.041767646 container create 4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:19:26 np0005539550 systemd[1]: Started libpod-conmon-4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19.scope.
Nov 29 02:19:26 np0005539550 podman[99921]: 2025-11-29 07:19:26.00827089 +0000 UTC m=+0.023212023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:19:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2846d9021dd2cac9297d0a53156217281acc4a1db39fa6c1338cf084eeb2d20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2846d9021dd2cac9297d0a53156217281acc4a1db39fa6c1338cf084eeb2d20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2846d9021dd2cac9297d0a53156217281acc4a1db39fa6c1338cf084eeb2d20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2846d9021dd2cac9297d0a53156217281acc4a1db39fa6c1338cf084eeb2d20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2846d9021dd2cac9297d0a53156217281acc4a1db39fa6c1338cf084eeb2d20/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:26 np0005539550 podman[99921]: 2025-11-29 07:19:26.123596858 +0000 UTC m=+0.138538001 container init 4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:26 np0005539550 podman[99921]: 2025-11-29 07:19:26.130621717 +0000 UTC m=+0.145562820 container start 4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:19:26 np0005539550 podman[99921]: 2025-11-29 07:19:26.1362484 +0000 UTC m=+0.151189523 container attach 4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:19:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 29 02:19:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 29 02:19:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:26.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:27 np0005539550 brave_cannon[99942]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:19:27 np0005539550 brave_cannon[99942]: --> relative data size: 1.0
Nov 29 02:19:27 np0005539550 brave_cannon[99942]: --> All data devices are unavailable
Nov 29 02:19:27 np0005539550 systemd[1]: libpod-4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19.scope: Deactivated successfully.
Nov 29 02:19:27 np0005539550 podman[99921]: 2025-11-29 07:19:27.07282727 +0000 UTC m=+1.087768393 container died 4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c2846d9021dd2cac9297d0a53156217281acc4a1db39fa6c1338cf084eeb2d20-merged.mount: Deactivated successfully.
Nov 29 02:19:27 np0005539550 podman[99921]: 2025-11-29 07:19:27.137943629 +0000 UTC m=+1.152884742 container remove 4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cannon, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:27 np0005539550 systemd[1]: libpod-conmon-4b9b643d6a354fa48351f7f164e50dc4e42576fe07a563f47b6b6d157b679f19.scope: Deactivated successfully.
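"confident_fermat", "brave_cannon" and the other randomly named containers are one-shot cephadm helpers: each create/init/start/attach/died/remove sequence wraps a single ceph-volume invocation against the host, and "All data devices are unavailable" here evidently means the lone LVM data device is already consumed by OSD 0 (see the lvm report further below). A sketch for summarizing these lifecycles from a saved copy of this journal (the file path and the parsing are assumptions, not a podman interface):

    from collections import defaultdict

    events = defaultdict(list)
    with open('journal.txt') as fh:          # hypothetical export of this log
        for line in fh:
            if ' podman[' in line and ' container ' in line:
                # e.g. "... container start 91dfd78b64a3... (image=...)"
                fields = line.split(' container ', 1)[1].split()
                event, cid = fields[0], fields[1][:12]
                events[cid].append(event)

    for cid, seq in events.items():
        print(cid, '->', ' '.join(seq))
        # e.g. 91dfd78b64a3 -> create init start attach died remove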
Nov 29 02:19:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 29 02:19:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 29 02:19:27 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 29 02:19:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 2 active+remapped, 303 active+clean; 455 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 2 objects/s recovering
Nov 29 02:19:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 29 02:19:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:27 np0005539550 podman[100182]: 2025-11-29 07:19:27.732258399 +0000 UTC m=+0.042834372 container create 79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:27 np0005539550 systemd[1]: Started libpod-conmon-79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5.scope.
Nov 29 02:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:27 np0005539550 podman[100182]: 2025-11-29 07:19:27.714206399 +0000 UTC m=+0.024782392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:19:27 np0005539550 podman[100182]: 2025-11-29 07:19:27.828378988 +0000 UTC m=+0.138954961 container init 79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:19:27 np0005539550 podman[100182]: 2025-11-29 07:19:27.835717504 +0000 UTC m=+0.146293477 container start 79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:19:27 np0005539550 podman[100182]: 2025-11-29 07:19:27.840318922 +0000 UTC m=+0.150894895 container attach 79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:27 np0005539550 great_heisenberg[100199]: 167 167
Nov 29 02:19:27 np0005539550 systemd[1]: libpod-79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5.scope: Deactivated successfully.
Nov 29 02:19:27 np0005539550 podman[100182]: 2025-11-29 07:19:27.841924323 +0000 UTC m=+0.152500306 container died 79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:19:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-195e4e83350ecd9b8347996b18f1baf387529fcfebc80460e15dc8df4cd99dca-merged.mount: Deactivated successfully.
Nov 29 02:19:27 np0005539550 podman[100182]: 2025-11-29 07:19:27.877968631 +0000 UTC m=+0.188544604 container remove 79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:19:27 np0005539550 systemd[1]: libpod-conmon-79a6b72a61740c4966cec0fc94bf58542cea5b31cf385fe82ee72619be6970f5.scope: Deactivated successfully.
Nov 29 02:19:28 np0005539550 podman[100224]: 2025-11-29 07:19:28.030407134 +0000 UTC m=+0.038456330 container create 61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:28 np0005539550 systemd[1]: Started libpod-conmon-61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5.scope.
Nov 29 02:19:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:19:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc755f2f2d06bfb0ef8a3dac70529e0982ee4cb41372da2a1d2a46d8085175a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc755f2f2d06bfb0ef8a3dac70529e0982ee4cb41372da2a1d2a46d8085175a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc755f2f2d06bfb0ef8a3dac70529e0982ee4cb41372da2a1d2a46d8085175a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc755f2f2d06bfb0ef8a3dac70529e0982ee4cb41372da2a1d2a46d8085175a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:28 np0005539550 podman[100224]: 2025-11-29 07:19:28.012755165 +0000 UTC m=+0.020804391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:28 np0005539550 podman[100224]: 2025-11-29 07:19:28.129487308 +0000 UTC m=+0.137536534 container init 61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:19:28 np0005539550 podman[100224]: 2025-11-29 07:19:28.137258346 +0000 UTC m=+0.145307542 container start 61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:19:28 np0005539550 podman[100224]: 2025-11-29 07:19:28.141068673 +0000 UTC m=+0.149117909 container attach 61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:28.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:28 np0005539550 python3.9[100370]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
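The lineinfile call against /proc/cmdline is worth a note: /proc/cmdline is read-only, so a state=present task like this can only succeed when the line is already there, which makes it act as an assertion that the node booted with cloud-init=disabled. A direct sketch of the same check:

    # Sketch: assert the kernel command line contains cloud-init=disabled.
    with open('/proc/cmdline') as fh:
        args = fh.read().split()
    assert 'cloud-init=disabled' in args, 'cloud-init not disabled at boot'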
Nov 29 02:19:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:28.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]: {
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:    "0": [
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:        {
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "devices": [
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "/dev/loop3"
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            ],
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "lv_name": "ceph_lv0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "lv_size": "7511998464",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "name": "ceph_lv0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "tags": {
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.cluster_name": "ceph",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.crush_device_class": "",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.encrypted": "0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.osd_id": "0",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.type": "block",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:                "ceph.vdo": "0"
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            },
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "type": "block",
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:            "vg_name": "ceph_vg0"
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:        }
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]:    ]
Nov 29 02:19:28 np0005539550 gifted_ramanujan[100260]: }
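The JSON block printed by "gifted_ramanujan" is a ceph-volume lvm report for OSD 0: one logical volume, ceph_vg0/ceph_lv0 (~7.5 GB, backed by /dev/loop3), tagged with the cluster fsid and OSD identity. A small sketch for extracting the OSD-to-device mapping from such a report; obtaining it via 'ceph-volume lvm list --format json' is my assumption about how this output was produced:

    import json
    import subprocess

    # Sketch: map osd_id -> (lv_path, backing devices) from a
    # ceph-volume lvm list JSON report like the one above.
    raw = subprocess.run(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'],
        check=True, capture_output=True, text=True,
    ).stdout
    report = json.loads(raw)

    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv['lv_path'], lv['devices'])
            # e.g. 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3']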
Nov 29 02:19:28 np0005539550 systemd[1]: libpod-61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5.scope: Deactivated successfully.
Nov 29 02:19:28 np0005539550 podman[100224]: 2025-11-29 07:19:28.981786591 +0000 UTC m=+0.989835817 container died 61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:19:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dc755f2f2d06bfb0ef8a3dac70529e0982ee4cb41372da2a1d2a46d8085175a2-merged.mount: Deactivated successfully.
Nov 29 02:19:29 np0005539550 podman[100224]: 2025-11-29 07:19:29.039407209 +0000 UTC m=+1.047456405 container remove 61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:19:29 np0005539550 systemd[1]: libpod-conmon-61715ae1fa112edc409ae8317ada0f39bb684d56f5bd25683c412ab30c7c5ef5.scope: Deactivated successfully.
Nov 29 02:19:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 2 peering, 303 active+clean; 455 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 312 B/s wr, 28 op/s; 0 B/s, 1 objects/s recovering
Nov 29 02:19:29 np0005539550 python3.9[100562]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 29 02:19:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 29 02:19:29 np0005539550 podman[100685]: 2025-11-29 07:19:29.604367611 +0000 UTC m=+0.040562915 container create d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_beaver, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:19:29 np0005539550 systemd[1]: Started libpod-conmon-d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461.scope.
Nov 29 02:19:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:19:29 np0005539550 podman[100685]: 2025-11-29 07:19:29.584288499 +0000 UTC m=+0.020483823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:29 np0005539550 podman[100685]: 2025-11-29 07:19:29.687218961 +0000 UTC m=+0.123414275 container init d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:19:29 np0005539550 podman[100685]: 2025-11-29 07:19:29.695289997 +0000 UTC m=+0.131485291 container start d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:19:29 np0005539550 admiring_beaver[100700]: 167 167
Nov 29 02:19:29 np0005539550 systemd[1]: libpod-d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461.scope: Deactivated successfully.
Nov 29 02:19:29 np0005539550 conmon[100700]: conmon d4ac241e58b9e77796af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461.scope/container/memory.events
Nov 29 02:19:29 np0005539550 podman[100685]: 2025-11-29 07:19:29.700255903 +0000 UTC m=+0.136451197 container attach d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_beaver, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:19:29 np0005539550 podman[100685]: 2025-11-29 07:19:29.700987942 +0000 UTC m=+0.137183236 container died d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_beaver, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:19:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2e1aebade1f259c54cb2855fa5a2ddadf13dcd9299f9e7c067212d1d2df141b3-merged.mount: Deactivated successfully.
Nov 29 02:19:29 np0005539550 podman[100685]: 2025-11-29 07:19:29.736208759 +0000 UTC m=+0.172404053 container remove d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:19:29 np0005539550 systemd[1]: libpod-conmon-d4ac241e58b9e77796af17fdd6a4393227cdff38576f54f6f7f83a2ee68b5461.scope: Deactivated successfully.
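
admiring_beaver ran through the full short-lived container sequence above (image pull, create, init, start, attach, died, remove) within a fraction of a second, and its only output was "167 167". On Red Hat-family Ceph images 167 is the uid and gid of the ceph user, so this looks like a uid/gid probe, though the exact command is not shown. A sketch that pairs create/remove events from journal lines like these and reports each container's lifetime, with the timestamp layout taken from the podman lines above:

    import re
    from datetime import datetime

    EVENT_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
        r"m=\+\S+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def container_lifetimes(lines):
        created = {}
        for line in lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            # journald shows nanoseconds; keep microseconds for strptime
            ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")
            if m["event"] == "create":
                created[m["cid"]] = ts
            elif m["event"] == "remove" and m["cid"] in created:
                yield m["cid"][:12], (ts - created.pop(m["cid"])).total_seconds()
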
Nov 29 02:19:29 np0005539550 podman[100749]: 2025-11-29 07:19:29.886146539 +0000 UTC m=+0.047564833 container create 8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:19:29 np0005539550 systemd[1]: Started libpod-conmon-8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543.scope.
Nov 29 02:19:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:19:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e45365ded970b84fac9bda98093b9ebe82033540a396f0c8d22572fa35cdab7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e45365ded970b84fac9bda98093b9ebe82033540a396f0c8d22572fa35cdab7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e45365ded970b84fac9bda98093b9ebe82033540a396f0c8d22572fa35cdab7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:19:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e45365ded970b84fac9bda98093b9ebe82033540a396f0c8d22572fa35cdab7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
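
The xfs remount messages are informational: with 32-bit inode timestamps the filesystem can represent times only up to 0x7fffffff seconds after the epoch, which is the limit the kernel is quoting. The cutoff works out as follows:

    from datetime import datetime, timezone

    # 0x7fffffff seconds past the Unix epoch -- the limit quoted above
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
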
Nov 29 02:19:29 np0005539550 podman[100749]: 2025-11-29 07:19:29.945015159 +0000 UTC m=+0.106433483 container init 8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:19:29 np0005539550 podman[100749]: 2025-11-29 07:19:29.953904005 +0000 UTC m=+0.115322299 container start 8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:19:29 np0005539550 podman[100749]: 2025-11-29 07:19:29.957646991 +0000 UTC m=+0.119065285 container attach 8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:19:29 np0005539550 podman[100749]: 2025-11-29 07:19:29.864407605 +0000 UTC m=+0.025825949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:19:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 29 02:19:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:30 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 29 02:19:30 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 29 02:19:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
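
The radosgw "beast" lines repeat every two seconds throughout this section, alternating between 192.168.122.102 and 192.168.122.100, always an anonymous "HEAD / HTTP/1.0" answered with 200 and 0 bytes; that cadence is characteristic of load-balancer health checks, though the probing service is not identified in the log. A sketch that extracts the interesting fields, with the layout inferred from the samples above:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) - - - '
        r'latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:07:19:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST_RE.search(line)
    print(m["client"], m["request"], m["status"], m["latency"])
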
Nov 29 02:19:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 29 02:19:30 np0005539550 python3.9[100897]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:19:30 np0005539550 lucid_tu[100767]: {
Nov 29 02:19:30 np0005539550 lucid_tu[100767]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:19:30 np0005539550 lucid_tu[100767]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:19:30 np0005539550 lucid_tu[100767]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:19:30 np0005539550 lucid_tu[100767]:        "osd_id": 0,
Nov 29 02:19:30 np0005539550 lucid_tu[100767]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:19:30 np0005539550 lucid_tu[100767]:        "type": "bluestore"
Nov 29 02:19:30 np0005539550 lucid_tu[100767]:    }
Nov 29 02:19:30 np0005539550 lucid_tu[100767]: }
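
lucid_tu's report is keyed by OSD uuid and carries the cluster fsid, the backing device, the OSD id, and the bluestore type; the shape matches what ceph-volume's listing commands emit in JSON, though again the exact command line is not captured here. A small consistency check, assuming that shape:

    import json
    import uuid

    def check_osd_report(text: str) -> None:
        for key, osd in json.loads(text).items():
            # Assumed invariants, based on the entry printed above
            assert key == osd["osd_uuid"]
            uuid.UUID(osd["ceph_fsid"])   # raises if not a valid UUID
            uuid.UUID(osd["osd_uuid"])
            print(f"osd.{osd['osd_id']}: {osd['type']} on {osd['device']}")
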
Nov 29 02:19:30 np0005539550 systemd[1]: libpod-8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543.scope: Deactivated successfully.
Nov 29 02:19:30 np0005539550 podman[100749]: 2025-11-29 07:19:30.919495074 +0000 UTC m=+1.080913368 container died 8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:19:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 2 peering, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 280 B/s wr, 25 op/s; 0 B/s, 1 objects/s recovering
Nov 29 02:19:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 29 02:19:31 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 29 02:19:31 np0005539550 python3.9[101085]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:19:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7e45365ded970b84fac9bda98093b9ebe82033540a396f0c8d22572fa35cdab7-merged.mount: Deactivated successfully.
Nov 29 02:19:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:32.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:32 np0005539550 podman[100749]: 2025-11-29 07:19:32.824969497 +0000 UTC m=+2.986387791 container remove 8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:19:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:19:32 np0005539550 python3.9[101171]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
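
This ansible-ansible.legacy.dnf call asks for eighteen packages with state=present, meaning install only what is missing. A stand-in verification of the same list via rpm -q (not how the dnf module checks internally, just a quick audit of the host):

    import subprocess

    PACKAGES = ["driverctl", "lvm2", "crudini", "jq", "nftables",
                "NetworkManager", "openstack-selinux", "python3-libselinux",
                "python3-pyyaml", "rsync", "tmpwatch", "sysstat", "iproute-tc",
                "ksmtuned", "systemd-container", "crypto-policies-scripts",
                "grubby", "sos"]

    missing = [p for p in PACKAGES
               if subprocess.run(["rpm", "-q", p],
                                 capture_output=True).returncode != 0]
    print("missing:", missing or "none")
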
Nov 29 02:19:32 np0005539550 systemd[1]: libpod-conmon-8ba766d25ee8b909cdd5afb846e551b709b061ea2d43923e50b8a547f744c543.scope: Deactivated successfully.
Nov 29 02:19:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 02:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:34 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 29 02:19:34 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 29 02:19:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:34.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 02:19:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 29 02:19:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 29 02:19:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:19:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 29 02:19:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:36.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:37 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 29 02:19:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 02:19:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 02:19:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:37 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 29 02:19:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:38.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 02:19:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 29 02:19:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:40 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 29 02:19:40 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 29 02:19:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:40.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:40.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 29 02:19:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 29 02:19:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 29 02:19:41 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6feb976f-4d77-417b-b633-6460a9827d0e does not exist
Nov 29 02:19:41 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b95e45af-da5e-4209-9f7b-b247df482749 does not exist
Nov 29 02:19:41 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6f025b22-7eab-4521-a4ab-486cdde0fb0f does not exist
Nov 29 02:19:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 29 02:19:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 02:19:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 29 02:19:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:42.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 29 02:19:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
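
The audit trail around here shows the same mgr command family being dispatched and finished with pgp_num_actual stepping through 15, 16, 17 and, further down, 18 and 19: the mgr raises the pool's effective placement-group count one increment at a time so that only a small slice of objects is remapped per osdmap epoch (the 5% misplaced ceiling also shows up in the balancer lines later in this log). A toy illustration of that ramp, not Ceph's actual code:

    def next_pgp_num(current: int, target: int, max_step: int = 1) -> int:
        # Move one bounded step toward the target per epoch
        return current if current >= target else min(target, current + max_step)

    pgp, steps = 15, []
    while pgp < 19:
        pgp = next_pgp_num(pgp, 19)
        steps.append(pgp)
    print(steps)  # [16, 17, 18, 19]
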
Nov 29 02:19:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 101 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=101 pruub=10.073981285s) [1] r=-1 lpr=101 pi=[59,101)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 319.978668213s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:43 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 101 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=101 pruub=10.073904037s) [1] r=-1 lpr=101 pi=[59,101)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 319.978668213s@ mbc={}] state<Start>: transitioning to Stray
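
The two osd.0 lines above record a peering-interval change for pg 9.10: the acting set flips from [0] to [1], osd.0's role goes from 0 (primary) to -1 (not in the set), and its local copy of the PG transitions to Stray. Role is simply the OSD's index in the acting set, as this sketch shows:

    def pg_role(osd_id: int, acting: list[int]) -> int:
        # An OSD's role in a PG is its index in the acting set, -1 if absent
        return acting.index(osd_id) if osd_id in acting else -1

    print(pg_role(0, [0]))  # 0  -> primary, as before the interval change
    print(pg_role(0, [1]))  # -1 -> not in the acting set; the PG goes Stray
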
Nov 29 02:19:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:44.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 29 02:19:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:44.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 29 02:19:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 29 02:19:46 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Nov 29 02:19:46 np0005539550 ceph-osd[84753]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Nov 29 02:19:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 29 02:19:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 29 02:19:46 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 102 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=0 lpr=102 pi=[59,102)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:46 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 102 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] r=0 lpr=102 pi=[59,102)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:19:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:19:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:46.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:19:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 29 02:19:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:48.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:48.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 2 unknown, 303 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:19:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 29 02:19:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 29 02:19:49 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 103 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=102/103 n=2 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=102) [1]/[0] async=[1] r=0 lpr=102 pi=[59,102)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:19:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:50.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 29 02:19:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:50.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 3 active+remapped, 302 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 02:19:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 29 02:19:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 02:19:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 29 02:19:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 29 02:19:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:52.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:52.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:52 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 104 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=102/103 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104 pruub=12.828146935s) [1] async=[1] r=-1 lpr=104 pi=[59,104)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 332.153442383s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:52 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 104 pg[9.10( v 55'1153 (0'0,55'1153] local-lis/les=102/103 n=2 ec=59/49 lis/c=102/59 les/c/f=103/60/0 sis=104 pruub=12.828075409s) [1] r=-1 lpr=104 pi=[59,104)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 332.153442383s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:19:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 29 02:19:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 29 02:19:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 1 objects/s recovering
Nov 29 02:19:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 02:19:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 29 02:19:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 29 02:19:54 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 105 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=105 pruub=15.344828606s) [1] r=-1 lpr=105 pi=[59,105)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 335.979339600s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:19:54 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 105 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=105 pruub=15.344707489s) [1] r=-1 lpr=105 pi=[59,105)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 335.979339600s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:19:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:54.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:54.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 29 02:19:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 1 objects/s recovering
Nov 29 02:19:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:19:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:19:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:56.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8 B/s, 1 objects/s recovering
Nov 29 02:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:19:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:19:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:19:58.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:19:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:19:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:19:58.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:19:59
Nov 29 02:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 29 02:19:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
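
The balancer declines to build a plan while any PGs are inactive; the fraction it reports, 0.003279, is exactly the 1 unknown PG out of 305 shown in the pgmap line directly above:

    # 1 inactive (unknown) PG out of 305 total, as in the pgmap just above
    print(f"{1 / 305:.6f}")  # 0.003279
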
Nov 29 02:20:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:20:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:00.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:00.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:20:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:20:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:02.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 29 02:20:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 29 02:20:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 106 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=106) [1]/[0] r=0 lpr=106 pi=[59,106)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 106 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=106) [1]/[0] r=0 lpr=106 pi=[59,106)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 29 02:20:03 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:20:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 29 02:20:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:04.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 29 02:20:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 29 02:20:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:05 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 107 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=106/107 n=5 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=106) [1]/[0] async=[1] r=0 lpr=106 pi=[59,106)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 29 02:20:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 29 02:20:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 29 02:20:06 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 108 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=106/107 n=5 ec=59/49 lis/c=106/59 les/c/f=107/60/0 sis=108 pruub=15.679882050s) [1] async=[1] r=-1 lpr=108 pi=[59,108)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 348.135253906s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:06 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 108 pg[9.11( v 55'1153 (0'0,55'1153] local-lis/les=106/107 n=5 ec=59/49 lis/c=106/59 les/c/f=107/60/0 sis=108 pruub=15.679778099s) [1] r=-1 lpr=108 pi=[59,108)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 348.135253906s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:20:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:06.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:20:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 29 02:20:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 29 02:20:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 29 02:20:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 5/210 objects misplaced (2.381%)
Nov 29 02:20:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:20:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:20:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:20:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:20:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:20:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:20:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:20:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:08.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:20:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:08.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 02:20:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 02:20:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 29 02:20:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:10.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 17 B/s, 0 objects/s recovering
Nov 29 02:20:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 02:20:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:12.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:20:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 29 02:20:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 29 02:20:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 29 02:20:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:14.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:20:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:14.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:20:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 29 02:20:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:20:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 02:20:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:16 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 110 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=110 pruub=9.607781410s) [1] r=-1 lpr=110 pi=[59,110)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 351.980041504s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:16 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 110 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=110 pruub=9.607722282s) [1] r=-1 lpr=110 pi=[59,110)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 351.980041504s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 29 02:20:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:20:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:20:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 29 02:20:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:16 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 29 02:20:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 29 02:20:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 29 02:20:18 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 112 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=112) [1]/[0] r=0 lpr=112 pi=[59,112)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:18 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 112 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=59/60 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=112) [1]/[0] r=0 lpr=112 pi=[59,112)/1 crt=55'1153 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 29 02:20:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:20:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 29 02:20:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 29 02:20:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:18.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:20:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 29 02:20:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 29 02:20:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 29 02:20:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 29 02:20:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:20.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 113 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=112/113 n=4 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=112) [1]/[0] async=[1] r=0 lpr=112 pi=[59,112)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:20:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:20.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 4/210 objects misplaced (1.905%)
Nov 29 02:20:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 29 02:20:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 29 02:20:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 29 02:20:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 114 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=112/113 n=4 ec=59/49 lis/c=112/59 les/c/f=113/60/0 sis=114 pruub=14.554889679s) [1] async=[1] r=-1 lpr=114 pi=[59,114)/1 crt=55'1153 lcod 0'0 mlcod 0'0 active pruub 363.091949463s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:20:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 114 pg[9.12( v 55'1153 (0'0,55'1153] local-lis/les=112/113 n=4 ec=59/49 lis/c=112/59 les/c/f=113/60/0 sis=114 pruub=14.554762840s) [1] r=-1 lpr=114 pi=[59,114)/1 crt=55'1153 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 363.091949463s@ mbc={}] state<Start>: transitioning to Stray
Nov 29 02:20:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:22.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:20:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:22.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:20:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 29 02:20:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:24.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:24.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:26.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 29 02:20:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 29 02:20:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:28.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:28.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:20:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 29 02:20:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 02:20:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 29 02:20:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:30.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:30.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 02:20:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 29 02:20:30 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 29 02:20:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:20:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 29 02:20:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 02:20:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 29 02:20:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 02:20:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 29 02:20:32 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 29 02:20:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 29 02:20:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 29 02:20:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 29 02:20:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:32.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:32.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 29 02:20:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 29 02:20:33 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 29 02:20:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 02:20:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 29 02:20:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 29 02:20:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 29 02:20:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:34 np0005539550 python3.9[101775]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:20:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:34.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:34 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 29 02:20:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 29 02:20:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 29 02:20:36 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 29 02:20:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:36.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:36.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:36 np0005539550 python3.9[102067]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 02:20:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 29 02:20:37 np0005539550 python3.9[102219]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 02:20:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:38.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:38.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:39 np0005539550 python3.9[102374]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:40 np0005539550 python3.9[102529]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 02:20:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:40.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 29 02:20:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 29 02:20:41 np0005539550 python3.9[102681]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:42 np0005539550 podman[103025]: 2025-11-29 07:20:42.30902058 +0000 UTC m=+0.065233501 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:20:42 np0005539550 podman[103025]: 2025-11-29 07:20:42.430108582 +0000 UTC m=+0.186321473 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:20:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:42 np0005539550 python3.9[103073]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:42.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:43 np0005539550 python3.9[103239]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:20:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:43 np0005539550 podman[103284]: 2025-11-29 07:20:43.226157753 +0000 UTC m=+0.233802926 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:20:43 np0005539550 podman[103284]: 2025-11-29 07:20:43.23735784 +0000 UTC m=+0.245002983 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:20:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:43 np0005539550 podman[103374]: 2025-11-29 07:20:43.475064652 +0000 UTC m=+0.049522220 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, description=keepalived for Ceph, release=1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.28.2, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc.)
Nov 29 02:20:43 np0005539550 podman[103374]: 2025-11-29 07:20:43.483917715 +0000 UTC m=+0.058375283 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git, architecture=x86_64, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=)
Nov 29 02:20:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:20:44 np0005539550 python3.9[103535]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:20:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:44.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:20:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:20:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:20:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:46.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:46 np0005539550 python3.9[103794]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:46.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:47 np0005539550 python3.9[103975]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 02:20:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:47 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ce05ec6b-dce4-4761-b954-3c83c71bfbda does not exist
Nov 29 02:20:47 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 77b62bc8-eb56-44f9-9e77-ccdce10876d1 does not exist
Nov 29 02:20:47 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 31c1c0b3-2791-4fe8-84de-1243dc6363f2 does not exist
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:47 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:20:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:48 np0005539550 podman[104270]: 2025-11-29 07:20:48.169490927 +0000 UTC m=+0.049654012 container create b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:20:48 np0005539550 python3.9[104241]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:20:48 np0005539550 systemd[1]: Started libpod-conmon-b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66.scope.
Nov 29 02:20:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:20:48 np0005539550 podman[104270]: 2025-11-29 07:20:48.145588491 +0000 UTC m=+0.025751606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:48 np0005539550 podman[104270]: 2025-11-29 07:20:48.254819238 +0000 UTC m=+0.134982353 container init b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yalow, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:20:48 np0005539550 podman[104270]: 2025-11-29 07:20:48.262507169 +0000 UTC m=+0.142670264 container start b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:20:48 np0005539550 podman[104270]: 2025-11-29 07:20:48.268591536 +0000 UTC m=+0.148754631 container attach b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:20:48 np0005539550 recursing_yalow[104286]: 167 167
Nov 29 02:20:48 np0005539550 systemd[1]: libpod-b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66.scope: Deactivated successfully.
Nov 29 02:20:48 np0005539550 podman[104270]: 2025-11-29 07:20:48.271553407 +0000 UTC m=+0.151716492 container died b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yalow, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:20:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-00d8bc913d67ba317e524f78b71b565438a1e6f7978149d82705486b40c51798-merged.mount: Deactivated successfully.
Nov 29 02:20:48 np0005539550 podman[104270]: 2025-11-29 07:20:48.321037025 +0000 UTC m=+0.201200110 container remove b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:20:48 np0005539550 systemd[1]: libpod-conmon-b32a06f67e7790a2cb9a5c60a150367e9003705878391cb2795339b652ce4b66.scope: Deactivated successfully.
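The create/init/start/attach/died/remove burst above is one short-lived cephadm check container; recursing_yalow prints a uid/gid pair (167 167, the ceph user) and exits. A rough hand-run sketch, where the entrypoint and arguments are assumptions rather than the exact cephadm invocation:

    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        -c '%u %g' /var/lib/ceph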
Nov 29 02:20:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:48.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:48 np0005539550 podman[104334]: 2025-11-29 07:20:48.487230815 +0000 UTC m=+0.043722901 container create daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:20:48 np0005539550 systemd[1]: Started libpod-conmon-daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5.scope.
Nov 29 02:20:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:20:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9986f3e99d4a0f2ce5cd4dec2eabd0d771b8765567f51cf6647b75ae8a992a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9986f3e99d4a0f2ce5cd4dec2eabd0d771b8765567f51cf6647b75ae8a992a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9986f3e99d4a0f2ce5cd4dec2eabd0d771b8765567f51cf6647b75ae8a992a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9986f3e99d4a0f2ce5cd4dec2eabd0d771b8765567f51cf6647b75ae8a992a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9986f3e99d4a0f2ce5cd4dec2eabd0d771b8765567f51cf6647b75ae8a992a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
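The xfs notices are informational: each bind-mounted path sits on an XFS filesystem created without the bigtime feature, so its inode timestamps cap at 2038-01-19 (0x7fffffff). Whether a given filesystem has the feature can be checked with something like:

    xfs_info /var/lib/containers/storage | grep -o 'bigtime=[01]'   # assumes an xfsprogs new enough to report bigtime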
Nov 29 02:20:48 np0005539550 podman[104334]: 2025-11-29 07:20:48.468981544 +0000 UTC m=+0.025473670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:48 np0005539550 podman[104334]: 2025-11-29 07:20:48.567130927 +0000 UTC m=+0.123623033 container init daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:48 np0005539550 podman[104334]: 2025-11-29 07:20:48.576946526 +0000 UTC m=+0.133438612 container start daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:20:48 np0005539550 podman[104334]: 2025-11-29 07:20:48.584576556 +0000 UTC m=+0.141068642 container attach daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:20:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:48.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:48 np0005539550 python3.9[104482]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 29 02:20:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 29 02:20:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 02:20:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 29 02:20:49 np0005539550 reverent_nash[104402]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:20:49 np0005539550 reverent_nash[104402]: --> relative data size: 1.0
Nov 29 02:20:49 np0005539550 reverent_nash[104402]: --> All data devices are unavailable
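reverent_nash is a ceph-volume batch report pass: it was handed one LVM data device, found it already consumed, and reported all data devices unavailable. A hedged equivalent from inside a cephadm shell, with the device path taken from the inventory printed later in this log:

    ceph-volume lvm batch --report /dev/ceph_vg0/ceph_lv0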
Nov 29 02:20:49 np0005539550 systemd[1]: libpod-daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5.scope: Deactivated successfully.
Nov 29 02:20:49 np0005539550 podman[104334]: 2025-11-29 07:20:49.493893004 +0000 UTC m=+1.050385100 container died daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:20:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b9986f3e99d4a0f2ce5cd4dec2eabd0d771b8765567f51cf6647b75ae8a992a9-merged.mount: Deactivated successfully.
Nov 29 02:20:49 np0005539550 podman[104334]: 2025-11-29 07:20:49.582390262 +0000 UTC m=+1.138882348 container remove daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_nash, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:20:49 np0005539550 systemd[1]: libpod-conmon-daeef6dc7d057630c410713022e442d461b5f5d7371738fc92e09535393e03a5.scope: Deactivated successfully.
Nov 29 02:20:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 02:20:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 29 02:20:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
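The pool updates here are the mgr walking pgp_num_actual up one step at a time (23 above, 24 and 25 below) until it matches the pool's pg_num, which produces the brief remapped/peering blips in the pgmap lines. The same step could be applied manually:

    ceph osd pool get default.rgw.log pg_num
    ceph osd pool set default.rgw.log pgp_num_actual 24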
Nov 29 02:20:50 np0005539550 podman[104695]: 2025-11-29 07:20:50.163796523 +0000 UTC m=+0.029046188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:50.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:50 np0005539550 podman[104695]: 2025-11-29 07:20:50.48405985 +0000 UTC m=+0.349309495 container create fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:20:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 29 02:20:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 29 02:20:50 np0005539550 systemd[1]: Started libpod-conmon-fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2.scope.
Nov 29 02:20:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:20:50 np0005539550 podman[104695]: 2025-11-29 07:20:50.568429175 +0000 UTC m=+0.433678830 container init fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:20:50 np0005539550 podman[104695]: 2025-11-29 07:20:50.576504876 +0000 UTC m=+0.441754521 container start fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:20:50 np0005539550 podman[104695]: 2025-11-29 07:20:50.580708092 +0000 UTC m=+0.445957767 container attach fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heyrovsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:20:50 np0005539550 vigilant_heyrovsky[104816]: 167 167
Nov 29 02:20:50 np0005539550 systemd[1]: libpod-fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2.scope: Deactivated successfully.
Nov 29 02:20:50 np0005539550 podman[104695]: 2025-11-29 07:20:50.582258804 +0000 UTC m=+0.447508449 container died fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:20:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a54b2ea8d38dca499baf4976e9a66aa28a96edf5f20e0fac1f1033e1f2b6d154-merged.mount: Deactivated successfully.
Nov 29 02:20:50 np0005539550 podman[104695]: 2025-11-29 07:20:50.617840491 +0000 UTC m=+0.483090136 container remove fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_heyrovsky, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:20:50 np0005539550 systemd[1]: libpod-conmon-fed87c0fda23e2aed8b88c5d7c2caae339335fc7ad70df7df84a8317d284a8d2.scope: Deactivated successfully.
Nov 29 02:20:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 29 02:20:50 np0005539550 python3.9[104813]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:20:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 29 02:20:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 29 02:20:50 np0005539550 podman[104841]: 2025-11-29 07:20:50.779455345 +0000 UTC m=+0.036368289 container create 814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:20:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:50.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:50 np0005539550 systemd[1]: Started libpod-conmon-814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475.scope.
Nov 29 02:20:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:20:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6039c489e86b435079445068bd79535e1c6fd9c063da575458133bbd7e57bac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6039c489e86b435079445068bd79535e1c6fd9c063da575458133bbd7e57bac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6039c489e86b435079445068bd79535e1c6fd9c063da575458133bbd7e57bac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6039c489e86b435079445068bd79535e1c6fd9c063da575458133bbd7e57bac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:50 np0005539550 podman[104841]: 2025-11-29 07:20:50.857334701 +0000 UTC m=+0.114247655 container init 814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:20:50 np0005539550 podman[104841]: 2025-11-29 07:20:50.764768302 +0000 UTC m=+0.021681266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:50 np0005539550 podman[104841]: 2025-11-29 07:20:50.86640414 +0000 UTC m=+0.123317114 container start 814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:20:50 np0005539550 podman[104841]: 2025-11-29 07:20:50.872077166 +0000 UTC m=+0.128990110 container attach 814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 29 02:20:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 02:20:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]: {
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:    "0": [
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:        {
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "devices": [
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "/dev/loop3"
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            ],
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "lv_name": "ceph_lv0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "lv_size": "7511998464",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "name": "ceph_lv0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "tags": {
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.cluster_name": "ceph",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.crush_device_class": "",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.encrypted": "0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.osd_id": "0",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.type": "block",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:                "ceph.vdo": "0"
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            },
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "type": "block",
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:            "vg_name": "ceph_vg0"
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:        }
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]:    ]
Nov 29 02:20:51 np0005539550 interesting_fermi[104857]: }
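The interesting_fermi output above is ceph-volume lvm list in JSON form: one BlueStore LV (ceph_lv0 on /dev/loop3) tagged for osd.0. Assuming cephadm is installed on the host, the same inventory can be reproduced with:

    cephadm ceph-volume -- lvm list --format json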
Nov 29 02:20:51 np0005539550 systemd[1]: libpod-814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475.scope: Deactivated successfully.
Nov 29 02:20:51 np0005539550 podman[104841]: 2025-11-29 07:20:51.671490949 +0000 UTC m=+0.928403893 container died 814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:20:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a6039c489e86b435079445068bd79535e1c6fd9c063da575458133bbd7e57bac-merged.mount: Deactivated successfully.
Nov 29 02:20:51 np0005539550 podman[104841]: 2025-11-29 07:20:51.734353382 +0000 UTC m=+0.991266326 container remove 814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:20:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 29 02:20:51 np0005539550 systemd[1]: libpod-conmon-814566feead1f0b3c3f0b41428ddd75ff49dfabaad31c8d0bacecd5d1743e475.scope: Deactivated successfully.
Nov 29 02:20:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 02:20:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 29 02:20:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 29 02:20:52 np0005539550 podman[105100]: 2025-11-29 07:20:52.372715077 +0000 UTC m=+0.040739809 container create 93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:20:52 np0005539550 systemd[1]: Started libpod-conmon-93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829.scope.
Nov 29 02:20:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:20:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:20:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:52.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:20:52 np0005539550 podman[105100]: 2025-11-29 07:20:52.448904157 +0000 UTC m=+0.116928919 container init 93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:20:52 np0005539550 podman[105100]: 2025-11-29 07:20:52.354999181 +0000 UTC m=+0.023023933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:52 np0005539550 podman[105100]: 2025-11-29 07:20:52.455247691 +0000 UTC m=+0.123272423 container start 93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:52 np0005539550 podman[105100]: 2025-11-29 07:20:52.458105179 +0000 UTC m=+0.126129911 container attach 93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:20:52 np0005539550 systemd[1]: libpod-93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829.scope: Deactivated successfully.
Nov 29 02:20:52 np0005539550 friendly_hofstadter[105139]: 167 167
Nov 29 02:20:52 np0005539550 conmon[105139]: conmon 93e1078ab0df5d7a6c10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829.scope/container/memory.events
Nov 29 02:20:52 np0005539550 podman[105100]: 2025-11-29 07:20:52.461438381 +0000 UTC m=+0.129463133 container died 93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:20:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2fd23bc1382a5eaca02686cc90138eb69133df80084ddd097151e6c8a11a5161-merged.mount: Deactivated successfully.
Nov 29 02:20:52 np0005539550 podman[105100]: 2025-11-29 07:20:52.500974455 +0000 UTC m=+0.168999187 container remove 93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:20:52 np0005539550 systemd[1]: libpod-conmon-93e1078ab0df5d7a6c1077799ea3c17c919b540504a293f20bba85dc28ad8829.scope: Deactivated successfully.
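The conmon <nwarn> above is a benign shutdown race: the container exited almost immediately, so its cgroup was torn down before conmon could read memory.events. For a longer-lived scope the file would be readable, e.g.:

    cat /sys/fs/cgroup/machine.slice/libpod-<id>.scope/container/memory.events   # <id> is a placeholder container ID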
Nov 29 02:20:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 29 02:20:52 np0005539550 podman[105216]: 2025-11-29 07:20:52.64438188 +0000 UTC m=+0.041928331 container create 7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_thompson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:20:52 np0005539550 systemd[1]: Started libpod-conmon-7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef.scope.
Nov 29 02:20:52 np0005539550 python3.9[105210]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:20:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb3ac73bcb6ac6f5b200c817609f9b076be5eb080571d1fc86707a08186259a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:52 np0005539550 podman[105216]: 2025-11-29 07:20:52.624485744 +0000 UTC m=+0.022032005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:20:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb3ac73bcb6ac6f5b200c817609f9b076be5eb080571d1fc86707a08186259a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb3ac73bcb6ac6f5b200c817609f9b076be5eb080571d1fc86707a08186259a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cb3ac73bcb6ac6f5b200c817609f9b076be5eb080571d1fc86707a08186259a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:20:52 np0005539550 podman[105216]: 2025-11-29 07:20:52.734938115 +0000 UTC m=+0.132484376 container init 7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:20:52 np0005539550 podman[105216]: 2025-11-29 07:20:52.744471396 +0000 UTC m=+0.142017637 container start 7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:20:52 np0005539550 podman[105216]: 2025-11-29 07:20:52.74824609 +0000 UTC m=+0.145792331 container attach 7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:20:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 29 02:20:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:52.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 29 02:20:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 29 02:20:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 29 02:20:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 02:20:53 np0005539550 python3.9[105389]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]: {
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]:        "osd_id": 0,
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]:        "type": "bluestore"
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]:    }
Nov 29 02:20:53 np0005539550 stoic_thompson[105233]: }
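The stoic_thompson output is the matching ceph-volume raw list pass, keyed by OSD fsid and resolving the LV to its device-mapper path. Under the same assumption that cephadm is available:

    cephadm ceph-volume -- raw list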
Nov 29 02:20:53 np0005539550 systemd[1]: libpod-7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef.scope: Deactivated successfully.
Nov 29 02:20:53 np0005539550 podman[105432]: 2025-11-29 07:20:53.646423582 +0000 UTC m=+0.026240761 container died 7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:20:53 np0005539550 python3.9[105495]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 29 02:20:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 29 02:20:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:54.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8cb3ac73bcb6ac6f5b200c817609f9b076be5eb080571d1fc86707a08186259a-merged.mount: Deactivated successfully.
Nov 29 02:20:54 np0005539550 podman[105432]: 2025-11-29 07:20:54.570943288 +0000 UTC m=+0.950760457 container remove 7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_thompson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:20:54 np0005539550 systemd[1]: libpod-conmon-7f5ef55d884f1cfa2f12b823e4b75c8e522da3c9c9d52e677020ec994b04eeef.scope: Deactivated successfully.
Nov 29 02:20:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:20:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:54.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
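
The audit trail above shows the mgr walking pgp_num_actual on the default.rgw.log pool up one step at a time (25 here, then 26, 27, 28 further down), which bounds how much data each new osdmap epoch can remap at once. A minimal sketch of issuing the same mon command through the python-rados binding; the command dict is copied from the audit line, while the conffile path is an assumption:

    import json
    import rados

    # The exact command the mgr dispatched, per the audit line above.
    cmd = {"prefix": "osd pool set", "pool": "default.rgw.log",
           "var": "pgp_num_actual", "val": "25"}

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed path
    cluster.connect()
    try:
        # mon_command() takes the JSON-encoded command plus an input buffer
        # and returns (retcode, output bytes, status string).
        ret, out, status = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, status)
    finally:
        cluster.shutdown()
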
Nov 29 02:20:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 29 02:20:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 29 02:20:55 np0005539550 python3.9[105652]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:20:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:55 np0005539550 python3.9[105730]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:20:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:56.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
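
The anonymous "HEAD / HTTP/1.0" 200 lines repeating every two seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health probes against the radosgw beast frontend. A sketch of the same probe; the log records only the client addresses, so the RGW host and port used here are assumptions:

    import http.client

    # Anonymous HEAD / against the beast frontend, as in the access lines above.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # assumed endpoint
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # 200 in the log, i.e. the gateway is answering
    conn.close()
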
Nov 29 02:20:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:57.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:20:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:20:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:20:57 np0005539550 python3.9[105885]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:20:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:20:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:20:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 29 02:20:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:20:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:20:58.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:20:59
Nov 29 02:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Nov 29 02:20:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:20:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:20:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:20:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:20:59.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:00 np0005539550 python3.9[106042]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:21:00 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c73eb1e1-8821-46a2-9ac2-8d09083c115f does not exist
Nov 29 02:21:00 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 891b5262-8cb9-43d5-8cd2-d2b81ffc50a7 does not exist
Nov 29 02:21:00 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4a6c7755-a979-4dc5-af53-172e54db7eeb does not exist
Nov 29 02:21:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:21:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:00.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:21:01 np0005539550 python3.9[106244]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 02:21:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 29 02:21:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 02:21:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 29 02:21:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:01 np0005539550 python3.9[106396]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:21:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:02.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 02:21:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 29 02:21:03 np0005539550 python3.9[106601]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 29 02:21:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 02:21:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:03 np0005539550 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 02:21:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:03.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:03 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 127 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=127) [0] r=0 lpr=127 pi=[91,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:03 np0005539550 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 02:21:03 np0005539550 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 02:21:03 np0005539550 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 02:21:03 np0005539550 systemd[1]: Started Dynamic System Tuning Daemon.
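
The Stopping/Stopped/Starting/Started sequence for the Dynamic System Tuning Daemon is journald's view of the ansible.builtin.systemd task logged just before it (enabled=True, name=tuned, state=restarted). The shell-level equivalent, sketched with subprocess:

    import subprocess

    # enabled=True -> systemctl enable; state=restarted -> systemctl restart.
    subprocess.run(["systemctl", "enable", "tuned"], check=True)
    subprocess.run(["systemctl", "restart", "tuned"], check=True)
    # journald then records the Stopping -> Stopped -> Starting -> Started unit events.
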
Nov 29 02:21:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 29 02:21:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:21:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 29 02:21:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 29 02:21:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:04.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:04 np0005539550 python3.9[106765]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 02:21:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 29 02:21:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:05.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 29 02:21:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 29 02:21:06 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 128 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=128) [0]/[2] r=-1 lpr=128 pi=[91,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:06 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 128 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=91/91 les/c/f=92/92/0 sis=128) [0]/[2] r=-1 lpr=128 pi=[91,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
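
These osd.0 lines record PG 9.19's acting set moving from [0] to [2] as pgp_num steps, so the local replica demotes itself to Stray until peering settles. A small sketch of pulling that transition out of a start_peering_interval line; the regex and variable names are just for illustration:

    import re

    line = ("start_peering_interval up [0] -> [0], acting [0] -> [2], "
            "acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1")
    m = re.search(r"acting \[(\d+)\] -> \[(\d+)\]", line)
    if m:
        old_osd, new_osd = m.groups()
        print(f"acting set moved: osd.{old_osd} -> osd.{new_osd}")
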
Nov 29 02:21:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:06.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 29 02:21:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 29 02:21:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:07.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:21:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:21:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:21:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:21:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:21:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 128 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=128) [0] r=0 lpr=128 pi=[92,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:21:08 np0005539550 python3.9[106923]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 29 02:21:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:08.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 29 02:21:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:09.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:09 np0005539550 python3.9[107077]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:21:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 29 02:21:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:10.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:11 np0005539550 systemd[1]: session-36.scope: Deactivated successfully.
Nov 29 02:21:11 np0005539550 systemd[1]: session-36.scope: Consumed 1min 5.169s CPU time.
Nov 29 02:21:11 np0005539550 systemd-logind[788]: Session 36 logged out. Waiting for processes to exit.
Nov 29 02:21:11 np0005539550 systemd-logind[788]: Removed session 36.
Nov 29 02:21:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 1 unknown, 1 active+recovering+remapped, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 7/207 objects misplaced (3.382%)
Nov 29 02:21:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:11.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 29 02:21:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 29 02:21:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 130 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=130) [0]/[1] r=-1 lpr=130 pi=[92,130)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:12 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 130 pg[9.1a( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=92/92 les/c/f=93/93/0 sis=130) [0]/[1] r=-1 lpr=130 pi=[92,130)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:21:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 29 02:21:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 29 02:21:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 1 unknown, 1 active+recovering+remapped, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 7/207 objects misplaced (3.382%); 0 B/s, 0 objects/s recovering
Nov 29 02:21:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 29 02:21:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:13.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:14.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 1 active+remapped, 1 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 02:21:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:15.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 29 02:21:15 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 29 02:21:16 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 131 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=7 ec=59/49 lis/c=128/91 les/c/f=129/92/0 sis=131) [0] r=0 lpr=131 pi=[91,131)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:16 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 131 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=7 ec=59/49 lis/c=128/91 les/c/f=129/92/0 sis=131) [0] r=0 lpr=131 pi=[91,131)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:16 np0005539550 systemd-logind[788]: New session 37 of user zuul.
Nov 29 02:21:16 np0005539550 systemd[1]: Started Session 37 of User zuul.
Nov 29 02:21:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 29 02:21:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 1 active+remapped, 1 unknown, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 1 objects/s recovering
Nov 29 02:21:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:17 np0005539550 python3.9[107269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:21:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:18.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:21:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 29 02:21:19 np0005539550 python3.9[107428]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 02:21:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 29 02:21:19 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 132 pg[9.19( v 55'1153 (0'0,55'1153] local-lis/les=131/132 n=7 ec=59/49 lis/c=128/91 les/c/f=129/92/0 sis=131) [0] r=0 lpr=131 pi=[91,131)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.361378652521869e-06 of space, bias 1.0, pg target 0.0019084135957565607 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
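
Each pg_autoscaler pair above prints a pool's capacity ratio, its bias, and a raw PG target. The printed targets are consistent with target = capacity_ratio * bias * (OSD count * mon_target_pg_per_osd), i.e. 3 * 100 = 300 on this cluster; 100 is Ceph's default for mon_target_pg_per_osd and is an inference from matching the numbers, not something the log states. A sketch that reproduces two of the logged values:

    def raw_pg_target(capacity_ratio: float, bias: float,
                      osds: int = 3, target_pg_per_osd: int = 100) -> float:
        # capacity_ratio and bias come straight from the log lines above;
        # the 3 * 100 multiplier is inferred from the printed targets.
        return capacity_ratio * bias * osds * target_pg_per_osd

    print(raw_pg_target(2.0538165363856318e-05, 1.0))  # ~0.0061614, as logged for '.mgr'
    print(raw_pg_target(1.4540294062907128e-06, 4.0))  # ~0.0017448, cephfs.cephfs.meta
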
Nov 29 02:21:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 30 B/s, 1 objects/s recovering
Nov 29 02:21:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:19.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:20 np0005539550 python3.9[107583]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:21:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 29 02:21:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 29 02:21:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 29 02:21:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:20.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:20 np0005539550 python3.9[107668]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:21:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Nov 29 02:21:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:21:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:21.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:21:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 133 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=130/92 les/c/f=132/93/0 sis=133) [0] r=0 lpr=133 pi=[92,133)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:22 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 133 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=4 ec=59/49 lis/c=130/92 les/c/f=132/93/0 sis=133) [0] r=0 lpr=133 pi=[92,133)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 29 02:21:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:22.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 02:21:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:23.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:24 np0005539550 python3.9[107879]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:24.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:21:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:21:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:21:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 29 02:21:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 02:21:26 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 134 pg[9.1a( v 55'1153 (0'0,55'1153] local-lis/les=133/134 n=4 ec=59/49 lis/c=130/92 les/c/f=132/93/0 sis=133) [0] r=0 lpr=133 pi=[92,133)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:21:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:26.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000028s ======
Nov 29 02:21:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:27.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 29 02:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:21:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:28 np0005539550 python3.9[108034]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:21:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:21:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:28.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:21:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 145 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:29.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:29 np0005539550 python3.9[108187]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:30.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:30 np0005539550 python3.9[108340]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 02:21:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 02:21:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 02:21:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:31.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:32 np0005539550 python3.9[108492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:21:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:32.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:33 np0005539550 python3.9[108653]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 02:21:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:33.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:21:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:34.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:21:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 02:21:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:35.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:36.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 02:21:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:37.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).paxos(paxos updating c 1..738) accept timeout, calling fresh election
Nov 29 02:21:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:38 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:21:38 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(24) init, last seen epoch 24
Nov 29 02:21:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:38.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 02:21:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:21:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:21:40 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:21:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:40.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 02:21:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:41.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:42.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:21:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 8m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:21:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:21:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:21:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:43.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:21:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:21:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 29 02:21:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:44.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 29 02:21:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:45.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:21:46 np0005539550 ceph-mon[74435]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:21:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 29 02:21:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:46.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:46 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 135 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=135) [0] r=0 lpr=135 pi=[71,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:21:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 29 02:21:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 02:21:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:47 np0005539550 python3.9[108881]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:21:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:21:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:21:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:21:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:48.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:21:49 np0005539550 python3.9[109171]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:21:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:50 np0005539550 python3.9[109324]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:50 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 136 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=136) [0]/[2] r=-1 lpr=136 pi=[71,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:21:50 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 136 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=71/71 les/c/f=73/73/0 sis=136) [0]/[2] r=-1 lpr=136 pi=[71,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:21:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 29 02:21:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:21:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:50.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:21:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 29 02:21:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:51 np0005539550 python3.9[109480]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 29 02:21:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 29 02:21:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 29 02:21:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:52.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 29 02:21:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 29 02:21:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 02:21:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:21:54 np0005539550 python3.9[109637]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:21:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 02:21:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 02:21:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:21:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:21:54 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(27) init, last seen epoch 27, mid-election, bumping
Nov 29 02:21:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:21:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:54.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:21:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:21:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 02:21:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:21:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:56.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:21:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 02:21:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:21:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:57.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:57 np0005539550 python3.9[109797]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:21:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:21:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:21:58 np0005539550 python3.9[109952]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 29 02:21:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:21:58.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:21:59
Nov 29 02:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data']
Nov 29 02:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:21:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 1 active+recovering+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 2/211 objects misplaced (0.948%)
Nov 29 02:21:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 02:21:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:21:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:21:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:21:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:21:59.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:22:00 np0005539550 systemd[1]: session-37.scope: Deactivated successfully.
Nov 29 02:22:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:22:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:22:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 29 02:22:00 np0005539550 systemd[1]: session-37.scope: Consumed 17.817s CPU time.
Nov 29 02:22:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 9m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:22:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:22:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:22:00 np0005539550 systemd-logind[788]: Session 37 logged out. Waiting for processes to exit.
Nov 29 02:22:00 np0005539550 systemd-logind[788]: Removed session 37.
Nov 29 02:22:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:22:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:00.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 1735 writes, 8129 keys, 1734 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 1734 writes, 1733 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1735 writes, 8129 keys, 1734 commit groups, 1.0 writes per commit group, ingest: 11.26 MB, 0.02 MB/s#012Interval WAL: 1734 writes, 1733 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   54.34 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   54.34 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 56.27 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 9.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(8,55.50 KB,0.0178287%) FilterBlock(2,0.42 KB,0.000135522%) IndexBlock(2,0.34 KB,0.000110425%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:22:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539550 podman[110153]: 2025-11-29 07:22:01.431528551 +0000 UTC m=+0.083018235 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:22:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000022s ======
Nov 29 02:22:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:01.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 29 02:22:01 np0005539550 podman[110153]: 2025-11-29 07:22:01.56138219 +0000 UTC m=+0.212871884 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:22:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 29 02:22:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 139 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=2 ec=59/49 lis/c=136/71 les/c/f=138/73/0 sis=139) [0] r=0 lpr=139 pi=[71,139)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:01 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 139 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=2 ec=59/49 lis/c=136/71 les/c/f=138/73/0 sis=139) [0] r=0 lpr=139 pi=[71,139)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:02 np0005539550 podman[110309]: 2025-11-29 07:22:02.100924886 +0000 UTC m=+0.051740388 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:22:02 np0005539550 podman[110309]: 2025-11-29 07:22:02.111244942 +0000 UTC m=+0.062060424 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:22:02 np0005539550 podman[110377]: 2025-11-29 07:22:02.300006792 +0000 UTC m=+0.053427086 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, distribution-scope=public, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container)
Nov 29 02:22:02 np0005539550 podman[110377]: 2025-11-29 07:22:02.310182066 +0000 UTC m=+0.063602350 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 02:22:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:22:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:22:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 29 02:22:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:02.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:22:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 11 B/s, 0 objects/s recovering
Nov 29 02:22:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 29 02:22:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 02:22:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:03.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 29 02:22:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:22:04 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 140 pg[9.1b( v 55'1153 (0'0,55'1153] local-lis/les=139/140 n=2 ec=59/49 lis/c=136/71 les/c/f=138/73/0 sis=139) [0] r=0 lpr=139 pi=[71,139)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:22:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:04.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.034683) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925034833, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 8453, "num_deletes": 252, "total_data_size": 12019084, "memory_usage": 12292688, "flush_reason": "Manual Compaction"}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925124138, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 10164739, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 144, "largest_seqno": 8588, "table_properties": {"data_size": 10131846, "index_size": 21770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95667, "raw_average_key_size": 23, "raw_value_size": 10054258, "raw_average_value_size": 2514, "num_data_blocks": 949, "num_entries": 3998, "num_filter_entries": 3998, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400321, "oldest_key_time": 1764400321, "file_creation_time": 1764400925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 89512 microseconds, and 24452 cpu microseconds.
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.124212) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 10164739 bytes OK
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.124239) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.126850) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.126904) EVENT_LOG_v1 {"time_micros": 1764400925126897, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.127044) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 11980601, prev total WAL file size 12000849, number of live WAL files 2.
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.129627) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(9926KB) 13(52KB) 8(1944B)]
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925129751, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 10220384, "oldest_snapshot_seqno": -1}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3803 keys, 10176120 bytes, temperature: kUnknown
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925208986, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 10176120, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10143770, "index_size": 21744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 93409, "raw_average_key_size": 24, "raw_value_size": 10068117, "raw_average_value_size": 2647, "num_data_blocks": 952, "num_entries": 3803, "num_filter_entries": 3803, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764400925, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.209228) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 10176120 bytes
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.211517) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.9 rd, 128.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(9.7, 0.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 4101, records dropped: 298 output_compression: NoCompression
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.211538) EVENT_LOG_v1 {"time_micros": 1764400925211528, "job": 4, "event": "compaction_finished", "compaction_time_micros": 79306, "compaction_time_cpu_micros": 22530, "output_level": 6, "num_output_files": 1, "total_output_size": 10176120, "num_input_records": 4101, "num_output_records": 3803, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925214160, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925214238, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764400925214300, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:22:05.129423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 29 02:22:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 13 B/s, 0 objects/s recovering
Nov 29 02:22:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:22:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:22:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:05.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:22:05 np0005539550 systemd-logind[788]: New session 38 of user zuul.
Nov 29 02:22:05 np0005539550 systemd[1]: Started Session 38 of User zuul.
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539550 python3.9[110746]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:06 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 141 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=141) [0] r=0 lpr=141 pi=[81,141)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:06.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 29 02:22:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 29 02:22:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:22:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:22:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:22:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:22:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:22:07 np0005539550 python3.9[110900]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 20a56bb8-68e6-4c63-84dc-77d8221d7e1f does not exist
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2de7008e-07fe-4057-875d-c21a3aa21e3f does not exist
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9d369a79-2b7d-4907-bc53-b7d3011234b3 does not exist
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:22:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 142 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=142) [0]/[1] r=-1 lpr=142 pi=[81,142)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:08 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 142 pg[9.1e( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=81/81 les/c/f=82/82/0 sis=142) [0]/[1] r=-1 lpr=142 pi=[81,142)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:22:08 np0005539550 podman[111161]: 2025-11-29 07:22:08.73956718 +0000 UTC m=+0.038996796 container create 826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:08 np0005539550 systemd[1]: Started libpod-conmon-826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac.scope.
Nov 29 02:22:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:22:08 np0005539550 podman[111161]: 2025-11-29 07:22:08.721030885 +0000 UTC m=+0.020460531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:08 np0005539550 podman[111161]: 2025-11-29 07:22:08.825778627 +0000 UTC m=+0.125208263 container init 826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:22:08 np0005539550 podman[111161]: 2025-11-29 07:22:08.832478161 +0000 UTC m=+0.131907787 container start 826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:22:08 np0005539550 nice_lamarr[111178]: 167 167
Nov 29 02:22:08 np0005539550 systemd[1]: libpod-826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac.scope: Deactivated successfully.
Nov 29 02:22:08 np0005539550 podman[111161]: 2025-11-29 07:22:08.845592712 +0000 UTC m=+0.145022358 container attach 826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:22:08 np0005539550 podman[111161]: 2025-11-29 07:22:08.846015562 +0000 UTC m=+0.145445198 container died 826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:22:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:08.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7778aca578ac15097bc4e0651df112bd9888397a28bc9791e59168a15876e306-merged.mount: Deactivated successfully.
Nov 29 02:22:09 np0005539550 podman[111161]: 2025-11-29 07:22:09.133419894 +0000 UTC m=+0.432849520 container remove 826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamarr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:22:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 29 02:22:09 np0005539550 python3.9[111268]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:09 np0005539550 systemd[1]: libpod-conmon-826ceb5ece1ac69f3bc6958647cccc85a45af2ae8ce209a442f0c3400b1f48ac.scope: Deactivated successfully.
Nov 29 02:22:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:09 np0005539550 ceph-mon[74435]: Health check failed: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:22:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:22:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:22:09 np0005539550 podman[111277]: 2025-11-29 07:22:09.284155361 +0000 UTC m=+0.031929163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:09 np0005539550 podman[111277]: 2025-11-29 07:22:09.380739737 +0000 UTC m=+0.128513519 container create 650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:22:09 np0005539550 systemd[1]: Started libpod-conmon-650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495.scope.
Nov 29 02:22:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:09.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:22:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50ec3bd3533902153a10091c58a660a6cc487da8b7c8c89ab3a5b0d17d99b96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50ec3bd3533902153a10091c58a660a6cc487da8b7c8c89ab3a5b0d17d99b96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50ec3bd3533902153a10091c58a660a6cc487da8b7c8c89ab3a5b0d17d99b96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50ec3bd3533902153a10091c58a660a6cc487da8b7c8c89ab3a5b0d17d99b96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c50ec3bd3533902153a10091c58a660a6cc487da8b7c8c89ab3a5b0d17d99b96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:09 np0005539550 podman[111277]: 2025-11-29 07:22:09.576300092 +0000 UTC m=+0.324073874 container init 650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:22:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 29 02:22:09 np0005539550 podman[111277]: 2025-11-29 07:22:09.583140479 +0000 UTC m=+0.330914261 container start 650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:22:09 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 29 02:22:09 np0005539550 systemd[1]: session-38.scope: Deactivated successfully.
Nov 29 02:22:09 np0005539550 systemd[1]: session-38.scope: Consumed 2.413s CPU time.
Nov 29 02:22:09 np0005539550 systemd-logind[788]: Session 38 logged out. Waiting for processes to exit.
Nov 29 02:22:09 np0005539550 systemd-logind[788]: Removed session 38.
Nov 29 02:22:09 np0005539550 podman[111277]: 2025-11-29 07:22:09.782945512 +0000 UTC m=+0.530719294 container attach 650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:22:10 np0005539550 fervent_rubin[111317]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:22:10 np0005539550 fervent_rubin[111317]: --> relative data size: 1.0
Nov 29 02:22:10 np0005539550 fervent_rubin[111317]: --> All data devices are unavailable
Nov 29 02:22:10 np0005539550 systemd[1]: libpod-650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495.scope: Deactivated successfully.
Nov 29 02:22:10 np0005539550 podman[111277]: 2025-11-29 07:22:10.454672569 +0000 UTC m=+1.202446371 container died 650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:22:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c50ec3bd3533902153a10091c58a660a6cc487da8b7c8c89ab3a5b0d17d99b96-merged.mount: Deactivated successfully.
Nov 29 02:22:10 np0005539550 podman[111277]: 2025-11-29 07:22:10.515485584 +0000 UTC m=+1.263259366 container remove 650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:22:10 np0005539550 systemd[1]: libpod-conmon-650a0d31e0be87f966df0ac0b1b15ef54db6c19dead78ecd861223e1a046a495.scope: Deactivated successfully.
Nov 29 02:22:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 29 02:22:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:22:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:10.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:22:11 np0005539550 podman[111486]: 2025-11-29 07:22:11.080996766 +0000 UTC m=+0.043673163 container create 28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:22:11 np0005539550 systemd[1]: Started libpod-conmon-28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e.scope.
Nov 29 02:22:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:22:11 np0005539550 podman[111486]: 2025-11-29 07:22:11.061049798 +0000 UTC m=+0.023726215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:11 np0005539550 podman[111486]: 2025-11-29 07:22:11.167725125 +0000 UTC m=+0.130401542 container init 28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:22:11 np0005539550 podman[111486]: 2025-11-29 07:22:11.175683958 +0000 UTC m=+0.138360355 container start 28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:22:11 np0005539550 podman[111486]: 2025-11-29 07:22:11.17927474 +0000 UTC m=+0.141951137 container attach 28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:22:11 np0005539550 eager_wu[111502]: 167 167
Nov 29 02:22:11 np0005539550 systemd[1]: libpod-28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e.scope: Deactivated successfully.
Nov 29 02:22:11 np0005539550 podman[111486]: 2025-11-29 07:22:11.183386834 +0000 UTC m=+0.146063231 container died 28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:22:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-57ab514ebec6f211b91e516216ae8c359dc13bd5335b49a4002b4dcb23f7ebf7-merged.mount: Deactivated successfully.
Nov 29 02:22:11 np0005539550 podman[111486]: 2025-11-29 07:22:11.224565329 +0000 UTC m=+0.187241726 container remove 28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:22:11 np0005539550 systemd[1]: libpod-conmon-28d5ba22aa9ccf8962d2e92b89d67edc92c1ba20043526796e167973d3c5cf0e.scope: Deactivated successfully.
Nov 29 02:22:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:11 np0005539550 podman[111526]: 2025-11-29 07:22:11.364792015 +0000 UTC m=+0.038497044 container create 9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:22:11 np0005539550 systemd[1]: Started libpod-conmon-9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15.scope.
Nov 29 02:22:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:22:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a22b90417d4855cdb3fd8cbdf88e00ba72f5cd4e853540fc6610c8b08341857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a22b90417d4855cdb3fd8cbdf88e00ba72f5cd4e853540fc6610c8b08341857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a22b90417d4855cdb3fd8cbdf88e00ba72f5cd4e853540fc6610c8b08341857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a22b90417d4855cdb3fd8cbdf88e00ba72f5cd4e853540fc6610c8b08341857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:11 np0005539550 podman[111526]: 2025-11-29 07:22:11.434283509 +0000 UTC m=+0.107988568 container init 9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:22:11 np0005539550 podman[111526]: 2025-11-29 07:22:11.441103616 +0000 UTC m=+0.114808655 container start 9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:22:11 np0005539550 podman[111526]: 2025-11-29 07:22:11.347968539 +0000 UTC m=+0.021673588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:11 np0005539550 podman[111526]: 2025-11-29 07:22:11.445450445 +0000 UTC m=+0.119155474 container attach 9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:22:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:11.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops)
Nov 29 02:22:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:22:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 29 02:22:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 29 02:22:12 np0005539550 reverent_saha[111542]: {
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:    "0": [
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:        {
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "devices": [
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "/dev/loop3"
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            ],
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "lv_name": "ceph_lv0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "lv_size": "7511998464",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "name": "ceph_lv0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "tags": {
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.cluster_name": "ceph",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.crush_device_class": "",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.encrypted": "0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.osd_id": "0",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.type": "block",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:                "ceph.vdo": "0"
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            },
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "type": "block",
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:            "vg_name": "ceph_vg0"
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:        }
Nov 29 02:22:12 np0005539550 reverent_saha[111542]:    ]
Nov 29 02:22:12 np0005539550 reverent_saha[111542]: }
Nov 29 02:22:12 np0005539550 systemd[1]: libpod-9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15.scope: Deactivated successfully.
Nov 29 02:22:12 np0005539550 conmon[111542]: conmon 9d2fa390bd43046da084 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15.scope/container/memory.events
Nov 29 02:22:12 np0005539550 podman[111526]: 2025-11-29 07:22:12.203972604 +0000 UTC m=+0.877677633 container died 9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:22:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7a22b90417d4855cdb3fd8cbdf88e00ba72f5cd4e853540fc6610c8b08341857-merged.mount: Deactivated successfully.
Nov 29 02:22:12 np0005539550 podman[111526]: 2025-11-29 07:22:12.275552286 +0000 UTC m=+0.949257305 container remove 9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_saha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:22:12 np0005539550 systemd[1]: libpod-conmon-9d2fa390bd43046da084dbd2534eb6e2be83b2e6ba6f9a63bdf32f8a627e3e15.scope: Deactivated successfully.
Nov 29 02:22:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 29 02:22:12 np0005539550 podman[111706]: 2025-11-29 07:22:12.851164889 +0000 UTC m=+0.047441369 container create 2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:22:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 29 02:22:12 np0005539550 systemd[1]: Started libpod-conmon-2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32.scope.
Nov 29 02:22:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:12.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 29 02:22:12 np0005539550 podman[111706]: 2025-11-29 07:22:12.825136062 +0000 UTC m=+0.021412572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:22:13 np0005539550 podman[111706]: 2025-11-29 07:22:13.069975868 +0000 UTC m=+0.266252368 container init 2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:22:13 np0005539550 podman[111706]: 2025-11-29 07:22:13.078141455 +0000 UTC m=+0.274417935 container start 2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:22:13 np0005539550 stoic_kirch[111722]: 167 167
Nov 29 02:22:13 np0005539550 systemd[1]: libpod-2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32.scope: Deactivated successfully.
Nov 29 02:22:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 145 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=142/81 les/c/f=143/82/0 sis=145) [0] r=0 lpr=145 pi=[81,145)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:13 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 145 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=142/81 les/c/f=143/82/0 sis=145) [0] r=0 lpr=145 pi=[81,145)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:13 np0005539550 podman[111706]: 2025-11-29 07:22:13.290554618 +0000 UTC m=+0.486831118 container attach 2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:22:13 np0005539550 podman[111706]: 2025-11-29 07:22:13.291707844 +0000 UTC m=+0.487984344 container died 2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:22:13 np0005539550 ceph-mon[74435]: Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 36 sec, mon.compute-1 has slow ops)
Nov 29 02:22:13 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:22:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 1 active+remapped, 1 peering, 303 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Nov 29 02:22:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:13.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5fcf7712227e9ff5c75599c9da3d6dec468ac30c6b1efff4d61bf96693df3159-merged.mount: Deactivated successfully.
Nov 29 02:22:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 29 02:22:14 np0005539550 podman[111706]: 2025-11-29 07:22:14.162155789 +0000 UTC m=+1.358432279 container remove 2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kirch, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:22:14 np0005539550 systemd[1]: libpod-conmon-2f82ab2fdd88d82bf0e4267b456879139038f11c70c16c9e19eb8ae6ceda9c32.scope: Deactivated successfully.
Nov 29 02:22:14 np0005539550 podman[111749]: 2025-11-29 07:22:14.301472575 +0000 UTC m=+0.021771951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:22:14 np0005539550 podman[111749]: 2025-11-29 07:22:14.516466386 +0000 UTC m=+0.236765732 container create c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackwell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:22:14 np0005539550 systemd[1]: Started libpod-conmon-c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544.scope.
Nov 29 02:22:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 29 02:22:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:22:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0247bf166c21f409f3637fa3405b92b78242db323eaad0540ca3f9683675a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0247bf166c21f409f3637fa3405b92b78242db323eaad0540ca3f9683675a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0247bf166c21f409f3637fa3405b92b78242db323eaad0540ca3f9683675a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0247bf166c21f409f3637fa3405b92b78242db323eaad0540ca3f9683675a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:22:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 29 02:22:14 np0005539550 podman[111749]: 2025-11-29 07:22:14.779621052 +0000 UTC m=+0.499920438 container init c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:22:14 np0005539550 podman[111749]: 2025-11-29 07:22:14.790379559 +0000 UTC m=+0.510678915 container start c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:22:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:14.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539550 podman[111749]: 2025-11-29 07:22:15.120498971 +0000 UTC m=+0.840798357 container attach c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackwell, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:22:15 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 146 pg[9.1e( v 55'1153 (0'0,55'1153] local-lis/les=145/146 n=5 ec=59/49 lis/c=142/81 les/c/f=143/82/0 sis=145) [0] r=0 lpr=145 pi=[81,145)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:22:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 38 B/s, 1 objects/s recovering
Nov 29 02:22:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 29 02:22:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:22:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:15.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]: {
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]:        "osd_id": 0,
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]:        "type": "bluestore"
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]:    }
Nov 29 02:22:15 np0005539550 nice_blackwell[111765]: }
Nov 29 02:22:15 np0005539550 systemd[1]: libpod-c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544.scope: Deactivated successfully.
Nov 29 02:22:15 np0005539550 podman[111749]: 2025-11-29 07:22:15.711472097 +0000 UTC m=+1.431771453 container died c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:22:15 np0005539550 systemd-logind[788]: New session 39 of user zuul.
Nov 29 02:22:15 np0005539550 systemd[1]: Started Session 39 of User zuul.
Nov 29 02:22:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4e0247bf166c21f409f3637fa3405b92b78242db323eaad0540ca3f9683675a1-merged.mount: Deactivated successfully.
Nov 29 02:22:16 np0005539550 podman[111749]: 2025-11-29 07:22:16.119459825 +0000 UTC m=+1.839759181 container remove c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:22:16 np0005539550 systemd[1]: libpod-conmon-c1450d5f68d96ca972ed66e500aedc29e062f0799833bb2499dc43bf07cd6544.scope: Deactivated successfully.
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 29 02:22:16 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 147 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=147) [0] r=0 lpr=147 pi=[103,147)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9cc866fb-36ce-4380-913f-3287d35c5a8a does not exist
Nov 29 02:22:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 52100d00-e28b-4172-ac7f-6eab03c6b961 does not exist
Nov 29 02:22:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 44090807-62ef-4433-a83c-537cd2348fac does not exist
Nov 29 02:22:16 np0005539550 python3.9[111956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:22:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 29 02:22:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 1 objects/s recovering
Nov 29 02:22:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:17.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:18 np0005539550 python3.9[112162]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 29 02:22:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 29 02:22:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:18 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 148 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=148) [0]/[1] r=-1 lpr=148 pi=[103,148)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:18 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 148 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=103/103 les/c/f=104/104/0 sis=148) [0]/[1] r=-1 lpr=148 pi=[103,148)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 29 02:22:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:18.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:19 np0005539550 python3.9[112319]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:22:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 29 02:22:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 0 objects/s recovering
Nov 29 02:22:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:19.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 29 02:22:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 29 02:22:19 np0005539550 python3.9[112405]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 29 02:22:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 29 02:22:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 29 02:22:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 150 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=148/103 les/c/f=149/104/0 sis=150) [0] r=0 lpr=150 pi=[103,150)/1 luod=0'0 crt=55'1153 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 29 02:22:20 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 150 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=0/0 n=5 ec=59/49 lis/c=148/103 les/c/f=149/104/0 sis=150) [0] r=0 lpr=150 pi=[103,150)/1 crt=55'1153 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 29 02:22:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:20.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:22:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:21.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:22:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 29 02:22:22 np0005539550 python3.9[112562]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:22:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:22:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:22:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 29 02:22:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:22:23 np0005539550 ceph-osd[84753]: osd.0 pg_epoch: 151 pg[9.1f( v 55'1153 (0'0,55'1153] local-lis/les=150/151 n=5 ec=59/49 lis/c=148/103 les/c/f=149/104/0 sis=150) [0] r=0 lpr=150 pi=[103,150)/1 crt=55'1153 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 29 02:22:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:23.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:24 np0005539550 python3.9[112810]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:22:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:22:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:25.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:25 np0005539550 python3.9[112964]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:22:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:22:27 np0005539550 python3.9[113132]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Nov 29 02:22:27 np0005539550 python3.9[113212]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:27.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:28 np0005539550 python3.9[113365]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:28.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:29 np0005539550 python3.9[113445]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 0 objects/s recovering
Nov 29 02:22:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:29.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:30 np0005539550 python3.9[113600]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:30.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:31 np0005539550 python3.9[113752]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 02:22:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:31.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:31 np0005539550 python3.9[113904]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:32 np0005539550 python3.9[114059]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:22:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:32.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Nov 29 02:22:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:33 np0005539550 python3.9[114211]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:33.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 02:22:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:35.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:36 np0005539550 python3.9[114370]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:22:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:36.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:37 np0005539550 python3.9[114524]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9 B/s, 0 objects/s recovering
Nov 29 02:22:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:37.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:38 np0005539550 python3.9[114679]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:22:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:38.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:39 np0005539550 python3.9[114833]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:22:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:39.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:40 np0005539550 python3.9[114987]: ansible-service_facts Invoked
Nov 29 02:22:40 np0005539550 network[115004]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:22:40 np0005539550 network[115005]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:22:40 np0005539550 network[115006]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:22:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:40.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:41.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:42.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:22:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:43.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:22:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:44.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:45.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:46.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:47.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:47 np0005539550 python3.9[115521]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:22:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:48.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:49.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:50.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:52.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:22:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:53.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:22:54 np0005539550 python3.9[115684]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 02:22:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:54.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:55.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:56 np0005539550 python3.9[115841]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:56 np0005539550 python3.9[115919]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:56.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:57.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:58 np0005539550 python3.9[116074]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:22:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:22:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:22:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:22:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:22:58.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:22:59
Nov 29 02:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 29 02:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:22:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:22:59 np0005539550 python3.9[116152]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:22:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:22:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:22:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:22:59.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:00.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:01 np0005539550 python3.9[116309]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
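
The lineinfile invocation above enforces PEERNTP=no in /etc/sysconfig/network: with regexp=^PEERNTP= and state=present it rewrites an existing PEERNTP= line or appends one, and create=True makes the file if it is missing. A minimal Python equivalent of that idempotent edit (illustrative only; the real module additionally handles backups, ownership, and SELinux contexts):

    import re
    from pathlib import Path

    def ensure_line(path: str, regexp: str, line: str) -> None:
        """Replace the first line matching `regexp` with `line`, else append it."""
        p = Path(path)
        lines = p.read_text().splitlines() if p.exists() else []
        pat = re.compile(regexp)
        for i, existing in enumerate(lines):
            if pat.search(existing):
                lines[i] = line
                break
        else:
            lines.append(line)
        p.write_text("\n".join(lines) + "\n")

    # Mirrors the task above (use a scratch path when trying this out):
    ensure_line("/tmp/network", r"^PEERNTP=", "PEERNTP=no")
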
Nov 29 02:23:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:01.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:02.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:03 np0005539550 python3.9[116514]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:23:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:03.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:04 np0005539550 python3.9[116601]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:04.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:05 np0005539550 systemd[1]: session-39.scope: Deactivated successfully.
Nov 29 02:23:05 np0005539550 systemd[1]: session-39.scope: Consumed 23.985s CPU time.
Nov 29 02:23:05 np0005539550 systemd-logind[788]: Session 39 logged out. Waiting for processes to exit.
Nov 29 02:23:05 np0005539550 systemd-logind[788]: Removed session 39.
Nov 29 02:23:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:05.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:06.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:07.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:23:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:23:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:23:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:23:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:23:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:23:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:08.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:10 np0005539550 systemd-logind[788]: New session 40 of user zuul.
Nov 29 02:23:10 np0005539550 systemd[1]: Started Session 40 of User zuul.
Nov 29 02:23:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:10.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:11 np0005539550 python3.9[116794]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:11.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:12 np0005539550 python3.9[116949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:12.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:13 np0005539550 python3.9[117027]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:13 np0005539550 systemd[1]: session-40.scope: Deactivated successfully.
Nov 29 02:23:13 np0005539550 systemd[1]: session-40.scope: Consumed 1.595s CPU time.
Nov 29 02:23:13 np0005539550 systemd-logind[788]: Session 40 logged out. Waiting for processes to exit.
Nov 29 02:23:13 np0005539550 systemd-logind[788]: Removed session 40.
Nov 29 02:23:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:13.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:14.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:15.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:16.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:17 np0005539550 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 02:23:17 np0005539550 systemd[1]: session-19.scope: Consumed 1min 42.027s CPU time.
Nov 29 02:23:17 np0005539550 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Nov 29 02:23:17 np0005539550 systemd-logind[788]: Removed session 19.
Nov 29 02:23:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:23:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:23:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:18.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
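
The pg_autoscaler figures above are internally consistent: the pg target is the pool's share of raw capacity times its bias times a cluster-wide PG budget, then quantized (with the pool's floor applied). Working backwards from '.mgr' (2.0538e-05 x 1.0 -> target 0.00616), the budget comes out to 300 PGs, which would follow from 3 OSDs at the default mon_target_pg_per_osd of 100; that inference, and the helper below, are assumptions for illustration rather than values read from the cluster:

    def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> float:
        # pg_budget = num_osds * mon_target_pg_per_osd (3 * 100 assumed here,
        # inferred from the numbers in the log above).
        return capacity_ratio * bias * pg_budget

    print(pg_target(2.0538165363856318e-05, 1.0))  # ~0.006161 -> matches '.mgr'
    print(pg_target(1.4540294062907128e-06, 4.0))  # ~0.001745 -> matches 'cephfs.cephfs.meta'
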
Nov 29 02:23:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:19.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:19 np0005539550 systemd-logind[788]: New session 41 of user zuul.
Nov 29 02:23:19 np0005539550 systemd[1]: Started Session 41 of User zuul.
Nov 29 02:23:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:23:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:20.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:21 np0005539550 python3.9[117469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:23:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:23:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:21.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:23:22 np0005539550 python3.9[117628]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:22.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:23 np0005539550 python3.9[117855]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:23.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:23 np0005539550 python3.9[117933]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.z3rp2fwl recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:23:24 np0005539550 python3.9[118088]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:24.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:25 np0005539550 python3.9[118166]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.oplmjwa5 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:25.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:23:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:23:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:23:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:23:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:23:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bc78747f-c136-4ff8-9032-86d82ce31e34 does not exist
Nov 29 02:23:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2189b10c-a5f3-40fe-b888-e44328fc14a9 does not exist
Nov 29 02:23:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9732532a-48b5-48ca-8147-c8812b1471c2 does not exist
Nov 29 02:23:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:23:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:23:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:23:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:23:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:23:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:23:26 np0005539550 python3.9[118319]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:26 np0005539550 podman[118566]: 2025-11-29 07:23:26.565802524 +0000 UTC m=+0.025910207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:26 np0005539550 python3.9[118625]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:26 np0005539550 podman[118566]: 2025-11-29 07:23:26.951214621 +0000 UTC m=+0.411322284 container create 1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:23:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:26.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:27 np0005539550 systemd[1]: Started libpod-conmon-1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1.scope.
Nov 29 02:23:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:23:27 np0005539550 python3.9[118703]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:27 np0005539550 podman[118566]: 2025-11-29 07:23:27.313166744 +0000 UTC m=+0.773274427 container init 1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:23:27 np0005539550 podman[118566]: 2025-11-29 07:23:27.321116585 +0000 UTC m=+0.781224248 container start 1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:23:27 np0005539550 loving_colden[118706]: 167 167
Nov 29 02:23:27 np0005539550 systemd[1]: libpod-1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1.scope: Deactivated successfully.
Nov 29 02:23:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:23:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:23:27 np0005539550 podman[118566]: 2025-11-29 07:23:27.495241684 +0000 UTC m=+0.955349367 container attach 1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_colden, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:27 np0005539550 podman[118566]: 2025-11-29 07:23:27.496423624 +0000 UTC m=+0.956531287 container died 1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_colden, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:23:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:27.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:27 np0005539550 python3.9[118872]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:28 np0005539550 python3.9[118951]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:23:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9020a043c4028d2aa57811c2e9c5e768a84e264281f5fc0adce7b8a162460f13-merged.mount: Deactivated successfully.
Nov 29 02:23:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:28.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:29 np0005539550 python3.9[119107]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:29 np0005539550 podman[118566]: 2025-11-29 07:23:29.373041322 +0000 UTC m=+2.833148985 container remove 1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
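
The podman lines around here trace a complete short-lived container lifecycle for loving_colden: image pull, create, init, start, attach, died, and finally remove, with systemd starting and tearing down the matching libpod-conmon scope around it (cephadm probing the host with the quay.io/ceph/ceph image). A small sketch for reducing such lines to (timestamp, event, name) tuples; the field positions are inferred from the samples in this log:

    import re

    # Matches the "container <event> <64-hex id> (image=..., name=..., ...)"
    # shape seen above; "image pull" lines intentionally do not match.
    PODMAN_EVENT_RE = re.compile(
        r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+ '
        r'container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \(.*?name=(?P<name>[^,)]+)'
    )

    def parse_podman_event(line: str):
        """Return (timestamp, event, container name), or None for non-event lines."""
        m = PODMAN_EVENT_RE.search(line)
        return (m['ts'], m['event'], m['name']) if m else None
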
Nov 29 02:23:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:29 np0005539550 systemd[1]: libpod-conmon-1405364efd30123c051cb3ffa38356adcef23866b7550b21df1622f9c045b8d1.scope: Deactivated successfully.
Nov 29 02:23:29 np0005539550 podman[119139]: 2025-11-29 07:23:29.512363179 +0000 UTC m=+0.023659020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:29 np0005539550 python3.9[119280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:30 np0005539550 podman[119139]: 2025-11-29 07:23:29.99905786 +0000 UTC m=+0.510353681 container create a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:23:30 np0005539550 systemd[1]: Started libpod-conmon-a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576.scope.
Nov 29 02:23:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:23:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7f299ba32ba3f1919c7d10f36e16b5a678ed36f0d934392b6f3ed504e3435/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7f299ba32ba3f1919c7d10f36e16b5a678ed36f0d934392b6f3ed504e3435/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7f299ba32ba3f1919c7d10f36e16b5a678ed36f0d934392b6f3ed504e3435/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7f299ba32ba3f1919c7d10f36e16b5a678ed36f0d934392b6f3ed504e3435/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7f299ba32ba3f1919c7d10f36e16b5a678ed36f0d934392b6f3ed504e3435/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:30 np0005539550 python3.9[119364]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:30 np0005539550 podman[119139]: 2025-11-29 07:23:30.562121765 +0000 UTC m=+1.073417606 container init a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:23:30 np0005539550 podman[119139]: 2025-11-29 07:23:30.575382701 +0000 UTC m=+1.086678522 container start a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:30 np0005539550 podman[119139]: 2025-11-29 07:23:30.796328254 +0000 UTC m=+1.307624455 container attach a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:23:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:30.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:31 np0005539550 thirsty_brahmagupta[119361]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:23:31 np0005539550 thirsty_brahmagupta[119361]: --> relative data size: 1.0
Nov 29 02:23:31 np0005539550 thirsty_brahmagupta[119361]: --> All data devices are unavailable
Nov 29 02:23:31 np0005539550 python3.9[119519]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:31 np0005539550 systemd[1]: libpod-a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576.scope: Deactivated successfully.
Nov 29 02:23:31 np0005539550 podman[119139]: 2025-11-29 07:23:31.462657784 +0000 UTC m=+1.973953625 container died a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:23:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:31.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:32 np0005539550 python3.9[119619]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9ee7f299ba32ba3f1919c7d10f36e16b5a678ed36f0d934392b6f3ed504e3435-merged.mount: Deactivated successfully.
Nov 29 02:23:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:33 np0005539550 python3.9[119773]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:33 np0005539550 systemd[1]: Reloading.
Nov 29 02:23:33 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:33 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:33 np0005539550 podman[119139]: 2025-11-29 07:23:33.582230483 +0000 UTC m=+4.093526344 container remove a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:23:33 np0005539550 systemd[1]: libpod-conmon-a6e1dc11e63df7b4b679ff298e9feb13230a3a21ce1c07e69c50f377b8e6c576.scope: Deactivated successfully.
Nov 29 02:23:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:34 np0005539550 podman[120071]: 2025-11-29 07:23:34.157130717 +0000 UTC m=+0.021819553 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:34 np0005539550 python3.9[120119]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:34 np0005539550 python3.9[120197]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:35 np0005539550 python3.9[120351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:35 np0005539550 podman[120071]: 2025-11-29 07:23:35.646192795 +0000 UTC m=+1.510881611 container create a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:23:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:35.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:36 np0005539550 python3.9[120430]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:36 np0005539550 systemd[1]: Started libpod-conmon-a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf.scope.
Nov 29 02:23:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:23:36 np0005539550 podman[120071]: 2025-11-29 07:23:36.842186412 +0000 UTC m=+2.706875248 container init a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:36 np0005539550 podman[120071]: 2025-11-29 07:23:36.859221404 +0000 UTC m=+2.723910210 container start a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lewin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:23:36 np0005539550 gallant_lewin[120436]: 167 167
Nov 29 02:23:36 np0005539550 systemd[1]: libpod-a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf.scope: Deactivated successfully.
Nov 29 02:23:36 np0005539550 python3.9[120589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:23:36 np0005539550 systemd[1]: Reloading.
Nov 29 02:23:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:36.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:37 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:23:37 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:23:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:37 np0005539550 systemd[1]: Starting Create netns directory...
Nov 29 02:23:37 np0005539550 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:23:37 np0005539550 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:23:37 np0005539550 systemd[1]: Finished Create netns directory.
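[editor's note] The lines above show the EDPM role staging /etc/systemd/system/netns-placeholder.service and its preset, then an ansible.builtin.systemd task (daemon_reload=True, enabled=True, state=started) reloading systemd and running the unit to completion. A rough plain-systemctl equivalent of that task, as a sketch:

```python
import subprocess

def enable_and_start(unit: str) -> None:
    # Mirrors the ansible.builtin.systemd invocation logged above:
    # daemon_reload=True, enabled=True, state=started. Requires root.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "start", unit], check=True)

enable_and_start("netns-placeholder.service")
```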
Nov 29 02:23:37 np0005539550 podman[120071]: 2025-11-29 07:23:37.57878329 +0000 UTC m=+3.443472126 container attach a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lewin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:23:37 np0005539550 podman[120071]: 2025-11-29 07:23:37.581580071 +0000 UTC m=+3.446268887 container died a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:23:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:37.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:38 np0005539550 python3.9[120796]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:23:38 np0005539550 network[120813]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:23:38 np0005539550 network[120814]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:23:38 np0005539550 network[120815]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:23:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:38.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dcb06f30f3f3b2e796c5946e7d9bcf247c0dcb11d9d29a9f16d75e526df42d0d-merged.mount: Deactivated successfully.
Nov 29 02:23:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:39.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:40 np0005539550 podman[120071]: 2025-11-29 07:23:40.66473836 +0000 UTC m=+6.529427176 container remove a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:23:40 np0005539550 systemd[1]: libpod-conmon-a63fb179812b5e533c610d796f495dfd23f5c93602707326501cabcd091ecedf.scope: Deactivated successfully.
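[editor's note] The gallant_lewin events above trace a complete podman lifecycle for a short-lived cephadm helper container (create, init, start, attach, died, remove); its only output was "167 167", which looks like the uid/gid probe cephadm runs inside the ceph image. A sketch of an equivalent one-shot container; the in-container command is an assumption made to explain that output:

```python
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# One-shot helper like gallant_lewin above: run, capture stdout, auto-remove.
# The stat command is an assumption; "167 167" would be the uid/gid of the
# ceph user inside the image.
out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout
print(out.strip())  # expected: "167 167"
```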
Nov 29 02:23:40 np0005539550 podman[120935]: 2025-11-29 07:23:40.821678246 +0000 UTC m=+0.030616169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:41 np0005539550 podman[120935]: 2025-11-29 07:23:41.024539543 +0000 UTC m=+0.233477446 container create d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:23:41 np0005539550 systemd[1]: Started libpod-conmon-d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0.scope.
Nov 29 02:23:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:23:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a77c6f9c3cdb41bed1fb5fe4b49c982346325df96ab74185b3671dfb615456/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a77c6f9c3cdb41bed1fb5fe4b49c982346325df96ab74185b3671dfb615456/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a77c6f9c3cdb41bed1fb5fe4b49c982346325df96ab74185b3671dfb615456/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1a77c6f9c3cdb41bed1fb5fe4b49c982346325df96ab74185b3671dfb615456/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:41 np0005539550 podman[120935]: 2025-11-29 07:23:41.524505972 +0000 UTC m=+0.733443895 container init d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elbakyan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:41 np0005539550 podman[120935]: 2025-11-29 07:23:41.533253101 +0000 UTC m=+0.742190984 container start d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elbakyan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:23:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:41.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:41 np0005539550 podman[120935]: 2025-11-29 07:23:41.70068939 +0000 UTC m=+0.909627293 container attach d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elbakyan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]: {
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:    "0": [
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:        {
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "devices": [
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "/dev/loop3"
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            ],
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "lv_name": "ceph_lv0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "lv_size": "7511998464",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "name": "ceph_lv0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "tags": {
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.cluster_name": "ceph",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.crush_device_class": "",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.encrypted": "0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.osd_id": "0",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.type": "block",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:                "ceph.vdo": "0"
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            },
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "type": "block",
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:            "vg_name": "ceph_vg0"
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:        }
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]:    ]
Nov 29 02:23:42 np0005539550 priceless_elbakyan[120979]: }
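[editor's note] The priceless_elbakyan container dumped a report shaped like `ceph-volume lvm list --format json`: OSD ids keyed to their logical volumes, with the metadata duplicated in the ceph.* LV tags. A sketch of pulling the useful fields out of such a report, using a sample trimmed from the output above:

```python
import json

# Minimal sample in the shape of the container output above
# (`ceph-volume lvm list --format json` style), values taken from the log.
raw_json = """
{
  "0": [
    {
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {
        "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "ceph.osd_id": "0",
        "ceph.type": "block"
      }
    }
  ]
}
"""

for osd_id, lvs in json.loads(raw_json).items():
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, type={tags['ceph.type']})")
```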
Nov 29 02:23:42 np0005539550 systemd[1]: libpod-d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0.scope: Deactivated successfully.
Nov 29 02:23:42 np0005539550 podman[120935]: 2025-11-29 07:23:42.368409346 +0000 UTC m=+1.577347239 container died d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:23:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c1a77c6f9c3cdb41bed1fb5fe4b49c982346325df96ab74185b3671dfb615456-merged.mount: Deactivated successfully.
Nov 29 02:23:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:42.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:43 np0005539550 podman[120935]: 2025-11-29 07:23:43.00337297 +0000 UTC m=+2.212310863 container remove d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:23:43 np0005539550 systemd[1]: libpod-conmon-d94ade9d0f944e2f1c48046d3b817404bc702d9287563b175a67524612397cd0.scope: Deactivated successfully.
Nov 29 02:23:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:43 np0005539550 podman[121192]: 2025-11-29 07:23:43.579855676 +0000 UTC m=+0.026166847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:43.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:43 np0005539550 podman[121192]: 2025-11-29 07:23:43.707893268 +0000 UTC m=+0.154204479 container create 032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:23:43 np0005539550 systemd[1]: Started libpod-conmon-032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a.scope.
Nov 29 02:23:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:23:44 np0005539550 podman[121192]: 2025-11-29 07:23:44.13666277 +0000 UTC m=+0.582973961 container init 032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:23:44 np0005539550 podman[121192]: 2025-11-29 07:23:44.14620133 +0000 UTC m=+0.592512491 container start 032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:23:44 np0005539550 flamboyant_chaum[121285]: 167 167
Nov 29 02:23:44 np0005539550 systemd[1]: libpod-032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a.scope: Deactivated successfully.
Nov 29 02:23:44 np0005539550 podman[121192]: 2025-11-29 07:23:44.296576631 +0000 UTC m=+0.742887892 container attach 032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:23:44 np0005539550 podman[121192]: 2025-11-29 07:23:44.297219617 +0000 UTC m=+0.743530778 container died 032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:23:44 np0005539550 python3.9[121342]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:44 np0005539550 python3.9[121433]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:44.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fbda308654186cca10ffe84f23cf419540a8f746f0c736f7a149f23dba3c412b-merged.mount: Deactivated successfully.
Nov 29 02:23:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:45 np0005539550 podman[121192]: 2025-11-29 07:23:45.628620257 +0000 UTC m=+2.074931418 container remove 032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:23:45 np0005539550 python3.9[121585]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:45 np0005539550 systemd[1]: libpod-conmon-032fc6c0132797710bc2e5d8553c7c0cdfa713126891cb320cd90c86dfbede9a.scope: Deactivated successfully.
Nov 29 02:23:45 np0005539550 podman[121619]: 2025-11-29 07:23:45.780570678 +0000 UTC m=+0.027328967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:23:46 np0005539550 podman[121619]: 2025-11-29 07:23:46.139204532 +0000 UTC m=+0.385962791 container create 9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sammet, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:23:46 np0005539550 systemd[1]: Started libpod-conmon-9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f.scope.
Nov 29 02:23:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:23:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f764aaf23f84c6d3fce4f7ee0ae131abe69f1b1a9daa839498f7003f9bc7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f764aaf23f84c6d3fce4f7ee0ae131abe69f1b1a9daa839498f7003f9bc7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f764aaf23f84c6d3fce4f7ee0ae131abe69f1b1a9daa839498f7003f9bc7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52f764aaf23f84c6d3fce4f7ee0ae131abe69f1b1a9daa839498f7003f9bc7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:23:46 np0005539550 podman[121619]: 2025-11-29 07:23:46.687106162 +0000 UTC m=+0.933864431 container init 9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sammet, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:23:46 np0005539550 podman[121619]: 2025-11-29 07:23:46.702811476 +0000 UTC m=+0.949569755 container start 9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sammet, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:23:46 np0005539550 python3.9[121766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:47.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:47 np0005539550 podman[121619]: 2025-11-29 07:23:47.258602814 +0000 UTC m=+1.505361093 container attach 9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sammet, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:23:47 np0005539550 python3.9[121848]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]: {
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]:        "osd_id": 0,
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]:        "type": "bluestore"
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]:    }
Nov 29 02:23:47 np0005539550 zealous_sammet[121696]: }
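[editor's note] zealous_sammet's report is keyed by OSD fsid instead and resolves it to the bluestore device node. The two listings can be cross-checked: the ceph.osd_fsid tag from the LV report above should appear as a key here. A minimal sketch of that consistency check, with the sample data copied from the log:

```python
# Cross-check the two reports printed above: the LV tag ceph.osd_fsid from
# the lvm listing should key into this bluestore inventory.
lvm_fsid = "5dd67027-4f06-4800-93bd-47ed1a74c5e6"  # from the LV tags above
raw = {
    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "type": "bluestore",
    }
}
assert lvm_fsid in raw and raw[lvm_fsid]["osd_id"] == 0
print(raw[lvm_fsid]["device"])  # /dev/mapper/ceph_vg0-ceph_lv0
```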
Nov 29 02:23:47 np0005539550 systemd[1]: libpod-9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f.scope: Deactivated successfully.
Nov 29 02:23:47 np0005539550 podman[121619]: 2025-11-29 07:23:47.684148756 +0000 UTC m=+1.930907015 container died 9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:23:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:47.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:48 np0005539550 python3.9[122029]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 02:23:48 np0005539550 systemd[1]: Starting Time & Date Service...
Nov 29 02:23:48 np0005539550 systemd[1]: Started Time & Date Service.
Nov 29 02:23:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d52f764aaf23f84c6d3fce4f7ee0ae131abe69f1b1a9daa839498f7003f9bc7b-merged.mount: Deactivated successfully.
Nov 29 02:23:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:49.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:49 np0005539550 python3.9[122187]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:50 np0005539550 podman[121619]: 2025-11-29 07:23:50.261054271 +0000 UTC m=+4.507812740 container remove 9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sammet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:23:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:23:50 np0005539550 systemd[1]: libpod-conmon-9b88f85a8ced1714cca53185b7af9ef0038b60702ec44c71eba9835263f3819f.scope: Deactivated successfully.
Nov 29 02:23:50 np0005539550 python3.9[122342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:23:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 12148a55-cd82-47df-9963-a141d302b261 does not exist
Nov 29 02:23:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b24862c7-6615-4f2d-82ec-40e28c0a0b68 does not exist
Nov 29 02:23:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dbf34c9c-1367-4655-b195-53d1d6728d94 does not exist
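[editor's note] A few lines up, cephadm persists this host's device scan via `config-key set` under mgr/cephadm/host.compute-0.devices.0. The stored blob can be read back through the same interface. A sketch, assuming access to an admin keyring and that cephadm keeps JSON under this key:

```python
import json
import subprocess

# Read back the device inventory cephadm stored above. Key name is copied
# from the log; requires a client with admin keyring access.
key = "mgr/cephadm/host.compute-0.devices.0"
out = subprocess.run(["ceph", "config-key", "get", key],
                     capture_output=True, text=True, check=True).stdout
print(json.loads(out))  # JSON-shaped value is an assumption
```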
Nov 29 02:23:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:23:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:51.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:23:51 np0005539550 python3.9[122420]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:23:51 np0005539550 python3.9[122622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:51.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:52 np0005539550 python3.9[122703]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.16nq65zw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:53 np0005539550 python3.9[122855]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:53 np0005539550 python3.9[122935]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:53.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:55.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539550 python3.9[123090]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
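[editor's note] The task above snapshots the live ruleset in nft's JSON form. A compact way to inspect that snapshot, based on the documented shape of `nft -j` output:

```python
import json
import subprocess

# Re-run the command from the task above and list table names from the
# JSON document nft emits ({"nftables": [{"metainfo": ...}, ...]}).
out = subprocess.run(["nft", "-j", "list", "ruleset"],
                     capture_output=True, text=True, check=True).stdout
tables = [item["table"]["name"]
          for item in json.loads(out)["nftables"] if "table" in item]
print("tables:", tables)
```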
Nov 29 02:23:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:23:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:55.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:55 np0005539550 python3[123243]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
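[editor's note] edpm_nftables_from_files is a custom EDPM module whose internals are not shown here; judging only by its src argument and the YAML files staged above, it plausibly aggregates the per-service rule snippets under /var/lib/edpm-config/firewall. A speculative sketch of that aggregation, assuming PyYAML and list-shaped snippets:

```python
import glob
import yaml  # PyYAML

# Hypothetical reimplementation sketch, not the module's actual code:
# gather every rule entry from the staged snippets in one pass.
rules = []
for path in sorted(glob.glob("/var/lib/edpm-config/firewall/*.yaml")):
    with open(path) as f:
        rules.extend(yaml.safe_load(f) or [])
print(f"loaded {len(rules)} rule entries")
```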
Nov 29 02:23:56 np0005539550 python3.9[123398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:57.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:57 np0005539550 python3.9[123476]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:23:57.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:23:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:23:58 np0005539550 python3.9[123631]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:23:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:23:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:23:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:23:59.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:23:59 np0005539550 python3.9[123709]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:23:59
Nov 29 02:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta']
Nov 29 02:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:23:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:00.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:00 np0005539550 python3.9[123861]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:00 np0005539550 python3.9[123943]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:01.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:01 np0005539550 python3.9[124095]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:01 np0005539550 python3.9[124175]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:02.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:02 np0005539550 python3.9[124328]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:03.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:03 np0005539550 python3.9[124455]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:04 np0005539550 python3.9[124611]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:05.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:05 np0005539550 python3.9[124768]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:06.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:06 np0005539550 python3.9[124921]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:06 np0005539550 python3.9[125073]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:07.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:07 np0005539550 python3.9[125227]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:24:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:24:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:24:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:24:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:24:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:24:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:08.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:08 np0005539550 python3.9[125380]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 02:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:24:08 np0005539550 systemd[1]: session-41.scope: Deactivated successfully.
Nov 29 02:24:08 np0005539550 systemd[1]: session-41.scope: Consumed 29.928s CPU time.
Nov 29 02:24:08 np0005539550 systemd-logind[788]: Session 41 logged out. Waiting for processes to exit.
Nov 29 02:24:08 np0005539550 systemd-logind[788]: Removed session 41.
Nov 29 02:24:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:09.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:10.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:11.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:12.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:13.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:14 np0005539550 systemd-logind[788]: New session 42 of user zuul.
Nov 29 02:24:14 np0005539550 systemd[1]: Started Session 42 of User zuul.
Nov 29 02:24:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:15 np0005539550 python3.9[125571]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 02:24:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:16 np0005539550 python3.9[125724]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:17 np0005539550 python3.9[125880]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 29 02:24:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:18 np0005539550 python3.9[126033]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.khaypto5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:24:18 np0005539550 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 02:24:19 np0005539550 python3.9[126162]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.khaypto5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401057.6872404-107-210195733753746/.source.khaypto5 _original_basename=._3aptbwu follow=False checksum=a1a59eb28f721bfa8fb748cb88539cd5cab8b099 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:24:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:20 np0005539550 python3.9[126315]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:21.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:21 np0005539550 python3.9[126469]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfIZbQlJSY8OFW9gaKZpL5AOJYgHeGcUU4xMMLWNL/xUPPZkDRJ+0oOBxm1GBsA8W/sQVZWDc//tIOaPRg0Ts5mepXlfGs0Url+hpuUxGZNLWaIiPfHq1tUx7zM7eWeUlVhlBayXU+bDoHZDE1TezLFLi49CXlrQuy/1Fb5Ju8aYVVJNoRltLwGKo8JrHv8UnYQ29iZPFO7+AEqgSmsEyz9hjMO7qStFsK0Z4RYJrbTZ/AMj8FNebCRWGtc2weikdIjLid5Z20teORSzpJW4jLDvRkyg92/WdI7iFDyHhslm5uNGHqqE2uRPqQFTZ7tdP6IJzfhJms7WfRdsOS7qJdAeOLzhn/EcmLaKoST1KzKZYzMdAtqrHDPDth+ERDeHtT8CEHNFNgwH4Drtp7YWlKZyVPsv6dK3iVC5WQ4Smet9VXXpZhT8JcQr97oS6/QJ/gT2yzHqH9vE62bRuuVM3lwDNiZkdn1nVbxa8d58RY3T49As7qmlP5Y43puhyXDWU=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDvBaB2c/CSsrpPIGSKo/yIA8NKQbrk/1m+GY/Ma4/XG#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCX/VzLQPSOCPDMQMb838UxHYaVIDkLBboGMSvw1EX6MmRkAHKbJbJizg3TXu8nfZimb1PW1TRaFLHQkljXQfhA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZ3gJW4xxSNpckw2TbtUBxTZruxTxiPlDkOB8Y4ICZA576sHCsss1Ph5y2zOkXYsz9fpf2TwDKPQIVDfUxQL2k42AS2PWqcJCelaMaAxDGDVmytzhvJO+0vO0kZSFoRnDYDxt2IUjJS2VV4xS4L9mRqjK8zsSYyINET0BAxRep9xLeUV0pztWwkopYucpBL9nU+ZMkA5y3nRMxInQNfxZwW5O2P7v+HScnTy2CUe+79l+0TMU0N6uM79jmcAAH5zDqSdRx1VS+lr4cWeNOPxGiXzEepk+MRml6Y0uGKdtdlboqK6kvYfSNkkhFmtXsnvtNQyA8UDSAercKYAeSPfJftqXmHbVvAY+Ky5R22RivRx7jpubqimyS4Tab95yEzsLi6hEQ2OW1pZleLTnr31vNLojOAxtrIY7YgkPSo3yrbURsfLyldLo3LfSlYfkTpkQFE2CajUrAitfcz+uMi9UVw0jCs+cC6uvKZdzu9Flnc8SDq2rMPIHuEP+9CACVSTU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOxCaPCuKLUncOQ8c8c4/3OodUXgAR3WjvU4uCVk4XkO#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA7zHYLiINcKCNo52qkzrmctOgzvnHIchoPMaZyVaf/Aonhb5ntaWhlnHGxOVN+ZUQQOMPIjt7zIO4FB9IYg2xw=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8DvicBdqy7dEZlHZpy7m/TwUChtVXFipP55AL4//M7HIh4A4ZWW0M0pb4E4WsXc1Y99eeNf5R+fmafWv5Z2x8Tq9KiRM9wQGSEJo1Sp7Ant8TcIyfbWCUIhmGAfkYUT2iUTjyyBrBL7iGVxJbYtCagodoXoIL4MSkgeZpadFa4XI4DieFBF95zOzXF6Z9RVUiocOG6vaogo3k/wTemQxQ/dlVV7SPrtj+GoZEUpeNlAKRbkAB8PNee/Ne+abzClpRp50s2pAh7smZFmL0O+wDOgWwFImPpxCkh4nR/3IJq6O53KXSl9jR4X/vmJHpFEHC6oZX5/hfwaJTfvvELB5cjzaFh3mzFweGkQq82VhAAxVksDTO2+aUZFGDJbMSvjPTSTEl+qx+GAl7E0KnzST+NMnd5qplw0KIj+BBZgkZtKK8kAsxxRU3zDMDotlvIDG1KYN+wOGRG2Cy2afXmGFIFYdzOFlvkAwmv9yhY5u5OlWxzuiZEOcqJ0dGS1e0hk8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFq0l7tgdUK0C+AqSmZJQ8Y9Z17ynv3L7Gso+BnrUJe7#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLWT8H4lhVkE+892UU3HiUydE/Wuy5lmeTLAJzcPPkEmKKDZLorB5daY+peHiUZWU/JHax1i6VTJiGCUcfBK9Vw=#012 create=True mode=0644 path=/tmp/ansible.khaypto5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:22.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:22 np0005539550 python3.9[126624]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.khaypto5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:23.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:23 np0005539550 python3.9[126780]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.khaypto5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:23 np0005539550 systemd[1]: session-42.scope: Deactivated successfully.
Nov 29 02:24:23 np0005539550 systemd[1]: session-42.scope: Consumed 5.124s CPU time.
Nov 29 02:24:23 np0005539550 systemd-logind[788]: Session 42 logged out. Waiting for processes to exit.
Nov 29 02:24:23 np0005539550 systemd-logind[788]: Removed session 42.
Nov 29 02:24:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:25.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:28.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:29.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:30 np0005539550 systemd-logind[788]: New session 43 of user zuul.
Nov 29 02:24:30 np0005539550 systemd[1]: Started Session 43 of User zuul.
Nov 29 02:24:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:30.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:31 np0005539550 python3.9[127020]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.701896) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401071701985, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1423, "num_deletes": 250, "total_data_size": 2545732, "memory_usage": 2588504, "flush_reason": "Manual Compaction"}
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401071787230, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1599504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8589, "largest_seqno": 10011, "table_properties": {"data_size": 1594022, "index_size": 2750, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13153, "raw_average_key_size": 20, "raw_value_size": 1582367, "raw_average_value_size": 2441, "num_data_blocks": 125, "num_entries": 648, "num_filter_entries": 648, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400925, "oldest_key_time": 1764400925, "file_creation_time": 1764401071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 85396 microseconds, and 5736 cpu microseconds.
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.787304) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1599504 bytes OK
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.787324) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.835687) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.835771) EVENT_LOG_v1 {"time_micros": 1764401071835760, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.835798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2539517, prev total WAL file size 2554958, number of live WAL files 2.
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.836992) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1562KB)], [20(9937KB)]
Nov 29 02:24:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401071837102, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11775624, "oldest_snapshot_seqno": -1}
Nov 29 02:24:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:32.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3988 keys, 9765743 bytes, temperature: kUnknown
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072131451, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9765743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9733804, "index_size": 20885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97628, "raw_average_key_size": 24, "raw_value_size": 9656440, "raw_average_value_size": 2421, "num_data_blocks": 917, "num_entries": 3988, "num_filter_entries": 3988, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764401071, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.131768) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9765743 bytes
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.200067) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 40.0 rd, 33.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(13.5) write-amplify(6.1) OK, records in: 4451, records dropped: 463 output_compression: NoCompression
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.200165) EVENT_LOG_v1 {"time_micros": 1764401072200147, "job": 6, "event": "compaction_finished", "compaction_time_micros": 294455, "compaction_time_cpu_micros": 29281, "output_level": 6, "num_output_files": 1, "total_output_size": 9765743, "num_input_records": 4451, "num_output_records": 3988, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072200681, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401072202440, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:31.836767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.202561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.202567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.202568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.202570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:24:32.202571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:24:32 np0005539550 python3.9[127179]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:24:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:24:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 601.4 total, 600.0 interval#012Cumulative writes: 7942 writes, 33K keys, 7942 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 7942 writes, 1456 syncs, 5.45 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7942 writes, 33K keys, 7942 commit groups, 1.0 writes per commit group, ingest: 21.26 MB, 0.04 MB/s#012Interval WAL: 7942 writes, 1456 syncs, 5.45 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 601.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Nov 29 02:24:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:34.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:34 np0005539550 python3.9[127336]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:24:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:35.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:35 np0005539550 python3.9[127491]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:36 np0005539550 python3.9[127645]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:24:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:37.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:24:37 np0005539550 python3.9[127799]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:24:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:37 np0005539550 systemd[1]: session-43.scope: Deactivated successfully.
Nov 29 02:24:37 np0005539550 systemd[1]: session-43.scope: Consumed 3.916s CPU time.
Nov 29 02:24:37 np0005539550 systemd-logind[788]: Session 43 logged out. Waiting for processes to exit.
Nov 29 02:24:37 np0005539550 systemd-logind[788]: Removed session 43.
Nov 29 02:24:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000050s ======
Nov 29 02:24:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:38.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 29 02:24:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:40.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:42.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:42 np0005539550 systemd-logind[788]: New session 44 of user zuul.
Nov 29 02:24:42 np0005539550 systemd[1]: Started Session 44 of User zuul.
Nov 29 02:24:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:44.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:44 np0005539550 python3.9[128038]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:24:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:45.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:45 np0005539550 python3.9[128196]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:24:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:46.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:46 np0005539550 python3.9[128281]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 02:24:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 02:24:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:48.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:48 np0005539550 python3.9[128437]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:24:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:24:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:24:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:24:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:50.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:24:50 np0005539550 python3.9[128591]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:24:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:51 np0005539550 python3.9[128741]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:51 np0005539550 python3.9[128969]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:24:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:24:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:24:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:52.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:52 np0005539550 systemd[1]: session-44.scope: Deactivated successfully.
Nov 29 02:24:52 np0005539550 systemd[1]: session-44.scope: Consumed 6.196s CPU time.
Nov 29 02:24:52 np0005539550 systemd-logind[788]: Session 44 logged out. Waiting for processes to exit.
Nov 29 02:24:52 np0005539550 systemd-logind[788]: Removed session 44.
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 02:24:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:54.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:24:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:24:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:24:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 29 02:24:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:24:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:56.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:24:58.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:24:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:24:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:24:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:24:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:58 np0005539550 systemd-logind[788]: New session 45 of user zuul.
Nov 29 02:24:59 np0005539550 systemd[1]: Started Session 45 of User zuul.
Nov 29 02:24:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:24:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:24:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:24:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:24:59
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'volumes', 'default.rgw.meta']
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5f4bd247-5405-4dcc-bfbd-240cab510c12 does not exist
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1571efac-1111-431e-9bfa-6227503d3813 does not exist
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d9cf58e4-98f4-4898-9e94-63978a1300a9 does not exist
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:24:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:24:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:24:59 np0005539550 podman[129354]: 2025-11-29 07:24:59.856429312 +0000 UTC m=+0.021043700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:00 np0005539550 python3.9[129325]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:00.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:00 np0005539550 podman[129354]: 2025-11-29 07:25:00.182496998 +0000 UTC m=+0.347111346 container create 6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:25:00 np0005539550 systemd[1]: Started libpod-conmon-6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02.scope.
Nov 29 02:25:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:25:00 np0005539550 podman[129354]: 2025-11-29 07:25:00.411721722 +0000 UTC m=+0.576336090 container init 6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:25:00 np0005539550 podman[129354]: 2025-11-29 07:25:00.419477959 +0000 UTC m=+0.584092307 container start 6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:25:00 np0005539550 podman[129354]: 2025-11-29 07:25:00.426039188 +0000 UTC m=+0.590653536 container attach 6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banzai, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:25:00 np0005539550 dazzling_banzai[129398]: 167 167
Nov 29 02:25:00 np0005539550 systemd[1]: libpod-6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02.scope: Deactivated successfully.
Nov 29 02:25:00 np0005539550 podman[129354]: 2025-11-29 07:25:00.42859807 +0000 UTC m=+0.593212428 container died 6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:25:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7019c1e573fdaa635cd412fb3bbb29b6ac082a290fceea20b7e8f0e4e16a0f99-merged.mount: Deactivated successfully.
Nov 29 02:25:00 np0005539550 podman[129354]: 2025-11-29 07:25:00.489625286 +0000 UTC m=+0.654239634 container remove 6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banzai, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:25:00 np0005539550 systemd[1]: libpod-conmon-6f0dd328f0d4f3271ff66a42a9c205a61f793b23ef2cb5ec81876644736bfd02.scope: Deactivated successfully.
Nov 29 02:25:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:00 np0005539550 podman[129422]: 2025-11-29 07:25:00.641364016 +0000 UTC m=+0.032657711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:00 np0005539550 podman[129422]: 2025-11-29 07:25:00.752241027 +0000 UTC m=+0.143534702 container create 4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:25:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:01 np0005539550 systemd[1]: Started libpod-conmon-4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac.scope.
Nov 29 02:25:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:25:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3259d17987b74772b0523addaeec4410be2641e5ce8ec211ba2b08317ac110c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3259d17987b74772b0523addaeec4410be2641e5ce8ec211ba2b08317ac110c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3259d17987b74772b0523addaeec4410be2641e5ce8ec211ba2b08317ac110c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3259d17987b74772b0523addaeec4410be2641e5ce8ec211ba2b08317ac110c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3259d17987b74772b0523addaeec4410be2641e5ce8ec211ba2b08317ac110c4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:01 np0005539550 podman[129422]: 2025-11-29 07:25:01.499607242 +0000 UTC m=+0.890900947 container init 4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:25:01 np0005539550 podman[129422]: 2025-11-29 07:25:01.509229775 +0000 UTC m=+0.900523450 container start 4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:25:01 np0005539550 podman[129422]: 2025-11-29 07:25:01.578738866 +0000 UTC m=+0.970032571 container attach 4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:25:01 np0005539550 python3.9[129572]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:02.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:02 np0005539550 nice_kalam[129491]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:25:02 np0005539550 nice_kalam[129491]: --> relative data size: 1.0
Nov 29 02:25:02 np0005539550 nice_kalam[129491]: --> All data devices are unavailable
Nov 29 02:25:02 np0005539550 systemd[1]: libpod-4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac.scope: Deactivated successfully.
Nov 29 02:25:02 np0005539550 podman[129422]: 2025-11-29 07:25:02.542318219 +0000 UTC m=+1.933611894 container died 4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:25:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3259d17987b74772b0523addaeec4410be2641e5ce8ec211ba2b08317ac110c4-merged.mount: Deactivated successfully.
Nov 29 02:25:02 np0005539550 podman[129422]: 2025-11-29 07:25:02.625725056 +0000 UTC m=+2.017018731 container remove 4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_kalam, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:25:02 np0005539550 systemd[1]: libpod-conmon-4b906f69e02b08cb0dc65d709fe6170091976cb468230fc05a3efc2f582db2ac.scope: Deactivated successfully.
Nov 29 02:25:02 np0005539550 python3.9[129731]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:03.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:03 np0005539550 podman[130038]: 2025-11-29 07:25:03.272075618 +0000 UTC m=+0.026000410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:03 np0005539550 python3.9[130030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:03 np0005539550 podman[130038]: 2025-11-29 07:25:03.416963282 +0000 UTC m=+0.170888044 container create 61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:25:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:03 np0005539550 systemd[1]: Started libpod-conmon-61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6.scope.
Nov 29 02:25:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:25:03 np0005539550 podman[130038]: 2025-11-29 07:25:03.702824006 +0000 UTC m=+0.456748788 container init 61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:25:03 np0005539550 podman[130038]: 2025-11-29 07:25:03.71252333 +0000 UTC m=+0.466448092 container start 61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 02:25:03 np0005539550 podman[130038]: 2025-11-29 07:25:03.716828154 +0000 UTC m=+0.470752946 container attach 61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:25:03 np0005539550 loving_sanderson[130139]: 167 167
Nov 29 02:25:03 np0005539550 systemd[1]: libpod-61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6.scope: Deactivated successfully.
Nov 29 02:25:03 np0005539550 podman[130038]: 2025-11-29 07:25:03.719796566 +0000 UTC m=+0.473721338 container died 61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:25:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-823d4ea37ec4b8adb33d2efdf56d7883ff1edc980b60abba45a795b6e79c770f-merged.mount: Deactivated successfully.
Nov 29 02:25:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:04.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:04 np0005539550 podman[130038]: 2025-11-29 07:25:04.177335332 +0000 UTC m=+0.931260094 container remove 61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:25:04 np0005539550 systemd[1]: libpod-conmon-61ffb443954e12c98b519a7aba7c88b03697a7c0b475ac2fe31563a48bf1fdf6.scope: Deactivated successfully.
Nov 29 02:25:04 np0005539550 python3.9[130248]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401102.895288-165-148536567668502/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=2e43b819719857167c66b5975f697b9a7d321bf3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:04 np0005539550 podman[130261]: 2025-11-29 07:25:04.312931621 +0000 UTC m=+0.023730695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:04 np0005539550 podman[130261]: 2025-11-29 07:25:04.737691874 +0000 UTC m=+0.448490918 container create 1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:25:04 np0005539550 python3.9[130420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:05 np0005539550 systemd[1]: Started libpod-conmon-1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73.scope.
Nov 29 02:25:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:25:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a2b315d427aa0b1537aeca541a05ca5bf40d25d553d8aac839c86ffe15f6b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a2b315d427aa0b1537aeca541a05ca5bf40d25d553d8aac839c86ffe15f6b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a2b315d427aa0b1537aeca541a05ca5bf40d25d553d8aac839c86ffe15f6b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97a2b315d427aa0b1537aeca541a05ca5bf40d25d553d8aac839c86ffe15f6b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:05 np0005539550 podman[130261]: 2025-11-29 07:25:05.071002765 +0000 UTC m=+0.781801829 container init 1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:25:05 np0005539550 podman[130261]: 2025-11-29 07:25:05.079247694 +0000 UTC m=+0.790046738 container start 1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:25:05 np0005539550 podman[130261]: 2025-11-29 07:25:05.085150367 +0000 UTC m=+0.795949441 container attach 1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:25:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:05.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:05 np0005539550 python3.9[130552]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401104.37669-165-157299219723750/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=96f02fb504411b0b161adf414c18934dfa96b5b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]: {
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:    "0": [
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:        {
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "devices": [
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "/dev/loop3"
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            ],
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "lv_name": "ceph_lv0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "lv_size": "7511998464",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "name": "ceph_lv0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "tags": {
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.cluster_name": "ceph",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.crush_device_class": "",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.encrypted": "0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.osd_id": "0",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.type": "block",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:                "ceph.vdo": "0"
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            },
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "type": "block",
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:            "vg_name": "ceph_vg0"
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:        }
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]:    ]
Nov 29 02:25:05 np0005539550 frosty_tesla[130469]: }
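The frosty_tesla JSON above matches the shape of `ceph-volume lvm list --format json` output: a map of OSD id to the logical volumes backing it, with the authoritative metadata duplicated in LVM tags (ceph.osd_fsid, ceph.cluster_fsid, ceph.type, and so on). A sketch that extracts the useful fields, using values trimmed from the listing above:

    import json

    # Trimmed from the container output logged above.
    captured = json.loads('''
    {
      "0": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "lv_size": "7511998464",
          "tags": {
            "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "ceph.type": "block"
          }
        }
      ]
    }
    ''')

    for osd_id, lvs in captured.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({int(lv['lv_size']) / 2**30:.2f} GiB) "
                  f"fsid={tags['ceph.osd_fsid']} type={tags['ceph.type']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 (7.00 GiB) fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6 type=block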
Nov 29 02:25:06 np0005539550 systemd[1]: libpod-1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73.scope: Deactivated successfully.
Nov 29 02:25:06 np0005539550 podman[130261]: 2025-11-29 07:25:06.013645692 +0000 UTC m=+1.724444756 container died 1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:25:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-97a2b315d427aa0b1537aeca541a05ca5bf40d25d553d8aac839c86ffe15f6b5-merged.mount: Deactivated successfully.
Nov 29 02:25:06 np0005539550 podman[130261]: 2025-11-29 07:25:06.08016301 +0000 UTC m=+1.790962054 container remove 1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:25:06 np0005539550 systemd[1]: libpod-conmon-1aecc6df170b47faf78be02d99c2200edd3e5ac39136fb754d5870210cc49d73.scope: Deactivated successfully.
Nov 29 02:25:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:06.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:06 np0005539550 python3.9[130723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:06 np0005539550 podman[130987]: 2025-11-29 07:25:06.80000377 +0000 UTC m=+0.046591058 container create 137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:25:06 np0005539550 systemd[1]: Started libpod-conmon-137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046.scope.
Nov 29 02:25:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:25:06 np0005539550 podman[130987]: 2025-11-29 07:25:06.779971195 +0000 UTC m=+0.026558503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:06 np0005539550 podman[130987]: 2025-11-29 07:25:06.881004979 +0000 UTC m=+0.127592287 container init 137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:25:06 np0005539550 podman[130987]: 2025-11-29 07:25:06.889625107 +0000 UTC m=+0.136212385 container start 137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_davinci, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:25:06 np0005539550 podman[130987]: 2025-11-29 07:25:06.894246979 +0000 UTC m=+0.140834287 container attach 137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:25:06 np0005539550 elegant_davinci[131005]: 167 167
Nov 29 02:25:06 np0005539550 systemd[1]: libpod-137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046.scope: Deactivated successfully.
Nov 29 02:25:06 np0005539550 conmon[131005]: conmon 137e9bb63112d0cf3317 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046.scope/container/memory.events
Nov 29 02:25:06 np0005539550 podman[130987]: 2025-11-29 07:25:06.896611646 +0000 UTC m=+0.143198924 container died 137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_davinci, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:25:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d0a3e12aaf8ef50a55e6ccb6cc2829a48c5ddea8b0174cb94d90d049d318091e-merged.mount: Deactivated successfully.
Nov 29 02:25:06 np0005539550 podman[130987]: 2025-11-29 07:25:06.946283447 +0000 UTC m=+0.192870735 container remove 137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_davinci, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:25:06 np0005539550 systemd[1]: libpod-conmon-137e9bb63112d0cf33177991320229e93f7625e19808d5237023f8e0719cf046.scope: Deactivated successfully.
Nov 29 02:25:06 np0005539550 python3.9[130989]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401105.7242022-165-49000060002375/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=dc0b8d79530a00136054fb903a83874426a55ca4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
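Each ansible-ansible.legacy.copy entry records a checksum= field, which is the SHA-1 of the deployed file (the paired ansible.legacy.stat calls request checksum_algorithm=sha1). A quick way to re-verify a file on the host against the value logged above:

    import hashlib

    def sha1_of(path: str) -> str:
        """SHA-1 hex digest of a file, comparable to ansible's checksum= field."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Path and expected digest taken from the copy task logged above.
    expected = "dc0b8d79530a00136054fb903a83874426a55ca4"
    print(sha1_of("/var/lib/openstack/certs/libvirt/default/tls.key") == expected)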
Nov 29 02:25:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:07.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:07 np0005539550 podman[131054]: 2025-11-29 07:25:07.17298326 +0000 UTC m=+0.098609556 container create 088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:25:07 np0005539550 podman[131054]: 2025-11-29 07:25:07.099620496 +0000 UTC m=+0.025246822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:25:07 np0005539550 systemd[1]: Started libpod-conmon-088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28.scope.
Nov 29 02:25:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:25:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600c4dc2af3c0e9e492221f8ed5e828c8ba2ac6ded6ac862fe44efb28f119d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600c4dc2af3c0e9e492221f8ed5e828c8ba2ac6ded6ac862fe44efb28f119d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600c4dc2af3c0e9e492221f8ed5e828c8ba2ac6ded6ac862fe44efb28f119d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:25:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4600c4dc2af3c0e9e492221f8ed5e828c8ba2ac6ded6ac862fe44efb28f119d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
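The four kernel lines above are the stock XFS warning for filesystems formatted without the bigtime feature: inode timestamps are stored as 32-bit signed seconds, so the last representable instant is 0x7fffffff seconds after the Unix epoch. Worked out:

    from datetime import datetime, timezone

    # 0x7fffffff is the limit quoted by the kernel above (classic 32-bit time_t).
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00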
Nov 29 02:25:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:07 np0005539550 podman[131054]: 2025-11-29 07:25:07.457952282 +0000 UTC m=+0.383578608 container init 088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:25:07 np0005539550 podman[131054]: 2025-11-29 07:25:07.466836767 +0000 UTC m=+0.392463063 container start 088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:25:07 np0005539550 podman[131054]: 2025-11-29 07:25:07.482309271 +0000 UTC m=+0.407935597 container attach 088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:25:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:25:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:25:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:25:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:25:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:25:07 np0005539550 python3.9[131203]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:08.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]: {
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]:        "osd_id": 0,
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]:        "type": "bluestore"
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]:    }
Nov 29 02:25:08 np0005539550 wizardly_banzai[131146]: }
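The wizardly_banzai output keys bluestore devices by OSD fsid; the shape resembles `ceph-volume raw list --format json` (that attribution is my inference; cephadm runs several such probes while refreshing device state). A sketch turning it into an osd_id-to-device map, with values trimmed from the listing above:

    import json

    # Trimmed from the container output logged above.
    raw = json.loads('''
    {
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "type": "bluestore"
      }
    }
    ''')

    by_osd = {entry["osd_id"]: entry["device"] for entry in raw.values()}
    print(by_osd)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}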
Nov 29 02:25:08 np0005539550 systemd[1]: libpod-088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28.scope: Deactivated successfully.
Nov 29 02:25:08 np0005539550 systemd[1]: libpod-088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28.scope: Consumed 1.023s CPU time.
Nov 29 02:25:08 np0005539550 podman[131374]: 2025-11-29 07:25:08.54324445 +0000 UTC m=+0.030830957 container died 088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:25:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4600c4dc2af3c0e9e492221f8ed5e828c8ba2ac6ded6ac862fe44efb28f119d9-merged.mount: Deactivated successfully.
Nov 29 02:25:08 np0005539550 podman[131374]: 2025-11-29 07:25:08.628997534 +0000 UTC m=+0.116584021 container remove 088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:25:08 np0005539550 systemd[1]: libpod-conmon-088e1cb0407f95a862a2d1215decfa1663820dad64b95e3f20ecc977d4b05a28.scope: Deactivated successfully.
Nov 29 02:25:08 np0005539550 python3.9[131370]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:25:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:25:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9bc21ce7-5759-4208-95fd-c1d63194ff75 does not exist
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bb1aa2ad-74d9-43eb-a5f3-bbdb5ea016c1 does not exist
Nov 29 02:25:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6fb75dab-f9b4-47a1-a022-34e1ca7e508f does not exist
Nov 29 02:25:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:09.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:09 np0005539550 python3.9[131593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:10 np0005539550 python3.9[131717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401108.8987105-351-64930629149896/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8b7bece49f8285e67b4565d8f48d12af70383fb0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:10.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:10 np0005539550 python3.9[131871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:25:11 np0005539550 python3.9[131994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401110.1906815-351-136052380862152/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=061972f8260f53205da8541ffc96c3e0cb49837b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:12.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:12 np0005539550 python3.9[132149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:12 np0005539550 python3.9[132272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401111.586809-351-146500595602292/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ecde49182db87438cdeb27f9482e4c3a685afb56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:13 np0005539550 python3.9[132426]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:14.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:14 np0005539550 python3.9[132579]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:15.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:15 np0005539550 python3.9[132733]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:15 np0005539550 python3.9[132856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401114.6581793-526-10366615912816/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=a65efe8d9256cec999c4d224986072a3cf54d9e9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:16 np0005539550 python3.9[133009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:16 np0005539550 python3.9[133134]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401115.8835702-526-104331684435464/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=061972f8260f53205da8541ffc96c3e0cb49837b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:17.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:17 np0005539550 python3.9[133286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:18.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:18 np0005539550 python3.9[133410]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401117.1119125-526-271682511458527/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=83b61a4a78f74f5431e915e3102b338e65ff26c0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
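The pg_autoscaler lines above are self-consistent: each pool's pg target equals its used fraction times its bias times 300, and the tiny results are then quantized to the pool's existing pg_num in every case here, which is why nothing gets resized. The factor 300 is inferred from the logged numbers themselves (it is consistent with 3 OSDs at roughly 100 target PGs each, but the log does not state that). A check:

    # Reproducing two pg_autoscaler lines from the log above.
    PG_BUDGET = 300  # inferred from the logged ratios; an assumption, not logged

    for name, used, bias, logged in [
        (".mgr",               2.0538165363856318e-05, 1.0, 0.006161449609156895),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 0.0017448352875488555),
    ]:
        target = used * bias * PG_BUDGET
        print(f"{name}: {target} matches logged: {abs(target - logged) < 1e-12}")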
Nov 29 02:25:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:19 np0005539550 python3.9[133564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:20 np0005539550 python3.9[133719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:21 np0005539550 python3.9[133842]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401119.882182-732-18555299425741/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:21.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:21 np0005539550 python3.9[133994]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:22.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:22 np0005539550 python3.9[134149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:23.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:23 np0005539550 python3.9[134272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401122.038447-804-259516056738085/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:24 np0005539550 python3.9[134477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:24.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:24 np0005539550 python3.9[134629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:25 np0005539550 python3.9[134752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401124.2964492-879-7448183152443/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:26 np0005539550 python3.9[134907]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:26.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:26 np0005539550 python3.9[135059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:27.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:27 np0005539550 python3.9[135182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401126.290847-951-59270200110076/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:28.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:28 np0005539550 python3.9[135337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:28 np0005539550 python3.9[135489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:29 np0005539550 python3.9[135614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401128.4154873-1025-183713835333122/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:30.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:30 np0005539550 python3.9[135767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:30 np0005539550 python3.9[135921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:31.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:31 np0005539550 python3.9[136044]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401130.4381006-1086-167265245038267/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=1960c13778c50062ca07f689a187e0cd26c6ab56 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:32.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:32 np0005539550 systemd[1]: session-45.scope: Deactivated successfully.
Nov 29 02:25:32 np0005539550 systemd[1]: session-45.scope: Consumed 24.546s CPU time.
Nov 29 02:25:32 np0005539550 systemd-logind[788]: Session 45 logged out. Waiting for processes to exit.
Nov 29 02:25:32 np0005539550 systemd-logind[788]: Removed session 45.
Nov 29 02:25:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:34.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:35.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:36.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:37.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:38 np0005539550 systemd-logind[788]: New session 46 of user zuul.
Nov 29 02:25:38 np0005539550 systemd[1]: Started Session 46 of User zuul.
Nov 29 02:25:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:39.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:39 np0005539550 python3.9[136238]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:40 np0005539550 python3.9[136391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:41 np0005539550 python3.9[136516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401139.7279675-67-248252803925178/.source.conf _original_basename=ceph.conf follow=False checksum=dcade63291eb6ea0d49dedd3c47047e031c2100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:41.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:41 np0005539550 python3.9[136668]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:25:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:42.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:42 np0005539550 python3.9[136792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401141.3039963-67-26944349423963/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=ced193c31d6b83611be924c31eabde34732ad5bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:25:42 np0005539550 systemd[1]: session-46.scope: Deactivated successfully.
Nov 29 02:25:42 np0005539550 systemd[1]: session-46.scope: Consumed 2.789s CPU time.
Nov 29 02:25:42 np0005539550 systemd-logind[788]: Session 46 logged out. Waiting for processes to exit.
Nov 29 02:25:42 np0005539550 systemd-logind[788]: Removed session 46.
Nov 29 02:25:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:43.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:44.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:25:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:25:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:46.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:47.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:48 np0005539550 systemd-logind[788]: New session 47 of user zuul.
Nov 29 02:25:48 np0005539550 systemd[1]: Started Session 47 of User zuul.
Nov 29 02:25:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:49.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:49 np0005539550 python3.9[137031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:50 np0005539550 python3.9[137190]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:51.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:51 np0005539550 python3.9[137342]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:25:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:52.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:52 np0005539550 python3.9[137495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:25:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:53.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:53 np0005539550 python3.9[137647]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 02:25:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:54.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:55.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:25:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:56.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:25:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:57.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:25:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:25:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:25:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:25:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:25:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:25:58 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 29 02:25:58 np0005539550 python3.9[137813]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:25:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:25:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:25:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:25:59.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:25:59
Nov 29 02:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta']
Nov 29 02:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:25:59 np0005539550 python3.9[137897]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:25:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:00.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:01.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:02 np0005539550 python3.9[138054]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:26:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:02.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:03 np0005539550 python3[138211]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Nov 29 02:26:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:03.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:03 np0005539550 python3.9[138365]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:04 np0005539550 python3.9[138568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:05.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:05 np0005539550 python3.9[138646]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:05 np0005539550 python3.9[138800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:06 np0005539550 python3.9[138879]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.c6j29eey recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:07 np0005539550 python3.9[139033]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:07.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:07 np0005539550 python3.9[139111]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:26:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:26:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:26:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:26:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:26:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:08 np0005539550 python3.9[139266]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:26:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:09.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:09 np0005539550 python3[139419]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:26:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:10 np0005539550 python3.9[139699]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:10.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:10 np0005539550 python3.9[139830]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401169.5682888-436-131619387867831/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:11.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:11 np0005539550 python3.9[139982]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:12 np0005539550 python3.9[140110]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401171.0666506-481-229156545393901/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:12.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:13.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:14 np0005539550 python3.9[140265]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:14 np0005539550 python3.9[140390]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401173.704404-526-126076780728690/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:15.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:15 np0005539550 python3.9[140544]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:16 np0005539550 python3.9[140670]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401174.9609528-571-133040918429731/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:17.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:17 np0005539550 python3.9[140824]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:17 np0005539550 python3.9[140949]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401176.3764184-616-30581201916534/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:18.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:19 np0005539550 python3.9[141104]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:19.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:26:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:19 np0005539550 python3.9[141256]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:20.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:20 np0005539550 python3.9[141414]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:21.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:21 np0005539550 python3.9[141566]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.904407) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401181904485, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1032, "num_deletes": 252, "total_data_size": 1716770, "memory_usage": 1750424, "flush_reason": "Manual Compaction"}
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401181916504, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1686982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10012, "largest_seqno": 11043, "table_properties": {"data_size": 1681932, "index_size": 2574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10714, "raw_average_key_size": 19, "raw_value_size": 1671684, "raw_average_value_size": 3028, "num_data_blocks": 117, "num_entries": 552, "num_filter_entries": 552, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401071, "oldest_key_time": 1764401071, "file_creation_time": 1764401181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12173 microseconds, and 5235 cpu microseconds.
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.916592) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1686982 bytes OK
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.916611) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.918092) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.918115) EVENT_LOG_v1 {"time_micros": 1764401181918110, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.918134) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1712042, prev total WAL file size 1712042, number of live WAL files 2.
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.918905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1647KB)], [23(9536KB)]
Nov 29 02:26:21 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401181919014, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 11452725, "oldest_snapshot_seqno": -1}
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4018 keys, 9584805 bytes, temperature: kUnknown
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182023527, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 9584805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9553537, "index_size": 20150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 99005, "raw_average_key_size": 24, "raw_value_size": 9476432, "raw_average_value_size": 2358, "num_data_blocks": 876, "num_entries": 4018, "num_filter_entries": 4018, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764401181, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.023851) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 9584805 bytes
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.026545) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.5 rd, 91.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.3 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.5) write-amplify(5.7) OK, records in: 4540, records dropped: 522 output_compression: NoCompression
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.026579) EVENT_LOG_v1 {"time_micros": 1764401182026565, "job": 8, "event": "compaction_finished", "compaction_time_micros": 104617, "compaction_time_cpu_micros": 22736, "output_level": 6, "num_output_files": 1, "total_output_size": 9584805, "num_input_records": 4540, "num_output_records": 4018, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182027029, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401182028795, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:21.918710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.028942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.028948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.028950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.028951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:26:22.028952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:26:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:22.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
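Each anonymous HEAD probe leaves a three-line radosgw trace ending in a beast access line like the one above. A short sketch (the regex is written against this exact journal layout, not a general RGW log parser) extracts the client, status, and latency:

    import re

    # Layout seen above:
    # beast: <req-ptr>: <ip> - <user> [<timestamp>] "<request>" <status> <bytes> - - - latency=<secs>s
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:07:26:22.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')
    m = BEAST.search(line)
    print(m['ip'], m['status'], float(m['lat']))   # 192.168.122.102 200 0.001000026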
Nov 29 02:26:22 np0005539550 python3.9[141722]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:26:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:23.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:26:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:23 np0005539550 python3.9[141878]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
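The task above concatenates the three edpm nftables fragments and feeds them to nft on stdin. A minimal replay of the same pipeline in Python, assuming the three files exist as logged and the caller has the privileges nft requires:

    import subprocess

    # Same ordering as the logged command: flushes, rules, then jump updates.
    files = ["/etc/nftables/edpm-flushes.nft",
             "/etc/nftables/edpm-rules.nft",
             "/etc/nftables/edpm-update-jumps.nft"]
    ruleset = "".join(open(path).read() for path in files)

    # Equivalent to `cat ... | nft -f -`.
    subprocess.run(["nft", "-f", "-"], input=ruleset, text=True, check=True)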
Nov 29 02:26:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:24.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:26:24 np0005539550 python3.9[142057]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:25.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 571e53c0-0d75-4bab-b6de-9b0369a89ae5 does not exist
Nov 29 02:26:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 552b6124-b0ae-457d-b0c4-659e8767cc21 does not exist
Nov 29 02:26:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b48ee619-2307-4a92-abfd-7fdd79b04619 does not exist
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:26:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:26:25 np0005539550 python3.9[142260]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:26:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:26 np0005539550 podman[142403]: 2025-11-29 07:26:26.10645146 +0000 UTC m=+0.067610741 container create 5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:26:26 np0005539550 systemd[1]: Started libpod-conmon-5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77.scope.
Nov 29 02:26:26 np0005539550 podman[142403]: 2025-11-29 07:26:26.062219471 +0000 UTC m=+0.023378782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:26:26 np0005539550 podman[142403]: 2025-11-29 07:26:26.19657327 +0000 UTC m=+0.157732571 container init 5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:26:26 np0005539550 podman[142403]: 2025-11-29 07:26:26.207173628 +0000 UTC m=+0.168332909 container start 5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:26:26 np0005539550 podman[142403]: 2025-11-29 07:26:26.211189869 +0000 UTC m=+0.172349150 container attach 5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:26:26 np0005539550 focused_swartz[142419]: 167 167
Nov 29 02:26:26 np0005539550 systemd[1]: libpod-5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77.scope: Deactivated successfully.
Nov 29 02:26:26 np0005539550 podman[142403]: 2025-11-29 07:26:26.215745524 +0000 UTC m=+0.176904805 container died 5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:26:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f6a67c15f7b4b1373fcd56bbfebcd3e4f7c19da71b9572f280777414ffee6f74-merged.mount: Deactivated successfully.
Nov 29 02:26:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:26.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:26 np0005539550 podman[142403]: 2025-11-29 07:26:26.275104046 +0000 UTC m=+0.236263337 container remove 5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_swartz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:26:26 np0005539550 systemd[1]: libpod-conmon-5973db3cf86574d66aacd9f6294fd5562783c7dc2ea0161bb59cb87641b0ca77.scope: Deactivated successfully.
Nov 29 02:26:26 np0005539550 podman[142441]: 2025-11-29 07:26:26.437673898 +0000 UTC m=+0.052470748 container create 6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:26:26 np0005539550 systemd[1]: Started libpod-conmon-6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05.scope.
Nov 29 02:26:26 np0005539550 podman[142441]: 2025-11-29 07:26:26.416258626 +0000 UTC m=+0.031055536 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9c22d0ec0e0078b2ccd1ffcccbef395415bfb9351e2c91eb58a3532537e99d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9c22d0ec0e0078b2ccd1ffcccbef395415bfb9351e2c91eb58a3532537e99d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9c22d0ec0e0078b2ccd1ffcccbef395415bfb9351e2c91eb58a3532537e99d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9c22d0ec0e0078b2ccd1ffcccbef395415bfb9351e2c91eb58a3532537e99d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d9c22d0ec0e0078b2ccd1ffcccbef395415bfb9351e2c91eb58a3532537e99d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:26 np0005539550 podman[142441]: 2025-11-29 07:26:26.541904814 +0000 UTC m=+0.156701674 container init 6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:26:26 np0005539550 podman[142441]: 2025-11-29 07:26:26.549348752 +0000 UTC m=+0.164145602 container start 6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:26:26 np0005539550 podman[142441]: 2025-11-29 07:26:26.553728563 +0000 UTC m=+0.168525413 container attach 6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 02:26:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:27.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:26:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:26:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:26:27 np0005539550 python3.9[142592]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:27 np0005539550 ovs-vsctl[142599]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
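The pair of lines above records the deployment pushing the OVN chassis settings into the local Open_vSwitch table. A short verification sketch (assuming ovs-vsctl is on PATH and ovsdb-server is reachable locally) reads the same column back:

    import subprocess

    # Returns the whole external_ids map as one literal, e.g.
    # {hostname="compute-0.ctlplane.example.com", ovn-encap-ip="172.19.0.100", ...}
    out = subprocess.run(
        ["ovs-vsctl", "get", "Open_vSwitch", ".", "external_ids"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    for key in ("ovn-encap-ip", "ovn-remote", "ovn-bridge-mappings"):
        assert key in out, f"{key} missing from external_ids: {out}"
    print(out)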
Nov 29 02:26:27 np0005539550 angry_raman[142458]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:26:27 np0005539550 angry_raman[142458]: --> relative data size: 1.0
Nov 29 02:26:27 np0005539550 angry_raman[142458]: --> All data devices are unavailable
Nov 29 02:26:27 np0005539550 systemd[1]: libpod-6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05.scope: Deactivated successfully.
Nov 29 02:26:27 np0005539550 podman[142441]: 2025-11-29 07:26:27.403010094 +0000 UTC m=+1.017806944 container died 6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:26:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6d9c22d0ec0e0078b2ccd1ffcccbef395415bfb9351e2c91eb58a3532537e99d-merged.mount: Deactivated successfully.
Nov 29 02:26:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:27 np0005539550 podman[142441]: 2025-11-29 07:26:27.467853654 +0000 UTC m=+1.082650504 container remove 6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_raman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:26:27 np0005539550 systemd[1]: libpod-conmon-6a9c09cdaed71aa2e2d6ac1de84188ea2ad2326f4fba2b838497b8d536882f05.scope: Deactivated successfully.
Nov 29 02:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:28 np0005539550 podman[142911]: 2025-11-29 07:26:28.059765125 +0000 UTC m=+0.066834782 container create fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:26:28 np0005539550 python3.9[142894]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:28 np0005539550 systemd[1]: Started libpod-conmon-fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc.scope.
Nov 29 02:26:28 np0005539550 podman[142911]: 2025-11-29 07:26:28.017150107 +0000 UTC m=+0.024219784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:26:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:28 np0005539550 podman[142911]: 2025-11-29 07:26:28.222813778 +0000 UTC m=+0.229883455 container init fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:26:28 np0005539550 podman[142911]: 2025-11-29 07:26:28.230681327 +0000 UTC m=+0.237750984 container start fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:26:28 np0005539550 podman[142911]: 2025-11-29 07:26:28.236244947 +0000 UTC m=+0.243314624 container attach fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:26:28 np0005539550 eloquent_carson[142930]: 167 167
Nov 29 02:26:28 np0005539550 systemd[1]: libpod-fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc.scope: Deactivated successfully.
Nov 29 02:26:28 np0005539550 podman[142911]: 2025-11-29 07:26:28.238489364 +0000 UTC m=+0.245559041 container died fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:26:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:28.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9f90a6e777b8dc46aae54a0cf5dab90d069dce803c3f163d9c8c16d731ae0139-merged.mount: Deactivated successfully.
Nov 29 02:26:28 np0005539550 podman[142911]: 2025-11-29 07:26:28.355976036 +0000 UTC m=+0.363045693 container remove fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:26:28 np0005539550 systemd[1]: libpod-conmon-fdd2cf4d7da18a98848678b9f895d9d9cba766950af3afba8344590bac758dcc.scope: Deactivated successfully.
Nov 29 02:26:28 np0005539550 podman[143030]: 2025-11-29 07:26:28.517130512 +0000 UTC m=+0.050656863 container create c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_taussig, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:26:28 np0005539550 systemd[1]: Started libpod-conmon-c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109.scope.
Nov 29 02:26:28 np0005539550 podman[143030]: 2025-11-29 07:26:28.491788341 +0000 UTC m=+0.025314712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:26:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e082abd0b80c66d4e72007ce6e0e9c6738b8582aa9dc133a5de2407cd4734887/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e082abd0b80c66d4e72007ce6e0e9c6738b8582aa9dc133a5de2407cd4734887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e082abd0b80c66d4e72007ce6e0e9c6738b8582aa9dc133a5de2407cd4734887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e082abd0b80c66d4e72007ce6e0e9c6738b8582aa9dc133a5de2407cd4734887/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:28 np0005539550 podman[143030]: 2025-11-29 07:26:28.624882567 +0000 UTC m=+0.158408928 container init c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:26:28 np0005539550 podman[143030]: 2025-11-29 07:26:28.662724514 +0000 UTC m=+0.196250845 container start c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:26:28 np0005539550 podman[143030]: 2025-11-29 07:26:28.667393382 +0000 UTC m=+0.200919813 container attach c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:26:28 np0005539550 python3.9[143126]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:26:28 np0005539550 ovs-vsctl[143127]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Nov 29 02:26:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:29.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]: {
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:    "0": [
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:        {
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "devices": [
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "/dev/loop3"
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            ],
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "lv_name": "ceph_lv0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "lv_size": "7511998464",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "name": "ceph_lv0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "tags": {
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.cluster_name": "ceph",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.crush_device_class": "",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.encrypted": "0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.osd_id": "0",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.type": "block",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:                "ceph.vdo": "0"
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            },
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "type": "block",
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:            "vg_name": "ceph_vg0"
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:        }
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]:    ]
Nov 29 02:26:29 np0005539550 hungry_taussig[143074]: }
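The JSON emitted by hungry_taussig above is the per-OSD LVM inventory for this host: one top-level key per OSD id, each holding a list of logical-volume records. A minimal sketch, with the payload trimmed to the fields it actually uses, flattens that structure into per-OSD facts:

    import json

    # Reduced copy of the structure printed above.
    raw = '''{
      "0": [
        {"lv_path": "/dev/ceph_vg0/ceph_lv0",
         "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
                  "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
                  "ceph.type": "block"}}
      ]
    }'''

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")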
Nov 29 02:26:29 np0005539550 systemd[1]: libpod-c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109.scope: Deactivated successfully.
Nov 29 02:26:29 np0005539550 podman[143030]: 2025-11-29 07:26:29.499537939 +0000 UTC m=+1.033064300 container died c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:26:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e082abd0b80c66d4e72007ce6e0e9c6738b8582aa9dc133a5de2407cd4734887-merged.mount: Deactivated successfully.
Nov 29 02:26:29 np0005539550 podman[143030]: 2025-11-29 07:26:29.562162933 +0000 UTC m=+1.095689274 container remove c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:26:29 np0005539550 systemd[1]: libpod-conmon-c96a3502a5e4dd39f79ca8c9f41655c9a1d18d7c780a17d253c55ab9ea2ab109.scope: Deactivated successfully.
Nov 29 02:26:30 np0005539550 podman[143387]: 2025-11-29 07:26:30.146191075 +0000 UTC m=+0.040172657 container create 465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:26:30 np0005539550 systemd[1]: Started libpod-conmon-465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf.scope.
Nov 29 02:26:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:26:30 np0005539550 podman[143387]: 2025-11-29 07:26:30.209700411 +0000 UTC m=+0.103682033 container init 465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:26:30 np0005539550 podman[143387]: 2025-11-29 07:26:30.216711859 +0000 UTC m=+0.110693441 container start 465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:26:30 np0005539550 exciting_kalam[143427]: 167 167
Nov 29 02:26:30 np0005539550 systemd[1]: libpod-465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf.scope: Deactivated successfully.
Nov 29 02:26:30 np0005539550 podman[143387]: 2025-11-29 07:26:30.128893498 +0000 UTC m=+0.022875100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:30 np0005539550 podman[143387]: 2025-11-29 07:26:30.224893056 +0000 UTC m=+0.118874658 container attach 465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:26:30 np0005539550 podman[143387]: 2025-11-29 07:26:30.225214814 +0000 UTC m=+0.119196396 container died 465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:26:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6794f79d107408a71eaaa160a9ea230e3cd38028e52e23dd781ba3876373a005-merged.mount: Deactivated successfully.
Nov 29 02:26:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:30.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:30 np0005539550 podman[143387]: 2025-11-29 07:26:30.269117884 +0000 UTC m=+0.163099466 container remove 465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:26:30 np0005539550 systemd[1]: libpod-conmon-465996fd13dab6c50743c0786fcabb7476311ac4d26baaa821250db574b877cf.scope: Deactivated successfully.
Nov 29 02:26:30 np0005539550 podman[143479]: 2025-11-29 07:26:30.422581406 +0000 UTC m=+0.046548789 container create f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_einstein, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:26:30 np0005539550 python3.9[143470]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:26:30 np0005539550 systemd[1]: Started libpod-conmon-f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7.scope.
Nov 29 02:26:30 np0005539550 podman[143479]: 2025-11-29 07:26:30.402473247 +0000 UTC m=+0.026440650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:26:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:26:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c1b00bfc2a27b2a56191c8ecbc837283c4157582322a326401d062ef7464d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c1b00bfc2a27b2a56191c8ecbc837283c4157582322a326401d062ef7464d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c1b00bfc2a27b2a56191c8ecbc837283c4157582322a326401d062ef7464d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1c1b00bfc2a27b2a56191c8ecbc837283c4157582322a326401d062ef7464d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:26:30 np0005539550 podman[143479]: 2025-11-29 07:26:30.520109082 +0000 UTC m=+0.144076465 container init f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:26:30 np0005539550 podman[143479]: 2025-11-29 07:26:30.52872356 +0000 UTC m=+0.152690943 container start f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:26:30 np0005539550 podman[143479]: 2025-11-29 07:26:30.532644979 +0000 UTC m=+0.156612382 container attach f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_einstein, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:26:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:31.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:31 np0005539550 python3.9[143656]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]: {
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]:        "osd_id": 0,
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]:        "type": "bluestore"
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]:    }
Nov 29 02:26:31 np0005539550 trusting_einstein[143497]: }
Nov 29 02:26:31 np0005539550 systemd[1]: libpod-f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7.scope: Deactivated successfully.
Nov 29 02:26:31 np0005539550 podman[143479]: 2025-11-29 07:26:31.43489401 +0000 UTC m=+1.058861423 container died f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:26:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-be1c1b00bfc2a27b2a56191c8ecbc837283c4157582322a326401d062ef7464d-merged.mount: Deactivated successfully.
Nov 29 02:26:31 np0005539550 podman[143479]: 2025-11-29 07:26:31.511012055 +0000 UTC m=+1.134979438 container remove f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_einstein, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:26:31 np0005539550 systemd[1]: libpod-conmon-f2849e5fbba17c6ec5017f78e459769000e14d957478ca0d45e1e2b0f84e6ab7.scope: Deactivated successfully.
Nov 29 02:26:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:26:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:32.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:32 np0005539550 python3.9[143835]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:32 np0005539550 python3.9[143915]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:33.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:26:33 np0005539550 python3.9[144067]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:34 np0005539550 python3.9[144145]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:34.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:26:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:35.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:26:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.9 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 02:26:35 np0005539550 python3.9[144300]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7bafe198-072b-4bbe-98ea-3c543caf65e4 does not exist
Nov 29 02:26:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 912b0629-a2a6-46ad-99a8-53dfb2062eb2 does not exist
Nov 29 02:26:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 693f2e5e-3cc0-4143-be3d-c37e93fa6ef1 does not exist
Nov 29 02:26:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:36.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 02:26:36 np0005539550 python3.9[144505]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:36 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:36 np0005539550 python3.9[144583]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:26:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 02:26:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:37.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.9 KiB/s rd, 0 B/s wr, 11 op/s
Nov 29 02:26:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:38.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:38 np0005539550 python3.9[144740]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:26:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:39.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:39 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:26:39 np0005539550 python3.9[144818]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Nov 29 02:26:39 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 02:26:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:40.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:40 np0005539550 python3.9[144971]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:26:40 np0005539550 systemd[1]: Reloading.
Nov 29 02:26:40 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:26:40 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:26:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:41.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 29 02:26:41 np0005539550 python3.9[145162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:42 np0005539550 python3.9[145241]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:42.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:42 np0005539550 python3.9[145393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:43.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:43 np0005539550 python3.9[145473]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Nov 29 02:26:44 np0005539550 python3.9[145626]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:26:44 np0005539550 systemd[1]: Reloading.
Nov 29 02:26:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:44 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:26:44 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:26:44 np0005539550 systemd[1]: Starting Create netns directory...
Nov 29 02:26:44 np0005539550 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:26:44 np0005539550 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:26:44 np0005539550 systemd[1]: Finished Create netns directory.
Nov 29 02:26:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:45.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:45 np0005539550 python3.9[145872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Nov 29 02:26:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:46 np0005539550 python3.9[146027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:46.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:46 np0005539550 python3.9[146150]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401205.8073215-1369-100801087544630/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:47.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Nov 29 02:26:48 np0005539550 python3.9[146305]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:26:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:48.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:26:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:49.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:26:49 np0005539550 python3.9[146459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:26:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Nov 29 02:26:49 np0005539550 python3.9[146582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401208.6607704-1444-202971314503234/.source.json _original_basename=.x7_k21ko follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:50.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:50 np0005539550 python3.9[146735]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:26:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:51.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Nov 29 02:26:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:52.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:53.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:53 np0005539550 python3.9[147167]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 02:26:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 64 op/s
Nov 29 02:26:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:54.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:55.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:55 np0005539550 python3.9[147322]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:26:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 0 B/s wr, 67 op/s
Nov 29 02:26:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:26:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:56.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:56 np0005539550 python3.9[147477]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:26:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:26:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:57.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:26:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Nov 29 02:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:58 np0005539550 python3[147655]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:26:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:26:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:26:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:26:58.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:26:59
Nov 29 02:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'images']
Nov 29 02:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:26:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:26:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:26:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:26:59.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:26:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Nov 29 02:27:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:00.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:01.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Nov 29 02:27:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:02.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:03.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Nov 29 02:27:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:05.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Nov 29 02:27:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:06.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:07.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Nov 29 02:27:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:27:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:27:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:27:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:27:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:27:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:27:09 np0005539550 podman[147669]: 2025-11-29 07:27:09.122223568 +0000 UTC m=+11.064893644 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:09.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:09 np0005539550 podman[147854]: 2025-11-29 07:27:09.281388285 +0000 UTC m=+0.050015903 container create a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:27:09 np0005539550 podman[147854]: 2025-11-29 07:27:09.254357593 +0000 UTC m=+0.022985241 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:09 np0005539550 python3[147655]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 02:27:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 02:27:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:10.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:11 np0005539550 python3.9[148047]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:27:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:11.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 14 op/s
Nov 29 02:27:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:12.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:12 np0005539550 python3.9[148204]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:27:12 np0005539550 python3.9[148280]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:27:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:13.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 02:27:13 np0005539550 python3.9[148433]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401233.077927-1708-57851948701560/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:27:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:14.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:14 np0005539550 python3.9[148510]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:27:14 np0005539550 systemd[1]: Reloading.
Nov 29 02:27:14 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:14 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:15.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:15 np0005539550 python3.9[148623]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:27:15 np0005539550 systemd[1]: Reloading.
Nov 29 02:27:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 11 op/s
Nov 29 02:27:15 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:15 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:15 np0005539550 systemd[1]: Starting ovn_controller container...
Nov 29 02:27:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:27:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051833e5ac2f729981fb269ebabe1a5a80ccbeff1923e7a19f8c8859d90ea82b/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 02:27:16 np0005539550 systemd[1]: Started /usr/bin/podman healthcheck run a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6.
Nov 29 02:27:16 np0005539550 podman[148665]: 2025-11-29 07:27:16.177532792 +0000 UTC m=+0.304282931 container init a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + sudo -E kolla_set_configs
Nov 29 02:27:16 np0005539550 podman[148665]: 2025-11-29 07:27:16.211928421 +0000 UTC m=+0.338678530 container start a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:27:16 np0005539550 edpm-start-podman-container[148665]: ovn_controller
Nov 29 02:27:16 np0005539550 systemd[1]: Created slice User Slice of UID 0.
Nov 29 02:27:16 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 02:27:16 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 02:27:16 np0005539550 systemd[1]: Starting User Manager for UID 0...
Nov 29 02:27:16 np0005539550 edpm-start-podman-container[148664]: Creating additional drop-in dependency for "ovn_controller" (a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6)
Nov 29 02:27:16 np0005539550 systemd[1]: Reloading.
Nov 29 02:27:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:16.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:16 np0005539550 podman[148687]: 2025-11-29 07:27:16.331727464 +0000 UTC m=+0.108516571 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 02:27:16 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:27:16 np0005539550 systemd[148718]: Queued start job for default target Main User Target.
Nov 29 02:27:16 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:27:16 np0005539550 systemd[148718]: Created slice User Application Slice.
Nov 29 02:27:16 np0005539550 systemd[148718]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 02:27:16 np0005539550 systemd[148718]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:27:16 np0005539550 systemd[148718]: Reached target Paths.
Nov 29 02:27:16 np0005539550 systemd[148718]: Reached target Timers.
Nov 29 02:27:16 np0005539550 systemd[148718]: Starting D-Bus User Message Bus Socket...
Nov 29 02:27:16 np0005539550 systemd[148718]: Starting Create User's Volatile Files and Directories...
Nov 29 02:27:16 np0005539550 systemd[148718]: Finished Create User's Volatile Files and Directories.
Nov 29 02:27:16 np0005539550 systemd[148718]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:27:16 np0005539550 systemd[148718]: Reached target Sockets.
Nov 29 02:27:16 np0005539550 systemd[148718]: Reached target Basic System.
Nov 29 02:27:16 np0005539550 systemd[148718]: Reached target Main User Target.
Nov 29 02:27:16 np0005539550 systemd[148718]: Startup finished in 146ms.
Nov 29 02:27:16 np0005539550 systemd[1]: Started User Manager for UID 0.
Nov 29 02:27:16 np0005539550 systemd[1]: Started ovn_controller container.
Nov 29 02:27:16 np0005539550 systemd[1]: a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6-41c11dfb2ccb0512.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:27:16 np0005539550 systemd[1]: a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6-41c11dfb2ccb0512.service: Failed with result 'exit-code'.
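The failed transient unit a934…-41c11dfb2ccb0512.service is the systemd-driven `podman healthcheck run` for this container. Exiting 1 here only means the first probe fired while the container was still starting (health_status=starting, failing streak 1 at 02:27:16); the probe at 02:27:46 further below reports healthy. To check by hand (note the inspect key name varies across podman versions):

  # Run the probe manually; exit status 0 once the container is healthy:
  podman healthcheck run ovn_controller; echo $?
  # Read the recorded health state ('.State.Health' on newer podman releases):
  podman inspect --format '{{.State.Healthcheck.Status}}' ovn_controller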
Nov 29 02:27:16 np0005539550 systemd[1]: Started Session c1 of User root.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: INFO:__main__:Validating config file
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: INFO:__main__:Writing out command to execute
Nov 29 02:27:16 np0005539550 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: ++ cat /run_command
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + ARGS=
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + sudo kolla_copy_cacerts
Nov 29 02:27:16 np0005539550 systemd[1]: Started Session c2 of User root.
Nov 29 02:27:16 np0005539550 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + [[ ! -n '' ]]
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + . kolla_extend_start
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + umask 0022
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
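The ovn_controller lines above are the bash -x trace of the kolla start wrapper. Condensed into plain shell, reconstructed from the trace itself rather than from the kolla sources, the control flow is:

  sudo -E kolla_set_configs          # lay down the files described by config.json
  CMD="$(cat /run_command)"          # command that kolla_set_configs wrote out
  ARGS=""
  sudo kolla_copy_cacerts            # merge the mounted CA bundle into the trust store
  [[ ! -n "$ARGS" ]] && . kolla_extend_start
  echo "Running command: '$CMD'"
  umask 0022
  exec $CMD $ARGS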
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00003|main|INFO|OVN internal version is: [24.03.7-20.33.0-76.8]
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.8444] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.8453] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.8467] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.8475] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.8480] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:27:16 np0005539550 kernel: br-int: entered promiscuous mode
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
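At this point ovn-controller holds its three working connections: the local ovsdb over unix:/run/openvswitch/db.sock, the southbound DB over SSL using the key/cert/CA passed on the exec line, and OpenFlow to br-int.mgmt. Two probe commands for a node in this state (the ovn-appctl subcommand name is per the ovn-controller man page; confirm it on the installed version):

  ovn-appctl -t ovn-controller connection-status        # state of the SB DB connection
  ovs-vsctl get Open_vSwitch . external_ids:ovn-remote  # configured SB endpoint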
Nov 29 02:27:16 np0005539550 systemd-udevd[148814]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:16Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.9765] manager: (ovn-479f96-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.9779] manager: (ovn-a37d86-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 02:27:16 np0005539550 NetworkManager[49039]: <info>  [1764401236.9790] manager: (ovn-755ad2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 02:27:16 np0005539550 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 02:27:17 np0005539550 NetworkManager[49039]: <info>  [1764401236.9998] device (genev_sys_6081): carrier: link connected
Nov 29 02:27:17 np0005539550 NetworkManager[49039]: <info>  [1764401237.0001] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 02:27:17 np0005539550 systemd-udevd[148831]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:27:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:17.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 02:27:17 np0005539550 python3.9[148945]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:17 np0005539550 ovs-vsctl[148946]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 02:27:18 np0005539550 python3.9[149099]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:18 np0005539550 ovs-vsctl[149101]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
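The ERR above is benign: the playbook reads external_ids:ovn-cms-options before clearing it, and ovs-vsctl get on an absent key logs this error and exits non-zero. The tolerant variants, using the same CLI as the tasks in this run:

  # Read without erroring when the key is absent (prints nothing instead):
  ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
  # remove, as used for hw-offload above, already tolerates a missing key:
  ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options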
Nov 29 02:27:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:18.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:19.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
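The per-pool "pg target" values above are consistent with usage_ratio × bias × (mon_target_pg_per_osd × OSD count): with the 3 OSDs this cluster reports and the upstream default mon_target_pg_per_osd of 100, the root's budget is about 300 PGs, and every target then quantizes up to the pool's current pg_num, so nothing is resized. The 300 figure is an inference from this log, not read from the cluster:

  # .mgr:               2.0538165363856318e-05 * 1.0 * 300 = 0.0061614...
  # cephfs.cephfs.meta: 1.4540294062907128e-06 * 4.0 * 300 = 0.0017448...
  python3 -c 'print(2.0538165363856318e-05 * 1.0 * 300)'
  ceph config get mon mon_target_pg_per_osd   # confirm the assumed default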
Nov 29 02:27:19 np0005539550 python3.9[149256]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:27:19 np0005539550 ovs-vsctl[149257]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 29 02:27:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Nov 29 02:27:19 np0005539550 systemd[1]: session-47.scope: Deactivated successfully.
Nov 29 02:27:19 np0005539550 systemd[1]: session-47.scope: Consumed 57.649s CPU time.
Nov 29 02:27:19 np0005539550 systemd-logind[788]: Session 47 logged out. Waiting for processes to exit.
Nov 29 02:27:19 np0005539550 systemd-logind[788]: Removed session 47.
Nov 29 02:27:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:20.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:21.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 0 B/s wr, 5 op/s
Nov 29 02:27:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:23.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:27:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:25.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:25 np0005539550 systemd-logind[788]: New session 49 of user zuul.
Nov 29 02:27:25 np0005539550 systemd[1]: Started Session 49 of User zuul.
Nov 29 02:27:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:26 np0005539550 python3.9[149497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:27:26 np0005539550 systemd[1]: Stopping User Manager for UID 0...
Nov 29 02:27:26 np0005539550 systemd[148718]: Activating special unit Exit the Session...
Nov 29 02:27:26 np0005539550 systemd[148718]: Stopped target Main User Target.
Nov 29 02:27:26 np0005539550 systemd[148718]: Stopped target Basic System.
Nov 29 02:27:26 np0005539550 systemd[148718]: Stopped target Paths.
Nov 29 02:27:26 np0005539550 systemd[148718]: Stopped target Sockets.
Nov 29 02:27:26 np0005539550 systemd[148718]: Stopped target Timers.
Nov 29 02:27:26 np0005539550 systemd[148718]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 02:27:26 np0005539550 systemd[148718]: Closed D-Bus User Message Bus Socket.
Nov 29 02:27:26 np0005539550 systemd[148718]: Stopped Create User's Volatile Files and Directories.
Nov 29 02:27:26 np0005539550 systemd[148718]: Removed slice User Application Slice.
Nov 29 02:27:26 np0005539550 systemd[148718]: Reached target Shutdown.
Nov 29 02:27:26 np0005539550 systemd[148718]: Finished Exit the Session.
Nov 29 02:27:26 np0005539550 systemd[148718]: Reached target Exit the Session.
Nov 29 02:27:26 np0005539550 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 02:27:26 np0005539550 systemd[1]: Stopped User Manager for UID 0.
Nov 29 02:27:26 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 02:27:26 np0005539550 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 02:27:26 np0005539550 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 02:27:26 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 02:27:26 np0005539550 systemd[1]: Removed slice User Slice of UID 0.
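The user@0.service instance started at 02:27:16 existed only to back the root sudo sessions from the kolla start script; with lingering disabled for root, systemd tears the manager, runtime directory, and slice down once the last session closes, as shown above. If the manager were wanted to outlive its sessions:

  loginctl enable-linger root                 # keep a per-user manager running
  loginctl show-user root --property=Linger   # verify (fails if root has no user record)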
Nov 29 02:27:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:27.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:27 np0005539550 auditd[706]: Audit daemon rotating log files
Nov 29 02:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:27 np0005539550 python3.9[149656]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:28.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:28 np0005539550 python3.9[149809]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:29 np0005539550 python3.9[149963]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:29.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:29 np0005539550 python3.9[150115]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:30 np0005539550 python3.9[150268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
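The four ansible.builtin.file tasks above (together with the config-data directory created at 02:27:27) build the neutron state tree with the container_file_t label. A rough shell equivalent, with the caveat that ansible applies the label per path rather than recursively:

  install -d -o zuul -g zuul -m 0755 \
      /var/lib/neutron /var/lib/neutron/kill_scripts \
      /var/lib/neutron/ovn-metadata-proxy /var/lib/neutron/external/pids
  chcon -R -t container_file_t /var/lib/neutron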
Nov 29 02:27:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:32 np0005539550 python3.9[150420]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:27:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:32 np0005539550 python3.9[150575]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
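The seboolean task above maps directly onto the SELinux CLI; persistent=True is the -P flag:

  setsebool -P virt_sandbox_use_netlink on
  getsebool virt_sandbox_use_netlink      # verify: "--> on"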
Nov 29 02:27:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:34.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:34 np0005539550 python3.9[150729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:35 np0005539550 python3.9[150851]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401253.8337057-223-148697340072372/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:37.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:38.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:39.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:40 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:27:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:40.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:40 np0005539550 python3.9[151137]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:41.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:42.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:43.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:44 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:27:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:44.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:45.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:46 np0005539550 python3.9[151265]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401257.181088-268-226796044146166/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:46 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:46Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Nov 29 02:27:46 np0005539550 ovn_controller[148680]: 2025-11-29T07:27:46Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Nov 29 02:27:46 np0005539550 podman[151451]: 2025-11-29 07:27:46.831311521 +0000 UTC m=+0.097373479 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Nov 29 02:27:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:47.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:48 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:27:48 np0005539550 python3.9[151498]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:27:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:48 np0005539550 python3.9[151592]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
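The ansible.legacy.dnf task above with state=present is equivalent to the following; since OVS is clearly already running on this node, the task presumably reports no change:

  dnf install -y openvswitch
  rpm -q openvswitch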
Nov 29 02:27:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:49.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:50.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:50 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf MDS connection to Monitors appears to be laggy; 18.3195s since last acked beacon
Nov 29 02:27:50 np0005539550 ceph-mds[93677]: mds.0.10 skipping upkeep work because connection to Monitors appears laggy
Nov 29 02:27:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:27:51 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(31) init, last seen epoch 31, mid-election, bumping
Nov 29 02:27:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:51.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:27:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:27:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:27:52 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:27:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:52.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 14m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:27:52 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf  MDS is no longer laggy
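The beacon chatter that began at 02:27:40 resolves here. The MDS sends a beacon every mds_beacon_interval (4 s upstream default) and declares its monitor session laggy once the last ack is older than mds_beacon_grace (15 s default), which matches the "18.3195s since last acked beacon" line; the monitor election at 02:27:51 restores quorum and acks resume. The defaults are assumptions worth confirming on this cluster:

  ceph config get mds mds_beacon_interval   # upstream default: 4
  ceph config get mds mds_beacon_grace      # upstream default: 15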
Nov 29 02:27:52 np0005539550 python3.9[151751]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
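And the systemd task above is the usual enable-and-start pair:

  systemctl enable --now openvswitch.service
  systemctl is-enabled openvswitch.service && systemctl is-active openvswitch.service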
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:27:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:27:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:27:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:53.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:27:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:53 np0005539550 python3.9[151904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:54 np0005539550 python3.9[152028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401273.1801221-379-161837611659317/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:54.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:54 np0005539550 python3.9[152178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:55.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:55 np0005539550 python3.9[152299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401274.3587263-379-163382854524558/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:27:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:27:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:56.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:27:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 26d698ed-47b0-4688-b8b7-115498424be2 does not exist
Nov 29 02:27:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fbac032f-8c94-44ec-a9e2-bd293ea961d2 does not exist
Nov 29 02:27:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9ec86e1b-4276-47da-85ed-7f0260c4ab3e does not exist
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:27:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:27:57 np0005539550 python3.9[152554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:57 np0005539550 podman[152610]: 2025-11-29 07:27:57.117897648 +0000 UTC m=+0.023614747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:27:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:27:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:57.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:27:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:27:57 np0005539550 python3.9[152727]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401276.5330703-511-270847149826076/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
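The stat/copy pairs above are ansible's usual idempotent file deployment: ansible.legacy.stat first reports a SHA-1 of the destination, and ansible.legacy.copy only rewrites the file when that checksum differs from the source's (e.g. ca7d4d15… for 10-neutron-metadata.conf). A sketch of computing the same checksum locally, assuming only that the destination path from the log exists:

    import hashlib

    def sha1_of(path):
        # Stream the file so large configs are not read into memory at once.
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    # print(sha1_of('/var/lib/config-data/ansible-generated/'
    #               'neutron-ovn-metadata-agent/10-neutron-metadata.conf'))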
Nov 29 02:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:57 np0005539550 podman[152610]: 2025-11-29 07:27:57.882567711 +0000 UTC m=+0.788284790 container create 2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:27:58 np0005539550 python3.9[152878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:27:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:27:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:27:58 np0005539550 ceph-mon[74435]: mon.compute-1 calling monitor election
Nov 29 02:27:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:27:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:27:58 np0005539550 systemd[1]: Started libpod-conmon-2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6.scope.
Nov 29 02:27:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:27:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:27:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:27:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:27:58.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:27:58 np0005539550 python3.9[153004]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401277.6978898-511-1892437217681/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:00 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:27:59
Nov 29 02:28:00 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:28:00 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:28:00 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'vms', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Nov 29 02:28:00 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
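The five balancer lines above are one optimizer pass: the mgr balancer module builds plan auto_2025-11-29_07:27:59 in upmap mode with a 5% misplaced ceiling, walks the listed pools, and prepares 0 of at most 10 upmap changes, i.e. placement is already even. A sketch of checking the same state from the CLI, assuming a local ceph binary with admin credentials:

    import json
    import subprocess

    # Query the balancer the mgr module above is logging about.
    out = subprocess.run(
        ['ceph', 'balancer', 'status', '--format', 'json'],
        capture_output=True, text=True, check=True,
    )
    status = json.loads(out.stdout)
    print(status.get('mode'), status.get('active'))   # e.g. upmap True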
Nov 29 02:28:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:27:59.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:00.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:00 np0005539550 podman[152610]: 2025-11-29 07:28:00.621215526 +0000 UTC m=+3.526932635 container init 2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:28:00 np0005539550 podman[152610]: 2025-11-29 07:28:00.630097668 +0000 UTC m=+3.535814767 container start 2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:28:00 np0005539550 systemd[1]: libpod-2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6.scope: Deactivated successfully.
Nov 29 02:28:00 np0005539550 confident_khorana[152904]: 167 167
Nov 29 02:28:00 np0005539550 conmon[152904]: conmon 2ffffa6910bed3ffc6db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6.scope/container/memory.events
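The conmon warning above means it could not open the container's cgroup-v2 memory.events file, most likely because this short-lived helper container had already exited and its scope was cleaned up before conmon began watching for OOM events. memory.events is a flat key/value file; a reading sketch, with a hypothetical path standing in for the scope from the log:

    def read_memory_events(path):
        # Lines look like 'oom_kill 0'; parse them into {name: count}.
        events = {}
        with open(path) as f:
            for line in f:
                key, value = line.split()
                events[key] = int(value)
        return events

    # read_memory_events('/sys/fs/cgroup/machine.slice/<scope>/container/memory.events')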
Nov 29 02:28:00 np0005539550 python3.9[153169]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:28:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:01.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:01 np0005539550 podman[152610]: 2025-11-29 07:28:01.345363471 +0000 UTC m=+4.251080570 container attach 2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:28:01 np0005539550 podman[152610]: 2025-11-29 07:28:01.34691792 +0000 UTC m=+4.252634989 container died 2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:28:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:01 np0005539550 python3.9[153325]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:02 np0005539550 python3.9[153481]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:02.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c3519810d1afe1750ce10f55054417b0eddf8e88b8e90b54b729ad0b0a6b2398-merged.mount: Deactivated successfully.
Nov 29 02:28:02 np0005539550 python3.9[153559]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:03.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:03 np0005539550 python3.9[153713]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:04 np0005539550 python3.9[153791]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:04.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:04 np0005539550 podman[152610]: 2025-11-29 07:28:04.638795769 +0000 UTC m=+7.544512848 container remove 2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:28:04 np0005539550 systemd[1]: libpod-conmon-2ffffa6910bed3ffc6db243034040d535294bc5d25d57e20a6c62afe8afa47c6.scope: Deactivated successfully.
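The podman lines in this section trace each helper container through the same lifecycle: image pull, create, init, start, attach, died, remove, with the matching libpod/conmon scopes started and deactivated by systemd. The same sequence can be observed from podman's event stream; a sketch, assuming a local podman:

    import subprocess

    # Print recent container events in the order seen above
    # (create/init/start/attach/died/remove) and exit instead of streaming.
    subprocess.run(
        ['podman', 'events', '--since', '5m', '--stream=false',
         '--filter', 'type=container',
         '--format', '{{.Time}} {{.Status}} {{.Name}}'],
        check=False,
    )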
Nov 29 02:28:04 np0005539550 podman[153954]: 2025-11-29 07:28:04.794484431 +0000 UTC m=+0.023381434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:04 np0005539550 podman[153954]: 2025-11-29 07:28:04.953788592 +0000 UTC m=+0.182685575 container create bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:28:04 np0005539550 python3.9[153951]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:05 np0005539550 systemd[1]: Started libpod-conmon-bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0.scope.
Nov 29 02:28:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:28:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a342297555dc693f6a678a98007ce86af312928be960159c853f73bdfe2d8c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a342297555dc693f6a678a98007ce86af312928be960159c853f73bdfe2d8c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a342297555dc693f6a678a98007ce86af312928be960159c853f73bdfe2d8c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a342297555dc693f6a678a98007ce86af312928be960159c853f73bdfe2d8c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a342297555dc693f6a678a98007ce86af312928be960159c853f73bdfe2d8c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
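The xfs messages above fire when an overlay path is remounted read-write from a filesystem formatted without the bigtime feature: its 32-bit inode timestamps top out at 0x7fffffff seconds past the epoch. That constant is the classic y2038 boundary:

    from datetime import datetime, timezone

    # 0x7fffffff from the kernel messages above, as a UTC date:
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00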
Nov 29 02:28:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:05.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:05 np0005539550 podman[153954]: 2025-11-29 07:28:05.54610745 +0000 UTC m=+0.775004453 container init bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_snyder, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:28:05 np0005539550 podman[153954]: 2025-11-29 07:28:05.554794317 +0000 UTC m=+0.783691310 container start bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:28:05 np0005539550 podman[153954]: 2025-11-29 07:28:05.626949716 +0000 UTC m=+0.855846719 container attach bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:28:06 np0005539550 python3.9[154178]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:06 np0005539550 nervous_snyder[153971]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:28:06 np0005539550 nervous_snyder[153971]: --> relative data size: 1.0
Nov 29 02:28:06 np0005539550 nervous_snyder[153971]: --> All data devices are unavailable
Nov 29 02:28:06 np0005539550 systemd[1]: libpod-bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0.scope: Deactivated successfully.
Nov 29 02:28:06 np0005539550 podman[153954]: 2025-11-29 07:28:06.443934597 +0000 UTC m=+1.672831660 container died bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:28:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:06.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:06 np0005539550 python3.9[154268]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:07.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:07 np0005539550 python3.9[154431]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:07 np0005539550 python3.9[154512]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:28:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:28:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:28:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:28:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:28:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:08.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
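The rbd_support lines show the mgr module reloading its MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler state: for each of the four RBD pools it reloads the schedule list (start_after= is the iteration cursor). A CLI sketch for listing the mirror-snapshot side, assuming a local rbd binary with cluster access:

    import subprocess

    # List per-pool mirror snapshot schedules for the pools named above.
    for pool in ['vms', 'volumes', 'backups', 'images']:
        subprocess.run(
            ['rbd', 'mirror', 'snapshot', 'schedule', 'ls',
             '--pool', pool, '--recursive'],
            check=False,   # prints nothing when no schedules are defined
        )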
Nov 29 02:28:08 np0005539550 python3.9[154665]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:28:08 np0005539550 systemd[1]: Reloading.
Nov 29 02:28:08 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:28:08 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
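The Reloading block above is the ansible.builtin.systemd task just before it taking effect: daemon_reload=True triggers a full unit reload, during which the generators note that the legacy SysV network script gets a compatibility unit and that rc.local is skipped for not being executable. The task's net effect, sketched as the equivalent systemctl calls:

    import subprocess

    # Equivalent of daemon_reload=True, enabled=True, state=started
    # for the edpm-container-shutdown unit installed above.
    subprocess.run(['systemctl', 'daemon-reload'], check=True)
    subprocess.run(['systemctl', 'enable', '--now',
                    'edpm-container-shutdown'], check=True)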
Nov 29 02:28:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3a342297555dc693f6a678a98007ce86af312928be960159c853f73bdfe2d8c9-merged.mount: Deactivated successfully.
Nov 29 02:28:09 np0005539550 podman[153954]: 2025-11-29 07:28:09.327204777 +0000 UTC m=+4.556101760 container remove bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_snyder, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:28:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:09.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:09 np0005539550 systemd[1]: libpod-conmon-bea45b227caa648d02d608738c07be1df5e477992c07e311951714607e38f5a0.scope: Deactivated successfully.
Nov 29 02:28:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:09 np0005539550 python3.9[154956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:10 np0005539550 podman[154999]: 2025-11-29 07:28:09.920294165 +0000 UTC m=+0.021141208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:10 np0005539550 podman[154999]: 2025-11-29 07:28:10.065186748 +0000 UTC m=+0.166033771 container create 93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:28:10 np0005539550 systemd[1]: Started libpod-conmon-93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2.scope.
Nov 29 02:28:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:28:10 np0005539550 python3.9[155090]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:10 np0005539550 podman[154999]: 2025-11-29 07:28:10.55257422 +0000 UTC m=+0.653421273 container init 93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:28:10 np0005539550 podman[154999]: 2025-11-29 07:28:10.561118573 +0000 UTC m=+0.661965596 container start 93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_borg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:28:10 np0005539550 gifted_borg[155093]: 167 167
Nov 29 02:28:10 np0005539550 systemd[1]: libpod-93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2.scope: Deactivated successfully.
Nov 29 02:28:10 np0005539550 podman[154999]: 2025-11-29 07:28:10.894098226 +0000 UTC m=+0.994945249 container attach 93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_borg, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:28:10 np0005539550 podman[154999]: 2025-11-29 07:28:10.894582018 +0000 UTC m=+0.995429041 container died 93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_borg, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Nov 29 02:28:11 np0005539550 python3.9[155262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7eac69d171c6658ecb0af3e0fa4bbc171952c809a37fe2e8f1b9caa5e36ce874-merged.mount: Deactivated successfully.
Nov 29 02:28:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:11.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:11 np0005539550 podman[154999]: 2025-11-29 07:28:11.543378375 +0000 UTC m=+1.644225398 container remove 93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_borg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:28:11 np0005539550 systemd[1]: libpod-conmon-93b0435c44afd9a2b553330c28d6a7315e4c02c7588a5fb8104547b2762850d2.scope: Deactivated successfully.
Nov 29 02:28:11 np0005539550 podman[155349]: 2025-11-29 07:28:11.746849468 +0000 UTC m=+0.088000045 container create 1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:28:11 np0005539550 python3.9[155341]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:11 np0005539550 podman[155349]: 2025-11-29 07:28:11.684405011 +0000 UTC m=+0.025555608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:11 np0005539550 systemd[1]: Started libpod-conmon-1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b.scope.
Nov 29 02:28:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:28:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dca915b070ac20abdb1ccf0cd797a4e9937fefaa6e2397a6adbf37ef429f8a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dca915b070ac20abdb1ccf0cd797a4e9937fefaa6e2397a6adbf37ef429f8a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dca915b070ac20abdb1ccf0cd797a4e9937fefaa6e2397a6adbf37ef429f8a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dca915b070ac20abdb1ccf0cd797a4e9937fefaa6e2397a6adbf37ef429f8a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:12 np0005539550 podman[155349]: 2025-11-29 07:28:12.429327504 +0000 UTC m=+0.770478101 container init 1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:28:12 np0005539550 podman[155349]: 2025-11-29 07:28:12.44523221 +0000 UTC m=+0.786382817 container start 1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:28:12 np0005539550 podman[155349]: 2025-11-29 07:28:12.459765342 +0000 UTC m=+0.800915919 container attach 1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:28:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:12 np0005539550 python3.9[155522]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:28:12 np0005539550 systemd[1]: Reloading.
Nov 29 02:28:12 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:28:12 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:28:12 np0005539550 systemd[1]: Starting Create netns directory...
Nov 29 02:28:12 np0005539550 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:28:12 np0005539550 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:28:12 np0005539550 systemd[1]: Finished Create netns directory.
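netns-placeholder is a oneshot unit ("Create netns directory") that runs and finishes immediately; the run-netns-placeholder.mount message suggests it works by adding a named network namespace, since the kernel pins each named netns with a bind mount under /run/netns. A sketch of that assumed mechanism with iproute2 (requires root; the namespace name is hypothetical):

    import subprocess

    # Assumption: creating any named netns makes the kernel mount
    # /run/netns/<name>, which is the mount unit systemd tracked above.
    subprocess.run(['ip', 'netns', 'add', 'placeholder'], check=True)
    subprocess.run(['ip', 'netns', 'list'], check=True)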
Nov 29 02:28:13 np0005539550 brave_kilby[155390]: {
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:    "0": [
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:        {
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "devices": [
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "/dev/loop3"
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            ],
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "lv_name": "ceph_lv0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "lv_size": "7511998464",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "name": "ceph_lv0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "tags": {
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.cluster_name": "ceph",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.crush_device_class": "",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.encrypted": "0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.osd_id": "0",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.type": "block",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:                "ceph.vdo": "0"
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            },
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "type": "block",
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:            "vg_name": "ceph_vg0"
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:        }
Nov 29 02:28:13 np0005539550 brave_kilby[155390]:    ]
Nov 29 02:28:13 np0005539550 brave_kilby[155390]: }
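The brave_kilby output above is a ceph-volume LVM listing rendered as JSON: a map from OSD id ("0") to the logical volumes backing it, with the ceph.* LV tags carrying the cluster fsid, OSD fsid, and device role. A consuming sketch; raw stands in for the captured container stdout:

    import json

    raw = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "type": "block",
                     "tags": {"ceph.osd_id": "0"}}]}'''
    inventory = json.loads(raw)
    for osd_id, lvs in inventory.items():
        for lv in lvs:
            if lv['type'] == 'block':
                # osd.0 -> /dev/ceph_vg0/ceph_lv0
                print(f"osd.{osd_id} -> {lv['lv_path']}")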
Nov 29 02:28:13 np0005539550 systemd[1]: libpod-1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b.scope: Deactivated successfully.
Nov 29 02:28:13 np0005539550 podman[155349]: 2025-11-29 07:28:13.254940819 +0000 UTC m=+1.596091426 container died 1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 02:28:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:13.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:13 np0005539550 python3.9[155730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:14.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:14 np0005539550 python3.9[155885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9dca915b070ac20abdb1ccf0cd797a4e9937fefaa6e2397a6adbf37ef429f8a6-merged.mount: Deactivated successfully.
Nov 29 02:28:15 np0005539550 python3.9[156009]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401294.1048753-964-136616550870162/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:15.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:15 np0005539550 podman[155349]: 2025-11-29 07:28:15.434127895 +0000 UTC m=+3.775278472 container remove 1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:28:15 np0005539550 systemd[1]: libpod-conmon-1f56488e35e2037cda8684f0ee9b6ef030646ebb821746bee06022233c65348b.scope: Deactivated successfully.
Nov 29 02:28:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:16 np0005539550 podman[156255]: 2025-11-29 07:28:16.149943552 +0000 UTC m=+0.110430855 container create a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:28:16 np0005539550 podman[156255]: 2025-11-29 07:28:16.065623509 +0000 UTC m=+0.026110832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:16 np0005539550 python3.9[156319]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:28:16 np0005539550 systemd[1]: Started libpod-conmon-a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891.scope.
Nov 29 02:28:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:28:16 np0005539550 podman[156255]: 2025-11-29 07:28:16.539479064 +0000 UTC m=+0.499966397 container init a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:28:16 np0005539550 podman[156255]: 2025-11-29 07:28:16.549007152 +0000 UTC m=+0.509494455 container start a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:28:16 np0005539550 crazy_visvesvaraya[156346]: 167 167
Nov 29 02:28:16 np0005539550 systemd[1]: libpod-a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891.scope: Deactivated successfully.
Nov 29 02:28:16 np0005539550 podman[156255]: 2025-11-29 07:28:16.567194886 +0000 UTC m=+0.527682189 container attach a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:28:16 np0005539550 podman[156255]: 2025-11-29 07:28:16.568215261 +0000 UTC m=+0.528702564 container died a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:28:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9c5e721f9bc45bb1208f297d9a074394c7fa0ce917f0dab169dec58cfa749235-merged.mount: Deactivated successfully.
Nov 29 02:28:16 np0005539550 podman[156255]: 2025-11-29 07:28:16.881663697 +0000 UTC m=+0.842151000 container remove a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:28:16 np0005539550 systemd[1]: libpod-conmon-a7758822885fdac1e43ffb26f6ba60ceccc4605e03a6c568cfd4a55782a3b891.scope: Deactivated successfully.
Nov 29 02:28:17 np0005539550 podman[156475]: 2025-11-29 07:28:17.083396236 +0000 UTC m=+0.078759274 container create 1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:28:17 np0005539550 podman[156475]: 2025-11-29 07:28:17.031297037 +0000 UTC m=+0.026660095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:28:17 np0005539550 python3.9[156533]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:28:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:17.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:17 np0005539550 systemd[1]: Started libpod-conmon-1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75.scope.
Nov 29 02:28:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:28:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4231566910c116efd3f9f234967266fa6da88095ac4d4a468eac1cf569582/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4231566910c116efd3f9f234967266fa6da88095ac4d4a468eac1cf569582/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4231566910c116efd3f9f234967266fa6da88095ac4d4a468eac1cf569582/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85d4231566910c116efd3f9f234967266fa6da88095ac4d4a468eac1cf569582/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:28:17 np0005539550 podman[156439]: 2025-11-29 07:28:17.794914907 +0000 UTC m=+0.827466453 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:28:17 np0005539550 podman[156475]: 2025-11-29 07:28:17.810804534 +0000 UTC m=+0.806167602 container init 1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:28:17 np0005539550 podman[156475]: 2025-11-29 07:28:17.820752772 +0000 UTC m=+0.816115810 container start 1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:28:17 np0005539550 podman[156475]: 2025-11-29 07:28:17.825281715 +0000 UTC m=+0.820644783 container attach 1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:28:17 np0005539550 python3.9[156658]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401296.6723516-1039-228265131091763/.source.json _original_basename=.cgb8t8g5 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:18.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]: {
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]:        "osd_id": 0,
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]:        "type": "bluestore"
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]:    }
Nov 29 02:28:18 np0005539550 eloquent_galileo[156661]: }
Nov 29 02:28:18 np0005539550 systemd[1]: libpod-1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75.scope: Deactivated successfully.
Nov 29 02:28:18 np0005539550 podman[156475]: 2025-11-29 07:28:18.721339617 +0000 UTC m=+1.716702655 container died 1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:28:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-85d4231566910c116efd3f9f234967266fa6da88095ac4d4a468eac1cf569582-merged.mount: Deactivated successfully.
Nov 29 02:28:19 np0005539550 podman[156475]: 2025-11-29 07:28:19.331473329 +0000 UTC m=+2.326836367 container remove 1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:28:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:19.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:28:19 np0005539550 systemd[1]: libpod-conmon-1c6d75b67e57ae27bffc2210c9f24014f8dcf5954020e01bbe4744fd994cfd75.scope: Deactivated successfully.
Nov 29 02:28:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:28:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:20 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9dad103f-39a6-4ecf-af4c-864fdd8ad27e does not exist
Nov 29 02:28:20 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ac4ec0a9-2a6e-478a-956d-0955343e57f1 does not exist
Nov 29 02:28:20 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev abab2252-61a7-468b-bac6-ab6e588ea809 does not exist
Nov 29 02:28:21 np0005539550 python3.9[156890]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:28:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:21.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:22.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:23.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:23 np0005539550 python3.9[157332]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 02:28:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:28:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:24.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:24 np0005539550 python3.9[157487]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:28:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:25.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:25 np0005539550 python3.9[157691]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:28:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:27.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:27 np0005539550 python3[157868]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:28:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:29.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:30.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:31.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:33.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:34.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:35.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:37.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:39.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:40.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:28:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:41.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:28:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:43.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:44 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:28:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:44.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:45.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:47.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:48 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:28:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:48.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:49.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:50.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:51.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:52 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:28:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 check_health: resetting beacon timeouts due to mon delay (slow election?) of 14.2549 seconds
Nov 29 02:28:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:28:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:52.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:54 np0005539550 podman[158014]: 2025-11-29 07:28:54.405667304 +0000 UTC m=+6.120023725 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:28:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:54.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:28:55 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(35) init, last seen epoch 35, mid-election, bumping
Nov 29 02:28:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:56.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:28:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:28:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:57.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:28:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:28:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:28:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:28:58.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:28:59
Nov 29 02:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'vms', '.rgw.root']
Nov 29 02:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:28:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:28:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:28:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:28:59.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:28:59 np0005539550 podman[157886]: 2025-11-29 07:28:59.60978843 +0000 UTC m=+31.610824711 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:28:59 np0005539550 podman[158098]: 2025-11-29 07:28:59.798225899 +0000 UTC m=+0.070917129 container create 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:28:59 np0005539550 podman[158098]: 2025-11-29 07:28:59.749510454 +0000 UTC m=+0.022201704 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:28:59 np0005539550 python3[157868]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:29:00 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:29:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:29:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:29:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:00.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
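
Each radosgw probe appears three times: request start, request done, and a beast access-log line carrying the client, status and latency. A sketch for extracting those fields; the pattern is inferred from the lines above and may need adjusting for other rgw builds:

    import re

    # Sketch: parse a radosgw "beast" access-log line like the ones above.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        m = BEAST.search(line)
        return m.groupdict() if m else None

For the first beast line above, parse_beast(line)["latency"] yields "0.001000025" and ["status"] yields "200".
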
Nov 29 02:29:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:01.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:02.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:03.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:04 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:29:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:04.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:05.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:29:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:29:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:29:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:29:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 16m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:29:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:29:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:29:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:06.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:29:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:07.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 587 at 2025-11-29T07:28:35.515905+0000)
Nov 29 02:29:07 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0[74431]: 2025-11-29T07:29:07.590+0000 7fcbca960640 -1 mon.compute-0@0(leader) e3 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 587 at 2025-11-29T07:28:35.515905+0000)
Nov 29 02:29:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:29:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:29:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:29:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:29:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:29:07 np0005539550 ceph-mon[74435]: overall HEALTH_OK
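
The election replay above (compute-2 briefly leading a two-mon quorum, then compute-0 retaking rank 0) can be cross-checked against the cluster itself. A minimal sketch, assuming the ceph CLI and an admin keyring are usable on the node:

    import json
    import subprocess

    # Sketch: confirm the quorum membership reported in the log.
    def quorum_names():
        out = subprocess.run(
            ["ceph", "quorum_status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["quorum_names"]

After the re-election settles, this should return all three of compute-0, compute-1 and compute-2.
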
Nov 29 02:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:29:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:29:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:29:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:09.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:09 np0005539550 python3.9[158343]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:10.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops. (SLOW_OPS)
Nov 29 02:29:11 np0005539550 python3.9[158499]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:29:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:29:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:11 np0005539550 python3.9[158575]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:29:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:12.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:13 np0005539550 python3.9[158727]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764401352.0090415-1303-107642654691248/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:29:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:13.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:14.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:14 np0005539550 python3.9[158804]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:29:14 np0005539550 systemd[1]: Reloading.
Nov 29 02:29:15 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:15 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:15.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:16 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:29:16 np0005539550 python3.9[158915]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:29:16 np0005539550 systemd[1]: Reloading.
Nov 29 02:29:16 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:16 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
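
The ansible-systemd invocation at 02:29:16 (state=restarted, enabled=True) amounts to a daemon reload followed by enabling and restarting the unit, which is what triggers the generator output above. A rough shell-out equivalent, run as root on the node:

    import subprocess

    # Sketch: roughly what the ansible-systemd call above performs.
    UNIT = "edpm_ovn_metadata_agent.service"
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", UNIT], check=True)
    subprocess.run(["systemctl", "restart", UNIT], check=True)
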
Nov 29 02:29:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:16.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:16 np0005539550 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 02:29:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:29:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e1eb466b6103cbac5817136da1c5256252598e07a85efd6e0870d852e000219/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e1eb466b6103cbac5817136da1c5256252598e07a85efd6e0870d852e000219/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:16 np0005539550 systemd[1]: Started /usr/bin/podman healthcheck run 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00.
Nov 29 02:29:16 np0005539550 podman[158957]: 2025-11-29 07:29:16.832634205 +0000 UTC m=+0.123802392 container init 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + sudo -E kolla_set_configs
Nov 29 02:29:16 np0005539550 podman[158957]: 2025-11-29 07:29:16.856269801 +0000 UTC m=+0.147437968 container start 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:29:16 np0005539550 edpm-start-podman-container[158957]: ovn_metadata_agent
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Validating config file
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Copying service configuration files
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Writing out command to execute
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 02:29:16 np0005539550 edpm-start-podman-container[158956]: Creating additional drop-in dependency for "ovn_metadata_agent" (870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00)
Nov 29 02:29:16 np0005539550 podman[158980]: 2025-11-29 07:29:16.921153611 +0000 UTC m=+0.053925137 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: ++ cat /run_command
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + CMD=neutron-ovn-metadata-agent
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + ARGS=
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + sudo kolla_copy_cacerts
Nov 29 02:29:16 np0005539550 systemd[1]: Reloading.
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + [[ ! -n '' ]]
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + . kolla_extend_start
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + umask 0022
Nov 29 02:29:16 np0005539550 ovn_metadata_agent[158973]: + exec neutron-ovn-metadata-agent
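
The trace above is the standard kolla entrypoint: kolla_set_configs copies files per /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy, writes the service command to /run_command, and the start script execs it. A condensed sketch of the copy step; field names follow the kolla config.json schema, error handling omitted:

    import json
    import os
    import shutil

    # Sketch of the COPY_ALWAYS step kolla_set_configs logs above.
    def copy_config(path="/var/lib/kolla/config_files/config.json"):
        with open(path) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            if os.path.exists(item["dest"]):
                os.remove(item["dest"])            # "Deleting ..." in the log
            shutil.copy(item["source"], item["dest"])   # "Copying ..."
            if "perm" in item:                     # "Setting permission ..."
                os.chmod(item["dest"], int(item["perm"], 8))
        return cfg.get("command")                  # written to /run_command
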
Nov 29 02:29:17 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:17 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:17 np0005539550 systemd[1]: Started ovn_metadata_agent container.
Nov 29 02:29:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops.)
Nov 29 02:29:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:29:17 np0005539550 ceph-mon[74435]: Health check failed: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops. (SLOW_OPS)
Nov 29 02:29:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:17.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:18 np0005539550 ceph-mon[74435]: Health check cleared: SLOW_OPS (was: 3 slow ops, oldest one blocked for 33 sec, daemons [mon.compute-0,mon.compute-1] have slow ops.)
Nov 29 02:29:18 np0005539550 ceph-mon[74435]: Cluster is now healthy
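
The SLOW_OPS warning raised at 02:29:11 and cleared here can also be polled directly rather than scraped from the journal. A minimal sketch, again assuming the ceph CLI is available on the node:

    import json
    import subprocess

    # Sketch: detect the SLOW_OPS health check seen above.
    def slow_ops_active():
        out = subprocess.run(
            ["ceph", "health", "detail", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return "SLOW_OPS" in json.loads(out).get("checks", {})
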
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.854 158978 INFO neutron.common.config [-] Logging enabled!#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.855 158978 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.855 158978 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.856 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.856 158978 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.856 158978 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.856 158978 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.856 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.856 158978 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.857 158978 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.858 158978 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.859 158978 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.860 158978 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.861 158978 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.862 158978 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.863 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.864 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.864 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.864 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.864 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.864 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.864 158978 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.864 158978 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.865 158978 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.866 158978 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.867 158978 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.868 158978 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.868 158978 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.868 158978 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.868 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.868 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.868 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.868 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.869 158978 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.870 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.871 158978 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.872 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.873 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.874 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.875 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.875 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.875 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.875 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.875 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.875 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.876 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.876 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.876 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.876 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.876 158978 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.876 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.877 158978 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.878 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.878 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.878 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.878 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.878 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.878 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.879 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.880 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.881 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.881 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.881 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.881 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.881 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.881 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.881 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.882 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.883 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.883 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.883 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.883 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.883 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.883 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.884 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.885 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.886 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.887 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.888 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.889 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.890 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.891 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.891 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.891 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.891 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.891 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.891 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.891 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.892 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.892 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.892 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.892 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.892 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.892 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.893 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.893 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.893 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.893 158978 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.893 158978 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
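The block above is oslo.config's standard startup dump: at DEBUG verbosity the agent calls `ConfigOpts.log_opt_values()`, which emits one line per registered option (group options prefixed with their section, e.g. `ovn.*`, `privsep_*.*`), masks secret options such as `transport_url` as `****`, and closes with the row of asterisks. A minimal sketch of the same mechanism, with two illustrative options standing in for neutron's full set:

```python
# Minimal sketch of the option dump above; the options registered here
# are illustrative stand-ins for neutron's full configuration.
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger('neutron.agent.ovn.metadata_agent')

CONF = cfg.CONF
CONF.register_opts([
    cfg.BoolOpt('use_ssl', default=False),
    cfg.IntOpt('wsgi_default_pool_size', default=100),
])
CONF([], project='demo')  # parse an empty argv so defaults resolve

# Emits one DEBUG line per option and the closing asterisk row
# (oslo_config/cfg.py, log_opt_values); secret opts print as '****'.
CONF.log_opt_values(LOG, logging.DEBUG)
```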
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.902 158978 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.902 158978 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.902 158978 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.902 158978 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.902 158978 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
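Here the agent attaches to the local Open vSwitch database over `tcp:127.0.0.1:6640` (the `ovs.ovsdb_connection` value in the dump above); ovsdbapp builds the `Bridge.name`, `Port.name` and `Interface.name` indices automatically when the IDL comes up. A sketch of the same connection using ovsdbapp's stock open_vswitch schema API, reusing values from the config dump:

```python
# Sketch: connect to the local ovsdb-server the way the agent does,
# via ovsdbapp's open_vswitch schema API.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'tcp:127.0.0.1:6640'  # ovs.ovsdb_connection above
idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
conn = connection.Connection(idl=idl, timeout=180)  # ovs.ovsdb_connection_timeout
api = impl_idl.OvsdbIdl(conn)

# e.g. list bridges; br-int should appear on a chassis node
print(api.list_br().execute(check_error=True))
```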
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.916 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8 (UUID: a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.943 158978 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.943 158978 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.943 158978 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.943 158978 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.947 158978 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.957 158978 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
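The southbound session uses the TLS material from the `ovn.ovn_sb_*` options in the dump above; with the python-ovs bindings underneath ovsdbapp, the client key, certificate and CA are registered process-wide on `Stream` before an `ssl:` target is dialed. A hedged sketch of that setup:

```python
# Sketch: install the SB TLS material before dialing an ssl: endpoint,
# mirroring the ovn.ovn_sb_* options logged above.
from ovs import stream

stream.Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
stream.Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
stream.Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')
# after this, 'ssl:ovsdbserver-sb.openstack.svc:6642' can be connected
# through the same ovsdbapp Connection machinery as the TCP case above
```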
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.964 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], external_ids={}, name=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, nb_cfg_timestamp=1764401244863, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
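`ChassisPrivateCreateEvent` is an ovsdbapp `RowEvent`: it watches the `Chassis_Private` table for a `create` of the row whose `name` matches this chassis, and fires once the agent's own registration shows up in the southbound DB (the matched row above carries `nb_cfg=1`). The general shape of such a watch, sketched on ovsdbapp's public base class with an illustrative handler:

```python
# Sketch of a row watch like the ChassisPrivateCreateEvent matched above.
from ovsdbapp.backend.ovs_idl import event as row_event

class ChassisRegisteredEvent(row_event.RowEvent):
    def __init__(self, chassis_name):
        super().__init__(
            (self.ROW_CREATE,),              # events
            'Chassis_Private',               # table
            (('name', '=', chassis_name),),  # conditions
        )
        self.event_name = 'ChassisRegisteredEvent'

    def run(self, event, row, old):
        # invoked with the matched row once the chassis registers
        print('chassis registered:', row.name)
```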
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.965 158978 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fdd12388f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.966 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.966 158978 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.966 158978 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.966 158978 INFO oslo_service.service [-] Starting 1 workers
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.970 158978 DEBUG oslo_service.service [-] Started child 159086 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
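"Starting 1 workers" / "Started child 159086" is oslo.service's `ProcessLauncher` forking the metadata-proxy worker; the `post_fork_initialize` callback subscribed above then runs in the child (PID 159086 in the lines that follow). A minimal sketch of the launcher pattern, with an illustrative worker class:

```python
# Sketch: fork one worker with oslo.service, the pattern behind
# "Starting 1 workers" / "Started child ..." above.
from oslo_config import cfg
from oslo_service import service

class ProxyWorker(service.Service):
    def start(self):
        super().start()
        # child-side setup goes here (post-fork hooks, wsgi server, ...)

launcher = service.ProcessLauncher(cfg.CONF)
launcher.launch_service(ProxyWorker(), workers=1)
launcher.wait()
```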
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.974 158978 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpccvt1503/privsep.sock']
Nov 29 02:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.974 159086 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-422367'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:18.999 159086 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.000 159086 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.000 159086 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.005 159086 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.011 159086 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.017 159086 INFO eventlet.wsgi.server [-] (159086) wsgi starting up on http:/var/lib/neutron/metadata_proxy
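The single-slash URL `http:/var/lib/neutron/metadata_proxy` is how eventlet prints a Unix-domain listener: the worker serves HTTP on a socket file rather than a TCP port, and requests for instance metadata are forwarded into it. Roughly (socket path below is illustrative):

```python
# Sketch: an eventlet WSGI server on a Unix socket; eventlet reports it
# as "wsgi starting up on http:/<path>", the form seen above.
import socket

import eventlet
from eventlet import wsgi

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'metadata\n']

sock = eventlet.listen('/tmp/metadata_proxy.sock', family=socket.AF_UNIX)
wsgi.server(sock, app)
```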
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
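[editor's note] Each pool's pg target above is its usage ratio times its bias times a constant 300, which is consistent with a PG budget of roughly OSD count × mon_target_pg_per_osd (an inference from the numbers, not something the log states; the cluster reports ~21 GiB across its OSDs). A quick check against the three pools with non-zero usage:

    # Reproduce the pg_autoscaler arithmetic from the entries above.
    # Assumption: the multiplier 300 is the cluster PG budget; it makes all
    # three non-zero pools match the logged targets.
    PG_BUDGET = 300
    pools = {
        '.mgr':               (2.0538165363856318e-05, 1.0),  # -> ~0.00616, quantized to 1
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),  # -> ~0.00174, quantized to 16
        '.rgw.root':          (7.270147031453564e-07, 1.0),   # -> ~0.000218, quantized to 32
    }
    for name, (usage_ratio, bias) in pools.items():
        print(name, usage_ratio * bias * PG_BUDGET)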
Nov 29 02:29:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:19.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:19 np0005539550 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 02:29:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.679 158978 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.680 158978 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpccvt1503/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.561 159091 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.565 159091 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.567 159091 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.567 159091 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159091
Nov 29 02:29:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:19.682 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c59f76-93c7-44e4-97d1-eacb87e8197d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
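[editor's note] The privsep exchange above shows the unprivileged agent (pid 158978) launching a helper through sudo/rootwrap, which becomes a daemon (pid 159091) running as root but holding only CAP_SYS_ADMIN; the kernel's "deprecated v2 capabilities" warning at 02:29:19 comes from that same helper. A minimal sketch of how such a context is declared with oslo.privsep; the context and function names are illustrative, not neutron's actual definitions:

    from oslo_privsep import capabilities, priv_context

    # Illustrative context: retains only CAP_SYS_ADMIN, matching the
    # "eff/prm/inh: CAP_SYS_ADMIN/CAP_SYS_ADMIN/none" line above.
    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def list_netns():
        # Executes inside the privileged daemon; the result is serialized
        # back to the caller over the privsep Unix socket.
        import os
        return os.listdir('/var/run/netns')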
Nov 29 02:29:20 np0005539550 systemd[1]: session-49.scope: Deactivated successfully.
Nov 29 02:29:20 np0005539550 systemd[1]: session-49.scope: Consumed 56.760s CPU time.
Nov 29 02:29:20 np0005539550 systemd-logind[788]: Session 49 logged out. Waiting for processes to exit.
Nov 29 02:29:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:20.379 159091 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:29:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:20.379 159091 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:29:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:20.379 159091 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:29:20 np0005539550 systemd-logind[788]: Removed session 49.
Nov 29 02:29:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:20.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:21.173 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[45e69d35-f810-4b1c-abba-57f772cefdea]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:21.177 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, column=external_ids, values=({'neutron:ovn-metadata-id': '4b0f2d70-5817-5bc4-92b6-c0eb50c68970'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:29:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:21.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:29:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:29:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:29:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:29:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:29:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:29:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:22.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:29:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:23.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:24.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:25.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:25.930 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
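[editor's note] The two transactions logged at 07:29:21 and 07:29:25 register the agent in the southbound database: a DbAddCommand merges neutron:ovn-metadata-id into Chassis_Private.external_ids, and this DbSetCommand records neutron:ovn-bridge=br-int on the same row. Expressed through ovsdbapp's generic commands (sb_api as in the earlier sketch; record UUID copied from the log), the calls would look roughly like this:

    RECORD = 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8'  # Chassis_Private row

    # Merge a key into the external_ids map (the DbAddCommand above).
    sb_api.db_add('Chassis_Private', RECORD, 'external_ids',
                  {'neutron:ovn-metadata-id':
                   '4b0f2d70-5817-5bc4-92b6-c0eb50c68970'}).execute(
                      check_error=True)

    # Set a map entry only if the row exists (the DbSetCommand above).
    sb_api.db_set('Chassis_Private', RECORD,
                  ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
                  if_exists=True).execute(check_error=True)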
Nov 29 02:29:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:26 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4f57b6cd-0707-45a4-8dae-6c86145f1a10 does not exist
Nov 29 02:29:26 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0d3c2696-e45b-4f15-8a29-b75f8b2f6594 does not exist
Nov 29 02:29:26 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1ad9e84b-a4e3-442e-aebc-6d58ab80ff8e does not exist
Nov 29 02:29:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:29:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:29:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:29:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:29:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:29:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
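[editor's note] The audited mon_command payloads above are plain JSON documents ({"prefix": ...}) dispatched by the cephadm mgr module. The same payloads can be issued through the librados Python binding; a hedged sketch, assuming a reachable cluster and an admin keyring at the default path:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()

    for cmd in ({'prefix': 'config generate-minimal-conf'},
                {'prefix': 'auth get', 'entity': 'client.bootstrap-osd'}):
        # mon_command takes the JSON command string plus an input buffer and
        # returns (retcode, output bytes, status string).
        ret, out, status = cluster.mon_command(json.dumps(cmd), b'')
        print(cmd['prefix'], '->', ret, status)

    cluster.shutdown()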
Nov 29 02:29:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:26.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:26 np0005539550 podman[159420]: 2025-11-29 07:29:26.945769261 +0000 UTC m=+0.049628118 container create b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:29:26 np0005539550 systemd[1]: Started libpod-conmon-b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028.scope.
Nov 29 02:29:27 np0005539550 podman[159420]: 2025-11-29 07:29:26.922437412 +0000 UTC m=+0.026296299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:29:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:29:27 np0005539550 podman[159420]: 2025-11-29 07:29:27.056695875 +0000 UTC m=+0.160554752 container init b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cray, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:29:27 np0005539550 podman[159420]: 2025-11-29 07:29:27.06471831 +0000 UTC m=+0.168577167 container start b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:29:27 np0005539550 podman[159420]: 2025-11-29 07:29:27.068973139 +0000 UTC m=+0.172832016 container attach b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:29:27 np0005539550 elegant_cray[159436]: 167 167
Nov 29 02:29:27 np0005539550 systemd[1]: libpod-b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028.scope: Deactivated successfully.
Nov 29 02:29:27 np0005539550 conmon[159436]: conmon b02ba0d61e09e0f7311d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028.scope/container/memory.events
Nov 29 02:29:27 np0005539550 podman[159420]: 2025-11-29 07:29:27.072762646 +0000 UTC m=+0.176621503 container died b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cray, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:29:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6083510af9dba6cc582513f9bc8a4291cf3f1d921d47f959837534230bb57a78-merged.mount: Deactivated successfully.
Nov 29 02:29:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:27.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:27 np0005539550 podman[159420]: 2025-11-29 07:29:27.496947018 +0000 UTC m=+0.600805875 container remove b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:29:27 np0005539550 systemd[1]: libpod-conmon-b02ba0d61e09e0f7311da6a8a6e3298bb28e07a64bc55703d95271920fc9f028.scope: Deactivated successfully.
Nov 29 02:29:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:27 np0005539550 podman[159466]: 2025-11-29 07:29:27.701391772 +0000 UTC m=+0.078207328 container create 7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:29:27 np0005539550 podman[159454]: 2025-11-29 07:29:27.707703098 +0000 UTC m=+0.100585215 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:29:27 np0005539550 podman[159466]: 2025-11-29 07:29:27.647967498 +0000 UTC m=+0.024783084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:29:27 np0005539550 systemd[1]: Started libpod-conmon-7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7.scope.
Nov 29 02:29:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:29:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b22db18a61a0b3b05eb784785573464200913443c1b865ad1dcf5e581d48e87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b22db18a61a0b3b05eb784785573464200913443c1b865ad1dcf5e581d48e87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b22db18a61a0b3b05eb784785573464200913443c1b865ad1dcf5e581d48e87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b22db18a61a0b3b05eb784785573464200913443c1b865ad1dcf5e581d48e87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b22db18a61a0b3b05eb784785573464200913443c1b865ad1dcf5e581d48e87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:27 np0005539550 podman[159466]: 2025-11-29 07:29:27.799812167 +0000 UTC m=+0.176627733 container init 7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:27 np0005539550 podman[159466]: 2025-11-29 07:29:27.808252082 +0000 UTC m=+0.185067638 container start 7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:29:27 np0005539550 podman[159466]: 2025-11-29 07:29:27.81291709 +0000 UTC m=+0.189732646 container attach 7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:29:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:28.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:28 np0005539550 unruffled_diffie[159503]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:29:28 np0005539550 unruffled_diffie[159503]: --> relative data size: 1.0
Nov 29 02:29:28 np0005539550 unruffled_diffie[159503]: --> All data devices are unavailable
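[editor's note] The unruffled_diffie container is cephadm running a ceph-volume batch report: one LVM data device was passed in, and it is reported unavailable because it already carries an OSD (see the lvm list output below). A sketch of requesting the same report by hand, assuming the standard ceph-volume CLI and reusing the LV path from this host:

    import subprocess

    # `ceph-volume lvm batch --report --format json` prints what would be
    # done without touching the device. When every device is unavailable
    # (as logged above) it exits with an error, so stderr is shown too.
    out = subprocess.run(
        ['ceph-volume', 'lvm', 'batch', '--report', '--format', 'json',
         '/dev/ceph_vg0/ceph_lv0'],
        capture_output=True, text=True)
    print(out.stdout or out.stderr)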
Nov 29 02:29:28 np0005539550 systemd[1]: libpod-7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7.scope: Deactivated successfully.
Nov 29 02:29:28 np0005539550 podman[159466]: 2025-11-29 07:29:28.809563242 +0000 UTC m=+1.186378828 container died 7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:29:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:29:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7b22db18a61a0b3b05eb784785573464200913443c1b865ad1dcf5e581d48e87-merged.mount: Deactivated successfully.
Nov 29 02:29:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:29.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:29 np0005539550 podman[159466]: 2025-11-29 07:29:29.443797748 +0000 UTC m=+1.820613304 container remove 7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_diffie, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:29:29 np0005539550 systemd[1]: libpod-conmon-7732fe9f46e3105624bdbd72347b2b14c9765d3530b0bae09048eac7c59ac0b7.scope: Deactivated successfully.
Nov 29 02:29:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:30 np0005539550 podman[159671]: 2025-11-29 07:29:30.008094509 +0000 UTC m=+0.039162116 container create 0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:29:30 np0005539550 systemd[1]: Started libpod-conmon-0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7.scope.
Nov 29 02:29:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:29:30 np0005539550 podman[159671]: 2025-11-29 07:29:30.075673991 +0000 UTC m=+0.106741618 container init 0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:29:30 np0005539550 podman[159671]: 2025-11-29 07:29:30.083033981 +0000 UTC m=+0.114101588 container start 0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:29:30 np0005539550 podman[159671]: 2025-11-29 07:29:29.990146024 +0000 UTC m=+0.021213651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:29:30 np0005539550 podman[159671]: 2025-11-29 07:29:30.086642764 +0000 UTC m=+0.117710391 container attach 0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:29:30 np0005539550 jovial_beaver[159687]: 167 167
Nov 29 02:29:30 np0005539550 systemd[1]: libpod-0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7.scope: Deactivated successfully.
Nov 29 02:29:30 np0005539550 podman[159671]: 2025-11-29 07:29:30.090380811 +0000 UTC m=+0.121448428 container died 0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_beaver, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:29:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3ab071e1f65655f482016e97737463c755d1a89fe492f55dc8cf4f3128d5855c-merged.mount: Deactivated successfully.
Nov 29 02:29:30 np0005539550 podman[159671]: 2025-11-29 07:29:30.129354321 +0000 UTC m=+0.160421928 container remove 0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:29:30 np0005539550 systemd[1]: libpod-conmon-0dd6b2c641512cece295d764265fc0b100d3834830317f9729361b97aeda47f7.scope: Deactivated successfully.
Nov 29 02:29:30 np0005539550 podman[159710]: 2025-11-29 07:29:30.274052815 +0000 UTC m=+0.024769663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:29:30 np0005539550 podman[159710]: 2025-11-29 07:29:30.592344391 +0000 UTC m=+0.343061219 container create 472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:29:30 np0005539550 systemd[1]: Started libpod-conmon-472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0.scope.
Nov 29 02:29:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:29:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119477b27364638d69d7e6784c82aff68731a22b545cc1d9cc80c699d27663bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119477b27364638d69d7e6784c82aff68731a22b545cc1d9cc80c699d27663bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119477b27364638d69d7e6784c82aff68731a22b545cc1d9cc80c699d27663bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119477b27364638d69d7e6784c82aff68731a22b545cc1d9cc80c699d27663bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:30 np0005539550 podman[159710]: 2025-11-29 07:29:30.679225148 +0000 UTC m=+0.429942006 container init 472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:29:30 np0005539550 podman[159710]: 2025-11-29 07:29:30.687274854 +0000 UTC m=+0.437991682 container start 472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_maxwell, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:29:30 np0005539550 podman[159710]: 2025-11-29 07:29:30.69528469 +0000 UTC m=+0.446001568 container attach 472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:29:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:31.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]: {
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:    "0": [
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:        {
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "devices": [
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "/dev/loop3"
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            ],
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "lv_name": "ceph_lv0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "lv_size": "7511998464",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "name": "ceph_lv0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "tags": {
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.cluster_name": "ceph",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.crush_device_class": "",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.encrypted": "0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.osd_id": "0",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.type": "block",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:                "ceph.vdo": "0"
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            },
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "type": "block",
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:            "vg_name": "ceph_vg0"
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:        }
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]:    ]
Nov 29 02:29:31 np0005539550 brave_maxwell[159727]: }
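[editor's note] The JSON above is ceph-volume's listing of prepared OSD logical volumes, keyed by OSD id; the LV tags carry everything cephadm needs to reactivate osd.0 (cluster fsid, OSD fsid, device class, encryption flag). A short parsing sketch over a document of that shape, with the payload abbreviated to fields shown in the log:

    import json

    # Abbreviated copy of the structure logged above.
    listing = json.loads('''{
      "0": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "tags": {
          "ceph.osd_id": "0",
          "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
          "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
          "ceph.type": "block"
        }
      }]
    }''')

    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"(osd_fsid {tags['ceph.osd_fsid']})")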
Nov 29 02:29:31 np0005539550 systemd[1]: libpod-472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0.scope: Deactivated successfully.
Nov 29 02:29:31 np0005539550 podman[159736]: 2025-11-29 07:29:31.53136662 +0000 UTC m=+0.040132508 container died 472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_maxwell, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:29:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-119477b27364638d69d7e6784c82aff68731a22b545cc1d9cc80c699d27663bf-merged.mount: Deactivated successfully.
Nov 29 02:29:32 np0005539550 podman[159736]: 2025-11-29 07:29:32.483480683 +0000 UTC m=+0.992246561 container remove 472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:29:32 np0005539550 systemd[1]: libpod-conmon-472b9b211ef60b92b7c9f10c68e3fafc366bf2f770551899a0d83b63332e8fc0.scope: Deactivated successfully.
Nov 29 02:29:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:32.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:33 np0005539550 podman[159891]: 2025-11-29 07:29:33.106139472 +0000 UTC m=+0.038581243 container create f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:29:33 np0005539550 systemd[1]: Started libpod-conmon-f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a.scope.
Nov 29 02:29:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:29:33 np0005539550 podman[159891]: 2025-11-29 07:29:33.165517384 +0000 UTC m=+0.097959185 container init f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:29:33 np0005539550 podman[159891]: 2025-11-29 07:29:33.172490205 +0000 UTC m=+0.104931976 container start f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:29:33 np0005539550 blissful_wiles[159907]: 167 167
Nov 29 02:29:33 np0005539550 podman[159891]: 2025-11-29 07:29:33.177481341 +0000 UTC m=+0.109923132 container attach f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:29:33 np0005539550 systemd[1]: libpod-f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a.scope: Deactivated successfully.
Nov 29 02:29:33 np0005539550 podman[159891]: 2025-11-29 07:29:33.178410092 +0000 UTC m=+0.110851863 container died f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:29:33 np0005539550 podman[159891]: 2025-11-29 07:29:33.087722906 +0000 UTC m=+0.020164687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:29:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3878b5ba3883f55634409a965161ea9def429533412d283d4dcca1ea28e5dc7c-merged.mount: Deactivated successfully.
Nov 29 02:29:33 np0005539550 podman[159891]: 2025-11-29 07:29:33.265124706 +0000 UTC m=+0.197566467 container remove f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:29:33 np0005539550 systemd[1]: libpod-conmon-f5f5d80cb6529d1fd406480114a51aaff23c2fa3dff52dca4bb98e21b7f3881a.scope: Deactivated successfully.
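The lines above capture the complete life of one short-lived helper container: podman logs init, start, and attach for blissful_wiles within about 100 ms, the entry point prints its output (the "167 167" uid/gid line), and the container dies and is removed while systemd deactivates the matching libpod and libpod-conmon scopes. A minimal sketch, assuming a journal dump in this exact format, that pairs those lifecycle events by container ID to spot containers that start but are never cleaned up:

    import re
    from collections import defaultdict

    # Minimal sketch: group podman lifecycle events by container ID from a
    # journal dump shaped like the lines above ("container <event> <id>").
    EVENT_RE = re.compile(r"container (create|init|start|attach|died|remove) "
                          r"([0-9a-f]{64})")

    def lifecycle(journal_text: str) -> dict:
        events = defaultdict(list)
        for line in journal_text.splitlines():
            m = EVENT_RE.search(line)
            if m:
                events[m.group(2)].append(m.group(1))
        return events

    # A helper container that ran to completion ends its event list with
    # 'died' followed by 'remove'; anything else may still be lingering.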
Nov 29 02:29:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:33.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
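Each radosgw request produces this three-line pattern: a start marker, a completion marker carrying the op status and latency, and a beast access-log line. Anonymous HEAD / probes returning 200 every second or two, alternating between 192.168.122.100 and 192.168.122.102, are consistent with load-balancer health checks rather than real S3 traffic. A small sketch for pulling fields out of the beast line (the regex is fitted to the layout shown here, not to every beast log variant):

    import re

    # Minimal sketch: extract client IP, user, request line, status, and
    # latency from a beast access-log line like the ones above.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:29:33.439 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('ip'), m.group('status'), m.group('latency'))
    # 192.168.122.100 200 0.000000000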
Nov 29 02:29:33 np0005539550 podman[159931]: 2025-11-29 07:29:33.450010859 +0000 UTC m=+0.050311064 container create 807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:29:33 np0005539550 systemd[1]: Started libpod-conmon-807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024.scope.
Nov 29 02:29:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:29:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04d6f211673345d105a4d8fcfa6e54bdb2aaf2aedbbd1771d940773aae74be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04d6f211673345d105a4d8fcfa6e54bdb2aaf2aedbbd1771d940773aae74be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04d6f211673345d105a4d8fcfa6e54bdb2aaf2aedbbd1771d940773aae74be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:29:33 np0005539550 podman[159931]: 2025-11-29 07:29:33.42538968 +0000 UTC m=+0.025689915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:29:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d04d6f211673345d105a4d8fcfa6e54bdb2aaf2aedbbd1771d940773aae74be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
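The xfs warnings fire because these overlay mounts carry 32-bit inode timestamps (the filesystem was created without the bigtime feature), so the kernel caps representable times at 0x7fffffff seconds. A quick conversion shows where that limit lands:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit Unix timestamp, the limit
    # the kernel reports for these xfs mounts.
    limit = 0x7fffffff  # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00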
Nov 29 02:29:33 np0005539550 podman[159931]: 2025-11-29 07:29:33.539705591 +0000 UTC m=+0.140005826 container init 807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:29:33 np0005539550 podman[159931]: 2025-11-29 07:29:33.550265676 +0000 UTC m=+0.150565891 container start 807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:29:33 np0005539550 podman[159931]: 2025-11-29 07:29:33.555826154 +0000 UTC m=+0.156126459 container attach 807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:29:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]: {
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]:        "osd_id": 0,
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]:        "type": "bluestore"
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]:    }
Nov 29 02:29:34 np0005539550 jolly_blackwell[159947]: }
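The jolly_blackwell container's stdout is an OSD inventory in JSON, keyed by OSD UUID and giving the cluster fsid, the backing logical volume, the OSD id, and the objectstore type; the shape matches a ceph-volume listing run by cephadm's periodic device scan. Parsing it is straightforward:

    import json

    # Minimal sketch: read the OSD inventory JSON printed by the container
    # above (one entry per OSD UUID) and list id/device pairs.
    raw = '''{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }'''
    for osd_uuid, info in json.loads(raw).items():
        print(f"osd.{info['osd_id']} on {info['device']} ({info['type']})")
    # osd.0 on /dev/mapper/ceph_vg0-ceph_lv0 (bluestore)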
Nov 29 02:29:34 np0005539550 systemd[1]: libpod-807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024.scope: Deactivated successfully.
Nov 29 02:29:34 np0005539550 podman[159931]: 2025-11-29 07:29:34.503584555 +0000 UTC m=+1.103884780 container died 807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:29:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7d04d6f211673345d105a4d8fcfa6e54bdb2aaf2aedbbd1771d940773aae74be-merged.mount: Deactivated successfully.
Nov 29 02:29:34 np0005539550 podman[159931]: 2025-11-29 07:29:34.564445562 +0000 UTC m=+1.164745777 container remove 807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:29:34 np0005539550 systemd[1]: libpod-conmon-807eece099d35293747e365411137d1bc080906b88e12b52cc6862c4a4cb7024.scope: Deactivated successfully.
Nov 29 02:29:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
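Immediately after the scan, the mgr persists the result in the monitors' config-key store under mgr/cephadm/host.compute-0.devices.0; this is the cached per-host device inventory that cephadm consults for commands like ceph orch device ls. A hedged sketch for reading it back, assuming an admin keyring is available on the host and that the stored value is JSON:

    import json
    import subprocess

    # Read back the cached device inventory cephadm just stored. The key
    # name is taken verbatim from the mon log line above; JSON decoding
    # of the value is an assumption.
    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    print(json.dumps(json.loads(out), indent=2))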
Nov 29 02:29:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:34.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:35.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:36.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:37.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
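The recurring _set_new_cache_sizes line is the monitor's cache autotuner restating how it splits its memory budget between the incremental-osdmap, full-osdmap, and RocksDB cache allocations. Converted to MiB, the figures read:

    # Unit conversion of the _set_new_cache_sizes figures above.
    MiB = 2 ** 20
    for name, val in [("cache_size", 1020054731),
                      ("inc_alloc", 348127232),
                      ("full_alloc", 348127232),
                      ("kv_alloc", 318767104)]:
        print(f"{name:10s} {val / MiB:7.1f} MiB")
    # cache_size  972.8 MiB; inc_alloc and full_alloc 332.0 MiB each;
    # kv_alloc 304.0 MiB (the three allocations sum to 968 MiB).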
Nov 29 02:29:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:38.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:39.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
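The wall of DEBUG lines that follows is not an error flood: because debug = True, the ovn_metadata_agent worker dumps every effective configuration option at startup through oslo.config's log_opt_values (the cfg.py:2602/2609 references on each line), with secret options such as transport_url and metadata_proxy_shared_secret masked as ****. The trailing #033[00m on each line is syslog/journald's octal escaping of the ANSI color-reset sequence (ESC[00m) the agent emits. A minimal sketch of the mechanism, using one option taken from the dump below and parsing defaults only (the agent additionally loads /etc/neutron/neutron.conf):

    import logging

    from oslo_config import cfg

    # Minimal sketch of the mechanism behind the dump that follows:
    # register an option, parse the (empty) command line, then log every
    # effective value at DEBUG, exactly as oslo.service does on startup.
    logging.basicConfig(level=logging.DEBUG)
    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("agent_down_time", default=75)])
    CONF(args=[])
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)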
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.395 158978 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.396 158978 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.396 158978 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.396 158978 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.396 158978 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.397 158978 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.397 158978 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.397 158978 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.397 158978 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.397 158978 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.397 158978 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.397 158978 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.398 158978 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.398 158978 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.398 158978 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.398 158978 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.398 158978 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.398 158978 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.399 158978 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.399 158978 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.399 158978 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.399 158978 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.399 158978 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.399 158978 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.399 158978 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.400 158978 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.400 158978 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.400 158978 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.400 158978 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.400 158978 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.400 158978 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.401 158978 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.401 158978 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.401 158978 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.401 158978 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.401 158978 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.401 158978 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.402 158978 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.402 158978 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.402 158978 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.402 158978 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.402 158978 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.402 158978 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.403 158978 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.404 158978 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.405 158978 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.406 158978 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.406 158978 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.406 158978 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.406 158978 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.406 158978 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.406 158978 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.406 158978 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.407 158978 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.407 158978 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.407 158978 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.407 158978 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.407 158978 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.407 158978 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.407 158978 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.408 158978 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.409 158978 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.409 158978 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.409 158978 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.409 158978 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.409 158978 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.409 158978 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.409 158978 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.410 158978 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.411 158978 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.411 158978 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.411 158978 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.411 158978 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.411 158978 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.411 158978 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.411 158978 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.412 158978 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.412 158978 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.412 158978 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.412 158978 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.412 158978 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.412 158978 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.413 158978 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.413 158978 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.413 158978 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.413 158978 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.413 158978 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.413 158978 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.413 158978 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.414 158978 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.415 158978 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.416 158978 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.417 158978 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.418 158978 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.419 158978 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.420 158978 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.420 158978 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.420 158978 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.420 158978 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.420 158978 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.421 158978 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.422 158978 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.422 158978 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.422 158978 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.422 158978 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.422 158978 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.422 158978 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.422 158978 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.423 158978 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.423 158978 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.423 158978 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.423 158978 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.423 158978 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.423 158978 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.423 158978 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.424 158978 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.425 158978 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.426 158978 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.427 158978 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.428 158978 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.429 158978 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.430 158978 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.431 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.432 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.432 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.432 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.432 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.432 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.432 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.432 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.433 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.434 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.435 158978 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.435 158978 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.435 158978 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.435 158978 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.435 158978 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:29:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:29:40.435 158978 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
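[annotation] The block ending above is oslo.config's standard startup dump: at DEBUG level, oslo_service walks every registered option group, emits one "section.option = value" line per option with secrets masked as ****, and closes the dump with a row of asterisks (cfg.py:2613). A minimal Python sketch of the same mechanism, using a hypothetical option group purely for illustration:

    import logging

    from oslo_config import cfg

    # Hypothetical demo group -- not one of the agent's real sections.
    opts = [
        cfg.IntOpt('thread_pool_size', default=8),
        cfg.StrOpt('user'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts, group='privsep_demo')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('oslo_service.service')

    CONF([])  # parse an empty command line
    # One DEBUG line per option, then a closing row of asterisks,
    # matching the format of the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)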
Nov 29 02:29:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:40.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:29:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:41.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:42.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:43.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3005058d-f74e-4ade-9534-c4f572f21aff does not exist
Nov 29 02:29:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 68104206-c51c-4088-8a3c-6306c4b34145 does not exist
Nov 29 02:29:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5a87e37c-128f-4845-a25c-97c9e3f90d0d does not exist
Nov 29 02:29:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:44.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:44 np0005539550 systemd-logind[788]: New session 50 of user zuul.
Nov 29 02:29:45 np0005539550 systemd[1]: Started Session 50 of User zuul.
Nov 29 02:29:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:45.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:29:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:46 np0005539550 python3.9[160190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:29:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:46.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:47 np0005539550 podman[160269]: 2025-11-29 07:29:47.308122169 +0000 UTC m=+0.048842150 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:29:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:29:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:47.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:29:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:48.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:49 np0005539550 python3.9[160416]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
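[annotation] The task above probes for an existing container by exact name: the ^...$ anchors make podman's name filter match nova_virtlogd only, and the Go-template --format flag reduces the output to bare names. A rough Python equivalent of the same probe (container name taken from the log line; the wrapper function itself is an assumption, not the Ansible implementation):

    import subprocess

    def container_exists(name: str) -> bool:
        # Mirrors: podman ps -a --filter name=^<name>$ --format {{.Names}}
        out = subprocess.run(
            ["podman", "ps", "-a",
             "--filter", f"name=^{name}$",
             "--format", "{{.Names}}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return name in out.split()

    print(container_exists("nova_virtlogd"))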
Nov 29 02:29:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:29:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:49.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:29:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:51.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:51 np0005539550 python3.9[160582]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:29:51 np0005539550 systemd[1]: Reloading.
Nov 29 02:29:52 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:29:52 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:29:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:52.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:53.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:54.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:55 np0005539550 python3.9[160769]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:29:55 np0005539550 network[160786]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:29:55 np0005539550 network[160787]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:29:55 np0005539550 network[160788]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:29:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:55.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:29:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:56.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:29:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:57.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:57 np0005539550 podman[160889]: 2025-11-29 07:29:57.849463221 +0000 UTC m=+0.094246529 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
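[annotation] These periodic health_status=healthy events come from podman running the container's configured test command (/openstack/healthcheck, bind-mounted read-only from /var/lib/openstack/healthchecks/<container>) on its healthcheck timer. The same probe can be triggered by hand; a sketch under that assumption (container name from the log, helper hypothetical):

    import subprocess

    def healthy(container: str) -> bool:
        # `podman healthcheck run` exits 0 when the configured test passes.
        rc = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
        ).returncode
        return rc == 0

    print(healthy("ovn_controller"))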
Nov 29 02:29:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:29:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:29:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:29:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:29:58.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:29:59
Nov 29 02:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'volumes']
Nov 29 02:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
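[annotation] These five lines are one pass of the ceph-mgr balancer: in upmap mode it walks the listed pools and proposes PG remaps up to its per-cycle budget while keeping misplaced objects under the 5% ceiling (max misplaced 0.050000); here the cluster is already balanced, so it prepared 0/10 changes. One way to poll the same state from outside (standard ceph CLI; the small wrapper is an assumption):

    import json
    import subprocess

    def ceph_json(*args: str) -> dict:
        # e.g. ceph_json("balancer", "status")
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    status = ceph_json("balancer", "status")
    print(status.get("mode"), status.get("active"))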
Nov 29 02:29:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:29:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:29:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:29:59.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:29:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:30:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:00.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:00 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:30:01 np0005539550 python3.9[161080]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:01.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:02 np0005539550 python3.9[161233]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:02.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:02 np0005539550 python3.9[161387]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:03.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:03 np0005539550 python3.9[161540]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:04 np0005539550 python3.9[161694]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000022s ======
Nov 29 02:30:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Nov 29 02:30:05 np0005539550 python3.9[161847]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:30:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:05.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:05 np0005539550 python3.9[162000]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
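[annotation] Taken together, the systemd_service tasks from 02:30:01 onward stop and disable each tripleo_nova_* unit (following the daemon_reload=True task at 02:29:51), and the file tasks that follow delete the leftover unit files. A hedged Python sketch of an equivalent cleanup loop driving systemctl directly (unit names copied from the log; the loop is an assumption, not the Ansible module's implementation):

    import subprocess

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]

    for unit in UNITS:
        # state=stopped + enabled=False in the tasks above.
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)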
Nov 29 02:30:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:06.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:07 np0005539550 python3.9[162204]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:07.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:07 np0005539550 python3.9[162356]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:30:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:30:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:30:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:30:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:30:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:08 np0005539550 python3.9[162509]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
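
The rbd_support reloads above walk the same four pools for both the mirror-snapshot and trash-purge schedulers. The configured schedules can be inspected per pool from the stock rbd CLI; a sketch:

    import subprocess

    # Pools as enumerated in the load_schedules lines above.
    for pool in ["vms", "volumes", "backups", "images"]:
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--recursive", "-p", pool], check=False)
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--recursive", "-p", pool], check=False)
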
Nov 29 02:30:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:08.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:09 np0005539550 python3.9[162661]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:09.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:09 np0005539550 python3.9[162813]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:10 np0005539550 python3.9[162966]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:10.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:10 np0005539550 python3.9[163118]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:11.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:12 np0005539550 python3.9[163271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:13 np0005539550 python3.9[163423]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:13.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:14 np0005539550 python3.9[163575]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:14 np0005539550 python3.9[163728]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:15 np0005539550 python3.9[163880]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:15.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:15 np0005539550 python3.9[164032]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:30:16 np0005539550 python3.9[164185]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
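
The file module calls above remove the same seven unit files, first from /usr/lib/systemd/system and then from /etc/systemd/system. A compact sketch of the state=absent semantics:

    from pathlib import Path

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for unit in UNITS:
            # state=absent: delete if present, succeed quietly otherwise
            Path(unit_dir, unit).unlink(missing_ok=True)
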
Nov 29 02:30:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:17 np0005539550 podman[164337]: 2025-11-29 07:30:17.437593648 +0000 UTC m=+0.077928152 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
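
The container health_status event above comes from podman's healthcheck timer running the configured test (/openstack/healthcheck) inside ovn_metadata_agent. The same check can be triggered on demand:

    import subprocess

    # Exit status 0 corresponds to health_status=healthy in the event log.
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=False)
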
Nov 29 02:30:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:17.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:17 np0005539550 python3.9[164338]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
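
journald renders the newlines of the shell fragment above as #012; decoded, it disables certmonger if it is active, then masks it unless a local unit override exists. A Python rendering of the same guard:

    import subprocess
    from pathlib import Path

    # Mirrors the #012-escaped shell above: only act when the unit is active.
    if subprocess.run(["systemctl", "is-active", "certmonger.service"]).returncode == 0:
        subprocess.run(["systemctl", "disable", "--now", "certmonger.service"])
        if not Path("/etc/systemd/system/certmonger.service").is_file():
            subprocess.run(["systemctl", "mask", "certmonger.service"])
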
Nov 29 02:30:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:18 np0005539550 python3.9[164510]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:30:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:18.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:30:18.896 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:30:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:30:18.896 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:30:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:30:18.896 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
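
The acquire/release DEBUG triplet above is oslo.concurrency's lockutils serializing ProcessMonitor._check_child_processes. A minimal sketch of the same pattern (requires oslo.concurrency; the body stands in for the agent's actual check):

    from oslo_concurrency import lockutils

    # Named in-process lock, matching the "_check_child_processes" lock
    # seen in the agent log.
    with lockutils.lock("_check_child_processes"):
        pass  # check child processes here
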
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
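
The pg target figures above follow a simple relationship: used_ratio × bias × T, where T ≈ 300 on this cluster (plausibly mon_target_pg_per_osd=100 × 3 OSDs; this is inferred from the logged numbers, not from the module source). Quantization to a power of two, with damping toward the current value, is applied afterwards. Checking the inference against three pools:

    # Reproduces the logged 'pg target' values from used_ratio and bias.
    T = 300  # inferred scale factor; see above
    for pool, ratio, bias in [
        (".mgr",               2.0538165363856318e-05, 1.0),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
        ("default.rgw.log",    6.17962497673553e-06,   1.0),
    ]:
        print(pool, ratio * bias * T)
    # -> 0.006161..., 0.0017448..., 0.0018538..., matching the log
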
Nov 29 02:30:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:19.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:19 np0005539550 python3.9[164662]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:30:19 np0005539550 systemd[1]: Reloading.
Nov 29 02:30:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:19 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:30:19 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
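
The systemd_service call with daemon_reload=True maps to a bare reload; the rc.local and SysV network generator notices above are emitted by systemd during that pass. The equivalent invocation:

    import subprocess

    subprocess.run(["systemctl", "daemon-reload"], check=True)
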
Nov 29 02:30:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:20 np0005539550 python3.9[164850]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 02:30:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 02:30:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:22 np0005539550 python3.9[165003]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:22 np0005539550 python3.9[165157]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:23 np0005539550 python3.9[165310]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:23.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:24 np0005539550 python3.9[165463]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:24 np0005539550 python3.9[165617]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:30:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:24.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:25 np0005539550 python3.9[165770]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
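
After removing the unit files and reloading, the playbook clears any lingering failed state for each retired unit. A loop equivalent to the reset-failed commands logged above:

    import subprocess

    for unit in [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]:
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit], check=False)
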
Nov 29 02:30:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:25.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:26 np0005539550 python3.9[165974]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 02:30:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:26.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:27 np0005539550 python3.9[166127]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:30:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:27.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:28 np0005539550 podman[166234]: 2025-11-29 07:30:28.367485087 +0000 UTC m=+0.100582429 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:30:28 np0005539550 python3.9[166310]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
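
The getent/group/user sequence above ensures a libvirt account with fixed uid/gid 42473 and no login shell. A sketch of the equivalent shadow-utils calls (the modules' idempotence handling is omitted):

    import subprocess

    # gid/uid, comment, and shell mirror the logged module parameters.
    subprocess.run(["groupadd", "-g", "42473", "libvirt"], check=False)
    subprocess.run(["useradd", "-u", "42473", "-g", "libvirt",
                    "-c", "libvirt user", "-s", "/sbin/nologin",
                    "-m", "libvirt"], check=False)  # create_home=True
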
Nov 29 02:30:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:28.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:29.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:30.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:30 np0005539550 python3.9[166472]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:30:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:31.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:31 np0005539550 python3.9[166556]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
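
The dnf invocation above pulls the virtualization stack onto the node; note the stray trailing spaces inside the first four package names as logged. A sketch of the equivalent transaction, with the names stripped of that whitespace:

    import subprocess

    PKGS = [
        "libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
        "qemu-kvm", "qemu-img", "libguestfs", "libseccomp",
        "swtpm", "swtpm-tools", "edk2-ovmf", "ceph-common",
        "cyrus-sasl-scram",
    ]
    subprocess.run(["dnf", "-y", "install", *PKGS], check=True)
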
Nov 29 02:30:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:32.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:34.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:35.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:37.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:38.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:40.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:42.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:44.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:30:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:30:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:30:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:30:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
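
The handle_command entries above show mgr/cephadm fetching a minimal conf and the admin keyring, then persisting its OSD-removal queue via config-key. The first two mon commands have direct CLI equivalents:

    import subprocess

    # Same mon commands as dispatched by mgr.compute-0.pdhsqi above.
    subprocess.run(["ceph", "config", "generate-minimal-conf"], check=True)
    subprocess.run(["ceph", "auth", "get", "client.admin"], check=True)
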
Nov 29 02:30:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:45.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:47.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:48 np0005539550 podman[166891]: 2025-11-29 07:30:48.315039854 +0000 UTC m=+0.056235415 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:30:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:48.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:49.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:51.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:30:52 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bf55f322-4678-448a-a122-622a243200a5 does not exist
Nov 29 02:30:52 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7076d5e3-00fd-44ba-9174-39a47ef6e766 does not exist
Nov 29 02:30:52 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0edd7c0d-903f-42f2-a855-48e5c3b97991 does not exist
Nov 29 02:30:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:30:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:30:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:30:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:30:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:30:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:30:52 np0005539550 podman[167092]: 2025-11-29 07:30:52.737957495 +0000 UTC m=+0.050653735 container create 637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elion, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:30:52 np0005539550 systemd[1]: Started libpod-conmon-637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633.scope.
Nov 29 02:30:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:30:52 np0005539550 podman[167092]: 2025-11-29 07:30:52.719497007 +0000 UTC m=+0.032193267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:30:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:53.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:53 np0005539550 podman[167092]: 2025-11-29 07:30:53.744979784 +0000 UTC m=+1.057676024 container init 637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:30:53 np0005539550 podman[167092]: 2025-11-29 07:30:53.754028503 +0000 UTC m=+1.066724743 container start 637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:30:53 np0005539550 zealous_elion[167108]: 167 167
Nov 29 02:30:53 np0005539550 systemd[1]: libpod-637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633.scope: Deactivated successfully.
Nov 29 02:30:53 np0005539550 podman[167092]: 2025-11-29 07:30:53.764036667 +0000 UTC m=+1.076732927 container attach 637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elion, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:30:53 np0005539550 podman[167092]: 2025-11-29 07:30:53.764603241 +0000 UTC m=+1.077299511 container died 637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elion, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:30:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ff3e917ffbf1ee217e1af4e03ea174eb0fa7bc3fbb9d11e5f890f05f8523409c-merged.mount: Deactivated successfully.
Nov 29 02:30:53 np0005539550 podman[167092]: 2025-11-29 07:30:53.825485653 +0000 UTC m=+1.138181893 container remove 637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:30:53 np0005539550 systemd[1]: libpod-conmon-637752c763a83291c4be6138c96c92e50c061bfc3dc81043115c92b68d602633.scope: Deactivated successfully.
Nov 29 02:30:54 np0005539550 podman[167133]: 2025-11-29 07:30:54.06616475 +0000 UTC m=+0.042738113 container create 8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:30:54 np0005539550 podman[167133]: 2025-11-29 07:30:54.046518293 +0000 UTC m=+0.023091676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:54 np0005539550 systemd[1]: Started libpod-conmon-8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c.scope.
Nov 29 02:30:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:30:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49558515d2bb018284784d5acaaf93471ba11710ac3e08e28869e4595f6726bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49558515d2bb018284784d5acaaf93471ba11710ac3e08e28869e4595f6726bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49558515d2bb018284784d5acaaf93471ba11710ac3e08e28869e4595f6726bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49558515d2bb018284784d5acaaf93471ba11710ac3e08e28869e4595f6726bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49558515d2bb018284784d5acaaf93471ba11710ac3e08e28869e4595f6726bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:54 np0005539550 podman[167133]: 2025-11-29 07:30:54.414930945 +0000 UTC m=+0.391504348 container init 8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:30:54 np0005539550 podman[167133]: 2025-11-29 07:30:54.423920783 +0000 UTC m=+0.400494146 container start 8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:30:54 np0005539550 podman[167133]: 2025-11-29 07:30:54.428400166 +0000 UTC m=+0.404973549 container attach 8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:30:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:30:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:30:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:54.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:55 np0005539550 nervous_chatterjee[167149]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:30:55 np0005539550 nervous_chatterjee[167149]: --> relative data size: 1.0
Nov 29 02:30:55 np0005539550 nervous_chatterjee[167149]: --> All data devices are unavailable
Nov 29 02:30:55 np0005539550 systemd[1]: libpod-8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c.scope: Deactivated successfully.
Nov 29 02:30:55 np0005539550 podman[167133]: 2025-11-29 07:30:55.434517793 +0000 UTC m=+1.411091166 container died 8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:30:55 np0005539550 systemd[1]: libpod-8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c.scope: Consumed 1.002s CPU time.
Nov 29 02:30:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-49558515d2bb018284784d5acaaf93471ba11710ac3e08e28869e4595f6726bb-merged.mount: Deactivated successfully.
Nov 29 02:30:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:55.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:56 np0005539550 podman[167133]: 2025-11-29 07:30:56.040529225 +0000 UTC m=+2.017102588 container remove 8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatterjee, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:30:56 np0005539550 systemd[1]: libpod-conmon-8947e73bccc74360dc060bf71a5452393b611dcab87bbca2b873b3939878cc8c.scope: Deactivated successfully.
Nov 29 02:30:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:56.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:56 np0005539550 podman[167326]: 2025-11-29 07:30:56.716185149 +0000 UTC m=+0.023805744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:56 np0005539550 podman[167326]: 2025-11-29 07:30:56.970410409 +0000 UTC m=+0.278030974 container create b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:30:57 np0005539550 systemd[1]: Started libpod-conmon-b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af.scope.
Nov 29 02:30:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:30:57 np0005539550 podman[167326]: 2025-11-29 07:30:57.378698642 +0000 UTC m=+0.686319227 container init b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:30:57 np0005539550 podman[167326]: 2025-11-29 07:30:57.389006733 +0000 UTC m=+0.696627288 container start b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:30:57 np0005539550 xenodochial_zhukovsky[167342]: 167 167
Nov 29 02:30:57 np0005539550 systemd[1]: libpod-b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af.scope: Deactivated successfully.
Nov 29 02:30:57 np0005539550 podman[167326]: 2025-11-29 07:30:57.655792692 +0000 UTC m=+0.963413287 container attach b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:30:57 np0005539550 podman[167326]: 2025-11-29 07:30:57.657223518 +0000 UTC m=+0.964844083 container died b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:30:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:30:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:30:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:30:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:30:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-291cc8e18537ad39f65c0862e0bb28f2546c5b451519d4c0ef41d30951c0d589-merged.mount: Deactivated successfully.
Nov 29 02:30:58 np0005539550 podman[167326]: 2025-11-29 07:30:58.435083963 +0000 UTC m=+1.742704528 container remove b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_zhukovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:30:58 np0005539550 systemd[1]: libpod-conmon-b9c1719445c011575d4acec61331048ba344270c6cbc0bcd0783b3d215eb36af.scope: Deactivated successfully.
Nov 29 02:30:58 np0005539550 podman[167364]: 2025-11-29 07:30:58.632417252 +0000 UTC m=+0.099271766 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 02:30:58 np0005539550 podman[167380]: 2025-11-29 07:30:58.613998635 +0000 UTC m=+0.039373128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:30:58 np0005539550 podman[167380]: 2025-11-29 07:30:58.707951824 +0000 UTC m=+0.133326307 container create ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 02:30:58 np0005539550 systemd[1]: Started libpod-conmon-ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e.scope.
Nov 29 02:30:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:30:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:30:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:30:58.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:30:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:30:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea430458b52379b603f6d9e1a5da72c34c8c907b7ddc9d67ed2328c474b0d7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea430458b52379b603f6d9e1a5da72c34c8c907b7ddc9d67ed2328c474b0d7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea430458b52379b603f6d9e1a5da72c34c8c907b7ddc9d67ed2328c474b0d7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea430458b52379b603f6d9e1a5da72c34c8c907b7ddc9d67ed2328c474b0d7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:30:58 np0005539550 podman[167380]: 2025-11-29 07:30:58.818203648 +0000 UTC m=+0.243578151 container init ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:30:58 np0005539550 podman[167380]: 2025-11-29 07:30:58.827992526 +0000 UTC m=+0.253367009 container start ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:30:58 np0005539550 podman[167380]: 2025-11-29 07:30:58.833573468 +0000 UTC m=+0.258947951 container attach ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:30:59
Nov 29 02:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'backups', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'volumes']
Nov 29 02:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:30:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:30:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:30:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:30:59.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:30:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]: {
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:    "0": [
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:        {
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "devices": [
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "/dev/loop3"
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            ],
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "lv_name": "ceph_lv0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "lv_size": "7511998464",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "name": "ceph_lv0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "tags": {
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.cluster_name": "ceph",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.crush_device_class": "",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.encrypted": "0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.osd_id": "0",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.type": "block",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:                "ceph.vdo": "0"
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            },
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "type": "block",
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:            "vg_name": "ceph_vg0"
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:        }
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]:    ]
Nov 29 02:30:59 np0005539550 affectionate_cartwright[167410]: }
Nov 29 02:30:59 np0005539550 systemd[1]: libpod-ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e.scope: Deactivated successfully.
Nov 29 02:30:59 np0005539550 podman[167380]: 2025-11-29 07:30:59.753253815 +0000 UTC m=+1.178628298 container died ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:30:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-aea430458b52379b603f6d9e1a5da72c34c8c907b7ddc9d67ed2328c474b0d7c-merged.mount: Deactivated successfully.
Nov 29 02:30:59 np0005539550 podman[167380]: 2025-11-29 07:30:59.944142051 +0000 UTC m=+1.369516534 container remove ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:30:59 np0005539550 systemd[1]: libpod-conmon-ec959305084fdd90081756fdee367dfce8041381dc83a07f3642d06b1fec073e.scope: Deactivated successfully.
Nov 29 02:31:00 np0005539550 podman[167572]: 2025-11-29 07:31:00.585459956 +0000 UTC m=+0.062045653 container create d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:31:00 np0005539550 podman[167572]: 2025-11-29 07:31:00.546399736 +0000 UTC m=+0.022985453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:00.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:00 np0005539550 systemd[1]: Started libpod-conmon-d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91.scope.
Nov 29 02:31:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:31:01 np0005539550 podman[167572]: 2025-11-29 07:31:01.005545447 +0000 UTC m=+0.482131164 container init d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:31:01 np0005539550 podman[167572]: 2025-11-29 07:31:01.014899304 +0000 UTC m=+0.491485051 container start d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:31:01 np0005539550 wonderful_chandrasekhar[167588]: 167 167
Nov 29 02:31:01 np0005539550 systemd[1]: libpod-d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91.scope: Deactivated successfully.
Nov 29 02:31:01 np0005539550 podman[167572]: 2025-11-29 07:31:01.039380614 +0000 UTC m=+0.515966411 container attach d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:31:01 np0005539550 podman[167572]: 2025-11-29 07:31:01.04040146 +0000 UTC m=+0.516987247 container died d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:31:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c1a493d1e16a288471095c143df7860d4d51cdf07f2eeeb03ff2f2c37a394823-merged.mount: Deactivated successfully.
Nov 29 02:31:01 np0005539550 podman[167572]: 2025-11-29 07:31:01.300738745 +0000 UTC m=+0.777324442 container remove d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:31:01 np0005539550 systemd[1]: libpod-conmon-d7a8532ec392a3497f59a56448768f960add05ef11fd4856f90a277590c21a91.scope: Deactivated successfully.
Nov 29 02:31:01 np0005539550 podman[167612]: 2025-11-29 07:31:01.476420365 +0000 UTC m=+0.038245070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:31:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:01.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:01 np0005539550 podman[167612]: 2025-11-29 07:31:01.698536622 +0000 UTC m=+0.260361307 container create 22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:31:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:01 np0005539550 systemd[1]: Started libpod-conmon-22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246.scope.
Nov 29 02:31:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:31:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b2e034422b93a143195051b539f71d62977d33b167de77cf0535099179a585/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b2e034422b93a143195051b539f71d62977d33b167de77cf0535099179a585/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b2e034422b93a143195051b539f71d62977d33b167de77cf0535099179a585/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9b2e034422b93a143195051b539f71d62977d33b167de77cf0535099179a585/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:31:01 np0005539550 podman[167612]: 2025-11-29 07:31:01.862182487 +0000 UTC m=+0.424007192 container init 22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shtern, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:31:01 np0005539550 podman[167612]: 2025-11-29 07:31:01.869902123 +0000 UTC m=+0.431726808 container start 22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shtern, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:31:01 np0005539550 podman[167612]: 2025-11-29 07:31:01.985140272 +0000 UTC m=+0.546964947 container attach 22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]: {
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]:        "osd_id": 0,
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]:        "type": "bluestore"
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]:    }
Nov 29 02:31:02 np0005539550 quirky_shtern[167629]: }
Nov 29 02:31:02 np0005539550 systemd[1]: libpod-22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246.scope: Deactivated successfully.
Nov 29 02:31:02 np0005539550 podman[167651]: 2025-11-29 07:31:02.766249899 +0000 UTC m=+0.024419439 container died 22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:31:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:02.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:03.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f9b2e034422b93a143195051b539f71d62977d33b167de77cf0535099179a585-merged.mount: Deactivated successfully.
Nov 29 02:31:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:04.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:31:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:05.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:31:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:06 np0005539550 podman[167651]: 2025-11-29 07:31:06.019989093 +0000 UTC m=+3.278158633 container remove 22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shtern, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:31:06 np0005539550 systemd[1]: libpod-conmon-22c3e417bbdf67f65f9a97a7c7852f64ed5c80aaa48f5878479a2ddd6b519246.scope: Deactivated successfully.
Nov 29 02:31:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:31:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:06.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:31:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:07.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:31:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:31:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:31:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:31:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:31:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:08 np0005539550 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 02:31:08 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:31:08 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:31:08 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:31:08 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:31:08 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:31:08 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:31:08 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:31:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:08.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e6f37727-0dc6-41df-99b7-1a1869101b0a does not exist
Nov 29 02:31:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 97511940-16d1-44bc-9e61-0f94c4c044be does not exist
Nov 29 02:31:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a5148ba6-c80c-427a-8ee2-f8100a7d50d2 does not exist
Nov 29 02:31:09 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 29 02:31:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:09.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:10.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:11.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:31:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:12.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:13.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:15.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:16.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:31:18.897 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:31:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:31:18.899 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:31:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:31:18.899 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:31:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:18.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:19 np0005539550 podman[167783]: 2025-11-29 07:31:19.334812699 +0000 UTC m=+0.062641006 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:31:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:31:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:19.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:31:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:20 np0005539550 kernel: SELinux:  Converting 2771 SID table entries...
Nov 29 02:31:20 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:31:20 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:31:20 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:31:20 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:31:20 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:31:20 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:31:20 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:31:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:20.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:31:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:21.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:31:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:22.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:23.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:24.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:31:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:31:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:26.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:27.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:28.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:29 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 02:31:29 np0005539550 podman[167816]: 2025-11-29 07:31:29.365944592 +0000 UTC m=+0.088570322 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 02:31:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:29.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:30.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:31.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:31:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:33.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:31:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:34.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:35.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:36.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:37.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:38.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:40.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:42.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:43.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:44.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:45.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:46.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:31:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:47.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:31:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:48.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:49.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:50 np0005539550 podman[177253]: 2025-11-29 07:31:50.144151816 +0000 UTC m=+0.069499510 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 02:31:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:31:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:31:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:50.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:31:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:51.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:31:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:31:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:53.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:54.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).paxos(paxos updating c 503..1209) accept timeout, calling fresh election
Nov 29 02:31:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:31:55 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(38) init, last seen epoch 38
Nov 29 02:31:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:31:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:56 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:31:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:56.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:57.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:31:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:31:58 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:31:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:31:58.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:31:59
Nov 29 02:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.data', 'volumes', '.rgw.root']
Nov 29 02:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:31:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:31:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:31:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:31:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:31:59.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:00 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:32:00 np0005539550 podman[184183]: 2025-11-29 07:32:00.359813168 +0000 UTC m=+0.094791109 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:32:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:32:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:32:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:32:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:32:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 19m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:32:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:32:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:32:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 2798 writes, 12K keys, 2796 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 2798 writes, 2796 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1063 writes, 4474 keys, 1062 commit groups, 1.0 writes per commit group, ingest: 7.69 MB, 0.01 MB/s#012Interval WAL: 1064 writes, 1063 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     67.1      0.19              0.04         4    0.048       0      0       0.0       0.0#012  L6      1/0    9.14 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.2     66.7     58.9      0.48              0.07         3    0.159     13K   1283       0.0       0.0#012 Sum      1/0    9.14 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.2     47.6     61.2      0.67              0.11         7    0.096     13K   1283       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.2     47.9     61.6      0.67              0.11         6    0.111     13K   1283       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     66.7     58.9      0.48              0.07         3    0.159     13K   1283       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     68.6      0.19              0.04         3    0.062       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.013, interval 0.013#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.04 GB write, 0.03 MB/s write, 0.03 GB read, 0.03 MB/s read, 0.7 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 1.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.00011 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(55,947.73 KB,0.304448%) FilterBlock(8,43.86 KB,0.0140893%) IndexBlock(8,98.31 KB,0.0315817%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 29 02:32:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:01.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:02 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:32:02 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:32:02 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:32:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 02:32:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:03.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 02:32:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:03.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:05.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:07.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:32:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:32:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:32:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:32:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:32:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:09.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:10 np0005539550 podman[185030]: 2025-11-29 07:32:10.870032436 +0000 UTC m=+0.082947360 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:32:11 np0005539550 podman[185030]: 2025-11-29 07:32:11.012961662 +0000 UTC m=+0.225876596 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:32:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:11.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:11 np0005539550 podman[185182]: 2025-11-29 07:32:11.734503989 +0000 UTC m=+0.059579968 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:32:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:11.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:11 np0005539550 podman[185182]: 2025-11-29 07:32:11.772267984 +0000 UTC m=+0.097343953 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:32:12 np0005539550 podman[185251]: 2025-11-29 07:32:12.000205991 +0000 UTC m=+0.054558381 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=Ceph keepalived, name=keepalived, io.openshift.expose-services=, version=2.2.4)
Nov 29 02:32:12 np0005539550 podman[185251]: 2025-11-29 07:32:12.057369068 +0000 UTC m=+0.111721458 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, release=1793, vcs-type=git, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 29 02:32:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:32:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:13.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:13.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:15.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:32:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:32:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:32:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:32:18.898 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:32:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:32:18.900 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:32:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:32:18.900 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:32:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:19.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:32:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:19.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:20 np0005539550 podman[185424]: 2025-11-29 07:32:20.369398164 +0000 UTC m=+0.087166697 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 02:32:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:21.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:21.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:23.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:32:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:32:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:23.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fd67a519-bbe1-4a7e-9ad2-258dece8f51a does not exist
Nov 29 02:32:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a1512003-36b0-4dba-80fc-2db87bdb238b does not exist
Nov 29 02:32:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e1b254d8-72c4-48e2-8cd2-244e7f737618 does not exist
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:32:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:32:24 np0005539550 kernel: SELinux:  Converting 2772 SID table entries...
Nov 29 02:32:24 np0005539550 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 02:32:24 np0005539550 kernel: SELinux:  policy capability open_perms=1
Nov 29 02:32:24 np0005539550 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 02:32:24 np0005539550 kernel: SELinux:  policy capability always_check_network=0
Nov 29 02:32:24 np0005539550 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 02:32:24 np0005539550 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 02:32:24 np0005539550 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 02:32:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:25.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:25 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 02:32:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:25.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:25 np0005539550 podman[185593]: 2025-11-29 07:32:25.854631806 +0000 UTC m=+0.026181594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:32:26 np0005539550 podman[185593]: 2025-11-29 07:32:26.46621236 +0000 UTC m=+0.637762138 container create d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:32:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:32:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:32:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:27.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:27 np0005539550 systemd[1]: Started libpod-conmon-d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca.scope.
Nov 29 02:32:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:32:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:28 np0005539550 podman[185593]: 2025-11-29 07:32:28.299018444 +0000 UTC m=+2.470568232 container init d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:32:28 np0005539550 podman[185593]: 2025-11-29 07:32:28.31267145 +0000 UTC m=+2.484221218 container start d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:32:28 np0005539550 gracious_perlman[185612]: 167 167
Nov 29 02:32:28 np0005539550 systemd[1]: libpod-d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca.scope: Deactivated successfully.
Nov 29 02:32:28 np0005539550 podman[185593]: 2025-11-29 07:32:28.913999945 +0000 UTC m=+3.085549743 container attach d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:32:28 np0005539550 podman[185593]: 2025-11-29 07:32:28.914423136 +0000 UTC m=+3.085972924 container died d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:32:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-67d74456aa19ddbe17c7b2a1c07498c02b992e11617f7b54b22eace5d44ddc1b-merged.mount: Deactivated successfully.
Nov 29 02:32:29 np0005539550 podman[185593]: 2025-11-29 07:32:29.719317531 +0000 UTC m=+3.890867299 container remove d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_perlman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:32:29 np0005539550 systemd[1]: libpod-conmon-d4f9a0af2de5cb45f8eaca2c00b71ab504171a6bee1684c5f10de4262f4765ca.scope: Deactivated successfully.
Nov 29 02:32:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:29.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:29 np0005539550 podman[185639]: 2025-11-29 07:32:29.928372211 +0000 UTC m=+0.094358238 container create 2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:32:29 np0005539550 podman[185639]: 2025-11-29 07:32:29.85601167 +0000 UTC m=+0.021997717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:32:30 np0005539550 systemd[1]: Started libpod-conmon-2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2.scope.
Nov 29 02:32:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:32:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a15b477e317a9a0dd3f18a9b7b9af71b6c9cab79a93b02a97102fad6be54350/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a15b477e317a9a0dd3f18a9b7b9af71b6c9cab79a93b02a97102fad6be54350/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a15b477e317a9a0dd3f18a9b7b9af71b6c9cab79a93b02a97102fad6be54350/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a15b477e317a9a0dd3f18a9b7b9af71b6c9cab79a93b02a97102fad6be54350/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a15b477e317a9a0dd3f18a9b7b9af71b6c9cab79a93b02a97102fad6be54350/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:30 np0005539550 podman[185639]: 2025-11-29 07:32:30.243479745 +0000 UTC m=+0.409465802 container init 2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:32:30 np0005539550 podman[185639]: 2025-11-29 07:32:30.252283067 +0000 UTC m=+0.418269094 container start 2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:32:30 np0005539550 podman[185639]: 2025-11-29 07:32:30.26898097 +0000 UTC m=+0.434966997 container attach 2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:32:30 np0005539550 podman[185711]: 2025-11-29 07:32:30.519284853 +0000 UTC m=+0.090681705 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:32:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:30 np0005539550 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Nov 29 02:32:31 np0005539550 dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Nov 29 02:32:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:31 np0005539550 gifted_turing[185657]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:32:31 np0005539550 gifted_turing[185657]: --> relative data size: 1.0
Nov 29 02:32:31 np0005539550 gifted_turing[185657]: --> All data devices are unavailable
Nov 29 02:32:31 np0005539550 systemd[1]: libpod-2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2.scope: Deactivated successfully.
Nov 29 02:32:31 np0005539550 podman[185639]: 2025-11-29 07:32:31.180061243 +0000 UTC m=+1.346047290 container died 2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:32:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4a15b477e317a9a0dd3f18a9b7b9af71b6c9cab79a93b02a97102fad6be54350-merged.mount: Deactivated successfully.
Nov 29 02:32:31 np0005539550 podman[185639]: 2025-11-29 07:32:31.24198931 +0000 UTC m=+1.407975337 container remove 2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_turing, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:32:31 np0005539550 systemd[1]: libpod-conmon-2f29e57f36b70b0c286902d7480c25726bd919b7e548a9e61aee02865d4fdbf2.scope: Deactivated successfully.
Nov 29 02:32:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:31.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:31 np0005539550 podman[185921]: 2025-11-29 07:32:31.860056748 +0000 UTC m=+0.045009029 container create b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mcclintock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:32:31 np0005539550 systemd[1]: Started libpod-conmon-b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a.scope.
Nov 29 02:32:31 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:32:31 np0005539550 podman[185921]: 2025-11-29 07:32:31.838900393 +0000 UTC m=+0.023852694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:32:31 np0005539550 podman[185921]: 2025-11-29 07:32:31.952257491 +0000 UTC m=+0.137209792 container init b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mcclintock, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:32:31 np0005539550 podman[185921]: 2025-11-29 07:32:31.96088162 +0000 UTC m=+0.145833931 container start b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mcclintock, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:32:31 np0005539550 xenodochial_mcclintock[185939]: 167 167
Nov 29 02:32:31 np0005539550 podman[185921]: 2025-11-29 07:32:31.966304537 +0000 UTC m=+0.151256838 container attach b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:32:31 np0005539550 systemd[1]: libpod-b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a.scope: Deactivated successfully.
Nov 29 02:32:31 np0005539550 conmon[185939]: conmon b5691d721bda0468442a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a.scope/container/memory.events
Nov 29 02:32:31 np0005539550 podman[185921]: 2025-11-29 07:32:31.968501162 +0000 UTC m=+0.153453443 container died b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mcclintock, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 02:32:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1078009a44b9115aa78ec5c5c1bc0c4e53c7223f5f4fb03df1b778bbb52479b4-merged.mount: Deactivated successfully.
Nov 29 02:32:32 np0005539550 podman[185921]: 2025-11-29 07:32:32.011583572 +0000 UTC m=+0.196535853 container remove b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:32:32 np0005539550 systemd[1]: libpod-conmon-b5691d721bda0468442adc5b12bd5dea13583a4bb6b9e58e82eba81f6fa1972a.scope: Deactivated successfully.
Nov 29 02:32:32 np0005539550 podman[185971]: 2025-11-29 07:32:32.203824887 +0000 UTC m=+0.067352725 container create bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:32:32 np0005539550 podman[185971]: 2025-11-29 07:32:32.166277227 +0000 UTC m=+0.029805085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:32:32 np0005539550 systemd[1]: Started libpod-conmon-bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d.scope.
Nov 29 02:32:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:32:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f19ca4828bc578638f8fa7f9c3af502b6549c6f6c288348c39c49ba0bc1b60c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f19ca4828bc578638f8fa7f9c3af502b6549c6f6c288348c39c49ba0bc1b60c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f19ca4828bc578638f8fa7f9c3af502b6549c6f6c288348c39c49ba0bc1b60c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f19ca4828bc578638f8fa7f9c3af502b6549c6f6c288348c39c49ba0bc1b60c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:32 np0005539550 podman[185971]: 2025-11-29 07:32:32.433082318 +0000 UTC m=+0.296610176 container init bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_heyrovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:32:32 np0005539550 podman[185971]: 2025-11-29 07:32:32.441524591 +0000 UTC m=+0.305052429 container start bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:32:32 np0005539550 podman[185971]: 2025-11-29 07:32:32.452249423 +0000 UTC m=+0.315777261 container attach bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:32:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]: {
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:    "0": [
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:        {
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "devices": [
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "/dev/loop3"
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            ],
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "lv_name": "ceph_lv0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "lv_size": "7511998464",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "name": "ceph_lv0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "tags": {
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.cluster_name": "ceph",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.crush_device_class": "",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.encrypted": "0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.osd_id": "0",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.type": "block",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:                "ceph.vdo": "0"
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            },
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "type": "block",
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:            "vg_name": "ceph_vg0"
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:        }
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]:    ]
Nov 29 02:32:33 np0005539550 strange_heyrovsky[186003]: }
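[editor's note] The JSON block relayed through the strange_heyrovsky container's stdout has the shape of `ceph-volume lvm list --format json` output: top-level keys are OSD ids, values are lists of logical-volume descriptions with ceph.* tags. A minimal sketch of recovering the OSD-to-device mapping from a saved copy (the file name and the parsing approach are my assumptions, based only on the block above):

```python
import json

# lvm_list.json: the JSON object logged above, saved verbatim.
with open("lvm_list.json") as f:
    lvm = json.load(f)

for osd_id, lvs in lvm.items():
    for lv in lvs:
        tags = lv["tags"]
        print(osd_id, lv["lv_path"], lv["devices"],
              tags["ceph.osd_fsid"], tags["ceph.type"])
# -> 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3']
#    5dd67027-4f06-4800-93bd-47ed1a74c5e6 block
```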
Nov 29 02:32:33 np0005539550 systemd[1]: libpod-bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d.scope: Deactivated successfully.
Nov 29 02:32:33 np0005539550 podman[185971]: 2025-11-29 07:32:33.258089602 +0000 UTC m=+1.121617460 container died bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:32:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5f19ca4828bc578638f8fa7f9c3af502b6549c6f6c288348c39c49ba0bc1b60c-merged.mount: Deactivated successfully.
Nov 29 02:32:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:33.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:33 np0005539550 podman[185971]: 2025-11-29 07:32:33.782512811 +0000 UTC m=+1.646040649 container remove bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:32:33 np0005539550 systemd[1]: libpod-conmon-bea4b5d79449c785f7437e180ef7fa273000257de90afdfd66a923fd42abf95d.scope: Deactivated successfully.
Nov 29 02:32:34 np0005539550 podman[186172]: 2025-11-29 07:32:34.332120128 +0000 UTC m=+0.040074195 container create 8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:32:34 np0005539550 systemd[1]: Started libpod-conmon-8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804.scope.
Nov 29 02:32:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:32:34 np0005539550 podman[186172]: 2025-11-29 07:32:34.315291852 +0000 UTC m=+0.023245949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:32:34 np0005539550 podman[186172]: 2025-11-29 07:32:34.427176773 +0000 UTC m=+0.135130860 container init 8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:32:34 np0005539550 podman[186172]: 2025-11-29 07:32:34.435840132 +0000 UTC m=+0.143794199 container start 8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:32:34 np0005539550 podman[186172]: 2025-11-29 07:32:34.441494625 +0000 UTC m=+0.149448692 container attach 8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:32:34 np0005539550 focused_kepler[186191]: 167 167
Nov 29 02:32:34 np0005539550 systemd[1]: libpod-8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804.scope: Deactivated successfully.
Nov 29 02:32:34 np0005539550 podman[186172]: 2025-11-29 07:32:34.442810579 +0000 UTC m=+0.150764686 container died 8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kepler, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:32:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b294c9ab2e8a76e38e631c1386ee9a8d0fdd485ea6ba9670325e994f709a3c2d-merged.mount: Deactivated successfully.
Nov 29 02:32:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:35 np0005539550 podman[186172]: 2025-11-29 07:32:35.558291474 +0000 UTC m=+1.266245541 container remove 8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kepler, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:32:35 np0005539550 systemd[1]: libpod-conmon-8d8e31a79628ff4cd807bc5d65e15170d88f9dd93565293e9bcf1684d07e7804.scope: Deactivated successfully.
Nov 29 02:32:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:35 np0005539550 podman[186217]: 2025-11-29 07:32:35.762391328 +0000 UTC m=+0.077214745 container create b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:32:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:35.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:35 np0005539550 podman[186217]: 2025-11-29 07:32:35.713500291 +0000 UTC m=+0.028323738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:32:35 np0005539550 systemd[1]: Started libpod-conmon-b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7.scope.
Nov 29 02:32:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:32:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da62b29e0b99b9ce64d5f438b70f509ced5fe36899dbbf1d06e3a12773b9b606/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da62b29e0b99b9ce64d5f438b70f509ced5fe36899dbbf1d06e3a12773b9b606/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da62b29e0b99b9ce64d5f438b70f509ced5fe36899dbbf1d06e3a12773b9b606/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da62b29e0b99b9ce64d5f438b70f509ced5fe36899dbbf1d06e3a12773b9b606/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:32:36 np0005539550 podman[186217]: 2025-11-29 07:32:36.164699188 +0000 UTC m=+0.479522635 container init b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:32:36 np0005539550 podman[186217]: 2025-11-29 07:32:36.173218603 +0000 UTC m=+0.488042020 container start b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:32:36 np0005539550 podman[186217]: 2025-11-29 07:32:36.182931159 +0000 UTC m=+0.497754586 container attach b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]: {
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]:        "osd_id": 0,
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]:        "type": "bluestore"
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]:    }
Nov 29 02:32:37 np0005539550 unruffled_ramanujan[186235]: }
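[editor's note] This second dump, from the unruffled_ramanujan container, is keyed by OSD UUID rather than OSD id, consistent with a `ceph-volume raw list`-style probe. Its osd_uuid matches the ceph.osd_fsid tag in the LVM listing earlier (5dd67027-4f06-4800-93bd-47ed1a74c5e6), so both probes agree on the single bluestore OSD. A hedged sketch of that consistency check, assuming both dumps were saved to files as above:

```python
import json

with open("lvm_list.json") as f:   # first JSON block above
    lvm = json.load(f)
with open("raw_list.json") as f:   # second JSON block above
    raw = json.load(f)

lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
             for lvs in lvm.values() for lv in lvs}
# The raw-style dump is keyed by osd_uuid, so every fsid tagged on an
# LV should also appear there if the two views are consistent.
assert lvm_fsids <= set(raw), "lvm/raw OSD views disagree"
print("consistent OSDs:", sorted(lvm_fsids))
```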
Nov 29 02:32:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:37 np0005539550 systemd[1]: libpod-b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7.scope: Deactivated successfully.
Nov 29 02:32:37 np0005539550 podman[186217]: 2025-11-29 07:32:37.091566589 +0000 UTC m=+1.406390006 container died b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ramanujan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:32:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:37.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-da62b29e0b99b9ce64d5f438b70f509ced5fe36899dbbf1d06e3a12773b9b606-merged.mount: Deactivated successfully.
Nov 29 02:32:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:39.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:39.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:39 np0005539550 podman[186217]: 2025-11-29 07:32:39.849143783 +0000 UTC m=+4.163967200 container remove b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:32:39 np0005539550 systemd[1]: libpod-conmon-b3ab84bf23fa18b5d723dbec3ac88683e8e970066230a9f30c388906fd87beb7.scope: Deactivated successfully.
Nov 29 02:32:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:32:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:41.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:41.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:32:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:43.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 06919577-3788-4779-9424-4bb83091a8bc does not exist
Nov 29 02:32:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 483a5eb3-342b-4c0e-be07-0ef87d00390d does not exist
Nov 29 02:32:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5a6552d7-5688-4845-a41d-ce17669e5442 does not exist
Nov 29 02:32:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:45.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:45.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:47.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:47.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:32:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:49.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:32:50 np0005539550 podman[187143]: 2025-11-29 07:32:50.635984061 +0000 UTC m=+0.105908551 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
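[editor's note] The health_status event above embeds the container's full config_data as a Python-style dict literal (single quotes, bare True), including the healthcheck definition: the host directory /var/lib/openstack/healthchecks/ovn_metadata_agent is mounted at /openstack and the probe runs /openstack/healthcheck. A sketch of pulling that payload out of such an event line for inspection; the helper below is illustrative, not part of podman:

```python
import ast

def extract_config_data(event_line: str) -> dict:
    """Isolate the config_data={...} payload from a podman event line.

    The payload is a Python-style dict literal, so ast.literal_eval
    can parse it once the matching outer braces are found.
    """
    start = event_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(event_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(event_line[start:i + 1])
    raise ValueError("unbalanced config_data payload")

# cfg = extract_config_data(line)
# cfg["healthcheck"]["test"]  -> '/openstack/healthcheck'
```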
Nov 29 02:32:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:51.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:52 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 29 02:32:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:52.917936) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:32:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 29 02:32:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401572918147, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2189, "num_deletes": 250, "total_data_size": 4167281, "memory_usage": 4255840, "flush_reason": "Manual Compaction"}
Nov 29 02:32:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 29 02:32:52 np0005539550 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 02:32:52 np0005539550 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 02:32:52 np0005539550 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 02:32:52 np0005539550 systemd[1]: sshd.service: Consumed 15.103s CPU time, read 32.0K from disk, written 272.0K to disk.
Nov 29 02:32:52 np0005539550 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 02:32:52 np0005539550 systemd[1]: Stopping sshd-keygen.target...
Nov 29 02:32:52 np0005539550 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:32:52 np0005539550 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:32:52 np0005539550 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 02:32:52 np0005539550 systemd[1]: Reached target sshd-keygen.target.
Nov 29 02:32:53 np0005539550 systemd[1]: Starting OpenSSH server daemon...
Nov 29 02:32:53 np0005539550 systemd[1]: Started OpenSSH server daemon.
Nov 29 02:32:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:53.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401574021801, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 4036698, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11044, "largest_seqno": 13232, "table_properties": {"data_size": 4026679, "index_size": 6385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20239, "raw_average_key_size": 19, "raw_value_size": 4006450, "raw_average_value_size": 3931, "num_data_blocks": 283, "num_entries": 1019, "num_filter_entries": 1019, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401182, "oldest_key_time": 1764401182, "file_creation_time": 1764401572, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:32:54 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 1104634 microseconds, and 26809 cpu microseconds.
Nov 29 02:32:54 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:32:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:55.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:54.022613) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 4036698 bytes OK
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:54.022651) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:55.664588) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:55.664671) EVENT_LOG_v1 {"time_micros": 1764401575664658, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:55.664701) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4158356, prev total WAL file size 4202254, number of live WAL files 2.
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:55.668394) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3942KB)], [26(9360KB)]
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401575668522, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 13621503, "oldest_snapshot_seqno": -1}
Nov 29 02:32:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:32:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:55.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:32:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:32:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:57.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:32:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:32:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:32:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:32:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:32:59
Nov 29 02:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'vms']
Nov 29 02:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:32:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:32:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:32:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:32:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:32:59.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:01 np0005539550 podman[187391]: 2025-11-29 07:33:01.36288488 +0000 UTC m=+0.099738025 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:33:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:01.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4517 keys, 13069208 bytes, temperature: kUnknown
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583268833, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 13069208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13031600, "index_size": 25210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111735, "raw_average_key_size": 24, "raw_value_size": 12942663, "raw_average_value_size": 2865, "num_data_blocks": 1079, "num_entries": 4517, "num_filter_entries": 4517, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764401575, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.269210) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 13069208 bytes
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.488008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 1.8 rd, 1.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 9.1 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(6.6) write-amplify(3.2) OK, records in: 5037, records dropped: 520 output_compression: NoCompression
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.488052) EVENT_LOG_v1 {"time_micros": 1764401583488031, "job": 10, "event": "compaction_finished", "compaction_time_micros": 7600468, "compaction_time_cpu_micros": 29970, "output_level": 6, "num_output_files": 1, "total_output_size": 13069208, "num_input_records": 5037, "num_output_records": 4517, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583489621, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583491733, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:32:55.668177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.491915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.491921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.491923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.491924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.491926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
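[editor's note] The job-10 summary above reports write-amplify(3.2) and read-write-amplify(6.6), and both figures follow directly from the byte counts in the surrounding events: the level-0 flush table #28 was 4036698 bytes, the compaction's total input_data_size was 13621503 bytes, and the compacted output table #29 was 13069208 bytes. Worked out:

```python
# Figures taken verbatim from the rocksdb events above (job 10):
l0_input = 4_036_698      # flushed table #28 (the newly written data)
total_input = 13_621_503  # compaction_started input_data_size (#28 + #26)
output = 13_069_208       # compacted output table #29

write_amplify = output / l0_input                     # bytes written per new byte
read_write_amplify = (total_input + output) / l0_input

print(f"write-amplify {write_amplify:.1f}")            # -> 3.2
print(f"read-write-amplify {read_write_amplify:.1f}")  # -> 6.6
```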
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.501393) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583501426, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 268, "num_deletes": 250, "total_data_size": 46559, "memory_usage": 53392, "flush_reason": "Manual Compaction"}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583558067, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 47089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13233, "largest_seqno": 13500, "table_properties": {"data_size": 45231, "index_size": 87, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 3804, "raw_average_key_size": 14, "raw_value_size": 41662, "raw_average_value_size": 156, "num_data_blocks": 4, "num_entries": 266, "num_filter_entries": 266, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401574, "oldest_key_time": 1764401574, "file_creation_time": 1764401583, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 56721 microseconds, and 988 cpu microseconds.
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(41) init, last seen epoch 41, mid-election, bumping
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.558113) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 47089 bytes OK
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.558132) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.592707) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.592761) EVENT_LOG_v1 {"time_micros": 1764401583592751, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.592785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 44493, prev total WAL file size 44807, number of live WAL files 2.
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.606342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(45KB)], [29(12MB)]
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583606394, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 13116297, "oldest_snapshot_seqno": -1}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4524 keys, 13110755 bytes, temperature: kUnknown
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583779491, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13110755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13073099, "index_size": 25258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 111902, "raw_average_key_size": 24, "raw_value_size": 12984021, "raw_average_value_size": 2870, "num_data_blocks": 1081, "num_entries": 4524, "num_filter_entries": 4524, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764401583, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:03.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.779736) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13110755 bytes
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.826357) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.7 rd, 75.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 12.5 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(557.0) write-amplify(278.4) OK, records in: 4783, records dropped: 259 output_compression: NoCompression
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.826429) EVENT_LOG_v1 {"time_micros": 1764401583826410, "job": 12, "event": "compaction_finished", "compaction_time_micros": 173176, "compaction_time_cpu_micros": 28660, "output_level": 6, "num_output_files": 1, "total_output_size": 13110755, "num_input_records": 4783, "num_output_records": 4524, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583826663, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401583829166, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.606256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.829921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.829934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.829936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.829938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:03.829940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:03 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:33:03 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:33:04 np0005539550 systemd[1]: Reloading.
Nov 29 02:33:04 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:33:04 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Nov 29 02:33:04 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:33:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:33:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:33:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:33:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 20m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:33:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:33:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:05.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:07.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:33:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:33:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:33:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:33:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:33:08 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:33:08 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:33:08 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:08 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:33:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:33:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:09.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:11.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:13.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:15 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:33:15 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:33:15 np0005539550 systemd[1]: man-db-cache-update.service: Consumed 11.394s CPU time.
Nov 29 02:33:15 np0005539550 systemd[1]: run-rf285020f6f844ea586f3bd13488a1c2c.service: Deactivated successfully.
Nov 29 02:33:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:15.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:17.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:33:18.900 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:33:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:33:18.901 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:33:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:33:18.901 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:33:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:19.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:33:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:19.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:21.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:21 np0005539550 podman[195953]: 2025-11-29 07:33:21.344724312 +0000 UTC m=+0.068195326 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:33:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:21.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:23.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:23.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:25.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:27.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:27.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:29.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:29.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:31.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:32 np0005539550 podman[196029]: 2025-11-29 07:33:32.360712204 +0000 UTC m=+0.106360054 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:33:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:33.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:33.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:33:37 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(45) init, last seen epoch 45, mid-election, bumping
Nov 29 02:33:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:39.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:40 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:33:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:41.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 20m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.407130) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622407174, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 413, "num_deletes": 251, "total_data_size": 317633, "memory_usage": 325768, "flush_reason": "Manual Compaction"}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622425342, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 293346, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13508, "largest_seqno": 13913, "table_properties": {"data_size": 290959, "index_size": 487, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5878, "raw_average_key_size": 18, "raw_value_size": 286135, "raw_average_value_size": 891, "num_data_blocks": 22, "num_entries": 321, "num_filter_entries": 321, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401583, "oldest_key_time": 1764401583, "file_creation_time": 1764401622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18266 microseconds, and 1875 cpu microseconds.
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.425395) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 293346 bytes OK
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.425418) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.429977) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.430019) EVENT_LOG_v1 {"time_micros": 1764401622430011, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.430042) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 315103, prev total WAL file size 315103, number of live WAL files 2.
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.430472) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(286KB)], [32(12MB)]
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622430497, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 13404101, "oldest_snapshot_seqno": -1}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4326 keys, 10268172 bytes, temperature: kUnknown
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622513519, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 10268172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10234524, "index_size": 21762, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 108806, "raw_average_key_size": 25, "raw_value_size": 10151495, "raw_average_value_size": 2346, "num_data_blocks": 916, "num_entries": 4326, "num_filter_entries": 4326, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764401622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.513794) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 10268172 bytes
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.537328) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.3 rd, 123.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.5 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(80.7) write-amplify(35.0) OK, records in: 4845, records dropped: 519 output_compression: NoCompression
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.537407) EVENT_LOG_v1 {"time_micros": 1764401622537370, "job": 14, "event": "compaction_finished", "compaction_time_micros": 83110, "compaction_time_cpu_micros": 23669, "output_level": 6, "num_output_files": 1, "total_output_size": 10268172, "num_input_records": 4845, "num_output_records": 4326, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622537785, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401622540969, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.430432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.541118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.541125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.541127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.541128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:33:42.541130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:33:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:43.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
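The anonymous "HEAD / HTTP/1.0" probes arriving every ~2 s, alternating between 192.168.122.102 and 192.168.122.100, look like load-balancer health checks against radosgw (an inference from the cadence, not something the log states). The beast access line has a fixed shape and parses with one regex; a sketch:

import re

beast = re.compile(
    r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)
line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:07:33:43.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = beast.search(line)
print(m.group("addr"), m.group("request"), m.group("status"), m.group("latency"))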
Nov 29 02:33:43 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:33:43 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:33:43 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:33:43 np0005539550 ceph-mon[74435]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:33:43 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:43 np0005539550 ceph-mon[74435]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:33:43 np0005539550 ceph-mon[74435]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
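This is the notable event in this stretch of the log: mon.compute-1 has dropped out, compute-2 and compute-0 call an election, compute-0 becomes leader, and the cluster degrades to HEALTH_WARN with a 2-of-3 quorum (still serving, since a majority remains). The MON_DOWN detail line carries the down monitor's name, rank, and v2/v1 addresses; a sketch pulling those out, with the regex shape assumed from this one sample:

import re

detail = ("mon.compute-1 (rank 2) addr "
          "[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)")
m = re.search(r"mon\.(?P<name>\S+) \(rank (?P<rank>\d+)\) addr \[(?P<addrs>[^\]]+)\] is down",
              detail)
print(m.group("name"), m.group("rank"), m.group("addrs").split(","))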
Nov 29 02:33:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
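_set_new_cache_sizes is the monitor's periodic memory autotune splitting its cache target across incremental osdmaps, full osdmaps, and the RocksDB block cache. The figures logged here add up; converting them as a quick check:

allocs = {
    "cache_size": 1020054731,   # ~973 MiB overall target
    "inc_alloc": 348127232,     # incremental osdmap cache
    "full_alloc": 348127232,    # full osdmap cache
    "kv_alloc": 318767104,      # RocksDB block cache
}
for name, nbytes in allocs.items():
    print(f"{name}: {nbytes / 2**20:.0f} MiB")
# inc + full + kv = 332 + 332 + 304 = 968 MiB of the ~973 MiB target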
Nov 29 02:33:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:43.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:33:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:33:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:33:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dba6c619-460a-4cb6-80e3-19dd5749c7f6 does not exist
Nov 29 02:33:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ef0639f1-3e38-49bd-a530-b00c6e1b28d2 does not exist
Nov 29 02:33:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 94997e6c-8bae-423f-92e7-e6c83d62b139 does not exist
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:33:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:45.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
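The burst of config-key set / auth get / config generate-minimal-conf commands from mgr.14128 is the cephadm mgr module refreshing host state and regenerating deployment material; several of the audit lines appear truncated after entity=... (left as-is above). Where a cmd=[...] suffix survives, it is a JSON list; a sketch decoding it:

import json, re

line = ("from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' "
        'cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch')
m = re.search(r"cmd=(\[.*\]): dispatch", line)
for cmd in json.loads(m.group(1)):
    print(cmd["prefix"], cmd.get("entity", ""))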
Nov 29 02:33:46 np0005539550 podman[196454]: 2025-11-29 07:33:46.174193564 +0000 UTC m=+0.044457205 container create 28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:33:46 np0005539550 systemd[1]: Started libpod-conmon-28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661.scope.
Nov 29 02:33:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:33:46 np0005539550 podman[196454]: 2025-11-29 07:33:46.1540451 +0000 UTC m=+0.024308751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:46 np0005539550 podman[196454]: 2025-11-29 07:33:46.251718241 +0000 UTC m=+0.121981922 container init 28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:33:46 np0005539550 podman[196454]: 2025-11-29 07:33:46.258350931 +0000 UTC m=+0.128614582 container start 28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:33:46 np0005539550 podman[196454]: 2025-11-29 07:33:46.261407358 +0000 UTC m=+0.131671029 container attach 28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:33:46 np0005539550 magical_nightingale[196470]: 167 167
Nov 29 02:33:46 np0005539550 systemd[1]: libpod-28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661.scope: Deactivated successfully.
Nov 29 02:33:46 np0005539550 podman[196454]: 2025-11-29 07:33:46.264482117 +0000 UTC m=+0.134745768 container died 28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:33:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3525ff78b523317786d988d59f163e09b8e14c7d80e1ef1457366db4903afa87-merged.mount: Deactivated successfully.
Nov 29 02:33:46 np0005539550 podman[196454]: 2025-11-29 07:33:46.330042469 +0000 UTC m=+0.200306120 container remove 28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:33:46 np0005539550 systemd[1]: libpod-conmon-28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661.scope: Deactivated successfully.
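magical_nightingale is the first of several throwaway containers cephadm launches from the ceph reef image; each one runs a single probe (the "167 167" output is presumably the ceph uid/gid check) and lives roughly 100 ms through create, init, start, attach, died, remove. Pairing those podman events by container ID makes the pattern easy to see; a sketch over journal text, with abbreviated sample lines:

import re
from collections import defaultdict

cid = "28676135fee14cc3039dbb726da3f5c8569f45f5fe07d21597fb8d72ac037661"
journal_lines = [
    f"podman[196454]: 2025-11-29 07:33:46.17 +0000 UTC container create {cid} (image=...)",
    f"podman[196454]: 2025-11-29 07:33:46.26 +0000 UTC container died {cid} (image=...)",
    f"podman[196454]: 2025-11-29 07:33:46.33 +0000 UTC container remove {cid} (image=...)",
]

events = defaultdict(list)
pat = re.compile(r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")
for line in journal_lines:
    if m := pat.search(line):
        events[m.group(2)].append(m.group(1))

for ident, seq in events.items():
    print(ident[:12], " -> ".join(seq))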
Nov 29 02:33:46 np0005539550 podman[196496]: 2025-11-29 07:33:46.526263483 +0000 UTC m=+0.040828822 container create 6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:33:46 np0005539550 systemd[1]: Started libpod-conmon-6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73.scope.
Nov 29 02:33:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:33:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f408856774b8d7c63b0183e45beb3c178663a96147495948cf9ca3bea2a76632/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f408856774b8d7c63b0183e45beb3c178663a96147495948cf9ca3bea2a76632/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f408856774b8d7c63b0183e45beb3c178663a96147495948cf9ca3bea2a76632/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f408856774b8d7c63b0183e45beb3c178663a96147495948cf9ca3bea2a76632/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f408856774b8d7c63b0183e45beb3c178663a96147495948cf9ca3bea2a76632/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
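The repeated xfs "supports timestamps until 2038 (0x7fffffff)" notices fire once per bind-mounted path each time a container overlay is remounted; they mean the filesystem was created without the bigtime feature, so inode timestamps stop at the 32-bit epoch limit. 0x7fffffff is exactly the classic boundary:

from datetime import datetime, timezone

# 0x7fffffff is the largest signed 32-bit time_t value.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00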
Nov 29 02:33:46 np0005539550 podman[196496]: 2025-11-29 07:33:46.50928917 +0000 UTC m=+0.023854549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:46 np0005539550 podman[196496]: 2025-11-29 07:33:46.61044627 +0000 UTC m=+0.125011649 container init 6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:33:46 np0005539550 podman[196496]: 2025-11-29 07:33:46.618842194 +0000 UTC m=+0.133407533 container start 6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:33:46 np0005539550 podman[196496]: 2025-11-29 07:33:46.622703493 +0000 UTC m=+0.137268862 container attach 6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:33:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:47.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:47 np0005539550 silly_brahmagupta[196513]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:33:47 np0005539550 silly_brahmagupta[196513]: --> relative data size: 1.0
Nov 29 02:33:47 np0005539550 silly_brahmagupta[196513]: --> All data devices are unavailable
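silly_brahmagupta's "passed data devices: 0 physical, 1 LVM ... All data devices are unavailable" output reads like a ceph-volume lvm batch --report dry run for the OSD spec: the only candidate device is the LV already carrying OSD 0 (confirmed by the lvm list dump further down), so there is nothing new to deploy. This is an expected no-op, not an error. A hedged reproduction of the same check; the exact flags cephadm passes are an assumption:

import json, subprocess

# Hypothetical manual invocation; cephadm runs the equivalent inside the ceph container.
report = subprocess.run(
    ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
     "/dev/ceph_vg0/ceph_lv0"],
    capture_output=True, text=True, check=True,
).stdout
print(json.loads(report))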
Nov 29 02:33:47 np0005539550 systemd[1]: libpod-6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73.scope: Deactivated successfully.
Nov 29 02:33:47 np0005539550 podman[196496]: 2025-11-29 07:33:47.546937734 +0000 UTC m=+1.061503103 container died 6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:33:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f408856774b8d7c63b0183e45beb3c178663a96147495948cf9ca3bea2a76632-merged.mount: Deactivated successfully.
Nov 29 02:33:47 np0005539550 podman[196496]: 2025-11-29 07:33:47.686300299 +0000 UTC m=+1.200865638 container remove 6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brahmagupta, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:33:47 np0005539550 systemd[1]: libpod-conmon-6faa2c128eb946e6b986999328e77af29f717ede05dd8f34c2eaf8309e95ea73.scope: Deactivated successfully.
Nov 29 02:33:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:48 np0005539550 podman[196684]: 2025-11-29 07:33:48.302471323 +0000 UTC m=+0.043553652 container create b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:33:48 np0005539550 systemd[1]: Started libpod-conmon-b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8.scope.
Nov 29 02:33:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:33:48 np0005539550 podman[196684]: 2025-11-29 07:33:48.370329944 +0000 UTC m=+0.111412303 container init b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:33:48 np0005539550 podman[196684]: 2025-11-29 07:33:48.378149153 +0000 UTC m=+0.119231482 container start b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:33:48 np0005539550 podman[196684]: 2025-11-29 07:33:48.286327221 +0000 UTC m=+0.027409570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:48 np0005539550 podman[196684]: 2025-11-29 07:33:48.38274588 +0000 UTC m=+0.123828209 container attach b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:33:48 np0005539550 romantic_rosalind[196701]: 167 167
Nov 29 02:33:48 np0005539550 systemd[1]: libpod-b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8.scope: Deactivated successfully.
Nov 29 02:33:48 np0005539550 conmon[196701]: conmon b41a7d10c3caaf614bb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8.scope/container/memory.events
Nov 29 02:33:48 np0005539550 podman[196684]: 2025-11-29 07:33:48.384518086 +0000 UTC m=+0.125600415 container died b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:33:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ed95f270ac6d0896f59a2129d2f46dc1b2a33ad8e345220ae3ac8bba1b71c372-merged.mount: Deactivated successfully.
Nov 29 02:33:48 np0005539550 podman[196684]: 2025-11-29 07:33:48.421537469 +0000 UTC m=+0.162619798 container remove b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_rosalind, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:33:48 np0005539550 systemd[1]: libpod-conmon-b41a7d10c3caaf614bb510b4145c906323523c4394f59e67e89602296019d0b8.scope: Deactivated successfully.
Nov 29 02:33:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:48 np0005539550 podman[196725]: 2025-11-29 07:33:48.58430391 +0000 UTC m=+0.045235205 container create b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:33:48 np0005539550 systemd[1]: Started libpod-conmon-b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e.scope.
Nov 29 02:33:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:33:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3cb0b25985b71a8780beb069ea5a398f8443ebb5c35800f4d3fc43cb78ad5a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3cb0b25985b71a8780beb069ea5a398f8443ebb5c35800f4d3fc43cb78ad5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3cb0b25985b71a8780beb069ea5a398f8443ebb5c35800f4d3fc43cb78ad5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df3cb0b25985b71a8780beb069ea5a398f8443ebb5c35800f4d3fc43cb78ad5a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:48 np0005539550 podman[196725]: 2025-11-29 07:33:48.563765766 +0000 UTC m=+0.024697081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:48 np0005539550 podman[196725]: 2025-11-29 07:33:48.668422095 +0000 UTC m=+0.129353420 container init b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:33:48 np0005539550 podman[196725]: 2025-11-29 07:33:48.677266451 +0000 UTC m=+0.138197746 container start b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sammet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:33:48 np0005539550 podman[196725]: 2025-11-29 07:33:48.681208391 +0000 UTC m=+0.142139686 container attach b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sammet, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:33:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:49.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]: {
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:    "0": [
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:        {
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "devices": [
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "/dev/loop3"
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            ],
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "lv_name": "ceph_lv0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "lv_size": "7511998464",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "name": "ceph_lv0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "tags": {
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.cluster_name": "ceph",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.crush_device_class": "",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.encrypted": "0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.osd_id": "0",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.type": "block",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:                "ceph.vdo": "0"
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            },
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "type": "block",
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:            "vg_name": "ceph_vg0"
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:        }
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]:    ]
Nov 29 02:33:49 np0005539550 crazy_sammet[196741]: }
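crazy_sammet's JSON is evidently ceph-volume lvm list --format json output: one entry keyed by OSD id, with the ceph.* lv_tags tying /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3) to osd_fsid 5dd67027-4f06-4800-93bd-47ed1a74c5e6 in cluster b66774a7-56d9-5535-bd8c-681234404870. A sketch reducing it to a per-OSD summary, with the JSON trimmed to the fields used:

import json

lvm_list = json.loads("""
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "tags": {"ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}
    }
  ]
}
""")
for osd_id, lvs in lvm_list.items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
              f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")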
Nov 29 02:33:49 np0005539550 systemd[1]: libpod-b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e.scope: Deactivated successfully.
Nov 29 02:33:49 np0005539550 podman[196750]: 2025-11-29 07:33:49.511726832 +0000 UTC m=+0.025356027 container died b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:33:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-df3cb0b25985b71a8780beb069ea5a398f8443ebb5c35800f4d3fc43cb78ad5a-merged.mount: Deactivated successfully.
Nov 29 02:33:49 np0005539550 podman[196750]: 2025-11-29 07:33:49.576102414 +0000 UTC m=+0.089731599 container remove b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_sammet, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:33:49 np0005539550 systemd[1]: libpod-conmon-b8a54f10678e2f6a27ca6e93a61cf8d1b13e4754e73574497c53d14b83752e0e.scope: Deactivated successfully.
Nov 29 02:33:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:50 np0005539550 podman[196904]: 2025-11-29 07:33:50.171200411 +0000 UTC m=+0.041263483 container create 7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:33:50 np0005539550 systemd[1]: Started libpod-conmon-7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b.scope.
Nov 29 02:33:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:33:50 np0005539550 podman[196904]: 2025-11-29 07:33:50.152460484 +0000 UTC m=+0.022523576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:50 np0005539550 podman[196904]: 2025-11-29 07:33:50.25899076 +0000 UTC m=+0.129053832 container init 7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:33:50 np0005539550 podman[196904]: 2025-11-29 07:33:50.26564406 +0000 UTC m=+0.135707122 container start 7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:33:50 np0005539550 podman[196904]: 2025-11-29 07:33:50.269375775 +0000 UTC m=+0.139438837 container attach 7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:33:50 np0005539550 determined_pascal[196922]: 167 167
Nov 29 02:33:50 np0005539550 systemd[1]: libpod-7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b.scope: Deactivated successfully.
Nov 29 02:33:50 np0005539550 conmon[196922]: conmon 7b0491d3b7f1bbee6b51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b.scope/container/memory.events
Nov 29 02:33:50 np0005539550 podman[196904]: 2025-11-29 07:33:50.271654163 +0000 UTC m=+0.141717225 container died 7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:33:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6b3de4238d79f1b475348b037de064b8ca88318003282d0d7ab16f2bb943137c-merged.mount: Deactivated successfully.
Nov 29 02:33:50 np0005539550 podman[196904]: 2025-11-29 07:33:50.312156606 +0000 UTC m=+0.182219668 container remove 7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:33:50 np0005539550 systemd[1]: libpod-conmon-7b0491d3b7f1bbee6b51e9c12c8b8c99303117f5de82214148c1f44a34892a5b.scope: Deactivated successfully.
Nov 29 02:33:50 np0005539550 podman[196945]: 2025-11-29 07:33:50.544182964 +0000 UTC m=+0.116793040 container create e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:33:50 np0005539550 podman[196945]: 2025-11-29 07:33:50.452100345 +0000 UTC m=+0.024710441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:33:50 np0005539550 systemd[1]: Started libpod-conmon-e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab.scope.
Nov 29 02:33:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:33:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db5cec3861bd0784a835d360d18aee8be7bc430ca5e1db9d01b023e3c83ea8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db5cec3861bd0784a835d360d18aee8be7bc430ca5e1db9d01b023e3c83ea8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db5cec3861bd0784a835d360d18aee8be7bc430ca5e1db9d01b023e3c83ea8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2db5cec3861bd0784a835d360d18aee8be7bc430ca5e1db9d01b023e3c83ea8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:33:50 np0005539550 podman[196945]: 2025-11-29 07:33:50.745640212 +0000 UTC m=+0.318250308 container init e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:33:50 np0005539550 podman[196945]: 2025-11-29 07:33:50.752703372 +0000 UTC m=+0.325313448 container start e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:33:51 np0005539550 podman[196945]: 2025-11-29 07:33:51.011273687 +0000 UTC m=+0.583883773 container attach e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:33:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:51.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]: {
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]:        "osd_id": 0,
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]:        "type": "bluestore"
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]:    }
Nov 29 02:33:51 np0005539550 compassionate_pascal[196961]: }
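compassionate_pascal reports the same OSD from the raw/bluestore side, keyed by osd_uuid and pointing at the device-mapper path; together with the lvm list above it gives cephadm a consistent picture: osd.0, bluestore, on ceph_vg0-ceph_lv0. A sketch reading that view:

import json

raw_list = json.loads("""
{
  "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
    "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
    "type": "bluestore"
  }
}
""")
for osd in raw_list.values():
    print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}, "
          f"cluster {osd['ceph_fsid']}")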
Nov 29 02:33:51 np0005539550 systemd[1]: libpod-e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab.scope: Deactivated successfully.
Nov 29 02:33:51 np0005539550 podman[196945]: 2025-11-29 07:33:51.65954036 +0000 UTC m=+1.232150456 container died e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:33:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2db5cec3861bd0784a835d360d18aee8be7bc430ca5e1db9d01b023e3c83ea8b-merged.mount: Deactivated successfully.
Nov 29 02:33:51 np0005539550 podman[196945]: 2025-11-29 07:33:51.748216521 +0000 UTC m=+1.320826597 container remove e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 02:33:51 np0005539550 systemd[1]: libpod-conmon-e6f154445bb58fd4f2c3b9bee6b380fb98039f2ee3652510ec1696f3e535d6ab.scope: Deactivated successfully.
Nov 29 02:33:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:33:51 np0005539550 podman[197032]: 2025-11-29 07:33:51.796600955 +0000 UTC m=+0.091104614 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:33:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:33:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev aa4b122c-cbf6-43a0-87bb-8d8f2a9ccb1b does not exist
Nov 29 02:33:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 873b0777-9899-4eff-975b-02b6a33dee5d does not exist
Nov 29 02:33:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 75436047-0e61-4609-8919-b108514f276b does not exist
Nov 29 02:33:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:51.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:33:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:53.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:53.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:55.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:55.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:33:56 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(49) init, last seen epoch 49, mid-election, bumping
Nov 29 02:33:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:33:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:57.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 21m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:33:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:33:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:57.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:33:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:33:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:33:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:33:59.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:33:59
Nov 29 02:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.meta', 'vms']
Nov 29 02:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:33:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:33:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:33:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:33:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:33:59.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:01.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:34:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:01.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:03 np0005539550 podman[197117]: 2025-11-29 07:34:03.33960142 +0000 UTC m=+0.083781478 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, tcib_managed=true)
Nov 29 02:34:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:03.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:03.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:04 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:34:04 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:34:04 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:34:04 np0005539550 ceph-mon[74435]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:34:04 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:34:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:34:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:34:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:05.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:07.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:34:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:34:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:34:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:34:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:34:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:34:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:09.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:10 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:34:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:11.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:11.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:13.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:13.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:15.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:15.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:17.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:17.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:34:18.900 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:34:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:34:18.902 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:34:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:34:18.902 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:34:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb607f36f0 =====
Nov 29 02:34:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:19.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb607f36f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:34:19 np0005539550 radosgw[93278]: beast: 0x7fdb607f36f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:19.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:34:20 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:34:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).paxos(paxos updating c 754..1281) accept timeout, calling fresh election
Nov 29 02:34:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:34:20 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(52) init, last seen epoch 52
Nov 29 02:34:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:34:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:21.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:22 np0005539550 podman[197203]: 2025-11-29 07:34:22.323369869 +0000 UTC m=+0.053824934 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 02:34:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:23.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:24 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:34:25 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:34:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:25.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.pdhsqi(active, since 21m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:34:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:34:27 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:34:27 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:34:27 np0005539550 ceph-mon[74435]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:34:27 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:34:27 np0005539550 ceph-mon[74435]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:34:27 np0005539550 ceph-mon[74435]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:27.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:27.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:29.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:29.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:31 np0005539550 systemd[1]: session-50.scope: Deactivated successfully.
Nov 29 02:34:31 np0005539550 systemd[1]: session-50.scope: Consumed 2min 5.808s CPU time.
Nov 29 02:34:31 np0005539550 systemd-logind[788]: Session 50 logged out. Waiting for processes to exit.
Nov 29 02:34:31 np0005539550 systemd-logind[788]: Removed session 50.
Nov 29 02:34:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:31.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:34:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1201.4 total, 600.0 interval#012Cumulative writes: 8500 writes, 34K keys, 8500 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8500 writes, 1711 syncs, 4.97 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 558 writes, 870 keys, 558 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 558 writes, 255 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1201.4 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Nov 29 02:34:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:33.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:34 np0005539550 podman[197278]: 2025-11-29 07:34:34.349980978 +0000 UTC m=+0.084264290 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 02:34:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:35.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:37.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:39.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:39.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:41.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:34:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:41.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:34:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 17.0813
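[annotation] The mon is noting that a standby MDS (gid 24146 on 192.168.122.101) has not sent a beacon for ~17 s; for an up:standby daemon this usually leads to it being dropped from the map rather than indicating an outage of an active rank. Current filesystem and MDS state can be checked with the standard CLI:

    ceph fs status
    ceph mds stat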
Nov 29 02:34:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:43.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:45.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
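[annotation] "Check health" is the mgr devicehealth module's periodic evaluation of stored device health metrics. Its view can be queried directly:

    # List tracked devices and run an on-demand health evaluation
    ceph device ls
    ceph device check-health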
Nov 29 02:34:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:47.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:47.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 22.1699
Nov 29 02:34:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:49.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:34:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:52 np0005539550 podman[197388]: 2025-11-29 07:34:52.441629456 +0000 UTC m=+0.054555633 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 29 02:34:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:34:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:34:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:34:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
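[annotation] The mon_command/audit pair above shows cephadm (via mgr.compute-0.pdhsqi) clearing a per-host osd_memory_target override; the dispatched JSON maps one-to-one onto the CLI form:

    # Equivalent CLI for the dispatched command
    ceph config rm osd/host:compute-0 osd_memory_target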
Nov 29 02:34:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 27.172
Nov 29 02:34:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:53.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:34:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:34:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:34:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:55.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:34:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:34:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:34:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:34:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:34:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:34:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
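[annotation] cephadm persists its internal state (per-host device inventories, the OSD removal queue) as config-key entries in the monitor store, which is what the config-key set commands above are doing. Any of them can be read back:

    ceph config-key get mgr/cephadm/osd_remove_queue
    ceph config-key ls | grep mgr/cephadm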
Nov 29 02:34:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:57.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:57.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:34:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:58 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f3c0e4b3-b223-412a-9ed0-50a4c9fc69c7 does not exist
Nov 29 02:34:58 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f8770187-eae8-48f1-82f3-07050d7e1041 does not exist
Nov 29 02:34:58 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b948c1ca-c947-492b-b7f4-4b7a5a1e9a9c does not exist
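[annotation] The progress module is being asked to complete events it no longer tracks, which typically happens after a mgr restart or failover; the warnings are cosmetic. Current events can be listed, and stale ones flushed, with:

    ceph progress
    ceph progress clear    # available in recent releases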
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 32.1741
Nov 29 02:34:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:34:59 np0005539550 podman[197654]: 2025-11-29 07:34:58.911827027 +0000 UTC m=+0.040683128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:34:59
Nov 29 02:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'vms', 'backups', 'volumes', '.rgw.root']
Nov 29 02:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
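[annotation] One balancer pass: upmap mode, a 5% misplaced ceiling, eleven pools evaluated, and "prepared 0/10 changes", i.e. the PG distribution already satisfies the optimizer so no upmap items were issued. Its state can be confirmed with:

    ceph balancer status
    ceph balancer eval     # score of the current distribution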
Nov 29 02:34:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.pdhsqi(active, since 22m), standbys: compute-2.zfrvoq
Nov 29 02:34:59 np0005539550 podman[197654]: 2025-11-29 07:34:59.706273568 +0000 UTC m=+0.835129679 container create 35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:34:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:34:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:34:59 np0005539550 systemd[1]: Started libpod-conmon-35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043.scope.
Nov 29 02:34:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:34:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:34:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:34:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:34:59.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:34:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:34:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:34:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:34:59.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:00 np0005539550 podman[197654]: 2025-11-29 07:35:00.067211333 +0000 UTC m=+1.196067504 container init 35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:35:00 np0005539550 podman[197654]: 2025-11-29 07:35:00.076788297 +0000 UTC m=+1.205644378 container start 35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:35:00 np0005539550 condescending_dirac[197670]: 167 167
Nov 29 02:35:00 np0005539550 systemd[1]: libpod-35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043.scope: Deactivated successfully.
Nov 29 02:35:00 np0005539550 podman[197654]: 2025-11-29 07:35:00.112005125 +0000 UTC m=+1.240861206 container attach 35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 29 02:35:00 np0005539550 podman[197654]: 2025-11-29 07:35:00.11256854 +0000 UTC m=+1.241424621 container died 35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:35:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-55642498699a9197c5df36170b7cb17a5ffdb8a6df1f53aab83bc75ae0e4db55-merged.mount: Deactivated successfully.
Nov 29 02:35:00 np0005539550 podman[197654]: 2025-11-29 07:35:00.285172252 +0000 UTC m=+1.414028323 container remove 35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:35:00 np0005539550 systemd[1]: libpod-conmon-35e7e072c1cc6b8bf7dce1a8072e1b2c72d1c3c32078e84dc3c754065c5a1043.scope: Deactivated successfully.
Nov 29 02:35:00 np0005539550 podman[197695]: 2025-11-29 07:35:00.488102357 +0000 UTC m=+0.050684094 container create cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bhabha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:35:00 np0005539550 systemd[1]: Started libpod-conmon-cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf.scope.
Nov 29 02:35:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:35:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c0b63c4cb57b9949b1993ca2ecbb597775617972543e5d2b98865f4874e0fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c0b63c4cb57b9949b1993ca2ecbb597775617972543e5d2b98865f4874e0fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c0b63c4cb57b9949b1993ca2ecbb597775617972543e5d2b98865f4874e0fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c0b63c4cb57b9949b1993ca2ecbb597775617972543e5d2b98865f4874e0fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8c0b63c4cb57b9949b1993ca2ecbb597775617972543e5d2b98865f4874e0fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
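[annotation] The "supports timestamps until 2038" kernel lines are informational: the XFS filesystem backing the container overlay was created without the bigtime feature, so its inode timestamps cap at y2038. Whether a given XFS has the feature can be checked with xfs_info; the mount point below is an assumption:

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'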
Nov 29 02:35:00 np0005539550 podman[197695]: 2025-11-29 07:35:00.468717373 +0000 UTC m=+0.031299140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:35:00 np0005539550 podman[197695]: 2025-11-29 07:35:00.572207152 +0000 UTC m=+0.134788919 container init cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bhabha, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:35:00 np0005539550 podman[197695]: 2025-11-29 07:35:00.579615351 +0000 UTC m=+0.142197088 container start cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bhabha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:35:00 np0005539550 podman[197695]: 2025-11-29 07:35:00.584402463 +0000 UTC m=+0.146984230 container attach cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bhabha, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:35:01 np0005539550 bold_bhabha[197711]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:35:01 np0005539550 bold_bhabha[197711]: --> relative data size: 1.0
Nov 29 02:35:01 np0005539550 bold_bhabha[197711]: --> All data devices are unavailable
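[annotation] bold_bhabha is cephadm running a ceph-volume batch report: one LVM data device was offered, and "All data devices are unavailable" means it is already consumed by an existing OSD, so there is nothing to create. A sketch of the equivalent invocation inside the ceph container, with the device taken from the LVM listing further below:

    ceph-volume lvm batch --report /dev/ceph_vg0/ceph_lv0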
Nov 29 02:35:01 np0005539550 systemd[1]: libpod-cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf.scope: Deactivated successfully.
Nov 29 02:35:01 np0005539550 podman[197726]: 2025-11-29 07:35:01.557536701 +0000 UTC m=+0.026529717 container died cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:35:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f8c0b63c4cb57b9949b1993ca2ecbb597775617972543e5d2b98865f4874e0fa-merged.mount: Deactivated successfully.
Nov 29 02:35:01 np0005539550 podman[197726]: 2025-11-29 07:35:01.625019223 +0000 UTC m=+0.094012229 container remove cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_bhabha, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:35:01 np0005539550 systemd[1]: libpod-conmon-cb681bf6d91b127d05bd4246d77ea47518d6ca356c577782efc95c56a23a7faf.scope: Deactivated successfully.
Nov 29 02:35:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:01.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:01.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:02 np0005539550 podman[197882]: 2025-11-29 07:35:02.244765228 +0000 UTC m=+0.022007522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:35:02 np0005539550 podman[197882]: 2025-11-29 07:35:02.429754836 +0000 UTC m=+0.206997110 container create 9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:35:02 np0005539550 systemd[1]: Started libpod-conmon-9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920.scope.
Nov 29 02:35:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:35:02 np0005539550 podman[197882]: 2025-11-29 07:35:02.702520773 +0000 UTC m=+0.479763037 container init 9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:35:02 np0005539550 podman[197882]: 2025-11-29 07:35:02.71144837 +0000 UTC m=+0.488690634 container start 9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 02:35:02 np0005539550 systemd[1]: libpod-9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920.scope: Deactivated successfully.
Nov 29 02:35:02 np0005539550 festive_torvalds[197898]: 167 167
Nov 29 02:35:02 np0005539550 conmon[197898]: conmon 9d190b46185fac57989c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920.scope/container/memory.events
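[annotation] The conmon <nwarn> is a teardown race: the container exited so quickly that its cgroup scope was removed before conmon could read memory.events. The path from the message is readable while a container is still alive:

    cat /sys/fs/cgroup/machine.slice/libpod-9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920.scope/container/memory.events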
Nov 29 02:35:02 np0005539550 podman[197882]: 2025-11-29 07:35:02.76083859 +0000 UTC m=+0.538080854 container attach 9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:35:02 np0005539550 podman[197882]: 2025-11-29 07:35:02.761406515 +0000 UTC m=+0.538648769 container died 9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Nov 29 02:35:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-35612e5f8d0530efda58ccf92521e1de05067471376a17045cf96cd92e9d4220-merged.mount: Deactivated successfully.
Nov 29 02:35:03 np0005539550 podman[197882]: 2025-11-29 07:35:03.001475627 +0000 UTC m=+0.778717891 container remove 9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_torvalds, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:35:03 np0005539550 systemd[1]: libpod-conmon-9d190b46185fac57989c084ba9a3f1973951332d735d508efc2a653b7e273920.scope: Deactivated successfully.
Nov 29 02:35:03 np0005539550 podman[197924]: 2025-11-29 07:35:03.140638796 +0000 UTC m=+0.024084045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:35:03 np0005539550 ceph-mgr[74726]: ms_deliver_dispatch: unhandled message 0x555bc8e7f880 mgrreport(mgr.compute-1.fchyan +0-0 packed 54) v9 from mgr.24104 192.168.122.101:0/1646141000
Nov 29 02:35:03 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
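[annotation] handle_open "not ready for session (expect reconnect)" means the active mgr is still settling and is deferring the session from the standby mgr.compute-1.fchyan; the standby retries, as the repeats of this line below show. Active/standby roles can be confirmed with:

    ceph mgr stat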
Nov 29 02:35:03 np0005539550 podman[197924]: 2025-11-29 07:35:03.391516945 +0000 UTC m=+0.274962214 container create 731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:35:03 np0005539550 systemd[1]: Started libpod-conmon-731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab.scope.
Nov 29 02:35:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:35:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b939eb3c1c4e7fde1aad49f62c443fc5eb9a64cb2669e53e563705d01e01f128/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b939eb3c1c4e7fde1aad49f62c443fc5eb9a64cb2669e53e563705d01e01f128/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b939eb3c1c4e7fde1aad49f62c443fc5eb9a64cb2669e53e563705d01e01f128/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b939eb3c1c4e7fde1aad49f62c443fc5eb9a64cb2669e53e563705d01e01f128/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:03 np0005539550 podman[197924]: 2025-11-29 07:35:03.734785188 +0000 UTC m=+0.618230407 container init 731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:35:03 np0005539550 podman[197924]: 2025-11-29 07:35:03.742104635 +0000 UTC m=+0.625549854 container start 731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:35:03 np0005539550 podman[197924]: 2025-11-29 07:35:03.793318831 +0000 UTC m=+0.676764050 container attach 731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:35:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 37.3224
Nov 29 02:35:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:03.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:04 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:04 np0005539550 cool_burnell[197941]: {
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:    "0": [
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:        {
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "devices": [
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "/dev/loop3"
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            ],
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "lv_name": "ceph_lv0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "lv_size": "7511998464",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "name": "ceph_lv0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "tags": {
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.cluster_name": "ceph",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.crush_device_class": "",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.encrypted": "0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.osd_id": "0",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.type": "block",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:                "ceph.vdo": "0"
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            },
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "type": "block",
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:            "vg_name": "ceph_vg0"
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:        }
Nov 29 02:35:04 np0005539550 cool_burnell[197941]:    ]
Nov 29 02:35:04 np0005539550 cool_burnell[197941]: }
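[annotation] The cool_burnell JSON above is ceph-volume's LVM inventory: a single block LV (/dev/ceph_vg0/ceph_lv0 on /dev/loop3, ~7.5 GB) tagged as OSD 0 of cluster b66774a7-56d9-5535-bd8c-681234404870. The same report can be produced on the host via cephadm's ceph-volume wrapper:

    cephadm ceph-volume -- lvm list --format json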
Nov 29 02:35:04 np0005539550 systemd[1]: libpod-731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab.scope: Deactivated successfully.
Nov 29 02:35:04 np0005539550 podman[197924]: 2025-11-29 07:35:04.596223518 +0000 UTC m=+1.479668737 container died 731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:35:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b939eb3c1c4e7fde1aad49f62c443fc5eb9a64cb2669e53e563705d01e01f128-merged.mount: Deactivated successfully.
Nov 29 02:35:05 np0005539550 podman[197924]: 2025-11-29 07:35:05.073436178 +0000 UTC m=+1.956881407 container remove 731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:35:05 np0005539550 systemd[1]: libpod-conmon-731c7580027ee354a5988bc753810802719f5c2d7d7c5ed2d502f53ef0198eab.scope: Deactivated successfully.
Nov 29 02:35:05 np0005539550 podman[197951]: 2025-11-29 07:35:05.236526668 +0000 UTC m=+0.613900488 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 02:35:05 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:05 np0005539550 podman[198129]: 2025-11-29 07:35:05.700761347 +0000 UTC m=+0.082045373 container create 72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_aryabhata, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:35:05 np0005539550 podman[198129]: 2025-11-29 07:35:05.63968131 +0000 UTC m=+0.020965356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:35:05 np0005539550 systemd[1]: Started libpod-conmon-72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2.scope.
Nov 29 02:35:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:35:05 np0005539550 podman[198129]: 2025-11-29 07:35:05.784314558 +0000 UTC m=+0.165598614 container init 72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_aryabhata, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:35:05 np0005539550 podman[198129]: 2025-11-29 07:35:05.792288222 +0000 UTC m=+0.173572248 container start 72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:35:05 np0005539550 heuristic_aryabhata[198145]: 167 167
Nov 29 02:35:05 np0005539550 systemd[1]: libpod-72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2.scope: Deactivated successfully.
Nov 29 02:35:05 np0005539550 podman[198129]: 2025-11-29 07:35:05.811895182 +0000 UTC m=+0.193179208 container attach 72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_aryabhata, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:35:05 np0005539550 podman[198129]: 2025-11-29 07:35:05.812799605 +0000 UTC m=+0.194083631 container died 72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:35:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3ebcc90dab5321702ed7ad355cf56a3a4feb005396dc5138f9b48f10e44964e1-merged.mount: Deactivated successfully.
Nov 29 02:35:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:05.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:05.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:06 np0005539550 podman[198129]: 2025-11-29 07:35:06.190309253 +0000 UTC m=+0.571593279 container remove 72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:35:06 np0005539550 systemd[1]: libpod-conmon-72dc47bbc96383ea17667c0945eb633d053e87cd06f715f3b977c6e0eef0d7e2.scope: Deactivated successfully.
Nov 29 02:35:06 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:06 np0005539550 podman[198170]: 2025-11-29 07:35:06.355543257 +0000 UTC m=+0.032530551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:35:06 np0005539550 podman[198170]: 2025-11-29 07:35:06.465819249 +0000 UTC m=+0.142806513 container create 9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:35:06 np0005539550 systemd[1]: Started libpod-conmon-9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144.scope.
Nov 29 02:35:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:35:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca8bc6bccae84767871cf08b9f07c9713a3dbcf9b95f355bb2c32769aaaf1a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca8bc6bccae84767871cf08b9f07c9713a3dbcf9b95f355bb2c32769aaaf1a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca8bc6bccae84767871cf08b9f07c9713a3dbcf9b95f355bb2c32769aaaf1a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ca8bc6bccae84767871cf08b9f07c9713a3dbcf9b95f355bb2c32769aaaf1a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:35:06 np0005539550 podman[198170]: 2025-11-29 07:35:06.554243444 +0000 UTC m=+0.231230718 container init 9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:35:06 np0005539550 podman[198170]: 2025-11-29 07:35:06.561028458 +0000 UTC m=+0.238015722 container start 9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:35:06 np0005539550 podman[198170]: 2025-11-29 07:35:06.564677391 +0000 UTC m=+0.241664655 container attach 9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:07 np0005539550 great_wiles[198186]: {
Nov 29 02:35:07 np0005539550 great_wiles[198186]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:35:07 np0005539550 great_wiles[198186]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:35:07 np0005539550 great_wiles[198186]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:35:07 np0005539550 great_wiles[198186]:        "osd_id": 0,
Nov 29 02:35:07 np0005539550 great_wiles[198186]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:35:07 np0005539550 great_wiles[198186]:        "type": "bluestore"
Nov 29 02:35:07 np0005539550 great_wiles[198186]:    }
Nov 29 02:35:07 np0005539550 great_wiles[198186]: }
Nov 29 02:35:07 np0005539550 systemd[1]: libpod-9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144.scope: Deactivated successfully.
Nov 29 02:35:07 np0005539550 podman[198170]: 2025-11-29 07:35:07.479024809 +0000 UTC m=+1.156012073 container died 9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:35:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6ca8bc6bccae84767871cf08b9f07c9713a3dbcf9b95f355bb2c32769aaaf1a0-merged.mount: Deactivated successfully.
Nov 29 02:35:07 np0005539550 podman[198170]: 2025-11-29 07:35:07.5324058 +0000 UTC m=+1.209393064 container remove 9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_wiles, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:35:07 np0005539550 systemd[1]: libpod-conmon-9666904f79bc7f5d85876631aec3bb621931b8a3a2ceb085f891f7542dbef144.scope: Deactivated successfully.
Nov 29 02:35:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:35:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:35:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 56e29b06-5be3-465a-88ab-537ceb1313c5 does not exist
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev aa90828e-804d-4200-a189-5c6178529442 does not exist
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 43459a6b-aa5b-48dd-a62b-166c34de1f74 does not exist
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:35:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:35:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:07.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:07.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:08 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:35:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 42.3254
Nov 29 02:35:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:09 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:09.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:09 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:09.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:10 np0005539550 ceph-mon[74435]: Health check failed: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:10 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:11 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:11.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:11.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:12 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:13 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 47.3266
Nov 29 02:35:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:14 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:15 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:15 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops)
Nov 29 02:35:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:15.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:16 np0005539550 ceph-mon[74435]: Health check cleared: SLOW_OPS (was: 13 slow ops, oldest one blocked for 60 sec, mon.compute-1 has slow ops)
Nov 29 02:35:16 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:17 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:17.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:35:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:17.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:35:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:35:18 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(55) init, last seen epoch 55, mid-election, bumping
Nov 29 02:35:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:18 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:35:18.901 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:35:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:35:18.902 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:35:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:35:18.902 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.pdhsqi(active, since 22m), standbys: compute-2.zfrvoq
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.fchyan started
Nov 29 02:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:35:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:19.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:20 np0005539550 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.fchyan 192.168.122.101:0/1646141000; not ready for session (expect reconnect)
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.pdhsqi(active, since 22m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.fchyan", "id": "compute-1.fchyan"} v 0) v1
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "mgr metadata", "who": "compute-1.fchyan", "id": "compute-1.fchyan"}]: dispatch
Nov 29 02:35:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:21.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:23 np0005539550 podman[198329]: 2025-11-29 07:35:23.321694351 +0000 UTC m=+0.058136704 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 02:35:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:23.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:23.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:24 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:35:24 np0005539550 ceph-mon[74435]: Health check failed: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:35:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:25.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:25.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops)
Nov 29 02:35:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:27 np0005539550 ceph-mon[74435]: Health check cleared: SLOW_OPS (was: 16 slow ops, oldest one blocked for 70 sec, mon.compute-1 has slow ops)
Nov 29 02:35:27 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:27.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:29.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:29.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:31.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:32.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:33.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:34.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:35.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:36.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:36 np0005539550 podman[198405]: 2025-11-29 07:35:36.378721111 +0000 UTC m=+0.114762948 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 02:35:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:37.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:38.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:39.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:40.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:41.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:42.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:43.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:44.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:45.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:46.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:49.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:50.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:51.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:52.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:53.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:54.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:54 np0005539550 podman[198490]: 2025-11-29 07:35:54.31575417 +0000 UTC m=+0.053268690 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 02:35:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:55.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:56.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:35:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:35:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:57.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:35:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:35:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:35:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:35:58.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:35:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:35:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:35:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:35:59
Nov 29 02:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'images', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Nov 29 02:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:36:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:35:59.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000050s ======
Nov 29 02:36:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:00.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 29 02:36:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:01.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:04.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:05.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:06.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:07 np0005539550 podman[198514]: 2025-11-29 07:36:07.350752002 +0000 UTC m=+0.092172746 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:36:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:36:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:36:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:36:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:36:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:36:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:07.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:08.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:36:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 50555309-9ff7-4a90-bca1-b792408412ad does not exist
Nov 29 02:36:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f0ef2c9a-86e9-4bae-bb18-09f83ba45927 does not exist
Nov 29 02:36:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2592ad0c-c51e-4f96-887d-1daa29b5e161 does not exist
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:36:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:36:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:10.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:10 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:36:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:10.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:10 np0005539550 podman[198813]: 2025-11-29 07:36:10.463455886 +0000 UTC m=+0.044787676 container create db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_aryabhata, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:36:10 np0005539550 systemd[1]: Started libpod-conmon-db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814.scope.
Nov 29 02:36:10 np0005539550 podman[198813]: 2025-11-29 07:36:10.444510736 +0000 UTC m=+0.025842546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:36:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:36:10 np0005539550 podman[198813]: 2025-11-29 07:36:10.556922863 +0000 UTC m=+0.138254673 container init db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:36:10 np0005539550 podman[198813]: 2025-11-29 07:36:10.564188047 +0000 UTC m=+0.145519837 container start db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:36:10 np0005539550 podman[198813]: 2025-11-29 07:36:10.568823574 +0000 UTC m=+0.150155384 container attach db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:36:10 np0005539550 flamboyant_aryabhata[198830]: 167 167
Nov 29 02:36:10 np0005539550 systemd[1]: libpod-db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814.scope: Deactivated successfully.
Nov 29 02:36:10 np0005539550 podman[198813]: 2025-11-29 07:36:10.57102745 +0000 UTC m=+0.152359260 container died db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:36:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-856d8124aa62a0a0e1cfc3bd2af9de36c024c9e14cbdb70f41f5e22f10af7e52-merged.mount: Deactivated successfully.
Nov 29 02:36:10 np0005539550 podman[198813]: 2025-11-29 07:36:10.623312584 +0000 UTC m=+0.204644374 container remove db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_aryabhata, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:36:10 np0005539550 systemd[1]: libpod-conmon-db308d6741f9a96566c53760794316ec84793502e5d285d697e3bee174a8e814.scope: Deactivated successfully.
Nov 29 02:36:10 np0005539550 podman[198855]: 2025-11-29 07:36:10.785073901 +0000 UTC m=+0.037602263 container create 5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:36:10 np0005539550 systemd[1]: Started libpod-conmon-5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d.scope.
Nov 29 02:36:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:36:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea17ff40aca487297dab12133c07b44ca26b2eb0095f81e830c1374c0b12c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea17ff40aca487297dab12133c07b44ca26b2eb0095f81e830c1374c0b12c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea17ff40aca487297dab12133c07b44ca26b2eb0095f81e830c1374c0b12c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea17ff40aca487297dab12133c07b44ca26b2eb0095f81e830c1374c0b12c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea17ff40aca487297dab12133c07b44ca26b2eb0095f81e830c1374c0b12c1b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:10 np0005539550 podman[198855]: 2025-11-29 07:36:10.86202836 +0000 UTC m=+0.114556742 container init 5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:36:10 np0005539550 podman[198855]: 2025-11-29 07:36:10.767208769 +0000 UTC m=+0.019737151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:36:10 np0005539550 podman[198855]: 2025-11-29 07:36:10.870269279 +0000 UTC m=+0.122797641 container start 5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:36:10 np0005539550 podman[198855]: 2025-11-29 07:36:10.874312691 +0000 UTC m=+0.126841073 container attach 5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:36:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:36:11 np0005539550 friendly_meninsky[198872]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:36:11 np0005539550 friendly_meninsky[198872]: --> relative data size: 1.0
Nov 29 02:36:11 np0005539550 friendly_meninsky[198872]: --> All data devices are unavailable
Nov 29 02:36:11 np0005539550 systemd[1]: libpod-5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d.scope: Deactivated successfully.
Nov 29 02:36:11 np0005539550 podman[198887]: 2025-11-29 07:36:11.771159815 +0000 UTC m=+0.025064726 container died 5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meninsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:36:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-aea17ff40aca487297dab12133c07b44ca26b2eb0095f81e830c1374c0b12c1b-merged.mount: Deactivated successfully.
Nov 29 02:36:11 np0005539550 podman[198887]: 2025-11-29 07:36:11.826887646 +0000 UTC m=+0.080792527 container remove 5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:36:11 np0005539550 systemd[1]: libpod-conmon-5a04c40e8b1e5d6d3f6e73d8ee423577d75b9fc7f2cce652ed67e9417a37058d.scope: Deactivated successfully.
Nov 29 02:36:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:12.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:36:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:12.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:36:12 np0005539550 podman[199093]: 2025-11-29 07:36:12.486980614 +0000 UTC m=+0.048303864 container create ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mahavira, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:36:12 np0005539550 systemd[1]: Started libpod-conmon-ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0.scope.
Nov 29 02:36:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:36:12 np0005539550 podman[199093]: 2025-11-29 07:36:12.465415628 +0000 UTC m=+0.026738908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:36:12 np0005539550 podman[199093]: 2025-11-29 07:36:12.575851005 +0000 UTC m=+0.137174275 container init ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:36:12 np0005539550 podman[199093]: 2025-11-29 07:36:12.582630197 +0000 UTC m=+0.143953447 container start ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:36:12 np0005539550 podman[199093]: 2025-11-29 07:36:12.586891965 +0000 UTC m=+0.148215235 container attach ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mahavira, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:36:12 np0005539550 dreamy_mahavira[199110]: 167 167
Nov 29 02:36:12 np0005539550 systemd[1]: libpod-ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0.scope: Deactivated successfully.
Nov 29 02:36:12 np0005539550 podman[199093]: 2025-11-29 07:36:12.588444364 +0000 UTC m=+0.149767614 container died ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:36:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-186cbe2eb4acf30e1a9ddbef8384ee637b9248b454890137d12876b3a4fbdfa6-merged.mount: Deactivated successfully.
Nov 29 02:36:12 np0005539550 podman[199093]: 2025-11-29 07:36:12.620607658 +0000 UTC m=+0.181930908 container remove ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mahavira, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:36:12 np0005539550 systemd[1]: libpod-conmon-ee1081f6c284a82fe4c2495a42d6126437c5910489c3933a2cebd95922d072a0.scope: Deactivated successfully.
Nov 29 02:36:12 np0005539550 podman[199131]: 2025-11-29 07:36:12.757655229 +0000 UTC m=+0.022969192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:36:13 np0005539550 podman[199131]: 2025-11-29 07:36:13.195233882 +0000 UTC m=+0.460547815 container create cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:36:13 np0005539550 systemd[1]: Started libpod-conmon-cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85.scope.
Nov 29 02:36:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:36:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b197c159cc74735fdf50c25c705cc82e0b396ef507a11d9394b39ba76b124f77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b197c159cc74735fdf50c25c705cc82e0b396ef507a11d9394b39ba76b124f77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b197c159cc74735fdf50c25c705cc82e0b396ef507a11d9394b39ba76b124f77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b197c159cc74735fdf50c25c705cc82e0b396ef507a11d9394b39ba76b124f77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:13 np0005539550 podman[199131]: 2025-11-29 07:36:13.301721179 +0000 UTC m=+0.567035132 container init cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:36:13 np0005539550 podman[199131]: 2025-11-29 07:36:13.310660616 +0000 UTC m=+0.575974559 container start cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:36:13 np0005539550 podman[199131]: 2025-11-29 07:36:13.322775232 +0000 UTC m=+0.588089175 container attach cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:36:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:36:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:14.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]: {
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:    "0": [
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:        {
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "devices": [
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "/dev/loop3"
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            ],
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "lv_name": "ceph_lv0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "lv_size": "7511998464",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "name": "ceph_lv0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "tags": {
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.cluster_name": "ceph",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.crush_device_class": "",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.encrypted": "0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.osd_id": "0",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.type": "block",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:                "ceph.vdo": "0"
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            },
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "type": "block",
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:            "vg_name": "ceph_vg0"
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:        }
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]:    ]
Nov 29 02:36:14 np0005539550 infallible_thompson[199147]: }
Nov 29 02:36:14 np0005539550 systemd[1]: libpod-cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539550 podman[199131]: 2025-11-29 07:36:14.076586884 +0000 UTC m=+1.341900837 container died cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:36:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b197c159cc74735fdf50c25c705cc82e0b396ef507a11d9394b39ba76b124f77-merged.mount: Deactivated successfully.
Nov 29 02:36:14 np0005539550 podman[199131]: 2025-11-29 07:36:14.146895765 +0000 UTC m=+1.412209698 container remove cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_thompson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:36:14 np0005539550 systemd[1]: libpod-conmon-cdc355aeda2d61028de6fdfcb83296169a6568f9352e15e2630604fea7544c85.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:14.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
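
These beast lines recur every two seconds from 192.168.122.100 and 192.168.122.102: anonymous HEAD / requests answered with 200, the signature of an external health probe (for example a load balancer's HTTP check) rather than client traffic. A hedged sketch of such a probe; the target hostname and port are assumptions, since the log shows only the probes' source IPs, not the endpoint beast is bound to:

    # Hedged sketch of a health probe like the ones logged above.
    # Hostname and port are assumptions; exit behavior mirrors the 200s seen.
    import http.client

    conn = http.client.HTTPConnection("np0005539550", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 while radosgw is serving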
Nov 29 02:36:14 np0005539550 systemd-logind[788]: New session 51 of user zuul.
Nov 29 02:36:14 np0005539550 podman[199309]: 2025-11-29 07:36:14.746923382 +0000 UTC m=+0.052208834 container create c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:36:14 np0005539550 systemd[1]: Started Session 51 of User zuul.
Nov 29 02:36:14 np0005539550 systemd[1]: Started libpod-conmon-c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f.scope.
Nov 29 02:36:14 np0005539550 podman[199309]: 2025-11-29 07:36:14.718276896 +0000 UTC m=+0.023562368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:36:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:36:14 np0005539550 podman[199309]: 2025-11-29 07:36:14.879938841 +0000 UTC m=+0.185224313 container init c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:36:14 np0005539550 podman[199309]: 2025-11-29 07:36:14.886711142 +0000 UTC m=+0.191996594 container start c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:36:14 np0005539550 goofy_poincare[199329]: 167 167
Nov 29 02:36:14 np0005539550 systemd[1]: libpod-c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f.scope: Deactivated successfully.
Nov 29 02:36:14 np0005539550 podman[199309]: 2025-11-29 07:36:14.900386548 +0000 UTC m=+0.205672030 container attach c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:36:14 np0005539550 podman[199309]: 2025-11-29 07:36:14.901311392 +0000 UTC m=+0.206596864 container died c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:36:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-aa5d1b152e77886411ef4ad0263ea0e24ebe9fc945be683a39c14d88dedeb1da-merged.mount: Deactivated successfully.
Nov 29 02:36:14 np0005539550 podman[199309]: 2025-11-29 07:36:14.940940195 +0000 UTC m=+0.246225647 container remove c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_poincare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:36:14 np0005539550 systemd[1]: libpod-conmon-c821d4e80d35d03b2125f73918e2d5f23f70f53d40a476cbf84b4da0ebbda71f.scope: Deactivated successfully.
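
The short-lived goofy_poincare container's only output is "167 167", which matches the uid and gid of the ceph user in these images; cephadm runs throwaway containers like this to discover which uid/gid host directories should be chowned to. A hedged reconstruction of an equivalent probe; the exact command cephadm ran is not in the log, so the stat invocation below is a plausible stand-in, not a transcript:

    # Hedged reconstruction: ask the image who owns /var/lib/ceph.
    # One way to get the same "167 167" answer as the container above.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    uid, gid = map(int, out)  # expected: 167 167 (the ceph user/group)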
Nov 29 02:36:15 np0005539550 podman[199407]: 2025-11-29 07:36:15.132004234 +0000 UTC m=+0.052328696 container create 983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:36:15 np0005539550 systemd[1]: Started libpod-conmon-983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba.scope.
Nov 29 02:36:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:36:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e295323b9545967f6730ca220a60257d2d7a904a2e0fc29a2349d16924ea406/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:15 np0005539550 podman[199407]: 2025-11-29 07:36:15.106663423 +0000 UTC m=+0.026987945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:36:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e295323b9545967f6730ca220a60257d2d7a904a2e0fc29a2349d16924ea406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e295323b9545967f6730ca220a60257d2d7a904a2e0fc29a2349d16924ea406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:36:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e295323b9545967f6730ca220a60257d2d7a904a2e0fc29a2349d16924ea406/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
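
The xfs warnings above are informational: on a filesystem created without the XFS bigtime feature, inode timestamps top out at 0x7fffffff seconds past the Unix epoch. A quick check of where that limit lands:

    # 0x7fffffff seconds after the Unix epoch is the classic 32-bit limit.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00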
Nov 29 02:36:15 np0005539550 podman[199407]: 2025-11-29 07:36:15.220681119 +0000 UTC m=+0.141005571 container init 983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:36:15 np0005539550 podman[199407]: 2025-11-29 07:36:15.228407825 +0000 UTC m=+0.148732247 container start 983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:36:15 np0005539550 podman[199407]: 2025-11-29 07:36:15.231906664 +0000 UTC m=+0.152231106 container attach 983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:36:15 np0005539550 python3.9[199500]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:15 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:15 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:15 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
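
The ansible.builtin.systemd task above (enabled=False, masked=True, state=stopped for libvirtd) is the first of a series that stops and masks the monolithic libvirt daemon and its sockets before the modular virt*d services are enabled further down. Roughly the systemctl sequence the module enforces, sketched in Python; Ansible's module adds state checks and idempotence on top of this:

    # Hedged sketch of the state the task above enforces for libvirtd.
    import subprocess

    def systemctl(*args: str) -> None:
        subprocess.run(["systemctl", *args], check=True)

    systemctl("stop", "libvirtd")     # state=stopped
    systemctl("disable", "libvirtd")  # enabled=False
    systemctl("mask", "libvirtd")     # masked=True

Each invocation also triggers the systemd "Reloading." and generator messages seen around these tasks, since unit changes make Ansible reload the manager state.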
Nov 29 02:36:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:16.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]: {
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]:        "osd_id": 0,
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]:        "type": "bluestore"
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]:    }
Nov 29 02:36:16 np0005539550 blissful_mayer[199456]: }
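
The JSON emitted by the blissful_mayer container (in the shape of ceph-volume raw list output) maps each OSD fsid to its bluestore device metadata. A minimal sketch that indexes it by osd_id, with the JSON inlined from the lines above:

    # Minimal sketch: index the inventory above by osd_id.
    import json

    inventory_text = """{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }"""
    by_osd_id = {v["osd_id"]: v["device"]
                 for v in json.loads(inventory_text).values()}
    print(by_osd_id[0])  # /dev/mapper/ceph_vg0-ceph_lv0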
Nov 29 02:36:16 np0005539550 systemd[1]: libpod-983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba.scope: Deactivated successfully.
Nov 29 02:36:16 np0005539550 podman[199407]: 2025-11-29 07:36:16.138215767 +0000 UTC m=+1.058540209 container died 983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:36:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3e295323b9545967f6730ca220a60257d2d7a904a2e0fc29a2349d16924ea406-merged.mount: Deactivated successfully.
Nov 29 02:36:16 np0005539550 podman[199407]: 2025-11-29 07:36:16.199801397 +0000 UTC m=+1.120125819 container remove 983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:36:16 np0005539550 systemd[1]: libpod-conmon-983b518e729e5ae77c54669e3dc3eb67b968c3c982d39cbd0acd2068e26275ba.scope: Deactivated successfully.
Nov 29 02:36:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:36:16 np0005539550 python3.9[199717]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:16 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:36:16 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:16 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
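
The mon_command and audit lines above show the cephadm mgr module caching this host's device inventory under config-key entries (mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). A hedged sketch reading one back; "ceph config-key get" is a standard command, and the value being JSON is an assumption based on how cephadm typically stores these blobs:

    # Hedged sketch: read back the cached inventory blob.
    import json, subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    blob = subprocess.run(["ceph", "config-key", "get", key],
                          check=True, capture_output=True, text=True).stdout
    print(json.loads(blob))  # assumes the stored value is JSON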
Nov 29 02:36:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ed46d39d-c341-4000-a339-ea91beea2229 does not exist
Nov 29 02:36:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 182b12fc-589f-417e-9046-d90edcd8f4b0 does not exist
Nov 29 02:36:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 892ed3fb-9f81-4d34-b45b-6107658f1fe4 does not exist
Nov 29 02:36:17 np0005539550 python3.9[199930]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:17 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:18.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:18 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:36:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:18.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:36:18.903 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:36:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:36:18.904 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:36:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:36:18.905 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:36:19 np0005539550 python3.9[200147]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:19 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:19 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:19 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:36:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
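
Each pg_autoscaler pair above follows one formula: a pool's pg target is its fraction of used space times its bias times a cluster-wide PG budget, then quantized to a power of two (and floored by pg_num_min and the current pg_num, which is why tiny pools stay at 16 or 32). The logged values back-solve to a budget of exactly 300, plausibly mon_target_pg_per_osd=100 across 3 OSDs; that factor is an inference, not stated in the log. A worked check:

    # Worked check of the pg_autoscaler arithmetic logged above.
    # PER_CLUSTER_PGS = 300 is inferred from the logged values
    # (plausibly mon_target_pg_per_osd=100 x 3 OSDs).
    PER_CLUSTER_PGS = 300

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * PER_CLUSTER_PGS

    # Pool '.mgr': ~0.006161449609156895, as logged
    print(pg_target(2.0538165363856318e-05, 1.0))
    # Pool 'cephfs.cephfs.meta': ~0.0017448352875488555, as logged
    print(pg_target(1.4540294062907128e-06, 4.0))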
Nov 29 02:36:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:20.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:20.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:21 np0005539550 python3.9[200338]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:21 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:22.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:22 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:22 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:22.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:22 np0005539550 python3.9[200529]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:23 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:23 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:23 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:24.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:24 np0005539550 python3.9[200719]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:24 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:24.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:24 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:24 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:24 np0005539550 podman[200759]: 2025-11-29 07:36:24.622817443 +0000 UTC m=+0.066551747 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
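
The health_status line above is podman's periodic health check for ovn_metadata_agent: it runs the configured test ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/ovn_metadata_agent per the config_data) and records healthy on exit 0. The same check can be triggered by hand; a sketch:

    # Run the container's configured health check once, on demand.
    # Exit status 0 corresponds to the logged health_status=healthy.
    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"]
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")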
Nov 29 02:36:25 np0005539550 python3.9[200929]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:26.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:26 np0005539550 python3.9[201084]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:26 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:26 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:26 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:26.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:28.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:28.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:28 np0005539550 python3.9[201275]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 02:36:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:28 np0005539550 systemd[1]: Reloading.
Nov 29 02:36:28 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:36:28 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:36:29 np0005539550 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 02:36:29 np0005539550 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 02:36:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:30.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:30.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:30 np0005539550 python3.9[201468]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:31 np0005539550 python3.9[201623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:31 np0005539550 python3.9[201778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:32.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:32.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:32 np0005539550 python3.9[201984]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:33 np0005539550 python3.9[202139]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:34.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:36:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:34.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:34 np0005539550 python3.9[202295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:35 np0005539550 python3.9[202450]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:36:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:36:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
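
The RGWReshardLock messages above indicate contention, not failure: multiple radosgw instances walk the same reshard queue shards, and an instance that cannot take a shard's lock skips it and retries on a later pass. An illustrative try-lock-or-skip sketch, with in-process locks standing in for RGW's RADOS-backed ones:

    # Illustrative only: threading.Lock stands in for RGW's RADOS-backed lock.
    import threading

    locks = {f"reshard.{i:010d}": threading.Lock() for i in range(16)}

    def process_shard(name: str) -> bool:
        lock = locks[name]
        if not lock.acquire(blocking=False):
            print(f"lock on {name} held by another worker; skipping for now")
            return False
        try:
            # ... scan this shard of the reshard queue ...
            return True
        finally:
            lock.release()

The non-blocking acquire is the point: a busy shard costs one log line and zero waiting, and the skipped shard is simply picked up on the next sweep.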
Nov 29 02:36:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:36.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:36:36 np0005539550 python3.9[202605]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 02:36:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:36:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:36.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:36:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:36:36 np0005539550 python3.9[202761]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:37 np0005539550 podman[202888]: 2025-11-29 07:36:37.529770679 +0000 UTC m=+0.098316581 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:36:37 np0005539550 python3.9[202936]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:38.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Nov 29 02:36:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 02:36:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:38.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 02:36:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:38 np0005539550 python3.9[203098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:39 np0005539550 python3.9[203253]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:40.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 24 op/s
Nov 29 02:36:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:40.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:40 np0005539550 python3.9[203409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:41 np0005539550 python3.9[203564]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 02:36:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:42.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 24 op/s
Nov 29 02:36:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:42.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:44.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:44 np0005539550 python3.9[203721]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 29 02:36:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:36:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:36:44 np0005539550 python3.9[203873]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:45 np0005539550 python3.9[204025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:46.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 29 02:36:46 np0005539550 python3.9[204178]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:46 np0005539550 python3.9[204330]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:47 np0005539550 python3.9[204482]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:36:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:36:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:48.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:36:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 29 02:36:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:48.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:50.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 0 B/s wr, 28 op/s
Nov 29 02:36:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:50.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:51 np0005539550 python3.9[204636]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:52.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Nov 29 02:36:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:52.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:52 np0005539550 python3.9[204763]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401810.9657347-1633-241779990262926/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:53 np0005539550 python3.9[204964]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:53 np0005539550 python3.9[205089]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401812.7146382-1633-278244079039541/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
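
Each ansible.legacy.copy records a SHA-1 of the deployed file; note that virtnodedevd.conf here, and virtqemud.conf and virtsecretd.conf further down, all carry 7a604468adb2868f1ab6ebd0fd4622286e6373e2, i.e. identical rendered content. The checksum can be reproduced with stock hashlib (sha1_of is a made-up name):

    import hashlib

    def sha1_of(path, bufsize=65536):
        # Stream the file so large configs don't load into memory at once.
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(bufsize), b''):
                h.update(chunk)
        return h.hexdigest()

    # Expect 7a604468adb2868f1ab6ebd0fd4622286e6373e2 per the log above.
    print(sha1_of('/etc/libvirt/virtnodedevd.conf'))
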
Nov 29 02:36:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:36:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:54.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:36:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 22 op/s
Nov 29 02:36:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:36:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:54.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:36:54 np0005539550 python3.9[205242]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:55 np0005539550 podman[205268]: 2025-11-29 07:36:55.353659465 +0000 UTC m=+0.094821123 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
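
The podman health_status event above comes from the container's configured healthcheck ('test': '/openstack/healthcheck'). The same check can be re-run and its stored state read back with the standard podman CLI (a sketch, reusing the container name from the log):

    import json
    import subprocess

    # Re-run the healthcheck that produced health_status=healthy.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)

    # Read back the recorded health state for the container.
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout)['Status'])   # e.g. "healthy"
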
Nov 29 02:36:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:56.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 7.9 KiB/s rd, 0 B/s wr, 13 op/s
Nov 29 02:36:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:56.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:56 np0005539550 python3.9[205389]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401814.0509496-1633-269935932266809/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:57 np0005539550 python3.9[205541]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:57 np0005539550 python3.9[205666]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401816.665576-1633-149356751523501/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:36:58.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:36:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:36:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 02:36:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:36:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:36:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:36:58.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:36:58 np0005539550 python3.9[205819]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:36:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:36:58 np0005539550 python3.9[205944]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401817.9312305-1633-216457787449559/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:36:59
Nov 29 02:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'backups']
Nov 29 02:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
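
The balancer pass above ran in upmap mode against the listed pools and prepared 0/10 changes, i.e. nothing exceeded the 0.05 max-misplaced threshold. Its current mode and plan state can be read back with the stock CLI (sketch):

    import subprocess

    # Equivalent of inspecting the balancer the mgr just ran.
    out = subprocess.run(['ceph', 'balancer', 'status'],
                         capture_output=True, text=True, check=True)
    print(out.stdout)   # active flag, mode (upmap), last optimization
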
Nov 29 02:36:59 np0005539550 python3.9[206096]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:00.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 8 op/s
Nov 29 02:37:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:00.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:00 np0005539550 python3.9[206222]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401819.154722-1633-66466746348470/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:00 np0005539550 python3.9[206374]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:01 np0005539550 python3.9[206497]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401820.4621353-1633-180600520773453/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:02.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:02 np0005539550 python3.9[206650]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 02:37:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:02.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:02 np0005539550 python3.9[206775]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764401821.6323888-1633-280973445943447/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:04.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 29 02:37:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:04.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:04 np0005539550 python3.9[206928]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
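
The saslpasswd2 invocation above provisions the SASL user "migration" in realm "openstack" for the libvirt application, reading the password from stdin (-p); stdin_add_newline=True accounts for the trailing newline. The equivalent call outside Ansible, with the flags exactly as logged:

    import subprocess

    # -f: credential database, -p: read password from stdin,
    # -a: application (libvirt), -u: realm (openstack), user: migration
    subprocess.run(
        ['saslpasswd2', '-f', '/etc/libvirt/passwd.db', '-p',
         '-a', 'libvirt', '-u', 'openstack', 'migration'],
        input='12345678\n', text=True, check=True)

The resulting entry can then be listed with sasldblistusers2 -f /etc/libvirt/passwd.db.
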
Nov 29 02:37:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Nov 29 02:37:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:06.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:07 np0005539550 python3.9[207082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:07 np0005539550 podman[207206]: 2025-11-29 07:37:07.876911825 +0000 UTC m=+0.108198321 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller)
Nov 29 02:37:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:37:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:37:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:37:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:37:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:37:08 np0005539550 python3.9[207252]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:08.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Nov 29 02:37:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:08 np0005539550 python3.9[207412]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:37:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:09 np0005539550 python3.9[207564]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:09 np0005539550 python3.9[207716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:10.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Nov 29 02:37:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:10.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:10 np0005539550 python3.9[207869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:11 np0005539550 python3.9[208021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:12 np0005539550 python3.9[208173]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:12.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 0 B/s wr, 25 op/s
Nov 29 02:37:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:12.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:12 np0005539550 python3.9[208327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:13 np0005539550 python3.9[208528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:14.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:14 np0005539550 python3.9[208681]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Nov 29 02:37:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:14.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:14 np0005539550 python3.9[208833]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:14.763973) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834764093, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1754, "num_deletes": 252, "total_data_size": 3624352, "memory_usage": 3674536, "flush_reason": "Manual Compaction"}
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401834909592, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2346660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13914, "largest_seqno": 15667, "table_properties": {"data_size": 2340468, "index_size": 3135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16344, "raw_average_key_size": 21, "raw_value_size": 2326657, "raw_average_value_size": 3021, "num_data_blocks": 140, "num_entries": 770, "num_filter_entries": 770, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401623, "oldest_key_time": 1764401623, "file_creation_time": 1764401834, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 145688 microseconds, and 7682 cpu microseconds.
Nov 29 02:37:14 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:14.909681) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2346660 bytes OK
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:14.909708) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.188994) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.189073) EVENT_LOG_v1 {"time_micros": 1764401835189062, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.189101) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 3616910, prev total WAL file size 3616910, number of live WAL files 2.
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.190703) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2291KB)], [35(10027KB)]
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401835190822, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 12614832, "oldest_snapshot_seqno": -1}
Nov 29 02:37:15 np0005539550 python3.9[208985]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4642 keys, 9906869 bytes, temperature: kUnknown
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401835433517, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9906869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9873346, "index_size": 20805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11653, "raw_key_size": 116121, "raw_average_key_size": 25, "raw_value_size": 9786940, "raw_average_value_size": 2108, "num_data_blocks": 876, "num_entries": 4642, "num_filter_entries": 4642, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764401835, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.433809) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9906869 bytes
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.452139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 52.0 rd, 40.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.8 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(9.6) write-amplify(4.2) OK, records in: 5096, records dropped: 454 output_compression: NoCompression
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.452191) EVENT_LOG_v1 {"time_micros": 1764401835452172, "job": 16, "event": "compaction_finished", "compaction_time_micros": 242780, "compaction_time_cpu_micros": 37761, "output_level": 6, "num_output_files": 1, "total_output_size": 9906869, "num_input_records": 5096, "num_output_records": 4642, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401835452931, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764401835455058, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.190486) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.455094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.455101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.455105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.455106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:37:15 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:37:15.455108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
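
The rocksdb EVENT_LOG_v1 lines each embed one JSON object, so the mon store's flush/compaction history is machine-readable; the logged write-amplify(4.2), for instance, is just the 9,906,869-byte output table divided by the 2,346,660-byte L0 input. A small extraction sketch (reads journal text from stdin):

    import json
    import re
    import sys

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    # Emit flush/compaction summaries from EVENT_LOG_v1 payloads.
    for line in sys.stdin:
        m = EVENT_RE.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get('event') in ('flush_finished', 'compaction_finished'):
            # compaction_time_micros is absent on flushes, hence .get().
            print(ev['event'], ev.get('compaction_time_micros'))
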
Nov 29 02:37:16 np0005539550 python3.9[209137]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:16.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Nov 29 02:37:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:16.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:17 np0005539550 python3.9[209290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:17 np0005539550 python3.9[209413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401836.5794492-2296-163319873879400/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
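
The override.conf payloads themselves are not logged (only their shared checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826 appears), so their exact content is unknown. A typical libvirt socket drop-in adjusts socket ownership with standard systemd.socket(5) keys, roughly as below; the key values here are assumptions, not recovered from the log:

    import pathlib

    # Hypothetical drop-in content: [Socket] keys are standard systemd,
    # but these particular values are NOT taken from the log.
    override = "[Socket]\nSocketGroup=libvirt\nSocketMode=0660\n"

    d = pathlib.Path('/etc/systemd/system/virtlogd.socket.d')
    d.mkdir(parents=True, exist_ok=True)
    (d / 'override.conf').write_text(override)
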
Nov 29 02:37:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:18.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Nov 29 02:37:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:18.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:18 np0005539550 python3.9[209664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:37:18.905 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:37:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:37:18.906 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:37:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:37:18.907 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
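
The three DEBUG lines above are the acquire/held/release trace that oslo.concurrency emits around a named lock; in agent code the usual pattern is the lockutils decorator (or the equivalent context manager), roughly:

    from oslo_concurrency import lockutils

    # Sketch of the pattern behind the trace above: the decorator's "inner"
    # wrapper logs acquire, waited, and held durations at DEBUG.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Body runs with the named lock held; the log shows it waited
        # 0.002s to acquire and held the lock for under a millisecond.
        pass
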
Nov 29 02:37:19 np0005539550 python3.9[209818]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401837.8629873-2296-223158904146571/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:37:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
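
Each pg_autoscaler line computes capacity_ratio × bias × a constant that works out to 300 here (plausibly mon_target_pg_per_osd=100 across 3 OSDs, though that split is an inference), then quantizes to a power of two no smaller than the pool's floor. The logged numbers check out:

    # Pool '.mgr': ratio * bias * 300 reproduces the logged pg target.
    ratio, bias = 2.0538165363856318e-05, 1.0
    print(ratio * bias * 300)   # ~0.006161449609156895 -> quantized to 1

    # Pool 'cephfs.cephfs.meta' carries a 4.0 metadata bias.
    ratio, bias = 1.4540294062907128e-06, 4.0
    print(ratio * bias * 300)   # ~0.0017448352875488555 -> quantized to 16
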
Nov 29 02:37:19 np0005539550 python3.9[209970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:37:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:37:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:20.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:20 np0005539550 python3.9[210094]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401839.1541228-2296-238639107710925/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Nov 29 02:37:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:20.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:20 np0005539550 python3.9[210246]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d2acea6a-9752-4ca2-9f67-97c9dfa28145 does not exist
Nov 29 02:37:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 91b97d12-52b3-4912-b124-54fc4e46c7e6 does not exist
Nov 29 02:37:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ab5a2437-caa6-489c-816f-d2b77ab4230c does not exist
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:37:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
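The handle_command/audit pairs above are the cephadm mgr driving the mon over the normal command interface; the same JSON commands can be issued from the rados Python binding. A sketch, assuming a local ceph.conf and admin keyring are present; to my understanding mon_command takes the JSON command string plus an input buffer and returns (rc, output buffer, status string):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        rc, outbuf, outs = cluster.mon_command(cmd, b'')
        print(rc, outs)
        print(outbuf.decode())
    finally:
        cluster.shutdown()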
Nov 29 02:37:21 np0005539550 python3.9[210369]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401840.3318782-2296-160392142793749/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:21 np0005539550 podman[210654]: 2025-11-29 07:37:21.764652285 +0000 UTC m=+0.025550858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:37:21 np0005539550 python3.9[210673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:21 np0005539550 podman[210654]: 2025-11-29 07:37:21.982180504 +0000 UTC m=+0.243079057 container create c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:37:22 np0005539550 systemd[1]: Started libpod-conmon-c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9.scope.
Nov 29 02:37:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:37:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:22.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 31 op/s
Nov 29 02:37:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:22.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:22 np0005539550 python3.9[210803]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401841.4701078-2296-21302675227658/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:22 np0005539550 podman[210654]: 2025-11-29 07:37:22.623400604 +0000 UTC m=+0.884299217 container init c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:37:22 np0005539550 podman[210654]: 2025-11-29 07:37:22.633092289 +0000 UTC m=+0.893990842 container start c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 29 02:37:22 np0005539550 heuristic_swirles[210701]: 167 167
Nov 29 02:37:22 np0005539550 systemd[1]: libpod-c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9.scope: Deactivated successfully.
Nov 29 02:37:22 np0005539550 conmon[210701]: conmon c501e8f8238c5baed82a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9.scope/container/memory.events
Nov 29 02:37:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:37:22 np0005539550 podman[210654]: 2025-11-29 07:37:22.728403973 +0000 UTC m=+0.989302556 container attach c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:37:22 np0005539550 podman[210654]: 2025-11-29 07:37:22.73262944 +0000 UTC m=+0.993527993 container died c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:37:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d1ae5ab6b43944b3721cf6dc50c4c6350d10a816c69b673ff61c87cf14ed7ce5-merged.mount: Deactivated successfully.
Nov 29 02:37:22 np0005539550 podman[210654]: 2025-11-29 07:37:22.784973796 +0000 UTC m=+1.045872349 container remove c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:37:22 np0005539550 systemd[1]: libpod-conmon-c501e8f8238c5baed82a54b555f3b9eaf83272b8b93c15b8edcc0ceb0f9488d9.scope: Deactivated successfully.
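heuristic_swirles is one of cephadm's throwaway containers, so the journal shows the full create → init → start → attach → died → remove arc inside a second. The same arc can be watched live by streaming podman's event feed; the JSON field names below (Status, Name, ID) match recent podman releases but should be treated as assumptions:

    import json
    import subprocess

    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json', '--filter', 'type=container'],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get('Status'), ev.get('Name'), ev.get('ID', '')[:12])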
Nov 29 02:37:23 np0005539550 podman[210947]: 2025-11-29 07:37:22.955596517 +0000 UTC m=+0.036477924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:37:23 np0005539550 python3.9[210989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:23 np0005539550 podman[210947]: 2025-11-29 07:37:23.479974827 +0000 UTC m=+0.560856214 container create 42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:37:23 np0005539550 systemd[1]: Started libpod-conmon-42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8.scope.
Nov 29 02:37:23 np0005539550 python3.9[211112]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401842.6851196-2296-59021162669008/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:37:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6e4aaa5a732d3c5bd961ec07c113c2d28db8a3d4d73644c7a52b55dd1ce2fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6e4aaa5a732d3c5bd961ec07c113c2d28db8a3d4d73644c7a52b55dd1ce2fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6e4aaa5a732d3c5bd961ec07c113c2d28db8a3d4d73644c7a52b55dd1ce2fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6e4aaa5a732d3c5bd961ec07c113c2d28db8a3d4d73644c7a52b55dd1ce2fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6e4aaa5a732d3c5bd961ec07c113c2d28db8a3d4d73644c7a52b55dd1ce2fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
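The xfs "supports timestamps until 2038 (0x7fffffff)" notices are the kernel flagging that these overlay-backed mounts lack the XFS bigtime feature, so their inode timestamps top out at the 32-bit time_t limit. Decoding the constant:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647, the largest signed 32-bit time_t
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00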
Nov 29 02:37:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
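_set_new_cache_sizes appears to be the mon re-splitting its memory budget between the kv (RocksDB) cache and the full/incremental osdmap caches; notably, the allocations in the line above are exact MiB multiples carved out of the ~973 MiB budget. Checking the arithmetic:

    MiB = 1 << 20
    for name, val in [('cache_size', 1020054731),
                      ('inc_alloc', 348127232),
                      ('full_alloc', 348127232),
                      ('kv_alloc', 318767104)]:
        print(f'{name:10s} {val / MiB:8.1f} MiB')   # 332.0 and 304.0 are exact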
Nov 29 02:37:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:24.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:24 np0005539550 podman[210947]: 2025-11-29 07:37:24.23382608 +0000 UTC m=+1.314707527 container init 42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:37:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Nov 29 02:37:24 np0005539550 podman[210947]: 2025-11-29 07:37:24.243635378 +0000 UTC m=+1.324516765 container start 42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:37:24 np0005539550 podman[210947]: 2025-11-29 07:37:24.269724309 +0000 UTC m=+1.350605706 container attach 42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:37:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:24.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:24 np0005539550 python3.9[211270]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:24 np0005539550 python3.9[211395]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401843.928962-2296-239318201186303/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:25 np0005539550 loving_germain[211115]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:37:25 np0005539550 loving_germain[211115]: --> relative data size: 1.0
Nov 29 02:37:25 np0005539550 loving_germain[211115]: --> All data devices are unavailable
Nov 29 02:37:25 np0005539550 systemd[1]: libpod-42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8.scope: Deactivated successfully.
Nov 29 02:37:25 np0005539550 podman[210947]: 2025-11-29 07:37:25.193867514 +0000 UTC m=+2.274748901 container died 42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:37:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7e6e4aaa5a732d3c5bd961ec07c113c2d28db8a3d4d73644c7a52b55dd1ce2fc-merged.mount: Deactivated successfully.
Nov 29 02:37:25 np0005539550 podman[210947]: 2025-11-29 07:37:25.261318463 +0000 UTC m=+2.342199850 container remove 42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:37:25 np0005539550 systemd[1]: libpod-conmon-42dbacb77213140222ae4ecb3f58e244b098c6800c0ea64e0d79e6ddfb45bad8.scope: Deactivated successfully.
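loving_germain's output above ("passed data devices: 0 physical, 1 LVM ... All data devices are unavailable") reads like a ceph-volume batch dry run finding the lone LVM device already consumed by an existing OSD. Roughly what cephadm is doing here, as a sketch only: the image digest comes from the log, but the exact flags and bind mounts cephadm passes are assumptions:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    subprocess.run(
        ['podman', 'run', '--rm', '--privileged', IMAGE,
         'ceph-volume', 'lvm', 'batch', '--report', '--format', 'json',
         '/dev/ceph_vg0/ceph_lv0'],
        check=False,
    )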
Nov 29 02:37:25 np0005539550 podman[211616]: 2025-11-29 07:37:25.499167297 +0000 UTC m=+0.060259048 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:37:25 np0005539550 python3.9[211617]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:25 np0005539550 podman[211799]: 2025-11-29 07:37:25.881981982 +0000 UTC m=+0.045945814 container create 0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:37:25 np0005539550 systemd[1]: Started libpod-conmon-0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4.scope.
Nov 29 02:37:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:37:25 np0005539550 podman[211799]: 2025-11-29 07:37:25.861117784 +0000 UTC m=+0.025081646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:37:25 np0005539550 podman[211799]: 2025-11-29 07:37:25.968476403 +0000 UTC m=+0.132440225 container init 0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:37:25 np0005539550 podman[211799]: 2025-11-29 07:37:25.978991409 +0000 UTC m=+0.142955241 container start 0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:37:25 np0005539550 festive_bose[211840]: 167 167
Nov 29 02:37:25 np0005539550 systemd[1]: libpod-0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4.scope: Deactivated successfully.
Nov 29 02:37:25 np0005539550 podman[211799]: 2025-11-29 07:37:25.988197102 +0000 UTC m=+0.152160934 container attach 0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:37:25 np0005539550 podman[211799]: 2025-11-29 07:37:25.988682244 +0000 UTC m=+0.152646076 container died 0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bose, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:37:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1c5646c033be3b2c0e92b295b435c538f4467e81d39fefba69a01e4a2bebebd8-merged.mount: Deactivated successfully.
Nov 29 02:37:26 np0005539550 podman[211799]: 2025-11-29 07:37:26.040153408 +0000 UTC m=+0.204117240 container remove 0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 29 02:37:26 np0005539550 systemd[1]: libpod-conmon-0dd494910c784ba1b9277d5dc2d04c5bcb9d04e3113c56c97bc7e4ce70e27cf4.scope: Deactivated successfully.
Nov 29 02:37:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:26.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:26 np0005539550 python3.9[211873]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401845.1362448-2296-202961024009282/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:26 np0005539550 podman[211893]: 2025-11-29 07:37:26.208335458 +0000 UTC m=+0.051050524 container create a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:37:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 02:37:26 np0005539550 systemd[1]: Started libpod-conmon-a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17.scope.
Nov 29 02:37:26 np0005539550 podman[211893]: 2025-11-29 07:37:26.191193433 +0000 UTC m=+0.033908509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:37:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:37:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea096cd02a331e8c17dfd2c87811e48192cb643c6f65d6ce8e0b93edff962b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea096cd02a331e8c17dfd2c87811e48192cb643c6f65d6ce8e0b93edff962b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea096cd02a331e8c17dfd2c87811e48192cb643c6f65d6ce8e0b93edff962b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ea096cd02a331e8c17dfd2c87811e48192cb643c6f65d6ce8e0b93edff962b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:26.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:26 np0005539550 podman[211893]: 2025-11-29 07:37:26.423934098 +0000 UTC m=+0.266649194 container init a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:37:26 np0005539550 podman[211893]: 2025-11-29 07:37:26.431144411 +0000 UTC m=+0.273859477 container start a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:37:26 np0005539550 podman[211893]: 2025-11-29 07:37:26.444792726 +0000 UTC m=+0.287507782 container attach a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:37:26 np0005539550 python3.9[212066]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]: {
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:    "0": [
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:        {
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "devices": [
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "/dev/loop3"
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            ],
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "lv_name": "ceph_lv0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "lv_size": "7511998464",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "name": "ceph_lv0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "tags": {
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.cluster_name": "ceph",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.crush_device_class": "",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.encrypted": "0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.osd_id": "0",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.type": "block",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:                "ceph.vdo": "0"
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            },
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "type": "block",
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:            "vg_name": "ceph_vg0"
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:        }
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]:    ]
Nov 29 02:37:27 np0005539550 jolly_lalande[211933]: }
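jolly_lalande's JSON has the shape of ceph-volume lvm list output: OSD ids keyed at the top level, each entry carrying its logical volume, backing devices, and ceph.* LV tags. A short parse of a trimmed copy of the block above:

    import json

    raw = '''{
      "0": [
        {"lv_path": "/dev/ceph_vg0/ceph_lv0",
         "devices": ["/dev/loop3"],
         "tags": {"ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
                  "ceph.osd_id": "0", "ceph.type": "block"}}
      ]
    }'''

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id} -> {lv['lv_path']} on {lv['devices'][0]} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']})")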
Nov 29 02:37:27 np0005539550 systemd[1]: libpod-a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17.scope: Deactivated successfully.
Nov 29 02:37:27 np0005539550 podman[211893]: 2025-11-29 07:37:27.228387831 +0000 UTC m=+1.071102897 container died a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:37:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1ea096cd02a331e8c17dfd2c87811e48192cb643c6f65d6ce8e0b93edff962b8-merged.mount: Deactivated successfully.
Nov 29 02:37:27 np0005539550 podman[211893]: 2025-11-29 07:37:27.283653481 +0000 UTC m=+1.126368547 container remove a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:37:27 np0005539550 systemd[1]: libpod-conmon-a0270fb37c33415f70d053ae2b09bf1552cc9f2527588f9d2f7b88c8e457ed17.scope: Deactivated successfully.
Nov 29 02:37:27 np0005539550 python3.9[212193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401846.3488228-2296-239445604619650/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:37:27 np0005539550 podman[212495]: 2025-11-29 07:37:27.902585846 +0000 UTC m=+0.044562239 container create 6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:37:27 np0005539550 systemd[1]: Started libpod-conmon-6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293.scope.
Nov 29 02:37:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:37:27 np0005539550 podman[212495]: 2025-11-29 07:37:27.884142949 +0000 UTC m=+0.026119362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:37:27 np0005539550 podman[212495]: 2025-11-29 07:37:27.981263709 +0000 UTC m=+0.123240122 container init 6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:37:27 np0005539550 podman[212495]: 2025-11-29 07:37:27.989362954 +0000 UTC m=+0.131339347 container start 6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:37:27 np0005539550 podman[212495]: 2025-11-29 07:37:27.993811947 +0000 UTC m=+0.135788360 container attach 6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:37:27 np0005539550 condescending_moser[212512]: 167 167
Nov 29 02:37:27 np0005539550 systemd[1]: libpod-6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293.scope: Deactivated successfully.
Nov 29 02:37:27 np0005539550 conmon[212512]: conmon 6f5ef277c9d4d93e78fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293.scope/container/memory.events
Nov 29 02:37:27 np0005539550 podman[212495]: 2025-11-29 07:37:27.996353681 +0000 UTC m=+0.138330104 container died 6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:37:28 np0005539550 python3.9[212494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-67ba95dcab2bfc52a4d83702e1070e17f337ede2c7587fa5a132877977973308-merged.mount: Deactivated successfully.
Nov 29 02:37:28 np0005539550 podman[212495]: 2025-11-29 07:37:28.048102752 +0000 UTC m=+0.190079135 container remove 6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:37:28 np0005539550 systemd[1]: libpod-conmon-6f5ef277c9d4d93e78fb4050254a14a832fab58b1f1360529658574bd01f7293.scope: Deactivated successfully.
Nov 29 02:37:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:28.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:37:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:37:28 np0005539550 podman[212571]: 2025-11-29 07:37:28.20044883 +0000 UTC m=+0.039760978 container create debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:37:28 np0005539550 systemd[1]: Started libpod-conmon-debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827.scope.
Nov 29 02:37:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 29 02:37:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:37:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99056f6e6519736336495e379c5f692a34a7db06ecf851d1d5064d122b5f6ec1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99056f6e6519736336495e379c5f692a34a7db06ecf851d1d5064d122b5f6ec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99056f6e6519736336495e379c5f692a34a7db06ecf851d1d5064d122b5f6ec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99056f6e6519736336495e379c5f692a34a7db06ecf851d1d5064d122b5f6ec1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:37:28 np0005539550 podman[212571]: 2025-11-29 07:37:28.276169668 +0000 UTC m=+0.115481836 container init debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:37:28 np0005539550 podman[212571]: 2025-11-29 07:37:28.184050505 +0000 UTC m=+0.023362683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:37:28 np0005539550 podman[212571]: 2025-11-29 07:37:28.288565162 +0000 UTC m=+0.127877310 container start debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:37:28 np0005539550 podman[212571]: 2025-11-29 07:37:28.29281208 +0000 UTC m=+0.132124228 container attach debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:37:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:28.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:28 np0005539550 python3.9[212680]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401847.517888-2296-187932091830098/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
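The stat/copy pair above (repeated below for each libvirt socket drop-in) is Ansible's idempotent copy: hash the destination, compare against the rendered template's sha1, and rewrite only on mismatch. A minimal sketch of that comparison, with a hypothetical local path standing in for the Ansible temp file:

    import hashlib
    import pathlib
    import shutil

    def sha1_of(path):
        # Stream the file so large inputs do not load into memory at once.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    src = pathlib.Path("/tmp/override.conf.rendered")  # hypothetical rendered template
    dest = pathlib.Path("/etc/systemd/system/virtqemud-ro.socket.d/override.conf")
    if not dest.exists() or sha1_of(dest) != sha1_of(src):
        shutil.copy2(src, dest)  # rewrite only when content differs, as the copy task does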
Nov 29 02:37:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]: {
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]:        "osd_id": 0,
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]:        "type": "bluestore"
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]:    }
Nov 29 02:37:29 np0005539550 peaceful_almeida[212623]: }
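The JSON block printed by the short-lived ceph container (peaceful_almeida) is a device inventory keyed by OSD UUID; the exact ceph-volume subcommand that produced it is not logged, so treat the shape, not the command, as the given. A sketch of consuming it, using the payload exactly as logged above:

    import json

    raw = """{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }"""

    # Map each OSD to its backing device and cluster fsid.
    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}, cluster {osd['ceph_fsid']}")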
Nov 29 02:37:29 np0005539550 systemd[1]: libpod-debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827.scope: Deactivated successfully.
Nov 29 02:37:29 np0005539550 podman[212571]: 2025-11-29 07:37:29.193429789 +0000 UTC m=+1.032741997 container died debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:37:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-99056f6e6519736336495e379c5f692a34a7db06ecf851d1d5064d122b5f6ec1-merged.mount: Deactivated successfully.
Nov 29 02:37:29 np0005539550 podman[212571]: 2025-11-29 07:37:29.251259694 +0000 UTC m=+1.090571842 container remove debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:37:29 np0005539550 systemd[1]: libpod-conmon-debb3622e442c6a855dcb03b1d484be72793409a0ce3b80ed7b6e720ad55a827.scope: Deactivated successfully.
Nov 29 02:37:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:37:29 np0005539550 python3.9[212843]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:37:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 53540d8a-1016-46b7-bbc6-575a05688bd5 does not exist
Nov 29 02:37:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b01dca05-e659-4e31-a62f-a94f68956c65 does not exist
Nov 29 02:37:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e4ea6993-d16a-4016-a505-6cb88aa77fff does not exist
Nov 29 02:37:29 np0005539550 python3.9[212983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401848.7993057-2296-14341758794083/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:30.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:37:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Nov 29 02:37:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:30.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:30 np0005539550 python3.9[213186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:31 np0005539550 python3.9[213309]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401850.0207396-2296-272817528908641/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:31 np0005539550 python3.9[213461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:32.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 02:37:32 np0005539550 python3.9[213585]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401851.2032073-2296-231247617755166/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:32.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:32 np0005539550 python3.9[213762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:37:33 np0005539550 python3.9[213910]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401852.3880138-2296-135687199109346/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:34.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Nov 29 02:37:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:34.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:35 np0005539550 python3.9[214061]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
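The task above shells out to ls -lRZ /run/libvirt piped through grep (the #012 sequences are the journal's newline encoding) to confirm that entries under /run/libvirt carry container_*_t SELinux types. The same check in Python, reading the security.selinux xattr directly (Linux-only):

    import os
    import re

    # Python equivalent of: ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
    pattern = re.compile(r":container_\S+_t")
    for root, dirs, files in os.walk("/run/libvirt"):
        for name in dirs + files:
            path = os.path.join(root, name)
            try:
                # SELinux contexts are stored NUL-terminated in this xattr.
                ctx = os.getxattr(path, "security.selinux").decode().rstrip("\x00")
            except OSError:
                continue  # vanished or unlabeled entry
            if pattern.search(ctx):
                print(path, ctx)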
Nov 29 02:37:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:36.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:36 np0005539550 python3.9[214217]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
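ansible.posix.seboolean here turns on os_enable_vtpm and persists it across reboots. In CLI terms that is a single setsebool call; a sketch via subprocess:

    import subprocess

    # -P writes the boolean into the policy so it survives reboot,
    # mirroring persistent=True in the task above.
    subprocess.run(["setsebool", "-P", "os_enable_vtpm", "on"], check=True)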
Nov 29 02:37:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:38.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:38 np0005539550 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 02:37:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:37:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:38.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:37:38 np0005539550 podman[214247]: 2025-11-29 07:37:38.387731659 +0000 UTC m=+0.102553928 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
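The health_status=healthy entry above comes from podman's healthcheck timer running the /openstack/healthcheck test mounted into the ovn_controller container. The same check can be run on demand; exit code 0 means healthy:

    import subprocess

    # Trigger the container's configured healthcheck once, outside the timer.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")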
Nov 29 02:37:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:40.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:40.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:42.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:42.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:44.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:44.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:44 np0005539550 python3.9[214405]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:45 np0005539550 python3.9[214557]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:45 np0005539550 python3.9[214709]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:46.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:46 np0005539550 python3.9[214862]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:47 np0005539550 python3.9[215014]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:48.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:48 np0005539550 python3.9[215167]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:37:49 np0005539550 python3.9[215319]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:49 np0005539550 python3.9[215471]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:50.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:50 np0005539550 python3.9[215624]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:37:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:50.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:50 np0005539550 python3.9[215776]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
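Taken together, the copy tasks from 02:37:44 to 02:37:50 fan a single tls.crt/tls.key/ca.crt triple out to the libvirt and qemu PKI locations: root-owned 0644/0600 files for libvirt, qemu-group 0640 files for QEMU. A sketch reproducing exactly the mapping recorded above:

    import os
    import pathlib
    import shutil

    SRC = "/var/lib/openstack/certs/libvirt/default"
    # (dest, source file, mode, group) tuples taken from the copy tasks above.
    plan = [
        ("/etc/pki/libvirt/servercert.pem",        "tls.crt", 0o644, "root"),
        ("/etc/pki/libvirt/private/serverkey.pem", "tls.key", 0o600, "root"),
        ("/etc/pki/libvirt/clientcert.pem",        "tls.crt", 0o644, "root"),
        ("/etc/pki/libvirt/private/clientkey.pem", "tls.key", 0o644, "root"),
        ("/etc/pki/CA/cacert.pem",                 "ca.crt",  0o644, "root"),
        ("/etc/pki/qemu/server-cert.pem",          "tls.crt", 0o640, "qemu"),
        ("/etc/pki/qemu/server-key.pem",           "tls.key", 0o640, "qemu"),
        ("/etc/pki/qemu/client-cert.pem",          "tls.crt", 0o640, "qemu"),
        ("/etc/pki/qemu/client-key.pem",           "tls.key", 0o640, "qemu"),
        ("/etc/pki/qemu/ca-cert.pem",              "ca.crt",  0o640, "qemu"),
    ]
    for dest, src, mode, group in plan:
        pathlib.Path(dest).parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(os.path.join(SRC, src), dest)
        os.chmod(dest, mode)
        shutil.chown(dest, user="root", group=group)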
Nov 29 02:37:52 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:37:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:52.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:52 np0005539550 python3.9[215929]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:52 np0005539550 systemd[1]: Reloading.
Nov 29 02:37:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:52 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:52 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:52.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:52 np0005539550 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 02:37:52 np0005539550 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 02:37:52 np0005539550 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 02:37:52 np0005539550 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 02:37:52 np0005539550 systemd[1]: Starting libvirt logging daemon...
Nov 29 02:37:52 np0005539550 systemd[1]: Started libvirt logging daemon.
Nov 29 02:37:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).paxos(paxos updating c 754..1469) accept timeout, calling fresh election
Nov 29 02:37:53 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:37:53 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(58) init, last seen epoch 58
Nov 29 02:37:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:37:53 np0005539550 python3.9[216173]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:53 np0005539550 systemd[1]: Reloading.
Nov 29 02:37:53 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:53 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:53 np0005539550 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 02:37:53 np0005539550 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 02:37:53 np0005539550 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 02:37:53 np0005539550 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 02:37:53 np0005539550 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 02:37:53 np0005539550 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 02:37:53 np0005539550 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:37:53 np0005539550 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:37:54 np0005539550 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 02:37:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:54.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:54 np0005539550 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 02:37:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:54.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:54 np0005539550 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 02:37:54 np0005539550 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 02:37:54 np0005539550 python3.9[216391]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:54 np0005539550 systemd[1]: Reloading.
Nov 29 02:37:54 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:54 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:55 np0005539550 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 02:37:55 np0005539550 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 02:37:55 np0005539550 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 02:37:55 np0005539550 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 02:37:55 np0005539550 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:37:55 np0005539550 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:37:55 np0005539550 setroubleshoot[216315]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l b6764583-b5a3-4441-bdc1-14a7de148396
Nov 29 02:37:55 np0005539550 setroubleshoot[216315]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

    *****  Plugin dac_override (91.4 confidence) suggests   **********************

    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do

    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.

    *****  Plugin catchall (9.59 confidence) suggests   **************************

    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
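The catchall suggestion in the message above, scripted; the three commands are exactly the ones setroubleshoot printed, so run this only after deciding virtlogd legitimately needs dac_read_search:

    import subprocess

    # Collect the raw AVC records for virtlogd...
    raw = subprocess.run(["ausearch", "-c", "virtlogd", "--raw"],
                         check=True, capture_output=True).stdout
    # ...turn them into a local policy module...
    subprocess.run(["audit2allow", "-M", "my-virtlogd"], input=raw, check=True)
    # ...and install it at the priority setroubleshoot suggested.
    subprocess.run(["semodule", "-X", "300", "-i", "my-virtlogd.pp"], check=True)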
Nov 29 02:37:55 np0005539550 python3.9[216613]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:55 np0005539550 systemd[1]: Reloading.
Nov 29 02:37:55 np0005539550 podman[216615]: 2025-11-29 07:37:55.903512655 +0000 UTC m=+0.070279875 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 02:37:55 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:55 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:56 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:37:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:37:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:56.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:37:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:56 np0005539550 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 02:37:56 np0005539550 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 02:37:56 np0005539550 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 02:37:56 np0005539550 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 02:37:56 np0005539550 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 02:37:56 np0005539550 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 02:37:56 np0005539550 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 02:37:56 np0005539550 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 02:37:56 np0005539550 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 02:37:56 np0005539550 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 02:37:56 np0005539550 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:37:56 np0005539550 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:37:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:56.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:57 np0005539550 python3.9[216848]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:37:57 np0005539550 systemd[1]: Reloading.
Nov 29 02:37:57 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:37:57 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:37:57 np0005539550 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 02:37:57 np0005539550 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 02:37:57 np0005539550 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 02:37:57 np0005539550 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 02:37:57 np0005539550 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 02:37:57 np0005539550 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 02:37:57 np0005539550 systemd[1]: Starting libvirt secret daemon...
Nov 29 02:37:57 np0005539550 systemd[1]: Started libvirt secret daemon.
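Each of the five virt* daemons above (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) is cycled the same way: a systemd daemon-reload so the freshly written socket drop-ins are parsed, then a service restart that brings the socket units up alongside it. The pattern, condensed:

    import subprocess

    # Same sequence the ansible.builtin.systemd tasks above perform,
    # once per libvirt modular daemon.
    for svc in ("virtlogd", "virtnodedevd", "virtproxyd", "virtqemud", "virtsecretd"):
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "restart", f"{svc}.service"], check=True)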
Nov 29 02:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:37:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:37:58.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:37:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:37:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:37:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:37:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:37:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:37:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:37:59
Nov 29 02:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'vms']
Nov 29 02:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
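The balancer ran an upmap pass over the eleven pools and prepared no changes, consistent with all 305 PGs already being active+clean. To see the same verdict from the CLI (assumes the ceph client and admin keyring are available on this host):

    import subprocess

    # Prints the active balancer mode and last optimization result.
    print(subprocess.run(["ceph", "balancer", "status"],
                         check=True, capture_output=True, text=True).stdout)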
Nov 29 02:37:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:37:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:37:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:37:59 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(61) init, last seen epoch 61, mid-election, bumping
Nov 29 02:38:00 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:38:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:00.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:38:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:00.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:00 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf MDS connection to Monitors appears to be laggy; 16.3504s since last acked beacon
Nov 29 02:38:00 np0005539550 ceph-mds[93677]: mds.0.10 skipping upkeep work because connection to Monitors appears laggy
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.pdhsqi(active, since 25m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:38:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:38:00 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf  MDS is no longer laggy
Nov 29 02:38:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:02.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:02.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:02 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:38:02 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:38:02 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:38:02 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:38:02 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:38:02 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:38:02 np0005539550 ceph-mon[74435]: overall HEALTH_OK
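The election churn above ends with all three monitors back in quorum and health OK. A quick confirmation from Python, again assuming CLI access and the admin keyring on this node:

    import json
    import subprocess

    # quorum_status emits JSON by default.
    qs = json.loads(subprocess.run(["ceph", "quorum_status"],
                                   check=True, capture_output=True, text=True).stdout)
    print(qs["quorum_names"])  # expect ['compute-0', 'compute-1', 'compute-2']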
Nov 29 02:38:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:04.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:04.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:05 np0005539550 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 02:38:05 np0005539550 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.020s CPU time.
Nov 29 02:38:05 np0005539550 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 02:38:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:06.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:06 np0005539550 python3.9[217066]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:06.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:06 np0005539550 python3.9[217219]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:38:07 np0005539550 python3.9[217371]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
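(The shell task above extracts the cluster fsid with awk -F '=' '/fsid/ {print $2}' piped through xargs to trim whitespace. Since ceph.conf is INI-style, configparser does the same job; a sketch, assuming the fsid sits in the usual [global] section:

    import configparser

    # Sketch: read the cluster fsid the awk one-liner above extracts.
    cfg = configparser.ConfigParser()
    cfg.read("/var/lib/openstack/config/ceph/ceph.conf")
    print(cfg.get("global", "fsid").strip())
)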
Nov 29 02:38:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:38:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:38:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:38:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:38:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:38:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:08.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:08.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:08 np0005539550 podman[217500]: 2025-11-29 07:38:08.592357507 +0000 UTC m=+0.087830548 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
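(The rbd_support mgr module reloads its MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler for each RBD pool; the empty start_after= values indicate no schedules are configured. The same state can be listed from the CLI, as a sketch:

    import subprocess

    # Sketch: list the per-pool mirror snapshot schedules the mgr module
    # just reloaded; empty output matches the empty start_after= above.
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool],
            check=False,
        )
)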
Nov 29 02:38:08 np0005539550 python3.9[217539]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:38:09 np0005539550 python3.9[217702]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:10.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:10 np0005539550 python3.9[217824]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401889.145679-3370-17779681066235/.source.xml follow=False _original_basename=secret.xml.j2 checksum=adf02dc8f6a63a8cc45a7e93e335963254ff5ce7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:10.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:10 np0005539550 python3.9[217976]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine b66774a7-56d9-5535-bd8c-681234404870#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:11 np0005539550 python3.9[218138]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
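(Lines 02:38:09 through 02:38:11 show the standard pattern for rotating a Ceph secret in libvirt: template /tmp/secret.xml with mode 0600, virsh secret-undefine the old UUID, virsh secret-define the new file, then remove it; the secret value itself is set separately, typically via virsh secret-set-value. A sketch of that flow; the UUID is taken from the log, while the XML shape and usage name are the conventional ceph-type layout, assumed rather than shown here:

    import os
    import subprocess
    import tempfile

    UUID = "b66774a7-56d9-5535-bd8c-681234404870"  # from the log above

    # Conventional libvirt "ceph" secret XML (assumed; not in the log).
    SECRET_XML = f"""<secret ephemeral='no' private='no'>
      <uuid>{UUID}</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>
      </usage>
    </secret>
    """

    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(SECRET_XML)
        path = f.name
    try:
        subprocess.run(["virsh", "secret-undefine", UUID], check=False)
        subprocess.run(["virsh", "secret-define", "--file", path], check=True)
    finally:
        os.unlink(path)
)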
Nov 29 02:38:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:12.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:12.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:14.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539550 python3.9[218653]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:14.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:14 np0005539550 python3.9[218805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:15 np0005539550 python3.9[218928]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401894.4716017-3535-79222204617543/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:16.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:16 np0005539550 python3.9[219081]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:16.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:17 np0005539550 python3.9[219233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:17 np0005539550 python3.9[219311]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:18.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:18 np0005539550 python3.9[219464]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:18 np0005539550 python3.9[219542]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.atooiijl recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:38:18.907 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:38:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:38:18.908 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:38:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:38:18.908 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:38:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
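(Each pg_autoscaler line computes usage_ratio x bias x total PG budget, then quantizes to a power of two; pools whose raw target is far below their current pg_num are reported but left alone, hence every "(current N)" matching its quantized value. With 3 OSDs and the default budget of 100 PGs per OSD (an assumption consistent with the numbers), the reported targets reproduce exactly; a sketch, not the mgr module itself:

    # Sketch of the autoscaler arithmetic behind the lines above.
    TARGET_PG_PER_OSD = 100   # assumed default
    NUM_OSDS = 3              # "osdmap e151: 3 total, 3 up, 3 in"

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * NUM_OSDS * TARGET_PG_PER_OSD

    print(pg_target(2.0538165363856318e-05, 1.0))  # ~0.006161 ('.mgr')
    print(pg_target(1.4540294062907128e-06, 4.0))  # ~0.001745 (cephfs meta)
)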
Nov 29 02:38:19 np0005539550 python3.9[219694]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:20 np0005539550 python3.9[219773]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:20.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:20.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:20 np0005539550 python3.9[219925]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
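(nft -j list ruleset dumps the whole ruleset as JSON so the play can diff it against the desired state. The same data is easy to inspect directly; a sketch:

    import json
    import subprocess

    # Sketch: parse the JSON the task above collects. `nft -j list ruleset`
    # emits {"nftables": [...]} with metainfo/table/chain/rule objects.
    ruleset = json.loads(
        subprocess.run(["nft", "-j", "list", "ruleset"],
                       capture_output=True, text=True, check=True).stdout
    )
    print([o["chain"]["name"] for o in ruleset["nftables"] if "chain" in o])
)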
Nov 29 02:38:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:22.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:22 np0005539550 python3[220079]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 02:38:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:22 np0005539550 python3.9[220231]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:23 np0005539550 python3.9[220309]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:24 np0005539550 python3.9[220462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:24.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:24 np0005539550 python3.9[220540]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:25 np0005539550 python3.9[220692]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:25 np0005539550 python3.9[220770]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:26 np0005539550 podman[220902]: 2025-11-29 07:38:26.321929184 +0000 UTC m=+0.060037246 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:38:26 np0005539550 python3.9[220930]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:26.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:26 np0005539550 python3.9[221021]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:28 np0005539550 python3.9[221174]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:28.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:28.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:28 np0005539550 python3.9[221299]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764401907.5017543-3910-215180804657079/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:30.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:30.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:38:31 np0005539550 python3.9[221582]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6aef5b1e-b009-455c-84fd-6a3da442b1d5 does not exist
Nov 29 02:38:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 023bb857-2984-486d-88cd-f172e43159b4 does not exist
Nov 29 02:38:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6a7a95a6-92fe-4d66-b71a-f841181e19ae does not exist
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:38:31 np0005539550 podman[221874]: 2025-11-29 07:38:31.974814291 +0000 UTC m=+0.045127240 container create b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_zhukovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:38:32 np0005539550 systemd[1]: Started libpod-conmon-b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec.scope.
Nov 29 02:38:32 np0005539550 python3.9[221859]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:38:32 np0005539550 podman[221874]: 2025-11-29 07:38:31.957190616 +0000 UTC m=+0.027503585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:32 np0005539550 podman[221874]: 2025-11-29 07:38:32.071536252 +0000 UTC m=+0.141849221 container init b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:38:32 np0005539550 podman[221874]: 2025-11-29 07:38:32.081846902 +0000 UTC m=+0.152159851 container start b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_zhukovsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:38:32 np0005539550 podman[221874]: 2025-11-29 07:38:32.086006387 +0000 UTC m=+0.156319346 container attach b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_zhukovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:32 np0005539550 systemd[1]: libpod-b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec.scope: Deactivated successfully.
Nov 29 02:38:32 np0005539550 objective_zhukovsky[221891]: 167 167
Nov 29 02:38:32 np0005539550 conmon[221891]: conmon b99f8321acb7a1520c54 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec.scope/container/memory.events
Nov 29 02:38:32 np0005539550 podman[221874]: 2025-11-29 07:38:32.089799983 +0000 UTC m=+0.160112932 container died b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:38:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-26e708fd4760bd14dc52b414a27216f7d2e575b59c9072c6330cf314737d7cb6-merged.mount: Deactivated successfully.
Nov 29 02:38:32 np0005539550 podman[221874]: 2025-11-29 07:38:32.14076371 +0000 UTC m=+0.211076659 container remove b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:38:32 np0005539550 systemd[1]: libpod-conmon-b99f8321acb7a1520c54565d42c7da7c49ef4210ce3e49fde9283f8f37989bec.scope: Deactivated successfully.
Nov 29 02:38:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:32.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:32 np0005539550 podman[221948]: 2025-11-29 07:38:32.321477811 +0000 UTC m=+0.042506554 container create bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:38:32 np0005539550 systemd[1]: Started libpod-conmon-bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636.scope.
Nov 29 02:38:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:38:32 np0005539550 podman[221948]: 2025-11-29 07:38:32.304464891 +0000 UTC m=+0.025493574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af28285ef0dbbe5f61f330be0876d7893459bb06bb460d128ce8d6ecff1eb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af28285ef0dbbe5f61f330be0876d7893459bb06bb460d128ce8d6ecff1eb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af28285ef0dbbe5f61f330be0876d7893459bb06bb460d128ce8d6ecff1eb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af28285ef0dbbe5f61f330be0876d7893459bb06bb460d128ce8d6ecff1eb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65af28285ef0dbbe5f61f330be0876d7893459bb06bb460d128ce8d6ecff1eb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:32 np0005539550 podman[221948]: 2025-11-29 07:38:32.417259018 +0000 UTC m=+0.138287701 container init bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:38:32 np0005539550 podman[221948]: 2025-11-29 07:38:32.424831439 +0000 UTC m=+0.145860102 container start bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chebyshev, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:38:32 np0005539550 podman[221948]: 2025-11-29 07:38:32.428716727 +0000 UTC m=+0.149745390 container attach bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:38:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:32.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:33 np0005539550 python3.9[222093]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:33 np0005539550 vibrant_chebyshev[222007]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:38:33 np0005539550 vibrant_chebyshev[222007]: --> relative data size: 1.0
Nov 29 02:38:33 np0005539550 vibrant_chebyshev[222007]: --> All data devices are unavailable
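(The short-lived quay.io/ceph/ceph containers here (objective_zhukovsky, vibrant_chebyshev, vibrant_galileo) are cephadm-driven ceph-volume probes; vibrant_chebyshev's report above concludes that the one LVM data device it was passed is unavailable, so no new OSDs get created. The manual equivalent is a report-only batch run; a sketch, where /dev/vdb is a hypothetical device name:

    import subprocess

    # Sketch: the report-only probe cephadm drives in those containers;
    # --report only computes a plan, it creates nothing.
    subprocess.run(["ceph-volume", "lvm", "batch", "--report", "/dev/vdb"],
                   check=False)
)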
Nov 29 02:38:33 np0005539550 systemd[1]: libpod-bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636.scope: Deactivated successfully.
Nov 29 02:38:33 np0005539550 podman[221948]: 2025-11-29 07:38:33.307451817 +0000 UTC m=+1.028480510 container died bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:38:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-65af28285ef0dbbe5f61f330be0876d7893459bb06bb460d128ce8d6ecff1eb7-merged.mount: Deactivated successfully.
Nov 29 02:38:33 np0005539550 podman[221948]: 2025-11-29 07:38:33.373376691 +0000 UTC m=+1.094405354 container remove bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:38:33 np0005539550 systemd[1]: libpod-conmon-bfb643d850f5a546bdf420914e894851114570c2f1f4e4f9fc5aaa85b6a71636.scope: Deactivated successfully.
Nov 29 02:38:33 np0005539550 python3.9[222419]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:38:33 np0005539550 podman[222463]: 2025-11-29 07:38:33.982011012 +0000 UTC m=+0.042613956 container create eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_galileo, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 02:38:34 np0005539550 systemd[1]: Started libpod-conmon-eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea.scope.
Nov 29 02:38:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:38:34 np0005539550 podman[222463]: 2025-11-29 07:38:33.963528046 +0000 UTC m=+0.024131020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:34 np0005539550 podman[222463]: 2025-11-29 07:38:34.063092069 +0000 UTC m=+0.123695033 container init eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:38:34 np0005539550 podman[222463]: 2025-11-29 07:38:34.071824209 +0000 UTC m=+0.132427143 container start eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_galileo, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:38:34 np0005539550 podman[222463]: 2025-11-29 07:38:34.076108537 +0000 UTC m=+0.136711501 container attach eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_galileo, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:38:34 np0005539550 vibrant_galileo[222504]: 167 167
Nov 29 02:38:34 np0005539550 systemd[1]: libpod-eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea.scope: Deactivated successfully.
Nov 29 02:38:34 np0005539550 podman[222463]: 2025-11-29 07:38:34.079202736 +0000 UTC m=+0.139805680 container died eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-39f62bf59c9d93307da37fd87ce0d9fd646cb7676706336e12cfa0123c15fe6a-merged.mount: Deactivated successfully.
Nov 29 02:38:34 np0005539550 podman[222463]: 2025-11-29 07:38:34.11979456 +0000 UTC m=+0.180397504 container remove eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_galileo, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:38:34 np0005539550 systemd[1]: libpod-conmon-eccb23a550fb8ba6d0166c0c048e43962206b9594e74498c5c1dd4f46836ecea.scope: Deactivated successfully.
Nov 29 02:38:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:34.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:34 np0005539550 podman[222604]: 2025-11-29 07:38:34.279669895 +0000 UTC m=+0.043016026 container create 3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:38:34 np0005539550 systemd[1]: Started libpod-conmon-3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc.scope.
Nov 29 02:38:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:38:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89882186fe9deca85296ff84489ee51311aad0cc3410c47b57d2fab05f6790eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89882186fe9deca85296ff84489ee51311aad0cc3410c47b57d2fab05f6790eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89882186fe9deca85296ff84489ee51311aad0cc3410c47b57d2fab05f6790eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89882186fe9deca85296ff84489ee51311aad0cc3410c47b57d2fab05f6790eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:34 np0005539550 podman[222604]: 2025-11-29 07:38:34.261261001 +0000 UTC m=+0.024607152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:34 np0005539550 podman[222604]: 2025-11-29 07:38:34.362846655 +0000 UTC m=+0.126192816 container init 3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 02:38:34 np0005539550 podman[222604]: 2025-11-29 07:38:34.370122918 +0000 UTC m=+0.133469049 container start 3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:38:34 np0005539550 podman[222604]: 2025-11-29 07:38:34.37691684 +0000 UTC m=+0.140263001 container attach 3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 02:38:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:34.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:34 np0005539550 python3.9[222673]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:38:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]: {
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:    "0": [
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:        {
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "devices": [
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "/dev/loop3"
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            ],
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "lv_name": "ceph_lv0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "lv_size": "7511998464",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "name": "ceph_lv0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "tags": {
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.cluster_name": "ceph",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.crush_device_class": "",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.encrypted": "0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.osd_id": "0",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.type": "block",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:                "ceph.vdo": "0"
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            },
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "type": "block",
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:            "vg_name": "ceph_vg0"
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:        }
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]:    ]
Nov 29 02:38:35 np0005539550 kind_mcclintock[222668]: }
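[annotation] The JSON block above, printed by the one-off container kind_mcclintock, matches the output format of "ceph-volume lvm list --format json": one entry per OSD id, giving the backing LV, its device (/dev/loop3) and the ceph.* LV tags. The random names (vibrant_chebyshev, kind_mcclintock, and so on) are podman's auto-generated names for the short-lived probe containers cephadm launches, which is why each probe shows a full create/init/start/attach/died/remove cycle. A rough manual equivalent, assuming the same pinned image and that /dev and the LVM metadata are visible inside the container, would be:

    podman run --rm --privileged -v /dev:/dev \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume lvm list --format json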
Nov 29 02:38:35 np0005539550 systemd[1]: libpod-3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc.scope: Deactivated successfully.
Nov 29 02:38:35 np0005539550 podman[222604]: 2025-11-29 07:38:35.182773819 +0000 UTC m=+0.946119950 container died 3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-89882186fe9deca85296ff84489ee51311aad0cc3410c47b57d2fab05f6790eb-merged.mount: Deactivated successfully.
Nov 29 02:38:35 np0005539550 python3.9[222829]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
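[annotation] The shell pipeline in the command task above concatenates the flush, rule, and jump-update files and feeds them to a single "nft -f -" invocation. nft applies a ruleset file as one atomic transaction, so the chains are never left flushed but empty between steps, which separate nft calls would risk. Expanded, the command being run is:

    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -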
Nov 29 02:38:35 np0005539550 podman[222604]: 2025-11-29 07:38:35.239634714 +0000 UTC m=+1.002980845 container remove 3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:38:35 np0005539550 systemd[1]: libpod-conmon-3c0c314ec185c7b754f324256ffc7ca23322201ce737cd83f1d9a84989559ecc.scope: Deactivated successfully.
Nov 29 02:38:35 np0005539550 podman[223138]: 2025-11-29 07:38:35.818818542 +0000 UTC m=+0.043089678 container create 46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:38:35 np0005539550 systemd[1]: Started libpod-conmon-46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e.scope.
Nov 29 02:38:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:38:35 np0005539550 podman[223138]: 2025-11-29 07:38:35.889408264 +0000 UTC m=+0.113679420 container init 46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:38:35 np0005539550 podman[223138]: 2025-11-29 07:38:35.897063907 +0000 UTC m=+0.121335043 container start 46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:38:35 np0005539550 podman[223138]: 2025-11-29 07:38:35.801799503 +0000 UTC m=+0.026070659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:35 np0005539550 vigorous_sutherland[223154]: 167 167
Nov 29 02:38:35 np0005539550 systemd[1]: libpod-46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e.scope: Deactivated successfully.
Nov 29 02:38:35 np0005539550 podman[223138]: 2025-11-29 07:38:35.901718025 +0000 UTC m=+0.125989191 container attach 46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:38:35 np0005539550 podman[223138]: 2025-11-29 07:38:35.902123945 +0000 UTC m=+0.126395081 container died 46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:38:35 np0005539550 python3.9[223129]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0ce74a36d9903e2a9941275993d9d2f7d2e7b0a44fe330060913184c79c43b40-merged.mount: Deactivated successfully.
Nov 29 02:38:35 np0005539550 podman[223138]: 2025-11-29 07:38:35.941476138 +0000 UTC m=+0.165747274 container remove 46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:38:35 np0005539550 systemd[1]: libpod-conmon-46d430d1bd411cfa3d17a3ba521947c7c43b711f85c93a3e27ad98b3e989d98e.scope: Deactivated successfully.
Nov 29 02:38:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:36.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:36 np0005539550 podman[223203]: 2025-11-29 07:38:36.082567239 +0000 UTC m=+0.024645573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:38:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:36 np0005539550 podman[223203]: 2025-11-29 07:38:36.339391571 +0000 UTC m=+0.281469895 container create 73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:38:36 np0005539550 systemd[1]: Started libpod-conmon-73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e.scope.
Nov 29 02:38:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:38:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd270aedee98b6a29bc447a572166fc9c828f75bf9c785afe4ca951cd053c312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd270aedee98b6a29bc447a572166fc9c828f75bf9c785afe4ca951cd053c312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd270aedee98b6a29bc447a572166fc9c828f75bf9c785afe4ca951cd053c312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd270aedee98b6a29bc447a572166fc9c828f75bf9c785afe4ca951cd053c312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:38:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:36.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:36 np0005539550 podman[223203]: 2025-11-29 07:38:36.477902277 +0000 UTC m=+0.419980581 container init 73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:38:36 np0005539550 podman[223203]: 2025-11-29 07:38:36.484561935 +0000 UTC m=+0.426640229 container start 73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:38:36 np0005539550 podman[223203]: 2025-11-29 07:38:36.48790552 +0000 UTC m=+0.429983814 container attach 73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:38:36 np0005539550 python3.9[223350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:37 np0005539550 python3.9[223473]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401916.2191982-4126-127617373638604/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]: {
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]:        "osd_id": 0,
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]:        "type": "bluestore"
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]:    }
Nov 29 02:38:37 np0005539550 relaxed_driscoll[223317]: }
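[annotation] This second JSON block, keyed by OSD fsid rather than OSD id, matches the output format of "ceph-volume raw list --format json" and reports the activated bluestore OSD 0 on /dev/mapper/ceph_vg0-ceph_lv0. Assuming the same containerized invocation as the lvm probe above, the manual equivalent inside the container would be roughly:

    ceph-volume raw list --format json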
Nov 29 02:38:37 np0005539550 systemd[1]: libpod-73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e.scope: Deactivated successfully.
Nov 29 02:38:37 np0005539550 podman[223203]: 2025-11-29 07:38:37.369937602 +0000 UTC m=+1.312015906 container died 73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:38:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cd270aedee98b6a29bc447a572166fc9c828f75bf9c785afe4ca951cd053c312-merged.mount: Deactivated successfully.
Nov 29 02:38:37 np0005539550 podman[223203]: 2025-11-29 07:38:37.427781271 +0000 UTC m=+1.369859565 container remove 73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:38:37 np0005539550 systemd[1]: libpod-conmon-73601d56d66a9ad4c3896f3de9eea551f47da264ade307d90656eca022a9490e.scope: Deactivated successfully.
Nov 29 02:38:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:38:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:38:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7cf537ef-d738-45fc-8677-6c54b312920c does not exist
Nov 29 02:38:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e6d6863d-a284-44e8-83fa-7ade73441ed9 does not exist
Nov 29 02:38:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 468d178f-0b83-497f-ab9e-4ef26697aa93 does not exist
Nov 29 02:38:37 np0005539550 python3.9[223669]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:38:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:38.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:38.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:38 np0005539550 python3.9[223831]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401917.4740446-4171-175953585716369/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:39 np0005539550 podman[223955]: 2025-11-29 07:38:39.086426793 +0000 UTC m=+0.093247324 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 02:38:39 np0005539550 python3.9[224001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:38:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:39 np0005539550 python3.9[224131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401918.768075-4216-72246453037667/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:38:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:40.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:40.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:40 np0005539550 python3.9[224284]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:38:40 np0005539550 systemd[1]: Reloading.
Nov 29 02:38:40 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:40 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:41 np0005539550 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 02:38:42 np0005539550 python3.9[224476]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 02:38:42 np0005539550 systemd[1]: Reloading.
Nov 29 02:38:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:42 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:42 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:38:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:42 np0005539550 systemd[1]: Reloading.
Nov 29 02:38:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:42.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:42 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:38:42 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
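[annotation] The two ansible-ansible.builtin.systemd invocations above (edpm_libvirt.target at 02:38:40, edpm_libvirt_guests at 02:38:42) map roughly to:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target
    systemctl daemon-reload
    systemctl enable edpm_libvirt_guests.service

Each daemon-reload re-runs the unit generators, which is why the systemd-sysv-generator warning about the legacy 'network' initscript and the rc.local notice repeat after every "Reloading." line.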
Nov 29 02:38:43 np0005539550 systemd[1]: session-51.scope: Deactivated successfully.
Nov 29 02:38:43 np0005539550 systemd[1]: session-51.scope: Consumed 1min 27.293s CPU time.
Nov 29 02:38:43 np0005539550 systemd-logind[788]: Session 51 logged out. Waiting for processes to exit.
Nov 29 02:38:43 np0005539550 systemd-logind[788]: Removed session 51.
Nov 29 02:38:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:44.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:46.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:48.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:50.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:50 np0005539550 systemd-logind[788]: New session 52 of user zuul.
Nov 29 02:38:50 np0005539550 systemd[1]: Started Session 52 of User zuul.
Nov 29 02:38:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:50.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:51 np0005539550 python3.9[224731]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:38:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:52.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:52.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:53 np0005539550 python3.9[224886]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:38:53 np0005539550 network[224903]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:38:53 np0005539550 network[224904]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:38:53 np0005539550 network[224905]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:38:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:38:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:54.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:38:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:38:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:38:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:38:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:56.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:56.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:56 np0005539550 podman[225057]: 2025-11-29 07:38:56.731941126 +0000 UTC m=+0.064143770 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
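[annotation] The health_status=healthy events for ovn_controller (02:38:39) and ovn_metadata_agent (above) are emitted by podman's healthcheck timer running the configured test command, /openstack/healthcheck, inside each container. The same check can be triggered by hand; a zero exit status means healthy:

    podman healthcheck run ovn_metadata_agent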
Nov 29 02:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:38:58.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:38:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:38:58 np0005539550 python3.9[225248]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 02:38:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:38:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:38:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:38:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:38:58.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:38:59
Nov 29 02:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'vms']
Nov 29 02:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:38:59 np0005539550 python3.9[225332]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:38:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:00.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:00.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:02.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:02.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:04.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:04.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:06.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:06 np0005539550 python3.9[225489]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:06.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:07 np0005539550 python3.9[225641]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:39:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:39:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:39:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:39:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:39:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:08.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:08 np0005539550 python3.9[225795]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:39:09 np0005539550 podman[225919]: 2025-11-29 07:39:09.281425141 +0000 UTC m=+0.129616653 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:39:09 np0005539550 python3.9[225959]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:10 np0005539550 python3.9[226124]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:10.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:10.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:10 np0005539550 python3.9[226247]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401949.653401-250-208278624581036/.source.iscsi _original_basename=.8ccgyz1w follow=False checksum=25243d84afaed4a6101bb0e95ecc67ee20e04ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:11 np0005539550 python3.9[226399]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:39:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:12.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:39:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:12.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:12 np0005539550 python3.9[226552]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:14 np0005539550 python3.9[226705]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:14 np0005539550 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 02:39:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:14.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:15 np0005539550 python3.9[226911]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:15 np0005539550 systemd[1]: Reloading.
Nov 29 02:39:15 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:15 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:15 np0005539550 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:39:15 np0005539550 systemd[1]: Starting Open-iSCSI...
Nov 29 02:39:15 np0005539550 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 02:39:15 np0005539550 systemd[1]: Started Open-iSCSI.
Nov 29 02:39:15 np0005539550 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 29 02:39:15 np0005539550 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 29 02:39:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:16.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:16.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:16 np0005539550 python3.9[227112]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:39:17 np0005539550 network[227129]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:39:17 np0005539550 network[227130]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:39:17 np0005539550 network[227131]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:39:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:18.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:18.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:39:18.909 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:39:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:39:18.910 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:39:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:39:18.910 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:39:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:39:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:20.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:20.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:22.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:22 np0005539550 python3.9[227406]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:39:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:22.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:23 np0005539550 python3.9[227558]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 02:39:23 np0005539550 python3.9[227714]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:24.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:24 np0005539550 python3.9[227838]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401963.4229324-481-209478141902105/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:24.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:25 np0005539550 python3.9[227990]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:26.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:26.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:26 np0005539550 python3.9[228143]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:39:26 np0005539550 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:39:26 np0005539550 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:39:26 np0005539550 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:39:26 np0005539550 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:39:26 np0005539550 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:39:26 np0005539550 podman[228145]: 2025-11-29 07:39:26.872725078 +0000 UTC m=+0.067183437 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:39:27 np0005539550 python3.9[228316]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:39:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:28.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:39:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:28 np0005539550 python3.9[228469]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:28.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:29 np0005539550 python3.9[228622]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:30.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:30 np0005539550 python3.9[228775]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:30.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:31 np0005539550 python3.9[228898]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401969.9555995-655-8903241125857/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:31 np0005539550 python3.9[229050]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:39:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:32.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:32.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:32 np0005539550 python3.9[229204]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:33 np0005539550 python3.9[229356]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:34.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:34 np0005539550 python3.9[229532]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:34.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:35 np0005539550 python3.9[229711]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:35 np0005539550 python3.9[229863]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:36.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:36 np0005539550 python3.9[230016]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:36.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:37 np0005539550 python3.9[230168]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:38 np0005539550 python3.9[230321]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:39:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:38.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:38.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:38 np0005539550 python3.9[230588]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:39:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:39:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:39:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:39:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:39:39 np0005539550 podman[230729]: 2025-11-29 07:39:39.5976694 +0000 UTC m=+0.090615918 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 02:39:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:39 np0005539550 python3.9[230775]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 26291394-dc95-4734-a5d8-2338923874ac does not exist
Nov 29 02:39:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6b4d75a0-653f-4dc1-b345-0e50511f66df does not exist
Nov 29 02:39:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 766fc33a-1678-4a4e-bd6f-18684c377187 does not exist
Nov 29 02:39:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:39:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:39:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:39:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:39:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:39:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:39:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:40.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:40 np0005539550 python3.9[231037]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:40.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:40 np0005539550 podman[231076]: 2025-11-29 07:39:40.594215673 +0000 UTC m=+0.040939654 container create 8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:39:40 np0005539550 systemd[1]: Started libpod-conmon-8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719.scope.
Nov 29 02:39:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:39:40 np0005539550 podman[231076]: 2025-11-29 07:39:40.574262529 +0000 UTC m=+0.020986540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:40 np0005539550 podman[231076]: 2025-11-29 07:39:40.687490517 +0000 UTC m=+0.134214528 container init 8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:39:40 np0005539550 podman[231076]: 2025-11-29 07:39:40.696047083 +0000 UTC m=+0.142771074 container start 8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:39:40 np0005539550 podman[231076]: 2025-11-29 07:39:40.69908418 +0000 UTC m=+0.145808161 container attach 8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:39:40 np0005539550 eloquent_varahamihira[231115]: 167 167
Nov 29 02:39:40 np0005539550 systemd[1]: libpod-8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719.scope: Deactivated successfully.
Nov 29 02:39:40 np0005539550 podman[231076]: 2025-11-29 07:39:40.702834155 +0000 UTC m=+0.149558136 container died 8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:40 np0005539550 systemd[1]: var-lib-containers-storage-overlay-37526b523d644f1ff7efd7da27c2e9aeaff4cd8d8d48805327189e537f7af876-merged.mount: Deactivated successfully.
Nov 29 02:39:40 np0005539550 podman[231076]: 2025-11-29 07:39:40.744896236 +0000 UTC m=+0.191620217 container remove 8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:39:40 np0005539550 systemd[1]: libpod-conmon-8d77bdab386d14f453e48d76fc7238c1ecac606423ef36a56adfd069d1585719.scope: Deactivated successfully.
Nov 29 02:39:40 np0005539550 podman[231195]: 2025-11-29 07:39:40.90828282 +0000 UTC m=+0.042382151 container create 3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldstine, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:39:40 np0005539550 systemd[1]: Started libpod-conmon-3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7.scope.
Nov 29 02:39:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:39:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac35ae2f148af52e3092ee0d4319804ec4982e4e3b8d888d49edf0d56264213b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac35ae2f148af52e3092ee0d4319804ec4982e4e3b8d888d49edf0d56264213b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac35ae2f148af52e3092ee0d4319804ec4982e4e3b8d888d49edf0d56264213b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac35ae2f148af52e3092ee0d4319804ec4982e4e3b8d888d49edf0d56264213b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac35ae2f148af52e3092ee0d4319804ec4982e4e3b8d888d49edf0d56264213b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:40 np0005539550 podman[231195]: 2025-11-29 07:39:40.891397524 +0000 UTC m=+0.025496885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:40 np0005539550 podman[231195]: 2025-11-29 07:39:40.996584399 +0000 UTC m=+0.130683750 container init 3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:39:41 np0005539550 podman[231195]: 2025-11-29 07:39:41.006499799 +0000 UTC m=+0.140599140 container start 3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:39:41 np0005539550 podman[231195]: 2025-11-29 07:39:41.011711231 +0000 UTC m=+0.145810582 container attach 3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:39:41 np0005539550 python3.9[231189]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:41 np0005539550 python3.9[231367]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:39:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:39:41 np0005539550 beautiful_goldstine[231211]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:39:41 np0005539550 beautiful_goldstine[231211]: --> relative data size: 1.0
Nov 29 02:39:41 np0005539550 beautiful_goldstine[231211]: --> All data devices are unavailable
Nov 29 02:39:41 np0005539550 systemd[1]: libpod-3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7.scope: Deactivated successfully.
Nov 29 02:39:41 np0005539550 podman[231195]: 2025-11-29 07:39:41.869133552 +0000 UTC m=+1.003232903 container died 3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ac35ae2f148af52e3092ee0d4319804ec4982e4e3b8d888d49edf0d56264213b-merged.mount: Deactivated successfully.
Nov 29 02:39:41 np0005539550 podman[231195]: 2025-11-29 07:39:41.937941129 +0000 UTC m=+1.072040460 container remove 3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:39:41 np0005539550 systemd[1]: libpod-conmon-3c8ed23bf57a2ed604451b404e38a71be15055698f84d5d385884aae78993fb7.scope: Deactivated successfully.
Nov 29 02:39:42 np0005539550 python3.9[231489]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:42.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:42 np0005539550 podman[231632]: 2025-11-29 07:39:42.521096117 +0000 UTC m=+0.050855414 container create 4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:39:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:42.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:42 np0005539550 podman[231632]: 2025-11-29 07:39:42.494094636 +0000 UTC m=+0.023853963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:42 np0005539550 systemd[1]: Started libpod-conmon-4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc.scope.
Nov 29 02:39:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:39:42 np0005539550 podman[231632]: 2025-11-29 07:39:42.907678824 +0000 UTC m=+0.437438141 container init 4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:42 np0005539550 podman[231632]: 2025-11-29 07:39:42.916130137 +0000 UTC m=+0.445889434 container start 4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:42 np0005539550 heuristic_swartz[231648]: 167 167
Nov 29 02:39:42 np0005539550 systemd[1]: libpod-4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc.scope: Deactivated successfully.
Nov 29 02:39:43 np0005539550 podman[231632]: 2025-11-29 07:39:43.285413607 +0000 UTC m=+0.815172944 container attach 4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:39:43 np0005539550 podman[231632]: 2025-11-29 07:39:43.286487444 +0000 UTC m=+0.816246741 container died 4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:39:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4540ef9fd13b6457c7b1789fed6682d667e7178de6df25982414e973db7f7f18-merged.mount: Deactivated successfully.
Nov 29 02:39:43 np0005539550 podman[231632]: 2025-11-29 07:39:43.669383419 +0000 UTC m=+1.199142716 container remove 4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:39:43 np0005539550 systemd[1]: libpod-conmon-4d80ff42663f3c5aac9d16c5fdaedb8325a8c7ca9b3a7a56444ccb17d0beaffc.scope: Deactivated successfully.
Nov 29 02:39:43 np0005539550 python3.9[231795]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:43 np0005539550 podman[231801]: 2025-11-29 07:39:43.81484574 +0000 UTC m=+0.022263683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:44 np0005539550 podman[231801]: 2025-11-29 07:39:44.10606296 +0000 UTC m=+0.313480883 container create f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shtern, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:39:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:44.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539550 systemd[1]: Started libpod-conmon-f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21.scope.
Nov 29 02:39:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:39:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84cd0f28e5b7e3ccd50797113a613cb8f4d0d6e3ac16515a8c055befc8165a32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84cd0f28e5b7e3ccd50797113a613cb8f4d0d6e3ac16515a8c055befc8165a32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84cd0f28e5b7e3ccd50797113a613cb8f4d0d6e3ac16515a8c055befc8165a32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84cd0f28e5b7e3ccd50797113a613cb8f4d0d6e3ac16515a8c055befc8165a32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:44.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:44 np0005539550 podman[231801]: 2025-11-29 07:39:44.57515169 +0000 UTC m=+0.782569633 container init f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shtern, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:39:44 np0005539550 podman[231801]: 2025-11-29 07:39:44.584644409 +0000 UTC m=+0.792062332 container start f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:39:44 np0005539550 python3.9[231973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:44 np0005539550 podman[231801]: 2025-11-29 07:39:44.665232553 +0000 UTC m=+0.872650506 container attach f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shtern, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:45 np0005539550 python3.9[232053]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]: {
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:    "0": [
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:        {
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "devices": [
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "/dev/loop3"
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            ],
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "lv_name": "ceph_lv0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "lv_size": "7511998464",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "name": "ceph_lv0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "tags": {
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.cluster_name": "ceph",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.crush_device_class": "",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.encrypted": "0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.osd_id": "0",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.type": "block",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:                "ceph.vdo": "0"
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            },
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "type": "block",
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:            "vg_name": "ceph_vg0"
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:        }
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]:    ]
Nov 29 02:39:45 np0005539550 upbeat_shtern[231895]: }
Nov 29 02:39:45 np0005539550 systemd[1]: libpod-f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21.scope: Deactivated successfully.
Nov 29 02:39:45 np0005539550 podman[231801]: 2025-11-29 07:39:45.454394942 +0000 UTC m=+1.661812865 container died f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:39:45 np0005539550 python3.9[232220]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-84cd0f28e5b7e3ccd50797113a613cb8f4d0d6e3ac16515a8c055befc8165a32-merged.mount: Deactivated successfully.
Nov 29 02:39:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:46.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:46 np0005539550 podman[231801]: 2025-11-29 07:39:46.25069029 +0000 UTC m=+2.458108213 container remove f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shtern, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:39:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:46 np0005539550 systemd[1]: libpod-conmon-f275e0e09cebeb28ddc7e2ac182578794a3355026c9079cb6dfa95f6dcb95a21.scope: Deactivated successfully.
Nov 29 02:39:46 np0005539550 python3.9[232300]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:46.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:46 np0005539550 podman[232508]: 2025-11-29 07:39:46.831027497 +0000 UTC m=+0.029042814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:46 np0005539550 podman[232508]: 2025-11-29 07:39:46.995155439 +0000 UTC m=+0.193170776 container create b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclaren, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:39:47 np0005539550 systemd[1]: Started libpod-conmon-b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29.scope.
Nov 29 02:39:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:39:47 np0005539550 podman[232508]: 2025-11-29 07:39:47.224277862 +0000 UTC m=+0.422293189 container init b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclaren, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:39:47 np0005539550 podman[232508]: 2025-11-29 07:39:47.232321315 +0000 UTC m=+0.430336612 container start b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:39:47 np0005539550 podman[232508]: 2025-11-29 07:39:47.236019009 +0000 UTC m=+0.434034306 container attach b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclaren, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:39:47 np0005539550 strange_mclaren[232608]: 167 167
Nov 29 02:39:47 np0005539550 systemd[1]: libpod-b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29.scope: Deactivated successfully.
Nov 29 02:39:47 np0005539550 podman[232508]: 2025-11-29 07:39:47.237703701 +0000 UTC m=+0.435719028 container died b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclaren, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:39:47 np0005539550 python3.9[232610]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:47 np0005539550 systemd[1]: Reloading.
Nov 29 02:39:47 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:47 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:47 np0005539550 podman[232508]: 2025-11-29 07:39:47.522256203 +0000 UTC m=+0.720271500 container remove b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mclaren, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:39:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0edbbfc3fed5290f0aa0d6e191c9515fe5421af7deb0b18d2b6c227db143005e-merged.mount: Deactivated successfully.
Nov 29 02:39:47 np0005539550 podman[232668]: 2025-11-29 07:39:47.70832169 +0000 UTC m=+0.071762563 container create fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_herschel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:39:47 np0005539550 systemd[1]: libpod-conmon-b098af4c1104debe41ad7e8467b3d860d4605f9eff99dec8c5390311818b1e29.scope: Deactivated successfully.
Nov 29 02:39:47 np0005539550 podman[232668]: 2025-11-29 07:39:47.658959154 +0000 UTC m=+0.022400047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:39:47 np0005539550 systemd[1]: Started libpod-conmon-fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7.scope.
Nov 29 02:39:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:39:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1406483addc1f8410216460cf9a1372786ab4eceed3b151e80adca59e6a731a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1406483addc1f8410216460cf9a1372786ab4eceed3b151e80adca59e6a731a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1406483addc1f8410216460cf9a1372786ab4eceed3b151e80adca59e6a731a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1406483addc1f8410216460cf9a1372786ab4eceed3b151e80adca59e6a731a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:39:47 np0005539550 podman[232668]: 2025-11-29 07:39:47.808268882 +0000 UTC m=+0.171709785 container init fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:39:47 np0005539550 podman[232668]: 2025-11-29 07:39:47.816632723 +0000 UTC m=+0.180073596 container start fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_herschel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:39:47 np0005539550 podman[232668]: 2025-11-29 07:39:47.828488643 +0000 UTC m=+0.191929536 container attach fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:39:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:39:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:48.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:39:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:39:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:48.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]: {
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]:        "osd_id": 0,
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]:        "type": "bluestore"
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]:    }
Nov 29 02:39:48 np0005539550 relaxed_herschel[232687]: }
Nov 29 02:39:48 np0005539550 systemd[1]: libpod-fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7.scope: Deactivated successfully.
Nov 29 02:39:48 np0005539550 podman[232668]: 2025-11-29 07:39:48.705259982 +0000 UTC m=+1.068700865 container died fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 02:39:49 np0005539550 python3.9[232871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:49 np0005539550 python3.9[232950]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1406483addc1f8410216460cf9a1372786ab4eceed3b151e80adca59e6a731a8-merged.mount: Deactivated successfully.
Nov 29 02:39:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:50 np0005539550 podman[232668]: 2025-11-29 07:39:50.022473807 +0000 UTC m=+2.385914710 container remove fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:39:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:39:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:39:50 np0005539550 systemd[1]: libpod-conmon-fe380d57675e82cf41a7cc623e3f903d718dc867e9533e55c4d1f59ac09a97d7.scope: Deactivated successfully.
Nov 29 02:39:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 68c02378-afe3-4b50-b84c-3a0b2f6bcfb6 does not exist
Nov 29 02:39:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5f5ad43b-01e7-4d67-922a-35def4d5a7a5 does not exist
Nov 29 02:39:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5943351c-cf1f-4ec1-a429-fa7f584ddc1e does not exist
Nov 29 02:39:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:39:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:50.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:39:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:39:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:39:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:50.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:39:50 np0005539550 python3.9[233153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:51 np0005539550 python3.9[233231]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:51 np0005539550 python3.9[233383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:39:51 np0005539550 systemd[1]: Reloading.
Nov 29 02:39:52 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:39:52 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:39:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:39:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:39:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:52 np0005539550 systemd[1]: Starting Create netns directory...
Nov 29 02:39:52 np0005539550 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 02:39:52 np0005539550 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 02:39:52 np0005539550 systemd[1]: Finished Create netns directory.
Nov 29 02:39:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:52.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:53 np0005539550 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 02:39:53 np0005539550 python3.9[233577]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 02:39:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 02:39:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:54 np0005539550 python3.9[233781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:55 np0005539550 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:39:55 np0005539550 python3.9[233905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764401994.3068085-1276-129418799874254/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:39:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:56 np0005539550 python3.9[234058]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:39:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:39:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:56.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:39:57 np0005539550 podman[234182]: 2025-11-29 07:39:57.066757254 +0000 UTC m=+0.060138162 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:39:57 np0005539550 python3.9[234225]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:39:57 np0005539550 python3.9[234350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764401996.7314599-1351-32834230281778/.source.json _original_basename=.gdz98c6z follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:39:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:39:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:39:58.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:39:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:39:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:39:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:39:58.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:39:58 np0005539550 python3.9[234503]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:39:59
Nov 29 02:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.control']
Nov 29 02:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:40:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 02:40:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:00.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:00.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:01 np0005539550 python3.9[234931]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 02:40:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:02.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:03 np0005539550 python3.9[235084]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:40:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:04.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:06.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:06.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:06 np0005539550 python3.9[235238]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 02:40:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:40:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:40:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:40:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:40:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:40:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:08.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:08.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:40:09 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 02:40:09 np0005539550 podman[235388]: 2025-11-29 07:40:09.796686675 +0000 UTC m=+0.097742742 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:40:10 np0005539550 python3[235432]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:40:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:10.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:10.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:11 np0005539550 podman[235454]: 2025-11-29 07:40:11.318202443 +0000 UTC m=+1.207662601 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:11 np0005539550 podman[235513]: 2025-11-29 07:40:11.453929591 +0000 UTC m=+0.047241130 container create 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:40:11 np0005539550 podman[235513]: 2025-11-29 07:40:11.429107241 +0000 UTC m=+0.022418800 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:11 np0005539550 python3[235432]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 02:40:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:12.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:13 np0005539550 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 02:40:13 np0005539550 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 02:40:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:14.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:18.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:18.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:40:18.910 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:40:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:40:18.911 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:40:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:40:18.911 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:40:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:40:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:20.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:20.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:22.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:22.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:24.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:24.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:26.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:26.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:27 np0005539550 podman[235638]: 2025-11-29 07:40:27.311383263 +0000 UTC m=+0.054260486 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:28.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:28.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:30.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:31 np0005539550 python3.9[235786]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:32.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:32.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:32 np0005539550 python3.9[235941]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:33 np0005539550 python3.9[236017]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:34 np0005539550 python3.9[236168]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402033.4120429-1615-206930987261908/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:34.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:34 np0005539550 python3.9[236245]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:40:34 np0005539550 systemd[1]: Reloading.
Nov 29 02:40:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:34.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:34 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:40:34 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:40:35 np0005539550 python3.9[236405]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:40:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:35 np0005539550 systemd[1]: Reloading.
Nov 29 02:40:35 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:40:35 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:40:35 np0005539550 systemd[1]: Starting multipathd container...
Nov 29 02:40:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:40:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82cda0176926e77ba4417aed215fd766681082198086daed7782c2cc31b0683b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82cda0176926e77ba4417aed215fd766681082198086daed7782c2cc31b0683b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:36 np0005539550 systemd[1]: Started /usr/bin/podman healthcheck run 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f.
Nov 29 02:40:36 np0005539550 podman[236445]: 2025-11-29 07:40:36.031193716 +0000 UTC m=+0.134499159 container init 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Nov 29 02:40:36 np0005539550 multipathd[236460]: + sudo -E kolla_set_configs
Nov 29 02:40:36 np0005539550 podman[236445]: 2025-11-29 07:40:36.06137301 +0000 UTC m=+0.164678433 container start 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 02:40:36 np0005539550 podman[236445]: multipathd
Nov 29 02:40:36 np0005539550 systemd[1]: Started multipathd container.
Nov 29 02:40:36 np0005539550 multipathd[236460]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:40:36 np0005539550 multipathd[236460]: INFO:__main__:Validating config file
Nov 29 02:40:36 np0005539550 multipathd[236460]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:40:36 np0005539550 multipathd[236460]: INFO:__main__:Writing out command to execute
Nov 29 02:40:36 np0005539550 multipathd[236460]: ++ cat /run_command
Nov 29 02:40:36 np0005539550 multipathd[236460]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:40:36 np0005539550 multipathd[236460]: + ARGS=
Nov 29 02:40:36 np0005539550 multipathd[236460]: + sudo kolla_copy_cacerts
Nov 29 02:40:36 np0005539550 multipathd[236460]: + [[ ! -n '' ]]
Nov 29 02:40:36 np0005539550 multipathd[236460]: + . kolla_extend_start
Nov 29 02:40:36 np0005539550 multipathd[236460]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:40:36 np0005539550 multipathd[236460]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:40:36 np0005539550 multipathd[236460]: + umask 0022
Nov 29 02:40:36 np0005539550 multipathd[236460]: + exec /usr/sbin/multipathd -d
Nov 29 02:40:36 np0005539550 podman[236467]: 2025-11-29 07:40:36.145618654 +0000 UTC m=+0.073554508 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Nov 29 02:40:36 np0005539550 multipathd[236460]: 5084.266553 | --------start up--------
Nov 29 02:40:36 np0005539550 multipathd[236460]: 5084.266566 | read /etc/multipath.conf
Nov 29 02:40:36 np0005539550 systemd[1]: 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f-54848884f9a5434b.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:40:36 np0005539550 systemd[1]: 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f-54848884f9a5434b.service: Failed with result 'exit-code'.
Nov 29 02:40:36 np0005539550 multipathd[236460]: 5084.272268 | path checkers start up
Nov 29 02:40:36 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:40:36 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:40:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:36.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:38.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:38.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:40.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:40 np0005539550 podman[236525]: 2025-11-29 07:40:40.339851458 +0000 UTC m=+0.077811565 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:40:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:40.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:42.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:42.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:42 np0005539550 python3.9[236674]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:40:43 np0005539550 python3.9[236828]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:40:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:44.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:44 np0005539550 python3.9[236994]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:40:44 np0005539550 systemd[1]: Stopping multipathd container...
Nov 29 02:40:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:44.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:44 np0005539550 multipathd[236460]: 5092.775931 | exit (signal)
Nov 29 02:40:44 np0005539550 multipathd[236460]: 5092.776041 | --------shut down-------
Nov 29 02:40:44 np0005539550 systemd[1]: libpod-32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f.scope: Deactivated successfully.
Nov 29 02:40:44 np0005539550 podman[236998]: 2025-11-29 07:40:44.688107129 +0000 UTC m=+0.066535743 container died 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 02:40:44 np0005539550 systemd[1]: 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f-54848884f9a5434b.timer: Deactivated successfully.
Nov 29 02:40:44 np0005539550 systemd[1]: Stopped /usr/bin/podman healthcheck run 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f.
Nov 29 02:40:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f-userdata-shm.mount: Deactivated successfully.
Nov 29 02:40:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-82cda0176926e77ba4417aed215fd766681082198086daed7782c2cc31b0683b-merged.mount: Deactivated successfully.
Nov 29 02:40:44 np0005539550 podman[236998]: 2025-11-29 07:40:44.739555404 +0000 UTC m=+0.117984018 container cleanup 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:40:44 np0005539550 podman[236998]: multipathd
Nov 29 02:40:44 np0005539550 podman[237026]: multipathd
Nov 29 02:40:44 np0005539550 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 02:40:44 np0005539550 systemd[1]: Stopped multipathd container.
Nov 29 02:40:44 np0005539550 systemd[1]: Starting multipathd container...
Nov 29 02:40:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:40:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82cda0176926e77ba4417aed215fd766681082198086daed7782c2cc31b0683b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82cda0176926e77ba4417aed215fd766681082198086daed7782c2cc31b0683b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:44 np0005539550 systemd[1]: Started /usr/bin/podman healthcheck run 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f.
Nov 29 02:40:44 np0005539550 podman[237038]: 2025-11-29 07:40:44.930508403 +0000 UTC m=+0.104438399 container init 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 02:40:44 np0005539550 multipathd[237053]: + sudo -E kolla_set_configs
Nov 29 02:40:44 np0005539550 podman[237038]: 2025-11-29 07:40:44.955549608 +0000 UTC m=+0.129479574 container start 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:40:44 np0005539550 podman[237038]: multipathd
Nov 29 02:40:44 np0005539550 systemd[1]: Started multipathd container.
Nov 29 02:40:45 np0005539550 multipathd[237053]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:40:45 np0005539550 multipathd[237053]: INFO:__main__:Validating config file
Nov 29 02:40:45 np0005539550 multipathd[237053]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:40:45 np0005539550 multipathd[237053]: INFO:__main__:Writing out command to execute
Nov 29 02:40:45 np0005539550 multipathd[237053]: ++ cat /run_command
Nov 29 02:40:45 np0005539550 multipathd[237053]: + CMD='/usr/sbin/multipathd -d'
Nov 29 02:40:45 np0005539550 multipathd[237053]: + ARGS=
Nov 29 02:40:45 np0005539550 multipathd[237053]: + sudo kolla_copy_cacerts
Nov 29 02:40:45 np0005539550 podman[237060]: 2025-11-29 07:40:45.021402573 +0000 UTC m=+0.055916168 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Nov 29 02:40:45 np0005539550 systemd[1]: 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f-e5f87301a27d7ab.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 02:40:45 np0005539550 systemd[1]: 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f-e5f87301a27d7ab.service: Failed with result 'exit-code'.
Nov 29 02:40:45 np0005539550 multipathd[237053]: + [[ ! -n '' ]]
Nov 29 02:40:45 np0005539550 multipathd[237053]: + . kolla_extend_start
Nov 29 02:40:45 np0005539550 multipathd[237053]: Running command: '/usr/sbin/multipathd -d'
Nov 29 02:40:45 np0005539550 multipathd[237053]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 02:40:45 np0005539550 multipathd[237053]: + umask 0022
Nov 29 02:40:45 np0005539550 multipathd[237053]: + exec /usr/sbin/multipathd -d
Nov 29 02:40:45 np0005539550 multipathd[237053]: 5093.174539 | --------start up--------
Nov 29 02:40:45 np0005539550 multipathd[237053]: 5093.174558 | read /etc/multipath.conf
Nov 29 02:40:45 np0005539550 multipathd[237053]: 5093.179957 | path checkers start up
Nov 29 02:40:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:46.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:48.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:50.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:40:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:40:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:40:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:40:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:40:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:52.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:52.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:54.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:54.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:40:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2793dd80-6bb4-4e8a-8a61-6502eb5a1b5a does not exist
Nov 29 02:40:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 434f0e0e-a99c-4f95-aa68-b8ec10237ba8 does not exist
Nov 29 02:40:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4bc5aa28-aed7-40ea-97e1-376958ca6689 does not exist
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:40:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:40:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:56.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:56 np0005539550 podman[237444]: 2025-11-29 07:40:56.300531041 +0000 UTC m=+0.041598220 container create 35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:40:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:56 np0005539550 systemd[1]: Started libpod-conmon-35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f.scope.
Nov 29 02:40:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:40:56 np0005539550 podman[237444]: 2025-11-29 07:40:56.282214314 +0000 UTC m=+0.023281523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:40:56 np0005539550 podman[237444]: 2025-11-29 07:40:56.387717668 +0000 UTC m=+0.128784867 container init 35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_colden, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:40:56 np0005539550 podman[237444]: 2025-11-29 07:40:56.393677427 +0000 UTC m=+0.134744606 container start 35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:40:56 np0005539550 podman[237444]: 2025-11-29 07:40:56.397146984 +0000 UTC m=+0.138214163 container attach 35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:40:56 np0005539550 silly_colden[237460]: 167 167
Nov 29 02:40:56 np0005539550 systemd[1]: libpod-35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f.scope: Deactivated successfully.
Nov 29 02:40:56 np0005539550 podman[237444]: 2025-11-29 07:40:56.39980343 +0000 UTC m=+0.140870609 container died 35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:40:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1703943b41346af8a0da52d920ab67d74e3c5fd729456f446d329a83702a5cad-merged.mount: Deactivated successfully.
Nov 29 02:40:56 np0005539550 podman[237444]: 2025-11-29 07:40:56.438734453 +0000 UTC m=+0.179801632 container remove 35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_colden, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:40:56 np0005539550 systemd[1]: libpod-conmon-35fa65bb658686ebc47ba1917f49dfb80ef6037e20ca87f1b90e8743544b2e6f.scope: Deactivated successfully.
Nov 29 02:40:56 np0005539550 podman[237483]: 2025-11-29 07:40:56.593494387 +0000 UTC m=+0.044030310 container create 11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:40:56 np0005539550 systemd[1]: Started libpod-conmon-11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6.scope.
Nov 29 02:40:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:40:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:56 np0005539550 podman[237483]: 2025-11-29 07:40:56.571272073 +0000 UTC m=+0.021808036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:40:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:56.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8430c430bd62d01377c68049750c21b60e027310ae855edec9044cfd70d0a00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8430c430bd62d01377c68049750c21b60e027310ae855edec9044cfd70d0a00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8430c430bd62d01377c68049750c21b60e027310ae855edec9044cfd70d0a00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8430c430bd62d01377c68049750c21b60e027310ae855edec9044cfd70d0a00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8430c430bd62d01377c68049750c21b60e027310ae855edec9044cfd70d0a00/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:56 np0005539550 podman[237483]: 2025-11-29 07:40:56.686451429 +0000 UTC m=+0.136987372 container init 11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_aryabhata, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:40:56 np0005539550 podman[237483]: 2025-11-29 07:40:56.693483205 +0000 UTC m=+0.144019128 container start 11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:40:56 np0005539550 podman[237483]: 2025-11-29 07:40:56.696671234 +0000 UTC m=+0.147207157 container attach 11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:40:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:40:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:40:57 np0005539550 vigorous_aryabhata[237499]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:40:57 np0005539550 vigorous_aryabhata[237499]: --> relative data size: 1.0
Nov 29 02:40:57 np0005539550 vigorous_aryabhata[237499]: --> All data devices are unavailable
Nov 29 02:40:57 np0005539550 systemd[1]: libpod-11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6.scope: Deactivated successfully.
Nov 29 02:40:57 np0005539550 podman[237515]: 2025-11-29 07:40:57.553234196 +0000 UTC m=+0.023824586 container died 11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_aryabhata, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:40:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d8430c430bd62d01377c68049750c21b60e027310ae855edec9044cfd70d0a00-merged.mount: Deactivated successfully.
Nov 29 02:40:57 np0005539550 podman[237514]: 2025-11-29 07:40:57.617116141 +0000 UTC m=+0.071160338 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:40:57 np0005539550 podman[237515]: 2025-11-29 07:40:57.626380742 +0000 UTC m=+0.096971122 container remove 11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_aryabhata, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:40:57 np0005539550 systemd[1]: libpod-conmon-11108fc79dd6bde044d7e8e24c3baac032314fa3f7fff17228fad8bc94a0c0d6.scope: Deactivated successfully.
Nov 29 02:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:40:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:40:58 np0005539550 python3.9[237785]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:40:58 np0005539550 podman[237816]: 2025-11-29 07:40:58.247500273 +0000 UTC m=+0.038247256 container create 6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_herschel, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:40:58 np0005539550 systemd[1]: Started libpod-conmon-6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e.scope.
Nov 29 02:40:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:40:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:40:58.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:40:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:40:58 np0005539550 podman[237816]: 2025-11-29 07:40:58.325528782 +0000 UTC m=+0.116275855 container init 6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:40:58 np0005539550 podman[237816]: 2025-11-29 07:40:58.230654372 +0000 UTC m=+0.021401375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:40:58 np0005539550 podman[237816]: 2025-11-29 07:40:58.332973248 +0000 UTC m=+0.123720271 container start 6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_herschel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:40:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:40:58 np0005539550 podman[237816]: 2025-11-29 07:40:58.33708469 +0000 UTC m=+0.127831663 container attach 6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:40:58 np0005539550 agitated_herschel[237833]: 167 167
Nov 29 02:40:58 np0005539550 systemd[1]: libpod-6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e.scope: Deactivated successfully.
Nov 29 02:40:58 np0005539550 podman[237816]: 2025-11-29 07:40:58.339274365 +0000 UTC m=+0.130021378 container died 6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:40:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-459516b44f063ee0dc081692cecdbee00e7e291f9559b3fa54c7df624291b929-merged.mount: Deactivated successfully.
Nov 29 02:40:58 np0005539550 podman[237816]: 2025-11-29 07:40:58.382402452 +0000 UTC m=+0.173149435 container remove 6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:40:58 np0005539550 systemd[1]: libpod-conmon-6d48f8c35bfd53da69f7644a63e01eb20f30e528eea999e32b6ce352a6c9bf3e.scope: Deactivated successfully.
Nov 29 02:40:58 np0005539550 podman[237879]: 2025-11-29 07:40:58.579934535 +0000 UTC m=+0.052174854 container create 9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_poincare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:40:58 np0005539550 systemd[1]: Started libpod-conmon-9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c.scope.
Nov 29 02:40:58 np0005539550 podman[237879]: 2025-11-29 07:40:58.555752571 +0000 UTC m=+0.027992970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:40:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:40:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ca15ef08d3dac5e699cc4ed51a450c1b044b8fa3c3fa49fcc66e54854ca79b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:40:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:40:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:40:58.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:40:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ca15ef08d3dac5e699cc4ed51a450c1b044b8fa3c3fa49fcc66e54854ca79b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ca15ef08d3dac5e699cc4ed51a450c1b044b8fa3c3fa49fcc66e54854ca79b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ca15ef08d3dac5e699cc4ed51a450c1b044b8fa3c3fa49fcc66e54854ca79b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:40:58 np0005539550 podman[237879]: 2025-11-29 07:40:58.691516452 +0000 UTC m=+0.163756771 container init 9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_poincare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:40:58 np0005539550 podman[237879]: 2025-11-29 07:40:58.707420519 +0000 UTC m=+0.179660838 container start 9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:40:58 np0005539550 podman[237879]: 2025-11-29 07:40:58.71107165 +0000 UTC m=+0.183311969 container attach 9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:40:59
Nov 29 02:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'images', 'vms', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', '.mgr']
Nov 29 02:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:40:59 np0005539550 modest_poincare[237895]: {
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:    "0": [
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:        {
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "devices": [
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "/dev/loop3"
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            ],
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "lv_name": "ceph_lv0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "lv_size": "7511998464",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "name": "ceph_lv0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "tags": {
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.cluster_name": "ceph",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.crush_device_class": "",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.encrypted": "0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.osd_id": "0",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.type": "block",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:                "ceph.vdo": "0"
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            },
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "type": "block",
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:            "vg_name": "ceph_vg0"
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:        }
Nov 29 02:40:59 np0005539550 modest_poincare[237895]:    ]
Nov 29 02:40:59 np0005539550 modest_poincare[237895]: }
Nov 29 02:40:59 np0005539550 systemd[1]: libpod-9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c.scope: Deactivated successfully.
Nov 29 02:40:59 np0005539550 podman[237879]: 2025-11-29 07:40:59.520277509 +0000 UTC m=+0.992517818 container died 9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_poincare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:40:59 np0005539550 python3.9[238027]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 02:40:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c1ca15ef08d3dac5e699cc4ed51a450c1b044b8fa3c3fa49fcc66e54854ca79b-merged.mount: Deactivated successfully.
Nov 29 02:40:59 np0005539550 podman[237879]: 2025-11-29 07:40:59.575812886 +0000 UTC m=+1.048053205 container remove 9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_poincare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 02:40:59 np0005539550 systemd[1]: libpod-conmon-9d05fa854f236dfaed2e339e6ce14aa3e0d837b16a8354e95d7825839d3a040c.scope: Deactivated successfully.
Nov 29 02:41:00 np0005539550 podman[238284]: 2025-11-29 07:41:00.132687883 +0000 UTC m=+0.043834835 container create 03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:41:00 np0005539550 systemd[1]: Started libpod-conmon-03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e.scope.
Nov 29 02:41:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:41:00 np0005539550 podman[238284]: 2025-11-29 07:41:00.115925585 +0000 UTC m=+0.027072567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:41:00 np0005539550 podman[238284]: 2025-11-29 07:41:00.213395039 +0000 UTC m=+0.124542031 container init 03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:41:00 np0005539550 podman[238284]: 2025-11-29 07:41:00.223022879 +0000 UTC m=+0.134169841 container start 03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:41:00 np0005539550 podman[238284]: 2025-11-29 07:41:00.226563048 +0000 UTC m=+0.137710010 container attach 03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:41:00 np0005539550 great_mccarthy[238325]: 167 167
Nov 29 02:41:00 np0005539550 systemd[1]: libpod-03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e.scope: Deactivated successfully.
Nov 29 02:41:00 np0005539550 podman[238284]: 2025-11-29 07:41:00.228967578 +0000 UTC m=+0.140114550 container died 03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:41:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c99eb4ef699ec9d1fb4c20d192831b66c2d6eedd2cffcdb11532028401c94a02-merged.mount: Deactivated successfully.
Nov 29 02:41:00 np0005539550 podman[238284]: 2025-11-29 07:41:00.266918185 +0000 UTC m=+0.178065157 container remove 03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:41:00 np0005539550 systemd[1]: libpod-conmon-03e208e4f6865e0020850db7e27afe8d2f40eaac6e43f6b5806902d5df5ce18e.scope: Deactivated successfully.
Nov 29 02:41:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:00.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:00 np0005539550 python3.9[238359]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 02:41:00 np0005539550 podman[238378]: 2025-11-29 07:41:00.426976423 +0000 UTC m=+0.043064927 container create e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:41:00 np0005539550 kernel: Key type psk registered
Nov 29 02:41:00 np0005539550 systemd[1]: Started libpod-conmon-e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353.scope.
Nov 29 02:41:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:41:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3ea1f41d41714f3c0da0e8c07eb33aec50de8720ef697c5398b6655e401efa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3ea1f41d41714f3c0da0e8c07eb33aec50de8720ef697c5398b6655e401efa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3ea1f41d41714f3c0da0e8c07eb33aec50de8720ef697c5398b6655e401efa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a3ea1f41d41714f3c0da0e8c07eb33aec50de8720ef697c5398b6655e401efa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:41:00 np0005539550 podman[238378]: 2025-11-29 07:41:00.407309912 +0000 UTC m=+0.023398436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:41:00 np0005539550 podman[238378]: 2025-11-29 07:41:00.510911429 +0000 UTC m=+0.126999933 container init e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:41:00 np0005539550 podman[238378]: 2025-11-29 07:41:00.518077408 +0000 UTC m=+0.134165912 container start e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:41:00 np0005539550 podman[238378]: 2025-11-29 07:41:00.521629366 +0000 UTC m=+0.137717870 container attach e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:41:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:00.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:01 np0005539550 python3.9[238561]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]: {
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]:        "osd_id": 0,
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]:        "type": "bluestore"
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]:    }
Nov 29 02:41:01 np0005539550 pedantic_ardinghelli[238395]: }
Nov 29 02:41:01 np0005539550 systemd[1]: libpod-e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353.scope: Deactivated successfully.
Nov 29 02:41:01 np0005539550 podman[238378]: 2025-11-29 07:41:01.413568711 +0000 UTC m=+1.029657215 container died e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:41:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0a3ea1f41d41714f3c0da0e8c07eb33aec50de8720ef697c5398b6655e401efa-merged.mount: Deactivated successfully.
Nov 29 02:41:01 np0005539550 podman[238378]: 2025-11-29 07:41:01.47758331 +0000 UTC m=+1.093671814 container remove e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ardinghelli, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:41:01 np0005539550 systemd[1]: libpod-conmon-e56c4324d02a3e118b018ad372276acd25bbceb55628d385af1f7eb709ca7353.scope: Deactivated successfully.
Nov 29 02:41:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:41:01 np0005539550 python3.9[238711]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764402060.6727064-1855-242398186313808/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:41:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d677e1d7-13a9-4450-a8a6-77a5eff5c064 does not exist
Nov 29 02:41:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8926cc1d-c13e-4fff-b9f8-7e7a443e7447 does not exist
Nov 29 02:41:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ed8e963d-1918-4dcc-bfed-f1bf4eca505c does not exist
Nov 29 02:41:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:02.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:02 np0005539550 python3.9[238914]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:02.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:41:03 np0005539550 python3.9[239068]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:41:03 np0005539550 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 02:41:03 np0005539550 systemd[1]: Stopped Load Kernel Modules.
Nov 29 02:41:03 np0005539550 systemd[1]: Stopping Load Kernel Modules...
Nov 29 02:41:03 np0005539550 systemd[1]: Starting Load Kernel Modules...
Nov 29 02:41:03 np0005539550 systemd[1]: Finished Load Kernel Modules.
Nov 29 02:41:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:04.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:04 np0005539550 python3.9[239225]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 02:41:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:04.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:06.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:06 np0005539550 systemd[1]: Reloading.
Nov 29 02:41:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:06.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:06 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:06 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:06 np0005539550 systemd[1]: Reloading.
Nov 29 02:41:07 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:07 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:07 np0005539550 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 02:41:07 np0005539550 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 02:41:07 np0005539550 lvm[239339]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 02:41:07 np0005539550 lvm[239339]: VG ceph_vg0 finished
Nov 29 02:41:07 np0005539550 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 02:41:07 np0005539550 systemd[1]: Starting man-db-cache-update.service...
Nov 29 02:41:07 np0005539550 systemd[1]: Reloading.
Nov 29 02:41:07 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:07 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:41:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:41:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:41:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:41:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:41:07 np0005539550 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 02:41:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:08.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:41:09 np0005539550 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 02:41:09 np0005539550 systemd[1]: Finished man-db-cache-update.service.
Nov 29 02:41:09 np0005539550 systemd[1]: man-db-cache-update.service: Consumed 1.632s CPU time.
Nov 29 02:41:09 np0005539550 systemd[1]: run-rc2ff187dac0a478fb4f9e42d9141d6ec.service: Deactivated successfully.
Nov 29 02:41:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:10.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:10.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:11 np0005539550 podman[240556]: 2025-11-29 07:41:11.356040817 +0000 UTC m=+0.093220079 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 02:41:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:12.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:12.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:14.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:14 np0005539550 python3.9[240711]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:41:14 np0005539550 systemd[1]: Stopping Open-iSCSI...
Nov 29 02:41:14 np0005539550 iscsid[226951]: iscsid shutting down.
Nov 29 02:41:14 np0005539550 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 02:41:14 np0005539550 systemd[1]: Stopped Open-iSCSI.
Nov 29 02:41:14 np0005539550 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 02:41:14 np0005539550 systemd[1]: Starting Open-iSCSI...
Nov 29 02:41:14 np0005539550 systemd[1]: Started Open-iSCSI.
Nov 29 02:41:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:14.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:15 np0005539550 podman[240890]: 2025-11-29 07:41:15.352806531 +0000 UTC m=+0.092975403 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:41:15 np0005539550 python3.9[240865]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 02:41:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:16.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:16 np0005539550 python3.9[241093]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:16.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:17 np0005539550 python3.9[241245]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:41:17 np0005539550 systemd[1]: Reloading.
Nov 29 02:41:18 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:18 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:18.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:18.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:41:18.912 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:41:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:41:18.913 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:41:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:41:18.913 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:41:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:41:19 np0005539550 python3.9[241432]: ansible-ansible.builtin.service_facts Invoked
Nov 29 02:41:19 np0005539550 network[241449]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 02:41:19 np0005539550 network[241450]: 'network-scripts' will be removed from distribution in near future.
Nov 29 02:41:19 np0005539550 network[241451]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 02:41:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:20.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
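The recurring _set_new_cache_sizes line is the monitor's cache autotuner republishing its split of the memory target; the values are bytes. A quick unit check (the 304 MiB kv_alloc is the same 304.00 MB block-cache capacity that appears in the rocksdb stats dump at 02:42:01):

    # Values from the _set_new_cache_sizes line, in bytes.
    MiB = 1 << 20
    print(1020054731 / MiB)   # cache_size       ~972.8 MiB
    print(348127232 / MiB)    # inc/full_alloc  = 332.0 MiB
    print(318767104 / MiB)    # kv_alloc        = 304.0 MiB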
Nov 29 02:41:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:20.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
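The paired HEAD / requests from 192.168.122.100 and 192.168.122.102 arrive every two seconds and return 200 with no authenticated user, the signature of load-balancer health probes rather than client traffic. A small parser for the beast access line (the field names are my own labels, not radosgw's):

    import re

    # Hypothetical parser for the beast access-log format seen above.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>\S+)'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:41:20.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')
    m = BEAST.match(line)
    print(m.group('client'), m.group('request'), m.group('latency'))
    # -> 192.168.122.100 HEAD / HTTP/1.0 0.001000026s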
Nov 29 02:41:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:22.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:41:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:22.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:41:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:24.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:24.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:25 np0005539550 python3.9[241729]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:26 np0005539550 python3.9[241882]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:26.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:26.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:27 np0005539550 python3.9[242036]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:27 np0005539550 python3.9[242189]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:27 np0005539550 podman[242191]: 2025-11-29 07:41:27.922755766 +0000 UTC m=+0.083105727 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
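The podman health_status events come from each container's configured healthcheck (the 'test': '/openstack/healthcheck' entry in config_data). The same check can be triggered on demand; podman exits 0 when the container is healthy:

    import subprocess

    # Run the container's configured healthcheck on demand.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if r.returncode == 0 else "unhealthy")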
Nov 29 02:41:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:28.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:28 np0005539550 python3.9[242364]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:28.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:29 np0005539550 python3.9[242517]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:30 np0005539550 python3.9[242670]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:41:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:30.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:30.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:30 np0005539550 python3.9[242824]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
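The systemd_service tasks above, with enabled=False and state=stopped, disable and stop each legacy tripleo_nova_* unit in turn. A compact equivalent of the whole sequence, with the unit names taken from the log lines:

    import subprocess

    # Disable and stop each unit, mirroring enabled=False / state=stopped;
    # check=False so already-stopped or missing units do not abort the loop.
    UNITS = ["compute", "migration_target", "api_cron", "api", "conductor",
             "metadata", "scheduler", "vnc_proxy"]
    for u in UNITS:
        subprocess.run(["systemctl", "disable", "--now",
                        f"tripleo_nova_{u}.service"], check=False)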
Nov 29 02:41:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:32.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:33 np0005539550 python3.9[242978]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539550 python3.9[243131]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:34.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:34 np0005539550 python3.9[243283]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:35 np0005539550 python3.9[243436]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:36 np0005539550 python3.9[243639]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:36.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:36 np0005539550 python3.9[243791]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:36.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:37 np0005539550 python3.9[243943]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:38 np0005539550 python3.9[244095]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:38.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:38 np0005539550 python3.9[244248]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:38.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:39 np0005539550 python3.9[244400]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:39 np0005539550 python3.9[244552]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:40 np0005539550 python3.9[244705]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:40.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:41 np0005539550 python3.9[244857]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:41 np0005539550 podman[244981]: 2025-11-29 07:41:41.849167315 +0000 UTC m=+0.085625939 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 29 02:41:41 np0005539550 python3.9[245024]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:42 np0005539550 python3.9[245186]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:41:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:42.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:43 np0005539550 python3.9[245338]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
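The file tasks with state=absent then delete the same units' files, first from /usr/lib/systemd/system and then from /etc/systemd/system. A sketch of both passes; systemd itself only notices once the daemon-reload logged at 02:41:46 runs:

    from pathlib import Path

    # Remove the generated unit files from both unit directories, ignoring
    # files that are already gone (state=absent is idempotent).
    UNITS = ["compute", "migration_target", "api_cron", "api", "conductor",
             "metadata", "scheduler", "vnc_proxy"]
    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for u in UNITS:
            Path(unit_dir, f"tripleo_nova_{u}.service").unlink(missing_ok=True)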
Nov 29 02:41:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:44.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:44 np0005539550 python3.9[245491]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
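journald renders embedded newlines as #012, so the shell fragment inside the task above decodes to:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi

The test -f guard means certmonger is masked only when no unit file already exists under /etc/systemd/system, since masking places a /dev/null symlink at exactly that path.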
Nov 29 02:41:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:44.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:45 np0005539550 podman[245617]: 2025-11-29 07:41:45.461367695 +0000 UTC m=+0.055366044 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd)
Nov 29 02:41:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:45 np0005539550 python3.9[245654]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 02:41:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:46.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:46 np0005539550 python3.9[245814]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:41:46 np0005539550 systemd[1]: Reloading.
Nov 29 02:41:46 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:41:46 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:41:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:46.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539550 python3.9[246003]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:48.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:48.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:48 np0005539550 python3.9[246156]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:49 np0005539550 python3.9[246309]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:50 np0005539550 python3.9[246463]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:50 np0005539550 python3.9[246616]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:51 np0005539550 python3.9[246769]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:52 np0005539550 python3.9[246923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 02:41:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:52.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:52 np0005539550 python3.9[247076]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
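Each reset-failed call above clears any remembered failed state for a unit that has just been stopped and deleted. The same eight commands as one loop:

    import subprocess

    # Clear any remembered failed state for the removed units; check=False
    # because the unit may already be unknown to systemd at this point.
    UNITS = ["compute", "migration_target", "api_cron", "api", "conductor",
             "metadata", "scheduler", "vnc_proxy"]
    for u in UNITS:
        subprocess.run(["/usr/bin/systemctl", "reset-failed",
                        f"tripleo_nova_{u}.service"], check=False)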
Nov 29 02:41:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:54.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:55 np0005539550 python3.9[247230]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:41:56 np0005539550 python3.9[247433]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:41:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:56.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:41:56 np0005539550 python3.9[247585]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:56.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:57 np0005539550 python3.9[247737]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:58 np0005539550 podman[247862]: 2025-11-29 07:41:58.091836721 +0000 UTC m=+0.067082026 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:41:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:41:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:41:58 np0005539550 python3.9[247906]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:41:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:41:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:41:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:41:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:41:58.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:41:58 np0005539550 python3.9[248062]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:41:59
Nov 29 02:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'images']
Nov 29 02:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
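Here the balancer module ran an upmap optimization pass across the listed pools and prepared 0 of its per-pass change limit (the 10 in "prepared 0/10 changes"), meaning PG placement is already even. The module's view can be confirmed read-only from the CLI:

    import subprocess

    # Read-only check of the balancer module's state.
    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True).stdout)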
Nov 29 02:41:59 np0005539550 python3.9[248214]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:00 np0005539550 python3.9[248367]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:00.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:00 np0005539550 python3.9[248519]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
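These file tasks create the host-side config directories with setype=container_file_t so the podman-managed containers are allowed to mount them. A rough equivalent of one task, using chcon for the SELinux label (an assumption; Ansible applies the label through its libselinux bindings):

    import os
    import subprocess

    # Rough equivalent of one ansible.builtin.file task above;
    # path, owner and mode are taken from the log line.
    def make_container_dir(path, owner="zuul", mode=0o755):
        os.makedirs(path, mode=mode, exist_ok=True)
        subprocess.run(["chown", f"{owner}:{owner}", path], check=True)
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)

    make_container_dir("/var/lib/openstack/config/nova")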
Nov 29 02:42:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:42:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 3867 writes, 17K keys, 3864 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 3867 writes, 3864 syncs, 1.00 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1069 writes, 4943 keys, 1068 commit groups, 1.0 writes per commit group, ingest: 7.86 MB, 0.01 MB/s
Interval WAL: 1069 writes, 1068 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.7      1.52              0.07         8    0.190       0      0       0.0       0.0
  L6      1/0    9.45 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8      9.6      8.4      8.58              0.19         7    1.225     32K   3035       0.0       0.0
 Sum      1/0    9.45 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.8      8.1      9.1     10.10              0.27        15    0.673     32K   3035       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.9      5.3      5.4      9.42              0.16         8    1.178     19K   1752       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0      9.6      8.4      8.58              0.19         7    1.225     32K   3035       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      1.51              0.07         7    0.216       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.019, interval 0.006
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 10.1 seconds
Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 9.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 2.61 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000233 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(126,2.31 MB,0.759657%) FilterBlock(16,98.48 KB,0.0316369%) IndexBlock(16,209.28 KB,0.067229%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 02:42:01 np0005539550 python3.9[248671]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:02.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:02.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3da4725f-e281-4c9b-b920-f49ad5234912 does not exist
Nov 29 02:42:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9de9d113-b44c-452e-91c9-a55a79fab09a does not exist
Nov 29 02:42:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7f30fc29-f14f-4679-8102-d0891f601418 does not exist
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:42:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
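Each handle_command mon_command({...}) entry is a JSON payload dispatched to the monitor, with the audit channel recording the sender. The same payloads can be issued through the librados Python binding; a minimal sketch, assuming a reachable cluster with /etc/ceph/ceph.conf and an admin keyring on this host:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same payload as the audit-logged command above.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outbuf.decode())
    cluster.shutdown()

mon_command takes the JSON command string plus an input buffer and returns a status code, output buffer, and status string, mirroring the dispatch lines logged here.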
Nov 29 02:42:04 np0005539550 podman[248968]: 2025-11-29 07:42:04.250694149 +0000 UTC m=+0.041690952 container create 2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:42:04 np0005539550 systemd[1]: Started libpod-conmon-2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3.scope.
Nov 29 02:42:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:42:04 np0005539550 podman[248968]: 2025-11-29 07:42:04.232058964 +0000 UTC m=+0.023055797 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:04 np0005539550 podman[248968]: 2025-11-29 07:42:04.33802915 +0000 UTC m=+0.129025973 container init 2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:04 np0005539550 podman[248968]: 2025-11-29 07:42:04.345556528 +0000 UTC m=+0.136553331 container start 2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:04 np0005539550 podman[248968]: 2025-11-29 07:42:04.349477226 +0000 UTC m=+0.140474049 container attach 2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:04 np0005539550 goofy_goldstine[248984]: 167 167
Nov 29 02:42:04 np0005539550 systemd[1]: libpod-2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3.scope: Deactivated successfully.
Nov 29 02:42:04 np0005539550 conmon[248984]: conmon 2b4ffe27a6eefcba646e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3.scope/container/memory.events
Nov 29 02:42:04 np0005539550 podman[248968]: 2025-11-29 07:42:04.354598424 +0000 UTC m=+0.145595227 container died 2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:42:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9e3c12d55c6840cced9c31c63be023c038a43e751619f7607f5fd9783936559d-merged.mount: Deactivated successfully.
Nov 29 02:42:04 np0005539550 podman[248968]: 2025-11-29 07:42:04.402608053 +0000 UTC m=+0.193604856 container remove 2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:42:04 np0005539550 systemd[1]: libpod-conmon-2b4ffe27a6eefcba646e650daec0cd4e559585c11690e1044c63937cbc68ade3.scope: Deactivated successfully.
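The create/init/start/attach/died/remove run above is the footprint of a one-shot cephadm helper container: it starts, prints "167 167" (the ceph uid/gid inside the image), and is removed within milliseconds. Reproducing it by hand amounts to a podman run --rm against the same image; a sketch with a hypothetical payload, since the journal records the container's stdout but not the exact argv cephadm used:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Hypothetical payload; the "167 167" in the log is consistent with a
    # uid/gid probe of this kind, but the real command is not recorded.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(result.stdout.strip())  # expected: "167 167"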
Nov 29 02:42:04 np0005539550 podman[249008]: 2025-11-29 07:42:04.576133257 +0000 UTC m=+0.046184715 container create 1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williamson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:42:04 np0005539550 systemd[1]: Started libpod-conmon-1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805.scope.
Nov 29 02:42:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:42:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463ab482771ed57bbf628d86580ca2ee515aa37e83eb63dccf285a72c9760626/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463ab482771ed57bbf628d86580ca2ee515aa37e83eb63dccf285a72c9760626/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463ab482771ed57bbf628d86580ca2ee515aa37e83eb63dccf285a72c9760626/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463ab482771ed57bbf628d86580ca2ee515aa37e83eb63dccf285a72c9760626/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463ab482771ed57bbf628d86580ca2ee515aa37e83eb63dccf285a72c9760626/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:04 np0005539550 podman[249008]: 2025-11-29 07:42:04.645065258 +0000 UTC m=+0.115116736 container init 1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williamson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:42:04 np0005539550 podman[249008]: 2025-11-29 07:42:04.653207041 +0000 UTC m=+0.123258499 container start 1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 02:42:04 np0005539550 podman[249008]: 2025-11-29 07:42:04.558542567 +0000 UTC m=+0.028594055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:04 np0005539550 podman[249008]: 2025-11-29 07:42:04.65676124 +0000 UTC m=+0.126812718 container attach 1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williamson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:42:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:04.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:42:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:42:05 np0005539550 angry_williamson[249026]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:42:05 np0005539550 angry_williamson[249026]: --> relative data size: 1.0
Nov 29 02:42:05 np0005539550 angry_williamson[249026]: --> All data devices are unavailable
Nov 29 02:42:05 np0005539550 systemd[1]: libpod-1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805.scope: Deactivated successfully.
Nov 29 02:42:05 np0005539550 podman[249008]: 2025-11-29 07:42:05.514954362 +0000 UTC m=+0.985005830 container died 1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williamson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:42:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-463ab482771ed57bbf628d86580ca2ee515aa37e83eb63dccf285a72c9760626-merged.mount: Deactivated successfully.
Nov 29 02:42:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:05 np0005539550 podman[249008]: 2025-11-29 07:42:05.576823938 +0000 UTC m=+1.046875396 container remove 1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_williamson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:05 np0005539550 systemd[1]: libpod-conmon-1834933a654e537e4b3528408c6495848a262de0ec295be66cc90acd2713b805.scope: Deactivated successfully.
Nov 29 02:42:06 np0005539550 podman[249195]: 2025-11-29 07:42:06.167015776 +0000 UTC m=+0.040633896 container create b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:42:06 np0005539550 systemd[1]: Started libpod-conmon-b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6.scope.
Nov 29 02:42:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:42:06 np0005539550 podman[249195]: 2025-11-29 07:42:06.245212829 +0000 UTC m=+0.118830959 container init b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:42:06 np0005539550 podman[249195]: 2025-11-29 07:42:06.14997634 +0000 UTC m=+0.023594490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:06 np0005539550 podman[249195]: 2025-11-29 07:42:06.253876245 +0000 UTC m=+0.127494375 container start b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_roentgen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:42:06 np0005539550 podman[249195]: 2025-11-29 07:42:06.258395698 +0000 UTC m=+0.132013848 container attach b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_roentgen, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:42:06 np0005539550 systemd[1]: libpod-b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6.scope: Deactivated successfully.
Nov 29 02:42:06 np0005539550 sad_roentgen[249211]: 167 167
Nov 29 02:42:06 np0005539550 conmon[249211]: conmon b63af8af33879eded69c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6.scope/container/memory.events
Nov 29 02:42:06 np0005539550 podman[249195]: 2025-11-29 07:42:06.261444194 +0000 UTC m=+0.135062354 container died b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_roentgen, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:42:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-01b3068314c2942a12831445f248f6f1e164a48d4f3919f2f24dd8fe0aeae226-merged.mount: Deactivated successfully.
Nov 29 02:42:06 np0005539550 podman[249195]: 2025-11-29 07:42:06.304900649 +0000 UTC m=+0.178518779 container remove b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_roentgen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:42:06 np0005539550 systemd[1]: libpod-conmon-b63af8af33879eded69cf3fda677ae27ade3b5f2c5848af8543b299f8df2edb6.scope: Deactivated successfully.
Nov 29 02:42:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:06.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:06 np0005539550 podman[249235]: 2025-11-29 07:42:06.481068839 +0000 UTC m=+0.049409575 container create 999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_banach, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:06 np0005539550 systemd[1]: Started libpod-conmon-999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88.scope.
Nov 29 02:42:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:42:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65128c45d8da771ffe3b4a42a316ab7cc1bfc30a673926c41b796e6987343d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65128c45d8da771ffe3b4a42a316ab7cc1bfc30a673926c41b796e6987343d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65128c45d8da771ffe3b4a42a316ab7cc1bfc30a673926c41b796e6987343d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c65128c45d8da771ffe3b4a42a316ab7cc1bfc30a673926c41b796e6987343d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:06 np0005539550 podman[249235]: 2025-11-29 07:42:06.458892545 +0000 UTC m=+0.027233311 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:06 np0005539550 podman[249235]: 2025-11-29 07:42:06.563646721 +0000 UTC m=+0.131987487 container init 999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_banach, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:06 np0005539550 podman[249235]: 2025-11-29 07:42:06.570353969 +0000 UTC m=+0.138694705 container start 999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:42:06 np0005539550 podman[249235]: 2025-11-29 07:42:06.574353068 +0000 UTC m=+0.142693804 container attach 999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:42:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:06.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]: {
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:    "0": [
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:        {
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "devices": [
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "/dev/loop3"
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            ],
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "lv_name": "ceph_lv0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "lv_size": "7511998464",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "name": "ceph_lv0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "tags": {
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.cluster_name": "ceph",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.crush_device_class": "",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.encrypted": "0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.osd_id": "0",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.type": "block",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:                "ceph.vdo": "0"
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            },
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "type": "block",
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:            "vg_name": "ceph_vg0"
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:        }
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]:    ]
Nov 29 02:42:07 np0005539550 thirsty_banach[249251]: }
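The JSON printed by thirsty_banach has the shape of ceph-volume lvm list --format json output: a map of OSD id to its logical volumes, with the authoritative metadata in each volume's "tags". A minimal sketch for extracting the commonly needed fields, assuming the blob above was saved to a hypothetical lvm_list.json:

    import json

    with open("lvm_list.json") as f:  # the blob printed by the container above
        lvm_list = json.load(f)

    for osd_id, lvs in lvm_list.items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], tags["ceph.type"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 5dd67027-4f06-4800-93bd-47ed1a74c5e6 block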
Nov 29 02:42:07 np0005539550 systemd[1]: libpod-999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88.scope: Deactivated successfully.
Nov 29 02:42:07 np0005539550 podman[249235]: 2025-11-29 07:42:07.376314826 +0000 UTC m=+0.944655592 container died 999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_banach, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 02:42:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:42:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:42:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:42:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:42:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:42:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7c65128c45d8da771ffe3b4a42a316ab7cc1bfc30a673926c41b796e6987343d-merged.mount: Deactivated successfully.
Nov 29 02:42:08 np0005539550 podman[249235]: 2025-11-29 07:42:08.101517198 +0000 UTC m=+1.669857934 container remove 999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:42:08 np0005539550 systemd[1]: libpod-conmon-999769e67a81855b63165f9b0dec6b6d6ca1d85d610d799e576d3229adeaab88.scope: Deactivated successfully.
Nov 29 02:42:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:42:08 np0005539550 podman[249414]: 2025-11-29 07:42:08.714017905 +0000 UTC m=+0.046497013 container create 3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:42:08 np0005539550 systemd[1]: Started libpod-conmon-3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730.scope.
Nov 29 02:42:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:08.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:08 np0005539550 podman[249414]: 2025-11-29 07:42:08.692990059 +0000 UTC m=+0.025469187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:42:08 np0005539550 podman[249414]: 2025-11-29 07:42:08.81756228 +0000 UTC m=+0.150041408 container init 3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:42:08 np0005539550 podman[249414]: 2025-11-29 07:42:08.82475752 +0000 UTC m=+0.157236628 container start 3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:42:08 np0005539550 podman[249414]: 2025-11-29 07:42:08.827980211 +0000 UTC m=+0.160459339 container attach 3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:42:08 np0005539550 sweet_tu[249430]: 167 167
Nov 29 02:42:08 np0005539550 systemd[1]: libpod-3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730.scope: Deactivated successfully.
Nov 29 02:42:08 np0005539550 podman[249414]: 2025-11-29 07:42:08.835094968 +0000 UTC m=+0.167574076 container died 3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:42:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0eebeff4351b47bbc1e1497511e2a531167df1c1bf674d648562cf8e68385a33-merged.mount: Deactivated successfully.
Nov 29 02:42:08 np0005539550 podman[249414]: 2025-11-29 07:42:08.872249216 +0000 UTC m=+0.204728324 container remove 3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:42:08 np0005539550 systemd[1]: libpod-conmon-3dbdb429e211d58b83ae5ad8d7485f57978e2b5f6c05ded96d34684a5660c730.scope: Deactivated successfully.
Nov 29 02:42:09 np0005539550 podman[249453]: 2025-11-29 07:42:09.046949469 +0000 UTC m=+0.052595474 container create 41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 02:42:09 np0005539550 systemd[1]: Started libpod-conmon-41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718.scope.
Nov 29 02:42:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:42:09 np0005539550 podman[249453]: 2025-11-29 07:42:09.02377426 +0000 UTC m=+0.029420295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:42:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9891a7bafb7231eb2e3c34c16106d89878685cd74607fcf3c6dbc0b156d99473/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9891a7bafb7231eb2e3c34c16106d89878685cd74607fcf3c6dbc0b156d99473/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9891a7bafb7231eb2e3c34c16106d89878685cd74607fcf3c6dbc0b156d99473/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9891a7bafb7231eb2e3c34c16106d89878685cd74607fcf3c6dbc0b156d99473/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:42:09 np0005539550 podman[249453]: 2025-11-29 07:42:09.138730801 +0000 UTC m=+0.144376836 container init 41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:42:09 np0005539550 podman[249453]: 2025-11-29 07:42:09.145223183 +0000 UTC m=+0.150869188 container start 41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:42:09 np0005539550 podman[249453]: 2025-11-29 07:42:09.148372652 +0000 UTC m=+0.154018657 container attach 41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:42:09 np0005539550 python3.9[249601]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]: {
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]:        "osd_id": 0,
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]:        "type": "bluestore"
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]:    }
Nov 29 02:42:10 np0005539550 beautiful_germain[249521]: }
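beautiful_germain prints the complementary BlueStore view, keyed by OSD uuid rather than OSD id; its "osd_uuid" matches the "ceph.osd_fsid" tag in the earlier listing (5dd67027-4f06-4800-93bd-47ed1a74c5e6). A small consistency check, assuming both blobs were saved to the hypothetical files lvm_list.json and osd_list.json:

    import json

    lvm = json.load(open("lvm_list.json"))    # keyed by OSD id ("0")
    osds = json.load(open("osd_list.json"))   # keyed by OSD uuid

    lvm_fsids = {lv["tags"]["ceph.osd_fsid"]
                 for lvs in lvm.values() for lv in lvs}
    for osd_uuid, meta in osds.items():
        assert osd_uuid in lvm_fsids, f"OSD {meta['osd_id']} missing from LVM view"
        assert meta["type"] == "bluestore"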
Nov 29 02:42:10 np0005539550 systemd[1]: libpod-41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718.scope: Deactivated successfully.
Nov 29 02:42:10 np0005539550 podman[249453]: 2025-11-29 07:42:10.064788387 +0000 UTC m=+1.070434392 container died 41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:42:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9891a7bafb7231eb2e3c34c16106d89878685cd74607fcf3c6dbc0b156d99473-merged.mount: Deactivated successfully.
Nov 29 02:42:10 np0005539550 podman[249453]: 2025-11-29 07:42:10.120220772 +0000 UTC m=+1.125866777 container remove 41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:42:10 np0005539550 systemd[1]: libpod-conmon-41071fe542f33541880ebaad6976b7a16804550277af3c40e34ba6fa81d00718.scope: Deactivated successfully.
Nov 29 02:42:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:42:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:10 np0005539550 python3.9[249782]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 02:42:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:10.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:42:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:12 np0005539550 podman[249814]: 2025-11-29 07:42:12.377948036 +0000 UTC m=+0.109880285 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
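The health_status=healthy event is podman's healthcheck timer running the configured test (/openstack/healthcheck, mounted read-only into the container per the config_data above). The same check can be triggered on demand; a sketch via the podman CLI:

    import subprocess

    # Runs the container's configured healthcheck once; exit status 0 means healthy.
    proc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if proc.returncode == 0 else "unhealthy")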
Nov 29 02:42:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:12.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:12.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:13 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ae889a2d-8f29-4485-b0bf-f975af27818e does not exist
Nov 29 02:42:13 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9a433fa7-ce29-4e00-9856-7417c35b2149 does not exist
Nov 29 02:42:13 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5c45351d-6b4a-42d0-b8d6-7c87609aba82 does not exist
Nov 29 02:42:13 np0005539550 python3.9[250017]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 02:42:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:42:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:14.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:15 np0005539550 podman[250075]: 2025-11-29 07:42:15.729915416 +0000 UTC m=+0.065364624 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:42:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:16.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:16.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:18.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:18.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:42:18.914 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:42:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:42:18.914 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:42:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:42:18.914 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:42:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:42:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:20.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:20.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:21 np0005539550 systemd-logind[788]: New session 53 of user zuul.
Nov 29 02:42:21 np0005539550 systemd[1]: Started Session 53 of User zuul.
Nov 29 02:42:21 np0005539550 systemd[1]: session-53.scope: Deactivated successfully.
Nov 29 02:42:21 np0005539550 systemd-logind[788]: Session 53 logged out. Waiting for processes to exit.
Nov 29 02:42:21 np0005539550 systemd-logind[788]: Removed session 53.
Nov 29 02:42:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:22.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:22 np0005539550 python3.9[250280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:22.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:23 np0005539550 python3.9[250401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402142.2224605-3438-205641558151910/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:24 np0005539550 python3.9[250552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:24.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:24 np0005539550 python3.9[250628]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:24.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:25 np0005539550 python3.9[250778]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:25 np0005539550 python3.9[250899]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402144.6622097-3438-59739983205551/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:26 np0005539550 python3.9[251050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:26.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:26 np0005539550 python3.9[251171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402145.8704617-3438-1215251990747/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:27 np0005539550 python3.9[251321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:28 np0005539550 python3.9[251443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402147.0565672-3438-226028869800712/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:28 np0005539550 podman[251444]: 2025-11-29 07:42:28.324279009 +0000 UTC m=+0.059771974 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:42:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:28.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:28.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:28 np0005539550 python3.9[251612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:29 np0005539550 python3.9[251733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402148.4794073-3438-131378658233979/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:30.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:30.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:31 np0005539550 python3.9[251886]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:42:32 np0005539550 python3.9[252038]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:42:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:32 np0005539550 python3.9[252191]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:42:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:32.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:33 np0005539550 python3.9[252343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:34 np0005539550 python3.9[252467]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764402153.0405505-3759-246194365820513/.source _original_basename=.9n2n7pbb follow=False checksum=08346de17ea62cac8d80f354925edc740dec5cf2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 29 02:42:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:34.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:35 np0005539550 python3.9[252621]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:42:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:36 np0005539550 python3.9[252824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:36.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:36 np0005539550 python3.9[252947]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402155.6975832-3837-538103600335/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:36.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:37 np0005539550 python3.9[253097]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 02:42:38 np0005539550 python3.9[253218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764402157.0058124-3882-185171097251406/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 02:42:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:39 np0005539550 python3.9[253371]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 02:42:39 np0005539550 python3.9[253523]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:42:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:40 np0005539550 python3[253676]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:42:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:42.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:43 np0005539550 podman[253714]: 2025-11-29 07:42:43.354749371 +0000 UTC m=+0.091797923 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:42:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:44.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:42:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:42:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:46.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:46.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:50.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:50.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:52 np0005539550 podman[253762]: 2025-11-29 07:42:52.108136194 +0000 UTC m=+5.840625573 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 29 02:42:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:52.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:52.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:54.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:54.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:42:56 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:42:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:56.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:42:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:42:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:42:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:42:58.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:58 np0005539550 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 02:42:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:42:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:42:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:42:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:42:59
Nov 29 02:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.control', 'backups', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 02:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:43:00 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:43:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:00.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24146 addr: [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] state: up:standby) since 17.5728
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13  marking 24146 [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] mds.-1.0 up:standby laggy
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 no beacon from mds.-1.0 (gid: 24160 addr: [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] state: up:standby) since 16.5564
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13  marking 24160 [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] mds.-1.0 up:standby laggy
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13  failing and removing standby 24146 [v2:192.168.122.101:6804/830016470,v1:192.168.122.101:6805/830016470] mds.-1.0 up:standby
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 fail_mds_gid 24146 mds.cephfs.compute-1.ldsugj role -1
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : MDS daemon mds.cephfs.compute-1.ldsugj is removed because it is dead or otherwise unavailable.
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13  failing and removing standby 24160 [v2:192.168.122.102:6804/3065985027,v1:192.168.122.102:6805/3065985027] mds.-1.0 up:standby
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).mds e13 fail_mds_gid 24160 mds.cephfs.compute-2.mmoati role -1
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : MDS daemon mds.cephfs.compute-2.mmoati is removed because it is dead or otherwise unavailable.
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
Nov 29 02:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:01 np0005539550 podman[253872]: 2025-11-29 07:43:01.10080757 +0000 UTC m=+1.103843777 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:43:01 np0005539550 podman[253691]: 2025-11-29 07:43:01.163357363 +0000 UTC m=+20.100201880 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:01 np0005539550 podman[253915]: 2025-11-29 07:43:01.314840196 +0000 UTC m=+0.050566604 container create d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:01 np0005539550 podman[253915]: 2025-11-29 07:43:01.287466802 +0000 UTC m=+0.023193240 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:01 np0005539550 python3[253676]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
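Annotation: the PODMAN-CONTAINER-DEBUG line above is the literal podman create command that ansible-edpm_container_manage assembled from the container's config_data. A rough sketch of that assembly, assuming a dict shaped like the logged config_data; it mirrors only the flags visible above and is not the actual edpm module code:

    # Build a `podman create` argv from a config_data-style dict (illustrative).
    def podman_create_argv(name: str, cfg: dict) -> list[str]:
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in cfg.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        argv += ["--log-driver", "journald", "--log-level", "info",
                 "--network", cfg.get("net", "bridge"),
                 f"--privileged={cfg.get('privileged', False)}",
                 "--user", cfg.get("user", "root")]
        for opt in cfg.get("security_opt", []):
            argv += ["--security-opt", opt]
        for vol in cfg.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(cfg["image"])
        return argv

    cfg = {"image": "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
           "privileged": False, "user": "root", "net": "none",
           "security_opt": ["label=disable"],
           "environment": {"__OS_DEBUG": False},
           "volumes": ["/dev/log:/dev/log"]}
    print(" ".join(podman_create_argv("nova_compute_init", cfg)))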
Nov 29 02:43:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:02.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:02.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
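Annotation: each radosgw health probe from the load balancer leaves a three-line trace ending in a beast access-log record, as above. A small parser for that record, with the field layout (client, user, timestamp, request, status, bytes, latency) assumed from these samples only:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:43:02.429 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("req"), m.group("status"),
          float(m.group("latency")))  # 192.168.122.100 HEAD / HTTP/1.0 200 0.0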
Nov 29 02:43:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:04.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:06.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:43:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:43:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:43:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:43:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:43:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:08.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:43:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:43:08 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(65) init, last seen epoch 65, mid-election, bumping
Nov 29 02:43:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
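Annotation: "electionLogic(65) init, last seen epoch 65, mid-election, bumping" reflects the monitor elector's epoch convention in the upstream design: odd epochs denote an election in progress, even epochs a stable quorum. The toy model below illustrates that bump pattern only; it is an assumption-labeled sketch, not Ceph code:

    # Toy model of the elector epoch convention: bump to odd to (re)start an
    # election, bump to even when a leader wins and quorum forms.
    class Elector:
        def __init__(self, epoch: int):
            self.epoch = epoch

        def start_election(self):
            if self.epoch % 2 == 0:   # stable -> enter election on an odd epoch
                self.epoch += 1
            else:                     # already mid-election: bump past it
                self.epoch += 2

        def declare_victory(self):
            assert self.epoch % 2 == 1
            self.epoch += 1           # quorum formed on an even epoch

    e = Elector(65)        # "last seen epoch 65, mid-election"
    e.start_election()     # bumping -> 67
    e.declare_victory()    # quorum at 68
    print(e.epoch)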
Nov 29 02:43:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:08.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:10.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:10.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:12 np0005539550 ceph-mds[93677]: mds.beacon.cephfs.compute-0.qcwnhf missed beacon ack from the monitors
Nov 29 02:43:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:12.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:12.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:13 np0005539550 podman[254008]: 2025-11-29 07:43:13.706250253 +0000 UTC m=+0.087213920 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:43:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.pdhsqi(active, since 30m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:43:14 np0005539550 podman[254181]: 2025-11-29 07:43:14.71280563 +0000 UTC m=+0.480810329 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 29 02:43:14 np0005539550 podman[254181]: 2025-11-29 07:43:14.841390381 +0000 UTC m=+0.609395050 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:43:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:14.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:15 np0005539550 podman[254338]: 2025-11-29 07:43:15.501178758 +0000 UTC m=+0.058121522 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:43:15 np0005539550 podman[254338]: 2025-11-29 07:43:15.514219744 +0000 UTC m=+0.071162488 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:43:15 np0005539550 podman[254404]: 2025-11-29 07:43:15.701733997 +0000 UTC m=+0.051803075 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4)
Nov 29 02:43:15 np0005539550 podman[254404]: 2025-11-29 07:43:15.710049775 +0000 UTC m=+0.060118823 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.28.2, name=keepalived, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, description=keepalived for Ceph)
Nov 29 02:43:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: MDS daemon mds.cephfs.compute-1.ldsugj is removed because it is dead or otherwise unavailable.
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: MDS daemon mds.cephfs.compute-2.mmoati is removed because it is dead or otherwise unavailable.
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 29 02:43:16 np0005539550 ceph-mon[74435]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
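Annotation: the unprefixed lines above are the same cluster-log entries re-emitted to the journal; together with the later "Health check cleared" lines they form a raise/clear cycle per health code. A tiny state tracker over such lines, with the message layout assumed from the samples in this log:

    import re

    RAISE_RE = re.compile(r"Health check failed: .*\((\w+)\)")
    CLEAR_RE = re.compile(r"Health check cleared: (\w+)")

    def track(lines):
        active = {}
        for n, line in enumerate(lines, 1):
            if m := RAISE_RE.search(line):
                active[m.group(1)] = n     # remember where it was raised
            elif m := CLEAR_RE.search(line):
                active.pop(m.group(1), None)
        return active                      # codes still outstanding

    sample = [
        "Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)",
        "Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)",
        "Health check failed: insufficient standby MDS daemons available (MDS_INSUFFICIENT_STANDBY)",
    ]
    print(track(sample))   # {'MDS_INSUFFICIENT_STANDBY': 3} -- MON_DOWN was cleared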
Nov 29 02:43:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:16.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:43:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:16.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:43:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:43:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:18.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:18.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:43:18.915 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:43:18.916 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:43:18.916 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:43:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
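Annotation: the pg_autoscaler lines above are internally consistent: each pool's pg target is its share of raw space times its bias times an overall PG budget, and the logged ratios imply that budget is 300 for this root (e.g. 0.006161449609156895 / 2.0538165363856318e-05 = 300); 22535995392 bytes is the 21 GiB shown in the pgmap lines. A check of that arithmetic, with the budget of 300 taken from the logged numbers rather than derived from OSD count or mon_target_pg_per_osd:

    # Reproduce the logged pg_autoscaler targets: target = ratio * bias * budget.
    BUDGET = 300                    # implied by the logged ratios (assumption)
    CAPACITY = 22535995392          # bytes; ~21 GiB, matching the pgmap lines

    pools = {                       # (share of space used, bias) from the lines above
        ".mgr":               (2.0538165363856318e-05, 1.0),
        "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),
        "default.rgw.meta":   (3.635073515726782e-07,  4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: {ratio * CAPACITY / 1024:.0f} KiB used, "
              f"pg target {ratio * bias * BUDGET:.6g}")
    # .mgr comes out at ~452 KiB used and a pg target of ~0.00616145, matching
    # "pg target 0.006161449609156895 quantized to 1" above.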
Nov 29 02:43:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:20.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:20.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:21 np0005539550 ceph-mon[74435]: Health check failed: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 29 02:43:22 np0005539550 ceph-mon[74435]: paxos.0).electionLogic(69) init, last seen epoch 69, mid-election, bumping
Nov 29 02:43:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:43:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:22.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 29 02:43:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 29 02:43:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.qcwnhf=up:active} 2 up:standby
Nov 29 02:43:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 29 02:43:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.pdhsqi(active, since 30m), standbys: compute-2.zfrvoq, compute-1.fchyan
Nov 29 02:43:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:43:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:24.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:26.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:28 np0005539550 python3.9[254869]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:28.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:29 np0005539550 python3.9[255023]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 02:43:29 np0005539550 ceph-mon[74435]: mon.compute-2 calling monitor election
Nov 29 02:43:29 np0005539550 ceph-mon[74435]: mon.compute-0 calling monitor election
Nov 29 02:43:29 np0005539550 ceph-mon[74435]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 29 02:43:29 np0005539550 ceph-mon[74435]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 29 02:43:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:30.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:30.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:31 np0005539550 podman[255049]: 2025-11-29 07:43:31.325796219 +0000 UTC m=+0.054746718 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:31 np0005539550 podman[255050]: 2025-11-29 07:43:31.349725727 +0000 UTC m=+0.078679956 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:43:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:32 np0005539550 python3.9[255215]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 02:43:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:32.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:32.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:33 np0005539550 python3[255367]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 02:43:34 np0005539550 podman[255403]: 2025-11-29 07:43:33.945928422 +0000 UTC m=+0.024075762 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 02:43:34 np0005539550 podman[255403]: 2025-11-29 07:43:34.148274056 +0000 UTC m=+0.226421376 container create 2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3)
Nov 29 02:43:34 np0005539550 python3[255367]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 29 02:43:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:34.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:34.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:36.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:36.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:43:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:38.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:38 np0005539550 ceph-mon[74435]: mon.compute-1 calling monitor election
Nov 29 02:43:38 np0005539550 ceph-mon[74435]: Health detail: HEALTH_WARN 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:38 np0005539550 ceph-mon[74435]: [WRN] SLOW_OPS: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops
Nov 29 02:43:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:38.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:39 np0005539550 python3.9[255595]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:39 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops)
Nov 29 02:43:39 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:43:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:40.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:40 np0005539550 python3.9[255750]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:43:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:40.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:41 np0005539550 python3.9[255951]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764402220.5536816-4158-35483278836215/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 02:43:41 np0005539550 python3.9[256027]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 02:43:41 np0005539550 systemd[1]: Reloading.
Nov 29 02:43:41 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 02:43:41 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
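
The Reloading line and the two generator messages above are the visible effect of the ansible-systemd task with daemon_reload=True: systemd re-executes its generators while rebuilding the unit graph. The equivalent call, sketched in Python:

    import subprocess

    # Same effect as the ansible-systemd daemon_reload=True task above;
    # requires root (or appropriate polkit rules).
    subprocess.run(["systemctl", "daemon-reload"], check=True)
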
Nov 29 02:43:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
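
Here cephadm persists the refreshed device inventory for compute-1 into the monitor config-key store. Reading such a key back is a one-liner against the same store; a sketch, assuming admin credentials and that the stored value is JSON (which is how cephadm typically serializes inventory):

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-1.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", key],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))
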
Nov 29 02:43:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:42.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:42 np0005539550 python3.9[256140]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 02:43:42 np0005539550 systemd[1]: Reloading.
Nov 29 02:43:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:42.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:42 np0005539550 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 02:43:42 np0005539550 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
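
The ansible-systemd task at 02:43:42 both enables and restarts edpm_nova_compute.service, which is what triggers the container start below. A sketch of the same operation:

    import subprocess

    unit = "edpm_nova_compute.service"
    # state=restarted enabled=True from the task above, expressed directly.
    subprocess.run(["systemctl", "enable", unit], check=True)
    subprocess.run(["systemctl", "restart", unit], check=True)
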
Nov 29 02:43:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:43 np0005539550 systemd[1]: Starting nova_compute container...
Nov 29 02:43:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:44 np0005539550 podman[256180]: 2025-11-29 07:43:44.036714336 +0000 UTC m=+0.647738059 container init 2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:43:44 np0005539550 podman[256180]: 2025-11-29 07:43:44.044165311 +0000 UTC m=+0.655189014 container start 2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3)
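
The config_data blob podman records at container init maps almost one-to-one onto podman run flags. A rough, abridged equivalent of the nova_compute container as configured above (only a subset of the volume list is repeated here, and the real container is managed by the systemd unit, not launched by hand):

    import subprocess

    cmd = [
        "podman", "run", "--detach", "--name", "nova_compute",
        "--privileged", "--user", "nova",
        "--net", "host", "--pid", "host", "--restart", "always",
        "--env", "KOLLA_CONFIG_STRATEGY=COPY_ALWAYS",
        "--volume", "/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro",
        "--volume", "/var/lib/nova:/var/lib/nova:shared",
        "--volume", "/run/libvirt:/run/libvirt:shared",
        "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "kolla_start",
    ]
    subprocess.run(cmd, check=True)
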
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + sudo -E kolla_set_configs
Nov 29 02:43:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Validating config file
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying service configuration files
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Writing out command to execute
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:44 np0005539550 nova_compute[256196]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:44 np0005539550 nova_compute[256196]: ++ cat /run_command
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + CMD=nova-compute
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + ARGS=
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + sudo kolla_copy_cacerts
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + [[ ! -n '' ]]
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + . kolla_extend_start
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:43:44 np0005539550 nova_compute[256196]: Running command: 'nova-compute'
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + umask 0022
Nov 29 02:43:44 np0005539550 nova_compute[256196]: + exec nova-compute
Nov 29 02:43:44 np0005539550 podman[256180]: nova_compute
Nov 29 02:43:44 np0005539550 systemd[1]: Started nova_compute container.
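
The INFO:__main__ block above is kolla_set_configs executing the COPY_ALWAYS strategy: it reads /var/lib/kolla/config_files/config.json, copies each listed file into place, fixes ownership and permissions, and writes the service command to /run_command for kolla_start to exec. A minimal sketch of that loop, assuming the usual kolla config.json keys (command, and config_files entries with source/dest/owner/perm):

    import json
    import shutil
    import subprocess

    def copy_configs(path="/var/lib/kolla/config_files/config.json"):
        with open(path) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):
            shutil.copy(entry["source"], entry["dest"])    # "Copying ... to ..."
            if "owner" in entry:
                subprocess.run(["chown", entry["owner"], entry["dest"]], check=True)
            if "perm" in entry:                            # "Setting permission for ..."
                subprocess.run(["chmod", entry["perm"], entry["dest"]], check=True)
        return cfg.get("command")                          # ends up in /run_command

    print("command to exec:", copy_configs())
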
Nov 29 02:43:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:44 np0005539550 podman[256199]: 2025-11-29 07:43:44.463581285 +0000 UTC m=+0.735775346 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 29 02:43:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:44.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:44.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:46.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:46 np0005539550 nova_compute[256196]: 2025-11-29 07:43:46.530 256213 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 02:43:46 np0005539550 nova_compute[256196]: 2025-11-29 07:43:46.531 256213 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 02:43:46 np0005539550 nova_compute[256196]: 2025-11-29 07:43:46.531 256213 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 02:43:46 np0005539550 nova_compute[256196]: 2025-11-29 07:43:46.531 256213 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
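
os_vif discovers those three plugins through setuptools entry points rather than hard-coding them; each plugin package registers itself in the 'os_vif' namespace. A sketch of the same discovery step using stevedore, which is what os_vif uses internally:

    from stevedore import extension

    # Lists whatever VIF plugins are installed in this environment;
    # on the host above that is linux_bridge, noop and ovs.
    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print(sorted(mgr.names()))
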
Nov 29 02:43:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:43:46 np0005539550 nova_compute[256196]: 2025-11-29 07:43:46.681 256213 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:46 np0005539550 nova_compute[256196]: 2025-11-29 07:43:46.695 256213 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:43:46 np0005539550 nova_compute[256196]: 2025-11-29 07:43:46.696 256213 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
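
The grep against /sbin/iscsiadm is a capability probe, not an error: nova (via os-brick) checks whether the installed iscsiadm knows the node.session.scan option, and grep's exit status 1 just means the string was not found, so manual-scan mode is treated as unsupported. The same probe, sketched:

    import subprocess

    def iscsiadm_supports_manual_scan(binary="/sbin/iscsiadm"):
        # grep -F exits 0 if the literal string occurs in the binary, 1 if not.
        res = subprocess.run(
            ["grep", "-F", "node.session.scan", binary],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return res.returncode == 0

    print(iscsiadm_supports_manual_scan())
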
Nov 29 02:43:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:46.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.425 256213 INFO nova.virt.driver [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.578 256213 INFO nova.compute.provider_config [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.763 256213 DEBUG oslo_concurrency.lockutils [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.763 256213 DEBUG oslo_concurrency.lockutils [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.764 256213 DEBUG oslo_concurrency.lockutils [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
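
The Acquiring/Acquired/Releasing triple is oslo.concurrency's named-lock helper; oslo.service takes a short-lived "singleton_lock" during launcher setup. Using the same helper looks like this (a sketch; the lock name is arbitrary):

    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        # critical section; the log above shows acquire and release
        # happening within about a millisecond.
        pass
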
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.764 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.764 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Configuration options gathered from (log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589-2602):
    command line args: []
    config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf']
    allow_resize_to_same_host      = False
    arq_binding_timeout            = 300
    backdoor_port                  = None
    backdoor_socket                = None
    block_device_allocate_retries  = 60
    block_device_allocate_retries_interval = 3
    cert                           = self.pem
    compute_driver                 = libvirt.LibvirtDriver
    compute_monitors               = []
    config_dir                     = ['/etc/nova/nova.conf.d']
    config_drive_format            = iso9660
    config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf']
    config_source                  = []
    console_host                   = compute-0
    control_exchange               = nova
Nov 29 02:43:47 np0005539550 ceph-mon[74435]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 31 sec, mon.compute-1 has slow ops)
Nov 29 02:43:47 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:43:47 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.768 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Full set of CONF, continued (log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602):
    cpu_allocation_ratio           = None
    daemon                         = False
    debug                          = True
    default_access_ip_network_name = None
    default_availability_zone      = nova
    default_ephemeral_format       = None
    default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO']
    default_schedule_zone          = None
    disk_allocation_ratio          = None
    enable_new_services            = True
    enabled_apis                   = ['osapi_compute', 'metadata']
    enabled_ssl_apis               = []
    flat_injected                  = False
    force_config_drive             = True
    force_raw_images               = True
    graceful_shutdown_timeout      = 60
    heal_instance_info_cache_interval = 60
    host                           = compute-0.ctlplane.example.com
    initial_cpu_allocation_ratio   = 4.0
    initial_disk_allocation_ratio  = 0.9
    initial_ram_allocation_ratio   = 1.0
    injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template
    instance_build_timeout         = 0
    instance_delete_interval       = 300
    instance_format                = [instance: %(uuid)s]
    instance_name_template         = instance-%08x
    instance_usage_audit           = False
    instance_usage_audit_period    = month
    instance_uuid_format           = [instance: %(uuid)s]
    instances_path                 = /var/lib/nova/instances
    internal_service_availability_zone = internal
    key                            = None
    live_migration_retry_count     = 30
    log_config_append              = None
    log_date_format                = %Y-%m-%d %H:%M:%S
    log_dir                        = None
    log_file                       = None
    log_options                    = True
    log_rotate_interval            = 1
    log_rotate_interval_type       = days
    log_rotation_type              = size
    logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s
    logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d
    logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
    logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
    logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s
    long_rpc_timeout               = 1800
    max_concurrent_builds          = 10
    max_concurrent_live_migrations = 1
    max_concurrent_snapshots       = 5
    max_local_block_devices        = 3
    max_logfile_count              = 1
    max_logfile_size_mb            = 20
    maximum_instance_delete_attempts = 5
    metadata_listen                = 0.0.0.0
    metadata_listen_port           = 8775
    metadata_workers               = None
    migrate_max_retries            = -1
    mkisofs_cmd                    = /usr/bin/mkisofs
    my_block_storage_ip            = 192.168.122.100
    my_ip                          = 192.168.122.100
    network_allocate_retries       = 0
    non_inheritable_image_properties = ['cache_in_nova', 'bittorrent']
    osapi_compute_listen           = 0.0.0.0
    osapi_compute_listen_port      = 8774
    osapi_compute_unique_server_name_scope =
    osapi_compute_workers          = None
    password_length                = 12
    periodic_enable                = True
    periodic_fuzzy_delay           = 60
    pointer_model                  = usbtablet
    preallocate_images             = none
    publish_errors                 = False
    pybasedir                      = /usr/lib/python3.9/site-packages
    ram_allocation_ratio           = None
    rate_limit_burst               = 0
    rate_limit_except_level        = CRITICAL
    rate_limit_interval            = 0
    reboot_timeout                 = 0
    reclaim_instance_interval      = 0
    record                         = None
    reimage_timeout_per_gb         = 20
    report_interval                = 10
    rescue_timeout                 = 0
    reserved_host_cpus             = 0
    reserved_host_disk_mb          = 0
    reserved_host_memory_mb        = 512
    reserved_huge_pages            = None
    resize_confirm_window          = 0
    resize_fs_using_block_device   = False
    resume_guests_state_on_host_boot = False
    rootwrap_config                = /etc/nova/rootwrap.conf
    rpc_response_timeout           = 60
    run_external_periodic_tasks    = True
    running_deleted_instance_action = reap
    running_deleted_instance_poll_interval = 1800
    running_deleted_instance_timeout = 0
    scheduler_instance_sync_interval = 120
    service_down_time              = 60
    servicegroup_driver            = db
    shelved_offload_time           = 0
    shelved_poll_interval          = 3600
    shutdown_timeout               = 60
    source_is_ipv6                 = False
    ssl_only                       = False
    state_path                     = /var/lib/nova
    sync_power_state_interval      = 600
    sync_power_state_pool_size     = 1000
    syslog_log_facility            = LOG_USER
    tempdir                        = None
    timeout_nbd                    = 10
    transport_url                  = ****
    update_resources_interval      = 0
    use_cow_images                 = True
    use_eventlog                   = False
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.787 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.787 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.787 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.787 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.788 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.788 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.788 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.788 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.788 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.788 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.788 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.789 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.789 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.789 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.789 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.789 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.789 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.790 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.790 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.790 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.790 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.790 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.790 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.790 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.791 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.791 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.791 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.791 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.791 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.791 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.791 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.792 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.792 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.792 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.792 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.792 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.792 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.792 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.793 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.793 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.793 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.793 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.793 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.793 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.794 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.794 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.794 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.794 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.794 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.795 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.795 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.795 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.795 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.795 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.795 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.796 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.796 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.796 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.796 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.796 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.796 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.796 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.797 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.797 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.797 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.797 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.797 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.797 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.797 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.798 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.798 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.798 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.798 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.798 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.798 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.798 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.799 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.799 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.799 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.799 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.799 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.799 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.800 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.800 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.800 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.800 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.800 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.800 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.800 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.801 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.801 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.801 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.801 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.801 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.802 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.802 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.802 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.802 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.802 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.802 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.803 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.803 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.803 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.803 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.803 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.804 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.804 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.804 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.804 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.804 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.804 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.804 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.805 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.805 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.805 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.805 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.805 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.806 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.806 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.806 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.806 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.806 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.806 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.806 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.807 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.807 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.807 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.807 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.807 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.807 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.807 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.808 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.808 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.808 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.808 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.808 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.808 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.809 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.809 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.809 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.809 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.809 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.809 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.810 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.810 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.810 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.810 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.810 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.811 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.811 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.811 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.811 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.811 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.811 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.811 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.812 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.812 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.812 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.812 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.812 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.812 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.813 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.813 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.813 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.813 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.813 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.813 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.813 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.814 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.814 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.814 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.814 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.814 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.814 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.815 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.815 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.815 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.815 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.815 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.815 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.815 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.816 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.816 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.816 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.816 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.816 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.816 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.817 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.817 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.817 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.817 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.817 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.817 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.818 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.818 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.818 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.818 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.818 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.818 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
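Most of the [glance] options above sit at their oslo.config defaults; the values that actually deviate are region_name and valid_interfaces. A minimal sketch of the corresponding /etc/nova/nova.conf fragment, assuming that file is where the dumped values come from (the standard oslo.config source for nova-compute; the path itself never appears in the log):

    [glance]
    # Matches glance.region_name and glance.valid_interfaces in the dump above.
    region_name = regionOne
    valid_interfaces = internal
    # Logged as 3; this may simply be the release default.
    num_retries = 3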
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.818 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.819 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.819 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.819 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.819 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.819 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.819 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.820 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.820 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.820 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.820 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.820 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.820 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.821 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.821 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.821 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.821 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.821 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.821 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.821 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.822 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.822 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.822 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.822 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.822 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.823 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.823 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
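The image_cache values logged above (manager_interval 2400, the two remove_unused_* ages, precache_concurrency 1) all look like stock defaults. For reference, a hedged nova.conf sketch that would pin the same pruning behaviour explicitly:

    [image_cache]
    # Periodic image cache manager run, in seconds.
    manager_interval = 2400
    remove_unused_base_images = True
    # 24 hours before an unused original base image is removed.
    remove_unused_original_minimum_age_seconds = 86400
    # 1 hour for unused resized variants.
    remove_unused_resized_minimum_age_seconds = 3600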
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.823 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.823 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.823 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.823 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.823 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.824 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.824 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.824 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.824 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.824 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.824 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.824 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.825 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.825 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.825 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.825 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.825 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.825 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.825 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.826 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.826 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.826 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.826 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.826 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.826 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.827 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.827 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.827 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.827 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.827 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.828 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.828 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.828 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.828 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.828 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.828 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.828 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.829 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.829 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.829 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.829 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.829 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.829 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.830 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.830 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
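Two details stand out in the key-manager block: key_manager.backend is barbican (also the usual castellan default), and key_manager.fixed_key prints as **** because oslo.config masks any option registered as secret, whether or not it is actually set. A sketch of the matching nova.conf fragment, assuming the masked fixed_key is unset rather than a real key (the log cannot distinguish the two):

    [key_manager]
    backend = barbican

    [barbican]
    # Non-default: talk to Barbican on its internal endpoint.
    barbican_endpoint_type = internal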
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.830 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.830 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.830 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.830 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.830 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.831 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.831 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.831 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.831 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.831 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.831 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.832 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.832 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.832 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.832 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.832 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.832 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.832 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.833 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.833 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.833 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.833 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.833 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.833 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.833 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.834 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.834 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.834 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.834 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.834 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.834 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.835 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.835 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.835 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.835 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.835 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.835 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.836 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.836 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.836 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.836 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.836 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.836 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.837 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.837 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.837 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.837 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.837 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.838 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.838 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.838 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.838 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.838 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.838 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.839 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.839 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.839 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.839 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.839 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.839 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.839 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
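The guest-definition knobs above carry real, non-default choices: cpu_mode = custom with cpu_models = ['Nehalem'] pins every guest to a Nehalem vCPU model, and hw_machine_type maps x86_64 guests to the q35 machine type. Written as nova.conf INI, using the standard comma-separated syntax for list-valued options:

    [libvirt]
    cpu_mode = custom
    # Guests see a Nehalem CPU regardless of the host model.
    cpu_models = Nehalem
    # q35 instead of the legacy pc machine type for x86_64 guests.
    hw_machine_type = x86_64=q35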
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.840 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.840 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.840 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.840 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.840 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.840 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.840 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
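images_type = rbd together with images_rbd_pool = vms and images_rbd_ceph_conf = /etc/ceph/ceph.conf means instance disks are backed by Ceph RBD rather than local files. The equivalent nova.conf fragment, option names taken straight from the dump (the credential half, rbd_user and rbd_secret_uuid, is logged further down):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    images_rbd_glance_store_name = default_backend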
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.841 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.841 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.841 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.841 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.841 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.841 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.842 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.842 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.842 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.842 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.842 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.842 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.842 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.843 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.843 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.843 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.843 256213 WARNING oslo_config.cfg [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:43:47 np0005539550 nova_compute[256196]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:43:47 np0005539550 nova_compute[256196]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:43:47 np0005539550 nova_compute[256196]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:43:47 np0005539550 nova_compute[256196]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.843 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.844 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
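The WARNING above is oslo.config flagging that this node still sets the deprecated live_migration_uri (qemu+tls://%s/system). Going by the deprecation text itself, the same effect should be achievable with the replacement options; a hedged sketch of that migration, assuming nothing in the URI beyond the tls scheme needs preserving:

    [libvirt]
    # Replaces: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # Optionally pin the target address; otherwise the destination host name is used.
    # live_migration_inbound_addr = <migration network IP>
    live_migration_with_native_tls = True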
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.844 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.844 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.844 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.844 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.845 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.845 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.845 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.845 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.845 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.845 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.846 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.846 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.846 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.846 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.846 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.847 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.847 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rbd_secret_uuid        = b66774a7-56d9-5535-bd8c-681234404870 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.847 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
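rbd_user and rbd_secret_uuid are the Ceph authentication half of the RBD setup shown earlier: the cephx user openstack plus the UUID of a libvirt secret holding that user's key. As nova.conf:

    [libvirt]
    rbd_user = openstack
    # Must match a libvirt secret (visible via virsh secret-list) carrying the cephx key.
    rbd_secret_uuid = b66774a7-56d9-5535-bd8c-681234404870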
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.847 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.847 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.848 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.848 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.848 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.848 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.848 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.848 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.849 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.849 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.849 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.849 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.849 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.850 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.850 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.850 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.850 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.850 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.850 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.851 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.851 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.851 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.851 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.851 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.851 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.852 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.852 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.852 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.852 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.852 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.852 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.852 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.853 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.853 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.853 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.853 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.853 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.853 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.854 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.854 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.854 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.854 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.854 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.854 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.854 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.855 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.855 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.855 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.855 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.855 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.855 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.855 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.856 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.856 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.856 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.856 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.856 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.856 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.856 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.857 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.857 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.857 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.857 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.857 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.857 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.857 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.858 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.858 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.858 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.858 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.858 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.858 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.858 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.859 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.859 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.859 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.859 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.859 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.859 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.859 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.860 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.860 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.860 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.860 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.860 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.860 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.860 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.861 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.861 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.861 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.861 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.861 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.861 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.861 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.862 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.862 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.862 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.862 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.862 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.862 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.863 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.863 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.863 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.863 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.863 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.863 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.864 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.864 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.864 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.864 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.864 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.864 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.864 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.865 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.865 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.865 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.865 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.865 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.865 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.866 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.866 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.866 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.866 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.866 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.866 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.867 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.867 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.867 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.867 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.867 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.867 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.867 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.868 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.868 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.868 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.868 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.868 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.868 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.868 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.869 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.869 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.869 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.869 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.869 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.869 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.870 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.870 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.870 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.870 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.870 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.870 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.870 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.871 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.871 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.871 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.871 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.871 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.871 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.871 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.872 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.872 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.872 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.872 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.872 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.873 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.873 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.873 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.873 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.873 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.873 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.874 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.874 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.874 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.874 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.874 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.874 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.874 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.875 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.875 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.875 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.875 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.875 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.875 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.876 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.876 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.876 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.876 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.876 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.876 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.876 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.877 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.877 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.877 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.877 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.877 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.877 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.877 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.878 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.878 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.878 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.878 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.878 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.878 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.879 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.880 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.880 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.880 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.880 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.880 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.880 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.881 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.881 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.881 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.881 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.881 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.881 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.882 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.882 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.882 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.882 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.882 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.883 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.883 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.883 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.883 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.883 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.883 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.884 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.884 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
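The vnc.* block above corresponds to a [vnc] section in nova.conf, read through oslo.config and exposed as attributes on the global CONF object. A minimal sketch of how such options are declared and consumed (the registered subset and the defaults below are illustrative, not nova's actual definitions):

    # Minimal oslo.config sketch for the [vnc] options shown above.
    # Option subset and defaults are illustrative only.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_group(cfg.OptGroup('vnc'))
    CONF.register_opts([
        cfg.BoolOpt('enabled', default=True),
        cfg.StrOpt('novncproxy_base_url',
                   default='http://127.0.0.1:6080/vnc_auto.html'),
        cfg.StrOpt('server_proxyclient_address', default='127.0.0.1'),
    ], group='vnc')

    CONF([], project='nova')  # a real service would parse /etc/nova/nova.conf here
    print(CONF.vnc.novncproxy_base_url)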
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.884 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.884 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.884 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.884 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.884 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.885 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.885 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.885 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.885 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.885 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.885 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.886 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.886 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.886 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.886 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.886 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.886 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.886 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.887 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.887 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.887 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.887 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.887 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.887 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.887 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.888 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.888 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.888 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.888 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.888 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.888 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.889 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
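The wsgi.wsgi_log_format value above is an old-style Python %-format string; the WSGI server fills it per request from a dict of request attributes. A quick illustration with made-up values:

    # Expanding wsgi.wsgi_log_format the way the server does per request.
    # All values in the dict below are invented for illustration.
    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')
    print(fmt % {
        'client_ip': '192.168.122.1',
        'request_line': 'GET /v2.1/servers HTTP/1.1',
        'status_code': 200,
        'body_length': 1542,
        'wall_seconds': 0.0312409,
    })
    # -> 192.168.122.1 "GET /v2.1/servers HTTP/1.1" status: 200 len: 1542 time: 0.0312409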
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.889 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.889 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.889 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.889 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.890 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.890 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.890 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.890 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.890 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.891 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.891 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.891 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.891 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.891 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.891 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.892 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.892 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.892 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.892 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.892 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.892 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.892 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.893 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.893 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.893 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.893 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.893 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.893 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.893 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.894 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.894 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.894 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.894 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.894 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.894 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.895 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.895 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.895 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.895 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.895 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.895 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.896 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.896 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.896 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.896 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.896 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.896 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.896 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.897 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
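With oslo_messaging_rabbit.rabbit_quorum_queue = True and amqp_durable_queues = True above, RPC queues are declared as durable RabbitMQ quorum queues. oslo.messaging handles that internally; the sketch below uses kombu directly only to show what it amounts to at the AMQP level (queue, exchange, and broker names here are hypothetical):

    # Illustrative only: what rabbit_quorum_queue = True means on the wire --
    # queues declared with the x-queue-type=quorum argument.
    from kombu import Connection, Exchange, Queue

    exchange = Exchange('nova', type='topic', durable=True)
    queue = Queue('compute.np0005539550', exchange,
                  routing_key='compute.np0005539550',
                  durable=True,                              # amqp_durable_queues
                  queue_arguments={'x-queue-type': 'quorum'})  # quorum queue

    with Connection('amqp://guest:guest@rabbit:5672//') as conn:
        queue(conn.channel()).declare()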
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.897 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.897 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.897 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.897 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
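The **** printed for transport_url here (and for vmware.host_password and oslo_limit.password elsewhere in this dump) is oslo.config masking, not the configured value: options registered with secret=True are redacted by log_opt_values. A minimal sketch:

    # Demonstrates oslo.config's secret-option masking in log_opt_values.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('transport_url', secret=True)],
        group='oslo_messaging_notifications')
    CONF([], project='demo')
    CONF.set_override('transport_url', 'rabbit://user:secret@host:5672/',
                      group='oslo_messaging_notifications')
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
    # prints: oslo_messaging_notifications.transport_url = ****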
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.897 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.898 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.898 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.898 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.898 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.898 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.898 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.898 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.899 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.900 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.900 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.900 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.900 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.900 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.900 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.900 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.901 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.901 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.901 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.901 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.901 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.901 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.901 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.902 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.902 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.902 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.902 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.902 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.902 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.902 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.903 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
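The oslo_limit block above is a standard keystoneauth1 auth section (auth_type = password, system-scoped to "all"). Roughly the session that would be built from it, sketched directly with keystoneauth1 and assuming a version recent enough to accept system_scope (the password literal is a placeholder, since the log masks it):

    # Rough keystoneauth1 equivalent of the [oslo_limit] auth settings above.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',                 # oslo_limit.username
        password='REDACTED',             # masked as **** in the log
        user_domain_name='Default',      # oslo_limit.user_domain_name
        system_scope='all',              # oslo_limit.system_scope
    )
    sess = session.Session(auth=auth)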
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.903 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.903 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.903 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.903 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.903 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.903 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.904 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.904 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.905 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.905 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.905 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.905 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.905 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.905 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.905 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.906 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.906 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.906 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.906 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.906 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.906 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.907 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.907 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.907 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.907 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.907 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.908 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.908 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.908 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.908 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.908 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.908 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.909 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.909 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.909 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.909 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.909 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.909 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.910 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.910 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.910 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.910 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.910 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.911 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.911 256213 DEBUG oslo_service.service [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 02:43:47 np0005539550 nova_compute[256196]: 2025-11-29 07:43:47.912 256213 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.107 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.107 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.108 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.108 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 02:43:48 np0005539550 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 02:43:48 np0005539550 systemd[1]: Started libvirt QEMU daemon.
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.187 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8cd9c09dc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.190 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8cd9c09dc0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.192 256213 INFO nova.virt.libvirt.driver [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Connection event '1' reason 'None'
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:48.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops)
Nov 29 02:43:48 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 29 02:43:48 np0005539550 python3.9[256443]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.827 256213 WARNING nova.virt.libvirt.driver [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 02:43:48 np0005539550 nova_compute[256196]: 2025-11-29 07:43:48.828 256213 DEBUG nova.virt.libvirt.volume.mount [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 02:43:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:48.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.022 256213 INFO nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <host>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <uuid>9851e351-ef5d-4a0c-9f85-d561f6a4210f</uuid>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <arch>x86_64</arch>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model>EPYC-Rome-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <vendor>AMD</vendor>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <microcode version='16777317'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='x2apic'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='tsc-deadline'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='osxsave'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='hypervisor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='tsc_adjust'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='spec-ctrl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='stibp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='arch-capabilities'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='cmp_legacy'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='topoext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='virt-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='lbrv'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='tsc-scale'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='vmcb-clean'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='pause-filter'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='pfthreshold'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='svme-addr-chk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='rdctl-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='mds-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature name='pschange-mc-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <pages unit='KiB' size='4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <pages unit='KiB' size='2048'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <power_management>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <suspend_mem/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </power_management>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <iommu support='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <migration_features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <live/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <uri_transports>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <uri_transport>tcp</uri_transport>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <uri_transport>rdma</uri_transport>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </uri_transports>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </migration_features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <topology>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <cells num='1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <cell id='0'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          <memory unit='KiB'>7864320</memory>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          <distances>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <sibling id='0' value='10'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          </distances>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          <cpus num='8'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:          </cpus>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        </cell>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </cells>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </topology>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <cache>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </cache>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <secmodel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model>selinux</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <doi>0</doi>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </secmodel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <secmodel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model>dac</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <doi>0</doi>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </secmodel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </host>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <guest>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <os_type>hvm</os_type>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <arch name='i686'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <wordsize>32</wordsize>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <domain type='qemu'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <domain type='kvm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </arch>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <pae/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <nonpae/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <apic default='on' toggle='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <cpuselection/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <deviceboot/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <externalSnapshot/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </guest>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <guest>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <os_type>hvm</os_type>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <arch name='x86_64'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <wordsize>64</wordsize>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <domain type='qemu'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <domain type='kvm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </arch>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <apic default='on' toggle='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <cpuselection/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <deviceboot/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <externalSnapshot/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </guest>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 
Nov 29 02:43:49 np0005539550 nova_compute[256196]: </capabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.029 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.054 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:43:49 np0005539550 nova_compute[256196]: <domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <domain>kvm</domain>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <arch>i686</arch>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <vcpu max='4096'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <iothreads supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <os supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='firmware'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <loader supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>rom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pflash</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='readonly'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>yes</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='secure'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </loader>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </os>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='maximumMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <vendor>AMD</vendor>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='succor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='custom' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-128'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-256'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-512'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <memoryBacking supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='sourceType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>anonymous</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>memfd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </memoryBacking>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <disk supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='diskDevice'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>disk</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cdrom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>floppy</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>lun</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>fdc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>sata</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </disk>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <graphics supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vnc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egl-headless</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </graphics>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <video supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='modelType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vga</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cirrus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>none</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>bochs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ramfb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </video>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hostdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='mode'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>subsystem</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='startupPolicy'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>mandatory</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>requisite</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>optional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='subsysType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pci</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='capsType'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='pciBackend'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hostdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <rng supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>random</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </rng>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <filesystem supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='driverType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>path</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>handle</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtiofs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </filesystem>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <tpm supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-tis</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-crb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emulator</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>external</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendVersion'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>2.0</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </tpm>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <redirdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </redirdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <channel supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </channel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <crypto supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </crypto>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <interface supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>passt</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </interface>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <panic supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>isa</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>hyperv</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </panic>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <console supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>null</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dev</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pipe</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stdio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>udp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tcp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu-vdagent</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </console>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <gic supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <genid supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backup supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <async-teardown supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <ps2 supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sev supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sgx supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hyperv supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='features'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>relaxed</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vapic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>spinlocks</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vpindex</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>runtime</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>synic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stimer</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reset</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vendor_id</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>frequencies</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reenlightenment</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tlbflush</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ipi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>avic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emsr_bitmap</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>xmm_input</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hyperv>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <launchSecurity supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='sectype'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tdx</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </launchSecurity>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: </domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.060 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:43:49 np0005539550 nova_compute[256196]: <domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <domain>kvm</domain>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <arch>i686</arch>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <vcpu max='240'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <iothreads supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <os supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='firmware'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <loader supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>rom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pflash</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='readonly'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>yes</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='secure'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </loader>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </os>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='maximumMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <vendor>AMD</vendor>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='succor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='custom' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-128'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-256'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-512'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <memoryBacking supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='sourceType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>anonymous</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>memfd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </memoryBacking>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <disk supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='diskDevice'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>disk</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cdrom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>floppy</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>lun</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ide</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>fdc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>sata</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </disk>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <graphics supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vnc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egl-headless</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </graphics>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <video supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='modelType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vga</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cirrus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>none</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>bochs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ramfb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </video>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hostdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='mode'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>subsystem</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='startupPolicy'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>mandatory</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>requisite</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>optional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='subsysType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pci</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='capsType'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='pciBackend'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hostdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <rng supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>random</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </rng>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <filesystem supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='driverType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>path</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>handle</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtiofs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </filesystem>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <tpm supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-tis</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-crb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emulator</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>external</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendVersion'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>2.0</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </tpm>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <redirdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </redirdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <channel supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </channel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <crypto supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </crypto>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <interface supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>passt</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </interface>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <panic supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>isa</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>hyperv</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </panic>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <console supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>null</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dev</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pipe</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stdio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>udp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tcp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu-vdagent</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </console>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <gic supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <genid supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backup supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <async-teardown supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <ps2 supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sev supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sgx supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hyperv supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='features'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>relaxed</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vapic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>spinlocks</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vpindex</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>runtime</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>synic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stimer</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reset</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vendor_id</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>frequencies</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reenlightenment</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tlbflush</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ipi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>avic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emsr_bitmap</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>xmm_input</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hyperv>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <launchSecurity supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='sectype'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tdx</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </launchSecurity>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: </domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.101 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.105 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:43:49 np0005539550 nova_compute[256196]: <domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <domain>kvm</domain>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <arch>x86_64</arch>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <vcpu max='4096'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <iothreads supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <os supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='firmware'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>efi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <loader supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>rom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pflash</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='readonly'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>yes</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='secure'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>yes</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </loader>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </os>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='maximumMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <vendor>AMD</vendor>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='succor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='custom' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-128'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-256'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-512'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <memoryBacking supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='sourceType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>anonymous</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>memfd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </memoryBacking>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <disk supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='diskDevice'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>disk</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cdrom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>floppy</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>lun</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>fdc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>sata</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </disk>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <graphics supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vnc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egl-headless</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </graphics>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <video supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='modelType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vga</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cirrus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>none</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>bochs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ramfb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </video>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hostdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='mode'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>subsystem</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='startupPolicy'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>mandatory</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>requisite</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>optional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='subsysType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pci</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='capsType'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='pciBackend'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hostdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <rng supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>random</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </rng>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <filesystem supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='driverType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>path</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>handle</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtiofs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </filesystem>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <tpm supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-tis</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-crb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emulator</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>external</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendVersion'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>2.0</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </tpm>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <redirdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </redirdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <channel supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </channel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <crypto supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </crypto>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <interface supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>passt</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </interface>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <panic supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>isa</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>hyperv</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </panic>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <console supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>null</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dev</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pipe</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stdio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>udp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tcp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu-vdagent</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </console>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <gic supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <genid supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backup supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <async-teardown supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <ps2 supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sev supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sgx supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hyperv supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='features'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>relaxed</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vapic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>spinlocks</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vpindex</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>runtime</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>synic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stimer</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reset</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vendor_id</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>frequencies</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reenlightenment</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tlbflush</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ipi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>avic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emsr_bitmap</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>xmm_input</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hyperv>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <launchSecurity supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='sectype'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tdx</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </launchSecurity>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: </domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
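The XML above is the libvirt domainCapabilities document that nova's _get_domain_capabilities() fetches and logs: each custom-mode CPU model marked usable='no' is followed by a <blockers> element naming the features this host cannot provide for that model. A minimal sketch of reproducing the same query outside nova, assuming a local qemu:///system connection and the libvirt-python binding; the emulator path, arch, machine type, and virt type mirror those in the surrounding log records:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    # Same query nova issues: emulator binary, arch, machine type, virt type.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm", "x86_64", "pc", "kvm", 0)
    root = ET.fromstring(caps_xml)

    custom = root.find("./cpu/mode[@name='custom']")
    # Map each model name to the feature names that block it on this host.
    blocked = {b.get("model"): [f.get("name") for f in b.findall("feature")]
               for b in custom.findall("blockers")}
    for model in custom.findall("model"):
        if model.get("usable") == "no":
            print(model.text, "blocked by:",
                  ", ".join(blocked.get(model.text, [])))

    conn.close()

The same document can be retrieved from the CLI with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm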
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.173 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:43:49 np0005539550 nova_compute[256196]: <domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <domain>kvm</domain>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <arch>x86_64</arch>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <vcpu max='240'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <iothreads supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <os supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='firmware'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <loader supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>rom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pflash</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='readonly'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>yes</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='secure'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>no</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </loader>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </os>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='maximumMigratable'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>on</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>off</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <vendor>AMD</vendor>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='succor'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <mode name='custom' supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Denverton-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='auto-ibrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amd-psfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='stibp-always-on'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='EPYC-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-128'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-256'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx10-512'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='prefetchiti'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Haswell-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512er'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512pf'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fma4'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tbm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xop'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='amx-tile'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-bf16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-fp16'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bitalg'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrc'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fzrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='la57'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='taa-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xfd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ifma'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cmpccxadd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fbsdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='fsrs'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ibrs-all'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mcdt-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pbrsb-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='psdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='serialize'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vaes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='hle'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='rtm'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512bw'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512cd'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512dq'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512f'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='avx512vl'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='invpcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pcid'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='pku'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='mpx'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='core-capability'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='split-lock-detect'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='cldemote'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='erms'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='gfni'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdir64b'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='movdiri'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='xsaves'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='athlon-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='core2duo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='coreduo-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='n270-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='ss'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <blockers model='phenom-v1'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnow'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <feature name='3dnowext'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </blockers>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </mode>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <memoryBacking supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <enum name='sourceType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>anonymous</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <value>memfd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </memoryBacking>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <disk supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='diskDevice'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>disk</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cdrom</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>floppy</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>lun</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ide</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>fdc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>sata</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </disk>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <graphics supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vnc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egl-headless</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </graphics>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <video supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='modelType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vga</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>cirrus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>none</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>bochs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ramfb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </video>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hostdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='mode'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>subsystem</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='startupPolicy'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>mandatory</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>requisite</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>optional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='subsysType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pci</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>scsi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='capsType'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='pciBackend'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hostdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <rng supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtio-non-transitional</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>random</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>egd</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </rng>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <filesystem supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='driverType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>path</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>handle</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>virtiofs</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </filesystem>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <tpm supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-tis</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tpm-crb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emulator</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>external</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendVersion'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>2.0</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </tpm>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <redirdev supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='bus'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>usb</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </redirdev>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <channel supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </channel>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <crypto supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendModel'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>builtin</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </crypto>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <interface supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='backendType'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>default</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>passt</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </interface>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <panic supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='model'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>isa</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>hyperv</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </panic>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <console supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='type'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>null</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vc</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pty</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dev</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>file</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>pipe</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stdio</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>udp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tcp</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>unix</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>qemu-vdagent</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>dbus</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </console>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </devices>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <gic supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <genid supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <backup supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <async-teardown supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <ps2 supported='yes'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sev supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <sgx supported='no'/>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <hyperv supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='features'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>relaxed</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vapic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>spinlocks</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vpindex</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>runtime</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>synic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>stimer</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reset</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>vendor_id</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>frequencies</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>reenlightenment</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tlbflush</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>ipi</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>avic</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>emsr_bitmap</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>xmm_input</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </defaults>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </hyperv>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    <launchSecurity supported='yes'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      <enum name='sectype'>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:        <value>tdx</value>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:      </enum>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:    </launchSecurity>
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  </features>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: </domainCapabilities>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
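The block above is the raw domainCapabilities XML that nova's _get_domain_capabilities() pulls from libvirt at startup. A minimal sketch of fetching the same document outside nova, using the libvirt-python binding (assumes libvirt-python is installed and qemu:///system is reachable; this helper is illustrative, not nova's code):

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    # All selectors left as None/0: ask for the default emulator,
    # arch, machine type and virt type, as in the dump above.
    caps_xml = conn.getDomainCapabilities(None, None, None, None, 0)
    root = ET.fromstring(caps_xml)
    # List the CPU models libvirt marks usable='yes' under
    # <cpu><mode name='custom'>, matching the log output above.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print(model.get('vendor'), model.text)
    conn.close()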
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.242 256213 DEBUG nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.243 256213 INFO nova.virt.libvirt.host [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Secure Boot support detected
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.245 256213 INFO nova.virt.libvirt.driver [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.246 256213 INFO nova.virt.libvirt.driver [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.255 256213 DEBUG nova.virt.libvirt.driver [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:43:49 np0005539550 nova_compute[256196]:  <model>Nehalem</model>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: </cpu>
Nov 29 02:43:49 np0005539550 nova_compute[256196]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019
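nova's _compare_cpu hands that two-line <cpu> document to libvirt to check whether this host can run a Nehalem-model guest. A standalone sketch of the same comparison via libvirt-python's compareCPU (hypothetical usage under the same connection assumption as above; newer nova code paths may call a hypervisor-aware variant instead):

    import libvirt

    CPU_XML = '<cpu match="exact"><model>Nehalem</model></cpu>'
    conn = libvirt.open('qemu:///system')
    # Returns VIR_CPU_COMPARE_INCOMPATIBLE (0), _IDENTICAL (1)
    # or _SUPERSET (2); raises libvirtError on failure.
    result = conn.compareCPU(CPU_XML, 0)
    print({0: 'incompatible', 1: 'identical', 2: 'superset'}[result])
    conn.close()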
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.257 256213 DEBUG nova.virt.libvirt.driver [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.536 256213 INFO nova.virt.node [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Determined node identity a73c606e-2495-4af4-b703-8d4b3001fdf5 from /var/lib/nova/compute_id
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.584 256213 WARNING nova.compute.manager [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Compute nodes ['a73c606e-2495-4af4-b703-8d4b3001fdf5'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 29 02:43:49 np0005539550 nova_compute[256196]: 2025-11-29 07:43:49.715 256213 INFO nova.compute.manager [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.099 256213 WARNING nova.compute.manager [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.099 256213 DEBUG oslo_concurrency.lockutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.099 256213 DEBUG oslo_concurrency.lockutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.100 256213 DEBUG oslo_concurrency.lockutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.100 256213 DEBUG nova.compute.resource_tracker [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.100 256213 DEBUG oslo_concurrency.processutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:50 np0005539550 python3.9[256608]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:50.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:43:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451994993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.515 256213 DEBUG oslo_concurrency.processutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
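The resource tracker shells out to the ceph CLI shown in the two processutils lines above. A sketch of running the same command by hand and reading the cluster totals (assumes the client.openstack keyring and /etc/ceph/ceph.conf exist; the "stats" field names are an assumption that matches ceph's JSON output in recent releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print("avail GiB:", stats["total_avail_bytes"] / 1024**3)
    print("used  GiB:", stats["total_used_bytes"] / 1024**3)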
Nov 29 02:43:50 np0005539550 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 02:43:50 np0005539550 systemd[1]: Started libvirt nodedev daemon.
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.886 256213 WARNING nova.virt.libvirt.driver [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.887 256213 DEBUG nova.compute.resource_tracker [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5210MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.887 256213 DEBUG oslo_concurrency.lockutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.888 256213 DEBUG oslo_concurrency.lockutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:43:50 np0005539550 nova_compute[256196]: 2025-11-29 07:43:50.905 256213 WARNING nova.compute.resource_tracker [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] No compute node record for compute-0.ctlplane.example.com:a73c606e-2495-4af4-b703-8d4b3001fdf5: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host a73c606e-2495-4af4-b703-8d4b3001fdf5 could not be found.
Nov 29 02:43:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:50.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:51 np0005539550 nova_compute[256196]: 2025-11-29 07:43:51.133 256213 INFO nova.compute.resource_tracker [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: a73c606e-2495-4af4-b703-8d4b3001fdf5
Nov 29 02:43:51 np0005539550 python3.9[256803]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 02:43:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ed0c9ed7-95eb-448b-b667-5b13060ccb12 does not exist
Nov 29 02:43:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 71eeb3cf-cea5-4197-8843-35b1962d0d39 does not exist
Nov 29 02:43:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9b2bc8e8-5a2e-4073-a070-dd9a35279a0f does not exist
Nov 29 02:43:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:43:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:43:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:43:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:43:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:43:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:43:51 np0005539550 nova_compute[256196]: 2025-11-29 07:43:51.821 256213 DEBUG nova.compute.resource_tracker [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:43:51 np0005539550 nova_compute[256196]: 2025-11-29 07:43:51.822 256213 DEBUG nova.compute.resource_tracker [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
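The "Final resource view" record above is a flat key=value string. A throwaway parse of it (hypothetical helper, not nova code) recovers the headroom the tracker is reporting:

    import re

    line = ("name=compute-0.ctlplane.example.com phys_ram=7680MB "
            "used_ram=512MB phys_disk=20GB used_disk=0GB "
            "total_vcpus=8 used_vcpus=0 pci_stats=[]")
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    # Strip the "MB" suffix before doing arithmetic.
    free_ram_mb = int(fields["phys_ram"][:-2]) - int(fields["used_ram"][:-2])
    free_vcpus = int(fields["total_vcpus"]) - int(fields["used_vcpus"])
    print(free_ram_mb, "MB RAM free,", free_vcpus, "vCPUs free")  # 7168 MB, 8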
Nov 29 02:43:52 np0005539550 podman[257021]: 2025-11-29 07:43:52.001122894 +0000 UTC m=+0.086281686 container create 235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 02:43:52 np0005539550 podman[257021]: 2025-11-29 07:43:51.935957247 +0000 UTC m=+0.021116059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:52 np0005539550 systemd[1]: Started libpod-conmon-235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e.scope.
Nov 29 02:43:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:52 np0005539550 podman[257021]: 2025-11-29 07:43:52.182140305 +0000 UTC m=+0.267299157 container init 235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:43:52 np0005539550 podman[257021]: 2025-11-29 07:43:52.189768185 +0000 UTC m=+0.274926977 container start 235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:43:52 np0005539550 podman[257021]: 2025-11-29 07:43:52.194074763 +0000 UTC m=+0.279233555 container attach 235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_knuth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:43:52 np0005539550 serene_knuth[257038]: 167 167
Nov 29 02:43:52 np0005539550 systemd[1]: libpod-235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e.scope: Deactivated successfully.
Nov 29 02:43:52 np0005539550 podman[257021]: 2025-11-29 07:43:52.196501903 +0000 UTC m=+0.281660705 container died 235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_knuth, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:43:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f7b6afaa7d17c19bc1aef7e4c0bd99f12e93a9b7e0d7b2354fd6e10a8a739e14-merged.mount: Deactivated successfully.
Nov 29 02:43:52 np0005539550 podman[257021]: 2025-11-29 07:43:52.34013111 +0000 UTC m=+0.425289902 container remove 235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_knuth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:43:52 np0005539550 systemd[1]: libpod-conmon-235fd68fc05f75f5906616b346fcb256643a2b59cc8ce1b06b967d4298e2024e.scope: Deactivated successfully.
Nov 29 02:43:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:52.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:52 np0005539550 python3.9[257130]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
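The podman_container invocation above, with state=absent and force_delete=True, tears down any leftover nova_nvme_cleaner container. A rough Python approximation of that effect (a sketch of what the module's absent state amounts to, not its actual implementation):

    import subprocess

    # --force stops a running container before removal; --ignore turns
    # "no such container" into a no-op, mirroring state=absent.
    subprocess.run(
        ["podman", "rm", "--force", "--ignore", "nova_nvme_cleaner"],
        check=True)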
Nov 29 02:43:52 np0005539550 podman[257139]: 2025-11-29 07:43:52.503304745 +0000 UTC m=+0.034315538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:52 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:43:52 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:43:52 np0005539550 podman[257139]: 2025-11-29 07:43:52.707005462 +0000 UTC m=+0.238016235 container create 4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mclaren, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:43:52 np0005539550 systemd[1]: Started libpod-conmon-4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d.scope.
Nov 29 02:43:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaccf6f4d149000a0a0bf7fd89e4a901eb6575802991eeb3c7936e45cf7bc27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaccf6f4d149000a0a0bf7fd89e4a901eb6575802991eeb3c7936e45cf7bc27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaccf6f4d149000a0a0bf7fd89e4a901eb6575802991eeb3c7936e45cf7bc27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaccf6f4d149000a0a0bf7fd89e4a901eb6575802991eeb3c7936e45cf7bc27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eaccf6f4d149000a0a0bf7fd89e4a901eb6575802991eeb3c7936e45cf7bc27/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:52 np0005539550 podman[257139]: 2025-11-29 07:43:52.83345684 +0000 UTC m=+0.364467633 container init 4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mclaren, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:43:52 np0005539550 podman[257139]: 2025-11-29 07:43:52.842487356 +0000 UTC m=+0.373498129 container start 4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mclaren, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:43:52 np0005539550 podman[257139]: 2025-11-29 07:43:52.84624694 +0000 UTC m=+0.377257973 container attach 4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mclaren, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:43:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:52.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:53 np0005539550 ceph-mon[74435]: Health check failed: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops (SLOW_OPS)
Nov 29 02:43:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:43:53 np0005539550 python3.9[257334]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 02:43:53 np0005539550 bold_mclaren[257167]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:43:53 np0005539550 bold_mclaren[257167]: --> relative data size: 1.0
Nov 29 02:43:53 np0005539550 bold_mclaren[257167]: --> All data devices are unavailable
Nov 29 02:43:53 np0005539550 podman[257139]: 2025-11-29 07:43:53.728452521 +0000 UTC m=+1.259463294 container died 4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mclaren, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:43:53 np0005539550 nova_compute[256196]: 2025-11-29 07:43:53.727 256213 INFO nova.scheduler.client.report [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] [req-f17287e5-1d26-4c9c-8a98-165f86930610] Created resource provider record via placement API for resource provider with UUID a73c606e-2495-4af4-b703-8d4b3001fdf5 and name compute-0.ctlplane.example.com.
Nov 29 02:43:53 np0005539550 systemd[1]: Stopping nova_compute container...
Nov 29 02:43:53 np0005539550 systemd[1]: libpod-4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d.scope: Deactivated successfully.
Nov 29 02:43:54 np0005539550 nova_compute[256196]: 2025-11-29 07:43:54.177 256213 DEBUG oslo_concurrency.processutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:43:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0eaccf6f4d149000a0a0bf7fd89e4a901eb6575802991eeb3c7936e45cf7bc27-merged.mount: Deactivated successfully.
Nov 29 02:43:54 np0005539550 podman[257139]: 2025-11-29 07:43:54.256914178 +0000 UTC m=+1.787924951 container remove 4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:43:54 np0005539550 systemd[1]: libpod-conmon-4b81b405ea3178e19a89e808b8435a3eb0d176d942800ee6fd33aabea919cc4d.scope: Deactivated successfully.
Nov 29 02:43:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:54 np0005539550 nova_compute[256196]: 2025-11-29 07:43:54.442 256213 DEBUG oslo_concurrency.lockutils [None req-45798987-61f0-4949-a054-f5f2a24cab59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:43:54 np0005539550 nova_compute[256196]: 2025-11-29 07:43:54.442 256213 DEBUG oslo_concurrency.lockutils [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:43:54 np0005539550 nova_compute[256196]: 2025-11-29 07:43:54.442 256213 DEBUG oslo_concurrency.lockutils [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:43:54 np0005539550 nova_compute[256196]: 2025-11-29 07:43:54.443 256213 DEBUG oslo_concurrency.lockutils [None req-cd1bc086-68c0-4a80-8940-c2de03bc94b4 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:43:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:54.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:54 np0005539550 podman[257535]: 2025-11-29 07:43:54.829746904 +0000 UTC m=+0.022785740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:43:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:43:54 np0005539550 systemd[1]: libpod-2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918.scope: Deactivated successfully.
Nov 29 02:43:54 np0005539550 virtqemud[256287]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 02:43:54 np0005539550 virtqemud[256287]: hostname: compute-0
Nov 29 02:43:54 np0005539550 virtqemud[256287]: End of file while reading data: Input/output error
Nov 29 02:43:54 np0005539550 systemd[1]: libpod-2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918.scope: Consumed 4.214s CPU time.
Nov 29 02:43:54 np0005539550 podman[257535]: 2025-11-29 07:43:54.963750331 +0000 UTC m=+0.156789137 container create 74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:43:54 np0005539550 podman[257348]: 2025-11-29 07:43:54.976173341 +0000 UTC m=+1.217339233 container died 2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 29 02:43:55 np0005539550 systemd[1]: Started libpod-conmon-74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa.scope.
Nov 29 02:43:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6-merged.mount: Deactivated successfully.
Nov 29 02:43:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918-userdata-shm.mount: Deactivated successfully.
Nov 29 02:43:55 np0005539550 podman[257535]: 2025-11-29 07:43:55.061896852 +0000 UTC m=+0.254935678 container init 74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:43:55 np0005539550 podman[257535]: 2025-11-29 07:43:55.069928492 +0000 UTC m=+0.262967298 container start 74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:43:55 np0005539550 jolly_zhukovsky[257566]: 167 167
Nov 29 02:43:55 np0005539550 systemd[1]: libpod-74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa.scope: Deactivated successfully.
Nov 29 02:43:55 np0005539550 podman[257348]: 2025-11-29 07:43:55.185048497 +0000 UTC m=+1.426214389 container cleanup 2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251125)
Nov 29 02:43:55 np0005539550 podman[257348]: nova_compute
Nov 29 02:43:55 np0005539550 podman[257535]: 2025-11-29 07:43:55.216561984 +0000 UTC m=+0.409600790 container attach 74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:43:55 np0005539550 podman[257535]: 2025-11-29 07:43:55.21838449 +0000 UTC m=+0.411423296 container died 74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:43:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a2043c4d0ae5c3bcb0333a052882893acbf5c1c8305a5ff54fee63cad0deabee-merged.mount: Deactivated successfully.
Nov 29 02:43:55 np0005539550 podman[257535]: 2025-11-29 07:43:55.26122981 +0000 UTC m=+0.454268616 container remove 74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_zhukovsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:43:55 np0005539550 podman[257581]: nova_compute
Nov 29 02:43:55 np0005539550 systemd[1]: libpod-conmon-74cf4ac7fa04ce0b53725f773b315ffa8d7fff1be438ee4a00fd78c978bc53aa.scope: Deactivated successfully.
Nov 29 02:43:55 np0005539550 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 02:43:55 np0005539550 systemd[1]: Stopped nova_compute container.
Nov 29 02:43:55 np0005539550 systemd[1]: Starting nova_compute container...
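[Editor's note] The Stopped/Starting pair above shows systemd cycling the edpm_nova_compute.service unit that wraps the nova_compute podman container. A minimal sketch for reproducing and observing such a restart on the compute node (assuming root shell access; the unit name is taken directly from the log):

    # Restart the podman-backed nova_compute service and follow its unit log.
    systemctl restart edpm_nova_compute.service
    journalctl -u edpm_nova_compute.service -f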
Nov 29 02:43:55 np0005539550 podman[257615]: 2025-11-29 07:43:55.508349901 +0000 UTC m=+0.124746076 container create ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:43:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f042c3f193c16992665fa4237b9c735433c597f6ca8f96d8e2fc155c84dac6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 podman[257600]: 2025-11-29 07:43:55.532497174 +0000 UTC m=+0.234071486 container init 2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 29 02:43:55 np0005539550 systemd[1]: Started libpod-conmon-ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86.scope.
Nov 29 02:43:55 np0005539550 podman[257600]: 2025-11-29 07:43:55.54636361 +0000 UTC m=+0.247937902 container start 2b907bf91be0d0b61341427873a730f68629911b7acf4cbef5b5890af7db3918 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + sudo -E kolla_set_configs
Nov 29 02:43:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a837cd361862320619a7583a5d847de3bfdae55bf63733570b24929a42ca55/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a837cd361862320619a7583a5d847de3bfdae55bf63733570b24929a42ca55/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a837cd361862320619a7583a5d847de3bfdae55bf63733570b24929a42ca55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a837cd361862320619a7583a5d847de3bfdae55bf63733570b24929a42ca55/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:55 np0005539550 podman[257615]: 2025-11-29 07:43:55.486339181 +0000 UTC m=+0.102735356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Validating config file
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying service configuration files
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /etc/ceph
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Creating directory /etc/ceph
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Writing out command to execute
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:55 np0005539550 nova_compute[257631]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 02:43:55 np0005539550 nova_compute[257631]: ++ cat /run_command
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + CMD=nova-compute
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + ARGS=
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + sudo kolla_copy_cacerts
Nov 29 02:43:55 np0005539550 podman[257600]: nova_compute
Nov 29 02:43:55 np0005539550 systemd[1]: Started nova_compute container.
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + [[ ! -n '' ]]
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + . kolla_extend_start
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 02:43:55 np0005539550 nova_compute[257631]: Running command: 'nova-compute'
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + umask 0022
Nov 29 02:43:55 np0005539550 nova_compute[257631]: + exec nova-compute
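[Editor's note] The `+`-prefixed lines above are the bash trace of Kolla's entrypoint: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json, the command is read from /run_command, and the shell finally execs nova-compute so it becomes PID 1 of the container. A sketch for inspecting the same inputs from the host (container name and file paths are taken from the log; podman exec assumes the container is still running):

    # Show the command Kolla will exec and the copy rules it applied.
    podman exec nova_compute cat /run_command
    podman exec nova_compute cat /var/lib/kolla/config_files/config.json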
Nov 29 02:43:55 np0005539550 podman[257615]: 2025-11-29 07:43:55.668550452 +0000 UTC m=+0.284946627 container init ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:43:55 np0005539550 podman[257615]: 2025-11-29 07:43:55.67767691 +0000 UTC m=+0.294073065 container start ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:43:55 np0005539550 podman[257615]: 2025-11-29 07:43:55.681902725 +0000 UTC m=+0.298298890 container attach ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tesla, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:43:56 np0005539550 ceph-mon[74435]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 41 sec, mon.compute-1 has slow ops)
Nov 29 02:43:56 np0005539550 ceph-mon[74435]: Cluster is now healthy
Nov 29 02:43:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:43:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:43:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:56 np0005539550 nice_tesla[257639]: {
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:    "0": [
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:        {
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "devices": [
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "/dev/loop3"
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            ],
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "lv_name": "ceph_lv0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "lv_size": "7511998464",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "name": "ceph_lv0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "tags": {
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.cluster_name": "ceph",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.crush_device_class": "",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.encrypted": "0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.osd_id": "0",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.type": "block",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:                "ceph.vdo": "0"
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            },
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "type": "block",
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:            "vg_name": "ceph_vg0"
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:        }
Nov 29 02:43:56 np0005539550 nice_tesla[257639]:    ]
Nov 29 02:43:56 np0005539550 nice_tesla[257639]: }
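[Editor's note] The nice_tesla container printed what looks like `ceph-volume lvm list --format json` output: a map from OSD id to its backing logical volumes, here OSD 0 on /dev/ceph_vg0/ceph_lv0 carved from /dev/loop3. A sketch for reproducing the query with the same image, assuming /dev and the host's LVM metadata are visible to the container (the mount flags are illustrative, not copied from whatever cephadm actually passes):

    # List ceph-volume's view of local OSD LVs and pull out OSD 0's block device.
    podman run --rm --privileged -v /dev:/dev \
        quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        ceph-volume lvm list --format json | jq -r '."0"[0].lv_path'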
Nov 29 02:43:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:56.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:56 np0005539550 podman[257615]: 2025-11-29 07:43:56.493502463 +0000 UTC m=+1.109898618 container died ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tesla, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:43:56 np0005539550 systemd[1]: libpod-ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86.scope: Deactivated successfully.
Nov 29 02:43:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-54a837cd361862320619a7583a5d847de3bfdae55bf63733570b24929a42ca55-merged.mount: Deactivated successfully.
Nov 29 02:43:56 np0005539550 podman[257615]: 2025-11-29 07:43:56.854841437 +0000 UTC m=+1.471237592 container remove ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 02:43:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:56.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
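[Editor's note] The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, each answered 200 with near-zero latency, have the shape of load-balancer health probes against radosgw rather than user traffic. Issuing the same probe by hand looks roughly like this (the port is an assumption, since these lines do not show it):

    # Send a HEAD / to radosgw like the health checker does; 8080 is a guess.
    curl -sI "http://192.168.122.102:${RGW_PORT:-8080}/"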
Nov 29 02:43:56 np0005539550 systemd[1]: libpod-conmon-ebb94a15f070596576186bb22c53494cd316c26542fc9860ee75ebf92d08bb86.scope: Deactivated successfully.
Nov 29 02:43:57 np0005539550 podman[257835]: 2025-11-29 07:43:57.430694007 +0000 UTC m=+0.022153334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:57 np0005539550 podman[257835]: 2025-11-29 07:43:57.695698995 +0000 UTC m=+0.287158302 container create e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:43:57 np0005539550 nova_compute[257631]: 2025-11-29 07:43:57.724 257641 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:57 np0005539550 nova_compute[257631]: 2025-11-29 07:43:57.724 257641 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:57 np0005539550 nova_compute[257631]: 2025-11-29 07:43:57.724 257641 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 02:43:57 np0005539550 nova_compute[257631]: 2025-11-29 07:43:57.725 257641 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 29 02:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:57 np0005539550 systemd[1]: Started libpod-conmon-e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1.scope.
Nov 29 02:43:57 np0005539550 nova_compute[257631]: 2025-11-29 07:43:57.881 257641 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:43:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:57 np0005539550 nova_compute[257631]: 2025-11-29 07:43:57.907 257641 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:43:57 np0005539550 nova_compute[257631]: 2025-11-29 07:43:57.908 257641 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:43:57 np0005539550 podman[257835]: 2025-11-29 07:43:57.996108768 +0000 UTC m=+0.587568095 container init e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:43:58 np0005539550 podman[257835]: 2025-11-29 07:43:58.004248542 +0000 UTC m=+0.595707849 container start e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:43:58 np0005539550 fervent_cerf[257855]: 167 167
Nov 29 02:43:58 np0005539550 systemd[1]: libpod-e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1.scope: Deactivated successfully.
Nov 29 02:43:58 np0005539550 podman[257835]: 2025-11-29 07:43:58.060951368 +0000 UTC m=+0.652410695 container attach e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:43:58 np0005539550 podman[257835]: 2025-11-29 07:43:58.061474411 +0000 UTC m=+0.652933728 container died e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:43:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:43:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0a6e1f8e041f38ba787084f3f45b93c85e3cb6223db3cfde239dc7ff80d8a1ab-merged.mount: Deactivated successfully.
Nov 29 02:43:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:43:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:43:58 np0005539550 podman[257835]: 2025-11-29 07:43:58.465440489 +0000 UTC m=+1.056899786 container remove e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_cerf, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:58 np0005539550 systemd[1]: libpod-conmon-e29c96316a78eec7c8eb82f610d963c2462de337281fd9cb892e37d8ad14c1d1.scope: Deactivated successfully.
Nov 29 02:43:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:43:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:58 np0005539550 podman[257963]: 2025-11-29 07:43:58.612951613 +0000 UTC m=+0.023966550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:43:58 np0005539550 nova_compute[257631]: 2025-11-29 07:43:58.903 257641 INFO nova.virt.driver [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 02:43:58 np0005539550 podman[257963]: 2025-11-29 07:43:58.926692688 +0000 UTC m=+0.337707605 container create d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:43:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:43:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:43:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:43:58.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:43:58 np0005539550 python3.9[258025]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
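[Editor's note] The ansible line above is the containers.podman.podman_container module ensuring a container named nova_compute_init reaches state=started; nearly every parameter is None, so the module only reconciles existence and run state. Roughly the same check done by hand with the podman CLI (a sketch of the effect, not what the module literally runs):

    # Start nova_compute_init only if the container already exists.
    podman container exists nova_compute_init && podman start nova_compute_init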
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.015 257641 INFO nova.compute.provider_config [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 02:43:59 np0005539550 systemd[1]: Started libpod-conmon-d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99.scope.
Nov 29 02:43:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d591578803de0e0e6b91225fe55fa71c2cd970568291f71461ab98c19ee19ecf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d591578803de0e0e6b91225fe55fa71c2cd970568291f71461ab98c19ee19ecf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d591578803de0e0e6b91225fe55fa71c2cd970568291f71461ab98c19ee19ecf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d591578803de0e0e6b91225fe55fa71c2cd970568291f71461ab98c19ee19ecf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539550 podman[257963]: 2025-11-29 07:43:59.085974146 +0000 UTC m=+0.496989083 container init d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 02:43:59 np0005539550 podman[257963]: 2025-11-29 07:43:59.094552051 +0000 UTC m=+0.505566958 container start d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:43:59 np0005539550 podman[257963]: 2025-11-29 07:43:59.0989274 +0000 UTC m=+0.509942307 container attach d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:43:59 np0005539550 systemd[1]: Started libpod-conmon-d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9.scope.
Nov 29 02:43:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:43:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da121e964eada1738e4082c6b5533b80f587fc177bc30e4e6e21af2449cff9e6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da121e964eada1738e4082c6b5533b80f587fc177bc30e4e6e21af2449cff9e6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da121e964eada1738e4082c6b5533b80f587fc177bc30e4e6e21af2449cff9e6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.195 257641 DEBUG oslo_concurrency.lockutils [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.196 257641 DEBUG oslo_concurrency.lockutils [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.196 257641 DEBUG oslo_concurrency.lockutils [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.196 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.197 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.197 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.197 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.197 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.197 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.197 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.198 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.198 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.198 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.198 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.198 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.198 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.198 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.199 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.199 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.199 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.199 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.199 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.199 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.200 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.200 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.200 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.200 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.200 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.200 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.201 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.201 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.201 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.201 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.201 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.202 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.202 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.202 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.202 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.202 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.202 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.203 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.203 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.203 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.203 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.204 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.204 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.204 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.204 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.204 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.204 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.205 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.205 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.205 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.205 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.205 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.206 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.206 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.206 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.206 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.206 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.207 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.207 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.207 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.208 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.208 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.208 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.209 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.210 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.210 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.210 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.210 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.211 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 podman[258056]: 2025-11-29 07:43:59.211598504 +0000 UTC m=+0.143681090 container init d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.211 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.212 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.212 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.212 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.212 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.212 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.213 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.213 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.213 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.213 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.213 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.214 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.214 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.214 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.214 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.214 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.214 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.215 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.215 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.215 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.215 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.215 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.216 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.216 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.216 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.216 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.216 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.216 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.217 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.217 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.217 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.217 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.217 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.217 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.218 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.218 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.218 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.218 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.218 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.218 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.219 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.219 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.219 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.219 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.219 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.219 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.220 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.220 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.220 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 02:43:59 np0005539550 podman[258056]: 2025-11-29 07:43:59.22068327 +0000 UTC m=+0.152765856 container start d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.220 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.221 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.221 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.222 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.222 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.222 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.222 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.222 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.223 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.223 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.223 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.223 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.223 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.224 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.224 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.224 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.224 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.224 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.224 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.225 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.225 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.225 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.225 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.225 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.225 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.225 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.226 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.226 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.226 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.226 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.226 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.226 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.227 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.227 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.227 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.227 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.227 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.227 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.228 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.228 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.228 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.228 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.228 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.228 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.229 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.229 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.229 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.229 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.229 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.229 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.230 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.230 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.230 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.230 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.230 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.231 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.231 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.231 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.231 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.231 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.231 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.232 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.232 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.232 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.232 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.232 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.232 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.232 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.233 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.233 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.233 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.233 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.233 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.234 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.234 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.234 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.234 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.234 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.234 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.235 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.235 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.235 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.235 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.235 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.235 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.236 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.236 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.236 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.236 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.236 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.236 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.237 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
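
The [cache] block above is oslo.cache wiring at near-default values: a single memcached endpoint (localhost:11211), a 1.0 s socket timeout, and TLS disabled. A minimal sketch of how oslo.config registers and then exposes such a group, with option names copied from the log for illustration rather than taken from nova's actual registration code:

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.ListOpt('memcache_servers', default=['localhost:11211']),
            cfg.FloatOpt('memcache_socket_timeout', default=1.0),
            cfg.BoolOpt('tls_enabled', default=False),
        ],
        group='cache',
    )
    CONF([])  # parse an empty argv; defaults apply
    print(CONF.cache.memcache_servers)  # ['localhost:11211']
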
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.237 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.237 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.237 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.237 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.238 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.238 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.238 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.238 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.238 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.238 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.239 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.239 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.239 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.239 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.239 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
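
cinder.catalog_info = volumev3:cinderv3:internalURL packs three fields into one option: the service type, service name, and endpoint interface used to pick the Cinder endpoint out of the Keystone service catalog. A one-line sketch of that split, with the format inferred from the logged value:

    service_type, service_name, interface = 'volumev3:cinderv3:internalURL'.split(':')
    assert (service_type, service_name, interface) == ('volumev3', 'cinderv3', 'internalURL')
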
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.239 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.240 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.240 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.240 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.240 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.240 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.241 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.241 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.241 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.241 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.241 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.241 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
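
Two sentinels in the [compute] block are easy to misread: cpu_dedicated_set / cpu_shared_set = None means no pinning ranges are configured, and max_disk_devices_to_attach = -1 is nova's documented "no limit" value. A hypothetical helper making the -1 convention explicit:

    def disk_attach_limit(raw: int) -> float:
        # -1 is the documented "unlimited" sentinel for max_disk_devices_to_attach
        return float('inf') if raw == -1 else float(raw)

    assert disk_attach_limit(-1) == float('inf')
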
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.242 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.242 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.242 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.242 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.242 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
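
Worked conversion, for reference: consoleauth.token_ttl is in seconds, so console access tokens issued via this node expire after ten minutes.

    assert 600 / 60 == 10  # consoleauth.token_ttl in minutes
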
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.243 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.243 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.243 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.243 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.243 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.243 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.244 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.244 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.244 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.245 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.245 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.245 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.246 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.246 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.247 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.247 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.247 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.247 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.247 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.247 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.248 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.248 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.248 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.248 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.248 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.249 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.249 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.249 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.249 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.249 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.249 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.250 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.250 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.250 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.250 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.250 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.250 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.250 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 python3.9[258025]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.251 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
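
The [database] pool values combine in the standard SQLAlchemy way that oslo.db builds on: max_pool_size steady-state connections plus max_overflow burst connections, each recycled after connection_recycle_time. A worked sketch of the ceiling implied by the values above:

    max_pool_size, max_overflow = 5, 50
    peak_connections = max_pool_size + max_overflow
    assert peak_connections == 55  # hard cap on concurrent DB connections
    assert 3600 / 60 == 60         # connections recycled after 60 minutes
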
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.251 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.251 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.251 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.251 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.251 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.252 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.252 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.252 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.252 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.252 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.252 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.252 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.253 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.253 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.253 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.253 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.253 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.253 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.253 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.254 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
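
database.connection, database.slave_connection, and their api_database twins print as **** because oslo.config masks options registered with secret=True when log_opt_values() runs; the URLs, which embed credentials, never reach the log. A minimal reproduction, with an illustrative URL standing in for the masked one:

    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.StrOpt('connection', secret=True)], group='api_database')
    CONF([])
    CONF.set_override('connection', 'mysql+pymysql://nova:secret@db/nova_api',
                      group='api_database')
    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)  # connection = ****
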
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.254 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.254 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.254 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.254 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
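
One non-obvious reading in the ephemeral-encryption block: aes-xts-plain64 is a split-key mode, so key_size = 512 means two 256-bit subkeys, i.e. AES-256 operating in XTS mode rather than a 512-bit cipher.

    key_size_bits = 512
    aes_subkey_bits = key_size_bits // 2   # XTS splits the key in half
    assert aes_subkey_bits == 256          # effectively AES-256
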
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.254 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.255 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.255 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.255 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.255 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.255 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.255 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.255 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.256 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.256 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.256 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.256 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.256 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.256 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.256 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.257 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.257 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.257 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.257 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.257 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.257 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.257 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.258 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.258 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.258 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.258 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.258 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.258 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.259 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.259 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
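
The [glance] endpoint knobs (service_type image, valid_interfaces ['internal'], region_name regionOne) form the usual keystoneauth selection triple. A sketch of the equivalent lookup, assuming keystoneauth1 is used directly rather than nova's own client plumbing:

    from keystoneauth1 import adapter, session

    sess = session.Session()            # unauthenticated; for illustration only
    image_api = adapter.Adapter(
        session=sess,
        service_type='image',           # glance.service_type
        interface='internal',           # glance.valid_interfaces = ['internal']
        region_name='regionOne',        # glance.region_name
    )
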
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.259 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.259 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.259 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.259 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.259 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.260 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.260 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.260 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.260 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.260 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.260 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:43:59
Nov 29 02:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Nov 29 02:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
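
Interleaved with the config dump, the ceph-mgr balancer finishes a pass in upmap mode: it scanned the eleven pools listed and prepared 0 of a possible 10 upmap changes, meaning data placement is already as even as it can make it within the 5% max-misplaced budget. A quick check of the pool count as logged:

    pools = ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images',
             'volumes', 'default.rgw.log', 'backups', '.mgr',
             'cephfs.cephfs.data', 'default.rgw.control', 'vms']
    assert len(pools) == 11
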
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.260 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.261 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.261 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.261 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.261 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.261 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.261 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.261 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
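
The [hyperv] group appears at pure defaults even though this host is a KVM guest running libvirt: nova registers every driver's option group regardless of which driver is active, so values like qemu_img_cmd = qemu-img.exe are simply never used here. For the one value with non-obvious semantics, dynamic_memory_ratio = 1.0 disables Hyper-V dynamic memory; under the documented scheme, a ratio R allocates total_ram / R at startup:

    ram_mb, ratio = 4096, 1.0
    startup_mb = ram_mb / ratio
    assert startup_mb == 4096  # ratio 1.0: full allocation, no ballooning
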
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.262 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.262 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.262 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.262 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.262 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.262 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.263 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.263 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
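
Worked conversions for the image-cache timers above (all values are in seconds):

    assert 2400 / 60 == 40        # manager_interval: cache manager runs every 40 min
    assert 86400 / 3600 == 24     # unused base images kept at least 24 h
    assert 3600 / 3600 == 1       # unused resized images kept at least 1 h
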
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.263 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.263 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.263 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.263 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.263 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.264 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.264 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.264 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.264 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.264 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.264 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.264 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.265 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.265 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.265 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.265 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.265 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.265 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.265 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.266 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.266 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.266 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.266 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.266 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.266 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.267 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
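
The ironic retry pair bounds how long nova waits on a busy bare-metal API: api_max_retries = 60 attempts at api_retry_interval = 2 s is roughly a two-minute budget.

    assert 60 * 2 == 120  # seconds of ironic API retry budget, ignoring call latency
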
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.267 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.267 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.267 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.267 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.267 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.268 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.268 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.268 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.268 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.268 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.268 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.268 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.269 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.269 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.269 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.269 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.269 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.269 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.269 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.270 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.270 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.270 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.270 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.270 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.270 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.271 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.271 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.271 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.271 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.271 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.271 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.271 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.272 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.272 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.272 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 02:43:59 np0005539550 nova_compute_init[258079]: INFO:nova_statedir:Nova statedir ownership complete
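The nova_compute_init messages above trace a single ownership pass over /var/lib/nova: for each path the script logs the current uid/gid, chowns anything not already 42436:42436, and resets the SELinux context on directories to system_u:object_r:container_file_t:s0. A rough Python sketch of that loop, using plain os calls and the chcon CLI; the real logic lives in the mounted /sbin/nova_statedir_ownership.py and also honors NOVA_STATEDIR_OWNERSHIP_SKIP (see the container config further below), so treat this as an approximation:

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436          # values taken from this journal
    SECONTEXT = 'system_u:object_r:container_file_t:s0'
    SKIP = os.environ.get('NOVA_STATEDIR_OWNERSHIP_SKIP')

    def ensure_ownership(path):
        st = os.lstat(path)
        print('Checking uid: %d gid: %d path: %s' % (st.st_uid, st.st_gid, path))
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            print('Changing ownership of %s from %d:%d to %d:%d'
                  % (path, st.st_uid, st.st_gid, TARGET_UID, TARGET_GID))
            os.lchown(path, TARGET_UID, TARGET_GID)
        else:
            print('Ownership of %s already %d:%d' % (path, TARGET_UID, TARGET_GID))
        # The journal shows the context being reset on directories.
        if os.path.isdir(path) and not os.path.islink(path):
            print('Setting selinux context of %s to %s' % (path, SECONTEXT))
            subprocess.run(['chcon', SECONTEXT, path], check=True)

    for root, dirs, files in os.walk('/var/lib/nova'):
        ensure_ownership(root)
        for name in files:
            p = os.path.join(root, name)
            if p != SKIP:
                ensure_ownership(p)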
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.272 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.272 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.272 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.273 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.273 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.273 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.273 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.273 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.274 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.274 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.274 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.274 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.274 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.274 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.275 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.275 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.275 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.275 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.275 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.275 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.276 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.276 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.276 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.276 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.276 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.276 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.277 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.277 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.277 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.277 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.277 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.278 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.278 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 systemd[1]: libpod-d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9.scope: Deactivated successfully.
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.278 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.278 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.278 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.279 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.279 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.279 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.279 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.279 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.279 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.280 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.280 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.280 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.280 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.280 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.281 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.281 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.281 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.281 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.281 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.281 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.282 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.282 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.282 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.282 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.282 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.283 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.283 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.283 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.283 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.283 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.284 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.284 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.284 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.284 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.284 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.284 257641 WARNING oslo_config.cfg [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 02:43:59 np0005539550 nova_compute[257631]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 02:43:59 np0005539550 nova_compute[257631]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 02:43:59 np0005539550 nova_compute[257631]: and ``live_migration_inbound_addr`` respectively.
Nov 29 02:43:59 np0005539550 nova_compute[257631]: ).  Its value may be silently ignored in the future.#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.285 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
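The deprecation warning above spells out the migration path: drop the templated URI and express the same intent through the two replacement options. Assuming the qemu+tls scheme seen in the logged value, the equivalent [libvirt] stanza in nova.conf would look roughly like this (illustrative only; the address placeholder is not taken from this host's configuration):

    [libvirt]
    # replaces: live_migration_uri = qemu+tls://%s/system
    live_migration_scheme = tls
    # optionally pin the address the %s placeholder used to receive
    live_migration_inbound_addr = <migration target address>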
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.285 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.285 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.285 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.285 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.286 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.286 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.286 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.286 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.286 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.286 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.287 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.287 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 podman[258080]: 2025-11-29 07:43:59.287349925 +0000 UTC m=+0.022147344 container died d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
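The config_data blob in the podman event above is effectively the launch recipe for the init container: a one-shot (restart 'never'), network-less root container that bind-mounts /var/lib/nova shared, injects the ownership script, and pipes its output to syslog via logger -t nova_compute_init. An approximate hand-run equivalent, reconstructed from that blob rather than from any deployed tooling:

    podman run --name nova_compute_init --net none --user root \
        --security-opt label=disable \
        -e NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id \
        -v /dev/log:/dev/log \
        -v /var/lib/nova:/var/lib/nova:shared \
        -v /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z \
        -v /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z \
        quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified \
        bash -c 'python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init'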
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.287 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.287 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.287 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.288 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.288 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.288 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rbd_secret_uuid        = b66774a7-56d9-5535-bd8c-681234404870 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.288 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
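Taken together, the libvirt.images_* and libvirt.rbd_* values logged above describe a Ceph-backed ephemeral store: instance disks go to the 'vms' RBD pool as the 'openstack' Ceph user. Reassembled as a nova.conf fragment purely from the logged values (the host's actual config file is not shown in this journal):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = b66774a7-56d9-5535-bd8c-681234404870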
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.288 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.288 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.289 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.289 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.289 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.289 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.289 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.289 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.290 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.290 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.290 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.290 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.290 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.291 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.291 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.291 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.291 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.291 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.292 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.292 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.292 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.292 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.292 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.292 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.292 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.293 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.293 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.293 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.293 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.293 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.293 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.294 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.294 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.294 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.294 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.294 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.294 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.295 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.295 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.295 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.295 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.295 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.295 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.295 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.296 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.296 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.296 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.296 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.296 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.296 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.297 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.297 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.297 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.297 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.297 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.297 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.297 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.298 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.298 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.298 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.298 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.298 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.298 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.298 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.299 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.299 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.299 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.299 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.299 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.299 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.299 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.300 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.300 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.300 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.300 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.300 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.300 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.300 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.301 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.301 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.301 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.301 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.301 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.301 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.302 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.302 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.302 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.302 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.302 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.302 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.302 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.303 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.303 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.303 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.303 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.303 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.303 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.303 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.304 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.304 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.304 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.304 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.304 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.304 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.305 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
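
[editor's note] Stripped of the None/default noise, the placement.* dump shows how this compute node authenticates to the Placement API via Keystone. An equivalent [placement] fragment (file location assumed per oslo.config convention; the password is masked in the log and left masked here):

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    password = ****
    user_domain_name = Default
    project_name = service
    project_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal
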
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.305 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.305 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.305 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.305 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.306 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.306 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.306 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.306 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.306 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.306 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.306 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.307 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.307 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
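
[editor's note] The quota.* group reduces to a small [quota] fragment with the logged values. These happen to match the upstream defaults, so the section may simply be absent from the real file (an assumption; the dump cannot distinguish explicit settings from defaults):

    [quota]
    driver = nova.quota.DbQuotaDriver
    instances = 10
    cores = 20
    ram = 51200
    count_usage_from_placement = false
    recheck_quota = true
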
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.307 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.307 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.308 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.308 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.308 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.308 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.308 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.309 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.309 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.309 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.309 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.310 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.310 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
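
[editor's note] These scheduler.* options are consumed by the nova-scheduler service; nova-compute merely registers and echoes them here. As an INI sketch of the logged values (assuming the conventional section layout):

    [scheduler]
    max_attempts = 3
    max_placement_results = 1000
    discover_hosts_in_cells_interval = -1
    query_placement_for_availability_zone = true
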
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.310 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.311 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.311 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.311 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.311 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.311 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.311 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.311 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.312 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.312 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.312 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.312 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.312 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.312 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.313 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.313 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.313 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.313 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.313 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.313 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.313 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.314 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.314 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.314 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
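
[editor's note] The operative part of the filter_scheduler.* dump is the filter list. Equivalent [filter_scheduler] fragment (list values use the oslo.config comma-separated form; weight multipliers logged at their 1.0 defaults are omitted):

    [filter_scheduler]
    available_filters = nova.scheduler.filters.all_filters
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
    host_subset_size = 1
    track_instance_changes = true
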
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.314 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.314 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.314 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.314 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.315 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.315 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.315 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.315 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.315 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.315 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
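
[editor's note] serial_console.enabled = False means the serial console feature is off on this node and the remaining serial_console.* values are inert defaults. For reference, the logged values as a fragment (a sketch; only relevant if the feature were turned on):

    [serial_console]
    enabled = false
    port_range = 10000:20000
    base_url = ws://127.0.0.1:6083/
    proxyclient_address = 127.0.0.1
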
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.316 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.316 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.316 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.316 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.316 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.316 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.317 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.317 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.317 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.317 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
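
[editor's note] service_user.send_service_user_token = True is the notable entry here: nova attaches a service token alongside the user's token on requests it forwards to other services, so long-running operations survive user-token expiry. The actual credentials are not in this excerpt; a matching fragment would look like (anything beyond the two logged keys is assumed to live elsewhere in the file):

    [service_user]
    send_service_user_token = true
    auth_type = password
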
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.317 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.317 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.318 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.318 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.318 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.318 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.318 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.318 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.318 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.319 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.319 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.319 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.319 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.319 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.319 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.319 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.320 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
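
[editor's note] Two meaningful settings in this stretch: spice.enabled = False (SPICE consoles off) and upgrade_levels.compute = auto, which makes the compute RPC API determine its version cap from the service versions recorded in the database rather than a hard-coded release pin. As INI:

    [upgrade_levels]
    compute = auto

    [spice]
    enabled = false
    agent_enabled = true
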
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.320 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.320 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.320 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.320 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.320 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.321 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.321 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.321 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.321 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.321 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.321 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.321 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.322 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.322 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.322 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.322 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.322 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.322 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.323 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.323 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.323 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.323 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.323 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.323 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.324 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.324 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.324 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.324 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.324 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.324 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.325 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.325 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.325 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.325 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.325 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.326 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.326 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.326 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.326 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.326 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.327 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.327 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.327 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.327 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
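
[editor's note] The vnc.* block is the console path actually in use (vnc.enabled = True). Reconstructed fragment (the unset vencrypt options are consistent with auth_schemes = ['none'], i.e. no TLS between proxy and hypervisor):

    [vnc]
    enabled = true
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    auth_schemes = none
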
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.327 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.328 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.328 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.328 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.328 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.328 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.328 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.329 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.329 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.329 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.329 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.330 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.330 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.330 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.330 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.330 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.330 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.331 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.331 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.331 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.331 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
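
[editor's note] Of the workarounds.* flags, four are True and the rest sit at False. Equivalent fragment (a sketch; the dump cannot show which of these were set explicitly versus defaulted):

    [workarounds]
    enable_qemu_monitor_announce_self = true
    qemu_monitor_announce_self_count = 3
    qemu_monitor_announce_self_interval = 1
    handle_virt_lifecycle_events = true
    reserve_disk_resource_for_image_cache = true
    skip_cpu_compare_on_dest = true
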
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.331 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.332 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.332 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.332 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.332 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.332 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.333 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.333 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.333 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.333 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.334 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.334 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.334 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.334 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.334 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.335 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.335 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.335 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.335 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.335 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.336 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.336 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.336 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.336 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.336 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.337 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.337 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.337 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.337 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.337 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.338 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.338 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.338 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.338 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.338 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.339 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.339 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.339 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.339 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.339 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.339 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.340 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.340 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.340 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.340 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.340 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.341 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.341 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.341 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.341 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.341 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.342 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.342 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.342 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.342 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.342 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.343 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.343 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.343 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.343 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.343 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.344 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.344 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.344 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.344 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.344 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.344 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.344 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.345 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.345 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.345 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.345 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.345 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.345 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.346 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.346 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.346 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.346 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.346 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.346 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.346 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.347 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.347 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.347 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.347 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.347 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.347 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.348 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.348 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.348 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.348 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.348 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.348 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.348 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.349 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.349 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.349 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.349 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.349 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.349 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.349 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.350 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.350 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.350 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.350 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.350 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.350 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.351 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.351 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.351 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.351 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.351 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.351 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.352 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.352 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.352 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.352 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.352 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.353 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.353 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.353 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.353 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.353 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.353 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.354 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.354 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.354 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.354 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.354 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.354 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.354 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.355 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.355 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.355 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.355 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.355 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.355 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.355 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.356 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.356 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.356 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.356 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.356 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.357 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.357 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.357 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 02:43:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9-userdata-shm.mount: Deactivated successfully.
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.357 257641 DEBUG oslo_service.service [None req-234ce1b0-3f12-44be-aa61-492469168fd2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.358 257641 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 02:43:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-da121e964eada1738e4082c6b5533b80f587fc177bc30e4e6e21af2449cff9e6-merged.mount: Deactivated successfully.
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.377 257641 INFO nova.virt.node [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Determined node identity a73c606e-2495-4af4-b703-8d4b3001fdf5 from /var/lib/nova/compute_id
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.378 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.379 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.379 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.379 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 02:43:59 np0005539550 podman[258088]: 2025-11-29 07:43:59.39245403 +0000 UTC m=+0.095960357 container cleanup d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.393 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fe5ba260250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 02:43:59 np0005539550 systemd[1]: libpod-conmon-d70da27a15dbc4f99c38f635887cb498dec444456fefb62a0136599002592dc9.scope: Deactivated successfully.
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.409 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fe5ba260250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.410 257641 INFO nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Connection event '1' reason 'None'
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.418 257641 INFO nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <host>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <uuid>9851e351-ef5d-4a0c-9f85-d561f6a4210f</uuid>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <arch>x86_64</arch>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <microcode version='16777317'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <signature family='23' model='49' stepping='0'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='x2apic'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='tsc-deadline'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='osxsave'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='hypervisor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='tsc_adjust'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='spec-ctrl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='stibp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='arch-capabilities'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='cmp_legacy'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='topoext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='virt-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='lbrv'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='tsc-scale'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='vmcb-clean'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='pause-filter'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='pfthreshold'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='rdctl-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='mds-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature name='pschange-mc-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <pages unit='KiB' size='4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <pages unit='KiB' size='2048'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <pages unit='KiB' size='1048576'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <power_management>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <suspend_mem/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </power_management>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <iommu support='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <migration_features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <live/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <uri_transports>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <uri_transport>tcp</uri_transport>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <uri_transport>rdma</uri_transport>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </uri_transports>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </migration_features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <topology>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <cells num='1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <cell id='0'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          <memory unit='KiB'>7864320</memory>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          <pages unit='KiB' size='4'>1966080</pages>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          <distances>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <sibling id='0' value='10'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          </distances>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          <cpus num='8'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:          </cpus>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        </cell>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </cells>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </topology>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <cache>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </cache>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <secmodel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model>selinux</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <doi>0</doi>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </secmodel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <secmodel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model>dac</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <doi>0</doi>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </secmodel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </host>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <guest>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <os_type>hvm</os_type>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <arch name='i686'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <wordsize>32</wordsize>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <domain type='qemu'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <domain type='kvm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </arch>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <pae/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <nonpae/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <apic default='on' toggle='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <cpuselection/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <deviceboot/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <externalSnapshot/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </guest>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <guest>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <os_type>hvm</os_type>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <arch name='x86_64'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <wordsize>64</wordsize>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <domain type='qemu'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <domain type='kvm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </arch>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <acpi default='on' toggle='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <apic default='on' toggle='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <cpuselection/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <deviceboot/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <disksnapshot default='on' toggle='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <externalSnapshot/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </guest>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 
Nov 29 02:43:59 np0005539550 nova_compute[257631]: </capabilities>
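[Editor's sketch, not part of the log: the host <capabilities> document above, and the per-arch <domainCapabilities> documents that nova-compute requests in the DEBUG records below, can both be fetched directly with the python-libvirt bindings. This is a minimal illustration, assuming a read-only qemu:///system connection on this host and the emulator path reported above.]

    import libvirt

    # Read-only connection to the local QEMU/KVM driver (assumed URI;
    # nova-compute talks to the same driver on this host).
    conn = libvirt.openReadOnly('qemu:///system')

    # Host-wide capabilities: the <capabilities> XML logged above.
    print(conn.getCapabilities())

    # Per-arch/machine-type capabilities: the <domainCapabilities> XML that
    # the DEBUG lines below request for arch=i686 and machine_type=pc.
    # Arguments mirror values from the log; the final flags argument is
    # unused by libvirt and must be 0.
    print(conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator binary from the <emulator> element
        'i686',                   # guest architecture
        'pc',                     # machine-type alias (canonical pc-i440fx-rhel7.6.0)
        'kvm',                    # virt type
        0))

    conn.close()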
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.425 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.428 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 02:43:59 np0005539550 nova_compute[257631]: <domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <arch>i686</arch>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <vcpu max='240'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <os supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='firmware'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>rom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pflash</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>yes</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='secure'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </loader>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>memfd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </memoryBacking>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>disk</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>floppy</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>lun</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ide</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>fdc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>sata</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vnc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </graphics>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <video supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vga</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>none</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>bochs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='mode'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>requisite</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>optional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pci</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hostdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>random</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>path</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>handle</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </filesystem>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emulator</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>external</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>2.0</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </tpm>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </redirdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </channel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </crypto>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>passt</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </interface>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>isa</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </panic>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <console supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>null</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dev</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pipe</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stdio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>udp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tcp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </console>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='features'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vapic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>runtime</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>synic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stimer</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reset</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ipi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>avic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hyperv>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tdx</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </launchSecurity>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: </domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
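[Note: the block above is the raw domainCapabilities document that nova-compute's _get_domain_capabilities helper logs after querying libvirt; a second dump, for arch=i686, follows below. As a minimal sketch of that query (assuming the libvirt-python bindings and the qemu:///system URI; the emulator path, machine type, and virt type are taken from the i686 dump that follows, not from nova's actual call site), the same XML can be fetched directly:

    import libvirt

    # Connect to the system libvirt daemon (URI assumed).
    conn = libvirt.open('qemu:///system')

    # Ask libvirt for the domain capabilities of one
    # emulator/arch/machine/virttype combination; the values mirror the
    # <path>, <arch>, <machine>, and <domain> elements in the dump below.
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulatorbin
        'i686',                   # arch
        'pc-q35-rhel9.8.0',       # machine
        'kvm',                    # virttype
        0)                        # flags
    print(xml)
    conn.close()

The equivalent shell query is: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine q35 --virttype kvm]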
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.434 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 02:43:59 np0005539550 nova_compute[257631]: <domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <arch>i686</arch>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <vcpu max='4096'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <os supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='firmware'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>rom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pflash</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>yes</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='secure'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </loader>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>memfd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </memoryBacking>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>disk</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>floppy</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>lun</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>fdc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>sata</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vnc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </graphics>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <video supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vga</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>none</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>bochs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='mode'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>requisite</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>optional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pci</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hostdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>random</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>path</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>handle</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </filesystem>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emulator</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>external</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>2.0</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </tpm>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </redirdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </channel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </crypto>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>passt</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </interface>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>isa</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </panic>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <console supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>null</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dev</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pipe</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stdio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>udp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tcp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </console>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='features'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vapic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>runtime</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>synic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stimer</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reset</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ipi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>avic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hyperv>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tdx</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </launchSecurity>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: </domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
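The _get_domain_capabilities call referenced above is a thin wrapper around libvirt's getDomainCapabilities API. A minimal sketch of the equivalent standalone query via the libvirt-python bindings (the connection URI is an assumption, not taken from this log; the emulator path, arch, machine type, and virt type are the ones reported in the pc dump below):

    import libvirt

    # Open a read-only connection to the local QEMU/KVM driver
    # (URI is an assumption; Nova manages its own connection).
    conn = libvirt.openReadOnly('qemu:///system')

    # The same kind of query the log shows Nova issuing: returns the
    # <domainCapabilities> document as an XML string.
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # emulator <path>
        'x86_64',                 # <arch>
        'pc',                     # machine type (alias of pc-i440fx-rhel7.6.0)
        'kvm',                    # virt type, matching <domain>kvm</domain>
        0,                        # flags
    )
    print(caps_xml)
    conn.close()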
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.469 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
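Most of each per-machine-type dump that follows is the custom CPU mode's model inventory: every named model carries usable='yes'/'no' (plus optional deprecated/canonical attributes), and each unusable model is followed by a sibling <blockers> element listing the features the host lacks. A minimal parsing sketch with the standard library (a hypothetical helper, not Nova's code) for turning that inventory into a model-to-blockers map:

    import xml.etree.ElementTree as ET

    def cpu_model_blockers(caps_xml: str) -> dict:
        """Map CPU model name -> list of blocking features ([] = usable)."""
        root = ET.fromstring(caps_xml)
        custom = root.find(".//cpu/mode[@name='custom']")
        # <blockers model='X'> elements are siblings of the <model> entries.
        blockers = {
            b.get('model'): [f.get('name') for f in b.findall('feature')]
            for b in custom.findall('blockers')
        }
        return {
            m.text: blockers.get(m.text, []) if m.get('usable') == 'no' else []
            for m in custom.findall('model')
        }

    # For the pc dump below this yields, for example,
    # cpu_model_blockers(caps_xml)['EPYC-Rome'] == ['xsaves'] while
    # cpu_model_blockers(caps_xml)['EPYC-Rome-v4'] == [] (usable).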
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.473 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 02:43:59 np0005539550 nova_compute[257631]: <domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <arch>x86_64</arch>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <vcpu max='240'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <os supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='firmware'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>rom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pflash</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>yes</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='secure'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </loader>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>memfd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </memoryBacking>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>disk</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>floppy</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>lun</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ide</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>fdc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>sata</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vnc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </graphics>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <video supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vga</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>none</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>bochs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='mode'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>requisite</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>optional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pci</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hostdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>random</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>path</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>handle</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </filesystem>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emulator</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>external</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>2.0</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </tpm>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </redirdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </channel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </crypto>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>passt</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </interface>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>isa</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </panic>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <console supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>null</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dev</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pipe</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stdio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>udp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tcp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </console>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='features'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vapic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>runtime</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>synic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stimer</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reset</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ipi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>avic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hyperv>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tdx</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </launchSecurity>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: </domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.540 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 02:43:59 np0005539550 nova_compute[257631]: <domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <domain>kvm</domain>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <arch>x86_64</arch>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <vcpu max='4096'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <iothreads supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <os supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='firmware'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>efi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <loader supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>rom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pflash</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='readonly'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>yes</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='secure'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>yes</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>no</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </loader>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-passthrough' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='hostPassthroughMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='maximum' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='maximumMigratable'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>on</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>off</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='host-model' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <vendor>AMD</vendor>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='x2apic'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='hypervisor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='stibp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='overflow-recov'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='succor'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lbrv'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='tsc-scale'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='flushbyasid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pause-filter'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='pfthreshold'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <feature policy='disable' name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <mode name='custom' supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Broadwell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Cooperlake-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Denverton-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Dhyana-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='auto-ibrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Milan-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amd-psfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='no-nested-data-bp'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='null-sel-clr-base'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='stibp-always-on'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-Rome-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='EPYC-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='GraniteRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-128'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-256'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx10-512'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='prefetchiti'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Haswell-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v6'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Icelake-Server-v7'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='IvyBridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='KnightsMill-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4fmaps'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-4vnniw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512er'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512pf'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G4-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Opteron_G5-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fma4'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tbm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xop'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SapphireRapids-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='amx-tile'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-bf16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-fp16'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512-vpopcntdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bitalg'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vbmi2'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrc'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fzrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='la57'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='taa-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='tsx-ldtrk'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xfd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='SierraForest-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ifma'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-ne-convert'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx-vnni-int8'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='bus-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cmpccxadd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fbsdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='fsrs'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ibrs-all'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mcdt-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pbrsb-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='psdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='sbdr-ssdp-no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='serialize'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vaes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='vpclmulqdq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Client-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='hle'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='rtm'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Skylake-Server-v5'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512bw'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512cd'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512dq'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512f'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='avx512vl'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='invpcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pcid'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='pku'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='mpx'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v2'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v3'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='core-capability'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='split-lock-detect'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='Snowridge-v4'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='cldemote'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='erms'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='gfni'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdir64b'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='movdiri'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='xsaves'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='athlon-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='core2duo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='coreduo-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='n270-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='ss'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <blockers model='phenom-v1'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnow'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <feature name='3dnowext'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </blockers>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </mode>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <memoryBacking supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <enum name='sourceType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>anonymous</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <value>memfd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </memoryBacking>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <disk supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='diskDevice'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>disk</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cdrom</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>floppy</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>lun</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>fdc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>sata</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <graphics supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vnc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egl-headless</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </graphics>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <video supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='modelType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vga</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>cirrus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>none</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>bochs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ramfb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hostdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='mode'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>subsystem</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='startupPolicy'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>mandatory</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>requisite</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>optional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='subsysType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pci</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>scsi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='capsType'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='pciBackend'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hostdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <rng supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtio-non-transitional</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>random</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>egd</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <filesystem supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='driverType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>path</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>handle</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>virtiofs</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </filesystem>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <tpm supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-tis</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tpm-crb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emulator</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>external</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendVersion'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>2.0</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </tpm>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <redirdev supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='bus'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>usb</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </redirdev>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <channel supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </channel>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <crypto supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendModel'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>builtin</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </crypto>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <interface supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='backendType'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>default</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>passt</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </interface>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <panic supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='model'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>isa</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>hyperv</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </panic>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <console supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='type'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>null</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vc</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pty</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dev</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>file</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>pipe</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stdio</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>udp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tcp</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>unix</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>qemu-vdagent</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>dbus</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </console>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <gic supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <vmcoreinfo supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <genid supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backingStoreInput supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <backup supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <async-teardown supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <ps2 supported='yes'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sev supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <sgx supported='no'/>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <hyperv supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='features'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>relaxed</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vapic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>spinlocks</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vpindex</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>runtime</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>synic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>stimer</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reset</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>vendor_id</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>frequencies</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>reenlightenment</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tlbflush</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>ipi</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>avic</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>emsr_bitmap</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>xmm_input</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <spinlocks>4095</spinlocks>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <stimer_direct>on</stimer_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </defaults>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </hyperv>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    <launchSecurity supported='yes'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      <enum name='sectype'>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:        <value>tdx</value>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:      </enum>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:    </launchSecurity>
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: </domainCapabilities>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
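
The capability dump above is the document nova's _get_domain_capabilities receives from libvirt; each model with usable='no' is followed by a <blockers> element listing exactly which guest CPU features this host cannot provide. A minimal sketch of fetching and summarizing the same data with the libvirt Python bindings (the connection URI and the print-based summary are assumptions for illustration, not nova's own code):

    import xml.etree.ElementTree as ET
    import libvirt  # libvirt-python bindings

    # Read-only connection to the local system hypervisor (assumed URI).
    conn = libvirt.openReadOnly("qemu:///system")

    # Same document as logged above; None/0 arguments let libvirt pick
    # the default emulator and machine type for x86_64 KVM.
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    root = ET.fromstring(caps_xml)

    # Walk the named CPU models; for unusable ones, report the blockers.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        name = model.text
        if model.get("usable") == "yes":
            print(f"{name}: usable")
        else:
            blockers = root.find(
                f".//cpu/mode[@name='custom']/blockers[@model='{name}']")
            feats = ([f.get("name") for f in blockers.findall("feature")]
                     if blockers is not None else [])
            print(f"{name}: blocked by {', '.join(feats) or 'unknown'}")

On this host such a summary would show, for example, every Skylake and Sapphire Rapids variant blocked (the guest CPU is a Nehalem-class KVM vCPU lacking avx512*, pcid, erms and the rest), while Westmere and the generic qemu/kvm models remain usable.
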
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.612 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.613 257641 INFO nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Secure Boot support detected#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.621 257641 INFO nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
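
The INFO record above states nova's precedence rule for live-migration aids: when post-copy is both permitted by configuration and available from libvirt, auto-converge is left off. A toy restatement of that decision, with illustrative names that are not nova's actual code:

    def choose_live_migration_aid(permit_post_copy: bool,
                                  post_copy_available: bool,
                                  permit_auto_converge: bool) -> str:
        """Mirrors the precedence described in the log record above."""
        if permit_post_copy and post_copy_available:
            # Post-copy wins: auto-converge will not be in use.
            return "post-copy"
        if permit_auto_converge:
            return "auto-converge"
        return "none"

    # The situation this host logs: post-copy permitted and available.
    print(choose_live_migration_aid(True, True, True))  # -> post-copy
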
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.625 257641 DEBUG nova.virt.libvirt.volume.mount [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.635 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] cpu compare xml: <cpu match="exact">
Nov 29 02:43:59 np0005539550 nova_compute[257631]:  <model>Nehalem</model>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: </cpu>
Nov 29 02:43:59 np0005539550 nova_compute[257631]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
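
The "cpu compare xml" record shows the document nova hands to libvirt's CPU comparison API to decide whether this host can accept a guest requiring the Nehalem model. A minimal sketch using the classic virConnectCompareCPU entry point (newer nova may use the hypervisor-aware variant instead; the URI is an assumption):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed URI

    # The exact XML logged by _compare_cpu above.
    cpu_xml = """<cpu match="exact">
      <model>Nehalem</model>
    </cpu>"""

    result = conn.compareCPU(cpu_xml, 0)
    if result == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
        print("host CPU cannot provide the requested model")
    elif result == libvirt.VIR_CPU_COMPARE_IDENTICAL:
        print("host CPU is identical to the requested model")
    elif result == libvirt.VIR_CPU_COMPARE_SUPERSET:
        print("host CPU is a superset of the requested model")
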
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.637 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.721 257641 INFO nova.virt.node [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Determined node identity a73c606e-2495-4af4-b703-8d4b3001fdf5 from /var/lib/nova/compute_id#033[00m
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.851 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Verified node a73c606e-2495-4af4-b703-8d4b3001fdf5 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
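
The two records above are nova's stable-node-identity check: the UUID persisted in /var/lib/nova/compute_id is read back at startup and verified against the service's own hostname to detect an accidental host rename. A sketch of the read side only, assuming the file holds a bare UUID string (the function name is illustrative):

    from pathlib import Path

    COMPUTE_ID_FILE = Path("/var/lib/nova/compute_id")

    def read_node_identity() -> str:
        # Written once by nova; contains only the node UUID.
        return COMPUTE_ID_FILE.read_text().strip()

    node_uuid = read_node_identity()
    print(f"Determined node identity {node_uuid} from {COMPUTE_ID_FILE}")
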
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]: {
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]:        "osd_id": 0,
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]:        "type": "bluestore"
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]:    }
Nov 29 02:43:59 np0005539550 wonderful_montalcini[258042]: }
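
The JSON block emitted by the short-lived ceph container above is an OSD inventory keyed by osd_uuid (its shape resembles `ceph-volume raw list --format json` output, though the log does not show the command). A sketch that parses exactly the payload shown:

import json

payload = """{
   "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
       "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
       "device": "/dev/mapper/ceph_vg0-ceph_lv0",
       "osd_id": 0,
       "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
       "type": "bluestore"
   }
}"""

for osd_uuid, osd in json.loads(payload).items():
    print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}, "
          f"cluster fsid {osd['ceph_fsid']}")
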
Nov 29 02:43:59 np0005539550 nova_compute[257631]: 2025-11-29 07:43:59.965 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 02:43:59 np0005539550 systemd[1]: libpod-d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99.scope: Deactivated successfully.
Nov 29 02:43:59 np0005539550 podman[257963]: 2025-11-29 07:43:59.972434745 +0000 UTC m=+1.383449652 container died d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:44:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d591578803de0e0e6b91225fe55fa71c2cd970568291f71461ab98c19ee19ecf-merged.mount: Deactivated successfully.
Nov 29 02:44:00 np0005539550 podman[257963]: 2025-11-29 07:44:00.03232348 +0000 UTC m=+1.443338387 container remove d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:44:00 np0005539550 systemd[1]: libpod-conmon-d6d60e6fcb8433e5d62b18841b984d10c34e9f15939b6f6d7b83d352d712de99.scope: Deactivated successfully.
Nov 29 02:44:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:44:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.206 257641 DEBUG oslo_concurrency.lockutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.206 257641 DEBUG oslo_concurrency.lockutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.206 257641 DEBUG oslo_concurrency.lockutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
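
The Acquiring/acquired/released trio above is oslo.concurrency's standard lock logging (the "inner" frames at lockutils.py:404/409/423). A minimal sketch of the pattern that produces it, using the lock name from the log and an illustrative function body:

from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # Critical section: with debug logging enabled, entering and leaving
    # this function emits the same Acquiring/acquired/released lines.
    pass

clean_compute_node_cache()
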
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.207 257641 DEBUG nova.compute.resource_tracker [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.207 257641 DEBUG oslo_concurrency.processutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:00 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 694c6f88-f180-4e4b-bd95-5ee738094873 does not exist
Nov 29 02:44:00 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 37359215-4f42-4ed6-a18c-8e5d3f0a7f59 does not exist
Nov 29 02:44:00 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ea3d7fd5-d1d7-434c-8155-3fe6d635fb66 does not exist
Nov 29 02:44:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:00.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3789361585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.814 257641 DEBUG oslo_concurrency.processutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
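
Above, the resource tracker shells out to `ceph df` and gets JSON back in 0.607s. A sketch that runs the same command and reads the cluster totals (the stats/total_* field names are assumed from typical `ceph df --format=json` output):

import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

stats = json.loads(out)["stats"]
gib = 1024 ** 3
print(f"{stats['total_avail_bytes'] / gib:.1f} GiB available of "
      f"{stats['total_bytes'] / gib:.1f} GiB")
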
Nov 29 02:44:00 np0005539550 systemd[1]: session-52.scope: Deactivated successfully.
Nov 29 02:44:00 np0005539550 systemd[1]: session-52.scope: Consumed 2min 18.344s CPU time.
Nov 29 02:44:00 np0005539550 systemd-logind[788]: Session 52 logged out. Waiting for processes to exit.
Nov 29 02:44:00 np0005539550 systemd-logind[788]: Removed session 52.
Nov 29 02:44:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:00.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
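
The recurring radosgw triplets above are anonymous "HEAD /" probes, evidently load-balancer health checks given the two fixed client IPs. A sketch of a parser written against the exact beast access-log format in this journal:

import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
    r'latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:07:44:00.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))
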
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.960 257641 WARNING nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.961 257641 DEBUG nova.compute.resource_tracker [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5179MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
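
The hypervisor resource view above embeds the host's PCI inventory as JSON (vendor 8086 is Intel chipset functions, 1af4 is Red Hat virtio). A sketch that tallies such a list, shortened here to two of the eleven devices from the log line:

import json
from collections import Counter

pci_json = """[
  {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1",
   "product_id": "7010", "vendor_id": "8086", "numa_node": null,
   "label": "label_8086_7010", "dev_type": "type-PCI"},
  {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0",
   "product_id": "1050", "vendor_id": "1af4", "numa_node": null,
   "label": "label_1af4_1050", "dev_type": "type-PCI"}
]"""

devices = json.loads(pci_json)
print(Counter(d["vendor_id"] for d in devices))
# Counter({'8086': 1, '1af4': 1})
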
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.961 257641 DEBUG oslo_concurrency.lockutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:00 np0005539550 nova_compute[257631]: 2025-11-29 07:44:00.961 257641 DEBUG oslo_concurrency.lockutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:02 np0005539550 podman[258317]: 2025-11-29 07:44:02.328458132 +0000 UTC m=+0.058852771 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:44:02 np0005539550 podman[258316]: 2025-11-29 07:44:02.335395185 +0000 UTC m=+0.066204025 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
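
The two podman lines above are periodic health_status events: podman runs the configured test ('/openstack/healthcheck') inside each container. The same check can be triggered on demand; a sketch, with container names taken from the log (exit code 0 means healthy):

import subprocess

for name in ("ovn_metadata_agent", "multipathd"):
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
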
Nov 29 02:44:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:44:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:44:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:02.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.580 257641 DEBUG nova.compute.resource_tracker [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.580 257641 DEBUG nova.compute.resource_tracker [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.613 257641 DEBUG nova.scheduler.client.report [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.681 257641 DEBUG nova.scheduler.client.report [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.681 257641 DEBUG nova.compute.provider_tree [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.714 257641 DEBUG nova.scheduler.client.report [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.734 257641 DEBUG nova.scheduler.client.report [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 02:44:02 np0005539550 nova_compute[257631]: 2025-11-29 07:44:02.753 257641 DEBUG oslo_concurrency.processutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:44:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:02.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/922998155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.198 257641 DEBUG oslo_concurrency.processutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.204 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 29 02:44:03 np0005539550 nova_compute[257631]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.204 257641 INFO nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] kernel doesn't support AMD SEV#033[00m
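
The SEV probe above reads a one-character sysfs parameter and finds "N". A minimal sketch of the same check (the "1"/"0" fallback is an assumption for older kernels, not something this log shows):

from pathlib import Path

SEV_PARAM = Path("/sys/module/kvm_amd/parameters/sev")

def kernel_supports_amd_sev() -> bool:
    try:
        return SEV_PARAM.read_text().strip() in ("Y", "1")
    except FileNotFoundError:
        return False  # kvm_amd module not loaded

print(kernel_supports_amd_sev())
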
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.206 257641 DEBUG nova.compute.provider_tree [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
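
A worked example of what the inventory above means to the scheduler: Placement's usable capacity per resource class is (total - reserved) * allocation_ratio, so this host offers 7168 MB of RAM, 32 VCPUs, and 18 GB of disk:

inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 20, "reserved": 0, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# MEMORY_MB: 7168, VCPU: 32, DISK_GB: 18
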
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.206 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.208 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Libvirt baseline CPU <cpu>
Nov 29 02:44:03 np0005539550 nova_compute[257631]:  <arch>x86_64</arch>
Nov 29 02:44:03 np0005539550 nova_compute[257631]:  <model>Nehalem</model>
Nov 29 02:44:03 np0005539550 nova_compute[257631]:  <vendor>AMD</vendor>
Nov 29 02:44:03 np0005539550 nova_compute[257631]:  <topology sockets="8" cores="1" threads="1"/>
Nov 29 02:44:03 np0005539550 nova_compute[257631]: </cpu>
Nov 29 02:44:03 np0005539550 nova_compute[257631]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Nov 29 02:44:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.342 257641 DEBUG nova.scheduler.client.report [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Updated inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.342 257641 DEBUG nova.compute.provider_tree [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Updating resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.343 257641 DEBUG nova.compute.provider_tree [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.488 257641 DEBUG nova.compute.provider_tree [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Updating resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.548 257641 DEBUG nova.compute.resource_tracker [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.549 257641 DEBUG oslo_concurrency.lockutils [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.549 257641 DEBUG nova.service [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.727 257641 DEBUG nova.service [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 29 02:44:03 np0005539550 nova_compute[257631]: 2025-11-29 07:44:03.728 257641 DEBUG nova.servicegroup.drivers.db [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 29 02:44:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:04.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:06.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:06.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:44:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:44:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:44:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:44:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:44:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:08.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
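
The mgr rbd_support module above is reloading mirror-snapshot and trash-purge schedules for each RBD pool. The schedules it loads can be listed from the CLI; a sketch (pool names from the log; pools without mirroring configured will simply report an error):

import subprocess

for pool in ("vms", "volumes", "backups", "images"):
    out = subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls",
         "--pool", pool, "--recursive"],
        capture_output=True, text=True,
    )
    print(pool, out.stdout.strip() or out.stderr.strip() or "(no schedules)")
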
Nov 29 02:44:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:08.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:10.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:10.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:12.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:12.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:14.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:14.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:15 np0005539550 podman[258384]: 2025-11-29 07:44:15.336783408 +0000 UTC m=+0.074959314 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 02:44:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:16.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:16.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:18.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:44:18.916 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:44:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:44:18.916 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:44:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:44:18.917 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:44:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:18.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:18 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 29 02:44:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:18.975341) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:44:18 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 29 02:44:18 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402258975414, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 3029, "num_deletes": 502, "total_data_size": 5339192, "memory_usage": 5447008, "flush_reason": "Manual Compaction"}
Nov 29 02:44:18 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259013723, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 5160974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15668, "largest_seqno": 18696, "table_properties": {"data_size": 5148055, "index_size": 8133, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 29760, "raw_average_key_size": 20, "raw_value_size": 5119991, "raw_average_value_size": 3466, "num_data_blocks": 361, "num_entries": 1477, "num_filter_entries": 1477, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764401835, "oldest_key_time": 1764401835, "file_creation_time": 1764402258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 38475 microseconds, and 11304 cpu microseconds.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.013824) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 5160974 bytes OK
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.013850) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.017044) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.017096) EVENT_LOG_v1 {"time_micros": 1764402259017085, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.017138) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 5326259, prev total WAL file size 5350393, number of live WAL files 2.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
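
Each rocksdb EVENT_LOG_v1 line above carries a JSON payload after a fixed prefix, which makes the flush/compaction activity easy to mine programmatically. A sketch against the flush_started event from this sequence:

import json

line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764402258975414, "job": 17, '
        '"event": "flush_started", "num_memtables": 1, "num_entries": 3029, '
        '"num_deletes": 502, "total_data_size": 5339192, '
        '"memory_usage": 5447008, "flush_reason": "Manual Compaction"}')

payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
print(payload["event"], payload["job"], payload["total_data_size"])
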
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.019949) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(5040KB)], [38(9674KB)]
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259020087, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 15067843, "oldest_snapshot_seqno": -1}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5090 keys, 10632446 bytes, temperature: kUnknown
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259116853, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 10632446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10595795, "index_size": 22845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 127461, "raw_average_key_size": 25, "raw_value_size": 10501174, "raw_average_value_size": 2063, "num_data_blocks": 956, "num_entries": 5090, "num_filter_entries": 5090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764402259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.117147) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 10632446 bytes
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.118323) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.5 rd, 109.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 9.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(5.0) write-amplify(2.1) OK, records in: 6119, records dropped: 1029 output_compression: NoCompression
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.118340) EVENT_LOG_v1 {"time_micros": 1764402259118331, "job": 18, "event": "compaction_finished", "compaction_time_micros": 96880, "compaction_time_cpu_micros": 49791, "output_level": 6, "num_output_files": 1, "total_output_size": 10632446, "num_input_records": 6119, "num_output_records": 5090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
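
The amplification figures in the compaction summary above follow directly from the event-log byte counts for job 18; a worked check:

l0_in = 5_160_974      # table #40, the freshly flushed L0 input
total_in = 15_067_843  # input_data_size: L0 file + old L6 file
out = 10_632_446       # total_output_size: the new L6 file
secs = 96_880 / 1e6    # compaction_time_micros

print(f"write-amplify      {out / l0_in:.1f}")               # 2.1
print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # 5.0
print(f"read MB/s          {total_in / secs / 1e6:.1f}")     # 155.5
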
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259119198, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259120775, "job": 18, "event": "table_file_deletion", "file_number": 38}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.019642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.120831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.120836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.120838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.120840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.120842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:44:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
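
The pg_autoscaler targets above are consistent with target = usage_fraction * bias * (n_osd * mon_target_pg_per_osd), taking 3 OSDs and the default of 100 PGs per OSD (the 3 x 100 multiplier is inferred from the logged numbers, not read from this cluster's config). A worked check against three of the pools:

POOLS = [
    (".mgr",               2.0538165363856318e-05, 1.0),
    ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
    ("default.rgw.log",    6.17962497673553e-06,   1.0),
]
MULTIPLIER = 3 * 100  # assumed: OSD count * mon_target_pg_per_osd

for name, usage, bias in POOLS:
    print(f"{name}: pg target {usage * bias * MULTIPLIER:.6g}")
# matches the logged targets 0.00616145, 0.00174484, 0.00185389
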
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.883484) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259883520, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 268, "num_deletes": 256, "total_data_size": 44973, "memory_usage": 51968, "flush_reason": "Manual Compaction"}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259885920, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 45593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18697, "largest_seqno": 18964, "table_properties": {"data_size": 43720, "index_size": 102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4300, "raw_average_key_size": 16, "raw_value_size": 40131, "raw_average_value_size": 149, "num_data_blocks": 4, "num_entries": 268, "num_filter_entries": 268, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402259, "oldest_key_time": 1764402259, "file_creation_time": 1764402259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 2483 microseconds, and 785 cpu microseconds.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.885965) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 45593 bytes OK
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.885985) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.895918) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.895944) EVENT_LOG_v1 {"time_micros": 1764402259895938, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.895962) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 42865, prev total WAL file size 42865, number of live WAL files 2.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.896320) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(44KB)], [41(10MB)]
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259896355, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10678039, "oldest_snapshot_seqno": -1}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4839 keys, 10253232 bytes, temperature: kUnknown
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259973561, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 10253232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10218599, "index_size": 21403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 123565, "raw_average_key_size": 25, "raw_value_size": 10128651, "raw_average_value_size": 2093, "num_data_blocks": 878, "num_entries": 4839, "num_filter_entries": 4839, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764402259, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.973925) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 10253232 bytes
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.975967) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.1 rd, 132.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.1 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(459.1) write-amplify(224.9) OK, records in: 5358, records dropped: 519 output_compression: NoCompression
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.976000) EVENT_LOG_v1 {"time_micros": 1764402259975985, "job": 20, "event": "compaction_finished", "compaction_time_micros": 77306, "compaction_time_cpu_micros": 22337, "output_level": 6, "num_output_files": 1, "total_output_size": 10253232, "num_input_records": 5358, "num_output_records": 4839, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259976176, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402259979955, "job": 20, "event": "table_file_deletion", "file_number": 41}
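
The amplification figures in the JOB 20 summary above follow directly from the byte counts in the surrounding EVENT_LOG_v1 entries: the freshly flushed L0 file (#43) is 45,593 bytes, and the compaction read 10,678,039 bytes and wrote 10,253,232 bytes. A quick check of that arithmetic:

    # Recompute RocksDB's JOB 20 amplification figures from the logged bytes,
    # assuming (as the numbers bear out) the flushed L0 bytes are the denominator.
    flush_bytes = 45_593        # table #43 produced by flush JOB 19
    input_bytes = 10_678_039    # compaction "input_data_size"
    output_bytes = 10_253_232   # compaction "total_output_size"

    print(f"write-amplify      {output_bytes / flush_bytes:.1f}")                  # 224.9
    print(f"read-write-amplify {(input_bytes + output_bytes) / flush_bytes:.1f}")  # 459.1
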
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.896218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.980004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.980009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.980011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.980012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:44:19.980014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:44:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:20.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:20.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:22.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:22.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
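
The radosgw "beast:" lines follow a fixed access-log shape: request pointer, client IP, user (anonymous here), bracketed timestamp, quoted request line, HTTP status, body bytes, and a trailing latency field. A sketch of a parser for them; the group names are labels chosen here, not official field names:

    import re

    BEAST_RE = re.compile(
        r'beast: (?P<ptr>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:07:44:22.962 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('request'), m.group('status'))
    # 192.168.122.102 HEAD / HTTP/1.0 200
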
Nov 29 02:44:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:24.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:24.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:26.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:26.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:28.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:28.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:30.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:30.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:32.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:33 np0005539550 podman[258470]: 2025-11-29 07:44:33.302259019 +0000 UTC m=+0.043578329 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 02:44:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:33 np0005539550 podman[258469]: 2025-11-29 07:44:33.309948636 +0000 UTC m=+0.052459007 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
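
Each podman health_status line embeds the container's config_data as a Python-literal dict, so details such as the healthcheck command can be recovered mechanically. A sketch, assuming (as holds for the lines here) that no string value contains an unbalanced brace:

    import ast

    def extract_config_data(journal_line: str) -> dict:
        """Pull the embedded config_data={...} dict out of a health line."""
        start = journal_line.index('config_data={') + len('config_data=')
        depth = 0
        for i, ch in enumerate(journal_line[start:], start):
            if ch == '{':
                depth += 1
            elif ch == '}':
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(journal_line[start:i + 1])
        raise ValueError('unbalanced config_data dict')

    # For the multipathd line above, extract_config_data(line)['healthcheck']
    # yields {'mount': '/var/lib/openstack/healthchecks/multipathd',
    #         'test': '/openstack/healthcheck'}.
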
Nov 29 02:44:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:44:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1801.4 total, 600.0 interval
Cumulative writes: 9065 writes, 35K keys, 9065 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 9065 writes, 1978 syncs, 4.58 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 565 writes, 838 keys, 565 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
Interval WAL: 565 writes, 267 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
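
The WAL counters in the dump are self-consistent: 9,065 cumulative writes over 1,978 syncs gives the reported 4.58 writes per sync, and the 600-second interval's 565 writes over 267 syncs gives 2.12:

    # Consistency check of the writes-per-sync averages in the stats dump.
    print(f"{9065 / 1978:.2f}")  # cumulative: 4.58
    print(f"{565 / 267:.2f}")    # interval:   2.12
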
Nov 29 02:44:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:34.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:36.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:36.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:38.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:38.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:40.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:40.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:42.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:42.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:44:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643546130' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:44:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:44:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3643546130' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:44:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:44.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:45 np0005539550 nova_compute[257631]: 2025-11-29 07:44:45.730 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:45 np0005539550 nova_compute[257631]: 2025-11-29 07:44:45.791 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:46 np0005539550 podman[258565]: 2025-11-29 07:44:46.389679672 +0000 UTC m=+0.131597567 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:44:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:44:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:46.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:44:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 02:44:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:49.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:44:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:50.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:44:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:51.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:52.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:54.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:44:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:55.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:44:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:44:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:44:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:44:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:57.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:57 np0005539550 nova_compute[257631]: 2025-11-29 07:44:57.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:57 np0005539550 nova_compute[257631]: 2025-11-29 07:44:57.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:57 np0005539550 nova_compute[257631]: 2025-11-29 07:44:57.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:44:57 np0005539550 nova_compute[257631]: 2025-11-29 07:44:57.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:44:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:44:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:44:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:44:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.545 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.546 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.546 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.547 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.547 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.547 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.547 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.548 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.548 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
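
The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery walking the methods registered on nova's compute manager. A generic sketch of how such tasks are declared (an illustration of the oslo_service API as commonly used, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        """Toy manager with one registered periodic task."""

        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=10)
        def _poll_something(self, context):
            # run_periodic_tasks() invokes each registered method and emits
            # the "Running periodic task ..." debug lines seen above.
            print("polled")

    # The service's timer loop drives it roughly like this:
    # Manager().run_periodic_tasks(context=None)
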
Nov 29 02:44:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:44:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:44:58.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.898 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.899 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.899 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.899 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:44:58 np0005539550 nova_compute[257631]: 2025-11-29 07:44:58.900 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:44:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:44:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:44:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:44:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:44:59
Nov 29 02:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms']
Nov 29 02:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:44:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:44:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270298304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:44:59 np0005539550 nova_compute[257631]: 2025-11-29 07:44:59.362 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:44:59 np0005539550 nova_compute[257631]: 2025-11-29 07:44:59.548 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:44:59 np0005539550 nova_compute[257631]: 2025-11-29 07:44:59.549 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5218MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:44:59 np0005539550 nova_compute[257631]: 2025-11-29 07:44:59.549 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:44:59 np0005539550 nova_compute[257631]: 2025-11-29 07:44:59.550 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
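
As the processutils lines above show, the resource tracker sizes its Ceph-backed storage by shelling out to ceph df. A sketch of the same probe, assuming the client.openstack keyring and /etc/ceph/ceph.conf from the logged command are usable locally; the exact JSON keys can vary slightly across Ceph releases:

    import json
    import subprocess

    # Same command nova runs above, captured and decoded.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)

    # Cluster-wide totals; per-pool usage lives under stats["pools"].
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
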
Nov 29 02:45:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:45:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:45:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:45:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:45:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:45:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:45:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:45:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:02.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:45:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539550 podman[258795]: 2025-11-29 07:45:04.322209569 +0000 UTC m=+0.055120305 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 02:45:04 np0005539550 podman[258794]: 2025-11-29 07:45:04.349748225 +0000 UTC m=+0.084966240 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:45:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:04.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:04 np0005539550 nova_compute[257631]: 2025-11-29 07:45:04.986 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:45:04 np0005539550 nova_compute[257631]: 2025-11-29 07:45:04.986 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:45:05 np0005539550 nova_compute[257631]: 2025-11-29 07:45:05.013 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:45:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:45:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:45:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396988722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:45:05 np0005539550 nova_compute[257631]: 2025-11-29 07:45:05.546 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:45:05 np0005539550 nova_compute[257631]: 2025-11-29 07:45:05.553 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
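
The processutils lines show the resource tracker sizing its RBD-backed disk pool by shelling out to the ceph CLI. A standalone sketch of the same probe, assuming the ceph CLI and the client.openstack keyring are present on the host:

    import json
    import subprocess

    def ceph_pool_stats(conf="/etc/ceph/ceph.conf", client="openstack"):
        """Run `ceph df --format=json` (the exact command logged above)
        and return per-pool usage stats keyed by pool name."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        df = json.loads(out)
        return {pool["name"]: pool["stats"] for pool in df["pools"]}

The df/dispatch pair on the mon side a few lines above is this same command arriving as a mon_command from client.openstack.
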
Nov 29 02:45:06 np0005539550 podman[259120]: 2025-11-29 07:45:06.002739636 +0000 UTC m=+0.038275493 container create d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:45:06 np0005539550 systemd[1]: Started libpod-conmon-d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7.scope.
Nov 29 02:45:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:06 np0005539550 podman[259120]: 2025-11-29 07:45:05.98535304 +0000 UTC m=+0.020888927 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:06 np0005539550 podman[259120]: 2025-11-29 07:45:06.103781498 +0000 UTC m=+0.139317375 container init d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 02:45:06 np0005539550 podman[259120]: 2025-11-29 07:45:06.112022969 +0000 UTC m=+0.147558826 container start d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dirac, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:45:06 np0005539550 podman[259120]: 2025-11-29 07:45:06.116310029 +0000 UTC m=+0.151845896 container attach d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:45:06 np0005539550 laughing_dirac[259136]: 167 167
Nov 29 02:45:06 np0005539550 systemd[1]: libpod-d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7.scope: Deactivated successfully.
Nov 29 02:45:06 np0005539550 conmon[259136]: conmon d1b649ea958a716b8278 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7.scope/container/memory.events
Nov 29 02:45:06 np0005539550 podman[259120]: 2025-11-29 07:45:06.119249525 +0000 UTC m=+0.154785382 container died d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:45:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9ab710f459f008d3871a0cbcc251ba19b754168c2482a5be1cbf741318dcefbc-merged.mount: Deactivated successfully.
Nov 29 02:45:06 np0005539550 podman[259120]: 2025-11-29 07:45:06.425222934 +0000 UTC m=+0.460758791 container remove d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_dirac, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:45:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:06 np0005539550 systemd[1]: libpod-conmon-d1b649ea958a716b8278f49a33225ae2ce667d197f1ef3a606ce2698bebf0bc7.scope: Deactivated successfully.
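
The laughing_dirac container ran for well under a second and printed only "167 167", which matches the uid/gid of the ceph user in upstream Ceph container images; that is consistent with a cephadm ownership probe. A hypothetical reconstruction (the exact command cephadm runs here is an assumption, not shown in the log):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Assumption: stat a ceph-owned path inside a throwaway container to
    # learn which uid/gid the image uses for the ceph user.
    uid_gid = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    print(uid_gid)  # expected ['167', '167'], matching the output above
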
Nov 29 02:45:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:06.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:06 np0005539550 podman[259160]: 2025-11-29 07:45:06.591682954 +0000 UTC m=+0.052046976 container create 99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:45:06 np0005539550 systemd[1]: Started libpod-conmon-99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d.scope.
Nov 29 02:45:06 np0005539550 podman[259160]: 2025-11-29 07:45:06.564913667 +0000 UTC m=+0.025277709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061af5e97f6c0963f6d43fe9d8b8924ccae19b1d480624562136ecd0e7c10145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061af5e97f6c0963f6d43fe9d8b8924ccae19b1d480624562136ecd0e7c10145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061af5e97f6c0963f6d43fe9d8b8924ccae19b1d480624562136ecd0e7c10145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/061af5e97f6c0963f6d43fe9d8b8924ccae19b1d480624562136ecd0e7c10145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
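
The repeated xfs warnings flag the classic Y2038 limit: these bind-mounted inodes carry 32-bit signed timestamps, and 0x7fffffff is the last representable second. A one-liner to see what that limit is in calendar terms:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch, the limit the kernel
    # message refers to.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
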
Nov 29 02:45:06 np0005539550 podman[259160]: 2025-11-29 07:45:06.770652644 +0000 UTC m=+0.231016686 container init 99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:45:06 np0005539550 podman[259160]: 2025-11-29 07:45:06.779943623 +0000 UTC m=+0.240307665 container start 99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:45:06 np0005539550 podman[259160]: 2025-11-29 07:45:06.78450296 +0000 UTC m=+0.244867002 container attach 99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:45:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:45:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:07 np0005539550 nova_compute[257631]: 2025-11-29 07:45:07.304 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:45:07 np0005539550 nova_compute[257631]: 2025-11-29 07:45:07.308 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:45:07 np0005539550 nova_compute[257631]: 2025-11-29 07:45:07.308 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
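
The inventory dict logged above is what placement uses to admit workloads: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick worked check of the effective capacities this host reports:

    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        # Placement's admission check compares usage against
        # (total - reserved) * allocation_ratio.
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 18.0

So the 8 physical vCPUs are oversubscribed 4:1 to 32 schedulable, while the 20 GB disk is deliberately undersubscribed to 18 GB.
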
Nov 29 02:45:07 np0005539550 great_mclaren[259176]: [
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:    {
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "available": false,
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "ceph_device": false,
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "lsm_data": {},
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "lvs": [],
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "path": "/dev/sr0",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "rejected_reasons": [
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "Has a FileSystem",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "Insufficient space (<5GB)"
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        ],
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        "sys_api": {
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "actuators": null,
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "device_nodes": "sr0",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "devname": "sr0",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "human_readable_size": "482.00 KB",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "id_bus": "ata",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "model": "QEMU DVD-ROM",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "nr_requests": "2",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "parent": "/dev/sr0",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "partitions": {},
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "path": "/dev/sr0",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "removable": "1",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "rev": "2.5+",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "ro": "0",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "rotational": "1",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "sas_address": "",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "sas_device_handle": "",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "scheduler_mode": "mq-deadline",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "sectors": 0,
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "sectorsize": "2048",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "size": 493568.0,
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "support_discard": "2048",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "type": "disk",
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:            "vendor": "QEMU"
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:        }
Nov 29 02:45:07 np0005539550 great_mclaren[259176]:    }
Nov 29 02:45:07 np0005539550 great_mclaren[259176]: ]
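
The JSON that great_mclaren emitted is a ceph-volume-style device inventory, which cephadm then persists under the mgr/cephadm/host.*.devices.* config keys seen in the surrounding mon_command lines. Once the journal prefixes are stripped, filtering such a report for usable OSD candidates is straightforward; a small sketch using the structure shown above:

    import json

    def usable_devices(inventory_json: str):
        """Split a ceph-volume inventory report into available device
        paths and a map of rejected paths to their reasons."""
        devices = json.loads(inventory_json)
        ok = [d["path"] for d in devices if d["available"]]
        rejected = {d["path"]: d["rejected_reasons"]
                    for d in devices if not d["available"]}
        return ok, rejected

With the report above, /dev/sr0 lands in the rejected map with ["Has a FileSystem", "Insufficient space (<5GB)"], so this host contributes no new OSDs.
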
Nov 29 02:45:07 np0005539550 systemd[1]: libpod-99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d.scope: Deactivated successfully.
Nov 29 02:45:07 np0005539550 systemd[1]: libpod-99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d.scope: Consumed 1.223s CPU time.
Nov 29 02:45:07 np0005539550 podman[259160]: 2025-11-29 07:45:07.982911651 +0000 UTC m=+1.443275673 container died 99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:45:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:45:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:45:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:45:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:45:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:45:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-061af5e97f6c0963f6d43fe9d8b8924ccae19b1d480624562136ecd0e7c10145-merged.mount: Deactivated successfully.
Nov 29 02:45:08 np0005539550 podman[259160]: 2025-11-29 07:45:08.03511345 +0000 UTC m=+1.495477472 container remove 99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclaren, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:08 np0005539550 systemd[1]: libpod-conmon-99727c7e07da88b20d94130f62c061f9b98f9ba5d12a909e4d7bbe893900741d.scope: Deactivated successfully.
Nov 29 02:45:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:45:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:45:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
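
The beast lines are radosgw's access log; the anonymous "HEAD /" requests arriving every two seconds from 192.168.122.100 and .102 are load-balancer health probes rather than real S3 traffic. A regex sketch for these lines, assuming the exact format shown:

    import re

    BEAST_RE = re.compile(
        r"beast: \S+: (?P<ip>\S+) - (?P<user>\S+) "
        r"\[(?P<ts>[^\]]+)\] \"(?P<req>[^\"]+)\" (?P<status>\d+) (?P<bytes>\d+) "
        r".*latency=(?P<latency>[\d.]+)s"
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:45:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    print(BEAST_RE.search(line).groupdict())
    # {'ip': '192.168.122.100', 'user': 'anonymous', ..., 'status': '200', ...}
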
Nov 29 02:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:45:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:09.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:45:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:10.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:11.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:12.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:45:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
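
These two audit entries record cephadm clearing a per-host OSD memory target with "config rm". The equivalent manual invocation, wrapped in Python and assuming an admin keyring on the host:

    import subprocess

    # Same operation the mgr dispatched above: drop the osd_memory_target
    # override scoped to OSDs on host compute-0.
    subprocess.run(
        ["ceph", "config", "rm", "osd/host:compute-0", "osd_memory_target"],
        check=True,
    )
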
Nov 29 02:45:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:13.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:14.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:15.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:45:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:16.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:17.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:17 np0005539550 podman[260383]: 2025-11-29 07:45:17.352910945 +0000 UTC m=+0.090950164 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 02:45:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:18.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:45:18.916 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:45:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:45:18.917 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:45:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:45:18.917 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
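
The acquire/waited/held triplet above is the DEBUG trace oslo_concurrency emits around a synchronized section; in shape, the agent code guarding its child-process check is equivalent to:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Runs with the in-process lock named above held; oslo logs the
        # acquire/release pair at DEBUG exactly as seen in the journal.
        ...
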
Nov 29 02:45:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:19.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:45:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
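
Each pg_autoscaler pair above is one pool evaluation, and the logged "pg target" is reproducible from the logged space ratio and bias. The constant factor of 300 presumably reflects the default of 100 target PGs per OSD across this three-OSD cluster (an assumption; the log does not state its origin):

    # Reproduce two "pg target" values from the log: ratio * bias * 300.
    pools = {
        ".mgr":               (2.0538165363856318e-05, 1.0),
        "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * 300)
    # Matches the logged pg targets (0.00616... and 0.00174...),
    # before quantization to a power of two.

Because these targets are tiny, quantization leaves every pool at its current pg_num, which is why each line ends "quantized to N (current N)".
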
Nov 29 02:45:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:20.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:21.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:45:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:45:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:22.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:45:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:23.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:45:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:45:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:45:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:25.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:45:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:27.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 772ab25b-26de-4432-9fd7-1527ca1a73cf does not exist
Nov 29 02:45:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev afcccd12-589d-4b77-ac38-224fabcc682f does not exist
Nov 29 02:45:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 13b95627-3e26-4be8-b245-77e6661afafa does not exist
Nov 29 02:45:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:45:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:45:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:45:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:45:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:45:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:28 np0005539550 podman[260607]: 2025-11-29 07:45:28.085593256 +0000 UTC m=+0.025332121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:28 np0005539550 podman[260607]: 2025-11-29 07:45:28.472732767 +0000 UTC m=+0.412471572 container create 739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:45:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:28.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:45:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:45:28 np0005539550 systemd[1]: Started libpod-conmon-739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d.scope.
Nov 29 02:45:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:29.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:29 np0005539550 podman[260607]: 2025-11-29 07:45:29.234686683 +0000 UTC m=+1.174425548 container init 739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:29 np0005539550 podman[260607]: 2025-11-29 07:45:29.241760694 +0000 UTC m=+1.181499509 container start 739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:29 np0005539550 affectionate_moore[260624]: 167 167
Nov 29 02:45:29 np0005539550 systemd[1]: libpod-739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d.scope: Deactivated successfully.
Nov 29 02:45:29 np0005539550 podman[260607]: 2025-11-29 07:45:29.301222529 +0000 UTC m=+1.240961374 container attach 739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 02:45:29 np0005539550 podman[260607]: 2025-11-29 07:45:29.302078981 +0000 UTC m=+1.241817806 container died 739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:45:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9952b69e6ee63f0ccadd3d8d62635a2932c7ceada2749340a2976d51bb46345c-merged.mount: Deactivated successfully.
Nov 29 02:45:30 np0005539550 podman[260607]: 2025-11-29 07:45:30.154679281 +0000 UTC m=+2.094418096 container remove 739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_moore, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:45:30 np0005539550 systemd[1]: libpod-conmon-739e70827fb34f198889fc22c760a3854958c703e9e6c1383d8908fa10eadb4d.scope: Deactivated successfully.
Nov 29 02:45:30 np0005539550 nova_compute[257631]: 2025-11-29 07:45:30.297 257641 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 1.57 sec#033[00m
Nov 29 02:45:30 np0005539550 podman[260651]: 2025-11-29 07:45:30.349895949 +0000 UTC m=+0.047594022 container create 7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:45:30 np0005539550 systemd[1]: Started libpod-conmon-7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966.scope.
Nov 29 02:45:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62878e15ef318f62a09e18556a9c2cf5c593e7b8fb31af20e0c5fd67b8b1222/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62878e15ef318f62a09e18556a9c2cf5c593e7b8fb31af20e0c5fd67b8b1222/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62878e15ef318f62a09e18556a9c2cf5c593e7b8fb31af20e0c5fd67b8b1222/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62878e15ef318f62a09e18556a9c2cf5c593e7b8fb31af20e0c5fd67b8b1222/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:30 np0005539550 podman[260651]: 2025-11-29 07:45:30.330297226 +0000 UTC m=+0.027995319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62878e15ef318f62a09e18556a9c2cf5c593e7b8fb31af20e0c5fd67b8b1222/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:30 np0005539550 podman[260651]: 2025-11-29 07:45:30.443964102 +0000 UTC m=+0.141662205 container init 7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:45:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:30 np0005539550 podman[260651]: 2025-11-29 07:45:30.452144331 +0000 UTC m=+0.149842404 container start 7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:45:30 np0005539550 podman[260651]: 2025-11-29 07:45:30.45560935 +0000 UTC m=+0.153307423 container attach 7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:30.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:45:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:31 np0005539550 bold_swanson[260668]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:45:31 np0005539550 bold_swanson[260668]: --> relative data size: 1.0
Nov 29 02:45:31 np0005539550 bold_swanson[260668]: --> All data devices are unavailable
Nov 29 02:45:31 np0005539550 systemd[1]: libpod-7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966.scope: Deactivated successfully.
Nov 29 02:45:31 np0005539550 podman[260651]: 2025-11-29 07:45:31.28244173 +0000 UTC m=+0.980139803 container died 7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 02:45:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f62878e15ef318f62a09e18556a9c2cf5c593e7b8fb31af20e0c5fd67b8b1222-merged.mount: Deactivated successfully.
Nov 29 02:45:31 np0005539550 podman[260651]: 2025-11-29 07:45:31.340456118 +0000 UTC m=+1.038154191 container remove 7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_swanson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:45:31 np0005539550 systemd[1]: libpod-conmon-7611e0e9db1b61bcdce0e090a4c550e32695b969e9d089b774c901b48e62a966.scope: Deactivated successfully.
Nov 29 02:45:31 np0005539550 podman[260833]: 2025-11-29 07:45:31.918768153 +0000 UTC m=+0.047940131 container create 0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:45:31 np0005539550 systemd[1]: Started libpod-conmon-0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309.scope.
Nov 29 02:45:31 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:31 np0005539550 podman[260833]: 2025-11-29 07:45:31.893936816 +0000 UTC m=+0.023108824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:31 np0005539550 podman[260833]: 2025-11-29 07:45:31.994707141 +0000 UTC m=+0.123879139 container init 0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:45:32 np0005539550 podman[260833]: 2025-11-29 07:45:32.00089666 +0000 UTC m=+0.130068638 container start 0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:32 np0005539550 podman[260833]: 2025-11-29 07:45:32.00481541 +0000 UTC m=+0.133987388 container attach 0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:45:32 np0005539550 magical_varahamihira[260851]: 167 167
Nov 29 02:45:32 np0005539550 systemd[1]: libpod-0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309.scope: Deactivated successfully.
Nov 29 02:45:32 np0005539550 podman[260833]: 2025-11-29 07:45:32.007039777 +0000 UTC m=+0.136211755 container died 0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:45:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1fb2c3999474746d66a13b45dee835d5282d6772f923966a900cbdf48e6ec13a-merged.mount: Deactivated successfully.
Nov 29 02:45:32 np0005539550 podman[260833]: 2025-11-29 07:45:32.047441544 +0000 UTC m=+0.176613542 container remove 0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_varahamihira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:45:32 np0005539550 systemd[1]: libpod-conmon-0063477c94ae78aaef75efd95280ff2906c5f76e73f8e718ca17b6c362209309.scope: Deactivated successfully.
Nov 29 02:45:32 np0005539550 podman[260874]: 2025-11-29 07:45:32.208781632 +0000 UTC m=+0.040115870 container create d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:45:32 np0005539550 systemd[1]: Started libpod-conmon-d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628.scope.
Nov 29 02:45:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0544cfc6c089076b4c0bdad56e3fe4fd6e666bb228b82bfeed4a05fa397654d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0544cfc6c089076b4c0bdad56e3fe4fd6e666bb228b82bfeed4a05fa397654d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0544cfc6c089076b4c0bdad56e3fe4fd6e666bb228b82bfeed4a05fa397654d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0544cfc6c089076b4c0bdad56e3fe4fd6e666bb228b82bfeed4a05fa397654d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:32 np0005539550 podman[260874]: 2025-11-29 07:45:32.19310219 +0000 UTC m=+0.024436458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:32 np0005539550 podman[260874]: 2025-11-29 07:45:32.298128224 +0000 UTC m=+0.129462492 container init d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:45:32 np0005539550 podman[260874]: 2025-11-29 07:45:32.30730512 +0000 UTC m=+0.138639358 container start d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:45:32 np0005539550 podman[260874]: 2025-11-29 07:45:32.311060166 +0000 UTC m=+0.142394404 container attach d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:45:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:32.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:33.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]: {
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:    "0": [
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:        {
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "devices": [
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "/dev/loop3"
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            ],
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "lv_name": "ceph_lv0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "lv_size": "7511998464",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "name": "ceph_lv0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "tags": {
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.cluster_name": "ceph",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.crush_device_class": "",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.encrypted": "0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.osd_id": "0",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.type": "block",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:                "ceph.vdo": "0"
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            },
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "type": "block",
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:            "vg_name": "ceph_vg0"
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:        }
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]:    ]
Nov 29 02:45:33 np0005539550 gallant_liskov[260890]: }
Nov 29 02:45:33 np0005539550 systemd[1]: libpod-d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628.scope: Deactivated successfully.
Nov 29 02:45:33 np0005539550 podman[260874]: 2025-11-29 07:45:33.137187308 +0000 UTC m=+0.968521556 container died d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:45:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b0544cfc6c089076b4c0bdad56e3fe4fd6e666bb228b82bfeed4a05fa397654d-merged.mount: Deactivated successfully.
Nov 29 02:45:33 np0005539550 podman[260874]: 2025-11-29 07:45:33.194124628 +0000 UTC m=+1.025458866 container remove d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:45:33 np0005539550 systemd[1]: libpod-conmon-d451478a96bb41e2f28d29c2dfb5b7528bc15c884a958cf68ff4dce7b028c628.scope: Deactivated successfully.
Nov 29 02:45:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:33 np0005539550 podman[261053]: 2025-11-29 07:45:33.766470519 +0000 UTC m=+0.034909857 container create 959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:45:33 np0005539550 systemd[1]: Started libpod-conmon-959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4.scope.
Nov 29 02:45:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:33 np0005539550 podman[261053]: 2025-11-29 07:45:33.831684542 +0000 UTC m=+0.100123900 container init 959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_leavitt, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:45:33 np0005539550 podman[261053]: 2025-11-29 07:45:33.837414089 +0000 UTC m=+0.105853427 container start 959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 02:45:33 np0005539550 podman[261053]: 2025-11-29 07:45:33.84097692 +0000 UTC m=+0.109416288 container attach 959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_leavitt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:45:33 np0005539550 wizardly_leavitt[261070]: 167 167
Nov 29 02:45:33 np0005539550 systemd[1]: libpod-959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4.scope: Deactivated successfully.
Nov 29 02:45:33 np0005539550 podman[261053]: 2025-11-29 07:45:33.843292089 +0000 UTC m=+0.111731437 container died 959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_leavitt, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:45:33 np0005539550 podman[261053]: 2025-11-29 07:45:33.750405317 +0000 UTC m=+0.018844675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b07ed3c376ab484f81be2dc1987a4607f105c5fae847a410382d785be0f06cd1-merged.mount: Deactivated successfully.
Nov 29 02:45:33 np0005539550 podman[261053]: 2025-11-29 07:45:33.885470791 +0000 UTC m=+0.153910129 container remove 959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_leavitt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:45:33 np0005539550 systemd[1]: libpod-conmon-959d3b68dd52e1df129562a6125a4fc4697c786f7acc70bcc002601214357bd4.scope: Deactivated successfully.
Nov 29 02:45:34 np0005539550 podman[261095]: 2025-11-29 07:45:34.032965375 +0000 UTC m=+0.037229276 container create bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:45:34 np0005539550 systemd[1]: Started libpod-conmon-bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8.scope.
Nov 29 02:45:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:45:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b176a96838b41553b49b1a6bbbc379ef33e3810d54754678977f37f0f90907a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b176a96838b41553b49b1a6bbbc379ef33e3810d54754678977f37f0f90907a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b176a96838b41553b49b1a6bbbc379ef33e3810d54754678977f37f0f90907a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b176a96838b41553b49b1a6bbbc379ef33e3810d54754678977f37f0f90907a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:45:34 np0005539550 podman[261095]: 2025-11-29 07:45:34.106020259 +0000 UTC m=+0.110284190 container init bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:45:34 np0005539550 podman[261095]: 2025-11-29 07:45:34.016694287 +0000 UTC m=+0.020958218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:45:34 np0005539550 podman[261095]: 2025-11-29 07:45:34.112892145 +0000 UTC m=+0.117156056 container start bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:45:34 np0005539550 podman[261095]: 2025-11-29 07:45:34.119796842 +0000 UTC m=+0.124060753 container attach bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:45:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]: {
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]:        "osd_id": 0,
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]:        "type": "bluestore"
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]:    }
Nov 29 02:45:34 np0005539550 stupefied_mclaren[261112]: }
Nov 29 02:45:34 np0005539550 systemd[1]: libpod-bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8.scope: Deactivated successfully.
Nov 29 02:45:34 np0005539550 conmon[261112]: conmon bd39d759ed2dbe5f6ffe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8.scope/container/memory.events
Nov 29 02:45:35 np0005539550 podman[261134]: 2025-11-29 07:45:35.022220021 +0000 UTC m=+0.027940518 container died bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 02:45:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9b176a96838b41553b49b1a6bbbc379ef33e3810d54754678977f37f0f90907a-merged.mount: Deactivated successfully.
Nov 29 02:45:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:35.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:35 np0005539550 podman[261134]: 2025-11-29 07:45:35.093491799 +0000 UTC m=+0.099212286 container remove bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:45:35 np0005539550 systemd[1]: libpod-conmon-bd39d759ed2dbe5f6ffe52a26129e0ef7c34f8e078db9e746fea61bcf91c09d8.scope: Deactivated successfully.
Nov 29 02:45:35 np0005539550 podman[261140]: 2025-11-29 07:45:35.100467848 +0000 UTC m=+0.094659749 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 02:45:35 np0005539550 podman[261133]: 2025-11-29 07:45:35.131777631 +0000 UTC m=+0.126822814 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:45:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:45:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:45:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:35 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dc1fb7a1-ae15-432e-b97c-646697b6e3a5 does not exist
Nov 29 02:45:35 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1705add9-45b0-4a70-ba21-f61fcc6efeeb does not exist
Nov 29 02:45:35 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4b980785-93da-4d2c-8dc9-a92f9ef0219a does not exist
Nov 29 02:45:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:36 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:36 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:45:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:36.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:37.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:38.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:39.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:40.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:41.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:42.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:43.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:44.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:45.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:46.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:47.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:48 np0005539550 podman[261294]: 2025-11-29 07:45:48.359909591 +0000 UTC m=+0.100222102 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:45:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:48.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
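[annotation] The mon line above is its periodic cache autotuning; the values are bytes, and the split into incremental/full/key-value allocations is read off the field names, not from mon internals. A quick conversion:

    # Convert the allocator sizes from the _set_new_cache_sizes line to MiB.
    for name, val in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                      ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name}: {val / 2**20:.0f} MiB")
    # -> cache_size 973 MiB, inc_alloc/full_alloc 332 MiB, kv_alloc 304 MiB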
Nov 29 02:45:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:49.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:50.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:51.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:53.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:54.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:45:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:55.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:45:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:56.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:57.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:45:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:45:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:45:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:45:58.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:45:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:45:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:45:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:45:59.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:45:59
Nov 29 02:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups', 'images', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.control']
Nov 29 02:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
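[annotation] The balancer pass above ran in upmap mode with a 5% max-misplaced budget, evaluated the listed pools, and prepared 0 of a possible 10 changes, i.e. the PG distribution already needed no upmap adjustments. A sketch for reading the same state back with the standard `ceph balancer status` command; the exact JSON keys are an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), "active:", status.get("active"))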
Nov 29 02:46:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:00.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:02.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:03.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:04.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:05.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:05 np0005539550 podman[261380]: 2025-11-29 07:46:05.315758224 +0000 UTC m=+0.053025731 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 02:46:05 np0005539550 podman[261379]: 2025-11-29 07:46:05.321136732 +0000 UTC m=+0.062054933 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:46:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:06.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:07.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.301 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.302 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.677 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.677 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.677 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.752 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.753 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.753 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.753 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.754 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.755 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.836 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.837 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.837 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.838 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:46:07 np0005539550 nova_compute[257631]: 2025-11-29 07:46:07.838 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:46:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:46:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:46:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:46:07 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:46:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840649845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.290 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
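[annotation] The resource tracker shells out to `ceph df --format=json` (launched at 07:46:07.838, returned rc=0 after 0.451s) to size the RBD-backed disk pool before reporting resources. A minimal reproduction of that call; the JSON keys under "stats" follow the usual ceph df layout and should be treated as an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print("total:", stats["total_bytes"] / 2**30, "GiB,",
          "avail:", stats["total_avail_bytes"] / 2**30, "GiB")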
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.433 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.434 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5212MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.434 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.435 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:46:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.608 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.608 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:46:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:08.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:08 np0005539550 nova_compute[257631]: 2025-11-29 07:46:08.628 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:46:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
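[annotation] The rbd_support module above reloads the mirror-snapshot and trash-purge schedules for each RBD pool (vms, volumes, backups, images); empty start_after markers mean a scan from the beginning. A sketch of the equivalent on-demand views around the standard rbd subcommands:

    import subprocess

    # List any configured mirror-snapshot and trash-purge schedules for
    # the pools named in the log above.
    for pool in ["vms", "volumes", "backups", "images"]:
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool], check=False)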
Nov 29 02:46:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:46:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764424056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:46:09 np0005539550 nova_compute[257631]: 2025-11-29 07:46:09.067 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:46:09 np0005539550 nova_compute[257631]: 2025-11-29 07:46:09.073 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:46:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:09.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:09 np0005539550 nova_compute[257631]: 2025-11-29 07:46:09.231 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:46:09 np0005539550 nova_compute[257631]: 2025-11-29 07:46:09.233 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:46:09 np0005539550 nova_compute[257631]: 2025-11-29 07:46:09.233 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
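[annotation] The update_available_resource pass above completed with no inventory change. The capacity arithmetic behind the reported inventory is (total - reserved) * allocation_ratio per resource class; a worked check against the logged numbers:

    # Values copied from the inventory line above.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 20,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable capacity:", cap)
    # -> MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 18.0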
Nov 29 02:46:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:10.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:11.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:12.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:14.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:15.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:16.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:17.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:18.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:46:18.917 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:46:18.918 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:46:18.919 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
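[annotation] The Acquiring/acquired/"released" trio above is the standard oslo.concurrency pattern: the lock was waited on for 0.001s and held for under a millisecond per pass. A minimal sketch of the decorator that produces exactly this bookkeeping; the body here is a stand-in, not neutron's code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # stand-in body; the wait/held DEBUG lines come from lockutils

    check_child_processes()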
Nov 29 02:46:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:19.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:19 np0005539550 podman[261474]: 2025-11-29 07:46:19.338771586 +0000 UTC m=+0.083295657 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:46:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
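[annotation] The pg_autoscaler block above is internally consistent: each pool's raw pg target is its usage share times its bias times a per-cluster PG budget, and the logged "quantized to" value then rounds to a power of two without shrinking these nearly empty pools below their current pg_num. The factor of 300 below assumes 3 OSDs at the default mon_target_pg_per_osd=100, which matches every line in the block:

    # Reproduce the autoscaler pg targets from the logged usage shares.
    pools = [
        (".mgr",               2.0538165363856318e-05, 1.0),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
        (".rgw.root",          7.270147031453564e-07,  1.0),
        ("default.rgw.log",    6.17962497673553e-06,   1.0),
    ]
    for name, usage, bias in pools:
        target = usage * bias * 3 * 100  # usage share * bias * OSDs * pgs/OSD
        print(f"{name}: pg target {target:.6g}")
    # matches the "pg target ... quantized to ..." values logged above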
Nov 29 02:46:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:20.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:21.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:22.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:23.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:25.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:26.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:27.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:28.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:29.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:30.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:31.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:32.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:33.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:46:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:34 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 02:46:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:35.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:36 np0005539550 podman[261564]: 2025-11-29 07:46:36.327218715 +0000 UTC m=+0.060025430 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:46:36 np0005539550 podman[261580]: 2025-11-29 07:46:36.344087898 +0000 UTC m=+0.074939833 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 02:46:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 1 op/s
Nov 29 02:46:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 02:46:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:36.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
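[annotation] Each "starting new request" / "req done" / beast triple above is one anonymous HEAD / against the RGW beast frontend, arriving every ~2 seconds alternately from 192.168.122.100 and 192.168.122.102; the cadence and anonymous HEAD are consistent with load-balancer health probes. A sketch of an equivalent probe; the endpoint host and port are assumptions, since the journal only shows the probes' source addresses:

    import http.client

    host, port = "192.168.122.100", 8080   # RGW endpoint assumed, not shown in the log
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # 200 here matches the http_status=200 entries above
    conn.close()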
Nov 29 02:46:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:37.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 02:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
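[annotation] The RGWReshardLock INFO lines are benign coordination, not errors: each RGW instance periodically scans the numbered reshard-queue shards (reshard.0000000005, ...0000000006, ...) and skips any shard whose lock another instance currently holds. A hedged sketch for inspecting the queue those locks protect, using the stock radosgw-admin CLI (run wherever a ceph.conf and an RGW-capable keyring are available):

    import json
    import subprocess

    out = subprocess.run(["radosgw-admin", "reshard", "list"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out) if out.strip() else "reshard queue is empty")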
Nov 29 02:46:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 02:46:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
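[annotation] The recurring _set_new_cache_sizes lines are the monitor's periodic cache autotuning pass; the cache_size of ~1.02 GB appears to be derived from the mon's memory budget (mon_memory_target). A sketch for checking, and if needed raising, that budget; the set call is commented out and the value is illustrative, not a recommendation:

    import subprocess

    subprocess.run(["ceph", "config", "get", "mon", "mon_memory_target"], check=True)
    # subprocess.run(["ceph", "config", "set", "mon", "mon_memory_target",
    #                 "2147483648"], check=True)  # 2 GiB, illustrative only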
Nov 29 02:46:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:38.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:46:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2265937091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:46:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:46:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2265937091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:46:38 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 29 02:46:38 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:46:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:39.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:39 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 02:46:39 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 02:46:40 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 02:46:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Nov 29 02:46:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:40.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:41.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:46:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 29 02:46:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:42.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:46:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:43.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 36ccff1c-e061-4a3e-8b63-c75584aa2844 does not exist
Nov 29 02:46:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7717cf31-7de3-4fc4-b343-23245a1ff152 does not exist
Nov 29 02:46:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bef5e939-ab56-4406-9f2a-f94ec03f4a04 does not exist
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:46:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:43 np0005539550 podman[261927]: 2025-11-29 07:46:43.841655552 +0000 UTC m=+0.039280149 container create 19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haslett, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:43 np0005539550 systemd[1]: Started libpod-conmon-19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0.scope.
Nov 29 02:46:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:46:43 np0005539550 podman[261927]: 2025-11-29 07:46:43.919874418 +0000 UTC m=+0.117499015 container init 19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haslett, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:46:43 np0005539550 podman[261927]: 2025-11-29 07:46:43.824306407 +0000 UTC m=+0.021931034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:43 np0005539550 podman[261927]: 2025-11-29 07:46:43.926540139 +0000 UTC m=+0.124164726 container start 19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:43 np0005539550 podman[261927]: 2025-11-29 07:46:43.929124526 +0000 UTC m=+0.126749133 container attach 19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haslett, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:46:43 np0005539550 unruffled_haslett[261942]: 167 167
Nov 29 02:46:43 np0005539550 systemd[1]: libpod-19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0.scope: Deactivated successfully.
Nov 29 02:46:43 np0005539550 podman[261927]: 2025-11-29 07:46:43.932207495 +0000 UTC m=+0.129832102 container died 19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haslett, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:46:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f05d2802c8135dd86f3f8b2b319ce5f7099fd337a903b6f53152b30bb537c721-merged.mount: Deactivated successfully.
Nov 29 02:46:43 np0005539550 podman[261927]: 2025-11-29 07:46:43.98191512 +0000 UTC m=+0.179539717 container remove 19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:46:43 np0005539550 systemd[1]: libpod-conmon-19dad9925ca16ae8b7058c48a4084781716eb07f9ad5d8c7f3ca060bd88845d0.scope: Deactivated successfully.
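[annotation] The create → init → start → attach → died → remove sequence above (scope libpod-19dad99..., random name unruffled_haslett) is cephadm launching a short-lived helper container from the ceph image and discarding it within ~100 ms. The "167 167" it prints is consistent with cephadm probing the ceph UID/GID inside the image (167 is the fixed ceph uid/gid); that probe shape is an assumption based on cephadm's behaviour, not shown verbatim in the log. A rough one-shot equivalent:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm makes podman remove the container on exit, matching the
    # immediate "container remove" event in the journal.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # e.g. "167 167"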
Nov 29 02:46:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:46:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:46:44 np0005539550 podman[261967]: 2025-11-29 07:46:44.141481263 +0000 UTC m=+0.039054063 container create 5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:46:44 np0005539550 systemd[1]: Started libpod-conmon-5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937.scope.
Nov 29 02:46:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:46:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65245a56017cbabfed87b28086bb6030b2c8b0a2895d351be12e2c06fd8e534/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65245a56017cbabfed87b28086bb6030b2c8b0a2895d351be12e2c06fd8e534/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65245a56017cbabfed87b28086bb6030b2c8b0a2895d351be12e2c06fd8e534/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65245a56017cbabfed87b28086bb6030b2c8b0a2895d351be12e2c06fd8e534/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a65245a56017cbabfed87b28086bb6030b2c8b0a2895d351be12e2c06fd8e534/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
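[annotation] The "supports timestamps until 2038" kernel lines are informational, not errors: each bind mount into the helper container re-registers the backing XFS filesystem, and that filesystem lacks the bigtime feature, so its inode timestamps saturate at 0x7fffffff (2038). A quick check, assuming the path is the actual XFS mount point hosting /var/lib/containers (xfs_info wants the mount point, so adjust the path for this host):

    import subprocess

    # Look for "bigtime=1" in the output; bigtime=0 matches the kernel notice.
    print(subprocess.run(["xfs_info", "/var/lib/containers"],
                         capture_output=True, text=True, check=True).stdout)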
Nov 29 02:46:44 np0005539550 podman[261967]: 2025-11-29 07:46:44.214843805 +0000 UTC m=+0.112416615 container init 5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:46:44 np0005539550 podman[261967]: 2025-11-29 07:46:44.125952965 +0000 UTC m=+0.023525785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:44 np0005539550 podman[261967]: 2025-11-29 07:46:44.224679387 +0000 UTC m=+0.122252177 container start 5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:46:44 np0005539550 podman[261967]: 2025-11-29 07:46:44.22831771 +0000 UTC m=+0.125890530 container attach 5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 02:46:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Nov 29 02:46:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:44.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:45 np0005539550 objective_mirzakhani[261984]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:46:45 np0005539550 objective_mirzakhani[261984]: --> relative data size: 1.0
Nov 29 02:46:45 np0005539550 objective_mirzakhani[261984]: --> All data devices are unavailable
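[annotation] The three "-->" lines above are ceph-volume batch report output from another throwaway container (objective_mirzakhani): one LVM data device was passed in, and "All data devices are unavailable" means it is already consumed by an existing OSD, so the reconcile pass has nothing to create. A hedged sketch of the report cephadm runs, with the device path taken from the inventory JSON further down; flags are the stock ceph-volume ones, but run inside the ceph container via cephadm:

    import subprocess

    subprocess.run(["cephadm", "ceph-volume", "--", "lvm", "batch",
                    "--report", "--format", "json",
                    "/dev/ceph_vg0/ceph_lv0"], check=True)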
Nov 29 02:46:45 np0005539550 systemd[1]: libpod-5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937.scope: Deactivated successfully.
Nov 29 02:46:45 np0005539550 podman[261967]: 2025-11-29 07:46:45.050161762 +0000 UTC m=+0.947734552 container died 5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mirzakhani, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:46:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a65245a56017cbabfed87b28086bb6030b2c8b0a2895d351be12e2c06fd8e534-merged.mount: Deactivated successfully.
Nov 29 02:46:45 np0005539550 podman[261967]: 2025-11-29 07:46:45.111006262 +0000 UTC m=+1.008579052 container remove 5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:46:45 np0005539550 systemd[1]: libpod-conmon-5d9bc235a346a096b027149de89f7ccb4262bb323a65dfd826f6ffb63c8ac937.scope: Deactivated successfully.
Nov 29 02:46:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:45 np0005539550 podman[262152]: 2025-11-29 07:46:45.688947737 +0000 UTC m=+0.042754958 container create 0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 29 02:46:45 np0005539550 systemd[1]: Started libpod-conmon-0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb.scope.
Nov 29 02:46:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:46:45 np0005539550 podman[262152]: 2025-11-29 07:46:45.668336049 +0000 UTC m=+0.022143330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:45 np0005539550 podman[262152]: 2025-11-29 07:46:45.763215412 +0000 UTC m=+0.117022663 container init 0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_agnesi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:46:45 np0005539550 podman[262152]: 2025-11-29 07:46:45.77168856 +0000 UTC m=+0.125495771 container start 0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_agnesi, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:46:45 np0005539550 podman[262152]: 2025-11-29 07:46:45.774989034 +0000 UTC m=+0.128796275 container attach 0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_agnesi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:46:45 np0005539550 recursing_agnesi[262168]: 167 167
Nov 29 02:46:45 np0005539550 systemd[1]: libpod-0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb.scope: Deactivated successfully.
Nov 29 02:46:45 np0005539550 conmon[262168]: conmon 0ea6a1158eef9fe21641 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb.scope/container/memory.events
Nov 29 02:46:45 np0005539550 podman[262152]: 2025-11-29 07:46:45.778525425 +0000 UTC m=+0.132332646 container died 0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_agnesi, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:46:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-092f146063caf4cc9a66bf0d1b96fd40ec34ca03d648b703b694dd0596a954ec-merged.mount: Deactivated successfully.
Nov 29 02:46:45 np0005539550 podman[262152]: 2025-11-29 07:46:45.812343073 +0000 UTC m=+0.166150294 container remove 0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:46:45 np0005539550 systemd[1]: libpod-conmon-0ea6a1158eef9fe216418c0f7836f1be14381650257d8130469d9e39d23a7dbb.scope: Deactivated successfully.
Nov 29 02:46:45 np0005539550 podman[262191]: 2025-11-29 07:46:45.967325968 +0000 UTC m=+0.039372761 container create 93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ardinghelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:46:46 np0005539550 systemd[1]: Started libpod-conmon-93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484.scope.
Nov 29 02:46:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:46:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291415ab765a737b7db0cb340b32dcf66b6efa603021a1f3951529b0e7318e8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291415ab765a737b7db0cb340b32dcf66b6efa603021a1f3951529b0e7318e8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291415ab765a737b7db0cb340b32dcf66b6efa603021a1f3951529b0e7318e8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291415ab765a737b7db0cb340b32dcf66b6efa603021a1f3951529b0e7318e8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:46 np0005539550 podman[262191]: 2025-11-29 07:46:46.03914261 +0000 UTC m=+0.111189493 container init 93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ardinghelli, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:46:46 np0005539550 podman[262191]: 2025-11-29 07:46:45.951516263 +0000 UTC m=+0.023563046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:46 np0005539550 podman[262191]: 2025-11-29 07:46:46.046477778 +0000 UTC m=+0.118524561 container start 93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:46 np0005539550 podman[262191]: 2025-11-29 07:46:46.049930127 +0000 UTC m=+0.121976920 container attach 93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:46:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Nov 29 02:46:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:46.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]: {
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:    "0": [
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:        {
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "devices": [
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "/dev/loop3"
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            ],
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "lv_name": "ceph_lv0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "lv_size": "7511998464",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "name": "ceph_lv0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "tags": {
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.cluster_name": "ceph",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.crush_device_class": "",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.encrypted": "0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.osd_id": "0",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.type": "block",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:                "ceph.vdo": "0"
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            },
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "type": "block",
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:            "vg_name": "ceph_vg0"
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:        }
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]:    ]
Nov 29 02:46:46 np0005539550 relaxed_ardinghelli[262209]: }
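[annotation] The JSON block emitted by relaxed_ardinghelli above is a ceph-volume lvm listing keyed by OSD id ("0"), showing osd.0 backed by LV ceph_vg0/ceph_lv0 on /dev/loop3, with the OSD metadata carried as LV tags. A small sketch that extracts the OSD-to-device mapping from such a report, read here from a file whose path is purely illustrative:

    import json

    with open("lvm_list.json") as fh:   # illustrative path; feed it the JSON above
        report = json.load(fh)
    for osd_id, lvs in report.items():
        for lv in lvs:
            # lv_path is the logical volume; devices are the physical backers.
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {', '.join(lv['devices'])}")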
Nov 29 02:46:46 np0005539550 systemd[1]: libpod-93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484.scope: Deactivated successfully.
Nov 29 02:46:46 np0005539550 podman[262191]: 2025-11-29 07:46:46.865614191 +0000 UTC m=+0.937660994 container died 93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ardinghelli, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:46:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-291415ab765a737b7db0cb340b32dcf66b6efa603021a1f3951529b0e7318e8c-merged.mount: Deactivated successfully.
Nov 29 02:46:46 np0005539550 podman[262191]: 2025-11-29 07:46:46.925131647 +0000 UTC m=+0.997178430 container remove 93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ardinghelli, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:46 np0005539550 systemd[1]: libpod-conmon-93b729ecde8b05c11e7189ad7e62ecf169f2f4b20a6c611ebaf94d20da401484.scope: Deactivated successfully.
Nov 29 02:46:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:47.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:47 np0005539550 podman[262371]: 2025-11-29 07:46:47.503511094 +0000 UTC m=+0.023434512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:47 np0005539550 podman[262371]: 2025-11-29 07:46:47.922176704 +0000 UTC m=+0.442100102 container create 167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:46:48 np0005539550 systemd[1]: Started libpod-conmon-167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835.scope.
Nov 29 02:46:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:46:48 np0005539550 podman[262371]: 2025-11-29 07:46:48.125371616 +0000 UTC m=+0.645295064 container init 167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:46:48 np0005539550 podman[262371]: 2025-11-29 07:46:48.134483 +0000 UTC m=+0.654406398 container start 167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:46:48 np0005539550 podman[262371]: 2025-11-29 07:46:48.138602835 +0000 UTC m=+0.658526253 container attach 167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bardeen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 02:46:48 np0005539550 priceless_bardeen[262388]: 167 167
Nov 29 02:46:48 np0005539550 systemd[1]: libpod-167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835.scope: Deactivated successfully.
Nov 29 02:46:48 np0005539550 podman[262371]: 2025-11-29 07:46:48.140783861 +0000 UTC m=+0.660707259 container died 167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bardeen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:46:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6c255f2eb75e72a4403287ea0b44ce4fd2a11e926bc8ed1af420e2ab7f4e2e4c-merged.mount: Deactivated successfully.
Nov 29 02:46:48 np0005539550 podman[262371]: 2025-11-29 07:46:48.178686373 +0000 UTC m=+0.698609771 container remove 167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_bardeen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:46:48 np0005539550 systemd[1]: libpod-conmon-167277bfb2412e7c3929a172565e6f15a07c883cb0996415a767e7e5d9b99835.scope: Deactivated successfully.
Nov 29 02:46:48 np0005539550 podman[262413]: 2025-11-29 07:46:48.328150327 +0000 UTC m=+0.041702110 container create 1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:48 np0005539550 systemd[1]: Started libpod-conmon-1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9.scope.
Nov 29 02:46:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:46:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b213af1be29494c4eaef72225132fa9f1646c9868e8dd0c9299f93c41afcf680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:48 np0005539550 podman[262413]: 2025-11-29 07:46:48.310064094 +0000 UTC m=+0.023615897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:46:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b213af1be29494c4eaef72225132fa9f1646c9868e8dd0c9299f93c41afcf680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b213af1be29494c4eaef72225132fa9f1646c9868e8dd0c9299f93c41afcf680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b213af1be29494c4eaef72225132fa9f1646c9868e8dd0c9299f93c41afcf680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:46:48 np0005539550 podman[262413]: 2025-11-29 07:46:48.414304637 +0000 UTC m=+0.127856470 container init 1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:46:48 np0005539550 podman[262413]: 2025-11-29 07:46:48.425481104 +0000 UTC m=+0.139032887 container start 1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:46:48 np0005539550 podman[262413]: 2025-11-29 07:46:48.429079206 +0000 UTC m=+0.142631009 container attach 1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:46:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 29 02:46:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:48.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]: {
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]:        "osd_id": 0,
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]:        "type": "bluestore"
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]:    }
Nov 29 02:46:49 np0005539550 condescending_bhaskara[262430]: }
Nov 29 02:46:49 np0005539550 systemd[1]: libpod-1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9.scope: Deactivated successfully.
Nov 29 02:46:49 np0005539550 podman[262451]: 2025-11-29 07:46:49.284491668 +0000 UTC m=+0.021105732 container died 1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:46:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b213af1be29494c4eaef72225132fa9f1646c9868e8dd0c9299f93c41afcf680-merged.mount: Deactivated successfully.
Nov 29 02:46:49 np0005539550 podman[262451]: 2025-11-29 07:46:49.333345951 +0000 UTC m=+0.069960025 container remove 1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhaskara, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:46:49 np0005539550 systemd[1]: libpod-conmon-1a2c2510583d327d8593b1a03742c1e3eb9c32ecd58df71f0c86ac4f5a612df9.scope: Deactivated successfully.
Nov 29 02:46:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:46:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:46:50 np0005539550 podman[262467]: 2025-11-29 07:46:50.362312237 +0000 UTC m=+0.104084841 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 02:46:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 29 02:46:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:50.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7b556e80-8ff8-4b0f-b411-97dab02114ac does not exist
Nov 29 02:46:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 61b5baf9-ab34-473b-96ed-dc86ee49a57e does not exist
Nov 29 02:46:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3c70bd67-3a46-402c-9e83-3ac8534d0172 does not exist
Nov 29 02:46:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:51.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:46:51.974 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:46:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:46:51.975 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:46:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:46:51.976 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:46:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:46:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 0 B/s wr, 34 op/s
Nov 29 02:46:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:52.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:46:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:53.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Nov 29 02:46:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:46:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:54.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:46:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:55.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Nov 29 02:46:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:56.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:57.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:46:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:46:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 29 02:46:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:46:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:46:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:46:58.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:46:59
Nov 29 02:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'volumes']
Nov 29 02:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:46:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:46:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:46:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:46:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:47:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 0 B/s wr, 38 op/s
Nov 29 02:47:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:00.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:01.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Nov 29 02:47:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:02.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.003000077s ======
Nov 29 02:47:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:03.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000077s
Nov 29 02:47:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Nov 29 02:47:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:04.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:05.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 0 B/s wr, 77 op/s
Nov 29 02:47:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:06.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:07 np0005539550 podman[262602]: 2025-11-29 07:47:07.350753935 +0000 UTC m=+0.076270178 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 29 02:47:07 np0005539550 podman[262601]: 2025-11-29 07:47:07.359895859 +0000 UTC m=+0.085412472 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3)
Nov 29 02:47:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:07.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Nov 29 02:47:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:08.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.235 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.236 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.236 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.237 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.385 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.386 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.386 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.386 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.387 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.387 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.387 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.387 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.387 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.458 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.459 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.459 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.459 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.459 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:09.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2902280494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:09 np0005539550 nova_compute[257631]: 2025-11-29 07:47:09.929 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.100 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.102 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.102 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.102 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.179 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.180 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.219 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:47:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Nov 29 02:47:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:47:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315979272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.686 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.691 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:47:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:10.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.705 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.707 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:47:10 np0005539550 nova_compute[257631]: 2025-11-29 07:47:10.708 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:11.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Nov 29 02:47:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:47:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:12.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:47:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:13.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Nov 29 02:47:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:14.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:47:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:15.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:47:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Nov 29 02:47:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:16.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:17.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:47:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:47:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:18.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:47:18.918 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:47:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:47:18.919 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:47:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:47:18.919 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:47:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 02:47:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:19.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:47:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:47:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:20.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:47:21 np0005539550 podman[262694]: 2025-11-29 07:47:21.351064993 +0000 UTC m=+0.090388798 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:47:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:47:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:47:21.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:47:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:47:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:47:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:47:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:47:22.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:50:20 np0005539550 brave_satoshi[267431]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:50:20 np0005539550 brave_satoshi[267431]: --> relative data size: 1.0
Nov 29 02:50:20 np0005539550 rsyslogd[1008]: imjournal: 2948 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 29 02:50:20 np0005539550 brave_satoshi[267431]: --> All data devices are unavailable
Nov 29 02:50:20 np0005539550 systemd[1]: libpod-4da31bf3db74827ad5e5a87e9d360d4fd870a4d86bc5877e2f040af97ae67ae0.scope: Deactivated successfully.
Nov 29 02:50:20 np0005539550 podman[267456]: 2025-11-29 07:50:20.301201468 +0000 UTC m=+0.029015017 container died 4da31bf3db74827ad5e5a87e9d360d4fd870a4d86bc5877e2f040af97ae67ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:50:20 np0005539550 nova_compute[257631]: 2025-11-29 07:50:20.372 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:20 np0005539550 nova_compute[257631]: 2025-11-29 07:50:20.404 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:20.549 158978 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:50:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:20.549 158978 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppvqe17l2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 02:50:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:20.412 267470 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:50:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:20.418 267470 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:50:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:20.420 267470 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 02:50:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:20.420 267470 INFO oslo.privsep.daemon [-] privsep daemon running as pid 267470#033[00m
Nov 29 02:50:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:20.551 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c069e424-36c0-4337-9052-521939c65c61]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 403 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 629 KiB/s rd, 3.8 MiB/s wr, 118 op/s
Nov 29 02:50:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-920843dd7f3710a59ed4b8301ef56a2c1c17a66546f28cb69a8e2ccde3150c5d-merged.mount: Deactivated successfully.
Nov 29 02:50:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:20.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
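Each radosgw health probe shows up as a three-line lifecycle: "starting new request", a "req done" summary with op and HTTP status, and a beast access line carrying client IP, timestamp, request, status, byte count, and latency. A rough parser sketch for the beast line (field layout guessed from the samples here, not radosgw's documented log format):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        m = BEAST.search(line)
        return m.groupdict() if m else None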
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.094 267470 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.095 267470 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.095 267470 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
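The Acquiring / acquired / released triple above is the DEBUG trace oslo.concurrency emits around a named in-process lock. A minimal sketch of the pattern that produces it (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('context-manager')
    def create_context_manager():
        # The three DEBUG lines (Acquiring / acquired / released)
        # bracket exactly the execution of this body.
        ...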
Nov 29 02:50:21 np0005539550 nova_compute[257631]: 2025-11-29 07:50:21.405 257641 DEBUG nova.compute.manager [req-a5d2d50f-8945-470e-8e68-f0623e80cf49 req-8b6555f6-8b2c-4611-9ce3-cf58c232155b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received event network-vif-plugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:21 np0005539550 nova_compute[257631]: 2025-11-29 07:50:21.406 257641 DEBUG oslo_concurrency.lockutils [req-a5d2d50f-8945-470e-8e68-f0623e80cf49 req-8b6555f6-8b2c-4611-9ce3-cf58c232155b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:50:21 np0005539550 nova_compute[257631]: 2025-11-29 07:50:21.406 257641 DEBUG oslo_concurrency.lockutils [req-a5d2d50f-8945-470e-8e68-f0623e80cf49 req-8b6555f6-8b2c-4611-9ce3-cf58c232155b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:50:21 np0005539550 nova_compute[257631]: 2025-11-29 07:50:21.406 257641 DEBUG oslo_concurrency.lockutils [req-a5d2d50f-8945-470e-8e68-f0623e80cf49 req-8b6555f6-8b2c-4611-9ce3-cf58c232155b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:50:21 np0005539550 nova_compute[257631]: 2025-11-29 07:50:21.406 257641 DEBUG nova.compute.manager [req-a5d2d50f-8945-470e-8e68-f0623e80cf49 req-8b6555f6-8b2c-4611-9ce3-cf58c232155b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] No waiting events found dispatching network-vif-plugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:50:21 np0005539550 nova_compute[257631]: 2025-11-29 07:50:21.407 257641 WARNING nova.compute.manager [req-a5d2d50f-8945-470e-8e68-f0623e80cf49 req-8b6555f6-8b2c-4611-9ce3-cf58c232155b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received unexpected event network-vif-plugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:50:21 np0005539550 podman[267456]: 2025-11-29 07:50:21.578366381 +0000 UTC m=+1.306179930 container remove 4da31bf3db74827ad5e5a87e9d360d4fd870a4d86bc5877e2f040af97ae67ae0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:50:21 np0005539550 systemd[1]: libpod-conmon-4da31bf3db74827ad5e5a87e9d360d4fd870a4d86bc5877e2f040af97ae67ae0.scope: Deactivated successfully.
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.723 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[135c1317-1341-4ca4-ad51-6960dc0b25a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:21 np0005539550 NetworkManager[49039]: <info>  [1764402621.8656] manager: (tap17051a29-50): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.864 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1acc58bc-c8e9-450a-a4ce-1d92944d6ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.894 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[15a4c3ed-b6eb-4420-a91d-012c0f2fce08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.896 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[33063dce-8977-4f41-86f3-9a4c70b166e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:21 np0005539550 systemd-udevd[267584]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:50:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:21.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:21 np0005539550 NetworkManager[49039]: <info>  [1764402621.9250] device (tap17051a29-50): carrier: link connected
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.925 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bc794963-933c-43a9-917d-bad241cb0028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.946 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[84c59cd5-384f-4b55-be21-b99ba09dfc21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17051a29-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:50:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566958, 'reachable_time': 27863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267603, 'error': None, 'target': 'ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.963 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b01bd2-94e7-4c98-87ca-b1a6bdf89078]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:50de'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 566958, 'tstamp': 566958}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267604, 'error': None, 'target': 'ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:21.980 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[140e2452-a094-4c55-af06-908f4f14c4a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap17051a29-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:50:de'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566958, 'reachable_time': 27863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267605, 'error': None, 'target': 'ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
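The large privsep replies above wrap pyroute2-style netlink messages (RTM_NEWLINK for the veth tap17051a29-51, RTM_NEWADDR for its link-local address); the useful fields sit in the 'attrs' list of [name, value] pairs. A tiny helper sketch for reading them, assuming only the dict shape shown here:

    def get_attr(msg, name, default=None):
        """Return the first IFLA_*/IFA_* attribute value from a
        pyroute2-style message dict, e.g. get_attr(m, 'IFLA_IFNAME')."""
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return default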
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.012 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce68935-91c5-41ce-9991-abcc03f023a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.067 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[414a8cfe-8ec4-4a96-bd6a-558107d5f5f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.069 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17051a29-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.069 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.070 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17051a29-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.1043] manager: (tap17051a29-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 02:50:22 np0005539550 kernel: tap17051a29-50: entered promiscuous mode
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.103 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.106 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap17051a29-50, col_values=(('external_ids', {'iface-id': 'ba3350ae-f4f6-4a57-8f68-d95e027d64f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
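The three single-command transactions above replug the metadata tap: DelPortCommand removes tap17051a29-50 from br-ex (a no-op here, hence "Transaction caused no change"), AddPortCommand attaches it to br-int, and DbSetCommand writes external_ids:iface-id so ovn-controller can bind the port. A sketch of issuing the same commands through ovsdbapp's Open_vSwitch schema API (connection setup elided; treat the signatures as indicative rather than authoritative):

    def replug(api, port, iface_id):
        """Re-issue the three logged commands in one transaction.

        api is an ovsdbapp Open_vSwitch API object, e.g.
        ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl built on a
        backend.ovs_idl.connection.Connection (setup elided).
        """
        with api.transaction(check_error=True) as txn:
            txn.add(api.del_port(port, bridge='br-ex', if_exists=True))
            txn.add(api.add_port('br-int', port, may_exist=True))
            txn.add(api.db_set('Interface', port,
                               ('external_ids', {'iface-id': iface_id})))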
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.108 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:22 np0005539550 ovn_controller[148680]: 2025-11-29T07:50:22Z|00031|binding|INFO|Releasing lport ba3350ae-f4f6-4a57-8f68-d95e027d64f5 from this chassis (sb_readonly=0)
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.123 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.125 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/17051a29-59e4-4f9d-a9c3-0a59a97ce9fb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/17051a29-59e4-4f9d-a9c3-0a59a97ce9fb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.125 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ac33b5ff-061d-46d0-b777-ae8db7af6e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.127 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/17051a29-59e4-4f9d-a9c3-0a59a97ce9fb.pid.haproxy
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 17051a29-59e4-4f9d-a9c3-0a59a97ce9fb
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:50:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:50:22.127 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb', 'env', 'PROCESS_TAG=haproxy-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/17051a29-59e4-4f9d-a9c3-0a59a97ce9fb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
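The ENOENT on the pidfile a few lines up is the agent probing for an already-running proxy; finding none, it renders the configuration shown above, writes it under /var/lib/neutron/ovn-metadata-proxy/, and launches haproxy inside the ovnmeta- namespace through rootwrap. A minimal sketch of the render-and-write step, using a plain string.Template stand-in rather than neutron's actual template (only the global section is reproduced):

    from string import Template

    HAPROXY_GLOBAL = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon
    """)

    def write_cfg(path, network_id, pidfile):
        # Render the template and write the proxy config file that
        # haproxy is then pointed at with -f.
        with open(path, 'w') as f:
            f.write(HAPROXY_GLOBAL.substitute(network_id=network_id,
                                              pidfile=pidfile))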
Nov 29 02:50:22 np0005539550 podman[267651]: 2025-11-29 07:50:22.171190478 +0000 UTC m=+0.021617392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4944] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4948] device (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4959] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4962] device (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4969] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4974] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4977] device (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:50:22 np0005539550 NetworkManager[49039]: <info>  [1764402622.4980] device (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 02:50:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 391 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 137 op/s
Nov 29 02:50:22 np0005539550 podman[267651]: 2025-11-29 07:50:22.715472244 +0000 UTC m=+0.565899148 container create bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.770 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:22 np0005539550 ovn_controller[148680]: 2025-11-29T07:50:22Z|00032|binding|INFO|Releasing lport ba3350ae-f4f6-4a57-8f68-d95e027d64f5 from this chassis (sb_readonly=0)
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.843 257641 DEBUG nova.compute.manager [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received event network-changed-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.844 257641 DEBUG nova.compute.manager [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Refreshing instance network info cache due to event network-changed-dde08ae7-bcb7-43ab-8adf-8ee4218091e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.844 257641 DEBUG oslo_concurrency.lockutils [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-65be1187-6d11-4752-8ae6-ba034b77b9e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.844 257641 DEBUG oslo_concurrency.lockutils [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-65be1187-6d11-4752-8ae6-ba034b77b9e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:50:22 np0005539550 nova_compute[257631]: 2025-11-29 07:50:22.844 257641 DEBUG nova.network.neutron [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Refreshing network info cache for port dde08ae7-bcb7-43ab-8adf-8ee4218091e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:50:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:23 np0005539550 systemd[1]: Started libpod-conmon-bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1.scope.
Nov 29 02:50:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:50:23 np0005539550 podman[267651]: 2025-11-29 07:50:23.433136717 +0000 UTC m=+1.283563641 container init bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:23 np0005539550 podman[267651]: 2025-11-29 07:50:23.445246777 +0000 UTC m=+1.295673681 container start bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:50:23 np0005539550 cool_robinson[267683]: 167 167
Nov 29 02:50:23 np0005539550 systemd[1]: libpod-bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1.scope: Deactivated successfully.
Nov 29 02:50:23 np0005539550 podman[267693]: 2025-11-29 07:50:23.443300356 +0000 UTC m=+0.354882913 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:50:23 np0005539550 podman[267651]: 2025-11-29 07:50:23.685923524 +0000 UTC m=+1.536350458 container attach bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 02:50:23 np0005539550 podman[267651]: 2025-11-29 07:50:23.688223215 +0000 UTC m=+1.538650139 container died bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:50:23 np0005539550 ovn_controller[148680]: 2025-11-29T07:50:23Z|00033|memory|INFO|peak resident set size grew 50% in last 1356.9 seconds, from 16128 kB to 24192 kB
Nov 29 02:50:23 np0005539550 ovn_controller[148680]: 2025-11-29T07:50:23Z|00034|memory|INFO|idl-cells-OVN_Southbound:10327 idl-cells-Open_vSwitch:870 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:353 lflow-cache-entries-cache-matches:290 lflow-cache-size-KB:1474 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:590 ofctrl_installed_flow_usage-KB:432 ofctrl_sb_flow_ref_usage-KB:223
Nov 29 02:50:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:23.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:24 np0005539550 podman[267693]: 2025-11-29 07:50:24.205767494 +0000 UTC m=+1.117350041 container create b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:50:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-99c68f09a0254776784c9aac268659c7639d8a8dfc46ea21c66fce27ea159c62-merged.mount: Deactivated successfully.
Nov 29 02:50:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 372 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.8 MiB/s wr, 140 op/s
Nov 29 02:50:24 np0005539550 nova_compute[257631]: 2025-11-29 07:50:24.701 257641 DEBUG nova.network.neutron [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Updated VIF entry in instance network info cache for port dde08ae7-bcb7-43ab-8adf-8ee4218091e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:50:24 np0005539550 nova_compute[257631]: 2025-11-29 07:50:24.702 257641 DEBUG nova.network.neutron [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Updating instance_info_cache with network_info: [{"id": "dde08ae7-bcb7-43ab-8adf-8ee4218091e1", "address": "fa:16:3e:9b:3f:09", "network": {"id": "17051a29-59e4-4f9d-a9c3-0a59a97ce9fb", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1243968495-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa00908198f941b5afb99ba561a959d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdde08ae7-bc", "ovs_interfaceid": "dde08ae7-bcb7-43ab-8adf-8ee4218091e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:50:24 np0005539550 nova_compute[257631]: 2025-11-29 07:50:24.724 257641 DEBUG oslo_concurrency.lockutils [req-9e0f132a-b1db-4009-b09e-ce687097b6d1 req-49be7c9f-512e-4eef-8976-e45ddb9a0e8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-65be1187-6d11-4752-8ae6-ba034b77b9e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
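The instance_info_cache entry above is a JSON list of VIFs, each nesting network, subnets, ips, and floating_ips. Pulling the address pairs out is a plain dictionary walk; a sketch with key names taken from the sample above:

    import json

    def addresses(network_info_json):
        """Yield (fixed_ip, floating_ip) pairs from a network_info blob."""
        for vif in json.loads(network_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    for fip in ip.get('floating_ips', []):
                        yield ip['address'], fip['address']

    # For the cache entry above this yields
    # ('10.100.0.14', '192.168.122.199').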
Nov 29 02:50:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:24.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:25 np0005539550 podman[267651]: 2025-11-29 07:50:25.076639185 +0000 UTC m=+2.927066099 container remove bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:25 np0005539550 systemd[1]: libpod-conmon-bad678d34f6f7acdedd40bcfaa286776c8402af4ae616db76611db01fa134ba1.scope: Deactivated successfully.
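Short-lived cephadm helper containers (brave_satoshi, cool_robinson, zen_sutherland) each leave a create, init, start, attach, died, remove event chain in the journal, bracketed by the matching libpod scope units. A sketch that reconstructs those chains from lines like the ones above (the regex is fitted to these samples):

    import re
    from collections import defaultdict

    EVENT = re.compile(r'container (?P<event>\w+) (?P<cid>[0-9a-f]{64})')

    def lifecycles(lines):
        """Group podman journal lines into per-container event chains."""
        chains = defaultdict(list)
        for line in lines:
            m = EVENT.search(line)
            if m:
                chains[m.group('cid')].append(m.group('event'))
        return chains
    # e.g. chains['bad678d3…'] == ['create', 'init', 'start',
    #                              'attach', 'died', 'remove']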
Nov 29 02:50:25 np0005539550 systemd[1]: Started libpod-conmon-b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9.scope.
Nov 29 02:50:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:50:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a823b57d05fff0585f370f7e52af2031eacd4858acb8e53c4d6e7eea8ba37645/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:25 np0005539550 podman[267792]: 2025-11-29 07:50:25.311796486 +0000 UTC m=+0.075458094 container create b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:25 np0005539550 podman[267693]: 2025-11-29 07:50:25.319178871 +0000 UTC m=+2.230761438 container init b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 02:50:25 np0005539550 podman[267767]: 2025-11-29 07:50:25.319850629 +0000 UTC m=+0.803918624 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 02:50:25 np0005539550 podman[267693]: 2025-11-29 07:50:25.32860389 +0000 UTC m=+2.240186437 container start b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 02:50:25 np0005539550 neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb[267810]: [NOTICE]   (267820) : New worker (267825) forked
Nov 29 02:50:25 np0005539550 neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb[267810]: [NOTICE]   (267820) : Loading success.
Nov 29 02:50:25 np0005539550 podman[267792]: 2025-11-29 07:50:25.2661485 +0000 UTC m=+0.029810128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:25 np0005539550 systemd[1]: Started libpod-conmon-b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e.scope.
Nov 29 02:50:25 np0005539550 nova_compute[257631]: 2025-11-29 07:50:25.374 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:25 np0005539550 nova_compute[257631]: 2025-11-29 07:50:25.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:50:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:50:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989514600c1c225ea35ef875bea6ed99093903de13114bbfc7bfb3149518d8e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989514600c1c225ea35ef875bea6ed99093903de13114bbfc7bfb3149518d8e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989514600c1c225ea35ef875bea6ed99093903de13114bbfc7bfb3149518d8e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/989514600c1c225ea35ef875bea6ed99093903de13114bbfc7bfb3149518d8e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:25 np0005539550 podman[267792]: 2025-11-29 07:50:25.53724831 +0000 UTC m=+0.300909948 container init b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:50:25 np0005539550 podman[267792]: 2025-11-29 07:50:25.547149752 +0000 UTC m=+0.310811360 container start b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:50:25 np0005539550 podman[267792]: 2025-11-29 07:50:25.552572315 +0000 UTC m=+0.316233923 container attach b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:50:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:25.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]: {
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:    "0": [
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:        {
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "devices": [
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "/dev/loop3"
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            ],
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "lv_name": "ceph_lv0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "lv_size": "7511998464",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "name": "ceph_lv0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "tags": {
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.cluster_name": "ceph",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.crush_device_class": "",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.encrypted": "0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.osd_id": "0",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.type": "block",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:                "ceph.vdo": "0"
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            },
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "type": "block",
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:            "vg_name": "ceph_vg0"
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:        }
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]:    ]
Nov 29 02:50:26 np0005539550 zen_sutherland[267834]: }
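The JSON block printed by zen_sutherland maps OSD id to its LVM volumes, with the authoritative metadata duplicated between the raw ceph.* lv_tags string and the parsed "tags" dict; the shape is consistent with ceph-volume's JSON listing output. A sketch that flattens it, with field names taken from the dump above:

    import json

    def osd_devices(report_json):
        """Flatten an OSD-id -> LV-list report into (id, path, fsid) rows."""
        rows = []
        for osd_id, lvs in json.loads(report_json).items():
            for lv in lvs:
                rows.append((osd_id, lv['lv_path'],
                             lv['tags'].get('ceph.osd_fsid')))
        return rows

    # Yields [('0', '/dev/ceph_vg0/ceph_lv0',
    #          '5dd67027-4f06-4800-93bd-47ed1a74c5e6')] for the dump above.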
Nov 29 02:50:26 np0005539550 systemd[1]: libpod-b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e.scope: Deactivated successfully.
Nov 29 02:50:26 np0005539550 podman[267792]: 2025-11-29 07:50:26.355718557 +0000 UTC m=+1.119380165 container died b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:50:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 360 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 232 op/s
Nov 29 02:50:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-989514600c1c225ea35ef875bea6ed99093903de13114bbfc7bfb3149518d8e8-merged.mount: Deactivated successfully.
Nov 29 02:50:26 np0005539550 podman[267792]: 2025-11-29 07:50:26.862370329 +0000 UTC m=+1.626031937 container remove b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sutherland, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:50:26 np0005539550 systemd[1]: libpod-conmon-b339fdd7dd7063d2483047f54011b7c0fcfbf4a65e8e6311b6f9ae26c1ccd05e.scope: Deactivated successfully.
Nov 29 02:50:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:26.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
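
This three-line radosgw pattern (request start, request done, beast access line) repeats roughly every two seconds for anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, the signature of a load-balancer health check rather than real client traffic. A sketch of extracting the useful fields from the beast line; the regex is inferred from the lines in this log, not from any documented format:

    import re

    # Field layout inferred from the beast access lines in this log.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:50:26.962 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    assert m and m.group("status") == "200" and m.group("ip") == "192.168.122.100"
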
Nov 29 02:50:27 np0005539550 podman[267994]: 2025-11-29 07:50:27.490705003 +0000 UTC m=+0.022156216 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:27 np0005539550 podman[267994]: 2025-11-29 07:50:27.598799558 +0000 UTC m=+0.130250741 container create a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_allen, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:27 np0005539550 systemd[1]: Started libpod-conmon-a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145.scope.
Nov 29 02:50:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:27 np0005539550 podman[267994]: 2025-11-29 07:50:27.91917349 +0000 UTC m=+0.450624693 container init a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:27.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:27 np0005539550 podman[267994]: 2025-11-29 07:50:27.930474919 +0000 UTC m=+0.461926102 container start a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:50:27 np0005539550 systemd[1]: libpod-a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145.scope: Deactivated successfully.
Nov 29 02:50:27 np0005539550 conmon[268010]: conmon a1756a181f90b9408b0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145.scope/container/memory.events
Nov 29 02:50:27 np0005539550 podman[267994]: 2025-11-29 07:50:27.936366264 +0000 UTC m=+0.467817477 container attach a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:50:27 np0005539550 crazy_allen[268010]: 167 167
Nov 29 02:50:27 np0005539550 podman[267994]: 2025-11-29 07:50:27.937751701 +0000 UTC m=+0.469202884 container died a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_allen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:50:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-00d516d455acccc2ccd8f2d2cc31b6c69910aec7c5af2c446aec09d9705fe7ee-merged.mount: Deactivated successfully.
Nov 29 02:50:28 np0005539550 podman[267994]: 2025-11-29 07:50:28.016993664 +0000 UTC m=+0.548444847 container remove a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_allen, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:28 np0005539550 systemd[1]: libpod-conmon-a1756a181f90b9408b0c89438363681509cb6cc10cfbeb9d3ad68b71178c4145.scope: Deactivated successfully.
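
crazy_allen is another of cephadm's one-shot containers: created, started, it prints "167 167" (the uid and gid of the ceph user baked into the image), then exits and is removed within a second. The conmon warning about memory.events is a benign race with that quick exit: the scope is torn down before conmon can read the cgroup file. A sketch of an equivalent probe, assuming podman on the host; only the image digest is taken from the log, and the stat command is a plausible stand-in, not the exact command cephadm ran:

    import subprocess

    # One-shot probe in the spirit of the log above; prints the owner
    # uid/gid of /var/lib/ceph inside the image (expected "167 167").
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = out.stdout.split()
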
Nov 29 02:50:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:28 np0005539550 podman[268038]: 2025-11-29 07:50:28.220485678 +0000 UTC m=+0.023574283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:50:28 np0005539550 podman[268038]: 2025-11-29 07:50:28.274713231 +0000 UTC m=+0.077801816 container create f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:50:28 np0005539550 systemd[1]: Started libpod-conmon-f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7.scope.
Nov 29 02:50:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:50:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b580b291cea552cc4bb3288c025158f2466258aea6b9a402dbddeea52c9d8441/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b580b291cea552cc4bb3288c025158f2466258aea6b9a402dbddeea52c9d8441/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b580b291cea552cc4bb3288c025158f2466258aea6b9a402dbddeea52c9d8441/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b580b291cea552cc4bb3288c025158f2466258aea6b9a402dbddeea52c9d8441/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:50:28 np0005539550 podman[268038]: 2025-11-29 07:50:28.512989574 +0000 UTC m=+0.316078179 container init f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leavitt, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:50:28 np0005539550 podman[268038]: 2025-11-29 07:50:28.519748462 +0000 UTC m=+0.322837047 container start f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leavitt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:50:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 388 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.0 MiB/s wr, 326 op/s
Nov 29 02:50:28 np0005539550 podman[268038]: 2025-11-29 07:50:28.586529156 +0000 UTC m=+0.389617771 container attach f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leavitt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:50:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
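
The mon's _set_new_cache_sizes lines show its memory autotuner splitting a cache budget between the incremental osdmap cache, the full osdmap cache, and the RocksDB block cache. The numbers are self-consistent: 332 MiB + 332 MiB + 304 MiB fits inside the ~973 MiB cache_size budget, a quick check of which is:

    # Sanity check on the mon cache autotune line above.
    cache_size = 1020054731              # ~972.8 MiB budget
    inc_alloc = full_alloc = 348127232   # 332 MiB each
    kv_alloc = 318767104                 # 304 MiB for RocksDB
    assert inc_alloc + full_alloc + kv_alloc <= cache_size
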
Nov 29 02:50:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:28.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]: {
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]:        "osd_id": 0,
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]:        "type": "bluestore"
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]:    }
Nov 29 02:50:29 np0005539550 peaceful_leavitt[268054]: }
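
peaceful_leavitt is the follow-up probe: its JSON maps each OSD fsid to the cluster fsid, backing device, OSD id, and objectstore type, confirming that osd.0 is a BlueStore OSD on /dev/mapper/ceph_vg0-ceph_lv0. A sketch of turning that payload into an osd_id-to-device map; the payload shape is copied from the log lines above:

    import json

    payload = json.loads("""
    {
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "type": "bluestore"
        }
    }
    """)
    devices = {v["osd_id"]: v["device"] for v in payload.values()}
    assert devices[0] == "/dev/mapper/ceph_vg0-ceph_lv0"
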
Nov 29 02:50:29 np0005539550 systemd[1]: libpod-f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7.scope: Deactivated successfully.
Nov 29 02:50:29 np0005539550 podman[268075]: 2025-11-29 07:50:29.512679007 +0000 UTC m=+0.029289674 container died f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leavitt, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 02:50:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b580b291cea552cc4bb3288c025158f2466258aea6b9a402dbddeea52c9d8441-merged.mount: Deactivated successfully.
Nov 29 02:50:29 np0005539550 podman[268075]: 2025-11-29 07:50:29.57109482 +0000 UTC m=+0.087705457 container remove f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_leavitt, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:50:29 np0005539550 systemd[1]: libpod-conmon-f51ccf38f33ef5b63d27f444814c31edfd0037b6b2268401f4c366fe07d397a7.scope: Deactivated successfully.
Nov 29 02:50:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:50:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:50:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 89d7db16-43b4-4823-962a-a29c62af78e7 does not exist
Nov 29 02:50:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a6f31161-6709-4666-b22b-7342f6d93e12 does not exist
Nov 29 02:50:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 29c3dc83-114f-4948-a84b-eef9c6d93c05 does not exist
Nov 29 02:50:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:50:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
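
With the scan finished, the mgr persists the host's device inventory into the mon config-key store (the two config-key set commands above, audited from mgr.compute-0.pdhsqi). A sketch of reading one of those keys back from the host; `ceph config-key get` is a real command and the key name is copied from the log, but the stored value's schema is cephadm-internal and may differ between releases:

    import json, subprocess

    raw = subprocess.run(
        ["ceph", "config-key", "get",
         "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    inventory = json.loads(raw)   # cephadm-internal JSON blob
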
Nov 29 02:50:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:30 np0005539550 nova_compute[257631]: 2025-11-29 07:50:30.377 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:30 np0005539550 nova_compute[257631]: 2025-11-29 07:50:30.407 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 388 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.4 MiB/s wr, 349 op/s
Nov 29 02:50:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:30.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 388 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.3 MiB/s wr, 309 op/s
Nov 29 02:50:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:32.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:33.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 391 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 286 op/s
Nov 29 02:50:34 np0005539550 ovn_controller[148680]: 2025-11-29T07:50:34Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9b:3f:09 10.100.0.14
Nov 29 02:50:34 np0005539550 ovn_controller[148680]: 2025-11-29T07:50:34Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9b:3f:09 10.100.0.14
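
The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: with OVN's native DHCP, the controller answers the guest's DHCP request locally instead of relaying it to a DHCP server, here leasing 10.100.0.14 to fa:16:3e:9b:3f:09. A sketch of pulling the MAC and leased address out of those lines; the format is inferred from these two log entries:

    import re

    DHCP_RE = re.compile(
        r"\|pinctrl\([^)]*\)\|INFO\|(?P<msg>DHCPOFFER|DHCPACK) "
        r"(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2}) (?P<ip>\S+)"
    )
    line = ("2025-11-29T07:50:34Z|00005|pinctrl(ovn_pinctrl0)|INFO|"
            "DHCPACK fa:16:3e:9b:3f:09 10.100.0.14")
    m = DHCP_RE.search(line)
    assert m and m.group("ip") == "10.100.0.14"
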
Nov 29 02:50:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:34.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:35 np0005539550 nova_compute[257631]: 2025-11-29 07:50:35.382 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:35 np0005539550 nova_compute[257631]: 2025-11-29 07:50:35.409 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 356 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 3.0 MiB/s wr, 331 op/s
Nov 29 02:50:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:36.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 365 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.5 MiB/s wr, 338 op/s
Nov 29 02:50:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:38.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:40 np0005539550 nova_compute[257631]: 2025-11-29 07:50:40.385 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:40 np0005539550 nova_compute[257631]: 2025-11-29 07:50:40.411 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 345 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.9 MiB/s wr, 290 op/s
Nov 29 02:50:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:40.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:41.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 345 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.9 MiB/s wr, 258 op/s
Nov 29 02:50:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:50:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:42.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:50:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:43.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:44 np0005539550 podman[268178]: 2025-11-29 07:50:44.29688642 +0000 UTC m=+0.070438042 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 02:50:44 np0005539550 podman[268177]: 2025-11-29 07:50:44.299830837 +0000 UTC m=+0.073531623 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
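
The health_status=healthy events above are podman's periodic healthchecks for the EDPM containers: each container mounts a host-side script at /openstack/healthcheck and podman runs it as the configured test, publishing the result as an event. The same check can be triggered by hand; `podman healthcheck run` is a real subcommand (exit code 0 means healthy), and the container name is taken from the log:

    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")
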
Nov 29 02:50:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 357 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.8 MiB/s wr, 339 op/s
Nov 29 02:50:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:45 np0005539550 nova_compute[257631]: 2025-11-29 07:50:45.388 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:45 np0005539550 nova_compute[257631]: 2025-11-29 07:50:45.413 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:45.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 350 MiB data, 440 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.9 MiB/s wr, 334 op/s
Nov 29 02:50:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:47.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 306 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 8.2 MiB/s wr, 329 op/s
Nov 29 02:50:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:48.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:49.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:50 np0005539550 nova_compute[257631]: 2025-11-29 07:50:50.391 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:50 np0005539550 nova_compute[257631]: 2025-11-29 07:50:50.415 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 320 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 6.6 MiB/s wr, 252 op/s
Nov 29 02:50:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:51.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 320 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 915 KiB/s rd, 5.3 MiB/s wr, 191 op/s
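
The recurring pgmap lines from ceph-mgr are the cluster heartbeat in this log: 305 PGs all active+clean while client I/O churns a few MiB/s. A sketch of extracting the throughput figures from one of these lines, with the field layout inferred from the log itself:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; "
        r"(?P<rd>[\d.]+ [KMG]iB/s) rd, (?P<wr>[\d.]+ [KMG]iB/s) wr, "
        r"(?P<ops>\d+) op/s"
    )
    line = ("pgmap v1294: 305 pgs: 305 active+clean; 320 MiB data, "
            "420 MiB used, 21 GiB / 21 GiB avail; 915 KiB/s rd, "
            "5.3 MiB/s wr, 191 op/s")
    m = PGMAP_RE.search(line)
    assert m and m.group("ops") == "191"
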
Nov 29 02:50:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:53.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 326 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 258 op/s
Nov 29 02:50:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:55 np0005539550 nova_compute[257631]: 2025-11-29 07:50:55.395 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:55 np0005539550 nova_compute[257631]: 2025-11-29 07:50:55.418 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:50:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:55.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:56 np0005539550 podman[268247]: 2025-11-29 07:50:56.338166257 +0000 UTC m=+0.082380457 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:50:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 314 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 188 op/s
Nov 29 02:50:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:57.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:57 np0005539550 nova_compute[257631]: 2025-11-29 07:50:57.309 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:57 np0005539550 nova_compute[257631]: 2025-11-29 07:50:57.310 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:57 np0005539550 nova_compute[257631]: 2025-11-29 07:50:57.369 257641 DEBUG nova.objects.instance [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lazy-loading 'flavor' on Instance uuid 65be1187-6d11-4752-8ae6-ba034b77b9e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:50:57 np0005539550 nova_compute[257631]: 2025-11-29 07:50:57.464 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:57.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:50:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:50:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:50:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 281 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 193 op/s
Nov 29 02:50:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:50:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:50:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:50:59.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:50:59 np0005539550 nova_compute[257631]: 2025-11-29 07:50:59.097 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:50:59 np0005539550 nova_compute[257631]: 2025-11-29 07:50:59.097 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:50:59 np0005539550 nova_compute[257631]: 2025-11-29 07:50:59.097 257641 INFO nova.compute.manager [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Attaching volume e9658287-11e8-4a83-a391-d9089e1d2337 to /dev/vdb
Nov 29 02:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:50:59
Nov 29 02:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', '.mgr', 'default.rgw.log']
Nov 29 02:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
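
The balancer pass above ran in upmap mode with a 5% max-misplaced ceiling, evaluated all eleven pools, and prepared 0 of a possible 10 changes, meaning PG placement is already even. Its state can be inspected from the host; `ceph balancer status` is a real command, shown here as a minimal sketch:

    import subprocess

    print(subprocess.run(
        ["ceph", "balancer", "status"],
        capture_output=True, text=True, check=True,
    ).stdout)
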
Nov 29 02:50:59 np0005539550 nova_compute[257631]: 2025-11-29 07:50:59.605 257641 DEBUG os_brick.utils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 02:50:59 np0005539550 nova_compute[257631]: 2025-11-29 07:50:59.607 257641 INFO oslo.privsep.daemon [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpvcbhu12h/privsep.sock']
Nov 29 02:50:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:50:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:50:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:50:59.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.333 257641 INFO oslo.privsep.daemon [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.211 268278 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.216 268278 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.218 268278 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.218 268278 INFO oslo.privsep.daemon [-] privsep daemon running as pid 268278#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.337 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[75e11311-fd2d-4dcb-999c-b4a8acdae9eb]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.419 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.421 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.421 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.422 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.431 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.445 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.446 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[d30dfde7-2f7c-43fe-ae71-09d2681e8ba2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.447 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.455 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.455 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[62004c3f-f31f-460f-9bad-964bf25d2c8c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.457 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.466 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.467 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[887e04cc-cc14-42b5-989a-73d860e75989]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.468 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[30360394-614a-4e1a-8dff-d7e1f54292fa]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.469 257641 DEBUG oslo_concurrency.processutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.492 257641 DEBUG oslo_concurrency.processutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.496 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.497 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.497 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.497 257641 DEBUG os_brick.utils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] <== get_connector_properties: return (891ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
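
The dict in the trace above is the connector-properties blob Nova hands to Cinder when creating the attachment: initiator IQN, NVMe host NQN/ID, multipath flags, host and IP. In os-brick this comes from a module-level helper; a hedged sketch of calling it (keyword names may vary slightly by os-brick release):

    from os_brick.initiator import connector

    # Hedged sketch: gather the same connector facts logged above.
    props = connector.get_connector_properties(
        root_helper='sudo',       # privilege-escalation prefix
        my_ip='192.168.122.100',  # value seen in the log
        multipath=True,
        enforce_multipath=False,
    )
    print(props['initiator'], props.get('nqn'))
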
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.498 257641 DEBUG nova.virt.block_device [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Updating existing volume attachment record: 6789a472-0c8b-4120-be8e-baa2a31a12f5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 02:51:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 281 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1002 KiB/s wr, 137 op/s
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.985 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.986 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.986 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
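
The acquire/release pair around clean_compute_node_cache is oslo.concurrency's named-lock decorator at work; the resource tracker serializes everything that touches compute_resources this way. A minimal equivalent:

    from oslo_concurrency import lockutils

    # Callers serialize on the named lock; entering and leaving emits the
    # "Acquiring"/"acquired"/"released" DEBUG lines seen above.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # body runs with the lock held
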
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.987 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:51:00 np0005539550 nova_compute[257631]: 2025-11-29 07:51:00.987 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:01.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
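
The anonymous "HEAD / HTTP/1.0" hits that recur every couple of seconds from 192.168.122.100 and .102 are load-balancer health probes against radosgw's beast frontend; the 200 with zero bytes is the expected answer. The probe is trivial to reproduce (the port is an assumption: 7480 is the beast default, but the log does not show it):

    import http.client

    # Hedged sketch of the balancer's health probe: an anonymous HEAD /.
    conn = http.client.HTTPConnection('192.168.122.100', 7480, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200
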
Nov 29 02:51:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612442984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:01 np0005539550 nova_compute[257631]: 2025-11-29 07:51:01.432 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
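
update_available_resource sizes the RBD image backend by shelling out to `ceph df --format=json` with the openstack keyring; the 0.445 s runtime above is mostly the mon round-trip (the matching handle_command appears in the ceph-mon log). A hedged sketch of the same call and the JSON fields involved (key names per recent Ceph releases):

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(raw)

    # 'stats' carries cluster-wide totals; 'pools' lists per-pool usage.
    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print(f'{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB')
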
Nov 29 02:51:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:01.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 281 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 74 KiB/s wr, 98 op/s
Nov 29 02:51:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:03.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 281 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 74 KiB/s wr, 98 op/s
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.010 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.010 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:51:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.013 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.013 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.172 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.173 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4456MB free_disk=20.85165023803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.174 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.174 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:05.208 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.208 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:05.209 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
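
The metadata agent watches SB_Global through ovsdbapp's row-event machinery: when northd bumps nb_cfg (4 to 5 here) each agent waits a randomized delay, 7 s in this case, before acknowledging into its own Chassis_Private row, which is the DbSetCommand transaction visible at 07:51:12 below. A hedged sketch of such a watcher:

    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalWatcher(event.RowEvent):
        """Hedged sketch of the agent's SB_Global update handler."""

        def __init__(self):
            # Match UPDATE events on any SB_Global row (cf. the matched
            # event printed above).
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # The real agent sleeps the randomized delay first, then writes
            # external_ids['neutron:ovn-metadata-sb-cfg'] = str(row.nb_cfg)
            # into its Chassis_Private record.
            print('nb_cfg is now', row.nb_cfg)
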
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.274 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance bc395e50-9261-47b8-a618-4af9eb4933fa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.274 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 65be1187-6d11-4752-8ae6-ba034b77b9e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.274 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.274 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.399 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.431 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539550 ovn_controller[148680]: 2025-11-29T07:51:05Z|00035|binding|INFO|Releasing lport ba3350ae-f4f6-4a57-8f68-d95e027d64f5 from this chassis (sb_readonly=0)
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.721 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712596052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.863 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.871 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.929 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
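
Placement traffic is suppressed when nothing changed: the report client compares the freshly computed inventory against the cached provider tree and only PUTs on a difference. The comparison reduces to dict equality; an illustrative reduction:

    # Illustrative only: nova skips the placement update when the new
    # inventory equals the cached one (cf. "Inventory has not changed").
    cached = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0}}
    fresh = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0}}

    if fresh != cached:
        print('PUT the new inventory to placement')
    else:
        print('Inventory has not changed; skipping update')
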
Nov 29 02:51:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:05.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.979 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:51:05 np0005539550 nova_compute[257631]: 2025-11-29 07:51:05.979 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.056 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.057 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.058 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.067 257641 DEBUG nova.objects.instance [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lazy-loading 'flavor' on Instance uuid 65be1187-6d11-4752-8ae6-ba034b77b9e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.160 257641 DEBUG nova.virt.libvirt.driver [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Attempting to attach volume e9658287-11e8-4a83-a391-d9089e1d2337 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.163 257641 DEBUG nova.virt.libvirt.guest [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 02:51:06 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-e9658287-11e8-4a83-a391-d9089e1d2337">
Nov 29 02:51:06 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:  </source>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 02:51:06 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:  </auth>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:51:06 np0005539550 nova_compute[257631]:  <serial>e9658287-11e8-4a83-a391-d9089e1d2337</serial>
Nov 29 02:51:06 np0005539550 nova_compute[257631]: </disk>
Nov 29 02:51:06 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
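
The disk element above is handed to libvirt to hot-plug the RBD volume as vdb (note the driver's earlier warning: discard="unmap" is requested even though the virtio target bus will not pass trim through). With libvirt-python, the call underneath guest.attach_device is roughly the following (the domain name is illustrative, and the auth/secret element is omitted for brevity):

    import libvirt

    DISK_XML = """
    <disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-e9658287-11e8-4a83-a391-d9089e1d2337">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000008')  # illustrative name
    # AFFECT_LIVE hot-plugs into the running guest; AFFECT_CONFIG also
    # persists the device in the domain definition.
    dom.attachDeviceFlags(
        DISK_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
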
Nov 29 02:51:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 281 MiB data, 407 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 19 KiB/s wr, 32 op/s
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.974 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.975 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.975 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:51:06 np0005539550 nova_compute[257631]: 2025-11-29 07:51:06.976 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:51:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:07.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.578 257641 DEBUG nova.virt.libvirt.driver [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.579 257641 DEBUG nova.virt.libvirt.driver [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.579 257641 DEBUG nova.virt.libvirt.driver [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.579 257641 DEBUG nova.virt.libvirt.driver [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] No VIF found with MAC fa:16:3e:9b:3f:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 281 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 17 KiB/s wr, 23 op/s
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.764 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-bc395e50-9261-47b8-a618-4af9eb4933fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.765 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-bc395e50-9261-47b8-a618-4af9eb4933fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.765 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:51:08 np0005539550 nova_compute[257631]: 2025-11-29 07:51:08.765 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bc395e50-9261-47b8-a618-4af9eb4933fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:09.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:09.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:10 np0005539550 nova_compute[257631]: 2025-11-29 07:51:10.434 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 281 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 4.3 KiB/s rd, 16 KiB/s wr, 6 op/s
Nov 29 02:51:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:11.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:11.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:12 np0005539550 nova_compute[257631]: 2025-11-29 07:51:12.202 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:51:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:12.211 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 281 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 1.9 KiB/s rd, 5.2 KiB/s wr, 3 op/s
Nov 29 02:51:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:13.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:13.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 281 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 5.2 KiB/s wr, 3 op/s
Nov 29 02:51:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:15 np0005539550 podman[268410]: 2025-11-29 07:51:15.32464022 +0000 UTC m=+0.052998451 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:51:15 np0005539550 podman[268409]: 2025-11-29 07:51:15.334737486 +0000 UTC m=+0.064479744 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:51:15 np0005539550 nova_compute[257631]: 2025-11-29 07:51:15.435 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:15.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 281 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 6.8 KiB/s wr, 3 op/s
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.626 257641 DEBUG nova.virt.libvirt.driver [None req-4600290f-b930-4651-9ea6-8994c34133c2 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] volume_snapshot_create: create_info: {'snapshot_id': 'c7fd7688-2799-44a9-aaa7-3f7a5898cbc8', 'type': 'qcow2', 'new_file': 'new_file'} volume_snapshot_create /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3572#033[00m
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [None req-4600290f-b930-4651-9ea6-8994c34133c2 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Error occurred during volume_snapshot_create, sending error status to Cinder.: nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Traceback (most recent call last):
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]     self._volume_snapshot_create(context, instance, guest,
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]     raise exception.InternalError(msg)
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.630 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] #033[00m
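
This failure is the assisted-snapshot path: _volume_snapshot_create scans the guest's disk definitions for the one backing the Cinder volume and raises InternalError when nothing matches, plausibly because the volume just attached above is an RBD network disk rather than a file the qcow2 overlay mechanism can handle. A hedged reconstruction of the guard:

    # Hedged reconstruction of the failing check: locate the guest disk
    # backing the volume, or fail exactly as the traceback above shows.
    def find_disk_to_snapshot(guest_disks, volume_id):
        for disk in guest_disks:
            if volume_id in (disk.get('serial'), disk.get('source_name')):
                return disk
        raise RuntimeError('Found no disk to snapshot.')  # nova: InternalError
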
Nov 29 02:51:16 np0005539550 nova_compute[257631]: 2025-11-29 07:51:16.710 257641 DEBUG oslo_concurrency.lockutils [None req-4dd40905-d36a-4c08-a76b-2df69f2ddfaf 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 17.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:17.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.037 257641 DEBUG nova.virt.libvirt.driver [None req-15ca4962-ee40-4330-bd00-074dc370f7f1 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] volume_snapshot_delete: delete_info: {'volume_id': 'e9658287-11e8-4a83-a391-d9089e1d2337'} _volume_snapshot_delete /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:3673#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [None req-15ca4962-ee40-4330-bd00-074dc370f7f1 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Error occurred during volume_snapshot_delete, sending error status to Cinder.: KeyError: 'type'
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]     self._volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0]     if delete_info['type'] != 'qcow2':
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] KeyError: 'type'
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.038 257641 ERROR nova.virt.libvirt.driver [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] #033[00m
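
The follow-up delete dies even earlier: Cinder sent delete_info containing only volume_id, and the driver indexes delete_info['type'] directly, so the KeyError fires before any validation message. A defensive rewrite is a one-line change:

    delete_info = {'volume_id': 'e9658287-11e8-4a83-a391-d9089e1d2337'}

    # Direct indexing reproduces the traceback above:
    #   if delete_info['type'] != 'qcow2':   ->  KeyError: 'type'
    # Using .get() keeps the failure explicit and readable instead:
    snap_type = delete_info.get('type')
    if snap_type != 'qcow2':
        print(f'rejecting delete_info without usable type: {snap_type!r}')
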
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver [None req-4600290f-b930-4651-9ea6-8994c34133c2 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot c7fd7688-2799-44a9-aaa7-3f7a5898cbc8 could not be found.
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     self._volume_snapshot_create(context, instance, guest,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     raise exception.InternalError(msg)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot c7fd7688-2799-44a9-aaa7-3f7a5898cbc8 could not be found. (HTTP 404) (Request-ID: req-9e72b631-5f9e-414f-bd32-8015757893c6)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot c7fd7688-2799-44a9-aaa7-3f7a5898cbc8 could not be found.
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.047 257641 ERROR nova.virt.libvirt.driver #033[00m
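
The chained traceback shows nova's cinder wrapper translating the client's 404 (cinderclient.exceptions.NotFound) into nova.exception.SnapshotNotFound while keeping the original traceback via with_traceback(); that is why both exception types appear above for the same request. The pattern in isolation:

    import sys

    class SnapshotNotFound(Exception):
        pass

    def translated(call, snapshot_id):
        """Hedged sketch of the _reraise pattern seen in nova/volume/cinder.py."""
        try:
            return call()
        except KeyError:  # stand-in for cinderclient's NotFound
            exc = SnapshotNotFound(f'Snapshot {snapshot_id} could not be found.')
            # Re-raise as the local type but keep the original traceback,
            # producing the chained output logged above.
            raise exc.with_traceback(sys.exc_info()[2])
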
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server [None req-4600290f-b930-4651-9ea6-8994c34133c2 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] Exception during message handling: nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4410, in volume_snapshot_create
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_create(context, instance, volume_id,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3597, in volume_snapshot_create
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3590, in volume_snapshot_create
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     self._volume_snapshot_create(context, instance, guest,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3477, in _volume_snapshot_create
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server     raise exception.InternalError(msg)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server nova.exception.InternalError: Found no disk to snapshot.
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.051 257641 ERROR oslo_messaging.rpc.server #033[00m
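
The oslo_utils/excutils.py frames above (__exit__ followed by force_reraise) are oslo's save_and_reraise_exception context manager: nova runs cleanup, here pushing an error status back to cinder, and then re-raises the original InternalError. A minimal usage sketch, with do_snapshot and update_status as hypothetical stand-ins:

    # Sketch of the oslo_utils pattern behind the excutils.py frames above:
    # run cleanup on failure, then re-raise the original exception.
    from oslo_utils import excutils

    def volume_snapshot_create(do_snapshot, update_status):
        try:
            do_snapshot()
        except Exception:
            with excutils.save_and_reraise_exception():
                # Runs before force_reraise(). If this cleanup itself
                # raises, oslo logs the original exception and lets the
                # new one propagate; otherwise the original is re-raised.
                update_status('error')
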
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver [None req-15ca4962-ee40-4330-bd00-074dc370f7f1 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] Failed to send updated snapshot status to volume service.: nova.exception.SnapshotNotFound: Snapshot None could not be found.
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     self._volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     if delete_info['type'] != 'qcow2':
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver KeyError: 'type'
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver cinderclient.exceptions.NotFound: Snapshot None could not be found. (HTTP 404) (Request-ID: req-a3e802a5-c4d9-4953-94c3-c04a5e109300)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver 
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3412, in _volume_snapshot_update_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     self._volume_api.update_snapshot_status(context,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 397, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     res = method(self, ctx, *args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 468, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id))
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 488, in _reraise
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     raise desired_exc.with_traceback(sys.exc_info()[2])
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 466, in wrapper
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/volume/cinder.py", line 761, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     vs.update_snapshot_status(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 225, in update_snapshot_status
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     return self._action('os-update_snapshot_status',
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/v3/volume_snapshots.py", line 221, in _action
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     resp, body = self.api.client.post(url, body=body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 223, in post
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 211, in _cs_request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     return self.request(url, method, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/cinderclient/client.py", line 197, in request
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver nova.exception.SnapshotNotFound: Snapshot None could not be found.
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.450 257641 ERROR nova.virt.libvirt.driver #033[00m
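
The two "During handling of the above exception, another exception occurred" separators above are Python's implicit exception chaining: the KeyError from delete_info['type'] fired first, the error path then asked cinder to update a snapshot whose id was None, and the resulting 404 was translated to SnapshotNotFound with the KeyError preserved as __context__. A self-contained demonstration of the same chaining; calling demo() raises LookupError with the KeyError attached, and the interpreter prints both tracebacks:

    # Implicit exception chaining, as in the log above: an exception
    # raised while another is being handled keeps the first one as
    # __context__, and the traceback prints both, separated by
    # "During handling of the above exception, another exception occurred".
    def demo():
        try:
            delete_info = {}        # request arrived without a 'type' key
            delete_info['type']     # -> KeyError: 'type' (driver.py:3676)
        except KeyError:
            # Raising here while the KeyError is active chains the two.
            raise LookupError("Snapshot None could not be found.")
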
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server [None req-15ca4962-ee40-4330-bd00-074dc370f7f1 c9ba83c37c7346e8813378b9eed7b3c2 e3b6aa0314c4478c9bec34cb13079b54 - - default default] Exception during message handling: KeyError: 'type'
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     return func(*args, **kwargs)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 4422, in volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     self.driver.volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3853, in volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     self._volume_snapshot_update_status(
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     raise self.value
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3846, in volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     self._volume_snapshot_delete(context, instance, volume_id,
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3676, in _volume_snapshot_delete
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server     if delete_info['type'] != 'qcow2':
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server KeyError: 'type'
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.453 257641 ERROR oslo_messaging.rpc.server #033[00m
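
Both delete tracebacks share one root cause: driver.py:3676 indexes delete_info['type'] unconditionally, so a request body without that key dies with a bare KeyError instead of a validation error. A hedged defensive variant; InvalidInput and check_delete_info are illustrative, not necessarily nova's actual fix:

    # Hedged sketch: validate delete_info before branching on it, instead
    # of the bare delete_info['type'] lookup that produced the KeyError.
    class InvalidInput(Exception):
        pass

    def check_delete_info(delete_info):
        dtype = delete_info.get('type')
        if dtype != 'qcow2':
            raise InvalidInput(
                "volume_snapshot_delete: unsupported or missing "
                "delete_info type %r (expected 'qcow2')" % dtype)
        return dtype
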
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.546 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.572 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-bc395e50-9261-47b8-a618-4af9eb4933fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.572 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.573 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.573 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.574 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.574 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.574 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.574 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:51:17 np0005539550 nova_compute[257631]: 2025-11-29 07:51:17.575 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:51:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:17.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 281 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 5.0 KiB/s rd, 5.2 KiB/s wr, 6 op/s
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.915 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "bc395e50-9261-47b8-a618-4af9eb4933fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.915 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "bc395e50-9261-47b8-a618-4af9eb4933fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.915 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "bc395e50-9261-47b8-a618-4af9eb4933fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.916 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "bc395e50-9261-47b8-a618-4af9eb4933fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.916 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "bc395e50-9261-47b8-a618-4af9eb4933fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.917 257641 INFO nova.compute.manager [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Terminating instance#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.918 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "refresh_cache-bc395e50-9261-47b8-a618-4af9eb4933fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.918 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquired lock "refresh_cache-bc395e50-9261-47b8-a618-4af9eb4933fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:51:18 np0005539550 nova_compute[257631]: 2025-11-29 07:51:18.918 257641 DEBUG nova.network.neutron [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:51:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:18.923 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:18.924 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:18.924 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:19.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006512415957100316 of space, bias 1.0, pg target 1.953724787130095 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:51:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
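
The pg_autoscaler numbers above are internally consistent: each pool's "pg target" equals its usage fraction times 300 exactly (0.006512415957100316 * 300 = 1.953724787130095 for 'vms'), which fits 3 OSDs at the default mon_target_pg_per_osd of 100, with bias as a multiplier and the result quantized to a power of two no lower than the pool's floor. A back-of-envelope sketch under those assumptions; the bias-4 pools deviate slightly, and the real autoscaler uses a nearest-power-of-two rule with hysteresis:

    # Back-of-envelope reproduction of the pg_autoscaler lines above,
    # assuming 3 OSDs and the default mon_target_pg_per_osd = 100.
    def pg_target(usage_fraction, bias=1.0, osds=3, target_per_osd=100,
                  pg_floor=32):
        raw = usage_fraction * osds * target_per_osd * bias
        # Round down to a power of two (the real autoscaler picks the
        # nearest power of two and only resizes on a large enough change);
        # never go below the pool's floor.
        pgs = pg_floor
        while pgs * 2 <= raw:
            pgs *= 2
        return raw, pgs

    print(pg_target(0.006512415957100316))  # (1.953724787130095, 32) = 'vms'
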
Nov 29 02:51:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:20 np0005539550 nova_compute[257631]: 2025-11-29 07:51:20.437 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 244 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 6.1 KiB/s rd, 12 KiB/s wr, 11 op/s
Nov 29 02:51:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:21.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:21 np0005539550 nova_compute[257631]: 2025-11-29 07:51:21.833 257641 DEBUG oslo_concurrency.lockutils [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:21 np0005539550 nova_compute[257631]: 2025-11-29 07:51:21.833 257641 DEBUG oslo_concurrency.lockutils [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:21 np0005539550 nova_compute[257631]: 2025-11-29 07:51:21.882 257641 INFO nova.compute.manager [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Detaching volume e9658287-11e8-4a83-a391-d9089e1d2337#033[00m
Nov 29 02:51:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:21.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.047 257641 INFO nova.virt.block_device [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Attempting to driver detach volume e9658287-11e8-4a83-a391-d9089e1d2337 from mountpoint /dev/vdb#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.056 257641 DEBUG nova.virt.libvirt.driver [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Attempting to detach device vdb from instance 65be1187-6d11-4752-8ae6-ba034b77b9e0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.057 257641 DEBUG nova.virt.libvirt.guest [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-e9658287-11e8-4a83-a391-d9089e1d2337">
Nov 29 02:51:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  </source>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <serial>e9658287-11e8-4a83-a391-d9089e1d2337</serial>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]: </disk>
Nov 29 02:51:22 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.066 257641 INFO nova.virt.libvirt.driver [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Successfully detached device vdb from instance 65be1187-6d11-4752-8ae6-ba034b77b9e0 from the persistent domain config.#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.067 257641 DEBUG nova.virt.libvirt.driver [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 65be1187-6d11-4752-8ae6-ba034b77b9e0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.067 257641 DEBUG nova.virt.libvirt.guest [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-e9658287-11e8-4a83-a391-d9089e1d2337">
Nov 29 02:51:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  </source>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <serial>e9658287-11e8-4a83-a391-d9089e1d2337</serial>
Nov 29 02:51:22 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:51:22 np0005539550 nova_compute[257631]: </disk>
Nov 29 02:51:22 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
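
Nova detaches the disk twice, first from the persistent domain definition and then from the live domain, and afterwards waits for libvirt's asynchronous device-removed event (visible a few lines below). With the libvirt Python bindings the two steps map roughly onto detachDeviceFlags; dom and disk_xml here are assumed inputs, with disk_xml being the <disk> element printed in the log:

    # Rough equivalent of the two detach steps above using the libvirt
    # Python bindings (dom is an already-looked-up virDomain).
    import libvirt

    def detach_disk(dom: libvirt.virDomain, disk_xml: str) -> None:
        # 1) Remove from the persistent config so the disk stays detached
        #    across guest restarts.
        dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
        # 2) Hot-unplug from the running guest; completion is signalled
        #    asynchronously via VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED.
        dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
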
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.215 257641 DEBUG nova.network.neutron [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.283 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764402682.2832763, 65be1187-6d11-4752-8ae6-ba034b77b9e0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.285 257641 DEBUG nova.virt.libvirt.driver [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 65be1187-6d11-4752-8ae6-ba034b77b9e0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.287 257641 INFO nova.virt.libvirt.driver [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Successfully detached device vdb from instance 65be1187-6d11-4752-8ae6-ba034b77b9e0 from the live domain config.#033[00m
Nov 29 02:51:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 244 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 6.0 KiB/s rd, 11 KiB/s wr, 10 op/s
Nov 29 02:51:22 np0005539550 nova_compute[257631]: 2025-11-29 07:51:22.761 257641 DEBUG nova.network.neutron [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:23.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:24.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 42 op/s
Nov 29 02:51:24 np0005539550 nova_compute[257631]: 2025-11-29 07:51:24.651 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Releasing lock "refresh_cache-bc395e50-9261-47b8-a618-4af9eb4933fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:51:24 np0005539550 nova_compute[257631]: 2025-11-29 07:51:24.652 257641 DEBUG nova.compute.manager [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:51:24 np0005539550 nova_compute[257631]: 2025-11-29 07:51:24.674 257641 DEBUG nova.objects.instance [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lazy-loading 'flavor' on Instance uuid 65be1187-6d11-4752-8ae6-ba034b77b9e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:24 np0005539550 nova_compute[257631]: 2025-11-29 07:51:24.764 257641 DEBUG oslo_concurrency.lockutils [None req-514ce9b7-6682-48c6-a885-70ca18ef25b8 87a8af2d3a924844ad3b9279035f79d6 93de83185bb14386b1526c79641f9a31 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:24 np0005539550 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 29 02:51:24 np0005539550 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000001.scope: Consumed 19.082s CPU time.
Nov 29 02:51:24 np0005539550 systemd-machined[216673]: Machine qemu-2-instance-00000001 terminated.
Nov 29 02:51:24 np0005539550 nova_compute[257631]: 2025-11-29 07:51:24.877 257641 INFO nova.virt.libvirt.driver [-] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Instance destroyed successfully.#033[00m
Nov 29 02:51:24 np0005539550 nova_compute[257631]: 2025-11-29 07:51:24.878 257641 DEBUG nova.objects.instance [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lazy-loading 'resources' on Instance uuid bc395e50-9261-47b8-a618-4af9eb4933fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:25 np0005539550 nova_compute[257631]: 2025-11-29 07:51:25.438 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:51:25 np0005539550 nova_compute[257631]: 2025-11-29 07:51:25.439 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:25 np0005539550 nova_compute[257631]: 2025-11-29 07:51:25.439 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 29 02:51:25 np0005539550 nova_compute[257631]: 2025-11-29 07:51:25.439 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:51:25 np0005539550 nova_compute[257631]: 2025-11-29 07:51:25.440 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 02:51:25 np0005539550 nova_compute[257631]: 2025-11-29 07:51:25.441 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
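
The IDLE/ACTIVE transitions above are python-ovs's client-side inactivity probe: after roughly 5 s without traffic on the OVSDB session ("idle 5001 ms" in the log) the library sends an echo and tears the connection down if no reply arrives within another interval. A compressed, illustrative sketch of that state machine; the interval and method names are assumptions, not the python-ovs API:

    # Compressed sketch of the OVS reconnect inactivity probe whose
    # transitions ("entering IDLE", "entering ACTIVE") appear above.
    import time

    class Probe:
        def __init__(self, interval=5.0):
            self.interval = interval
            self.last_rx = time.monotonic()
            self.state = 'ACTIVE'

        def rx(self):                       # any message received
            self.last_rx = time.monotonic()
            self.state = 'ACTIVE'

        def tick(self, send_echo, disconnect):
            idle = time.monotonic() - self.last_rx
            if self.state == 'ACTIVE' and idle >= self.interval:
                send_echo()                 # "sending inactivity probe"
                self.state = 'IDLE'
            elif self.state == 'IDLE' and idle >= 2 * self.interval:
                disconnect()                # no echo reply: reconnect
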
Nov 29 02:51:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:26.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:26 np0005539550 nova_compute[257631]: 2025-11-29 07:51:26.369 257641 INFO nova.virt.libvirt.driver [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Deleting instance files /var/lib/nova/instances/bc395e50-9261-47b8-a618-4af9eb4933fa_del#033[00m
Nov 29 02:51:26 np0005539550 nova_compute[257631]: 2025-11-29 07:51:26.370 257641 INFO nova.virt.libvirt.driver [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Deletion of /var/lib/nova/instances/bc395e50-9261-47b8-a618-4af9eb4933fa_del complete#033[00m
Nov 29 02:51:26 np0005539550 nova_compute[257631]: 2025-11-29 07:51:26.444 257641 INFO nova.compute.manager [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Took 1.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:51:26 np0005539550 nova_compute[257631]: 2025-11-29 07:51:26.445 257641 DEBUG oslo.service.loopingcall [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:51:26 np0005539550 nova_compute[257631]: 2025-11-29 07:51:26.445 257641 DEBUG nova.compute.manager [-] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:51:26 np0005539550 nova_compute[257631]: 2025-11-29 07:51:26.445 257641 DEBUG nova.network.neutron [-] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:51:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 47 op/s
Nov 29 02:51:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:27.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.299 257641 DEBUG nova.network.neutron [-] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.330 257641 DEBUG nova.network.neutron [-] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:27 np0005539550 podman[268527]: 2025-11-29 07:51:27.343084996 +0000 UTC m=+0.084788390 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.368 257641 INFO nova.compute.manager [-] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Took 0.92 seconds to deallocate network for instance.#033[00m
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.423 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.424 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.505 257641 DEBUG oslo_concurrency.processutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2189902376' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.965 257641 DEBUG oslo_concurrency.processutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
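
The ceph df call above is issued through oslo_concurrency.processutils; a minimal sketch of the same call, reusing the --id and --conf values from the log (the JSON field name reflects recent Ceph releases):

    # Sketch of the logged subprocess call, using the real
    # oslo_concurrency.processutils.execute() helper, which returns
    # (stdout, stderr) as strings.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"])  # cluster capacity in bytes
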
Nov 29 02:51:27 np0005539550 nova_compute[257631]: 2025-11-29 07:51:27.971 257641 DEBUG nova.compute.provider_tree [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:28.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:28 np0005539550 nova_compute[257631]: 2025-11-29 07:51:28.052 257641 DEBUG nova.scheduler.client.report [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
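
Placement derives schedulable capacity from such an inventory record as (total - reserved) * allocation_ratio, so the values logged above yield 32 VCPU, 7168 MB of RAM and about 17.1 GB of disk. Worked out directly:

    # Capacity implied by the inventory record above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB ~17.1
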
Nov 29 02:51:28 np0005539550 nova_compute[257631]: 2025-11-29 07:51:28.110 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:28 np0005539550 nova_compute[257631]: 2025-11-29 07:51:28.178 257641 INFO nova.scheduler.client.report [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Deleted allocations for instance bc395e50-9261-47b8-a618-4af9eb4933fa#033[00m
Nov 29 02:51:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:28 np0005539550 nova_compute[257631]: 2025-11-29 07:51:28.281 257641 DEBUG oslo_concurrency.lockutils [None req-527ceff0-1e84-4708-be26-79e3b88dd992 01739124bee74c899af6384f8ec2d427 3c7cd563ba394223a76bd2579800406c - - default default] Lock "bc395e50-9261-47b8-a618-4af9eb4933fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
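
The Acquiring/acquired/released triplets throughout this log come from oslo.concurrency's lockutils helpers; a minimal sketch of the pattern, with the lock name taken from the log and an illustrative function body:

    # Sketch of the locking pattern behind the lock triplets above
    # (oslo.concurrency); the decorated function body is illustrative.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_usage():
        pass  # resource-tracker bookkeeping runs under the lock

    update_usage()

    # Equivalent context-manager form of the same internal lock:
    with lockutils.lock("compute_resources"):
        pass  # critical section
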
Nov 29 02:51:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 156 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 12 KiB/s wr, 70 op/s
Nov 29 02:51:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:29.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.083 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.083 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.083 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.084 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.084 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.085 257641 INFO nova.compute.manager [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Terminating instance#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.086 257641 DEBUG nova.compute.manager [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:51:29 np0005539550 kernel: tapdde08ae7-bc (unregistering): left promiscuous mode
Nov 29 02:51:29 np0005539550 NetworkManager[49039]: <info>  [1764402689.2072] device (tapdde08ae7-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:51:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:51:29Z|00036|binding|INFO|Releasing lport dde08ae7-bcb7-43ab-8adf-8ee4218091e1 from this chassis (sb_readonly=0)
Nov 29 02:51:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:51:29Z|00037|binding|INFO|Setting lport dde08ae7-bcb7-43ab-8adf-8ee4218091e1 down in Southbound
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.253 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:51:29Z|00038|binding|INFO|Removing iface tapdde08ae7-bc ovn-installed in OVS
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.264 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:3f:09 10.100.0.14'], port_security=['fa:16:3e:9b:3f:09 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '65be1187-6d11-4752-8ae6-ba034b77b9e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'fa00908198f941b5afb99ba561a959d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '567f4bd8-4596-4085-aafb-88186e26f584', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea77c13d-06ba-4fc9-9d28-195ec18839d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=dde08ae7-bcb7-43ab-8adf-8ee4218091e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.266 158978 INFO neutron.agent.ovn.metadata.agent [-] Port dde08ae7-bcb7-43ab-8adf-8ee4218091e1 in datapath 17051a29-59e4-4f9d-a9c3-0a59a97ce9fb unbound from our chassis#033[00m
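
The PortBindingUpdatedEvent matched above is an ovsdbapp row event that the metadata agent registers against the Southbound Port_Binding table. A sketch of the pattern; only the base class and the (events, table, conditions) constructor come from ovsdbapp, the run() body is made up for illustration:

    # Illustrative ovsdbapp row event in the style matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to Southbound Port_Binding rows.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # React here, e.g. tear down metadata resources once
            # row.chassis no longer includes this chassis.
            print("Port_Binding updated:", row.logical_port)
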
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.267 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 17051a29-59e4-4f9d-a9c3-0a59a97ce9fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.268 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7b525e46-eb73-4eeb-b4cd-0681b132ef92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.269 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb namespace which is not needed anymore#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:29 np0005539550 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 29 02:51:29 np0005539550 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Consumed 17.117s CPU time.
Nov 29 02:51:29 np0005539550 systemd-machined[216673]: Machine qemu-4-instance-00000008 terminated.
Nov 29 02:51:29 np0005539550 neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb[267810]: [NOTICE]   (267820) : haproxy version is 2.8.14-c23fe91
Nov 29 02:51:29 np0005539550 neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb[267810]: [NOTICE]   (267820) : path to executable is /usr/sbin/haproxy
Nov 29 02:51:29 np0005539550 neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb[267810]: [WARNING]  (267820) : Exiting Master process...
Nov 29 02:51:29 np0005539550 neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb[267810]: [ALERT]    (267820) : Current worker (267825) exited with code 143 (Terminated)
Nov 29 02:51:29 np0005539550 neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb[267810]: [WARNING]  (267820) : All workers exited. Exiting... (0)
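
Exit code 143 here is the usual 128-plus-signal encoding: the haproxy worker ended on SIGTERM (15) during the orderly container stop rather than crashing:

    # Exit code 143 decoded: a signal-terminated process is reported as
    # 128 + signal number, and SIGTERM is 15.
    import signal

    assert 128 + signal.SIGTERM == 143
    print(signal.Signals(143 - 128).name)  # -> SIGTERM
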
Nov 29 02:51:29 np0005539550 systemd[1]: libpod-b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9.scope: Deactivated successfully.
Nov 29 02:51:29 np0005539550 podman[268601]: 2025-11-29 07:51:29.408909958 +0000 UTC m=+0.046697875 container died b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:51:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9-userdata-shm.mount: Deactivated successfully.
Nov 29 02:51:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a823b57d05fff0585f370f7e52af2031eacd4858acb8e53c4d6e7eea8ba37645-merged.mount: Deactivated successfully.
Nov 29 02:51:29 np0005539550 podman[268601]: 2025-11-29 07:51:29.451050651 +0000 UTC m=+0.088838568 container cleanup b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:51:29 np0005539550 systemd[1]: libpod-conmon-b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9.scope: Deactivated successfully.
Nov 29 02:51:29 np0005539550 podman[268632]: 2025-11-29 07:51:29.513723046 +0000 UTC m=+0.041718253 container remove b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.518 257641 INFO nova.virt.libvirt.driver [-] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Instance destroyed successfully.#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.518 257641 DEBUG nova.objects.instance [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Lazy-loading 'resources' on Instance uuid 65be1187-6d11-4752-8ae6-ba034b77b9e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.520 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[04e317b9-d919-4808-bf12-4ed44997814c]: (4, ('Sat Nov 29 07:51:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb (b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9)\nb6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9\nSat Nov 29 07:51:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb (b6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9)\nb6cebe2a1513ab7ddf1b750c94f31690c80c843fa48481bc8bae0e195d951bd9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.521 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ed9ec0-b8c0-49db-8de6-921b3d4a424b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.522 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17051a29-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:29 np0005539550 kernel: tap17051a29-50: left promiscuous mode
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.544 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8bb423-ca1d-470e-975d-8c03adc2c13a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.544 257641 DEBUG nova.virt.libvirt.vif [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAssistedSnapshotsTest-server-728957882',display_name='tempest-VolumesAssistedSnapshotsTest-server-728957882',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesassistedsnapshotstest-server-728957882',id=8,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFajVSCNLyRURGjSdh9dSx2vm1nfm9aKB9WShiD1p7eq741JgIFSiA9yappXzmPZBlGBPKThmOJN6lq/caey+DJjkk8oMcDJQXVbdCguTwMWdn1UU63ak/eDLRt0FXasCA==',key_name='tempest-keypair-424144542',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='fa00908198f941b5afb99ba561a959d1',ramdisk_id='',reservation_id='r-4p41uz8h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAssistedSnapshotsTest-1257886079',owner_user_name='tempest-VolumesAssistedSnapshotsTest-1257886079-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:50:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='24b6be0dcb9a47aaa2356f6bcab4ef1c',uuid=65be1187-6d11-4752-8ae6-ba034b77b9e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dde08ae7-bcb7-43ab-8adf-8ee4218091e1", "address": "fa:16:3e:9b:3f:09", "network": {"id": "17051a29-59e4-4f9d-a9c3-0a59a97ce9fb", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1243968495-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa00908198f941b5afb99ba561a959d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdde08ae7-bc", "ovs_interfaceid": "dde08ae7-bcb7-43ab-8adf-8ee4218091e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.544 257641 DEBUG nova.network.os_vif_util [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Converting VIF {"id": "dde08ae7-bcb7-43ab-8adf-8ee4218091e1", "address": "fa:16:3e:9b:3f:09", "network": {"id": "17051a29-59e4-4f9d-a9c3-0a59a97ce9fb", "bridge": "br-int", "label": "tempest-VolumesAssistedSnapshotsTest-1243968495-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "fa00908198f941b5afb99ba561a959d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdde08ae7-bc", "ovs_interfaceid": "dde08ae7-bcb7-43ab-8adf-8ee4218091e1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.545 257641 DEBUG nova.network.os_vif_util [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9b:3f:09,bridge_name='br-int',has_traffic_filtering=True,id=dde08ae7-bcb7-43ab-8adf-8ee4218091e1,network=Network(17051a29-59e4-4f9d-a9c3-0a59a97ce9fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdde08ae7-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.545 257641 DEBUG os_vif [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:3f:09,bridge_name='br-int',has_traffic_filtering=True,id=dde08ae7-bcb7-43ab-8adf-8ee4218091e1,network=Network(17051a29-59e4-4f9d-a9c3-0a59a97ce9fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdde08ae7-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.548 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.548 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdde08ae7-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.551 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.553 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.555 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.558 257641 INFO os_vif [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9b:3f:09,bridge_name='br-int',has_traffic_filtering=True,id=dde08ae7-bcb7-43ab-8adf-8ee4218091e1,network=Network(17051a29-59e4-4f9d-a9c3-0a59a97ce9fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdde08ae7-bc')#033[00m
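
The DelPortCommand transactions above have the same effect as the classic ovs-vsctl one-liner; a sketch using the port and bridge names from the log:

    # CLI equivalent of the DelPortCommand transaction logged above.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapdde08ae7-bc"],
        check=True)
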
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.565 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0d9f1b32-c215-4d3a-bb2e-bb94e928442b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.568 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fe3ef5-dc5c-4787-a795-d74836f31db8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.583 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9dfe7d-278f-4eca-b25a-9d7ebb5b1279]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 566938, 'reachable_time': 21019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268674, 'error': None, 'target': 'ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:51:29 np0005539550 systemd[1]: run-netns-ovnmeta\x2d17051a29\x2d59e4\x2d4f9d\x2da9c3\x2d0a59a97ce9fb.mount: Deactivated successfully.
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.592 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:51:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:51:29.593 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[494ace70-9c54-4ec4-910d-14cd7e736161]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
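
remove_netns above deletes the kernel network namespace through neutron's privsep daemon; the equivalent manual operation is "ip netns delete", sketched here with the namespace name from the log:

    # Manual equivalent of the namespace removal logged above; neutron
    # performs this via its privsep daemon rather than shelling out.
    import subprocess

    subprocess.run(
        ["ip", "netns", "delete",
         "ovnmeta-17051a29-59e4-4f9d-a9c3-0a59a97ce9fb"],
        check=True)
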
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.873 257641 DEBUG nova.compute.manager [req-c12e31f8-2efc-42c6-bc44-1338674fab24 req-9fb9341c-0b1a-4d73-8474-caafeab3e561 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received event network-vif-unplugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.874 257641 DEBUG oslo_concurrency.lockutils [req-c12e31f8-2efc-42c6-bc44-1338674fab24 req-9fb9341c-0b1a-4d73-8474-caafeab3e561 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.874 257641 DEBUG oslo_concurrency.lockutils [req-c12e31f8-2efc-42c6-bc44-1338674fab24 req-9fb9341c-0b1a-4d73-8474-caafeab3e561 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.874 257641 DEBUG oslo_concurrency.lockutils [req-c12e31f8-2efc-42c6-bc44-1338674fab24 req-9fb9341c-0b1a-4d73-8474-caafeab3e561 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.875 257641 DEBUG nova.compute.manager [req-c12e31f8-2efc-42c6-bc44-1338674fab24 req-9fb9341c-0b1a-4d73-8474-caafeab3e561 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] No waiting events found dispatching network-vif-unplugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:51:29 np0005539550 nova_compute[257631]: 2025-11-29 07:51:29.875 257641 DEBUG nova.compute.manager [req-c12e31f8-2efc-42c6-bc44-1338674fab24 req-9fb9341c-0b1a-4d73-8474-caafeab3e561 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received event network-vif-unplugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
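
Events like network-vif-unplugged reach nova through its os-server-external-events API, which neutron calls when a port goes down. A hedged sketch of the request shape; the event name, endpoint path and field names are nova's, while the URL and token below are placeholders:

    # Shape of the notification behind "Received event network-vif-unplugged-...".
    import json
    import urllib.request

    body = {"events": [{
        "name": "network-vif-unplugged",
        "server_uuid": "65be1187-6d11-4752-8ae6-ba034b77b9e0",  # instance uuid from the log
        "tag": "dde08ae7-bcb7-43ab-8adf-8ee4218091e1",          # port id from the log
    }]}
    req = urllib.request.Request(
        "http://nova-api.example.com/v2.1/os-server-external-events",  # placeholder endpoint
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "X-Auth-Token": "REDACTED"},  # placeholder credential
        method="POST")
    # urllib.request.urlopen(req)  # not executed here; sketch only
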
Nov 29 02:51:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:30.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:30 np0005539550 nova_compute[257631]: 2025-11-29 07:51:30.050 257641 INFO nova.virt.libvirt.driver [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Deleting instance files /var/lib/nova/instances/65be1187-6d11-4752-8ae6-ba034b77b9e0_del#033[00m
Nov 29 02:51:30 np0005539550 nova_compute[257631]: 2025-11-29 07:51:30.051 257641 INFO nova.virt.libvirt.driver [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Deletion of /var/lib/nova/instances/65be1187-6d11-4752-8ae6-ba034b77b9e0_del complete#033[00m
Nov 29 02:51:30 np0005539550 nova_compute[257631]: 2025-11-29 07:51:30.156 257641 INFO nova.compute.manager [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:51:30 np0005539550 nova_compute[257631]: 2025-11-29 07:51:30.157 257641 DEBUG oslo.service.loopingcall [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:51:30 np0005539550 nova_compute[257631]: 2025-11-29 07:51:30.157 257641 DEBUG nova.compute.manager [-] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:51:30 np0005539550 nova_compute[257631]: 2025-11-29 07:51:30.158 257641 DEBUG nova.network.neutron [-] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:51:30 np0005539550 nova_compute[257631]: 2025-11-29 07:51:30.442 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 93 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 12 KiB/s wr, 100 op/s
Nov 29 02:51:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:51:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:51:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:51:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:51:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:51:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f79e6344-5fe5-445f-bb00-a5d43a52952c does not exist
Nov 29 02:51:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 53f4ef0a-24c8-4317-b908-43c5fc69baf3 does not exist
Nov 29 02:51:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 146dc486-aa3d-4312-933f-5e64a1b95a51 does not exist
Nov 29 02:51:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:51:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:51:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:51:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:51:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:51:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:51:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:32.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.078 257641 DEBUG nova.network.neutron [-] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:51:32 np0005539550 podman[268953]: 2025-11-29 07:51:31.988408877 +0000 UTC m=+0.023184114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.125 257641 DEBUG nova.compute.manager [req-912eaf50-96f3-4ac6-a587-a45a5d9b8632 req-0712aece-7132-438d-8304-b3f8d9213cfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received event network-vif-plugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.126 257641 DEBUG oslo_concurrency.lockutils [req-912eaf50-96f3-4ac6-a587-a45a5d9b8632 req-0712aece-7132-438d-8304-b3f8d9213cfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.126 257641 DEBUG oslo_concurrency.lockutils [req-912eaf50-96f3-4ac6-a587-a45a5d9b8632 req-0712aece-7132-438d-8304-b3f8d9213cfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.127 257641 DEBUG oslo_concurrency.lockutils [req-912eaf50-96f3-4ac6-a587-a45a5d9b8632 req-0712aece-7132-438d-8304-b3f8d9213cfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.127 257641 DEBUG nova.compute.manager [req-912eaf50-96f3-4ac6-a587-a45a5d9b8632 req-0712aece-7132-438d-8304-b3f8d9213cfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] No waiting events found dispatching network-vif-plugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.127 257641 WARNING nova.compute.manager [req-912eaf50-96f3-4ac6-a587-a45a5d9b8632 req-0712aece-7132-438d-8304-b3f8d9213cfe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received unexpected event network-vif-plugged-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.129 257641 INFO nova.compute.manager [-] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Took 1.97 seconds to deallocate network for instance.#033[00m
Nov 29 02:51:32 np0005539550 podman[268953]: 2025-11-29 07:51:32.183813358 +0000 UTC m=+0.218588595 container create bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.225 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.226 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:51:32 np0005539550 systemd[1]: Started libpod-conmon-bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2.scope.
Nov 29 02:51:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:51:32 np0005539550 podman[268953]: 2025-11-29 07:51:32.419910104 +0000 UTC m=+0.454685341 container init bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:51:32 np0005539550 podman[268953]: 2025-11-29 07:51:32.426511618 +0000 UTC m=+0.461286855 container start bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:51:32 np0005539550 podman[268953]: 2025-11-29 07:51:32.431482049 +0000 UTC m=+0.466257276 container attach bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lichterman, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Nov 29 02:51:32 np0005539550 practical_lichterman[268969]: 167 167
Nov 29 02:51:32 np0005539550 systemd[1]: libpod-bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2.scope: Deactivated successfully.
Nov 29 02:51:32 np0005539550 podman[268953]: 2025-11-29 07:51:32.434044767 +0000 UTC m=+0.468820024 container died bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lichterman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:51:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f496b3ce8863f70ccad9a8c5410d35ab429ed73329efb887376c2c105b1b0d6d-merged.mount: Deactivated successfully.
Nov 29 02:51:32 np0005539550 podman[268953]: 2025-11-29 07:51:32.471217919 +0000 UTC m=+0.505993156 container remove bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lichterman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 02:51:32 np0005539550 systemd[1]: libpod-conmon-bcb92d0b4548422f92c9df65b095f1a99259e7d0de9fc32c707e2d3dd7ed7ad2.scope: Deactivated successfully.
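
The short-lived practical_lichterman container that printed "167 167" looks like cephadm probing the image for the ceph UID/GID (167 is the ceph user and group in the upstream ceph images). A sketch of that kind of probe; the exact command cephadm runs may differ:

    # Run the pinned ceph image and stat the ownership of /var/lib/ceph.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.strip()
    print(out)  # expected: 167 167
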
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.509 257641 DEBUG oslo_concurrency.processutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 93 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 3.2 KiB/s wr, 93 op/s
Nov 29 02:51:32 np0005539550 podman[268992]: 2025-11-29 07:51:32.647143125 +0000 UTC m=+0.050993508 container create d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:51:32 np0005539550 systemd[1]: Started libpod-conmon-d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f.scope.
Nov 29 02:51:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:51:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bcb437b92b2044ebbf91ac9f9b15b226137e60365e1239024ace624979eb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bcb437b92b2044ebbf91ac9f9b15b226137e60365e1239024ace624979eb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bcb437b92b2044ebbf91ac9f9b15b226137e60365e1239024ace624979eb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bcb437b92b2044ebbf91ac9f9b15b226137e60365e1239024ace624979eb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923bcb437b92b2044ebbf91ac9f9b15b226137e60365e1239024ace624979eb5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:32 np0005539550 podman[268992]: 2025-11-29 07:51:32.626207492 +0000 UTC m=+0.030057885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:32 np0005539550 podman[268992]: 2025-11-29 07:51:32.739257798 +0000 UTC m=+0.143108211 container init d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 02:51:32 np0005539550 podman[268992]: 2025-11-29 07:51:32.747018693 +0000 UTC m=+0.150869076 container start d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:51:32 np0005539550 podman[268992]: 2025-11-29 07:51:32.75407839 +0000 UTC m=+0.157928793 container attach d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.797 257641 DEBUG nova.compute.manager [req-afed9d88-d982-4013-9f7f-8279b5b25464 req-ef53d53d-39f4-4825-a755-07784270e596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Received event network-vif-deleted-dde08ae7-bcb7-43ab-8adf-8ee4218091e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:51:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115375400' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.950 257641 DEBUG oslo_concurrency.processutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:51:32 np0005539550 nova_compute[257631]: 2025-11-29 07:51:32.957 257641 DEBUG nova.compute.provider_tree [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:51:33 np0005539550 nova_compute[257631]: 2025-11-29 07:51:33.027 257641 DEBUG nova.scheduler.client.report [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
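[Editor's note: the inventory dict in the line above is what the resource tracker reports to Placement. As a worked example of how those numbers bound scheduling, Placement's usual capacity rule is (total - reserved) * allocation_ratio; the arithmetic below is mine, not nova code.]

```python
# Sketch: effective schedulable capacity implied by the inventory in
# the log line above, assuming Placement's capacity formula
# (total - reserved) * allocation_ratio. Not nova/placement code.
inventory = {
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")
# MEMORY_MB: 7168 / VCPU: 32 / DISK_GB: ~17.1
```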
Nov 29 02:51:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
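[Editor's note: the radosgw triples recurring every second or two are anonymous HEAD / probes from 192.168.122.100 and .102, which look like load-balancer health checks. A throwaway parser for the beast access line follows; the regex is inferred from these samples, not from any documented radosgw log format.]

```python
# Sketch: extract client, request, status, and latency from a beast
# access-log line. The regex is inferred from the samples in this
# journal, not from a documented format.
import re

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:07:51:33.046 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')

m = re.search(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>\S+)',
    line,
)
if m:
    print(m["ip"], m["req"], m["status"], m["lat"])
# 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000s
```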
Nov 29 02:51:33 np0005539550 nova_compute[257631]: 2025-11-29 07:51:33.055 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:33 np0005539550 nova_compute[257631]: 2025-11-29 07:51:33.107 257641 INFO nova.scheduler.client.report [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Deleted allocations for instance 65be1187-6d11-4752-8ae6-ba034b77b9e0
Nov 29 02:51:33 np0005539550 nova_compute[257631]: 2025-11-29 07:51:33.242 257641 DEBUG oslo_concurrency.lockutils [None req-3dcea768-54e6-4a65-ac64-ccbadf4a9208 24b6be0dcb9a47aaa2356f6bcab4ef1c fa00908198f941b5afb99ba561a959d1 - - default default] Lock "65be1187-6d11-4752-8ae6-ba034b77b9e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
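[Editor's note: the Acquiring/acquired/released lines with waited/held timings come from oslo.concurrency's lockutils wrappers. A minimal sketch of the underlying pattern — a named in-process lock around a critical section; the function below is hypothetical, not nova's ResourceTracker.]

```python
# Sketch of the oslo.concurrency pattern behind the
# "Lock ... acquired/released ... held N.NNNs" lines above: a named
# in-process lock serializing a critical section. Hypothetical
# function, not nova's ResourceTracker.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_usage():
    # Critical section: only one thread mutates the shared state at a
    # time; lockutils logs acquire/release with waited/held durations.
    pass

update_usage()
```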
Nov 29 02:51:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:51:33 np0005539550 tender_ardinghelli[269027]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:51:33 np0005539550 tender_ardinghelli[269027]: --> relative data size: 1.0
Nov 29 02:51:33 np0005539550 tender_ardinghelli[269027]: --> All data devices are unavailable
Nov 29 02:51:33 np0005539550 systemd[1]: libpod-d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f.scope: Deactivated successfully.
Nov 29 02:51:33 np0005539550 podman[268992]: 2025-11-29 07:51:33.593568512 +0000 UTC m=+0.997418895 container died d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-923bcb437b92b2044ebbf91ac9f9b15b226137e60365e1239024ace624979eb5-merged.mount: Deactivated successfully.
Nov 29 02:51:33 np0005539550 podman[268992]: 2025-11-29 07:51:33.646079439 +0000 UTC m=+1.049929822 container remove d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_ardinghelli, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:33 np0005539550 systemd[1]: libpod-conmon-d10f628a26a28ff8f2fac4e119bc7aeadd9bb33d41aa6b4b49cf832d55565f4f.scope: Deactivated successfully.
Nov 29 02:51:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:34.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:34 np0005539550 podman[269198]: 2025-11-29 07:51:34.258549326 +0000 UTC m=+0.044898467 container create 96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:51:34 np0005539550 systemd[1]: Started libpod-conmon-96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba.scope.
Nov 29 02:51:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:51:34 np0005539550 podman[269198]: 2025-11-29 07:51:34.334351728 +0000 UTC m=+0.120700879 container init 96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_albattani, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:51:34 np0005539550 podman[269198]: 2025-11-29 07:51:34.24168937 +0000 UTC m=+0.028038531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:34 np0005539550 podman[269198]: 2025-11-29 07:51:34.341297631 +0000 UTC m=+0.127646772 container start 96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:51:34 np0005539550 podman[269198]: 2025-11-29 07:51:34.345504272 +0000 UTC m=+0.131853413 container attach 96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 02:51:34 np0005539550 nifty_albattani[269214]: 167 167
Nov 29 02:51:34 np0005539550 systemd[1]: libpod-96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba.scope: Deactivated successfully.
Nov 29 02:51:34 np0005539550 podman[269198]: 2025-11-29 07:51:34.348238884 +0000 UTC m=+0.134588035 container died 96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:51:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b20cb8a5ebf61f5eca0dacee592a7f965149e6ac887c842fda11632bdf582bbd-merged.mount: Deactivated successfully.
Nov 29 02:51:34 np0005539550 podman[269198]: 2025-11-29 07:51:34.400220457 +0000 UTC m=+0.186569598 container remove 96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:51:34 np0005539550 systemd[1]: libpod-conmon-96ef02bd98383a57a078fda363dc890e3ff84471ed2067f2e51ce15ca7770aba.scope: Deactivated successfully.
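[Editor's note: each create/init/start/attach/died/remove burst in this journal is cephadm launching a short-lived helper container from the pinned ceph image. A rough standalone equivalent follows; the ceph-volume arguments are illustrative, since the journal does not show cephadm's exact command line or bind mounts.]

```python
# Sketch: a one-shot container run comparable to the
# create/init/start/attach/died/remove sequences above. The
# ceph-volume arguments are illustrative; cephadm's real invocation
# (and its /etc/ceph, /var/lib/ceph mounts) is not shown in the log.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

result = subprocess.run(
    ["podman", "run", "--rm", IMAGE,
     "ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True,
)
print(result.returncode)
print(result.stdout[:200])
```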
Nov 29 02:51:34 np0005539550 nova_compute[257631]: 2025-11-29 07:51:34.550 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:34 np0005539550 podman[269238]: 2025-11-29 07:51:34.571107451 +0000 UTC m=+0.042350240 container create 46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:51:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 279 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 4.0 KiB/s wr, 100 op/s
Nov 29 02:51:34 np0005539550 systemd[1]: Started libpod-conmon-46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8.scope.
Nov 29 02:51:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:51:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdca21ec1a1a056f13acc1f30dcbaacd54ce32254db78a3620fb33a71c80e2ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdca21ec1a1a056f13acc1f30dcbaacd54ce32254db78a3620fb33a71c80e2ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdca21ec1a1a056f13acc1f30dcbaacd54ce32254db78a3620fb33a71c80e2ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdca21ec1a1a056f13acc1f30dcbaacd54ce32254db78a3620fb33a71c80e2ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:34 np0005539550 podman[269238]: 2025-11-29 07:51:34.555358795 +0000 UTC m=+0.026601604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:34 np0005539550 podman[269238]: 2025-11-29 07:51:34.65285358 +0000 UTC m=+0.124096389 container init 46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:51:34 np0005539550 podman[269238]: 2025-11-29 07:51:34.659878735 +0000 UTC m=+0.131121524 container start 46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:51:34 np0005539550 podman[269238]: 2025-11-29 07:51:34.663200373 +0000 UTC m=+0.134443252 container attach 46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 02:51:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]: {
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:    "0": [
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:        {
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "devices": [
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "/dev/loop3"
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            ],
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "lv_name": "ceph_lv0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "lv_size": "7511998464",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "name": "ceph_lv0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "tags": {
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.cluster_name": "ceph",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.crush_device_class": "",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.encrypted": "0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.osd_id": "0",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.type": "block",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:                "ceph.vdo": "0"
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            },
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "type": "block",
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:            "vg_name": "ceph_vg0"
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:        }
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]:    ]
Nov 29 02:51:35 np0005539550 interesting_feynman[269254]: }
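[Editor's note: the JSON above (ceph-volume's lvm list view of OSD 0) carries the same metadata twice — raw in the comma-separated lv_tags string and pre-split under tags. A tiny sketch of that string-to-dict step; a hypothetical helper, not ceph-volume's own code.]

```python
# Sketch: turn a ceph-volume "lv_tags" string into the "tags" dict
# form, the two representations shown side by side in the JSON above.
# Hypothetical helper, not ceph-volume's implementation; assumes no
# commas inside tag values, which holds for the sample here.
def parse_lv_tags(lv_tags: str) -> dict:
    tags = {}
    for pair in lv_tags.split(","):
        key, _, value = pair.partition("=")
        tags[key] = value
    return tags

sample = "ceph.cluster_name=ceph,ceph.osd_id=0,ceph.type=block"
print(parse_lv_tags(sample))
# {'ceph.cluster_name': 'ceph', 'ceph.osd_id': '0', 'ceph.type': 'block'}
```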
Nov 29 02:51:35 np0005539550 nova_compute[257631]: 2025-11-29 07:51:35.443 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:35 np0005539550 systemd[1]: libpod-46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8.scope: Deactivated successfully.
Nov 29 02:51:35 np0005539550 podman[269238]: 2025-11-29 07:51:35.461926218 +0000 UTC m=+0.933169007 container died 46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:51:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fdca21ec1a1a056f13acc1f30dcbaacd54ce32254db78a3620fb33a71c80e2ad-merged.mount: Deactivated successfully.
Nov 29 02:51:35 np0005539550 podman[269238]: 2025-11-29 07:51:35.526439052 +0000 UTC m=+0.997681841 container remove 46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:51:35 np0005539550 systemd[1]: libpod-conmon-46a03cf2d7798c0c5dccd3c582f0207150405783e306754fbfed315eb11ecbf8.scope: Deactivated successfully.
Nov 29 02:51:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:36.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:36 np0005539550 podman[269417]: 2025-11-29 07:51:36.086491354 +0000 UTC m=+0.034247976 container create aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poitras, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:51:36 np0005539550 systemd[1]: Started libpod-conmon-aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5.scope.
Nov 29 02:51:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:51:36 np0005539550 podman[269417]: 2025-11-29 07:51:36.162171073 +0000 UTC m=+0.109927715 container init aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:36 np0005539550 podman[269417]: 2025-11-29 07:51:36.072426442 +0000 UTC m=+0.020183084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:36 np0005539550 podman[269417]: 2025-11-29 07:51:36.169532597 +0000 UTC m=+0.117289219 container start aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:51:36 np0005539550 podman[269417]: 2025-11-29 07:51:36.173193124 +0000 UTC m=+0.120949746 container attach aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poitras, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:51:36 np0005539550 epic_poitras[269433]: 167 167
Nov 29 02:51:36 np0005539550 systemd[1]: libpod-aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5.scope: Deactivated successfully.
Nov 29 02:51:36 np0005539550 conmon[269433]: conmon aca914daa7156b6d3b70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5.scope/container/memory.events
Nov 29 02:51:36 np0005539550 podman[269417]: 2025-11-29 07:51:36.177643681 +0000 UTC m=+0.125400303 container died aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:51:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c5f6a84b157ea3e4c78a8d0faa5e5d66541f50190d7f6eadf753ed393a9aea44-merged.mount: Deactivated successfully.
Nov 29 02:51:36 np0005539550 podman[269417]: 2025-11-29 07:51:36.216181669 +0000 UTC m=+0.163938291 container remove aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poitras, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:51:36 np0005539550 systemd[1]: libpod-conmon-aca914daa7156b6d3b70294a6537343073b4cb25605a482a5b5008f515727df5.scope: Deactivated successfully.
Nov 29 02:51:36 np0005539550 podman[269458]: 2025-11-29 07:51:36.373483824 +0000 UTC m=+0.036567377 container create b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:51:36 np0005539550 systemd[1]: Started libpod-conmon-b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e.scope.
Nov 29 02:51:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:51:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6cac433b44dbfe6bb29ae6764f8283ea452c643a15f2fc910a18a7054086ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6cac433b44dbfe6bb29ae6764f8283ea452c643a15f2fc910a18a7054086ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6cac433b44dbfe6bb29ae6764f8283ea452c643a15f2fc910a18a7054086ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6cac433b44dbfe6bb29ae6764f8283ea452c643a15f2fc910a18a7054086ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:51:36 np0005539550 podman[269458]: 2025-11-29 07:51:36.356947347 +0000 UTC m=+0.020030920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:51:36 np0005539550 podman[269458]: 2025-11-29 07:51:36.460678517 +0000 UTC m=+0.123762100 container init b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:36 np0005539550 podman[269458]: 2025-11-29 07:51:36.468910814 +0000 UTC m=+0.131994367 container start b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:51:36 np0005539550 podman[269458]: 2025-11-29 07:51:36.472987392 +0000 UTC m=+0.136070955 container attach b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:51:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 2.6 KiB/s wr, 68 op/s
Nov 29 02:51:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:37.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:37 np0005539550 confident_khorana[269476]: {
Nov 29 02:51:37 np0005539550 confident_khorana[269476]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:51:37 np0005539550 confident_khorana[269476]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:51:37 np0005539550 confident_khorana[269476]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:51:37 np0005539550 confident_khorana[269476]:        "osd_id": 0,
Nov 29 02:51:37 np0005539550 confident_khorana[269476]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:51:37 np0005539550 confident_khorana[269476]:        "type": "bluestore"
Nov 29 02:51:37 np0005539550 confident_khorana[269476]:    }
Nov 29 02:51:37 np0005539550 confident_khorana[269476]: }
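[Editor's note: this second JSON document keys each bluestore OSD by its fsid and points at the device-mapper path backing it. Reading it back is one dict walk; a sketch, with the literal copied from the container output above.]

```python
# Sketch: map OSD ids to backing devices from the JSON printed above.
# The literal below is copied from the container output in this journal.
import json

raw_list_json = """
{
  "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
    "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
    "type": "bluestore"
  }
}
"""

for fsid, osd in json.loads(raw_list_json).items():
    print(f"osd.{osd['osd_id']} ({osd['type']}) -> {osd['device']}")
# osd.0 (bluestore) -> /dev/mapper/ceph_vg0-ceph_lv0
```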
Nov 29 02:51:37 np0005539550 systemd[1]: libpod-b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e.scope: Deactivated successfully.
Nov 29 02:51:37 np0005539550 podman[269458]: 2025-11-29 07:51:37.356973379 +0000 UTC m=+1.020056952 container died b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:51:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5e6cac433b44dbfe6bb29ae6764f8283ea452c643a15f2fc910a18a7054086ec-merged.mount: Deactivated successfully.
Nov 29 02:51:37 np0005539550 podman[269458]: 2025-11-29 07:51:37.627965507 +0000 UTC m=+1.291049070 container remove b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:51:37 np0005539550 systemd[1]: libpod-conmon-b4793e2f51616385fe3ae644508c554be8743a2f8abc163ab40fad4b2ae3683e.scope: Deactivated successfully.
Nov 29 02:51:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:51:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:51:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ae279aa5-efab-48e3-8bb6-a773f1dff4f8 does not exist
Nov 29 02:51:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1beaed9f-6d0b-43d5-a3eb-1e702f68048e does not exist
Nov 29 02:51:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8069ca29-06f0-4963-a3c7-ad61476a4513 does not exist
Nov 29 02:51:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 2.6 KiB/s wr, 63 op/s
Nov 29 02:51:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:39.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:51:39 np0005539550 nova_compute[257631]: 2025-11-29 07:51:39.496 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:39 np0005539550 nova_compute[257631]: 2025-11-29 07:51:39.553 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:39 np0005539550 nova_compute[257631]: 2025-11-29 07:51:39.784 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:39 np0005539550 nova_compute[257631]: 2025-11-29 07:51:39.876 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402684.8745942, bc395e50-9261-47b8-a618-4af9eb4933fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:51:39 np0005539550 nova_compute[257631]: 2025-11-29 07:51:39.876 257641 INFO nova.compute.manager [-] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] VM Stopped (Lifecycle Event)
Nov 29 02:51:39 np0005539550 nova_compute[257631]: 2025-11-29 07:51:39.902 257641 DEBUG nova.compute.manager [None req-bc6e0d86-fa36-44e2-9fb9-d6bad2957819 - - - - - -] [instance: bc395e50-9261-47b8-a618-4af9eb4933fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:40 np0005539550 nova_compute[257631]: 2025-11-29 07:51:40.445 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 1.2 KiB/s wr, 40 op/s
Nov 29 02:51:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:41.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:42.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 852 B/s wr, 7 op/s
Nov 29 02:51:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:43.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:44.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.516 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402689.5151925, 65be1187-6d11-4752-8ae6-ba034b77b9e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.516 257641 INFO nova.compute.manager [-] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] VM Stopped (Lifecycle Event)
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.561 257641 DEBUG nova.compute.manager [None req-0bec586d-45a1-4d1f-8917-b8440c325ec2 - - - - - -] [instance: 65be1187-6d11-4752-8ae6-ba034b77b9e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.561 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 852 B/s wr, 7 op/s
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.608 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "558df211-792b-4f19-b366-d27b780a73ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.609 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "558df211-792b-4f19-b366-d27b780a73ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.628 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.730 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.731 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.741 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.741 257641 INFO nova.compute.claims [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Claim successful on node compute-0.ctlplane.example.com
Nov 29 02:51:44 np0005539550 nova_compute[257631]: 2025-11-29 07:51:44.874 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:51:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:45.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125748916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.382 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.389 257641 DEBUG nova.compute.provider_tree [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.420 257641 DEBUG nova.scheduler.client.report [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.449 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.453 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.454 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.525 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.526 257641 DEBUG nova.network.neutron [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.557 257641 INFO nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.584 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.706 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.708 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.709 257641 INFO nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Creating image(s)#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.741 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.779 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.809 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.814 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.880 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.881 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.882 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.883 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.914 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:45 np0005539550 nova_compute[257631]: 2025-11-29 07:51:45.918 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 558df211-792b-4f19-b366-d27b780a73ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:46.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:46 np0005539550 nova_compute[257631]: 2025-11-29 07:51:46.233 257641 DEBUG nova.network.neutron [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:51:46 np0005539550 nova_compute[257631]: 2025-11-29 07:51:46.234 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:51:46 np0005539550 nova_compute[257631]: 2025-11-29 07:51:46.254 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 558df211-792b-4f19-b366-d27b780a73ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:46 np0005539550 podman[269732]: 2025-11-29 07:51:46.332048994 +0000 UTC m=+0.058790294 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:51:46 np0005539550 podman[269731]: 2025-11-29 07:51:46.338011201 +0000 UTC m=+0.065614614 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:51:46 np0005539550 nova_compute[257631]: 2025-11-29 07:51:46.348 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] resizing rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:51:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.050 257641 DEBUG nova.objects.instance [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 558df211-792b-4f19-b366-d27b780a73ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:47.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.068 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.068 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Ensure instance console log exists: /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.069 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.069 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.070 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.071 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.075 257641 WARNING nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.081 257641 DEBUG nova.virt.libvirt.host [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.082 257641 DEBUG nova.virt.libvirt.host [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.086 257641 DEBUG nova.virt.libvirt.host [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.087 257641 DEBUG nova.virt.libvirt.host [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.088 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.088 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.089 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.089 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.089 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.089 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.089 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.090 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.090 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.090 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.090 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.091 257641 DEBUG nova.virt.hardware [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.093 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:51:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551966730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.576 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.601 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:47 np0005539550 nova_compute[257631]: 2025-11-29 07:51:47.605 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:48.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:51:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1229978982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.065 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.067 257641 DEBUG nova.objects.instance [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 558df211-792b-4f19-b366-d27b780a73ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.226 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <uuid>558df211-792b-4f19-b366-d27b780a73ff</uuid>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <name>instance-0000000b</name>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <nova:name>tempest-TenantUsagesTestJSON-server-1505690600</nova:name>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:51:47</nova:creationTime>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <nova:user uuid="87697f1c359a420b87e5117546f93346">tempest-TenantUsagesTestJSON-1762847679-project-member</nova:user>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <nova:project uuid="e2d0cea1586c4a168b1727ff2ee000c9">tempest-TenantUsagesTestJSON-1762847679</nova:project>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <entry name="serial">558df211-792b-4f19-b366-d27b780a73ff</entry>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <entry name="uuid">558df211-792b-4f19-b366-d27b780a73ff</entry>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/558df211-792b-4f19-b366-d27b780a73ff_disk">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/558df211-792b-4f19-b366-d27b780a73ff_disk.config">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/console.log" append="off"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:51:48 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:51:48 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:51:48 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:51:48 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.308 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.309 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.310 257641 INFO nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Using config drive#033[00m
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.336 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 61 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 919 KiB/s wr, 24 op/s
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.849 257641 INFO nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Creating config drive at /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/disk.config#033[00m
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.855 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7rqmg02p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:48 np0005539550 nova_compute[257631]: 2025-11-29 07:51:48.983 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7rqmg02p" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.014 257641 DEBUG nova.storage.rbd_utils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] rbd image 558df211-792b-4f19-b366-d27b780a73ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.018 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/disk.config 558df211-792b-4f19-b366-d27b780a73ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:51:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:49.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.214 257641 DEBUG oslo_concurrency.processutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/disk.config 558df211-792b-4f19-b366-d27b780a73ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.215 257641 INFO nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Deleting local config drive /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff/disk.config because it was imported into RBD.#033[00m
Nov 29 02:51:49 np0005539550 systemd-machined[216673]: New machine qemu-5-instance-0000000b.
Nov 29 02:51:49 np0005539550 systemd[1]: Started Virtual Machine qemu-5-instance-0000000b.
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.562 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.723 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402709.7230046, 558df211-792b-4f19-b366-d27b780a73ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.724 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.729 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.729 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.735 257641 INFO nova.virt.libvirt.driver [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Instance spawned successfully.#033[00m
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.736 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.756 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.763 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.766 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.766 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.767 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.767 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.768 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.769 257641 DEBUG nova.virt.libvirt.driver [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
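The six "Found default for ..." lines record nova pinning a concrete bus or model for every image property the image itself left unset, so the choices stay stable across later rebuilds of this instance. A minimal Python sketch of that defaulting pattern (the names and the defaults table are illustrative, taken from the log lines above, not nova's actual code):

    # For each hw_* property the image did not define, record the driver's
    # chosen default so it stays stable for the instance's lifetime.
    DRIVER_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_defaults(image_props):
        """Return only the defaults that still need to be persisted."""
        return {k: v for k, v in DRIVER_DEFAULTS.items() if k not in image_props}

    print(register_undefined_defaults({'hw_disk_bus': 'scsi'}))
    # -> five entries; hw_disk_bus keeps the image's own value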
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.806 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.807 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402709.724339, 558df211-792b-4f19-b366-d27b780a73ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.807 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] VM Started (Lifecycle Event)
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.867 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.872 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.900 257641 INFO nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Took 4.19 seconds to spawn the instance on the hypervisor.
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.901 257641 DEBUG nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:51:49 np0005539550 nova_compute[257631]: 2025-11-29 07:51:49.904 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:51:50 np0005539550 nova_compute[257631]: 2025-11-29 07:51:50.002 257641 INFO nova.compute.manager [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Took 5.30 seconds to build instance.
Nov 29 02:51:50 np0005539550 nova_compute[257631]: 2025-11-29 07:51:50.030 257641 DEBUG oslo_concurrency.lockutils [None req-276e9578-3e26-4da3-b603-d89f648c2217 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "558df211-792b-4f19-b366-d27b780a73ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
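The lockutils messages throughout this log pair every 'acquired ... waited Ns' with a matching '"released" ... held Ns'; that is how the 5.421s hold time for the build lock above was measured. The accounting reduces to a decorator like this (pure-stdlib sketch of the pattern; nova actually uses oslo_concurrency.lockutils):

    import threading
    import time

    _locks = {}

    def locked(name):
        """Serialize calls on a named lock and report wait/hold times."""
        lock = _locks.setdefault(name, threading.Lock())
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    print(f'Lock "{name}" acquired by "{fn.__name__}" :: '
                          f'waited {time.monotonic() - t0:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print(f'Lock "{name}" "released" by "{fn.__name__}" :: '
                              f'held {time.monotonic() - t1:.3f}s')
            return inner
        return wrap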
Nov 29 02:51:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:50.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
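These radosgw triplets recur roughly every second: an anonymous "HEAD / HTTP/1.0" from 192.168.122.100 or .102, answered 200 with near-zero latency, which looks like load-balancer health probing rather than real object traffic. The beast access line parses with a short regex (field layout inferred from the lines above, not an official format):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:07:51:50.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.match(line)
    print(m.group('client'), m.group('req'), m.group('status'), m.group('latency'))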
Nov 29 02:51:50 np0005539550 nova_compute[257631]: 2025-11-29 07:51:50.450 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 02:51:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:51.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:51:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:52.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.279 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "558df211-792b-4f19-b366-d27b780a73ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.280 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "558df211-792b-4f19-b366-d27b780a73ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.280 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "558df211-792b-4f19-b366-d27b780a73ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.281 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "558df211-792b-4f19-b366-d27b780a73ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.281 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "558df211-792b-4f19-b366-d27b780a73ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.282 257641 INFO nova.compute.manager [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Terminating instance
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.283 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "refresh_cache-558df211-792b-4f19-b366-d27b780a73ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.283 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquired lock "refresh_cache-558df211-792b-4f19-b366-d27b780a73ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.284 257641 DEBUG nova.network.neutron [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:51:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 29 02:51:52 np0005539550 nova_compute[257631]: 2025-11-29 07:51:52.900 257641 DEBUG nova.network.neutron [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:51:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:53.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:53 np0005539550 nova_compute[257631]: 2025-11-29 07:51:53.325 257641 DEBUG nova.network.neutron [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:51:53 np0005539550 nova_compute[257631]: 2025-11-29 07:51:53.349 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Releasing lock "refresh_cache-558df211-792b-4f19-b366-d27b780a73ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:51:53 np0005539550 nova_compute[257631]: 2025-11-29 07:51:53.351 257641 DEBUG nova.compute.manager [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:51:53 np0005539550 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 29 02:51:53 np0005539550 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Consumed 4.132s CPU time.
Nov 29 02:51:53 np0005539550 systemd-machined[216673]: Machine qemu-5-instance-0000000b terminated.
Nov 29 02:51:53 np0005539550 nova_compute[257631]: 2025-11-29 07:51:53.572 257641 INFO nova.virt.libvirt.driver [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Instance destroyed successfully.
Nov 29 02:51:53 np0005539550 nova_compute[257631]: 2025-11-29 07:51:53.573 257641 DEBUG nova.objects.instance [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lazy-loading 'resources' on Instance uuid 558df211-792b-4f19-b366-d27b780a73ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:51:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.547 257641 INFO nova.virt.libvirt.driver [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Deleting instance files /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff_del
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.548 257641 INFO nova.virt.libvirt.driver [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Deletion of /var/lib/nova/instances/558df211-792b-4f19-b366-d27b780a73ff_del complete
Nov 29 02:51:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.611 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.625 257641 INFO nova.compute.manager [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Took 1.27 seconds to destroy the instance on the hypervisor.
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.626 257641 DEBUG oslo.service.loopingcall [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.626 257641 DEBUG nova.compute.manager [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.626 257641 DEBUG nova.network.neutron [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.881 257641 DEBUG nova.network.neutron [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.904 257641 DEBUG nova.network.neutron [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.919 257641 INFO nova.compute.manager [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Took 0.29 seconds to deallocate network for instance.
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.962 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:51:54 np0005539550 nova_compute[257631]: 2025-11-29 07:51:54.964 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:51:55 np0005539550 nova_compute[257631]: 2025-11-29 07:51:55.040 257641 DEBUG oslo_concurrency.processutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:51:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:51:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:55.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:51:55 np0005539550 nova_compute[257631]: 2025-11-29 07:51:55.453 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:51:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:51:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880890820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:51:55 np0005539550 nova_compute[257631]: 2025-11-29 07:51:55.497 257641 DEBUG oslo_concurrency.processutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
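Nova's resource tracker sizes RBD-backed storage by shelling out to ceph df, as the Running cmd / CMD returned pair above shows (a 0.456s round trip, echoed by the mon's handle_command audit lines). A standalone sketch of the same probe, assuming this deployment's cluster and client.openstack keyring are reachable and that the JSON carries the usual top-level "stats" block:

    import json
    import subprocess

    # Same command line as logged above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('total bytes:', stats['total_bytes'],
          'available bytes:', stats['total_avail_bytes'])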
Nov 29 02:51:55 np0005539550 nova_compute[257631]: 2025-11-29 07:51:55.504 257641 DEBUG nova.compute.provider_tree [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:51:55 np0005539550 nova_compute[257631]: 2025-11-29 07:51:55.531 257641 DEBUG nova.scheduler.client.report [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
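The inventory dictionary above is what placement turns into schedulable capacity: each resource class offers roughly (total - reserved) * allocation_ratio units, before min_unit/max_unit and step_size constraints. Plugging in the logged values:

    # Placement's capacity rule, applied to the inventory logged above.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, round((inv['total'] - inv['reserved']) * inv['allocation_ratio'], 1))
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 17.1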
Nov 29 02:51:55 np0005539550 nova_compute[257631]: 2025-11-29 07:51:55.583 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:55 np0005539550 nova_compute[257631]: 2025-11-29 07:51:55.634 257641 INFO nova.scheduler.client.report [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Deleted allocations for instance 558df211-792b-4f19-b366-d27b780a73ff
Nov 29 02:51:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:56.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:56 np0005539550 nova_compute[257631]: 2025-11-29 07:51:56.219 257641 DEBUG oslo_concurrency.lockutils [None req-1ad00589-bcb6-47fb-90aa-fd89b0a1838e 87697f1c359a420b87e5117546f93346 e2d0cea1586c4a168b1727ff2ee000c9 - - default default] Lock "558df211-792b-4f19-b366-d27b780a73ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:51:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 61 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Nov 29 02:51:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:57.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:51:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:51:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:51:58.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:58 np0005539550 podman[270069]: 2025-11-29 07:51:58.564744837 +0000 UTC m=+0.101981425 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:51:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.956003) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402718956106, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2232, "num_deletes": 258, "total_data_size": 3915588, "memory_usage": 3967696, "flush_reason": "Manual Compaction"}
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402718983401, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3845219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21374, "largest_seqno": 23605, "table_properties": {"data_size": 3835247, "index_size": 6274, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 20751, "raw_average_key_size": 20, "raw_value_size": 3814943, "raw_average_value_size": 3693, "num_data_blocks": 277, "num_entries": 1033, "num_filter_entries": 1033, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402494, "oldest_key_time": 1764402494, "file_creation_time": 1764402718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 27468 microseconds, and 10182 cpu microseconds.
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.983482) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3845219 bytes OK
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.983507) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.986033) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.986068) EVENT_LOG_v1 {"time_micros": 1764402718986060, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.986083) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3906414, prev total WAL file size 3917725, number of live WAL files 2.
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.987271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3755KB)], [50(8015KB)]
Nov 29 02:51:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402718987362, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 12053259, "oldest_snapshot_seqno": -1}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5546 keys, 11894606 bytes, temperature: kUnknown
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719077662, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 11894606, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11854863, "index_size": 24785, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 140650, "raw_average_key_size": 25, "raw_value_size": 11751965, "raw_average_value_size": 2118, "num_data_blocks": 1019, "num_entries": 5546, "num_filter_entries": 5546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764402718, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:51:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:51:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:51:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:51:59.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.078007) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 11894606 bytes
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.080091) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.3 rd, 131.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.8 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.2) write-amplify(3.1) OK, records in: 6077, records dropped: 531 output_compression: NoCompression
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.080121) EVENT_LOG_v1 {"time_micros": 1764402719080109, "job": 26, "event": "compaction_finished", "compaction_time_micros": 90398, "compaction_time_cpu_micros": 30053, "output_level": 6, "num_output_files": 1, "total_output_size": 11894606, "num_input_records": 6077, "num_output_records": 5546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
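JOB 26's summary already contains the arithmetic behind its amplification figures: the inputs were table #52 (3,845,219 bytes, the fresh L0 flush) plus the pre-existing level-6 table #50 (input_data_size 12,053,259 minus the L0 file), and the output table #53 is 11,894,606 bytes. Re-deriving the reported ratios from those byte counts:

    # Reproducing JOB 26's write-amplify(3.1) and read-write-amplify(6.2)
    # from the byte counts in the surrounding log lines.
    l0_in = 3_845_219            # table #52: new data entering the compaction
    l6_in = 12_053_259 - l0_in   # table #50: pre-existing level-6 data
    out   = 11_894_606           # table #53: compaction output
    print(round(out / l0_in, 1))                    # 3.1  write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 6.2  read-write-amplify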
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719081116, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719082556, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:58.987105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.082658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.082665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.082667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.082669) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.082670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.083115) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719083151, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 13330, "memory_usage": 19832, "flush_reason": "Manual Compaction"}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719085074, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 13305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23606, "largest_seqno": 23861, "table_properties": {"data_size": 11552, "index_size": 50, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 4640, "raw_average_key_size": 18, "raw_value_size": 8177, "raw_average_value_size": 31, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402718, "oldest_key_time": 1764402718, "file_creation_time": 1764402719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 2011 microseconds, and 628 cpu microseconds.
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.085111) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 13305 bytes OK
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.085139) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.086993) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.087006) EVENT_LOG_v1 {"time_micros": 1764402719087002, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.087018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 11311, prev total WAL file size 11311, number of live WAL files 2.
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.087342) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(12KB)], [53(11MB)]
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719087420, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11907911, "oldest_snapshot_seqno": -1}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5296 keys, 9775593 bytes, temperature: kUnknown
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719163504, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9775593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9739395, "index_size": 21880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 136150, "raw_average_key_size": 25, "raw_value_size": 9642647, "raw_average_value_size": 1820, "num_data_blocks": 888, "num_entries": 5296, "num_filter_entries": 5296, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764402719, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.163784) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9775593 bytes
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.165901) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.3 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.3 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(1629.7) write-amplify(734.7) OK, records in: 5802, records dropped: 506 output_compression: NoCompression
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.165925) EVENT_LOG_v1 {"time_micros": 1764402719165915, "job": 28, "event": "compaction_finished", "compaction_time_micros": 76167, "compaction_time_cpu_micros": 22767, "output_level": 6, "num_output_files": 1, "total_output_size": 9775593, "num_input_records": 5802, "num_output_records": 5296, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719166226, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402719168874, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.087217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.169042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.169048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.169051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.169053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:51:59.169059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:51:59
Nov 29 02:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'volumes', 'images', 'default.rgw.log', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 02:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:51:59 np0005539550 nova_compute[257631]: 2025-11-29 07:51:59.612 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:00.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:00 np0005539550 nova_compute[257631]: 2025-11-29 07:52:00.454 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 908 KiB/s wr, 102 op/s
Nov 29 02:52:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:01.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:52:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.0 total, 600.0 interval
Cumulative writes: 5192 writes, 23K keys, 5187 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 5192 writes, 5187 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1325 writes, 6194 keys, 1323 commit groups, 1.0 writes per commit group, ingest: 9.41 MB, 0.02 MB/s
Interval WAL: 1325 writes, 1323 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     17.3      1.77              0.10        14    0.126       0      0       0.0       0.0
  L6      1/0    9.32 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     16.0     13.8      9.47              0.37        13    0.729     67K   6556       0.0       0.0
 Sum      1/0    9.32 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   5.3     13.5     14.3     11.24              0.48        27    0.416     67K   6556       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.2     60.7     60.6      1.14              0.21        12    0.095     34K   3521       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     16.0     13.8      9.47              0.37        13    0.729     67K   6556       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     17.3      1.76              0.10        13    0.135       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.030, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.16 GB write, 0.07 MB/s write, 0.15 GB read, 0.06 MB/s read, 11.2 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.1 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 13.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000169 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(755,12.99 MB,4.27273%) FilterBlock(28,190.30 KB,0.0611305%) IndexBlock(28,371.77 KB,0.119425%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
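The stats dump above was emitted by the monitor as a single journal entry; journald escapes embedded control bytes octally, so "#012" stands for a newline and "#033[00m" is the ANSI color reset that nova_compute appends to its records. Undoing both for readability is a two-substitution filter:

    import re

    def unescape_journal(msg):
        """Expand journald's octal escapes back into readable text."""
        msg = msg.replace('#012', '\n')               # embedded newlines
        return re.sub(r'#033\[[0-9;]*m', '', msg)     # ANSI color sequences

    print(unescape_journal('** DB Stats **#012Uptime(secs): 2400.0 total#033[00m'))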
Nov 29 02:52:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:02.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 98 op/s
Nov 29 02:52:02 np0005539550 nova_compute[257631]: 2025-11-29 07:52:02.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:52:02 np0005539550 nova_compute[257631]: 2025-11-29 07:52:02.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:52:02 np0005539550 nova_compute[257631]: 2025-11-29 07:52:02.982 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:52:02 np0005539550 nova_compute[257631]: 2025-11-29 07:52:02.982 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.006 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.006 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.006 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.007 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.007 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:03.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/997347005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.449 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
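[editor's note] The resource tracker above shells out to `ceph df --format=json` (dispatched by the mon as the audit lines show) to size its RBD-backed storage. The same call can be reproduced by hand; the sketch assumes the usual `ceph df` JSON layout (`stats` totals plus a `pools` list, key names vary slightly by Ceph release) and a node holding the client.openstack keyring:

```python
import json
import subprocess

# The same command the resource tracker runs above.
cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
       '--conf', '/etc/ceph/ceph.conf']
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
df = json.loads(out)

# 'stats' carries cluster-wide totals; 'pools' per-pool usage (layout assumed).
total = df['stats']['total_bytes'] / 2**30
avail = df['stats']['total_avail_bytes'] / 2**30
print(f'cluster: {avail:.1f} GiB free of {total:.1f} GiB')
for pool in df['pools']:
    stats = pool['stats']
    print(pool['name'], stats.get('stored', stats.get('bytes_used')))
```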
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.659 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.661 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4801MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.661 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.662 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.759 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.760 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:52:03 np0005539550 nova_compute[257631]: 2025-11-29 07:52:03.785 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
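[editor's note] In the `_set_new_cache_sizes` line above, kv_alloc is 318767104 bytes, exactly 304 MiB, the same capacity reported for the BinnedLRUCache in the RocksDB stats dump at the top of this section; the match suggests the mon derives its RocksDB block-cache size from this allocation (an inference from the matching numbers, not from Ceph's source). A quick arithmetic check:

```python
# Figures copied from the _set_new_cache_sizes line above.
kv_alloc = 318_767_104
inc_alloc = full_alloc = 348_127_232

MiB = 2**20
assert kv_alloc / MiB == 304.0   # matches "capacity: 304.00 MB" in the RocksDB dump
assert inc_alloc / MiB == 332.0
print(f'kv_alloc={kv_alloc / MiB:.0f} MiB, inc/full_alloc={inc_alloc / MiB:.0f} MiB')
```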
Nov 29 02:52:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3690428604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:04 np0005539550 nova_compute[257631]: 2025-11-29 07:52:04.238 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:04 np0005539550 nova_compute[257631]: 2025-11-29 07:52:04.244 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:52:04 np0005539550 nova_compute[257631]: 2025-11-29 07:52:04.258 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:52:04 np0005539550 nova_compute[257631]: 2025-11-29 07:52:04.284 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:52:04 np0005539550 nova_compute[257631]: 2025-11-29 07:52:04.285 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
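[editor's note] The inventory record above fixes what placement will accept for this node: effective capacity per resource class is (total − reserved) × allocation_ratio, i.e. 7168 MB of RAM, 32 VCPUs (8 × 4.0), and 17 GB of disk ((20 − 1) × 0.9, truncated). A sketch reproducing that arithmetic; the truncation to int is an assumption about placement's rounding:

```python
# Inventory copied from the scheduler report-client log line above.
inventory = {
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(f'{rc}: {cap}')
# -> MEMORY_MB: 7168, VCPU: 32, DISK_GB: 17
```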
Nov 29 02:52:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:04.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 KiB/s wr, 98 op/s
Nov 29 02:52:04 np0005539550 nova_compute[257631]: 2025-11-29 07:52:04.650 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:05.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.222 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.223 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.288 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.288 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.289 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.290 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.456 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:05.508 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:52:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:05.510 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.510 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:05 np0005539550 nova_compute[257631]: 2025-11-29 07:52:05.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:06.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 965 KiB/s rd, 1.2 KiB/s wr, 56 op/s
Nov 29 02:52:06 np0005539550 nova_compute[257631]: 2025-11-29 07:52:06.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:52:06 np0005539550 nova_compute[257631]: 2025-11-29 07:52:06.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:52:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:07.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:52:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:08.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:08 np0005539550 nova_compute[257631]: 2025-11-29 07:52:08.572 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402713.571167, 558df211-792b-4f19-b366-d27b780a73ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:08 np0005539550 nova_compute[257631]: 2025-11-29 07:52:08.573 257641 INFO nova.compute.manager [-] [instance: 558df211-792b-4f19-b366-d27b780a73ff] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Nov 29 02:52:08 np0005539550 nova_compute[257631]: 2025-11-29 07:52:08.659 257641 DEBUG nova.compute.manager [None req-e3026c1f-7aa9-46f7-add8-69729324d399 - - - - - -] [instance: 558df211-792b-4f19-b366-d27b780a73ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:52:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:09.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:09 np0005539550 nova_compute[257631]: 2025-11-29 07:52:09.652 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:10.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:10 np0005539550 nova_compute[257631]: 2025-11-29 07:52:10.460 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:11.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:12.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:13.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:14.512 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
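[editor's note] This DbSetCommand closes the loop opened at 07:52:05.510, where the agent logged "Delaying updating chassis table for 9 seconds" after seeing SB_Global move to nb_cfg=6: 05.510 + 9 s lands at 14.510, and the write of neutron:ovn-metadata-sb-cfg=6 into the chassis external_ids is committed at 14.512. The arithmetic:

```python
from datetime import datetime, timedelta

announced = datetime.strptime('07:52:05.510', '%H:%M:%S.%f')
committed = datetime.strptime('07:52:14.512', '%H:%M:%S.%f')
# The agent promised to ack nb_cfg=6 after a 9-second delay.
assert abs((committed - announced) - timedelta(seconds=9)) < timedelta(milliseconds=50)
```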
Nov 29 02:52:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:14 np0005539550 nova_compute[257631]: 2025-11-29 07:52:14.722 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:15 np0005539550 nova_compute[257631]: 2025-11-29 07:52:15.461 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:16.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:17 np0005539550 podman[270202]: 2025-11-29 07:52:17.322885889 +0000 UTC m=+0.062964454 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 02:52:17 np0005539550 podman[270201]: 2025-11-29 07:52:17.325476897 +0000 UTC m=+0.067388130 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible)
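[editor's note] The two podman records above are periodic health-check results (health_status=healthy, failing streak 0): per their config_data, each container mounts a script from /var/lib/openstack/healthchecks/<name> and runs /openstack/healthcheck as its test. The same probe can be triggered by hand with `podman healthcheck run`, which exits 0 when the check passes; a small wrapper:

```python
import subprocess

def container_healthy(name: str) -> bool:
    """Run the container's own healthcheck; exit status 0 means healthy."""
    proc = subprocess.run(['podman', 'healthcheck', 'run', name],
                          capture_output=True, text=True)
    return proc.returncode == 0

for name in ('ovn_metadata_agent', 'multipathd'):
    print(name, 'healthy' if container_healthy(name) else 'unhealthy')
```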
Nov 29 02:52:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:18.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:18.924 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:18.925 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:18.925 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:52:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
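[editor's note] The autoscaler figures above are internally consistent: every "pg target" equals usage_ratio × bias × 300, which fits a budget of roughly 100 PGs per OSD across 3 OSDs (an inference from the numbers, not from the module's code); targets far below the current pg_num are then left quantized at the current value. A check over four of the pools:

```python
# (usage_ratio, bias, logged pg target) copied from the lines above.
pools = {
    '.mgr':               (2.0538165363856318e-05, 1.0, 0.006161449609156895),
    'images':             (0.0019031427391587568,  1.0, 0.570942821747627),
    'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0, 0.0017448352875488555),
    'default.rgw.log':    (6.17962497673553e-06,   1.0, 0.0018538874930206588),
}

PG_BUDGET = 300  # ~100 PGs per OSD x 3 OSDs (inferred)

for name, (usage_ratio, bias, logged) in pools.items():
    assert abs(usage_ratio * bias * PG_BUDGET - logged) < 1e-9, name
print('all pg targets reproduce as usage_ratio * bias * 300')
```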
Nov 29 02:52:19 np0005539550 nova_compute[257631]: 2025-11-29 07:52:19.723 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:20 np0005539550 nova_compute[257631]: 2025-11-29 07:52:20.463 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:20.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:22.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:24.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:24 np0005539550 nova_compute[257631]: 2025-11-29 07:52:24.725 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:25.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:25 np0005539550 nova_compute[257631]: 2025-11-29 07:52:25.465 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:27.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000052s ======
Nov 29 02:52:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:29.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 29 02:52:29 np0005539550 podman[270297]: 2025-11-29 07:52:29.35216163 +0000 UTC m=+0.091798756 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:52:29 np0005539550 nova_compute[257631]: 2025-11-29 07:52:29.727 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:30 np0005539550 nova_compute[257631]: 2025-11-29 07:52:30.467 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:30 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 02:52:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:31.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:32.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:33.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:34.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:34 np0005539550 nova_compute[257631]: 2025-11-29 07:52:34.728 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:35.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:35 np0005539550 nova_compute[257631]: 2025-11-29 07:52:35.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:36.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:37 np0005539550 nova_compute[257631]: 2025-11-29 07:52:37.932 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:37 np0005539550 nova_compute[257631]: 2025-11-29 07:52:37.932 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:37 np0005539550 nova_compute[257631]: 2025-11-29 07:52:37.948 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.040 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.041 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.048 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.048 257641 INFO nova.compute.claims [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.315 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:38.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 261 MiB used, 21 GiB / 21 GiB avail
Nov 29 02:52:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:52:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680709925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.787 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.799 257641 DEBUG nova.compute.provider_tree [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.829 257641 DEBUG nova.scheduler.client.report [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.857 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.858 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.939 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.939 257641 DEBUG nova.network.neutron [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:52:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.966 257641 INFO nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:52:38 np0005539550 nova_compute[257631]: 2025-11-29 07:52:38.998 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 96c10629-0684-4719-883d-f6ecad415616 does not exist
Nov 29 02:52:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 217e2624-bd35-4af5-9638-82c655b8dc9c does not exist
Nov 29 02:52:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8faf9001-68d5-49a0-9021-e5101d54517c does not exist
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
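The handle_command/audit pairs above are the mgr (cephadm module) driving the mon over the normal command interface; the same JSON commands can be sent from any client with the python-rados bindings. A sketch, assuming /etc/ceph/ceph.conf and an admin keyring are readable and omitting error handling:

    import json
    import rados

    # Send the same mon command the mgr dispatches above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(outbuf.decode())  # a minimal client ceph.conf
    cluster.shutdown()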
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.116 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.117 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.117 257641 INFO nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Creating image(s)
Nov 29 02:52:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:39.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
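The recurring anonymous "HEAD / HTTP/1.0" 200 lines from 192.168.122.100/.102 look like load-balancer health probes against radosgw (an inference from the interval and empty body, not stated in the log). An equivalent probe in Python; the host and port here are hypothetical for this deployment:

    import http.client

    # Hypothetical health probe; host/port are assumptions, not from the log.
    conn = http.client.HTTPConnection('np0005539550', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # radosgw/beast answers 200 with no body
    conn.close()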
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.144 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.180 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.206 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.211 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.273 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
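The qemu-img probe above is deliberately sandboxed: oslo_concurrency.prlimit caps address space (--as, in bytes) and CPU time (--cpu, in seconds) so a malformed base image cannot wedge the compute host. A sketch re-running the same bounded probe and reading its JSON:

    import json
    import subprocess

    # Same bounded probe nova logs above; path copied from the journal.
    BASE = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    out = subprocess.check_output([
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', BASE, '--force-share', '--output=json',
    ])
    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # image format and size in bytes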
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.275 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.276 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.276 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.307 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.311 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3fde18f7-843a-48d8-b394-c299cf479c37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.343 257641 DEBUG nova.policy [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7f80aba0abfa403b80928e251377a7cd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ba323f7dc95a4f11911e6559a1b3c99e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.730 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:39 np0005539550 podman[270713]: 2025-11-29 07:52:39.639636359 +0000 UTC m=+0.023616573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:39 np0005539550 podman[270713]: 2025-11-29 07:52:39.932960419 +0000 UTC m=+0.316940613 container create 25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:52:39 np0005539550 nova_compute[257631]: 2025-11-29 07:52:39.956 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3fde18f7-843a-48d8-b394-c299cf479c37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:52:39 np0005539550 systemd[1]: Started libpod-conmon-25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d.scope.
Nov 29 02:52:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:52:40 np0005539550 podman[270713]: 2025-11-29 07:52:40.02602743 +0000 UTC m=+0.410007644 container init 25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:52:40 np0005539550 podman[270713]: 2025-11-29 07:52:40.038114778 +0000 UTC m=+0.422094972 container start 25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.041 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] resizing rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
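The import at 07:52:39.311 plus this resize is nova's image-to-RBD path: upload the cached base image into the vms pool, then grow it to the flavor's 1 GiB root disk. A sketch of the resize step through the rbd/rados Python bindings instead of the CLI (pool, client id, image name and size are copied from the log; error handling trimmed):

    import rados
    import rbd

    # Resize the freshly imported image, as rbd_utils.resize logs above.
    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, '3fde18f7-843a-48d8-b394-c299cf479c37_disk') as image:
                image.resize(1073741824)  # new size in bytes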
Nov 29 02:52:40 np0005539550 podman[270713]: 2025-11-29 07:52:40.043163966 +0000 UTC m=+0.427144160 container attach 25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:52:40 np0005539550 flamboyant_raman[270746]: 167 167
Nov 29 02:52:40 np0005539550 systemd[1]: libpod-25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d.scope: Deactivated successfully.
Nov 29 02:52:40 np0005539550 conmon[270746]: conmon 25161de3fd5e623cb4af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d.scope/container/memory.events
Nov 29 02:52:40 np0005539550 podman[270713]: 2025-11-29 07:52:40.049110528 +0000 UTC m=+0.433090732 container died 25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:52:40 np0005539550 systemd[1]: var-lib-containers-storage-overlay-05158c414080ceba93059963185d4924b61f339d691e932822cb9ee399b95c41-merged.mount: Deactivated successfully.
Nov 29 02:52:40 np0005539550 podman[270713]: 2025-11-29 07:52:40.099277035 +0000 UTC m=+0.483257229 container remove 25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_raman, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:52:40 np0005539550 systemd[1]: libpod-conmon-25161de3fd5e623cb4afcbb75f8f69506b5ad30b3a6b8842e46cc112341d212d.scope: Deactivated successfully.
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.163 257641 DEBUG nova.objects.instance [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lazy-loading 'migration_context' on Instance uuid 3fde18f7-843a-48d8-b394-c299cf479c37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.208 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.208 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Ensure instance console log exists: /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.209 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.209 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.209 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
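The Acquiring/acquired/released triplets here are the DEBUG trace emitted by oslo_concurrency's lock wrapper (the "inner" function in the paths above); in Nova this pattern typically comes from the synchronized decorator. A minimal equivalent of that pattern:

    from oslo_concurrency import lockutils

    # Body runs with the named lock held; waited/held times are logged at
    # DEBUG by the "inner" wrapper, exactly as in the journal lines above.
    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        return []  # no vGPUs requested here, so the critical section is trivial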
Nov 29 02:52:40 np0005539550 podman[270825]: 2025-11-29 07:52:40.296666163 +0000 UTC m=+0.057848285 container create 86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 02:52:40 np0005539550 systemd[1]: Started libpod-conmon-86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267.scope.
Nov 29 02:52:40 np0005539550 podman[270825]: 2025-11-29 07:52:40.277018612 +0000 UTC m=+0.038200764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:52:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9623bbe910f9057af5b6660e5bf1a902796f273291f242756b38662b4801fb13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9623bbe910f9057af5b6660e5bf1a902796f273291f242756b38662b4801fb13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9623bbe910f9057af5b6660e5bf1a902796f273291f242756b38662b4801fb13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9623bbe910f9057af5b6660e5bf1a902796f273291f242756b38662b4801fb13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9623bbe910f9057af5b6660e5bf1a902796f273291f242756b38662b4801fb13/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:40 np0005539550 podman[270825]: 2025-11-29 07:52:40.395224383 +0000 UTC m=+0.156406525 container init 86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:52:40 np0005539550 podman[270825]: 2025-11-29 07:52:40.404821837 +0000 UTC m=+0.166003959 container start 86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:52:40 np0005539550 podman[270825]: 2025-11-29 07:52:40.409371173 +0000 UTC m=+0.170553325 container attach 86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_panini, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 02:52:40 np0005539550 nova_compute[257631]: 2025-11-29 07:52:40.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:40.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 57 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 488 KiB/s wr, 2 op/s
Nov 29 02:52:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:52:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:41.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:52:41 np0005539550 zen_panini[270842]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:52:41 np0005539550 zen_panini[270842]: --> relative data size: 1.0
Nov 29 02:52:41 np0005539550 zen_panini[270842]: --> All data devices are unavailable
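The zen_panini container appears to be a cephadm-launched ceph-volume probe: it sees one LVM data device and no raw disks, and reports everything unavailable because the LV already backs the existing OSD, so no new OSDs are created. A sketch (under that assumption) of asking ceph-volume the same question via its JSON inventory:

    import json
    import subprocess

    # List devices and why ceph-volume rejects them.
    out = subprocess.check_output(['ceph-volume', 'inventory', '--format', 'json'])
    for dev in json.loads(out):
        state = 'available' if dev['available'] else 'unavailable'
        print(dev['path'], state, '; '.join(dev.get('rejected_reasons', [])))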
Nov 29 02:52:41 np0005539550 systemd[1]: libpod-86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267.scope: Deactivated successfully.
Nov 29 02:52:41 np0005539550 podman[270857]: 2025-11-29 07:52:41.298707214 +0000 UTC m=+0.023823478 container died 86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:52:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9623bbe910f9057af5b6660e5bf1a902796f273291f242756b38662b4801fb13-merged.mount: Deactivated successfully.
Nov 29 02:52:41 np0005539550 podman[270857]: 2025-11-29 07:52:41.35314081 +0000 UTC m=+0.078257044 container remove 86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_panini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:52:41 np0005539550 systemd[1]: libpod-conmon-86df97cd3cfc31f85f0981514d2b2afe184a20fda58879c42c7e677292fcc267.scope: Deactivated successfully.
Nov 29 02:52:41 np0005539550 podman[271010]: 2025-11-29 07:52:41.942554592 +0000 UTC m=+0.042727179 container create 3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 02:52:41 np0005539550 systemd[1]: Started libpod-conmon-3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3.scope.
Nov 29 02:52:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:52:42 np0005539550 podman[271010]: 2025-11-29 07:52:42.019930293 +0000 UTC m=+0.120102900 container init 3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:52:42 np0005539550 podman[271010]: 2025-11-29 07:52:41.928075813 +0000 UTC m=+0.028248420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:42 np0005539550 podman[271010]: 2025-11-29 07:52:42.028378888 +0000 UTC m=+0.128551475 container start 3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_raman, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:52:42 np0005539550 podman[271010]: 2025-11-29 07:52:42.032362659 +0000 UTC m=+0.132535316 container attach 3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_raman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:52:42 np0005539550 elegant_raman[271027]: 167 167
Nov 29 02:52:42 np0005539550 systemd[1]: libpod-3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3.scope: Deactivated successfully.
Nov 29 02:52:42 np0005539550 podman[271010]: 2025-11-29 07:52:42.034250987 +0000 UTC m=+0.134423574 container died 3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_raman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 02:52:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-81ec75226770b13c20097d11e60804a083a4914f6d2c3a77741548cee6e8f278-merged.mount: Deactivated successfully.
Nov 29 02:52:42 np0005539550 podman[271010]: 2025-11-29 07:52:42.07048999 +0000 UTC m=+0.170662577 container remove 3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_raman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 02:52:42 np0005539550 systemd[1]: libpod-conmon-3f34a7181361a18c5714b64f2873331dbed0937bc2ceef0a443582501bf61be3.scope: Deactivated successfully.
Nov 29 02:52:42 np0005539550 podman[271051]: 2025-11-29 07:52:42.241368802 +0000 UTC m=+0.043010986 container create 0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ritchie, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:52:42 np0005539550 systemd[1]: Started libpod-conmon-0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697.scope.
Nov 29 02:52:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:52:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452e1722e420a5656bc7249771ec3ec4babbbd5dfe4902bf4f03623547b7b1da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452e1722e420a5656bc7249771ec3ec4babbbd5dfe4902bf4f03623547b7b1da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452e1722e420a5656bc7249771ec3ec4babbbd5dfe4902bf4f03623547b7b1da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/452e1722e420a5656bc7249771ec3ec4babbbd5dfe4902bf4f03623547b7b1da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:42 np0005539550 podman[271051]: 2025-11-29 07:52:42.221533527 +0000 UTC m=+0.023175731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:42 np0005539550 podman[271051]: 2025-11-29 07:52:42.321553595 +0000 UTC m=+0.123195799 container init 0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ritchie, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 02:52:42 np0005539550 podman[271051]: 2025-11-29 07:52:42.327355862 +0000 UTC m=+0.128998046 container start 0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ritchie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:52:42 np0005539550 podman[271051]: 2025-11-29 07:52:42.330673567 +0000 UTC m=+0.132315781 container attach 0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 02:52:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:42.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 57 MiB data, 262 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 488 KiB/s wr, 2 op/s
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]: {
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:    "0": [
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:        {
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "devices": [
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "/dev/loop3"
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            ],
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "lv_name": "ceph_lv0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "lv_size": "7511998464",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "name": "ceph_lv0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "tags": {
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.cluster_name": "ceph",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.crush_device_class": "",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.encrypted": "0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.osd_id": "0",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.type": "block",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:                "ceph.vdo": "0"
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            },
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "type": "block",
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:            "vg_name": "ceph_vg0"
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:        }
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]:    ]
Nov 29 02:52:43 np0005539550 modest_ritchie[271068]: }
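The JSON block emitted by modest_ritchie matches the output of `ceph-volume lvm list --format json` (an inference from its structure: a dict keyed by OSD id, each value a list of LV records); cephadm parses it to map OSDs back to their logical volumes. A sketch of that parse:

    import json
    import subprocess

    # Map each OSD id to its backing LV, as in the block logged above.
    out = subprocess.check_output(['ceph-volume', 'lvm', 'list', '--format', 'json'])
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            tags = lv['tags']
            # e.g. 0 /dev/ceph_vg0/ceph_lv0 osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6
            print(osd_id, lv['lv_path'], 'osd_fsid=' + tags['ceph.osd_fsid'])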
Nov 29 02:52:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:52:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:43.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:52:43 np0005539550 systemd[1]: libpod-0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697.scope: Deactivated successfully.
Nov 29 02:52:43 np0005539550 conmon[271068]: conmon 0b300fcdd0ae4e23100c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697.scope/container/memory.events
Nov 29 02:52:43 np0005539550 podman[271051]: 2025-11-29 07:52:43.15339676 +0000 UTC m=+0.955038964 container died 0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:52:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-452e1722e420a5656bc7249771ec3ec4babbbd5dfe4902bf4f03623547b7b1da-merged.mount: Deactivated successfully.
Nov 29 02:52:43 np0005539550 podman[271051]: 2025-11-29 07:52:43.385969673 +0000 UTC m=+1.187611857 container remove 0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:52:43 np0005539550 systemd[1]: libpod-conmon-0b300fcdd0ae4e23100c3db1bb9250701649c7c055c1d6ebce8dc818d9c67697.scope: Deactivated successfully.
Nov 29 02:52:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:44 np0005539550 podman[271230]: 2025-11-29 07:52:44.115388831 +0000 UTC m=+0.039902777 container create 53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mirzakhani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:52:44 np0005539550 systemd[1]: Started libpod-conmon-53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4.scope.
Nov 29 02:52:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:52:44 np0005539550 podman[271230]: 2025-11-29 07:52:44.096438258 +0000 UTC m=+0.020952234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:44 np0005539550 podman[271230]: 2025-11-29 07:52:44.200136299 +0000 UTC m=+0.124650285 container init 53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:52:44 np0005539550 podman[271230]: 2025-11-29 07:52:44.210096713 +0000 UTC m=+0.134610679 container start 53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:52:44 np0005539550 podman[271230]: 2025-11-29 07:52:44.213826908 +0000 UTC m=+0.138340894 container attach 53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mirzakhani, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:52:44 np0005539550 silly_mirzakhani[271246]: 167 167
Nov 29 02:52:44 np0005539550 systemd[1]: libpod-53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4.scope: Deactivated successfully.
Nov 29 02:52:44 np0005539550 podman[271230]: 2025-11-29 07:52:44.215533222 +0000 UTC m=+0.140047178 container died 53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mirzakhani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:52:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3a536a74ff7a3aef458b448c1e47ce81dfe867bd2109a87dd5e39447d8873a24-merged.mount: Deactivated successfully.
Nov 29 02:52:44 np0005539550 podman[271230]: 2025-11-29 07:52:44.258221059 +0000 UTC m=+0.182735015 container remove 53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mirzakhani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:52:44 np0005539550 systemd[1]: libpod-conmon-53a7e68ac1b655e737fe55aac786999fea8bc1e5cea06b681212abf1077871b4.scope: Deactivated successfully.
Nov 29 02:52:44 np0005539550 podman[271270]: 2025-11-29 07:52:44.407900281 +0000 UTC m=+0.039488877 container create 005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:52:44 np0005539550 systemd[1]: Started libpod-conmon-005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895.scope.
Nov 29 02:52:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:52:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9804a40f7548e3caea47409280742ed77d5f42a41105e19420df734a84e2cdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9804a40f7548e3caea47409280742ed77d5f42a41105e19420df734a84e2cdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9804a40f7548e3caea47409280742ed77d5f42a41105e19420df734a84e2cdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9804a40f7548e3caea47409280742ed77d5f42a41105e19420df734a84e2cdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:44 np0005539550 podman[271270]: 2025-11-29 07:52:44.390043536 +0000 UTC m=+0.021632162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:52:44 np0005539550 podman[271270]: 2025-11-29 07:52:44.497229966 +0000 UTC m=+0.128818592 container init 005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:52:44 np0005539550 podman[271270]: 2025-11-29 07:52:44.503340592 +0000 UTC m=+0.134929188 container start 005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:52:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:44 np0005539550 podman[271270]: 2025-11-29 07:52:44.507384585 +0000 UTC m=+0.138973211 container attach 005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:52:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:52:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:44.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:52:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Nov 29 02:52:44 np0005539550 nova_compute[257631]: 2025-11-29 07:52:44.732 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:45.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]: {
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]:        "osd_id": 0,
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]:        "type": "bluestore"
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]:    }
Nov 29 02:52:45 np0005539550 magical_jepsen[271287]: }
Nov 29 02:52:45 np0005539550 systemd[1]: libpod-005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895.scope: Deactivated successfully.
Nov 29 02:52:45 np0005539550 podman[271270]: 2025-11-29 07:52:45.421593159 +0000 UTC m=+1.053181755 container died 005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:52:45 np0005539550 nova_compute[257631]: 2025-11-29 07:52:45.474 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b9804a40f7548e3caea47409280742ed77d5f42a41105e19420df734a84e2cdf-merged.mount: Deactivated successfully.
Nov 29 02:52:45 np0005539550 podman[271270]: 2025-11-29 07:52:45.541819081 +0000 UTC m=+1.173407677 container remove 005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 29 02:52:45 np0005539550 systemd[1]: libpod-conmon-005865c2e542ff15b7df4caea8e63235b55dff87e0826a0d57264cb822315895.scope: Deactivated successfully.
Nov 29 02:52:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:52:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:52:45 np0005539550 nova_compute[257631]: 2025-11-29 07:52:45.849 257641 DEBUG oslo_concurrency.processutils [None req-2f02e130-73e6-474a-b86f-24f8cdcaa215 92319944dde243528ab04f8564d28b6e 88e5226afcce44e69b246f5b76428158 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:45 np0005539550 nova_compute[257631]: 2025-11-29 07:52:45.873 257641 DEBUG oslo_concurrency.processutils [None req-2f02e130-73e6-474a-b86f-24f8cdcaa215 92319944dde243528ab04f8564d28b6e 88e5226afcce44e69b246f5b76428158 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b38c8fae-b9c6-4bc7-87db-1f51e78f0c8d does not exist
Nov 29 02:52:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5f486b6c-b0dd-416f-83ab-71a77e944452 does not exist
Nov 29 02:52:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a3313cf3-3b9a-46ca-86f2-554cd43af805 does not exist
Nov 29 02:52:46 np0005539550 ovn_controller[148680]: 2025-11-29T07:52:46Z|00039|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:52:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:52:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:46.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:52:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:52:46 np0005539550 nova_compute[257631]: 2025-11-29 07:52:46.619 257641 DEBUG nova.network.neutron [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Successfully created port: 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:52:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 02:52:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:47.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:47 np0005539550 podman[271427]: 2025-11-29 07:52:47.43274359 +0000 UTC m=+0.057345292 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 02:52:47 np0005539550 podman[271426]: 2025-11-29 07:52:47.438725712 +0000 UTC m=+0.066290809 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.097 257641 DEBUG nova.network.neutron [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Successfully updated port: 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.139 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.139 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquired lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.140 257641 DEBUG nova.network.neutron [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.323 257641 DEBUG nova.compute.manager [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-changed-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.324 257641 DEBUG nova.compute.manager [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Refreshing instance network info cache due to event network-changed-60b6c6b1-2f34-40e6-b2e4-53a564e65b87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.324 257641 DEBUG oslo_concurrency.lockutils [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:52:48 np0005539550 nova_compute[257631]: 2025-11-29 07:52:48.384 257641 DEBUG nova.network.neutron [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:52:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:48.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 02:52:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:49.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.737 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.786 257641 DEBUG nova.network.neutron [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updating instance_info_cache with network_info: [{"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.858 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Releasing lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.859 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Instance network_info: |[{"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.859 257641 DEBUG oslo_concurrency.lockutils [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.859 257641 DEBUG nova.network.neutron [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Refreshing network info cache for port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.862 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Start _get_guest_xml network_info=[{"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.867 257641 WARNING nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.877 257641 DEBUG nova.virt.libvirt.host [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.878 257641 DEBUG nova.virt.libvirt.host [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.882 257641 DEBUG nova.virt.libvirt.host [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.882 257641 DEBUG nova.virt.libvirt.host [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.884 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.884 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.884 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.885 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.885 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.885 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.885 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.886 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.886 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.886 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.886 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.887 257641 DEBUG nova.virt.hardware [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:52:49 np0005539550 nova_compute[257631]: 2025-11-29 07:52:49.890 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682937601' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.476 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.492 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:50.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.520 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.524 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 29 02:52:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:52:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2393558194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.963 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.965 257641 DEBUG nova.virt.libvirt.vif [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1811613913',display_name='tempest-VolumesAdminNegativeTest-server-1811613913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1811613913',id=12,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK4GT4boGwht+cjugoIvpANSRA3BQP7vzSKB82n4K09WTLIRFW6T0zh97MlQcefGWJRusLdcEMw8Z3lZ13dqkjmEuhfgLmaxnGEuBK2Aidtxqfs+WoPADX4XLKAYu7APUQ==',key_name='tempest-keypair-2138985612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba323f7dc95a4f11911e6559a1b3c99e',ramdisk_id='',reservation_id='r-4ynef4sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1736550981',owner_user_name='tempest-VolumesAdminNegativeTest-1736550981-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f80aba0abfa403b80928e251377a7cd',uuid=3fde18f7-843a-48d8-b394-c299cf479c37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.966 257641 DEBUG nova.network.os_vif_util [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converting VIF {"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.966 257641 DEBUG nova.network.os_vif_util [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:80:a9,bridge_name='br-int',has_traffic_filtering=True,id=60b6c6b1-2f34-40e6-b2e4-53a564e65b87,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b6c6b1-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.967 257641 DEBUG nova.objects.instance [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lazy-loading 'pci_devices' on Instance uuid 3fde18f7-843a-48d8-b394-c299cf479c37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:52:50 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.996 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <uuid>3fde18f7-843a-48d8-b394-c299cf479c37</uuid>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <name>instance-0000000c</name>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <nova:name>tempest-VolumesAdminNegativeTest-server-1811613913</nova:name>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:52:49</nova:creationTime>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:user uuid="7f80aba0abfa403b80928e251377a7cd">tempest-VolumesAdminNegativeTest-1736550981-project-member</nova:user>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:project uuid="ba323f7dc95a4f11911e6559a1b3c99e">tempest-VolumesAdminNegativeTest-1736550981</nova:project>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        <nova:port uuid="60b6c6b1-2f34-40e6-b2e4-53a564e65b87">
Nov 29 02:52:50 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <entry name="serial">3fde18f7-843a-48d8-b394-c299cf479c37</entry>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <entry name="uuid">3fde18f7-843a-48d8-b394-c299cf479c37</entry>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:52:50 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/3fde18f7-843a-48d8-b394-c299cf479c37_disk">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/3fde18f7-843a-48d8-b394-c299cf479c37_disk.config">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:30:80:a9"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <target dev="tap60b6c6b1-2f"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    </interface>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/console.log" append="off"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:52:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:52:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:52:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:52:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.998 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Preparing to wait for external event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.999 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:50.999 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.000 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.000 257641 DEBUG nova.virt.libvirt.vif [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1811613913',display_name='tempest-VolumesAdminNegativeTest-server-1811613913',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1811613913',id=12,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK4GT4boGwht+cjugoIvpANSRA3BQP7vzSKB82n4K09WTLIRFW6T0zh97MlQcefGWJRusLdcEMw8Z3lZ13dqkjmEuhfgLmaxnGEuBK2Aidtxqfs+WoPADX4XLKAYu7APUQ==',key_name='tempest-keypair-2138985612',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ba323f7dc95a4f11911e6559a1b3c99e',ramdisk_id='',reservation_id='r-4ynef4sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-1736550981',owner_user_name='tempest-VolumesAdminNegativeTest-1736550981-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:52:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f80aba0abfa403b80928e251377a7cd',uuid=3fde18f7-843a-48d8-b394-c299cf479c37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.000 257641 DEBUG nova.network.os_vif_util [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converting VIF {"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.001 257641 DEBUG nova.network.os_vif_util [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:80:a9,bridge_name='br-int',has_traffic_filtering=True,id=60b6c6b1-2f34-40e6-b2e4-53a564e65b87,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b6c6b1-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.001 257641 DEBUG os_vif [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:80:a9,bridge_name='br-int',has_traffic_filtering=True,id=60b6c6b1-2f34-40e6-b2e4-53a564e65b87,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b6c6b1-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.002 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.003 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.003 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.009 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.009 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60b6c6b1-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.010 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60b6c6b1-2f, col_values=(('external_ids', {'iface-id': '60b6c6b1-2f34-40e6-b2e4-53a564e65b87', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:80:a9', 'vm-uuid': '3fde18f7-843a-48d8-b394-c299cf479c37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.011 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:51 np0005539550 NetworkManager[49039]: <info>  [1764402771.0132] manager: (tap60b6c6b1-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.014 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.019 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.020 257641 INFO os_vif [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:80:a9,bridge_name='br-int',has_traffic_filtering=True,id=60b6c6b1-2f34-40e6-b2e4-53a564e65b87,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b6c6b1-2f')#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.107 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.109 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.109 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] No VIF found with MAC fa:16:3e:30:80:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.110 257641 INFO nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Using config drive#033[00m
Nov 29 02:52:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.145 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.680 257641 INFO nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Creating config drive at /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/disk.config#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.687 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz27tcbcg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.816 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz27tcbcg" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.907 257641 DEBUG nova.storage.rbd_utils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] rbd image 3fde18f7-843a-48d8-b394-c299cf479c37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:52:51 np0005539550 nova_compute[257631]: 2025-11-29 07:52:51.910 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/disk.config 3fde18f7-843a-48d8-b394-c299cf479c37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.290 257641 DEBUG nova.network.neutron [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updated VIF entry in instance network info cache for port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.291 257641 DEBUG nova.network.neutron [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updating instance_info_cache with network_info: [{"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.371 257641 DEBUG oslo_concurrency.lockutils [req-699600d9-6d58-4657-aa02-47f031e0fd21 req-953e62c7-da8d-41b7-8522-76b566172539 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.447 257641 DEBUG oslo_concurrency.processutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/disk.config 3fde18f7-843a-48d8-b394-c299cf479c37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.448 257641 INFO nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Deleting local config drive /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37/disk.config because it was imported into RBD.#033[00m
Nov 29 02:52:52 np0005539550 kernel: tap60b6c6b1-2f: entered promiscuous mode
Nov 29 02:52:52 np0005539550 NetworkManager[49039]: <info>  [1764402772.5075] manager: (tap60b6c6b1-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 02:52:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:52:52Z|00040|binding|INFO|Claiming lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for this chassis.
Nov 29 02:52:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:52:52Z|00041|binding|INFO|60b6c6b1-2f34-40e6-b2e4-53a564e65b87: Claiming fa:16:3e:30:80:a9 10.100.0.4
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.507 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:52.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:52 np0005539550 systemd-udevd[271602]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:52:52 np0005539550 systemd-machined[216673]: New machine qemu-6-instance-0000000c.
Nov 29 02:52:52 np0005539550 NetworkManager[49039]: <info>  [1764402772.5560] device (tap60b6c6b1-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:52:52 np0005539550 NetworkManager[49039]: <info>  [1764402772.5573] device (tap60b6c6b1-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:52:52 np0005539550 systemd[1]: Started Virtual Machine qemu-6-instance-0000000c.
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.577 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:52:52Z|00042|binding|INFO|Setting lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 ovn-installed in OVS
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:52:52Z|00043|binding|INFO|Setting lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 up in Southbound
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.590 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:80:a9 10.100.0.4'], port_security=['fa:16:3e:30:80:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3fde18f7-843a-48d8-b394-c299cf479c37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba323f7dc95a4f11911e6559a1b3c99e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2225496-b125-4ce8-9536-dbe1ff7d1166', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55a9360d-4d01-4bc9-9d17-f718c6f118ad, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=60b6c6b1-2f34-40e6-b2e4-53a564e65b87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.591 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 in datapath 7f03fa2b-ae90-41ca-b1ba-770fedbd8710 bound to our chassis#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.592 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7f03fa2b-ae90-41ca-b1ba-770fedbd8710#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.604 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[702dc2a1-a971-458a-8c52-4ae6d6d98d3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.605 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7f03fa2b-a1 in ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.607 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7f03fa2b-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.607 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3847d6f8-8dd3-40cc-a4d5-4d35d306c8ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.608 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[45d35465-6307-4972-b57b-ed671c961f32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.621 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b99ed8e7-7150-4bce-96d2-7383d12ff7aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 24 op/s
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.643 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[15025811-ebac-4c3f-a869-451833b60d7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.673 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9e33cf-054f-44f6-9963-6179c70382b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 systemd-udevd[271604]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.681 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d0f2d748-4bf7-42b8-8eb6-85b611224af9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 NetworkManager[49039]: <info>  [1764402772.6826] manager: (tap7f03fa2b-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.714 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[952259f3-84ed-425b-9c69-94a96a282867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.718 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ad31c403-78dd-451e-a2ce-266d313d4425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 NetworkManager[49039]: <info>  [1764402772.7434] device (tap7f03fa2b-a0): carrier: link connected
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.750 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[99e5bd1c-d062-42c4-87ef-4fcd770c7fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.768 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c372253d-4342-4f38-a480-c924fce697eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f03fa2b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:3f:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582040, 'reachable_time': 26785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271635, 'error': None, 'target': 'ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.783 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[765bd4cd-51a0-44db-a276-84ef6db3a6a5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:3f6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 582040, 'tstamp': 582040}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271636, 'error': None, 'target': 'ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.800 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[13b30913-7713-472a-a2b6-d5b320de63ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7f03fa2b-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:3f:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582040, 'reachable_time': 26785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271637, 'error': None, 'target': 'ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.834 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ebce3467-faf1-4e4e-8b7a-cdd283bd65bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.892 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c71c8ed6-0016-4003-989a-130a7c3ed08d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.894 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f03fa2b-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.894 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.894 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f03fa2b-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.896 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:52 np0005539550 NetworkManager[49039]: <info>  [1764402772.8971] manager: (tap7f03fa2b-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 02:52:52 np0005539550 kernel: tap7f03fa2b-a0: entered promiscuous mode
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.899 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.904 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7f03fa2b-a0, col_values=(('external_ids', {'iface-id': '82bd8b70-8c30-4969-a403-8eda14d88f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.905 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.906 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7f03fa2b-ae90-41ca-b1ba-770fedbd8710.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7f03fa2b-ae90-41ca-b1ba-770fedbd8710.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.907 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[52dddceb-6f64-423c-9350-c5f52caddb34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.909 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-7f03fa2b-ae90-41ca-b1ba-770fedbd8710
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:52:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:52:52Z|00044|binding|INFO|Releasing lport 82bd8b70-8c30-4969-a403-8eda14d88f57 from this chassis (sb_readonly=0)
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/7f03fa2b-ae90-41ca-b1ba-770fedbd8710.pid.haproxy
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 7f03fa2b-ae90-41ca-b1ba-770fedbd8710
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:52:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:52:52.910 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'env', 'PROCESS_TAG=haproxy-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7f03fa2b-ae90-41ca-b1ba-770fedbd8710.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:52:52 np0005539550 nova_compute[257631]: 2025-11-29 07:52:52.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.090 257641 DEBUG nova.compute.manager [req-536a3a5a-88c8-4f20-837c-e9e841467a56 req-ec57a48f-90f5-4401-b1f2-f97a46217766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.091 257641 DEBUG oslo_concurrency.lockutils [req-536a3a5a-88c8-4f20-837c-e9e841467a56 req-ec57a48f-90f5-4401-b1f2-f97a46217766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.091 257641 DEBUG oslo_concurrency.lockutils [req-536a3a5a-88c8-4f20-837c-e9e841467a56 req-ec57a48f-90f5-4401-b1f2-f97a46217766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.091 257641 DEBUG oslo_concurrency.lockutils [req-536a3a5a-88c8-4f20-837c-e9e841467a56 req-ec57a48f-90f5-4401-b1f2-f97a46217766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.092 257641 DEBUG nova.compute.manager [req-536a3a5a-88c8-4f20-837c-e9e841467a56 req-ec57a48f-90f5-4401-b1f2-f97a46217766 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Processing event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 02:52:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:53.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:53 np0005539550 podman[271687]: 2025-11-29 07:52:53.268838975 +0000 UTC m=+0.025353860 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.556 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402773.5556602, 3fde18f7-843a-48d8-b394-c299cf479c37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.556 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] VM Started (Lifecycle Event)#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.559 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.563 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.566 257641 INFO nova.virt.libvirt.driver [-] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Instance spawned successfully.#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.566 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.595 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.599 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.611 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.611 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.612 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.612 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.613 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.613 257641 DEBUG nova.virt.libvirt.driver [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.672 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.673 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402773.555943, 3fde18f7-843a-48d8-b394-c299cf479c37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.673 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] VM Paused (Lifecycle Event)
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.830 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.834 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402773.5626342, 3fde18f7-843a-48d8-b394-c299cf479c37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.834 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] VM Resumed (Lifecycle Event)
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.849 257641 INFO nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Took 14.73 seconds to spawn the instance on the hypervisor.
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.850 257641 DEBUG nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.864 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.868 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:52:53 np0005539550 nova_compute[257631]: 2025-11-29 07:52:53.948 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] During sync_power_state the instance has a pending task (spawning). Skip.
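Both "Synchronizing instance power state ... DB power_state: 0, VM power_state: 1" entries above are immediately followed by "pending task (spawning). Skip.": the sync leaves a DB/hypervisor mismatch alone while a task is still in flight. A simplified sketch of that decision, assuming 0 and 1 are the NOSTATE/RUNNING power-state codes seen in the log (not nova's exact control flow):

# Simplified sketch of the power-state sync rule visible above: defer
# while a task is pending, otherwise reconcile the DB with the hypervisor.
NOSTATE, RUNNING = 0, 1  # integer codes matching "power_state: 0" / "1" in the log

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        # "During sync_power_state the instance has a pending task ... Skip."
        return 'skip'
    if db_power_state != vm_power_state:
        return 'update-db'
    return 'in-sync'

assert sync_power_state(NOSTATE, RUNNING, 'spawning') == 'skip'
assert sync_power_state(NOSTATE, RUNNING, None) == 'update-db'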
Nov 29 02:52:54 np0005539550 nova_compute[257631]: 2025-11-29 07:52:54.022 257641 INFO nova.compute.manager [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Took 16.01 seconds to build instance.
Nov 29 02:52:54 np0005539550 nova_compute[257631]: 2025-11-29 07:52:54.090 257641 DEBUG oslo_concurrency.lockutils [None req-3be04824-4f5f-42a5-a132-b6cc49de1cb4 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:54 np0005539550 podman[271687]: 2025-11-29 07:52:54.287202093 +0000 UTC m=+1.043716958 container create 8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:52:54 np0005539550 systemd[1]: Started libpod-conmon-8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008.scope.
Nov 29 02:52:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:52:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f42deaecc5401e4ad05a2e78105babe6430acd5b401f551a06acfdf2a2cb319/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:52:54 np0005539550 podman[271687]: 2025-11-29 07:52:54.436582901 +0000 UTC m=+1.193097766 container init 8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 02:52:54 np0005539550 podman[271687]: 2025-11-29 07:52:54.444175175 +0000 UTC m=+1.200690040 container start 8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:52:54 np0005539550 neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710[271727]: [NOTICE]   (271731) : New worker (271733) forked
Nov 29 02:52:54 np0005539550 neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710[271727]: [NOTICE]   (271731) : Loading success.
Nov 29 02:52:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Nov 29 02:52:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:55.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
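The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 recur roughly every two seconds and always return 200 with an empty body, the signature of load-balancer health probes against radosgw. A stdlib sketch of such a probe; the port and timeout are assumptions, since the log records only the client IPs:

# Reproducing the anonymous "HEAD /" probe pattern above with the Python
# stdlib. Host/port/timeout are assumptions; the log shows only client IPs.
import http.client

def probe(host: str, port: int = 8080, timeout: float = 2.0) -> int:
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request('HEAD', '/')
        # radosgw answers 200 with no body, as in the beast log lines.
        return conn.getresponse().status
    finally:
        conn.close()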
Nov 29 02:52:55 np0005539550 nova_compute[257631]: 2025-11-29 07:52:55.478 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:55 np0005539550 nova_compute[257631]: 2025-11-29 07:52:55.491 257641 DEBUG nova.compute.manager [req-ea864bb9-e7d3-46af-b731-5e46ecc58c35 req-a7ccb7e0-b3ed-47f7-9a86-5a16ec3ba399 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:52:55 np0005539550 nova_compute[257631]: 2025-11-29 07:52:55.492 257641 DEBUG oslo_concurrency.lockutils [req-ea864bb9-e7d3-46af-b731-5e46ecc58c35 req-a7ccb7e0-b3ed-47f7-9a86-5a16ec3ba399 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:52:55 np0005539550 nova_compute[257631]: 2025-11-29 07:52:55.492 257641 DEBUG oslo_concurrency.lockutils [req-ea864bb9-e7d3-46af-b731-5e46ecc58c35 req-a7ccb7e0-b3ed-47f7-9a86-5a16ec3ba399 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:52:55 np0005539550 nova_compute[257631]: 2025-11-29 07:52:55.492 257641 DEBUG oslo_concurrency.lockutils [req-ea864bb9-e7d3-46af-b731-5e46ecc58c35 req-a7ccb7e0-b3ed-47f7-9a86-5a16ec3ba399 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:52:55 np0005539550 nova_compute[257631]: 2025-11-29 07:52:55.492 257641 DEBUG nova.compute.manager [req-ea864bb9-e7d3-46af-b731-5e46ecc58c35 req-a7ccb7e0-b3ed-47f7-9a86-5a16ec3ba399 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] No waiting events found dispatching network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:52:55 np0005539550 nova_compute[257631]: 2025-11-29 07:52:55.493 257641 WARNING nova.compute.manager [req-ea864bb9-e7d3-46af-b731-5e46ecc58c35 req-a7ccb7e0-b3ed-47f7-9a86-5a16ec3ba399 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received unexpected event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for instance with vm_state active and task_state None.
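The Acquiring/acquired/"released" triplet on the "<uuid>-events" lock above is oslo_concurrency.lockutils serializing event dispatch per instance; with no waiter registered, the pop finds nothing and the event is logged as unexpected. A sketch of that pattern using the real lockutils.lock() context manager; the surrounding function and data structure are illustrative, not nova's implementation:

# The acquire/release triplets above come from oslo_concurrency.lockutils.
# Illustrative sketch of serializing per-instance event handling under a
# "<instance-uuid>-events" lock, mirroring the lock name in the log.
from oslo_concurrency import lockutils

def pop_instance_event(instance_uuid: str, waiters: dict, event_name: str):
    with lockutils.lock(f'{instance_uuid}-events'):
        # Under the lock, hand the event to any registered waiter.
        # A None result corresponds to "No waiting events found dispatching".
        return waiters.pop(event_name, None)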
Nov 29 02:52:56 np0005539550 nova_compute[257631]: 2025-11-29 07:52:56.012 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:56.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 523 KiB/s rd, 14 KiB/s wr, 25 op/s
Nov 29 02:52:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:57.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:52:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:52:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:52:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:52:58.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:52:58 np0005539550 nova_compute[257631]: 2025-11-29 07:52:58.635 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:58 np0005539550 NetworkManager[49039]: <info>  [1764402778.6393] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 02:52:58 np0005539550 NetworkManager[49039]: <info>  [1764402778.6403] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 02:52:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 02:52:58 np0005539550 ovn_controller[148680]: 2025-11-29T07:52:58Z|00045|binding|INFO|Releasing lport 82bd8b70-8c30-4969-a403-8eda14d88f57 from this chassis (sb_readonly=0)
Nov 29 02:52:58 np0005539550 nova_compute[257631]: 2025-11-29 07:52:58.717 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:58 np0005539550 nova_compute[257631]: 2025-11-29 07:52:58.728 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:52:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:52:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:52:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:52:59.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:52:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:52:59
Nov 29 02:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.meta', '.rgw.root', 'volumes']
Nov 29 02:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:52:59 np0005539550 nova_compute[257631]: 2025-11-29 07:52:59.428 257641 DEBUG nova.compute.manager [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-changed-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:52:59 np0005539550 nova_compute[257631]: 2025-11-29 07:52:59.428 257641 DEBUG nova.compute.manager [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Refreshing instance network info cache due to event network-changed-60b6c6b1-2f34-40e6-b2e4-53a564e65b87. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:52:59 np0005539550 nova_compute[257631]: 2025-11-29 07:52:59.429 257641 DEBUG oslo_concurrency.lockutils [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:52:59 np0005539550 nova_compute[257631]: 2025-11-29 07:52:59.429 257641 DEBUG oslo_concurrency.lockutils [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:52:59 np0005539550 nova_compute[257631]: 2025-11-29 07:52:59.429 257641 DEBUG nova.network.neutron [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Refreshing network info cache for port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:53:00 np0005539550 podman[271748]: 2025-11-29 07:53:00.341783236 +0000 UTC m=+0.082494599 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:00 np0005539550 nova_compute[257631]: 2025-11-29 07:53:00.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:00.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 02:53:01 np0005539550 nova_compute[257631]: 2025-11-29 07:53:01.015 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:01.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:01 np0005539550 nova_compute[257631]: 2025-11-29 07:53:01.256 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:02.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 02:53:02 np0005539550 nova_compute[257631]: 2025-11-29 07:53:02.726 257641 DEBUG nova.network.neutron [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updated VIF entry in instance network info cache for port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 02:53:02 np0005539550 nova_compute[257631]: 2025-11-29 07:53:02.727 257641 DEBUG nova.network.neutron [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updating instance_info_cache with network_info: [{"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:53:02 np0005539550 nova_compute[257631]: 2025-11-29 07:53:02.760 257641 DEBUG oslo_concurrency.lockutils [req-e8f08c06-3e5b-4fe5-b39e-d4ebb94b898c req-68abca47-1bbd-47b8-99a8-79f88dc6b50c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:53:02 np0005539550 nova_compute[257631]: 2025-11-29 07:53:02.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:02 np0005539550 nova_compute[257631]: 2025-11-29 07:53:02.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:53:02 np0005539550 nova_compute[257631]: 2025-11-29 07:53:02.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
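The "Running periodic task ComputeManager._heal_instance_info_cache" line (and the similar ones later in this capture) come from oslo_service's periodic-task framework: tasks are decorated methods on a PeriodicTasks subclass that the service loop invokes on a schedule. A minimal sketch with the real decorator; the 60-second spacing and the method body are assumptions, not nova's configured behavior:

# Minimal sketch of oslo_service periodic tasks, the machinery behind the
# "Running periodic task ComputeManager._..." lines. Spacing is an
# assumption; nova's intervals are configuration-driven.
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # One instance's network info cache would be refreshed per run.
        pass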
Nov 29 02:53:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:03 np0005539550 nova_compute[257631]: 2025-11-29 07:53:03.161 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:03 np0005539550 nova_compute[257631]: 2025-11-29 07:53:03.161 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:03 np0005539550 nova_compute[257631]: 2025-11-29 07:53:03.161 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:53:03 np0005539550 nova_compute[257631]: 2025-11-29 07:53:03.162 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3fde18f7-843a-48d8-b394-c299cf479c37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:04.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 29 02:53:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:05.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.460 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updating instance_info_cache with network_info: [{"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.480 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-3fde18f7-843a-48d8-b394-c299cf479c37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.481 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.481 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.482 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.484 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.511 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.511 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.512 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.512 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.512 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.763 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/879583704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:05 np0005539550 nova_compute[257631]: 2025-11-29 07:53:05.945 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
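The resource audit shells out to "ceph df --format=json" (0.433 s here) to size the RBD-backed storage, and the ceph-mon audit log above shows the dispatched mon_command. The same call can be made directly; the command line is copied from the log, and the top-level 'stats'/'pools' layout is standard ceph df JSON:

# Replaying the "ceph df --format=json" call the resource tracker runs
# above. Command-line arguments are copied from the log; requires the
# ceph CLI and the client.openstack keyring to be present.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, check=True, text=True,
).stdout
report = json.loads(out)
print(report['stats'])   # cluster-wide totals; per-pool data sits under 'pools'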
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.014 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.015 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.017 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.166 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.167 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4629MB free_disk=20.967357635498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.167 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.168 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.266 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 3fde18f7-843a-48d8-b394-c299cf479c37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.266 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.267 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.306 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:06.402 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:53:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:06.403 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
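The "Matched UPDATE: SbGlobalUpdateEvent(...)" line above is ovsdbapp's IDL event matcher firing a RowEvent subclass on the SB_Global table (nb_cfg bumped from 6 to 7), after which the agent delays its chassis update by three seconds. A sketch of such an event class against the real ovsdbapp base; the handler body is an assumption, not the agent's code:

# Sketch of an ovsdbapp RowEvent like the SbGlobalUpdateEvent matched
# above. The run() body is illustrative; the real agent schedules a
# delayed Chassis_Private update.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        # Fire on updates to any row of the SB_Global table.
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        print('nb_cfg is now', row.nb_cfg)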
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.471 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:06.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Nov 29 02:53:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451169579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.938 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.943 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:53:06 np0005539550 nova_compute[257631]: 2025-11-29 07:53:06.966 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
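Placement reports this inventory unchanged; usable capacity per resource class follows (total - reserved) * allocation_ratio, so this host can schedule up to 7168 MB RAM, 32 vCPUs, and 17.1 GB disk. A worked check of those numbers; the formula is a simplification of placement's usage test, with values copied from the inventory line above:

# Worked capacity numbers for the inventory reported above.
# capacity = (total - reserved) * allocation_ratio (simplified view of
# placement's usage check).
inventory = {
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 17.1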
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.003 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.003 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:07.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.439 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.439 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.440 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.440 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.440 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.440 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:53:07 np0005539550 nova_compute[257631]: 2025-11-29 07:53:07.440 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:53:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:08Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:80:a9 10.100.0.4
Nov 29 02:53:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:08Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:80:a9 10.100.0.4
Nov 29 02:53:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 97 MiB data, 292 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 63 op/s
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:53:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:09.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:09.405 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:53:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:10 np0005539550 nova_compute[257631]: 2025-11-29 07:53:10.099 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:10 np0005539550 nova_compute[257631]: 2025-11-29 07:53:10.484 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:10.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:10 np0005539550 nova_compute[257631]: 2025-11-29 07:53:10.622 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 115 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Nov 29 02:53:11 np0005539550 nova_compute[257631]: 2025-11-29 07:53:11.018 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:11.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:12.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 115 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 204 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Nov 29 02:53:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:13.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:14.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 02:53:14 np0005539550 nova_compute[257631]: 2025-11-29 07:53:14.711 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:15.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:15 np0005539550 nova_compute[257631]: 2025-11-29 07:53:15.486 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:16 np0005539550 nova_compute[257631]: 2025-11-29 07:53:16.020 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:16.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:16 np0005539550 nova_compute[257631]: 2025-11-29 07:53:16.595 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Nov 29 02:53:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:17.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:17 np0005539550 nova_compute[257631]: 2025-11-29 07:53:17.654 257641 DEBUG oslo_concurrency.lockutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:17 np0005539550 nova_compute[257631]: 2025-11-29 07:53:17.654 257641 DEBUG oslo_concurrency.lockutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:17 np0005539550 nova_compute[257631]: 2025-11-29 07:53:17.674 257641 DEBUG nova.objects.instance [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lazy-loading 'flavor' on Instance uuid 3fde18f7-843a-48d8-b394-c299cf479c37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:17 np0005539550 nova_compute[257631]: 2025-11-29 07:53:17.750 257641 DEBUG oslo_concurrency.lockutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
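The Acquiring/acquired/"released" triples come from oslo.concurrency's lockutils: nova serializes block-device operations per instance by locking on the instance UUID, so reserve_block_device_name above and the attach_volume/detach_volume calls that follow all queue behind the same lock name. The equivalent pattern with the real lockutils decorator (the function body here is only illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("3fde18f7-843a-48d8-b394-c299cf479c37")
    def do_reserve():
        # Critical section: choose a free device name (vdb here) and create
        # the block-device-mapping record without racing a parallel attach.
        ...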
Nov 29 02:53:18 np0005539550 podman[271880]: 2025-11-29 07:53:18.336904459 +0000 UTC m=+0.063892871 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:53:18 np0005539550 podman[271879]: 2025-11-29 07:53:18.339585051 +0000 UTC m=+0.066514992 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
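podman emits one of these container health_status events per healthcheck execution; per the config_data blob, the check itself is a bind-mounted /openstack/healthcheck script run inside the container. The current state can be read back out of podman directly; a sketch using the standard inspect format specifier, with the container name taken from the log line above:

    import json, subprocess

    # Query the live health state of the ovn_metadata_agent container.
    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent",
         "--format", "{{json .State.Health}}"],
        capture_output=True, text=True, check=True,
    )
    health = json.loads(out.stdout)
    print(health["Status"], health["FailingStreak"])   # e.g. healthy 0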
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.392 257641 DEBUG oslo_concurrency.lockutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.392 257641 DEBUG oslo_concurrency.lockutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.393 257641 INFO nova.compute.manager [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Attaching volume 8da13aec-bfc8-4fe1-b17e-45f2a3f532cf to /dev/vdb
Nov 29 02:53:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:18.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.644 257641 DEBUG os_brick.utils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.646 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 283 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.660 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.660 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[4719990a-9fb8-4391-8bac-0c86352d0b48]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.662 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.669 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.669 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2ce9c7-170c-4806-bccc-87ef550205fd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.671 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.679 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.679 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[30655694-9f1f-4503-8177-59c7c9062b93]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.680 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[0c79193d-1805-4215-9472-f0c933ca283f]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.681 257641 DEBUG oslo_concurrency.processutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.701 257641 DEBUG oslo_concurrency.processutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.704 257641 DEBUG os_brick.initiator.connectors.lightos [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.704 257641 DEBUG os_brick.initiator.connectors.lightos [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.704 257641 DEBUG os_brick.initiator.connectors.lightos [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.705 257641 DEBUG os_brick.utils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] <== get_connector_properties: return (59ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
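The burst of privsep subprocess calls above ("multipathd show status", reading initiatorname.iscsi, findmnt, "nvme version", the LightOS DSC probe) is os-brick assembling the host's connector properties: multipath state, iSCSI initiator IQN, NVMe host NQN/ID, and system UUID. The same dictionary can be produced through os-brick's public entry point, with the argument values copied from the call logged at 07:53:18.644:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    # Expect keys like 'initiator', 'nqn', 'nvme_hostid' and 'system uuid',
    # matching the return dict in the trace above.
    print(props["initiator"], props["nqn"])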
Nov 29 02:53:18 np0005539550 nova_compute[257631]: 2025-11-29 07:53:18.705 257641 DEBUG nova.virt.block_device [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updating existing volume attachment record: 96e6bae3-ab2a-4445-adb3-d5c7a0dfa7b0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 02:53:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:18.925 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:18.926 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:18.926 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:19.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:19 np0005539550 nova_compute[257631]: 2025-11-29 07:53:19.554 257641 DEBUG nova.objects.instance [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lazy-loading 'flavor' on Instance uuid 3fde18f7-843a-48d8-b394-c299cf479c37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:19 np0005539550 nova_compute[257631]: 2025-11-29 07:53:19.579 257641 DEBUG nova.virt.libvirt.driver [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Attempting to attach volume 8da13aec-bfc8-4fe1-b17e-45f2a3f532cf with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 02:53:19 np0005539550 nova_compute[257631]: 2025-11-29 07:53:19.582 257641 DEBUG nova.virt.libvirt.guest [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 02:53:19 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-8da13aec-bfc8-4fe1-b17e-45f2a3f532cf">
Nov 29 02:53:19 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:  </source>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 02:53:19 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:  </auth>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:53:19 np0005539550 nova_compute[257631]:  <serial>8da13aec-bfc8-4fe1-b17e-45f2a3f532cf</serial>
Nov 29 02:53:19 np0005539550 nova_compute[257631]: </disk>
Nov 29 02:53:19 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
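The XML just logged is a complete libvirt <disk> definition: a network disk served over the rbd protocol from three Ceph monitors, authenticated via a cephx secret already registered with libvirt, and exposed to the guest as virtio device vdb. Nova applies it to both the live guest and its persistent definition; the direct libvirt-python equivalent (domain UUID taken from the log) is roughly:

    import libvirt

    disk_xml = """..."""  # the <disk> element exactly as logged above

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("3fde18f7-843a-48d8-b394-c299cf479c37")
    # Attach to the running guest and persist it in the inactive config.
    dom.attachDeviceFlags(
        disk_xml,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG,
    )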
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021705023962404614 of space, bias 1.0, pg target 0.6511507188721384 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:53:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
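Each pool's "pg target" above is its share of raw capacity times its bias times the root's PG budget. Dividing pg target by (usage x bias) for every pool yields the same budget, 300 PGs, which would be consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference from these numbers, not something the log states); the result is then quantized to a power of two, with the existing pg_num acting as a floor. Checking two of the logged rows:

    ROOT_PG_BUDGET = 300   # inferred: pg_target / (usage * bias) for each pool above

    for pool, usage, bias, logged in [
        ("vms",                0.0021705023962404614,  1.0, 0.6511507188721384),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 0.0017448352875488555),
    ]:
        print(pool, usage * bias * ROOT_PG_BUDGET, "logged:", logged)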
Nov 29 02:53:20 np0005539550 nova_compute[257631]: 2025-11-29 07:53:20.019 257641 DEBUG nova.virt.libvirt.driver [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:20 np0005539550 nova_compute[257631]: 2025-11-29 07:53:20.019 257641 DEBUG nova.virt.libvirt.driver [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:20 np0005539550 nova_compute[257631]: 2025-11-29 07:53:20.019 257641 DEBUG nova.virt.libvirt.driver [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:53:20 np0005539550 nova_compute[257631]: 2025-11-29 07:53:20.019 257641 DEBUG nova.virt.libvirt.driver [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] No VIF found with MAC fa:16:3e:30:80:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 02:53:20 np0005539550 nova_compute[257631]: 2025-11-29 07:53:20.323 257641 DEBUG oslo_concurrency.lockutils [None req-d81bbdd6-6c1a-46ed-a098-e39bed6ac5f8 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:53:20 np0005539550 nova_compute[257631]: 2025-11-29 07:53:20.490 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:20.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 263 KiB/s rd, 990 KiB/s wr, 50 op/s
Nov 29 02:53:20 np0005539550 nova_compute[257631]: 2025-11-29 07:53:20.659 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:21 np0005539550 nova_compute[257631]: 2025-11-29 07:53:21.023 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:21.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.381 257641 DEBUG oslo_concurrency.lockutils [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.382 257641 DEBUG oslo_concurrency.lockutils [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.405 257641 INFO nova.compute.manager [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Detaching volume 8da13aec-bfc8-4fe1-b17e-45f2a3f532cf
Nov 29 02:53:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:22.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 79 KiB/s rd, 65 KiB/s wr, 13 op/s
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.691 257641 INFO nova.virt.block_device [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Attempting to driver detach volume 8da13aec-bfc8-4fe1-b17e-45f2a3f532cf from mountpoint /dev/vdb
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.699 257641 DEBUG nova.virt.libvirt.driver [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Attempting to detach device vdb from instance 3fde18f7-843a-48d8-b394-c299cf479c37 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.699 257641 DEBUG nova.virt.libvirt.guest [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-8da13aec-bfc8-4fe1-b17e-45f2a3f532cf">
Nov 29 02:53:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  </source>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <serial>8da13aec-bfc8-4fe1-b17e-45f2a3f532cf</serial>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]: </disk>
Nov 29 02:53:22 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.732 257641 INFO nova.virt.libvirt.driver [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Successfully detached device vdb from instance 3fde18f7-843a-48d8-b394-c299cf479c37 from the persistent domain config.
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.733 257641 DEBUG nova.virt.libvirt.driver [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3fde18f7-843a-48d8-b394-c299cf479c37 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.733 257641 DEBUG nova.virt.libvirt.guest [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-8da13aec-bfc8-4fe1-b17e-45f2a3f532cf">
Nov 29 02:53:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  </source>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <serial>8da13aec-bfc8-4fe1-b17e-45f2a3f532cf</serial>
Nov 29 02:53:22 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 02:53:22 np0005539550 nova_compute[257631]: </disk>
Nov 29 02:53:22 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.846 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764402802.8459773, 3fde18f7-843a-48d8-b394-c299cf479c37 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.847 257641 DEBUG nova.virt.libvirt.driver [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3fde18f7-843a-48d8-b394-c299cf479c37 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 02:53:22 np0005539550 nova_compute[257631]: 2025-11-29 07:53:22.849 257641 INFO nova.virt.libvirt.driver [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Successfully detached device vdb from instance 3fde18f7-843a-48d8-b394-c299cf479c37 from the live domain config.
Nov 29 02:53:23 np0005539550 nova_compute[257631]: 2025-11-29 07:53:23.173 257641 DEBUG nova.objects.instance [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lazy-loading 'flavor' on Instance uuid 3fde18f7-843a-48d8-b394-c299cf479c37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:53:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:23 np0005539550 nova_compute[257631]: 2025-11-29 07:53:23.214 257641 DEBUG oslo_concurrency.lockutils [None req-86bf1f70-8967-4864-ad65-7f30d664d277 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
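The detach above is two-phase: the device is first dropped from the persistent domain config, then detached from the live domain, where nova retries up to eight times (the "(1/8)" marker) and waits for libvirt's DeviceRemovedEvent to confirm the guest actually released virtio-disk1 before the instance lock is dropped. A condensed sketch of the live-side detach-and-wait using the real libvirt event API (loop simplified; nova wraps this in a timeout and retry):

    import libvirt

    disk_xml = """..."""  # the <disk> element logged in the detach above

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("3fde18f7-843a-48d8-b394-c299cf479c37")

    removed = []
    def on_removed(conn, dom, dev, opaque):
        removed.append(dev)                    # e.g. "virtio-disk1"

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_removed, None)
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    while not removed:
        libvirt.virEventRunDefaultImpl()       # pump events until confirmation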
Nov 29 02:53:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:24.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 81 KiB/s rd, 65 KiB/s wr, 15 op/s
Nov 29 02:53:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:25.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:25 np0005539550 nova_compute[257631]: 2025-11-29 07:53:25.491 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:26 np0005539550 nova_compute[257631]: 2025-11-29 07:53:26.025 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:26.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 13 KiB/s wr, 4 op/s
Nov 29 02:53:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:28.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 14 KiB/s wr, 3 op/s
Nov 29 02:53:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:29.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:30 np0005539550 nova_compute[257631]: 2025-11-29 07:53:30.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:30 np0005539550 podman[272003]: 2025-11-29 07:53:30.613974656 +0000 UTC m=+0.089786724 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 02:53:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:30.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 160 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 MiB/s wr, 20 op/s
Nov 29 02:53:31 np0005539550 nova_compute[257631]: 2025-11-29 07:53:31.027 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:31.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 160 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 MiB/s wr, 20 op/s
Nov 29 02:53:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:33.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:53:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:34.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:53:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 256 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 5.3 MiB/s wr, 73 op/s
Nov 29 02:53:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:35 np0005539550 nova_compute[257631]: 2025-11-29 07:53:35.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:36 np0005539550 nova_compute[257631]: 2025-11-29 07:53:36.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:36.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 275 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 6.0 MiB/s wr, 82 op/s
Nov 29 02:53:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:37.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:38.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 306 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 7.1 MiB/s wr, 173 op/s
Nov 29 02:53:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:39.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:40 np0005539550 nova_compute[257631]: 2025-11-29 07:53:40.497 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:40.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 291 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.1 MiB/s wr, 244 op/s
Nov 29 02:53:41 np0005539550 nova_compute[257631]: 2025-11-29 07:53:41.031 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:53:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:41.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:41 np0005539550 nova_compute[257631]: 2025-11-29 07:53:41.249 257641 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Creating tmpfile /var/lib/nova/instances/tmpikznkx1y to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Nov 29 02:53:41 np0005539550 nova_compute[257631]: 2025-11-29 07:53:41.355 257641 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpikznkx1y',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Nov 29 02:53:41 np0005539550 nova_compute[257631]: 2025-11-29 07:53:41.376 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:41 np0005539550 nova_compute[257631]: 2025-11-29 07:53:41.376 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:41 np0005539550 nova_compute[257631]: 2025-11-29 07:53:41.629 257641 INFO nova.compute.rpcapi [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Nov 29 02:53:41 np0005539550 nova_compute[257631]: 2025-11-29 07:53:41.630 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
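The "compute-rpcapi-router" lock guards a one-time computation: nova reads the minimum nova-compute service version across the deployment (66 here) and maps it to the newest compute RPC API version every node can understand (6.2), so a mixed-version cloud downgrades messages rather than sending fields old nodes cannot parse. The lookup is conceptually a floor search over a version-history table; the table entry below is assumed from this log line, not nova's actual history:

    # Hypothetical service-version -> RPC-cap table; nova maintains the real one.
    SERVICE_TO_RPC = {66: "6.2"}

    def pick_rpc_cap(min_service_version: int) -> str:
        eligible = [v for v in SERVICE_TO_RPC if v <= min_service_version]
        return SERVICE_TO_RPC[max(eligible)]

    print(pick_rpc_cap(66))   # -> 6.2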
Nov 29 02:53:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:42.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 291 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.9 MiB/s wr, 226 op/s
Nov 29 02:53:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:43.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:43 np0005539550 nova_compute[257631]: 2025-11-29 07:53:43.460 257641 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpikznkx1y',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2bb12f77-8958-446b-813d-a59f149a549b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Nov 29 02:53:43 np0005539550 nova_compute[257631]: 2025-11-29 07:53:43.493 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:53:43 np0005539550 nova_compute[257631]: 2025-11-29 07:53:43.494 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:53:43 np0005539550 nova_compute[257631]: 2025-11-29 07:53:43.494 257641 DEBUG nova.network.neutron [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:53:43 np0005539550 nova_compute[257631]: 2025-11-29 07:53:43.559 257641 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Creating tmpfile /var/lib/nova/instances/tmp3464m1ig to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Nov 29 02:53:43 np0005539550 nova_compute[257631]: 2025-11-29 07:53:43.560 257641 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3464m1ig',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Nov 29 02:53:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:44.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
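The recurring anonymous "HEAD / HTTP/1.0" 200 entries are load-balancer health probes against radosgw (the haproxy/keepalived RGW ingress containers appear further down). A stdlib reproduction of the probe; the log does not record the RGW bind address or port, so both values below are placeholders:

    import http.client

    # Probe the local RGW frontend; address/port assumed, not in the log.
    conn = http.client.HTTPConnection('127.0.0.1', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while radosgw is healthy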
Nov 29 02:53:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 260 MiB data, 371 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 5.9 MiB/s wr, 405 op/s
Nov 29 02:53:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:45.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:45 np0005539550 nova_compute[257631]: 2025-11-29 07:53:45.334 257641 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3464m1ig',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='9bef976c-2981-4d19-aa60-8a550b7093ca',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 02:53:45 np0005539550 nova_compute[257631]: 2025-11-29 07:53:45.366 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:45 np0005539550 nova_compute[257631]: 2025-11-29 07:53:45.366 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquired lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:45 np0005539550 nova_compute[257631]: 2025-11-29 07:53:45.367 257641 DEBUG nova.network.neutron [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:53:45 np0005539550 nova_compute[257631]: 2025-11-29 07:53:45.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.091 257641 DEBUG nova.network.neutron [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updating instance_info_cache with network_info: [{"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
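The network_info blob Nova just cached is effectively JSON; the port id and fixed IP used by the OVS plug below live inside it. A tiny walker over a trimmed copy of the structure in the line above:

    # Trimmed from the cached network_info above; only the fields we walk.
    network_info = [{
        "id": "230cb1ef-c551-4666-88fe-e49994b798e9",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.12"}]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"])  # -> 230cb1ef-... 10.100.0.12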
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.618 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.620 257641 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpikznkx1y',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2bb12f77-8958-446b-813d-a59f149a549b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.620 257641 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Creating instance directory: /var/lib/nova/instances/2bb12f77-8958-446b-813d-a59f149a549b pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.621 257641 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Ensure instance console log exists: /var/lib/nova/instances/2bb12f77-8958-446b-813d-a59f149a549b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.621 257641 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.622 257641 DEBUG nova.virt.libvirt.vif [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2050487278',display_name='tempest-LiveMigrationTest-server-2050487278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2050487278',id=13,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-f3pegydb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:37Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=2bb12f77-8958-446b-813d-a59f149a549b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.622 257641 DEBUG nova.network.os_vif_util [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.623 257641 DEBUG nova.network.os_vif_util [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.623 257641 DEBUG os_vif [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
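os-vif is driving the actual plug here. A hedged sketch of the same call sequence, with field values copied from the VIFOpenVSwitch repr in the log and everything else (notably the instance name) assumed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    net = network.Network(id='7a06a21a-ba04-4a14-8d62-c931cbbf124d',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='230cb1ef-c551-4666-88fe-e49994b798e9',
        address='fa:16:3e:80:fa:cd',
        vif_name='tap230cb1ef-c5',
        bridge_name='br-int',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='2bb12f77-8958-446b-813d-a59f149a549b',
        name='instance-0000000d')  # name assumed, not shown in this line

    # On success os_vif emits the "Successfully plugged vif ..." INFO line.
    os_vif.plug(ovs_vif, inst)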
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.624 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.653 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.655 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:46.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.662 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.663 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap230cb1ef-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 21 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.9 MiB/s wr, 396 op/s
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.664 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap230cb1ef-c5, col_values=(('external_ids', {'iface-id': '230cb1ef-c551-4666-88fe-e49994b798e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:fa:cd', 'vm-uuid': '2bb12f77-8958-446b-813d-a59f149a549b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
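The AddPortCommand/DbSetCommand pair above is a single ovsdbapp transaction against the local OVS database (the earlier AddBridgeCommand was a separate, no-op transaction since br-int already exists). A sketch of the equivalent client code, assuming the default OVSDB socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, two commands, mirroring txn n=1 idx=0/idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap230cb1ef-c5', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap230cb1ef-c5',
            ('external_ids', {
                'iface-id': '230cb1ef-c551-4666-88fe-e49994b798e9',
                'attached-mac': 'fa:16:3e:80:fa:cd',
                'vm-uuid': '2bb12f77-8958-446b-813d-a59f149a549b'})))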
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.666 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:46 np0005539550 NetworkManager[49039]: <info>  [1764402826.6680] manager: (tap230cb1ef-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.668 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.678 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.681 257641 INFO os_vif [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5')#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.682 257641 DEBUG nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:53:46 np0005539550 nova_compute[257631]: 2025-11-29 07:53:46.682 257641 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpikznkx1y',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2bb12f77-8958-446b-813d-a59f149a549b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 02:53:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:47.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:47 np0005539550 podman[272269]: 2025-11-29 07:53:47.497048142 +0000 UTC m=+0.170847543 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 02:53:47 np0005539550 podman[272269]: 2025-11-29 07:53:47.609355068 +0000 UTC m=+0.283154469 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.881 257641 DEBUG nova.network.neutron [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating instance_info_cache with network_info: [{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.905 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Releasing lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.907 257641 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3464m1ig',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='9bef976c-2981-4d19-aa60-8a550b7093ca',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.907 257641 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Creating instance directory: /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.908 257641 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Ensure instance console log exists: /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.908 257641 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.909 257641 DEBUG nova.virt.libvirt.vif [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:53:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1241596333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1241596333',id=16,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-ggznma7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:39Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=9bef976c-2981-4d19-aa60-8a550b7093ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.909 257641 DEBUG nova.network.os_vif_util [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.910 257641 DEBUG nova.network.os_vif_util [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.910 257641 DEBUG os_vif [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.910 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.911 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.911 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.915 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap384b014a-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.915 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap384b014a-c4, col_values=(('external_ids', {'iface-id': '384b014a-c4e8-4d83-a8d1-09e70342722f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a8:74:d4', 'vm-uuid': '9bef976c-2981-4d19-aa60-8a550b7093ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:53:47 np0005539550 NetworkManager[49039]: <info>  [1764402827.9199] manager: (tap384b014a-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.927 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.929 257641 INFO os_vif [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4')#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.930 257641 DEBUG nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:53:47 np0005539550 nova_compute[257631]: 2025-11-29 07:53:47.930 257641 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3464m1ig',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='9bef976c-2981-4d19-aa60-8a550b7093ca',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 02:53:48 np0005539550 podman[272427]: 2025-11-29 07:53:48.281655854 +0000 UTC m=+0.061139528 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:53:48 np0005539550 podman[272427]: 2025-11-29 07:53:48.293304036 +0000 UTC m=+0.072787710 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 02:53:48 np0005539550 podman[272491]: 2025-11-29 07:53:48.495348614 +0000 UTC m=+0.051782597 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4, com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 02:53:48 np0005539550 podman[272491]: 2025-11-29 07:53:48.508343012 +0000 UTC m=+0.064776995 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, release=1793, com.redhat.component=keepalived-container, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 02:53:48 np0005539550 podman[272526]: 2025-11-29 07:53:48.635278949 +0000 UTC m=+0.052370513 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:53:48 np0005539550 podman[272525]: 2025-11-29 07:53:48.643267953 +0000 UTC m=+0.062148404 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 02:53:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
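The mon_command/audit pairs above are cephadm (running inside the mgr) persisting its per-host inventory into the monitor's config-key store; the audit lines for the large device blobs are truncated in the journal. The stored keys can be inspected read-only, sketched here via the ceph CLI wrapped in Python to keep one language (key name copied from the log; requires an admin keyring on the host):

    import subprocess

    # Read back one of the keys cephadm just wrote.
    subprocess.run(
        ['ceph', 'config-key', 'get', 'mgr/cephadm/host.compute-0'],
        check=True)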
Nov 29 02:53:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:48.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 214 MiB data, 349 MiB used, 21 GiB / 21 GiB avail; 7.7 MiB/s rd, 1.1 MiB/s wr, 388 op/s
Nov 29 02:53:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.452 257641 DEBUG nova.network.neutron [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Port 230cb1ef-c551-4666-88fe-e49994b798e9 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
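The "updated with migration profile" line is Nova writing migrating_to into the Neutron port's binding profile so that OVN can pre-bind the destination chassis. A sketch of the equivalent API call via openstacksdk; the authentication method and clouds.yaml entry below are assumptions:

    import openstack

    conn = openstack.connect(cloud='envvars')
    # Mirror the profile Nova set on the port during pre_live_migration.
    conn.network.update_port(
        '230cb1ef-c551-4666-88fe-e49994b798e9',
        binding_profile={'migrating_to': 'compute-0.ctlplane.example.com'})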
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.454 257641 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpikznkx1y',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='2bb12f77-8958-446b-813d-a59f149a549b',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 18a9d27e-9eed-471a-96a6-d31c8a436bb7 does not exist
Nov 29 02:53:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 44b17be7-67e8-4207-8d0d-77ad0fac372f does not exist
Nov 29 02:53:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 77d45660-2827-4336-bc1b-3072b68ba90d does not exist
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:53:49 np0005539550 systemd[1]: Starting libvirt proxy daemon...
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:53:49 np0005539550 systemd[1]: Started libvirt proxy daemon.
Nov 29 02:53:49 np0005539550 NetworkManager[49039]: <info>  [1764402829.7415] manager: (tap230cb1ef-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 02:53:49 np0005539550 kernel: tap230cb1ef-c5: entered promiscuous mode
Nov 29 02:53:49 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:49Z|00046|binding|INFO|Claiming lport 230cb1ef-c551-4666-88fe-e49994b798e9 for this additional chassis.
Nov 29 02:53:49 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:49Z|00047|binding|INFO|230cb1ef-c551-4666-88fe-e49994b798e9: Claiming fa:16:3e:80:fa:cd 10.100.0.12
Nov 29 02:53:49 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:49Z|00048|binding|INFO|Claiming lport 25733cd0-1d42-411e-be69-7bf3a59b5a2a for this additional chassis.
Nov 29 02:53:49 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:49Z|00049|binding|INFO|25733cd0-1d42-411e-be69-7bf3a59b5a2a: Claiming fa:16:3e:58:c7:5e 19.80.0.38
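The "additional chassis" claims show OVN's live-migration handshake: the logical port stays bound to the source chassis while this destination claims it as an additional chassis, so traffic can cut over the moment QEMU resumes here. One way to observe the dual binding from the southbound DB, wrapped in Python (assumes ovn-sbctl is on PATH; the additional_chassis column only exists in newer OVN releases):

    import subprocess

    # Dump the Port_Binding row for the lport being claimed above.
    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=230cb1ef-c551-4666-88fe-e49994b798e9'],
        check=True)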
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.755 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:49 np0005539550 systemd-udevd[272823]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:49 np0005539550 NetworkManager[49039]: <info>  [1764402829.7880] device (tap230cb1ef-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:53:49 np0005539550 NetworkManager[49039]: <info>  [1764402829.7893] device (tap230cb1ef-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:53:49 np0005539550 systemd-machined[216673]: New machine qemu-7-instance-0000000d.
Nov 29 02:53:49 np0005539550 systemd[1]: Started Virtual Machine qemu-7-instance-0000000d.
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.825 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:49 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:49Z|00050|binding|INFO|Setting lport 230cb1ef-c551-4666-88fe-e49994b798e9 ovn-installed in OVS
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.828 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.830 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.919 257641 DEBUG nova.network.neutron [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Port 384b014a-c4e8-4d83-a8d1-09e70342722f updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 02:53:49 np0005539550 nova_compute[257631]: 2025-11-29 07:53:49.921 257641 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3464m1ig',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='9bef976c-2981-4d19-aa60-8a550b7093ca',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:53:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:50 np0005539550 podman[272878]: 2025-11-29 07:53:50.106831659 +0000 UTC m=+0.046336282 container create a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:53:50 np0005539550 kernel: tap384b014a-c4: entered promiscuous mode
Nov 29 02:53:50 np0005539550 systemd-udevd[272827]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:50 np0005539550 NetworkManager[49039]: <info>  [1764402830.1515] manager: (tap384b014a-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.154 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00051|binding|INFO|Claiming lport 384b014a-c4e8-4d83-a8d1-09e70342722f for this additional chassis.
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00052|binding|INFO|384b014a-c4e8-4d83-a8d1-09e70342722f: Claiming fa:16:3e:a8:74:d4 10.100.0.6
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00053|binding|INFO|Claiming lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a for this additional chassis.
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00054|binding|INFO|f3deccbd-dd81-439e-9ba4-ebc80268aa7a: Claiming fa:16:3e:d7:b4:cf 19.80.0.168
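Note: the claim pairs above are the live-migration destination taking ports as an "additional chassis" while the source still holds them. The resulting bindings can be cross-checked in the OVN southbound database; a sketch, assuming ovn-sbctl is installed and can reach the SB DB (socket path and available columns may differ per deployment):

    # Sketch: list Port_Binding rows to verify which chassis holds a lport.
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,additional_chassis",
         "list", "Port_Binding"],
        capture_output=True, text=True, check=True,
    ).stdout

    for block in out.strip().split("\n\n"):
        if "384b014a-c4e8-4d83-a8d1-09e70342722f" in block:
            print(block)   # shows both the bound and the additional chassis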
Nov 29 02:53:50 np0005539550 systemd[1]: Started libpod-conmon-a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b.scope.
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.162 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 NetworkManager[49039]: <info>  [1764402830.1674] device (tap384b014a-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:53:50 np0005539550 NetworkManager[49039]: <info>  [1764402830.1680] device (tap384b014a-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:53:50 np0005539550 podman[272878]: 2025-11-29 07:53:50.082308902 +0000 UTC m=+0.021813555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.218 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00055|binding|INFO|Setting lport 384b014a-c4e8-4d83-a8d1-09e70342722f ovn-installed in OVS
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.222 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 systemd[1]: Started Virtual Machine qemu-8-instance-00000010.
Nov 29 02:53:50 np0005539550 systemd-machined[216673]: New machine qemu-8-instance-00000010.
Nov 29 02:53:50 np0005539550 podman[272878]: 2025-11-29 07:53:50.225967547 +0000 UTC m=+0.165472220 container init a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:53:50 np0005539550 podman[272878]: 2025-11-29 07:53:50.239421328 +0000 UTC m=+0.178925961 container start a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:53:50 np0005539550 podman[272878]: 2025-11-29 07:53:50.243488426 +0000 UTC m=+0.182993049 container attach a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendel, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:50 np0005539550 systemd[1]: libpod-a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b.scope: Deactivated successfully.
Nov 29 02:53:50 np0005539550 thirsty_mendel[272904]: 167 167
Nov 29 02:53:50 np0005539550 podman[272878]: 2025-11-29 07:53:50.249163708 +0000 UTC m=+0.188668341 container died a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendel, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:53:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5f9a2cfef5e68341536a6f8979caf371f623d6468bee78d36fdae6380c201bba-merged.mount: Deactivated successfully.
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.285 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.286 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.287 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.287 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.287 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
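Note: the acquire/release bracket above is nova's standard oslo.concurrency pattern: terminate_instance runs under a per-instance named lock, with a nested "<uuid>-events" lock around event cleanup, and lockutils logs the wait and hold times. A minimal sketch of the same API, assuming oslo.concurrency is installed:

    # Minimal sketch of the nova locking pattern seen above, using
    # oslo.concurrency's named locks.
    from oslo_concurrency import lockutils

    instance_uuid = "3fde18f7-843a-48d8-b394-c299cf479c37"   # from the log

    # Equivalent of 'Acquiring lock "<uuid>" ... acquired ... released'
    with lockutils.lock(instance_uuid):
        # serialize per-instance work, e.g. do_terminate_instance
        with lockutils.lock(instance_uuid + "-events"):
            pass   # clear_events_for_instance runs under the nested lock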
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.288 257641 INFO nova.compute.manager [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Terminating instance#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.289 257641 DEBUG nova.compute.manager [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:53:50 np0005539550 podman[272878]: 2025-11-29 07:53:50.305805015 +0000 UTC m=+0.245309638 container remove a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
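Note: the podman lines for thirsty_mendel trace one short-lived cephadm probe container through create, init, start, attach, died and remove in roughly 0.2 seconds. Any "podman run --rm" of a quick command reproduces that event sequence; a sketch (the image digest is the one from the log, and overriding the entrypoint with "true" is just for illustration):

    # Sketch: a short-lived "podman run --rm" produces the same
    # create/init/start/attach/died/remove event sequence seen above.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    subprocess.run(["podman", "run", "--rm", "--entrypoint", "true", image],
                   check=True)
    # "podman events --since 1m" would then show the six lifecycle events.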
Nov 29 02:53:50 np0005539550 systemd[1]: libpod-conmon-a8e8dcbca9961f8b8951cbd60a2bc5e038862d315f7352dd707d079971a2cf6b.scope: Deactivated successfully.
Nov 29 02:53:50 np0005539550 kernel: tap60b6c6b1-2f (unregistering): left promiscuous mode
Nov 29 02:53:50 np0005539550 NetworkManager[49039]: <info>  [1764402830.3667] device (tap60b6c6b1-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00056|binding|INFO|Releasing lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 from this chassis (sb_readonly=0)
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00057|binding|INFO|Setting lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 down in Southbound
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00058|binding|INFO|Removing iface tap60b6c6b1-2f ovn-installed in OVS
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.379 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.382 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.387 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:80:a9 10.100.0.4'], port_security=['fa:16:3e:30:80:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3fde18f7-843a-48d8-b394-c299cf479c37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba323f7dc95a4f11911e6559a1b3c99e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2225496-b125-4ce8-9536-dbe1ff7d1166', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55a9360d-4d01-4bc9-9d17-f718c6f118ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=60b6c6b1-2f34-40e6-b2e4-53a564e65b87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
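Note: the "Matched UPDATE: PortBindingUpdatedEvent" record above is ovsdbapp's dispatcher reporting that a Port_Binding row update (chassis cleared, up flipped to false) matched a registered row event. A sketch of such an event class in the ovsdbapp style; the match condition here is illustrative, not neutron's actual logic:

    # Sketch of an ovsdbapp RowEvent like the PortBindingUpdatedEvent
    # matched above; the match condition is illustrative only.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # fire on updates to the southbound Port_Binding table
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.event_name = "PortBindingUpdatedEvent"

        def match_fn(self, event, row, old):
            # interested only when the binding's chassis column changed
            return hasattr(old, "chassis")

        def run(self, event, row, old):
            print(f"port {row.logical_port} chassis changed")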
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.389 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 in datapath 7f03fa2b-ae90-41ca-b1ba-770fedbd8710 unbound from our chassis#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.392 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f03fa2b-ae90-41ca-b1ba-770fedbd8710, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.394 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fa8763-9b97-43ab-b806-de9c9d263044]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.395 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 namespace which is not needed anymore#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.402 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 29 02:53:50 np0005539550 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000c.scope: Consumed 16.528s CPU time.
Nov 29 02:53:50 np0005539550 systemd-machined[216673]: Machine qemu-6-instance-0000000c terminated.
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.502 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 kernel: tap60b6c6b1-2f: entered promiscuous mode
Nov 29 02:53:50 np0005539550 NetworkManager[49039]: <info>  [1764402830.5138] manager: (tap60b6c6b1-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00059|binding|INFO|Claiming lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for this chassis.
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00060|binding|INFO|60b6c6b1-2f34-40e6-b2e4-53a564e65b87: Claiming fa:16:3e:30:80:a9 10.100.0.4
Nov 29 02:53:50 np0005539550 kernel: tap60b6c6b1-2f (unregistering): left promiscuous mode
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.517 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.523 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:80:a9 10.100.0.4'], port_security=['fa:16:3e:30:80:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3fde18f7-843a-48d8-b394-c299cf479c37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba323f7dc95a4f11911e6559a1b3c99e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2225496-b125-4ce8-9536-dbe1ff7d1166', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55a9360d-4d01-4bc9-9d17-f718c6f118ad, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=60b6c6b1-2f34-40e6-b2e4-53a564e65b87) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:50 np0005539550 podman[272969]: 2025-11-29 07:53:50.543099106 +0000 UTC m=+0.065669399 container create 2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00061|binding|INFO|Setting lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 ovn-installed in OVS
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00062|binding|INFO|Setting lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 up in Southbound
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00063|binding|INFO|Releasing lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 from this chassis (sb_readonly=1)
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00064|if_status|INFO|Not setting lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 down as sb is readonly
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.542 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00065|binding|INFO|Removing iface tap60b6c6b1-2f ovn-installed in OVS
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.549 257641 INFO nova.virt.libvirt.driver [-] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Instance destroyed successfully.#033[00m
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00066|binding|INFO|Releasing lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 from this chassis (sb_readonly=0)
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.550 257641 DEBUG nova.objects.instance [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lazy-loading 'resources' on Instance uuid 3fde18f7-843a-48d8-b394-c299cf479c37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:53:50 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:50Z|00067|binding|INFO|Setting lport 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 down in Southbound
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.560 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:80:a9 10.100.0.4'], port_security=['fa:16:3e:30:80:a9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '3fde18f7-843a-48d8-b394-c299cf479c37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ba323f7dc95a4f11911e6559a1b3c99e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2225496-b125-4ce8-9536-dbe1ff7d1166', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55a9360d-4d01-4bc9-9d17-f718c6f118ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=60b6c6b1-2f34-40e6-b2e4-53a564e65b87) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.562 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.583 257641 DEBUG nova.virt.libvirt.vif [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:52:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-1811613913',display_name='tempest-VolumesAdminNegativeTest-server-1811613913',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-1811613913',id=12,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK4GT4boGwht+cjugoIvpANSRA3BQP7vzSKB82n4K09WTLIRFW6T0zh97MlQcefGWJRusLdcEMw8Z3lZ13dqkjmEuhfgLmaxnGEuBK2Aidtxqfs+WoPADX4XLKAYu7APUQ==',key_name='tempest-keypair-2138985612',keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:52:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ba323f7dc95a4f11911e6559a1b3c99e',ramdisk_id='',reservation_id='r-4ynef4sk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-1736550981',owner_user_name='tempest-VolumesAdminNegativeTest-1736550981-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:52:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7f80aba0abfa403b80928e251377a7cd',uuid=3fde18f7-843a-48d8-b394-c299cf479c37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.584 257641 DEBUG nova.network.os_vif_util [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converting VIF {"id": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "address": "fa:16:3e:30:80:a9", "network": {"id": "7f03fa2b-ae90-41ca-b1ba-770fedbd8710", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1259962014-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ba323f7dc95a4f11911e6559a1b3c99e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b6c6b1-2f", "ovs_interfaceid": "60b6c6b1-2f34-40e6-b2e4-53a564e65b87", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.584 257641 DEBUG nova.network.os_vif_util [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:30:80:a9,bridge_name='br-int',has_traffic_filtering=True,id=60b6c6b1-2f34-40e6-b2e4-53a564e65b87,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b6c6b1-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.585 257641 DEBUG os_vif [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:80:a9,bridge_name='br-int',has_traffic_filtering=True,id=60b6c6b1-2f34-40e6-b2e4-53a564e65b87,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b6c6b1-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.587 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60b6c6b1-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.591 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.594 257641 INFO os_vif [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:30:80:a9,bridge_name='br-int',has_traffic_filtering=True,id=60b6c6b1-2f34-40e6-b2e4-53a564e65b87,network=Network(7f03fa2b-ae90-41ca-b1ba-770fedbd8710),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b6c6b1-2f')#033[00m
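Note: the unplug path above is the public os-vif flow: nova converts its VIF dict to a VIFOpenVSwitch object, calls os_vif.unplug(), and the ovs plugin commits the DelPortCommand that removes tap60b6c6b1-2f from br-int. A sketch of that call built from only the fields visible in the log, assuming the os-vif ovs plugin is installed:

    # Sketch of the os-vif unplug call seen above; field values come from
    # the log, and this assumes the os-vif ovs plugin is installed.
    import os_vif
    from os_vif.objects import instance_info as osv_instance
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()   # load plugins (ovs, linux_bridge, ...)

    vif = osv_vif.VIFOpenVSwitch(
        id="60b6c6b1-2f34-40e6-b2e4-53a564e65b87",
        address="fa:16:3e:30:80:a9",
        bridge_name="br-int",
        vif_name="tap60b6c6b1-2f",
    )
    instance = osv_instance.InstanceInfo(
        uuid="3fde18f7-843a-48d8-b394-c299cf479c37",
        name="tempest-VolumesAdminNegativeTest-server-1811613913",
    )

    os_vif.unplug(vif, instance)   # plugin issues DelPortCommand(tap..., br-int)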
Nov 29 02:53:50 np0005539550 systemd[1]: Started libpod-conmon-2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0.scope.
Nov 29 02:53:50 np0005539550 neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710[271727]: [NOTICE]   (271731) : haproxy version is 2.8.14-c23fe91
Nov 29 02:53:50 np0005539550 neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710[271727]: [NOTICE]   (271731) : path to executable is /usr/sbin/haproxy
Nov 29 02:53:50 np0005539550 neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710[271727]: [WARNING]  (271731) : Exiting Master process...
Nov 29 02:53:50 np0005539550 neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710[271727]: [ALERT]    (271731) : Current worker (271733) exited with code 143 (Terminated)
Nov 29 02:53:50 np0005539550 neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710[271727]: [WARNING]  (271731) : All workers exited. Exiting... (0)
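Note: exit code 143 in the haproxy ALERT is the usual 128+signal encoding: the worker was terminated with SIGTERM (15) when the metadata proxy for the now-empty network was stopped, so the "All workers exited" shutdown is orderly, not a crash. The arithmetic, for reference:

    # Exit code 143 = 128 + SIGTERM: the shell-style encoding for
    # "terminated by signal 15", matching the haproxy worker above.
    import signal

    assert 128 + signal.SIGTERM == 143
    print(signal.Signals(143 - 128).name)   # SIGTERM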
Nov 29 02:53:50 np0005539550 systemd[1]: libpod-8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008.scope: Deactivated successfully.
Nov 29 02:53:50 np0005539550 podman[272989]: 2025-11-29 07:53:50.616064359 +0000 UTC m=+0.085875049 container died 8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 02:53:50 np0005539550 podman[272969]: 2025-11-29 07:53:50.52196153 +0000 UTC m=+0.044531853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555aa658de17053297fe64fffe2a74c3635e544ff4fab112aa39a06465ceb86a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555aa658de17053297fe64fffe2a74c3635e544ff4fab112aa39a06465ceb86a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555aa658de17053297fe64fffe2a74c3635e544ff4fab112aa39a06465ceb86a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555aa658de17053297fe64fffe2a74c3635e544ff4fab112aa39a06465ceb86a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/555aa658de17053297fe64fffe2a74c3635e544ff4fab112aa39a06465ceb86a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
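Note: the "supports timestamps until 2038 (0x7fffffff)" remount warnings mean these XFS-backed overlay paths use the old 32-bit inode timestamp format, whose ceiling is the signed 32-bit epoch second. The cutoff can be checked directly:

    # 0x7fffffff seconds after the epoch is the 32-bit timestamp limit
    # the XFS remount warnings above refer to.
    from datetime import datetime, timezone

    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())   # 2038-01-19T03:14:07+00:00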
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.656 257641 DEBUG nova.compute.manager [req-10841e0e-6ba3-401e-9448-d8efdc77fc5c req-30a2f931-b86d-4774-9a02-5896abebe4d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-unplugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.656 257641 DEBUG oslo_concurrency.lockutils [req-10841e0e-6ba3-401e-9448-d8efdc77fc5c req-30a2f931-b86d-4774-9a02-5896abebe4d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.656 257641 DEBUG oslo_concurrency.lockutils [req-10841e0e-6ba3-401e-9448-d8efdc77fc5c req-30a2f931-b86d-4774-9a02-5896abebe4d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.657 257641 DEBUG oslo_concurrency.lockutils [req-10841e0e-6ba3-401e-9448-d8efdc77fc5c req-30a2f931-b86d-4774-9a02-5896abebe4d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.657 257641 DEBUG nova.compute.manager [req-10841e0e-6ba3-401e-9448-d8efdc77fc5c req-30a2f931-b86d-4774-9a02-5896abebe4d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] No waiting events found dispatching network-vif-unplugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.657 257641 DEBUG nova.compute.manager [req-10841e0e-6ba3-401e-9448-d8efdc77fc5c req-30a2f931-b86d-4774-9a02-5896abebe4d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-unplugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:53:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:50.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
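Note: the radosgw triplet above (request start, req done, beast access line) is the signature of a load-balancer health probe: an anonymous HEAD / answered 200 with near-zero latency, one probe per balancer backend (.102 here, .100 a second later). Reproducing the probe is trivial; host and port below are placeholders, not values from this log:

    # Sketch of the anonymous "HEAD /" health probe that produces the
    # radosgw beast log lines above; host and port are placeholders.
    import http.client

    conn = http.client.HTTPConnection("rgw.example.com", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # 200 for a healthy radosgw endpoint
    conn.close()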
Nov 29 02:53:50 np0005539550 podman[272969]: 2025-11-29 07:53:50.664438014 +0000 UTC m=+0.187008327 container init 2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 224 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 648 KiB/s wr, 310 op/s
Nov 29 02:53:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008-userdata-shm.mount: Deactivated successfully.
Nov 29 02:53:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1f42deaecc5401e4ad05a2e78105babe6430acd5b401f551a06acfdf2a2cb319-merged.mount: Deactivated successfully.
Nov 29 02:53:50 np0005539550 podman[272969]: 2025-11-29 07:53:50.674768271 +0000 UTC m=+0.197338574 container start 2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:53:50 np0005539550 podman[272969]: 2025-11-29 07:53:50.682473217 +0000 UTC m=+0.205043530 container attach 2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:53:50 np0005539550 podman[272989]: 2025-11-29 07:53:50.69045241 +0000 UTC m=+0.160263080 container cleanup 8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:53:50 np0005539550 systemd[1]: libpod-conmon-8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008.scope: Deactivated successfully.
Nov 29 02:53:50 np0005539550 podman[273071]: 2025-11-29 07:53:50.784805996 +0000 UTC m=+0.061337123 container remove 8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.792 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3bbcca-1a6b-4564-832c-7ddf2799201f]: (4, ('Sat Nov 29 07:53:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 (8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008)\n8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008\nSat Nov 29 07:53:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 (8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008)\n8df94b1891940955b45ef3f1b41e9f08cb6a9d6bedcf94713edca06d6cf7e008\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.796 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b519c8a7-5e4c-4bc8-b520-e84e67222953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.797 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f03fa2b-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 kernel: tap7f03fa2b-a0: left promiscuous mode
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.814 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.820 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bc9556-3cf4-4a75-b4d2-7d5bba779af5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.840 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6715cb39-5e53-4f5a-a665-5d62e7c3e9d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.841 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[78b6cff0-e31b-4e6a-bad9-4fc46bb14259]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.863 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f0e9d6-8a39-4022-a9bb-b07c80cea2bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 582033, 'reachable_time': 44404, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273122, 'error': None, 'target': 'ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.869 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.869 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[5d37e500-8790-433b-87fd-83977bb60472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
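Note: namespace teardown runs through neutron's privsep daemon into pyroute2: the agent dumps the links inside ovnmeta-<network> (the RTM_NEWLINK reply above shows only lo left), then removes the namespace. A sketch of the underlying pyroute2 calls, assuming the namespace exists and the caller has root privileges:

    # Sketch of the pyroute2 calls behind neutron's remove_netns: list the
    # links inside a namespace, then delete it. Requires root privileges.
    from pyroute2 import NetNS, netns

    name = "ovnmeta-7f03fa2b-ae90-41ca-b1ba-770fedbd8710"

    with NetNS(name) as ns:
        for link in ns.get_links():                 # RTM_NEWLINK dumps
            print(link.get_attr("IFLA_IFNAME"))     # e.g. "lo"

    netns.remove(name)   # what ip_lib.remove_netns ultimately calls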
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.870 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 in datapath 7f03fa2b-ae90-41ca-b1ba-770fedbd8710 unbound from our chassis#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.872 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f03fa2b-ae90-41ca-b1ba-770fedbd8710, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.873 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7adaf6-f0a9-4781-81dd-dec90ee35d5a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.873 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 60b6c6b1-2f34-40e6-b2e4-53a564e65b87 in datapath 7f03fa2b-ae90-41ca-b1ba-770fedbd8710 unbound from our chassis#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.875 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7f03fa2b-ae90-41ca-b1ba-770fedbd8710, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:53:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:50.876 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5afdae48-63ba-4dd3-89db-c40553144fcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.943 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402830.9427848, 9bef976c-2981-4d19-aa60-8a550b7093ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.943 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Started (Lifecycle Event)#033[00m
Nov 29 02:53:50 np0005539550 nova_compute[257631]: 2025-11-29 07:53:50.963 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:51 np0005539550 systemd[1]: run-netns-ovnmeta\x2d7f03fa2b\x2dae90\x2d41ca\x2db1ba\x2d770fedbd8710.mount: Deactivated successfully.
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.160 257641 INFO nova.virt.libvirt.driver [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Deleting instance files /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37_del#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.160 257641 INFO nova.virt.libvirt.driver [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Deletion of /var/lib/nova/instances/3fde18f7-843a-48d8-b394-c299cf479c37_del complete#033[00m
Nov 29 02:53:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.215 257641 INFO nova.compute.manager [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.215 257641 DEBUG oslo.service.loopingcall [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.216 257641 DEBUG nova.compute.manager [-] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.216 257641 DEBUG nova.network.neutron [-] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.541 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402831.540936, 9bef976c-2981-4d19-aa60-8a550b7093ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.541 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.558 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.562 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.583 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
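The "instance has moved from host compute-2 ... to host compute-0" message is Nova's live-migration guard inside the power-state sync: when the host recorded in the database no longer matches the local host, the sync bails out so the instance's current owner reconciles state instead. A much-reduced sketch of that comparison (the dataclass and constants are illustrative, not Nova's actual code):

    # Reduced sketch of the host check behind the message above; the real
    # logic lives in nova.compute.manager, this is only an illustration.
    from dataclasses import dataclass

    @dataclass
    class Instance:
        uuid: str
        host: str  # host currently recorded in the Nova database

    LOCAL_HOST = "compute-0.ctlplane.example.com"  # this chassis, per the log

    def sync_power_state(instance: Instance) -> None:
        if instance.host != LOCAL_HOST:
            # Instance is mid-migration; let the owning host reconcile it.
            print(f"instance {instance.uuid} has moved from "
                  f"{instance.host} to {LOCAL_HOST}")
            return
        # ...otherwise compare DB power_state with the hypervisor's view...

    sync_power_state(Instance("9bef976c-2981-4d19-aa60-8a550b7093ca",
                              "compute-2.ctlplane.example.com"))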
Nov 29 02:53:51 np0005539550 hungry_proskuriakova[273034]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:53:51 np0005539550 hungry_proskuriakova[273034]: --> relative data size: 1.0
Nov 29 02:53:51 np0005539550 hungry_proskuriakova[273034]: --> All data devices are unavailable
Nov 29 02:53:51 np0005539550 systemd[1]: libpod-2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0.scope: Deactivated successfully.
Nov 29 02:53:51 np0005539550 podman[272969]: 2025-11-29 07:53:51.653029525 +0000 UTC m=+1.175599818 container died 2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:53:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-555aa658de17053297fe64fffe2a74c3635e544ff4fab112aa39a06465ceb86a-merged.mount: Deactivated successfully.
Nov 29 02:53:51 np0005539550 podman[272969]: 2025-11-29 07:53:51.712901608 +0000 UTC m=+1.235471901 container remove 2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:53:51 np0005539550 systemd[1]: libpod-conmon-2b186a7e2361325ad5bf80daac1fbdf6c7b4cfbb1a735a6874e52d8e975e32d0.scope: Deactivated successfully.
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.799 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402831.7991633, 2bb12f77-8958-446b-813d-a59f149a549b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.800 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] VM Started (Lifecycle Event)#033[00m
Nov 29 02:53:51 np0005539550 nova_compute[257631]: 2025-11-29 07:53:51.826 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:52 np0005539550 podman[273290]: 2025-11-29 07:53:52.36043857 +0000 UTC m=+0.040873635 container create eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ishizaka, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.391 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402832.3909843, 2bb12f77-8958-446b-813d-a59f149a549b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.391 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:53:52 np0005539550 systemd[1]: Started libpod-conmon-eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41.scope.
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.429 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.433 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:53:52 np0005539550 podman[273290]: 2025-11-29 07:53:52.342815219 +0000 UTC m=+0.023250304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:52 np0005539550 podman[273290]: 2025-11-29 07:53:52.448478087 +0000 UTC m=+0.128913162 container init eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.455 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Nov 29 02:53:52 np0005539550 podman[273290]: 2025-11-29 07:53:52.457465927 +0000 UTC m=+0.137900992 container start eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ishizaka, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:53:52 np0005539550 podman[273290]: 2025-11-29 07:53:52.462064651 +0000 UTC m=+0.142499756 container attach eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ishizaka, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:53:52 np0005539550 quizzical_ishizaka[273306]: 167 167
Nov 29 02:53:52 np0005539550 systemd[1]: libpod-eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41.scope: Deactivated successfully.
Nov 29 02:53:52 np0005539550 podman[273290]: 2025-11-29 07:53:52.464077854 +0000 UTC m=+0.144512919 container died eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ishizaka, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 02:53:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-958849e6b06aeb05bbc9e8b508933c86cbfb14feac926be6eb89a67245b3a512-merged.mount: Deactivated successfully.
Nov 29 02:53:52 np0005539550 podman[273290]: 2025-11-29 07:53:52.50238822 +0000 UTC m=+0.182823275 container remove eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:53:52 np0005539550 systemd[1]: libpod-conmon-eb98ee8a33733b7912c8e207dd24edde8d67c12e54d7db45247059312d4c6a41.scope: Deactivated successfully.
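The quizzical_ishizaka sequence above (create, init, start, attach, one line of output "167 167", died, remove, all within roughly 100 ms) is the footprint of a short-lived one-shot container, the pattern cephadm uses to run helpers inside the Ceph image. A sketch of that pattern, assuming the podman CLI is available; the image digest is copied from the log, and actually running this would pull the image:

    # One-shot container pattern that produces the create/init/start/attach/
    # died/remove events seen above. Illustrative only.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def run_once(*args: str) -> str:
        # --rm is what makes podman emit "died" followed by "remove".
        return subprocess.run(["podman", "run", "--rm", IMAGE, *args],
                              capture_output=True, text=True, check=True).stdout

    # e.g. run_once("ceph-volume", "lvm", "list", "--format", "json")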
Nov 29 02:53:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 224 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 610 KiB/s wr, 239 op/s
Nov 29 02:53:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:52.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
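The anonymous "HEAD / HTTP/1.0" 200 requests from 192.168.122.100 and .102 recur about once per second per client; that cadence and the empty anonymous HEAD are characteristic of load-balancer health probes against the radosgw beast frontend. The equivalent probe, sketched with the standard library; the host and port are assumptions, since the access log records only the probing client addresses:

    # Sketch of the health probe seen in the radosgw access lines above.
    # RGW_HOST/RGW_PORT are placeholders; the log does not show the
    # listening endpoint.
    import http.client

    RGW_HOST, RGW_PORT = "localhost", 8080  # assumed endpoint

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes above all return 200
    conn.close()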
Nov 29 02:53:52 np0005539550 podman[273329]: 2025-11-29 07:53:52.695586991 +0000 UTC m=+0.053014090 container create b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:53:52 np0005539550 systemd[1]: Started libpod-conmon-b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477.scope.
Nov 29 02:53:52 np0005539550 podman[273329]: 2025-11-29 07:53:52.670402527 +0000 UTC m=+0.027829646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb09a4c0bb990ef708c42ab3bd38a435dd4b9fbae7ec732a709894f02bc4977/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb09a4c0bb990ef708c42ab3bd38a435dd4b9fbae7ec732a709894f02bc4977/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb09a4c0bb990ef708c42ab3bd38a435dd4b9fbae7ec732a709894f02bc4977/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb09a4c0bb990ef708c42ab3bd38a435dd4b9fbae7ec732a709894f02bc4977/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
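The kernel flags each XFS bind mount entering the container as supporting "timestamps until 2038 (0x7fffffff)": the filesystem was made without the XFS bigtime feature, so inode timestamps are 32-bit, and 0x7fffffff is the largest signed 32-bit time_t. Decoding it shows where the 2038 figure comes from:

    # 0x7fffffff is the maximum signed 32-bit time_t, i.e. the classic
    # year-2038 limit the kernel message refers to.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds since the Unix epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00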
Nov 29 02:53:52 np0005539550 podman[273329]: 2025-11-29 07:53:52.789836704 +0000 UTC m=+0.147263803 container init b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.792 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.794 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.794 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.794 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.794 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] No waiting events found dispatching network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.794 257641 WARNING nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received unexpected event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.795 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.795 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.795 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.795 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.795 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] No waiting events found dispatching network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.795 257641 WARNING nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received unexpected event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.795 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.796 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.796 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.796 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.796 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] No waiting events found dispatching network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.796 257641 WARNING nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received unexpected event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.798 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-unplugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.798 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.798 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.799 257641 DEBUG oslo_concurrency.lockutils [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.799 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] No waiting events found dispatching network-vif-unplugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:52 np0005539550 nova_compute[257631]: 2025-11-29 07:53:52.799 257641 DEBUG nova.compute.manager [req-4ccaac38-d3fe-4869-adaf-0fe0f7eff251 req-4a0bb96e-c596-4338-afff-dd2897cc94e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-unplugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
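The three identical "network-vif-plugged" warnings followed by the "network-vif-unplugged" lines show Nova's external-event plumbing: each event Neutron delivers is matched against per-instance waiters keyed by event name and tag, and since instance 3fde18f7 is already in task_state deleting, nothing is waiting, so the plug events are logged as unexpected while the unplug event is merely noted. A dict-based sketch of that pop, far simpler than Nova's real InstanceEvents class (which wraps it in the locks visible above):

    # Simplified waiter registry behind "No waiting events found". Nova's
    # InstanceEvents guards this with the oslo_concurrency locks logged above.
    from collections import defaultdict

    _waiters = defaultdict(dict)  # instance uuid -> {event key: waiter}

    def pop_instance_event(instance_uuid, event_key):
        return _waiters[instance_uuid].pop(event_key, None)

    key = "network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87"
    if pop_instance_event("3fde18f7-843a-48d8-b394-c299cf479c37", key) is None:
        print("No waiting events found; event is unexpected")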
Nov 29 02:53:52 np0005539550 podman[273329]: 2025-11-29 07:53:52.801608299 +0000 UTC m=+0.159035398 container start b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:53:52 np0005539550 podman[273329]: 2025-11-29 07:53:52.813895158 +0000 UTC m=+0.171322257 container attach b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:53:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:53.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]: {
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:    "0": [
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:        {
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "devices": [
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "/dev/loop3"
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            ],
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "lv_name": "ceph_lv0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "lv_size": "7511998464",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "name": "ceph_lv0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "tags": {
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.cluster_name": "ceph",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.crush_device_class": "",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.encrypted": "0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.osd_id": "0",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.type": "block",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:                "ceph.vdo": "0"
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            },
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "type": "block",
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:            "vg_name": "ceph_vg0"
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:        }
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]:    ]
Nov 29 02:53:53 np0005539550 amazing_rhodes[273346]: }
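The JSON that amazing_rhodes printed is ceph-volume "lvm list" output in JSON form: a map from OSD id to its logical volumes, with the OSD identity duplicated into LVM tags. A small sketch pulling out the fields an orchestrator typically needs; the embedded document is abridged from the output above:

    # Extract OSD identity from ceph-volume JSON like the block above.
    import json

    raw = """
    {
      "0": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870"
          }
        }
      ]
    }
    """  # abridged from the amazing_rhodes output above

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"],
                  tags["ceph.osd_fsid"], tags["ceph.cluster_fsid"])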
Nov 29 02:53:53 np0005539550 systemd[1]: libpod-b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477.scope: Deactivated successfully.
Nov 29 02:53:53 np0005539550 conmon[273346]: conmon b7b9a3940b855f5c280b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477.scope/container/memory.events
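conmon's <nwarn> line is benign here: it tries to read the cgroup v2 memory.events file to report OOM kills, but the container's scope has already been torn down (the "Deactivated successfully" line just above), so the file is gone. For a live scope the file is plain "key value" text; a reader sketch, with the path from the warning kept as a comment:

    # Parse a cgroup v2 memory.events file such as the one conmon could not
    # open above; keys typically include low, high, max, oom and oom_kill.
    from pathlib import Path

    def memory_events(path):
        events = {}
        for line in Path(path).read_text().splitlines():
            key, value = line.split()
            events[key] = int(value)
        return events

    # memory_events("/sys/fs/cgroup/machine.slice/<scope>/container/memory.events")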
Nov 29 02:53:53 np0005539550 podman[273355]: 2025-11-29 07:53:53.769338563 +0000 UTC m=+0.046891036 container died b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:53:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cbb09a4c0bb990ef708c42ab3bd38a435dd4b9fbae7ec732a709894f02bc4977-merged.mount: Deactivated successfully.
Nov 29 02:53:53 np0005539550 podman[273355]: 2025-11-29 07:53:53.819546017 +0000 UTC m=+0.097098470 container remove b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:53 np0005539550 systemd[1]: libpod-conmon-b7b9a3940b855f5c280bfedf03b6a1eee0c8009e3ff1d67383504f439f706477.scope: Deactivated successfully.
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00068|binding|INFO|Claiming lport 384b014a-c4e8-4d83-a8d1-09e70342722f for this chassis.
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00069|binding|INFO|384b014a-c4e8-4d83-a8d1-09e70342722f: Claiming fa:16:3e:a8:74:d4 10.100.0.6
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00070|binding|INFO|Claiming lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a for this chassis.
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00071|binding|INFO|f3deccbd-dd81-439e-9ba4-ebc80268aa7a: Claiming fa:16:3e:d7:b4:cf 19.80.0.168
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00072|binding|INFO|Setting lport 384b014a-c4e8-4d83-a8d1-09e70342722f up in Southbound
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00073|binding|INFO|Setting lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a up in Southbound
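ovn-controller here claims a trunk parent (384b014a..., 10.100.0.6) and its subport (f3deccbd..., 19.80.0.168) for this chassis and marks both up in the Southbound database, which is what triggers the Port_Binding updates the metadata agent matches next. Assuming ovn-sbctl and its generic database commands are available on the chassis, the binding can be inspected directly (a sketch, not Neutron code; the UUID is taken from the claim above):

    # Sketch: show the Southbound Port_Binding row for a claimed lport.
    import subprocess

    LPORT = "384b014a-c4e8-4d83-a8d1-09e70342722f"

    row = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True).stdout
    print(row)  # after the claim, 'chassis' is set and up=[true]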
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.057 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:74:d4 10.100.0.6'], port_security=['fa:16:3e:a8:74:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1819185199', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9bef976c-2981-4d19-aa60-8a550b7093ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1819185199', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '9', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=384b014a-c4e8-4d83-a8d1-09e70342722f) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.059 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b4:cf 19.80.0.168'], port_security=['fa:16:3e:d7:b4:cf 19.80.0.168'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['384b014a-c4e8-4d83-a8d1-09e70342722f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-367077291', 'neutron:cidrs': '19.80.0.168/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-367077291', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=86c926f4-a652-4447-9b71-a6da44a90627, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f3deccbd-dd81-439e-9ba4-ebc80268aa7a) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.060 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 384b014a-c4e8-4d83-a8d1-09e70342722f in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 bound to our chassis#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.062 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.072 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0639dc-c27f-46b2-8a41-a44ebe918380]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.073 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad69a0f4-01 in ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.075 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapad69a0f4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.075 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dab2e782-882a-4df4-ac75-ddd170261e40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.076 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a30fe7-cbaa-4528-8395-5c6fc5aa5cb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.087 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[493e7a6f-777d-482e-ba95-aba4dd75335b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
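The agent lines just above build the metadata datapath: a veth pair is created, tapad69a0f4-00 stays in the root namespace (NetworkManager spots it a few lines further down) and tapad69a0f4-01 is moved into the ovnmeta namespace, all through privsep-wrapped pyroute2 calls. A rough pyroute2 equivalent, with names taken from the log and error handling omitted; this is a simplification for illustration, not Neutron's privileged ip_lib code:

    # Rough pyroute2 sketch of the veth-into-namespace step logged above.
    from pyroute2 import IPRoute, netns

    NS = "ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1"  # from the log

    if NS not in netns.listnetns():
        netns.create(NS)
    with IPRoute() as ipr:
        ipr.link("add", ifname="tapad69a0f4-00", kind="veth",
                 peer="tapad69a0f4-01")
        idx = ipr.link_lookup(ifname="tapad69a0f4-01")[0]
        ipr.link("set", index=idx, net_ns_fd=NS)  # move peer into namespace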
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00074|binding|INFO|Claiming lport 230cb1ef-c551-4666-88fe-e49994b798e9 for this chassis.
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00075|binding|INFO|230cb1ef-c551-4666-88fe-e49994b798e9: Claiming fa:16:3e:80:fa:cd 10.100.0.12
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00076|binding|INFO|Claiming lport 25733cd0-1d42-411e-be69-7bf3a59b5a2a for this chassis.
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00077|binding|INFO|25733cd0-1d42-411e-be69-7bf3a59b5a2a: Claiming fa:16:3e:58:c7:5e 19.80.0.38
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00078|binding|INFO|Setting lport 230cb1ef-c551-4666-88fe-e49994b798e9 up in Southbound
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00079|binding|INFO|Setting lport 25733cd0-1d42-411e-be69-7bf3a59b5a2a up in Southbound
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.100 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:fa:cd 10.100.0.12'], port_security=['fa:16:3e:80:fa:cd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1631774297', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2bb12f77-8958-446b-813d-a59f149a549b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1631774297', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '11', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=230cb1ef-c551-4666-88fe-e49994b798e9) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.102 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:c7:5e 19.80.0.38'], port_security=['fa:16:3e:58:c7:5e 19.80.0.38'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['230cb1ef-c551-4666-88fe-e49994b798e9'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-815355501', 'neutron:cidrs': '19.80.0.38/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67969051-efe2-48f0-99e2-c96ec0167864', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-815355501', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '3', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=aa62182a-1418-4867-a065-405baf63a28f, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=25733cd0-1d42-411e-be69-7bf3a59b5a2a) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.100 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b9db8a8d-79bb-43f8-bc6a-67db97930420]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.131 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec6a96a-68aa-47ad-9e06-89b65cba6c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 NetworkManager[49039]: <info>  [1764402834.1407] manager: (tapad69a0f4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.139 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[74a282e3-16a2-4280-ad9b-375d06d18980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 systemd-udevd[273478]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.177 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d185debf-f1a0-43ef-972d-f3b4cc9699a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.180 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[177edd0d-c3d8-45f9-b174-c12f964fc498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 NetworkManager[49039]: <info>  [1764402834.2056] device (tapad69a0f4-00): carrier: link connected
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.210 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f332d482-24e5-48ee-95dd-77231d333140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.227 257641 INFO nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Post operation of migration started#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.227 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4964f4-c407-44ce-bc86-e37ed744312b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588187, 'reachable_time': 22097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273497, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.244 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[be187693-3ec1-4afa-9191-9363e102c452]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:a12d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588187, 'tstamp': 588187}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273500, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
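
The RTM_NEWADDR reply above is the interface's link-local address, and it is derivable by eye: fe80::f816:3eff:feb5:a12d is the EUI-64 expansion of the port MAC fa:16:3e:b5:a1:2d (flip the universal/local bit of the first octet, fa -> f8, and splice ff:fe between the OUI and NIC halves). A small self-check, sufficient for addresses of this shape though not a full RFC 5952 formatter:

    # Sketch: EUI-64 link-local derivation matching the MAC/address pair above.
    def mac_to_link_local(mac):
        b = [int(x, 16) for x in mac.split(':')]
        b[0] ^= 0x02                              # flip universal/local bit
        e = b[:3] + [0xFF, 0xFE] + b[3:]          # splice ff:fe into the middle
        groups = ['%02x%02x' % (e[i], e[i + 1]) for i in range(0, 8, 2)]
        return 'fe80::' + ':'.join(g.lstrip('0') or '0' for g in groups)

    assert mac_to_link_local('fa:16:3e:b5:a1:2d') == 'fe80::f816:3eff:feb5:a12d'
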
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.261 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc0f7a7-beca-4d19-90d7-a3a3d0c0ddad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588187, 'reachable_time': 22097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273506, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.300 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[76c3e4ec-17ad-4448-b8ce-af6ce0eb4ce5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a8:74:d4 10.100.0.6
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a8:74:d4 10.100.0.6
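
The DHCPOFFER/DHCPACK pair above is answered by ovn-controller's pinctrl thread straight from the logical flow table, so the lease for fa:16:3e:a8:74:d4 at 10.100.0.6 completes with no dnsmasq process involved. The two line formats are regular enough to fold into finished handshakes when auditing boots; a parsing sketch over journal lines of exactly this shape (the format is what is logged here, not an OVN interface):

    # Sketch: pair pinctrl DHCPOFFER/DHCPACK lines into completed handshakes.
    import re

    PAT = re.compile(r'\|pinctrl\([^)]*\)\|INFO\|(DHCPOFFER|DHCPACK) '
                     r'([0-9a-f:]{17}) (\S+)')

    def dhcp_handshakes(lines):
        offered = {}
        for line in lines:
            m = PAT.search(line)
            if not m:
                continue
            kind, mac, ip = m.groups()
            if kind == 'DHCPOFFER':
                offered[mac] = ip
            elif offered.get(mac) == ip:      # ACK matching a prior OFFER
                yield mac, ip
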
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.363 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[41a89dd0-7dbd-45b8-ab90-03933910372d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.364 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.365 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.365 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad69a0f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:54 np0005539550 kernel: tapad69a0f4-00: entered promiscuous mode
Nov 29 02:53:54 np0005539550 NetworkManager[49039]: <info>  [1764402834.3676] manager: (tapad69a0f4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.367 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.373 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad69a0f4-00, col_values=(('external_ids', {'iface-id': '7ffec560-b868-40db-af88-b0deaaa81f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
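
The three ovsdbapp transactions above are the metadata agent plugging its veth into OVS: drop tapad69a0f4-00 from br-ex if a stale port is there (it was not, hence "Transaction caused no change"), add it to br-int, then set external_ids:iface-id so ovn-controller can bind the OVS interface to the logical port. Roughly the same sequence through ovsdbapp's public API; the socket path is illustrative, and this batches all three commands into one transaction where the agent runs three:

    # Sketch: del-port / add-port / db-set via ovsdbapp (values from the log).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapad69a0f4-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapad69a0f4-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapad69a0f4-00',
            ('external_ids',
             {'iface-id': '7ffec560-b868-40db-af88-b0deaaa81f65'})))
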
Nov 29 02:53:54 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:54Z|00080|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.374 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.388 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.389 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
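
The "Unable to access ... .pid.haproxy" debug above is the normal first-provision path: get_value_from_file finds no pid file, so the agent knows there is no existing proxy to replace and proceeds to render a fresh config. The pattern is just a tolerant read; a sketch of the idea (the helper name comes from the log line, the body is an assumption rather than neutron's exact implementation):

    # Sketch: return None when the pid file is absent instead of raising.
    def get_value_from_file(path, converter=int):
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None    # Errno 2 on first run: no haproxy to replace yet

    pid = get_value_from_file('/var/lib/neutron/external/pids/'
                              'ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy')
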
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.390 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e77e00eb-90e6-4fcf-998b-42e91ff21924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.391 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.pid.haproxy
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
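
The rendered config above binds the metadata address 169.254.169.254:80 inside the ovnmeta- namespace, proxies to the agent's unix socket (a bare /path in an haproxy server line is a unix socket address), and tags every request with X-OVN-Network-ID so the metadata service can map it back to the network. haproxy can vet a file like this before it is ever execed; a sketch using its built-in check mode, with the config path taken from the command that follows:

    # Sketch: syntax-check the generated haproxy config with `haproxy -c`.
    import subprocess

    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'ad69a0f4-0000-474b-9649-72cf1bf9f5c1.conf')
    res = subprocess.run(['haproxy', '-c', '-f', cfg],
                         capture_output=True, text=True)
    print('config OK' if res.returncode == 0 else res.stderr)
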
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.392 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'env', 'PROCESS_TAG=haproxy-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad69a0f4-0000-474b-9649-72cf1bf9f5c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:53:54 np0005539550 podman[273541]: 2025-11-29 07:53:54.456960678 +0000 UTC m=+0.052030533 container create 31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jennings, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.466 257641 DEBUG nova.network.neutron [-] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.486 257641 INFO nova.compute.manager [-] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Took 3.27 seconds to deallocate network for instance.#033[00m
Nov 29 02:53:54 np0005539550 systemd[1]: Started libpod-conmon-31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8.scope.
Nov 29 02:53:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:54 np0005539550 podman[273541]: 2025-11-29 07:53:54.438549706 +0000 UTC m=+0.033619581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.538 257641 DEBUG nova.compute.manager [req-d702ef4a-887e-4756-8383-228e7d74f834 req-0fe6d6e6-aac3-4496-bd7f-fd9051375a46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-deleted-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:54 np0005539550 podman[273541]: 2025-11-29 07:53:54.54031372 +0000 UTC m=+0.135383655 container init 31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jennings, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.542 257641 INFO nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Post operation of migration started#033[00m
Nov 29 02:53:54 np0005539550 podman[273541]: 2025-11-29 07:53:54.551202351 +0000 UTC m=+0.146272206 container start 31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jennings, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 02:53:54 np0005539550 podman[273541]: 2025-11-29 07:53:54.553844922 +0000 UTC m=+0.148914887 container attach 31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:54 np0005539550 systemd[1]: libpod-31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8.scope: Deactivated successfully.
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.571 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.571 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:54 np0005539550 interesting_jennings[273561]: 167 167
Nov 29 02:53:54 np0005539550 conmon[273561]: conmon 31cd58fd1df7ebc3f791 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8.scope/container/memory.events
Nov 29 02:53:54 np0005539550 podman[273541]: 2025-11-29 07:53:54.573474757 +0000 UTC m=+0.168544622 container died 31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.582 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cb869deb581bbcb5bb5c19a2db47635c22dc1ee53f8fbe769af08c5ca3b8ab67-merged.mount: Deactivated successfully.
Nov 29 02:53:54 np0005539550 podman[273541]: 2025-11-29 07:53:54.621496342 +0000 UTC m=+0.216566197 container remove 31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jennings, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:53:54 np0005539550 systemd[1]: libpod-conmon-31cd58fd1df7ebc3f791da28fd711978f47ea60b65cfbd40ad652320380000a8.scope: Deactivated successfully.
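
The interesting_jennings container above is a short-lived cephadm probe: create, init, start, attach, a single line of output ("167 167", matching the fixed ceph UID/GID on Enterprise Linux), died, and remove, all inside roughly 0.2 s by the m=+ monotonic offsets; the conmon cgroup warning just means the scope was torn down while conmon was still watching memory.events. Every podman line carries the event verb and the 64-hex container ID, so the churn condenses easily; a parsing sketch over journal lines of this shape:

    # Sketch: condense podman container events into per-container lifecycles.
    import re
    from collections import defaultdict

    EVENT = re.compile(r'container (\w+) ([0-9a-f]{64})')

    def lifecycles(lines):
        seen = defaultdict(list)
        for line in lines:
            m = EVENT.search(line)
            if m:
                verb, cid = m.groups()
                seen[cid].append(verb)
        return {cid[:12]: ' -> '.join(v) for cid, v in seen.items()}

    # e.g. {'31cd58fd1df7': 'create -> init -> start -> attach -> died -> remove'}
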
Nov 29 02:53:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 207 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.1 MiB/s wr, 336 op/s
Nov 29 02:53:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:54.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.693 257641 DEBUG oslo_concurrency.processutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.722 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.723 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquired lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.723 257641 DEBUG nova.network.neutron [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:53:54 np0005539550 podman[273607]: 2025-11-29 07:53:54.8048769 +0000 UTC m=+0.044873742 container create 617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:53:54 np0005539550 podman[273604]: 2025-11-29 07:53:54.819998325 +0000 UTC m=+0.064196569 container create 0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:53:54 np0005539550 systemd[1]: Started libpod-conmon-617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64.scope.
Nov 29 02:53:54 np0005539550 systemd[1]: Started libpod-conmon-0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b.scope.
Nov 29 02:53:54 np0005539550 podman[273604]: 2025-11-29 07:53:54.783397895 +0000 UTC m=+0.027596169 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:53:54 np0005539550 podman[273607]: 2025-11-29 07:53:54.784299079 +0000 UTC m=+0.024295931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.898 257641 DEBUG nova.compute.manager [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.899 257641 DEBUG oslo_concurrency.lockutils [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.899 257641 DEBUG oslo_concurrency.lockutils [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.899 257641 DEBUG oslo_concurrency.lockutils [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.900 257641 DEBUG nova.compute.manager [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] No waiting events found dispatching network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:53:54 np0005539550 nova_compute[257631]: 2025-11-29 07:53:54.900 257641 WARNING nova.compute.manager [req-96a0a1e2-6cc9-4502-bf1d-965381108e77 req-d23787c0-fa91-49fc-a34b-4e3578551af4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Received unexpected event network-vif-plugged-60b6c6b1-2f34-40e6-b2e4-53a564e65b87 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 02:53:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ccce0fe786a30644c2110ee8f4ba60ef970e1091e294a91f013bfdb27c56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ccce0fe786a30644c2110ee8f4ba60ef970e1091e294a91f013bfdb27c56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ccce0fe786a30644c2110ee8f4ba60ef970e1091e294a91f013bfdb27c56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63d34bacd8bcea6462ae34e4029eb4a655aed6678d0c62d69b2330cba30c1668/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ccce0fe786a30644c2110ee8f4ba60ef970e1091e294a91f013bfdb27c56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:54 np0005539550 podman[273607]: 2025-11-29 07:53:54.918717767 +0000 UTC m=+0.158714609 container init 617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:53:54 np0005539550 podman[273604]: 2025-11-29 07:53:54.928065297 +0000 UTC m=+0.172263551 container init 0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:53:54 np0005539550 podman[273607]: 2025-11-29 07:53:54.929006803 +0000 UTC m=+0.169003645 container start 617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_knuth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:53:54 np0005539550 podman[273607]: 2025-11-29 07:53:54.932496326 +0000 UTC m=+0.172493158 container attach 617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_knuth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:53:54 np0005539550 podman[273604]: 2025-11-29 07:53:54.93488849 +0000 UTC m=+0.179086744 container start 0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 02:53:54 np0005539550 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[273657]: [NOTICE]   (273664) : New worker (273666) forked
Nov 29 02:53:54 np0005539550 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[273657]: [NOTICE]   (273664) : Loading success.
Nov 29 02:53:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.994 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f3deccbd-dd81-439e-9ba4-ebc80268aa7a in datapath 54dd896a-93d6-4056-93b9-fe4c87eb0b97 unbound from our chassis#033[00m
Nov 29 02:53:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:54.996 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 54dd896a-93d6-4056-93b9-fe4c87eb0b97#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.006 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1fb49e-5b8b-41b3-acda-5cffbd235e78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.007 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap54dd896a-91 in ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.009 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap54dd896a-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.010 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[21aa12f8-53a4-47f5-8494-fa1010a48947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.010 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d84bf8-038d-4c37-9a54-9de0218d77a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.024 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[41735324-0695-406a-9571-2bb9b06080a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.047 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[673c61df-3bef-4322-a31e-d039a670b5c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
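
The ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0) reply above is the (stdout, stderr, exit code) of a sysctl invocation made while preparing the namespace; promote_secondaries keeps secondary addresses from being flushed when a primary is deleted. Any such key is also reachable as a procfs file, so the same check works without shelling out; a sketch, with the path derived mechanically from the key:

    # Sketch: read or set a sysctl via /proc/sys instead of the sysctl binary.
    def sysctl(key, value=None):
        path = '/proc/sys/' + key.replace('.', '/')
        if value is not None:
            with open(path, 'w') as f:   # writing requires root privileges
                f.write(str(value))
        with open(path) as f:
            return f.read().strip()

    # sysctl('net.ipv4.conf.all.promote_secondaries') -> '1'
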
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.082 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9f507cd1-11ce-457b-9d44-6cd72a985a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.091 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f977d16f-9844-4ca3-a06f-0707256aaae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 NetworkManager[49039]: <info>  [1764402835.0933] manager: (tap54dd896a-90): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 02:53:55 np0005539550 systemd-udevd[273488]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.115 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.116 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.116 257641 DEBUG nova.network.neutron [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.136 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[83c1d78a-3b32-40cd-9ef6-6d517dfb72cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.140 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[813ac1f1-b3e4-444d-a90b-7e910468fc90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:53:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294861864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:53:55 np0005539550 NetworkManager[49039]: <info>  [1764402835.1691] device (tap54dd896a-90): carrier: link connected
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.172 257641 DEBUG oslo_concurrency.processutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
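
The `ceph df --format=json` run nova launched at 07:53:54.693 returns here after 0.480 s; the resource tracker uses its output to size the RBD-backed DISK_GB inventory. The standalone equivalent, with the cluster options copied from the logged command line:

    # Sketch: the same `ceph df` query, parsed for cluster-wide capacity.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(round(stats['total_bytes'] / 2**30, 1), 'GiB total,',
          round(stats['total_avail_bytes'] / 2**30, 1), 'GiB avail')
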
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.177 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ace65533-9011-46c2-be46-d42eaa74be52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.179 257641 DEBUG nova.compute.provider_tree [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.193 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ed533a67-e7c6-439e-a765-b3c6213f95bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54dd896a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:bb:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588283, 'reachable_time': 24580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273687, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.198 257641 DEBUG nova.scheduler.client.report [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
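
Worked numbers for the inventory record above, using placement's capacity rule of (total - reserved) x allocation_ratio: (8 - 0) VCPU x 4.0 = 32 schedulable vCPUs, (7680 - 512) MB x 1.0 = 7168 MB of RAM, and (20 - 1) GB x 0.9 = 17.1 GB of disk. In sketch form:

    # Sketch: placement's effective capacity for the inventory logged above.
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, round(cap, 1))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
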
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.207 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a042ff21-7d06-4a91-9b0e-75bff63859ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:bbff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588283, 'tstamp': 588283}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273688, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:55.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.223 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.224 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f13d8fa-5040-4388-9f01-b39d0cde3120]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap54dd896a-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:bb:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588283, 'reachable_time': 24580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273689, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.253 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eee11cf2-5758-4c35-9beb-722f60a73b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.254 257641 INFO nova.scheduler.client.report [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Deleted allocations for instance 3fde18f7-843a-48d8-b394-c299cf479c37#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.313 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1748ea-5dd9-4f1f-a0b0-ce777783495d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.314 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54dd896a-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.314 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.315 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54dd896a-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.318 257641 DEBUG oslo_concurrency.lockutils [None req-57cd8146-0e94-4530-9eda-aac46ae3c02e 7f80aba0abfa403b80928e251377a7cd ba323f7dc95a4f11911e6559a1b3c99e - - default default] Lock "3fde18f7-843a-48d8-b394-c299cf479c37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:55 np0005539550 kernel: tap54dd896a-90: entered promiscuous mode
Nov 29 02:53:55 np0005539550 NetworkManager[49039]: <info>  [1764402835.3676] manager: (tap54dd896a-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.371 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap54dd896a-90, col_values=(('external_ids', {'iface-id': 'a9eac8df-57ef-4a9b-91fd-9eb356860a2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:55 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:55Z|00081|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.381 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.391 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.391 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/54dd896a-93d6-4056-93b9-fe4c87eb0b97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/54dd896a-93d6-4056-93b9-fe4c87eb0b97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.393 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[41fcd623-21e6-48e9-9c5f-d943d6ee471d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.394 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-54dd896a-93d6-4056-93b9-fe4c87eb0b97
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/54dd896a-93d6-4056-93b9-fe4c87eb0b97.pid.haproxy
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 54dd896a-93d6-4056-93b9-fe4c87eb0b97
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.394 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'env', 'PROCESS_TAG=haproxy-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/54dd896a-93d6-4056-93b9-fe4c87eb0b97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:55 np0005539550 nova_compute[257631]: 2025-11-29 07:53:55.589 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:55 np0005539550 podman[273726]: 2025-11-29 07:53:55.775218304 +0000 UTC m=+0.055469556 container create 311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 02:53:55 np0005539550 systemd[1]: Started libpod-conmon-311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66.scope.
Nov 29 02:53:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:55 np0005539550 podman[273726]: 2025-11-29 07:53:55.745141129 +0000 UTC m=+0.025392401 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:53:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09bab5037a0090848ec63c05bab99fb5a730cc9c26542ea0b05dfa2cfe1db75b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]: {
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]:        "osd_id": 0,
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]:        "type": "bluestore"
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]:    }
Nov 29 02:53:55 np0005539550 nervous_knuth[273655]: }
Nov 29 02:53:55 np0005539550 podman[273726]: 2025-11-29 07:53:55.860577338 +0000 UTC m=+0.140828620 container init 311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 02:53:55 np0005539550 podman[273726]: 2025-11-29 07:53:55.866503497 +0000 UTC m=+0.146754749 container start 311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 02:53:55 np0005539550 systemd[1]: libpod-617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64.scope: Deactivated successfully.
Nov 29 02:53:55 np0005539550 podman[273607]: 2025-11-29 07:53:55.882544376 +0000 UTC m=+1.122541248 container died 617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:53:55 np0005539550 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[273750]: [NOTICE]   (273757) : New worker (273760) forked
Nov 29 02:53:55 np0005539550 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[273750]: [NOTICE]   (273757) : Loading success.
Nov 29 02:53:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0c65ccce0fe786a30644c2110ee8f4ba60ef970e1091e294a91f013bfdb27c56-merged.mount: Deactivated successfully.
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.930 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 230cb1ef-c551-4666-88fe-e49994b798e9 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d unbound from our chassis#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.933 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d#033[00m
Nov 29 02:53:55 np0005539550 podman[273607]: 2025-11-29 07:53:55.940614731 +0000 UTC m=+1.180611573 container remove 617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_knuth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.945 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1b251bfa-6e22-4e7b-92cb-49bdc8de6757]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.946 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7a06a21a-b1 in ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.948 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7a06a21a-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.948 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0cccd1ea-d458-4cf3-912a-897ef23c1c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 systemd[1]: libpod-conmon-617505f55d7c51333050ed847ec890584b2eeb8772caf779ea5914f292cf2e64.scope: Deactivated successfully.
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.949 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[da7f0413-ace4-4d2a-9dcf-9f23af32084a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.959 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfc16f8-f97a-4123-9294-2f639f58b0b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.972 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3139bb-cb0c-4455-b102-71d67cbd1bca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:55.999 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d21083e2-3f75-4fa1-8bd2-fd0f6c2dec3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 NetworkManager[49039]: <info>  [1764402836.0073] manager: (tap7a06a21a-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.006 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e22208-64a4-4579-8c7c-fd195de220c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.036 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7940ae88-782a-45a5-932b-394474cbf8c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.040 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[af76e733-c429-41fe-af07-cace0fd47410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539550 NetworkManager[49039]: <info>  [1764402836.0617] device (tap7a06a21a-b0): carrier: link connected
Nov 29 02:53:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.066 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb9f577-27a9-4708-ba47-1c7835fc6ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.083 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f6afa1ed-b7ef-439f-8d1d-a36b9b5d27ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a06a21a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:44:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588372, 'reachable_time': 32771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273790, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.099 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3e2d68-0a51-4ef2-8d70-b325f8eaaeed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe69:44a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588372, 'tstamp': 588372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273791, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 14e39f66-ccba-492b-bf7a-7d59e32bd60b does not exist
Nov 29 02:53:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 94e67a11-0dcb-44d7-bd99-3c24f003da1f does not exist
Nov 29 02:53:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6fa0fba4-06d9-4e9e-8826-eb7f5cc5160c does not exist
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.115 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b1185d8c-e1db-4cfe-9084-0fc0526362fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a06a21a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:44:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588372, 'reachable_time': 32771, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273792, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.149 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b10d9edd-d9a8-48ed-9f03-236120b164ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.203 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[66cefbe4-ffb9-4e90-a40b-9296ce0145c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.205 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.205 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.205 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a06a21a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:56 np0005539550 kernel: tap7a06a21a-b0: entered promiscuous mode
Nov 29 02:53:56 np0005539550 NetworkManager[49039]: <info>  [1764402836.2084] manager: (tap7a06a21a-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.210 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a06a21a-b0, col_values=(('external_ids', {'iface-id': '2b822f56-587d-4c36-9c9a-d54b62b2616c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:56 np0005539550 nova_compute[257631]: 2025-11-29 07:53:56.211 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:56 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:56Z|00082|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:53:56 np0005539550 nova_compute[257631]: 2025-11-29 07:53:56.227 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.229 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.230 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe75c3f-fca3-4c50-b703-997faa6ecd02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.230 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-7a06a21a-ba04-4a14-8d62-c931cbbf124d
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/7a06a21a-ba04-4a14-8d62-c931cbbf124d.pid.haproxy
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 7a06a21a-ba04-4a14-8d62-c931cbbf124d
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.231 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'env', 'PROCESS_TAG=haproxy-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7a06a21a-ba04-4a14-8d62-c931cbbf124d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 02:53:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:53:56 np0005539550 podman[273874]: 2025-11-29 07:53:56.584391463 +0000 UTC m=+0.045278353 container create e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 02:53:56 np0005539550 systemd[1]: Started libpod-conmon-e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed.scope.
Nov 29 02:53:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2f571f500955a172e8e6b0bae006a4a98b01b33455000d6b3ce954d49497ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:56 np0005539550 podman[273874]: 2025-11-29 07:53:56.56151021 +0000 UTC m=+0.022397110 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:53:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 184 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.7 MiB/s wr, 200 op/s
Nov 29 02:53:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:56.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:56 np0005539550 podman[273874]: 2025-11-29 07:53:56.67168436 +0000 UTC m=+0.132571270 container init e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 02:53:56 np0005539550 podman[273874]: 2025-11-29 07:53:56.677614938 +0000 UTC m=+0.138501818 container start e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 02:53:56 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [NOTICE]   (273893) : New worker (273895) forked
Nov 29 02:53:56 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [NOTICE]   (273893) : Loading success.
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.737 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 25733cd0-1d42-411e-be69-7bf3a59b5a2a in datapath 67969051-efe2-48f0-99e2-c96ec0167864 unbound from our chassis#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.739 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67969051-efe2-48f0-99e2-c96ec0167864#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.751 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9080863f-eea8-4254-8074-d3fe0ebdf04a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.752 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67969051-e1 in ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.755 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67969051-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.755 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1bbdc2e2-df81-43d3-b6c0-723f332c568c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.756 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8eea1d-51d9-4737-9382-c611de8c52e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.767 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[248e2fe6-de0c-413e-8e9b-788ce0e331fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.780 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c6dfed03-3b91-46d0-ba2a-ed300d137e4e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.807 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e2090442-b0c0-4876-920d-1c323e701476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.813 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[24d42eb0-8550-45d9-976a-477548b28022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 NetworkManager[49039]: <info>  [1764402836.8142] manager: (tap67969051-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.845 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ad1f90-61ab-4525-927e-e48b6c9f955f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.848 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b44c7427-a477-4eda-9c7c-c697f1d8c273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 NetworkManager[49039]: <info>  [1764402836.8687] device (tap67969051-e0): carrier: link connected
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.873 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0149da9b-cc33-4eae-a48e-9daa8025cfda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.887 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a33d59-c963-4bd7-8654-b44ba12a9f2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67969051-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:b4:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588453, 'reachable_time': 22976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273914, 'error': None, 'target': 'ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.901 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d17677-5716-4b99-a1d8-8a5c96ee32c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:b4f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588453, 'tstamp': 588453}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273915, 'error': None, 'target': 'ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.917 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d7bc4f82-fbc6-4dff-b61e-754b413a182c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67969051-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:b4:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588453, 'reachable_time': 22976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273916, 'error': None, 'target': 'ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.945 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6212b15a-4ace-4453-86da-6a724d0ff353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 nova_compute[257631]: 2025-11-29 07:53:56.957 257641 DEBUG nova.network.neutron [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updating instance_info_cache with network_info: [{"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:56 np0005539550 nova_compute[257631]: 2025-11-29 07:53:56.991 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.998 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a409d1ac-f92b-40cf-b5af-07096f9ce402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.999 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67969051-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.999 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:56.999 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67969051-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.001 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:57 np0005539550 NetworkManager[49039]: <info>  [1764402837.0020] manager: (tap67969051-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 02:53:57 np0005539550 kernel: tap67969051-e0: entered promiscuous mode
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.005 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:57.006 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67969051-e0, col_values=(('external_ids', {'iface-id': 'f4400efd-54c7-4734-b939-0d7fcbfba020'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
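The three OVSDB transactions above (DelPortCommand, AddPortCommand, DbSetCommand) re-plug the metadata tap from br-ex into br-int and tag it with the logical port it should bind to; ovn-controller reacts to that iface-id in the very next line. A rough CLI equivalent, sketched through subprocess (the agent itself speaks OVSDB directly via ovsdbapp, so this is only an approximation):

    import subprocess

    tap = "tap67969051-e0"
    iface_id = "f4400efd-54c7-4734-b939-0d7fcbfba020"
    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", tap],   # DelPortCommand(if_exists=True)
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", tap],  # AddPortCommand(may_exist=True)
        ["ovs-vsctl", "set", "Interface", tap,                    # DbSetCommand on external_ids
         "external_ids:iface-id=" + iface_id],
    ):
        subprocess.run(cmd, check=True)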
Nov 29 02:53:57 np0005539550 ovn_controller[148680]: 2025-11-29T07:53:57Z|00083|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.008 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.009 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.009 257641 DEBUG oslo_concurrency.lockutils [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.009 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.013 257641 INFO nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:53:57 np0005539550 virtqemud[256287]: Domain id=7 name='instance-0000000d' uuid=2bb12f77-8958-446b-813d-a59f149a549b is tainted: custom-monitor
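announce-self asks QEMU to re-send gratuitous/RARP announcements from the guest's vNICs so the fabric relearns its MACs on this host after live migration; the driver retries up to three times (attempts 2 and 3 follow below). It is delivered as a raw monitor command, which is exactly why libvirt flags the domain "tainted: custom-monitor". Roughly the same thing by hand, via virsh (domain name taken from the log line above):

    import subprocess

    # Send the QMP announce-self command to the freshly migrated domain.
    subprocess.run(["virsh", "qemu-monitor-command", "instance-0000000d",
                    '{"execute": "announce-self"}'], check=True)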
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.022 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:57.024 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67969051-efe2-48f0-99e2-c96ec0167864.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67969051-efe2-48f0-99e2-c96ec0167864.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:57.025 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[378c9a1a-b872-4967-9c41-44576cc0aae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:57.026 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-67969051-efe2-48f0-99e2-c96ec0167864
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/67969051-efe2-48f0-99e2-c96ec0167864.pid.haproxy
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 67969051-efe2-48f0-99e2-c96ec0167864
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:53:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:53:57.026 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864', 'env', 'PROCESS_TAG=haproxy-67969051-efe2-48f0-99e2-c96ec0167864', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67969051-efe2-48f0-99e2-c96ec0167864.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
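The generated config binds 169.254.169.254:80 inside the ovnmeta- namespace, proxies to a UNIX socket backend (in haproxy a server address starting with "/" is taken as a socket path, here /var/lib/neutron/metadata_proxy), and stamps each request with X-OVN-Network-ID so the metadata service can tell which network it came from. Stripped of the rootwrap and PROCESS_TAG plumbing shown above, the launch reduces to roughly the following (requires root; haproxy backgrounds itself because of the "daemon" keyword):

    import subprocess

    net = "67969051-efe2-48f0-99e2-c96ec0167864"
    subprocess.run(
        ["ip", "netns", "exec", "ovnmeta-" + net, "haproxy", "-f",
         "/var/lib/neutron/ovn-metadata-proxy/" + net + ".conf"],
        check=True)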
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.167 257641 DEBUG nova.network.neutron [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating instance_info_cache with network_info: [{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.188 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Releasing lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.205 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.206 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.206 257641 DEBUG oslo_concurrency.lockutils [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:53:57 np0005539550 nova_compute[257631]: 2025-11-29 07:53:57.211 257641 INFO nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:53:57 np0005539550 virtqemud[256287]: Domain id=8 name='instance-00000010' uuid=9bef976c-2981-4d19-aa60-8a550b7093ca is tainted: custom-monitor
Nov 29 02:53:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:57.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
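This radosgw triple (starting new request / req done / beast access line) repeats every second or two for the rest of the section: always an anonymous "HEAD /" answered 200, alternating between clients 192.168.122.100 and 192.168.122.102, which is the signature of load-balancer health checks rather than user traffic. The probe itself amounts to no more than this (endpoint host and port are assumptions; the access log only records the client address):

    import http.client

    conn = http.client.HTTPConnection("rgw.example.com", 8080, timeout=2)  # assumed endpoint
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 means the gateway is serving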
Nov 29 02:53:57 np0005539550 podman[273949]: 2025-11-29 07:53:57.389871773 +0000 UTC m=+0.048200001 container create 2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:53:57 np0005539550 systemd[1]: Started libpod-conmon-2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b.scope.
Nov 29 02:53:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:53:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85bc4e50a54bdcc0b1d77014c9144ca55da73dd4688d4a6122302f4ea5065f81/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:53:57 np0005539550 podman[273949]: 2025-11-29 07:53:57.456282251 +0000 UTC m=+0.114610499 container init 2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:53:57 np0005539550 podman[273949]: 2025-11-29 07:53:57.366622651 +0000 UTC m=+0.024950899 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:53:57 np0005539550 podman[273949]: 2025-11-29 07:53:57.462028945 +0000 UTC m=+0.120357173 container start 2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 02:53:57 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [NOTICE]   (273968) : New worker (273970) forked
Nov 29 02:53:57 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [NOTICE]   (273968) : Loading success.
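In this podified deployment the haproxy command launched via rootwrap at 07:53:57.026 is evidently a wrapper: podman creates and starts a dedicated neutron-haproxy-ovnmeta-<network> container from the metadata-agent image, and the two NOTICE lines are the haproxy master (273968) forking its worker (273970) inside it. One way to confirm it stayed up afterwards:

    import subprocess

    name = "neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864"
    subprocess.run(["podman", "ps", "--filter", "name=" + name,
                    "--format", "{{.Names}} {{.Status}}"], check=True)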
Nov 29 02:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:58 np0005539550 nova_compute[257631]: 2025-11-29 07:53:58.023 257641 INFO nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 02:53:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:53:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:53:58 np0005539550 nova_compute[257631]: 2025-11-29 07:53:58.217 257641 INFO nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 02:53:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 689 KiB/s rd, 4.3 MiB/s wr, 181 op/s
Nov 29 02:53:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:53:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:53:58.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:53:58 np0005539550 nova_compute[257631]: 2025-11-29 07:53:58.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:53:58 np0005539550 nova_compute[257631]: 2025-11-29 07:53:58.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 02:53:58 np0005539550 nova_compute[257631]: 2025-11-29 07:53:58.942 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 02:53:59 np0005539550 nova_compute[257631]: 2025-11-29 07:53:59.030 257641 INFO nova.virt.libvirt.driver [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 02:53:59 np0005539550 nova_compute[257631]: 2025-11-29 07:53:59.035 257641 DEBUG nova.compute.manager [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:59 np0005539550 nova_compute[257631]: 2025-11-29 07:53:59.053 257641 DEBUG nova.objects.instance [None req-88df9dfe-b391-4052-9fac-e30fecd88ab9 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:53:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:53:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:53:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:53:59.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:53:59 np0005539550 nova_compute[257631]: 2025-11-29 07:53:59.225 257641 INFO nova.virt.libvirt.driver [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 02:53:59 np0005539550 nova_compute[257631]: 2025-11-29 07:53:59.230 257641 DEBUG nova.compute.manager [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:53:59 np0005539550 nova_compute[257631]: 2025-11-29 07:53:59.247 257641 DEBUG nova.objects.instance [None req-6bac62d9-af53-4455-85ba-2348d89e71bf 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:53:59
Nov 29 02:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control', 'backups', '.mgr']
Nov 29 02:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:53:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:00 np0005539550 nova_compute[257631]: 2025-11-29 07:54:00.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:00 np0005539550 nova_compute[257631]: 2025-11-29 07:54:00.590 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 687 KiB/s rd, 4.3 MiB/s wr, 178 op/s
Nov 29 02:54:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:00 np0005539550 nova_compute[257631]: 2025-11-29 07:54:00.854 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:01.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:01 np0005539550 podman[273981]: 2025-11-29 07:54:01.37714447 +0000 UTC m=+0.110995812 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
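The health_status=healthy field above comes from podman's periodic healthcheck, which executes the 'test' command declared in config_data ('/openstack/healthcheck', bind-mounted read-only into the container) and tracks a failing streak. The same check can be run on demand; exit status 0 means healthy:

    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")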
Nov 29 02:54:01 np0005539550 nova_compute[257631]: 2025-11-29 07:54:01.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:01 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:01Z|00084|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:54:01 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:01Z|00085|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:54:01 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:01Z|00086|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:54:01 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:01Z|00087|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:54:01 np0005539550 nova_compute[257631]: 2025-11-29 07:54:01.983 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:02 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:02Z|00088|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:54:02 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:02Z|00089|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:54:02 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:02Z|00090|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:54:02 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:02Z|00091|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:54:02 np0005539550 nova_compute[257631]: 2025-11-29 07:54:02.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 649 KiB/s rd, 3.7 MiB/s wr, 164 op/s
Nov 29 02:54:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:03.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:03 np0005539550 nova_compute[257631]: 2025-11-29 07:54:03.932 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:03 np0005539550 nova_compute[257631]: 2025-11-29 07:54:03.962 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:03 np0005539550 nova_compute[257631]: 2025-11-29 07:54:03.963 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:03 np0005539550 nova_compute[257631]: 2025-11-29 07:54:03.963 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:03 np0005539550 nova_compute[257631]: 2025-11-29 07:54:03.963 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:54:03 np0005539550 nova_compute[257631]: 2025-11-29 07:54:03.964 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770817273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.420 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
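The resource audit shells out to ceph df (command and its 0.456 s runtime logged verbatim above) to size the RBD-backed storage rather than trusting the local filesystem. A sketch of consuming that output; the stats key names are the usual ceph df JSON fields, noted here as an assumption:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("total GiB: %.1f  avail GiB: %.1f"
          % (stats["total_bytes"] / 2**30, stats["total_avail_bytes"] / 2**30))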
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.495 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.496 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.499 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.499 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:54:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 649 KiB/s rd, 3.7 MiB/s wr, 165 op/s
Nov 29 02:54:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:04.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.713 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.714 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4396MB free_disk=20.8974609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.714 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:04 np0005539550 nova_compute[257631]: 2025-11-29 07:54:04.715 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.031 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 2bb12f77-8958-446b-813d-a59f149a549b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.031 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 9bef976c-2981-4d19-aa60-8a550b7093ca actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.032 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.032 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
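This final view squares with the two instances audited just above: each holds {VCPU: 1, MEMORY_MB: 128, DISK_GB: 1} in placement, and nova folds its 512 MB host-memory reservation into usage, so used_ram = 512 + 2 × 128 = 768 MB, used_disk = 2 × 1 = 2 GB, and used_vcpus = 2 of 8 (hence the 6 free vcpus in the hypervisor view at 07:54:04.714).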
Nov 29 02:54:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:05.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.281 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.511 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.547 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402830.544316, 3fde18f7-843a-48d8-b394-c299cf479c37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.548 257641 INFO nova.compute.manager [-] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.565 257641 DEBUG nova.compute.manager [None req-99ff0334-6a92-4300-aafb-34a75b772318 - - - - - -] [instance: 3fde18f7-843a-48d8-b394-c299cf479c37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.591 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913128521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.724 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.729 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.748 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
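Nothing needs resyncing since the inventory is unchanged; for reference, placement turns such a record into schedulable capacity as (total − reserved) × allocation_ratio (the standard formula, stated here as background): VCPU (8 − 0) × 4.0 = 32, MEMORY_MB (7680 − 512) × 1.0 = 7168, DISK_GB (20 − 1) × 0.9 ≈ 17 GB.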
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.770 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.770 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.771 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:05 np0005539550 nova_compute[257631]: 2025-11-29 07:54:05.771 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 02:54:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 1.2 MiB/s wr, 68 op/s
Nov 29 02:54:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:06.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:06 np0005539550 nova_compute[257631]: 2025-11-29 07:54:06.777 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:06 np0005539550 nova_compute[257631]: 2025-11-29 07:54:06.778 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:54:06 np0005539550 nova_compute[257631]: 2025-11-29 07:54:06.778 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 02:54:06 np0005539550 nova_compute[257631]: 2025-11-29 07:54:06.981 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:06 np0005539550 nova_compute[257631]: 2025-11-29 07:54:06.981 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:06 np0005539550 nova_compute[257631]: 2025-11-29 07:54:06.981 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:54:06 np0005539550 nova_compute[257631]: 2025-11-29 07:54:06.982 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2bb12f77-8958-446b-813d-a59f149a549b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:07.103 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:07 np0005539550 nova_compute[257631]: 2025-11-29 07:54:07.104 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:07.105 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:54:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:07.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 602 KiB/s wr, 26 op/s
Nov 29 02:54:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:08.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.210 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updating instance_info_cache with network_info: [{"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:09.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.242 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-2bb12f77-8958-446b-813d-a59f149a549b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.243 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.243 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.243 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.244 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.244 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.244 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.244 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:09 np0005539550 nova_compute[257631]: 2025-11-29 07:54:09.244 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:54:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:10.106 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
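This write closes the loop opened at 07:54:07: northd bumped nb_cfg from 7 to 8 in SB_Global, the agent sat out the 3-second delay logged at 07:54:07.105 (spread out so every chassis does not write at once), and it now acks by copying the value into its Chassis_Private external_ids. Neutron reads neutron:ovn-metadata-sb-cfg back as the metadata agent's liveness heartbeat. Inspecting it by hand (record UUID taken from the log line above):

    import subprocess

    subprocess.run(["ovn-sbctl", "--columns=external_ids", "list",
                    "Chassis_Private", "a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8"],
                   check=True)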
Nov 29 02:54:10 np0005539550 nova_compute[257631]: 2025-11-29 07:54:10.381 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:10 np0005539550 nova_compute[257631]: 2025-11-29 07:54:10.382 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:10 np0005539550 nova_compute[257631]: 2025-11-29 07:54:10.512 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:10 np0005539550 nova_compute[257631]: 2025-11-29 07:54:10.594 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 17 KiB/s wr, 14 op/s
Nov 29 02:54:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:10.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:11.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 16 KiB/s wr, 14 op/s
Nov 29 02:54:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:12.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 200 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 18 KiB/s wr, 15 op/s
Nov 29 02:54:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:14.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:15 np0005539550 nova_compute[257631]: 2025-11-29 07:54:15.515 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:15 np0005539550 nova_compute[257631]: 2025-11-29 07:54:15.596 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 235 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.1 MiB/s wr, 59 op/s
Nov 29 02:54:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:16.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 320 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 84 op/s
Nov 29 02:54:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:18.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:18.926 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:18.926 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:18.927 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
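
The Acquiring/acquired/"released" triple above is oslo.concurrency's standard lock tracing around ProcessMonitor._check_child_processes. Any function wrapped with lockutils.synchronized emits the same three DEBUG lines; a minimal sketch (lock name reused from the log, body illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Critical section: one caller at a time within this process.
        # Entry and exit produce the "acquired"/"released" lines above.
        pass

    check_child_processes()
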
Nov 29 02:54:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:19 np0005539550 podman[274115]: 2025-11-29 07:54:19.313265173 +0000 UTC m=+0.053347879 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:54:19 np0005539550 podman[274116]: 2025-11-29 07:54:19.337395209 +0000 UTC m=+0.076096708 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
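
Both container health_status events above come from podman's healthcheck timers running the mounted /openstack/healthcheck script; health_status=healthy with health_failing_streak=0 means the latest run exited 0. The recorded state can be read back with podman inspect (invoked from Python here to keep one language throughout; container name taken from the log):

    import subprocess

    # Query the health state podman recorded for ovn_metadata_agent
    # (the same field the health_status= journal events summarize).
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # e.g. "healthy"
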
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0050127663781872325 of space, bias 1.0, pg target 1.5038299134561697 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0019776617462311558 of space, bias 1.0, pg target 0.5913208621231156 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:54:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
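
Each pool's "pg target" above is consistent with usage_ratio x bias x a cluster-wide PG budget of roughly 300 (3 OSDs at the default 100 PGs per OSD), then quantized to a power of two and clamped at the pool's floor, which is why every near-empty pool still sits at 16 or 32 PGs. Checking the 'vms' line's arithmetic (formula shape inferred from the logged numbers, so treat it as an approximation):

    # Reproduce the autoscaler's 'vms' computation from the logged values.
    usage_ratio = 0.0050127663781872325   # "using ... of space"
    bias = 1.0
    pg_budget = 300                        # ~3 OSDs * 100 target PGs/OSD
    raw_target = usage_ratio * bias * pg_budget
    print(raw_target)  # ~1.5038, matching "pg target 1.5038..." above
    # Quantized to a power of two and clamped at the pool minimum -> 32.
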
Nov 29 02:54:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:20 np0005539550 nova_compute[257631]: 2025-11-29 07:54:20.517 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:20 np0005539550 nova_compute[257631]: 2025-11-29 07:54:20.598 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 339 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 5.3 MiB/s wr, 108 op/s
Nov 29 02:54:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:20.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 339 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 5.3 MiB/s wr, 96 op/s
Nov 29 02:54:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:22.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:23.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2553130174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
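
The "mon dump" handled above is client.openstack querying the monitor map over librados, with the audit channel logging the dispatch. The same command can be issued through the python-rados binding (conffile and client name are assumptions matching the audit entry):

    import json
    import rados

    # Issue the same {"prefix": "mon dump"} command seen in the audit log.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    print(json.loads(outbuf)['epoch'] if ret == 0 else outs)
    cluster.shutdown()
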
Nov 29 02:54:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 354 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 5.7 MiB/s wr, 110 op/s
Nov 29 02:54:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:24.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:25.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:25 np0005539550 nova_compute[257631]: 2025-11-29 07:54:25.519 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:25 np0005539550 nova_compute[257631]: 2025-11-29 07:54:25.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 385 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 77 KiB/s rd, 7.1 MiB/s wr, 122 op/s
Nov 29 02:54:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:27.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 385 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 6.0 MiB/s wr, 86 op/s
Nov 29 02:54:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:28.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:29.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:30 np0005539550 nova_compute[257631]: 2025-11-29 07:54:30.559 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:30 np0005539550 nova_compute[257631]: 2025-11-29 07:54:30.600 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 385 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 694 KiB/s rd, 2.4 MiB/s wr, 88 op/s
Nov 29 02:54:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:30.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:31.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:32 np0005539550 podman[274216]: 2025-11-29 07:54:32.344765602 +0000 UTC m=+0.083638999 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:54:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 385 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 677 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Nov 29 02:54:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:32.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:33.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 02:54:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2401.4 total, 600.0 interval
Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 12K writes, 3289 syncs, 3.73 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 3194 writes, 11K keys, 3194 commit groups, 1.0 writes per commit group, ingest: 11.78 MB, 0.02 MB/s
Interval WAL: 3194 writes, 1311 syncs, 2.44 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 02:54:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 386 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Nov 29 02:54:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:34.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:35 np0005539550 nova_compute[257631]: 2025-11-29 07:54:35.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:35 np0005539550 nova_compute[257631]: 2025-11-29 07:54:35.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 386 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.5 MiB/s wr, 160 op/s
Nov 29 02:54:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:36.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:36 np0005539550 nova_compute[257631]: 2025-11-29 07:54:36.834 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:36 np0005539550 nova_compute[257631]: 2025-11-29 07:54:36.834 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:36 np0005539550 nova_compute[257631]: 2025-11-29 07:54:36.851 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:54:36 np0005539550 nova_compute[257631]: 2025-11-29 07:54:36.927 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:36 np0005539550 nova_compute[257631]: 2025-11-29 07:54:36.927 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:36 np0005539550 nova_compute[257631]: 2025-11-29 07:54:36.934 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:54:36 np0005539550 nova_compute[257631]: 2025-11-29 07:54:36.934 257641 INFO nova.compute.claims [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.068 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:37.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740944184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.510 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
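
Nova's RBD image backend sizes its disk inventory by shelling out to ceph df, as the two processutils lines above show (the call returned 0 in 0.441s). Outside Nova, the same probe reduces to a subprocess plus a JSON parse (the 'vms' pool name is the one this capture imports into shortly after):

    import json
    import subprocess

    # The exact command the log shows Nova running via oslo.concurrency.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    )
    df = json.loads(out.stdout)
    vms = next(p for p in df['pools'] if p['name'] == 'vms')
    print(vms['stats']['max_avail'])  # bytes available to the 'vms' pool
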
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.516 257641 DEBUG nova.compute.provider_tree [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.533 257641 DEBUG nova.scheduler.client.report [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
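
Given that inventory, Placement's usable capacity per resource class is (total - reserved) x allocation_ratio, so this node advertises 7168 MB of RAM, 32 VCPUs, and ~17 GB of disk. A quick check of those numbers against the logged inventory data:

    # Usable capacity per resource class: (total - reserved) * allocation_ratio.
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 17.1
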
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.554 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.555 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.603 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.604 257641 DEBUG nova.network.neutron [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.623 257641 INFO nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.644 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.745 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.747 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.747 257641 INFO nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Creating image(s)#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.773 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.800 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.825 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.829 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.858 257641 DEBUG nova.policy [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2cc9de54e1764131aa2748a7f9a1df6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c9c2bff0c0a24bebb1149177689b64d7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.897 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.897 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.898 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.899 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.923 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:37 np0005539550 nova_compute[257631]: 2025-11-29 07:54:37.927 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 386 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 60 KiB/s wr, 177 op/s
Nov 29 02:54:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:38.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:39.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.490 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.583 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] resizing rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
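
The image-cache path is visible end to end here: the base image is pushed into the vms pool with rbd import (1.563s), then grown to the flavor's 1 GiB root size. A sketch of the two steps with the rbd CLI (the import flags are exactly those logged; Nova itself performs the resize through librbd rather than the CLI, so the second command is an illustrative equivalent):

    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            'f62ef5f82502d01c82174408aec7f3ac942e2488')
    image = '7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk'

    # Step 1: import the cached base image into the 'vms' pool (from the log).
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, image,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)

    # Step 2: grow it to the flavor's root disk; the log records a resize
    # to 1073741824 bytes (1 GiB).
    subprocess.run(['rbd', 'resize', '--pool', 'vms', image, '--size', '1G',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
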
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.699 257641 DEBUG nova.network.neutron [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Successfully created port: 5f3dee32-7330-47f1-a98f-0647728e6e29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.902 257641 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Creating tmpfile /var/lib/nova/instances/tmpny0czsgk to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.903 257641 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpny0czsgk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.956 257641 DEBUG nova.objects.instance [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lazy-loading 'migration_context' on Instance uuid 7a5f1bd4-70c2-4571-bcd7-070a08c471ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:54:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.977 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.977 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Ensure instance console log exists: /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.978 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.978 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:39 np0005539550 nova_compute[257631]: 2025-11-29 07:54:39.978 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:40 np0005539550 nova_compute[257631]: 2025-11-29 07:54:40.596 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:40 np0005539550 nova_compute[257631]: 2025-11-29 07:54:40.604 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 408 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 1.2 MiB/s wr, 254 op/s
Nov 29 02:54:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:40.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:41.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.371 257641 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpny0czsgk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.399 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.400 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.400 257641 DEBUG nova.network.neutron [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.549 257641 DEBUG nova.network.neutron [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Successfully updated port: 5f3dee32-7330-47f1-a98f-0647728e6e29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.590 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.590 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquired lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.591 257641 DEBUG nova.network.neutron [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.693 257641 DEBUG nova.compute.manager [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-changed-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.694 257641 DEBUG nova.compute.manager [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Refreshing instance network info cache due to event network-changed-5f3dee32-7330-47f1-a98f-0647728e6e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.694 257641 DEBUG oslo_concurrency.lockutils [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:54:41 np0005539550 nova_compute[257631]: 2025-11-29 07:54:41.887 257641 DEBUG nova.network.neutron [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
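[editor's note] The Acquiring/Acquired/Releasing triplet above is oslo.concurrency's lock logging (lockutils.py:312/315/333): Nova serializes every rebuild of an instance's network info cache behind a "refresh_cache-<uuid>" lock so concurrent port events cannot interleave cache writes. A minimal sketch of that pattern using the real oslo_concurrency API (lock name taken from the log; the body is illustrative):

    from oslo_concurrency import lockutils

    instance_uuid = "7a5f1bd4-70c2-4571-bcd7-070a08c471ae"  # from the log above

    # lockutils.lock() emits the same Acquiring/Acquired/Releasing DEBUG
    # lines seen at lockutils.py:312, 315 and 333.
    with lockutils.lock("refresh_cache-" + instance_uuid):
        pass  # rebuild the instance's network info cache while holding the lock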
Nov 29 02:54:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 408 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.2 MiB/s wr, 225 op/s
Nov 29 02:54:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:42.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:43.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
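[editor's note] The anonymous "HEAD / HTTP/1.0" requests hitting radosgw from 192.168.122.100 and .102, each answered 200 with near-zero latency, look like load-balancer health probes against the beast frontend. A hedged reproduction of such a probe (host from the log; the port is an assumption, since the log does not show the frontend's listen address):

    import http.client

    # Hypothetical prober; port 8080 assumed.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 => the RGW frontend is answering
    conn.close()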
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.616 257641 DEBUG nova.network.neutron [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.621 257641 DEBUG nova.network.neutron [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updating instance_info_cache with network_info: [{"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.645 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Releasing lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.645 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Instance network_info: |[{"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.646 257641 DEBUG oslo_concurrency.lockutils [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.646 257641 DEBUG nova.network.neutron [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Refreshing network info cache for port 5f3dee32-7330-47f1-a98f-0647728e6e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.650 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Start _get_guest_xml network_info=[{"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.655 257641 WARNING nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.663 257641 DEBUG nova.virt.libvirt.host [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.664 257641 DEBUG nova.virt.libvirt.host [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.686 257641 DEBUG nova.virt.libvirt.host [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.687 257641 DEBUG nova.virt.libvirt.host [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
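[editor's note] The two probes above look for a CPU controller first under cgroups v1 (missing) and then under cgroups v2 (found), which is expected on an EL9 host running the unified hierarchy. A rough approximation of the v2 check (the real logic lives in nova/virt/libvirt/host.py; the sysfs path is the standard kernel interface, not taken from the log):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        return controllers.exists() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log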
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.688 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.689 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.689 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.689 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.690 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.690 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.690 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.690 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.691 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.691 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.691 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.691 257641 DEBUG nova.virt.hardware [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
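[editor's note] With no topology limits or preferences from the flavor or image (all 0:0:0), the walk above degenerates to the topologies whose sockets*cores*threads product equals the vCPU count, which for the one-vCPU m1.nano flavor is exactly 1:1:1. A toy re-derivation of that enumeration (a simplification of nova/virt/hardware.py, not the actual implementation):

    def possible_topologies(vcpus: int):
        # every (sockets, cores, threads) factorization of the vCPU count
        tops = []
        for sockets in range(1, vcpus + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, rest + 1):
                if rest % cores:
                    continue
                tops.append((sockets, cores, rest // cores))
        return tops

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log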
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.694 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.717 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.720 257641 DEBUG os_brick.utils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.722 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.732 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.733 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[99451a45-9290-4c34-b8b5-7c921e71092a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.734 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.742 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.742 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[cffed66f-314f-45e7-acfa-324dce19ed94]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.744 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.754 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.754 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[fc97b290-2cc9-436c-98ad-bdf1cd0e42ea]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.755 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[2b34a55a-7a2d-4471-a419-438175319567]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.756 257641 DEBUG oslo_concurrency.processutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.779 257641 DEBUG oslo_concurrency.processutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.782 257641 DEBUG os_brick.initiator.connectors.lightos [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.782 257641 DEBUG os_brick.initiator.connectors.lightos [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.782 257641 DEBUG os_brick.initiator.connectors.lightos [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.783 257641 DEBUG os_brick.utils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] <== get_connector_properties: return (62ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
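[editor's note] The ==>/<== trace above is os-brick assembling the connector properties that describe this host as a volume attach target: it shells out to multipathd, reads the iSCSI initiator name, resolves the root filesystem source via findmnt, and probes nvme/lightos. The entry point is public and its arguments are visible in the call trace; a direct invocation would look like:

    from os_brick.initiator import connector

    # Arguments mirror the traced call above.
    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    print(props["initiator"], props["nqn"])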
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.873 257641 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Creating tmpfile /var/lib/nova/instances/tmpg_cjv671 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Nov 29 02:54:43 np0005539550 nova_compute[257631]: 2025-11-29 07:54:43.874 257641 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpg_cjv671',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
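[editor's note] The tmpfile above is the live-migration shared-storage probe: the destination host drops a uniquely named file into the instances directory and the source host then checks whether it can see the same file; visibility on both ends means /var/lib/nova/instances is shared. A sketch of the destination half (directory and "tmp" prefix from the log; the source-side check is summarized in the comment):

    import os
    import tempfile

    instances_path = "/var/lib/nova/instances"
    fd, path = tempfile.mkstemp(prefix="tmp", dir=instances_path)
    os.close(fd)
    # ... the source host later does: shared = os.path.exists(path)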
Nov 29 02:54:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551200963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.164 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.192 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.196 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:54:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058519376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.670 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
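[editor's note] Each RBD-backed disk setup starts with the `ceph mon dump --format=json` subprocess seen above (and audited on the ceph-mon side); the parsed monitor addresses are what later appear as the three <host> elements in the guest XML. A direct reproduction of that call:

    import json
    import subprocess

    # The exact command nova ran above.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mons = json.loads(out)["mons"]
    print([m["name"] for m in mons])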
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.671 257641 DEBUG nova.virt.libvirt.vif [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1481235728',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1481235728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-148123572',id=21,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9c2bff0c0a24bebb1149177689b64d7',ramdisk_id='',reservation_id='r-f426gwlv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2080071708',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2080071708-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:37Z,user_data=None,user_id='2cc9de54e1764131aa2748a7f9a1df6d',uuid=7a5f1bd4-70c2-4571-bcd7-070a08c471ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.672 257641 DEBUG nova.network.os_vif_util [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Converting VIF {"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.673 257641 DEBUG nova.network.os_vif_util [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:de:26,bridge_name='br-int',has_traffic_filtering=True,id=5f3dee32-7330-47f1-a98f-0647728e6e29,network=Network(3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f3dee32-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.674 257641 DEBUG nova.objects.instance [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a5f1bd4-70c2-4571-bcd7-070a08c471ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:54:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 458 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.4 MiB/s wr, 311 op/s
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.690 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <uuid>7a5f1bd4-70c2-4571-bcd7-070a08c471ae</uuid>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <name>instance-00000015</name>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-1481235728</nova:name>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:54:43</nova:creationTime>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:user uuid="2cc9de54e1764131aa2748a7f9a1df6d">tempest-FloatingIPsAssociationNegativeTestJSON-2080071708-project-member</nova:user>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:project uuid="c9c2bff0c0a24bebb1149177689b64d7">tempest-FloatingIPsAssociationNegativeTestJSON-2080071708</nova:project>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <nova:port uuid="5f3dee32-7330-47f1-a98f-0647728e6e29">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <entry name="serial">7a5f1bd4-70c2-4571-bcd7-070a08c471ae</entry>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <entry name="uuid">7a5f1bd4-70c2-4571-bcd7-070a08c471ae</entry>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk.config">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:48:de:26"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <target dev="tap5f3dee32-73"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </interface>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/console.log" append="off"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:54:44 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:54:44 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:54:44 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:54:44 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
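[editor's note] The XML document above is what gets handed to libvirt to realize the guest. A rough sketch of that hand-off using the libvirt Python bindings (nova's driver wraps this in far more machinery; defineXML-then-create is one common pattern and is assumed here, not read from the log):

    import libvirt

    xml = "..."  # the <domain> document printed above

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persist the domain definition
    dom.create()               # power it on
    print(dom.name())          # instance-00000015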
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.691 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Preparing to wait for external event network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.692 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.692 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.692 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.693 257641 DEBUG nova.virt.libvirt.vif [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1481235728',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1481235728',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-148123572',id=21,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9c2bff0c0a24bebb1149177689b64d7',ramdisk_id='',reservation_id='r-f426gwlv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2080071708',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2080071708-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:54:37Z,user_data=None,user_id='2cc9de54e1764131aa2748a7f9a1df6d',uuid=7a5f1bd4-70c2-4571-bcd7-070a08c471ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.693 257641 DEBUG nova.network.os_vif_util [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Converting VIF {"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.694 257641 DEBUG nova.network.os_vif_util [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:de:26,bridge_name='br-int',has_traffic_filtering=True,id=5f3dee32-7330-47f1-a98f-0647728e6e29,network=Network(3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f3dee32-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.694 257641 DEBUG os_vif [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:de:26,bridge_name='br-int',has_traffic_filtering=True,id=5f3dee32-7330-47f1-a98f-0647728e6e29,network=Network(3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f3dee32-73') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.695 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.696 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.696 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.700 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.701 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f3dee32-73, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.701 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5f3dee32-73, col_values=(('external_ids', {'iface-id': '5f3dee32-7330-47f1-a98f-0647728e6e29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:de:26', 'vm-uuid': '7a5f1bd4-70c2-4571-bcd7-070a08c471ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
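AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp command objects committed against the local OVSDB; the external_ids iface-id value is what lets ovn-controller match this OVS interface to its logical port a few seconds later. A sketch of the same commands through ovsdbapp's Open_vSwitch API; the socket path and timeout are assumptions, and the agent actually issues them as two back-to-back transactions, as logged:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local OVSDB socket; os-vif drives this same API underneath.
    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    external_ids = {
        'iface-id': '5f3dee32-7330-47f1-a98f-0647728e6e29',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:48:de:26',
        'vm-uuid': '7a5f1bd4-70c2-4571-bcd7-070a08c471ae',
    }

    # Mirrors the txn n=1 command(idx=0)/(idx=1) entries above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap5f3dee32-73', may_exist=True))
        txn.add(api.db_set('Interface', 'tap5f3dee32-73',
                           ('external_ids', external_ids)))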
Nov 29 02:54:44 np0005539550 NetworkManager[49039]: <info>  [1764402884.7041] manager: (tap5f3dee32-73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.710 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.712 257641 INFO os_vif [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:de:26,bridge_name='br-int',has_traffic_filtering=True,id=5f3dee32-7330-47f1-a98f-0647728e6e29,network=Network(3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f3dee32-73')#033[00m
Nov 29 02:54:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:44.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.761 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.762 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.762 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] No VIF found with MAC fa:16:3e:48:de:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.763 257641 INFO nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Using config drive#033[00m
Nov 29 02:54:44 np0005539550 nova_compute[257631]: 2025-11-29 07:54:44.794 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.072 257641 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpny0czsgk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='73064fe3-3d3a-4388-bf17-21bd966882ad'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.073 257641 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Creating instance directory: /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.073 257641 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Ensure instance console log exists: /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.074 257641 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.076 257641 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.077 257641 DEBUG nova.virt.libvirt.vif [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:31Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.077 257641 DEBUG nova.network.os_vif_util [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.078 257641 DEBUG nova.network.os_vif_util [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.078 257641 DEBUG os_vif [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.079 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.079 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.079 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.082 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.082 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d7fa9ca-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.082 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5d7fa9ca-8f, col_values=(('external_ids', {'iface-id': '5d7fa9ca-8f51-4047-a121-6c4534fc5ae6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:bf:87', 'vm-uuid': '7067462a-37a6-458e-b96c-76adcea5fdfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:45 np0005539550 NetworkManager[49039]: <info>  [1764402885.0844] manager: (tap5d7fa9ca-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.083 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.086 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.091 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.093 257641 INFO os_vif [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f')#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.095 257641 DEBUG nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.096 257641 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpny0czsgk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='73064fe3-3d3a-4388-bf17-21bd966882ad'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
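The migrate_data above tells this destination host what it can skip: is_shared_block_storage=True plus is_volume_backed=True means there are no disk blocks to copy, while is_shared_instance_path=False means the instance directory must be recreated locally. A runnable sketch of the local-filesystem part of the logged steps; the volume and VIF plumbing is left as comments because it goes through os-brick and os-vif:

    import os

    def pre_live_migration(instance_uuid):
        """Destination-side prep mirroring the steps logged above."""
        base = f'/var/lib/nova/instances/{instance_uuid}'
        # is_shared_instance_path=False -> the directory must exist here.
        os.makedirs(base, exist_ok=True)
        # libvirt appends to console.log; make sure the file exists.
        open(os.path.join(base, 'console.log'), 'a').close()
        # is_volume_backed=True: connect each BDM via os-brick (elided).
        # Then plug VIFs using the destination port bindings, which is
        # the same os-vif/ovsdbapp sequence shown earlier in this log.

    pre_live_migration('7067462a-37a6-458e-b96c-76adcea5fdfa')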
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.174 257641 INFO nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Creating config drive at /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/disk.config#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.179 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpral9r4eg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:45.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.305 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpral9r4eg" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.334 257641 DEBUG nova.storage.rbd_utils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] rbd image 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.338 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/disk.config 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.723 257641 DEBUG nova.network.neutron [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updated VIF entry in instance network info cache for port 5f3dee32-7330-47f1-a98f-0647728e6e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.723 257641 DEBUG nova.network.neutron [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updating instance_info_cache with network_info: [{"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.732 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.739 257641 DEBUG oslo_concurrency.lockutils [req-7fdbe5f1-0f06-41c9-9b78-3e72f6204012 req-5d0a5218-bc6d-4394-abd9-ec924b44dfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.757 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid 7a5f1bd4-70c2-4571-bcd7-070a08c471ae _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.757 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid 2bb12f77-8958-446b-813d-a59f149a549b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.758 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid 9bef976c-2981-4d19-aa60-8a550b7093ca _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.758 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.758 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.758 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "2bb12f77-8958-446b-813d-a59f149a549b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.759 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.759 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.794 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "2bb12f77-8958-446b-813d-a59f149a549b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:45 np0005539550 nova_compute[257631]: 2025-11-29 07:54:45.796 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
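The Acquiring/acquired/released triples come from oslo.concurrency's synchronized() decorator (the 'inner' wrapper in lockutils.py), keyed by instance UUID so concurrent power-state syncs of one instance serialize while different instances proceed in parallel. A minimal usage sketch, with the locked work stubbed out:

    from oslo_concurrency import lockutils

    def query_driver_power_state_and_sync(uuid):
        """Stub for the locked work named in the log."""

    uuid = '2bb12f77-8958-446b-813d-a59f149a549b'
    # Same primitive the log shows: an in-process lock named after the
    # instance UUID; a second caller with the same name waits here.
    with lockutils.lock(uuid):
        query_driver_power_state_and_sync(uuid)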
Nov 29 02:54:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 485 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.3 MiB/s wr, 281 op/s
Nov 29 02:54:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:46.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.027 257641 DEBUG oslo_concurrency.processutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/disk.config 7a5f1bd4-70c2-4571-bcd7-070a08c471ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.027 257641 INFO nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Deleting local config drive /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae/disk.config because it was imported into RBD.#033[00m
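The config-drive sequence above is three steps: build an ISO9660 image with mkisofs, import it into the vms RBD pool so it lives on shared storage, then delete the local file. A sketch of the same two commands through oslo.concurrency, with paths copied from the log and the -publisher flag dropped for brevity:

    import os

    from oslo_concurrency import processutils

    inst = '7a5f1bd4-70c2-4571-bcd7-070a08c471ae'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # Pack the staged metadata directory into an ISO labelled config-2.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpral9r4eg')  # staging dir name copied from the log

    # Import the ISO as an RBD image so any host (including a future
    # migration target) can attach it, then drop the local copy.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(iso)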
Nov 29 02:54:47 np0005539550 kernel: tap5f3dee32-73: entered promiscuous mode
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.0791] manager: (tap5f3dee32-73): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00092|binding|INFO|Claiming lport 5f3dee32-7330-47f1-a98f-0647728e6e29 for this chassis.
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00093|binding|INFO|5f3dee32-7330-47f1-a98f-0647728e6e29: Claiming fa:16:3e:48:de:26 10.100.0.8
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.084 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.096 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:de:26 10.100.0.8'], port_security=['fa:16:3e:48:de:26 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7a5f1bd4-70c2-4571-bcd7-070a08c471ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9c2bff0c0a24bebb1149177689b64d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '58029651-e55c-435e-9816-e1e89148ab8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=965864e2-b803-4413-bf33-a40ccb124236, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5f3dee32-7330-47f1-a98f-0647728e6e29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.097 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5f3dee32-7330-47f1-a98f-0647728e6e29 in datapath 3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3 bound to our chassis#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.099 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3#033[00m
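PortBindingUpdatedEvent is an ovsdbapp row event: the agent watches the Southbound Port_Binding table and reacts when a row gains a chassis (note old=Port_Binding(chassis=[]) in the match above). A loose sketch of such an event class; the real one in neutron.agent.ovn.metadata.agent carries extra chassis and port-type checks:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, handler):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as logged in the Matched UPDATE entry.
            self.handler = handler
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the row just gained a chassis.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            self.handler(row.logical_port, row.datapath)

The agent registers instances of this class with its Southbound IDL connection, which is what produces the Matched UPDATE line above.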
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.110 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7286c6-ac88-4dec-b3b7-bbb14d2182fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.111 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3b1c62d9-c1 in ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
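Provisioning the datapath means building the ovnmeta-<network> namespace and a veth pair: the -c1 end is moved inside the namespace to serve 169.254.169.254, and the -c0 end is plugged into br-int with its own iface-id (the DelPort/AddPort/DbSetCommand entries below). A sketch with pyroute2, the library that neutron's privileged ip_lib wraps; error handling and idempotency checks are omitted:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3'
    netns.create(ns)

    ipr = IPRoute()
    # veth pair: tap...-c0 stays in the root namespace, tap...-c1 is
    # the peer that moves into the ovnmeta namespace.
    ipr.link('add', ifname='tap3b1c62d9-c0', kind='veth',
             peer='tap3b1c62d9-c1')
    peer = ipr.link_lookup(ifname='tap3b1c62d9-c1')[0]
    ipr.link('set', index=peer, net_ns_fd=ns)
    ipr.link('set', index=ipr.link_lookup(ifname='tap3b1c62d9-c0')[0],
             state='up')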
Nov 29 02:54:47 np0005539550 systemd-machined[216673]: New machine qemu-9-instance-00000015.
Nov 29 02:54:47 np0005539550 systemd-udevd[274635]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.115 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3b1c62d9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.115 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cf54eec9-ed8a-43fc-b682-4f0408971154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.116 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fe26b768-d56a-4548-8841-9e1884c20b04]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.1270] device (tap5f3dee32-73): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.1280] device (tap5f3dee32-73): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.127 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[d490fd1c-98a7-43c4-9cc4-dd40e4433ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 systemd[1]: Started Virtual Machine qemu-9-instance-00000015.
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.152 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3775c8-e02b-437d-b44d-5b309a7763e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.161 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00094|binding|INFO|Setting lport 5f3dee32-7330-47f1-a98f-0647728e6e29 ovn-installed in OVS
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00095|binding|INFO|Setting lport 5f3dee32-7330-47f1-a98f-0647728e6e29 up in Southbound
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.167 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.183 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7af0fec3-afe1-4861-abbe-f849d721695f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.1893] manager: (tap3b1c62d9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.188 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5d17e5e3-e47f-4bc4-b9b3-cc6ed3d07663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.198 257641 DEBUG nova.network.neutron [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
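Stamping the port's binding profile with migrating_to is how nova tells Neutron (and through it OVN) that a second chassis is about to claim the port. An equivalent call sketched with openstacksdk; the cloud name is an assumption, and nova's real code goes through its own Neutron client and merges with the existing profile rather than replacing it:

    import openstack

    conn = openstack.connect(cloud='overcloud')  # assumed clouds.yaml entry
    conn.network.update_port(
        '5d7fa9ca-8f51-4047-a121-6c4534fc5ae6',
        binding_profile={'migrating_to': 'compute-0.ctlplane.example.com'})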
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.222 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b18711-875e-4e36-a953-c75734d93e43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.225 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[78b4cd85-5c02-4c3a-b5da-6653b58df09e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.2507] device (tap3b1c62d9-c0): carrier: link connected
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.257 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[094a3d5e-e3bd-449e-81ea-dbe589abdb3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:47.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.275 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1337d672-5888-4ee3-b7e2-b947ee217a3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b1c62d9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:f7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593491, 'reachable_time': 25669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274671, 'error': None, 'target': 'ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.292 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c9570e-9872-4249-8a17-2b0e3d0472c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:f795'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593491, 'tstamp': 593491}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274672, 'error': None, 'target': 'ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.309 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[28e7ce24-4e08-46b2-bd12-d654a0d0db01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3b1c62d9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:f7:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593491, 'reachable_time': 25669, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274673, 'error': None, 'target': 'ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.338 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5633d373-efaa-4d33-a460-cd63ee9715b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.391 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b56e5a-ac63-4f4c-a256-eab3f350109e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.393 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b1c62d9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.393 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.393 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b1c62d9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.395 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.3958] manager: (tap3b1c62d9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 02:54:47 np0005539550 kernel: tap3b1c62d9-c0: entered promiscuous mode
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.403 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3b1c62d9-c0, col_values=(('external_ids', {'iface-id': '47714b16-ac75-406a-b14d-bad4812e27d7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00096|binding|INFO|Releasing lport 47714b16-ac75-406a-b14d-bad4812e27d7 from this chassis (sb_readonly=0)
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.425 257641 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpny0czsgk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='73064fe3-3d3a-4388-bf17-21bd966882ad'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.426 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.427 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.428 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d938dc-f47f-4595-bf36-7da2d5c4d2d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.429 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3.pid.haproxy
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 02:54:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:47.430 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'env', 'PROCESS_TAG=haproxy-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
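Annotation: the two entries above show the complete metadata-proxy setup for network 3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3: the agent renders the haproxy configuration, writes it under /var/lib/neutron/ovn-metadata-proxy/, then launches haproxy inside the per-network ovnmeta- namespace via neutron-rootwrap. A minimal Python sketch of that launch step, mirroring the logged command (the helper name is illustrative, not neutron's actual code):

    import subprocess

    NETWORK_ID = "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3"
    CFG_PATH = f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf"

    def launch_metadata_haproxy(network_id: str, cfg_path: str) -> None:
        # neutron-rootwrap validates the command against its filter files,
        # then runs haproxy as root inside the "ovnmeta-<uuid>" namespace.
        cmd = [
            "sudo", "neutron-rootwrap", "/etc/neutron/rootwrap.conf",
            "ip", "netns", "exec", f"ovnmeta-{network_id}",
            "env", f"PROCESS_TAG=haproxy-{network_id}",
            "haproxy", "-f", cfg_path,
        ]
        subprocess.run(cmd, check=True)

    launch_metadata_haproxy(NETWORK_ID, CFG_PATH)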
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.472 257641 DEBUG nova.compute.manager [req-968bb705-3207-434b-be40-e4a634a57d68 req-a0d4a28e-0b7c-4af7-91ed-41046ffbf497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.473 257641 DEBUG oslo_concurrency.lockutils [req-968bb705-3207-434b-be40-e4a634a57d68 req-a0d4a28e-0b7c-4af7-91ed-41046ffbf497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.473 257641 DEBUG oslo_concurrency.lockutils [req-968bb705-3207-434b-be40-e4a634a57d68 req-a0d4a28e-0b7c-4af7-91ed-41046ffbf497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.473 257641 DEBUG oslo_concurrency.lockutils [req-968bb705-3207-434b-be40-e4a634a57d68 req-a0d4a28e-0b7c-4af7-91ed-41046ffbf497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.473 257641 DEBUG nova.compute.manager [req-968bb705-3207-434b-be40-e4a634a57d68 req-a0d4a28e-0b7c-4af7-91ed-41046ffbf497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Processing event network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
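Annotation: the acquire/release pair around pop_instance_event is the standard oslo.concurrency pattern: a named lock scoped to "<instance-uuid>-events" serializes access to the per-instance event dict. A sketch of the same shape (assuming the oslo.concurrency package; the function body is illustrative):

    from oslo_concurrency import lockutils

    def pop_instance_event(events_by_instance, instance_uuid, event_name):
        # Lock name matches the log: "<instance-uuid>-events".
        with lockutils.lock(f"{instance_uuid}-events"):
            return events_by_instance.get(instance_uuid, {}).pop(event_name, None)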
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.614 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402887.6137693, 7a5f1bd4-70c2-4571-bcd7-070a08c471ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.614 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.618 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.628 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.638 257641 INFO nova.virt.libvirt.driver [-] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Instance spawned successfully.#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.639 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.645 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.6478] manager: (tap5d7fa9ca-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 02:54:47 np0005539550 kernel: tap5d7fa9ca-8f: entered promiscuous mode
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00097|binding|INFO|Claiming lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for this additional chassis.
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00098|binding|INFO|5d7fa9ca-8f51-4047-a121-6c4534fc5ae6: Claiming fa:16:3e:17:bf:87 10.100.0.5
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.652 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 systemd-udevd[274656]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.669 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
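Annotation: here the DB still records power_state 0 (NOSTATE) while the hypervisor reports 1 (RUNNING), but task_state is 'spawning', so the sync is skipped rather than "corrected" (see the "pending task (spawning). Skip." lines that follow). A condensed sketch of that decision (constants abbreviated; not nova's actual code):

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # While a task like 'spawning' is in flight, the DB record is
        # expected to lag the hypervisor, so no corrective action is taken.
        if task_state is not None:
            return f"skip: pending task {task_state}"
        if db_power_state != vm_power_state:
            return f"update DB power_state to {vm_power_state}"
        return "in sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # -> skip: pending task spawning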
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.6702] device (tap5d7fa9ca-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:54:47 np0005539550 NetworkManager[49039]: <info>  [1764402887.6713] device (tap5d7fa9ca-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.676 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.677 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:47Z|00099|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 ovn-installed in OVS
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.677 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.678 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.678 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.679 257641 DEBUG nova.virt.libvirt.driver [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
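Annotation: the six "Found default for ..." entries record the libvirt driver pinning the currently computed defaults for image properties the image left unset, so a later change of defaults cannot silently alter this guest's virtual hardware. A sketch of the idea (property/default pairs taken from the log; the storage format is illustrative):

    DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props: dict) -> dict:
        # Explicit image properties win; only unset ones get the
        # computed default persisted alongside the instance.
        return {k: image_props.get(k, v) for k, v in DEFAULTS.items()}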
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.682 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:47 np0005539550 systemd-machined[216673]: New machine qemu-10-instance-00000014.
Nov 29 02:54:47 np0005539550 systemd[1]: Started Virtual Machine qemu-10-instance-00000014.
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.731 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.732 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402887.6139019, 7a5f1bd4-70c2-4571-bcd7-070a08c471ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.732 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.755 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.765 257641 INFO nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Took 10.02 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.765 257641 DEBUG nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.767 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402887.622474, 7a5f1bd4-70c2-4571-bcd7-070a08c471ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.767 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.794 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.798 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:47 np0005539550 podman[274766]: 2025-11-29 07:54:47.827488837 +0000 UTC m=+0.051321745 container create f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.834 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.840 257641 INFO nova.compute.manager [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Took 10.94 seconds to build instance.#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.866 257641 DEBUG oslo_concurrency.lockutils [None req-6f922633-a12b-4461-bf4f-0b44923c73d4 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.867 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.867 257641 INFO nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:54:47 np0005539550 nova_compute[257631]: 2025-11-29 07:54:47.871 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:47 np0005539550 systemd[1]: Started libpod-conmon-f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae.scope.
Nov 29 02:54:47 np0005539550 podman[274766]: 2025-11-29 07:54:47.802297472 +0000 UTC m=+0.026130390 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:54:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:54:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2912e99ed2baa07e5b60d341407c066b42a4594ad6fb26b55666a3ec6fa3a7d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:47 np0005539550 podman[274766]: 2025-11-29 07:54:47.937891442 +0000 UTC m=+0.161724370 container init f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:54:47 np0005539550 podman[274766]: 2025-11-29 07:54:47.943574534 +0000 UTC m=+0.167407432 container start f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 02:54:47 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [NOTICE]   (274789) : New worker (274791) forked
Nov 29 02:54:47 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [NOTICE]   (274789) : Loading success.
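Annotation: the podman create/init/start sequence followed by the haproxy NOTICE lines suggests the "haproxy" binary the agent exec'd is a wrapper: the ovn_metadata_agent health-check entry below mounts /var/lib/neutron/ovn_metadata_haproxy_wrapper at /usr/local/bin/haproxy, and the wrapper starts haproxy inside a container. A hedged Python sketch of such a wrapper (the real one generated by edpm_ansible is a shell script; image name and mount are taken from the log):

    import subprocess

    IMAGE = ("quay.io/podified-antelope-centos9/"
             "openstack-neutron-metadata-agent-ovn:current-podified")

    def run_containerized_haproxy(name: str, cfg_path: str) -> None:
        # --network host keeps the container in the caller's network view,
        # here the ovnmeta- namespace the wrapper was exec'd in, so the
        # 169.254.169.254:80 bind from the rendered config still works.
        subprocess.run([
            "podman", "run", "--detach", "--network", "host",
            "--name", name,
            "--volume", "/var/lib/neutron:/var/lib/neutron:shared,z",
            IMAGE, "haproxy", "-f", cfg_path,
        ], check=True)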
Nov 29 02:54:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 534 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 7.8 MiB/s wr, 320 op/s
Nov 29 02:54:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:48.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:48 np0005539550 nova_compute[257631]: 2025-11-29 07:54:48.920 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402888.919823, 7067462a-37a6-458e-b96c-76adcea5fdfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:48 np0005539550 nova_compute[257631]: 2025-11-29 07:54:48.920 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Started (Lifecycle Event)#033[00m
Nov 29 02:54:48 np0005539550 nova_compute[257631]: 2025-11-29 07:54:48.978 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:49.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.574 257641 DEBUG nova.compute.manager [req-d032f074-e141-4359-8218-4610571662c5 req-0d282c66-2fb9-49bb-95db-10e9492765e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.574 257641 DEBUG oslo_concurrency.lockutils [req-d032f074-e141-4359-8218-4610571662c5 req-0d282c66-2fb9-49bb-95db-10e9492765e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.575 257641 DEBUG oslo_concurrency.lockutils [req-d032f074-e141-4359-8218-4610571662c5 req-0d282c66-2fb9-49bb-95db-10e9492765e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.575 257641 DEBUG oslo_concurrency.lockutils [req-d032f074-e141-4359-8218-4610571662c5 req-0d282c66-2fb9-49bb-95db-10e9492765e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.575 257641 DEBUG nova.compute.manager [req-d032f074-e141-4359-8218-4610571662c5 req-0d282c66-2fb9-49bb-95db-10e9492765e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] No waiting events found dispatching network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.575 257641 WARNING nova.compute.manager [req-d032f074-e141-4359-8218-4610571662c5 req-0d282c66-2fb9-49bb-95db-10e9492765e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received unexpected event network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 for instance with vm_state active and task_state None.#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.578 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402889.5780275, 7067462a-37a6-458e-b96c-76adcea5fdfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.578 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.602 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.605 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:54:49 np0005539550 nova_compute[257631]: 2025-11-29 07:54:49.628 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Nov 29 02:54:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:50 np0005539550 nova_compute[257631]: 2025-11-29 07:54:50.084 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539550 podman[274843]: 2025-11-29 07:54:50.345092995 +0000 UTC m=+0.071933527 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:54:50 np0005539550 podman[274844]: 2025-11-29 07:54:50.359688575 +0000 UTC m=+0.086730682 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 02:54:50 np0005539550 nova_compute[257631]: 2025-11-29 07:54:50.602 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 551 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 8.6 MiB/s wr, 349 op/s
Nov 29 02:54:50 np0005539550 nova_compute[257631]: 2025-11-29 07:54:50.704 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:50.705 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:50.706 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
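Annotation: nb_cfg moved from 8 to 9 in SB_Global, and the agent deliberately waits five seconds before echoing it into Chassis_Private (the DbSetCommand writing neutron:ovn-metadata-sb-cfg '9' appears at 07:54:55.709 below), so bursts of northbound updates collapse into a single southbound write. That debounce shape can be sketched with a timer (interval from the log; class and names illustrative):

    import threading

    class DelayedChassisUpdate:
        def __init__(self, write_cb, delay=5.0):
            self._write_cb = write_cb
            self._delay = delay
            self._timer = None
            self._lock = threading.Lock()

        def notify(self, nb_cfg):
            # Restart the timer on every event; only the last nb_cfg
            # seen within the window is written out.
            with self._lock:
                if self._timer is not None:
                    self._timer.cancel()
                self._timer = threading.Timer(self._delay, self._write_cb, [nb_cfg])
                self._timer.start()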
Nov 29 02:54:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:54:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:50.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:54:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:51.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:51 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:51Z|00100|binding|INFO|Claiming lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for this chassis.
Nov 29 02:54:51 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:51Z|00101|binding|INFO|5d7fa9ca-8f51-4047-a121-6c4534fc5ae6: Claiming fa:16:3e:17:bf:87 10.100.0.5
Nov 29 02:54:51 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:51Z|00102|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 up in Southbound
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.399 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:bf:87 10.100.0.5'], port_security=['fa:16:3e:17:bf:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7067462a-37a6-458e-b96c-76adcea5fdfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '11', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.400 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d bound to our chassis#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.402 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d#033[00m
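Annotation: the PortBindingUpdatedEvent above matched because the Port_Binding row's chassis column flipped from empty to this chassis, which is what "bound to our chassis" means. A sketch of such an event class, assuming ovsdbapp's RowEvent interface (an (events, table, conditions) constructor plus match_fn/run hooks; the handler wiring is illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        def __init__(self, handler):
            self.handler = handler
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old=None):
            # Only act when the chassis column changed and is now set.
            return hasattr(old, 'chassis') and bool(row.chassis)

        def run(self, event, row, old):
            self.handler(row.logical_port, str(row.datapath))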
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.416 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8a96dbd0-4cbc-49d3-b25c-51731921a9c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.447 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b324893c-bb6f-48cb-b16f-62009e2962e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.451 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a65dd642-1906-4283-8968-3ebf3de690a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.484 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5db55608-2a80-458c-bd3e-65e5b198b23d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.499 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30fa14d1-494d-49cd-8def-0e2ea9697ca1 is wrong marker] (placeholder guard, see actual body below) _call_back
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.515 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30fa14d1-494d-49cd-8def-0e2ea9697ca1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a06a21a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588383, 'tstamp': 588383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274889, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a06a21a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588386, 'tstamp': 588386}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274889, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.517 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:51 np0005539550 nova_compute[257631]: 2025-11-29 07:54:51.518 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539550 nova_compute[257631]: 2025-11-29 07:54:51.519 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.520 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a06a21a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.520 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.520 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a06a21a-b0, col_values=(('external_ids', {'iface-id': '2b822f56-587d-4c36-9c9a-d54b62b2616c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:54:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:51.521 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
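Annotation: the three one-command transactions above re-plug the metadata tap and stamp it with the OVN port it represents: delete tap7a06a21a-b0 from br-ex if present, add it to br-int, and set external_ids:iface-id so ovn-controller can bind it (the latter two report "no change" because the port was already plumbed). The ovs-vsctl equivalents, wrapped in Python (values copied from the log):

    import subprocess

    PORT = "tap7a06a21a-b0"
    IFACE_ID = "2b822f56-587d-4c36-9c9a-d54b62b2616c"

    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT],
        ["ovs-vsctl", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}"],
    ):
        subprocess.run(cmd, check=True)  # one OVSDB transaction each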
Nov 29 02:54:51 np0005539550 nova_compute[257631]: 2025-11-29 07:54:51.861 257641 INFO nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Post operation of migration started#033[00m
Nov 29 02:54:52 np0005539550 NetworkManager[49039]: <info>  [1764402892.2207] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 02:54:52 np0005539550 nova_compute[257631]: 2025-11-29 07:54:52.220 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:52 np0005539550 NetworkManager[49039]: <info>  [1764402892.2216] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 02:54:52 np0005539550 nova_compute[257631]: 2025-11-29 07:54:52.400 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:54:52 np0005539550 nova_compute[257631]: 2025-11-29 07:54:52.400 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:54:52 np0005539550 nova_compute[257631]: 2025-11-29 07:54:52.400 257641 DEBUG nova.network.neutron [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:54:52 np0005539550 nova_compute[257631]: 2025-11-29 07:54:52.443 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:52Z|00103|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:54:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:52Z|00104|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:54:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:52Z|00105|binding|INFO|Releasing lport 47714b16-ac75-406a-b14d-bad4812e27d7 from this chassis (sb_readonly=0)
Nov 29 02:54:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:52Z|00106|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:54:52 np0005539550 ovn_controller[148680]: 2025-11-29T07:54:52Z|00107|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:54:52 np0005539550 nova_compute[257631]: 2025-11-29 07:54:52.478 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 551 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.5 MiB/s wr, 265 op/s
Nov 29 02:54:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:52.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:53.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 583 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.7 MiB/s wr, 367 op/s
Nov 29 02:54:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:54.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:54 np0005539550 nova_compute[257631]: 2025-11-29 07:54:54.783 257641 DEBUG nova.network.neutron [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:54:54 np0005539550 nova_compute[257631]: 2025-11-29 07:54:54.812 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:54:54 np0005539550 nova_compute[257631]: 2025-11-29 07:54:54.831 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:54:54 np0005539550 nova_compute[257631]: 2025-11-29 07:54:54.831 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:54:54 np0005539550 nova_compute[257631]: 2025-11-29 07:54:54.831 257641 DEBUG oslo_concurrency.lockutils [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:54:54 np0005539550 nova_compute[257631]: 2025-11-29 07:54:54.835 257641 INFO nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:54:54 np0005539550 virtqemud[256287]: Domain id=10 name='instance-00000014' uuid=7067462a-37a6-458e-b96c-76adcea5fdfa is tainted: custom-monitor
Nov 29 02:54:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:55 np0005539550 nova_compute[257631]: 2025-11-29 07:54:55.085 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:54:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:55.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:55 np0005539550 nova_compute[257631]: 2025-11-29 07:54:55.605 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:54:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:54:55.709 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:54:55 np0005539550 nova_compute[257631]: 2025-11-29 07:54:55.845 257641 INFO nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.409 257641 DEBUG nova.compute.manager [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-changed-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.409 257641 DEBUG nova.compute.manager [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Refreshing instance network info cache due to event network-changed-5f3dee32-7330-47f1-a98f-0647728e6e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.410 257641 DEBUG oslo_concurrency.lockutils [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.410 257641 DEBUG oslo_concurrency.lockutils [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.410 257641 DEBUG nova.network.neutron [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Refreshing network info cache for port 5f3dee32-7330-47f1-a98f-0647728e6e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 02:54:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 598 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 8.6 MiB/s wr, 348 op/s
Nov 29 02:54:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:56.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.850 257641 INFO nova.virt.libvirt.driver [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.857 257641 DEBUG nova.compute.manager [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:54:56 np0005539550 nova_compute[257631]: 2025-11-29 07:54:56.887 257641 DEBUG nova.objects.instance [None req-14e676da-932f-4d2c-ad81-13d8a6ffeccf 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
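
The three "Sending announce-self command" attempts above, together with virtqemud flagging the domain as "tainted: custom-monitor", are nova's libvirt driver pushing a raw QMP command to QEMU after live migration so the guest re-announces its MAC address on the destination network. A minimal sketch of sending that command through the libvirt-python bindings, using the domain name from the log; this illustrates the mechanism, not nova's exact code path:

    # Send the QMP "announce-self" command via libvirt's QEMU-specific API.
    # Issuing an arbitrary monitor command taints the domain as
    # "custom-monitor", which is exactly what virtqemud logs above.
    import json
    import libvirt
    import libvirt_qemu

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000014")  # name taken from the log
    reply = libvirt_qemu.qemuMonitorCommand(
        dom,
        json.dumps({"execute": "announce-self"}),
        libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT,
    )
    print(reply)
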
Nov 29 02:54:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:57.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:54:57 np0005539550 nova_compute[257631]: 2025-11-29 07:54:57.766 257641 DEBUG nova.network.neutron [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updated VIF entry in instance network info cache for port 5f3dee32-7330-47f1-a98f-0647728e6e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 02:54:57 np0005539550 nova_compute[257631]: 2025-11-29 07:54:57.768 257641 DEBUG nova.network.neutron [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updating instance_info_cache with network_info: [{"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:54:57 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5a077936-cea3-4104-9325-45e9f8a885da does not exist
Nov 29 02:54:57 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9668eea7-5b04-4a6f-ad08-669c4e0c9b34 does not exist
Nov 29 02:54:57 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 53438ba4-d13b-4c41-ad25-539295b3eda8 does not exist
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:54:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:54:57 np0005539550 nova_compute[257631]: 2025-11-29 07:54:57.793 257641 DEBUG oslo_concurrency.lockutils [req-aafe9edd-f44b-4ebd-acfd-f37bda247487 req-4c8d7dda-9a24-4535-92f1-c581c8cab370 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
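
The Acquiring/Acquired/Releasing triplet around the "refresh_cache-…" lock is oslo.concurrency's standard logging for a named internal lock. The same pattern, sketched with the instance UUID from the log; the body of the critical section is a placeholder, not nova's actual refresh code:

    # oslo.concurrency emits the "Acquiring lock" / "Acquired lock" /
    # "Releasing lock" lines seen above when entering/leaving this context.
    from oslo_concurrency import lockutils

    instance_uuid = "7a5f1bd4-70c2-4571-bcd7-070a08c471ae"
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # refresh the instance's network info cache while serialized
        pass
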
Nov 29 02:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:54:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:54:58 np0005539550 podman[275172]: 2025-11-29 07:54:58.396524727 +0000 UTC m=+0.051396106 container create faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 02:54:58 np0005539550 systemd[1]: Started libpod-conmon-faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5.scope.
Nov 29 02:54:58 np0005539550 podman[275172]: 2025-11-29 07:54:58.371173629 +0000 UTC m=+0.026045038 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:54:58 np0005539550 podman[275172]: 2025-11-29 07:54:58.489216298 +0000 UTC m=+0.144087707 container init faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 02:54:58 np0005539550 podman[275172]: 2025-11-29 07:54:58.496671278 +0000 UTC m=+0.151542667 container start faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:54:58 np0005539550 podman[275172]: 2025-11-29 07:54:58.50087267 +0000 UTC m=+0.155744079 container attach faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:54:58 np0005539550 peaceful_babbage[275188]: 167 167
Nov 29 02:54:58 np0005539550 systemd[1]: libpod-faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5.scope: Deactivated successfully.
Nov 29 02:54:58 np0005539550 podman[275172]: 2025-11-29 07:54:58.505060473 +0000 UTC m=+0.159931892 container died faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 02:54:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-df38c7a13d62b5e5f39135c2797ebf9dfa49d939bcc0d2aa0331e28d5a133fd1-merged.mount: Deactivated successfully.
Nov 29 02:54:58 np0005539550 podman[275172]: 2025-11-29 07:54:58.549821981 +0000 UTC m=+0.204693360 container remove faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_babbage, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:54:58 np0005539550 systemd[1]: libpod-conmon-faf772cefc16c3feae609329cfb3db5c49676d417cf58a2c8924c5c5551b6ba5.scope: Deactivated successfully.
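
The create → init → start → attach → died → remove sequence above is the footprint of a short-lived cephadm helper container; its only output was "167 167", which matches the ceph uid/gid baked into the image. This is roughly equivalent to a one-shot podman run. The stat invocation below is an assumption about what cephadm executed, chosen because it would produce exactly that output:

    # One-shot container run in the style of the lifecycle logged above.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Hypothetical probe: print the owner uid/gid of /var/lib/ceph ("167 167").
    subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True,
    )
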
Nov 29 02:54:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 604 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.7 MiB/s wr, 342 op/s
Nov 29 02:54:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:54:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:54:58.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:54:58 np0005539550 podman[275212]: 2025-11-29 07:54:58.760598413 +0000 UTC m=+0.049963129 container create f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 02:54:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:54:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:54:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:54:58 np0005539550 systemd[1]: Started libpod-conmon-f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce.scope.
Nov 29 02:54:58 np0005539550 podman[275212]: 2025-11-29 07:54:58.73361136 +0000 UTC m=+0.022976076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:54:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:54:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd19ed22a401672ec63d45a356abe04fd54cacf8dea1129dd015b8d8782c680/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd19ed22a401672ec63d45a356abe04fd54cacf8dea1129dd015b8d8782c680/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd19ed22a401672ec63d45a356abe04fd54cacf8dea1129dd015b8d8782c680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd19ed22a401672ec63d45a356abe04fd54cacf8dea1129dd015b8d8782c680/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cd19ed22a401672ec63d45a356abe04fd54cacf8dea1129dd015b8d8782c680/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:54:58 np0005539550 podman[275212]: 2025-11-29 07:54:58.881412966 +0000 UTC m=+0.170777702 container init f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:54:58 np0005539550 podman[275212]: 2025-11-29 07:54:58.889408791 +0000 UTC m=+0.178773507 container start f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 02:54:58 np0005539550 podman[275212]: 2025-11-29 07:54:58.893069298 +0000 UTC m=+0.182434034 container attach f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.107 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.109 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.135 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.227 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.228 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.236 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.237 257641 INFO nova.compute.claims [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Claim successful on node compute-0.ctlplane.example.com
Nov 29 02:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:54:59
Nov 29 02:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'images', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.control']
Nov 29 02:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:54:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:54:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:54:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:54:59.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.346 257641 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpg_cjv671',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.348 257641 DEBUG nova.scheduler.client.report [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.364 257641 DEBUG nova.scheduler.client.report [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.365 257641 DEBUG nova.compute.provider_tree [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
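
The inventory dict logged above is what placement uses for capacity math: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Checking the logged values, this host advertises 32 schedulable vCPUs, 7168 MB of RAM, and 17.1 GB of disk:

    # Effective capacity from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
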
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.385 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.385 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.386 257641 DEBUG nova.network.neutron [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.391 257641 DEBUG nova.scheduler.client.report [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.425 257641 DEBUG nova.scheduler.client.report [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.546 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.750 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Check if temp file /var/lib/nova/instances/tmpto9guicl exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Nov 29 02:54:59 np0005539550 nova_compute[257631]: 2025-11-29 07:54:59.752 257641 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpto9guicl',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Nov 29 02:54:59 np0005539550 happy_keller[275229]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:54:59 np0005539550 happy_keller[275229]: --> relative data size: 1.0
Nov 29 02:54:59 np0005539550 happy_keller[275229]: --> All data devices are unavailable
Nov 29 02:54:59 np0005539550 systemd[1]: libpod-f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce.scope: Deactivated successfully.
Nov 29 02:54:59 np0005539550 podman[275212]: 2025-11-29 07:54:59.805913772 +0000 UTC m=+1.095278498 container died f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:54:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8cd19ed22a401672ec63d45a356abe04fd54cacf8dea1129dd015b8d8782c680-merged.mount: Deactivated successfully.
Nov 29 02:54:59 np0005539550 podman[275212]: 2025-11-29 07:54:59.863793871 +0000 UTC m=+1.153158577 container remove f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 02:54:59 np0005539550 systemd[1]: libpod-conmon-f35d2daf2b128afbffdfe93d8fbf921b8c852ddc01cab0de2517ab778ae310ce.scope: Deactivated successfully.
Nov 29 02:54:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:54:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:54:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291533116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.012 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
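
The paired "Running cmd (subprocess)" / "CMD … returned: 0" lines bracket oslo.concurrency's processutils.execute, which nova's RBD storage backend shells out through to read pool capacity. A minimal sketch of the same call, with error handling omitted; the JSON field accessed is standard ceph df output:

    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    print(json.loads(out)["stats"]["total_avail_bytes"])
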
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.021 257641 DEBUG nova.compute.provider_tree [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.038 257641 DEBUG nova.scheduler.client.report [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.063 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.064 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.087 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.118 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.119 257641 DEBUG nova.network.neutron [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.156 257641 INFO nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.196 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.326 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.327 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.327 257641 INFO nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Creating image(s)
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.359 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.389 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.418 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.422 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:00 np0005539550 podman[275470]: 2025-11-29 07:55:00.479892572 +0000 UTC m=+0.040904196 container create aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.484 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
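
The qemu-img probe above is deliberately wrapped in oslo's prlimit re-exec helper ("-m oslo_concurrency.prlimit --as=1073741824 --cpu=30"), capping address space at 1 GiB and CPU time at 30 s so a malformed image cannot exhaust the host. The same limits can be requested directly through processutils; the image path below is a placeholder:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # --as=1073741824
        cpu_time=30,                       # --cpu=30
    )
    out, _ = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/path/to/base/image",
        "--force-share", "--output=json",
        prlimit=limits,
    )
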
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.485 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.486 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.487 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.510 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.513 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:00 np0005539550 systemd[1]: Started libpod-conmon-aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7.scope.
Nov 29 02:55:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:55:00 np0005539550 podman[275470]: 2025-11-29 07:55:00.55453787 +0000 UTC m=+0.115549524 container init aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:55:00 np0005539550 podman[275470]: 2025-11-29 07:55:00.463088543 +0000 UTC m=+0.024100197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:00 np0005539550 podman[275470]: 2025-11-29 07:55:00.561090506 +0000 UTC m=+0.122102130 container start aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:55:00 np0005539550 podman[275470]: 2025-11-29 07:55:00.565375981 +0000 UTC m=+0.126387615 container attach aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 02:55:00 np0005539550 great_pare[275507]: 167 167
Nov 29 02:55:00 np0005539550 systemd[1]: libpod-aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7.scope: Deactivated successfully.
Nov 29 02:55:00 np0005539550 conmon[275507]: conmon aa4529a98dbd542d8ee2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7.scope/container/memory.events
Nov 29 02:55:00 np0005539550 podman[275470]: 2025-11-29 07:55:00.567921869 +0000 UTC m=+0.128933513 container died aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:55:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-563c2fea1bbfbbe60279d39e3502f1957952b12b3c9a948afcb1d8d36bace8c5-merged.mount: Deactivated successfully.
Nov 29 02:55:00 np0005539550 podman[275470]: 2025-11-29 07:55:00.601178759 +0000 UTC m=+0.162190383 container remove aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_pare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.686 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 610 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.4 MiB/s wr, 292 op/s
Nov 29 02:55:00 np0005539550 systemd[1]: libpod-conmon-aa4529a98dbd542d8ee2614eae348d34d7d04330d0167fb3122bb98b635345d7.scope: Deactivated successfully.
Nov 29 02:55:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:00.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:00 np0005539550 nova_compute[257631]: 2025-11-29 07:55:00.803 257641 DEBUG nova.policy [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '446b0f05699845e8bd9f7d59c787f671', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f48e629446148199d44b34243b98b8a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
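
The policy line is a non-fatal oslo.policy check: with only the member and reader roles, network:attach_external_network is denied, so the build proceeds against a tenant (non-external) network. A sketch of how such a check evaluates; the "role:admin" check string here is illustrative, since nova's real default is registered in nova.policies:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults([
        policy.RuleDefault("network:attach_external_network", "role:admin"),
    ])
    creds = {"roles": ["member", "reader"],
             "project_id": "1f48e629446148199d44b34243b98b8a"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # False -> logged as "Policy check ... failed"
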
Nov 29 02:55:00 np0005539550 podman[275546]: 2025-11-29 07:55:00.893138994 +0000 UTC m=+0.054216902 container create 60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:55:00 np0005539550 systemd[1]: Started libpod-conmon-60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95.scope.
Nov 29 02:55:00 np0005539550 podman[275546]: 2025-11-29 07:55:00.870536619 +0000 UTC m=+0.031614557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:55:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1308f646b927f770ce84745bcb9e622973e2ecdb952b7a4dd3939a7f77704e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1308f646b927f770ce84745bcb9e622973e2ecdb952b7a4dd3939a7f77704e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1308f646b927f770ce84745bcb9e622973e2ecdb952b7a4dd3939a7f77704e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1308f646b927f770ce84745bcb9e622973e2ecdb952b7a4dd3939a7f77704e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
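
[annotation] These kernel lines are informational, not errors: the xfs-backed overlay mounts use 32-bit inode timestamps, so valid times run out at 0x7fffffff seconds after the Unix epoch. Converting that limit shows the familiar year-2038 boundary:

    # 0x7fffffff is the largest 32-bit signed epoch second, as quoted
    # in the kernel messages above.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF
    print(limit)                                           # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
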
Nov 29 02:55:01 np0005539550 podman[275546]: 2025-11-29 07:55:01.020183384 +0000 UTC m=+0.181261302 container init 60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:55:01 np0005539550 podman[275546]: 2025-11-29 07:55:01.029073612 +0000 UTC m=+0.190151520 container start 60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 02:55:01 np0005539550 podman[275546]: 2025-11-29 07:55:01.032875494 +0000 UTC m=+0.193953432 container attach 60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bell, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.098 257641 DEBUG nova.network.neutron [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.119 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.121 257641 DEBUG os_brick.utils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.122 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.136 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.137 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[515eea25-c831-41c2-ae2b-d6197c2ec5e4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.138 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.147 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.147 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[55380886-d89c-4831-9dd6-226d1696584d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.150 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.159 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.159 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[3b51b1bc-7892-45c2-beae-803e07bca59f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.161 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[fed6b0a3-e258-4e01-930e-0380ff1f60d5]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.161 257641 DEBUG oslo_concurrency.processutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.183 257641 DEBUG oslo_concurrency.processutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "nvme version" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.189 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.189 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.189 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.190 257641 DEBUG os_brick.utils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
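
[annotation] The trace from "==> get_connector_properties" to the return above shows os_brick assembling this host's storage connector profile: it runs "multipathd show status", reads /etc/iscsi/initiatorname.iscsi, resolves the root filesystem source with findmnt, checks "nvme version", and probes for a LightOS discovery client (absent here, hence the non-fatal ECONNREFUSED). A rough, unprivileged approximation of those probes; the real code routes them through rootwrap/privsep and parses each result:

    # Simplified sketch of the probes logged above; error handling and
    # privilege escalation are omitted on purpose.
    import subprocess

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    props = {
        "multipath_status": run(["multipathd", "show", "status"]),
        "initiator": run(["cat", "/etc/iscsi/initiatorname.iscsi"]),
        "root_source": run(["findmnt", "-v", "/", "-n", "-o", "SOURCE"]),
        "nvme_version": run(["nvme", "version"]),
    }
    print(props)
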
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.209 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.695s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.291 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] resizing rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
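
[annotation] The two lines above show the instance root disk being created: the cached Glance base image is imported into the vms RBD pool, then resized to 1073741824 bytes, i.e. exactly 1 GiB (2**30), matching the flavor's root_gb=1. CLI equivalents, with names copied from the log (running them outside this environment is hypothetical):

    # Equivalent rbd commands for the import/resize pair logged above.
    import subprocess

    disk = "854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk"
    subprocess.run([
        "rbd", "import", "--pool", "vms",
        "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488",
        disk, "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    subprocess.run([
        "rbd", "resize", "--pool", "vms", disk,
        "--size", "1G",  # 1 GiB = 1073741824 bytes, as logged by rbd_utils
    ], check=True)
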
Nov 29 02:55:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.400 257641 DEBUG nova.objects.instance [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'migration_context' on Instance uuid 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.412 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.412 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Ensure instance console log exists: /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.413 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.413 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.413 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
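
[annotation] The three lockutils lines show the standard oslo.concurrency pattern: _allocate_mdevs takes the named "vgpu_resources" lock, finds no vGPU work to do, and releases it after 0.000s. The same serialization is typically expressed with the synchronized decorator; the function body here is illustrative only:

    # Illustrative use of the named-lock pattern seen in the log above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # critical section: only one thread at a time may inspect or
        # assign mediated devices (vGPUs) on this host
        return []
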
Nov 29 02:55:01 np0005539550 nova_compute[257631]: 2025-11-29 07:55:01.567 257641 DEBUG nova.network.neutron [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Successfully created port: c6266046-dc59-475f-a0f5-391ba640d669 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 02:55:01 np0005539550 nervous_bell[275566]: {
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:    "0": [
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:        {
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "devices": [
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "/dev/loop3"
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            ],
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "lv_name": "ceph_lv0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "lv_size": "7511998464",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "name": "ceph_lv0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "tags": {
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.cluster_name": "ceph",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.crush_device_class": "",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.encrypted": "0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.osd_id": "0",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.type": "block",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:                "ceph.vdo": "0"
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            },
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "type": "block",
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:            "vg_name": "ceph_vg0"
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:        }
Nov 29 02:55:01 np0005539550 nervous_bell[275566]:    ]
Nov 29 02:55:01 np0005539550 nervous_bell[275566]: }
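
[annotation] The JSON emitted by nervous_bell maps OSD id "0" to its logical volume, with the ceph.* LVM tags (cluster fsid, osd fsid, encryption flag, CRUSH device class) carried on the LV; its shape matches "ceph-volume lvm list --format json" output. A parsing sketch, assuming the output was captured to a file:

    # Consume the per-OSD LVM report shown above; the file name is a
    # hypothetical capture of that output.
    import json

    with open("lvm_list.json") as f:
        report = json.load(f)

    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], tags["ceph.type"])
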
Nov 29 02:55:01 np0005539550 systemd[1]: libpod-60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95.scope: Deactivated successfully.
Nov 29 02:55:01 np0005539550 podman[275546]: 2025-11-29 07:55:01.891026874 +0000 UTC m=+1.052104782 container died 60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bell, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f1308f646b927f770ce84745bcb9e622973e2ecdb952b7a4dd3939a7f77704e5-merged.mount: Deactivated successfully.
Nov 29 02:55:02 np0005539550 podman[275546]: 2025-11-29 07:55:02.077130366 +0000 UTC m=+1.238208274 container remove 60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:55:02 np0005539550 systemd[1]: libpod-conmon-60cdf84580d604c0392cdee4fcd5fcd8e82603b3cecd27497e65aeb56593ba95.scope: Deactivated successfully.
Nov 29 02:55:02 np0005539550 podman[275807]: 2025-11-29 07:55:02.671497714 +0000 UTC m=+0.038749008 container create 4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elgamal, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:55:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 610 MiB data, 625 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 232 op/s
Nov 29 02:55:02 np0005539550 systemd[1]: Started libpod-conmon-4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69.scope.
Nov 29 02:55:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:55:02 np0005539550 podman[275807]: 2025-11-29 07:55:02.654589112 +0000 UTC m=+0.021840436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:02 np0005539550 podman[275807]: 2025-11-29 07:55:02.753317235 +0000 UTC m=+0.120568549 container init 4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elgamal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:55:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:02.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:02 np0005539550 podman[275807]: 2025-11-29 07:55:02.763754844 +0000 UTC m=+0.131006138 container start 4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 02:55:02 np0005539550 podman[275807]: 2025-11-29 07:55:02.768309126 +0000 UTC m=+0.135560420 container attach 4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elgamal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:55:02 np0005539550 mystifying_elgamal[275824]: 167 167
Nov 29 02:55:02 np0005539550 systemd[1]: libpod-4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69.scope: Deactivated successfully.
Nov 29 02:55:02 np0005539550 podman[275807]: 2025-11-29 07:55:02.769917899 +0000 UTC m=+0.137169203 container died 4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 02:55:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b47dd7930cafe6a1f0b0618258ff3c71f2dd8255bd55817e6a4863bb82c65b35-merged.mount: Deactivated successfully.
Nov 29 02:55:02 np0005539550 podman[275807]: 2025-11-29 07:55:02.820692348 +0000 UTC m=+0.187943642 container remove 4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elgamal, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 02:55:02 np0005539550 podman[275821]: 2025-11-29 07:55:02.829502694 +0000 UTC m=+0.120969629 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 02:55:02 np0005539550 systemd[1]: libpod-conmon-4af07f537d475774cec4e6debd233c5dbc2dd9b4cd63d4311b30bb6e736edf69.scope: Deactivated successfully.
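
[annotation] The one-shot container mystifying_elgamal lives for only a few milliseconds and prints "167 167", which is consistent with cephadm probing the uid and gid of the ceph user inside the image before writing files it will bind-mount. A hedged reconstruction of such a probe; the stat invocation is an assumption, not taken from the log:

    # Assumed uid/gid probe; only the image digest is copied from the log.
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = int(out[0]), int(out[1])
    print(uid, gid)  # 167 167 is the ceph user/group in these images
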
Nov 29 02:55:02 np0005539550 nova_compute[257631]: 2025-11-29 07:55:02.883 257641 DEBUG nova.network.neutron [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Successfully updated port: c6266046-dc59-475f-a0f5-391ba640d669 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 02:55:02 np0005539550 nova_compute[257631]: 2025-11-29 07:55:02.900 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:02 np0005539550 nova_compute[257631]: 2025-11-29 07:55:02.900 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquired lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:02 np0005539550 nova_compute[257631]: 2025-11-29 07:55:02.901 257641 DEBUG nova.network.neutron [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.015 257641 DEBUG nova.compute.manager [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received event network-changed-c6266046-dc59-475f-a0f5-391ba640d669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.016 257641 DEBUG nova.compute.manager [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Refreshing instance network info cache due to event network-changed-c6266046-dc59-475f-a0f5-391ba640d669. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.016 257641 DEBUG oslo_concurrency.lockutils [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:03 np0005539550 podman[275871]: 2025-11-29 07:55:03.025196602 +0000 UTC m=+0.045720335 container create ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:55:03 np0005539550 systemd[1]: Started libpod-conmon-ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd.scope.
Nov 29 02:55:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:55:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f17787eec7ea4dcddb02933f29abfc4121feaadf1df497dafe3e5e2ead3914c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f17787eec7ea4dcddb02933f29abfc4121feaadf1df497dafe3e5e2ead3914c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f17787eec7ea4dcddb02933f29abfc4121feaadf1df497dafe3e5e2ead3914c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f17787eec7ea4dcddb02933f29abfc4121feaadf1df497dafe3e5e2ead3914c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:03 np0005539550 podman[275871]: 2025-11-29 07:55:03.098892285 +0000 UTC m=+0.119416048 container init ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:55:03 np0005539550 podman[275871]: 2025-11-29 07:55:03.004987471 +0000 UTC m=+0.025511224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:55:03 np0005539550 podman[275871]: 2025-11-29 07:55:03.105719897 +0000 UTC m=+0.126243630 container start ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 02:55:03 np0005539550 podman[275871]: 2025-11-29 07:55:03.109329534 +0000 UTC m=+0.129853277 container attach ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.161 257641 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpg_cjv671',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='086dffa8-4128-4c55-89ad-f4a779ee7ea0'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.162 257641 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Creating instance directory: /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.162 257641 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Ensure instance console log exists: /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.163 257641 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.166 257641 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.167 257641 DEBUG nova.virt.libvirt.vif [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:38Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.168 257641 DEBUG nova.network.os_vif_util [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.168 257641 DEBUG nova.network.os_vif_util [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.169 257641 DEBUG os_vif [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.169 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.170 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.170 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.174 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.174 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1330295-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.175 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1330295-51, col_values=(('external_ids', {'iface-id': 'd1330295-51bc-4e64-a620-b63a6d8777fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c2:bc:90', 'vm-uuid': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.177 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:03 np0005539550 NetworkManager[49039]: <info>  [1764402903.1790] manager: (tapd1330295-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.180 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.185 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.186 257641 INFO os_vif [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51')#033[00m
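
[annotation] The plug sequence above runs two ovsdb transactions: an idempotent AddBridgeCommand for br-int (which causes no change, since the bridge already exists) and an AddPortCommand plus DbSetCommand that attach tapd1330295-51 and stamp its external_ids with the Neutron port id, MAC, and instance UUID so ovn-controller can bind the port. ovs-vsctl equivalents, with values copied from the log:

    # Idempotent ovs-vsctl equivalents of the two ovsdbapp transactions above.
    import subprocess

    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int"], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int",
                    "tapd1330295-51"], check=True)
    subprocess.run([
        "ovs-vsctl", "set", "Interface", "tapd1330295-51",
        "external_ids:iface-id=d1330295-51bc-4e64-a620-b63a6d8777fb",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:c2:bc:90",
        "external_ids:vm-uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9",
    ], check=True)
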
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.189 257641 DEBUG nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.189 257641 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpg_cjv671',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='086dffa8-4128-4c55-89ad-f4a779ee7ea0'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Nov 29 02:55:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:03.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:03 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:03Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:de:26 10.100.0.8
Nov 29 02:55:03 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:03Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:de:26 10.100.0.8
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.380 257641 DEBUG nova.network.neutron [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.563 257641 DEBUG nova.compute.manager [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-changed-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.563 257641 DEBUG nova.compute.manager [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Refreshing instance network info cache due to event network-changed-5f3dee32-7330-47f1-a98f-0647728e6e29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.564 257641 DEBUG oslo_concurrency.lockutils [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.564 257641 DEBUG oslo_concurrency.lockutils [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:03 np0005539550 nova_compute[257631]: 2025-11-29 07:55:03.564 257641 DEBUG nova.network.neutron [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Refreshing network info cache for port 5f3dee32-7330-47f1-a98f-0647728e6e29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:55:03 np0005539550 keen_lewin[275888]: {
Nov 29 02:55:03 np0005539550 keen_lewin[275888]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:55:03 np0005539550 keen_lewin[275888]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:55:03 np0005539550 keen_lewin[275888]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:55:03 np0005539550 keen_lewin[275888]:        "osd_id": 0,
Nov 29 02:55:03 np0005539550 keen_lewin[275888]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:55:03 np0005539550 keen_lewin[275888]:        "type": "bluestore"
Nov 29 02:55:03 np0005539550 keen_lewin[275888]:    }
Nov 29 02:55:03 np0005539550 keen_lewin[275888]: }
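The keen_lewin container (a short-lived cephadm helper, judging by the podman teardown that follows) emits a ceph-volume-style OSD inventory as JSON keyed by OSD UUID, which the mgr then persists via the config-key commands a few lines below. Consuming that structure needs nothing beyond the stdlib; the JSON here is copied verbatim from the block above:

    import json

    raw = '''{
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "type": "bluestore"
        }
    }'''

    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")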
Nov 29 02:55:04 np0005539550 systemd[1]: libpod-ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd.scope: Deactivated successfully.
Nov 29 02:55:04 np0005539550 podman[275871]: 2025-11-29 07:55:04.013709322 +0000 UTC m=+1.034233055 container died ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:55:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1f17787eec7ea4dcddb02933f29abfc4121feaadf1df497dafe3e5e2ead3914c-merged.mount: Deactivated successfully.
Nov 29 02:55:04 np0005539550 podman[275871]: 2025-11-29 07:55:04.062882208 +0000 UTC m=+1.083405941 container remove ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 02:55:04 np0005539550 systemd[1]: libpod-conmon-ec2767a16b417231c930609d9f57d60176e1915fdeb140c0c65687485e6c3bdd.scope: Deactivated successfully.
Nov 29 02:55:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:55:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:55:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev efcc6f3a-1ab7-4016-b039-a3c32f5dfafc does not exist
Nov 29 02:55:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b085d77c-3853-48e5-9d2c-f772abf67651 does not exist
Nov 29 02:55:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 98f8aaab-3cb1-41dc-9395-9ec6c16627ea does not exist
Nov 29 02:55:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 646 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.6 MiB/s wr, 259 op/s
Nov 29 02:55:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:04.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.814 257641 DEBUG nova.network.neutron [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Updating instance_info_cache with network_info: [{"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.839 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Releasing lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.840 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Instance network_info: |[{"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.840 257641 DEBUG oslo_concurrency.lockutils [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.840 257641 DEBUG nova.network.neutron [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Refreshing network info cache for port c6266046-dc59-475f-a0f5-391ba640d669 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
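The network_info blobs nova caches above are JSON lists of VIFs, each nesting network -> subnets -> ips. A stdlib sketch that walks that shape; the sample is trimmed from the cache entry above to just the fields the loop touches:

    import json

    network_info = json.loads('''[{
        "id": "c6266046-dc59-475f-a0f5-391ba640d669",
        "address": "fa:16:3e:0a:ad:ee",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.4",
                                          "type": "fixed"}]}]}
    }]''')

    for vif in network_info:
        fixed = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips'] if ip['type'] == 'fixed']
        print(vif['id'], vif['address'], fixed)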
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.844 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Start _get_guest_xml network_info=[{"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.850 257641 WARNING nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.856 257641 DEBUG nova.virt.libvirt.host [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.857 257641 DEBUG nova.virt.libvirt.host [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.863 257641 DEBUG nova.virt.libvirt.host [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.864 257641 DEBUG nova.virt.libvirt.host [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.865 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.865 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.866 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.866 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.866 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.866 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.867 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.867 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.867 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.867 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.868 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.868 257641 DEBUG nova.virt.hardware [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
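The hardware.py trace above shows the whole topology decision: with no flavor or image constraints the limits default to 65536 per dimension and the preference to 0:0:0, so for the 1-vCPU m1.nano the only valid factorization is 1 socket x 1 core x 1 thread. A simplified re-enumeration of that search, not Nova's actual _get_possible_cpu_topologies:

    from collections import namedtuple

    Topology = namedtuple('Topology', 'sockets cores threads')

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate every sockets*cores*threads factorization of vcpus
        # that fits inside the per-dimension limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield Topology(s, c, t)

    print(list(possible_topologies(1)))
    # [Topology(sockets=1, cores=1, threads=1)] -- matches the 1 vcpu(s) log above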
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.871 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:04 np0005539550 nova_compute[257631]: 2025-11-29 07:55:04.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 02:55:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:55:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:05.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1456838427' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.343 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
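nova's rbd image backend shells out to ceph, as logged above, to learn the monitor addresses that later appear as <host> elements in the guest XML. A sketch of the same call via subprocess; the "mons"/"public_addr" keys are assumptions about the mon dump JSON schema, hence the defensive .get():

    import json
    import subprocess

    # Same command oslo.concurrency.processutils runs in the log above.
    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout

    for mon in json.loads(out).get('mons', []):
        print(mon.get('name'), mon.get('public_addr'))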
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.370 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.374 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.687 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.758 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.758 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.758 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 02:55:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077655932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.853 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.855 257641 DEBUG nova.virt.libvirt.vif [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1833838032',display_name='tempest-ServersAdminTestJSON-server-1833838032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1833838032',id=23,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-fmtionkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:00Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=854cbdb6-6ca7-4fa5-8105-3d48b2926d96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.855 257641 DEBUG nova.network.os_vif_util [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.856 257641 DEBUG nova.network.os_vif_util [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:ad:ee,bridge_name='br-int',has_traffic_filtering=True,id=c6266046-dc59-475f-a0f5-391ba640d669,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6266046-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.859 257641 DEBUG nova.objects.instance [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'pci_devices' on Instance uuid 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.866 257641 DEBUG nova.network.neutron [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Port d1330295-51bc-4e64-a620-b63a6d8777fb updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.874 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <uuid>854cbdb6-6ca7-4fa5-8105-3d48b2926d96</uuid>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <name>instance-00000017</name>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersAdminTestJSON-server-1833838032</nova:name>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:55:04</nova:creationTime>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:user uuid="446b0f05699845e8bd9f7d59c787f671">tempest-ServersAdminTestJSON-93807439-project-member</nova:user>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:project uuid="1f48e629446148199d44b34243b98b8a">tempest-ServersAdminTestJSON-93807439</nova:project>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <nova:port uuid="c6266046-dc59-475f-a0f5-391ba640d669">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <entry name="serial">854cbdb6-6ca7-4fa5-8105-3d48b2926d96</entry>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <entry name="uuid">854cbdb6-6ca7-4fa5-8105-3d48b2926d96</entry>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk.config">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:0a:ad:ee"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <target dev="tapc6266046-dc"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </interface>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/console.log" append="off"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:55:05 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:55:05 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:55:05 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:55:05 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
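The finished domain XML above is plain libvirt XML and can be inspected with the stdlib. This sketch pulls the RBD disks and their monitor hosts out of a copy trimmed to one disk; element and attribute names come straight from the XML in the log:

    import xml.etree.ElementTree as ET

    xml = '''<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>'''

    for disk in ET.fromstring(xml).iter('disk'):
        src, tgt = disk.find('source'), disk.find('target')
        hosts = [h.get('name') for h in src.iter('host')]
        print(tgt.get('dev'), src.get('protocol'), src.get('name'), hosts)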
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.875 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Preparing to wait for external event network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.875 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.875 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.876 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
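Note the ordering above: the waiter for network-vif-plugged is registered (under the "<uuid>-events" lock) before os-vif even starts plugging, so the Neutron callback cannot fire before anyone is listening. A minimal prepare-then-wait sketch with stdlib threading; Nova's real InstanceEvents machinery is eventlet-based, so this is an analogy rather than its implementation:

    import threading

    _events = {}
    _lock = threading.Lock()     # plays the role of the "<uuid>-events" lock

    def prepare_for_event(instance_uuid, name):
        with _lock:
            return _events.setdefault((instance_uuid, name), threading.Event())

    def external_event(instance_uuid, name):
        # Called when the out-of-band notification (e.g. from Neutron) arrives.
        with _lock:
            ev = _events.get((instance_uuid, name))
        if ev:
            ev.set()

    w = prepare_for_event('854cbdb6', 'network-vif-plugged')  # register first
    external_event('854cbdb6', 'network-vif-plugged')         # callback fires
    print(w.wait(timeout=1))  # True: the event was not lost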
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.877 257641 DEBUG nova.virt.libvirt.vif [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T07:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1833838032',display_name='tempest-ServersAdminTestJSON-server-1833838032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1833838032',id=23,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-fmtionkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T07:55:00Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=854cbdb6-6ca7-4fa5-8105-3d48b2926d96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.877 257641 DEBUG nova.network.os_vif_util [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.878 257641 DEBUG nova.network.os_vif_util [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:ad:ee,bridge_name='br-int',has_traffic_filtering=True,id=c6266046-dc59-475f-a0f5-391ba640d669,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6266046-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.878 257641 DEBUG os_vif [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:ad:ee,bridge_name='br-int',has_traffic_filtering=True,id=c6266046-dc59-475f-a0f5-391ba640d669,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6266046-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.879 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.879 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.880 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.886 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.887 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6266046-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.888 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6266046-dc, col_values=(('external_ids', {'iface-id': 'c6266046-dc59-475f-a0f5-391ba640d669', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:ad:ee', 'vm-uuid': '854cbdb6-6ca7-4fa5-8105-3d48b2926d96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.889 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:05 np0005539550 NetworkManager[49039]: <info>  [1764402905.8914] manager: (tapc6266046-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.900 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:55:05 np0005539550 nova_compute[257631]: 2025-11-29 07:55:05.905 257641 INFO os_vif [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:ad:ee,bridge_name='br-int',has_traffic_filtering=True,id=c6266046-dc59-475f-a0f5-391ba640d669,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6266046-dc')#033[00m
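The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are os-vif wiring the tap device into br-int and tagging the Interface row with the external_ids that ovn-controller matches against the logical port. A sketch of the same transaction; the socket path and exact method signatures are assumptions that vary across ovsdbapp versions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connection string is an assumption; any reachable OVSDB endpoint works.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))

    external_ids = {'iface-id': 'c6266046-dc59-475f-a0f5-391ba640d669',
                    'iface-status': 'active',
                    'attached-mac': 'fa:16:3e:0a:ad:ee',
                    'vm-uuid': '854cbdb6-6ca7-4fa5-8105-3d48b2926d96'}

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapc6266046-dc', may_exist=True))
        txn.add(api.db_set('Interface', 'tapc6266046-dc',
                           ('external_ids', external_ids)))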
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.017 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.018 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.018 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] No VIF found with MAC fa:16:3e:0a:ad:ee, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.019 257641 INFO nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Using config drive#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.048 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.058 257641 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpg_cjv671',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='086dffa8-4128-4c55-89ad-f4a779ee7ea0'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.292 257641 DEBUG nova.network.neutron [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updated VIF entry in instance network info cache for port 5f3dee32-7330-47f1-a98f-0647728e6e29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.293 257641 DEBUG nova.network.neutron [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updating instance_info_cache with network_info: [{"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
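[Editor's note] The network_info blob written to the cache above is plain JSON. A short helper for pulling the fixed IPs and devnames out of such a dump — a reading aid for these log entries, not Nova code:

```python
import json

def fixed_ips(network_info_json: str):
    """Yield (devname, address) for every fixed IP in a cached
    network_info list like the one logged above."""
    for vif in json.loads(network_info_json):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                if ip['type'] == 'fixed':
                    yield vif['devname'], ip['address']

# For the entry above this yields ('tap5f3dee32-73', '10.100.0.8').
```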
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.310 257641 DEBUG oslo_concurrency.lockutils [req-375c357d-ef66-4820-9e1c-8499a8f7f0bd req-bc296d20-220f-46e0-8734-66d0024b7d68 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7a5f1bd4-70c2-4571-bcd7-070a08c471ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:06 np0005539550 NetworkManager[49039]: <info>  [1764402906.3150] manager: (tapd1330295-51): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 02:55:06 np0005539550 kernel: tapd1330295-51: entered promiscuous mode
Nov 29 02:55:06 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:06Z|00108|binding|INFO|Claiming lport d1330295-51bc-4e64-a620-b63a6d8777fb for this additional chassis.
Nov 29 02:55:06 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:06Z|00109|binding|INFO|d1330295-51bc-4e64-a620-b63a6d8777fb: Claiming fa:16:3e:c2:bc:90 10.100.0.12
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.321 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:06 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:06Z|00110|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb ovn-installed in OVS
Nov 29 02:55:06 np0005539550 systemd-udevd[276124]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.351 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:06 np0005539550 systemd-machined[216673]: New machine qemu-11-instance-00000013.
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.358 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:06 np0005539550 NetworkManager[49039]: <info>  [1764402906.3635] device (tapd1330295-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:55:06 np0005539550 NetworkManager[49039]: <info>  [1764402906.3643] device (tapd1330295-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:55:06 np0005539550 systemd[1]: Started Virtual Machine qemu-11-instance-00000013.
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.529 257641 INFO nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Creating config drive at /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/disk.config#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.535 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5hdsgs_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.663 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5hdsgs_" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
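[Editor's note] The config drive is built by shelling out to mkisofs through oslo.concurrency, as the two lines above show. A sketch of the equivalent call — the flags, publisher string, and paths are copied from the log; the staging directory contents are whatever Nova placed there:

```python
from oslo_concurrency import processutils

# Rebuild the logged mkisofs invocation: an ISO9660/Joliet/Rock Ridge
# image with the config-2 volume label that cloud-init looks for.
out, err = processutils.execute(
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/'
          '854cbdb6-6ca7-4fa5-8105-3d48b2926d96/disk.config',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
    '-quiet', '-J', '-r', '-V', 'config-2',
    '/tmp/tmpe5hdsgs_',  # staging dir from the log; contents elided
)
```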
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.695 257641 DEBUG nova.storage.rbd_utils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] rbd image 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:55:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 688 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 MiB/s wr, 201 op/s
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.700 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/disk.config 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:06.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.960 257641 DEBUG nova.network.neutron [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Updated VIF entry in instance network info cache for port c6266046-dc59-475f-a0f5-391ba640d669. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.961 257641 DEBUG nova.network.neutron [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Updating instance_info_cache with network_info: [{"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:06 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.982 257641 DEBUG oslo_concurrency.lockutils [req-a8c3c912-e416-4636-95c8-3dc7a8c11046 req-a50024fd-ba2e-4dfd-acea-5268dec7d0be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:06.999 257641 INFO nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Took 5.79 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.000 257641 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.021 257641 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpto9guicl',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='7067462a-37a6-458e-b96c-76adcea5fdfa',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(2ab88ec4-0df8-4e1f-a957-0537069aa961),old_vol_attachment_ids={3ef07b78-0409-49cf-a941-8a19b02dd939='0fdf73ae-cbdf-4670-b55b-bd9dfb3efb32'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.025 257641 DEBUG nova.objects.instance [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lazy-loading 'migration_context' on Instance uuid 7067462a-37a6-458e-b96c-76adcea5fdfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.026 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.028 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.028 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.044 257641 DEBUG nova.virt.libvirt.migration [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Find same serial number: pos=1, serial=3ef07b78-0409-49cf-a941-8a19b02dd939 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.045 257641 DEBUG nova.virt.libvirt.vif [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:56Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.045 257641 DEBUG nova.network.os_vif_util [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.046 257641 DEBUG nova.network.os_vif_util [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.046 257641 DEBUG nova.virt.libvirt.migration [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:55:07 np0005539550 nova_compute[257631]:  <mac address="fa:16:3e:17:bf:87"/>
Nov 29 02:55:07 np0005539550 nova_compute[257631]:  <model type="virtio"/>
Nov 29 02:55:07 np0005539550 nova_compute[257631]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:55:07 np0005539550 nova_compute[257631]:  <mtu size="1442"/>
Nov 29 02:55:07 np0005539550 nova_compute[257631]:  <target dev="tap5d7fa9ca-8f"/>
Nov 29 02:55:07 np0005539550 nova_compute[257631]: </interface>
Nov 29 02:55:07 np0005539550 nova_compute[257631]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
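[Editor's note] Before invoking the migrate API, Nova rewrites the guest's <interface> element for the destination (the _update_vif_xml step logged above: the domain's interface is matched by MAC and swapped for the destination-side config). A rough ElementTree sketch of that kind of rewrite, not the upstream code:

```python
import xml.etree.ElementTree as ET

# Destination-side interface element, copied from the log lines above.
DEST_IFACE = ET.fromstring(
    '<interface type="ethernet">'
    '<mac address="fa:16:3e:17:bf:87"/>'
    '<model type="virtio"/>'
    '<driver name="vhost" rx_queue_size="512"/>'
    '<mtu size="1442"/>'
    '<target dev="tap5d7fa9ca-8f"/>'
    '</interface>'
)

def update_vif_xml(domain_xml: str) -> str:
    """Replace the guest <interface> whose MAC matches the destination
    VIF with the destination element -- roughly what Nova's
    _update_vif_xml does during live migration."""
    dom = ET.fromstring(domain_xml)
    devices = dom.find('devices')
    mac = DEST_IFACE.find('mac').get('address')
    for iface in devices.findall('interface'):
        if iface.find('mac').get('address') == mac:
            devices.remove(iface)
            devices.append(DEST_IFACE)
            break
    return ET.tostring(dom, encoding='unicode')
```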
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.047 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 02:55:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:07.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.530 257641 DEBUG nova.virt.libvirt.migration [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.531 257641 INFO nova.virt.libvirt.migration [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.590 257641 INFO nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
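[Editor's note] The step table logged at 07:55:07.530 — (0, 50), (150, 95), … (1500, 500) — is Nova's downtime ramp: allowed migration downtime starts at 50 ms and rises by 45 ms every 150 s of elapsed time, capping at 500 ms. A sketch that reproduces the schedule, assuming the default libvirt options (live_migration_downtime=500, _steps=10, _delay=75) and roughly 2 GiB of data to transfer; the 2 GiB floor is an assumption about how the delay is scaled:

```python
def downtime_steps(data_gb: int,
                   max_downtime: int = 500,  # ms (live_migration_downtime)
                   steps: int = 10,          # (live_migration_downtime_steps)
                   delay: int = 75):         # s/GiB (live_migration_downtime_delay)
    """Reproduce the (elapsed_s, downtime_ms) ramp logged above.

    Mirrors the shape of nova.virt.libvirt.migration.downtime_steps
    under default config; a sketch, not the exact upstream code.
    """
    delay = delay * max(data_gb, 2)          # assumed 2 GiB floor
    base = max_downtime // steps
    offset = (max_downtime - base) / steps
    return [(delay * i, int(base + offset * i)) for i in range(steps + 1)]

# downtime_steps(2)[:3] -> [(0, 50), (150, 95), (300, 140)], matching the log.
```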
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.690 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402907.6903589, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.691 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Started (Lifecycle Event)#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.712 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.776 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating instance_info_cache with network_info: [{"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.797 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-9bef976c-2981-4d19-aa60-8a550b7093ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.797 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.798 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.798 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.798 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.815 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.816 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.816 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.816 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:55:07 np0005539550 nova_compute[257631]: 2025-11-29 07:55:07.817 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.094 257641 DEBUG nova.virt.libvirt.migration [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.095 257641 DEBUG nova.virt.libvirt.migration [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.132 257641 DEBUG nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.133 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.133 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.133 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.133 257641 DEBUG nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.134 257641 DEBUG nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.134 257641 DEBUG nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.134 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.134 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.134 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.134 257641 DEBUG nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.135 257641 WARNING nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.135 257641 DEBUG nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-changed-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.135 257641 DEBUG nova.compute.manager [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Refreshing instance network info cache due to event network-changed-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.135 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.135 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.135 257641 DEBUG nova.network.neutron [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Refreshing network info cache for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:55:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840889784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.279 257641 DEBUG oslo_concurrency.processutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/disk.config 854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.279 257641 INFO nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Deleting local config drive /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96/disk.config because it was imported into RBD.#033[00m
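[Editor's note] Having built the ISO locally, Nova imports it into the vms pool and deletes the on-disk copy, per the lines above. The same pattern sketched with oslo.concurrency again; every argument is taken from the logged command:

```python
import os
from oslo_concurrency import processutils

base = '/var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96'
local = os.path.join(base, 'disk.config')

# Import the local ISO as a format-2 RBD image, then drop the local
# copy, mirroring the logged 'rbd import ... --image-format=2' call.
processutils.execute(
    'rbd', 'import', '--pool', 'vms', local,
    '854cbdb6-6ca7-4fa5-8105-3d48b2926d96_disk.config',
    '--image-format=2', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf',
)
os.unlink(local)
```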
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.290 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
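[Editor's note] The resource-tracker audit started at 07:55:07.817 shells out to `ceph df --format=json` (completed above in 0.474 s). A sketch of reducing that output to the totals the tracker cares about — the 'stats' field names come from ceph's JSON format, but treat them as assumptions if your release differs:

```python
import json
from oslo_concurrency import processutils

out, _ = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
)
df = json.loads(out)

# Cluster-wide totals; key names assumed from ceph's JSON output.
total = df['stats']['total_bytes']
avail = df['stats']['total_avail_bytes']
print(f'{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB')
```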
Nov 29 02:55:08 np0005539550 kernel: tapc6266046-dc: entered promiscuous mode
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.3369] manager: (tapc6266046-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 02:55:08 np0005539550 systemd-udevd[276128]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.341 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00111|binding|INFO|Claiming lport c6266046-dc59-475f-a0f5-391ba640d669 for this chassis.
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00112|binding|INFO|c6266046-dc59-475f-a0f5-391ba640d669: Claiming fa:16:3e:0a:ad:ee 10.100.0.4
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.348 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:ad:ee 10.100.0.4'], port_security=['fa:16:3e:0a:ad:ee 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '854cbdb6-6ca7-4fa5-8105-3d48b2926d96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c6266046-dc59-475f-a0f5-391ba640d669) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.3505] device (tapc6266046-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.3515] device (tapc6266046-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.352 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c6266046-dc59-475f-a0f5-391ba640d669 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b bound to our chassis#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.354 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.353 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402908.352444, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.354 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00113|binding|INFO|Setting lport c6266046-dc59-475f-a0f5-391ba640d669 ovn-installed in OVS
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00114|binding|INFO|Setting lport c6266046-dc59-475f-a0f5-391ba640d669 up in Southbound
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.368 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8afc0fac-1e19-4151-94cb-3fccf3933368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.370 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62e5f2a3-c1 in ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
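[Editor's note] Provisioning metadata for the datapath means creating a veth pair with one end (tap62e5f2a3-c1) inside the ovnmeta-… namespace; the privsep traffic around this line is neutron's ip_lib doing exactly that. A pyroute2 sketch of the same operation — pyroute2 is what ip_lib wraps, but this code is illustrative, not the agent's:

```python
from pyroute2 import IPRoute, netns

NS = 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b'
if NS not in netns.listnetns():
    netns.create(NS)  # the agent ensures the namespace exists first

with IPRoute() as ip:
    # Create the pair in the root namespace...
    ip.link('add', ifname='tap62e5f2a3-c0', kind='veth',
            peer='tap62e5f2a3-c1')
    # ...then push the c1 end into the metadata namespace.
    idx = ip.link_lookup(ifname='tap62e5f2a3-c1')[0]
    ip.link('set', index=idx, net_ns_fd=NS)
    # Bring the root-namespace end up.
    ip.link('set', index=ip.link_lookup(ifname='tap62e5f2a3-c0')[0],
            state='up')
```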
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.370 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.372 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.372 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62e5f2a3-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.372 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2935759a-0f3a-4350-b9db-518903bca62d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.374 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c346d6fa-623f-40dd-a846-b25919cfd55f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 systemd-machined[216673]: New machine qemu-12-instance-00000017.
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.381 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:08 np0005539550 systemd[1]: Started Virtual Machine qemu-12-instance-00000017.
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.391 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8bfe11-4b3c-454f-9861-3a4e190177f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.399 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.420 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[91d1b46c-fe00-4a89-ac7d-ade49b6fcee5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.427 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.444 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402908.444241, 7067462a-37a6-458e-b96c-76adcea5fdfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.444 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.454 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[42224a50-2ec9-409c-af85-2d94db7969e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.4618] manager: (tap62e5f2a3-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.460 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[008f01b6-4724-41df-90fe-b7fcdf36b1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.470 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.478 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.497 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2870b402-c0cf-4ea8-8967-468c85296548]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.500 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[95277e69-e1f7-48ed-9185-815a0190c001]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.501 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.5236] device (tap62e5f2a3-c0): carrier: link connected
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.537 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[01e2357b-9bfd-4820-b12f-730d53647ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.541 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.541 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.545 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.545 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.549 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.549 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.553 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.554 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.557 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[929a78e0-d972-4242-9758-b15a5013b4be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595618, 'reachable_time': 41041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276293, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.560 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.560 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.564 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.564 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.575 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b4346954-0b05-4728-880b-6b36f64c4cf9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe93:9d00'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 595618, 'tstamp': 595618}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276294, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.594 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a51184a8-9c90-4024-8501-ce695d1629ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62e5f2a3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:93:9d:00'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595618, 'reachable_time': 41041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276295, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:08 np0005539550 kernel: tap5d7fa9ca-8f (unregistering): left promiscuous mode
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.6267] device (tap5d7fa9ca-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.630 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00115|binding|INFO|Releasing lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 from this chassis (sb_readonly=0)
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00116|binding|INFO|Setting lport 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 down in Southbound
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00117|binding|INFO|Removing iface tap5d7fa9ca-8f ovn-installed in OVS
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.639 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.642 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7a35e2e7-6540-460c-aafd-53a06b80481f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.645 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:bf:87 10.100.0.5'], port_security=['fa:16:3e:17:bf:87 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '479f969f-dbf7-4938-8979-b8532eb113f6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7067462a-37a6-458e-b96c-76adcea5fdfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '18', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 710 MiB data, 709 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.8 MiB/s wr, 197 op/s
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.698 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:08 np0005539550 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 29 02:55:08 np0005539550 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000014.scope: Consumed 2.790s CPU time.
Nov 29 02:55:08 np0005539550 systemd-machined[216673]: Machine qemu-10-instance-00000014 terminated.
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.750 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a965490f-a42b-4e18-a75e-1dac0362fb3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.752 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.752 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.753 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62e5f2a3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.754 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.7552] manager: (tap62e5f2a3-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 02:55:08 np0005539550 kernel: tap62e5f2a3-c0: entered promiscuous mode
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.762 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.764 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62e5f2a3-c0, col_values=(('external_ids', {'iface-id': 'acbe1c54-69e5-4789-8e0b-6d1b69eab5e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.765 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:08 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:08Z|00118|binding|INFO|Releasing lport acbe1c54-69e5-4789-8e0b-6d1b69eab5e0 from this chassis (sb_readonly=0)
Nov 29 02:55:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:08.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:08 np0005539550 virtqemud[256287]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-3ef07b78-0409-49cf-a941-8a19b02dd939: No such file or directory
Nov 29 02:55:08 np0005539550 virtqemud[256287]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-3ef07b78-0409-49cf-a941-8a19b02dd939: No such file or directory
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.789 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.791 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.792 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[db54ed80-c0d6-4494-a9fd-931051b8d625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.792 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.pid.haproxy
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 02:55:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:08.793 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'env', 'PROCESS_TAG=haproxy-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 02:55:08 np0005539550 NetworkManager[49039]: <info>  [1764402908.7943] manager: (tap5d7fa9ca-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.811 257641 DEBUG nova.virt.libvirt.guest [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.812 257641 INFO nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration operation has completed
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.812 257641 INFO nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] _post_live_migration() is started..
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.813 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.814 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.814 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.944 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402908.9442544, 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.945 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] VM Started (Lifecycle Event)
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.969 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.973 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402908.9447055, 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.974 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] VM Paused (Lifecycle Event)
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.984 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.986 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3883MB free_disk=20.71410369873047GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.986 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.986 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:08 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.994 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:08.999 257641 DEBUG nova.compute.manager [req-acfc5390-2f98-4e80-91df-d3ce1c442745 req-fbdd0629-4235-4a10-ae2b-ab4b2f610520 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.000 257641 DEBUG oslo_concurrency.lockutils [req-acfc5390-2f98-4e80-91df-d3ce1c442745 req-fbdd0629-4235-4a10-ae2b-ab4b2f610520 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.000 257641 DEBUG oslo_concurrency.lockutils [req-acfc5390-2f98-4e80-91df-d3ce1c442745 req-fbdd0629-4235-4a10-ae2b-ab4b2f610520 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.000 257641 DEBUG oslo_concurrency.lockutils [req-acfc5390-2f98-4e80-91df-d3ce1c442745 req-fbdd0629-4235-4a10-ae2b-ab4b2f610520 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.001 257641 DEBUG nova.compute.manager [req-acfc5390-2f98-4e80-91df-d3ce1c442745 req-fbdd0629-4235-4a10-ae2b-ab4b2f610520 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.001 257641 DEBUG nova.compute.manager [req-acfc5390-2f98-4e80-91df-d3ce1c442745 req-fbdd0629-4235-4a10-ae2b-ab4b2f610520 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.004 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.049 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.113 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Migration for instance 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.152 257641 INFO nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating resource usage from migration afbc71a1-4ace-4109-aba9-8332d00626a1
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.153 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Starting to track incoming migration afbc71a1-4ace-4109-aba9-8332d00626a1 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.171 257641 INFO nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating resource usage from migration 2ab88ec4-0df8-4e1f-a957-0537069aa961
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.213 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 2bb12f77-8958-446b-813d-a59f149a549b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.213 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 9bef976c-2981-4d19-aa60-8a550b7093ca actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.213 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 7a5f1bd4-70c2-4571-bcd7-070a08c471ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:55:09 np0005539550 podman[276389]: 2025-11-29 07:55:09.224006966 +0000 UTC m=+0.074048973 container create 6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.238 257641 WARNING nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}.
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.239 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.239 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Migration 2ab88ec4-0df8-4e1f-a957-0537069aa961 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.239 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.239 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1280MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:55:09 np0005539550 podman[276389]: 2025-11-29 07:55:09.176270018 +0000 UTC m=+0.026312055 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 02:55:09 np0005539550 systemd[1]: Started libpod-conmon-6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb.scope.
Nov 29 02:55:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:55:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6face1aabe5620039d3ed97d608db2a39a78cfbe160ec772f92d8e97f7892f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 02:55:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:09.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:09 np0005539550 podman[276389]: 2025-11-29 07:55:09.334802982 +0000 UTC m=+0.184845009 container init 6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 02:55:09 np0005539550 podman[276389]: 2025-11-29 07:55:09.340963166 +0000 UTC m=+0.191005173 container start 6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 02:55:09 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [NOTICE]   (276409) : New worker (276411) forked
Nov 29 02:55:09 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [NOTICE]   (276409) : Loading success.
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.409 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.451 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d unbound from our chassis
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.454 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.475 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ff1b60-0185-4802-9890-f8e672bfb8cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.518 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d7c52d71-1f22-40b9-be14-f1d7da5a80ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.522 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e69f4f-b451-43bf-b61b-8bdaf101533a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.550 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[cc65220a-e89b-4527-af89-c345032fac53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.567 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[01dfc2c3-5d20-45d6-899a-ace7897345bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7a06a21a-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:69:44:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 7, 'rx_bytes': 1876, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 7, 'rx_bytes': 1876, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588372, 'reachable_time': 22423, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276445, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.585 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[21660dc4-a9f4-451f-ac5f-cea04dafd2f9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7a06a21a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588383, 'tstamp': 588383}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276446, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7a06a21a-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588386, 'tstamp': 588386}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276446, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.587 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.588 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.594 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.595 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a06a21a-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.595 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.595 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7a06a21a-b0, col_values=(('external_ids', {'iface-id': '2b822f56-587d-4c36-9c9a-d54b62b2616c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.595 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:55:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/850446993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.880 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.887 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.906 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:55:09 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:09Z|00119|binding|INFO|Claiming lport d1330295-51bc-4e64-a620-b63a6d8777fb for this chassis.
Nov 29 02:55:09 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:09Z|00120|binding|INFO|d1330295-51bc-4e64-a620-b63a6d8777fb: Claiming fa:16:3e:c2:bc:90 10.100.0.12
Nov 29 02:55:09 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:09Z|00121|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb up in Southbound
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.937 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:55:09 np0005539550 nova_compute[257631]: 2025-11-29 07:55:09.937 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.944 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:bc:90 10.100.0.12'], port_security=['fa:16:3e:c2:bc:90 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '10', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d1330295-51bc-4e64-a620-b63a6d8777fb) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.945 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d1330295-51bc-4e64-a620-b63a6d8777fb in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 bound to our chassis
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.947 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1
Nov 29 02:55:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:09.964 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d3378294-0e2f-4931-81ff-4427183b0dbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.000 257641 DEBUG nova.compute.manager [req-0729f542-7842-40ed-892f-3144c8c54017 req-1c5eca7c-ce45-473a-ac33-2b0097619fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.000 257641 DEBUG oslo_concurrency.lockutils [req-0729f542-7842-40ed-892f-3144c8c54017 req-1c5eca7c-ce45-473a-ac33-2b0097619fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.001 257641 DEBUG oslo_concurrency.lockutils [req-0729f542-7842-40ed-892f-3144c8c54017 req-1c5eca7c-ce45-473a-ac33-2b0097619fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.001 257641 DEBUG oslo_concurrency.lockutils [req-0729f542-7842-40ed-892f-3144c8c54017 req-1c5eca7c-ce45-473a-ac33-2b0097619fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.001 257641 DEBUG nova.compute.manager [req-0729f542-7842-40ed-892f-3144c8c54017 req-1c5eca7c-ce45-473a-ac33-2b0097619fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.001 257641 DEBUG nova.compute.manager [req-0729f542-7842-40ed-892f-3144c8c54017 req-1c5eca7c-ce45-473a-ac33-2b0097619fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-unplugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.002 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[226e312a-58e9-46a9-92c8-2004976183d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.007 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c07d009c-c853-42ba-b715-7993b33df1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.038 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ad56678a-6895-4311-9790-c5136c0e229c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.056 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a2612005-e967-4cd8-b10d-ec5d9b24b673]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 5, 'rx_bytes': 868, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 5, 'rx_bytes': 868, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588187, 'reachable_time': 18122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276456, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.058 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.059 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.059 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.060 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.060 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.060 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.066 257641 DEBUG nova.network.neutron [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updated VIF entry in instance network info cache for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.067 257641 DEBUG nova.network.neutron [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Updating instance_info_cache with network_info: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.071 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb137f8-0461-4434-b5a8-586fd88c9d58]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad69a0f4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588199, 'tstamp': 588199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276457, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapad69a0f4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588202, 'tstamp': 588202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276457, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.073 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.074 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.079 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.079 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad69a0f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.079 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.080 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad69a0f4-00, col_values=(('external_ids', {'iface-id': '7ffec560-b868-40db-af88-b0deaaa81f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:10.080 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.106 257641 DEBUG oslo_concurrency.lockutils [req-08a721e5-620d-440a-a381-235b7445f989 req-5d434953-e3c7-47c0-9357-b79f3da5d884 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7067462a-37a6-458e-b96c-76adcea5fdfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.137 257641 INFO nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Post operation of migration started
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.262 257641 DEBUG nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received event network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.263 257641 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.263 257641 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.264 257641 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.264 257641 DEBUG nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Processing event network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.264 257641 DEBUG nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received event network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.265 257641 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.265 257641 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.265 257641 DEBUG oslo_concurrency.lockutils [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.265 257641 DEBUG nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] No waiting events found dispatching network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.266 257641 WARNING nova.compute.manager [req-dcad4924-3c5a-4785-ad18-4bbe7aa19cda req-69a480e6-bbf0-4fad-b4dd-7cc50f6797c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received unexpected event network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 for instance with vm_state building and task_state spawning.
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.266 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.270 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402910.2701917, 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.271 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] VM Resumed (Lifecycle Event)
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.272 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.276 257641 INFO nova.virt.libvirt.driver [-] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Instance spawned successfully.
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.276 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.294 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.301 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.304 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.305 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.305 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.305 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.306 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.306 257641 DEBUG nova.virt.libvirt.driver [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.334 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.391 257641 INFO nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Took 10.06 seconds to spawn the instance on the hypervisor.
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.391 257641 DEBUG nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.467 257641 INFO nova.compute.manager [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Took 11.27 seconds to build instance.
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.483 257641 DEBUG oslo_concurrency.lockutils [None req-7f26415b-f97e-4b40-acc9-664cfdeb594b 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.690 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 722 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 762 KiB/s rd, 6.1 MiB/s wr, 181 op/s
Nov 29 02:55:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:10.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:10 np0005539550 nova_compute[257631]: 2025-11-29 07:55:10.890 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.104 257641 DEBUG nova.compute.manager [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.105 257641 DEBUG oslo_concurrency.lockutils [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.105 257641 DEBUG oslo_concurrency.lockutils [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.105 257641 DEBUG oslo_concurrency.lockutils [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.106 257641 DEBUG nova.compute.manager [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.106 257641 WARNING nova.compute.manager [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.106 257641 DEBUG nova.compute.manager [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.106 257641 DEBUG oslo_concurrency.lockutils [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.107 257641 DEBUG oslo_concurrency.lockutils [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.107 257641 DEBUG oslo_concurrency.lockutils [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.107 257641 DEBUG nova.compute.manager [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.107 257641 WARNING nova.compute.manager [req-71d51c73-eb6d-49c4-9e1a-f6a51a6d5424 req-fe45dbde-469a-49de-a41a-952de6f4aa8b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.306 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.307 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.307 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.307 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.307 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.309 257641 INFO nova.compute.manager [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Terminating instance
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.309 257641 DEBUG nova.compute.manager [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:55:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:11.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.360 257641 DEBUG nova.network.neutron [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Activated binding for port 5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.361 257641 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.362 257641 DEBUG nova.virt.libvirt.vif [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1301305546',display_name='tempest-LiveMigrationTest-server-1301305546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1301305546',id=20,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:31Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-7mt4lwbv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:59Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=7067462a-37a6-458e-b96c-76adcea5fdfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.362 257641 DEBUG nova.network.os_vif_util [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converting VIF {"id": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "address": "fa:16:3e:17:bf:87", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5d7fa9ca-8f", "ovs_interfaceid": "5d7fa9ca-8f51-4047-a121-6c4534fc5ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.363 257641 DEBUG nova.network.os_vif_util [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.364 257641 DEBUG os_vif [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.367 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.368 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.368 257641 DEBUG nova.network.neutron [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.370 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.370 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d7fa9ca-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.372 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.376 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.379 257641 INFO os_vif [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:bf:87,bridge_name='br-int',has_traffic_filtering=True,id=5d7fa9ca-8f51-4047-a121-6c4534fc5ae6,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5d7fa9ca-8f')
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.380 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.380 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.381 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.381 257641 DEBUG nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.382 257641 INFO nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deleting instance files /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa_del
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.382 257641 INFO nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Deletion of /var/lib/nova/instances/7067462a-37a6-458e-b96c-76adcea5fdfa_del complete
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.545 257641 DEBUG oslo_concurrency.lockutils [None req-c8afc57f-4d1c-49e5-b139-d18d0a9409a9 e6de179afa044479a7d74c611f9b5398 0fa19cbd18c2485a99a4bcf4d81ee542 - - default default] Acquiring lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.545 257641 DEBUG oslo_concurrency.lockutils [None req-c8afc57f-4d1c-49e5-b139-d18d0a9409a9 e6de179afa044479a7d74c611f9b5398 0fa19cbd18c2485a99a4bcf4d81ee542 - - default default] Acquired lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.546 257641 DEBUG nova.network.neutron [None req-c8afc57f-4d1c-49e5-b139-d18d0a9409a9 e6de179afa044479a7d74c611f9b5398 0fa19cbd18c2485a99a4bcf4d81ee542 - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:55:11 np0005539550 kernel: tap5f3dee32-73 (unregistering): left promiscuous mode
Nov 29 02:55:11 np0005539550 NetworkManager[49039]: <info>  [1764402911.5999] device (tap5f3dee32-73): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:11 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:11Z|00122|binding|INFO|Releasing lport 5f3dee32-7330-47f1-a98f-0647728e6e29 from this chassis (sb_readonly=0)
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.613 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:11 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:11Z|00123|binding|INFO|Setting lport 5f3dee32-7330-47f1-a98f-0647728e6e29 down in Southbound
Nov 29 02:55:11 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:11Z|00124|binding|INFO|Removing iface tap5f3dee32-73 ovn-installed in OVS
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.618 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:11.622 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:de:26 10.100.0.8'], port_security=['fa:16:3e:48:de:26 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7a5f1bd4-70c2-4571-bcd7-070a08c471ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9c2bff0c0a24bebb1149177689b64d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '58029651-e55c-435e-9816-e1e89148ab8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=965864e2-b803-4413-bf33-a40ccb124236, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5f3dee32-7330-47f1-a98f-0647728e6e29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:11.623 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5f3dee32-7330-47f1-a98f-0647728e6e29 in datapath 3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3 unbound from our chassis#033[00m
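
[editor's note] The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's event machinery: the metadata agent registers RowEvent subclasses against the southbound Port_Binding table and gets called back on matching rows. A skeletal version (registration with the IDL connection omitted; not neutron's exact class):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions); matching any update to a
            # Port_Binding row. ovsdbapp logs "Matched UPDATE: ..." itself.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries only the previous values of changed columns,
            # which is why the log shows old=Port_Binding(up=[True], ...).
            print('binding changed for', row.logical_port)
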
Nov 29 02:55:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:11.625 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:55:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:11.626 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[34c989d0-33a3-4cc7-9bf2-0da893bb57f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:11.627 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3 namespace which is not needed anymore#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:11 np0005539550 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 29 02:55:11 np0005539550 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000015.scope: Consumed 15.207s CPU time.
Nov 29 02:55:11 np0005539550 systemd-machined[216673]: Machine qemu-9-instance-00000015 terminated.
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.732 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.742 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.748 257641 INFO nova.virt.libvirt.driver [-] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Instance destroyed successfully.#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.748 257641 DEBUG nova.objects.instance [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lazy-loading 'resources' on Instance uuid 7a5f1bd4-70c2-4571-bcd7-070a08c471ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.813 257641 DEBUG nova.virt.libvirt.vif [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-1481235728',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-1481235728',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-148123572',id=21,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c9c2bff0c0a24bebb1149177689b64d7',ramdisk_id='',reservation_id='r-f426gwlv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-2080071708',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-2080071708-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:54:47Z,user_data=None,user_id='2cc9de54e1764131aa2748a7f9a1df6d',uuid=7a5f1bd4-70c2-4571-bcd7-070a08c471ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.814 257641 DEBUG nova.network.os_vif_util [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Converting VIF {"id": "5f3dee32-7330-47f1-a98f-0647728e6e29", "address": "fa:16:3e:48:de:26", "network": {"id": "3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-436332096-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9c2bff0c0a24bebb1149177689b64d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5f3dee32-73", "ovs_interfaceid": "5f3dee32-7330-47f1-a98f-0647728e6e29", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.814 257641 DEBUG nova.network.os_vif_util [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:de:26,bridge_name='br-int',has_traffic_filtering=True,id=5f3dee32-7330-47f1-a98f-0647728e6e29,network=Network(3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f3dee32-73') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.815 257641 DEBUG os_vif [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:de:26,bridge_name='br-int',has_traffic_filtering=True,id=5f3dee32-7330-47f1-a98f-0647728e6e29,network=Network(3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f3dee32-73') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.816 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.816 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f3dee32-73, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
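
[editor's note] The DelPortCommand transaction above is ovsdbapp's vsctl-style API. A standalone sketch of the same call against the local OVS database; the socket path and timeout here are assumptions, not read from this host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Same command as logged: DelPortCommand(port=..., bridge=..., if_exists)
    api.del_port('tap5f3dee32-73', bridge='br-int', if_exists=True).execute(
        check_error=True)
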
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.818 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.819 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.821 257641 INFO os_vif [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:de:26,bridge_name='br-int',has_traffic_filtering=True,id=5f3dee32-7330-47f1-a98f-0647728e6e29,network=Network(3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5f3dee32-73')#033[00m
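
[editor's note] The unplug itself goes through the os-vif library, using the converted VIFOpenVSwitch object shown in the nova_to_osvif_vif lines above. Roughly, with the field set trimmed to what this log shows:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()   # loads the 'ovs' plugin used below
    v = vif.VIFOpenVSwitch(
        id='5f3dee32-7330-47f1-a98f-0647728e6e29',
        address='fa:16:3e:48:de:26',
        bridge_name='br-int',
        vif_name='tap5f3dee32-73',
        network=network.Network(id='3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3',
                                bridge='br-int'))
    inst = instance_info.InstanceInfo(
        uuid='7a5f1bd4-70c2-4571-bcd7-070a08c471ae',
        name='instance-00000015')
    os_vif.unplug(v, inst)   # on success, logs "Successfully unplugged vif"
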
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.984 257641 DEBUG nova.compute.manager [req-2054fd8b-5a41-417e-b8d7-83ca10f8cdea req-1f0f9729-170d-4c49-ba07-dca87942fb06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-vif-unplugged-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.984 257641 DEBUG oslo_concurrency.lockutils [req-2054fd8b-5a41-417e-b8d7-83ca10f8cdea req-1f0f9729-170d-4c49-ba07-dca87942fb06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.985 257641 DEBUG oslo_concurrency.lockutils [req-2054fd8b-5a41-417e-b8d7-83ca10f8cdea req-1f0f9729-170d-4c49-ba07-dca87942fb06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.985 257641 DEBUG oslo_concurrency.lockutils [req-2054fd8b-5a41-417e-b8d7-83ca10f8cdea req-1f0f9729-170d-4c49-ba07-dca87942fb06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.985 257641 DEBUG nova.compute.manager [req-2054fd8b-5a41-417e-b8d7-83ca10f8cdea req-1f0f9729-170d-4c49-ba07-dca87942fb06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] No waiting events found dispatching network-vif-unplugged-5f3dee32-7330-47f1-a98f-0647728e6e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:11 np0005539550 nova_compute[257631]: 2025-11-29 07:55:11.985 257641 DEBUG nova.compute.manager [req-2054fd8b-5a41-417e-b8d7-83ca10f8cdea req-1f0f9729-170d-4c49-ba07-dca87942fb06 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-vif-unplugged-5f3dee32-7330-47f1-a98f-0647728e6e29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
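
[editor's note] "No waiting events found dispatching ..." means no task had registered interest in this neutron event before it arrived; when one has (as live migration does for vif plugging), the dispatch wakes it instead. The registry behind pop_instance_event is essentially a per-instance dict of threading events; a toy model of the mechanism, not Nova's actual code:

    import threading
    from collections import defaultdict

    _waiters = defaultdict(dict)     # instance uuid -> {event name: Event}

    def prepare_for_event(uuid, name):
        ev = threading.Event()
        _waiters[uuid][name] = ev
        return ev                    # a task blocks on ev.wait(timeout)

    def pop_instance_event(uuid, name):
        ev = _waiters[uuid].pop(name, None)
        if ev is None:               # nobody was waiting -> the log line
            print('No waiting events found dispatching', name)
            return
        ev.set()                     # wakes the blocked task
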
Nov 29 02:55:11 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [NOTICE]   (274789) : haproxy version is 2.8.14-c23fe91
Nov 29 02:55:11 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [NOTICE]   (274789) : path to executable is /usr/sbin/haproxy
Nov 29 02:55:11 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [WARNING]  (274789) : Exiting Master process...
Nov 29 02:55:11 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [WARNING]  (274789) : Exiting Master process...
Nov 29 02:55:11 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [ALERT]    (274789) : Current worker (274791) exited with code 143 (Terminated)
Nov 29 02:55:11 np0005539550 neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3[274784]: [WARNING]  (274789) : All workers exited. Exiting... (0)
Nov 29 02:55:11 np0005539550 systemd[1]: libpod-f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae.scope: Deactivated successfully.
Nov 29 02:55:12 np0005539550 podman[276483]: 2025-11-29 07:55:12.00036227 +0000 UTC m=+0.276118652 container died f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.110 257641 DEBUG nova.compute.manager [req-16d1c009-37b7-4549-8c9b-00ce1ff62618 req-324b1c9a-e404-4334-a2cd-4dd75468d682 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.111 257641 DEBUG oslo_concurrency.lockutils [req-16d1c009-37b7-4549-8c9b-00ce1ff62618 req-324b1c9a-e404-4334-a2cd-4dd75468d682 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.111 257641 DEBUG oslo_concurrency.lockutils [req-16d1c009-37b7-4549-8c9b-00ce1ff62618 req-324b1c9a-e404-4334-a2cd-4dd75468d682 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.111 257641 DEBUG oslo_concurrency.lockutils [req-16d1c009-37b7-4549-8c9b-00ce1ff62618 req-324b1c9a-e404-4334-a2cd-4dd75468d682 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.111 257641 DEBUG nova.compute.manager [req-16d1c009-37b7-4549-8c9b-00ce1ff62618 req-324b1c9a-e404-4334-a2cd-4dd75468d682 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.111 257641 WARNING nova.compute.manager [req-16d1c009-37b7-4549-8c9b-00ce1ff62618 req-324b1c9a-e404-4334-a2cd-4dd75468d682 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:55:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2912e99ed2baa07e5b60d341407c066b42a4594ad6fb26b55666a3ec6fa3a7d7-merged.mount: Deactivated successfully.
Nov 29 02:55:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae-userdata-shm.mount: Deactivated successfully.
Nov 29 02:55:12 np0005539550 podman[276483]: 2025-11-29 07:55:12.672507532 +0000 UTC m=+0.948263874 container cleanup f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:12 np0005539550 systemd[1]: libpod-conmon-f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae.scope: Deactivated successfully.
Nov 29 02:55:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 722 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 6.0 MiB/s wr, 169 op/s
Nov 29 02:55:12 np0005539550 podman[276540]: 2025-11-29 07:55:12.753177491 +0000 UTC m=+0.051457808 container remove f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.760 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1947068f-aebc-426b-9a88-32379f5478ff]: (4, ('Sat Nov 29 07:55:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3 (f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae)\nf6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae\nSat Nov 29 07:55:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3 (f6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae)\nf6c6fbb446b5025cd8bb6cae5f8d5c14ae8109da6fec350c5b3e73f42b4485ae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.762 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[928bfaef-283b-4c91-9bd5-63efee5ab3e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.763 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b1c62d9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.765 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:12 np0005539550 kernel: tap3b1c62d9-c0: left promiscuous mode
Nov 29 02:55:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:12.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
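
[editor's note] The recurring radosgw "beast" records (anonymous HEAD / from 192.168.122.100/102 every couple of seconds) are most likely load-balancer health probes rather than user traffic. A quick parser for this access-log shape, regex written against the line above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+) .* latency=(?P<lat>\S+)')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:07:55:12.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.match(line)
    print(m.group('ip'), m.group('status'), m.group('lat'))
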
Nov 29 02:55:12 np0005539550 nova_compute[257631]: 2025-11-29 07:55:12.783 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.786 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f25975ae-e44a-4164-b9c1-339b96649dde]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.798 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06ca1ca5-a0e9-477c-88de-7bf458787c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.799 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0336ce97-742f-46df-a03f-c7750cbc14f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.815 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[68936635-52d1-4773-80d5-1bf7717af8a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593484, 'reachable_time': 30388, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276556, 'error': None, 'target': 'ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:12 np0005539550 systemd[1]: run-netns-ovnmeta\x2d3b1c62d9\x2dc3d9\x2d4fe4\x2da38b\x2d0261211c0ab3.mount: Deactivated successfully.
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.821 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:55:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:12.822 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[09e8e9d5-0329-4f6b-8ceb-8d69a9c055c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
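
[editor's note] remove_netns at ip_lib.py:607 is a privsep-wrapped helper; underneath, neutron deletes the namespace via pyroute2. The direct (root-only) equivalent, with the privsep plumbing omitted:

    from pyroute2 import netns

    ns = 'ovnmeta-3b1c62d9-c3d9-4fe4-a38b-0261211c0ab3'
    if ns in netns.listnetns():
        netns.remove(ns)   # drops the bind mount under /run/netns, which is
                           # the run-netns-... mount unit systemd deactivates
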
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.247 257641 DEBUG nova.compute.manager [req-1b698ecb-fa51-41d0-9241-f6f7092f4028 req-cd87502a-2101-45af-9166-c8cc6b438531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.247 257641 DEBUG oslo_concurrency.lockutils [req-1b698ecb-fa51-41d0-9241-f6f7092f4028 req-cd87502a-2101-45af-9166-c8cc6b438531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.248 257641 DEBUG oslo_concurrency.lockutils [req-1b698ecb-fa51-41d0-9241-f6f7092f4028 req-cd87502a-2101-45af-9166-c8cc6b438531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.248 257641 DEBUG oslo_concurrency.lockutils [req-1b698ecb-fa51-41d0-9241-f6f7092f4028 req-cd87502a-2101-45af-9166-c8cc6b438531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.248 257641 DEBUG nova.compute.manager [req-1b698ecb-fa51-41d0-9241-f6f7092f4028 req-cd87502a-2101-45af-9166-c8cc6b438531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] No waiting events found dispatching network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.249 257641 WARNING nova.compute.manager [req-1b698ecb-fa51-41d0-9241-f6f7092f4028 req-cd87502a-2101-45af-9166-c8cc6b438531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Received unexpected event network-vif-plugged-5d7fa9ca-8f51-4047-a121-6c4534fc5ae6 for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:55:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:13.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.913 257641 DEBUG nova.network.neutron [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.952 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.973 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.973 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.974 257641 DEBUG oslo_concurrency.lockutils [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:13 np0005539550 nova_compute[257631]: 2025-11-29 07:55:13.978 257641 INFO nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Nov 29 02:55:13 np0005539550 virtqemud[256287]: Domain id=11 name='instance-00000013' uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 is tainted: custom-monitor
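
[editor's note] Nova retries announce-self up to three times after a live migration so the guest re-advertises its MAC (RARP/GARP) on the destination; sending any raw QMP command through the monitor is what marks the domain "tainted: custom-monitor". A direct equivalent via the libvirt QEMU bindings (connection URI assumed; domain name from this log):

    import libvirt
    import libvirt_qemu

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000013')
    # Raw QMP command, same as the logged "Sending announce-self" attempts.
    libvirt_qemu.qemuMonitorCommand(
        dom, '{"execute": "announce-self"}',
        libvirt_qemu.VIR_DOMAIN_QEMU_MONITOR_COMMAND_DEFAULT)
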
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.124 257641 DEBUG nova.network.neutron [None req-c8afc57f-4d1c-49e5-b139-d18d0a9409a9 e6de179afa044479a7d74c611f9b5398 0fa19cbd18c2485a99a4bcf4d81ee542 - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Updating instance_info_cache with network_info: [{"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.157 257641 DEBUG oslo_concurrency.lockutils [None req-c8afc57f-4d1c-49e5-b139-d18d0a9409a9 e6de179afa044479a7d74c611f9b5398 0fa19cbd18c2485a99a4bcf4d81ee542 - - default default] Releasing lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.158 257641 DEBUG nova.compute.manager [None req-c8afc57f-4d1c-49e5-b139-d18d0a9409a9 e6de179afa044479a7d74c611f9b5398 0fa19cbd18c2485a99a4bcf4d81ee542 - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.158 257641 DEBUG nova.compute.manager [None req-c8afc57f-4d1c-49e5-b139-d18d0a9409a9 e6de179afa044479a7d74c611f9b5398 0fa19cbd18c2485a99a4bcf4d81ee542 - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] network_info to inject: |[{"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 29 02:55:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 722 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 6.1 MiB/s wr, 211 op/s
Nov 29 02:55:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.912 257641 DEBUG nova.compute.manager [req-91680ee0-7e99-4fa5-ba8e-6ba8e75c3275 req-a64143b4-1efd-4aab-83bf-ce8ff8d62b9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.913 257641 DEBUG oslo_concurrency.lockutils [req-91680ee0-7e99-4fa5-ba8e-6ba8e75c3275 req-a64143b4-1efd-4aab-83bf-ce8ff8d62b9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.913 257641 DEBUG oslo_concurrency.lockutils [req-91680ee0-7e99-4fa5-ba8e-6ba8e75c3275 req-a64143b4-1efd-4aab-83bf-ce8ff8d62b9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.914 257641 DEBUG oslo_concurrency.lockutils [req-91680ee0-7e99-4fa5-ba8e-6ba8e75c3275 req-a64143b4-1efd-4aab-83bf-ce8ff8d62b9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.914 257641 DEBUG nova.compute.manager [req-91680ee0-7e99-4fa5-ba8e-6ba8e75c3275 req-a64143b4-1efd-4aab-83bf-ce8ff8d62b9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] No waiting events found dispatching network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.914 257641 WARNING nova.compute.manager [req-91680ee0-7e99-4fa5-ba8e-6ba8e75c3275 req-a64143b4-1efd-4aab-83bf-ce8ff8d62b9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received unexpected event network-vif-plugged-5f3dee32-7330-47f1-a98f-0647728e6e29 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:55:14 np0005539550 nova_compute[257631]: 2025-11-29 07:55:14.986 257641 INFO nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Nov 29 02:55:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:15.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:15 np0005539550 nova_compute[257631]: 2025-11-29 07:55:15.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:15 np0005539550 nova_compute[257631]: 2025-11-29 07:55:15.992 257641 INFO nova.virt.libvirt.driver [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Nov 29 02:55:15 np0005539550 nova_compute[257631]: 2025-11-29 07:55:15.998 257641 DEBUG nova.compute.manager [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:16 np0005539550 nova_compute[257631]: 2025-11-29 07:55:16.029 257641 DEBUG nova.objects.instance [None req-ba1cf82b-dc55-4cb2-ae87-c947f192fbc7 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:55:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 722 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Nov 29 02:55:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:16 np0005539550 nova_compute[257631]: 2025-11-29 07:55:16.874 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:17.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 722 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Nov 29 02:55:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:18.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:18.928 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:18.928 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:18.929 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:19.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014017206983994045 of space, bias 1.0, pg target 4.205162095198213 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0043215571491717845 of space, bias 1.0, pg target 1.2791809161548482 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:55:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
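
[editor's note] Each pg_autoscaler line above is roughly (usage ratio) x (bias) x (cluster PG budget), then quantized to a power of two with pool minimums and a change threshold applied. From the 'vms' and '.mgr' numbers the budget works out to ~300 PGs, consistent with e.g. mon_target_pg_per_osd=100 across 3 OSDs; the effective multiplier varies slightly per pool in the real implementation, so treat this as a back-of-envelope check only:

    def pg_target(usage_ratio, bias, budget=300):
        # budget ~= mon_target_pg_per_osd * num_osds, inferred from the log
        return usage_ratio * bias * budget

    print(pg_target(2.0538165363856318e-05, 1.0))  # 0.006161... ('.mgr' -> quantized to 1)
    print(pg_target(0.014017206983994045, 1.0))    # 4.205162... ('vms', left at 32)
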
Nov 29 02:55:19 np0005539550 nova_compute[257631]: 2025-11-29 07:55:19.985 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:19 np0005539550 nova_compute[257631]: 2025-11-29 07:55:19.986 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:19 np0005539550 nova_compute[257631]: 2025-11-29 07:55:19.986 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "7067462a-37a6-458e-b96c-76adcea5fdfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.029 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.030 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.030 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
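[annotation] The Acquiring/acquired/"released" triples above are the standard oslo.concurrency fair-lock pattern nova wraps around its resource tracker; lockutils itself emits these DEBUG lines, including the waited/held timings. A minimal sketch of the same pattern (the lock name "compute_resources" is taken from the log; the bodies are placeholders):

```python
from oslo_concurrency import lockutils

# Context-manager form: produces the "Acquiring"/"acquired"/"released"
# DEBUG lines seen above, with waited/held durations.
with lockutils.lock("compute_resources"):
    pass  # placeholder for the critical section (e.g. cache cleanup)

# Decorator form, which is how nova guards its tracker methods:
@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    pass  # hypothetical stand-in for the real tracker method
```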
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.030 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.031 257641 DEBUG oslo_concurrency.processutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/8565438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.534 257641 DEBUG oslo_concurrency.processutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
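[annotation] For RBD-backed storage, nova's resource audit shells out to `ceph df --format=json` (the exact command logged above, which also produces the mon's handle_command/audit lines). A self-contained sketch of the same call, assuming /etc/ceph/ceph.conf and the client.openstack keyring exist on the host:

```python
import json
import subprocess

# Mirrors the command logged by oslo_concurrency.processutils above.
out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

stats = json.loads(out)["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])
```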
Nov 29 02:55:20 np0005539550 podman[276584]: 2025-11-29 07:55:20.652774048 +0000 UTC m=+0.063929882 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:55:20 np0005539550 podman[276585]: 2025-11-29 07:55:20.682016321 +0000 UTC m=+0.091524861 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
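[annotation] The two podman lines above are periodic healthcheck results for the multipathd and ovn_metadata_agent containers; per their config_data, each container mounts a script from /var/lib/openstack/healthchecks and runs /openstack/healthcheck inside the container. The same check can be triggered by hand, e.g.:

```python
import subprocess

# Runs the configured healthcheck once; exit code 0 means "healthy".
# Container name taken from the log line above.
result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
print("healthy" if result.returncode == 0 else "unhealthy")
```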
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.704 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.705 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 722 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 353 KiB/s wr, 114 op/s
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.708 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.708 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.711 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.711 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.714 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.715 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.718 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:55:20 np0005539550 nova_compute[257631]: 2025-11-29 07:55:20.718 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
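[annotation] The repeated "skipping disk ... as it does not have a path" DEBUG lines come from nova walking each guest's libvirt XML during the audit: network-backed disks (here rbd volumes) have no local file path, so they are excluded from local disk accounting. A sketch of that filter, assuming a domain XML string as input:

```python
import xml.etree.ElementTree as ET

def local_disk_paths(domain_xml: str):
    """Yield source paths of disks backed by local files, skipping
    network disks (e.g. rbd volumes) that carry no path attribute."""
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        path = source.get("file") if source is not None else None
        if path is None:
            continue  # the condition the log reports as "skipping disk"
        yield path
```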
Nov 29 02:55:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:20.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.005 257641 WARNING nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.007 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=20.694210052490234GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.007 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.008 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.118 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Migration for instance 7067462a-37a6-458e-b96c-76adcea5fdfa refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.150 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.198 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Instance 2bb12f77-8958-446b-813d-a59f149a549b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.199 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Instance 9bef976c-2981-4d19-aa60-8a550b7093ca actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.200 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Instance 7a5f1bd4-70c2-4571-bcd7-070a08c471ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.200 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Instance 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.200 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Migration 2ab88ec4-0df8-4e1f-a957-0537069aa961 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.310 257641 INFO nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Instance a06b03c0-83a5-4dab-85da-5d9c4df52b22 has allocations against this compute host but is not found in the database.#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.311 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.312 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1152MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:55:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:21.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
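[annotation] The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100/.102, arriving on a fixed cadence, are load-balancer style health probes against radosgw's beast frontend. An equivalent probe; the host matches the access log above, but the port is an assumption since it is not shown in these lines:

```python
import http.client

# Hypothetical endpoint: port 8080 is an assumption; the request shape
# ("HEAD /") matches the beast access-log lines above.
conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # 200 means the gateway is up
conn.close()
```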
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.424 257641 DEBUG oslo_concurrency.processutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.723 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Check if temp file /var/lib/nova/instances/tmp5ro22k8w exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.724 257641 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5ro22k8w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Nov 29 02:55:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1201799573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.881 257641 DEBUG oslo_concurrency.processutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:21 np0005539550 nova_compute[257631]: 2025-11-29 07:55:21.889 257641 DEBUG nova.compute.provider_tree [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:22 np0005539550 nova_compute[257631]: 2025-11-29 07:55:22.077 257641 DEBUG nova.scheduler.client.report [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
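[annotation] The inventory dict above is what nova reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the exact values in that line:

```python
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
# placement admits allocations while used + requested <= this value
```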
Nov 29 02:55:22 np0005539550 nova_compute[257631]: 2025-11-29 07:55:22.201 257641 DEBUG nova.compute.resource_tracker [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:55:22 np0005539550 nova_compute[257631]: 2025-11-29 07:55:22.202 257641 DEBUG oslo_concurrency.lockutils [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:22 np0005539550 nova_compute[257631]: 2025-11-29 07:55:22.207 257641 INFO nova.compute.manager [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Nov 29 02:55:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 722 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 79 op/s
Nov 29 02:55:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:22.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:22 np0005539550 nova_compute[257631]: 2025-11-29 07:55:22.816 257641 INFO nova.scheduler.client.report [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] Deleted allocation for migration 2ab88ec4-0df8-4e1f-a957-0537069aa961#033[00m
Nov 29 02:55:22 np0005539550 nova_compute[257631]: 2025-11-29 07:55:22.817 257641 DEBUG nova.virt.libvirt.driver [None req-bdc70176-4b35-4086-9d36-4ac52e4d995f 8756f93764c14f80808ae58acc73d953 ba14a9d547174e87a330644bcaa101ea - - default default] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Nov 29 02:55:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:23.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:23 np0005539550 nova_compute[257631]: 2025-11-29 07:55:23.811 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402908.809184, 7067462a-37a6-458e-b96c-76adcea5fdfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:23 np0005539550 nova_compute[257631]: 2025-11-29 07:55:23.812 257641 INFO nova.compute.manager [-] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:24 np0005539550 nova_compute[257631]: 2025-11-29 07:55:24.081 257641 DEBUG nova.compute.manager [None req-fd22239c-268c-4def-a380-96e0c3525d51 - - - - - -] [instance: 7067462a-37a6-458e-b96c-76adcea5fdfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:24 np0005539550 nova_compute[257631]: 2025-11-29 07:55:24.409 257641 INFO nova.virt.libvirt.driver [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Deleting instance files /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae_del#033[00m
Nov 29 02:55:24 np0005539550 nova_compute[257631]: 2025-11-29 07:55:24.410 257641 INFO nova.virt.libvirt.driver [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Deletion of /var/lib/nova/instances/7a5f1bd4-70c2-4571-bcd7-070a08c471ae_del complete#033[00m
Nov 29 02:55:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 687 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 86 op/s
Nov 29 02:55:24 np0005539550 nova_compute[257631]: 2025-11-29 07:55:24.710 257641 INFO nova.compute.manager [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Took 13.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:55:24 np0005539550 nova_compute[257631]: 2025-11-29 07:55:24.711 257641 DEBUG oslo.service.loopingcall [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:55:24 np0005539550 nova_compute[257631]: 2025-11-29 07:55:24.711 257641 DEBUG nova.compute.manager [-] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:55:24 np0005539550 nova_compute[257631]: 2025-11-29 07:55:24.711 257641 DEBUG nova.network.neutron [-] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:55:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:24.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:25.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:25 np0005539550 nova_compute[257631]: 2025-11-29 07:55:25.698 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:26 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:26Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0a:ad:ee 10.100.0.4
Nov 29 02:55:26 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:26Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:ad:ee 10.100.0.4
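[annotation] The pinctrl DHCPOFFER/DHCPACK pair above shows OVN answering DHCP natively on the chassis (no dnsmasq involved): ovn-controller intercepts the guest's DISCOVER/REQUEST and replies with the logical port's configured address. A tiny parser for these lines, handy when correlating leases with ports:

```python
import re

line = ("2025-11-29T07:55:26Z|00013|pinctrl(ovn_pinctrl0)|INFO|"
        "DHCPACK fa:16:3e:0a:ad:ee 10.100.0.4")

m = re.search(r"(DHCPOFFER|DHCPACK)\s+([0-9a-f:]{17})\s+(\d+\.\d+\.\d+\.\d+)",
              line)
if m:
    msg_type, mac, ip = m.groups()
    print(msg_type, mac, ip)  # -> DHCPACK fa:16:3e:0a:ad:ee 10.100.0.4
```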
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.378 257641 DEBUG nova.network.neutron [-] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.407 257641 INFO nova.compute.manager [-] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Took 1.70 seconds to deallocate network for instance.#033[00m
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.448 257641 DEBUG nova.compute.manager [req-6baae4eb-7879-45e6-a493-0723645cbbfc req-b1428572-a820-4a56-b2ce-dee517cd0904 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Received event network-vif-deleted-5f3dee32-7330-47f1-a98f-0647728e6e29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.458 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.459 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.580 257641 DEBUG oslo_concurrency.processutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:55:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 656 MiB data, 700 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 839 KiB/s wr, 87 op/s
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.747 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402911.7457595, 7a5f1bd4-70c2-4571-bcd7-070a08c471ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.748 257641 INFO nova.compute.manager [-] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.779 257641 DEBUG nova.compute.manager [None req-e69aa5c4-9c24-4001-8bbe-030f55ba5d40 - - - - - -] [instance: 7a5f1bd4-70c2-4571-bcd7-070a08c471ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:26 np0005539550 nova_compute[257631]: 2025-11-29 07:55:26.880 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4095444136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:27 np0005539550 nova_compute[257631]: 2025-11-29 07:55:27.026 257641 DEBUG oslo_concurrency.processutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:55:27 np0005539550 nova_compute[257631]: 2025-11-29 07:55:27.031 257641 DEBUG nova.compute.provider_tree [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:55:27 np0005539550 nova_compute[257631]: 2025-11-29 07:55:27.050 257641 DEBUG nova.scheduler.client.report [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:55:27 np0005539550 nova_compute[257631]: 2025-11-29 07:55:27.087 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:27 np0005539550 nova_compute[257631]: 2025-11-29 07:55:27.129 257641 INFO nova.scheduler.client.report [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Deleted allocations for instance 7a5f1bd4-70c2-4571-bcd7-070a08c471ae#033[00m
Nov 29 02:55:27 np0005539550 nova_compute[257631]: 2025-11-29 07:55:27.243 257641 DEBUG oslo_concurrency.lockutils [None req-1ff3f615-3072-46e9-9a72-1948384f1857 2cc9de54e1764131aa2748a7f9a1df6d c9c2bff0c0a24bebb1149177689b64d7 - - default default] Lock "7a5f1bd4-70c2-4571-bcd7-070a08c471ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 15.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:27.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:55:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2371513821' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:55:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:28 np0005539550 nova_compute[257631]: 2025-11-29 07:55:28.420 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:28 np0005539550 nova_compute[257631]: 2025-11-29 07:55:28.420 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:28 np0005539550 nova_compute[257631]: 2025-11-29 07:55:28.420 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:28 np0005539550 nova_compute[257631]: 2025-11-29 07:55:28.421 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:28 np0005539550 nova_compute[257631]: 2025-11-29 07:55:28.421 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:28 np0005539550 nova_compute[257631]: 2025-11-29 07:55:28.422 257641 INFO nova.compute.manager [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Terminating instance#033[00m
Nov 29 02:55:28 np0005539550 nova_compute[257631]: 2025-11-29 07:55:28.423 257641 DEBUG nova.compute.manager [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:55:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 664 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Nov 29 02:55:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:28.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:29 np0005539550 kernel: tap230cb1ef-c5 (unregistering): left promiscuous mode
Nov 29 02:55:29 np0005539550 NetworkManager[49039]: <info>  [1764402929.2227] device (tap230cb1ef-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.236 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00125|binding|INFO|Releasing lport 230cb1ef-c551-4666-88fe-e49994b798e9 from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00126|binding|INFO|Setting lport 230cb1ef-c551-4666-88fe-e49994b798e9 down in Southbound
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00127|binding|INFO|Releasing lport 25733cd0-1d42-411e-be69-7bf3a59b5a2a from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00128|binding|INFO|Setting lport 25733cd0-1d42-411e-be69-7bf3a59b5a2a down in Southbound
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00129|binding|INFO|Removing iface tap230cb1ef-c5 ovn-installed in OVS
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.239 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00130|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00131|binding|INFO|Releasing lport 2b822f56-587d-4c36-9c9a-d54b62b2616c from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00132|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00133|binding|INFO|Releasing lport acbe1c54-69e5-4789-8e0b-6d1b69eab5e0 from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00134|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.244 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:fa:cd 10.100.0.12'], port_security=['fa:16:3e:80:fa:cd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1631774297', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2bb12f77-8958-446b-813d-a59f149a549b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1631774297', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '14', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9eb8ff47-0cf8-4776-a959-1d6d6d7f49c2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=230cb1ef-c551-4666-88fe-e49994b798e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.246 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:c7:5e 19.80.0.38'], port_security=['fa:16:3e:58:c7:5e 19.80.0.38'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['230cb1ef-c551-4666-88fe-e49994b798e9'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-815355501', 'neutron:cidrs': '19.80.0.38/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67969051-efe2-48f0-99e2-c96ec0167864', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-815355501', 'neutron:project_id': '1963a097b7694450aa0d7c30b27b38ac', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7cf396e5-2565-40f4-9bc8-f8d0b75eb4c3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=aa62182a-1418-4867-a065-405baf63a28f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=25733cd0-1d42-411e-be69-7bf3a59b5a2a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.247 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 230cb1ef-c551-4666-88fe-e49994b798e9 in datapath 7a06a21a-ba04-4a14-8d62-c931cbbf124d unbound from our chassis#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.249 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7a06a21a-ba04-4a14-8d62-c931cbbf124d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.250 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f700d49-b887-4166-b100-a9c806e61dc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.250 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d namespace which is not needed anymore#033[00m
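[annotation] Once the last VIF on a network unbinds from the chassis, the metadata agent tears down that network's namespace (ovnmeta-<network-uuid>) and the haproxy running inside it, which is exactly the haproxy shutdown and container teardown that follows below. A sketch for listing any such namespaces still present on a host, assuming the `ip` utility is available:

```python
import subprocess

# List kernel network namespaces and keep the OVN metadata ones;
# the "ovnmeta-<network-uuid>" naming convention is taken from the log.
out = subprocess.run(["ip", "netns", "list"],
                     check=True, capture_output=True, text=True).stdout
for line in out.splitlines():
    name = line.split()[0]
    if name.startswith("ovnmeta-"):
        print(name)
```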
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.260 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 29 02:55:29 np0005539550 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000d.scope: Consumed 6.666s CPU time.
Nov 29 02:55:29 np0005539550 systemd-machined[216673]: Machine qemu-7-instance-0000000d terminated.
Nov 29 02:55:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:29.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [NOTICE]   (273893) : haproxy version is 2.8.14-c23fe91
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [NOTICE]   (273893) : path to executable is /usr/sbin/haproxy
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [WARNING]  (273893) : Exiting Master process...
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [WARNING]  (273893) : Exiting Master process...
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [ALERT]    (273893) : Current worker (273895) exited with code 143 (Terminated)
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d[273889]: [WARNING]  (273893) : All workers exited. Exiting... (0)
Nov 29 02:55:29 np0005539550 systemd[1]: libpod-e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed.scope: Deactivated successfully.
Nov 29 02:55:29 np0005539550 podman[276743]: 2025-11-29 07:55:29.402125205 +0000 UTC m=+0.054950862 container died e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:55:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed-userdata-shm.mount: Deactivated successfully.
Nov 29 02:55:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fd2f571f500955a172e8e6b0bae006a4a98b01b33455000d6b3ce954d49497ce-merged.mount: Deactivated successfully.
Nov 29 02:55:29 np0005539550 podman[276743]: 2025-11-29 07:55:29.46993541 +0000 UTC m=+0.122761067 container cleanup e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:55:29 np0005539550 systemd[1]: libpod-conmon-e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed.scope: Deactivated successfully.
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.492 257641 INFO nova.virt.libvirt.driver [-] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Instance destroyed successfully.#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.493 257641 DEBUG nova.objects.instance [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lazy-loading 'resources' on Instance uuid 2bb12f77-8958-446b-813d-a59f149a549b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.518 257641 DEBUG nova.virt.libvirt.vif [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:53:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-2050487278',display_name='tempest-LiveMigrationTest-server-2050487278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-2050487278',id=13,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:37Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1963a097b7694450aa0d7c30b27b38ac',ramdisk_id='',reservation_id='r-f3pegydb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-814240379',owner_user_name='tempest-LiveMigrationTest-814240379-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:59Z,user_data=None,user_id='85f5548e01234fe4ae9b88e998e943f8',uuid=2bb12f77-8958-446b-813d-a59f149a549b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.518 257641 DEBUG nova.network.os_vif_util [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converting VIF {"id": "230cb1ef-c551-4666-88fe-e49994b798e9", "address": "fa:16:3e:80:fa:cd", "network": {"id": "7a06a21a-ba04-4a14-8d62-c931cbbf124d", "bridge": "br-int", "label": "tempest-LiveMigrationTest-132947190-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1963a097b7694450aa0d7c30b27b38ac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap230cb1ef-c5", "ovs_interfaceid": "230cb1ef-c551-4666-88fe-e49994b798e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.519 257641 DEBUG nova.network.os_vif_util [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.519 257641 DEBUG os_vif [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
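
The four nova_compute lines above are the standard os-vif unplug hand-off: the libvirt driver converts its own VIF dict into an os_vif VIFOpenVSwitch object and delegates the actual port teardown to the os_vif library. A minimal sketch of driving the same public API directly, assuming the ovs plugin is installed and reachable, with the field values copied from the log record:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads os-vif plugins (ovs, linux_bridge, ...) via stevedore

    net = network.Network(id='7a06a21a-ba04-4a14-8d62-c931cbbf124d',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(
        id='230cb1ef-c551-4666-88fe-e49994b798e9',
        address='fa:16:3e:80:fa:cd',
        bridge_name='br-int',
        vif_name='tap230cb1ef-c5',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='2bb12f77-8958-446b-813d-a59f149a549b',
        name='tempest-LiveMigrationTest-server-2050487278')

    # Raises on failure; success corresponds to the
    # "Successfully unplugged vif ..." INFO line later in this log.
    os_vif.unplug(port, inst)
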
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.522 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.522 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap230cb1ef-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.524 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.525 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
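
The DelPortCommand in the transaction above is ovsdbapp's wrapper for removing the tap port from br-int in the local Open vSwitch database. Roughly equivalent standalone usage is sketched below; the socket path and timeout are assumptions for a stock OVS install, not values taken from this host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same operation as the logged DelPortCommand; if_exists makes it
    # idempotent when the port is already gone.
    api.del_port('tap230cb1ef-c5', bridge='br-int',
                 if_exists=True).execute(check_error=True)
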
Nov 29 02:55:29 np0005539550 podman[276780]: 2025-11-29 07:55:29.538878625 +0000 UTC m=+0.045061087 container remove e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.546 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d89d4368-fc94-4335-9831-04ac34fbc2e7]: (4, ('Sat Nov 29 07:55:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed)\ne00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed\nSat Nov 29 07:55:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d (e00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed)\ne00da4603917c6659539ae5d5fe69982c4647ff69c8389261b23e7c8c83017ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
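
The "privsep: reply[...]" entries throughout this section are oslo.privsep round trips: the unprivileged agent calls a decorated function, the privileged daemon executes it, and the reply (message type 4 carries the return value, here the output of the container-stop helper) is logged when it arrives back. A sketch of how such an entrypoint is declared; the context name, capability set, and helper function are illustrative, not neutron's actual definitions:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def run_script(path):
        # Executes inside the privileged daemon; the caller's invocation
        # becomes the IPC exchange logged as "privsep: reply[...]".
        from oslo_concurrency import processutils
        return processutils.execute(path)
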
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.548 257641 DEBUG nova.compute.manager [req-bdef1f6a-679e-453f-8ba6-f6f45bd16c9b req-bfdcb142-165b-4227-8991-8da49c3bc2c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.548 257641 DEBUG oslo_concurrency.lockutils [req-bdef1f6a-679e-453f-8ba6-f6f45bd16c9b req-bfdcb142-165b-4227-8991-8da49c3bc2c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.549 257641 DEBUG oslo_concurrency.lockutils [req-bdef1f6a-679e-453f-8ba6-f6f45bd16c9b req-bfdcb142-165b-4227-8991-8da49c3bc2c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.549 257641 DEBUG oslo_concurrency.lockutils [req-bdef1f6a-679e-453f-8ba6-f6f45bd16c9b req-bfdcb142-165b-4227-8991-8da49c3bc2c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
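
The acquire/release pair above is nova serializing access to its per-instance event queue through oslo.concurrency, which emits exactly these "acquired"/"released" DEBUG lines. The same pattern in isolation (an in-process lock by default; pass external=True for a file-based inter-process lock):

    from oslo_concurrency import lockutils

    with lockutils.lock('2bb12f77-8958-446b-813d-a59f149a549b-events'):
        # pop or register the instance event while holding the lock
        pass
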
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.548 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ce540d8f-3594-41e4-aeae-a5998584eda8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.549 257641 DEBUG nova.compute.manager [req-bdef1f6a-679e-453f-8ba6-f6f45bd16c9b req-bfdcb142-165b-4227-8991-8da49c3bc2c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.549 257641 DEBUG nova.compute.manager [req-bdef1f6a-679e-453f-8ba6-f6f45bd16c9b req-bfdcb142-165b-4227-8991-8da49c3bc2c0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-unplugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.550 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a06a21a-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.551 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 kernel: tap7a06a21a-b0: left promiscuous mode
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00135|binding|INFO|Releasing lport f4400efd-54c7-4734-b939-0d7fcbfba020 from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00136|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00137|binding|INFO|Releasing lport acbe1c54-69e5-4789-8e0b-6d1b69eab5e0 from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:29Z|00138|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.645 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.648 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[663f12ce-ce69-43c3-9eb3-902ef1fbe1e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.650 257641 INFO os_vif [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:80:fa:cd,bridge_name='br-int',has_traffic_filtering=True,id=230cb1ef-c551-4666-88fe-e49994b798e9,network=Network(7a06a21a-ba04-4a14-8d62-c931cbbf124d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap230cb1ef-c5')#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.652 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.663 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[91b05b39-53ea-419f-bbde-5d04e6571372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.664 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3c3aaca8-eab3-4b63-99a2-17c506ee6421]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.671 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.682 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[020afb51-f554-47d7-8b91-748ea57f4658]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588366, 'reachable_time': 39138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276809, 'error': None, 'target': 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.684 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.685 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[84f01dd2-b063-4859-9259-eb15f70ad824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
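
The large privsep reply above is a pyroute2 netlink dump (RTM_NEWLINK) of the lone 'lo' link left in the ovnmeta namespace, after which the agent deletes the namespace itself. Sketched directly against pyroute2, which is roughly what neutron's privileged ip_lib helpers do under the hood; error handling for an already-missing namespace is omitted:

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-7a06a21a-ba04-4a14-8d62-c931cbbf124d'

    # Equivalent of the logged RTM_NEWLINK dump: list links in the namespace.
    with NetNS(ns_name) as ns:
        links = ns.get_links()

    # Equivalent of "Namespace ... deleted. remove_netns".
    netns.remove(ns_name)
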
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.685 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 25733cd0-1d42-411e-be69-7bf3a59b5a2a in datapath 67969051-efe2-48f0-99e2-c96ec0167864 unbound from our chassis#033[00m
Nov 29 02:55:29 np0005539550 systemd[1]: run-netns-ovnmeta\x2d7a06a21a\x2dba04\x2d4a14\x2d8d62\x2dc931cbbf124d.mount: Deactivated successfully.
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.688 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67969051-efe2-48f0-99e2-c96ec0167864, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.689 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[602358c4-bae0-44a4-9f39-d3631699d101]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.691 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 namespace which is not needed anymore#033[00m
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [NOTICE]   (273968) : haproxy version is 2.8.14-c23fe91
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [NOTICE]   (273968) : path to executable is /usr/sbin/haproxy
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [WARNING]  (273968) : Exiting Master process...
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [WARNING]  (273968) : Exiting Master process...
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [ALERT]    (273968) : Current worker (273970) exited with code 143 (Terminated)
Nov 29 02:55:29 np0005539550 neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864[273964]: [WARNING]  (273968) : All workers exited. Exiting... (0)
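
Worker exit code 143 in the haproxy ALERT above is the conventional shell encoding for death by signal, 128 plus the signal number, so this is SIGTERM from the container being stopped, not a haproxy crash:

    import signal
    assert 128 + signal.SIGTERM == 143  # SIGTERM is 15 on Linux
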
Nov 29 02:55:29 np0005539550 systemd[1]: libpod-2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b.scope: Deactivated successfully.
Nov 29 02:55:29 np0005539550 podman[276827]: 2025-11-29 07:55:29.820837022 +0000 UTC m=+0.043817333 container died 2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 02:55:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b-userdata-shm.mount: Deactivated successfully.
Nov 29 02:55:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-85bc4e50a54bdcc0b1d77014c9144ca55da73dd4688d4a6122302f4ea5065f81-merged.mount: Deactivated successfully.
Nov 29 02:55:29 np0005539550 podman[276827]: 2025-11-29 07:55:29.869719071 +0000 UTC m=+0.092699382 container cleanup 2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:55:29 np0005539550 systemd[1]: libpod-conmon-2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b.scope: Deactivated successfully.
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.877 257641 DEBUG nova.compute.manager [req-a80394b9-425d-4312-84aa-dfe696397215 req-ff567b6d-4888-4d53-9441-e18d2013d283 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.878 257641 DEBUG oslo_concurrency.lockutils [req-a80394b9-425d-4312-84aa-dfe696397215 req-ff567b6d-4888-4d53-9441-e18d2013d283 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.878 257641 DEBUG oslo_concurrency.lockutils [req-a80394b9-425d-4312-84aa-dfe696397215 req-ff567b6d-4888-4d53-9441-e18d2013d283 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.879 257641 DEBUG oslo_concurrency.lockutils [req-a80394b9-425d-4312-84aa-dfe696397215 req-ff567b6d-4888-4d53-9441-e18d2013d283 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.879 257641 DEBUG nova.compute.manager [req-a80394b9-425d-4312-84aa-dfe696397215 req-ff567b6d-4888-4d53-9441-e18d2013d283 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.879 257641 DEBUG nova.compute.manager [req-a80394b9-425d-4312-84aa-dfe696397215 req-ff567b6d-4888-4d53-9441-e18d2013d283 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:29 np0005539550 podman[276858]: 2025-11-29 07:55:29.9514856 +0000 UTC m=+0.057480050 container remove 2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.956 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcbbf68-f07c-4865-9380-4b011cd63bf6]: (4, ('Sat Nov 29 07:55:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 (2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b)\n2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b\nSat Nov 29 07:55:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 (2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b)\n2547df3d922e8dec3705c49f87295f447043b56042fa029903b6fce09bc3a96b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.958 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[57771408-0d9c-4510-88f7-e97b12482454]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.959 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67969051-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 kernel: tap67969051-e0: left promiscuous mode
Nov 29 02:55:29 np0005539550 nova_compute[257631]: 2025-11-29 07:55:29.976 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.980 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63b113e4-46b6-4164-b174-edde1beb6ad1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.997 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0585ca4d-b206-48db-b59c-bdf2aedb2a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:29.998 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[264daa36-fb79-4349-8f9b-dbf09a613d1c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:30.013 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3914e9ad-b2b8-4164-a293-d2467c1997e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588446, 'reachable_time': 30491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276874, 'error': None, 'target': 'ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:30.016 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67969051-efe2-48f0-99e2-c96ec0167864 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:55:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:30.016 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[f81e83da-910f-4c1c-8ed2-2bc5851f63bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:30.017 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:55:30 np0005539550 systemd[1]: run-netns-ovnmeta\x2d67969051\x2defe2\x2d48f0\x2d99e2\x2dc96ec0167864.mount: Deactivated successfully.
Nov 29 02:55:30 np0005539550 nova_compute[257631]: 2025-11-29 07:55:30.700 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 621 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Nov 29 02:55:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:30.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:31.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:31 np0005539550 nova_compute[257631]: 2025-11-29 07:55:31.672 257641 DEBUG nova.compute.manager [req-d39e6114-3c30-497b-8d03-0f6f4df29872 req-31c8134c-48e5-48de-83a4-45353818e6a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:31 np0005539550 nova_compute[257631]: 2025-11-29 07:55:31.673 257641 DEBUG oslo_concurrency.lockutils [req-d39e6114-3c30-497b-8d03-0f6f4df29872 req-31c8134c-48e5-48de-83a4-45353818e6a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2bb12f77-8958-446b-813d-a59f149a549b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:31 np0005539550 nova_compute[257631]: 2025-11-29 07:55:31.673 257641 DEBUG oslo_concurrency.lockutils [req-d39e6114-3c30-497b-8d03-0f6f4df29872 req-31c8134c-48e5-48de-83a4-45353818e6a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:31 np0005539550 nova_compute[257631]: 2025-11-29 07:55:31.673 257641 DEBUG oslo_concurrency.lockutils [req-d39e6114-3c30-497b-8d03-0f6f4df29872 req-31c8134c-48e5-48de-83a4-45353818e6a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:31 np0005539550 nova_compute[257631]: 2025-11-29 07:55:31.673 257641 DEBUG nova.compute.manager [req-d39e6114-3c30-497b-8d03-0f6f4df29872 req-31c8134c-48e5-48de-83a4-45353818e6a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] No waiting events found dispatching network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:31 np0005539550 nova_compute[257631]: 2025-11-29 07:55:31.674 257641 WARNING nova.compute.manager [req-d39e6114-3c30-497b-8d03-0f6f4df29872 req-31c8134c-48e5-48de-83a4-45353818e6a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Received unexpected event network-vif-plugged-230cb1ef-c551-4666-88fe-e49994b798e9 for instance with vm_state active and task_state deleting.#033[00m
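
The network-vif-plugged/-unplugged events in this run arrive over nova's os-server-external-events API, which Neutron calls whenever a port's binding or status changes; the WARNING is benign here, since no code path was waiting on the event for an instance already in task_state deleting. A hedged sketch of the underlying REST call (the endpoint URL and token are placeholders; the UUIDs are copied from the log):

    import requests

    requests.post(
        'http://nova-api.example.com/v2.1/os-server-external-events',
        headers={'X-Auth-Token': '<token>'},
        json={'events': [{
            'name': 'network-vif-plugged',
            'server_uuid': '2bb12f77-8958-446b-813d-a59f149a549b',
            'tag': '230cb1ef-c551-4666-88fe-e49994b798e9',
        }]},
    )
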
Nov 29 02:55:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 556 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 29 02:55:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:33.021 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
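
This DbSetCommand is the metadata agent acknowledging southbound config 10 by stamping its own Chassis_Private record (the nb_cfg=10 it observed in the SB_Global update a few seconds earlier). With ovsdbapp's generic backend API the equivalent is (a sketch; it assumes an api handle built like the earlier Open_vSwitch example but connected to the OVN_Southbound schema, and an ovsdbapp version whose db_set accepts if_exists, as the logged command shows this one does):

    api.db_set(
        'Chassis_Private',
        'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),
        if_exists=True,
    ).execute(check_error=True)
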
Nov 29 02:55:33 np0005539550 nova_compute[257631]: 2025-11-29 07:55:33.122 257641 DEBUG nova.compute.manager [req-5a3c248d-6424-4d63-9be0-0e1543bf8b57 req-8970ba0a-d71c-4cfe-95fd-29aa3317f219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:33 np0005539550 nova_compute[257631]: 2025-11-29 07:55:33.122 257641 DEBUG oslo_concurrency.lockutils [req-5a3c248d-6424-4d63-9be0-0e1543bf8b57 req-8970ba0a-d71c-4cfe-95fd-29aa3317f219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:33 np0005539550 nova_compute[257631]: 2025-11-29 07:55:33.123 257641 DEBUG oslo_concurrency.lockutils [req-5a3c248d-6424-4d63-9be0-0e1543bf8b57 req-8970ba0a-d71c-4cfe-95fd-29aa3317f219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:33 np0005539550 nova_compute[257631]: 2025-11-29 07:55:33.123 257641 DEBUG oslo_concurrency.lockutils [req-5a3c248d-6424-4d63-9be0-0e1543bf8b57 req-8970ba0a-d71c-4cfe-95fd-29aa3317f219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:33 np0005539550 nova_compute[257631]: 2025-11-29 07:55:33.123 257641 DEBUG nova.compute.manager [req-5a3c248d-6424-4d63-9be0-0e1543bf8b57 req-8970ba0a-d71c-4cfe-95fd-29aa3317f219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:33 np0005539550 nova_compute[257631]: 2025-11-29 07:55:33.123 257641 WARNING nova.compute.manager [req-5a3c248d-6424-4d63-9be0-0e1543bf8b57 req-8970ba0a-d71c-4cfe-95fd-29aa3317f219 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:55:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:33.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:33 np0005539550 podman[276881]: 2025-11-29 07:55:33.35980536 +0000 UTC m=+0.087853083 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.416847) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933416969, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2147, "num_deletes": 251, "total_data_size": 3795764, "memory_usage": 3849792, "flush_reason": "Manual Compaction"}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933443795, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3717364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23862, "largest_seqno": 26008, "table_properties": {"data_size": 3707924, "index_size": 5870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20321, "raw_average_key_size": 20, "raw_value_size": 3688722, "raw_average_value_size": 3710, "num_data_blocks": 260, "num_entries": 994, "num_filter_entries": 994, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402719, "oldest_key_time": 1764402719, "file_creation_time": 1764402933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 27018 microseconds, and 10371 cpu microseconds.
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.443894) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3717364 bytes OK
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.443920) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.445635) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.445687) EVENT_LOG_v1 {"time_micros": 1764402933445656, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.445705) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3787012, prev total WAL file size 3802550, number of live WAL files 2.
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.446767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3630KB)], [56(9546KB)]
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933446844, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 13492957, "oldest_snapshot_seqno": -1}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5773 keys, 11293577 bytes, temperature: kUnknown
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933539552, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 11293577, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253296, "index_size": 24747, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 146901, "raw_average_key_size": 25, "raw_value_size": 11147443, "raw_average_value_size": 1930, "num_data_blocks": 1010, "num_entries": 5773, "num_filter_entries": 5773, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764402933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.539934) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11293577 bytes
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.547960) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.4 rd, 121.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.3 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(6.7) write-amplify(3.0) OK, records in: 6290, records dropped: 517 output_compression: NoCompression
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.548002) EVENT_LOG_v1 {"time_micros": 1764402933547988, "job": 30, "event": "compaction_finished", "compaction_time_micros": 92772, "compaction_time_cpu_micros": 32178, "output_level": 6, "num_output_files": 1, "total_output_size": 11293577, "num_input_records": 6290, "num_output_records": 5773, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933548888, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764402933550620, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.446579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.550702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.550708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.550710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.550711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:55:33 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:55:33.550712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
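
The JOB 30 compaction summary above is internally consistent and can be reproduced from the EVENT_LOG fields: the rd/wr rates are bytes over compaction_time_micros (which makes them decimal MB/s, while the in/out sizes are printed in binary MiB), and the amplification factors are ratios against the 3,717,364-byte L0 input file:

    input_bytes = 13_492_957   # compaction_started input_data_size (L0 + L6)
    output_bytes = 11_293_577  # compaction_finished total_output_size
    l0_bytes = 3_717_364       # table #58, the freshly flushed L0 file
    micros = 92_772            # compaction_time_micros

    print(input_bytes / micros)                     # ~145.4 -> "MB/sec: 145.4 rd"
    print(output_bytes / micros)                    # ~121.7 -> "121.7 wr"
    print(output_bytes / l0_bytes)                  # ~3.0   -> write-amplify(3.0)
    print((input_bytes + output_bytes) / l0_bytes)  # ~6.7   -> read-write-amplify(6.7)
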
Nov 29 02:55:34 np0005539550 nova_compute[257631]: 2025-11-29 07:55:34.525 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 526 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Nov 29 02:55:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:34.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.343 257641 INFO nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Took 12.22 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.343 257641 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:55:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:35.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.399 257641 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp5ro22k8w',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(a06b03c0-83a5-4dab-85da-5d9c4df52b22),old_vol_attachment_ids={e52d8ac1-8970-4cf0-9aa0-795f616090d0='b80f6614-6e08-4c17-b66e-c0f2a630e4d8'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.402 257641 DEBUG nova.objects.instance [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lazy-loading 'migration_context' on Instance uuid 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.404 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.405 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.405 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.477 257641 DEBUG nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Find same serial number: pos=1, serial=e52d8ac1-8970-4cf0-9aa0-795f616090d0 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.479 257641 DEBUG nova.virt.libvirt.vif [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:16Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.479 257641 DEBUG nova.network.os_vif_util [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.479 257641 DEBUG nova.network.os_vif_util [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.480 257641 DEBUG nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating guest XML with vif config: <interface type="ethernet">
Nov 29 02:55:35 np0005539550 nova_compute[257631]:  <mac address="fa:16:3e:c2:bc:90"/>
Nov 29 02:55:35 np0005539550 nova_compute[257631]:  <model type="virtio"/>
Nov 29 02:55:35 np0005539550 nova_compute[257631]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 02:55:35 np0005539550 nova_compute[257631]:  <mtu size="1442"/>
Nov 29 02:55:35 np0005539550 nova_compute[257631]:  <target dev="tapd1330295-51"/>
Nov 29 02:55:35 np0005539550 nova_compute[257631]: </interface>
Nov 29 02:55:35 np0005539550 nova_compute[257631]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.481 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.503 257641 DEBUG nova.compute.manager [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-changed-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.503 257641 DEBUG nova.compute.manager [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Refreshing instance network info cache due to event network-changed-d1330295-51bc-4e64-a620-b63a6d8777fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.504 257641 DEBUG oslo_concurrency.lockutils [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.504 257641 DEBUG oslo_concurrency.lockutils [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.504 257641 DEBUG nova.network.neutron [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Refreshing network info cache for port d1330295-51bc-4e64-a620-b63a6d8777fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.702 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.909 257641 DEBUG nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:35 np0005539550 nova_compute[257631]: 2025-11-29 07:55:35.910 257641 INFO nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
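The step table printed by update_downtime just above, [(0, 50), (150, 95), ..., (1500, 500)], is a plain arithmetic ramp: downtime starts at downtime/steps and grows by a fixed offset at a fixed interval. A sketch that reproduces the logged schedule, assuming nova's documented ramp with live_migration_downtime=500 ms and 10 steps; the effective 150 s per step is an inference from the logged times (75 s/GB scaled by 2 GB of guest data would produce it), not something this log states:

    # downtime_ramp.py -- illustrative reconstruction of the ramp that the
    # update_downtime messages above are printing. Parameter values are
    # assumptions consistent with the logged schedule, not read from config.

    def downtime_steps(downtime_ms=500, steps=10, delay_s=150):
        """Yield (elapsed_seconds, downtime_ms) pairs for the migration ramp."""
        base = downtime_ms / steps                 # 50 ms starting downtime
        offset = (downtime_ms - base) / steps      # +45 ms per step
        for i in range(steps + 1):
            yield int(delay_s * i), int(base + offset * i)

    # Matches the schedule logged at 07:55:35.909:
    # [(0, 50), (150, 95), (300, 140), ..., (1500, 500)]
    print(list(downtime_steps()))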
Nov 29 02:55:36 np0005539550 nova_compute[257631]: 2025-11-29 07:55:36.160 257641 INFO nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Nov 29 02:55:36 np0005539550 nova_compute[257631]: 2025-11-29 07:55:36.662 257641 DEBUG nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:36 np0005539550 nova_compute[257631]: 2025-11-29 07:55:36.663 257641 DEBUG nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:55:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 521 MiB data, 604 MiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 2.2 MiB/s wr, 139 op/s
Nov 29 02:55:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:36.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.167 257641 DEBUG nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.168 257641 DEBUG nova.virt.libvirt.migration [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Nov 29 02:55:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:37.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.391 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764402937.3909633, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.391 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Paused (Lifecycle Event)#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.446 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.449 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.480 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
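The sync message above compares numeric codes: DB power_state 1 against VM power_state 3 (and, for the instance torn down a few seconds later, VM power_state 4). These are nova.compute.power_state constants; a small decoder with the values as I understand them from nova, worth verifying against the deployed tree:

    # power_states.py -- decoder for the numeric power states in the sync
    # messages above; values follow nova.compute.power_state as I understand
    # it (verify against the deployed nova tree before relying on this).
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",    # DB power_state in the Paused lifecycle event above
        3: "PAUSED",     # VM power_state while live migration pauses the guest
        4: "SHUTDOWN",   # VM power_state after the later Stopped event
        6: "CRASHED",
        7: "SUSPENDED",
    }

    def describe(db_state: int, vm_state: int) -> str:
        return (f"DB={POWER_STATES.get(db_state, db_state)} "
                f"VM={POWER_STATES.get(vm_state, vm_state)}")

    print(describe(1, 3))  # -> DB=RUNNING VM=PAUSED, as logged at 07:55:37.449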
Nov 29 02:55:37 np0005539550 kernel: tapd1330295-51 (unregistering): left promiscuous mode
Nov 29 02:55:37 np0005539550 NetworkManager[49039]: <info>  [1764402937.7162] device (tapd1330295-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.724 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:37Z|00139|binding|INFO|Releasing lport d1330295-51bc-4e64-a620-b63a6d8777fb from this chassis (sb_readonly=0)
Nov 29 02:55:37 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:37Z|00140|binding|INFO|Setting lport d1330295-51bc-4e64-a620-b63a6d8777fb down in Southbound
Nov 29 02:55:37 np0005539550 ovn_controller[148680]: 2025-11-29T07:55:37Z|00141|binding|INFO|Removing iface tapd1330295-51 ovn-installed in OVS
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.727 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.742 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539550 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 29 02:55:37 np0005539550 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000013.scope: Consumed 2.940s CPU time.
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.784 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c2:bc:90 10.100.0.12'], port_security=['fa:16:3e:c2:bc:90 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '479f969f-dbf7-4938-8979-b8532eb113f6'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '17', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d1330295-51bc-4e64-a620-b63a6d8777fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.786 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d1330295-51bc-4e64-a620-b63a6d8777fb in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 unbound from our chassis#033[00m
Nov 29 02:55:37 np0005539550 systemd-machined[216673]: Machine qemu-11-instance-00000013 terminated.
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.787 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.802 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[76419e33-c414-452f-80b6-16395293d4ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.831 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[265a9f21-a11f-428f-91d5-9ab99532a2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.834 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e85222-cf1c-446e-922f-862de23aa975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.860 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[29e5527e-48d1-46e3-80ef-2b1ae6888f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.877 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2719b8-b59c-49a6-bc24-c20a6c48c3a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad69a0f4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:a1:2d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 29, 'tx_packets': 7, 'rx_bytes': 1498, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 29, 'tx_packets': 7, 'rx_bytes': 1498, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588187, 'reachable_time': 18122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276923, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.893 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1b41dba5-45c3-47cf-a538-1e4708ec4bd7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapad69a0f4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588199, 'tstamp': 588199}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276924, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapad69a0f4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 588202, 'tstamp': 588202}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276924, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.895 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:37 np0005539550 virtqemud[256287]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-e52d8ac1-8970-4cf0-9aa0-795f616090d0: No such file or directory
Nov 29 02:55:37 np0005539550 virtqemud[256287]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-e52d8ac1-8970-4cf0-9aa0-795f616090d0: No such file or directory
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.945 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.946 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.953 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad69a0f4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.953 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.953 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad69a0f4-00, col_values=(('external_ids', {'iface-id': '7ffec560-b868-40db-af88-b0deaaa81f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.953 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:55:37.954 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.957 257641 DEBUG nova.virt.libvirt.guest [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.958 257641 INFO nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration operation has completed#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.958 257641 INFO nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] _post_live_migration() is started..#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.960 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.960 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Nov 29 02:55:37 np0005539550 nova_compute[257631]: 2025-11-29 07:55:37.960 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.400 257641 DEBUG nova.compute.manager [req-d4f9f6b7-981a-47a3-9e29-a39ef6ff7c60 req-560886ea-6deb-453f-bbf3-8ec6e434028a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.400 257641 DEBUG oslo_concurrency.lockutils [req-d4f9f6b7-981a-47a3-9e29-a39ef6ff7c60 req-560886ea-6deb-453f-bbf3-8ec6e434028a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.400 257641 DEBUG oslo_concurrency.lockutils [req-d4f9f6b7-981a-47a3-9e29-a39ef6ff7c60 req-560886ea-6deb-453f-bbf3-8ec6e434028a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.400 257641 DEBUG oslo_concurrency.lockutils [req-d4f9f6b7-981a-47a3-9e29-a39ef6ff7c60 req-560886ea-6deb-453f-bbf3-8ec6e434028a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.401 257641 DEBUG nova.compute.manager [req-d4f9f6b7-981a-47a3-9e29-a39ef6ff7c60 req-560886ea-6deb-453f-bbf3-8ec6e434028a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.401 257641 DEBUG nova.compute.manager [req-d4f9f6b7-981a-47a3-9e29-a39ef6ff7c60 req-560886ea-6deb-453f-bbf3-8ec6e434028a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.575 257641 DEBUG nova.network.neutron [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updated VIF entry in instance network info cache for port d1330295-51bc-4e64-a620-b63a6d8777fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.576 257641 DEBUG nova.network.neutron [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Updating instance_info_cache with network_info: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:55:38 np0005539550 nova_compute[257631]: 2025-11-29 07:55:38.674 257641 DEBUG oslo_concurrency.lockutils [req-ba7a3a53-89eb-4e02-b851-965d87d9c8ab req-6f7ccd65-ccee-432e-afdc-1291d03d5883 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-56f3f72f-7db4-47c8-a4c3-20b2acc58aa9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:55:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 472 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 257 KiB/s rd, 1.4 MiB/s wr, 118 op/s
Nov 29 02:55:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:38.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:39.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
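The radosgw beast lines recur every second or two as anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102, which look like load-balancer health checks rather than client traffic. A regex sketch for splitting the access-line format as it appears in this log; the field names are my own labels, not anything radosgw defines:

    # beast_access.py -- parses the radosgw "beast:" access lines above.
    # Field names are descriptive labels for the format as seen in this log.
    import re

    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:55:39.348 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000027s')
    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("status"), m.group("latency"))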
Nov 29 02:55:39 np0005539550 nova_compute[257631]: 2025-11-29 07:55:39.528 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:40 np0005539550 nova_compute[257631]: 2025-11-29 07:55:40.695 257641 DEBUG nova.compute.manager [req-6668749c-af56-4423-b900-7eda8eba4615 req-47fcb71f-ae10-443e-8937-b49f65f8faf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:40 np0005539550 nova_compute[257631]: 2025-11-29 07:55:40.696 257641 DEBUG oslo_concurrency.lockutils [req-6668749c-af56-4423-b900-7eda8eba4615 req-47fcb71f-ae10-443e-8937-b49f65f8faf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:40 np0005539550 nova_compute[257631]: 2025-11-29 07:55:40.696 257641 DEBUG oslo_concurrency.lockutils [req-6668749c-af56-4423-b900-7eda8eba4615 req-47fcb71f-ae10-443e-8937-b49f65f8faf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:40 np0005539550 nova_compute[257631]: 2025-11-29 07:55:40.696 257641 DEBUG oslo_concurrency.lockutils [req-6668749c-af56-4423-b900-7eda8eba4615 req-47fcb71f-ae10-443e-8937-b49f65f8faf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:40 np0005539550 nova_compute[257631]: 2025-11-29 07:55:40.697 257641 DEBUG nova.compute.manager [req-6668749c-af56-4423-b900-7eda8eba4615 req-47fcb71f-ae10-443e-8937-b49f65f8faf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:40 np0005539550 nova_compute[257631]: 2025-11-29 07:55:40.697 257641 WARNING nova.compute.manager [req-6668749c-af56-4423-b900-7eda8eba4615 req-47fcb71f-ae10-443e-8937-b49f65f8faf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.#033[00m
Nov 29 02:55:40 np0005539550 nova_compute[257631]: 2025-11-29 07:55:40.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 445 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 163 KiB/s rd, 108 KiB/s wr, 83 op/s
Nov 29 02:55:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:40.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:41.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 445 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 29 KiB/s wr, 55 op/s
Nov 29 02:55:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:43.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:43 np0005539550 nova_compute[257631]: 2025-11-29 07:55:43.461 257641 DEBUG nova.compute.manager [req-e3dfbd79-babb-4c74-88ef-fe2080103807 req-412a57e9-b315-4154-b17b-7187f4ffb6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:55:43 np0005539550 nova_compute[257631]: 2025-11-29 07:55:43.461 257641 DEBUG oslo_concurrency.lockutils [req-e3dfbd79-babb-4c74-88ef-fe2080103807 req-412a57e9-b315-4154-b17b-7187f4ffb6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:55:43 np0005539550 nova_compute[257631]: 2025-11-29 07:55:43.462 257641 DEBUG oslo_concurrency.lockutils [req-e3dfbd79-babb-4c74-88ef-fe2080103807 req-412a57e9-b315-4154-b17b-7187f4ffb6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:55:43 np0005539550 nova_compute[257631]: 2025-11-29 07:55:43.462 257641 DEBUG oslo_concurrency.lockutils [req-e3dfbd79-babb-4c74-88ef-fe2080103807 req-412a57e9-b315-4154-b17b-7187f4ffb6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:55:43 np0005539550 nova_compute[257631]: 2025-11-29 07:55:43.462 257641 DEBUG nova.compute.manager [req-e3dfbd79-babb-4c74-88ef-fe2080103807 req-412a57e9-b315-4154-b17b-7187f4ffb6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:55:43 np0005539550 nova_compute[257631]: 2025-11-29 07:55:43.462 257641 DEBUG nova.compute.manager [req-e3dfbd79-babb-4c74-88ef-fe2080103807 req-412a57e9-b315-4154-b17b-7187f4ffb6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-unplugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.481 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402929.480414, 2bb12f77-8958-446b-813d-a59f149a549b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.482 257641 INFO nova.compute.manager [-] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.531 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.647 257641 DEBUG nova.compute.manager [None req-f95a4017-6ce7-46f9-a4cb-2d7be51659ee - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.651 257641 DEBUG nova.compute.manager [None req-f95a4017-6ce7-46f9-a4cb-2d7be51659ee - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:55:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 438 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 25 KiB/s wr, 48 op/s
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.726 257641 INFO nova.virt.libvirt.driver [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Deleting instance files /var/lib/nova/instances/2bb12f77-8958-446b-813d-a59f149a549b_del#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.727 257641 INFO nova.virt.libvirt.driver [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Deletion of /var/lib/nova/instances/2bb12f77-8958-446b-813d-a59f149a549b_del complete#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.789 257641 INFO nova.compute.manager [None req-f95a4017-6ce7-46f9-a4cb-2d7be51659ee - - - - - -] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Nov 29 02:55:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:44.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.895 257641 INFO nova.compute.manager [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Took 16.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.895 257641 DEBUG oslo.service.loopingcall [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.896 257641 DEBUG nova.compute.manager [-] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.896 257641 DEBUG nova.network.neutron [-] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.926 257641 DEBUG nova.network.neutron [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Activated binding for port d1330295-51bc-4e64-a620-b63a6d8777fb and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.926 257641 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.927 257641 DEBUG nova.virt.libvirt.vif [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:54:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-178880762',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-178880762',id=19,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:54:38Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-1xnv5qiw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:20Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=56f3f72f-7db4-47c8-a4c3-20b2acc58aa9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.927 257641 DEBUG nova.network.os_vif_util [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converting VIF {"id": "d1330295-51bc-4e64-a620-b63a6d8777fb", "address": "fa:16:3e:c2:bc:90", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1330295-51", "ovs_interfaceid": "d1330295-51bc-4e64-a620-b63a6d8777fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.928 257641 DEBUG nova.network.os_vif_util [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.928 257641 DEBUG os_vif [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.930 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.930 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1330295-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.931 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.933 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.936 257641 INFO os_vif [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c2:bc:90,bridge_name='br-int',has_traffic_filtering=True,id=d1330295-51bc-4e64-a620-b63a6d8777fb,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1330295-51')
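The DelPortCommand transaction logged at 07:55:44.930 is ovsdbapp's idempotent OVS port removal. A minimal standalone sketch of the same operation, assuming a local ovsdb unix socket at /run/openvswitch/db.sock; os-vif wires the connection up differently internally, so this is an illustration, not nova's actual code path:

    # Sketch of the DelPortCommand transaction above, via ovsdbapp directly.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same semantics as the logged command: remove the tap port from
    # br-int; if_exists=True makes a missing port a no-op, not an error.
    api.del_port('tapd1330295-51', bridge='br-int',
                 if_exists=True).execute(check_error=True)
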
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.936 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.937 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.937 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.937 257641 DEBUG nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.937 257641 INFO nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deleting instance files /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_del
Nov 29 02:55:44 np0005539550 nova_compute[257631]: 2025-11-29 07:55:44.938 257641 INFO nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Deletion of /var/lib/nova/instances/56f3f72f-7db4-47c8-a4c3-20b2acc58aa9_del complete
Nov 29 02:55:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:45.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.671 257641 DEBUG nova.compute.manager [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.671 257641 DEBUG oslo_concurrency.lockutils [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.671 257641 DEBUG oslo_concurrency.lockutils [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.671 257641 DEBUG oslo_concurrency.lockutils [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.672 257641 DEBUG nova.compute.manager [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.672 257641 WARNING nova.compute.manager [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.672 257641 DEBUG nova.compute.manager [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.672 257641 DEBUG oslo_concurrency.lockutils [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.672 257641 DEBUG oslo_concurrency.lockutils [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.672 257641 DEBUG oslo_concurrency.lockutils [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.672 257641 DEBUG nova.compute.manager [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.673 257641 WARNING nova.compute.manager [req-4b18ea74-0fda-46ab-bd80-88ac2828a281 req-a7ff268a-9909-4b9c-8920-2e986c6536ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.
Nov 29 02:55:45 np0005539550 nova_compute[257631]: 2025-11-29 07:55:45.707 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
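The Acquiring/acquired/released triplets above are emitted by oslo.concurrency around every critical section; nova names the lock after the per-instance events key or the shared "compute_resources" string. A minimal sketch of both forms (lock names taken from the log; bodies are placeholders):

    from oslo_concurrency import lockutils

    # Decorator form, as used around _pop_event in the lines above.
    @lockutils.synchronized('56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events')
    def _pop_event():
        pass  # critical section: pop one pending event for the instance

    # Equivalent context-manager form, as used for "compute_resources".
    with lockutils.lock('compute_resources'):
        pass  # resource-tracker bookkeeping happens under this lock

    _pop_event()

With DEBUG logging enabled, each acquire/release pair produces exactly the three lockutils.py:404/409/423 lines seen throughout this log.
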
Nov 29 02:55:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 438 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 26 KiB/s wr, 53 op/s
Nov 29 02:55:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:46.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:47.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
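The beast access lines repeated throughout this log share one shape (client, user, timestamp, request, status, bytes, latency). A small parser sketch; the pattern below is inferred from the samples here, not taken from radosgw documentation:

    import re

    # Regex derived from the beast lines in this log.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:55:47.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m['client'], m['req'], m['status'], float(m['latency']))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.0
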
Nov 29 02:55:48 np0005539550 nova_compute[257631]: 2025-11-29 07:55:48.141 257641 DEBUG nova.compute.manager [req-092a0765-b73b-4cfb-a953-141186e48508 req-d3b229fe-a739-45fc-b162-fb0614b4e75a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 02:55:48 np0005539550 nova_compute[257631]: 2025-11-29 07:55:48.142 257641 DEBUG oslo_concurrency.lockutils [req-092a0765-b73b-4cfb-a953-141186e48508 req-d3b229fe-a739-45fc-b162-fb0614b4e75a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:48 np0005539550 nova_compute[257631]: 2025-11-29 07:55:48.142 257641 DEBUG oslo_concurrency.lockutils [req-092a0765-b73b-4cfb-a953-141186e48508 req-d3b229fe-a739-45fc-b162-fb0614b4e75a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:48 np0005539550 nova_compute[257631]: 2025-11-29 07:55:48.142 257641 DEBUG oslo_concurrency.lockutils [req-092a0765-b73b-4cfb-a953-141186e48508 req-d3b229fe-a739-45fc-b162-fb0614b4e75a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:48 np0005539550 nova_compute[257631]: 2025-11-29 07:55:48.143 257641 DEBUG nova.compute.manager [req-092a0765-b73b-4cfb-a953-141186e48508 req-d3b229fe-a739-45fc-b162-fb0614b4e75a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] No waiting events found dispatching network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 02:55:48 np0005539550 nova_compute[257631]: 2025-11-29 07:55:48.143 257641 WARNING nova.compute.manager [req-092a0765-b73b-4cfb-a953-141186e48508 req-d3b229fe-a739-45fc-b162-fb0614b4e75a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Received unexpected event network-vif-plugged-d1330295-51bc-4e64-a620-b63a6d8777fb for instance with vm_state active and task_state migrating.
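The repeated network-vif-plugged warnings are benign here: Neutron posts the event for the destination port binding while this source host still sees the instance as migrating, so no waiter matches. The event travels over Nova's os-server-external-events REST API; a hedged sketch of that request shape, with placeholder endpoint and token (not values from this deployment):

    import requests

    NOVA_URL = 'http://nova-api.example.com:8774/v2.1'  # placeholder endpoint
    TOKEN = 'REPLACE-WITH-KEYSTONE-TOKEN'               # placeholder token

    # Event name, server_uuid and tag (port id) taken from the log lines above.
    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '56f3f72f-7db4-47c8-a4c3-20b2acc58aa9',
        'tag': 'd1330295-51bc-4e64-a620-b63a6d8777fb',
    }]}
    r = requests.post(f'{NOVA_URL}/os-server-external-events',
                      json=body, headers={'X-Auth-Token': TOKEN})
    print(r.status_code)
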
Nov 29 02:55:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 457 MiB data, 558 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 405 KiB/s wr, 73 op/s
Nov 29 02:55:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:48.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:49.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:49 np0005539550 nova_compute[257631]: 2025-11-29 07:55:49.764 257641 DEBUG nova.network.neutron [-] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:55:49 np0005539550 nova_compute[257631]: 2025-11-29 07:55:49.931 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:50 np0005539550 nova_compute[257631]: 2025-11-29 07:55:50.608 257641 INFO nova.compute.manager [-] [instance: 2bb12f77-8958-446b-813d-a59f149a549b] Took 5.71 seconds to deallocate network for instance.
Nov 29 02:55:50 np0005539550 nova_compute[257631]: 2025-11-29 07:55:50.709 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 55 op/s
Nov 29 02:55:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:50.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:51 np0005539550 nova_compute[257631]: 2025-11-29 07:55:51.182 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:51 np0005539550 nova_compute[257631]: 2025-11-29 07:55:51.183 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:51 np0005539550 nova_compute[257631]: 2025-11-29 07:55:51.323 257641 DEBUG oslo_concurrency.processutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:51 np0005539550 podman[276995]: 2025-11-29 07:55:51.325797355 +0000 UTC m=+0.059819752 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 02:55:51 np0005539550 podman[276994]: 2025-11-29 07:55:51.330802389 +0000 UTC m=+0.064948779 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 02:55:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:51.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174540729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:51 np0005539550 nova_compute[257631]: 2025-11-29 07:55:51.770 257641 DEBUG oslo_concurrency.processutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
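The Running cmd/CMD returned pair above records the exact subprocess the resource tracker forks to size the RBD-backed disk pool. The same call through oslo.concurrency, as a sketch (key layout of the JSON output assumed from stock ceph df):

    import json
    from oslo_concurrency import processutils

    # processutils.execute raises ProcessExecutionError on non-zero exit,
    # mirroring the "returned: 0 in 0.447s" check logged above.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])  # cluster-wide free space
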
Nov 29 02:55:51 np0005539550 nova_compute[257631]: 2025-11-29 07:55:51.778 257641 DEBUG nova.compute.provider_tree [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:52 np0005539550 nova_compute[257631]: 2025-11-29 07:55:52.003 257641 DEBUG nova.scheduler.client.report [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
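Placement derives schedulable capacity from that inventory dict as (total - reserved) * allocation_ratio, which is why this 8-vCPU host can accept far more vCPU than memory relative to its size. Worked out from the values logged above:

    # Capacity math for the inventory dict above, per resource class.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
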
Nov 29 02:55:52 np0005539550 nova_compute[257631]: 2025-11-29 07:55:52.032 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:52 np0005539550 nova_compute[257631]: 2025-11-29 07:55:52.225 257641 INFO nova.scheduler.client.report [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Deleted allocations for instance 2bb12f77-8958-446b-813d-a59f149a549b
Nov 29 02:55:52 np0005539550 nova_compute[257631]: 2025-11-29 07:55:52.422 257641 DEBUG oslo_concurrency.lockutils [None req-12ddce0e-7cc0-44a7-a490-93985cfcc531 85f5548e01234fe4ae9b88e998e943f8 1963a097b7694450aa0d7c30b27b38ac - - default default] Lock "2bb12f77-8958-446b-813d-a59f149a549b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 24.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Nov 29 02:55:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:52 np0005539550 nova_compute[257631]: 2025-11-29 07:55:52.958 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402937.9573658, 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:55:52 np0005539550 nova_compute[257631]: 2025-11-29 07:55:52.959 257641 INFO nova.compute.manager [-] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] VM Stopped (Lifecycle Event)
Nov 29 02:55:52 np0005539550 nova_compute[257631]: 2025-11-29 07:55:52.988 257641 DEBUG nova.compute.manager [None req-a8a9939d-d2c0-47de-9a47-b1f65abf4b74 - - - - - -] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:55:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:53.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Nov 29 02:55:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:54 np0005539550 nova_compute[257631]: 2025-11-29 07:55:54.933 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:55:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:55.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:55 np0005539550 nova_compute[257631]: 2025-11-29 07:55:55.711 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.066 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.066 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.067 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "56f3f72f-7db4-47c8-a4c3-20b2acc58aa9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.090 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.091 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.091 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.091 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.092 257641 DEBUG oslo_concurrency.processutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603711066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
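The handle_command/audit pair above is the monitor-side view of the same "df" call nova forks: the JSON {"prefix": "df", "format": "json"} is the wire format of a mon command. The equivalent request through the rados Python binding, skipping the CLI fork entirely (a sketch; assumes the client.openstack keyring is readable by the caller):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    # Same payload the monitor logs as dispatched above.
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    cluster.shutdown()
    print(ret, json.loads(out)['stats']['total_avail_bytes'])
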
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.540 257641 DEBUG oslo_concurrency.processutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Nov 29 02:55:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.934 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.936 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.940 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:55:56 np0005539550 nova_compute[257631]: 2025-11-29 07:55:56.941 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.125 257641 WARNING nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.127 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4272MB free_disk=20.78524398803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.127 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.127 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.263 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Migration for instance 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.307 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Nov 29 02:55:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:55:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:57.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.375 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Instance 9bef976c-2981-4d19-aa60-8a550b7093ca actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.376 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Instance 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.376 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Migration a06b03c0-83a5-4dab-85da-5d9c4df52b22 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.377 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.377 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
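That final view is consistent with the hypervisor view logged at 07:55:57.127 (free_vcpus=6) and with the two 128 MB instances still allocated above; a quick check of the arithmetic, with the 512 MB reservation taken from the inventory logged earlier:

    # Consistency check of the two resource views logged above.
    total_vcpus, used_vcpus = 8, 2
    assert total_vcpus - used_vcpus == 6       # matches free_vcpus=6

    # used_ram=768MB = 512 MB reserved + 2 instances x 128 MB each.
    assert 512 + 2 * 128 == 768
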
Nov 29 02:55:57 np0005539550 nova_compute[257631]: 2025-11-29 07:55:57.492 257641 DEBUG oslo_concurrency.processutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:55:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4209553299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:55:58 np0005539550 nova_compute[257631]: 2025-11-29 07:55:58.041 257641 DEBUG oslo_concurrency.processutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:55:58 np0005539550 nova_compute[257631]: 2025-11-29 07:55:58.048 257641 DEBUG nova.compute.provider_tree [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:55:58 np0005539550 nova_compute[257631]: 2025-11-29 07:55:58.073 257641 DEBUG nova.scheduler.client.report [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:55:58 np0005539550 nova_compute[257631]: 2025-11-29 07:55:58.121 257641 DEBUG nova.compute.resource_tracker [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:55:58 np0005539550 nova_compute[257631]: 2025-11-29 07:55:58.122 257641 DEBUG oslo_concurrency.lockutils [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:55:58 np0005539550 nova_compute[257631]: 2025-11-29 07:55:58.128 257641 INFO nova.compute.manager [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Migrating instance to compute-2.ctlplane.example.com finished successfully.
Nov 29 02:55:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:55:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:55:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Nov 29 02:55:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 02:55:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:55:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 02:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:55:59
Nov 29 02:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'vms', 'backups', 'default.rgw.control']
Nov 29 02:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
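Those five balancer lines are one periodic upmap pass that scanned every pool and found nothing to move ("prepared 0/10 changes"). The same state can be read back with the ceph CLI; a sketch via subprocess, assuming a reachable cluster and an admin keyring on the caller's host:

    import json
    import subprocess

    # Query the mgr balancer module the log lines above come from.
    out = subprocess.run(
        ['ceph', 'balancer', 'status', '--format', 'json'],
        check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status['mode'], status['active'])   # e.g. "upmap", True
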
Nov 29 02:55:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:55:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:55:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:55:59.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:55:59 np0005539550 nova_compute[257631]: 2025-11-29 07:55:59.934 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:55:59 np0005539550 nova_compute[257631]: 2025-11-29 07:55:59.967 257641 INFO nova.scheduler.client.report [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] Deleted allocation for migration a06b03c0-83a5-4dab-85da-5d9c4df52b22
Nov 29 02:55:59 np0005539550 nova_compute[257631]: 2025-11-29 07:55:59.968 257641 DEBUG nova.virt.libvirt.driver [None req-e86c504a-33cb-444a-a768-ac3d6390fbe6 82f20a64d74c4e828a3bcc36c01b947f d7ed55b45c19429eb46f57b6ebce2647 - - default default] [instance: 56f3f72f-7db4-47c8-a4c3-20b2acc58aa9] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Nov 29 02:56:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:00 np0005539550 nova_compute[257631]: 2025-11-29 07:56:00.713 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 77 op/s
Nov 29 02:56:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:00.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:01.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 77 op/s
Nov 29 02:56:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:03.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:04 np0005539550 podman[277109]: 2025-11-29 07:56:04.352901475 +0000 UTC m=+0.093155929 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
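[annotation] The health_status=healthy event is podman running the container's configured healthcheck (/openstack/healthcheck, per the config_data above). The same status can be read on demand; parsing the full inspect JSON sidesteps Go-template field names that differ across podman versions:

    import json, subprocess

    out = subprocess.run(["podman", "inspect", "ovn_controller"],
                         check=True, capture_output=True, text=True).stdout
    state = json.loads(out)[0]["State"]
    # Key layout varies across podman versions; check both spellings.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), "failing streak:", health.get("FailingStreak"))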
Nov 29 02:56:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 484 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 69 op/s
Nov 29 02:56:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:04 np0005539550 nova_compute[257631]: 2025-11-29 07:56:04.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:04 np0005539550 nova_compute[257631]: 2025-11-29 07:56:04.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:56:04 np0005539550 nova_compute[257631]: 2025-11-29 07:56:04.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:56:04 np0005539550 nova_compute[257631]: 2025-11-29 07:56:04.937 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
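[annotation] These config-key set calls are cephadm refreshing its cached per-host device inventory. The stored blob can be read back with the config-key CLI; that it parses as JSON is an assumption about how cephadm serializes its cache:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        check=True, capture_output=True, text=True).stdout
    cache = json.loads(out)  # assumption: cephadm stores JSON here
    print(list(cache) if isinstance(cache, dict) else len(cache))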
Nov 29 02:56:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:05.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:05 np0005539550 nova_compute[257631]: 2025-11-29 07:56:05.714 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:05 np0005539550 nova_compute[257631]: 2025-11-29 07:56:05.891 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:56:05 np0005539550 nova_compute[257631]: 2025-11-29 07:56:05.891 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:56:05 np0005539550 nova_compute[257631]: 2025-11-29 07:56:05.891 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:56:05 np0005539550 nova_compute[257631]: 2025-11-29 07:56:05.891 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:56:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 493 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 861 KiB/s wr, 67 op/s
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
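[annotation] The mgr is dropping per-host osd_memory_target overrides (cephadm re-applies them when memory autotuning is enabled). The equivalent operator commands, sketched via subprocess (these mutate cluster config):

    import subprocess

    # What the two dispatched commands above do, as CLI calls.
    subprocess.run(["ceph", "config", "rm", "osd/host:compute-0",
                    "osd_memory_target"], check=True)
    subprocess.run(["ceph", "config", "rm", "osd/host:compute-1",
                    "osd_memory_target"], check=True)
    # Read back the effective value that remains for the osd class:
    print(subprocess.run(["ceph", "config", "get", "osd", "osd_memory_target"],
                         check=True, capture_output=True, text=True).stdout.strip())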
Nov 29 02:56:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:06.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:56:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:07.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d0d86d68-72d2-43ee-8653-1e5903c9886e does not exist
Nov 29 02:56:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 82a3bac6-07a2-4b82-9995-3d3dbddadb2c does not exist
Nov 29 02:56:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1876734b-b7f3-45fe-adfd-5b0c1ed35baa does not exist
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:56:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
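[annotation] MirrorSnapshotScheduleHandler reloads schedules pool by pool; an empty start_after means it scans each pool from the beginning. Whatever schedules exist are visible with the rbd CLI, e.g.:

    import subprocess

    # List configured mirror-snapshot schedules per pool (empty output
    # if, as the quiet reloads above suggest, none are configured).
    for pool in ("vms", "volumes", "backups", "images"):
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive"],
            capture_output=True, text=True)
        print(pool, "->", out.stdout.strip() or "(no schedules)")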
Nov 29 02:56:08 np0005539550 podman[277577]: 2025-11-29 07:56:08.182078443 +0000 UTC m=+0.053030545 container create 45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:56:08 np0005539550 systemd[1]: Started libpod-conmon-45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2.scope.
Nov 29 02:56:08 np0005539550 podman[277577]: 2025-11-29 07:56:08.157422753 +0000 UTC m=+0.028374885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:56:08 np0005539550 podman[277577]: 2025-11-29 07:56:08.282503136 +0000 UTC m=+0.153455258 container init 45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:56:08 np0005539550 podman[277577]: 2025-11-29 07:56:08.291069145 +0000 UTC m=+0.162021257 container start 45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 02:56:08 np0005539550 epic_thompson[277593]: 167 167
Nov 29 02:56:08 np0005539550 podman[277577]: 2025-11-29 07:56:08.295314483 +0000 UTC m=+0.166266585 container attach 45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 02:56:08 np0005539550 systemd[1]: libpod-45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2.scope: Deactivated successfully.
Nov 29 02:56:08 np0005539550 podman[277577]: 2025-11-29 07:56:08.298607888 +0000 UTC m=+0.169559990 container died 45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:56:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-59c016b13c0f52aff119e469b3f0c99abc8064eb5a05bb75bf203b0c67f657f3-merged.mount: Deactivated successfully.
Nov 29 02:56:08 np0005539550 podman[277577]: 2025-11-29 07:56:08.457316329 +0000 UTC m=+0.328268441 container remove 45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:08 np0005539550 systemd[1]: libpod-conmon-45763c626c5cc3974ae2f6ed7936cf8a0172761fcac5d4dc10d447a005c6afd2.scope: Deactivated successfully.
Nov 29 02:56:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 02:56:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:56:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:56:08 np0005539550 podman[277616]: 2025-11-29 07:56:08.724782586 +0000 UTC m=+0.111094337 container create 5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_napier, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 02:56:08 np0005539550 podman[277616]: 2025-11-29 07:56:08.640711 +0000 UTC m=+0.027022771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 509 MiB data, 596 MiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 2.0 MiB/s wr, 43 op/s
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:56:08 np0005539550 systemd[1]: Started libpod-conmon-5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611.scope.
Nov 29 02:56:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:56:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef90e77bab35285b43a0e41170cbf977d902835d41501852d31a2df1841d0182/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef90e77bab35285b43a0e41170cbf977d902835d41501852d31a2df1841d0182/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef90e77bab35285b43a0e41170cbf977d902835d41501852d31a2df1841d0182/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef90e77bab35285b43a0e41170cbf977d902835d41501852d31a2df1841d0182/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef90e77bab35285b43a0e41170cbf977d902835d41501852d31a2df1841d0182/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
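[annotation] The repeated xfs notices flag the y2038 limit of 32-bit inode timestamps: 0x7fffffff seconds after the epoch. A one-liner to see where that lands:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed timestamp the kernel
    # warns about in the xfs lines above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00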
Nov 29 02:56:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:08.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:09 np0005539550 podman[277616]: 2025-11-29 07:56:09.023529753 +0000 UTC m=+0.409841534 container init 5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:56:09 np0005539550 podman[277616]: 2025-11-29 07:56:09.031801284 +0000 UTC m=+0.418113035 container start 5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 02:56:09 np0005539550 podman[277616]: 2025-11-29 07:56:09.109334893 +0000 UTC m=+0.495646644 container attach 5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_napier, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:09.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:09 np0005539550 eloquent_napier[277632]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:56:09 np0005539550 eloquent_napier[277632]: --> relative data size: 1.0
Nov 29 02:56:09 np0005539550 eloquent_napier[277632]: --> All data devices are unavailable
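[annotation] ceph-volume reporting "All data devices are unavailable" means the one LVM data device it was handed is already consumed (it carries osd.0, per the lvm-list JSON further down), so there is nothing new to deploy. A host-side check with ceph-volume's inventory subcommand (run inside a cephadm shell):

    import json, subprocess

    # Per device: whether it is available for a new OSD and why not.
    out = subprocess.run(["ceph-volume", "inventory", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"],
              dev.get("rejected_reasons", []))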
Nov 29 02:56:09 np0005539550 nova_compute[257631]: 2025-11-29 07:56:09.939 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:09 np0005539550 systemd[1]: libpod-5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611.scope: Deactivated successfully.
Nov 29 02:56:09 np0005539550 podman[277616]: 2025-11-29 07:56:09.955833762 +0000 UTC m=+1.342145513 container died 5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_napier, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:56:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ef90e77bab35285b43a0e41170cbf977d902835d41501852d31a2df1841d0182-merged.mount: Deactivated successfully.
Nov 29 02:56:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:10 np0005539550 podman[277616]: 2025-11-29 07:56:10.01961914 +0000 UTC m=+1.405930891 container remove 5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 29 02:56:10 np0005539550 systemd[1]: libpod-conmon-5017cd504ed318ac63542efb99e16ccec1fea677ef821c74496841a6d2350611.scope: Deactivated successfully.
Nov 29 02:56:10 np0005539550 podman[277801]: 2025-11-29 07:56:10.66564029 +0000 UTC m=+0.045752829 container create 9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kirch, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:56:10 np0005539550 systemd[1]: Started libpod-conmon-9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc.scope.
Nov 29 02:56:10 np0005539550 nova_compute[257631]: 2025-11-29 07:56:10.716 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:56:10 np0005539550 podman[277801]: 2025-11-29 07:56:10.643996618 +0000 UTC m=+0.024109177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:10 np0005539550 podman[277801]: 2025-11-29 07:56:10.747938761 +0000 UTC m=+0.128051310 container init 9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:56:10 np0005539550 podman[277801]: 2025-11-29 07:56:10.755003751 +0000 UTC m=+0.135116290 container start 9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:56:10 np0005539550 podman[277801]: 2025-11-29 07:56:10.758939592 +0000 UTC m=+0.139052151 container attach 9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:56:10 np0005539550 systemd[1]: libpod-9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc.scope: Deactivated successfully.
Nov 29 02:56:10 np0005539550 cool_kirch[277817]: 167 167
Nov 29 02:56:10 np0005539550 conmon[277817]: conmon 9bc92a22eda3b10b4ca7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc.scope/container/memory.events
Nov 29 02:56:10 np0005539550 podman[277801]: 2025-11-29 07:56:10.763497348 +0000 UTC m=+0.143609877 container died 9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kirch, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:56:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 511 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Nov 29 02:56:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-43c24818ab6d2aa67dbdd129e49a435328931a8ffe4f8390881d6dd1ca723ebe-merged.mount: Deactivated successfully.
Nov 29 02:56:10 np0005539550 podman[277801]: 2025-11-29 07:56:10.815017683 +0000 UTC m=+0.195130222 container remove 9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:56:10 np0005539550 systemd[1]: libpod-conmon-9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc.scope: Deactivated successfully.
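[annotation] The conmon <nwarn> line above is benign here: by the time conmon went to read memory.events, the short-lived container's cgroup scope had already been torn down. For a live scope the file is plain key/value text; a reader sketch using the scope path copied from the log (it will no longer exist for this exited container):

    from pathlib import Path

    p = Path("/sys/fs/cgroup/machine.slice/"
             "libpod-9bc92a22eda3b10b4ca75a116f26992b4711fef95396f5574706d9c007fa57fc.scope/"
             "container/memory.events")
    if p.exists():
        # memory.events is "key value" per line, e.g. "oom_kill 0".
        print(dict(line.split() for line in p.read_text().splitlines()))
    else:
        print("cgroup scope already removed")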
Nov 29 02:56:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:10.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:11 np0005539550 podman[277841]: 2025-11-29 07:56:11.023357942 +0000 UTC m=+0.044390565 container create 7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 02:56:11 np0005539550 systemd[1]: Started libpod-conmon-7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497.scope.
Nov 29 02:56:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:56:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fffebf2ce32d8d9154977029463944868379bd57a1cce6a18d0b029f7d0f6a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fffebf2ce32d8d9154977029463944868379bd57a1cce6a18d0b029f7d0f6a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fffebf2ce32d8d9154977029463944868379bd57a1cce6a18d0b029f7d0f6a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fffebf2ce32d8d9154977029463944868379bd57a1cce6a18d0b029f7d0f6a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:11 np0005539550 podman[277841]: 2025-11-29 07:56:11.001682928 +0000 UTC m=+0.022715581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:11 np0005539550 podman[277841]: 2025-11-29 07:56:11.098661384 +0000 UTC m=+0.119694027 container init 7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:56:11 np0005539550 podman[277841]: 2025-11-29 07:56:11.108165006 +0000 UTC m=+0.129197629 container start 7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 02:56:11 np0005539550 podman[277841]: 2025-11-29 07:56:11.111888961 +0000 UTC m=+0.132921584 container attach 7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 02:56:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.723 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Updating instance_info_cache with network_info: [{"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.753 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-854cbdb6-6ca7-4fa5-8105-3d48b2926d96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.753 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.755 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.755 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.755 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.755 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.756 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.756 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.795 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.796 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.796 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.797 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:56:11 np0005539550 nova_compute[257631]: 2025-11-29 07:56:11.797 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
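[annotation] The resource audit shells out to the exact ceph df command in the line above to size the RBD-backed disk pool. A sketch of pulling the same totals from its JSON (key names per ceph df's JSON schema):

    import json, subprocess

    # Same command nova just ran, copied from the log line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)["stats"]
    print("total GiB:", round(stats["total_bytes"] / 2**30, 1),
          "avail GiB:", round(stats["total_avail_bytes"] / 2**30, 1))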
Nov 29 02:56:11 np0005539550 determined_merkle[277857]: {
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:    "0": [
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:        {
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "devices": [
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "/dev/loop3"
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            ],
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "lv_name": "ceph_lv0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "lv_size": "7511998464",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "name": "ceph_lv0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "tags": {
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.cluster_name": "ceph",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.crush_device_class": "",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.encrypted": "0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.osd_id": "0",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.type": "block",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:                "ceph.vdo": "0"
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            },
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "type": "block",
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:            "vg_name": "ceph_vg0"
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:        }
Nov 29 02:56:11 np0005539550 determined_merkle[277857]:    ]
Nov 29 02:56:11 np0005539550 determined_merkle[277857]: }
Nov 29 02:56:12 np0005539550 systemd[1]: libpod-7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497.scope: Deactivated successfully.
Nov 29 02:56:12 np0005539550 conmon[277857]: conmon 7f14cd7a7ce487141ef4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497.scope/container/memory.events
Nov 29 02:56:12 np0005539550 podman[277841]: 2025-11-29 07:56:12.018096154 +0000 UTC m=+1.039129007 container died 7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:56:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5fffebf2ce32d8d9154977029463944868379bd57a1cce6a18d0b029f7d0f6a7-merged.mount: Deactivated successfully.
Nov 29 02:56:12 np0005539550 podman[277841]: 2025-11-29 07:56:12.080064206 +0000 UTC m=+1.101096829 container remove 7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_merkle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:56:12 np0005539550 systemd[1]: libpod-conmon-7f14cd7a7ce487141ef43c4f1e898ddb45982a7cdbf32e4485d04d117e2e1497.scope: Deactivated successfully.
Nov 29 02:56:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2794226654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.324 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.418 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.419 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.422 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.423 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.429 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.430 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.430 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.430 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.431 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.433 257641 INFO nova.compute.manager [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Terminating instance#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.434 257641 DEBUG nova.compute.manager [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:12 np0005539550 kernel: tap384b014a-c4 (unregistering): left promiscuous mode
Nov 29 02:56:12 np0005539550 NetworkManager[49039]: <info>  [1764402972.4936] device (tap384b014a-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00142|binding|INFO|Releasing lport 384b014a-c4e8-4d83-a8d1-09e70342722f from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00143|binding|INFO|Setting lport 384b014a-c4e8-4d83-a8d1-09e70342722f down in Southbound
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00144|binding|INFO|Releasing lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00145|binding|INFO|Setting lport f3deccbd-dd81-439e-9ba4-ebc80268aa7a down in Southbound
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00146|binding|INFO|Removing iface tap384b014a-c4 ovn-installed in OVS
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.506 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.511 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.526 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a8:74:d4 10.100.0.6'], port_security=['fa:16:3e:a8:74:d4 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1819185199', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9bef976c-2981-4d19-aa60-8a550b7093ca', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1819185199', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '12', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19139b07-e3dc-4118-93d3-d7c140077f4d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=384b014a-c4e8-4d83-a8d1-09e70342722f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.528 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:b4:cf 19.80.0.168'], port_security=['fa:16:3e:d7:b4:cf 19.80.0.168'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['384b014a-c4e8-4d83-a8d1-09e70342722f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-367077291', 'neutron:cidrs': '19.80.0.168/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-367077291', 'neutron:project_id': 'f91d373d1ef64146866ef08735a75efa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '394eda18-2fbd-4f97-9713-003068aad79a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=86c926f4-a652-4447-9b71-a6da44a90627, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=f3deccbd-dd81-439e-9ba4-ebc80268aa7a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.529 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 384b014a-c4e8-4d83-a8d1-09e70342722f in datapath ad69a0f4-0000-474b-9649-72cf1bf9f5c1 unbound from our chassis#033[00m
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00147|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00148|binding|INFO|Releasing lport acbe1c54-69e5-4789-8e0b-6d1b69eab5e0 from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00149|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.531 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad69a0f4-0000-474b-9649-72cf1bf9f5c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.532 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b976198d-2294-4880-a361-5fa7eaf67951]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.533 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 namespace which is not needed anymore#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.542 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 29 02:56:12 np0005539550 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000010.scope: Consumed 8.857s CPU time.
Nov 29 02:56:12 np0005539550 systemd-machined[216673]: Machine qemu-8-instance-00000010 terminated.
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00150|binding|INFO|Releasing lport 7ffec560-b868-40db-af88-b0deaaa81f65 from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00151|binding|INFO|Releasing lport acbe1c54-69e5-4789-8e0b-6d1b69eab5e0 from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:12Z|00152|binding|INFO|Releasing lport a9eac8df-57ef-4a9b-91fd-9eb356860a2d from this chassis (sb_readonly=0)
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.626 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.666 257641 INFO nova.virt.libvirt.driver [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Instance destroyed successfully.#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.667 257641 DEBUG nova.objects.instance [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lazy-loading 'resources' on Instance uuid 9bef976c-2981-4d19-aa60-8a550b7093ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:12 np0005539550 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[273657]: [NOTICE]   (273664) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:12 np0005539550 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[273657]: [NOTICE]   (273664) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:12 np0005539550 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[273657]: [WARNING]  (273664) : Exiting Master process...
Nov 29 02:56:12 np0005539550 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[273657]: [ALERT]    (273664) : Current worker (273666) exited with code 143 (Terminated)
Nov 29 02:56:12 np0005539550 neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1[273657]: [WARNING]  (273664) : All workers exited. Exiting... (0)
Nov 29 02:56:12 np0005539550 systemd[1]: libpod-0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b.scope: Deactivated successfully.
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.686 257641 DEBUG nova.virt.libvirt.vif [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T07:53:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1241596333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1241596333',id=16,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:53:39Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f91d373d1ef64146866ef08735a75efa',ramdisk_id='',reservation_id='r-ggznma7o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1482931553',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1482931553-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:53:59Z,user_data=None,user_id='b8f5b14bc98a47f29238140d1d3f1220',uuid=9bef976c-2981-4d19-aa60-8a550b7093ca,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.686 257641 DEBUG nova.network.os_vif_util [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converting VIF {"id": "384b014a-c4e8-4d83-a8d1-09e70342722f", "address": "fa:16:3e:a8:74:d4", "network": {"id": "ad69a0f4-0000-474b-9649-72cf1bf9f5c1", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-354897276-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f91d373d1ef64146866ef08735a75efa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap384b014a-c4", "ovs_interfaceid": "384b014a-c4e8-4d83-a8d1-09e70342722f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.688 257641 DEBUG nova.network.os_vif_util [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.688 257641 DEBUG os_vif [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.690 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.691 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384b014a-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 podman[278045]: 2025-11-29 07:56:12.69446756 +0000 UTC m=+0.048715414 container died 0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.700 257641 INFO os_vif [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a8:74:d4,bridge_name='br-int',has_traffic_filtering=True,id=384b014a-c4e8-4d83-a8d1-09e70342722f,network=Network(ad69a0f4-0000-474b-9649-72cf1bf9f5c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap384b014a-c4')#033[00m
Nov 29 02:56:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-63d34bacd8bcea6462ae34e4029eb4a655aed6678d0c62d69b2330cba30c1668-merged.mount: Deactivated successfully.
Nov 29 02:56:12 np0005539550 podman[278045]: 2025-11-29 07:56:12.750188023 +0000 UTC m=+0.104435877 container cleanup 0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:56:12 np0005539550 systemd[1]: libpod-conmon-0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b.scope: Deactivated successfully.
Nov 29 02:56:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 500 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 29 02:56:12 np0005539550 podman[278117]: 2025-11-29 07:56:12.850421651 +0000 UTC m=+0.068083149 container remove 0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.857 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2b320764-d884-4c5d-a92b-b8109df35a8f]: (4, ('Sat Nov 29 07:56:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b)\n0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b\nSat Nov 29 07:56:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 (0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b)\n0ea8f382cf6c425843f76c439c27aa3762949390f0526b59366b572e5028168b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.859 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9b42b377-b735-4ff2-a57b-afd0303109ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.861 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad69a0f4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:12 np0005539550 kernel: tapad69a0f4-00: left promiscuous mode
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:56:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:56:12 np0005539550 podman[278120]: 2025-11-29 07:56:12.871448988 +0000 UTC m=+0.075744354 container create c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.881 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f743e6b5-53b9-4fba-9c1a-ed216cdd857d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.883 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.884 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4227MB free_disk=20.76153564453125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.884 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.885 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.897 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4ea420b7-421f-494e-81c6-45c37388f309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.899 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e58248b0-e644-49bd-afe9-616721e6f3b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 podman[278120]: 2025-11-29 07:56:12.823668639 +0000 UTC m=+0.027964035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.917 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[07fa0e80-fa96-45e1-bda1-b6bc8ea17fb1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588179, 'reachable_time': 32175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278148, 'error': None, 'target': 'ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 systemd[1]: Started libpod-conmon-c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b.scope.
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.921 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad69a0f4-0000-474b-9649-72cf1bf9f5c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.922 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[73ca2697-a3f7-4180-9031-da3a9eea6a17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.922 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f3deccbd-dd81-439e-9ba4-ebc80268aa7a in datapath 54dd896a-93d6-4056-93b9-fe4c87eb0b97 unbound from our chassis#033[00m
Nov 29 02:56:12 np0005539550 systemd[1]: run-netns-ovnmeta\x2dad69a0f4\x2d0000\x2d474b\x2d9649\x2d72cf1bf9f5c1.mount: Deactivated successfully.
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.924 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 54dd896a-93d6-4056-93b9-fe4c87eb0b97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.925 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[72f7c7ea-e7c5-4e06-8fd8-238dcb03baa3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:12.926 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 namespace which is not needed anymore#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.937 257641 DEBUG nova.compute.manager [req-c2957776-6f83-40e4-b7e2-d7ba5bb33fb2 req-fd415cba-e981-4063-af59-e95d5ee6282b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.938 257641 DEBUG oslo_concurrency.lockutils [req-c2957776-6f83-40e4-b7e2-d7ba5bb33fb2 req-fd415cba-e981-4063-af59-e95d5ee6282b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.938 257641 DEBUG oslo_concurrency.lockutils [req-c2957776-6f83-40e4-b7e2-d7ba5bb33fb2 req-fd415cba-e981-4063-af59-e95d5ee6282b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.938 257641 DEBUG oslo_concurrency.lockutils [req-c2957776-6f83-40e4-b7e2-d7ba5bb33fb2 req-fd415cba-e981-4063-af59-e95d5ee6282b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.939 257641 DEBUG nova.compute.manager [req-c2957776-6f83-40e4-b7e2-d7ba5bb33fb2 req-fd415cba-e981-4063-af59-e95d5ee6282b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:12 np0005539550 nova_compute[257631]: 2025-11-29 07:56:12.939 257641 DEBUG nova.compute.manager [req-c2957776-6f83-40e4-b7e2-d7ba5bb33fb2 req-fd415cba-e981-4063-af59-e95d5ee6282b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-unplugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 02:56:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:56:12 np0005539550 podman[278120]: 2025-11-29 07:56:12.972731664 +0000 UTC m=+0.177027050 container init c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 02:56:12 np0005539550 podman[278120]: 2025-11-29 07:56:12.982052012 +0000 UTC m=+0.186347378 container start c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:56:12 np0005539550 podman[278120]: 2025-11-29 07:56:12.987268285 +0000 UTC m=+0.191563651 container attach c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:56:12 np0005539550 musing_maxwell[278149]: 167 167
Nov 29 02:56:12 np0005539550 systemd[1]: libpod-c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b.scope: Deactivated successfully.
Nov 29 02:56:12 np0005539550 podman[278120]: 2025-11-29 07:56:12.995176797 +0000 UTC m=+0.199472193 container died c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.025 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 9bef976c-2981-4d19-aa60-8a550b7093ca actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.026 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.027 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.027 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 02:56:13 np0005539550 podman[278120]: 2025-11-29 07:56:13.04544152 +0000 UTC m=+0.249736886 container remove c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:56:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cdce46b83ffac86b4fdd4f6d2fc354aab1ed0bb86e5024f0a31f008caa59f77e-merged.mount: Deactivated successfully.
Nov 29 02:56:13 np0005539550 systemd[1]: libpod-conmon-c96d22db60c1c6b86b6f627f079e19b46a6431ee2d379edbb9e311192a7e472b.scope: Deactivated successfully.
Nov 29 02:56:13 np0005539550 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[273750]: [NOTICE]   (273757) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:13 np0005539550 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[273750]: [NOTICE]   (273757) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:13 np0005539550 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[273750]: [WARNING]  (273757) : Exiting Master process...
Nov 29 02:56:13 np0005539550 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[273750]: [ALERT]    (273757) : Current worker (273760) exited with code 143 (Terminated)
Nov 29 02:56:13 np0005539550 neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97[273750]: [WARNING]  (273757) : All workers exited. Exiting... (0)
Nov 29 02:56:13 np0005539550 systemd[1]: libpod-311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66.scope: Deactivated successfully.
Nov 29 02:56:13 np0005539550 podman[278179]: 2025-11-29 07:56:13.099185962 +0000 UTC m=+0.054243556 container died 311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.101 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-09bab5037a0090848ec63c05bab99fb5a730cc9c26542ea0b05dfa2cfe1db75b-merged.mount: Deactivated successfully.
Nov 29 02:56:13 np0005539550 podman[278179]: 2025-11-29 07:56:13.142343713 +0000 UTC m=+0.097401287 container cleanup 311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 02:56:13 np0005539550 systemd[1]: libpod-conmon-311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66.scope: Deactivated successfully.
Nov 29 02:56:13 np0005539550 podman[278211]: 2025-11-29 07:56:13.220763335 +0000 UTC m=+0.057600181 container remove 311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.227 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[42b0ff70-9ece-4194-98c6-04e1aec4db29]: (4, ('Sat Nov 29 07:56:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 (311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66)\n311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66\nSat Nov 29 07:56:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 (311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66)\n311a6eb722a434df638aa8e3af0af47253b296e8bc3ddb843a5bd36d1a94de66\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.229 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49254a1e-e432-49c6-9b2c-fb1a55d85336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.230 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54dd896a-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:13 np0005539550 kernel: tap54dd896a-90: left promiscuous mode
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.234 257641 INFO nova.virt.libvirt.driver [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Deleting instance files /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca_del#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.236 257641 INFO nova.virt.libvirt.driver [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Deletion of /var/lib/nova/instances/9bef976c-2981-4d19-aa60-8a550b7093ca_del complete#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.239 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.246 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.250 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d203b37a-2032-4be8-bf3b-3f4a72ca228a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.262 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8b96c7d1-2b5e-41cf-b906-c1e8bb3b2622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.263 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a394cc30-d671-4817-adda-b1ef344eda22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.278 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[05df8348-1be7-4dec-9a0a-3dde2b17756c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 588273, 'reachable_time': 31795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278261, 'error': None, 'target': 'ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
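
The privsep reply above is a netlink RTM_NEWLINK dump of the loopback device inside the ovnmeta-54dd896a namespace, taken just before the namespace is deleted. A minimal sketch of how such a dump is produced, assuming pyroute2 (the library underneath neutron's ip_lib) and a namespace that still exists:

    from pyroute2 import NetNS

    # Dump links inside the namespace named in the reply's 'target' field;
    # each entry is one RTM_NEWLINK record shaped like the one above.
    with NetNS('ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97') as ns:
        for link in ns.get_links():
            attrs = dict(link['attrs'])   # [name, value] NLA pairs -> dict
            print(attrs['IFLA_IFNAME'], link['state'], attrs['IFLA_MTU'])
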
Nov 29 02:56:13 np0005539550 podman[278229]: 2025-11-29 07:56:13.28009904 +0000 UTC m=+0.067650688 container create 2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.283 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-54dd896a-93d6-4056-93b9-fe4c87eb0b97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:13.283 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e43304-e0ef-4d48-aa0c-f4cea49fbcde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:13 np0005539550 systemd[1]: run-netns-ovnmeta\x2d54dd896a\x2d93d6\x2d4056\x2d93b9\x2dfe4c87eb0b97.mount: Deactivated successfully.
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.304 257641 INFO nova.compute.manager [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.304 257641 DEBUG oslo.service.loopingcall [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.304 257641 DEBUG nova.compute.manager [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.305 257641 DEBUG nova.network.neutron [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:13 np0005539550 systemd[1]: Started libpod-conmon-2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038.scope.
Nov 29 02:56:13 np0005539550 podman[278229]: 2025-11-29 07:56:13.242424858 +0000 UTC m=+0.029976526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:56:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:56:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a50da0702f0f7888398a17778538900ce390edb91d0e8833a508da4c64ec11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a50da0702f0f7888398a17778538900ce390edb91d0e8833a508da4c64ec11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a50da0702f0f7888398a17778538900ce390edb91d0e8833a508da4c64ec11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a50da0702f0f7888398a17778538900ce390edb91d0e8833a508da4c64ec11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:56:13 np0005539550 podman[278229]: 2025-11-29 07:56:13.373565256 +0000 UTC m=+0.161116924 container init 2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:13 np0005539550 podman[278229]: 2025-11-29 07:56:13.383190592 +0000 UTC m=+0.170742240 container start 2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:56:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
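
The beast lines from radosgw follow a fixed access-log layout: client, user, timestamp, request, HTTP status, bytes sent, and latency; the recurring anonymous HEAD / probes here look like load-balancer health checks. A hypothetical parser for these exact lines (field layout inferred from this log, not from radosgw documentation):

    import re

    # Groups follow the beast access line as printed above.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:56:13.385 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
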
Nov 29 02:56:13 np0005539550 podman[278229]: 2025-11-29 07:56:13.389204365 +0000 UTC m=+0.176756003 container attach 2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:56:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2899192210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.597 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
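
The Running cmd / CMD returned pair above shows nova-compute sizing its Ceph-backed storage by shelling out to `ceph df` through oslo.concurrency. A minimal sketch of that call, with the argument list copied from the log rather than from nova's source:

    import json
    from oslo_concurrency import processutils

    # Runs the same command nova logged; execute() returns (stdout, stderr)
    # and raises ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])
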
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.604 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.629 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
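
Placement turns the inventory in that line into schedulable capacity as (total - reserved) * allocation_ratio per resource class. Working through the logged values (a back-of-envelope check, not nova code):

    # Inventory copied from the scheduler report-client log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1

So this host overcommits CPU 4x (32 schedulable vCPUs on 8) but undercommits disk: 17.1 GB of the 20 GB pool is schedulable.
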
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.650 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:56:13 np0005539550 nova_compute[257631]: 2025-11-29 07:56:13.651 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]: {
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]:        "osd_id": 0,
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]:        "type": "bluestore"
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]:    }
Nov 29 02:56:14 np0005539550 stupefied_montalcini[278268]: }
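
The JSON printed by the stupefied_montalcini container maps each OSD's uuid to its backing device; the shape matches `ceph-volume raw list` output (an inference from the fields, the log does not name the command). Parsing it is straightforward:

    import json

    # Output reassembled from the container log lines above.
    raw = '''{
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "type": "bluestore"
        }
    }'''
    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} is {osd['type']} on {osd['device']}")
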
Nov 29 02:56:14 np0005539550 systemd[1]: libpod-2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038.scope: Deactivated successfully.
Nov 29 02:56:14 np0005539550 podman[278229]: 2025-11-29 07:56:14.318134777 +0000 UTC m=+1.105686425 container died 2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:56:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c7a50da0702f0f7888398a17778538900ce390edb91d0e8833a508da4c64ec11-merged.mount: Deactivated successfully.
Nov 29 02:56:14 np0005539550 podman[278229]: 2025-11-29 07:56:14.543236373 +0000 UTC m=+1.330788021 container remove 2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:56:14 np0005539550 systemd[1]: libpod-conmon-2b52e1e33cecb7961296102b47eb6c5441f5f91bf3e01229aba7308f55d9d038.scope: Deactivated successfully.
Nov 29 02:56:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:56:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 477 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 29 02:56:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:14.824 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:14 np0005539550 nova_compute[257631]: 2025-11-29 07:56:14.825 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:14.827 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:56:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:14.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:15 np0005539550 nova_compute[257631]: 2025-11-29 07:56:15.066 257641 DEBUG nova.compute.manager [req-752b8abe-ff2e-4b3c-9edb-a3ed6c213977 req-b25d9f46-97d1-4631-825d-cbc2097fe6e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:15 np0005539550 nova_compute[257631]: 2025-11-29 07:56:15.066 257641 DEBUG oslo_concurrency.lockutils [req-752b8abe-ff2e-4b3c-9edb-a3ed6c213977 req-b25d9f46-97d1-4631-825d-cbc2097fe6e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:15 np0005539550 nova_compute[257631]: 2025-11-29 07:56:15.067 257641 DEBUG oslo_concurrency.lockutils [req-752b8abe-ff2e-4b3c-9edb-a3ed6c213977 req-b25d9f46-97d1-4631-825d-cbc2097fe6e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:15 np0005539550 nova_compute[257631]: 2025-11-29 07:56:15.067 257641 DEBUG oslo_concurrency.lockutils [req-752b8abe-ff2e-4b3c-9edb-a3ed6c213977 req-b25d9f46-97d1-4631-825d-cbc2097fe6e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:15 np0005539550 nova_compute[257631]: 2025-11-29 07:56:15.067 257641 DEBUG nova.compute.manager [req-752b8abe-ff2e-4b3c-9edb-a3ed6c213977 req-b25d9f46-97d1-4631-825d-cbc2097fe6e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] No waiting events found dispatching network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:15 np0005539550 nova_compute[257631]: 2025-11-29 07:56:15.067 257641 WARNING nova.compute.manager [req-752b8abe-ff2e-4b3c-9edb-a3ed6c213977 req-b25d9f46-97d1-4631-825d-cbc2097fe6e2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Received unexpected event network-vif-plugged-384b014a-c4e8-4d83-a8d1-09e70342722f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:56:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:56:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:15 np0005539550 nova_compute[257631]: 2025-11-29 07:56:15.718 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 543c433d-e34c-4f9a-a81d-8751a193f2c3 does not exist
Nov 29 02:56:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev cd1383d7-8942-46d9-bdcd-bd219cfdba19 does not exist
Nov 29 02:56:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 34bb6d03-67c3-4187-9cb3-1d3c0a3a8153 does not exist
Nov 29 02:56:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:56:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 375 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Nov 29 02:56:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:16.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:17.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:17 np0005539550 nova_compute[257631]: 2025-11-29 07:56:17.644 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539550 nova_compute[257631]: 2025-11-29 07:56:17.645 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:56:17 np0005539550 nova_compute[257631]: 2025-11-29 07:56:17.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 312 MiB data, 493 MiB used, 21 GiB / 21 GiB avail; 318 KiB/s rd, 2.5 MiB/s wr, 135 op/s
Nov 29 02:56:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:18.829 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
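
The DbSetCommand above is the metadata agent acknowledging the bumped nb_cfg (11, from the SB_Global update at 07:56:14, after the 4-second delay it logged) by stamping its own Chassis_Private row. A rough equivalent through ovsdbapp's generic API; a sketch assuming `sb_api` is an already-connected southbound ovsdbapp connection:

    def ack_sb_cfg(sb_api, chassis_private_uuid, nb_cfg):
        # Mirrors the logged command, e.g.
        # ack_sb_cfg(sb_api, 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8', 11)
        sb_api.db_set(
            'Chassis_Private', chassis_private_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
        ).execute(check_error=True)
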
Nov 29 02:56:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:18.928 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:18.929 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:18.929 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.113 257641 DEBUG nova.network.neutron [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.152 257641 INFO nova.compute.manager [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Took 5.85 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.192 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.192 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.280 257641 DEBUG oslo_concurrency.processutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:56:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:19.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007182541759724549 of space, bias 1.0, pg target 2.154762527917365 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:56:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
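
Each pg_autoscaler pair above reports a pool's share of the 22535995392-byte root capacity and a raw PG target that is then quantized. The logged numbers are consistent with target = usage_ratio * bias * a per-root PG budget of roughly 300 (plausibly mon_target_pg_per_osd times the OSD count; an assumption, not something the log states), and anything below the pool's 32-PG floor stays at 32. Checking the 'vms' lines:

    # Values copied from the 'vms' autoscaler lines above.
    capacity_bytes = 22535995392
    usage_ratio = 0.007182541759724549
    bias = 1.0

    print(usage_ratio * capacity_bytes / 2**20)   # ~154 MiB stored in 'vms'
    print(usage_ratio * bias * 300)               # ~2.15, the logged pg target
    # 2.15 is far below the 32-PG minimum, so 'vms' stays quantized at 32.
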
Nov 29 02:56:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737857199' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.782 257641 DEBUG oslo_concurrency.processutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.793 257641 DEBUG nova.compute.provider_tree [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.818 257641 DEBUG nova.scheduler.client.report [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.855 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:19 np0005539550 nova_compute[257631]: 2025-11-29 07:56:19.929 257641 INFO nova.scheduler.client.report [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Deleted allocations for instance 9bef976c-2981-4d19-aa60-8a550b7093ca#033[00m
Nov 29 02:56:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:20 np0005539550 nova_compute[257631]: 2025-11-29 07:56:20.021 257641 DEBUG oslo_concurrency.lockutils [None req-44fc484d-3142-4de1-921a-5cec077e2e77 b8f5b14bc98a47f29238140d1d3f1220 f91d373d1ef64146866ef08735a75efa - - default default] Lock "9bef976c-2981-4d19-aa60-8a550b7093ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:20 np0005539550 nova_compute[257631]: 2025-11-29 07:56:20.719 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 325 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 275 KiB/s rd, 1.9 MiB/s wr, 135 op/s
Nov 29 02:56:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:20.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:21.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:22 np0005539550 podman[278383]: 2025-11-29 07:56:22.31647307 +0000 UTC m=+0.054505992 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 02:56:22 np0005539550 podman[278382]: 2025-11-29 07:56:22.323311185 +0000 UTC m=+0.063576554 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
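
The two health_status=healthy events come from podman's periodic healthcheck timer running the '/openstack/healthcheck' test mounted into each container (see the healthcheck key in the config_data above). The same probe can be forced by hand; a sketch assuming a podman new enough to have `podman healthcheck run` and the container names from this log:

    import subprocess

    # Exit code 0 means healthy; podman records each run in the container's
    # health log, which the health_status events above echo.
    for name in ('ovn_metadata_agent', 'multipathd'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')
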
Nov 29 02:56:22 np0005539550 nova_compute[257631]: 2025-11-29 07:56:22.737 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 325 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.9 MiB/s wr, 162 op/s
Nov 29 02:56:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:22.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:23.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 325 MiB data, 499 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Nov 29 02:56:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:56:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:24.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:56:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:25.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:25 np0005539550 nova_compute[257631]: 2025-11-29 07:56:25.721 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.243 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.244 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.244 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.244 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.244 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.246 257641 INFO nova.compute.manager [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Terminating instance#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.247 257641 DEBUG nova.compute.manager [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 02:56:26 np0005539550 kernel: tapc6266046-dc (unregistering): left promiscuous mode
Nov 29 02:56:26 np0005539550 NetworkManager[49039]: <info>  [1764402986.3707] device (tapc6266046-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 02:56:26 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:26Z|00153|binding|INFO|Releasing lport c6266046-dc59-475f-a0f5-391ba640d669 from this chassis (sb_readonly=0)
Nov 29 02:56:26 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:26Z|00154|binding|INFO|Setting lport c6266046-dc59-475f-a0f5-391ba640d669 down in Southbound
Nov 29 02:56:26 np0005539550 ovn_controller[148680]: 2025-11-29T07:56:26Z|00155|binding|INFO|Removing iface tapc6266046-dc ovn-installed in OVS
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.379 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.384 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:ad:ee 10.100.0.4'], port_security=['fa:16:3e:0a:ad:ee 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '854cbdb6-6ca7-4fa5-8105-3d48b2926d96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f48e629446148199d44b34243b98b8a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e89afbea-5410-4fb0-af48-42605427a18f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58aa5314-b5ce-40ee-9eff-0f30cffaf25d, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c6266046-dc59-475f-a0f5-391ba640d669) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.385 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c6266046-dc59-475f-a0f5-391ba640d669 in datapath 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b unbound from our chassis#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.387 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.388 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4ec086-c035-4064-a15c-1cef2ac11a03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.388 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b namespace which is not needed anymore#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.407 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 29 02:56:26 np0005539550 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000017.scope: Consumed 17.185s CPU time.
Nov 29 02:56:26 np0005539550 systemd-machined[216673]: Machine qemu-12-instance-00000017 terminated.
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.485 257641 INFO nova.virt.libvirt.driver [-] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Instance destroyed successfully.#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.486 257641 DEBUG nova.objects.instance [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lazy-loading 'resources' on Instance uuid 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.500 257641 DEBUG nova.virt.libvirt.vif [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T07:54:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1833838032',display_name='tempest-ServersAdminTestJSON-server-1833838032',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1833838032',id=23,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T07:55:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f48e629446148199d44b34243b98b8a',ramdisk_id='',reservation_id='r-fmtionkv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-93807439',owner_user_name='tempest-ServersAdminTestJSON-93807439-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T07:55:10Z,user_data=None,user_id='446b0f05699845e8bd9f7d59c787f671',uuid=854cbdb6-6ca7-4fa5-8105-3d48b2926d96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.501 257641 DEBUG nova.network.os_vif_util [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converting VIF {"id": "c6266046-dc59-475f-a0f5-391ba640d669", "address": "fa:16:3e:0a:ad:ee", "network": {"id": "62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1954396846-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f48e629446148199d44b34243b98b8a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6266046-dc", "ovs_interfaceid": "c6266046-dc59-475f-a0f5-391ba640d669", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.502 257641 DEBUG nova.network.os_vif_util [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:ad:ee,bridge_name='br-int',has_traffic_filtering=True,id=c6266046-dc59-475f-a0f5-391ba640d669,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6266046-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.503 257641 DEBUG os_vif [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:ad:ee,bridge_name='br-int',has_traffic_filtering=True,id=c6266046-dc59-475f-a0f5-391ba640d669,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6266046-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.505 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6266046-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.507 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.510 257641 INFO os_vif [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:ad:ee,bridge_name='br-int',has_traffic_filtering=True,id=c6266046-dc59-475f-a0f5-391ba640d669,network=Network(62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6266046-dc')#033[00m
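
The unplug that just completed is the public os-vif entry point logged at os_vif/__init__.py:109. A minimal sketch of driving it directly, with field values copied from the VIFOpenVSwitch repr above; whether this reduced field set satisfies the OVS plugin is an assumption, since nova passes a fully populated object:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the plug/unplug plugins (ovs, linux_bridge, ...)

    my_vif = vif.VIFOpenVSwitch(
        id='c6266046-dc59-475f-a0f5-391ba640d669',
        address='fa:16:3e:0a:ad:ee',
        vif_name='tapc6266046-dc',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b'))
    inst = instance_info.InstanceInfo(
        uuid='854cbdb6-6ca7-4fa5-8105-3d48b2926d96',
        name='instance-00000017')

    os_vif.unplug(my_vif, inst)  # removes tapc6266046-dc from br-int
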
Nov 29 02:56:26 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [NOTICE]   (276409) : haproxy version is 2.8.14-c23fe91
Nov 29 02:56:26 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [NOTICE]   (276409) : path to executable is /usr/sbin/haproxy
Nov 29 02:56:26 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [WARNING]  (276409) : Exiting Master process...
Nov 29 02:56:26 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [WARNING]  (276409) : Exiting Master process...
Nov 29 02:56:26 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [ALERT]    (276409) : Current worker (276411) exited with code 143 (Terminated)
Nov 29 02:56:26 np0005539550 neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b[276405]: [WARNING]  (276409) : All workers exited. Exiting... (0)
Nov 29 02:56:26 np0005539550 systemd[1]: libpod-6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb.scope: Deactivated successfully.
Nov 29 02:56:26 np0005539550 podman[278447]: 2025-11-29 07:56:26.525114114 +0000 UTC m=+0.048152200 container died 6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:56:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb-userdata-shm.mount: Deactivated successfully.
Nov 29 02:56:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a6face1aabe5620039d3ed97d608db2a39a78cfbe160ec772f92d8e97f7892f6-merged.mount: Deactivated successfully.
Nov 29 02:56:26 np0005539550 podman[278447]: 2025-11-29 07:56:26.570753929 +0000 UTC m=+0.093792025 container cleanup 6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:56:26 np0005539550 systemd[1]: libpod-conmon-6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb.scope: Deactivated successfully.
Nov 29 02:56:26 np0005539550 podman[278501]: 2025-11-29 07:56:26.640057468 +0000 UTC m=+0.047841512 container remove 6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.645 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c83e6b-00f5-41b9-b8d5-1beadbacdf47]: (4, ('Sat Nov 29 07:56:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b (6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb)\n6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb\nSat Nov 29 07:56:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b (6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb)\n6a9095e070819ba58bd67039d0cd075e9bda2560c71baa66a0bbe91865cc3ccb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
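
The privsep reply above is the transcript of the agent's container helper stopping and then deleting the per-network haproxy sidecar. A rough subprocess equivalent of those two steps (the real helper also echoes timestamps around each command):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM; worker exits 143
    subprocess.run(['podman', 'rm', name], check=True)    # remove the stopped container
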
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.647 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c0116da8-e45c-48bd-828f-befec234fab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.648 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62e5f2a3-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.650 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 kernel: tap62e5f2a3-c0: left promiscuous mode
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.667 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.670 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a7bc2a7c-93fb-4d94-b674-69aee1f82a44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.692 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[753bed9e-62a7-4f5b-8a4c-a2acef1d9bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.693 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa2204a-9f68-4a83-888e-a88fd2c613f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.707 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[38ee0dcb-df12-4cf2-b24e-74983bb2a24e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 595611, 'reachable_time': 37457, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278516, 'error': None, 'target': 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.710 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 02:56:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:26.710 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e77a31-217d-4bea-83ed-8089446dd9a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 02:56:26 np0005539550 systemd[1]: run-netns-ovnmeta\x2d62e5f2a3\x2dcc8a\x2d4952\x2dbbb2\x2de2fde1379e9b.mount: Deactivated successfully.
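
Namespace removal here goes through neutron's privileged ip_lib (remove_netns at ip_lib.py:607), which wraps pyroute2. A minimal sketch of the same teardown, assuming pyroute2 is available; removing the namespace also unmounts /run/netns/<ns>, which is what produces the systemd mount line above:

    from pyroute2 import netns

    ns = 'ovnmeta-62e5f2a3-cc8a-4952-bbb2-e2fde1379e9b'
    if ns in netns.listnetns():   # idempotent: skip if already gone
        netns.remove(ns)
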
Nov 29 02:56:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 326 MiB data, 500 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 170 op/s
Nov 29 02:56:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:26.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
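
The anonymous "HEAD / HTTP/1.0" 200 triples from 192.168.122.100/.102 recur every couple of seconds throughout this section and look like load-balancer health probes against radosgw. A minimal reproduction with requests; the scheme, hostname, and port 8080 below are assumptions, so check the beast frontend configuration before relying on them:

    import requests

    resp = requests.head('http://np0005539550:8080/', timeout=2)
    print(resp.status_code)   # 200 from a healthy radosgw, matching the log
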
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.994 257641 DEBUG nova.compute.manager [req-0717c3ed-240e-4d63-8bcd-bdc5fda98408 req-c841ad07-52dc-4ec5-aefd-189022372200 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received event network-vif-unplugged-c6266046-dc59-475f-a0f5-391ba640d669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.994 257641 DEBUG oslo_concurrency.lockutils [req-0717c3ed-240e-4d63-8bcd-bdc5fda98408 req-c841ad07-52dc-4ec5-aefd-189022372200 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.995 257641 DEBUG oslo_concurrency.lockutils [req-0717c3ed-240e-4d63-8bcd-bdc5fda98408 req-c841ad07-52dc-4ec5-aefd-189022372200 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.995 257641 DEBUG oslo_concurrency.lockutils [req-0717c3ed-240e-4d63-8bcd-bdc5fda98408 req-c841ad07-52dc-4ec5-aefd-189022372200 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.995 257641 DEBUG nova.compute.manager [req-0717c3ed-240e-4d63-8bcd-bdc5fda98408 req-c841ad07-52dc-4ec5-aefd-189022372200 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] No waiting events found dispatching network-vif-unplugged-c6266046-dc59-475f-a0f5-391ba640d669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:26 np0005539550 nova_compute[257631]: 2025-11-29 07:56:26.995 257641 DEBUG nova.compute.manager [req-0717c3ed-240e-4d63-8bcd-bdc5fda98408 req-c841ad07-52dc-4ec5-aefd-189022372200 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received event network-vif-unplugged-c6266046-dc59-475f-a0f5-391ba640d669 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
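
The acquire/release bracket above is oslo.concurrency's per-instance event lock: every handler serializes on "<uuid>-events" while it pops any waiter for the incoming network-vif event. A minimal sketch of the same pattern with lockutils (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events')
    def _pop_event():
        # look up and remove the waiting event under the lock
        ...

    _pop_event()   # logs acquire/release just like the records above
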
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.376 257641 INFO nova.virt.libvirt.driver [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Deleting instance files /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96_del#033[00m
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.377 257641 INFO nova.virt.libvirt.driver [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Deletion of /var/lib/nova/instances/854cbdb6-6ca7-4fa5-8105-3d48b2926d96_del complete#033[00m
Nov 29 02:56:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:27.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.442 257641 INFO nova.compute.manager [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
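
The reported 1.19 seconds is consistent with the timestamps in this section: "Terminating instance" was logged at 07:56:26.246 and the hypervisor destroy completed at 07:56:27.442. Checking the arithmetic:

    from datetime import datetime

    t0 = datetime.strptime('07:56:26.246', '%H:%M:%S.%f')
    t1 = datetime.strptime('07:56:27.442', '%H:%M:%S.%f')
    print((t1 - t0).total_seconds())   # 1.196, reported as "Took 1.19 seconds"
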
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.442 257641 DEBUG oslo.service.loopingcall [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.442 257641 DEBUG nova.compute.manager [-] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.443 257641 DEBUG nova.network.neutron [-] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.664 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402972.6629932, 9bef976c-2981-4d19-aa60-8a550b7093ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.665 257641 INFO nova.compute.manager [-] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:56:27 np0005539550 nova_compute[257631]: 2025-11-29 07:56:27.697 257641 DEBUG nova.compute.manager [None req-f2866008-7f80-4851-b2a5-768638a990b9 - - - - - -] [instance: 9bef976c-2981-4d19-aa60-8a550b7093ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 283 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Nov 29 02:56:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:28.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.338 257641 DEBUG nova.compute.manager [req-6d531fbf-affa-48b2-9d03-c29908a2ca7e req-0cd4f41b-7168-443e-ae84-60be4bbf68ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received event network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.339 257641 DEBUG oslo_concurrency.lockutils [req-6d531fbf-affa-48b2-9d03-c29908a2ca7e req-0cd4f41b-7168-443e-ae84-60be4bbf68ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.339 257641 DEBUG oslo_concurrency.lockutils [req-6d531fbf-affa-48b2-9d03-c29908a2ca7e req-0cd4f41b-7168-443e-ae84-60be4bbf68ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.339 257641 DEBUG oslo_concurrency.lockutils [req-6d531fbf-affa-48b2-9d03-c29908a2ca7e req-0cd4f41b-7168-443e-ae84-60be4bbf68ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.339 257641 DEBUG nova.compute.manager [req-6d531fbf-affa-48b2-9d03-c29908a2ca7e req-0cd4f41b-7168-443e-ae84-60be4bbf68ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] No waiting events found dispatching network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.340 257641 WARNING nova.compute.manager [req-6d531fbf-affa-48b2-9d03-c29908a2ca7e req-0cd4f41b-7168-443e-ae84-60be4bbf68ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received unexpected event network-vif-plugged-c6266046-dc59-475f-a0f5-391ba640d669 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.373 257641 DEBUG nova.network.neutron [-] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.393 257641 INFO nova.compute.manager [-] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Took 1.95 seconds to deallocate network for instance.#033[00m
Nov 29 02:56:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:29.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.450 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.451 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.524 257641 DEBUG oslo_concurrency.processutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3539494095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.983 257641 DEBUG oslo_concurrency.processutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
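
Nova's resource tracker sizes the Ceph-backed disk inventory with a plain "ceph df" subprocess, as logged above (and audited by ceph-mon). A minimal reproduction that parses the cluster totals; the JSON key layout ("stats" with total/avail byte counters) is assumed to match this Ceph release:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])
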
Nov 29 02:56:29 np0005539550 nova_compute[257631]: 2025-11-29 07:56:29.992 257641 DEBUG nova.compute.provider_tree [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:30 np0005539550 nova_compute[257631]: 2025-11-29 07:56:30.023 257641 DEBUG nova.scheduler.client.report [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
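
The effective capacity behind that inventory follows the placement formula capacity = (total - reserved) * allocation_ratio. Worked out for the values above:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
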
Nov 29 02:56:30 np0005539550 nova_compute[257631]: 2025-11-29 07:56:30.063 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:30 np0005539550 nova_compute[257631]: 2025-11-29 07:56:30.126 257641 INFO nova.scheduler.client.report [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Deleted allocations for instance 854cbdb6-6ca7-4fa5-8105-3d48b2926d96#033[00m
Nov 29 02:56:30 np0005539550 nova_compute[257631]: 2025-11-29 07:56:30.233 257641 DEBUG oslo_concurrency.lockutils [None req-8fde1506-dd81-43a1-bebf-0e1e4c2b3976 446b0f05699845e8bd9f7d59c787f671 1f48e629446148199d44b34243b98b8a - - default default] Lock "854cbdb6-6ca7-4fa5-8105-3d48b2926d96" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:30 np0005539550 nova_compute[257631]: 2025-11-29 07:56:30.723 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 246 MiB data, 468 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 584 KiB/s wr, 126 op/s
Nov 29 02:56:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:30.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:31.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:31 np0005539550 nova_compute[257631]: 2025-11-29 07:56:31.498 257641 DEBUG nova.compute.manager [req-0222c585-b703-42c3-8b78-e541f19aa57b req-dfab2989-bec9-4d60-b990-8cb901032021 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Received event network-vif-deleted-c6266046-dc59-475f-a0f5-391ba640d669 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 02:56:31 np0005539550 nova_compute[257631]: 2025-11-29 07:56:31.509 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 196 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 108 op/s
Nov 29 02:56:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:32.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:33.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 196 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 580 KiB/s rd, 14 KiB/s wr, 56 op/s
Nov 29 02:56:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:56:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:34.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:56:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 02:56:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 02:56:35 np0005539550 podman[278595]: 2025-11-29 07:56:35.357758804 +0000 UTC m=+0.093627251 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 02:56:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 02:56:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:35 np0005539550 nova_compute[257631]: 2025-11-29 07:56:35.725 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:35 np0005539550 nova_compute[257631]: 2025-11-29 07:56:35.890 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:35 np0005539550 nova_compute[257631]: 2025-11-29 07:56:35.891 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:35 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 02:56:35 np0005539550 nova_compute[257631]: 2025-11-29 07:56:35.911 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.003 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.003 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.010 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.010 257641 INFO nova.compute.claims [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.169 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:56:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.511 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:56:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:56:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303921868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.627 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.634 257641 DEBUG nova.compute.provider_tree [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.656 257641 DEBUG nova.scheduler.client.report [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.730 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.732 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:56:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 173 MiB data, 430 MiB used, 21 GiB / 21 GiB avail; 680 KiB/s rd, 556 KiB/s wr, 109 op/s
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.813 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.814 257641 DEBUG nova.network.neutron [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.856 257641 INFO nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:56:36 np0005539550 nova_compute[257631]: 2025-11-29 07:56:36.896 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:56:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.004 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.005 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.006 257641 INFO nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Creating image(s)
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.033 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.061 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.091 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.096 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.159 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
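[editor's note] The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, which caps the child's address space at 1073741824 bytes and its CPU time at 30 s. A minimal sketch of the same invocation through oslo_concurrency.processutils (passing prlimit= makes it spawn the "python3 -m oslo_concurrency.prlimit" wrapper seen in the log); oslo.concurrency is assumed installed, and the base-image path is the one from the log:

    import json
    from oslo_concurrency import processutils

    # Mirror the limits in the log line: 1 GiB address space, 30 s CPU time.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        prlimit=limits)
    info = json.loads(out)
    print(info.get('format'), info.get('virtual-size'))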
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.160 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.162 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.162 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
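[editor's note] The acquire/release pair above is oslo.concurrency's lock decorator guarding the image-cache fetch, so concurrent builds on the same host do not download one base image twice. A sketch of that pattern, assuming an external file lock under an illustrative lock_path; the lock name is the image hash from the log and the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('f62ef5f82502d01c82174408aec7f3ac942e2488',
                            external=True, lock_path='/tmp/locks')  # illustrative path
    def fetch_func_sync():
        # Fetch or verify the cached base image while holding the lock.
        pass

    fetch_func_sync()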
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.189 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.193 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.360 257641 DEBUG nova.network.neutron [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.361 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 02:56:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:37.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.878 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.685s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:37 np0005539550 nova_compute[257631]: 2025-11-29 07:56:37.967 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] resizing rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
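[editor's note] The import/resize pair above could equally be done with the python-rbd bindings instead of the rbd CLI. A sketch of the resize step under that assumption; pool, client id, image name, and the 1 GiB target size are taken from the log:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk') as image:
                image.resize(1073741824)  # grow to the flavor's 1 GiB root disk
        finally:
            ioctx.close()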
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.080 257641 DEBUG nova.objects.instance [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lazy-loading 'migration_context' on Instance uuid edc2f5a2-46cd-4a58-a722-7cc82e1ada05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.535 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.535 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Ensure instance console log exists: /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.536 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.537 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.537 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.539 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.546 257641 WARNING nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.576 257641 DEBUG nova.virt.libvirt.host [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.577 257641 DEBUG nova.virt.libvirt.host [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.581 257641 DEBUG nova.virt.libvirt.host [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.582 257641 DEBUG nova.virt.libvirt.host [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.583 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.583 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.584 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.584 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.585 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.585 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.585 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.585 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.585 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.586 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.586 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.586 257641 DEBUG nova.virt.hardware [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:56:38 np0005539550 nova_compute[257631]: 2025-11-29 07:56:38.589 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 180 MiB data, 434 MiB used, 21 GiB / 21 GiB avail; 489 KiB/s rd, 2.9 MiB/s wr, 205 op/s
Nov 29 02:56:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:38.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1917574336' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.052 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.086 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.093 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:56:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820587929' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.575 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
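[editor's note] The "ceph mon dump --format=json" calls above are how the monitor endpoints that later appear as <host> elements in the guest XML are discovered. A sketch that parses the same output, assuming the ceph CLI and the client.openstack keyring are available; the public_addr layout is the usual "ip:port/nonce" form:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    for mon in monmap['mons']:
        # e.g. public_addr == "192.168.122.100:6789/0"
        addr = mon['public_addr'].split('/')[0]
        host, port = addr.rsplit(':', 1)
        print(host, port)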
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.577 257641 DEBUG nova.objects.instance [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lazy-loading 'pci_devices' on Instance uuid edc2f5a2-46cd-4a58-a722-7cc82e1ada05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.595 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <uuid>edc2f5a2-46cd-4a58-a722-7cc82e1ada05</uuid>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <name>instance-00000018</name>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <nova:name>tempest-LiveMigrationNegativeTest-server-1231999695</nova:name>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:56:38</nova:creationTime>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <nova:user uuid="8d70080854cd432eb4d60eaa335e9658">tempest-LiveMigrationNegativeTest-1463328085-project-member</nova:user>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <nova:project uuid="a20e261a6e5247c5a6a89f85be36c69c">tempest-LiveMigrationNegativeTest-1463328085</nova:project>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <entry name="serial">edc2f5a2-46cd-4a58-a722-7cc82e1ada05</entry>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <entry name="uuid">edc2f5a2-46cd-4a58-a722-7cc82e1ada05</entry>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk.config">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/console.log" append="off"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:56:39 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:56:39 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:56:39 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:56:39 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
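[editor's note] The <domain> document above is what Nova hands to libvirt to define and boot the guest. A minimal sketch of that step with the libvirt-python bindings, assuming the XML has been saved to domain.xml and the default qemu:///system URI applies:

    import libvirt

    with open('domain.xml') as f:  # the <domain> document logged above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the definition
        dom.create()               # boot the guest
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()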
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.664 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.664 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.665 257641 INFO nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Using config drive
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.688 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.970 257641 INFO nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Creating config drive at /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/disk.config
Nov 29 02:56:39 np0005539550 nova_compute[257631]: 2025-11-29 07:56:39.976 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw0f44_vg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:40 np0005539550 nova_compute[257631]: 2025-11-29 07:56:40.102 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw0f44_vg" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:40 np0005539550 nova_compute[257631]: 2025-11-29 07:56:40.133 257641 DEBUG nova.storage.rbd_utils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] rbd image edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:56:40 np0005539550 nova_compute[257631]: 2025-11-29 07:56:40.137 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/disk.config edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:56:40 np0005539550 nova_compute[257631]: 2025-11-29 07:56:40.296 257641 DEBUG oslo_concurrency.processutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/disk.config edc2f5a2-46cd-4a58-a722-7cc82e1ada05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:56:40 np0005539550 nova_compute[257631]: 2025-11-29 07:56:40.300 257641 INFO nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Deleting local config drive /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05/disk.config because it was imported into RBD.
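[editor's note] The config-drive sequence above builds an ISO9660 image locally, imports it into the vms pool as the _disk.config RBD image, and then removes the local copy. A sketch of the same flow, assuming mkisofs and the rbd CLI are on PATH; the metadata directory is illustrative (Nova uses a temporary directory like the /tmp/tmpw0f44_vg seen in the log):

    import os
    import subprocess

    instance = 'edc2f5a2-46cd-4a58-a722-7cc82e1ada05'
    iso = '/var/lib/nova/instances/%s/disk.config' % instance

    subprocess.check_call(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/metadata'])  # illustrative dir
    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms', iso, instance + '_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    os.unlink(iso)  # the RBD copy is now authoritative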
Nov 29 02:56:40 np0005539550 systemd-machined[216673]: New machine qemu-13-instance-00000018.
Nov 29 02:56:40 np0005539550 systemd[1]: Started Virtual Machine qemu-13-instance-00000018.
Nov 29 02:56:40 np0005539550 nova_compute[257631]: 2025-11-29 07:56:40.727 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 167 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 485 KiB/s rd, 3.9 MiB/s wr, 233 op/s
Nov 29 02:56:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 02:56:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:40.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.120 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403001.1203954, edc2f5a2-46cd-4a58-a722-7cc82e1ada05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.121 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] VM Resumed (Lifecycle Event)
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.123 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.123 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.126 257641 INFO nova.virt.libvirt.driver [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Instance spawned successfully.
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.126 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:56:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.483 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764402986.4823146, 854cbdb6-6ca7-4fa5-8105-3d48b2926d96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.483 257641 INFO nova.compute.manager [-] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] VM Stopped (Lifecycle Event)
Nov 29 02:56:41 np0005539550 nova_compute[257631]: 2025-11-29 07:56:41.515 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:42 np0005539550 nova_compute[257631]: 2025-11-29 07:56:42.776 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:56:42 np0005539550 nova_compute[257631]: 2025-11-29 07:56:42.780 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
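[editor's note] The numeric states in the sync message above ("DB power_state: 0, VM power_state: 1") correspond to Nova's power_state constants. A tiny sketch of that mapping, with values as they are defined in nova.compute.power_state (stated here from memory, not from this log):

    # 0 = NOSTATE (not yet recorded), 1 = RUNNING (libvirt reports the guest up)
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    db_state, vm_state = 0, 1
    print(POWER_STATES[db_state], '->', POWER_STATES[vm_state])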
Nov 29 02:56:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 167 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 328 op/s
Nov 29 02:56:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:42.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.331 257641 DEBUG nova.compute.manager [None req-9b242b92-a381-42de-92c4-759cc4469388 - - - - - -] [instance: 854cbdb6-6ca7-4fa5-8105-3d48b2926d96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.339 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.340 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.340 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.341 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.341 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.342 257641 DEBUG nova.virt.libvirt.driver [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.356 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.356 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403001.1228456, edc2f5a2-46cd-4a58-a722-7cc82e1ada05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.357 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] VM Started (Lifecycle Event)
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.405 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.409 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:56:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:43.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.425 257641 INFO nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Took 6.42 seconds to spawn the instance on the hypervisor.
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.425 257641 DEBUG nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.437 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.496 257641 INFO nova.compute.manager [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Took 7.54 seconds to build instance.
Nov 29 02:56:43 np0005539550 nova_compute[257631]: 2025-11-29 07:56:43.514 257641 DEBUG oslo_concurrency.lockutils [None req-13b5fa76-206e-4b51-bfb8-fa9de53052b9 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:56:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 167 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 321 op/s
Nov 29 02:56:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:44.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:45.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:45 np0005539550 nova_compute[257631]: 2025-11-29 07:56:45.729 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:46 np0005539550 nova_compute[257631]: 2025-11-29 07:56:46.517 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 167 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 372 op/s
Nov 29 02:56:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:46.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:47.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 109 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 330 op/s
Nov 29 02:56:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:48.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:49.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:50 np0005539550 nova_compute[257631]: 2025-11-29 07:56:50.731 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1020 KiB/s wr, 223 op/s
Nov 29 02:56:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:56:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:50.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:56:51 np0005539550 nova_compute[257631]: 2025-11-29 07:56:51.312 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:51 np0005539550 nova_compute[257631]: 2025-11-29 07:56:51.520 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 115 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 200 op/s
Nov 29 02:56:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:52.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:53 np0005539550 podman[279053]: 2025-11-29 07:56:53.335967172 +0000 UTC m=+0.065486843 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 02:56:53 np0005539550 podman[279052]: 2025-11-29 07:56:53.343779951 +0000 UTC m=+0.073168979 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
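
The two podman lines above are periodic healthcheck events: both containers report health_status=healthy with a failing streak of 0, and the config_data blob shows the check is the /openstack/healthcheck script bind-mounted into each container. The same check can be run on demand; a sketch using the container names from this host (exit code 0 means healthy):

    import subprocess

    # Trigger the configured healthcheck for the containers seen above.
    for name in ('ovn_metadata_agent', 'multipathd'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')
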
Nov 29 02:56:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:53.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:53.462 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 02:56:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:53.463 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 02:56:53 np0005539550 nova_compute[257631]: 2025-11-29 07:56:53.463 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:56:53.464 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
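
This transaction is the metadata agent acknowledging the SB_Global bump from nb_cfg=11 to 12: it writes neutron:ovn-metadata-sb-cfg = '12' into its own Chassis_Private row. A rough ovsdbapp equivalent of that DbSetCommand, as a sketch; the southbound endpoint is a placeholder (this deployment actually connects over TLS) and the record UUID is the one from the log line:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Placeholder SB endpoint for the sketch.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))
    sb.db_set('Chassis_Private', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'})
              ).execute(check_error=True)
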
Nov 29 02:56:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 115 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Nov 29 02:56:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:56:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:54.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:56:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:56:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:55.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:55 np0005539550 nova_compute[257631]: 2025-11-29 07:56:55.733 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:56 np0005539550 nova_compute[257631]: 2025-11-29 07:56:56.524 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:56:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 172 MiB data, 417 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.1 MiB/s wr, 149 op/s
Nov 29 02:56:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:56.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:56:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:56:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 211 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 211 op/s
Nov 29 02:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:56:59
Nov 29 02:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images']
Nov 29 02:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
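
This balancer block is one automatic optimize pass: upmap mode, a 5% misplaced ceiling, eleven pools scanned, and no changes prepared, which is expected while all 305 PGs are active+clean. The module's state can be read back from the CLI; a sketch (the output keys are an assumption based on typical 'ceph balancer status' JSON):

    import json, subprocess

    out = subprocess.run(['ceph', 'balancer', 'status', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out))  # e.g. {'active': True, 'mode': 'upmap', ...}
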
Nov 29 02:56:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:56:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:56:59.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:56:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:56:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:56:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:56:59.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:57:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:00 np0005539550 nova_compute[257631]: 2025-11-29 07:57:00.736 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 213 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.7 MiB/s wr, 266 op/s
Nov 29 02:57:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:01.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:01 np0005539550 nova_compute[257631]: 2025-11-29 07:57:01.526 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:57:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:01.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:57:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 213 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 262 op/s
Nov 29 02:57:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:03.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:03.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 213 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 236 op/s
Nov 29 02:57:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:05.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:05 np0005539550 nova_compute[257631]: 2025-11-29 07:57:05.738 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:05.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:06 np0005539550 podman[279094]: 2025-11-29 07:57:06.389973075 +0000 UTC m=+0.121570227 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 02:57:06 np0005539550 nova_compute[257631]: 2025-11-29 07:57:06.588 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 214 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 237 op/s
Nov 29 02:57:06 np0005539550 nova_compute[257631]: 2025-11-29 07:57:06.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:06 np0005539550 nova_compute[257631]: 2025-11-29 07:57:06.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:57:06 np0005539550 nova_compute[257631]: 2025-11-29 07:57:06.984 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 02:57:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:07.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:07 np0005539550 nova_compute[257631]: 2025-11-29 07:57:07.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:07 np0005539550 nova_compute[257631]: 2025-11-29 07:57:07.993 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:07 np0005539550 nova_compute[257631]: 2025-11-29 07:57:07.993 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:07 np0005539550 nova_compute[257631]: 2025-11-29 07:57:07.993 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:07 np0005539550 nova_compute[257631]: 2025-11-29 07:57:07.994 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:57:07 np0005539550 nova_compute[257631]: 2025-11-29 07:57:07.994 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
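
The resource audit shells out to the exact 'ceph df' command logged above (the mon audit entries a few lines below show the dispatch arriving as client.openstack). Re-running it by hand and pulling the cluster-wide totals; the JSON key names are an assumption from common 'ceph df --format=json' output:

    import json, subprocess

    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                                   check=True).stdout)
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])
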
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:57:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3470144808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.476 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.643 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.644 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:57:08 np0005539550 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.786 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.788 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4566MB free_disk=20.901012420654297GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.788 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.788 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:57:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 189 MiB data, 437 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 208 op/s
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.861 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance edc2f5a2-46cd-4a58-a722-7cc82e1ada05 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.861 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.861 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:57:08 np0005539550 nova_compute[257631]: 2025-11-29 07:57:08.898 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:57:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:09.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/458031861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:09 np0005539550 nova_compute[257631]: 2025-11-29 07:57:09.621 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.722s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:09 np0005539550 nova_compute[257631]: 2025-11-29 07:57:09.626 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:57:09 np0005539550 nova_compute[257631]: 2025-11-29 07:57:09.642 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
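
The inventory dict above is what placement capacity derives from: per resource class, capacity = (total - reserved) * allocation_ratio. Worked through for these numbers:

    # Capacity implied by the inventory logged above.
    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9}}
    for rc, i in inv.items():
        print(rc, int((i['total'] - i['reserved']) * i['allocation_ratio']))
    # VCPU 32, MEMORY_MB 7168, DISK_GB 17
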
Nov 29 02:57:09 np0005539550 nova_compute[257631]: 2025-11-29 07:57:09.664 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:57:09 np0005539550 nova_compute[257631]: 2025-11-29 07:57:09.665 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.658 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.659 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.659 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.659 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.660 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.741 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 170 MiB data, 433 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 452 KiB/s wr, 112 op/s
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:57:10 np0005539550 nova_compute[257631]: 2025-11-29 07:57:10.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:57:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:11.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:11 np0005539550 nova_compute[257631]: 2025-11-29 07:57:11.591 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 191 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 454 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Nov 29 02:57:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:13.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:13.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.380 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.380 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.381 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.381 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.381 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
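
The lock lines above are oslo.concurrency's standard pattern for serializing per-instance work: one lock named by the instance UUID held for the whole teardown, plus a short-lived '-events' lock to clear pending events. A minimal in-process sketch of the decorator form (the function name here is illustrative):

    from oslo_concurrency import lockutils

    uuid = 'edc2f5a2-46cd-4a58-a722-7cc82e1ada05'  # instance from the log

    @lockutils.synchronized(uuid)
    def do_terminate_instance():
        print('holding lock', uuid)

    do_terminate_instance()
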
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.383 257641 INFO nova.compute.manager [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Terminating instance
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.384 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "refresh_cache-edc2f5a2-46cd-4a58-a722-7cc82e1ada05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.384 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquired lock "refresh_cache-edc2f5a2-46cd-4a58-a722-7cc82e1ada05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.384 257641 DEBUG nova.network.neutron [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.577 257641 DEBUG nova.network.neutron [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:57:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 191 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Nov 29 02:57:14 np0005539550 nova_compute[257631]: 2025-11-29 07:57:14.990 257641 DEBUG nova.network.neutron [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:57:15 np0005539550 nova_compute[257631]: 2025-11-29 07:57:15.007 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Releasing lock "refresh_cache-edc2f5a2-46cd-4a58-a722-7cc82e1ada05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:57:15 np0005539550 nova_compute[257631]: 2025-11-29 07:57:15.008 257641 DEBUG nova.compute.manager [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:57:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:15 np0005539550 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 29 02:57:15 np0005539550 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000018.scope: Consumed 14.797s CPU time.
Nov 29 02:57:15 np0005539550 systemd-machined[216673]: Machine qemu-13-instance-00000018 terminated.
Nov 29 02:57:15 np0005539550 nova_compute[257631]: 2025-11-29 07:57:15.236 257641 INFO nova.virt.libvirt.driver [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Instance destroyed successfully.
Nov 29 02:57:15 np0005539550 nova_compute[257631]: 2025-11-29 07:57:15.237 257641 DEBUG nova.objects.instance [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lazy-loading 'resources' on Instance uuid edc2f5a2-46cd-4a58-a722-7cc82e1ada05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:57:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:15.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:15 np0005539550 nova_compute[257631]: 2025-11-29 07:57:15.752 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:57:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:15.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:57:16 np0005539550 nova_compute[257631]: 2025-11-29 07:57:16.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 198 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 377 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:57:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:17.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d92c290a-9808-4b1e-b134-4ffb515c74a0 does not exist
Nov 29 02:57:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7ffe7339-3b46-4f8e-b123-8bdc458abd70 does not exist
Nov 29 02:57:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d5d41c31-63e9-4a27-b4ed-3ec4a81f8885 does not exist
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:57:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:57:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:18 np0005539550 podman[279518]: 2025-11-29 07:57:18.138033872 +0000 UTC m=+0.022876375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:57:18 np0005539550 podman[279518]: 2025-11-29 07:57:18.316214685 +0000 UTC m=+0.201057158 container create 8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:57:18 np0005539550 systemd[1]: Started libpod-conmon-8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73.scope.
Nov 29 02:57:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:57:18 np0005539550 podman[279518]: 2025-11-29 07:57:18.56926008 +0000 UTC m=+0.454102583 container init 8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:57:18 np0005539550 podman[279518]: 2025-11-29 07:57:18.578391004 +0000 UTC m=+0.463233477 container start 8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 02:57:18 np0005539550 podman[279518]: 2025-11-29 07:57:18.581843792 +0000 UTC m=+0.466686255 container attach 8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:57:18 np0005539550 modest_clarke[279532]: 167 167
Nov 29 02:57:18 np0005539550 systemd[1]: libpod-8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73.scope: Deactivated successfully.
Nov 29 02:57:18 np0005539550 conmon[279532]: conmon 8746517e1c85f51c7f5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73.scope/container/memory.events
Nov 29 02:57:18 np0005539550 podman[279518]: 2025-11-29 07:57:18.586292266 +0000 UTC m=+0.471134739 container died 8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 02:57:18 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0716d817190dc9aec89cde766a442b0bc0eb6eda1db8851283df986c1f37faac-merged.mount: Deactivated successfully.
Nov 29 02:57:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 164 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 405 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Nov 29 02:57:18 np0005539550 podman[279518]: 2025-11-29 07:57:18.829414247 +0000 UTC m=+0.714256720 container remove 8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:57:18 np0005539550 systemd[1]: libpod-conmon-8746517e1c85f51c7f5f329920d378cec9b24ef855e020374c80ea7d64ea1c73.scope: Deactivated successfully.
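The create → start → attach → died → remove sequence above, with its matching libpod/libpod-conmon scope churn in systemd, is one short-lived cephadm helper container (modest_clarke) coming and going. A minimal sketch of replaying such lifecycles from podman's event stream; the JSON field names and the five-minute window are assumptions about podman 4.x behavior, not taken from this log:

    # Replay recent container lifecycle events (create/start/died/remove,
    # as in the modest_clarke sequence above) from podman's event log.
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "events", "--since", "5m", "--until", "now",
         "--filter", "type=container", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        ev = json.loads(line)  # one JSON object per line (podman 4.x)
        print(ev["Status"], ev["Name"], ev["ID"][:12])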
Nov 29 02:57:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:57:18.930 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:57:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:57:18.932 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:57:18.933 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
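The Acquiring / acquired / released triple above is the standard oslo.concurrency trace around ProcessMonitor._check_child_processes; "waited" is time spent queueing for the lock and "held" the time inside it. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (the function body is a hypothetical stand-in):

    # The three DEBUG lines above come from this decorator's wrapper
    # ("inner" in lockutils.py); the lock name matches the log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # hypothetical critical section; its runtime becomes "held 0.001s"
        pass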
Nov 29 02:57:18 np0005539550 podman[279556]: 2025-11-29 07:57:18.987984578 +0000 UTC m=+0.039894930 container create b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:57:19 np0005539550 systemd[1]: Started libpod-conmon-b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5.scope.
Nov 29 02:57:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:57:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c69532ca77ffad700d8a54c20ef039eb33d83f8c1a6c04e83dff45a4ad55006/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c69532ca77ffad700d8a54c20ef039eb33d83f8c1a6c04e83dff45a4ad55006/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c69532ca77ffad700d8a54c20ef039eb33d83f8c1a6c04e83dff45a4ad55006/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c69532ca77ffad700d8a54c20ef039eb33d83f8c1a6c04e83dff45a4ad55006/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c69532ca77ffad700d8a54c20ef039eb33d83f8c1a6c04e83dff45a4ad55006/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
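The repeated xfs notices mark overlay mounts whose inode timestamps are 32-bit and therefore run out at 0x7fffffff seconds after the Unix epoch, the classic y2038 limit. A quick standard-library check of what that hex value means:

    # Decode the 0x7fffffff limit the xfs remount messages refer to.
    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00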
Nov 29 02:57:19 np0005539550 podman[279556]: 2025-11-29 07:57:18.970767448 +0000 UTC m=+0.022677820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.110 257641 INFO nova.virt.libvirt.driver [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Deleting instance files /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05_del
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.111 257641 INFO nova.virt.libvirt.driver [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Deletion of /var/lib/nova/instances/edc2f5a2-46cd-4a58-a722-7cc82e1ada05_del complete
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.181 257641 INFO nova.compute.manager [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Took 4.17 seconds to destroy the instance on the hypervisor.
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.181 257641 DEBUG oslo.service.loopingcall [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.181 257641 DEBUG nova.compute.manager [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.182 257641 DEBUG nova.network.neutron [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:57:19 np0005539550 podman[279556]: 2025-11-29 07:57:19.216306652 +0000 UTC m=+0.268217034 container init b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:57:19 np0005539550 podman[279556]: 2025-11-29 07:57:19.222660345 +0000 UTC m=+0.274570697 container start b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 02:57:19 np0005539550 podman[279556]: 2025-11-29 07:57:19.245989771 +0000 UTC m=+0.297900123 container attach b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shaw, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.377 257641 DEBUG nova.network.neutron [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.396 257641 DEBUG nova.network.neutron [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.408 257641 INFO nova.compute.manager [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Took 0.23 seconds to deallocate network for instance.
Nov 29 02:57:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:19.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
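The anonymous HEAD / probes that hit radosgw every two seconds from 192.168.122.100 and 192.168.122.102 have the shape of load-balancer health checks. A hedged sketch of pulling fields out of such a beast access-log line; the regex is written against the format shown here, not against any radosgw specification:

    # Parse a beast access-log line like the one above.
    import re

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:07:57:19.457 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = re.search(r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] "([^"]+)" '
                  r'(\d+) (\d+).*latency=([\d.]+)s', line)
    ip, user, ts, request, status, size, latency = m.groups()
    print(ip, request, status, latency)
    # 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000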
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.606 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.607 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:57:19 np0005539550 nova_compute[257631]: 2025-11-29 07:57:19.659 257641 DEBUG oslo_concurrency.processutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
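This is nova's Ceph capacity probe under the compute_resources lock: it shells out to the exact command logged above and reads the JSON back (the return is logged about half a second later). A sketch of the same round trip, assuming the client.openstack keyring is readable and using the "stats"/"pools" keys that ceph df emits in JSON mode:

    # Re-run the capacity probe nova logs above and summarize the result.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)
    print(df["stats"]["total_avail_bytes"],
          [p["name"] for p in df["pools"]])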
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0034218764540294064 of space, bias 1.0, pg target 1.026562936208822 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:57:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
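Each effective_target_ratio / Pool pair above is one pg_autoscaler evaluation; the trailing 22535995392 is the root's raw capacity in bytes (about 21 GiB, matching the pgmap lines). The raw target works out to capacity share × bias × 300, and 300 is consistent with this cluster's 3 OSDs at the default mon_target_pg_per_osd of 100. A simplified sketch of that arithmetic; the production module also applies change thresholds and pool minimums not reproduced here, which is why the logged "quantized" values stay at the current pg_num:

    # Reproduce the "using X of space, bias B, pg target T" arithmetic
    # from the pg_autoscaler lines above. budget=300 is inferred from the
    # logged numbers (3 OSDs x 100 target PGs per OSD, the default).
    def pg_target(capacity_ratio, bias, budget=300):
        raw = capacity_ratio * bias * budget
        q = 1
        while q < raw:  # round up to the next power of two
            q *= 2
        return raw, q

    print(pg_target(0.0034218764540294064, 1.0))   # 'vms': raw ~1.0266
    print(pg_target(2.0538165363856318e-05, 1.0))  # '.mgr': quantized to 1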
Nov 29 02:57:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:20 np0005539550 boring_shaw[279572]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:57:20 np0005539550 boring_shaw[279572]: --> relative data size: 1.0
Nov 29 02:57:20 np0005539550 boring_shaw[279572]: --> All data devices are unavailable
Nov 29 02:57:20 np0005539550 systemd[1]: libpod-b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5.scope: Deactivated successfully.
Nov 29 02:57:20 np0005539550 podman[279556]: 2025-11-29 07:57:20.095100887 +0000 UTC m=+1.147011249 container died b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 02:57:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844361209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:20 np0005539550 nova_compute[257631]: 2025-11-29 07:57:20.129 257641 DEBUG oslo_concurrency.processutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:57:20 np0005539550 nova_compute[257631]: 2025-11-29 07:57:20.136 257641 DEBUG nova.compute.provider_tree [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:57:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5c69532ca77ffad700d8a54c20ef039eb33d83f8c1a6c04e83dff45a4ad55006-merged.mount: Deactivated successfully.
Nov 29 02:57:20 np0005539550 nova_compute[257631]: 2025-11-29 07:57:20.173 257641 DEBUG nova.scheduler.client.report [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
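The inventory dict above is what the resource tracker reports to placement; the schedulable amount for each class is (total - reserved) × allocation_ratio. Worked out for the logged values:

    # Capacity placement derives from the inventory above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1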
Nov 29 02:57:20 np0005539550 podman[279556]: 2025-11-29 07:57:20.18875191 +0000 UTC m=+1.240662262 container remove b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 02:57:20 np0005539550 systemd[1]: libpod-conmon-b31341d67355b4dd7b9cd0439f7e65929730bad4aaa8d095849b039baf258dc5.scope: Deactivated successfully.
Nov 29 02:57:20 np0005539550 nova_compute[257631]: 2025-11-29 07:57:20.205 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:20 np0005539550 nova_compute[257631]: 2025-11-29 07:57:20.252 257641 INFO nova.scheduler.client.report [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Deleted allocations for instance edc2f5a2-46cd-4a58-a722-7cc82e1ada05
Nov 29 02:57:20 np0005539550 nova_compute[257631]: 2025-11-29 07:57:20.361 257641 DEBUG oslo_concurrency.lockutils [None req-06621f28-4ae0-4e2c-b75c-d9bf21db6acc 8d70080854cd432eb4d60eaa335e9658 a20e261a6e5247c5a6a89f85be36c69c - - default default] Lock "edc2f5a2-46cd-4a58-a722-7cc82e1ada05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:57:20 np0005539550 nova_compute[257631]: 2025-11-29 07:57:20.754 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 397 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Nov 29 02:57:20 np0005539550 podman[279764]: 2025-11-29 07:57:20.809841101 +0000 UTC m=+0.022416054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:57:20 np0005539550 podman[279764]: 2025-11-29 07:57:20.969665745 +0000 UTC m=+0.182240678 container create 7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chebyshev, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:57:21 np0005539550 systemd[1]: Started libpod-conmon-7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7.scope.
Nov 29 02:57:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:57:21 np0005539550 podman[279764]: 2025-11-29 07:57:21.050313135 +0000 UTC m=+0.262888088 container init 7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 02:57:21 np0005539550 podman[279764]: 2025-11-29 07:57:21.056784321 +0000 UTC m=+0.269359254 container start 7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chebyshev, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:57:21 np0005539550 podman[279764]: 2025-11-29 07:57:21.059967872 +0000 UTC m=+0.272542805 container attach 7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chebyshev, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 02:57:21 np0005539550 vigorous_chebyshev[279780]: 167 167
Nov 29 02:57:21 np0005539550 systemd[1]: libpod-7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7.scope: Deactivated successfully.
Nov 29 02:57:21 np0005539550 podman[279764]: 2025-11-29 07:57:21.062037155 +0000 UTC m=+0.274612088 container died 7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:57:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b222d15ad62044c2d8a70e07b096b769b8bf92b054ad48792fe0876220898078-merged.mount: Deactivated successfully.
Nov 29 02:57:21 np0005539550 podman[279764]: 2025-11-29 07:57:21.0982237 +0000 UTC m=+0.310798633 container remove 7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chebyshev, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:57:21 np0005539550 systemd[1]: libpod-conmon-7d768def58f88f317f537548b2a3feb455d1dbfcf24fe92a40f68f587c0aaea7.scope: Deactivated successfully.
Nov 29 02:57:21 np0005539550 podman[279804]: 2025-11-29 07:57:21.272237916 +0000 UTC m=+0.039798598 container create e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:57:21 np0005539550 systemd[1]: Started libpod-conmon-e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0.scope.
Nov 29 02:57:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:57:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca01bd0edccbb3f4d6a1ef3c2d5b4bd6ffbdce85a4c29502a418104da09539a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca01bd0edccbb3f4d6a1ef3c2d5b4bd6ffbdce85a4c29502a418104da09539a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca01bd0edccbb3f4d6a1ef3c2d5b4bd6ffbdce85a4c29502a418104da09539a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca01bd0edccbb3f4d6a1ef3c2d5b4bd6ffbdce85a4c29502a418104da09539a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:21 np0005539550 podman[279804]: 2025-11-29 07:57:21.254057831 +0000 UTC m=+0.021618533 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:57:21 np0005539550 podman[279804]: 2025-11-29 07:57:21.350746562 +0000 UTC m=+0.118307264 container init e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:57:21 np0005539550 podman[279804]: 2025-11-29 07:57:21.35928381 +0000 UTC m=+0.126844492 container start e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:57:21 np0005539550 podman[279804]: 2025-11-29 07:57:21.363397415 +0000 UTC m=+0.130958127 container attach e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:57:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:21.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:21 np0005539550 nova_compute[257631]: 2025-11-29 07:57:21.647 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:57:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:21.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]: {
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:    "0": [
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:        {
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "devices": [
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "/dev/loop3"
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            ],
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "lv_name": "ceph_lv0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "lv_size": "7511998464",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "name": "ceph_lv0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "tags": {
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.cluster_name": "ceph",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.crush_device_class": "",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.encrypted": "0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.osd_id": "0",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.type": "block",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:                "ceph.vdo": "0"
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            },
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "type": "block",
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:            "vg_name": "ceph_vg0"
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:        }
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]:    ]
Nov 29 02:57:22 np0005539550 jolly_pasteur[279820]: }
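The JSON dump from jolly_pasteur is keyed by OSD id and has the shape of ceph-volume lvm list --format json output: one logical volume per OSD, with the authoritative metadata carried as LV tags. A small sketch of folding such a capture into an osd_id → device/fsid summary; 'lvm_list.json' is a placeholder filename, not from the log:

    # Summarize a ceph-volume style listing like the JSON above.
    import json

    with open('lvm_list.json') as f:
        data = json.load(f)
    for osd_id, lvs in data.items():
        for lv in lvs:
            tags = lv['tags']
            print(osd_id, lv['lv_path'],
                  tags['ceph.osd_fsid'], tags['ceph.cluster_fsid'])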
Nov 29 02:57:22 np0005539550 systemd[1]: libpod-e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0.scope: Deactivated successfully.
Nov 29 02:57:22 np0005539550 podman[279804]: 2025-11-29 07:57:22.229632346 +0000 UTC m=+0.997193028 container died e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 02:57:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4ca01bd0edccbb3f4d6a1ef3c2d5b4bd6ffbdce85a4c29502a418104da09539a-merged.mount: Deactivated successfully.
Nov 29 02:57:22 np0005539550 podman[279804]: 2025-11-29 07:57:22.291508887 +0000 UTC m=+1.059069569 container remove e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:57:22 np0005539550 systemd[1]: libpod-conmon-e0974291d02953e721010c66180cd84fa98fcd87dd815a99243408c5de0ac7b0.scope: Deactivated successfully.
Nov 29 02:57:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 29 02:57:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 29 02:57:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 29 02:57:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 121 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 125 KiB/s rd, 89 KiB/s wr, 54 op/s
Nov 29 02:57:22 np0005539550 podman[279983]: 2025-11-29 07:57:22.919878133 +0000 UTC m=+0.040697281 container create 3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 02:57:22 np0005539550 systemd[1]: Started libpod-conmon-3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04.scope.
Nov 29 02:57:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:57:22 np0005539550 podman[279983]: 2025-11-29 07:57:22.995177947 +0000 UTC m=+0.115997125 container init 3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:57:22 np0005539550 podman[279983]: 2025-11-29 07:57:22.902003146 +0000 UTC m=+0.022822314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:57:23 np0005539550 podman[279983]: 2025-11-29 07:57:23.004972947 +0000 UTC m=+0.125792095 container start 3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:57:23 np0005539550 podman[279983]: 2025-11-29 07:57:23.009082872 +0000 UTC m=+0.129902060 container attach 3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:57:23 np0005539550 beautiful_tesla[280000]: 167 167
Nov 29 02:57:23 np0005539550 systemd[1]: libpod-3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04.scope: Deactivated successfully.
Nov 29 02:57:23 np0005539550 podman[279983]: 2025-11-29 07:57:23.012430608 +0000 UTC m=+0.133249756 container died 3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 02:57:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-58760591ac77e0c9fe91c1060afc7c51fd4614c8aab7eea1e84f61956725ca8b-merged.mount: Deactivated successfully.
Nov 29 02:57:23 np0005539550 podman[279983]: 2025-11-29 07:57:23.049026023 +0000 UTC m=+0.169845171 container remove 3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:57:23 np0005539550 systemd[1]: libpod-conmon-3c9832cd1ab48aeb346bb14e2a14c37450d6ebbbd90e78681ce8ac7900b5ff04.scope: Deactivated successfully.
Nov 29 02:57:23 np0005539550 podman[280023]: 2025-11-29 07:57:23.214126081 +0000 UTC m=+0.040656970 container create 6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:57:23 np0005539550 systemd[1]: Started libpod-conmon-6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804.scope.
Nov 29 02:57:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:57:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0fc64b696b104831c02a87e1c80281ce59b04eca55f9efb82219630692658/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0fc64b696b104831c02a87e1c80281ce59b04eca55f9efb82219630692658/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0fc64b696b104831c02a87e1c80281ce59b04eca55f9efb82219630692658/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf0fc64b696b104831c02a87e1c80281ce59b04eca55f9efb82219630692658/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:57:23 np0005539550 podman[280023]: 2025-11-29 07:57:23.282382335 +0000 UTC m=+0.108913224 container init 6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:57:23 np0005539550 podman[280023]: 2025-11-29 07:57:23.289824395 +0000 UTC m=+0.116355284 container start 6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 02:57:23 np0005539550 podman[280023]: 2025-11-29 07:57:23.196262944 +0000 UTC m=+0.022793853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:57:23 np0005539550 podman[280023]: 2025-11-29 07:57:23.293296264 +0000 UTC m=+0.119827173 container attach 6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:57:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:23.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]: {
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]:        "osd_id": 0,
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]:        "type": "bluestore"
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]:    }
Nov 29 02:57:24 np0005539550 beautiful_meninsky[280040]: }
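This second dump is keyed by OSD uuid and carries the same identifiers (osd_id 0, cluster fsid b66774a7-...) as the LVM listing above, so the two captures can be cross-checked. A minimal consistency check over both, with both filenames placeholders:

    # Cross-check the uuid-keyed listing above against the id-keyed one:
    # the same OSD must resolve to the same fsid/uuid in both.
    import json

    lvm = json.load(open('lvm_list.json'))
    raw = json.load(open('raw_list.json'))
    for uuid, osd in raw.items():
        lv = lvm[str(osd['osd_id'])][0]
        assert lv['tags']['ceph.osd_fsid'] == uuid == osd['osd_uuid']
        print(osd['osd_id'], osd['device'], osd['type'])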
Nov 29 02:57:24 np0005539550 podman[280023]: 2025-11-29 07:57:24.264369415 +0000 UTC m=+1.090900334 container died 6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:57:24 np0005539550 systemd[1]: libpod-6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804.scope: Deactivated successfully.
Nov 29 02:57:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3cf0fc64b696b104831c02a87e1c80281ce59b04eca55f9efb82219630692658-merged.mount: Deactivated successfully.
Nov 29 02:57:24 np0005539550 podman[280023]: 2025-11-29 07:57:24.352656711 +0000 UTC m=+1.179187600 container remove 6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 02:57:24 np0005539550 podman[280062]: 2025-11-29 07:57:24.354214591 +0000 UTC m=+0.074665769 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 02:57:24 np0005539550 podman[280063]: 2025-11-29 07:57:24.357391902 +0000 UTC m=+0.078904007 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 02:57:24 np0005539550 systemd[1]: libpod-conmon-6ee545d1970ed5f99b6df130672b6d6af1d78ebbc365c4fa17156c062023a804.scope: Deactivated successfully.
Nov 29 02:57:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:57:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 121 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 125 KiB/s rd, 89 KiB/s wr, 54 op/s
Nov 29 02:57:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:25.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:57:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d78d98bc-4f5f-4f38-afa3-4f7f0eebcc36 does not exist
Nov 29 02:57:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9351c6c2-3372-452d-92b6-4580a1e3b420 does not exist
Nov 29 02:57:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fd575cb3-63f0-4903-908c-f06196acfb0b does not exist
Nov 29 02:57:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:57:25 np0005539550 nova_compute[257631]: 2025-11-29 07:57:25.756 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:25.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:26 np0005539550 ovn_controller[148680]: 2025-11-29T07:57:26Z|00156|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 02:57:26 np0005539550 nova_compute[257631]: 2025-11-29 07:57:26.701 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 121 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 33 KiB/s wr, 55 op/s
Nov 29 02:57:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 29 02:57:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 29 02:57:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 29 02:57:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 121 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 895 B/s wr, 74 op/s
Nov 29 02:57:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:29.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:30 np0005539550 nova_compute[257631]: 2025-11-29 07:57:30.234 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403035.233313, edc2f5a2-46cd-4a58-a722-7cc82e1ada05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:30 np0005539550 nova_compute[257631]: 2025-11-29 07:57:30.234 257641 INFO nova.compute.manager [-] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:57:30 np0005539550 nova_compute[257631]: 2025-11-29 07:57:30.260 257641 DEBUG nova.compute.manager [None req-2dd1cacf-d2da-4031-89ad-60f8c9a16931 - - - - - -] [instance: edc2f5a2-46cd-4a58-a722-7cc82e1ada05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:30 np0005539550 nova_compute[257631]: 2025-11-29 07:57:30.758 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 121 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 KiB/s wr, 138 op/s
Nov 29 02:57:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:31.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:57:31.700 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:57:31 np0005539550 nova_compute[257631]: 2025-11-29 07:57:31.701 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:57:31.702 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:57:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:57:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:57:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 121 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 921 B/s wr, 113 op/s
Nov 29 02:57:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:33.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:57:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.623 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.624 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.648 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.742 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.743 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.750 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.750 257641 INFO nova.compute.claims [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:57:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 121 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 921 B/s wr, 113 op/s
Nov 29 02:57:34 np0005539550 nova_compute[257631]: 2025-11-29 07:57:34.914 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 29 02:57:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 29 02:57:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 29 02:57:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:57:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1574353039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.377 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.384 257641 DEBUG nova.compute.provider_tree [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.414 257641 DEBUG nova.scheduler.client.report [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.441 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.442 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:57:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:35.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.489 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.490 257641 DEBUG nova.network.neutron [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.509 257641 INFO nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.527 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.658 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.659 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.660 257641 INFO nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Creating image(s)#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.695 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.734 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.764 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.768 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.791 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.833 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.834 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.834 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.835 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.867 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:35 np0005539550 nova_compute[257631]: 2025-11-29 07:57:35.872 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:35.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.029 257641 DEBUG nova.network.neutron [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.030 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.253 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.342 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] resizing rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.465 257641 DEBUG nova.objects.instance [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.572 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.572 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Ensure instance console log exists: /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.573 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.574 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.574 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.576 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.580 257641 WARNING nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.585 257641 DEBUG nova.virt.libvirt.host [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.585 257641 DEBUG nova.virt.libvirt.host [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.588 257641 DEBUG nova.virt.libvirt.host [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.589 257641 DEBUG nova.virt.libvirt.host [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.590 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.590 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.591 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.591 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.591 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.592 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.592 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.592 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.593 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.593 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.593 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.593 257641 DEBUG nova.virt.hardware [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.597 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:36 np0005539550 nova_compute[257631]: 2025-11-29 07:57:36.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 125 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 569 KiB/s wr, 102 op/s
Nov 29 02:57:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2854301721' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.035 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.065 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.069 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:37 np0005539550 podman[280467]: 2025-11-29 07:57:37.339846069 +0000 UTC m=+0.080246742 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 02:57:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:57:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824221280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.517 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.520 257641 DEBUG nova.objects.instance [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_devices' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.541 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <uuid>fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842</uuid>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <name>instance-0000001b</name>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <nova:name>tempest-MigrationsAdminTest-server-1174659691</nova:name>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:57:36</nova:creationTime>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <entry name="serial">fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842</entry>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <entry name="uuid">fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842</entry>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/console.log" append="off"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:57:37 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:57:37 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:57:37 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:57:37 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
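[editor's note] The XML dumped above is the guest definition Nova handed to libvirt: a custom Nehalem CPU, an RBD-backed virtio root disk plus a SATA CD-ROM for the config drive, both listing the three Ceph monitors and authenticating as the cephx user "openstack". As a minimal sketch (stdlib only; the helper is illustrative, not Nova code), the RBD sources can be pulled out of such a document like this:

```python
# Illustrative only -- not Nova code. Extracts RBD disk sources from a
# libvirt domain XML shaped like the one logged above.
import xml.etree.ElementTree as ET

DOMAIN_XML = """
<domain>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

def rbd_sources(domain_xml):
    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is None or src.get("protocol") != "rbd":
            continue
        mons = ["%s:%s" % (h.get("name"), h.get("port"))
                for h in src.findall("host")]
        yield src.get("name"), mons

for image, mons in rbd_sources(DOMAIN_XML):
    print(image, "->", ", ".join(mons))
```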
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.612 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.613 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.617 257641 INFO nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Using config drive#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.642 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
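[editor's note] Before building the config drive, rbd_utils probes whether the "_disk.config" image already exists in the "vms" pool. A sketch of the same existence check, assuming the python3-rados/python3-rbd bindings and a reachable cluster (pool, image name, and client id taken from the log):

```python
# Sketch, assuming python3-rados/python3-rbd; mirrors the
# "rbd image ... does not exist" probe logged above.
import rados
import rbd

def rbd_image_exists(pool, name,
                     conf="/etc/ceph/ceph.conf", client_id="openstack"):
    with rados.Rados(conffile=conf, rados_id=client_id) as cluster:
        with cluster.open_ioctx(pool) as ioctx:
            try:
                with rbd.Image(ioctx, name, read_only=True):
                    return True
            except rbd.ImageNotFound:
                return False

print(rbd_image_exists("vms",
                       "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config"))
```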
Nov 29 02:57:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:37.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.982 257641 INFO nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Creating config drive at /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/disk.config#033[00m
Nov 29 02:57:37 np0005539550 nova_compute[257631]: 2025-11-29 07:57:37.987 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxqjbbfgf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:38 np0005539550 nova_compute[257631]: 2025-11-29 07:57:38.112 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxqjbbfgf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
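[editor's note] The config drive is an ISO labelled "config-2", built with mkisofs from a temporary staging directory of metadata files. A runnable approximation of the logged invocation via subprocess (flags and publisher string copied from the log; paths are placeholders):

```python
# Approximation of the mkisofs command logged above; flags copied from
# the log, paths are placeholders.
import subprocess

def build_config_drive(output_iso, staging_dir):
    cmd = [
        "/usr/bin/mkisofs", "-o", output_iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        staging_dir,
    ]
    subprocess.run(cmd, check=True)

# build_config_drive("/var/lib/nova/instances/<uuid>/disk.config",
#                    "/tmp/tmpxqjbbfgf")
```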
Nov 29 02:57:38 np0005539550 nova_compute[257631]: 2025-11-29 07:57:38.146 257641 DEBUG nova.storage.rbd_utils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:57:38 np0005539550 nova_compute[257631]: 2025-11-29 07:57:38.150 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/disk.config fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:38 np0005539550 nova_compute[257631]: 2025-11-29 07:57:38.674 257641 DEBUG oslo_concurrency.processutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/disk.config fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:38 np0005539550 nova_compute[257631]: 2025-11-29 07:57:38.675 257641 INFO nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Deleting local config drive /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/disk.config because it was imported into RBD.#033[00m
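[editor's note] Because this deployment keeps instance disks in Ceph, the freshly built ISO is pushed into the "vms" pool with "rbd import" and the local copy is then deleted, exactly as the two lines above show. A sketch of that import-then-delete step (CLI flags copied from the logged command):

```python
# Sketch of the import-then-delete step; flags copied from the logged
# "rbd import" invocation.
import os
import subprocess

def import_config_drive(local_path, pool, image_name):
    subprocess.run(
        ["rbd", "import", "--pool", pool, local_path, image_name,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(local_path)  # the RBD copy is now the authoritative one
```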
Nov 29 02:57:38 np0005539550 systemd-machined[216673]: New machine qemu-14-instance-0000001b.
Nov 29 02:57:38 np0005539550 systemd[1]: Started Virtual Machine qemu-14-instance-0000001b.
Nov 29 02:57:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 163 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.9 MiB/s wr, 127 op/s
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.336 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.337 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.337 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403059.3361008, fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.337 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.343 257641 INFO nova.virt.libvirt.driver [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance spawned successfully.#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.344 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.362 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.369 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.373 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.374 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.375 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.375 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.376 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.376 257641 DEBUG nova.virt.libvirt.driver [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
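[editor's note] After a successful spawn the driver records the bus/model defaults it chose, so later operations (resize, rebuild, attach) keep using the same values even if global defaults change. Illustrative only (the "image_" key prefix is an assumption about how Nova persists image properties in instance system metadata, not an API call):

```python
# Illustrative: the six defaults registered for this instance in the
# lines above, keyed with the "image_" prefix Nova conventionally uses
# for image properties in instance system metadata (assumption).
defaults = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}
system_metadata = {"image_%s" % k: v for k, v in defaults.items()}
print(system_metadata)
```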
Nov 29 02:57:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.484 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.484 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403059.336209, fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.485 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] VM Started (Lifecycle Event)#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.604 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.608 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.630 257641 INFO nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Took 3.97 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.631 257641 DEBUG nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:57:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:57:39.704 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.727 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
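[editor's note] Each libvirt lifecycle event ("Resumed", "Started") triggers a power-state sync. Here the DB still records power_state 0 (NOSTATE) while libvirt reports 1 (RUNNING), but because task_state is still "spawning" the sync is deliberately skipped rather than fighting the in-flight build. A condensed sketch of that decision (state numbering follows Nova convention; the helper is illustrative, not Nova source):

```python
# Condensed sketch of the "pending task -> skip sync" decision seen in
# the log lines above. 0 = NOSTATE, 1 = RUNNING per Nova convention.
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return "instance has a pending task (%s); skip" % task_state
    if db_power_state != vm_power_state:
        return "reconcile: update DB power_state"
    return "in sync"

print(sync_power_state(NOSTATE, RUNNING, "spawning"))
```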
Nov 29 02:57:39 np0005539550 nova_compute[257631]: 2025-11-29 07:57:39.816 257641 INFO nova.compute.manager [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Took 5.11 seconds to build instance.#033[00m
Nov 29 02:57:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:39.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 29 02:57:40 np0005539550 nova_compute[257631]: 2025-11-29 07:57:40.059 257641 DEBUG oslo_concurrency.lockutils [None req-30767677-6cca-418e-8bc2-d40b831f2b47 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:57:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 29 02:57:40 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 29 02:57:40 np0005539550 nova_compute[257631]: 2025-11-29 07:57:40.763 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 187 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.2 MiB/s wr, 159 op/s
Nov 29 02:57:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:41.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:41 np0005539550 nova_compute[257631]: 2025-11-29 07:57:41.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:41.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 187 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.2 MiB/s wr, 217 op/s
Nov 29 02:57:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:43 np0005539550 nova_compute[257631]: 2025-11-29 07:57:43.761 257641 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 02:57:43 np0005539550 nova_compute[257631]: 2025-11-29 07:57:43.762 257641 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 02:57:43 np0005539550 nova_compute[257631]: 2025-11-29 07:57:43.762 257641 DEBUG nova.network.neutron [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 02:57:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.105 257641 DEBUG nova.network.neutron [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.446 257641 DEBUG nova.network.neutron [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.468 257641 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.567 257641 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.568 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Creating file /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/a5cfa497bc4b4f67a2ca4116bddd6933.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.568 257641 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/a5cfa497bc4b4f67a2ca4116bddd6933.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 187 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.994 257641 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/a5cfa497bc4b4f67a2ca4116bddd6933.tmp" returned: 1 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.995 257641 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842/a5cfa497bc4b4f67a2ca4116bddd6933.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.996 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Creating directory /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 02:57:44 np0005539550 nova_compute[257631]: 2025-11-29 07:57:44.996 257641 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:57:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:45 np0005539550 nova_compute[257631]: 2025-11-29 07:57:45.212 257641 DEBUG oslo_concurrency.processutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" returned: 0 in 0.216s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
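[editor's note] The resize path first probes the target host by touching a temp file over a non-interactive ssh session (BatchMode=yes); the touch returns 1 because the instance directory does not exist there yet, so remotefs falls back to "mkdir -p", which succeeds. A sketch reproducing that probe-then-create pattern (host and path from the log; the helper and the probe filename are illustrative):

```python
# Sketch of the remote probe-then-create pattern in the log; the probe
# filename here is a placeholder, not the random .tmp name Nova uses.
import subprocess

def ensure_remote_dir(host, path):
    ssh = ["ssh", "-o", "BatchMode=yes", host]
    probe = subprocess.run(ssh + ["touch", path + "/.probe.tmp"])
    if probe.returncode != 0:          # directory missing: create it
        subprocess.run(ssh + ["mkdir", "-p", path], check=True)

# ensure_remote_dir("192.168.122.102",
#     "/var/lib/nova/instances/fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842")
```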
Nov 29 02:57:45 np0005539550 nova_compute[257631]: 2025-11-29 07:57:45.218 257641 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:57:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:45.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:45 np0005539550 nova_compute[257631]: 2025-11-29 07:57:45.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:45.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:46 np0005539550 nova_compute[257631]: 2025-11-29 07:57:46.708 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 187 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.8 MiB/s wr, 184 op/s
Nov 29 02:57:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:57:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:47.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:57:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 213 MiB data, 439 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.8 MiB/s wr, 158 op/s
Nov 29 02:57:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:50 np0005539550 nova_compute[257631]: 2025-11-29 07:57:50.767 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 236 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 116 op/s
Nov 29 02:57:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:51 np0005539550 nova_compute[257631]: 2025-11-29 07:57:51.743 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:51.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 236 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Nov 29 02:57:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:53.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:53.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 236 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Nov 29 02:57:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:57:55 np0005539550 nova_compute[257631]: 2025-11-29 07:57:55.270 257641 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 02:57:55 np0005539550 podman[280675]: 2025-11-29 07:57:55.350048595 +0000 UTC m=+0.068560573 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 02:57:55 np0005539550 podman[280674]: 2025-11-29 07:57:55.384906706 +0000 UTC m=+0.103418954 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:57:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:57:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:55.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:57:55 np0005539550 nova_compute[257631]: 2025-11-29 07:57:55.769 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:57:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:55.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:57:56 np0005539550 nova_compute[257631]: 2025-11-29 07:57:56.745 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:57:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 248 MiB data, 475 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.8 MiB/s wr, 162 op/s
Nov 29 02:57:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:57.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:57:57.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:57:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:57:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 267 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 29 02:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:57:59
Nov 29 02:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'cephfs.cephfs.data', 'volumes', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta']
Nov 29 02:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:57:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:57:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:57:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:57:59.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:57:59 np0005539550 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 29 02:57:59 np0005539550 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Consumed 14.593s CPU time.
Nov 29 02:57:59 np0005539550 systemd-machined[216673]: Machine qemu-14-instance-0000001b terminated.
Nov 29 02:58:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:00.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.294 257641 INFO nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance shutdown successfully after 15 seconds.#033[00m
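[editor's note] The clean shutdown visible across these lines is a retry loop: send a graceful (ACPI) shutdown, poll, resend while the guest is still in state 1, and it finally powered off after ~15 seconds (one resend at the 10-second mark). A minimal polling sketch with the python3-libvirt bindings, assuming a local qemu:///system connection; this is not Nova's _clean_shutdown implementation:

```python
# Minimal sketch, assuming python3-libvirt; mirrors the send/poll/resend
# loop in the log (resend after ~10s, success at ~15s).
import time
import libvirt

def clean_shutdown(name, timeout=60, resend_every=10):
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(name)
        dom.shutdown()                      # graceful ACPI shutdown
        for elapsed in range(1, timeout + 1):
            time.sleep(1)
            if not dom.isActive():
                return True
            if elapsed % resend_every == 0:
                dom.shutdown()              # guest still up: resend
        return False
    finally:
        conn.close()

# clean_shutdown("instance-0000001b")
```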
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.300 257641 INFO nova.virt.libvirt.driver [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance destroyed successfully.#033[00m
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.303 257641 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.304 257641 DEBUG nova.virt.libvirt.driver [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.463 257641 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.464 257641 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.464 257641 DEBUG oslo_concurrency.lockutils [None req-7d293f3b-919e-4725-a104-9c73aa86f828 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
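[editor's note] The "-events" lock above is held just long enough to clear the per-instance event queue (waited 0.000s, held 0.000s). All of the lock lines in this log come from oslo.concurrency's named locks; a tiny example of that API (lockutils.lock is the real call; the body is illustrative):

```python
# Tiny example of the oslo.concurrency named-lock API these log lines
# come from; the critical-section body is illustrative.
from oslo_concurrency import lockutils

with lockutils.lock("fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842-events"):
    # critical section, e.g. clear pending instance events
    pass
```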
Nov 29 02:58:00 np0005539550 nova_compute[257631]: 2025-11-29 07:58:00.772 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 269 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 152 op/s
Nov 29 02:58:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:01.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 29 02:58:01 np0005539550 nova_compute[257631]: 2025-11-29 07:58:01.748 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 29 02:58:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 29 02:58:01 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 29 02:58:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:02.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 269 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 103 op/s
Nov 29 02:58:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:03.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:04.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.326 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.326 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.340 257641 DEBUG nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.414 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.415 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.424 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.424 257641 INFO nova.compute.claims [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:58:04 np0005539550 nova_compute[257631]: 2025-11-29 07:58:04.648 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 269 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 103 op/s
Nov 29 02:58:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/42215081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.107 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
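[editor's note] Before claiming resources for the next instance, the RBD backend shells out to "ceph df --format=json" (the mon's audit log two lines up shows the dispatched command) to learn pool capacity. A sketch running the same command and pulling one pool's stats out of the JSON; the field names ("pools", "name", "stats") match recent Ceph releases but vary across versions, so treat this as illustrative:

```python
# Sketch: run "ceph df --format=json" as the log does and extract one
# pool's stats block. JSON field names vary by Ceph release.
import json
import subprocess

def pool_stats(pool):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    for entry in json.loads(out).get("pools", []):
        if entry.get("name") == pool:
            return entry.get("stats", {})
    return None

# print(pool_stats("vms"))
```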
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.113 257641 DEBUG nova.compute.provider_tree [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.128 257641 DEBUG nova.scheduler.client.report [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
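The "Inventory has not changed" outcome above boils down to a structural comparison between the freshly computed inventory mapping and the cached copy; since every field is a scalar, plain equality decides it. A sketch with the logged values (the helper name is hypothetical, not nova's code):

    # Sketch: the no-op update decision, using the inventory from the log.
    cached = {
        'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8,
                 'step_size': 1, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1,
                      'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20,
                    'step_size': 1, 'allocation_ratio': 0.9},
    }

    def inventory_changed(cached, fresh):
        return cached != fresh  # dict equality recurses over the scalars

    print(inventory_changed(cached, dict(cached)))  # False -> skip the update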
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.166 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.167 257641 DEBUG nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.215 257641 DEBUG nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.315 257641 INFO nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.350 257641 DEBUG nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.417 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.418 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.418 257641 DEBUG nova.compute.manager [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Going to confirm migration 8 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.467 257641 DEBUG nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.468 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.469 257641 INFO nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating image(s)
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.499 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:05.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.535 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.566 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.569 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.640 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
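The prlimit wrapper in the command above caps the prober at 1 GiB of address space and 30 CPU-seconds so that inspecting an untrusted image cannot wedge the compute host. A sketch of the same invocation, using the path from the log:

    # Sketch: resource-limited qemu-img probe, mirroring the logged command.
    import json
    import subprocess

    BASE = ("/var/lib/nova/instances/_base/"
            "f62ef5f82502d01c82174408aec7f3ac942e2488")

    out = subprocess.check_output(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824",   # address-space cap: 1 GiB
         "--cpu=30",          # CPU-time cap: 30 s
         "--", "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"])
    info = json.loads(out)
    print(info["format"], info["virtual-size"])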
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.641 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.642 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.642 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.676 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.683 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.959 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.960 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.960 257641 DEBUG nova.network.neutron [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:58:05 np0005539550 nova_compute[257631]: 2025-11-29 07:58:05.961 257641 DEBUG nova.objects.instance [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'info_cache' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:06.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.125 257641 DEBUG nova.network.neutron [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.339 257641 DEBUG nova.network.neutron [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.353 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.354 257641 DEBUG nova.objects.instance [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.361 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.515 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] resizing rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
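Creating the root disk here is a two-step affair: import the cached base file into the vms pool, then grow it to the flavor's root_gb (1 GiB, i.e. the 1073741824 bytes in the resize line). A sketch of the equivalent CLI calls, with paths and names taken from the log:

    # Sketch: import the base image into RBD, then grow it to flavor size.
    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "f62ef5f82502d01c82174408aec7f3ac942e2488")
    image = "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(["rbd", "import", "--pool", "vms", base, image,
                           "--image-format=2"] + ceph)
    root_gb = 1  # m1.nano, per the flavor dump later in this trace
    # rbd --size is in MiB by default: 1024 MiB == 1073741824 bytes.
    subprocess.check_call(["rbd", "resize", "--size", str(root_gb * 1024),
                           "vms/" + image] + ceph)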
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.549 257641 DEBUG nova.storage.rbd_utils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] removing snapshot(nova-resize) on rbd image(fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.615 257641 DEBUG nova.objects.instance [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.657 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.658 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Ensure instance console log exists: /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.658 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.658 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.659 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.660 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.664 257641 WARNING nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.669 257641 DEBUG nova.virt.libvirt.host [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.670 257641 DEBUG nova.virt.libvirt.host [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.673 257641 DEBUG nova.virt.libvirt.host [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.674 257641 DEBUG nova.virt.libvirt.host [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.675 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.675 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.676 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.676 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.676 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.676 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.677 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.677 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.677 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.677 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.677 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.678 257641 DEBUG nova.virt.hardware [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
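The topology walk above, from limits 65536:65536:65536 down to the single 1:1:1 result, is an enumeration of sockets x cores x threads factorizations of the vCPU count. A hypothetical sketch of that search (function name and structure are illustrative, not nova's code):

    # Sketch: enumerate CPU topologies whose product equals the vCPU count.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_s) + 1)
                for c in range(1, min(vcpus, max_c) + 1)
                for t in range(1, min(vcpus, max_t) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log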
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.681 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.750 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 301 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 MiB/s wr, 176 op/s
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.940 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 02:58:06 np0005539550 nova_compute[257631]: 2025-11-29 07:58:06.940 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
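The heal pass above applies one visible rule: instances still building are skipped, so with nothing else on this host the candidate list comes out empty. A sketch of that filter (names hypothetical; the rule is read off the two log lines):

    # Sketch: the skip rule behind "Skipping network cache update ...
    # because it is Building" and the empty result that follows.
    def instances_to_heal(instances):
        return [i for i in instances if i["vm_state"] != "building"]

    pending = [{"uuid": "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b",
                "vm_state": "building"}]
    print(instances_to_heal(pending))  # [] -> nothing to update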
Nov 29 02:58:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1093238984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.142 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.169 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.173 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 29 02:58:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:07.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817271554' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.601 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
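The mon dump fetched above supplies the monitor endpoints that reappear as <host> elements in the guest XML generated below. A sketch of pulling them out of the JSON, assuming the usual mon dump layout where each entry carries a public_addr of the form addr:port/nonce:

    # Sketch: monitor endpoints from `ceph mon dump --format=json`.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    mons = json.loads(out)["mons"]
    hosts = [m["public_addr"].split("/")[0] for m in mons]
    print(hosts)  # e.g. ['192.168.122.100:6789', ...]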
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.603 257641 DEBUG nova.objects.instance [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.617 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <uuid>241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</uuid>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <name>instance-0000001e</name>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersAdmin275Test-server-1780743076</nova:name>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:58:06</nova:creationTime>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <nova:user uuid="83cbfda08549451c8499d3e9bfd0d2ff">tempest-ServersAdmin275Test-1435066616-project-member</nova:user>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <nova:project uuid="8f436e4e1ee14fa09904e250330051d0">tempest-ServersAdmin275Test-1435066616</nova:project>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <entry name="serial">241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</entry>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <entry name="uuid">241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</entry>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/console.log" append="off"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:58:07 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:58:07 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:58:07 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:58:07 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
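The finished domain XML above can be read back mechanically, e.g. to audit which monitors each RBD disk points at. A sketch with ElementTree, assuming the dump is held in a string named domain_xml:

    # Sketch: list each disk's target, protocol and RBD endpoints.
    import xml.etree.ElementTree as ET

    root = ET.fromstring(domain_xml)
    for disk in root.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        mons = [(h.get("name"), h.get("port"))
                for h in disk.findall("source/host")]
        print(tgt.get("dev"), src.get("protocol"), src.get("name"), mons)
    # Expected here: vda and sda, both protocol "rbd", three mons each.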
Nov 29 02:58:07 np0005539550 podman[281035]: 2025-11-29 07:58:07.63492464 +0000 UTC m=+0.108505283 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.669 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.669 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.670 257641 INFO nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Using config drive
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.693 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.919 257641 INFO nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating config drive at /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config
Nov 29 02:58:07 np0005539550 nova_compute[257631]: 2025-11-29 07:58:07.925 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp99ilv7my execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:08.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 29 02:58:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 29 02:58:08 np0005539550 nova_compute[257631]: 2025-11-29 07:58:08.054 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp99ilv7my" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
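The config drive is a plain ISO 9660 volume labelled config-2, built with exactly the flags in the logged command. A sketch of the same call against a hypothetical staging directory (nova stages the metadata tree in a tempdir like the /tmp/tmp99ilv7my above):

    # Sketch: build a config-2 ISO with the flags from the log.
    import subprocess

    staging = "/tmp/configdrive-staging"  # hypothetical metadata tree
    iso = ("/var/lib/nova/instances/"
           "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config")

    subprocess.check_call(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging])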
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:58:08 np0005539550 nova_compute[257631]: 2025-11-29 07:58:08.628 257641 DEBUG nova.storage.rbd_utils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:08 np0005539550 nova_compute[257631]: 2025-11-29 07:58:08.631 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:08 np0005539550 nova_compute[257631]: 2025-11-29 07:58:08.675 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:08 np0005539550 nova_compute[257631]: 2025-11-29 07:58:08.676 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:58:08 np0005539550 nova_compute[257631]: 2025-11-29 07:58:08.831 257641 DEBUG oslo_concurrency.processutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 381 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 7.7 MiB/s wr, 308 op/s
Nov 29 02:58:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320054584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.256 257641 DEBUG oslo_concurrency.processutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.262 257641 DEBUG nova.compute.provider_tree [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.370 257641 DEBUG nova.scheduler.client.report [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.405 257641 DEBUG oslo_concurrency.processutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.773s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.405 257641 INFO nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deleting local config drive /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config because it was imported into RBD.
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.422 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:09 np0005539550 systemd-machined[216673]: New machine qemu-15-instance-0000001e.
Nov 29 02:58:09 np0005539550 systemd[1]: Started Virtual Machine qemu-15-instance-0000001e.
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.512 257641 INFO nova.scheduler.client.report [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Deleted allocation for migration 9ecb55cd-42bc-44df-9e05-e6d283102cb3#033[00m
Nov 29 02:58:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:09.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.553 257641 DEBUG oslo_concurrency.lockutils [None req-79941ffe-2e1f-4780-8289-19220a9bfc3a 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.946 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
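The Acquiring/acquired/released triplets around "compute_resources" are emitted by oslo.concurrency's lock helpers, which Nova's ResourceTracker wraps around every accounting change. A minimal sketch of the same pattern with the public lockutils API (the function body is a placeholder, not Nova's code):

    # The lockutils.synchronized decorator produces exactly the
    # Acquiring/acquired/released DEBUG lines seen in this log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # placeholder; Nova prunes its compute-node cache here

    clean_compute_node_cache()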
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.946 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.946 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.995 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403089.9948456, 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.996 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.999 257641 DEBUG nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 02:58:09 np0005539550 nova_compute[257631]: 2025-11-29 07:58:09.999 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.004 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance spawned successfully.#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.005 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 02:58:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:10.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.024 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.032 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
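In the sync line above, the power states are Nova's integer constants (nova/compute/power_state.py): 0 is NOSTATE, 1 is RUNNING. The DB still says NOSTATE while libvirt already reports RUNNING, so the handler would resync, except that the pending "spawning" task makes it skip, as the lines below show. A tiny sketch, with the constant values taken from Nova's source:

    # Nova power-state constants (values from nova/compute/power_state.py).
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    db_power_state, vm_power_state = NOSTATE, RUNNING   # as logged above
    task_state = "spawning"
    if db_power_state != vm_power_state and task_state is None:
        pass  # would resync the DB record; here a task is pending, hence "Skip."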
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.037 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.037 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.038 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.038 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.038 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.039 257641 DEBUG nova.virt.libvirt.driver [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 02:58:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.075 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.076 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403089.9951072, 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.076 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] VM Started (Lifecycle Event)#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.104 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.109 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.115 257641 INFO nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Took 4.65 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.116 257641 DEBUG nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.151 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.218 257641 INFO nova.compute.manager [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Took 5.83 seconds to build instance.#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.262 257641 DEBUG oslo_concurrency.lockutils [None req-bc6da94d-60f6-46b4-8794-a5314649de75 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
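The three durations just logged decompose cleanly: the per-instance build lock was held 5.936 s, the build itself took 5.83 s, and 4.65 s of that was the hypervisor spawn, leaving roughly 1.2 s for the resource claim, network allocation, and block-device prep:

    # Timing breakdown from the three lines above (seconds).
    lock_held, build, spawn = 5.936, 5.83, 4.65
    print(round(build - spawn, 2))      # ~1.18 s of pre-spawn prep
    print(round(lock_held - build, 2))  # ~0.11 s of lock overhead around the build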
Nov 29 02:58:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2934416992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.417 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.487 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.488 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.654 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.655 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=20.816600799560547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.655 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.656 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.761 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.761 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.762 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
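The final resource view cross-checks against the single allocation reported a few lines up ({'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}), assuming used_ram folds in the 512 MB of reserved host memory the way the resource tracker reports it:

    # Cross-check of the "Final resource view" numbers.
    reserved_host_ram_mb = 512                      # MEMORY_MB "reserved"
    alloc = {"VCPU": 1, "MEMORY_MB": 128, "DISK_GB": 1}
    assert reserved_host_ram_mb + alloc["MEMORY_MB"] == 640   # used_ram=640MB
    assert alloc["VCPU"] == 8 - 7                              # used_vcpus=1
    assert alloc["DISK_GB"] == 1                               # used_disk=1GB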
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.777 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:10 np0005539550 nova_compute[257631]: 2025-11-29 07:58:10.796 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 394 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.7 MiB/s wr, 294 op/s
Nov 29 02:58:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489915335' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:11 np0005539550 nova_compute[257631]: 2025-11-29 07:58:11.311 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:11 np0005539550 nova_compute[257631]: 2025-11-29 07:58:11.317 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:11 np0005539550 nova_compute[257631]: 2025-11-29 07:58:11.334 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:11 np0005539550 nova_compute[257631]: 2025-11-29 07:58:11.360 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 02:58:11 np0005539550 nova_compute[257631]: 2025-11-29 07:58:11.360 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:11.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:11 np0005539550 nova_compute[257631]: 2025-11-29 07:58:11.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:12.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:58:12 np0005539550 nova_compute[257631]: 2025-11-29 07:58:12.360 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:12 np0005539550 nova_compute[257631]: 2025-11-29 07:58:12.360 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:12 np0005539550 nova_compute[257631]: 2025-11-29 07:58:12.361 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:12 np0005539550 nova_compute[257631]: 2025-11-29 07:58:12.721 257641 INFO nova.compute.manager [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Rebuilding instance#033[00m
Nov 29 02:58:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 394 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.8 MiB/s wr, 328 op/s
Nov 29 02:58:12 np0005539550 nova_compute[257631]: 2025-11-29 07:58:12.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:12 np0005539550 nova_compute[257631]: 2025-11-29 07:58:12.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 02:58:12 np0005539550 nova_compute[257631]: 2025-11-29 07:58:12.976 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:13 np0005539550 nova_compute[257631]: 2025-11-29 07:58:13.003 257641 DEBUG nova.compute.manager [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:13 np0005539550 nova_compute[257631]: 2025-11-29 07:58:13.104 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:13 np0005539550 nova_compute[257631]: 2025-11-29 07:58:13.117 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:13 np0005539550 nova_compute[257631]: 2025-11-29 07:58:13.131 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'resources' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:13 np0005539550 nova_compute[257631]: 2025-11-29 07:58:13.153 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'migration_context' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:13 np0005539550 nova_compute[257631]: 2025-11-29 07:58:13.173 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 02:58:13 np0005539550 nova_compute[257631]: 2025-11-29 07:58:13.176 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 02:58:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:13.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:14.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 394 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.8 MiB/s wr, 328 op/s
Nov 29 02:58:14 np0005539550 nova_compute[257631]: 2025-11-29 07:58:14.871 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403079.870672, fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 02:58:14 np0005539550 nova_compute[257631]: 2025-11-29 07:58:14.871 257641 INFO nova.compute.manager [-] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] VM Stopped (Lifecycle Event)#033[00m
Nov 29 02:58:14 np0005539550 nova_compute[257631]: 2025-11-29 07:58:14.909 257641 DEBUG nova.compute.manager [None req-a6ebb887-ac96-403e-a311-2d2ee621489f - - - - - -] [instance: fd2e91ac-3a3a-4ccc-a7f9-f2cf2fd8b842] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 02:58:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 29 02:58:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 29 02:58:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:15.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:15 np0005539550 nova_compute[257631]: 2025-11-29 07:58:15.779 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:16.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:58:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 394 MiB data, 592 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 805 KiB/s wr, 159 op/s
Nov 29 02:58:16 np0005539550 nova_compute[257631]: 2025-11-29 07:58:16.850 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:16 np0005539550 nova_compute[257631]: 2025-11-29 07:58:16.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 02:58:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 02:58:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 02:58:17 np0005539550 nova_compute[257631]: 2025-11-29 07:58:17.706 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "6e814e3b-3edb-4f37-8701-c37929994645" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:17 np0005539550 nova_compute[257631]: 2025-11-29 07:58:17.706 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:17 np0005539550 nova_compute[257631]: 2025-11-29 07:58:17.730 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 02:58:17 np0005539550 nova_compute[257631]: 2025-11-29 07:58:17.817 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:17 np0005539550 nova_compute[257631]: 2025-11-29 07:58:17.818 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:17 np0005539550 nova_compute[257631]: 2025-11-29 07:58:17.823 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 02:58:17 np0005539550 nova_compute[257631]: 2025-11-29 07:58:17.823 257641 INFO nova.compute.claims [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.005 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:58:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/344461029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.469 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.475 257641 DEBUG nova.compute.provider_tree [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.514 257641 DEBUG nova.scheduler.client.report [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.556 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.557 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.634 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.635 257641 DEBUG nova.network.neutron [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 02:58:18 np0005539550 nova_compute[257631]: 2025-11-29 07:58:18.746 257641 INFO nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 02:58:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 394 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 720 KiB/s wr, 216 op/s
Nov 29 02:58:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:58:18.931 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:58:18.932 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:58:18.932 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.093 257641 DEBUG nova.network.neutron [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.093 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.109 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 02:58:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:19.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008500801170202866 of space, bias 1.0, pg target 2.55024035106086 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:58:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
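Each pg_autoscaler line above applies one rule: pg target = usage_ratio * bias * (target PGs per OSD * number of OSDs), after which the result is quantized to a power of two and compared with the current pg_num before any change is proposed. With the 3 up OSDs from osdmap e171 and the default mon_target_pg_per_osd of 100 (both assumptions, though consistent with the logged numbers), the 'vms' line reproduces exactly; capacity accounting can shift the effective multiplier slightly for other pools:

    # Reproduce the pg_autoscaler "pg target" for pool 'vms' from the log.
    # Assumes mon_target_pg_per_osd=100 and 3 up OSDs (osdmap e171).
    usage_ratio = 0.008500801170202866   # "using ... of space"
    bias = 1.0
    print(usage_ratio * bias * 100 * 3)  # ~2.55024035106086, the logged target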
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.874 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.875 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.875 257641 INFO nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Creating image(s)#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.902 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.926 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.957 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:19 np0005539550 nova_compute[257631]: 2025-11-29 07:58:19.960 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:20.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:20 np0005539550 nova_compute[257631]: 2025-11-29 07:58:20.034 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
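Nova runs qemu-img info under oslo.concurrency's prlimit wrapper (the CMD above) so that probing a hostile or corrupt image cannot exhaust the host: --as caps the prober's address space at 1 GiB and --cpu caps it at 30 CPU-seconds. A sketch reproducing the same guarded probe:

    # Guarded image probe, mirroring the prlimit-wrapped command in the log.
    import json
    import subprocess

    def qemu_img_info(path):
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=%d" % (1 << 30),   # 1073741824-byte address-space cap
            "--cpu=30",              # 30 s CPU cap
            "--", "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return json.loads(out.stdout)

    # qemu_img_info("/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488")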
Nov 29 02:58:20 np0005539550 nova_compute[257631]: 2025-11-29 07:58:20.035 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:20 np0005539550 nova_compute[257631]: 2025-11-29 07:58:20.036 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:20 np0005539550 nova_compute[257631]: 2025-11-29 07:58:20.036 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:20 np0005539550 nova_compute[257631]: 2025-11-29 07:58:20.060 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:20 np0005539550 nova_compute[257631]: 2025-11-29 07:58:20.064 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6e814e3b-3edb-4f37-8701-c37929994645_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:20 np0005539550 nova_compute[257631]: 2025-11-29 07:58:20.783 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 394 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 48 KiB/s wr, 228 op/s
Nov 29 02:58:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:21.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
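The recurring anonymous "HEAD / HTTP/1.0" entries from 192.168.122.100 and .102 look like load-balancer health probes against the radosgw beast frontend; a sketch of an equivalent probe (the port is an assumption, it does not appear in these lines):

    # Sketch only: issue one anonymous HEAD probe like the ones logged above.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while the gateway is healthy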
Nov 29 02:58:21 np0005539550 nova_compute[257631]: 2025-11-29 07:58:21.727 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6e814e3b-3edb-4f37-8701-c37929994645_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:21 np0005539550 nova_compute[257631]: 2025-11-29 07:58:21.824 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] resizing rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
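The resize target above (1073741824) is just the flavor's root_gb=1 (visible in the Flavor dump further down) converted to bytes; a one-line check using oslo.utils:

    # Sketch only: 1 GiB in bytes, matching the logged resize argument.
    from oslo_utils import units

    root_gb = 1                               # root_gb from the flavor
    assert root_gb * units.Gi == 1073741824   # units.Gi == 1024 ** 3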
Nov 29 02:58:21 np0005539550 nova_compute[257631]: 2025-11-29 07:58:21.897 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:21 np0005539550 nova_compute[257631]: 2025-11-29 07:58:21.976 257641 DEBUG nova.objects.instance [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:22.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.082 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.083 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Ensure instance console log exists: /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.084 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.084 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.084 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
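The acquire/release pair around "vgpu_resources" is oslo.concurrency's synchronized-decorator pattern; a hedged sketch (the function body is illustrative, not Nova's code):

    # Sketch only: the wrapper that emits the waited/held timings above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs_example(requested_mdevs):
        # Runs with the in-process "vgpu_resources" lock held; with no
        # vGPU request in the flavor it returns immediately, hence the
        # 0.000s hold time in the log.
        return []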
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.086 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.090 257641 WARNING nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.098 257641 DEBUG nova.virt.libvirt.host [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.098 257641 DEBUG nova.virt.libvirt.host [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.102 257641 DEBUG nova.virt.libvirt.host [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.103 257641 DEBUG nova.virt.libvirt.host [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.104 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.104 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:58:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f77cbd7d-f3d5-4a61-9048-1fd58898bfe7',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1547071603',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.104 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.105 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.105 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.105 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.105 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.105 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.106 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.106 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.106 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.106 257641 DEBUG nova.virt.hardware [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
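The hardware.py lines above enumerate (sockets, cores, threads) triples whose product equals the vCPU count, then sort them against the flavor/image preferences; with 1 vCPU and no limits, 1:1:1 is the only candidate. A simplified sketch of that enumeration (not Nova's exact code):

    # Sketch only: divisor triples of the vCPU count.
    def possible_topologies(vcpus):
        topos = []
        for sockets in range(1, vcpus + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, per_socket + 1):
                if per_socket % cores:
                    continue
                topos.append((sockets, cores, per_socket // cores))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches the log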
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.109 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380738334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.542 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.566 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:22 np0005539550 nova_compute[257631]: 2025-11-29 07:58:22.569 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 423 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Nov 29 02:58:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947003478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.030 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
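The monitor addresses that appear in the generated disk XML below can be read straight out of ceph mon dump --format=json; a sketch, with field names as emitted by current Ceph releases (older releases may differ):

    # Sketch only: derive the <host name=.../> list from mon dump JSON.
    import json
    import subprocess

    dump = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']))
    hosts = [m['public_addr'].split(':')[0] for m in dump['mons']]
    # e.g. ['192.168.122.100', '192.168.122.102', '192.168.122.101']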
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.034 257641 DEBUG nova.objects.instance [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.054 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <uuid>6e814e3b-3edb-4f37-8701-c37929994645</uuid>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <name>instance-0000001f</name>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <nova:name>tempest-MigrationsAdminTest-server-1156943130</nova:name>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:58:22</nova:creationTime>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <nova:flavor name="tempest-test_resize_flavor_-1547071603">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <entry name="serial">6e814e3b-3edb-4f37-8701-c37929994645</entry>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <entry name="uuid">6e814e3b-3edb-4f37-8701-c37929994645</entry>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6e814e3b-3edb-4f37-8701-c37929994645_disk">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6e814e3b-3edb-4f37-8701-c37929994645_disk.config">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/console.log" append="off"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:58:23 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:58:23 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:58:23 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:58:23 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
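Everything of interest in the domain XML above is plain attribute data; a standard-library sketch that extracts the RBD disk sources (domain_xml is assumed to hold the <domain>...</domain> text):

    # Sketch only: pull the rbd sources back out of the logged domain XML.
    import xml.etree.ElementTree as ET

    dom = ET.fromstring(domain_xml)   # domain_xml: the <domain>...</domain> text above
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            mons = [h.get('name') for h in src.findall('host')]
            print(src.get('name'), mons)
    # vms/6e814e3b-3edb-4f37-8701-c37929994645_disk
    #   ['192.168.122.100', '192.168.122.102', '192.168.122.101']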
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.173 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.173 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.174 257641 INFO nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Using config drive
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.203 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.225 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 02:58:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:23.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.936 257641 INFO nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Creating config drive at /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/disk.config
Nov 29 02:58:23 np0005539550 nova_compute[257631]: 2025-11-29 07:58:23.942 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmple6t90kg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:24.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:24 np0005539550 nova_compute[257631]: 2025-11-29 07:58:24.077 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmple6t90kg" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:24 np0005539550 nova_compute[257631]: 2025-11-29 07:58:24.179 257641 DEBUG nova.storage.rbd_utils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rbd image 6e814e3b-3edb-4f37-8701-c37929994645_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:24 np0005539550 nova_compute[257631]: 2025-11-29 07:58:24.183 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/disk.config 6e814e3b-3edb-4f37-8701-c37929994645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:24 np0005539550 nova_compute[257631]: 2025-11-29 07:58:24.374 257641 DEBUG oslo_concurrency.processutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/disk.config 6e814e3b-3edb-4f37-8701-c37929994645_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:24 np0005539550 nova_compute[257631]: 2025-11-29 07:58:24.375 257641 INFO nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Deleting local config drive /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/disk.config because it was imported into RBD.
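The config-drive step above is three operations: build an ISO9660 image with mkisofs (flags exactly as in the CMD line), import it into the vms pool, and delete the local copy. A sketch of the build step, with an illustrative staging-directory argument:

    # Sketch only: the mkisofs invocation from the log, parameterised.
    import subprocess

    def build_config_drive(iso_path, staging_dir, publisher):
        subprocess.check_call([
            '/usr/bin/mkisofs', '-o', iso_path,
            '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
            '-publisher', publisher, '-quiet', '-J', '-r',
            '-V', 'config-2', staging_dir])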
Nov 29 02:58:24 np0005539550 systemd-machined[216673]: New machine qemu-16-instance-0000001f.
Nov 29 02:58:24 np0005539550 systemd[1]: Started Virtual Machine qemu-16-instance-0000001f.
Nov 29 02:58:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 423 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.4 MiB/s wr, 171 op/s
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.097 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403105.0970485, 6e814e3b-3edb-4f37-8701-c37929994645 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.098 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Resumed (Lifecycle Event)
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.101 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.101 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.105 257641 INFO nova.virt.libvirt.driver [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance spawned successfully.
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.105 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.158 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.162 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.162 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.163 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.163 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.163 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.164 257641 DEBUG nova.virt.libvirt.driver [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.168 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:58:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.205 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.206 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403105.0982518, 6e814e3b-3edb-4f37-8701-c37929994645 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.206 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Started (Lifecycle Event)
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.244 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.246 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
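The integers in the two sync messages are Nova power states (0 is NOSTATE, 1 is RUNNING in nova.compute.power_state), and the handler skips the sync while a task such as 'spawning' is still pending, which is exactly the "Skip." outcome logged here. A sketch of that guard:

    # Sketch only: the guard behind "pending task (spawning). Skip."
    NOSTATE, RUNNING = 0, 1   # values from nova.compute.power_state

    def should_sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:       # e.g. 'spawning'
            return False                 # -> "Skip." in the log
        return db_state != vm_state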
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.273 257641 INFO nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Took 5.40 seconds to spawn the instance on the hypervisor.
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.274 257641 DEBUG nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.275 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.353 257641 INFO nova.compute.manager [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Took 7.56 seconds to build instance.
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.381 257641 DEBUG oslo_concurrency.lockutils [None req-83df6b65-2d67-4a3f-aaab-fe634a709099 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:25.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:25 np0005539550 nova_compute[257631]: 2025-11-29 07:58:25.784 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:26.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:26 np0005539550 podman[281672]: 2025-11-29 07:58:26.14036585 +0000 UTC m=+0.061402779 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 02:58:26 np0005539550 podman[281671]: 2025-11-29 07:58:26.145794189 +0000 UTC m=+0.066701525 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
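The two health_status=healthy events come from podman's built-in healthcheck (the test key in each container's config_data points at /openstack/healthcheck); the same status can be queried on demand, assuming podman inspect and the container name from the log:

    # Sketch only: read the container's health state back from podman.
    import subprocess

    status = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'ovn_metadata_agent'], text=True).strip()
    print(status)   # 'healthy', matching the event above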
Nov 29 02:58:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 462 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 214 op/s
Nov 29 02:58:26 np0005539550 nova_compute[257631]: 2025-11-29 07:58:26.899 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:27.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 02:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:28.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 494 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.9 MiB/s wr, 247 op/s
Nov 29 02:58:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:30.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:58:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:30 np0005539550 nova_compute[257631]: 2025-11-29 07:58:30.786 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 500 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 6.0 MiB/s wr, 231 op/s
Nov 29 02:58:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 02:58:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:31.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:31 np0005539550 nova_compute[257631]: 2025-11-29 07:58:31.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:32.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 501 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.0 MiB/s wr, 211 op/s
Nov 29 02:58:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:33.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:34.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:34 np0005539550 nova_compute[257631]: 2025-11-29 07:58:34.283 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
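The two "resending shutdown" lines (after 10 s, then 21 s) reflect Nova's _clean_shutdown loop re-issuing the ACPI request while instance 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b stays in state 1 (running); a simplified sketch, with guest as a hypothetical wrapper exposing shutdown() and get_power_state():

    # Sketch only: re-send ACPI shutdown until the guest stops or time runs out.
    import time

    RUNNING = 1

    def clean_shutdown(guest, timeout=60, retry_interval=10):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if guest.get_power_state() != RUNNING:
                return True
            guest.shutdown()             # ACPI request; the guest may ignore it
            time.sleep(retry_interval)
        return False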
Nov 29 02:58:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 501 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 200 op/s
Nov 29 02:58:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:35.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:35 np0005539550 nova_compute[257631]: 2025-11-29 07:58:35.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:58:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:36.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:58:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 501 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.9 MiB/s wr, 201 op/s
Nov 29 02:58:36 np0005539550 nova_compute[257631]: 2025-11-29 07:58:36.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:36 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:37.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d634c00f-4950-4d07-bbbf-13fbf4c31095 does not exist
Nov 29 02:58:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3f0a39ce-d578-4d3d-bde4-0bb939cbf051 does not exist
Nov 29 02:58:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fb5e43d3-accb-4941-87e7-414985a27966 does not exist
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:58:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
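The handle_command entries above show the cephadm mgr module driving the monitor with JSON mon_command payloads: config rm for the stale per-host osd_memory_target, config generate-minimal-conf, and auth get for the admin and bootstrap-osd keys. The same calls can be issued from the python-rados binding; a minimal sketch, assuming /etc/ceph/ceph.conf and an admin keyring are readable on the host:

    import json
    import rados

    # Connect using the local cluster config (assumption: admin keyring present).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Same JSON shape as the audited mon_command payloads above.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outbuf.decode(), outs)

    cluster.shutdown()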
Nov 29 02:58:38 np0005539550 nova_compute[257631]: 2025-11-29 07:58:38.010 257641 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:58:38 np0005539550 nova_compute[257631]: 2025-11-29 07:58:38.010 257641 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:58:38 np0005539550 nova_compute[257631]: 2025-11-29 07:58:38.010 257641 DEBUG nova.network.neutron [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:58:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:58:38 np0005539550 podman[281893]: 2025-11-29 07:58:38.171608692 +0000 UTC m=+0.092654918 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
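The health_status=healthy event above comes from podman's healthcheck timer executing the configured /openstack/healthcheck test inside the ovn_controller container. The same check can be triggered on demand; a short sketch (container name taken from the log line; exit status 0 means healthy):

    import subprocess

    # 'podman healthcheck run' executes the container's configured check once.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else "unhealthy: " + (result.stdout or result.stderr))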
Nov 29 02:58:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 02:58:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:58:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:58:38 np0005539550 nova_compute[257631]: 2025-11-29 07:58:38.304 257641 DEBUG nova.network.neutron [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:58:38 np0005539550 podman[282034]: 2025-11-29 07:58:38.584838701 +0000 UTC m=+0.044140439 container create 791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:58:38 np0005539550 systemd[1]: Started libpod-conmon-791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f.scope.
Nov 29 02:58:38 np0005539550 podman[282034]: 2025-11-29 07:58:38.563980318 +0000 UTC m=+0.023282086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:58:38 np0005539550 podman[282034]: 2025-11-29 07:58:38.707490206 +0000 UTC m=+0.166791974 container init 791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 02:58:38 np0005539550 podman[282034]: 2025-11-29 07:58:38.716162067 +0000 UTC m=+0.175463805 container start 791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:58:38 np0005539550 stoic_bhabha[282050]: 167 167
Nov 29 02:58:38 np0005539550 systemd[1]: libpod-791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f.scope: Deactivated successfully.
Nov 29 02:58:38 np0005539550 podman[282034]: 2025-11-29 07:58:38.800745358 +0000 UTC m=+0.260047116 container attach 791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:58:38 np0005539550 podman[282034]: 2025-11-29 07:58:38.802216556 +0000 UTC m=+0.261518294 container died 791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 02:58:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 503 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.5 MiB/s wr, 141 op/s
Nov 29 02:58:38 np0005539550 nova_compute[257631]: 2025-11-29 07:58:38.894 257641 DEBUG nova.network.neutron [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:58:38 np0005539550 nova_compute[257631]: 2025-11-29 07:58:38.920 257641 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:58:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f5c0b0872cfc6557c6471c38a87d722ca79ea2b665afc3b145382f5b3a3fd3f6-merged.mount: Deactivated successfully.
Nov 29 02:58:39 np0005539550 podman[282034]: 2025-11-29 07:58:39.012587451 +0000 UTC m=+0.471889189 container remove 791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_bhabha, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.033 257641 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.034 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Creating file /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/46c48fde90c44c1989d5e09dee2e1084.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.034 257641 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/46c48fde90c44c1989d5e09dee2e1084.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:39 np0005539550 systemd[1]: libpod-conmon-791ffb14fa0e72f2a6ab8d20cded31901d9b924f4e039abc1de45348a82a1c3f.scope: Deactivated successfully.
Nov 29 02:58:39 np0005539550 podman[282074]: 2025-11-29 07:58:39.193594826 +0000 UTC m=+0.045183556 container create 8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 02:58:39 np0005539550 systemd[1]: Started libpod-conmon-8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787.scope.
Nov 29 02:58:39 np0005539550 podman[282074]: 2025-11-29 07:58:39.171112331 +0000 UTC m=+0.022701081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:58:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc920568780e8085562928f6726b5bad01ce7c2817d50b5f6634f68d41978ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc920568780e8085562928f6726b5bad01ce7c2817d50b5f6634f68d41978ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc920568780e8085562928f6726b5bad01ce7c2817d50b5f6634f68d41978ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc920568780e8085562928f6726b5bad01ce7c2817d50b5f6634f68d41978ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bc920568780e8085562928f6726b5bad01ce7c2817d50b5f6634f68d41978ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:39 np0005539550 podman[282074]: 2025-11-29 07:58:39.293013316 +0000 UTC m=+0.144602076 container init 8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 02:58:39 np0005539550 podman[282074]: 2025-11-29 07:58:39.301650367 +0000 UTC m=+0.153239087 container start 8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:58:39 np0005539550 podman[282074]: 2025-11-29 07:58:39.305671409 +0000 UTC m=+0.157260149 container attach 8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.445 257641 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/46c48fde90c44c1989d5e09dee2e1084.tmp" returned: 1 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.447 257641 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/46c48fde90c44c1989d5e09dee2e1084.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.447 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Creating directory /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.447 257641 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:39.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.648 257641 DEBUG oslo_concurrency.processutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:39 np0005539550 nova_compute[257631]: 2025-11-29 07:58:39.652 257641 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
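The remotefs/processutils lines above capture nova's probe-then-create pattern for the target host's instance directory during migrate_disk_and_power_off: the ssh touch probe exits 1 because the directory does not yet exist on 192.168.122.101, so remotefs falls back to mkdir -p before the shutdown proceeds. A condensed sketch of that pattern; the host and paths are from the log, but the helper itself is illustrative, not nova's actual code:

    import subprocess

    HOST = "192.168.122.101"
    INST_DIR = "/var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645"

    def ssh(host, *args):
        # BatchMode=yes fails fast instead of prompting for credentials,
        # matching the ssh invocations recorded above.
        return subprocess.run(["ssh", "-o", "BatchMode=yes", host, *args])

    # Probe: a failing touch usually means the directory is missing remotely.
    if ssh(HOST, "touch", INST_DIR + "/.probe.tmp").returncode != 0:
        # Fall back to creating the directory, as remotefs.create_dir does.
        ssh(HOST, "mkdir", "-p", INST_DIR)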
Nov 29 02:58:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:40.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:40 np0005539550 nice_nobel[282090]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:58:40 np0005539550 nice_nobel[282090]: --> relative data size: 1.0
Nov 29 02:58:40 np0005539550 nice_nobel[282090]: --> All data devices are unavailable
Nov 29 02:58:40 np0005539550 systemd[1]: libpod-8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787.scope: Deactivated successfully.
Nov 29 02:58:40 np0005539550 podman[282074]: 2025-11-29 07:58:40.187362877 +0000 UTC m=+1.038951607 container died 8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:58:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:40 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8bc920568780e8085562928f6726b5bad01ce7c2817d50b5f6634f68d41978ce-merged.mount: Deactivated successfully.
Nov 29 02:58:40 np0005539550 podman[282074]: 2025-11-29 07:58:40.244848976 +0000 UTC m=+1.096437706 container remove 8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_nobel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 29 02:58:40 np0005539550 systemd[1]: libpod-conmon-8e7fae965c537925b822deea790ac8550bb4eb311629fcfa0109c50296767787.scope: Deactivated successfully.
Nov 29 02:58:40 np0005539550 nova_compute[257631]: 2025-11-29 07:58:40.791 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 479 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.2 MiB/s wr, 111 op/s
Nov 29 02:58:40 np0005539550 podman[282258]: 2025-11-29 07:58:40.851050375 +0000 UTC m=+0.026198880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:40 np0005539550 podman[282258]: 2025-11-29 07:58:40.96201313 +0000 UTC m=+0.137161605 container create 2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:58:41 np0005539550 systemd[1]: Started libpod-conmon-2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1.scope.
Nov 29 02:58:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:58:41 np0005539550 podman[282258]: 2025-11-29 07:58:41.165263363 +0000 UTC m=+0.340411858 container init 2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 02:58:41 np0005539550 podman[282258]: 2025-11-29 07:58:41.173916414 +0000 UTC m=+0.349064889 container start 2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 29 02:58:41 np0005539550 keen_bardeen[282274]: 167 167
Nov 29 02:58:41 np0005539550 systemd[1]: libpod-2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1.scope: Deactivated successfully.
Nov 29 02:58:41 np0005539550 podman[282258]: 2025-11-29 07:58:41.197887156 +0000 UTC m=+0.373035631 container attach 2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:58:41 np0005539550 podman[282258]: 2025-11-29 07:58:41.198340448 +0000 UTC m=+0.373488913 container died 2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:58:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-91b6ed0c7b2c09351afe0eef8d104b7c1de171a633b18dc87467342061163132-merged.mount: Deactivated successfully.
Nov 29 02:58:41 np0005539550 podman[282258]: 2025-11-29 07:58:41.291596181 +0000 UTC m=+0.466744656 container remove 2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bardeen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:58:41 np0005539550 systemd[1]: libpod-conmon-2a858fb4f7d77ee82debba44941b6289b0ceb669089a1afd55b7ff392dfaa5c1.scope: Deactivated successfully.
Nov 29 02:58:41 np0005539550 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 29 02:58:41 np0005539550 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001e.scope: Consumed 14.492s CPU time.
Nov 29 02:58:41 np0005539550 systemd-machined[216673]: Machine qemu-15-instance-0000001e terminated.
Nov 29 02:58:41 np0005539550 nova_compute[257631]: 2025-11-29 07:58:41.494 257641 INFO nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance shutdown successfully after 28 seconds.
Nov 29 02:58:41 np0005539550 nova_compute[257631]: 2025-11-29 07:58:41.500 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance destroyed successfully.
Nov 29 02:58:41 np0005539550 nova_compute[257631]: 2025-11-29 07:58:41.506 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance destroyed successfully.
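Read together with the "Instance in state 1 after 21 seconds - resending shutdown" entry earlier, these lines trace nova's _clean_shutdown loop: send a graceful shutdown, poll the domain state, periodically re-send the request, and report success only once the guest powers off (28 seconds here), after which the domain is destroyed. The pattern reduced to a sketch; the callables and timeouts are placeholders, not nova's real API:

    import time

    def clean_shutdown(is_running, send_shutdown, timeout=60, retry_interval=20):
        # Ask the guest to power off, then poll, re-sending periodically.
        send_shutdown()
        deadline = time.monotonic() + timeout
        last_sent = time.monotonic()
        while time.monotonic() < deadline:
            if not is_running():
                return True          # guest powered off cleanly
            if time.monotonic() - last_sent >= retry_interval:
                send_shutdown()      # the "resending shutdown" step in the log
                last_sent = time.monotonic()
            time.sleep(1)
        return False                 # caller falls back to a hard destroy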
Nov 29 02:58:41 np0005539550 podman[282301]: 2025-11-29 07:58:41.437752875 +0000 UTC m=+0.023767248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:41.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:41 np0005539550 podman[282301]: 2025-11-29 07:58:41.559042304 +0000 UTC m=+0.145056647 container create 4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:58:41 np0005539550 systemd[1]: Started libpod-conmon-4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1.scope.
Nov 29 02:58:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:58:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501aa2735874bcf68ab2af7257cec74efcc18b59e63d048233cac12ea0e36292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501aa2735874bcf68ab2af7257cec74efcc18b59e63d048233cac12ea0e36292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501aa2735874bcf68ab2af7257cec74efcc18b59e63d048233cac12ea0e36292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501aa2735874bcf68ab2af7257cec74efcc18b59e63d048233cac12ea0e36292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:41 np0005539550 podman[282301]: 2025-11-29 07:58:41.821470799 +0000 UTC m=+0.407485142 container init 4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:58:41 np0005539550 podman[282301]: 2025-11-29 07:58:41.830709785 +0000 UTC m=+0.416724128 container start 4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:58:41 np0005539550 podman[282301]: 2025-11-29 07:58:41.834484652 +0000 UTC m=+0.420499025 container attach 4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tharp, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:58:41 np0005539550 nova_compute[257631]: 2025-11-29 07:58:41.906 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:42.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]: {
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:    "0": [
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:        {
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "devices": [
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "/dev/loop3"
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            ],
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "lv_name": "ceph_lv0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "lv_size": "7511998464",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "name": "ceph_lv0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "tags": {
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.cluster_name": "ceph",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.crush_device_class": "",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.encrypted": "0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.osd_id": "0",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.type": "block",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:                "ceph.vdo": "0"
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            },
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "type": "block",
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:            "vg_name": "ceph_vg0"
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:        }
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]:    ]
Nov 29 02:58:42 np0005539550 wizardly_tharp[282337]: }
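The JSON block above, printed by the short-lived wizardly_tharp container, appears to be ceph-volume lvm list --format json output for osd.0: a single bluestore block LV (/dev/ceph_vg0/ceph_lv0) carved from /dev/loop3 and tagged with the cluster and OSD fsids. A minimal sketch that extracts the useful fields, assuming the JSON has been captured to a file; the key names are exactly those shown above:

    import json

    # Assumption: the container's JSON output was saved to this file.
    with open("ceph_volume_lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            tags = lv["tags"]
            print("osd.%s: %s on %s (osd_fsid=%s, encrypted=%s)" % (
                osd_id, lv["lv_path"], ",".join(lv["devices"]),
                tags["ceph.osd_fsid"], tags["ceph.encrypted"]))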
Nov 29 02:58:42 np0005539550 systemd[1]: libpod-4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1.scope: Deactivated successfully.
Nov 29 02:58:42 np0005539550 podman[282301]: 2025-11-29 07:58:42.641780539 +0000 UTC m=+1.227794892 container died 4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 02:58:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-501aa2735874bcf68ab2af7257cec74efcc18b59e63d048233cac12ea0e36292-merged.mount: Deactivated successfully.
Nov 29 02:58:42 np0005539550 podman[282301]: 2025-11-29 07:58:42.694592778 +0000 UTC m=+1.280607121 container remove 4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:58:42 np0005539550 systemd[1]: libpod-conmon-4d090df2243c101ee66abbfabd9b677d5e87b802458d8a6c5454f1ee7204cae1.scope: Deactivated successfully.
Nov 29 02:58:42 np0005539550 nova_compute[257631]: 2025-11-29 07:58:42.714 257641 INFO nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deleting instance files /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_del
Nov 29 02:58:42 np0005539550 nova_compute[257631]: 2025-11-29 07:58:42.715 257641 INFO nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deletion of /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_del complete
Nov 29 02:58:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 409 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 2.2 MiB/s wr, 128 op/s
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.009 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.010 257641 INFO nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating image(s)
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.037 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.068 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.095 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.099 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.100 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:43 np0005539550 podman[282552]: 2025-11-29 07:58:43.300966941 +0000 UTC m=+0.039049418 container create ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 02:58:43 np0005539550 systemd[1]: Started libpod-conmon-ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09.scope.
Nov 29 02:58:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:58:43 np0005539550 podman[282552]: 2025-11-29 07:58:43.285100526 +0000 UTC m=+0.023183013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:43 np0005539550 podman[282552]: 2025-11-29 07:58:43.387577374 +0000 UTC m=+0.125659861 container init ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 02:58:43 np0005539550 podman[282552]: 2025-11-29 07:58:43.394800049 +0000 UTC m=+0.132882516 container start ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 02:58:43 np0005539550 podman[282552]: 2025-11-29 07:58:43.398238097 +0000 UTC m=+0.136320564 container attach ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:58:43 np0005539550 crazy_banzai[282568]: 167 167
Nov 29 02:58:43 np0005539550 systemd[1]: libpod-ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09.scope: Deactivated successfully.
Nov 29 02:58:43 np0005539550 podman[282552]: 2025-11-29 07:58:43.401715885 +0000 UTC m=+0.139798362 container died ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 02:58:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9eff3ba85827e61279d350eb6a45ee461a326e2673d9a6817e4e2b0ab269b350-merged.mount: Deactivated successfully.
Nov 29 02:58:43 np0005539550 podman[282552]: 2025-11-29 07:58:43.441240105 +0000 UTC m=+0.179322572 container remove ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:58:43 np0005539550 systemd[1]: libpod-conmon-ca27727ab500df829fad2e13c99a3fb03bc8a96d9c01bcfd8cc0a0130b6b7d09.scope: Deactivated successfully.
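
The crazy_banzai create→init→start→attach→died→remove chain is the journal footprint of a single short-lived `podman run --rm`: cephadm launches a throwaway ceph container that prints one line ("167 167", matching the ceph user and group ids) and exits. A sketch of such a one-shot probe from Python; the stat command is an assumption about what produced "167 167":

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm yields exactly the create/init/start/attach/died/remove event
    # chain seen above, with nothing left behind afterwards.
    probe = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(probe.stdout.strip())  # e.g. "167 167"
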
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.509 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:58:43.509 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:58:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:58:43.512 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:58:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:58:43.513 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 02:58:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:43.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
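
The anonymous `HEAD / HTTP/1.0` requests arriving at radosgw from 192.168.122.100 and 192.168.122.102 every couple of seconds look like load-balancer health checks rather than client traffic. The probe below reproduces their shape with the standard library; the target address and port are assumptions, since the journal does not show the beast frontend's listen address:

    import http.client

    # Hypothetical RGW endpoint; a 200 on HEAD / is what the health
    # checks in the journal report.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200 for a healthy gateway
    conn.close()
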
Nov 29 02:58:43 np0005539550 nova_compute[257631]: 2025-11-29 07:58:43.670 257641 DEBUG nova.virt.libvirt.imagebackend [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/93eccffb-bacd-407f-af6f-64451dee7b21/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/93eccffb-bacd-407f-af6f-64451dee7b21/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 02:58:43 np0005539550 podman[282591]: 2025-11-29 07:58:43.593571347 +0000 UTC m=+0.023792889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:58:43 np0005539550 podman[282591]: 2025-11-29 07:58:43.810952091 +0000 UTC m=+0.241173633 container create 4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 02:58:43 np0005539550 systemd[1]: Started libpod-conmon-4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069.scope.
Nov 29 02:58:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:58:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08327a951e4d6a59ad701cb172fa405c8efa79b7037f97aaa68095f2d3042c63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08327a951e4d6a59ad701cb172fa405c8efa79b7037f97aaa68095f2d3042c63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08327a951e4d6a59ad701cb172fa405c8efa79b7037f97aaa68095f2d3042c63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:58:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08327a951e4d6a59ad701cb172fa405c8efa79b7037f97aaa68095f2d3042c63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
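
The four xfs notices flag that these overlay mounts still use 32-bit inode timestamps, which overflow at 0x7fffffff seconds after the epoch. The cutoff the kernel is warning about works out as follows:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t; xfs without the
    # bigtime feature cannot store timestamps past this instant.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
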
Nov 29 02:58:43 np0005539550 podman[282591]: 2025-11-29 07:58:43.907433627 +0000 UTC m=+0.337655179 container init 4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 02:58:43 np0005539550 podman[282591]: 2025-11-29 07:58:43.915481952 +0000 UTC m=+0.345703484 container start 4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 02:58:43 np0005539550 podman[282591]: 2025-11-29 07:58:43.91931502 +0000 UTC m=+0.349536592 container attach 4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 02:58:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:44.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:44 np0005539550 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 29 02:58:44 np0005539550 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001f.scope: Consumed 14.333s CPU time.
Nov 29 02:58:44 np0005539550 systemd-machined[216673]: Machine qemu-16-instance-0000001f terminated.
Nov 29 02:58:44 np0005539550 nova_compute[257631]: 2025-11-29 07:58:44.679 257641 INFO nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance shutdown successfully after 5 seconds.#033[00m
Nov 29 02:58:44 np0005539550 nova_compute[257631]: 2025-11-29 07:58:44.686 257641 INFO nova.virt.libvirt.driver [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance destroyed successfully.#033[00m
Nov 29 02:58:44 np0005539550 nova_compute[257631]: 2025-11-29 07:58:44.690 257641 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:44 np0005539550 nova_compute[257631]: 2025-11-29 07:58:44.690 257641 DEBUG nova.virt.libvirt.driver [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]: {
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]:        "osd_id": 0,
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]:        "type": "bluestore"
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]:    }
Nov 29 02:58:44 np0005539550 fervent_shtern[282607]: }
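
fervent_shtern is another cephadm probe, and the JSON it prints (keyed by OSD uuid, with ceph_fsid/device/osd_id/type fields) has the shape of `ceph-volume raw list` output; the exact subcommand is an inference. One bluestore OSD is reported: osd.0 on /dev/mapper/ceph_vg0-ceph_lv0. A sketch of consuming that JSON:

    import json

    raw = """{
       "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
           "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
           "device": "/dev/mapper/ceph_vg0-ceph_lv0",
           "osd_id": 0,
           "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
           "type": "bluestore"
       }
    }"""

    # One entry per OSD, keyed by the OSD's uuid.
    for osd in json.loads(raw).values():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")
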
Nov 29 02:58:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 409 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 457 KiB/s rd, 2.2 MiB/s wr, 126 op/s
Nov 29 02:58:44 np0005539550 systemd[1]: libpod-4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069.scope: Deactivated successfully.
Nov 29 02:58:44 np0005539550 podman[282591]: 2025-11-29 07:58:44.876203968 +0000 UTC m=+1.306425500 container died 4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 02:58:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-08327a951e4d6a59ad701cb172fa405c8efa79b7037f97aaa68095f2d3042c63-merged.mount: Deactivated successfully.
Nov 29 02:58:44 np0005539550 podman[282591]: 2025-11-29 07:58:44.943960939 +0000 UTC m=+1.374182481 container remove 4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 02:58:44 np0005539550 systemd[1]: libpod-conmon-4cadd81515fcc854422a698117a302064b2ba7e239f12a68714d5e45f97b2069.scope: Deactivated successfully.
Nov 29 02:58:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:58:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:58:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b67e9d5e-1340-4b93-b1d0-8b286a094671 does not exist
Nov 29 02:58:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fcba03e2-2043-41a5-9dc9-336bc370a134 does not exist
Nov 29 02:58:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f5023910-c051-4bdc-933a-ee6abd8233f9 does not exist
Nov 29 02:58:45 np0005539550 nova_compute[257631]: 2025-11-29 07:58:45.331 257641 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "6e814e3b-3edb-4f37-8701-c37929994645-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:45 np0005539550 nova_compute[257631]: 2025-11-29 07:58:45.332 257641 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:45 np0005539550 nova_compute[257631]: 2025-11-29 07:58:45.332 257641 DEBUG oslo_concurrency.lockutils [None req-c339b404-48ac-4a0c-ace7-fdbcb8c4c4f8 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 02:58:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:45.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 02:58:45 np0005539550 nova_compute[257631]: 2025-11-29 07:58:45.815 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:58:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.216 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.281 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.283 257641 DEBUG nova.virt.images [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] 93eccffb-bacd-407f-af6f-64451dee7b21 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.283 257641 DEBUG nova.privsep.utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.284 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.688 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.part /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.693 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.752 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.753 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
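
The sequence just completed is the cache-miss path of Image.cache while the lock was held: qemu-img info (wrapped in oslo's prlimit so a hostile image cannot exhaust memory or CPU) reports qcow2, the image is converted to raw, and the result is re-inspected before the lock is released. A condensed sketch of the same flow, assuming the paths from the log:

    import json

    from oslo_concurrency import processutils

    base = ("/var/lib/nova/instances/_base/"
            "6e1589dfec5abd76868fdc022175780e085b08de")

    def img_info(path):
        # Same guardrails as the logged command: 1 GiB address space,
        # 30 s of CPU time, via oslo_concurrency.prlimit.
        limits = processutils.ProcessLimits(address_space=1 << 30,
                                            cpu_time=30)
        out, _ = processutils.execute(
            "env", "LC_ALL=C", "LANG=C", "qemu-img", "info", path,
            "--force-share", "--output=json", prlimit=limits)
        return json.loads(out)

    if img_info(base + ".part")["format"] == "qcow2":
        processutils.execute("qemu-img", "convert", "-t", "none",
                             "-O", "raw", "-f", "qcow2",
                             base + ".part", base + ".converted")
        assert img_info(base + ".converted")["format"] == "raw"
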
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.783 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.788 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 383 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 499 KiB/s rd, 2.3 MiB/s wr, 152 op/s
Nov 29 02:58:46 np0005539550 nova_compute[257631]: 2025-11-29 07:58:46.907 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:47.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:48.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 383 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 156 op/s
Nov 29 02:58:49 np0005539550 nova_compute[257631]: 2025-11-29 07:58:49.311 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:49 np0005539550 nova_compute[257631]: 2025-11-29 07:58:49.398 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] resizing rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
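
With the raw base in place, it is imported into the vms pool and grown to the flavor's 1 GiB root disk. Nova performs the resize through the librbd python binding; the sketch below uses the exact import CLI from the log plus the equivalent rbd resize command:

    import subprocess

    disk = "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk"
    base = ("/var/lib/nova/instances/_base/"
            "6e1589dfec5abd76868fdc022175780e085b08de")
    ceph_args = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Format-2 images support the cloning/snapshot features nova relies on.
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *ceph_args], check=True)
    # 1073741824 bytes = 1 GiB, matching the "resizing rbd image" line.
    subprocess.run(["rbd", "resize", "--pool", "vms", disk,
                    "--size", "1G", *ceph_args], check=True)
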
Nov 29 02:58:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:49.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:50.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.305 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.306 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Ensure instance console log exists: /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.306 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.307 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.307 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.309 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.313 257641 WARNING nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.320 257641 DEBUG nova.virt.libvirt.host [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.321 257641 DEBUG nova.virt.libvirt.host [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.325 257641 DEBUG nova.virt.libvirt.host [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.325 257641 DEBUG nova.virt.libvirt.host [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
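
Before rendering the guest XML, nova checks where the cpu controller lives: absent under cgroup v1, present under v2. On a unified (v2) hierarchy that probe amounts to reading one file at the cgroup root, as sketched here assuming the standard /sys/fs/cgroup mount:

    from pathlib import Path

    # cgroup v2 lists available controllers in a single root file; "cpu"
    # present means CPU shares/quota can be enforced for guests.
    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    if controllers.exists():
        found = "cpu" in controllers.read_text().split()
        print("cgroup v2 cpu controller:", "found" if found else "missing")
    else:
        print("no unified cgroup v2 hierarchy mounted")
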
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.327 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.327 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.327 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.328 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.328 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.328 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.328 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.329 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.329 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.329 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.329 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.329 257641 DEBUG nova.virt.hardware [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
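
The topology walk above picks sockets/cores/threads for a single vCPU: with no flavor or image constraints the limits default to 65536 apiece, and the only factorization of 1 is 1:1:1. A toy version of the enumeration step (not nova's actual preference ordering):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is exactly
        # vcpus and which respects the per-dimension limits.
        dims = itertools.product(range(1, min(vcpus, max_sockets) + 1),
                                 range(1, min(vcpus, max_cores) + 1),
                                 range(1, min(vcpus, max_threads) + 1))
        for s, c, t in dims:
            if s * c * t == vcpus:
                yield s, c, t

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
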
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.330 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.566 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:50 np0005539550 nova_compute[257631]: 2025-11-29 07:58:50.794 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:58:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 396 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 171 op/s
Nov 29 02:58:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209866186' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.018 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.050 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.054 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 02:58:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 29 02:58:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:58:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358931924' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.485 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
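
The two `ceph mon dump --format=json` invocations are nova resolving the current monitor addresses before it writes the disk XML; the three <host> entries in the domain definition below come from this dump. A sketch of the same lookup (JSON field names as in recent Ceph releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout

    # Each monitor advertises v1/v2 addresses; the v1 endpoints on port
    # 6789 are what appear as <host .../> elements in the guest XML.
    for mon in json.loads(out)["mons"]:
        print(mon["name"],
              [a["addr"] for a in mon["public_addrs"]["addrvec"]])
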
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.488 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <uuid>241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</uuid>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <name>instance-0000001e</name>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersAdmin275Test-server-1780743076</nova:name>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:58:50</nova:creationTime>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <nova:user uuid="83cbfda08549451c8499d3e9bfd0d2ff">tempest-ServersAdmin275Test-1435066616-project-member</nova:user>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <nova:project uuid="8f436e4e1ee14fa09904e250330051d0">tempest-ServersAdmin275Test-1435066616</nova:project>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <entry name="serial">241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</entry>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <entry name="uuid">241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</entry>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/console.log" append="off"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:58:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:58:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:58:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:58:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
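The XML dumped above is the tail of the libvirt guest definition Nova generated for this instance; the long run of pcie-root-port controllers is the spare-port bank that Nova/libvirt preallocate on PCIe (q35-style) guests so devices can be hot-plugged later without redefining the domain. A minimal sketch (Python standard library only; the dump file name is hypothetical) that counts those ports in a domain XML like the one above:

    import xml.etree.ElementTree as ET

    # Parse a domain XML dump such as the one logged by _get_guest_xml above.
    root = ET.parse("instance-0000001e.xml").getroot()   # hypothetical file name
    ports = root.findall("./devices/controller[@type='pci'][@model='pcie-root-port']")
    print(f"pcie-root-port controllers: {len(ports)}")   # 24 in the dump above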
Nov 29 02:58:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:51.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.688 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.689 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.689 257641 INFO nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Using config drive
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.720 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.856 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:51 np0005539550 nova_compute[257631]: 2025-11-29 07:58:51.964 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'keypairs' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
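The radosgw "beast:" lines repeating through this window are the frontend's access log: client IP, user (anonymous), timestamp, request line, HTTP status, byte count, and latency; the paired HEAD / probes from 192.168.122.100 and 192.168.122.102 every two seconds look like load-balancer health checks. A hedged sketch for pulling the interesting fields out (the field layout is inferred from the lines above, not taken from a documented grammar):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s'
    )

    sample = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
              '[29/Nov/2025:07:58:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.001000025s')
    m = BEAST.search(sample)
    print(m.group('ip'), m.group('status'), m.group('latency'))  # 192.168.122.102 200 0.001000025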
Nov 29 02:58:52 np0005539550 nova_compute[257631]: 2025-11-29 07:58:52.808 257641 INFO nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating config drive at /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config
Nov 29 02:58:52 np0005539550 nova_compute[257631]: 2025-11-29 07:58:52.812 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcz3ux_ti execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 429 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 29 02:58:52 np0005539550 nova_compute[257631]: 2025-11-29 07:58:52.940 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcz3ux_ti" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:53 np0005539550 nova_compute[257631]: 2025-11-29 07:58:53.157 257641 DEBUG nova.storage.rbd_utils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:58:53 np0005539550 nova_compute[257631]: 2025-11-29 07:58:53.161 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:58:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:53.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:54.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:54 np0005539550 nova_compute[257631]: 2025-11-29 07:58:54.314 257641 DEBUG oslo_concurrency.processutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:58:54 np0005539550 nova_compute[257631]: 2025-11-29 07:58:54.315 257641 INFO nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deleting local config drive /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config because it was imported into RBD.
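Taken together, the steps above are the whole config-drive path for this rebuild: build an ISO9660 image labelled config-2 with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config, then delete the local copy. A condensed sketch of the same flow (flags abridged from the logged commands; the /tmp/metadata staging directory is hypothetical, the log used a tempdir):

    import os
    import subprocess

    uuid = "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1. Build the config drive, as mkisofs was invoked above (flags abridged).
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-J", "-r", "-V", "config-2",
                    "/tmp/metadata"], check=True)          # hypothetical staging dir

    # 2. Import into the Ceph 'vms' pool, exactly as logged, then drop the local file.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{uuid}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"], check=True)
    os.remove(iso)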
Nov 29 02:58:54 np0005539550 systemd-machined[216673]: New machine qemu-17-instance-0000001e.
Nov 29 02:58:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 29 02:58:54 np0005539550 systemd[1]: Started Virtual Machine qemu-17-instance-0000001e.
Nov 29 02:58:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 429 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 29 02:58:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.325 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.325 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403135.3250117, 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.326 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] VM Resumed (Lifecycle Event)
Nov 29 02:58:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.329 257641 DEBUG nova.compute.manager [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.329 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.333 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance spawned successfully.
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.333 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.410 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.413 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.442 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.442 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.443 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.443 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.444 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.444 257641 DEBUG nova.virt.libvirt.driver [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.486 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.486 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403135.3270357, 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.487 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] VM Started (Lifecycle Event)
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.527 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.530 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:58:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:55.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.630 257641 DEBUG nova.compute.manager [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.638 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
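Both "Skip." lines show the guard in the lifecycle-event handler: while the instance still has a pending task (here rebuild_spawning), Nova records the event but does not reconcile power state, since the in-flight task will settle it. A simplified, hedged sketch of that decision, using nova's numeric power states (1 = RUNNING, 4 = SHUTDOWN, matching the values logged above); the real logic lives in nova.compute.manager:

    RUNNING, SHUTDOWN = 1, 4

    def sync_power_state(task_state, db_power_state, vm_power_state):
        # Simplified stand-in for nova's power-state sync decision.
        if task_state is not None:
            return f"skip: pending task {task_state}"      # the 'Skip.' case above
        if db_power_state != vm_power_state:
            return f"reconcile: DB={db_power_state} VM={vm_power_state}"
        return "in sync"

    print(sync_power_state("rebuild_spawning", RUNNING, RUNNING))
    # -> skip: pending task rebuild_spawning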
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.768 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.769 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.769 257641 DEBUG nova.objects.instance [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 02:58:55 np0005539550 nova_compute[257631]: 2025-11-29 07:58:55.796 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:56.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:58:56 np0005539550 podman[283056]: 2025-11-29 07:58:56.32736232 +0000 UTC m=+0.058130687 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:58:56 np0005539550 podman[283057]: 2025-11-29 07:58:56.351728922 +0000 UTC m=+0.081840362 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 02:58:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 429 MiB data, 601 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Nov 29 02:58:56 np0005539550 nova_compute[257631]: 2025-11-29 07:58:56.911 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:58:57 np0005539550 nova_compute[257631]: 2025-11-29 07:58:57.350 257641 DEBUG oslo_concurrency.lockutils [None req-e2a5924f-1a0e-4636-8486-5a4b87d62811 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 1.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:58:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:57.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:58:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:58:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:58:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:58:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:58:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 429 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.858709) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138858842, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2383, "num_deletes": 510, "total_data_size": 3793703, "memory_usage": 3859328, "flush_reason": "Manual Compaction"}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 29 02:58:58 np0005539550 nova_compute[257631]: 2025-11-29 07:58:58.864 257641 INFO nova.compute.manager [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Rebuilding instance
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138886485, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3350767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26009, "largest_seqno": 28391, "table_properties": {"data_size": 3340965, "index_size": 5592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 24738, "raw_average_key_size": 20, "raw_value_size": 3318885, "raw_average_value_size": 2693, "num_data_blocks": 243, "num_entries": 1232, "num_filter_entries": 1232, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764402933, "oldest_key_time": 1764402933, "file_creation_time": 1764403138, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 27873 microseconds, and 8244 cpu microseconds.
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.886569) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3350767 bytes OK
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.886601) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.888309) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.888341) EVENT_LOG_v1 {"time_micros": 1764403138888333, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.888363) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3782836, prev total WAL file size 3799399, number of live WAL files 2.
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.889370) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3272KB)], [59(10MB)]
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138889426, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 14644344, "oldest_snapshot_seqno": -1}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5987 keys, 10984765 bytes, temperature: kUnknown
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138978693, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 10984765, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10944218, "index_size": 24501, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 153340, "raw_average_key_size": 25, "raw_value_size": 10835828, "raw_average_value_size": 1809, "num_data_blocks": 992, "num_entries": 5987, "num_filter_entries": 5987, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403138, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.979140) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 10984765 bytes
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.985230) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.3) OK, records in: 7005, records dropped: 1018 output_compression: NoCompression
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.985259) EVENT_LOG_v1 {"time_micros": 1764403138985247, "job": 32, "event": "compaction_finished", "compaction_time_micros": 89537, "compaction_time_cpu_micros": 30691, "output_level": 6, "num_output_files": 1, "total_output_size": 10984765, "num_input_records": 7005, "num_output_records": 5987, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138986974, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403138989146, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.889239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.989326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.989342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.989346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.989348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:58:58 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:58:58.989350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
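The JOB 31/JOB 32 sequence above is one flush-plus-manual-compaction cycle in the monitor's RocksDB store: the memtable is flushed to L0 table #61, which is then compacted together with the existing L6 table #59 into the new table #62, after which the WAL and both input files are deleted. The amplification figures RocksDB printed can be re-derived from the logged byte counts:

    # All byte counts below come from the JOB 32 log lines above.
    l0_in = 3_350_767              # table 61, the freshly flushed L0 file
    l6_in = 14_644_344 - l0_in     # input_data_size minus the L0 file (table 59)
    out   = 10_984_765             # table 62, the compacted L6 output

    write_amp      = out / l0_in                    # bytes written per L0 byte
    read_write_amp = (l0_in + l6_in + out) / l0_in  # bytes read+written per L0 byte
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {read_write_amp:.1f}")
    # -> write-amplify 3.3, read-write-amplify 7.6, matching the summary line above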
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.116 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.133 257641 DEBUG nova.compute.manager [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.203 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'pci_requests' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.215 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'pci_devices' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.226 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'resources' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.239 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'migration_context' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.252 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.255 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.283 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403124.2819672, 6e814e3b-3edb-4f37-8701-c37929994645 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.283 257641 INFO nova.compute.manager [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Stopped (Lifecycle Event)
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.309 257641 DEBUG nova.compute.manager [None req-02d43db1-3e81-4f40-9b2c-c9d83da901c9 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:58:59
Nov 29 02:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'vms', 'default.rgw.log', '.rgw.root']
Nov 29 02:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:58:59 np0005539550 nova_compute[257631]: 2025-11-29 07:58:59.313 257641 DEBUG nova.compute.manager [None req-02d43db1-3e81-4f40-9b2c-c9d83da901c9 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:58:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:58:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:58:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:58:59.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:00 np0005539550 nova_compute[257631]: 2025-11-29 07:59:00.008 257641 INFO nova.compute.manager [None req-02d43db1-3e81-4f40-9b2c-c9d83da901c9 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com
Nov 29 02:59:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:00.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:00 np0005539550 nova_compute[257631]: 2025-11-29 07:59:00.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 429 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 516 KiB/s rd, 1.9 MiB/s wr, 72 op/s
Nov 29 02:59:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:01.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:01 np0005539550 nova_compute[257631]: 2025-11-29 07:59:01.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:02.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 429 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Nov 29 02:59:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:03.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:04.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 429 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Nov 29 02:59:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:59:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:05.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:59:05 np0005539550 nova_compute[257631]: 2025-11-29 07:59:05.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:06.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 429 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 266 B/s wr, 87 op/s
Nov 29 02:59:06 np0005539550 nova_compute[257631]: 2025-11-29 07:59:06.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:06 np0005539550 nova_compute[257631]: 2025-11-29 07:59:06.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:06 np0005539550 nova_compute[257631]: 2025-11-29 07:59:06.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 02:59:06 np0005539550 nova_compute[257631]: 2025-11-29 07:59:06.956 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 02:59:06 np0005539550 nova_compute[257631]: 2025-11-29 07:59:06.956 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:06 np0005539550 nova_compute[257631]: 2025-11-29 07:59:06.957 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 02:59:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:07.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:59:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:08.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:08 np0005539550 podman[283154]: 2025-11-29 07:59:08.368297299 +0000 UTC m=+0.104191183 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 02:59:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 429 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 85 op/s
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.200 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.200 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.200 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.221 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.221 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.221 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.222 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.298 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
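The resend at 07:59:09 (ten seconds after "Shutting down instance from state 1" at 07:58:59) shows the shape of _clean_shutdown: issue a graceful shutdown, poll the guest, and repeat the request every retry interval until it leaves the running state or the timeout expires. A hedged sketch of that loop; the guest object here is an illustrative stand-in, not Nova's real Guest API:

    import time

    def clean_shutdown(guest, timeout=60, retry_interval=10):
        guest.shutdown()                        # first graceful shutdown request
        for _ in range(0, timeout, retry_interval):
            time.sleep(retry_interval)
            if guest.get_power_state() != 1:    # 1 == RUNNING, as in the log
                return True
            guest.shutdown()                    # still in state 1: resend, as logged above
        return False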
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.453 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:59:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:09.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.724 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.743 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.744 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.940 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.941 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 02:59:09 np0005539550 nova_compute[257631]: 2025-11-29 07:59:09.942 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:10.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 29 02:59:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068131545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.411 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.482 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.483 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.487 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.487 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.674 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.675 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4532MB free_disk=20.785202026367188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.675 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.676 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.727 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Migration for instance 6e814e3b-3edb-4f37-8701-c37929994645 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.754 257641 INFO nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating resource usage from migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.755 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Starting to track outgoing migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3 with flavor f77cbd7d-f3d5-4a61-9048-1fd58898bfe7 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.802 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 396 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 937 B/s wr, 97 op/s
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.898 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.899 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.899 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 02:59:10 np0005539550 nova_compute[257631]: 2025-11-29 07:59:10.899 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.043 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 29 02:59:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 29 02:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476580377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.542 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.549 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.569 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:59:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:11.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.593 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.594 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.595 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:11 np0005539550 nova_compute[257631]: 2025-11-29 07:59:11.915 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:12.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:12 np0005539550 nova_compute[257631]: 2025-11-29 07:59:12.601 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:12 np0005539550 nova_compute[257631]: 2025-11-29 07:59:12.601 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:12 np0005539550 nova_compute[257631]: 2025-11-29 07:59:12.601 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:12 np0005539550 nova_compute[257631]: 2025-11-29 07:59:12.601 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:12 np0005539550 nova_compute[257631]: 2025-11-29 07:59:12.602 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 371 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.0 MiB/s wr, 36 op/s
Nov 29 02:59:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:13.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:13 np0005539550 nova_compute[257631]: 2025-11-29 07:59:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 02:59:13 np0005539550 nova_compute[257631]: 2025-11-29 07:59:13.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 02:59:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:14.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 339 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Nov 29 02:59:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:15.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:15 np0005539550 nova_compute[257631]: 2025-11-29 07:59:15.804 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:16.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.173750) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156174332, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 368, "num_deletes": 251, "total_data_size": 266735, "memory_usage": 274800, "flush_reason": "Manual Compaction"}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156189170, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 264658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28392, "largest_seqno": 28759, "table_properties": {"data_size": 262323, "index_size": 435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5973, "raw_average_key_size": 19, "raw_value_size": 257609, "raw_average_value_size": 825, "num_data_blocks": 19, "num_entries": 312, "num_filter_entries": 312, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403138, "oldest_key_time": 1764403138, "file_creation_time": 1764403156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 15475 microseconds, and 1898 cpu microseconds.
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.189219) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 264658 bytes OK
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.189241) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.213307) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.213357) EVENT_LOG_v1 {"time_micros": 1764403156213346, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.213383) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 264268, prev total WAL file size 264268, number of live WAL files 2.
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.214546) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(258KB)], [62(10MB)]
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156214630, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11249423, "oldest_snapshot_seqno": -1}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5784 keys, 9280492 bytes, temperature: kUnknown
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156308575, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9280492, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9242766, "index_size": 22192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149904, "raw_average_key_size": 25, "raw_value_size": 9139309, "raw_average_value_size": 1580, "num_data_blocks": 888, "num_entries": 5784, "num_filter_entries": 5784, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403156, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.308921) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9280492 bytes
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.311039) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.6 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(77.6) write-amplify(35.1) OK, records in: 6299, records dropped: 515 output_compression: NoCompression
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.311087) EVENT_LOG_v1 {"time_micros": 1764403156311069, "job": 34, "event": "compaction_finished", "compaction_time_micros": 94029, "compaction_time_cpu_micros": 24794, "output_level": 6, "num_output_files": 1, "total_output_size": 9280492, "num_input_records": 6299, "num_output_records": 5784, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156311384, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403156313664, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.214361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.313988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.314109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.314237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.314373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-07:59:16.314493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 02:59:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 347 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 115 op/s
Nov 29 02:59:16 np0005539550 nova_compute[257631]: 2025-11-29 07:59:16.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:17.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:18.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 359 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 198 op/s
Nov 29 02:59:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:59:18.932 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:59:18.933 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:59:18.933 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:19.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:19 np0005539550 nova_compute[257631]: 2025-11-29 07:59:19.725 257641 INFO nova.compute.manager [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Swapping old allocation on dict_keys(['a73c606e-2495-4af4-b703-8d4b3001fdf5']) held by migration 39ac01bb-9eeb-45e2-96ef-f28173744ae3 for instance
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008668196305602085 of space, bias 1.0, pg target 2.6004588916806255 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 02:59:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 02:59:19 np0005539550 nova_compute[257631]: 2025-11-29 07:59:19.780 257641 DEBUG nova.scheduler.client.report [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Overwriting current allocation {'allocations': {'67c71d68-0dd7-4589-b775-189b4191a844': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 21}}, 'project_id': '6717732f9fa242b181f58881b03d246f', 'user_id': '51ae07f600c545c0b4c7fae00657ea40', 'consumer_generation': 1} on consumer 6e814e3b-3edb-4f37-8701-c37929994645 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.065 257641 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.065 257641 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.066 257641 DEBUG nova.network.neutron [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:59:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:20.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.343 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.352 257641 DEBUG nova.network.neutron [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.807 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 362 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 191 op/s
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.935 257641 DEBUG nova.network.neutron [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.959 257641 DEBUG oslo_concurrency.lockutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:59:20 np0005539550 nova_compute[257631]: 2025-11-29 07:59:20.960 257641 DEBUG nova.virt.libvirt.driver [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843
Nov 29 02:59:21 np0005539550 nova_compute[257631]: 2025-11-29 07:59:21.040 257641 DEBUG nova.storage.rbd_utils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] rolling back rbd image(6e814e3b-3edb-4f37-8701-c37929994645_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505
Nov 29 02:59:21 np0005539550 nova_compute[257631]: 2025-11-29 07:59:21.449 257641 DEBUG nova.storage.rbd_utils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] removing snapshot(nova-resize) on rbd image(6e814e3b-3edb-4f37-8701-c37929994645_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 02:59:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:21.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 29 02:59:21 np0005539550 nova_compute[257631]: 2025-11-29 07:59:21.968 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 29 02:59:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:59:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:22.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:59:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.190 257641 DEBUG nova.virt.libvirt.driver [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.193 257641 WARNING nova.virt.libvirt.driver [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.199 257641 DEBUG nova.virt.libvirt.host [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.199 257641 DEBUG nova.virt.libvirt.host [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.206 257641 DEBUG nova.virt.libvirt.host [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.207 257641 DEBUG nova.virt.libvirt.host [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.208 257641 DEBUG nova.virt.libvirt.driver [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.208 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:58:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='f77cbd7d-f3d5-4a61-9048-1fd58898bfe7',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-1547071603',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.208 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.208 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.209 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.209 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.209 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.209 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.210 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.210 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.210 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.210 257641 DEBUG nova.virt.hardware [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.210 257641 DEBUG nova.objects.instance [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.236 257641 DEBUG oslo_concurrency.processutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591847791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.705 257641 DEBUG oslo_concurrency.processutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
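Nova's RBD image backend shells out to the ceph CLI here (via oslo_concurrency.processutils) to learn the monitor addresses that later appear as <host> entries in the generated domain XML. A hedged reproduction of the same call, reusing the client id and conf path shown in the log; it assumes the ceph CLI and the client.openstack keyring are present on the host:

    import json
    import subprocess

    # The exact command logged above.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    for mon in json.loads(out).get("mons", []):
        print(mon.get("name"), mon.get("public_addr"))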
Nov 29 02:59:22 np0005539550 nova_compute[257631]: 2025-11-29 07:59:22.744 257641 DEBUG oslo_concurrency.processutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 362 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 185 op/s
Nov 29 02:59:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063960537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.245 257641 DEBUG oslo_concurrency.processutils [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.249 257641 DEBUG nova.virt.libvirt.driver [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <uuid>6e814e3b-3edb-4f37-8701-c37929994645</uuid>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <name>instance-0000001f</name>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <nova:name>tempest-MigrationsAdminTest-server-1156943130</nova:name>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:59:22</nova:creationTime>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <nova:flavor name="tempest-test_resize_flavor_-1547071603">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <entry name="serial">6e814e3b-3edb-4f37-8701-c37929994645</entry>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <entry name="uuid">6e814e3b-3edb-4f37-8701-c37929994645</entry>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6e814e3b-3edb-4f37-8701-c37929994645_disk">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6e814e3b-3edb-4f37-8701-c37929994645_disk.config">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645/console.log" append="off"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:59:23 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:59:23 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:59:23 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:59:23 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
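The document between "End _get_guest_xml" and the driver.py:7555 marker above is the complete libvirt definition Nova hands to the hypervisor for instance-0000001f. For reference, a minimal libvirt-python sketch of how such a domain could be defined and started by hand; the connection URI and the dom_xml placeholder are assumptions, not values from this log:

    import libvirt  # libvirt-python bindings

    dom_xml = "..."  # paste a <domain type="kvm"> document like the one above

    conn = libvirt.open("qemu:///system")  # assumed local system URI
    try:
        dom = conn.defineXML(dom_xml)  # persist the definition
        dom.create()                   # boot it, as 'virsh start' would
        print("started", dom.name())
    finally:
        conn.close()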
Nov 29 02:59:23 np0005539550 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 29 02:59:23 np0005539550 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001e.scope: Consumed 16.871s CPU time.
Nov 29 02:59:23 np0005539550 systemd-machined[216673]: Machine qemu-17-instance-0000001e terminated.
Nov 29 02:59:23 np0005539550 systemd-machined[216673]: New machine qemu-18-instance-0000001f.
Nov 29 02:59:23 np0005539550 systemd[1]: Started Virtual Machine qemu-18-instance-0000001f.
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.460 257641 INFO nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance shutdown successfully after 24 seconds.
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.467 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance destroyed successfully.
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.473 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance destroyed successfully.
Nov 29 02:59:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:23.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.960 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403163.959567, 6e814e3b-3edb-4f37-8701-c37929994645 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.960 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Resumed (Lifecycle Event)
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.963 257641 DEBUG nova.compute.manager [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.966 257641 INFO nova.virt.libvirt.driver [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance running successfully.
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.966 257641 DEBUG nova.virt.libvirt.driver [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.985 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:23 np0005539550 nova_compute[257631]: 2025-11-29 07:59:23.989 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:59:24 np0005539550 nova_compute[257631]: 2025-11-29 07:59:24.037 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 02:59:24 np0005539550 nova_compute[257631]: 2025-11-29 07:59:24.037 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403163.9626768, 6e814e3b-3edb-4f37-8701-c37929994645 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:24 np0005539550 nova_compute[257631]: 2025-11-29 07:59:24.038 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Started (Lifecycle Event)
Nov 29 02:59:24 np0005539550 nova_compute[257631]: 2025-11-29 07:59:24.077 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:24 np0005539550 nova_compute[257631]: 2025-11-29 07:59:24.081 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:59:24 np0005539550 nova_compute[257631]: 2025-11-29 07:59:24.084 257641 INFO nova.compute.manager [None req-a8e57ac3-8a23-43c9-a16d-b922732cfa21 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance to original state: 'active'
Nov 29 02:59:24 np0005539550 nova_compute[257631]: 2025-11-29 07:59:24.125 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 02:59:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:24.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 1 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 301 active+clean; 362 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.5 MiB/s wr, 179 op/s
Nov 29 02:59:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:25 np0005539550 nova_compute[257631]: 2025-11-29 07:59:25.809 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:26.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 1 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 301 active+clean; 337 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 972 KiB/s wr, 156 op/s
Nov 29 02:59:26 np0005539550 nova_compute[257631]: 2025-11-29 07:59:26.971 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.120 257641 INFO nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deleting instance files /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_del
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.121 257641 INFO nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deletion of /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_del complete
Nov 29 02:59:27 np0005539550 podman[283434]: 2025-11-29 07:59:27.324744845 +0000 UTC m=+0.057169812 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 02:59:27 np0005539550 podman[283433]: 2025-11-29 07:59:27.35976392 +0000 UTC m=+0.094148847 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
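The two podman records above are scheduled health probes for the ovn_metadata_agent and multipathd containers; both report health_status=healthy with a zero failing streak, using the /openstack/healthcheck script mounted into each container as the configured test. The same probe can be run on demand; a sketch using the container names from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when the container is healthy.
    for name in ("ovn_metadata_agent", "multipathd"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")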
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.399 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.400 257641 INFO nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating image(s)
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.427 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.457 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.489 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.496 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.568 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
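The qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time so a malformed image cannot exhaust the compute host. A sketch of the same pattern through the processutils API, with the limits mirroring the logged command (assuming oslo.concurrency's ProcessLimits interface):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488",
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)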
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.568 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.569 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.569 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
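The acquire/release pair above is oslo.concurrency's lockutils serializing the image-cache fetch, so only one download of a given base image can run at a time; here the lock is held for ~0 s because the cached base file already exists. A minimal sketch of the primitive, with a hypothetical lock name rather than the image hash from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("base-image-fetch")  # placeholder lock name
    def fetch_base_image():
        # Only one caller in this process enters at a time; Nova additionally
        # uses external (file-based) locks to exclude other processes.
        print("fetching...")

    fetch_base_image()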
Nov 29 02:59:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.607 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:27 np0005539550 nova_compute[257631]: 2025-11-29 07:59:27.611 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:28.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:28 np0005539550 nova_compute[257631]: 2025-11-29 07:59:28.733 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:28 np0005539550 nova_compute[257631]: 2025-11-29 07:59:28.819 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] resizing rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
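The import/resize pair above is the RBD-backed boot path: the cached base image is copied into the vms pool with the rbd CLI, then grown to the flavor's 1 GiB root size. A hedged equivalent of the resize step using the python-rbd bindings, with the pool, image name, and size taken from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        with rbd.Image(ioctx, "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk") as image:
            image.resize(1073741824)  # 1 GiB, as logged by rbd_utils.resize
        ioctx.close()
    finally:
        cluster.shutdown()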
Nov 29 02:59:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 290 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 104 KiB/s wr, 125 op/s
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.537 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.538 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Ensure instance console log exists: /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.538 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.539 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.539 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.541 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.545 257641 WARNING nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.551 257641 DEBUG nova.virt.libvirt.host [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.552 257641 DEBUG nova.virt.libvirt.host [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.556 257641 DEBUG nova.virt.libvirt.host [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.556 257641 DEBUG nova.virt.libvirt.host [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
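The four host.py lines above show the driver probing for a usable CPU controller, first under cgroups v1 (missing) and then under cgroups v2 (found), which determines whether CPU shares and quotas can be applied to the guest. On a cgroups-v2 host the same answer can be read from a single file; a sketch using the standard kernel path, which is an assumption rather than a value from this log:

    # List the controllers available at the cgroup v2 root.
    with open("/sys/fs/cgroup/cgroup.controllers") as f:
        controllers = f.read().split()
    print("cpu controller present:", "cpu" in controllers)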
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.557 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.557 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.558 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.558 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.558 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.558 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.559 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.559 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.559 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.559 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.559 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.559 257641 DEBUG nova.virt.hardware [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.560 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:29 np0005539550 nova_compute[257631]: 2025-11-29 07:59:29.603 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935402513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.074 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.105 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.109 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:30.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 29 02:59:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 02:59:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458780795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.548 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.551 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <uuid>241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</uuid>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <name>instance-0000001e</name>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersAdmin275Test-server-1780743076</nova:name>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 07:59:29</nova:creationTime>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <nova:user uuid="83cbfda08549451c8499d3e9bfd0d2ff">tempest-ServersAdmin275Test-1435066616-project-member</nova:user>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <nova:project uuid="8f436e4e1ee14fa09904e250330051d0">tempest-ServersAdmin275Test-1435066616</nova:project>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <system>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <entry name="serial">241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</entry>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <entry name="uuid">241f7bd9-4070-42cb-99ab-0c9d53eb7e5b</entry>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </system>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <os>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  </os>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <features>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  </features>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  </clock>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  <devices>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      </source>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      </auth>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </disk>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/console.log" append="off"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </serial>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <video>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </video>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </rng>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 02:59:30 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 02:59:30 np0005539550 nova_compute[257631]:  </devices>
Nov 29 02:59:30 np0005539550 nova_compute[257631]: </domain>
Nov 29 02:59:30 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
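The XML above is the complete libvirt domain nova generated for instance 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b: an RBD-backed virtio root disk plus an RBD config-drive CD-ROM, both pointing at the three Ceph monitors on 192.168.122.100-102. A minimal sketch of pulling that definition back out of libvirt for inspection, assuming the libvirt-python binding and a local qemu:///system connection:

    import libvirt
    import xml.etree.ElementTree as ET

    # A read-only connection is enough for inspection.
    conn = libvirt.openReadOnly('qemu:///system')

    # Instance UUID taken from the log above.
    dom = conn.lookupByUUIDString('241f7bd9-4070-42cb-99ab-0c9d53eb7e5b')

    # XMLDesc() returns the live domain definition as a string.
    root = ET.fromstring(dom.XMLDesc())

    # List every network (RBD) disk and the Ceph monitors it points at.
    for disk in root.findall("./devices/disk[@type='network']"):
        src = disk.find('source')
        hosts = [h.get('name') for h in src.findall('host')]
        print(src.get('name'), '->', hosts)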
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.811 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 316 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 158 op/s
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.978 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.978 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 02:59:30 np0005539550 nova_compute[257631]: 2025-11-29 07:59:30.979 257641 INFO nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Using config drive
Nov 29 02:59:31 np0005539550 nova_compute[257631]: 2025-11-29 07:59:31.009 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:31 np0005539550 nova_compute[257631]: 2025-11-29 07:59:31.043 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:31 np0005539550 nova_compute[257631]: 2025-11-29 07:59:31.072 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lazy-loading 'keypairs' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:31 np0005539550 nova_compute[257631]: 2025-11-29 07:59:31.588 257641 INFO nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Creating config drive at /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config
Nov 29 02:59:31 np0005539550 nova_compute[257631]: 2025-11-29 07:59:31.594 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0wrvk4bd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:31.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 29 02:59:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 29 02:59:31 np0005539550 nova_compute[257631]: 2025-11-29 07:59:31.723 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0wrvk4bd" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
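The two processutils lines above bracket the config-drive build: mkisofs packs the rendered metadata tree from /tmp/tmp0wrvk4bd into an ISO 9660 image labelled config-2 (the volume label cloud-init probes for) and returns 0 in 0.129s. A rough stand-alone equivalent of that invocation, assuming mkisofs is on PATH:

    import subprocess

    # Same flags nova logged: Rock Ridge + Joliet, volume label "config-2".
    subprocess.run(
        ['mkisofs', '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/tmp0wrvk4bd'],
        check=True,
    )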
Nov 29 02:59:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:32.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:32 np0005539550 nova_compute[257631]: 2025-11-29 07:59:32.844 257641 DEBUG nova.storage.rbd_utils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] rbd image 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 02:59:32 np0005539550 nova_compute[257631]: 2025-11-29 07:59:32.849 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 325 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.7 MiB/s wr, 168 op/s
Nov 29 02:59:32 np0005539550 nova_compute[257631]: 2025-11-29 07:59:32.874 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:33 np0005539550 nova_compute[257631]: 2025-11-29 07:59:33.609 257641 DEBUG oslo_concurrency.processutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.761s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:33 np0005539550 nova_compute[257631]: 2025-11-29 07:59:33.610 257641 INFO nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deleting local config drive /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b/disk.config because it was imported into RBD.
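Because this deployment keeps ephemeral storage in Ceph, the ISO never stays on local disk: it is pushed into the vms pool with rbd import and the local copy deleted once the import returns 0. The same round trip, sketched with subprocess against the CLI arguments from the log:

    import subprocess

    image = '241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_disk.config'

    # Import the local ISO into the "vms" pool as a format-2 RBD image,
    # authenticating as the same client.openstack user nova uses.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', 'disk.config', image,
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True,
    )

    # Confirm the image landed before deleting the local file.
    subprocess.run(
        ['rbd', 'info', '--pool', 'vms', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf', image],
        check=True,
    )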
Nov 29 02:59:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:33.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:33 np0005539550 systemd-machined[216673]: New machine qemu-19-instance-0000001e.
Nov 29 02:59:33 np0005539550 systemd[1]: Started Virtual Machine qemu-19-instance-0000001e.
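systemd-machined registers each qemu guest as a machine backed by a transient scope unit, which is what the two lines above record. The registration can be inspected from the host; a small sketch:

    import subprocess

    # Shows the scope unit, leader PID and cgroup of the new guest.
    subprocess.run(['machinectl', 'status', 'qemu-19-instance-0000001e'],
                   check=True)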
Nov 29 02:59:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:34.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.196 257641 DEBUG nova.compute.manager [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.199 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.199 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.200 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403174.1988254, 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.200 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] VM Resumed (Lifecycle Event)
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.207 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance spawned successfully.
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.208 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.222 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.227 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.232 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.233 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.233 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.233 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.234 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.234 257641 DEBUG nova.virt.libvirt.driver [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.272 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.273 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403174.1989458, 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.273 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] VM Started (Lifecycle Event)
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.324 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.328 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.371 257641 DEBUG nova.compute.manager [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.422 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
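The Resumed/Started lifecycle events above arrive while the rebuild_spawning task is still open, so sync_power_state logs the pending task and skips rather than fight the in-flight rebuild. The underlying check is just libvirt's domain state; a sketch of reading it directly, noting that the power_state 1 in these lines corresponds to a running guest:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('241f7bd9-4070-42cb-99ab-0c9d53eb7e5b')

    # state() returns (state, reason); VIR_DOMAIN_RUNNING lines up with
    # the "DB power_state: 1, VM power_state: 1" entries above.
    state, reason = dom.state()
    print('running:', state == libvirt.VIR_DOMAIN_RUNNING)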
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.484 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.484 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.485 257641 DEBUG nova.objects.instance [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 02:59:34 np0005539550 nova_compute[257631]: 2025-11-29 07:59:34.556 257641 DEBUG oslo_concurrency.lockutils [None req-dc8bea93-d366-4955-a982-a5fa05daf0d0 3a7601c8276948e98a343db1dddbe667 a34b88ea36a3473ebf4b27de006d2277 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 329 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 167 op/s
Nov 29 02:59:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:35.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:35 np0005539550 nova_compute[257631]: 2025-11-29 07:59:35.813 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.071 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.072 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.072 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.072 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.073 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
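terminate_instance serializes on a per-instance lock plus a matching -events lock before touching the guest, which is why every acquire above is paired with a release. The pattern is oslo.concurrency's synchronized decorator; a minimal sketch with a hypothetical worker function:

    from oslo_concurrency import lockutils

    # In-process lock keyed on the instance UUID, mirroring the
    # do_terminate_instance lock lines above.
    @lockutils.synchronized('241f7bd9-4070-42cb-99ab-0c9d53eb7e5b')
    def do_terminate_instance():
        # Hypothetical body: tear the guest down under the lock.
        pass

    do_terminate_instance()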
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.074 257641 INFO nova.compute.manager [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Terminating instance
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.074 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "refresh_cache-241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.075 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquired lock "refresh_cache-241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.075 257641 DEBUG nova.network.neutron [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 02:59:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:59:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:59:36 np0005539550 nova_compute[257631]: 2025-11-29 07:59:36.522 257641 DEBUG nova.network.neutron [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:59:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 353 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.7 MiB/s wr, 151 op/s
Nov 29 02:59:37 np0005539550 nova_compute[257631]: 2025-11-29 07:59:37.017 257641 DEBUG nova.network.neutron [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:59:37 np0005539550 nova_compute[257631]: 2025-11-29 07:59:37.053 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Releasing lock "refresh_cache-241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 02:59:37 np0005539550 nova_compute[257631]: 2025-11-29 07:59:37.054 257641 DEBUG nova.compute.manager [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 02:59:37 np0005539550 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 29 02:59:37 np0005539550 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000001e.scope: Consumed 3.523s CPU time.
Nov 29 02:59:37 np0005539550 systemd-machined[216673]: Machine qemu-19-instance-0000001e terminated.
Nov 29 02:59:37 np0005539550 nova_compute[257631]: 2025-11-29 07:59:37.277 257641 INFO nova.virt.libvirt.driver [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance destroyed successfully.
Nov 29 02:59:37 np0005539550 nova_compute[257631]: 2025-11-29 07:59:37.278 257641 DEBUG nova.objects.instance [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lazy-loading 'resources' on Instance uuid 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:37.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:37 np0005539550 nova_compute[257631]: 2025-11-29 07:59:37.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:38.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 376 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 171 op/s
Nov 29 02:59:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 02:59:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/109193738' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 02:59:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 02:59:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/109193738' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 02:59:39 np0005539550 podman[283885]: 2025-11-29 07:59:39.388189259 +0000 UTC m=+0.127512299 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
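The podman line above is a periodic healthcheck event for the ovn_controller container (health_status=healthy, failing streak 0); the configured test is the /openstack/healthcheck script mounted into the container. The same check can be triggered on demand; a small sketch:

    import subprocess

    # Exit code 0 means the container's healthcheck passed.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'],
                   check=True)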
Nov 29 02:59:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:39.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:40.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:40 np0005539550 nova_compute[257631]: 2025-11-29 07:59:40.816 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 376 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 155 op/s
Nov 29 02:59:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:41.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:42.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 376 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 142 op/s
Nov 29 02:59:42 np0005539550 nova_compute[257631]: 2025-11-29 07:59:42.880 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:43.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:44.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 376 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Nov 29 02:59:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:45.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:45 np0005539550 nova_compute[257631]: 2025-11-29 07:59:45.817 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 02:59:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:46.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 02:59:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 350 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 37e2e27a-87d4-4d37-8877-f42ca51dbf72 does not exist
Nov 29 02:59:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d32c2502-552b-480d-9063-505d89c654f4 does not exist
Nov 29 02:59:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4e4419db-4711-4e77-9305-9d3400e4d6a1 does not exist
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 02:59:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 02:59:47 np0005539550 podman[284187]: 2025-11-29 07:59:47.482921404 +0000 UTC m=+0.039980543 container create 7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 02:59:47 np0005539550 systemd[1]: Started libpod-conmon-7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384.scope.
Nov 29 02:59:47 np0005539550 podman[284187]: 2025-11-29 07:59:47.466236247 +0000 UTC m=+0.023295406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:59:47 np0005539550 podman[284187]: 2025-11-29 07:59:47.586759777 +0000 UTC m=+0.143818946 container init 7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:59:47 np0005539550 podman[284187]: 2025-11-29 07:59:47.595254174 +0000 UTC m=+0.152313313 container start 7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_villani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 02:59:47 np0005539550 podman[284187]: 2025-11-29 07:59:47.598743833 +0000 UTC m=+0.155802992 container attach 7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:47 np0005539550 hardcore_villani[284204]: 167 167
Nov 29 02:59:47 np0005539550 systemd[1]: libpod-7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384.scope: Deactivated successfully.
Nov 29 02:59:47 np0005539550 podman[284187]: 2025-11-29 07:59:47.604684625 +0000 UTC m=+0.161743774 container died 7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_villani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 02:59:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a547499ab8a4272018491a559acc0375ea969a7910a4e57ac1628afcd0f756f5-merged.mount: Deactivated successfully.
Nov 29 02:59:47 np0005539550 podman[284187]: 2025-11-29 07:59:47.647038637 +0000 UTC m=+0.204097766 container remove 7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 02:59:47 np0005539550 systemd[1]: libpod-conmon-7abaa16e5576787ffb213eed72d7550549a48ef3abc075372ebc6ff93c35f384.scope: Deactivated successfully.
Nov 29 02:59:47 np0005539550 podman[284229]: 2025-11-29 07:59:47.827072107 +0000 UTC m=+0.041313277 container create 319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:59:47 np0005539550 systemd[1]: Started libpod-conmon-319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a.scope.
Nov 29 02:59:47 np0005539550 nova_compute[257631]: 2025-11-29 07:59:47.883 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:47 np0005539550 podman[284229]: 2025-11-29 07:59:47.811031057 +0000 UTC m=+0.025272247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:59:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264b6bdb05557e1fb0b1ae07702b5f13aa95e4f2f0f209c26d3cf2faf2767b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264b6bdb05557e1fb0b1ae07702b5f13aa95e4f2f0f209c26d3cf2faf2767b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264b6bdb05557e1fb0b1ae07702b5f13aa95e4f2f0f209c26d3cf2faf2767b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264b6bdb05557e1fb0b1ae07702b5f13aa95e4f2f0f209c26d3cf2faf2767b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d264b6bdb05557e1fb0b1ae07702b5f13aa95e4f2f0f209c26d3cf2faf2767b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:47 np0005539550 podman[284229]: 2025-11-29 07:59:47.929904644 +0000 UTC m=+0.144145814 container init 319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 02:59:47 np0005539550 podman[284229]: 2025-11-29 07:59:47.936965845 +0000 UTC m=+0.151207015 container start 319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:59:47 np0005539550 podman[284229]: 2025-11-29 07:59:47.940655199 +0000 UTC m=+0.154896389 container attach 319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:59:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:48.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
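
Each start/done/beast triplet above records one anonymous "HEAD /" probe answered 200 in about a millisecond; the same triplet recurring every second or two from 192.168.122.100 and 192.168.122.102 is the signature of load-balancer health checks against radosgw's beast frontend. A probe of the same shape, sketched in Python; the host and port are assumptions, since only the client addresses appear in these lines:

    import http.client

    # Endpoint assumed for illustration; the beast listen port is not
    # visible in this journal.
    conn = http.client.HTTPConnection("localhost", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # radosgw answers 200, empty body
    conn.close()
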
Nov 29 02:59:48 np0005539550 eager_burnell[284246]: --> passed data devices: 0 physical, 1 LVM
Nov 29 02:59:48 np0005539550 eager_burnell[284246]: --> relative data size: 1.0
Nov 29 02:59:48 np0005539550 eager_burnell[284246]: --> All data devices are unavailable
Nov 29 02:59:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 02:59:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 02:59:48 np0005539550 systemd[1]: libpod-319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a.scope: Deactivated successfully.
Nov 29 02:59:48 np0005539550 podman[284229]: 2025-11-29 07:59:48.843285922 +0000 UTC m=+1.057527092 container died 319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 02:59:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d264b6bdb05557e1fb0b1ae07702b5f13aa95e4f2f0f209c26d3cf2faf2767b4-merged.mount: Deactivated successfully.
Nov 29 02:59:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 329 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 488 KiB/s wr, 146 op/s
Nov 29 02:59:48 np0005539550 podman[284229]: 2025-11-29 07:59:48.900008171 +0000 UTC m=+1.114249341 container remove 319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 02:59:48 np0005539550 systemd[1]: libpod-conmon-319497bccc07b1a5b06f3934e5341766c117620f5674b0b3906ca4459fd1fa9a.scope: Deactivated successfully.
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.289 257641 INFO nova.virt.libvirt.driver [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deleting instance files /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_del
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.291 257641 INFO nova.virt.libvirt.driver [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deletion of /var/lib/nova/instances/241f7bd9-4070-42cb-99ab-0c9d53eb7e5b_del complete
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.475 257641 INFO nova.compute.manager [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Took 12.42 seconds to destroy the instance on the hypervisor.
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.476 257641 DEBUG oslo.service.loopingcall [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.476 257641 DEBUG nova.compute.manager [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.476 257641 DEBUG nova.network.neutron [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 02:59:49 np0005539550 podman[284466]: 2025-11-29 07:59:49.559678765 +0000 UTC m=+0.037782376 container create 74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:59:49 np0005539550 systemd[1]: Started libpod-conmon-74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3.scope.
Nov 29 02:59:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:59:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:49.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:49 np0005539550 podman[284466]: 2025-11-29 07:59:49.636674982 +0000 UTC m=+0.114778603 container init 74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 02:59:49 np0005539550 podman[284466]: 2025-11-29 07:59:49.54343696 +0000 UTC m=+0.021540571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:49 np0005539550 podman[284466]: 2025-11-29 07:59:49.644608195 +0000 UTC m=+0.122711786 container start 74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.646 257641 DEBUG nova.network.neutron [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 02:59:49 np0005539550 angry_tesla[284483]: 167 167
Nov 29 02:59:49 np0005539550 systemd[1]: libpod-74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3.scope: Deactivated successfully.
Nov 29 02:59:49 np0005539550 podman[284466]: 2025-11-29 07:59:49.650105386 +0000 UTC m=+0.128208977 container attach 74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 02:59:49 np0005539550 podman[284466]: 2025-11-29 07:59:49.650766572 +0000 UTC m=+0.128870163 container died 74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.670 257641 DEBUG nova.network.neutron [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 02:59:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2a1319ddc5566e78d603c1d461e8dde0e829cd8cbe57e871922a968dca316740-merged.mount: Deactivated successfully.
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.690 257641 INFO nova.compute.manager [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Took 0.21 seconds to deallocate network for instance.
Nov 29 02:59:49 np0005539550 podman[284466]: 2025-11-29 07:59:49.695260729 +0000 UTC m=+0.173364320 container remove 74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 02:59:49 np0005539550 systemd[1]: libpod-conmon-74367d46a6bf99d16629b86dc2921eb3206a5974559d9588a9bba3e2468027a3.scope: Deactivated successfully.
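
The create, init, start, attach, died, remove sequence that brackets these entries (eager_burnell above, angry_tesla here) is a one-shot container that cephadm spawns from the pinned quay.io/ceph/ceph image to run a single ceph-volume command and then discard; each round trip completes in under a second. A rough sketch of that pattern with Python's subprocess; the image digest is the one in the log, while the ceph-volume arguments and the absent device mounts are illustrative, not what cephadm actually passed:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm deletes the container on exit, matching the immediate
    # "container remove" events in the journal above.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "inventory", "--format", "json"],  # illustrative command
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
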
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.736 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.737 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 29 02:59:49 np0005539550 nova_compute[257631]: 2025-11-29 07:59:49.887 257641 DEBUG oslo_concurrency.processutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:49 np0005539550 podman[284506]: 2025-11-29 07:59:49.834852286 +0000 UTC m=+0.024228720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:50 np0005539550 podman[284506]: 2025-11-29 07:59:50.042553623 +0000 UTC m=+0.231930037 container create a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chebyshev, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 02:59:50 np0005539550 systemd[1]: Started libpod-conmon-a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08.scope.
Nov 29 02:59:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:59:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8c14c76c2d759881cf9924d859a21c1cc098e27b14862bcbd21a550141bbf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8c14c76c2d759881cf9924d859a21c1cc098e27b14862bcbd21a550141bbf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8c14c76c2d759881cf9924d859a21c1cc098e27b14862bcbd21a550141bbf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8c14c76c2d759881cf9924d859a21c1cc098e27b14862bcbd21a550141bbf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:50.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:50 np0005539550 podman[284506]: 2025-11-29 07:59:50.185153056 +0000 UTC m=+0.374529490 container init a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:50 np0005539550 podman[284506]: 2025-11-29 07:59:50.195724136 +0000 UTC m=+0.385100550 container start a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:50 np0005539550 podman[284506]: 2025-11-29 07:59:50.237995517 +0000 UTC m=+0.427371951 container attach a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 02:59:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616100457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:50 np0005539550 nova_compute[257631]: 2025-11-29 07:59:50.337 257641 DEBUG oslo_concurrency.processutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 02:59:50 np0005539550 nova_compute[257631]: 2025-11-29 07:59:50.345 257641 DEBUG nova.compute.provider_tree [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:59:50 np0005539550 nova_compute[257631]: 2025-11-29 07:59:50.361 257641 DEBUG nova.scheduler.client.report [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
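
Placement derives schedulable capacity for each resource class in that inventory as (total - reserved) * allocation_ratio, so the dict above works out to 32 vCPUs, 7168 MB of RAM, and 17.1 GB of disk:

    # Schedulable capacity = (total - reserved) * allocation_ratio,
    # using the values from the inventory line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
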
Nov 29 02:59:50 np0005539550 nova_compute[257631]: 2025-11-29 07:59:50.387 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:50 np0005539550 nova_compute[257631]: 2025-11-29 07:59:50.436 257641 INFO nova.scheduler.client.report [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Deleted allocations for instance 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b
Nov 29 02:59:50 np0005539550 nova_compute[257631]: 2025-11-29 07:59:50.544 257641 DEBUG oslo_concurrency.lockutils [None req-9f095911-15ff-4488-b054-e4f9c1d33fe1 83cbfda08549451c8499d3e9bfd0d2ff 8f436e4e1ee14fa09904e250330051d0 - - default default] Lock "241f7bd9-4070-42cb-99ab-0c9d53eb7e5b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 02:59:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 29 02:59:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 29 02:59:50 np0005539550 nova_compute[257631]: 2025-11-29 07:59:50.820 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 331 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 20 KiB/s wr, 142 op/s
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]: {
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:    "0": [
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:        {
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "devices": [
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "/dev/loop3"
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            ],
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "lv_name": "ceph_lv0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "lv_size": "7511998464",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "name": "ceph_lv0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "tags": {
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.cephx_lockbox_secret": "",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.cluster_name": "ceph",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.crush_device_class": "",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.encrypted": "0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.osd_id": "0",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.type": "block",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:                "ceph.vdo": "0"
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            },
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "type": "block",
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:            "vg_name": "ceph_vg0"
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:        }
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]:    ]
Nov 29 02:59:50 np0005539550 practical_chebyshev[284544]: }
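
The JSON that practical_chebyshev printed is ceph-volume's per-OSD LVM listing for this host (the shape matches `ceph-volume lvm list --format json`): one OSD, id 0, backed by logical volume ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster and OSD fsids carried as LV tags. A small sketch of pulling those fields back out, using a fragment trimmed from the output above:

    import json

    # Trimmed from the practical_chebyshev output above.
    raw = """{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "devices": ["/dev/loop3"],
                     "tags": {"ceph.osd_fsid":
                              "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}}]}"""

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] 5dd67027-4f06-4800-93bd-47ed1a74c5e6
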
Nov 29 02:59:51 np0005539550 podman[284506]: 2025-11-29 07:59:51.004687536 +0000 UTC m=+1.194063950 container died a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chebyshev, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 02:59:51 np0005539550 systemd[1]: libpod-a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08.scope: Deactivated successfully.
Nov 29 02:59:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6f8c14c76c2d759881cf9924d859a21c1cc098e27b14862bcbd21a550141bbf1-merged.mount: Deactivated successfully.
Nov 29 02:59:51 np0005539550 podman[284506]: 2025-11-29 07:59:51.167900726 +0000 UTC m=+1.357277140 container remove a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:51 np0005539550 systemd[1]: libpod-conmon-a0f5a91b955cb1c73d7a6c4a2c93916fdca004d7bf64a2ae4a23591ebce1dd08.scope: Deactivated successfully.
Nov 29 02:59:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:51.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:51 np0005539550 podman[284708]: 2025-11-29 07:59:51.780516699 +0000 UTC m=+0.040172508 container create 3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:59:51 np0005539550 podman[284708]: 2025-11-29 07:59:51.762738414 +0000 UTC m=+0.022394243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:51 np0005539550 systemd[1]: Started libpod-conmon-3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b.scope.
Nov 29 02:59:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:59:51 np0005539550 podman[284708]: 2025-11-29 07:59:51.952665397 +0000 UTC m=+0.212321226 container init 3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 02:59:51 np0005539550 podman[284708]: 2025-11-29 07:59:51.960410625 +0000 UTC m=+0.220066434 container start 3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:59:51 np0005539550 great_mclean[284724]: 167 167
Nov 29 02:59:51 np0005539550 systemd[1]: libpod-3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b.scope: Deactivated successfully.
Nov 29 02:59:51 np0005539550 conmon[284724]: conmon 3958bdf0c7079b5a543d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b.scope/container/memory.events
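
This conmon warning looks like the usual benign race with short-lived containers: great_mclean exited almost immediately, so systemd tore the scope down before conmon could read memory.events. Checking the same path by hand shows the effect; the file only exists while the scope does:

    from pathlib import Path

    # Path copied from the warning above; it vanishes with the scope.
    p = Path("/sys/fs/cgroup/machine.slice/"
             "libpod-3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b"
             ".scope/container/memory.events")
    print(p.read_text() if p.exists() else "scope already removed")
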
Nov 29 02:59:51 np0005539550 podman[284708]: 2025-11-29 07:59:51.967734872 +0000 UTC m=+0.227390701 container attach 3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:51 np0005539550 podman[284708]: 2025-11-29 07:59:51.968129852 +0000 UTC m=+0.227785661 container died 3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 02:59:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f52101decf73e391e31cafbe899be8e9a0df66a99d922f5eba8efadcd09b2075-merged.mount: Deactivated successfully.
Nov 29 02:59:52 np0005539550 podman[284708]: 2025-11-29 07:59:52.152698978 +0000 UTC m=+0.412354787 container remove 3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mclean, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 02:59:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:52.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:52 np0005539550 systemd[1]: libpod-conmon-3958bdf0c7079b5a543db90d337be235e04cbd27284311f9d8331e233785a12b.scope: Deactivated successfully.
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.276 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403177.2746425, 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.278 257641 INFO nova.compute.manager [-] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] VM Stopped (Lifecycle Event)
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.310 257641 DEBUG nova.compute.manager [None req-c15ff447-a079-429f-9336-093b51a7d925 - - - - - -] [instance: 241f7bd9-4070-42cb-99ab-0c9d53eb7e5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 02:59:52 np0005539550 podman[284749]: 2025-11-29 07:59:52.32419471 +0000 UTC m=+0.049272450 container create b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 02:59:52 np0005539550 systemd[1]: Started libpod-conmon-b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958.scope.
Nov 29 02:59:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 02:59:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b3dc736f6ba4c7b4dbb4d8fd48bee9fc49399348435a5397ce1a9c1cad8fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b3dc736f6ba4c7b4dbb4d8fd48bee9fc49399348435a5397ce1a9c1cad8fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b3dc736f6ba4c7b4dbb4d8fd48bee9fc49399348435a5397ce1a9c1cad8fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6b3dc736f6ba4c7b4dbb4d8fd48bee9fc49399348435a5397ce1a9c1cad8fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 02:59:52 np0005539550 podman[284749]: 2025-11-29 07:59:52.301063439 +0000 UTC m=+0.026141199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 02:59:52 np0005539550 podman[284749]: 2025-11-29 07:59:52.407226041 +0000 UTC m=+0.132303791 container init b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 02:59:52 np0005539550 podman[284749]: 2025-11-29 07:59:52.414568519 +0000 UTC m=+0.139646249 container start b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 02:59:52 np0005539550 podman[284749]: 2025-11-29 07:59:52.418167441 +0000 UTC m=+0.143245201 container attach b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.706 257641 DEBUG nova.compute.manager [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 02:59:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 345 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 824 KiB/s wr, 164 op/s
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.885 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.958 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.959 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 02:59:52 np0005539550 nova_compute[257631]: 2025-11-29 07:59:52.998 257641 DEBUG nova.objects.instance [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'pci_requests' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.015 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.016 257641 INFO nova.compute.claims [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Claim successful on node compute-0.ctlplane.example.com
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.016 257641 DEBUG nova.objects.instance [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'resources' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.033 257641 DEBUG nova.objects.instance [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'numa_topology' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.057 257641 DEBUG nova.objects.instance [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'pci_devices' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.101 257641 INFO nova.compute.resource_tracker [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating resource usage from migration 4ebfcc9e-3184-4432-81df-ac416954d5bb
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.102 257641 DEBUG nova.compute.resource_tracker [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Starting to track incoming migration 4ebfcc9e-3184-4432-81df-ac416954d5bb with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.236 257641 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 02:59:53 np0005539550 silly_albattani[284765]: {
Nov 29 02:59:53 np0005539550 silly_albattani[284765]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 02:59:53 np0005539550 silly_albattani[284765]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 02:59:53 np0005539550 silly_albattani[284765]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 02:59:53 np0005539550 silly_albattani[284765]:        "osd_id": 0,
Nov 29 02:59:53 np0005539550 silly_albattani[284765]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 02:59:53 np0005539550 silly_albattani[284765]:        "type": "bluestore"
Nov 29 02:59:53 np0005539550 silly_albattani[284765]:    }
Nov 29 02:59:53 np0005539550 silly_albattani[284765]: }
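
Where the earlier listing was keyed by OSD id, this one is keyed by OSD fsid and describes the same OSD as a bluestore device behind /dev/mapper/ceph_vg0-ceph_lv0; the osd_uuid here equals the ceph.osd_fsid LV tag above, which is how the two views join. A tiny consistency check with values copied from the two JSON documents:

    # Values copied from the two JSON documents above.
    lvm_listing = {"0": [{"tags": {"ceph.osd_fsid":
                                   "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}}]}
    raw_listing = {"5dd67027-4f06-4800-93bd-47ed1a74c5e6":
                   {"osd_id": 0, "type": "bluestore",
                    "device": "/dev/mapper/ceph_vg0-ceph_lv0"}}

    fsid = lvm_listing["0"][0]["tags"]["ceph.osd_fsid"]
    assert raw_listing[fsid]["osd_id"] == 0  # both describe the same OSD
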
Nov 29 02:59:53 np0005539550 systemd[1]: libpod-b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958.scope: Deactivated successfully.
Nov 29 02:59:53 np0005539550 podman[284749]: 2025-11-29 07:59:53.354059713 +0000 UTC m=+1.079137443 container died b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 02:59:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0c6b3dc736f6ba4c7b4dbb4d8fd48bee9fc49399348435a5397ce1a9c1cad8fe-merged.mount: Deactivated successfully.
Nov 29 02:59:53 np0005539550 podman[284749]: 2025-11-29 07:59:53.412066475 +0000 UTC m=+1.137144205 container remove b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 02:59:53 np0005539550 systemd[1]: libpod-conmon-b5b915c7b7d5b9638f64e8c32abc664f1a00c39f97ed67617ec97ececa592958.scope: Deactivated successfully.
Nov 29 02:59:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 02:59:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:53.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 02:59:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241822061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.693 257641 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
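
This CMD line closes the `ceph df --format=json` call the mon audited a few entries above; nova shells out to it through oslo.concurrency's processutils to size its Ceph-backed storage, and both invocations in this window take roughly 0.45 s. Approximately the same call without the oslo wrapper, as a sketch reusing the credentials shown in the command line:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]  # cluster-wide totals
    print(stats["total_bytes"], stats["total_avail_bytes"])
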
Nov 29 02:59:53 np0005539550 nova_compute[257631]: 2025-11-29 07:59:53.701 257641 DEBUG nova.compute.provider_tree [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 02:59:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 02:59:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:54.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 02:59:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0fbab957-3180-496b-95d1-c68e0fe4fb98 does not exist
Nov 29 02:59:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e81de6dc-ca85-4268-adde-55b6c52241d2 does not exist
Nov 29 02:59:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3525d523-cde0-4bf5-82b2-cebbd788220e does not exist
Nov 29 02:59:54 np0005539550 nova_compute[257631]: 2025-11-29 07:59:54.801 257641 DEBUG nova.scheduler.client.report [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 02:59:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 361 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.3 MiB/s wr, 164 op/s
Nov 29 02:59:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 02:59:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:55.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 02:59:55 np0005539550 nova_compute[257631]: 2025-11-29 07:59:55.683 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 2.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 02:59:55 np0005539550 nova_compute[257631]: 2025-11-29 07:59:55.683 257641 INFO nova.compute.manager [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Migrating#033[00m
Nov 29 02:59:55 np0005539550 nova_compute[257631]: 2025-11-29 07:59:55.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:59:56.024 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 02:59:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 07:59:56.026 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 02:59:56 np0005539550 nova_compute[257631]: 2025-11-29 07:59:56.025 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:56.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 377 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Nov 29 02:59:57 np0005539550 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 02:59:57 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 02:59:57 np0005539550 systemd-logind[788]: New session 54 of user nova.
Nov 29 02:59:57 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 02:59:57 np0005539550 systemd[1]: Starting User Manager for UID 42436...
Nov 29 02:59:57 np0005539550 systemd[284874]: Queued start job for default target Main User Target.
Nov 29 02:59:57 np0005539550 systemd[284874]: Created slice User Application Slice.
Nov 29 02:59:57 np0005539550 systemd[284874]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 02:59:57 np0005539550 systemd[284874]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 02:59:57 np0005539550 systemd[284874]: Reached target Paths.
Nov 29 02:59:57 np0005539550 systemd[284874]: Reached target Timers.
Nov 29 02:59:57 np0005539550 systemd[284874]: Starting D-Bus User Message Bus Socket...
Nov 29 02:59:57 np0005539550 systemd[284874]: Starting Create User's Volatile Files and Directories...
Nov 29 02:59:57 np0005539550 systemd[284874]: Finished Create User's Volatile Files and Directories.
Nov 29 02:59:57 np0005539550 systemd[284874]: Listening on D-Bus User Message Bus Socket.
Nov 29 02:59:57 np0005539550 systemd[284874]: Reached target Sockets.
Nov 29 02:59:57 np0005539550 systemd[284874]: Reached target Basic System.
Nov 29 02:59:57 np0005539550 systemd[284874]: Reached target Main User Target.
Nov 29 02:59:57 np0005539550 systemd[284874]: Startup finished in 156ms.
Nov 29 02:59:57 np0005539550 systemd[1]: Started User Manager for UID 42436.
Nov 29 02:59:57 np0005539550 systemd[1]: Started Session 54 of User nova.
Nov 29 02:59:57 np0005539550 podman[284890]: 2025-11-29 07:59:57.514214656 +0000 UTC m=+0.059098831 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 02:59:57 np0005539550 podman[284889]: 2025-11-29 07:59:57.520696922 +0000 UTC m=+0.065520815 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 02:59:57 np0005539550 systemd[1]: session-54.scope: Deactivated successfully.
Nov 29 02:59:57 np0005539550 systemd-logind[788]: Session 54 logged out. Waiting for processes to exit.
Nov 29 02:59:57 np0005539550 systemd-logind[788]: Removed session 54.
Nov 29 02:59:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:57.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:57 np0005539550 systemd-logind[788]: New session 56 of user nova.
Nov 29 02:59:57 np0005539550 systemd[1]: Started Session 56 of User nova.
Nov 29 02:59:57 np0005539550 systemd[1]: session-56.scope: Deactivated successfully.
Nov 29 02:59:57 np0005539550 systemd-logind[788]: Session 56 logged out. Waiting for processes to exit.
Nov 29 02:59:57 np0005539550 systemd-logind[788]: Removed session 56.
Nov 29 02:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 29 02:59:57 np0005539550 nova_compute[257631]: 2025-11-29 07:59:57.888 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 02:59:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 02:59:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:07:59:58.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 02:59:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 02:59:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 02:59:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 29 02:59:58 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 29 02:59:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 377 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 564 KiB/s rd, 2.6 MiB/s wr, 70 op/s
Nov 29 02:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_07:59:59
Nov 29 02:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 02:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 02:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data']
Nov 29 02:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 02:59:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 02:59:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 02:59:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:07:59:59.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:00:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:00.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:00 np0005539550 nova_compute[257631]: 2025-11-29 08:00:00.864 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:00 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 03:00:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 385 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 500 KiB/s rd, 3.1 MiB/s wr, 83 op/s
Nov 29 03:00:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:01.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:02.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 399 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Nov 29 03:00:02 np0005539550 nova_compute[257631]: 2025-11-29 08:00:02.890 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:04.027 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:04.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 403 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 417 KiB/s rd, 3.3 MiB/s wr, 104 op/s
Nov 29 03:00:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:05 np0005539550 nova_compute[257631]: 2025-11-29 08:00:05.866 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:06.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 410 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 139 op/s
Nov 29 03:00:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:07.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:07 np0005539550 nova_compute[257631]: 2025-11-29 08:00:07.894 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:07 np0005539550 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:00:07 np0005539550 systemd[284874]: Activating special unit Exit the Session...
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped target Main User Target.
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped target Basic System.
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped target Paths.
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped target Sockets.
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped target Timers.
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:00:07 np0005539550 systemd[284874]: Closed D-Bus User Message Bus Socket.
Nov 29 03:00:07 np0005539550 systemd[284874]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:00:07 np0005539550 systemd[284874]: Removed slice User Application Slice.
Nov 29 03:00:07 np0005539550 systemd[284874]: Reached target Shutdown.
Nov 29 03:00:07 np0005539550 systemd[284874]: Finished Exit the Session.
Nov 29 03:00:07 np0005539550 systemd[284874]: Reached target Exit the Session.
Nov 29 03:00:07 np0005539550 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:00:07 np0005539550 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:00:08 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:00:08 np0005539550 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:00:08 np0005539550 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:00:08 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:00:08 np0005539550 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:00:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:08.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:00:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 410 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 178 op/s
Nov 29 03:00:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:09 np0005539550 nova_compute[257631]: 2025-11-29 08:00:09.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:09 np0005539550 nova_compute[257631]: 2025-11-29 08:00:09.922 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:00:09 np0005539550 nova_compute[257631]: 2025-11-29 08:00:09.922 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:00:10 np0005539550 nova_compute[257631]: 2025-11-29 08:00:10.183 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:10 np0005539550 nova_compute[257631]: 2025-11-29 08:00:10.183 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:10 np0005539550 nova_compute[257631]: 2025-11-29 08:00:10.183 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:00:10 np0005539550 nova_compute[257631]: 2025-11-29 08:00:10.183 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:00:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:10.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:00:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 29 03:00:10 np0005539550 podman[284997]: 2025-11-29 08:00:10.353034135 +0000 UTC m=+0.087282021 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Nov 29 03:00:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 29 03:00:10 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 29 03:00:10 np0005539550 nova_compute[257631]: 2025-11-29 08:00:10.499 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:10 np0005539550 nova_compute[257631]: 2025-11-29 08:00:10.868 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 410 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.6 MiB/s wr, 163 op/s
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.035 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.199 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.200 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.200 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.200 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.201 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.201 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.456 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.456 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.456 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.456 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.456 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:11.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2808546068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.904 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.972 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:11 np0005539550 nova_compute[257631]: 2025-11-29 08:00:11.972 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:00:12 np0005539550 nova_compute[257631]: 2025-11-29 08:00:12.128 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:12 np0005539550 nova_compute[257631]: 2025-11-29 08:00:12.129 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4535MB free_disk=20.785263061523438GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:00:12 np0005539550 nova_compute[257631]: 2025-11-29 08:00:12.129 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:12 np0005539550 nova_compute[257631]: 2025-11-29 08:00:12.130 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 410 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 748 KiB/s wr, 147 op/s
Nov 29 03:00:12 np0005539550 nova_compute[257631]: 2025-11-29 08:00:12.897 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:13.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:13 np0005539550 nova_compute[257631]: 2025-11-29 08:00:13.904 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Applying migration context for instance 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd as it has an incoming, in-progress migration 4ebfcc9e-3184-4432-81df-ac416954d5bb. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Nov 29 03:00:13 np0005539550 nova_compute[257631]: 2025-11-29 08:00:13.905 257641 INFO nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating resource usage from migration 4ebfcc9e-3184-4432-81df-ac416954d5bb#033[00m
Nov 29 03:00:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:14.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:14 np0005539550 nova_compute[257631]: 2025-11-29 08:00:14.692 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 6e814e3b-3edb-4f37-8701-c37929994645 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:14 np0005539550 nova_compute[257631]: 2025-11-29 08:00:14.693 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:00:14 np0005539550 nova_compute[257631]: 2025-11-29 08:00:14.694 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:00:14 np0005539550 nova_compute[257631]: 2025-11-29 08:00:14.694 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:00:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 410 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 107 KiB/s wr, 108 op/s
Nov 29 03:00:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:15.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:15 np0005539550 nova_compute[257631]: 2025-11-29 08:00:15.799 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:00:15 np0005539550 nova_compute[257631]: 2025-11-29 08:00:15.870 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:16.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.382 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.383 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.397 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.397 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquired lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.398 257641 DEBUG nova.network.neutron [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.419 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.445 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.505 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.693 257641 DEBUG nova.network.neutron [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 410 MiB data, 585 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 31 KiB/s wr, 55 op/s
Nov 29 03:00:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4158813653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.925 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.931 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.947 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.979 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:00:16 np0005539550 nova_compute[257631]: 2025-11-29 08:00:16.980 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.154 257641 DEBUG nova.network.neutron [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.185 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Releasing lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.290 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.292 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.292 257641 INFO nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Creating image(s)#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.329 257641 DEBUG nova.storage.rbd_utils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] creating snapshot(nova-resize) on rbd image(35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:00:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:17.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.698 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.698 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.699 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.699 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.699 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
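The _reclaim_queued_deletes task short-circuits because reclaim_instance_interval is at its default of 0, so soft-deleted instances are not held for deferred reclaim. A nova.conf sketch of the knob that would activate the task (the 300 is illustrative, not from this deployment):

    [DEFAULT]
    # 0 (the default) disables deferred reclaim, producing the
    # "CONF.reclaim_instance_interval <= 0, skipping..." line above;
    # a positive value retains soft-deleted instances for that many
    # seconds before the periodic task reclaims them.
    reclaim_instance_interval = 300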
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:17 np0005539550 nova_compute[257631]: 2025-11-29 08:00:17.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:00:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:18.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 29 03:00:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 29 03:00:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 29 03:00:18 np0005539550 nova_compute[257631]: 2025-11-29 08:00:18.748 257641 DEBUG nova.objects.instance [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 426 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 137 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Nov 29 03:00:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:18.933 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:18.934 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:18.934 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.021 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.022 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Ensure instance console log exists: /var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.022 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.022 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.023 257641 DEBUG oslo_concurrency.lockutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.024 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.028 257641 WARNING nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.033 257641 DEBUG nova.virt.libvirt.host [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.033 257641 DEBUG nova.virt.libvirt.host [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.037 257641 DEBUG nova.virt.libvirt.host [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.038 257641 DEBUG nova.virt.libvirt.host [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
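These two probes decide where CPU limits can be enforced: the cgroups-v1 lookup fails and the v2 lookup succeeds, so this is a pure cgroups-v2 host. On such a host the v2 check reduces to reading the enabled-controllers file; a minimal sketch, assuming the standard v2 mount path:

    # 'cpu' listed in cgroup.controllers is what "CPU controller found
    # on host." reports above; its absence from a v1 hierarchy is the
    # earlier "CPU controller missing on host."
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    print('cpu' in controllers)  # True on this host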
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.039 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.039 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.040 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.040 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.040 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.040 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.041 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.041 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.041 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.041 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.042 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.042 257641 DEBUG nova.virt.hardware [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
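The lines above walk nova's CPU-topology selection: with no flavor or image constraints, the limits default to 65536 per dimension and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A simplified sketch of the enumeration step (the real nova.virt.hardware code also orders results by the preferred topology):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Keep every (sockets, cores, threads) whose product is exactly
        # the vCPU count and which fits within the limits; no factor can
        # exceed vcpus, so the ranges are capped there for brevity.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log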
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.042 257641 DEBUG nova.objects.instance [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.070 257641 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/509874043' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.538 257641 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
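This mon dump round-trip is how nova learns the monitor addresses that later appear as the <host> entries in the generated domain XML. A sketch of deriving them from the same command's JSON output ('public_addr' in the common 'IP:PORT/nonce' form; newer clusters may expose 'public_addrs' instead):

    import json
    import subprocess

    # Same command the log shows nova running via oslo.concurrency.
    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(out)['mons']:
        addr = mon['public_addr'].split('/')[0]  # e.g. '192.168.122.100:6789/0'
        host, port = addr.rsplit(':', 1)
        print(host, port)  # the three mon <host> entries in the domain XML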
Nov 29 03:00:19 np0005539550 nova_compute[257631]: 2025-11-29 08:00:19.588 257641 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:19.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010473737320863577 of space, bias 1.0, pg target 3.1421211962590734 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:00:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
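These pg_autoscaler lines carry a reproducible calculation: the raw PG target is the pool's share of capacity times its bias times the cluster-wide PG budget (mon_target_pg_per_osd times the OSD count). A worked sketch for the 'vms' pool, assuming the default mon_target_pg_per_osd of 100; the quantized figure additionally reflects power-of-two rounding and per-pool minimum/profile rules not reproduced here:

    # Numbers taken from the 'vms' line and the osdmap above.
    capacity_ratio = 0.010473737320863577  # "using ... of space"
    bias = 1.0
    target_pg_per_osd = 100                # mon_target_pg_per_osd default (assumed)
    num_osds = 3                           # "3 total, 3 up, 3 in"

    raw_target = capacity_ratio * bias * target_pg_per_osd * num_osds
    print(raw_target)  # ~3.1421, matching "pg target 3.1421211962590734"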
Nov 29 03:00:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3660920672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:20 np0005539550 nova_compute[257631]: 2025-11-29 08:00:20.040 257641 DEBUG oslo_concurrency.processutils [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:20 np0005539550 nova_compute[257631]: 2025-11-29 08:00:20.048 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <uuid>35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</uuid>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <name>instance-00000020</name>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <nova:name>tempest-MigrationsAdminTest-server-10669624</nova:name>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:00:19</nova:creationTime>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <nova:user uuid="51ae07f600c545c0b4c7fae00657ea40">tempest-MigrationsAdminTest-1930136363-project-member</nova:user>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <nova:project uuid="6717732f9fa242b181f58881b03d246f">tempest-MigrationsAdminTest-1930136363</nova:project>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <entry name="serial">35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</entry>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <entry name="uuid">35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd</entry>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd_disk.config">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd/console.log" append="off"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:00:20 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:00:20 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:00:20 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:00:20 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
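The XML just logged is what finish_migration hands to libvirt: an RBD-backed virtio root disk and a sata config-drive cdrom, both authenticating as the 'openstack' cephx user against the three monitors, a pty serial console logging to console.log, and a stack of pcie-root-ports for hotplug headroom. A minimal sketch of the define-and-boot step with the libvirt Python bindings (URI and file path are placeholders; nova drives this through its own Guest wrapper rather than calling libvirt directly like this):

    import libvirt

    conn = libvirt.open('qemu:///system')  # placeholder URI
    with open('domain.xml') as f:          # XML like the dump above
        dom = conn.defineXML(f.read())     # persistent definition
    dom.create()                           # boots the guest; systemd-machined then
    print(dom.name())                      # registers it as qemu-NN-instance-...
    conn.close()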
Nov 29 03:00:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:20.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:20 np0005539550 nova_compute[257631]: 2025-11-29 08:00:20.436 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:20 np0005539550 nova_compute[257631]: 2025-11-29 08:00:20.436 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:20 np0005539550 nova_compute[257631]: 2025-11-29 08:00:20.437 257641 INFO nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Using config drive#033[00m
Nov 29 03:00:20 np0005539550 systemd-machined[216673]: New machine qemu-20-instance-00000020.
Nov 29 03:00:20 np0005539550 systemd[1]: Started Virtual Machine qemu-20-instance-00000020.
Nov 29 03:00:20 np0005539550 nova_compute[257631]: 2025-11-29 08:00:20.872 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 434 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 223 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.087 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403221.0872564, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.089 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.091 257641 DEBUG nova.compute.manager [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.098 257641 INFO nova.virt.libvirt.driver [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance running successfully.#033[00m
Nov 29 03:00:21 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.102 257641 DEBUG nova.virt.libvirt.guest [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
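The 'agent not configured' pair is benign: the domain XML above carries no qemu-guest-agent channel, so the post-resize guest time sync is skipped. Nova only adds the channel when the image requests it; a sketch of opting in (image id from the log; the agent must also be installed and running inside the guest):

    import subprocess

    # hw_qemu_guest_agent=yes makes nova add the virtio-serial agent
    # channel on the next boot of instances using this image.
    subprocess.check_call([
        'openstack', 'image', 'set',
        '--property', 'hw_qemu_guest_agent=yes',
        '4873db8c-b414-4e95-acd9-77caabebe722'])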
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.103 257641 DEBUG nova.virt.libvirt.driver [None req-629a718e-d439-4c70-ab84-28c2d1caf162 e9c5a793e885447b8b387d31e35002a5 f32c5413dfce491a96f52ef642d44d10 - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.112 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.114 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.170 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.171 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403221.0875306, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.171 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.234 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.236 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:21 np0005539550 nova_compute[257631]: 2025-11-29 08:00:21.260 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:00:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:21.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:22.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 441 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 667 KiB/s rd, 2.5 MiB/s wr, 92 op/s
Nov 29 03:00:22 np0005539550 nova_compute[257631]: 2025-11-29 08:00:22.904 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:23.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:23 np0005539550 nova_compute[257631]: 2025-11-29 08:00:23.879 257641 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:23 np0005539550 nova_compute[257631]: 2025-11-29 08:00:23.879 257641 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:23 np0005539550 nova_compute[257631]: 2025-11-29 08:00:23.879 257641 DEBUG nova.network.neutron [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.037 257641 DEBUG nova.network.neutron [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:24.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.345 257641 DEBUG nova.network.neutron [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.496 257641 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:24 np0005539550 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000020.scope: Deactivated successfully.
Nov 29 03:00:24 np0005539550 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000020.scope: Consumed 4.105s CPU time.
Nov 29 03:00:24 np0005539550 systemd-machined[216673]: Machine qemu-20-instance-00000020 terminated.
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.777 257641 INFO nova.virt.libvirt.driver [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Instance destroyed successfully.#033[00m
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.777 257641 DEBUG nova.objects.instance [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.851 257641 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.852 257641 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 443 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 946 KiB/s rd, 2.6 MiB/s wr, 114 op/s
Nov 29 03:00:24 np0005539550 nova_compute[257631]: 2025-11-29 08:00:24.934 257641 DEBUG nova.objects.instance [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'migration_context' on Instance uuid 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:25 np0005539550 nova_compute[257631]: 2025-11-29 08:00:25.017 257641 DEBUG oslo_concurrency.processutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/754019942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:25 np0005539550 nova_compute[257631]: 2025-11-29 08:00:25.483 257641 DEBUG oslo_concurrency.processutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:25 np0005539550 nova_compute[257631]: 2025-11-29 08:00:25.490 257641 DEBUG nova.compute.provider_tree [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:25 np0005539550 nova_compute[257631]: 2025-11-29 08:00:25.777 257641 DEBUG nova.scheduler.client.report [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
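The inventory dict above fixes what placement will hand out on this node: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked sketch with the logged values:

    # Capacity formula placement applies to each resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1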
Nov 29 03:00:25 np0005539550 nova_compute[257631]: 2025-11-29 08:00:25.875 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:25 np0005539550 nova_compute[257631]: 2025-11-29 08:00:25.901 257641 DEBUG oslo_concurrency.lockutils [None req-3eef4695-4c06-451a-829e-8cd7b734dd19 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:26.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 443 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.6 MiB/s wr, 175 op/s
Nov 29 03:00:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:27.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 29 03:00:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 29 03:00:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 29 03:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:27 np0005539550 nova_compute[257631]: 2025-11-29 08:00:27.907 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:28 np0005539550 podman[285312]: 2025-11-29 08:00:28.311968993 +0000 UTC m=+0.052381939 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:00:28 np0005539550 podman[285311]: 2025-11-29 08:00:28.348022993 +0000 UTC m=+0.089259220 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
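The two podman records above are periodic healthcheck runs: podman executes the mounted /openstack/healthcheck test inside each container and journals the resulting health_status. A sketch of reading the same status back (container name from the log; older podman versions expose the key as 'Healthcheck' rather than 'Health'):

    import json
    import subprocess

    out = subprocess.check_output(['podman', 'inspect', 'ovn_metadata_agent'])
    state = json.loads(out)[0]['State']
    health = state.get('Health') or state.get('Healthcheck')  # version-dependent key
    print(health['Status'])  # 'healthy', matching health_status above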
Nov 29 03:00:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 388 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 839 KiB/s wr, 184 op/s
Nov 29 03:00:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:29.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:00:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:30.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
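
The anonymous "HEAD / HTTP/1.0" requests arriving every couple of seconds from 192.168.122.100 and 192.168.122.102 look like load-balancer liveness probes against the RGW beast frontend. A sketch of an equivalent probe; the target host and port are assumptions, since the journal records only the client addresses and the 200 responses:

    # Hypothetical probe equivalent to the "HEAD / HTTP/1.0" entries above.
    # Host and port are guesses; the log shows neither.
    import http.client

    conn = http.client.HTTPConnection("np0005539550", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the log shows 200 for every probe
    conn.close()
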
Nov 29 03:00:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:30 np0005539550 nova_compute[257631]: 2025-11-29 08:00:30.876 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 364 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 100 KiB/s wr, 158 op/s
Nov 29 03:00:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:31.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:32.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 364 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.4 KiB/s wr, 149 op/s
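
Each ceph-mgr pgmap line packs the PG-state counts, pool usage, and client throughput into a single string. A small log-scraping sketch (a regex over the text, not a ceph API) that splits one of the entries above into its fields:

    # Log-scraping sketch for the pgmap DBG lines above.
    import re

    line = ("pgmap v1601: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; "
            "364 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; "
            "2.4 MiB/s rd, 5.4 KiB/s wr, 149 op/s")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (.+?); (.+?); (.+)", line)
    version, pg_total, states, usage, rates = m.groups()
    state_counts = dict(s.split(" ", 1)[::-1] for s in states.split(", "))
    print(pg_total, state_counts)
    # 305 {'active+clean+snaptrim': '1', 'active+clean': '304'}
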
Nov 29 03:00:32 np0005539550 nova_compute[257631]: 2025-11-29 08:00:32.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.332 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.333 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.362 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.478 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.479 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.490 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.491 257641 INFO nova.compute.claims [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Claim successful on node compute-0.ctlplane.example.com#033[00m
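
The Acquiring/acquired pairs above (and the matching "released" lines later) are the standard oslo.concurrency pattern: the whole build is serialized on a per-instance UUID lock, and the resource-tracker claim then runs under the shorter "compute_resources" lock. A minimal sketch of those primitives, not nova's actual code:

    # Minimal sketch of the oslo.concurrency locking seen in the lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("1c4b0086-da42-4b01-8bd6-d9f6450bda8e")
    def _locked_do_build():
        ...  # everything logged between "acquired" and "released" runs here

    with lockutils.lock("compute_resources"):
        ...  # the instance_claim work happens under this lock
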
Nov 29 03:00:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:33.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:33 np0005539550 nova_compute[257631]: 2025-11-29 08:00:33.680 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996496029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.141 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
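
For RBD-backed storage, nova shells out to the ceph CLI at this point; the 0.461 s round trip above is that subprocess. A plain-subprocess sketch of the same call (oslo's processutils.execute is a thin wrapper over this), assuming the usual `ceph df --format=json` top-level keys:

    # Sketch of the `ceph df` call logged above, without the oslo wrapper.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw)
    # "stats" holds the cluster-wide totals nova uses for disk accounting.
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
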
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.150 257641 DEBUG nova.compute.provider_tree [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:34.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.271 257641 DEBUG nova.scheduler.client.report [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
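
The inventory in the line above is what placement turns into schedulable capacity, effectively capacity = (total - reserved) * allocation_ratio per resource class. Worked through for this host:

    # Capacity implied by the inventory data logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"]))
    # VCPU 32, MEMORY_MB 7168, DISK_GB 17
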
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.329 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.330 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.552 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.553 257641 DEBUG nova.network.neutron [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.573 257641 INFO nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.615 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.798 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.799 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.799 257641 INFO nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Creating image(s)#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.827 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.859 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.882 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.886 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 364 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 140 op/s
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.942 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
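
The prlimit wrapper in the command above caps the image probe at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a malformed image cannot wedge the compute service. Stripped of that guard, the probe itself is just:

    # The qemu-img probe from the lines above, minus the prlimit guard.
    import json
    import subprocess

    info = json.loads(subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(info["format"], info["virtual-size"])  # image format, size in bytes
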
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.943 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.944 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.944 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.970 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:34 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.974 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:35 np0005539550 nova_compute[257631]: 2025-11-29 08:00:34.999 257641 DEBUG nova.policy [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b956671bad8a4a0b99469a9d0258a2bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '86064cd197e14fa8a17d2a0d9547af3e', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:00:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:35.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:35 np0005539550 nova_compute[257631]: 2025-11-29 08:00:35.878 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:35 np0005539550 nova_compute[257631]: 2025-11-29 08:00:35.985 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.064 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] resizing rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
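
The `rbd import` above pushes the cached base image into the vms pool, and the resize target is simply the flavor's 1 GiB root disk expressed in bytes:

    # Where 1073741824 in the resize line comes from (flavor m1.nano, root_gb=1).
    root_gb = 1
    print(root_gb * 1024**3)  # 1073741824
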
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.201 257641 DEBUG nova.objects.instance [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lazy-loading 'migration_context' on Instance uuid 1c4b0086-da42-4b01-8bd6-d9f6450bda8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:36.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.399 257641 DEBUG nova.network.neutron [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Successfully created port: 6346f0b1-5293-423e-99b7-d7688895c848 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.406 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.406 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Ensure instance console log exists: /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.407 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.407 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:36 np0005539550 nova_compute[257631]: 2025-11-29 08:00:36.407 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 326 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 KiB/s wr, 139 op/s
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.470 257641 DEBUG nova.network.neutron [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Successfully updated port: 6346f0b1-5293-423e-99b7-d7688895c848 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.489 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.489 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquired lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.489 257641 DEBUG nova.network.neutron [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.573 257641 DEBUG nova.compute.manager [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-changed-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.573 257641 DEBUG nova.compute.manager [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing instance network info cache due to event network-changed-6346f0b1-5293-423e-99b7-d7688895c848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.574 257641 DEBUG oslo_concurrency.lockutils [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.664 257641 DEBUG nova.network.neutron [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:00:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:37.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:00:37 np0005539550 nova_compute[257631]: 2025-11-29 08:00:37.922 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.024 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "6e814e3b-3edb-4f37-8701-c37929994645" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.025 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.026 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "6e814e3b-3edb-4f37-8701-c37929994645-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.026 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.026 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.027 257641 INFO nova.compute.manager [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Terminating instance#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.028 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.028 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquired lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.029 257641 DEBUG nova.network.neutron [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:00:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.437 257641 DEBUG nova.network.neutron [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.754 257641 DEBUG nova.network.neutron [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.785 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Releasing lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.786 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Instance network_info: |[{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
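
The network_info blob repeated above is a plain JSON-serializable structure. A trimmed sketch of pulling the addressing fields the rest of the build needs out of the first VIF, using only keys that appear in the logged data:

    # Trimmed copy of the VIF from the network_info above.
    vif = {
        "id": "6346f0b1-5293-423e-99b7-d7688895c848",
        "address": "fa:16:3e:ab:68:4e",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.11"}]}],
                    "meta": {"mtu": 1442}},
        "devname": "tap6346f0b1-52",
    }
    print(vif["address"],                                     # guest MAC
          vif["network"]["subnets"][0]["ips"][0]["address"],  # fixed IP
          vif["network"]["meta"]["mtu"])                      # tunneled-net MTU
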
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.786 257641 DEBUG oslo_concurrency.lockutils [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.787 257641 DEBUG nova.network.neutron [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.790 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Start _get_guest_xml network_info=[{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.794 257641 WARNING nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.799 257641 DEBUG nova.virt.libvirt.host [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.800 257641 DEBUG nova.virt.libvirt.host [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.803 257641 DEBUG nova.virt.libvirt.host [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.804 257641 DEBUG nova.virt.libvirt.host [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.805 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.805 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.806 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.806 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.806 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.806 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.807 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.807 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.807 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.807 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.807 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.808 257641 DEBUG nova.virt.hardware [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
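
The topology lines above enumerate every (sockets, cores, threads) factorization of the vCPU count under the 65536 limits; with 1 vCPU the only candidate is 1:1:1. A rough sketch of that search (nova's real version lives in nova/virt/hardware.py and handles preferences and NUMA):

    # Rough sketch of the topology enumeration logged above.
    def possible_topologies(vcpus, limit=65536):
        for sockets in range(1, min(vcpus, limit) + 1):
            for cores in range(1, min(vcpus, limit) + 1):
                for threads in range(1, min(vcpus, limit) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
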
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.810 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:38 np0005539550 ceph-osd[84753]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.833 257641 DEBUG nova.network.neutron [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.858 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Releasing lock "refresh_cache-6e814e3b-3edb-4f37-8701-c37929994645" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:38 np0005539550 nova_compute[257631]: 2025-11-29 08:00:38.858 257641 DEBUG nova.compute.manager [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:00:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 311 MiB data, 532 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1009 KiB/s wr, 148 op/s
Nov 29 03:00:39 np0005539550 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Nov 29 03:00:39 np0005539550 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000001f.scope: Consumed 16.983s CPU time.
Nov 29 03:00:39 np0005539550 systemd-machined[216673]: Machine qemu-18-instance-0000001f terminated.
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.077 257641 INFO nova.virt.libvirt.driver [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance destroyed successfully.#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.077 257641 DEBUG nova.objects.instance [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lazy-loading 'resources' on Instance uuid 6e814e3b-3edb-4f37-8701-c37929994645 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313020998' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.251 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
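
`ceph mon dump` is run here (and again a few lines below, once per RBD-backed disk nova assembles) so the monitor addresses can be embedded in the guest's RBD disk definition. A sketch of the call and the fields of interest, assuming the standard mon dump JSON layout ("mons" list with "name" and "public_addr"):

    # Sketch of the `ceph mon dump` call logged above.
    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print([(m["name"], m["public_addr"]) for m in dump["mons"]])
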
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.280 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.284 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.541 257641 INFO nova.virt.libvirt.driver [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Deleting instance files /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645_del#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.542 257641 INFO nova.virt.libvirt.driver [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Deletion of /var/lib/nova/instances/6e814e3b-3edb-4f37-8701-c37929994645_del complete#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.591 257641 INFO nova.compute.manager [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.591 257641 DEBUG oslo.service.loopingcall [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.592 257641 DEBUG nova.compute.manager [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.592 257641 DEBUG nova.network.neutron [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:00:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:39.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:00:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536156323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.746 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.747 257641 DEBUG nova.virt.libvirt.vif [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-2014344953',display_name='tempest-FloatingIPsAssociationTestJSON-server-2014344953',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-2014344953',id=34,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86064cd197e14fa8a17d2a0d9547af3e',ramdisk_id='',reservation_id='r-eef128j9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-304325142',owner_user_name='tempest-FloatingIPsAssociationTestJSON-304325142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:34Z,user_data=None,user_id='b956671bad8a4a0b99469a9d0258a2bc',uuid=1c4b0086-da42-4b01-8bd6-d9f6450bda8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.748 257641 DEBUG nova.network.os_vif_util [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converting VIF {"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.748 257641 DEBUG nova.network.os_vif_util [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:68:4e,bridge_name='br-int',has_traffic_filtering=True,id=6346f0b1-5293-423e-99b7-d7688895c848,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6346f0b1-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
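nova_to_osvif_vif turns the JSON network_info blob into the typed VIFOpenVSwitch object printed above. A hedged sketch of building that object by hand with the os-vif object model (field values copied from the log; nova's converter also attaches the Network and port profile objects, omitted here):

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the registered os-vif plugins, e.g. 'ovs'

    vif = vif_obj.VIFOpenVSwitch(
        id='6346f0b1-5293-423e-99b7-d7688895c848',
        address='fa:16:3e:ab:68:4e',
        bridge_name='br-int',
        vif_name='tap6346f0b1-52',
        has_traffic_filtering=True,
        preserve_on_delete=False)
    # os_vif.plug(vif, instance_info) then routes it to the 'ovs' plugin,
    # which is the "Plugging vif" step that appears further below.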
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.750 257641 DEBUG nova.objects.instance [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c4b0086-da42-4b01-8bd6-d9f6450bda8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.775 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403224.7738228, 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.775 257641 INFO nova.compute.manager [-] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.778 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <uuid>1c4b0086-da42-4b01-8bd6-d9f6450bda8e</uuid>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <name>instance-00000022</name>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-2014344953</nova:name>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:00:38</nova:creationTime>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:user uuid="b956671bad8a4a0b99469a9d0258a2bc">tempest-FloatingIPsAssociationTestJSON-304325142-project-member</nova:user>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:project uuid="86064cd197e14fa8a17d2a0d9547af3e">tempest-FloatingIPsAssociationTestJSON-304325142</nova:project>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <nova:port uuid="6346f0b1-5293-423e-99b7-d7688895c848">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <entry name="serial">1c4b0086-da42-4b01-8bd6-d9f6450bda8e</entry>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <entry name="uuid">1c4b0086-da42-4b01-8bd6-d9f6450bda8e</entry>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk.config">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:ab:68:4e"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <target dev="tap6346f0b1-52"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/console.log" append="off"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:00:39 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:00:39 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:00:39 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:00:39 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
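The domain XML dumped above is what the driver hands to libvirt. For offline sanity checks of such a dump the standard library suffices; a sketch that pulls the RBD source and monitor endpoints out of the first network disk (xml_text is assumed to hold the <domain> document logged above):

    import xml.etree.ElementTree as ET

    dom = ET.fromstring(xml_text)
    disk = dom.find("./devices/disk[@type='network']")
    src = disk.find('source')
    print(src.get('protocol'), src.get('name'))    # rbd vms/1c4b0086-..._disk
    for host in src.findall('host'):
        print(host.get('name'), host.get('port'))  # the three mon addresses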
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.778 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Preparing to wait for external event network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.779 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.779 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.779 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
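The Acquiring/acquired/"released" trio is oslo.concurrency's instrumented locking around the per-instance event table, held here for the registration of the network-vif-plugged wait. The same primitive in isolation (sketch; the lock name mirrors the log, the body is hypothetical):

    from oslo_concurrency import lockutils

    # Serialize access to the instance's pending-event dict; exactly one
    # thread at a time may create or fetch the event object.
    with lockutils.lock('1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events'):
        pass  # _create_or_get_event() body would go here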
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.780 257641 DEBUG nova.virt.libvirt.vif [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:00:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-2014344953',display_name='tempest-FloatingIPsAssociationTestJSON-server-2014344953',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-2014344953',id=34,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86064cd197e14fa8a17d2a0d9547af3e',ramdisk_id='',reservation_id='r-eef128j9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-304325142',owner_user_name='tempest-FloatingIPsAssociationTestJSON-304325142-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:00:34Z,user_data=None,user_id='b956671bad8a4a0b99469a9d0258a2bc',uuid=1c4b0086-da42-4b01-8bd6-d9f6450bda8e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.780 257641 DEBUG nova.network.os_vif_util [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converting VIF {"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.781 257641 DEBUG nova.network.os_vif_util [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:68:4e,bridge_name='br-int',has_traffic_filtering=True,id=6346f0b1-5293-423e-99b7-d7688895c848,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6346f0b1-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.781 257641 DEBUG os_vif [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:68:4e,bridge_name='br-int',has_traffic_filtering=True,id=6346f0b1-5293-423e-99b7-d7688895c848,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6346f0b1-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.782 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.783 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.783 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.789 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6346f0b1-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.789 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6346f0b1-52, col_values=(('external_ids', {'iface-id': '6346f0b1-5293-423e-99b7-d7688895c848', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:68:4e', 'vm-uuid': '1c4b0086-da42-4b01-8bd6-d9f6450bda8e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
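AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp IDL commands batched into single OVSDB transactions; the first transaction "caused no change" because br-int already exists. A hedged sketch of issuing the equivalent transaction directly (assumes ovsdbapp and the usual local ovsdb-server socket path; not os-vif's internal wiring):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap6346f0b1-52', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap6346f0b1-52',
            ('external_ids',
             {'iface-id': '6346f0b1-5293-423e-99b7-d7688895c848',
              'attached-mac': 'fa:16:3e:ab:68:4e'})))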
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.790 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:39 np0005539550 NetworkManager[49039]: <info>  [1764403239.7919] manager: (tap6346f0b1-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.800 257641 DEBUG nova.compute.manager [None req-e5e8628f-7dc5-402e-8880-66babdf25e57 - - - - - -] [instance: 35a081d7-9e7e-47e3-a6a0-9be2f7ffb4bd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.801 257641 INFO os_vif [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:68:4e,bridge_name='br-int',has_traffic_filtering=True,id=6346f0b1-5293-423e-99b7-d7688895c848,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6346f0b1-52')#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.853 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.854 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.854 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] No VIF found with MAC fa:16:3e:ab:68:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.854 257641 INFO nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Using config drive#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.881 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
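rbd_utils probes whether the .config image already exists before importing a fresh one. The same existence check with the python-rbd bindings (sketch; assumes python3-rados/python3-rbd and the client.openstack keyring used throughout this log):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    name = '1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk.config'
    try:
        with rbd.Image(ioctx, name, read_only=True):
            print('image exists')
    except rbd.ImageNotFound:
        print('image does not exist')  # the branch logged above
    finally:
        ioctx.close()
        cluster.shutdown()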
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.931 257641 DEBUG nova.network.neutron [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.943 257641 DEBUG nova.network.neutron [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:39 np0005539550 nova_compute[257631]: 2025-11-29 08:00:39.957 257641 INFO nova.compute.manager [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Took 0.37 seconds to deallocate network for instance.#033[00m
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.012 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.012 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.087 257641 DEBUG oslo_concurrency.processutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:40.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.321 257641 INFO nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Creating config drive at /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/disk.config#033[00m
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.327 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjch6iza7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.457 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjch6iza7" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
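The mkisofs run packs nova's temporary metadata tree into an ISO9660 config drive carrying the well-known config-2 volume label. A stripped-down equivalent via the standard library (sketch; paths taken from the log, and mkisofs/genisoimage must be installed):

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', '/var/lib/nova/instances/'
               '1c4b0086-da42-4b01-8bd6-d9f6450bda8e/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l', '-quiet',
         '-J', '-r',                   # Joliet + Rock Ridge extensions
         '-V', 'config-2',             # the label cloud-init searches for
         '/tmp/tmpjch6iza7'],          # nova's staged metadata directory
        check=True)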
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.495 257641 DEBUG nova.storage.rbd_utils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] rbd image 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.498 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/disk.config 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:00:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:00:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038687018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.534 257641 DEBUG oslo_concurrency.processutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.540 257641 DEBUG nova.compute.provider_tree [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.824 257641 DEBUG nova.scheduler.client.report [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
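Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio. Worked out for the inventory logged above (plain arithmetic, no assumptions beyond the dict itself):

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9}}

    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1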
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.881 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 294 MiB data, 521 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Nov 29 03:00:40 np0005539550 nova_compute[257631]: 2025-11-29 08:00:40.979 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.041 257641 DEBUG nova.network.neutron [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updated VIF entry in instance network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.042 257641 DEBUG nova.network.neutron [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.067 257641 DEBUG oslo_concurrency.lockutils [req-3a38f22f-ec1f-428d-b998-b0b94b7b31c4 req-b0009010-18fc-4294-a9c8-39d61eca9b7e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.140 257641 INFO nova.scheduler.client.report [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Deleted allocations for instance 6e814e3b-3edb-4f37-8701-c37929994645#033[00m
Nov 29 03:00:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.258 257641 DEBUG oslo_concurrency.lockutils [None req-a6f42a1f-c854-4db9-a8f2-3e9c57718eb5 51ae07f600c545c0b4c7fae00657ea40 6717732f9fa242b181f58881b03d246f - - default default] Lock "6e814e3b-3edb-4f37-8701-c37929994645" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:41 np0005539550 podman[285763]: 2025-11-29 08:00:41.379472145 +0000 UTC m=+0.105119137 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:00:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.529 257641 DEBUG oslo_concurrency.processutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/disk.config 1c4b0086-da42-4b01-8bd6-d9f6450bda8e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.530 257641 INFO nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Deleting local config drive /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:00:41 np0005539550 kernel: tap6346f0b1-52: entered promiscuous mode
Nov 29 03:00:41 np0005539550 systemd-udevd[285618]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:00:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:00:41Z|00157|binding|INFO|Claiming lport 6346f0b1-5293-423e-99b7-d7688895c848 for this chassis.
Nov 29 03:00:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:00:41Z|00158|binding|INFO|6346f0b1-5293-423e-99b7-d7688895c848: Claiming fa:16:3e:ab:68:4e 10.100.0.11
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.579 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 NetworkManager[49039]: <info>  [1764403241.5814] manager: (tap6346f0b1-52): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.584 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 NetworkManager[49039]: <info>  [1764403241.5925] device (tap6346f0b1-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:00:41 np0005539550 NetworkManager[49039]: <info>  [1764403241.5949] device (tap6346f0b1-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.594 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:68:4e 10.100.0.11'], port_security=['fa:16:3e:ab:68:4e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1c4b0086-da42-4b01-8bd6-d9f6450bda8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86064cd197e14fa8a17d2a0d9547af3e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1ea0603-2fd5-45b7-93c4-f54e4ea109a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2df18a4d-96ab-4a0e-9a1e-536ced01a074, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=6346f0b1-5293-423e-99b7-d7688895c848) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.595 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 6346f0b1-5293-423e-99b7-d7688895c848 in datapath 4537c1f2-a597-4d7c-a9c3-2ad7483510a6 bound to our chassis#033[00m
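The "Matched UPDATE" line is the agent's IDL event machinery firing because the Port_Binding row's chassis column changed (old=Port_Binding(chassis=[])). A bare-bones watcher of the same shape (hedged sketch; the class body is illustrative, not neutron's implementation, and it must still be registered with a connected Southbound IDL to fire):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row is (re)claimed by a chassis."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # 'old' carries only the changed columns; a 'chassis'
            # attribute means the binding moved, as in the line above.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)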
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.597 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4537c1f2-a597-4d7c-a9c3-2ad7483510a6#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.609 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[77f4633c-36e7-49a0-8db5-f396cfd4e9af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.610 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4537c1f2-a1 in ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
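provision_datapath wires a veth pair with one end inside the ovnmeta- namespace so the metadata proxy can listen on the tenant network. Roughly the underlying plumbing, with pyroute2 (hedged sketch of the privileged helper's effect, not neutron's code; requires root):

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6'
    netns.create(NS)  # raises OSError if the namespace already exists

    ip = IPRoute()
    # tap...-a0 stays in the root namespace (it gets plugged into br-int);
    # tap...-a1 is moved into the metadata namespace at creation time.
    ip.link('add', ifname='tap4537c1f2-a0', kind='veth',
            peer={'ifname': 'tap4537c1f2-a1', 'net_ns_fd': NS})
    ip.link('set', index=ip.link_lookup(ifname='tap4537c1f2-a0')[0],
            state='up')
    ip.close()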
Nov 29 03:00:41 np0005539550 systemd-machined[216673]: New machine qemu-21-instance-00000022.
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.612 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4537c1f2-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.612 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6452ac70-a68e-4282-ac38-a1276b5492e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.613 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b8ca47-11d7-4c44-ac68-4bcf0ce5e6e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.624 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[d40b9a9a-579e-4151-bb50-7746ee08eefd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 systemd[1]: Started Virtual Machine qemu-21-instance-00000022.
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.650 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0e7f56-8338-4a41-8691-672adce72add]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.656 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:00:41Z|00159|binding|INFO|Setting lport 6346f0b1-5293-423e-99b7-d7688895c848 ovn-installed in OVS
Nov 29 03:00:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:00:41Z|00160|binding|INFO|Setting lport 6346f0b1-5293-423e-99b7-d7688895c848 up in Southbound
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.660 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:41.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.679 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[006ec96a-5566-43f7-89dc-14be4f07497d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.684 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a531e02e-6107-4052-bdda-74fc04d9ee6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 NetworkManager[49039]: <info>  [1764403241.6862] manager: (tap4537c1f2-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.714 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1adb8960-decc-4314-a86d-f28631568829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.717 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[62d2c3e5-9e61-44a2-86c6-8ce1c131e663]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 NetworkManager[49039]: <info>  [1764403241.7351] device (tap4537c1f2-a0): carrier: link connected
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.740 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[55c1df0a-644a-4f43-b50f-69f389bd874e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.756 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7b85d802-a262-4c6c-af4d-8f015b8052f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4537c1f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628939, 'reachable_time': 43602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285839, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.771 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9a476b-6bd3-49a6-9bf8-1ca843a47c7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:9a78'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628939, 'tstamp': 628939}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285840, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.784 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc6aafa-6eba-4802-98c7-b178c47a7c4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4537c1f2-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:9a:78'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628939, 'reachable_time': 43602, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285841, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
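The RTM_NEWLINK and RTM_NEWADDR replies above are pyroute2 netlink dumps that the privsep daemon executes as root and marshals back to the unprivileged agent. A minimal sketch of the same query, assuming pyroute2 is installed and the ovnmeta namespace and tap device from the log still exist:

    import socket
    from pyroute2 import NetNS

    NS = 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6'   # namespace named in the replies

    with NetNS(NS) as ipr:
        idx = ipr.link_lookup(ifname='tap4537c1f2-a1')[0]
        link = ipr.get_links(idx)[0]                  # an RTM_NEWLINK message like the dump above
        print(link.get_attr('IFLA_OPERSTATE'),        # 'UP'
              link.get_attr('IFLA_ADDRESS'))          # 'fa:16:3e:3e:9a:78'
        for addr in ipr.get_addr(index=idx, family=socket.AF_INET6):
            print(addr.get_attr('IFA_ADDRESS'))       # link-local fe80::f816:3eff:fe3e:9a78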
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.810 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[43a0b3e2-fdf2-402a-8e1a-8f6a72ed9ecd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.860 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e5307019-7e4c-4222-b4ef-2eb29127914a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.861 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4537c1f2-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.861 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.862 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4537c1f2-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:41 np0005539550 NetworkManager[49039]: <info>  [1764403241.8643] manager: (tap4537c1f2-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 03:00:41 np0005539550 kernel: tap4537c1f2-a0: entered promiscuous mode
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.864 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.866 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4537c1f2-a0, col_values=(('external_ids', {'iface-id': 'a70c8006-642e-4a6b-bad1-4f848cd680db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
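The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) re-plug the metadata tap from br-ex into br-int and stamp it with the Neutron port ID so ovn-controller can bind it. A sketch of the equivalent calls against the local OVSDB, assuming the standard socket path; the command names match the log, though the agent runs one transaction per command rather than batching:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap4537c1f2-a0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap4537c1f2-a0', may_exist=True))
        txn.add(api.db_set('Interface', 'tap4537c1f2-a0',
                           ('external_ids',
                            {'iface-id': 'a70c8006-642e-4a6b-bad1-4f848cd680db'})))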
Nov 29 03:00:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:00:41Z|00161|binding|INFO|Releasing lport a70c8006-642e-4a6b-bad1-4f848cd680db from this chassis (sb_readonly=0)
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.867 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.883 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.884 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
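The get_value_from_file miss above is expected on a first spawn: the haproxy pidfile only appears once the proxy has been started, and the agent treats a missing file as "no running proxy" rather than an error. A minimal sketch of that defensive read, assuming the pidfile path from the log (this approximates neutron.agent.linux.utils.get_value_from_file, not a verbatim copy):

    def get_value_from_file(path, converter=int):
        # Missing or unparsable pidfile -> None, i.e. "proxy not running yet".
        try:
            with open(path) as f:
                return converter(f.read())
        except (OSError, ValueError):
            return None

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.pid.haproxy')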
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.885 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[806d5b02-81a7-4714-bd15-e077824fd40c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.885 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-4537c1f2-a597-4d7c-a9c3-2ad7483510a6
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.pid.haproxy
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 4537c1f2-a597-4d7c-a9c3-2ad7483510a6
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:00:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:41.886 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'env', 'PROCESS_TAG=haproxy-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
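create_process above launches haproxy inside the ovnmeta namespace through neutron-rootwrap, feeding it the config just rendered. A sketch of the same invocation with subprocess, minus the rootwrap hop, assuming root privileges and the namespace and config paths from the log:

    import subprocess

    cmd = ['ip', 'netns', 'exec', 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6',
           'env', 'PROCESS_TAG=haproxy-4537c1f2-a597-4d7c-a9c3-2ad7483510a6',
           'haproxy', '-f',
           '/var/lib/neutron/ovn-metadata-proxy/4537c1f2-a597-4d7c-a9c3-2ad7483510a6.conf']
    subprocess.run(cmd, check=True)   # haproxy backgrounds itself ('daemon' in the cfg above)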
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.983 257641 DEBUG nova.compute.manager [req-be56698f-d9b3-4e34-83aa-43a35752c8ed req-0c2fba3b-2845-4ee4-9246-15e46ca3ffc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.984 257641 DEBUG oslo_concurrency.lockutils [req-be56698f-d9b3-4e34-83aa-43a35752c8ed req-0c2fba3b-2845-4ee4-9246-15e46ca3ffc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.984 257641 DEBUG oslo_concurrency.lockutils [req-be56698f-d9b3-4e34-83aa-43a35752c8ed req-0c2fba3b-2845-4ee4-9246-15e46ca3ffc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.984 257641 DEBUG oslo_concurrency.lockutils [req-be56698f-d9b3-4e34-83aa-43a35752c8ed req-0c2fba3b-2845-4ee4-9246-15e46ca3ffc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:41 np0005539550 nova_compute[257631]: 2025-11-29 08:00:41.984 257641 DEBUG nova.compute.manager [req-be56698f-d9b3-4e34-83aa-43a35752c8ed req-0c2fba3b-2845-4ee4-9246-15e46ca3ffc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Processing event network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:00:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:42.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
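The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 are external load-balancer liveness probes against the radosgw beast frontend; each prober repeats roughly every two seconds for the rest of this capture. A sketch of an equivalent probe (the endpoint host and port are assumptions; the log does not record them):

    import requests

    r = requests.head('http://localhost:8080/', timeout=2)   # RGW frontend, host/port assumed
    print(r.status_code)   # 200, as logged above, means the backend is considered healthy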
Nov 29 03:00:42 np0005539550 podman[285873]: 2025-11-29 08:00:42.24420637 +0000 UTC m=+0.050974093 container create 89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:00:42 np0005539550 systemd[1]: Started libpod-conmon-89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6.scope.
Nov 29 03:00:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:00:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63963fa8bfc24f339216b7e7a3c5ca93eb708b3759275f80cda4903cbeb6de32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:42 np0005539550 podman[285873]: 2025-11-29 08:00:42.215012054 +0000 UTC m=+0.021779807 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:00:42 np0005539550 podman[285873]: 2025-11-29 08:00:42.319474833 +0000 UTC m=+0.126242546 container init 89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:00:42 np0005539550 podman[285873]: 2025-11-29 08:00:42.327757585 +0000 UTC m=+0.134525308 container start 89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:00:42 np0005539550 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[285888]: [NOTICE]   (285893) : New worker (285895) forked
Nov 29 03:00:42 np0005539550 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[285888]: [NOTICE]   (285893) : Loading success.
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.853 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.855 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403242.8533332, 1c4b0086-da42-4b01-8bd6-d9f6450bda8e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.855 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.860 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.864 257641 INFO nova.virt.libvirt.driver [-] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Instance spawned successfully.#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.864 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.878 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.884 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.888 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.889 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.889 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.890 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.890 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.891 257641 DEBUG nova.virt.libvirt.driver [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:00:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 276 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.2 MiB/s wr, 167 op/s
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.918 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.919 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403242.8535388, 1c4b0086-da42-4b01-8bd6-d9f6450bda8e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.919 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.943 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.946 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403242.8589985, 1c4b0086-da42-4b01-8bd6-d9f6450bda8e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.947 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.956 257641 INFO nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Took 8.16 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.957 257641 DEBUG nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.964 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.967 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:00:42 np0005539550 nova_compute[257631]: 2025-11-29 08:00:42.998 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:00:43 np0005539550 nova_compute[257631]: 2025-11-29 08:00:43.027 257641 INFO nova.compute.manager [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Took 9.61 seconds to build instance.#033[00m
Nov 29 03:00:43 np0005539550 nova_compute[257631]: 2025-11-29 08:00:43.044 257641 DEBUG oslo_concurrency.lockutils [None req-2d69779e-818b-482c-9873-72bd734a97ba b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
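At this point the build is complete: spawn on the hypervisor took 8.16 s, the full build 9.61 s, and the instance lock was held for 9.712 s. A sketch of confirming the resulting state from the API side with openstacksdk, assuming a clouds.yaml entry (the cloud name here is hypothetical):

    import openstack

    conn = openstack.connect(cloud='overcloud')   # hypothetical clouds.yaml entry
    server = conn.compute.get_server('1c4b0086-da42-4b01-8bd6-d9f6450bda8e')
    print(server.status, server.task_state)       # expect ACTIVE, None after this log point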
Nov 29 03:00:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:43.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539550 nova_compute[257631]: 2025-11-29 08:00:44.059 257641 DEBUG nova.compute.manager [req-b35fab7b-6152-4494-ab5b-104b3a1b0160 req-453cd430-eee6-4750-a404-db71db61ec38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:00:44 np0005539550 nova_compute[257631]: 2025-11-29 08:00:44.059 257641 DEBUG oslo_concurrency.lockutils [req-b35fab7b-6152-4494-ab5b-104b3a1b0160 req-453cd430-eee6-4750-a404-db71db61ec38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:00:44 np0005539550 nova_compute[257631]: 2025-11-29 08:00:44.060 257641 DEBUG oslo_concurrency.lockutils [req-b35fab7b-6152-4494-ab5b-104b3a1b0160 req-453cd430-eee6-4750-a404-db71db61ec38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:00:44 np0005539550 nova_compute[257631]: 2025-11-29 08:00:44.060 257641 DEBUG oslo_concurrency.lockutils [req-b35fab7b-6152-4494-ab5b-104b3a1b0160 req-453cd430-eee6-4750-a404-db71db61ec38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:00:44 np0005539550 nova_compute[257631]: 2025-11-29 08:00:44.060 257641 DEBUG nova.compute.manager [req-b35fab7b-6152-4494-ab5b-104b3a1b0160 req-453cd430-eee6-4750-a404-db71db61ec38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] No waiting events found dispatching network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:00:44 np0005539550 nova_compute[257631]: 2025-11-29 08:00:44.060 257641 WARNING nova.compute.manager [req-b35fab7b-6152-4494-ab5b-104b3a1b0160 req-453cd430-eee6-4750-a404-db71db61ec38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received unexpected event network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:00:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:44.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:44 np0005539550 nova_compute[257631]: 2025-11-29 08:00:44.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 250 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 173 op/s
Nov 29 03:00:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:45.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:45 np0005539550 nova_compute[257631]: 2025-11-29 08:00:45.883 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:46.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 250 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 162 op/s
Nov 29 03:00:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:47.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:48.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 250 MiB data, 496 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.1 MiB/s wr, 165 op/s
Nov 29 03:00:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:49.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:49 np0005539550 nova_compute[257631]: 2025-11-29 08:00:49.886 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:50.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:50 np0005539550 nova_compute[257631]: 2025-11-29 08:00:50.886 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 212 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 120 op/s
Nov 29 03:00:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:51.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:52.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 200 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 107 op/s
Nov 29 03:00:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:54 np0005539550 nova_compute[257631]: 2025-11-29 08:00:54.076 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403239.0749247, 6e814e3b-3edb-4f37-8701-c37929994645 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:00:54 np0005539550 nova_compute[257631]: 2025-11-29 08:00:54.077 257641 INFO nova.compute.manager [-] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:00:54 np0005539550 nova_compute[257631]: 2025-11-29 08:00:54.108 257641 DEBUG nova.compute.manager [None req-370409e7-8c9b-4832-8e19-9a4ee4963bda - - - - - -] [instance: 6e814e3b-3edb-4f37-8701-c37929994645] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:00:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:54 np0005539550 nova_compute[257631]: 2025-11-29 08:00:54.890 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 169 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 99 op/s
Nov 29 03:00:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:00:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:00:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:55.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:00:55 np0005539550 nova_compute[257631]: 2025-11-29 08:00:55.889 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:00:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:00:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:00:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:00:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:00:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:00:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:56.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:00:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:56.487 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:00:56 np0005539550 nova_compute[257631]: 2025-11-29 08:00:56.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:00:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:56.489 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:00:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 215 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 136 op/s
Nov 29 03:00:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:57.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:00:57 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8686f22b-4f8c-4721-9ab6-a68a2db9d523 does not exist
Nov 29 03:00:57 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0f954a0a-ba5f-4b0b-83ce-1d7f7854ec5d does not exist
Nov 29 03:00:57 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a175614a-82ba-40e9-af83-93494bfa70fa does not exist
Nov 29 03:00:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:00:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:00:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:00:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:00:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:00:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
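The mon_command dispatches above ("config generate-minimal-conf", "auth get", "osd tree") are periodic reconciliation traffic from mgr.compute-0.pdhsqi. A sketch of issuing one of them directly with the python-rados binding, assuming a readable ceph.conf and admin keyring on the node:

    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        cmd = json.dumps({'prefix': 'config generate-minimal-conf'})
        ret, out, errs = cluster.mon_command(cmd, b'')
        print(ret, out.decode())   # 0 plus a minimal [global] section on success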
Nov 29 03:00:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:00:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:00:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:00:58.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:00:58.491 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:00:58 np0005539550 podman[286278]: 2025-11-29 08:00:58.522895356 +0000 UTC m=+0.043561674 container create 6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:00:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:00:58 np0005539550 systemd[1]: Started libpod-conmon-6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d.scope.
Nov 29 03:00:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:00:58 np0005539550 podman[286278]: 2025-11-29 08:00:58.504445184 +0000 UTC m=+0.025111522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:58 np0005539550 podman[286278]: 2025-11-29 08:00:58.640332646 +0000 UTC m=+0.160998984 container init 6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:00:58 np0005539550 podman[286295]: 2025-11-29 08:00:58.63968206 +0000 UTC m=+0.075847209 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:00:58 np0005539550 podman[286278]: 2025-11-29 08:00:58.649346037 +0000 UTC m=+0.170012355 container start 6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:00:58 np0005539550 podman[286292]: 2025-11-29 08:00:58.64946269 +0000 UTC m=+0.084144981 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
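The health_status=healthy entries above are podman's periodic healthcheck results for ovn_metadata_agent and multipathd; per the config_data shown, each check runs the /openstack/healthcheck script mounted into the container. The same probe can be driven by hand; a sketch, assuming podman and the container name from the log:

    import subprocess

    res = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                         capture_output=True, text=True)
    print('healthy' if res.returncode == 0 else 'unhealthy')   # matches health_status above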
Nov 29 03:00:58 np0005539550 nifty_allen[286296]: 167 167
Nov 29 03:00:58 np0005539550 systemd[1]: libpod-6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d.scope: Deactivated successfully.
Nov 29 03:00:58 np0005539550 podman[286278]: 2025-11-29 08:00:58.657776492 +0000 UTC m=+0.178442830 container attach 6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:00:58 np0005539550 podman[286278]: 2025-11-29 08:00:58.658308066 +0000 UTC m=+0.178974384 container died 6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:00:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f6d29ace681339aaa4ab6bf50897298dd10f0ea04bf643ab684b215b7119c8b7-merged.mount: Deactivated successfully.
Nov 29 03:00:58 np0005539550 podman[286278]: 2025-11-29 08:00:58.728617422 +0000 UTC m=+0.249283740 container remove 6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:00:58 np0005539550 systemd[1]: libpod-conmon-6a6301ddb9dbe24657e8d638bc12eff26d2eba42f0a4656254fff6caf699994d.scope: Deactivated successfully.
Nov 29 03:00:58 np0005539550 podman[286356]: 2025-11-29 08:00:58.901239673 +0000 UTC m=+0.042630911 container create fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:00:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 273 MiB data, 497 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.7 MiB/s wr, 111 op/s
Nov 29 03:00:58 np0005539550 systemd[1]: Started libpod-conmon-fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2.scope.
Nov 29 03:00:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:00:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d5e1311244c417a5ee176313733de7ceacd6b0ebb5b4f2a19731735f5bc665/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d5e1311244c417a5ee176313733de7ceacd6b0ebb5b4f2a19731735f5bc665/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d5e1311244c417a5ee176313733de7ceacd6b0ebb5b4f2a19731735f5bc665/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d5e1311244c417a5ee176313733de7ceacd6b0ebb5b4f2a19731735f5bc665/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d5e1311244c417a5ee176313733de7ceacd6b0ebb5b4f2a19731735f5bc665/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:00:58 np0005539550 podman[286356]: 2025-11-29 08:00:58.882202026 +0000 UTC m=+0.023593294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:00:58 np0005539550 podman[286356]: 2025-11-29 08:00:58.98174221 +0000 UTC m=+0.123133528 container init fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:00:58 np0005539550 podman[286356]: 2025-11-29 08:00:58.988680687 +0000 UTC m=+0.130071935 container start fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:00:58 np0005539550 podman[286356]: 2025-11-29 08:00:58.992107044 +0000 UTC m=+0.133498332 container attach fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:00:59
Nov 29 03:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Nov 29 03:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:00:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:00:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:00:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:00:59.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:00:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:00:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:00:59 np0005539550 stoic_rosalind[286372]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:00:59 np0005539550 stoic_rosalind[286372]: --> relative data size: 1.0
Nov 29 03:00:59 np0005539550 stoic_rosalind[286372]: --> All data devices are unavailable
Nov 29 03:00:59 np0005539550 systemd[1]: libpod-fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2.scope: Deactivated successfully.
Nov 29 03:00:59 np0005539550 nova_compute[257631]: 2025-11-29 08:00:59.894 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:00:59 np0005539550 podman[286388]: 2025-11-29 08:00:59.934183075 +0000 UTC m=+0.026216561 container died fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:01:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:00.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-62d5e1311244c417a5ee176313733de7ceacd6b0ebb5b4f2a19731735f5bc665-merged.mount: Deactivated successfully.
Nov 29 03:01:00 np0005539550 podman[286388]: 2025-11-29 08:01:00.398798377 +0000 UTC m=+0.490831843 container remove fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:01:00 np0005539550 systemd[1]: libpod-conmon-fa02e7aaf001d396c153b8e5857591e4df43c08a339bcd808cf23bb8ff5e98c2.scope: Deactivated successfully.
Nov 29 03:01:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:00 np0005539550 nova_compute[257631]: 2025-11-29 08:01:00.890 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 275 MiB data, 509 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 5.0 MiB/s wr, 98 op/s
Nov 29 03:01:01 np0005539550 podman[286543]: 2025-11-29 08:01:01.024555424 +0000 UTC m=+0.050218104 container create 8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:01:01 np0005539550 systemd[1]: Started libpod-conmon-8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574.scope.
Nov 29 03:01:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:01:01 np0005539550 podman[286543]: 2025-11-29 08:01:01.003686161 +0000 UTC m=+0.029348861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:01:01 np0005539550 podman[286543]: 2025-11-29 08:01:01.100356251 +0000 UTC m=+0.126018961 container init 8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:01:01 np0005539550 podman[286543]: 2025-11-29 08:01:01.107256477 +0000 UTC m=+0.132919157 container start 8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:01:01 np0005539550 podman[286543]: 2025-11-29 08:01:01.110198332 +0000 UTC m=+0.135861012 container attach 8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:01:01 np0005539550 elegant_shockley[286559]: 167 167
Nov 29 03:01:01 np0005539550 systemd[1]: libpod-8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574.scope: Deactivated successfully.
Nov 29 03:01:01 np0005539550 podman[286543]: 2025-11-29 08:01:01.114155724 +0000 UTC m=+0.139818404 container died 8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:01:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e18946f92aed6d52da5dd667f7b0782b27220132e93c6aa807ded5f3b7905e33-merged.mount: Deactivated successfully.
Nov 29 03:01:01 np0005539550 podman[286543]: 2025-11-29 08:01:01.150375589 +0000 UTC m=+0.176038269 container remove 8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:01:01 np0005539550 systemd[1]: libpod-conmon-8c94b12da756a2ce36c67003b0b78cd0f80f3a90b29790b14c3e833ccddab574.scope: Deactivated successfully.
Nov 29 03:01:01 np0005539550 podman[286583]: 2025-11-29 08:01:01.324814486 +0000 UTC m=+0.047151746 container create 7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:01:01 np0005539550 systemd[1]: Started libpod-conmon-7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77.scope.
Nov 29 03:01:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:01:01 np0005539550 podman[286583]: 2025-11-29 08:01:01.304715992 +0000 UTC m=+0.027053262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:01:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a110363041f5579d4a04c95d6e50e9162d10c3269cfdc1f954b86260c8300ac4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a110363041f5579d4a04c95d6e50e9162d10c3269cfdc1f954b86260c8300ac4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a110363041f5579d4a04c95d6e50e9162d10c3269cfdc1f954b86260c8300ac4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a110363041f5579d4a04c95d6e50e9162d10c3269cfdc1f954b86260c8300ac4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:01 np0005539550 podman[286583]: 2025-11-29 08:01:01.414056496 +0000 UTC m=+0.136393776 container init 7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:01:01 np0005539550 podman[286583]: 2025-11-29 08:01:01.421393674 +0000 UTC m=+0.143730934 container start 7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:01:01 np0005539550 podman[286583]: 2025-11-29 08:01:01.425415096 +0000 UTC m=+0.147752356 container attach 7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_merkle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:01:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:01.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]: {
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:    "0": [
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:        {
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "devices": [
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "/dev/loop3"
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            ],
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "lv_name": "ceph_lv0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "lv_size": "7511998464",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "name": "ceph_lv0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "tags": {
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.cluster_name": "ceph",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.crush_device_class": "",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.encrypted": "0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.osd_id": "0",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.type": "block",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:                "ceph.vdo": "0"
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            },
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "type": "block",
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:            "vg_name": "ceph_vg0"
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:        }
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]:    ]
Nov 29 03:01:02 np0005539550 priceless_merkle[286600]: }
Nov 29 03:01:02 np0005539550 systemd[1]: libpod-7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77.scope: Deactivated successfully.
Nov 29 03:01:02 np0005539550 podman[286583]: 2025-11-29 08:01:02.258908823 +0000 UTC m=+0.981246083 container died 7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_merkle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:01:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a110363041f5579d4a04c95d6e50e9162d10c3269cfdc1f954b86260c8300ac4-merged.mount: Deactivated successfully.
Nov 29 03:01:02 np0005539550 podman[286583]: 2025-11-29 08:01:02.327179627 +0000 UTC m=+1.049516887 container remove 7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:02 np0005539550 systemd[1]: libpod-conmon-7a695bdfb11a7bf2ad35f28aa6b94cde8f12289083ca2697da4779123944ff77.scope: Deactivated successfully.
Nov 29 03:01:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 261 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 109 KiB/s rd, 5.3 MiB/s wr, 107 op/s
Nov 29 03:01:03 np0005539550 podman[286775]: 2025-11-29 08:01:02.912153133 +0000 UTC m=+0.021180912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:01:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:03Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:68:4e 10.100.0.11
Nov 29 03:01:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:03Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:68:4e 10.100.0.11
Nov 29 03:01:03 np0005539550 podman[286775]: 2025-11-29 08:01:03.071849284 +0000 UTC m=+0.180877083 container create 6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chandrasekhar, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:01:03 np0005539550 systemd[1]: Started libpod-conmon-6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967.scope.
Nov 29 03:01:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:01:03 np0005539550 podman[286775]: 2025-11-29 08:01:03.197237867 +0000 UTC m=+0.306265646 container init 6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chandrasekhar, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:01:03 np0005539550 podman[286775]: 2025-11-29 08:01:03.213177415 +0000 UTC m=+0.322205174 container start 6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:01:03 np0005539550 podman[286775]: 2025-11-29 08:01:03.217393302 +0000 UTC m=+0.326421061 container attach 6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:01:03 np0005539550 angry_chandrasekhar[286791]: 167 167
Nov 29 03:01:03 np0005539550 systemd[1]: libpod-6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967.scope: Deactivated successfully.
Nov 29 03:01:03 np0005539550 podman[286775]: 2025-11-29 08:01:03.219390623 +0000 UTC m=+0.328418392 container died 6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:01:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9ee29092b8b107dca5950ffda2107856aedf773d6371309ef87dd7f7dd13720e-merged.mount: Deactivated successfully.
Nov 29 03:01:03 np0005539550 podman[286775]: 2025-11-29 08:01:03.257000124 +0000 UTC m=+0.366027883 container remove 6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:01:03 np0005539550 systemd[1]: libpod-conmon-6e22a03a33f83d60be29a39d8aa78f1a5f77694b83b3fcb7bfc4a83269b23967.scope: Deactivated successfully.
Nov 29 03:01:03 np0005539550 podman[286816]: 2025-11-29 08:01:03.422419541 +0000 UTC m=+0.038771802 container create 49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noyce, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:01:03 np0005539550 systemd[1]: Started libpod-conmon-49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f.scope.
Nov 29 03:01:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:01:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f0ebff93a09ca193a227dc295fbcec394d57ccfb326c28d8c0e1773a8b977f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f0ebff93a09ca193a227dc295fbcec394d57ccfb326c28d8c0e1773a8b977f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f0ebff93a09ca193a227dc295fbcec394d57ccfb326c28d8c0e1773a8b977f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f0ebff93a09ca193a227dc295fbcec394d57ccfb326c28d8c0e1773a8b977f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:03 np0005539550 podman[286816]: 2025-11-29 08:01:03.408086335 +0000 UTC m=+0.024438616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:01:03 np0005539550 podman[286816]: 2025-11-29 08:01:03.528575153 +0000 UTC m=+0.144927444 container init 49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:01:03 np0005539550 podman[286816]: 2025-11-29 08:01:03.534798112 +0000 UTC m=+0.151150373 container start 49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:01:03 np0005539550 podman[286816]: 2025-11-29 08:01:03.675803615 +0000 UTC m=+0.292155876 container attach 49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noyce, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:01:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:04.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]: {
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]:        "osd_id": 0,
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]:        "type": "bluestore"
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]:    }
Nov 29 03:01:04 np0005539550 awesome_noyce[286832]: }
Nov 29 03:01:04 np0005539550 systemd[1]: libpod-49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f.scope: Deactivated successfully.
Nov 29 03:01:04 np0005539550 podman[286854]: 2025-11-29 08:01:04.557716407 +0000 UTC m=+0.025240406 container died 49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:01:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c8f0ebff93a09ca193a227dc295fbcec394d57ccfb326c28d8c0e1773a8b977f-merged.mount: Deactivated successfully.
Nov 29 03:01:04 np0005539550 podman[286854]: 2025-11-29 08:01:04.807450798 +0000 UTC m=+0.274974767 container remove 49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:04 np0005539550 systemd[1]: libpod-conmon-49b3817e06913204b3bd0c8201d809e134641015d00e46064d99caa5d534b86f.scope: Deactivated successfully.
Nov 29 03:01:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:01:04 np0005539550 nova_compute[257631]: 2025-11-29 08:01:04.897 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 237 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 211 KiB/s rd, 5.6 MiB/s wr, 119 op/s
Nov 29 03:01:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:01:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:05 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a9751a7d-fd61-4313-9fc1-c5a59d874b1e does not exist
Nov 29 03:01:05 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 170d2f49-6c73-4c63-8491-304c2537eb90 does not exist
Nov 29 03:01:05 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 05d0f83c-0883-40cc-b23b-c8120dd051fd does not exist
Nov 29 03:01:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:05.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:05 np0005539550 nova_compute[257631]: 2025-11-29 08:01:05.893 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:01:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:06.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 207 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 364 KiB/s rd, 5.7 MiB/s wr, 150 op/s
Nov 29 03:01:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:07.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:01:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:01:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 214 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 198 op/s
Nov 29 03:01:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:09.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:09 np0005539550 nova_compute[257631]: 2025-11-29 08:01:09.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:10.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:10 np0005539550 nova_compute[257631]: 2025-11-29 08:01:10.895 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 237 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.0 MiB/s wr, 257 op/s
Nov 29 03:01:10 np0005539550 nova_compute[257631]: 2025-11-29 08:01:10.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:11 np0005539550 nova_compute[257631]: 2025-11-29 08:01:11.510 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:11 np0005539550 NetworkManager[49039]: <info>  [1764403271.5116] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 03:01:11 np0005539550 NetworkManager[49039]: <info>  [1764403271.5126] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 03:01:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:11.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:11 np0005539550 nova_compute[257631]: 2025-11-29 08:01:11.713 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:11Z|00162|binding|INFO|Releasing lport a70c8006-642e-4a6b-bad1-4f848cd680db from this chassis (sb_readonly=0)
Nov 29 03:01:11 np0005539550 nova_compute[257631]: 2025-11-29 08:01:11.731 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:11 np0005539550 nova_compute[257631]: 2025-11-29 08:01:11.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:11 np0005539550 nova_compute[257631]: 2025-11-29 08:01:11.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:01:11 np0005539550 nova_compute[257631]: 2025-11-29 08:01:11.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:01:12 np0005539550 nova_compute[257631]: 2025-11-29 08:01:12.132 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:01:12 np0005539550 nova_compute[257631]: 2025-11-29 08:01:12.132 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:01:12 np0005539550 nova_compute[257631]: 2025-11-29 08:01:12.132 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:01:12 np0005539550 nova_compute[257631]: 2025-11-29 08:01:12.133 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1c4b0086-da42-4b01-8bd6-d9f6450bda8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
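The Acquiring/Acquired/Releasing triplets above come from oslo.concurrency's named internal locks. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the lock name mirrors the refresh_cache-<uuid> convention in this log, and the helper below is a placeholder, not nova's actual cache-refresh code:

    from oslo_concurrency import lockutils

    def refresh_network_info_cache(uuid):
        # Placeholder for nova's real cache rebuild.
        print(f"refreshing info cache for {uuid}")

    instance_uuid = "1c4b0086-da42-4b01-8bd6-d9f6450bda8e"

    # Serializes cache rebuilds per instance, as the log lines above show.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        refresh_network_info_cache(instance_uuid)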
Nov 29 03:01:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:12.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:12 np0005539550 podman[286977]: 2025-11-29 08:01:12.352046366 +0000 UTC m=+0.083698480 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
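The podman health_status event above embeds the container's config_data as a Python-style dict literal (note True rather than JSON true), so ast.literal_eval, not json.loads, is the right tool. A sketch that pulls the volume mounts out of such a field, using an abbreviated copy of the ovn_controller config_data:

    import ast

    config_data = ast.literal_eval(
        "{'net': 'host', 'privileged': True, "
        "'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run']}"
    )

    for vol in config_data["volumes"]:
        # src:dst[:options], as in the podman line above.
        src, dst, *opts = vol.split(":")
        print(f"{src} -> {dst} ({opts[0] if opts else 'rw'})")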
Nov 29 03:01:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 245 MiB data, 493 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.1 MiB/s wr, 248 op/s
Nov 29 03:01:13 np0005539550 nova_compute[257631]: 2025-11-29 08:01:13.658 257641 DEBUG nova.compute.manager [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-changed-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:13 np0005539550 nova_compute[257631]: 2025-11-29 08:01:13.659 257641 DEBUG nova.compute.manager [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing instance network info cache due to event network-changed-6346f0b1-5293-423e-99b7-d7688895c848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:01:13 np0005539550 nova_compute[257631]: 2025-11-29 08:01:13.659 257641 DEBUG oslo_concurrency.lockutils [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:01:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:13.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:14.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.504 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.523 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.524 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.524 257641 DEBUG oslo_concurrency.lockutils [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.524 257641 DEBUG nova.network.neutron [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.525 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.526 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.526 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.526 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.527 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.549 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.550 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.550 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.550 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.550 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 260 MiB data, 498 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.2 MiB/s wr, 238 op/s
Nov 29 03:01:14 np0005539550 nova_compute[257631]: 2025-11-29 08:01:14.977 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362858373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:15 np0005539550 nova_compute[257631]: 2025-11-29 08:01:15.033 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
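The subprocess pair above is nova's periodic disk-stats probe against the Ceph backend, mirrored by the mon's audit-channel dispatch lines. A sketch of the same probe, shelling out to ceph df and reading the cluster totals; the "stats" keys follow the usual ceph df JSON output, but treat their exact names as an assumption and check against your Ceph release:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out)["stats"]
    free_gb = stats["total_avail_bytes"] / 1024 ** 3
    total_gb = stats["total_bytes"] / 1024 ** 3
    print(f"{free_gb:.1f} GiB free of {total_gb:.1f} GiB")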
Nov 29 03:01:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:15.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:15 np0005539550 nova_compute[257631]: 2025-11-29 08:01:15.876 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:01:15 np0005539550 nova_compute[257631]: 2025-11-29 08:01:15.877 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:01:15 np0005539550 nova_compute[257631]: 2025-11-29 08:01:15.897 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.051 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.052 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4491MB free_disk=20.900978088378906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.053 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.053 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.258 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 1c4b0086-da42-4b01-8bd6-d9f6450bda8e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.258 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.259 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:01:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:16.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.303 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997768542' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.741 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.748 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.866 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
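Worked arithmetic for the inventory line above: placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this inventory yields 32 VCPUs, 7168 MB of RAM, and 17.1 GB of disk. A short sketch of that computation:

    # Capacity formula implied by the inventory data logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {cap}")
    # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 17.1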
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.900 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:01:16 np0005539550 nova_compute[257631]: 2025-11-29 08:01:16.902 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 260 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.9 MiB/s wr, 227 op/s
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.248 257641 DEBUG nova.network.neutron [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updated VIF entry in instance network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.249 257641 DEBUG nova.network.neutron [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.268 257641 DEBUG oslo_concurrency.lockutils [req-420dc293-d3fd-49f7-beaa-44ce564ec93a req-5983932e-6267-4719-905f-3ca19e0bc673 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
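The network_info blobs logged above are plain JSON, so the fixed and floating addresses can be pulled out mechanically. A small sketch that walks an abbreviated copy of the structure from the cache-update line:

    import json

    # Abbreviated from the instance_info_cache JSON logged above.
    network_info = json.loads("""[{
      "id": "6346f0b1-5293-423e-99b7-d7688895c848",
      "network": {"subnets": [{"ips": [{
          "address": "10.100.0.11", "type": "fixed",
          "floating_ips": [{"address": "192.168.122.210", "type": "floating"}]
      }]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], "fixed", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["id"], "floating", fip["address"])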
Nov 29 03:01:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:17.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.836 257641 DEBUG nova.compute.manager [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-changed-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.837 257641 DEBUG nova.compute.manager [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing instance network info cache due to event network-changed-6346f0b1-5293-423e-99b7-d7688895c848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.837 257641 DEBUG oslo_concurrency.lockutils [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.837 257641 DEBUG oslo_concurrency.lockutils [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:01:17 np0005539550 nova_compute[257631]: 2025-11-29 08:01:17.837 257641 DEBUG nova.network.neutron [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:01:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:18.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:18 np0005539550 nova_compute[257631]: 2025-11-29 08:01:18.896 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:18 np0005539550 nova_compute[257631]: 2025-11-29 08:01:18.896 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:01:18 np0005539550 nova_compute[257631]: 2025-11-29 08:01:18.896 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:01:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 271 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Nov 29 03:01:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:18.935 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:18.936 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:18.936 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:01:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:19.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:19 np0005539550 nova_compute[257631]: 2025-11-29 08:01:19.747 257641 DEBUG nova.network.neutron [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updated VIF entry in instance network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:01:19 np0005539550 nova_compute[257631]: 2025-11-29 08:01:19.747 257641 DEBUG nova.network.neutron [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:01:19 np0005539550 nova_compute[257631]: 2025-11-29 08:01:19.780 257641 DEBUG oslo_concurrency.lockutils [req-e867a211-1a98-413a-a478-0c732968b74d req-f6391e75-9cdd-4a2b-828f-d0569f593999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004688881327935976 of space, bias 1.0, pg target 1.4066643983807927 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:01:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
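The pg_autoscaler lines above are consistent with a pg target of roughly capacity_ratio x bias x (mon_target_pg_per_osd x OSD count): for 'vms', 0.004688881 x 1.0 x 300 is approximately 1.4067, matching the logged value under the assumption of the default mon_target_pg_per_osd of 100 and 3 OSDs. A back-of-envelope sketch; the real module adds minimums, power-of-two rounding, and hysteresis before changing pg_num, so treat this as an approximation:

    # Assumed PG budget: mon_target_pg_per_osd (100, default) x 3 OSDs.
    PG_BUDGET = 100 * 3

    def pg_target(capacity_ratio: float, bias: float) -> float:
        # Approximation of the pg_autoscaler's raw target before rounding.
        return capacity_ratio * bias * PG_BUDGET

    print(pg_target(0.004688881327935976, 1.0))  # ~1.407, as logged for 'vms'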
Nov 29 03:01:19 np0005539550 nova_compute[257631]: 2025-11-29 08:01:19.980 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:20.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:20 np0005539550 nova_compute[257631]: 2025-11-29 08:01:20.899 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 296 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.7 MiB/s wr, 167 op/s
Nov 29 03:01:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:21.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:22.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 317 MiB data, 549 MiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 5.0 MiB/s wr, 133 op/s
Nov 29 03:01:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:23.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:24.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 673 KiB/s rd, 4.7 MiB/s wr, 134 op/s
Nov 29 03:01:24 np0005539550 nova_compute[257631]: 2025-11-29 08:01:24.982 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:25 np0005539550 nova_compute[257631]: 2025-11-29 08:01:25.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:26.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 672 KiB/s rd, 4.3 MiB/s wr, 131 op/s
Nov 29 03:01:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:27.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:28.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 146 op/s
Nov 29 03:01:29 np0005539550 podman[287080]: 2025-11-29 08:01:29.048687821 +0000 UTC m=+0.061229647 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:01:29 np0005539550 podman[287081]: 2025-11-29 08:01:29.070032654 +0000 UTC m=+0.079189162 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
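The health_status=healthy events above are podman's periodic healthchecks (the configured test is /openstack/healthcheck inside each container). A sketch of triggering one on demand via the podman CLI, where an exit status of 0 means healthy; container name taken from the ovn_metadata_agent line above:

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout}")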
Nov 29 03:01:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:29.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:29 np0005539550 nova_compute[257631]: 2025-11-29 08:01:29.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:30.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:30 np0005539550 nova_compute[257631]: 2025-11-29 08:01:30.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:01:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 172 op/s
Nov 29 03:01:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:31.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:32.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 128 op/s
Nov 29 03:01:32 np0005539550 nova_compute[257631]: 2025-11-29 08:01:32.988 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:32 np0005539550 nova_compute[257631]: 2025-11-29 08:01:32.989 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:33 np0005539550 nova_compute[257631]: 2025-11-29 08:01:33.021 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:01:33 np0005539550 nova_compute[257631]: 2025-11-29 08:01:33.119 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:01:33 np0005539550 nova_compute[257631]: 2025-11-29 08:01:33.120 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:01:33 np0005539550 nova_compute[257631]: 2025-11-29 08:01:33.127 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:01:33 np0005539550 nova_compute[257631]: 2025-11-29 08:01:33.128 257641 INFO nova.compute.claims [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:01:33 np0005539550 nova_compute[257631]: 2025-11-29 08:01:33.573 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:01:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:33.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135576831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.052 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.058 257641 DEBUG nova.compute.provider_tree [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.086 257641 DEBUG nova.scheduler.client.report [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.138 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
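The acquire/release pair around "compute_resources" above is oslo.concurrency's named-semaphore locking; the 1.018 s hold time spans the whole resource claim. A minimal sketch of the same pattern (the function body is a hypothetical placeholder; only the decorator usage is the point):

    from oslo_concurrency import lockutils

    # Every acquire/release of a lock taken this way is logged exactly
    # like the lockutils.py:404/409/423 lines above.
    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # hypothetical stand-in for ResourceTracker.instance_claim()
        pass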
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.139 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.209 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.210 257641 DEBUG nova.network.neutron [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.289 257641 INFO nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:01:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:34.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.420 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.543 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.544 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.544 257641 INFO nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Creating image(s)#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.573 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.601 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.631 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.636 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.659 257641 DEBUG nova.policy [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ba1b423b724a47f692a3d9cbf91860d7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'afaf65dfeab546ee991af0438784b8a3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.703 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
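The wrapper above caps qemu-img at 1 GiB of address space (--as=1073741824) and 30 s of CPU time so a corrupt or hostile image cannot wedge the compute service. A sketch of the same guarded probe, using the prlimit support built into oslo.concurrency and the cached base-image path from the log:

    import json

    from oslo_concurrency import processutils

    # Same limits as the logged command line.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024, cpu_time=30)

    def qemu_img_info(path):
        # prlimit= makes execute() re-exec through
        # "python3 -m oslo_concurrency.prlimit", exactly as logged above.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=QEMU_IMG_LIMITS)
        return json.loads(out)

    info = qemu_img_info(
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488')
    print(info['format'], info['virtual-size'])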
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.704 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.705 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.705 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.729 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.733 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 291 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 106 KiB/s wr, 88 op/s
Nov 29 03:01:34 np0005539550 nova_compute[257631]: 2025-11-29 08:01:34.989 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:35 np0005539550 nova_compute[257631]: 2025-11-29 08:01:35.001 257641 DEBUG nova.compute.manager [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-changed-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:35 np0005539550 nova_compute[257631]: 2025-11-29 08:01:35.002 257641 DEBUG nova.compute.manager [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing instance network info cache due to event network-changed-6346f0b1-5293-423e-99b7-d7688895c848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:35 np0005539550 nova_compute[257631]: 2025-11-29 08:01:35.002 257641 DEBUG oslo_concurrency.lockutils [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:35 np0005539550 nova_compute[257631]: 2025-11-29 08:01:35.003 257641 DEBUG oslo_concurrency.lockutils [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:35 np0005539550 nova_compute[257631]: 2025-11-29 08:01:35.003 257641 DEBUG nova.network.neutron [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:35.301 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:35 np0005539550 nova_compute[257631]: 2025-11-29 08:01:35.302 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:35.302 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:01:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:35.303 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:35 np0005539550 nova_compute[257631]: 2025-11-29 08:01:35.904 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.025 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.068 257641 DEBUG nova.network.neutron [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Successfully created port: 417b0676-65f0-4e4a-a08c-9c313d926b20 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.113 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] resizing rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.232 257641 DEBUG nova.objects.instance [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'migration_context' on Instance uuid 28e30c6c-30cb-4da9-9f29-a5195810cc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.247 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
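The pair of operations above (rbd import at 08:01:34.733, resize to 1073741824 bytes at 08:01:36.113) is how the Rbd image backend turns the cached base file into the flavor's 1 GB root disk. A sketch of the same two steps: the import has to shell out (the rbd Python binding has no import call), while the resize can use the binding directly; pool, client id and paths are the values from the log:

    import rados
    import rbd
    from oslo_concurrency import processutils

    BASE = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    NAME = '28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk'

    # Step 1: import the flat base file as a format-2 RBD image.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', BASE, NAME, '--image-format=2',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # Step 2: grow it to the 1 GB root disk, as in rbd_utils.resize().
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, NAME) as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes, as logged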
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.247 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Ensure instance console log exists: /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.248 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.248 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:36 np0005539550 nova_compute[257631]: 2025-11-29 08:01:36.248 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:36.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 283 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 724 KiB/s wr, 111 op/s
Nov 29 03:01:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:37.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:37 np0005539550 nova_compute[257631]: 2025-11-29 08:01:37.749 257641 DEBUG nova.network.neutron [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updated VIF entry in instance network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:37 np0005539550 nova_compute[257631]: 2025-11-29 08:01:37.750 257641 DEBUG nova.network.neutron [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:37 np0005539550 nova_compute[257631]: 2025-11-29 08:01:37.770 257641 DEBUG oslo_concurrency.lockutils [req-99dd43f4-fa3c-470f-8066-132fe93f740f req-596c8208-669f-4a1b-bdc2-f9305f103eee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:37 np0005539550 nova_compute[257631]: 2025-11-29 08:01:37.892 257641 DEBUG nova.network.neutron [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Successfully updated port: 417b0676-65f0-4e4a-a08c-9c313d926b20 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:01:37 np0005539550 nova_compute[257631]: 2025-11-29 08:01:37.908 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:37 np0005539550 nova_compute[257631]: 2025-11-29 08:01:37.909 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquired lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:37 np0005539550 nova_compute[257631]: 2025-11-29 08:01:37.909 257641 DEBUG nova.network.neutron [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:01:38 np0005539550 nova_compute[257631]: 2025-11-29 08:01:38.115 257641 DEBUG nova.network.neutron [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:01:38 np0005539550 nova_compute[257631]: 2025-11-29 08:01:38.285 257641 DEBUG nova.compute.manager [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-changed-417b0676-65f0-4e4a-a08c-9c313d926b20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:38 np0005539550 nova_compute[257631]: 2025-11-29 08:01:38.286 257641 DEBUG nova.compute.manager [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Refreshing instance network info cache due to event network-changed-417b0676-65f0-4e4a-a08c-9c313d926b20. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:38 np0005539550 nova_compute[257631]: 2025-11-29 08:01:38.286 257641 DEBUG oslo_concurrency.lockutils [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:38.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 293 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Nov 29 03:01:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:01:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048498424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:01:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:01:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048498424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:01:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:39.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.776 257641 DEBUG nova.network.neutron [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updating instance_info_cache with network_info: [{"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.802 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Releasing lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.802 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Instance network_info: |[{"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
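The network_info blob above is what the rest of the build consumes. A minimal sketch of walking it for addresses, treating it as the plain JSON shown in the log (the real object is nova.network.model.NetworkInfo, which behaves like a list of these dicts):

    # Yields (port_id, address, kind) for every fixed and floating IP in
    # a network_info structure shaped like the one logged above.
    def instance_addresses(network_info):
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield vif['id'], ip['address'], 'fixed'
                    for fip in ip.get('floating_ips', []):
                        yield vif['id'], fip['address'], 'floating'

    # For the port above this yields a single row:
    # ('417b0676-65f0-4e4a-a08c-9c313d926b20', '10.100.0.5', 'fixed')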
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.803 257641 DEBUG oslo_concurrency.lockutils [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.803 257641 DEBUG nova.network.neutron [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Refreshing network info cache for port 417b0676-65f0-4e4a-a08c-9c313d926b20 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.806 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Start _get_guest_xml network_info=[{"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.810 257641 WARNING nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.815 257641 DEBUG nova.virt.libvirt.host [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.815 257641 DEBUG nova.virt.libvirt.host [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.818 257641 DEBUG nova.virt.libvirt.host [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.819 257641 DEBUG nova.virt.libvirt.host [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
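The two probes above first look for a cpu controller on cgroups v1 (missing) and then find one on cgroups v2, which decides whether libvirt can apply CPU shares and quota to the guest. An equivalent check outside nova, assuming the standard unified-hierarchy mount point:

    # cgroup v2 lists its active controllers in one file; nova's
    # _has_cgroupsv2_cpu_controller() is effectively asking this question.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    print('cpu' in controllers)  # True on this host, per the log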
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.820 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.820 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.820 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.820 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.821 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.821 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.821 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.821 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.822 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.822 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.822 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.822 257641 DEBUG nova.virt.hardware [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
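With flavor and image limits of 0:0:0 ("unconstrained") the maxima fall back to 65536, and a one-vCPU guest admits exactly one topology. A simplified model of the enumeration (not nova's exact code): every (sockets, cores, threads) triple whose product equals the vCPU count and respects the maxima is a candidate:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # No single dimension can exceed the vCPU count, which keeps the
        # search space small even with 65536-wide limits.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the single logged candidate
    print(list(possible_topologies(4)))  # (1,1,4), (1,2,2), ... (4,1,1)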
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.825 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:39 np0005539550 nova_compute[257631]: 2025-11-29 08:01:39.992 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516750251' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.286 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.312 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:40.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.317 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.401 257641 DEBUG nova.compute.manager [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-changed-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.401 257641 DEBUG nova.compute.manager [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing instance network info cache due to event network-changed-6346f0b1-5293-423e-99b7-d7688895c848. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.402 257641 DEBUG oslo_concurrency.lockutils [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.402 257641 DEBUG oslo_concurrency.lockutils [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.402 257641 DEBUG nova.network.neutron [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Refreshing network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:01:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1055240698' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.773 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.775 257641 DEBUG nova.virt.libvirt.vif [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1805929808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1805929808',id=38,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEuBiKv29X7C0drWwRABfdxNdRe5p32AdTPB10q93Ar576PCZcuPC/3VX2gJkGcV+mhRJIDE7C9Qv0DoWPW0kaJVi6f6+GX+1mDf7+x5AvCtDyfE2PGiajOtGRiWA/EXQ==',key_name='tempest-keypair-874975206',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='afaf65dfeab546ee991af0438784b8a3',ramdisk_id='',reservation_id='r-4ij879vg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba1b423b724a47f692a3d9cbf91860d7',uuid=28e30c6c-30cb-4da9-9f29-a5195810cc1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": 
"417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.775 257641 DEBUG nova.network.os_vif_util [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converting VIF {"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.776 257641 DEBUG nova.network.os_vif_util [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:40:cb,bridge_name='br-int',has_traffic_filtering=True,id=417b0676-65f0-4e4a-a08c-9c313d926b20,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap417b0676-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.777 257641 DEBUG nova.objects.instance [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28e30c6c-30cb-4da9-9f29-a5195810cc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.793 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <uuid>28e30c6c-30cb-4da9-9f29-a5195810cc1b</uuid>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <name>instance-00000026</name>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-1805929808</nova:name>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:01:39</nova:creationTime>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:user uuid="ba1b423b724a47f692a3d9cbf91860d7">tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member</nova:user>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:project uuid="afaf65dfeab546ee991af0438784b8a3">tempest-UpdateMultiattachVolumeNegativeTest-343315775</nova:project>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <nova:port uuid="417b0676-65f0-4e4a-a08c-9c313d926b20">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <entry name="serial">28e30c6c-30cb-4da9-9f29-a5195810cc1b</entry>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <entry name="uuid">28e30c6c-30cb-4da9-9f29-a5195810cc1b</entry>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk.config">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:d0:40:cb"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <target dev="tap417b0676-65"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/console.log" append="off"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:01:40 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:01:40 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:01:40 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:01:40 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
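The block above is the complete libvirt domain XML that Nova rendered for instance-00000026. A minimal sketch, assuming the libvirt-python bindings and a local qemu:///system socket, of reading that XML back from the hypervisor to verify what was actually defined (the UUID is the one shown in the log):

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("28e30c6c-30cb-4da9-9f29-a5195810cc1b")
    root = ET.fromstring(dom.XMLDesc(0))
    # Confirm the values Nova rendered into <name>, <memory> and <vcpu>.
    print(root.findtext("name"), root.findtext("memory"), root.findtext("vcpu"))
    conn.close()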
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.795 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Preparing to wait for external event network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.795 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.796 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.796 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
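The Acquiring / acquired / "released" triple above is oslo.concurrency's synchronized wrapper guarding the registration of the expected network-vif-plugged event. A sketch of the same idiom under the lock name seen in the log; the function body is a placeholder, not Nova's implementation:

    from oslo_concurrency import lockutils

    # Entering and leaving the decorated call produces the
    # Acquiring/acquired/"released" debug lines logged above.
    @lockutils.synchronized("28e30c6c-30cb-4da9-9f29-a5195810cc1b-events")
    def _create_or_get_event():
        pass  # critical section: create or fetch the expected event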
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.797 257641 DEBUG nova.virt.libvirt.vif [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1805929808',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1805929808',id=38,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEuBiKv29X7C0drWwRABfdxNdRe5p32AdTPB10q93Ar576PCZcuPC/3VX2gJkGcV+mhRJIDE7C9Qv0DoWPW0kaJVi6f6+GX+1mDf7+x5AvCtDyfE2PGiajOtGRiWA/EXQ==',key_name='tempest-keypair-874975206',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='afaf65dfeab546ee991af0438784b8a3',ramdisk_id='',reservation_id='r-4ij879vg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:01:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba1b423b724a47f692a3d9cbf91860d7',uuid=28e30c6c-30cb-4da9-9f29-a5195810cc1b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.797 257641 DEBUG nova.network.os_vif_util [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converting VIF {"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.798 257641 DEBUG nova.network.os_vif_util [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:40:cb,bridge_name='br-int',has_traffic_filtering=True,id=417b0676-65f0-4e4a-a08c-9c313d926b20,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap417b0676-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.798 257641 DEBUG os_vif [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:40:cb,bridge_name='br-int',has_traffic_filtering=True,id=417b0676-65f0-4e4a-a08c-9c313d926b20,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap417b0676-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.799 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.800 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.804 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.804 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap417b0676-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.805 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap417b0676-65, col_values=(('external_ids', {'iface-id': '417b0676-65f0-4e4a-a08c-9c313d926b20', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:40:cb', 'vm-uuid': '28e30c6c-30cb-4da9-9f29-a5195810cc1b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.806 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539550 NetworkManager[49039]: <info>  [1764403300.8072] manager: (tap417b0676-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.813 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.814 257641 INFO os_vif [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:40:cb,bridge_name='br-int',has_traffic_filtering=True,id=417b0676-65f0-4e4a-a08c-9c313d926b20,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap417b0676-65')#033[00m
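The AddBridgeCommand / AddPortCommand / DbSetCommand transactions above are os-vif driving OVSDB through ovsdbapp. A sketch of the equivalent calls, assuming ovsdbapp and an OVSDB endpoint at tcp:127.0.0.1:6640 (a placeholder; os-vif normally talks to the local unix socket):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction: ensure br-int exists, add the tap port, and tag the
    # interface so ovn-controller can match iface-id to the Neutron port UUID.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", datapath_type="system"))
        txn.add(api.add_port("br-int", "tap417b0676-65", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap417b0676-65",
            ("external_ids", {
                "iface-id": "417b0676-65f0-4e4a-a08c-9c313d926b20",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:d0:40:cb",
            })))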
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.879 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.879 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.879 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No VIF found with MAC fa:16:3e:d0:40:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.880 257641 INFO nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Using config drive#033[00m
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.904 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
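rbd_utils probes for the config-drive image before creating it; opening a missing image raises ImageNotFound. A sketch of that existence check, assuming the python-rbd/python-rados bindings and the same "openstack" cephx user seen in the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening the image is the existence check; close it if it succeeds.
        rbd.Image(ioctx, "28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk.config").close()
        print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")  # the case logged above
    finally:
        ioctx.close()
        cluster.shutdown()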
Nov 29 03:01:40 np0005539550 nova_compute[257631]: 2025-11-29 08:01:40.911 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 304 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.8 MiB/s wr, 120 op/s
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.362 257641 INFO nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Creating config drive at /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/disk.config#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.369 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeomakd02 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.392 257641 DEBUG nova.network.neutron [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updated VIF entry in instance network info cache for port 417b0676-65f0-4e4a-a08c-9c313d926b20. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.393 257641 DEBUG nova.network.neutron [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updating instance_info_cache with network_info: [{"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.413 257641 DEBUG oslo_concurrency.lockutils [req-f2440e7c-f6c4-4581-a7a7-e81a1f8d1c12 req-c67a166a-ff8e-4a37-8309-ad3cfd92473c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
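The instance_info_cache update above carries the full network_info JSON. A sketch of pulling the useful fields out of a blob with that exact structure; raw_network_info is a hypothetical variable standing in for the logged list:

    import json

    network_info = json.loads(raw_network_info)  # raw_network_info: placeholder
    for vif in network_info:
        net = vif["network"]
        for subnet in net["subnets"]:
            prefixlen = subnet["cidr"].split("/")[1]
            for ip in subnet["ips"]:
                # e.g. 417b0676-... 10.100.0.5/28 via 10.100.0.1 mtu 1442
                print(vif["id"], f"{ip['address']}/{prefixlen}",
                      "via", subnet["gateway"]["address"],
                      "mtu", net["meta"]["mtu"])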
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.499 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeomakd02" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.619 257641 DEBUG nova.storage.rbd_utils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] rbd image 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.623 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/disk.config 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:01:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:41.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.832 257641 DEBUG oslo_concurrency.processutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/disk.config 28e30c6c-30cb-4da9-9f29-a5195810cc1b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.833 257641 INFO nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Deleting local config drive /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b/disk.config because it was imported into RBD.#033[00m
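Config-drive creation here is two subprocesses: mkisofs builds the ISO, rbd import copies it into the vms pool, and the local file is then deleted. A sketch using plain subprocess instead of oslo.concurrency's processutils, omitting the multi-word -publisher argument shown in the log:

    import os
    import subprocess

    inst = "28e30c6c-30cb-4da9-9f29-a5195810cc1b"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # Build the ISO9660 config drive (volume label config-2) from the staging dir.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-quiet", "-J", "-r",
                    "-V", "config-2", "/tmp/tmpeomakd02"], check=True)
    # Import it as an RBD image, then drop the local copy as the log shows.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
    os.unlink(iso)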
Nov 29 03:01:41 np0005539550 kernel: tap417b0676-65: entered promiscuous mode
Nov 29 03:01:41 np0005539550 NetworkManager[49039]: <info>  [1764403301.8826] manager: (tap417b0676-65): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.883 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:41Z|00163|binding|INFO|Claiming lport 417b0676-65f0-4e4a-a08c-9c313d926b20 for this chassis.
Nov 29 03:01:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:41Z|00164|binding|INFO|417b0676-65f0-4e4a-a08c-9c313d926b20: Claiming fa:16:3e:d0:40:cb 10.100.0.5
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.892 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:40:cb 10.100.0.5'], port_security=['fa:16:3e:d0:40:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '28e30c6c-30cb-4da9-9f29-a5195810cc1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f73b0808-21fd-43a4-809d-85e512de1cb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'afaf65dfeab546ee991af0438784b8a3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8c6ebd91-3a75-49da-9c37-da4a9dd74667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbc8191a-1888-462d-843d-a3e5df7bfc2c, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=417b0676-65f0-4e4a-a08c-9c313d926b20) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.894 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 417b0676-65f0-4e4a-a08c-9c313d926b20 in datapath f73b0808-21fd-43a4-809d-85e512de1cb7 bound to our chassis#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.895 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f73b0808-21fd-43a4-809d-85e512de1cb7#033[00m
Nov 29 03:01:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:41Z|00165|binding|INFO|Setting lport 417b0676-65f0-4e4a-a08c-9c313d926b20 ovn-installed in OVS
Nov 29 03:01:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:41Z|00166|binding|INFO|Setting lport 417b0676-65f0-4e4a-a08c-9c313d926b20 up in Southbound
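At this point ovn-controller has claimed the lport and marked it up in the southbound database. A sketch, shelling out to ovn-sbctl (assumes a node with access to the OVN southbound DB), of inspecting that Port_Binding:

    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up", "find",
         "Port_Binding", "logical_port=417b0676-65f0-4e4a-a08c-9c313d926b20"],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # chassis should be this node once the claim lands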
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539550 nova_compute[257631]: 2025-11-29 08:01:41.905 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.909 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf157db-46aa-4a27-b16e-e0350499963d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.910 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf73b0808-21 in ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.912 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf73b0808-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.912 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b19a9900-8462-4813-bbe8-94d84e9099d5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.913 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[faa7874e-fdb2-475f-bda6-cf42c8fa1d1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:41 np0005539550 systemd-udevd[287474]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:01:41 np0005539550 systemd-machined[216673]: New machine qemu-22-instance-00000026.
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.925 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[59b00713-e475-45f6-b978-4daaa2c8a629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:41 np0005539550 systemd[1]: Started Virtual Machine qemu-22-instance-00000026.
Nov 29 03:01:41 np0005539550 NetworkManager[49039]: <info>  [1764403301.9319] device (tap417b0676-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:01:41 np0005539550 NetworkManager[49039]: <info>  [1764403301.9331] device (tap417b0676-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.949 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a00f86f6-f4f9-42a8-a2f1-a37e51820c20]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.978 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9501b2-0d76-4aa7-a03f-805b1c09bd1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:41 np0005539550 NetworkManager[49039]: <info>  [1764403301.9860] manager: (tapf73b0808-20): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 03:01:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:41.985 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0eed85-7539-4d7d-91fc-cdcd1f4c446a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.019 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2498205b-f77d-4a74-9287-0ae9689d92c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.022 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[927a5375-0b7a-42e3-8429-506692bba152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 NetworkManager[49039]: <info>  [1764403302.0458] device (tapf73b0808-20): carrier: link connected
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.052 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ff1965fe-ffc4-48c8-b7c5-e359805e16f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.070 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[380f63a7-4d67-4cb9-8a23-74cc49028381]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf73b0808-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634971, 'reachable_time': 31316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287507, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.086 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6bdc1d0b-015b-46d6-b856-b1e0547ac48a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:9a0d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 634971, 'tstamp': 634971}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287508, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.106 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[08afe58c-b572-4086-83ff-ab4bcf251d1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf73b0808-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:9a:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634971, 'reachable_time': 31316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287509, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.142 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c1766bbd-e5ca-42c8-bd7f-9a91c973e064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.204 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[021bc0d1-d9a7-4c55-aa17-9efd77dc50d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.205 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf73b0808-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.206 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.206 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf73b0808-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:42 np0005539550 kernel: tapf73b0808-20: entered promiscuous mode
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.208 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:42 np0005539550 NetworkManager[49039]: <info>  [1764403302.2087] manager: (tapf73b0808-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.209 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.212 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf73b0808-20, col_values=(('external_ids', {'iface-id': '76e8e3af-f64b-4dff-8ae5-0367f134cc2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:42Z|00167|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.214 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.215 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f73b0808-21fd-43a4-809d-85e512de1cb7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f73b0808-21fd-43a4-809d-85e512de1cb7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.216 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf8c037-976f-4711-bf61-e4f66e5b71f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.217 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-f73b0808-21fd-43a4-809d-85e512de1cb7
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/f73b0808-21fd-43a4-809d-85e512de1cb7.pid.haproxy
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID f73b0808-21fd-43a4-809d-85e512de1cb7
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:01:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:42.218 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'env', 'PROCESS_TAG=haproxy-f73b0808-21fd-43a4-809d-85e512de1cb7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f73b0808-21fd-43a4-809d-85e512de1cb7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
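The rendered haproxy config binds the metadata address 169.254.169.254:80 inside the ovnmeta- namespace, forwards to the /var/lib/neutron/metadata_proxy unix socket, and stamps requests with the X-OVN-Network-ID header. A minimal sketch of the spawn step logged just above, without neutron-rootwrap (assumes root and iproute2):

    import subprocess

    ns = "ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7"
    cfg = "/var/lib/neutron/ovn-metadata-proxy/f73b0808-21fd-43a4-809d-85e512de1cb7.conf"
    # haproxy daemonizes itself via the "daemon" keyword in the config above.
    subprocess.run(["ip", "netns", "exec", ns, "haproxy", "-f", cfg], check=True)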
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.229 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:42.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.351 257641 DEBUG nova.compute.manager [req-8244bf6b-865c-4338-a5ef-f9147e9075ce req-7145c17e-dea8-4381-90d4-509eda240297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.352 257641 DEBUG oslo_concurrency.lockutils [req-8244bf6b-865c-4338-a5ef-f9147e9075ce req-7145c17e-dea8-4381-90d4-509eda240297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.352 257641 DEBUG oslo_concurrency.lockutils [req-8244bf6b-865c-4338-a5ef-f9147e9075ce req-7145c17e-dea8-4381-90d4-509eda240297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.352 257641 DEBUG oslo_concurrency.lockutils [req-8244bf6b-865c-4338-a5ef-f9147e9075ce req-7145c17e-dea8-4381-90d4-509eda240297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.352 257641 DEBUG nova.compute.manager [req-8244bf6b-865c-4338-a5ef-f9147e9075ce req-7145c17e-dea8-4381-90d4-509eda240297 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Processing event network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
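[annotation] The acquire/release pairs above are nova serializing access to its per-instance event dict under a "<uuid>-events" lock before dispatching network-vif-plugged. A minimal sketch of the pattern with oslo.concurrency (assumes the oslo.concurrency package is installed; the dict layout is illustrative):

from oslo_concurrency import lockutils

def pop_event(events: dict, instance_uuid: str, event_name: str):
    # Same shape as the log: take the "<uuid>-events" lock, mutate the
    # per-instance event map, release. Returns None if nobody was waiting.
    with lockutils.lock(f"{instance_uuid}-events"):
        return events.get(instance_uuid, {}).pop(event_name, None)
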
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.396 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403302.395157, 28e30c6c-30cb-4da9-9f29-a5195810cc1b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.396 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] VM Started (Lifecycle Event)#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.398 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.402 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.406 257641 INFO nova.virt.libvirt.driver [-] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Instance spawned successfully.#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.407 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.421 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.428 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.432 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.433 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.434 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.434 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.434 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.435 257641 DEBUG nova.virt.libvirt.driver [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
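[annotation] The six "Found default" lines record the buses and models libvirt chose for the guest; nova persists them as image_hw_* system metadata so later operations (attach, rebuild) stay consistent. Collected as a plain dict, the values from the lines above are (illustrative only, not nova's storage format):

libvirt_chosen_defaults = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}
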
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.442 257641 DEBUG nova.network.neutron [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updated VIF entry in instance network info cache for port 6346f0b1-5293-423e-99b7-d7688895c848. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.442 257641 DEBUG nova.network.neutron [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [{"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
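[annotation] The cache update above embeds the full network_info JSON for the port. A self-contained sketch of pulling the fixed IPs out of such a blob (field names taken from the cache entry itself):

import json

def fixed_ips(network_info_json: str):
    # Walk vif -> network -> subnets -> ips, keeping only fixed addresses.
    vifs = json.loads(network_info_json)
    return [
        (vif["id"], ip["address"])
        for vif in vifs
        for subnet in vif["network"]["subnets"]
        for ip in subnet["ips"]
        if ip["type"] == "fixed"
    ]

# For the entry above this returns [("6346f0b1-...", "10.100.0.11")].
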
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.467 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.467 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403302.3955133, 28e30c6c-30cb-4da9-9f29-a5195810cc1b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.467 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.496 257641 DEBUG oslo_concurrency.lockutils [req-bbc24a17-ed43-4671-a21c-073603e2d92d req-0257b73e-62b3-4902-ba8b-e5464ca5960b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1c4b0086-da42-4b01-8bd6-d9f6450bda8e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.509 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.512 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403302.4015667, 28e30c6c-30cb-4da9-9f29-a5195810cc1b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.513 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.525 257641 INFO nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Took 7.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.526 257641 DEBUG nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.592 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.596 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.628 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.642 257641 INFO nova.compute.manager [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Took 9.55 seconds to build instance.#033[00m
Nov 29 03:01:42 np0005539550 nova_compute[257631]: 2025-11-29 08:01:42.666 257641 DEBUG oslo_concurrency.lockutils [None req-f0aeded8-c477-40fa-98b2-93ed1882c14e ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:42 np0005539550 podman[287581]: 2025-11-29 08:01:42.676782449 +0000 UTC m=+0.050609931 container create 44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:01:42 np0005539550 systemd[1]: Started libpod-conmon-44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd.scope.
Nov 29 03:01:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:01:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec336cd54eb859dfbc792ffa4fc698f1a239afba05f9acd72d69c952ae6d3dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:01:42 np0005539550 podman[287581]: 2025-11-29 08:01:42.650795387 +0000 UTC m=+0.024622899 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:01:42 np0005539550 podman[287581]: 2025-11-29 08:01:42.754762679 +0000 UTC m=+0.128590171 container init 44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:01:42 np0005539550 podman[287581]: 2025-11-29 08:01:42.761730459 +0000 UTC m=+0.135557941 container start 44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:01:42 np0005539550 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[287597]: [NOTICE]   (287617) : New worker (287625) forked
Nov 29 03:01:42 np0005539550 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[287597]: [NOTICE]   (287617) : Loading success.
Nov 29 03:01:42 np0005539550 podman[287594]: 2025-11-29 08:01:42.795775321 +0000 UTC m=+0.080517766 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller)
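[annotation] The health_status line carries the container's full config_data, including the healthcheck mount and test command that produced health_status=healthy. A hypothetical translator from that dict to podman CLI flags (the real rendering is done by the edpm_ansible/kolla tooling, not this helper):

def healthcheck_args(config_data: dict) -> list[str]:
    # "test" becomes the health command; "mount" is bind-mounted to
    # /openstack read-only, matching the volumes list in the log.
    hc = config_data.get("healthcheck")
    if not hc:
        return ["--health-cmd", "none"]   # podman's way to disable checks
    return [
        "--health-cmd", hc["test"],
        "--volume", f"{hc['mount']}:/openstack:ro,z",
    ]
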
Nov 29 03:01:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 312 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 450 KiB/s rd, 3.0 MiB/s wr, 85 op/s
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.511 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.511 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.512 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.512 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.512 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.513 257641 INFO nova.compute.manager [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Terminating instance#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.515 257641 DEBUG nova.compute.manager [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:01:43 np0005539550 kernel: tap6346f0b1-52 (unregistering): left promiscuous mode
Nov 29 03:01:43 np0005539550 NetworkManager[49039]: <info>  [1764403303.5669] device (tap6346f0b1-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.584 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:43Z|00168|binding|INFO|Releasing lport 6346f0b1-5293-423e-99b7-d7688895c848 from this chassis (sb_readonly=0)
Nov 29 03:01:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:43Z|00169|binding|INFO|Setting lport 6346f0b1-5293-423e-99b7-d7688895c848 down in Southbound
Nov 29 03:01:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:43Z|00170|binding|INFO|Removing iface tap6346f0b1-52 ovn-installed in OVS
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.589 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:68:4e 10.100.0.11'], port_security=['fa:16:3e:ab:68:4e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1c4b0086-da42-4b01-8bd6-d9f6450bda8e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86064cd197e14fa8a17d2a0d9547af3e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1ea0603-2fd5-45b7-93c4-f54e4ea109a0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2df18a4d-96ab-4a0e-9a1e-536ced01a074, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=6346f0b1-5293-423e-99b7-d7688895c848) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.590 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 6346f0b1-5293-423e-99b7-d7688895c848 in datapath 4537c1f2-a597-4d7c-a9c3-2ad7483510a6 unbound from our chassis#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.592 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4537c1f2-a597-4d7c-a9c3-2ad7483510a6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.594 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[67995eea-e5b6-4fd2-987c-e8bfe3dba6c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.594 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 namespace which is not needed anymore#033[00m
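[annotation] The "Matched UPDATE" entry is an ovsdbapp row event firing on a Port_Binding change (up went from [True] to [False]), which is what triggers the unbind and the namespace teardown that follow. A simplified sketch in the style of those events (assumes ovsdbapp is installed; neutron's real PortBindingUpdatedEvent carries more conditions than this):

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpDownEvent(row_event.RowEvent):
    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None,
        # exactly as printed in the matched-event line above.
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def match_fn(self, event, row, old):
        # Fire only when the port's "up" state actually changed.
        return hasattr(old, "up") and old.up != row.up

    def run(self, event, row, old):
        print(f"port {row.logical_port} up={row.up}")
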
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.601 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:43 np0005539550 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000022.scope: Deactivated successfully.
Nov 29 03:01:43 np0005539550 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000022.scope: Consumed 16.656s CPU time.
Nov 29 03:01:43 np0005539550 systemd-machined[216673]: Machine qemu-21-instance-00000022 terminated.
Nov 29 03:01:43 np0005539550 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[285888]: [NOTICE]   (285893) : haproxy version is 2.8.14-c23fe91
Nov 29 03:01:43 np0005539550 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[285888]: [NOTICE]   (285893) : path to executable is /usr/sbin/haproxy
Nov 29 03:01:43 np0005539550 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[285888]: [WARNING]  (285893) : Exiting Master process...
Nov 29 03:01:43 np0005539550 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[285888]: [ALERT]    (285893) : Current worker (285895) exited with code 143 (Terminated)
Nov 29 03:01:43 np0005539550 neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6[285888]: [WARNING]  (285893) : All workers exited. Exiting... (0)
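[annotation] Worker exit code 143 in the ALERT above is not a crash: supervisors report signal deaths as 128 plus the signal number, so 143 means the worker received SIGTERM from the container stop. A one-liner to check the arithmetic:

import signal

code = 143
sig = signal.Signals(code - 128)   # 143 - 128 = 15
assert sig is signal.SIGTERM       # terminated on request, not a fault
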
Nov 29 03:01:43 np0005539550 systemd[1]: libpod-89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6.scope: Deactivated successfully.
Nov 29 03:01:43 np0005539550 podman[287657]: 2025-11-29 08:01:43.71651417 +0000 UTC m=+0.040628533 container died 89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:01:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:43.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:01:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-63963fa8bfc24f339216b7e7a3c5ca93eb708b3759275f80cda4903cbeb6de32-merged.mount: Deactivated successfully.
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.753 257641 INFO nova.virt.libvirt.driver [-] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Instance destroyed successfully.#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.753 257641 DEBUG nova.objects.instance [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lazy-loading 'resources' on Instance uuid 1c4b0086-da42-4b01-8bd6-d9f6450bda8e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:01:43 np0005539550 podman[287657]: 2025-11-29 08:01:43.763352243 +0000 UTC m=+0.087466606 container cleanup 89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.767 257641 DEBUG nova.virt.libvirt.vif [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:00:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-2014344953',display_name='tempest-FloatingIPsAssociationTestJSON-server-2014344953',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-2014344953',id=34,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:00:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='86064cd197e14fa8a17d2a0d9547af3e',ramdisk_id='',reservation_id='r-eef128j9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-304325142',owner_user_name='tempest-FloatingIPsAssociationTestJSON-304325142-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:00:43Z,user_data=None,user_id='b956671bad8a4a0b99469a9d0258a2bc',uuid=1c4b0086-da42-4b01-8bd6-d9f6450bda8e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.767 257641 DEBUG nova.network.os_vif_util [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converting VIF {"id": "6346f0b1-5293-423e-99b7-d7688895c848", "address": "fa:16:3e:ab:68:4e", "network": {"id": "4537c1f2-a597-4d7c-a9c3-2ad7483510a6", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1808585921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86064cd197e14fa8a17d2a0d9547af3e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6346f0b1-52", "ovs_interfaceid": "6346f0b1-5293-423e-99b7-d7688895c848", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.768 257641 DEBUG nova.network.os_vif_util [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:68:4e,bridge_name='br-int',has_traffic_filtering=True,id=6346f0b1-5293-423e-99b7-d7688895c848,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6346f0b1-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.769 257641 DEBUG os_vif [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:68:4e,bridge_name='br-int',has_traffic_filtering=True,id=6346f0b1-5293-423e-99b7-d7688895c848,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6346f0b1-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.770 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.771 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6346f0b1-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.776 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.779 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.782 257641 INFO os_vif [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:68:4e,bridge_name='br-int',has_traffic_filtering=True,id=6346f0b1-5293-423e-99b7-d7688895c848,network=Network(4537c1f2-a597-4d7c-a9c3-2ad7483510a6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6346f0b1-52')#033[00m
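[annotation] The unplug path above commits a single ovsdbapp transaction, DelPortCommand(if_exists=True), against br-int. The CLI equivalent, as a subprocess stand-in (os-vif itself uses the native ovsdbapp transaction API, not a shell-out):

import subprocess

def del_port(bridge: str, port: str):
    # --if-exists mirrors DelPortCommand(if_exists=True): no error if the
    # tap device was already removed by an earlier cleanup.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
        check=True,
    )

del_port("br-int", "tap6346f0b1-52")
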
Nov 29 03:01:43 np0005539550 systemd[1]: libpod-conmon-89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6.scope: Deactivated successfully.
Nov 29 03:01:43 np0005539550 podman[287699]: 2025-11-29 08:01:43.839536376 +0000 UTC m=+0.044235157 container remove 89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.846 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[db398b30-7cb1-430a-8c30-038b242022b7]: (4, ('Sat Nov 29 08:01:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 (89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6)\n89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6\nSat Nov 29 08:01:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 (89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6)\n89f705bc0d6266ba23891b29b30e64d42e0192540da7f575d058c4680fc3a3b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.848 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2126d440-d0a7-4624-b7ce-31d30f1e5f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.849 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4537c1f2-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:01:43 np0005539550 kernel: tap4537c1f2-a0: left promiscuous mode
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.851 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:43 np0005539550 nova_compute[257631]: 2025-11-29 08:01:43.867 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.869 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c7df6336-f804-4194-8b1e-19999ca8a1b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.889 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[437d1097-7f87-4cab-af96-87a841529e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.891 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[42c094d7-36bc-4adf-82a4-c4c11cc54419]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.911 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[67c3d928-ac52-4dbc-9013-10ef7ddd47c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628934, 'reachable_time': 20155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287729, 'error': None, 'target': 'ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.914 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:01:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:01:43.914 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e154a580-a79a-4c2c-bc9c-de8e242a83b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:01:43 np0005539550 systemd[1]: run-netns-ovnmeta\x2d4537c1f2\x2da597\x2d4d7c\x2da9c3\x2d2ad7483510a6.mount: Deactivated successfully.
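[annotation] remove_netns plus the run-netns mount deactivation above complete the teardown of the metadata namespace. Neutron's privileged ip_lib wraps pyroute2 for this step; a minimal sketch, assuming pyroute2 is installed and the caller holds the needed privileges:

from pyroute2 import netns

name = "ovnmeta-4537c1f2-a597-4d7c-a9c3-2ad7483510a6"
if name in netns.listnetns():
    netns.remove(name)   # unlinks /var/run/netns/<name>, as logged above
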
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.206 257641 DEBUG nova.compute.manager [req-a302ce4f-97f9-47a6-a43a-f2038b0de9f4 req-df56a772-c123-499c-884e-8b27a1b0ea04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-vif-unplugged-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.209 257641 DEBUG oslo_concurrency.lockutils [req-a302ce4f-97f9-47a6-a43a-f2038b0de9f4 req-df56a772-c123-499c-884e-8b27a1b0ea04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.209 257641 DEBUG oslo_concurrency.lockutils [req-a302ce4f-97f9-47a6-a43a-f2038b0de9f4 req-df56a772-c123-499c-884e-8b27a1b0ea04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.210 257641 DEBUG oslo_concurrency.lockutils [req-a302ce4f-97f9-47a6-a43a-f2038b0de9f4 req-df56a772-c123-499c-884e-8b27a1b0ea04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.210 257641 DEBUG nova.compute.manager [req-a302ce4f-97f9-47a6-a43a-f2038b0de9f4 req-df56a772-c123-499c-884e-8b27a1b0ea04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] No waiting events found dispatching network-vif-unplugged-6346f0b1-5293-423e-99b7-d7688895c848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.210 257641 DEBUG nova.compute.manager [req-a302ce4f-97f9-47a6-a43a-f2038b0de9f4 req-df56a772-c123-499c-884e-8b27a1b0ea04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-vif-unplugged-6346f0b1-5293-423e-99b7-d7688895c848 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.228 257641 INFO nova.virt.libvirt.driver [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Deleting instance files /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e_del#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.229 257641 INFO nova.virt.libvirt.driver [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Deletion of /var/lib/nova/instances/1c4b0086-da42-4b01-8bd6-d9f6450bda8e_del complete#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.290 257641 INFO nova.compute.manager [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.291 257641 DEBUG oslo.service.loopingcall [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.291 257641 DEBUG nova.compute.manager [-] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.292 257641 DEBUG nova.network.neutron [-] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:01:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:44.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.433 257641 DEBUG nova.compute.manager [req-95401a65-ae2d-4ce2-a907-d789d570b6c0 req-e8ccd045-2fba-4435-826b-98a67fde4982 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.434 257641 DEBUG oslo_concurrency.lockutils [req-95401a65-ae2d-4ce2-a907-d789d570b6c0 req-e8ccd045-2fba-4435-826b-98a67fde4982 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.435 257641 DEBUG oslo_concurrency.lockutils [req-95401a65-ae2d-4ce2-a907-d789d570b6c0 req-e8ccd045-2fba-4435-826b-98a67fde4982 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.435 257641 DEBUG oslo_concurrency.lockutils [req-95401a65-ae2d-4ce2-a907-d789d570b6c0 req-e8ccd045-2fba-4435-826b-98a67fde4982 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.435 257641 DEBUG nova.compute.manager [req-95401a65-ae2d-4ce2-a907-d789d570b6c0 req-e8ccd045-2fba-4435-826b-98a67fde4982 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] No waiting events found dispatching network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:44 np0005539550 nova_compute[257631]: 2025-11-29 08:01:44.436 257641 WARNING nova.compute.manager [req-95401a65-ae2d-4ce2-a907-d789d570b6c0 req-e8ccd045-2fba-4435-826b-98a67fde4982 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received unexpected event network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:01:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 332 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 4.5 MiB/s wr, 113 op/s
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.135 257641 DEBUG nova.network.neutron [-] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.151 257641 INFO nova.compute.manager [-] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Took 0.86 seconds to deallocate network for instance.#033[00m
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.195 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.196 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.253 257641 DEBUG oslo_concurrency.processutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:01:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:01:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/583124969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.709 257641 DEBUG oslo_concurrency.processutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
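The 0.456 s between "Running cmd" and "returned: 0" is the ceph df subprocess itself; the resource tracker shells out to it to size the RBD-backed DISK_GB inventory. A sketch of the same call, assuming oslo.concurrency and the client.openstack keyring are available on the host:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals; per-pool numbers sit under stats['pools'].
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])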
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.715 257641 DEBUG nova.compute.provider_tree [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.729 257641 DEBUG nova.scheduler.client.report [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
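Placement derives schedulable capacity for each resource class in that inventory as (total - reserved) * allocation_ratio, so the figures above work out as follows:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1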
Nov 29 03:01:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
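The anonymous "HEAD / HTTP/1.0" requests alternating every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes against the RGW beast frontend. Something as small as the following reproduces one; host and port are assumptions, since beast does not log the listening endpoint here:

    import http.client

    conn = http.client.HTTPConnection('localhost', 8080, timeout=2)
    conn.request('HEAD', '/')          # logged as an anonymous "HEAD /"
    print(conn.getresponse().status)   # 200, matching the entries above
    conn.close()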
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.769 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.793 257641 INFO nova.scheduler.client.report [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Deleted allocations for instance 1c4b0086-da42-4b01-8bd6-d9f6450bda8e#033[00m
Nov 29 03:01:45 np0005539550 nova_compute[257631]: 2025-11-29 08:01:45.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.071 257641 DEBUG oslo_concurrency.lockutils [None req-55dc3dfd-d0e1-472c-be11-181faaece34a b956671bad8a4a0b99469a9d0258a2bc 86064cd197e14fa8a17d2a0d9547af3e - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.296 257641 DEBUG nova.compute.manager [req-dd96347f-e2d8-4ba7-b30e-ca2e4ef5f0ad req-31f7239e-8b70-40f7-aafe-c392d7eb0398 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.296 257641 DEBUG oslo_concurrency.lockutils [req-dd96347f-e2d8-4ba7-b30e-ca2e4ef5f0ad req-31f7239e-8b70-40f7-aafe-c392d7eb0398 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.297 257641 DEBUG oslo_concurrency.lockutils [req-dd96347f-e2d8-4ba7-b30e-ca2e4ef5f0ad req-31f7239e-8b70-40f7-aafe-c392d7eb0398 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.297 257641 DEBUG oslo_concurrency.lockutils [req-dd96347f-e2d8-4ba7-b30e-ca2e4ef5f0ad req-31f7239e-8b70-40f7-aafe-c392d7eb0398 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1c4b0086-da42-4b01-8bd6-d9f6450bda8e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.297 257641 DEBUG nova.compute.manager [req-dd96347f-e2d8-4ba7-b30e-ca2e4ef5f0ad req-31f7239e-8b70-40f7-aafe-c392d7eb0398 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] No waiting events found dispatching network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.297 257641 WARNING nova.compute.manager [req-dd96347f-e2d8-4ba7-b30e-ca2e4ef5f0ad req-31f7239e-8b70-40f7-aafe-c392d7eb0398 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received unexpected event network-vif-plugged-6346f0b1-5293-423e-99b7-d7688895c848 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:01:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:46.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:46 np0005539550 nova_compute[257631]: 2025-11-29 08:01:46.528 257641 DEBUG nova.compute.manager [req-48ec1135-2c86-46fb-860f-d31d8f1a08c2 req-ca5d57c5-656d-4f23-8b7a-d72f4485a040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Received event network-vif-deleted-6346f0b1-5293-423e-99b7-d7688895c848 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 340 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Nov 29 03:01:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:47.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:47 np0005539550 nova_compute[257631]: 2025-11-29 08:01:47.787 257641 DEBUG nova.compute.manager [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-changed-417b0676-65f0-4e4a-a08c-9c313d926b20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:01:47 np0005539550 nova_compute[257631]: 2025-11-29 08:01:47.787 257641 DEBUG nova.compute.manager [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Refreshing instance network info cache due to event network-changed-417b0676-65f0-4e4a-a08c-9c313d926b20. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:01:47 np0005539550 nova_compute[257631]: 2025-11-29 08:01:47.787 257641 DEBUG oslo_concurrency.lockutils [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:01:47 np0005539550 nova_compute[257631]: 2025-11-29 08:01:47.788 257641 DEBUG oslo_concurrency.lockutils [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:01:47 np0005539550 nova_compute[257631]: 2025-11-29 08:01:47.788 257641 DEBUG nova.network.neutron [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Refreshing network info cache for port 417b0676-65f0-4e4a-a08c-9c313d926b20 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:01:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:48.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:48 np0005539550 nova_compute[257631]: 2025-11-29 08:01:48.774 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 293 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Nov 29 03:01:49 np0005539550 nova_compute[257631]: 2025-11-29 08:01:49.236 257641 DEBUG nova.network.neutron [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updated VIF entry in instance network info cache for port 417b0676-65f0-4e4a-a08c-9c313d926b20. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:01:49 np0005539550 nova_compute[257631]: 2025-11-29 08:01:49.237 257641 DEBUG nova.network.neutron [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updating instance_info_cache with network_info: [{"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:01:49 np0005539550 nova_compute[257631]: 2025-11-29 08:01:49.253 257641 DEBUG oslo_concurrency.lockutils [req-646d0f27-16f8-4a50-810e-072b72e86b93 req-ac75f063-a72a-4cbf-9890-62f5a5085776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
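The instance_info_cache payload two lines up is a list of VIF dicts; fixed and floating addresses nest under network → subnets → ips. A sketch of walking a trimmed copy of that structure:

    vif = {   # abridged from the cache entry logged above
        "id": "417b0676-65f0-4e4a-a08c-9c313d926b20",
        "network": {"subnets": [{"ips": [{
            "address": "10.100.0.5",
            "floating_ips": [{"address": "192.168.122.180"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("floating:", fip["address"])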
Nov 29 03:01:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:50Z|00171|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:01:50 np0005539550 nova_compute[257631]: 2025-11-29 08:01:50.268 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:50.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:50 np0005539550 nova_compute[257631]: 2025-11-29 08:01:50.911 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 293 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 277 op/s
Nov 29 03:01:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:51.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:52.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 293 MiB data, 579 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 259 op/s
Nov 29 03:01:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:53.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:53 np0005539550 nova_compute[257631]: 2025-11-29 08:01:53.776 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:54.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 293 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 252 op/s
Nov 29 03:01:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:01:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:01:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:55.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:01:55 np0005539550 nova_compute[257631]: 2025-11-29 08:01:55.913 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:56Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d0:40:cb 10.100.0.5
Nov 29 03:01:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:01:56Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d0:40:cb 10.100.0.5
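The DHCPOFFER/DHCPACK pair is answered by ovn-controller itself (OVN's native DHCP responder), not by a dnsmasq process; the options served to fa:16:3e:d0:40:cb live in the northbound DHCP_Options table. A sketch of dumping them as a cross-check, assuming ovn-nbctl on this host can reach the NB database (it may only be reachable from the controller node):

    import subprocess

    print(subprocess.run(['ovn-nbctl', 'list', 'DHCP_Options'],
                         capture_output=True, text=True).stdout)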
Nov 29 03:01:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:56.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 260 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.0 MiB/s wr, 249 op/s
Nov 29 03:01:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:57.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:01:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:01:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:01:58.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:01:58 np0005539550 nova_compute[257631]: 2025-11-29 08:01:58.754 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403303.752814, 1c4b0086-da42-4b01-8bd6-d9f6450bda8e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:01:58 np0005539550 nova_compute[257631]: 2025-11-29 08:01:58.754 257641 INFO nova.compute.manager [-] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:01:58 np0005539550 nova_compute[257631]: 2025-11-29 08:01:58.778 257641 DEBUG nova.compute.manager [None req-5f557710-24a3-461c-8371-68cee685377f - - - - - -] [instance: 1c4b0086-da42-4b01-8bd6-d9f6450bda8e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
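The Stopped lifecycle event lands ~13 s after terminate finished because it is emitted asynchronously by the libvirt driver's event loop, after which the manager re-checks power state. Roughly how such events are consumed from libvirt, as a sketch (the callback body is illustrative):

    import libvirt

    def on_lifecycle(conn, dom, event, detail, _opaque):
        # VIR_DOMAIN_EVENT_STOPPED maps to the "VM Stopped" line above.
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print(dom.UUIDString(), 'stopped')

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    # A real consumer would now loop on libvirt.virEventRunDefaultImpl().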
Nov 29 03:01:58 np0005539550 nova_compute[257631]: 2025-11-29 08:01:58.779 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:01:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 241 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 215 op/s
Nov 29 03:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:01:59
Nov 29 03:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'backups', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Nov 29 03:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
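"prepared 0/10 changes" means the upmap balancer examined those eleven pools and found nothing worth moving (the PGs are already even). The same state is visible from the CLI, assuming an admin keyring on the host; the field names below are as the balancer module reports them:

    import json, subprocess

    out = subprocess.run(['ceph', 'balancer', 'status', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status['mode'], status['active'])   # expect: upmap True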
Nov 29 03:01:59 np0005539550 podman[287816]: 2025-11-29 08:01:59.336112614 +0000 UTC m=+0.067939400 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:01:59 np0005539550 podman[287815]: 2025-11-29 08:01:59.37418487 +0000 UTC m=+0.104745593 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
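The two health_status=healthy records are podman's healthcheck timers firing for ovn_metadata_agent and multipathd; each runs the configured /openstack/healthcheck test inside its container. The same check can be triggered by hand, as a sketch:

    import subprocess

    # Exit status 0 is what podman records as health_status=healthy.
    for name in ('ovn_metadata_agent', 'multipathd'):
        subprocess.run(['podman', 'healthcheck', 'run', name], check=True)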
Nov 29 03:01:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:01:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:01:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:01:59.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:00.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:02:00Z|00172|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:02:00 np0005539550 nova_compute[257631]: 2025-11-29 08:02:00.612 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:00 np0005539550 nova_compute[257631]: 2025-11-29 08:02:00.915 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 258 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 173 op/s
Nov 29 03:02:01 np0005539550 nova_compute[257631]: 2025-11-29 08:02:01.177 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:02:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.0 total, 600.0 interval
    Cumulative writes: 6703 writes, 30K keys, 6697 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
    Cumulative WAL: 6703 writes, 6697 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 1511 writes, 6359 keys, 1510 commit groups, 1.0 writes per commit group, ingest: 10.19 MB, 0.02 MB/s
    Interval WAL: 1511 writes, 1510 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     20.4      1.84              0.12        17    0.108       0      0       0.0       0.0
      L6      1/0    8.85 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3     19.4     16.5      9.75              0.46        16    0.609     86K   8606       0.0       0.0
     Sum      1/0    8.85 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.3     16.3     17.1     11.59              0.58        33    0.351     86K   8606       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    108.3    107.0      0.35              0.11         6    0.058     19K   2050       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
     Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0     19.4     16.5      9.75              0.46        16    0.609     86K   8606       0.0       0.0
    High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     20.4      1.83              0.12        16    0.114       0      0       0.0       0.0
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 3000.0 total, 600.0 interval
    Flush(GB): cumulative 0.037, interval 0.007
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.19 GB write, 0.07 MB/s write, 0.18 GB read, 0.06 MB/s read, 11.6 seconds
    Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 18.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000177 secs_since: 0
    Block cache entry stats(count,size,portion): DataBlock(1063,18.20 MB,5.98815%) FilterBlock(34,245.11 KB,0.0787384%) IndexBlock(34,458.67 KB,0.147343%) Misc(1,0.00 KB,0%)

    ** File Read Latency Histogram By Level [default] **
Nov 29 03:02:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:01.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:02.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 254 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 615 KiB/s rd, 3.6 MiB/s wr, 142 op/s
Nov 29 03:02:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.003000077s ======
Nov 29 03:02:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:03.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000077s
Nov 29 03:02:03 np0005539550 nova_compute[257631]: 2025-11-29 08:02:03.781 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:04.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:04 np0005539550 nova_compute[257631]: 2025-11-29 08:02:04.552 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 242 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 670 KiB/s rd, 4.2 MiB/s wr, 158 op/s
Nov 29 03:02:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:05.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:05 np0005539550 nova_compute[257631]: 2025-11-29 08:02:05.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:06.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev eefead3d-10bf-4f8e-93f7-6a65f05fcd56 does not exist
Nov 29 03:02:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f59ee196-acf2-484f-9ed1-7fc711820c3f does not exist
Nov 29 03:02:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 308cb99c-12af-4fb2-a348-7386d467b225 does not exist
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:02:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
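The burst of handle_command/audit lines is the mgr's cephadm module refreshing its view (minimal conf, admin and bootstrap-osd keys, destroyed-OSD tree); each entry corresponds to one mon_command call. The same wire path is reachable from the rados Python binding, as a sketch assuming a reachable cluster and a key permitted to run these commands:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.admin')   # key must allow these cmds
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'config generate-minimal-conf'}), b'')
    print(ret, out.decode())
    cluster.shutdown()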
Nov 29 03:02:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 200 MiB data, 553 MiB used, 20 GiB / 21 GiB avail; 677 KiB/s rd, 4.2 MiB/s wr, 169 op/s
Nov 29 03:02:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:02:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:02:07 np0005539550 podman[288125]: 2025-11-29 08:02:07.512298699 +0000 UTC m=+0.087814315 container create c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:02:07 np0005539550 podman[288125]: 2025-11-29 08:02:07.447794569 +0000 UTC m=+0.023310215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:07 np0005539550 systemd[1]: Started libpod-conmon-c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90.scope.
Nov 29 03:02:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:02:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:07.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:07 np0005539550 podman[288125]: 2025-11-29 08:02:07.820313234 +0000 UTC m=+0.395828880 container init c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:02:07 np0005539550 podman[288125]: 2025-11-29 08:02:07.828382943 +0000 UTC m=+0.403898569 container start c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:02:07 np0005539550 podman[288125]: 2025-11-29 08:02:07.832243973 +0000 UTC m=+0.407759599 container attach c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:02:07 np0005539550 systemd[1]: libpod-c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90.scope: Deactivated successfully.
Nov 29 03:02:07 np0005539550 loving_keller[288141]: 167 167
Nov 29 03:02:07 np0005539550 podman[288125]: 2025-11-29 08:02:07.836528504 +0000 UTC m=+0.412044140 container died c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:02:07 np0005539550 conmon[288141]: conmon c614a34fd828b27781b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90.scope/container/memory.events
Nov 29 03:02:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4f6a3d120db2ff687f9e32243ab4499ea8c1a9d76a27660687b28b975e0ec910-merged.mount: Deactivated successfully.
Nov 29 03:02:07 np0005539550 podman[288125]: 2025-11-29 08:02:07.875225146 +0000 UTC m=+0.450740772 container remove c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:02:07 np0005539550 systemd[1]: libpod-conmon-c614a34fd828b27781b4633224fce41c8db83b355a54f1d6d8022eb01225ef90.scope: Deactivated successfully.
Nov 29 03:02:08 np0005539550 podman[288164]: 2025-11-29 08:02:08.099897473 +0000 UTC m=+0.095705139 container create 0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:02:08 np0005539550 podman[288164]: 2025-11-29 08:02:08.026259917 +0000 UTC m=+0.022067633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:08 np0005539550 systemd[1]: Started libpod-conmon-0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760.scope.
Nov 29 03:02:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:02:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44a3b928bc252713721129b475e8470bce8f6160d7ffc80f896d375ebfe6ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44a3b928bc252713721129b475e8470bce8f6160d7ffc80f896d375ebfe6ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44a3b928bc252713721129b475e8470bce8f6160d7ffc80f896d375ebfe6ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44a3b928bc252713721129b475e8470bce8f6160d7ffc80f896d375ebfe6ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f44a3b928bc252713721129b475e8470bce8f6160d7ffc80f896d375ebfe6ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:08 np0005539550 podman[288164]: 2025-11-29 08:02:08.211452872 +0000 UTC m=+0.207260558 container init 0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:02:08 np0005539550 podman[288164]: 2025-11-29 08:02:08.219802548 +0000 UTC m=+0.215610214 container start 0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:02:08 np0005539550 podman[288164]: 2025-11-29 08:02:08.229412137 +0000 UTC m=+0.225219833 container attach 0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:02:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:08.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:08 np0005539550 nova_compute[257631]: 2025-11-29 08:02:08.783 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:02:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 3.4 MiB/s wr, 134 op/s
Nov 29 03:02:09 np0005539550 unruffled_swirles[288180]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:02:09 np0005539550 unruffled_swirles[288180]: --> relative data size: 1.0
Nov 29 03:02:09 np0005539550 unruffled_swirles[288180]: --> All data devices are unavailable
Nov 29 03:02:09 np0005539550 systemd[1]: libpod-0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760.scope: Deactivated successfully.
Nov 29 03:02:09 np0005539550 conmon[288180]: conmon 0824c5d0be4de1dc9c6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760.scope/container/memory.events
Nov 29 03:02:09 np0005539550 podman[288164]: 2025-11-29 08:02:09.198789816 +0000 UTC m=+1.194597482 container died 0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:02:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8f44a3b928bc252713721129b475e8470bce8f6160d7ffc80f896d375ebfe6ca-merged.mount: Deactivated successfully.
Nov 29 03:02:09 np0005539550 podman[288164]: 2025-11-29 08:02:09.262819674 +0000 UTC m=+1.258627340 container remove 0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_swirles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:02:09 np0005539550 systemd[1]: libpod-conmon-0824c5d0be4de1dc9c6daff4965f233872ac72a6c3eecca3e76c8af5d18de760.scope: Deactivated successfully.
Nov 29 03:02:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:09.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:09 np0005539550 podman[288397]: 2025-11-29 08:02:09.90919058 +0000 UTC m=+0.041341251 container create be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:02:09 np0005539550 systemd[1]: Started libpod-conmon-be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab.scope.
Nov 29 03:02:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:02:09 np0005539550 podman[288397]: 2025-11-29 08:02:09.986568914 +0000 UTC m=+0.118719605 container init be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:02:09 np0005539550 podman[288397]: 2025-11-29 08:02:09.891374479 +0000 UTC m=+0.023525170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:09 np0005539550 podman[288397]: 2025-11-29 08:02:09.993488363 +0000 UTC m=+0.125639034 container start be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:02:09 np0005539550 podman[288397]: 2025-11-29 08:02:09.997290801 +0000 UTC m=+0.129441472 container attach be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:02:09 np0005539550 focused_banzai[288414]: 167 167
Nov 29 03:02:09 np0005539550 systemd[1]: libpod-be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab.scope: Deactivated successfully.
Nov 29 03:02:09 np0005539550 podman[288397]: 2025-11-29 08:02:09.998655907 +0000 UTC m=+0.130806588 container died be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:02:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-57d9e2812e4ace7897ca947a32206158eb36d8d7c4a9218a2ae6a31429ec6903-merged.mount: Deactivated successfully.
Nov 29 03:02:10 np0005539550 podman[288397]: 2025-11-29 08:02:10.036076696 +0000 UTC m=+0.168227367 container remove be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:02:10 np0005539550 systemd[1]: libpod-conmon-be555d3d19c1bc8d2f121ede73a1433d9ec8758711c6803193ee33830066e9ab.scope: Deactivated successfully.
Nov 29 03:02:10 np0005539550 podman[288439]: 2025-11-29 08:02:10.21506165 +0000 UTC m=+0.044662627 container create 2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_vaughan, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:02:10 np0005539550 systemd[1]: Started libpod-conmon-2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb.scope.
Nov 29 03:02:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:02:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5bde62fc889f327d965eb33c327df441b18dc7c6070029beedef271d69bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5bde62fc889f327d965eb33c327df441b18dc7c6070029beedef271d69bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5bde62fc889f327d965eb33c327df441b18dc7c6070029beedef271d69bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdbe5bde62fc889f327d965eb33c327df441b18dc7c6070029beedef271d69bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:10 np0005539550 podman[288439]: 2025-11-29 08:02:10.288794349 +0000 UTC m=+0.118395356 container init 2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:02:10 np0005539550 podman[288439]: 2025-11-29 08:02:10.195466143 +0000 UTC m=+0.025067140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:10 np0005539550 podman[288439]: 2025-11-29 08:02:10.296831357 +0000 UTC m=+0.126432334 container start 2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:02:10 np0005539550 podman[288439]: 2025-11-29 08:02:10.30077671 +0000 UTC m=+0.130377687 container attach 2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_vaughan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:02:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:10.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:10 np0005539550 nova_compute[257631]: 2025-11-29 08:02:10.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 457 KiB/s rd, 2.3 MiB/s wr, 97 op/s
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]: {
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:    "0": [
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:        {
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "devices": [
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "/dev/loop3"
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            ],
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "lv_name": "ceph_lv0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "lv_size": "7511998464",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "name": "ceph_lv0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "tags": {
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.cluster_name": "ceph",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.crush_device_class": "",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.encrypted": "0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.osd_id": "0",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.type": "block",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:                "ceph.vdo": "0"
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            },
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "type": "block",
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:            "vg_name": "ceph_vg0"
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:        }
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]:    ]
Nov 29 03:02:11 np0005539550 vigilant_vaughan[288455]: }
Nov 29 03:02:11 np0005539550 systemd[1]: libpod-2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb.scope: Deactivated successfully.
Nov 29 03:02:11 np0005539550 podman[288439]: 2025-11-29 08:02:11.153297184 +0000 UTC m=+0.982898191 container died 2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:02:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fdbe5bde62fc889f327d965eb33c327df441b18dc7c6070029beedef271d69bb-merged.mount: Deactivated successfully.
Nov 29 03:02:11 np0005539550 podman[288439]: 2025-11-29 08:02:11.207662462 +0000 UTC m=+1.037263439 container remove 2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:02:11 np0005539550 systemd[1]: libpod-conmon-2a39ac282117fe8aadc9fab0e3ae64a459a2a06d37cab63f451191b11993cfeb.scope: Deactivated successfully.
Nov 29 03:02:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:11.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:11 np0005539550 podman[288619]: 2025-11-29 08:02:11.808873719 +0000 UTC m=+0.048263351 container create 651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:11 np0005539550 systemd[1]: Started libpod-conmon-651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d.scope.
Nov 29 03:02:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:02:11 np0005539550 podman[288619]: 2025-11-29 08:02:11.794177638 +0000 UTC m=+0.033567300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:11 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:02:11 np0005539550 podman[288619]: 2025-11-29 08:02:11.899852214 +0000 UTC m=+0.139241866 container init 651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:02:11 np0005539550 podman[288619]: 2025-11-29 08:02:11.905120771 +0000 UTC m=+0.144510403 container start 651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:02:11 np0005539550 podman[288619]: 2025-11-29 08:02:11.909400682 +0000 UTC m=+0.148790324 container attach 651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:02:11 np0005539550 agitated_sutherland[288635]: 167 167
Nov 29 03:02:11 np0005539550 systemd[1]: libpod-651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d.scope: Deactivated successfully.
Nov 29 03:02:11 np0005539550 podman[288619]: 2025-11-29 08:02:11.910471379 +0000 UTC m=+0.149861011 container died 651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:02:11 np0005539550 nova_compute[257631]: 2025-11-29 08:02:11.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-78e72690db54cf62ca2b0429b48e137901931c219357fc5d95b8b700f4d7e9ef-merged.mount: Deactivated successfully.
Nov 29 03:02:11 np0005539550 podman[288619]: 2025-11-29 08:02:11.947295513 +0000 UTC m=+0.186685145 container remove 651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:02:11 np0005539550 systemd[1]: libpod-conmon-651a78bfb01db0addfabd3870269e18aac77df15852a79508f2d624ad1b3bf7d.scope: Deactivated successfully.
Nov 29 03:02:12 np0005539550 podman[288661]: 2025-11-29 08:02:12.112280945 +0000 UTC m=+0.043790415 container create bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:02:12 np0005539550 systemd[1]: Started libpod-conmon-bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df.scope.
Nov 29 03:02:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:02:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077a545b858d26f78abd011f97f9d2485fa0d0a367de59a582105cc5d757e0f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077a545b858d26f78abd011f97f9d2485fa0d0a367de59a582105cc5d757e0f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077a545b858d26f78abd011f97f9d2485fa0d0a367de59a582105cc5d757e0f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/077a545b858d26f78abd011f97f9d2485fa0d0a367de59a582105cc5d757e0f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:02:12 np0005539550 podman[288661]: 2025-11-29 08:02:12.095513891 +0000 UTC m=+0.027023391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:02:12 np0005539550 podman[288661]: 2025-11-29 08:02:12.202583582 +0000 UTC m=+0.134093072 container init bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lalande, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:02:12 np0005539550 podman[288661]: 2025-11-29 08:02:12.212362025 +0000 UTC m=+0.143871495 container start bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lalande, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:02:12 np0005539550 podman[288661]: 2025-11-29 08:02:12.215790474 +0000 UTC m=+0.147299954 container attach bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lalande, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:02:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:12.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:12 np0005539550 nova_compute[257631]: 2025-11-29 08:02:12.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:12 np0005539550 nova_compute[257631]: 2025-11-29 08:02:12.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 835 KiB/s wr, 39 op/s
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]: {
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]:        "osd_id": 0,
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]:        "type": "bluestore"
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]:    }
Nov 29 03:02:13 np0005539550 ecstatic_lalande[288677]: }
Nov 29 03:02:13 np0005539550 systemd[1]: libpod-bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df.scope: Deactivated successfully.
Nov 29 03:02:13 np0005539550 podman[288661]: 2025-11-29 08:02:13.087766422 +0000 UTC m=+1.019275892 container died bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:02:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-077a545b858d26f78abd011f97f9d2485fa0d0a367de59a582105cc5d757e0f6-merged.mount: Deactivated successfully.
Nov 29 03:02:13 np0005539550 podman[288661]: 2025-11-29 08:02:13.143827674 +0000 UTC m=+1.075337144 container remove bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:02:13 np0005539550 systemd[1]: libpod-conmon-bc7f03670067d8a12042d7f2a939b7b7d92f797aa9dc2997dae922bf368b79df.scope: Deactivated successfully.
Nov 29 03:02:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:02:13 np0005539550 podman[288699]: 2025-11-29 08:02:13.242374165 +0000 UTC m=+0.126192908 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:02:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:02:13 np0005539550 nova_compute[257631]: 2025-11-29 08:02:13.451 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:13 np0005539550 nova_compute[257631]: 2025-11-29 08:02:13.451 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:13 np0005539550 nova_compute[257631]: 2025-11-29 08:02:13.451 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:13 np0005539550 nova_compute[257631]: 2025-11-29 08:02:13.452 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:02:13 np0005539550 nova_compute[257631]: 2025-11-29 08:02:13.452 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:13.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:13 np0005539550 nova_compute[257631]: 2025-11-29 08:02:13.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/871497214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:13 np0005539550 nova_compute[257631]: 2025-11-29 08:02:13.890 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:14 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f79344a2-4223-47b4-b807-a9f58780197a does not exist
Nov 29 03:02:14 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4e4f8efc-211f-4518-a526-45085047537d does not exist
Nov 29 03:02:14 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 367d32f4-847e-4bd0-994a-7f2536598d32 does not exist
Nov 29 03:02:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:14.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:14 np0005539550 nova_compute[257631]: 2025-11-29 08:02:14.421 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:02:14 np0005539550 nova_compute[257631]: 2025-11-29 08:02:14.422 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:02:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:02:14 np0005539550 nova_compute[257631]: 2025-11-29 08:02:14.597 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:02:14 np0005539550 nova_compute[257631]: 2025-11-29 08:02:14.598 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4493MB free_disk=20.897136688232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:02:14 np0005539550 nova_compute[257631]: 2025-11-29 08:02:14.598 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:14 np0005539550 nova_compute[257631]: 2025-11-29 08:02:14.599 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 645 KiB/s wr, 30 op/s
Nov 29 03:02:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:15.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:15 np0005539550 nova_compute[257631]: 2025-11-29 08:02:15.922 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:16 np0005539550 nova_compute[257631]: 2025-11-29 08:02:16.389 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 28e30c6c-30cb-4da9-9f29-a5195810cc1b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:02:16 np0005539550 nova_compute[257631]: 2025-11-29 08:02:16.390 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:02:16 np0005539550 nova_compute[257631]: 2025-11-29 08:02:16.390 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:02:16 np0005539550 nova_compute[257631]: 2025-11-29 08:02:16.438 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794370649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:16 np0005539550 nova_compute[257631]: 2025-11-29 08:02:16.905 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:16 np0005539550 nova_compute[257631]: 2025-11-29 08:02:16.911 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:02:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 8.7 KiB/s rd, 22 KiB/s wr, 14 op/s
Nov 29 03:02:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:17.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:17 np0005539550 nova_compute[257631]: 2025-11-29 08:02:17.831 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:02:17 np0005539550 nova_compute[257631]: 2025-11-29 08:02:17.862 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:02:17 np0005539550 nova_compute[257631]: 2025-11-29 08:02:17.863 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:18.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:18 np0005539550 nova_compute[257631]: 2025-11-29 08:02:18.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:18 np0005539550 nova_compute[257631]: 2025-11-29 08:02:18.863 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:18 np0005539550 nova_compute[257631]: 2025-11-29 08:02:18.864 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:18 np0005539550 nova_compute[257631]: 2025-11-29 08:02:18.864 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:02:18 np0005539550 nova_compute[257631]: 2025-11-29 08:02:18.864 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:02:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:18.935 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:18.936 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:18.937 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 21 KiB/s wr, 1 op/s
Nov 29 03:02:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:19.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004343185836590359 of space, bias 1.0, pg target 1.3029557509771077 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:02:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
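The pg_autoscaler lines above all follow one shape: each pool's raw PG target is its share of cluster capacity times its bias times a cluster-wide PG budget, and the result is then quantized (rounded to a power of two and held at the current pg_num unless it is off by a large factor, which is why tiny targets still show "quantized to 32 (current 32)"). The budget consistent with the bias-1.0 lines above is exactly 300, which matches the assumption of 3 OSDs × mon_target_pg_per_osd=100; a hedged reconstruction:

    # Assumed PG budget: 3 OSDs x mon_target_pg_per_osd=100 (not stated in
    # the log, but 300 reproduces the bias-1.0 pg targets above exactly;
    # the bias-4.0 pools come out within ~0.5%, presumably from how the
    # effective capacity ratio is computed).
    PG_BUDGET = 300

    def raw_pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * PG_BUDGET

    print(raw_pg_target(0.004343185836590359, 1.0))    # vms  -> 1.3029557...
    print(raw_pg_target(2.0538165363856318e-05, 1.0))  # .mgr -> 0.0061614...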
Nov 29 03:02:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:20.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:20 np0005539550 nova_compute[257631]: 2025-11-29 08:02:20.925 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1.1 KiB/s wr, 0 op/s
Nov 29 03:02:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:21.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:21 np0005539550 nova_compute[257631]: 2025-11-29 08:02:21.895 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:02:21 np0005539550 nova_compute[257631]: 2025-11-29 08:02:21.896 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:02:21 np0005539550 nova_compute[257631]: 2025-11-29 08:02:21.896 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:02:21 np0005539550 nova_compute[257631]: 2025-11-29 08:02:21.896 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 28e30c6c-30cb-4da9-9f29-a5195810cc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:02:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000052s ======
Nov 29 03:02:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Nov 29 03:02:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 03:02:23 np0005539550 nova_compute[257631]: 2025-11-29 08:02:23.602 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:23.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:23 np0005539550 nova_compute[257631]: 2025-11-29 08:02:23.793 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:24.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 03:02:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:25.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:25 np0005539550 nova_compute[257631]: 2025-11-29 08:02:25.927 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:26.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Nov 29 03:02:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:27.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:28.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:28 np0005539550 nova_compute[257631]: 2025-11-29 08:02:28.795 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 4.7 KiB/s wr, 0 op/s
Nov 29 03:02:29 np0005539550 nova_compute[257631]: 2025-11-29 08:02:29.087 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updating instance_info_cache with network_info: [{"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:02:29 np0005539550 podman[288866]: 2025-11-29 08:02:29.522692309 +0000 UTC m=+0.062409247 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:02:29 np0005539550 podman[288865]: 2025-11-29 08:02:29.561879863 +0000 UTC m=+0.101112799 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 03:02:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.631 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-28e30c6c-30cb-4da9-9f29-a5195810cc1b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.632 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.632 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.632 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.633 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.633 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.634 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:02:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 4.0 KiB/s wr, 0 op/s
Nov 29 03:02:30 np0005539550 nova_compute[257631]: 2025-11-29 08:02:30.977 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:31.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:32.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.9 KiB/s rd, 4.1 KiB/s wr, 2 op/s
Nov 29 03:02:33 np0005539550 nova_compute[257631]: 2025-11-29 08:02:33.127 257641 DEBUG oslo_concurrency.lockutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:33 np0005539550 nova_compute[257631]: 2025-11-29 08:02:33.128 257641 DEBUG oslo_concurrency.lockutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:33 np0005539550 nova_compute[257631]: 2025-11-29 08:02:33.207 257641 DEBUG nova.objects.instance [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'flavor' on Instance uuid 28e30c6c-30cb-4da9-9f29-a5195810cc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:02:33 np0005539550 nova_compute[257631]: 2025-11-29 08:02:33.298 257641 DEBUG oslo_concurrency.lockutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:33 np0005539550 nova_compute[257631]: 2025-11-29 08:02:33.797 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.127 257641 DEBUG oslo_concurrency.lockutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.128 257641 DEBUG oslo_concurrency.lockutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.129 257641 INFO nova.compute.manager [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Attaching volume 87f35256-d25d-497d-b09f-e847b4302174 to /dev/vdb
Nov 29 03:02:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:34.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.384 257641 DEBUG os_brick.utils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.386 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.399 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.399 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[07269e77-00dd-4e27-b35c-0a97b399292e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.402 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.411 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.411 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[f969b071-45c4-4e0f-b8f6-c9f2fcc24905]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.413 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.422 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.422 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[934fec64-a68e-4ffc-8c9b-0bca2730e996]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.424 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[3843e737-33bb-4f2f-abbe-3e248ec5412e]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.425 257641 DEBUG oslo_concurrency.processutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.447 257641 DEBUG oslo_concurrency.processutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.450 257641 DEBUG os_brick.initiator.connectors.lightos [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.450 257641 DEBUG os_brick.initiator.connectors.lightos [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.451 257641 DEBUG os_brick.initiator.connectors.lightos [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.451 257641 DEBUG os_brick.utils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
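The get_connector_properties trace above is os-brick assembling the initiator-side view of this host (iSCSI IQN, NVMe host NQN/ID, multipath support) before Cinder is told how to export the volume; the multipathd, initiatorname, findmnt, and nvme probes in between are the work it does to build that dict. A hedged sketch of the same call (argument values mirror the log; it needs the os-brick package and root privileges for the probes, and the exact keyword set may vary by release):

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # Same dict as logged: initiator IQN, host NQN, nvme_hostid, flags...
    print(props['initiator'], props['nqn'])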
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.451 257641 DEBUG nova.virt.block_device [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updating existing volume attachment record: 6a2615ee-4955-44e0-9406-eaf56d9c5b4b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:02:34 np0005539550 nova_compute[257631]: 2025-11-29 08:02:34.685 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:02:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 2.0 KiB/s rd, 4.1 KiB/s wr, 3 op/s
Nov 29 03:02:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:35.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:35 np0005539550 nova_compute[257631]: 2025-11-29 08:02:35.979 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 228 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 1.2 MiB/s wr, 16 op/s
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.137 257641 DEBUG nova.objects.instance [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'flavor' on Instance uuid 28e30c6c-30cb-4da9-9f29-a5195810cc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.161 257641 DEBUG nova.virt.libvirt.driver [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Attempting to attach volume 87f35256-d25d-497d-b09f-e847b4302174 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.164 257641 DEBUG nova.virt.libvirt.guest [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-87f35256-d25d-497d-b09f-e847b4302174">
Nov 29 03:02:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:02:37 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  <serial>87f35256-d25d-497d-b09f-e847b4302174</serial>
Nov 29 03:02:37 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:02:37 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:02:37 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
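guest.attach_device with the <disk> XML above boils down to a single libvirt call applied to both the running guest and its persistent definition; the RBD auth secret referenced by UUID must already be registered with libvirt. A hedged sketch with libvirt-python:

    import libvirt

    disk_xml = """..."""  # the <disk type="network"> element logged above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('28e30c6c-30cb-4da9-9f29-a5195810cc1b')
    # Attach to the live domain and the persistent config in one call.
    dom.attachDeviceFlags(
        disk_xml,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)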
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.284 257641 DEBUG nova.virt.libvirt.driver [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.284 257641 DEBUG nova.virt.libvirt.driver [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.285 257641 DEBUG nova.virt.libvirt.driver [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.285 257641 DEBUG nova.virt.libvirt.driver [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] No VIF found with MAC fa:16:3e:d0:40:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.612 257641 DEBUG oslo_concurrency.lockutils [None req-384e2c78-059b-49c2-a0fb-d8195eb8c319 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:37.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:37 np0005539550 nova_compute[257631]: 2025-11-29 08:02:37.979 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:38 np0005539550 nova_compute[257631]: 2025-11-29 08:02:38.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 247 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 03:02:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:02:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:39.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:02:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 247 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 29 03:02:40 np0005539550 nova_compute[257631]: 2025-11-29 08:02:40.981 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:41.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:42 np0005539550 nova_compute[257631]: 2025-11-29 08:02:42.943 257641 DEBUG oslo_concurrency.lockutils [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:02:42 np0005539550 nova_compute[257631]: 2025-11-29 08:02:42.944 257641 DEBUG oslo_concurrency.lockutils [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:02:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 247 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 29 03:02:42 np0005539550 nova_compute[257631]: 2025-11-29 08:02:42.973 257641 INFO nova.compute.manager [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Detaching volume 87f35256-d25d-497d-b09f-e847b4302174
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.251 257641 INFO nova.virt.block_device [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Attempting to driver detach volume 87f35256-d25d-497d-b09f-e847b4302174 from mountpoint /dev/vdb
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.258 257641 DEBUG nova.virt.libvirt.driver [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Attempting to detach device vdb from instance 28e30c6c-30cb-4da9-9f29-a5195810cc1b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.259 257641 DEBUG nova.virt.libvirt.guest [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-87f35256-d25d-497d-b09f-e847b4302174">
Nov 29 03:02:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <serial>87f35256-d25d-497d-b09f-e847b4302174</serial>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:02:43 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.280 257641 INFO nova.virt.libvirt.driver [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully detached device vdb from instance 28e30c6c-30cb-4da9-9f29-a5195810cc1b from the persistent domain config.
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.281 257641 DEBUG nova.virt.libvirt.driver [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 28e30c6c-30cb-4da9-9f29-a5195810cc1b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.281 257641 DEBUG nova.virt.libvirt.guest [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-87f35256-d25d-497d-b09f-e847b4302174">
Nov 29 03:02:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <serial>87f35256-d25d-497d-b09f-e847b4302174</serial>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:02:43 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:02:43 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.442 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764403363.4416184, 28e30c6c-30cb-4da9-9f29-a5195810cc1b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.444 257641 DEBUG nova.virt.libvirt.driver [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 28e30c6c-30cb-4da9-9f29-a5195810cc1b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.447 257641 INFO nova.virt.libvirt.driver [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully detached device vdb from instance 28e30c6c-30cb-4da9-9f29-a5195810cc1b from the live domain config.
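The detach path is the mirror image of the attach, but asynchronous: the device is first dropped from the persistent config, then a live detach is issued and the driver waits for libvirt's DEVICE_REMOVED event (alias virtio-disk1 above) before trusting that the guest released the disk, retrying up to 8 times per the "(1/8)" marker. A hedged sketch of that wait pattern in libvirt-python:

    import libvirt

    disk_xml = """..."""  # the <disk> element logged above

    libvirt.virEventRegisterDefaultImpl()       # must precede open()
    conn = libvirt.open('qemu:///system')
    state = {'removed': False}

    def on_removed(conn, dom, alias, opaque):   # DEVICE_REMOVED callback
        if alias == 'virtio-disk1':
            opaque['removed'] = True

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_removed, state)
    dom = conn.lookupByUUIDString('28e30c6c-30cb-4da9-9f29-a5195810cc1b')
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    while not state['removed']:                 # nova bounds this with a timeout
        libvirt.virEventRunDefaultImpl()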
Nov 29 03:02:43 np0005539550 nova_compute[257631]: 2025-11-29 08:02:43.802 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:02:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:43.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:02:44 np0005539550 nova_compute[257631]: 2025-11-29 08:02:44.022 257641 DEBUG nova.objects.instance [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'flavor' on Instance uuid 28e30c6c-30cb-4da9-9f29-a5195810cc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:02:44 np0005539550 nova_compute[257631]: 2025-11-29 08:02:44.088 257641 DEBUG oslo_concurrency.lockutils [None req-ab8f70dd-aec6-4b7a-8bce-b595d2023b56 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:44 np0005539550 podman[288963]: 2025-11-29 08:02:44.384033821 +0000 UTC m=+0.120111760 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:02:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:02:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:02:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 239 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Nov 29 03:02:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000027s ======
Nov 29 03:02:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:45.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Nov 29 03:02:46 np0005539550 nova_compute[257631]: 2025-11-29 08:02:46.023 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:02:46Z|00173|binding|INFO|Releasing lport 76e8e3af-f64b-4dff-8ae5-0367f134cc2a from this chassis (sb_readonly=0)
Nov 29 03:02:46 np0005539550 nova_compute[257631]: 2025-11-29 08:02:46.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 211 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Nov 29 03:02:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:02:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:47.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.130 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.130 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.130 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.131 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.131 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.132 257641 INFO nova.compute.manager [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Terminating instance#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.133 257641 DEBUG nova.compute.manager [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
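[Annotation] Everything from "Terminating instance" down to the allocation cleanup at 08:02:54 is fallout from a single compute-API delete, presumably issued by the tempest run named in the instance metadata. A sketch of that call, with placeholder endpoint and token:

    import requests

    # DELETE /v2.1/servers/{id} returns 204; the teardown below is asynchronous.
    r = requests.delete(
        "http://nova-api.example:8774/v2.1/servers/"
        "28e30c6c-30cb-4da9-9f29-a5195810cc1b",
        headers={"X-Auth-Token": "<keystone token>"})
    assert r.status_code == 204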
Nov 29 03:02:48 np0005539550 kernel: tap417b0676-65 (unregistering): left promiscuous mode
Nov 29 03:02:48 np0005539550 NetworkManager[49039]: <info>  [1764403368.1865] device (tap417b0676-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:02:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:02:48Z|00174|binding|INFO|Releasing lport 417b0676-65f0-4e4a-a08c-9c313d926b20 from this chassis (sb_readonly=0)
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.193 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:02:48Z|00175|binding|INFO|Setting lport 417b0676-65f0-4e4a-a08c-9c313d926b20 down in Southbound
Nov 29 03:02:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:02:48Z|00176|binding|INFO|Removing iface tap417b0676-65 ovn-installed in OVS
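[Annotation] After ovn-controller releases the lport and sets it down in the Southbound DB, the binding state can be inspected directly; a sketch assuming ovn-sbctl on this host can reach the SB database:

    import subprocess

    # After the release above, 'chassis' should be empty and up=[false].
    subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                    "logical_port=417b0676-65f0-4e4a-a08c-9c313d926b20"],
                   check=True)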
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.196 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.207 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:40:cb 10.100.0.5'], port_security=['fa:16:3e:d0:40:cb 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '28e30c6c-30cb-4da9-9f29-a5195810cc1b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f73b0808-21fd-43a4-809d-85e512de1cb7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'afaf65dfeab546ee991af0438784b8a3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8c6ebd91-3a75-49da-9c37-da4a9dd74667', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbc8191a-1888-462d-843d-a3e5df7bfc2c, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=417b0676-65f0-4e4a-a08c-9c313d926b20) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.209 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 417b0676-65f0-4e4a-a08c-9c313d926b20 in datapath f73b0808-21fd-43a4-809d-85e512de1cb7 unbound from our chassis#033[00m
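[Annotation] The "Matched UPDATE: PortBindingUpdatedEvent" line shows ovsdbapp's event dispatch: the agent registers RowEvent subclasses and the IDL calls run() when a row in the watched table changes. A minimal sketch of such an event class, under the assumption that the agent's real class carries more logic:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # Fire on updates to Port_Binding, like the match logged above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values (up=[True], old chassis here).
            print('port %s updated' % row.logical_port)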
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.210 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f73b0808-21fd-43a4-809d-85e512de1cb7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.212 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[28ad2edd-ce29-45ab-921f-0bc6ae146a7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.213 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 namespace which is not needed anymore#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.217 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000026.scope: Deactivated successfully.
Nov 29 03:02:48 np0005539550 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000026.scope: Consumed 17.021s CPU time.
Nov 29 03:02:48 np0005539550 systemd-machined[216673]: Machine qemu-22-instance-00000026 terminated.
Nov 29 03:02:48 np0005539550 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[287597]: [NOTICE]   (287617) : haproxy version is 2.8.14-c23fe91
Nov 29 03:02:48 np0005539550 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[287597]: [NOTICE]   (287617) : path to executable is /usr/sbin/haproxy
Nov 29 03:02:48 np0005539550 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[287597]: [WARNING]  (287617) : Exiting Master process...
Nov 29 03:02:48 np0005539550 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[287597]: [ALERT]    (287617) : Current worker (287625) exited with code 143 (Terminated)
Nov 29 03:02:48 np0005539550 neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7[287597]: [WARNING]  (287617) : All workers exited. Exiting... (0)
Nov 29 03:02:48 np0005539550 systemd[1]: libpod-44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd.scope: Deactivated successfully.
Nov 29 03:02:48 np0005539550 NetworkManager[49039]: <info>  [1764403368.3568] manager: (tap417b0676-65): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.358 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 podman[289016]: 2025-11-29 08:02:48.361846277 +0000 UTC m=+0.064436390 container died 44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.362 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.380 257641 INFO nova.virt.libvirt.driver [-] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Instance destroyed successfully.#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.380 257641 DEBUG nova.objects.instance [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lazy-loading 'resources' on Instance uuid 28e30c6c-30cb-4da9-9f29-a5195810cc1b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:02:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:48.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.401 257641 DEBUG nova.virt.libvirt.vif [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1805929808',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1805929808',id=38,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHEuBiKv29X7C0drWwRABfdxNdRe5p32AdTPB10q93Ar576PCZcuPC/3VX2gJkGcV+mhRJIDE7C9Qv0DoWPW0kaJVi6f6+GX+1mDf7+x5AvCtDyfE2PGiajOtGRiWA/EXQ==',key_name='tempest-keypair-874975206',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:01:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='afaf65dfeab546ee991af0438784b8a3',ramdisk_id='',reservation_id='r-4ij879vg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-343315775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:01:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ba1b423b724a47f692a3d9cbf91860d7',uuid=28e30c6c-30cb-4da9-9f29-a5195810cc1b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.401 257641 DEBUG nova.network.os_vif_util [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converting VIF {"id": "417b0676-65f0-4e4a-a08c-9c313d926b20", "address": "fa:16:3e:d0:40:cb", "network": {"id": "f73b0808-21fd-43a4-809d-85e512de1cb7", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-904838899-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "afaf65dfeab546ee991af0438784b8a3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap417b0676-65", "ovs_interfaceid": "417b0676-65f0-4e4a-a08c-9c313d926b20", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.402 257641 DEBUG nova.network.os_vif_util [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d0:40:cb,bridge_name='br-int',has_traffic_filtering=True,id=417b0676-65f0-4e4a-a08c-9c313d926b20,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap417b0676-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.402 257641 DEBUG os_vif [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d0:40:cb,bridge_name='br-int',has_traffic_filtering=True,id=417b0676-65f0-4e4a-a08c-9c313d926b20,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap417b0676-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.404 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap417b0676-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.408 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.411 257641 INFO os_vif [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d0:40:cb,bridge_name='br-int',has_traffic_filtering=True,id=417b0676-65f0-4e4a-a08c-9c313d926b20,network=Network(f73b0808-21fd-43a4-809d-85e512de1cb7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap417b0676-65')#033[00m
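[Annotation] The unplug was committed through ovsdbapp's DelPortCommand transaction a few lines up. The CLI equivalent of that transaction, for illustration; --if-exists mirrors the if_exists=True flag in the log:

    import subprocess

    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-int",
                    "tap417b0676-65"], check=True)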
Nov 29 03:02:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd-userdata-shm.mount: Deactivated successfully.
Nov 29 03:02:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fec336cd54eb859dfbc792ffa4fc698f1a239afba05f9acd72d69c952ae6d3dd-merged.mount: Deactivated successfully.
Nov 29 03:02:48 np0005539550 podman[289016]: 2025-11-29 08:02:48.487912901 +0000 UTC m=+0.190503014 container cleanup 44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:02:48 np0005539550 systemd[1]: libpod-conmon-44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd.scope: Deactivated successfully.
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.604 257641 DEBUG nova.compute.manager [req-8cc71c62-f1a6-4392-96f3-80bf020381ed req-f1be3108-ac9e-4baa-8119-da195ce8fdd7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-vif-unplugged-417b0676-65f0-4e4a-a08c-9c313d926b20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.605 257641 DEBUG oslo_concurrency.lockutils [req-8cc71c62-f1a6-4392-96f3-80bf020381ed req-f1be3108-ac9e-4baa-8119-da195ce8fdd7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.605 257641 DEBUG oslo_concurrency.lockutils [req-8cc71c62-f1a6-4392-96f3-80bf020381ed req-f1be3108-ac9e-4baa-8119-da195ce8fdd7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.605 257641 DEBUG oslo_concurrency.lockutils [req-8cc71c62-f1a6-4392-96f3-80bf020381ed req-f1be3108-ac9e-4baa-8119-da195ce8fdd7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.605 257641 DEBUG nova.compute.manager [req-8cc71c62-f1a6-4392-96f3-80bf020381ed req-f1be3108-ac9e-4baa-8119-da195ce8fdd7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] No waiting events found dispatching network-vif-unplugged-417b0676-65f0-4e4a-a08c-9c313d926b20 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.605 257641 DEBUG nova.compute.manager [req-8cc71c62-f1a6-4392-96f3-80bf020381ed req-f1be3108-ac9e-4baa-8119-da195ce8fdd7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-vif-unplugged-417b0676-65f0-4e4a-a08c-9c313d926b20 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:02:48 np0005539550 podman[289074]: 2025-11-29 08:02:48.926499798 +0000 UTC m=+0.413000806 container remove 44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.931 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[54f42508-47ce-43a0-b3ae-54af1f0469f3]: (4, ('Sat Nov 29 08:02:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 (44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd)\n44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd\nSat Nov 29 08:02:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 (44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd)\n44ff46280a24c44a34268f17896c74690466e38cb2769782e492caffc0b68bbd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.933 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[66b70d97-fb22-4897-9a6c-55540bdadc84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.934 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf73b0808-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.935 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 kernel: tapf73b0808-20: left promiscuous mode
Nov 29 03:02:48 np0005539550 nova_compute[257631]: 2025-11-29 08:02:48.950 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.953 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8f5d03-ffc9-42a3-bb7c-7919378a175f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 200 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 581 KiB/s wr, 116 op/s
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.967 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c669f39e-e930-4057-9183-1b907576381a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.968 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[016e45bb-4efb-42dc-87ea-eb49852e4489]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.984 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[395031c0-4c28-45d1-ae77-9487007ad8d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 634963, 'reachable_time': 18671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289089, 'error': None, 'target': 'ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:02:48 np0005539550 systemd[1]: run-netns-ovnmeta\x2df73b0808\x2d21fd\x2d43a4\x2d809d\x2d85e512de1cb7.mount: Deactivated successfully.
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.989 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:02:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:48.990 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[6ba39050-9a94-4e70-a0ca-f190b746a69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
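[Annotation] The remove_netns call above goes through neutron's privsep daemon; the plain iproute2 equivalent of what it just did is shown here for illustration:

    import subprocess

    subprocess.run(["ip", "netns", "delete",
                    "ovnmeta-f73b0808-21fd-43a4-809d-85e512de1cb7"], check=True)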
Nov 29 03:02:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:49.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:50.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:50 np0005539550 nova_compute[257631]: 2025-11-29 08:02:50.808 257641 DEBUG nova.compute.manager [req-d96e21ed-2bb9-4fe5-a685-c9c08541649b req-a5af6d02-0ef2-410e-9663-6a1d478caecb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:50 np0005539550 nova_compute[257631]: 2025-11-29 08:02:50.808 257641 DEBUG oslo_concurrency.lockutils [req-d96e21ed-2bb9-4fe5-a685-c9c08541649b req-a5af6d02-0ef2-410e-9663-6a1d478caecb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:50 np0005539550 nova_compute[257631]: 2025-11-29 08:02:50.809 257641 DEBUG oslo_concurrency.lockutils [req-d96e21ed-2bb9-4fe5-a685-c9c08541649b req-a5af6d02-0ef2-410e-9663-6a1d478caecb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:50 np0005539550 nova_compute[257631]: 2025-11-29 08:02:50.809 257641 DEBUG oslo_concurrency.lockutils [req-d96e21ed-2bb9-4fe5-a685-c9c08541649b req-a5af6d02-0ef2-410e-9663-6a1d478caecb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:50 np0005539550 nova_compute[257631]: 2025-11-29 08:02:50.809 257641 DEBUG nova.compute.manager [req-d96e21ed-2bb9-4fe5-a685-c9c08541649b req-a5af6d02-0ef2-410e-9663-6a1d478caecb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] No waiting events found dispatching network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:02:50 np0005539550 nova_compute[257631]: 2025-11-29 08:02:50.809 257641 WARNING nova.compute.manager [req-d96e21ed-2bb9-4fe5-a685-c9c08541649b req-a5af6d02-0ef2-410e-9663-6a1d478caecb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received unexpected event network-vif-plugged-417b0676-65f0-4e4a-a08c-9c313d926b20 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:02:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 167 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 11 KiB/s wr, 119 op/s
Nov 29 03:02:51 np0005539550 nova_compute[257631]: 2025-11-29 08:02:51.024 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:51.487 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:02:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:51.488 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:02:51 np0005539550 nova_compute[257631]: 2025-11-29 08:02:51.547 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:52.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:52 np0005539550 nova_compute[257631]: 2025-11-29 08:02:52.639 257641 INFO nova.virt.libvirt.driver [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Deleting instance files /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b_del#033[00m
Nov 29 03:02:52 np0005539550 nova_compute[257631]: 2025-11-29 08:02:52.641 257641 INFO nova.virt.libvirt.driver [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Deletion of /var/lib/nova/instances/28e30c6c-30cb-4da9-9f29-a5195810cc1b_del complete#033[00m
Nov 29 03:02:52 np0005539550 nova_compute[257631]: 2025-11-29 08:02:52.747 257641 INFO nova.compute.manager [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Took 4.61 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:02:52 np0005539550 nova_compute[257631]: 2025-11-29 08:02:52.748 257641 DEBUG oslo.service.loopingcall [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:02:52 np0005539550 nova_compute[257631]: 2025-11-29 08:02:52.748 257641 DEBUG nova.compute.manager [-] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:02:52 np0005539550 nova_compute[257631]: 2025-11-29 08:02:52.748 257641 DEBUG nova.network.neutron [-] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:02:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 156 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 675 KiB/s rd, 554 KiB/s wr, 74 op/s
Nov 29 03:02:53 np0005539550 nova_compute[257631]: 2025-11-29 08:02:53.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:02:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:53.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.119 257641 DEBUG nova.network.neutron [-] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.153 257641 INFO nova.compute.manager [-] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Took 1.40 seconds to deallocate network for instance.#033[00m
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.262 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.263 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.346 257641 DEBUG nova.compute.manager [req-ad687de2-4f15-4ee0-b056-9c27c0f1dce9 req-e1ca73a8-fddf-4603-9472-345dac5c4999 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Received event network-vif-deleted-417b0676-65f0-4e4a-a08c-9c313d926b20 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.350 257641 DEBUG oslo_concurrency.processutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:02:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:54.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:02:54.490 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1369872273' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.805 257641 DEBUG oslo_concurrency.processutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
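[Annotation] The ceph df probe nova just ran feeds the RBD disk inventory. The same command can be reproduced and parsed; this assumes the 'openstack' keyring and /etc/ceph/ceph.conf are present, as on this host:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])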
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.811 257641 DEBUG nova.compute.provider_tree [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.862 257641 DEBUG nova.scheduler.client.report [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
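[Annotation] A quick cross-check of how placement reads the inventory dict above: schedulable capacity per resource class is (total - reserved) * allocation_ratio.

    inventory = {"VCPU": (8, 0, 4.0),
                 "MEMORY_MB": (7680, 512, 1.0),
                 "DISK_GB": (20, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, int((total - reserved) * ratio))
    # -> VCPU 32, MEMORY_MB 7168, DISK_GB 17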
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.881178) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374881263, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2209, "num_deletes": 254, "total_data_size": 3863620, "memory_usage": 3940448, "flush_reason": "Manual Compaction"}
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.888 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374902686, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3793794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28760, "largest_seqno": 30968, "table_properties": {"data_size": 3783890, "index_size": 6270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21221, "raw_average_key_size": 20, "raw_value_size": 3763811, "raw_average_value_size": 3679, "num_data_blocks": 273, "num_entries": 1023, "num_filter_entries": 1023, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403156, "oldest_key_time": 1764403156, "file_creation_time": 1764403374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 21549 microseconds, and 9020 cpu microseconds.
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.902743) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3793794 bytes OK
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.902761) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.904070) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.904090) EVENT_LOG_v1 {"time_micros": 1764403374904085, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.904106) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3854551, prev total WAL file size 3854551, number of live WAL files 2.
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.904971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3704KB)], [65(9062KB)]
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374905032, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13074286, "oldest_snapshot_seqno": -1}
Nov 29 03:02:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 143 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 465 KiB/s rd, 1.2 MiB/s wr, 91 op/s
Nov 29 03:02:54 np0005539550 nova_compute[257631]: 2025-11-29 08:02:54.961 257641 INFO nova.scheduler.client.report [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Deleted allocations for instance 28e30c6c-30cb-4da9-9f29-a5195810cc1b#033[00m
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6283 keys, 11162282 bytes, temperature: kUnknown
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374992891, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 11162282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11119529, "index_size": 25939, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 161320, "raw_average_key_size": 25, "raw_value_size": 11005744, "raw_average_value_size": 1751, "num_data_blocks": 1043, "num_entries": 6283, "num_filter_entries": 6283, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.993306) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11162282 bytes
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.995749) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.6 rd, 126.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6807, records dropped: 524 output_compression: NoCompression
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.995802) EVENT_LOG_v1 {"time_micros": 1764403374995781, "job": 36, "event": "compaction_finished", "compaction_time_micros": 87956, "compaction_time_cpu_micros": 24268, "output_level": 6, "num_output_files": 1, "total_output_size": 11162282, "num_input_records": 6807, "num_output_records": 6283, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403374997272, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403375000578, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:54.904839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:55.000838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:55.000904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:55.000910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:55.000914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:02:55.000918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:02:55 np0005539550 nova_compute[257631]: 2025-11-29 08:02:55.112 257641 DEBUG oslo_concurrency.lockutils [None req-79a6d965-e5e1-4c2c-a507-4946fbeb9996 ba1b423b724a47f692a3d9cbf91860d7 afaf65dfeab546ee991af0438784b8a3 - - default default] Lock "28e30c6c-30cb-4da9-9f29-a5195810cc1b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:02:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:02:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:56 np0005539550 nova_compute[257631]: 2025-11-29 08:02:56.026 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:56.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 167 MiB data, 524 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 29 03:02:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:02:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:57.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:02:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:02:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:02:58.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:02:58 np0005539550 nova_compute[257631]: 2025-11-29 08:02:58.408 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:02:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 152 MiB data, 528 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 03:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:02:59
Nov 29 03:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.log', '.rgw.root']
Nov 29 03:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:02:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:02:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:02:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:02:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:00 np0005539550 podman[289171]: 2025-11-29 08:03:00.319195456 +0000 UTC m=+0.055682893 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 03:03:00 np0005539550 podman[289170]: 2025-11-29 08:03:00.322096991 +0000 UTC m=+0.060561689 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:03:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:00.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:00 np0005539550 nova_compute[257631]: 2025-11-29 08:03:00.759 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 112 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 29 03:03:01 np0005539550 nova_compute[257631]: 2025-11-29 08:03:01.028 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:03:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:03:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.561 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "daec3331-5f41-4011-803b-027682844e84" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.561 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "daec3331-5f41-4011-803b-027682844e84" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.585 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.757 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.757 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.763 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.763 257641 INFO nova.compute.claims [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:03:02 np0005539550 nova_compute[257631]: 2025-11-29 08:03:02.915 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 88 MiB data, 489 MiB used, 21 GiB / 21 GiB avail; 646 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Nov 29 03:03:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2095563316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.349 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.355 257641 DEBUG nova.compute.provider_tree [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.376 257641 DEBUG nova.scheduler.client.report [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.379 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403368.3781352, 28e30c6c-30cb-4da9-9f29-a5195810cc1b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.380 257641 INFO nova.compute.manager [-] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] VM Stopped (Lifecycle Event)
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.411 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.418 257641 DEBUG nova.compute.manager [None req-d63a56a2-81b1-4234-ab1d-96026285e4d7 - - - - - -] [instance: 28e30c6c-30cb-4da9-9f29-a5195810cc1b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.425 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.426 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.555 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.556 257641 DEBUG nova.network.neutron [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.609 257641 INFO nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.645 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:03:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:03.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.868 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.869 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.869 257641 INFO nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Creating image(s)
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.899 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image daec3331-5f41-4011-803b-027682844e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.932 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image daec3331-5f41-4011-803b-027682844e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.959 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image daec3331-5f41-4011-803b-027682844e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:03 np0005539550 nova_compute[257631]: 2025-11-29 08:03:03.962 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.038 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.039 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.039 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.040 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.078 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image daec3331-5f41-4011-803b-027682844e84_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.083 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 daec3331-5f41-4011-803b-027682844e84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.107 257641 DEBUG nova.network.neutron [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.107 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:03:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:04.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.417 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 daec3331-5f41-4011-803b-027682844e84_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.499 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] resizing rbd image daec3331-5f41-4011-803b-027682844e84_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.620 257641 DEBUG nova.objects.instance [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lazy-loading 'migration_context' on Instance uuid daec3331-5f41-4011-803b-027682844e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.666 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.667 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Ensure instance console log exists: /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.668 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.668 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.668 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.669 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.674 257641 WARNING nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.679 257641 DEBUG nova.virt.libvirt.host [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.680 257641 DEBUG nova.virt.libvirt.host [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.685 257641 DEBUG nova.virt.libvirt.host [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.685 257641 DEBUG nova.virt.libvirt.host [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.687 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.688 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.688 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.688 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.689 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.689 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.689 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.690 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.690 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.691 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.691 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.691 257641 DEBUG nova.virt.hardware [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:03:04 np0005539550 nova_compute[257631]: 2025-11-29 08:03:04.696 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 88 MiB data, 482 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 107 op/s
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366681028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.154 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.182 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image daec3331-5f41-4011-803b-027682844e84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.187 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196670761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.645 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.648 257641 DEBUG nova.objects.instance [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lazy-loading 'pci_devices' on Instance uuid daec3331-5f41-4011-803b-027682844e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.694 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <uuid>daec3331-5f41-4011-803b-027682844e84</uuid>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <name>instance-0000002a</name>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-804158360</nova:name>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:03:04</nova:creationTime>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <nova:user uuid="516e17df54a041ee8101fb121e5d5740">tempest-ServersAdminNegativeTestJSON-1775083495-project-member</nova:user>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <nova:project uuid="392223643d6d4b8b96eaa27c4a0d41cc">tempest-ServersAdminNegativeTestJSON-1775083495</nova:project>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <entry name="serial">daec3331-5f41-4011-803b-027682844e84</entry>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <entry name="uuid">daec3331-5f41-4011-803b-027682844e84</entry>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/daec3331-5f41-4011-803b-027682844e84_disk">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/daec3331-5f41-4011-803b-027682844e84_disk.config">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/console.log" append="off"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:03:05 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:03:05 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:03:05 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:03:05 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
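
The domain XML logged above is the guest definition Nova's libvirt driver produced for instance daec3331-5f41-4011-803b-027682844e84: a single Nehalem-model vCPU, two RBD-backed disks (the root disk on virtio as vda, the config-drive CD-ROM on sata as sda), VNC graphics, a virtio RNG fed from /dev/urandom, and a pcie-root bus with a stack of root ports. A minimal sketch of pulling the RBD disks out of such a document with only the standard library; the file name domain.xml is an assumption, since the driver builds the XML in memory:

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domain XML logged above (path is assumed).
    tree = ET.parse("domain.xml")
    for disk in tree.findall("./devices/disk"):
        source, target = disk.find("source"), disk.find("target")
        if source is not None and source.get("protocol") == "rbd":
            mons = [h.get("name") for h in source.findall("host")]
            # e.g. vda vms/daec3331-..._disk ['192.168.122.100', ...]
            print(target.get("dev"), source.get("name"), mons)
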
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.776 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.777 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.777 257641 INFO nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Using config drive
Nov 29 03:03:05 np0005539550 nova_compute[257631]: 2025-11-29 08:03:05.804 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image daec3331-5f41-4011-803b-027682844e84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/542479013' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:03:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/542479013' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.031 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.080 257641 INFO nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Creating config drive at /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/disk.config
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.085 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpft2xtee8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.209 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpft2xtee8" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
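
Nova builds the config drive by staging the metadata in a temporary directory (/tmp/tmpft2xtee8 here) and packing it into an ISO9660 image whose volume label, config-2, is what cloud-init scans block devices for. A sketch of the same invocation via the underlying subprocess call; the output path and staging directory are placeholders:

    import subprocess

    # Pack a staged metadata directory into a config-drive ISO (paths assumed).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-quiet", "-J", "-r",
         "-V", "config-2",      # the volume label cloud-init looks for
         "staged_metadata/"],   # assumed staging directory
        check=True,
    )
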
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.245 257641 DEBUG nova.storage.rbd_utils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] rbd image daec3331-5f41-4011-803b-027682844e84_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.251 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/disk.config daec3331-5f41-4011-803b-027682844e84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:06.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.443 257641 DEBUG oslo_concurrency.processutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/disk.config daec3331-5f41-4011-803b-027682844e84_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.192s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.444 257641 INFO nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Deleting local config drive /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84/disk.config because it was imported into RBD.
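
Because the check at 08:03:06.245 found no existing _disk.config object, the freshly built ISO is imported into the vms pool as a format-2 RBD image and the local copy is then removed. A sketch of those two steps, mirroring the logged command with the local path shortened:

    import os
    import subprocess

    image = "daec3331-5f41-4011-803b-027682844e84_disk.config"
    local = "disk.config"  # assumed local path to the ISO built above

    # Import into the "vms" pool exactly as the logged command does.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local, image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.remove(local)  # "Deleting local config drive ... imported into RBD"
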
Nov 29 03:03:06 np0005539550 systemd-machined[216673]: New machine qemu-23-instance-0000002a.
Nov 29 03:03:06 np0005539550 systemd[1]: Started Virtual Machine qemu-23-instance-0000002a.
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.911 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403386.9115157, daec3331-5f41-4011-803b-027682844e84 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.913 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] VM Resumed (Lifecycle Event)
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.915 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.916 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.919 257641 INFO nova.virt.libvirt.driver [-] [instance: daec3331-5f41-4011-803b-027682844e84] Instance spawned successfully.
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.919 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.939 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.948 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.952 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.952 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.953 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.953 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.954 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.954 257641 DEBUG nova.virt.libvirt.driver [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:03:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 106 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.995 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.995 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403386.9124603, daec3331-5f41-4011-803b-027682844e84 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:03:06 np0005539550 nova_compute[257631]: 2025-11-29 08:03:06.996 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] VM Started (Lifecycle Event)
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.036 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.039 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
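
The Resumed/Started pairs above are Nova translating libvirt domain lifecycle callbacks into its own LifecycleEvent objects, then reconciling the DB power_state (0, NOSTATE) with the hypervisor's (1, RUNNING); while task_state is still spawning the sync is deliberately skipped. A sketch of the underlying subscription with the libvirt Python bindings; the connection URI is an assumption:

    import libvirt

    def on_lifecycle(conn, dom, event, detail, _opaque):
        # event is e.g. VIR_DOMAIN_EVENT_STARTED or _RESUMED; nova maps these
        # to LifecycleEvents before handle_lifecycle_event() runs.
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open("qemu:///system")  # assumed URI
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()
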
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.063 257641 INFO nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Took 3.19 seconds to spawn the instance on the hypervisor.
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.064 257641 DEBUG nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.078 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.151 257641 INFO nova.compute.manager [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Took 4.42 seconds to build instance.
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.235 257641 DEBUG oslo_concurrency.lockutils [None req-b34d160b-456f-4289-8433-eef748da4a43 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "daec3331-5f41-4011-803b-027682844e84" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:07 np0005539550 nova_compute[257631]: 2025-11-29 08:03:07.782 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:07.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:03:08 np0005539550 nova_compute[257631]: 2025-11-29 08:03:08.413 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:08.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:03:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 134 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Nov 29 03:03:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:09.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 134 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 183 op/s
Nov 29 03:03:11 np0005539550 nova_compute[257631]: 2025-11-29 08:03:11.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:11.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:12.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:12 np0005539550 nova_compute[257631]: 2025-11-29 08:03:12.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:12 np0005539550 nova_compute[257631]: 2025-11-29 08:03:12.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:12 np0005539550 nova_compute[257631]: 2025-11-29 08:03:12.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:12 np0005539550 nova_compute[257631]: 2025-11-29 08:03:12.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:12 np0005539550 nova_compute[257631]: 2025-11-29 08:03:12.943 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:03:12 np0005539550 nova_compute[257631]: 2025-11-29 08:03:12.943 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 134 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547062041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.380 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
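
update_available_resource shells out to ceph df here because the instances live in RBD, so free disk is a property of the cluster rather than of the local filesystem. A sketch of consuming the same call; the top-level JSON keys shown are my assumption based on current Ceph releases, not something this log confirms:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed schema: cluster totals in bytes
    print(stats["total_bytes"], stats["total_avail_bytes"])
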
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.416 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.497 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.497 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000002a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.657 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.658 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4583MB free_disk=20.946460723876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.659 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.659 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.747 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance daec3331-5f41-4011-803b-027682844e84 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.748 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.748 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:03:13 np0005539550 nova_compute[257631]: 2025-11-29 08:03:13.791 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:03:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:13.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:14.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3140458445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:14 np0005539550 nova_compute[257631]: 2025-11-29 08:03:14.677 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.886s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:03:14 np0005539550 nova_compute[257631]: 2025-11-29 08:03:14.690 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:03:14 np0005539550 nova_compute[257631]: 2025-11-29 08:03:14.706 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
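
The inventory Nova reports to placement encodes the overcommit policy: placement derives usable capacity per resource class as (total - reserved) * allocation_ratio. A quick worked check against the figures logged above:

    # Worked from the inventory data in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
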
Nov 29 03:03:14 np0005539550 nova_compute[257631]: 2025-11-29 08:03:14.734 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:03:14 np0005539550 nova_compute[257631]: 2025-11-29 08:03:14.734 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
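
Both the cache-clean step at 08:03:12 and the main audit above serialize on the same "compute_resources" lock, which is why oslo logs the waited/held durations on acquire and release. A sketch of the oslo.concurrency primitive behind those lines (the function body here is purely illustrative; nova wraps it in its own synchronized helper):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Critical section: audit hypervisor resources and push inventory
        # to placement; lockutils emits the acquire/held lines seen above.
        pass

    update_available_resource()
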
Nov 29 03:03:14 np0005539550 podman[289706]: 2025-11-29 08:03:14.838444071 +0000 UTC m=+0.138794305 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:03:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 134 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 171 op/s
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3cf3eaaf-78f8-46e2-bf88-4188aa6abca2 does not exist
Nov 29 03:03:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fb43d76d-c6cc-4769-9c1c-d4167ade8088 does not exist
Nov 29 03:03:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 68bed0d7-c596-4037-86e1-1115904a8786 does not exist
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:03:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:03:15 np0005539550 nova_compute[257631]: 2025-11-29 08:03:15.734 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:15 np0005539550 nova_compute[257631]: 2025-11-29 08:03:15.735 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:15 np0005539550 nova_compute[257631]: 2025-11-29 08:03:15.735 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:15 np0005539550 nova_compute[257631]: 2025-11-29 08:03:15.735 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:15 np0005539550 nova_compute[257631]: 2025-11-29 08:03:15.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:15 np0005539550 nova_compute[257631]: 2025-11-29 08:03:15.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:03:15 np0005539550 nova_compute[257631]: 2025-11-29 08:03:15.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.034 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.141 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-daec3331-5f41-4011-803b-027682844e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.142 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-daec3331-5f41-4011-803b-027682844e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.142 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.142 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid daec3331-5f41-4011-803b-027682844e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:03:16 np0005539550 podman[289978]: 2025-11-29 08:03:16.106515345 +0000 UTC m=+0.025143732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.325 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:03:16 np0005539550 podman[289978]: 2025-11-29 08:03:16.394221994 +0000 UTC m=+0.312850351 container create eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_germain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:03:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:16.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.614 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.630 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-daec3331-5f41-4011-803b-027682844e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:03:16 np0005539550 nova_compute[257631]: 2025-11-29 08:03:16.631 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:03:16 np0005539550 systemd[1]: Started libpod-conmon-eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658.scope.
Nov 29 03:03:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:03:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:16 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:03:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 166 MiB data, 529 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 169 op/s
Nov 29 03:03:17 np0005539550 podman[289978]: 2025-11-29 08:03:17.13561524 +0000 UTC m=+1.054243647 container init eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:03:17 np0005539550 podman[289978]: 2025-11-29 08:03:17.145492036 +0000 UTC m=+1.064120403 container start eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:03:17 np0005539550 adoring_germain[289993]: 167 167
Nov 29 03:03:17 np0005539550 systemd[1]: libpod-eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658.scope: Deactivated successfully.
Nov 29 03:03:17 np0005539550 podman[289978]: 2025-11-29 08:03:17.170142154 +0000 UTC m=+1.088770551 container attach eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:03:17 np0005539550 podman[289978]: 2025-11-29 08:03:17.170817052 +0000 UTC m=+1.089445429 container died eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:17 np0005539550 nova_compute[257631]: 2025-11-29 08:03:17.628 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e3c637e97e2c2e26002d2d3b0fdbc900b5a7a919a81467ede7c3e20d623c2249-merged.mount: Deactivated successfully.
Nov 29 03:03:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:17.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:17 np0005539550 nova_compute[257631]: 2025-11-29 08:03:17.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:17 np0005539550 nova_compute[257631]: 2025-11-29 08:03:17.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:03:17 np0005539550 nova_compute[257631]: 2025-11-29 08:03:17.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:03:18 np0005539550 podman[289978]: 2025-11-29 08:03:18.057071169 +0000 UTC m=+1.975699536 container remove eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:03:18 np0005539550 systemd[1]: libpod-conmon-eef121be095caf22c8f7abbdd3ca93a06b9b46de13eecf1ae1a512c0d4bc4658.scope: Deactivated successfully.
Nov 29 03:03:18 np0005539550 podman[290020]: 2025-11-29 08:03:18.221456276 +0000 UTC m=+0.027655607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:18 np0005539550 nova_compute[257631]: 2025-11-29 08:03:18.419 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:18.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:18 np0005539550 podman[290020]: 2025-11-29 08:03:18.482758322 +0000 UTC m=+0.288957643 container create 116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lumiere, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:03:18 np0005539550 systemd[1]: Started libpod-conmon-116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874.scope.
Nov 29 03:03:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:03:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfba3e3042a99b04615a49cca49dcd23d47f0af71457643531e1d258636e087c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfba3e3042a99b04615a49cca49dcd23d47f0af71457643531e1d258636e087c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfba3e3042a99b04615a49cca49dcd23d47f0af71457643531e1d258636e087c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfba3e3042a99b04615a49cca49dcd23d47f0af71457643531e1d258636e087c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfba3e3042a99b04615a49cca49dcd23d47f0af71457643531e1d258636e087c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:18 np0005539550 podman[290020]: 2025-11-29 08:03:18.898995689 +0000 UTC m=+0.705195040 container init 116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:18 np0005539550 podman[290020]: 2025-11-29 08:03:18.906821762 +0000 UTC m=+0.713021073 container start 116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:03:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:03:18.936 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:03:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:03:18.938 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:03:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:03:18.939 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:03:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 206 MiB data, 548 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.0 MiB/s wr, 186 op/s
Nov 29 03:03:19 np0005539550 podman[290020]: 2025-11-29 08:03:19.233947122 +0000 UTC m=+1.040146463 container attach 116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lumiere, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:03:19 np0005539550 sweet_lumiere[290036]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:03:19 np0005539550 sweet_lumiere[290036]: --> relative data size: 1.0
Nov 29 03:03:19 np0005539550 sweet_lumiere[290036]: --> All data devices are unavailable
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004108905348501768 of space, bias 1.0, pg target 1.2326716045505304 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001903324492834543 of space, bias 1.0, pg target 0.5690940233575283 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:03:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:03:19 np0005539550 systemd[1]: libpod-116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874.scope: Deactivated successfully.
Nov 29 03:03:19 np0005539550 podman[290020]: 2025-11-29 08:03:19.83981953 +0000 UTC m=+1.646018841 container died 116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:03:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:19.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bfba3e3042a99b04615a49cca49dcd23d47f0af71457643531e1d258636e087c-merged.mount: Deactivated successfully.
Nov 29 03:03:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 29 03:03:20 np0005539550 podman[290020]: 2025-11-29 08:03:20.362475452 +0000 UTC m=+2.168674773 container remove 116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:03:20 np0005539550 systemd[1]: libpod-conmon-116cca26f7a9bdea83b63128d5a69f198bb11d6f7be2803a33f431f1d435e874.scope: Deactivated successfully.
Nov 29 03:03:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:20.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 29 03:03:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 29 03:03:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 214 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.5 MiB/s wr, 211 op/s
Nov 29 03:03:21 np0005539550 nova_compute[257631]: 2025-11-29 08:03:21.037 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:21 np0005539550 podman[290208]: 2025-11-29 08:03:21.051320829 +0000 UTC m=+0.080188718 container create 8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:03:21 np0005539550 podman[290208]: 2025-11-29 08:03:20.998137612 +0000 UTC m=+0.027005541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:21 np0005539550 systemd[1]: Started libpod-conmon-8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129.scope.
Nov 29 03:03:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:03:21 np0005539550 podman[290208]: 2025-11-29 08:03:21.24680486 +0000 UTC m=+0.275672779 container init 8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:03:21 np0005539550 podman[290208]: 2025-11-29 08:03:21.254735596 +0000 UTC m=+0.283603485 container start 8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:03:21 np0005539550 eager_burnell[290225]: 167 167
Nov 29 03:03:21 np0005539550 systemd[1]: libpod-8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129.scope: Deactivated successfully.
Nov 29 03:03:21 np0005539550 podman[290208]: 2025-11-29 08:03:21.302330648 +0000 UTC m=+0.331198567 container attach 8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:03:21 np0005539550 podman[290208]: 2025-11-29 08:03:21.302890202 +0000 UTC m=+0.331758111 container died 8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9066331c9e5ac6296f6e50501e18d468c00304bf6d02f5df0326808f4c626a26-merged.mount: Deactivated successfully.
Nov 29 03:03:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:21.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:22 np0005539550 podman[290208]: 2025-11-29 08:03:22.282807815 +0000 UTC m=+1.311675784 container remove 8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:03:22 np0005539550 systemd[1]: libpod-conmon-8741e8d1670563394acc75950efdc09a70815688a55e6a1cdc970164501c1129.scope: Deactivated successfully.
Nov 29 03:03:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:22 np0005539550 podman[290254]: 2025-11-29 08:03:22.436588137 +0000 UTC m=+0.024575987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:22 np0005539550 podman[290254]: 2025-11-29 08:03:22.638545816 +0000 UTC m=+0.226533646 container create 1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:03:22 np0005539550 systemd[1]: Started libpod-conmon-1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0.scope.
Nov 29 03:03:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:03:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8749b74b1eb4ce0a25fdc42a112125366c30ef5f55a740f11add7de6b1b65a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8749b74b1eb4ce0a25fdc42a112125366c30ef5f55a740f11add7de6b1b65a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8749b74b1eb4ce0a25fdc42a112125366c30ef5f55a740f11add7de6b1b65a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8749b74b1eb4ce0a25fdc42a112125366c30ef5f55a740f11add7de6b1b65a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:22 np0005539550 podman[290254]: 2025-11-29 08:03:22.905025806 +0000 UTC m=+0.493013666 container init 1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:03:22 np0005539550 podman[290254]: 2025-11-29 08:03:22.912409198 +0000 UTC m=+0.500397028 container start 1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:03:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 225 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.7 MiB/s wr, 230 op/s
Nov 29 03:03:22 np0005539550 podman[290254]: 2025-11-29 08:03:22.98856788 +0000 UTC m=+0.576555730 container attach 1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:23 np0005539550 nova_compute[257631]: 2025-11-29 08:03:23.422 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]: {
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:    "0": [
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:        {
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "devices": [
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "/dev/loop3"
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            ],
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "lv_name": "ceph_lv0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "lv_size": "7511998464",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "name": "ceph_lv0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "tags": {
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.cluster_name": "ceph",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.crush_device_class": "",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.encrypted": "0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.osd_id": "0",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.type": "block",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:                "ceph.vdo": "0"
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            },
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "type": "block",
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:            "vg_name": "ceph_vg0"
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:        }
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]:    ]
Nov 29 03:03:23 np0005539550 wonderful_bhabha[290270]: }
Nov 29 03:03:23 np0005539550 systemd[1]: libpod-1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0.scope: Deactivated successfully.
Nov 29 03:03:23 np0005539550 podman[290254]: 2025-11-29 08:03:23.735339465 +0000 UTC m=+1.323327325 container died 1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:03:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:23.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f8749b74b1eb4ce0a25fdc42a112125366c30ef5f55a740f11add7de6b1b65a6-merged.mount: Deactivated successfully.
Nov 29 03:03:24 np0005539550 podman[290254]: 2025-11-29 08:03:24.107433249 +0000 UTC m=+1.695421079 container remove 1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:03:24 np0005539550 systemd[1]: libpod-conmon-1e25812bba5bcf93a039ec6b054bede2b685f6f752858b4790ee24f1d418d8f0.scope: Deactivated successfully.
Nov 29 03:03:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:24.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:24 np0005539550 podman[290434]: 2025-11-29 08:03:24.716387307 +0000 UTC m=+0.020958774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 29 03:03:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 239 MiB data, 572 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 7.1 MiB/s wr, 257 op/s
Nov 29 03:03:25 np0005539550 podman[290434]: 2025-11-29 08:03:25.062573321 +0000 UTC m=+0.367144778 container create 589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:03:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 29 03:03:25 np0005539550 systemd[1]: Started libpod-conmon-589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123.scope.
Nov 29 03:03:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:03:25 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 29 03:03:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:25.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:25 np0005539550 podman[290434]: 2025-11-29 08:03:25.970236293 +0000 UTC m=+1.274807750 container init 589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elgamal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:03:25 np0005539550 podman[290434]: 2025-11-29 08:03:25.97708516 +0000 UTC m=+1.281656607 container start 589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:03:25 np0005539550 laughing_elgamal[290450]: 167 167
Nov 29 03:03:25 np0005539550 systemd[1]: libpod-589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123.scope: Deactivated successfully.
Nov 29 03:03:26 np0005539550 podman[290434]: 2025-11-29 08:03:26.014867238 +0000 UTC m=+1.319438695 container attach 589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:03:26 np0005539550 podman[290434]: 2025-11-29 08:03:26.015233638 +0000 UTC m=+1.319805075 container died 589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elgamal, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:03:26 np0005539550 nova_compute[257631]: 2025-11-29 08:03:26.040 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0606dc7d4a995fa5f788d6f06bacc440cda82e39d8242c58903ffa18e030e24c-merged.mount: Deactivated successfully.
Nov 29 03:03:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:03:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:26.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:03:26 np0005539550 podman[290434]: 2025-11-29 08:03:26.827660164 +0000 UTC m=+2.132231611 container remove 589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:03:26 np0005539550 systemd[1]: libpod-conmon-589c4e6402148a554b63e4e51b24480643a2bf64b991a52448714474dd745123.scope: Deactivated successfully.
Nov 29 03:03:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 219 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.3 MiB/s wr, 248 op/s
Nov 29 03:03:27 np0005539550 podman[290474]: 2025-11-29 08:03:27.078058707 +0000 UTC m=+0.120152502 container create 8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:03:27 np0005539550 podman[290474]: 2025-11-29 08:03:26.987644696 +0000 UTC m=+0.029738521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:03:27 np0005539550 systemd[1]: Started libpod-conmon-8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3.scope.
Nov 29 03:03:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:03:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e16224b807b1fc3a4a09991908c34e1ff7eacad79850da4979b14ad1413a1f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e16224b807b1fc3a4a09991908c34e1ff7eacad79850da4979b14ad1413a1f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e16224b807b1fc3a4a09991908c34e1ff7eacad79850da4979b14ad1413a1f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e16224b807b1fc3a4a09991908c34e1ff7eacad79850da4979b14ad1413a1f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:03:27 np0005539550 podman[290474]: 2025-11-29 08:03:27.437628687 +0000 UTC m=+0.479722492 container init 8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:03:27 np0005539550 podman[290474]: 2025-11-29 08:03:27.446519557 +0000 UTC m=+0.488613362 container start 8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:03:27 np0005539550 podman[290474]: 2025-11-29 08:03:27.551903225 +0000 UTC m=+0.593997060 container attach 8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:27.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]: {
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]:        "osd_id": 0,
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]:        "type": "bluestore"
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]:    }
Nov 29 03:03:28 np0005539550 nifty_mirzakhani[290490]: }
Nov 29 03:03:28 np0005539550 systemd[1]: libpod-8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3.scope: Deactivated successfully.
Nov 29 03:03:28 np0005539550 podman[290474]: 2025-11-29 08:03:28.347151867 +0000 UTC m=+1.389245682 container died 8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:03:28 np0005539550 nova_compute[257631]: 2025-11-29 08:03:28.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:03:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:28.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5e16224b807b1fc3a4a09991908c34e1ff7eacad79850da4979b14ad1413a1f5-merged.mount: Deactivated successfully.
Nov 29 03:03:28 np0005539550 podman[290474]: 2025-11-29 08:03:28.616199213 +0000 UTC m=+1.658293018 container remove 8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mirzakhani, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:03:28 np0005539550 systemd[1]: libpod-conmon-8640462a00c1ca4af47e27b9157f5aa5bed7d13d969da683945991b784ec00c3.scope: Deactivated successfully.
Nov 29 03:03:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:03:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:03:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 155 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 201 op/s
Nov 29 03:03:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:28 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9ebef249-583f-4970-b3a4-b28dae5aea2b does not exist
Nov 29 03:03:28 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d5f9ef6-2a49-475c-885f-255e6438b7ba does not exist
Nov 29 03:03:28 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b39eaf88-eeef-4c15-8955-940b67ae4e65 does not exist
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.154 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "daec3331-5f41-4011-803b-027682844e84" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.155 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "daec3331-5f41-4011-803b-027682844e84" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.155 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "daec3331-5f41-4011-803b-027682844e84-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.155 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "daec3331-5f41-4011-803b-027682844e84-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.155 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "daec3331-5f41-4011-803b-027682844e84-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
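The acquire/acquired/released triplets around `daec3331-...` and `daec3331-...-events` are oslo.concurrency's `lockutils`: Nova serializes all work on an instance behind a lock named after its UUID, with a `-events` sibling guarding the external-event table. A minimal sketch of the same pattern:

```python
# oslo.concurrency pattern behind the lock lines above: an in-process
# lock keyed by instance UUID, plus a "<uuid>-events" sibling lock.
from oslo_concurrency import lockutils

INSTANCE_UUID = "daec3331-5f41-4011-803b-027682844e84"

with lockutils.lock(INSTANCE_UUID):            # do_terminate_instance
    with lockutils.lock(INSTANCE_UUID + "-events"):
        pass  # _clear_events: drop any queued external events
    # ...terminate work continues under the outer lock (held 3.474s here)
```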
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.156 257641 INFO nova.compute.manager [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Terminating instance#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.157 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "refresh_cache-daec3331-5f41-4011-803b-027682844e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.157 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquired lock "refresh_cache-daec3331-5f41-4011-803b-027682844e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.158 257641 DEBUG nova.network.neutron [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:03:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.307 257641 DEBUG nova.network.neutron [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:03:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 29 03:03:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 29 03:03:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.729 257641 DEBUG nova.network.neutron [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.744 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Releasing lock "refresh_cache-daec3331-5f41-4011-803b-027682844e84" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.745 257641 DEBUG nova.compute.manager [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
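`_shutdown_instance` now hands off to the libvirt driver, which hard-stops the guest; the `machine-qemu\x2d23\x2dinstance\x2d0000002a.scope` deactivation a few lines below is systemd-machined unregistering the same domain. A rough libvirt-python equivalent (the domain name `instance-0000002a` is read off that machine scope; libvirt access on the host is assumed):

```python
# Roughly what the Nova libvirt driver does at "Start destroying":
# look up the domain and hard power it off.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("instance-0000002a")
    dom.destroy()  # hard stop; Nova then removes the instance files
finally:
    conn.close()
```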
Nov 29 03:03:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:29.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
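The recurring anonymous `HEAD / HTTP/1.0` requests from 192.168.122.100 and .102 (a start/done pair plus one beast access-log line, roughly every two seconds per client) look like load-balancer health probes against the RGW frontend. A sketch for parsing the beast access-log format, using the line above as the test vector:

```python
# Parse a radosgw "beast:" access-log line into fields.
import re

LINE = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:03:29.851 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')

pat = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)
m = pat.match(LINE)
assert m and m["status"] == "200" and m["req"] == "HEAD / HTTP/1.0"
print(m.groupdict())
```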
Nov 29 03:03:29 np0005539550 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Nov 29 03:03:29 np0005539550 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000002a.scope: Consumed 14.322s CPU time.
Nov 29 03:03:29 np0005539550 systemd-machined[216673]: Machine qemu-23-instance-0000002a terminated.
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.965 257641 INFO nova.virt.libvirt.driver [-] [instance: daec3331-5f41-4011-803b-027682844e84] Instance destroyed successfully.#033[00m
Nov 29 03:03:29 np0005539550 nova_compute[257631]: 2025-11-29 08:03:29.966 257641 DEBUG nova.objects.instance [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lazy-loading 'resources' on Instance uuid daec3331-5f41-4011-803b-027682844e84 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:30.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 121 MiB data, 507 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 722 KiB/s wr, 196 op/s
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.042 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:31 np0005539550 podman[290650]: 2025-11-29 08:03:31.331231062 +0000 UTC m=+0.057922230 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:03:31 np0005539550 podman[290649]: 2025-11-29 08:03:31.34003376 +0000 UTC m=+0.068863164 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
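The two `health_status=healthy` events are podman running the configured container healthchecks (`test: /openstack/healthcheck`, mounted from `/var/lib/openstack/healthchecks/<name>`); `health_failing_streak=0` means the probe has not failed recently. A hedged sketch for reproducing the check by hand (the inspect key name differs across podman releases, as noted in the comment):

```python
# Trigger one healthcheck run and read back the recorded status.
import json
import subprocess

NAME = "ovn_metadata_agent"
subprocess.run(["podman", "healthcheck", "run", NAME], check=False)

state = json.loads(subprocess.run(
    ["podman", "inspect", NAME],
    capture_output=True, text=True, check=True,
).stdout)[0]["State"]
# "Health" on recent podman, "Healthcheck" on older releases.
health = state.get("Health") or state.get("Healthcheck") or {}
print(health.get("Status"), health.get("FailingStreak"))
```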
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.542 257641 INFO nova.virt.libvirt.driver [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Deleting instance files /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84_del#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.544 257641 INFO nova.virt.libvirt.driver [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Deletion of /var/lib/nova/instances/daec3331-5f41-4011-803b-027682844e84_del complete#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.591 257641 INFO nova.compute.manager [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] [instance: daec3331-5f41-4011-803b-027682844e84] Took 1.85 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.592 257641 DEBUG oslo.service.loopingcall [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.592 257641 DEBUG nova.compute.manager [-] [instance: daec3331-5f41-4011-803b-027682844e84] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.593 257641 DEBUG nova.network.neutron [-] [instance: daec3331-5f41-4011-803b-027682844e84] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.807 257641 DEBUG nova.network.neutron [-] [instance: daec3331-5f41-4011-803b-027682844e84] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.830 257641 DEBUG nova.network.neutron [-] [instance: daec3331-5f41-4011-803b-027682844e84] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.847 257641 INFO nova.compute.manager [-] [instance: daec3331-5f41-4011-803b-027682844e84] Took 0.25 seconds to deallocate network for instance.#033[00m
Nov 29 03:03:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:03:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:31.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.890 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.890 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:31 np0005539550 nova_compute[257631]: 2025-11-29 08:03:31.952 257641 DEBUG oslo_concurrency.processutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4015655811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:32 np0005539550 nova_compute[257631]: 2025-11-29 08:03:32.414 257641 DEBUG oslo_concurrency.processutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
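The `ceph df --format=json` subprocess (audited by ceph-mon as `client.openstack` in the dispatch lines above) is how Nova's RBD image backend refreshes pool capacity before updating disk inventory. A sketch of the same probe; the JSON field names are assumptions based on current Ceph releases:

```python
# Capacity probe equivalent to the "ceph df" command in the log.
import json
import subprocess

df = json.loads(subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout)
total = df["stats"]["total_bytes"]
avail = df["stats"]["total_avail_bytes"]
vms = next(p for p in df["pools"] if p["name"] == "vms")
print(f"cluster {avail}/{total} B free; "
      f"vms max_avail={vms['stats']['max_avail']} B")
```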
Nov 29 03:03:32 np0005539550 nova_compute[257631]: 2025-11-29 08:03:32.420 257641 DEBUG nova.compute.provider_tree [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:32.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:32 np0005539550 nova_compute[257631]: 2025-11-29 08:03:32.444 257641 DEBUG nova.scheduler.client.report [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
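The inventory dict shows how placement derives schedulable capacity: `(total - reserved) * allocation_ratio` per resource class, so this host offers 32 VCPU, 7168 MB of RAM, and about 17 GB of disk. A worked check:

```python
# Effective capacity implied by the inventory line above.
inv = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inv.items():
    print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1 (approximately)
```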
Nov 29 03:03:32 np0005539550 nova_compute[257631]: 2025-11-29 08:03:32.474 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:32 np0005539550 nova_compute[257631]: 2025-11-29 08:03:32.522 257641 INFO nova.scheduler.client.report [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Deleted allocations for instance daec3331-5f41-4011-803b-027682844e84#033[00m
Nov 29 03:03:32 np0005539550 nova_compute[257631]: 2025-11-29 08:03:32.629 257641 DEBUG oslo_concurrency.lockutils [None req-f6b11507-b6aa-49cf-8448-4977ea42b1c5 516e17df54a041ee8101fb121e5d5740 392223643d6d4b8b96eaa27c4a0d41cc - - default default] Lock "daec3331-5f41-4011-803b-027682844e84" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.474s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 29 03:03:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 88 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 265 KiB/s rd, 177 KiB/s wr, 155 op/s
Nov 29 03:03:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 29 03:03:33 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
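osdmap epochs advance e184 → e187 within a few seconds while membership stays `3 total, 3 up, 3 in`: the new epochs carry metadata-only changes, and `do_prune` trims accumulated full maps. To confirm the current epoch an operator could do the following (admin access assumed):

```python
# Read the current osdmap epoch, matching the e18x lines above.
import json
import subprocess

dump = json.loads(subprocess.run(
    ["ceph", "osd", "dump", "--format=json"],
    capture_output=True, text=True, check=True,
).stdout)
print("epoch", dump["epoch"], "osds", len(dump["osds"]))
```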
Nov 29 03:03:33 np0005539550 nova_compute[257631]: 2025-11-29 08:03:33.429 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:33.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 69 MiB data, 476 MiB used, 21 GiB / 21 GiB avail; 177 KiB/s rd, 102 KiB/s wr, 150 op/s
Nov 29 03:03:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:35.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:36 np0005539550 nova_compute[257631]: 2025-11-29 08:03:36.045 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:36.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 41 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 74 KiB/s rd, 22 KiB/s wr, 106 op/s
Nov 29 03:03:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:37.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:38 np0005539550 nova_compute[257631]: 2025-11-29 08:03:38.431 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:03:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:38.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:03:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 41 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 19 KiB/s wr, 93 op/s
Nov 29 03:03:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:39.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 29 03:03:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:40.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 29 03:03:40 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 29 03:03:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 41 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 4.9 KiB/s wr, 56 op/s
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.046 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.158 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "885925d1-baac-4205-8817-4d9b92b082de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.159 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "885925d1-baac-4205-8817-4d9b92b082de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.176 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.270 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.270 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.276 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.277 257641 INFO nova.compute.claims [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.411 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.739 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "1585e4de-af24-40bc-8d7d-d715357a957c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.740 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "1585e4de-af24-40bc-8d7d-d715357a957c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.770 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:03:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:41.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.879 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807552194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.909 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.915 257641 DEBUG nova.compute.provider_tree [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.935 257641 DEBUG nova.scheduler.client.report [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.980 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.980 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.983 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.989 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:03:41 np0005539550 nova_compute[257631]: 2025-11-29 08:03:41.989 257641 INFO nova.compute.claims [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.061 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.061 257641 DEBUG nova.network.neutron [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.084 257641 INFO nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.102 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.171 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.277 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.279 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.281 257641 INFO nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Creating image(s)#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.314 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.343 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.372 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.376 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.436 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
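The `qemu-img info` probe runs under `oslo_concurrency.prlimit` with a 1 GiB address-space cap (`--as=1073741824`) and a 30 s CPU cap (`--cpu=30`), so a corrupt or hostile image cannot exhaust the compute host while being inspected. The same bounded probe, parsed:

```python
# Bounded image inspection, as in the log line above; "format" and
# "virtual-size" are standard qemu-img JSON output keys.
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
info = json.loads(subprocess.run(
    ["python3", "-m", "oslo_concurrency.prlimit",
     "--as=1073741824", "--cpu=30", "--",
     "env", "LC_ALL=C", "LANG=C",
     "qemu-img", "info", BASE, "--force-share", "--output=json"],
    capture_output=True, text=True, check=True,
).stdout)
print(info["format"], info["virtual-size"])
```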
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.437 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.438 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.438 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:42.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.475 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.480 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 885925d1-baac-4205-8817-4d9b92b082de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:03:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4195205853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.631 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.637 257641 DEBUG nova.compute.provider_tree [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.664 257641 DEBUG nova.scheduler.client.report [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.693 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.694 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.804 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.804 257641 DEBUG nova.network.neutron [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.820 257641 INFO nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.846 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.960 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.961 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:03:42 np0005539550 nova_compute[257631]: 2025-11-29 08:03:42.961 257641 INFO nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Creating image(s)#033[00m
Nov 29 03:03:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 41 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 4.9 KiB/s wr, 65 op/s
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.143 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.183 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.220 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.227 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.254 257641 DEBUG nova.network.neutron [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.256 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.257 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 885925d1-baac-4205-8817-4d9b92b082de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.298 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.299 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.300 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.300 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.334 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.338 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1585e4de-af24-40bc-8d7d-d715357a957c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.407 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] resizing rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
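With RBD-backed ephemeral disks, the base image is fetched once into the local `_base` cache, `rbd import`ed into the `vms` pool as `<uuid>_disk`, and then resized to the flavor root size (1073741824 bytes here). A hedged sketch of the resize step using the python-rbd bindings (binding availability and `client.openstack` permissions assumed):

```python
# Grow an RBD image to the flavor root size, as in the resize line above.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "885925d1-baac-4205-8817-4d9b92b082de_disk") as img:
            img.resize(1073741824)  # bytes
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```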
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.447 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.563 257641 DEBUG nova.objects.instance [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lazy-loading 'migration_context' on Instance uuid 885925d1-baac-4205-8817-4d9b92b082de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.582 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.583 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Ensure instance console log exists: /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.583 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.584 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.584 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.585 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
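[editor's note] The Start _get_guest_xml entry above carries the disk_info mapping that decides bus and device naming for every guest disk. Laid out as plain Python for readability (contents taken directly from the log entry):

    # The disk_info mapping from the log entry above. It drives the
    # <disk> elements generated later in the domain XML: the root image
    # lands on virtio as vda, the config drive on sata as sda.
    disk_info = {
        'disk_bus': 'virtio',
        'cdrom_bus': 'sata',
        'mapping': {
            'root':        {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'},
            'disk':        {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'},
            'disk.config': {'bus': 'sata',   'dev': 'sda', 'type': 'cdrom'},
        },
    }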
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.590 257641 WARNING nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.602 257641 DEBUG nova.virt.libvirt.host [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.603 257641 DEBUG nova.virt.libvirt.host [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.607 257641 DEBUG nova.virt.libvirt.host [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.608 257641 DEBUG nova.virt.libvirt.host [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
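[editor's note] The four host.py lines above probe for a usable CPU controller, first through cgroups v1 (missing on this host) and then v2 (found). A sketch of the v2 check, assuming the unified hierarchy is mounted at /sys/fs/cgroup; this mirrors the kernel interface, not nova's exact code.

    # Sketch: detect a cgroup-v2 "cpu" controller. The mount point is
    # an assumption (standard on EL9 unified-hierarchy hosts).
    import os

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        controllers = os.path.join(root, 'cgroup.controllers')
        if not os.path.exists(controllers):
            return False  # no unified (v2) hierarchy at this root
        with open(controllers) as f:
            # File holds a space-separated controller list, e.g.
            # "cpuset cpu io memory ..."
            return 'cpu' in f.read().split()

    print(has_cgroupsv2_cpu_controller())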
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.609 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.610 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.610 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.610 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.611 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.611 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.611 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.611 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.611 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.612 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.612 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.612 257641 DEBUG nova.virt.hardware [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
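[editor's note] The hardware.py run above goes from unconstrained limits (0:0:0 everywhere) to exactly one topology for a single vCPU. The enumeration amounts to listing the sockets x cores x threads factorizations of the vCPU count within the 65536 maxima; a minimal sketch of that combinatorial step (nova's real code also orders results by preference):

    # Sketch: enumerate sockets*cores*threads factorizations of a vCPU
    # count, bounded by the maxima shown in the log (65536 each).
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    topologies.append((sockets, cores, threads))
        return topologies

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"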
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.615 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.640 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1585e4de-af24-40bc-8d7d-d715357a957c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.715 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] resizing rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
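[editor's note] The import/resize pair above populates the vms pool from the local _base file and then grows the image to the 1 GiB flavor root disk. A sketch of the same two steps, using processutils for the CLI import (arguments copied from the log) and the python rbd bindings for the resize; treat it as illustrative rather than nova's exact rbd_utils code.

    # Sketch: import a cached base image into the "vms" RBD pool, then
    # resize it to the flavor root-disk size (values from the log).
    from oslo_concurrency import processutils
    import rados
    import rbd

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    name = '1585e4de-af24-40bc-8d7d-d715357a957c_disk'

    processutils.execute(
        'rbd', 'import', '--pool', 'vms', base, name,
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # Resize through the librbd bindings rather than the CLI.
    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, name) as image:
                image.resize(1073741824)  # 1 GiB, per the resize line above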
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.777 257641 DEBUG nova.network.neutron [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.778 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.834 257641 DEBUG nova.objects.instance [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lazy-loading 'migration_context' on Instance uuid 1585e4de-af24-40bc-8d7d-d715357a957c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.854 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.855 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Ensure instance console log exists: /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.856 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.856 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.857 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.858 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.862 257641 WARNING nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:03:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:43.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
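[editor's note] Interleaved with the compute logs, radosgw's beast frontend emits one access line per request, like the HEAD probe above. A small parser for that line shape; the regex is a best-effort match for the fields visible here, not an official grammar.

    # Sketch: pull client, request, status and latency out of a radosgw
    # "beast:" access-log line like the one above.
    import re

    LINE = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:03:43.865 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    PATTERN = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    m = PATTERN.match(LINE)
    print(m.group('client'), m.group('request'), m.group('status'),
          m.group('latency'))
    # 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000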
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.867 257641 DEBUG nova.virt.libvirt.host [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.868 257641 DEBUG nova.virt.libvirt.host [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.871 257641 DEBUG nova.virt.libvirt.host [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.872 257641 DEBUG nova.virt.libvirt.host [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.873 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.873 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.874 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.874 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.875 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.875 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.875 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.876 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.876 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.876 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.876 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.877 257641 DEBUG nova.virt.hardware [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:03:43 np0005539550 nova_compute[257631]: 2025-11-29 08:03:43.879 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2299249176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.067 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
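[editor's note] These repeated "ceph mon dump --format=json" runs are how the driver learns the monitor endpoints that later appear as <host> elements in the domain XML. A sketch of extracting host/port pairs from that JSON; the monmap schema varies by Ceph release, so the "addr" field with an "ip:port/nonce" value is an assumption here.

    # Sketch: run "ceph mon dump --format=json" and collect monitor
    # host:port pairs, roughly what nova's rbd utility layer does.
    import json
    import re
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)

    mons = []
    for mon in monmap['mons']:
        # Assumed address form: "192.168.122.100:6789/0".
        m = re.match(r'(?P<host>[^:]+):(?P<port>\d+)', mon['addr'])
        if m:
            mons.append((m.group('host'), m.group('port')))
    print(mons)  # e.g. [('192.168.122.100', '6789'), ...]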
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.090 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.099 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3050617301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.338 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.367 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.373 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:44.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914232603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.535 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.538 257641 DEBUG nova.objects.instance [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 885925d1-baac-4205-8817-4d9b92b082de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.563 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <uuid>885925d1-baac-4205-8817-4d9b92b082de</uuid>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <name>instance-0000002c</name>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1219999952</nova:name>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:03:43</nova:creationTime>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:user uuid="e34525c38d50445e9771f1b8e18bc428">tempest-ListImageFiltersTestJSON-262348058-project-member</nova:user>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:project uuid="1a7ab40578b84f6aa0f4d2225a36bf9e">tempest-ListImageFiltersTestJSON-262348058</nova:project>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="serial">885925d1-baac-4205-8817-4d9b92b082de</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="uuid">885925d1-baac-4205-8817-4d9b92b082de</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/885925d1-baac-4205-8817-4d9b92b082de_disk">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/885925d1-baac-4205-8817-4d9b92b082de_disk.config">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/console.log" append="off"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:03:44 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:03:44 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
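[editor's note] Once _get_guest_xml has produced the document above, the driver hands it to libvirt to define and boot the guest. A compressed sketch with the libvirt python bindings; nova's plumbing and error handling are omitted, and the xml variable stands in for the dump above.

    # Sketch: define and start a guest from generated XML via libvirt,
    # the step that follows _get_guest_xml.
    import libvirt

    xml = '<domain type="kvm">...</domain>'  # the domain dump above, abbreviated

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.create()                # power it on (instance-0000002c)
    finally:
        conn.close()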
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.647 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.659 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.660 257641 INFO nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Using config drive#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.719 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:03:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4128506433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.844 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.847 257641 DEBUG nova.objects.instance [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lazy-loading 'pci_devices' on Instance uuid 1585e4de-af24-40bc-8d7d-d715357a957c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.884 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <uuid>1585e4de-af24-40bc-8d7d-d715357a957c</uuid>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <name>instance-0000002d</name>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:name>tempest-ListImageFiltersTestJSON-server-833055338</nova:name>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:03:43</nova:creationTime>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:user uuid="e34525c38d50445e9771f1b8e18bc428">tempest-ListImageFiltersTestJSON-262348058-project-member</nova:user>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <nova:project uuid="1a7ab40578b84f6aa0f4d2225a36bf9e">tempest-ListImageFiltersTestJSON-262348058</nova:project>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="serial">1585e4de-af24-40bc-8d7d-d715357a957c</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="uuid">1585e4de-af24-40bc-8d7d-d715357a957c</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1585e4de-af24-40bc-8d7d-d715357a957c_disk">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1585e4de-af24-40bc-8d7d-d715357a957c_disk.config">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/console.log" append="off"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:03:44 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:03:44 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:03:44 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:03:44 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.965 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403409.9635131, daec3331-5f41-4011-803b-027682844e84 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:44 np0005539550 nova_compute[257631]: 2025-11-29 08:03:44.965 257641 INFO nova.compute.manager [-] [instance: daec3331-5f41-4011-803b-027682844e84] VM Stopped (Lifecycle Event)#033[00m
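[editor's note] The Stopped event for daec3331... originates from libvirt's domain-event stream, which the compute manager translates into nova LifecycleEvent objects like the one emitted above. A bare-bones listener for the same stream, following the standard libvirt-python event-loop wiring; the callback body is illustrative.

    # Sketch: subscribe to libvirt lifecycle events, the source of the
    # "VM Stopped (Lifecycle Event)" lines above.
    import libvirt

    def lifecycle_cb(conn, dom, event, detail, _opaque):
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print('%s => Stopped' % dom.UUIDString())

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)

    while True:
        libvirt.virEventRunDefaultImpl()  # dispatch pending callbacks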
Nov 29 03:03:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 42 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 4.4 KiB/s wr, 45 op/s
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.002 257641 DEBUG nova.compute.manager [None req-11150393-823e-4b7a-aeca-1caebde44cba - - - - - -] [instance: daec3331-5f41-4011-803b-027682844e84] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.023 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.023 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.024 257641 INFO nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Using config drive#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.048 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.295 257641 INFO nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Creating config drive at /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/disk.config#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.301 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn49a5c2l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.320 257641 INFO nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Creating config drive at /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/disk.config#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.325 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfbhp29i6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:45 np0005539550 podman[291255]: 2025-11-29 08:03:45.328840539 +0000 UTC m=+0.069564622 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
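The health_status=healthy field in the podman event above comes from the container's configured healthcheck ('test': '/openstack/healthcheck'). Assuming podman's inspect template exposes the usual .State.Health.Status path for containers that define a healthcheck, the current status could be read like this:

import subprocess

# Template path assumes a healthcheck is configured, as ovn_controller's is above.
status = subprocess.run(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}", "ovn_controller"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(status)  # expected: "healthy", matching health_status in the event above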
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.425 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn49a5c2l" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
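The CMD line above shows the exact mkisofs invocation nova uses to build the config drive. A sketch reproducing it with subprocess; the flags and publisher string are copied from the log, and the staging directory is the tmpdir holding whatever metadata tree nova wrote there:

import subprocess

instance = "1585e4de-af24-40bc-8d7d-d715357a957c"
cmd = [
    "/usr/bin/mkisofs",
    "-o", f"/var/lib/nova/instances/{instance}/disk.config",
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/tmpn49a5c2l",  # staging dir with the metadata tree (from the log)
]
subprocess.run(cmd, check=True)  # exit 0 matches the 'returned: 0' line above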
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.459 257641 DEBUG nova.storage.rbd_utils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 1585e4de-af24-40bc-8d7d-d715357a957c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.464 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/disk.config 1585e4de-af24-40bc-8d7d-d715357a957c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.487 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfbhp29i6" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.511 257641 DEBUG nova.storage.rbd_utils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] rbd image 885925d1-baac-4205-8817-4d9b92b082de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.514 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/disk.config 885925d1-baac-4205-8817-4d9b92b082de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:03:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.696 257641 DEBUG oslo_concurrency.processutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/disk.config 1585e4de-af24-40bc-8d7d-d715357a957c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.697 257641 INFO nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Deleting local config drive /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:03:45 np0005539550 systemd-machined[216673]: New machine qemu-24-instance-0000002d.
Nov 29 03:03:45 np0005539550 systemd[1]: Started Virtual Machine qemu-24-instance-0000002d.
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.807 257641 DEBUG oslo_concurrency.processutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/disk.config 885925d1-baac-4205-8817-4d9b92b082de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:03:45 np0005539550 nova_compute[257631]: 2025-11-29 08:03:45.808 257641 INFO nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Deleting local config drive /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de/disk.config because it was imported into RBD.#033[00m
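Once built, each config-drive ISO is imported into the vms RBD pool and the local copy is deleted, per the two INFO lines above. The same two steps, sketched with subprocess using the rbd flags from the CMD line:

import os
import subprocess

instance = "885925d1-baac-4205-8817-4d9b92b082de"
local = f"/var/lib/nova/instances/{instance}/disk.config"

# Same rbd invocation as the CMD line above (client id and conf from the log).
subprocess.run([
    "rbd", "import", "--pool", "vms",
    local, f"{instance}_disk.config",
    "--image-format=2", "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf",
], check=True)

# Mirrors "Deleting local config drive ... because it was imported into RBD."
os.unlink(local)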
Nov 29 03:03:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:03:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:45.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
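The radosgw beast access lines also have a stable shape (client, user, timestamp, request, status, byte count, latency). A regex sketch for parsing them, with the pattern inferred from the lines in this log:

import re

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:03:45.866 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.001000025s')

m = re.search(
    r"beast: \S+: (?P<client>\S+) - (?P<user>\S+) "
    r"\[(?P<ts>[^\]]+)\] \"(?P<request>[^\"]+)\" "
    r"(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s",
    line,
)
if m:
    print(m.groupdict())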
Nov 29 03:03:45 np0005539550 systemd-machined[216673]: New machine qemu-25-instance-0000002c.
Nov 29 03:03:45 np0005539550 systemd[1]: Started Virtual Machine qemu-25-instance-0000002c.
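systemd-machined has now registered both qemu machines (qemu-24-instance-0000002d and qemu-25-instance-0000002c). Assuming machinectl is available on the host, they could be listed like this:

import subprocess

# `machinectl list` shows the machines systemd-machined registered above.
out = subprocess.run(
    ["machinectl", "list", "--no-legend"],
    capture_output=True, text=True, check=True,
).stdout
for row in out.splitlines():
    print(row.split()[0])  # machine name is the first column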
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:46.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.594 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403426.5936837, 1585e4de-af24-40bc-8d7d-d715357a957c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.595 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.598 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.598 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.604 257641 INFO nova.virt.libvirt.driver [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Instance spawned successfully.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.604 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.627 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.633 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.640 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.641 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.642 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.642 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.644 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.645 257641 DEBUG nova.virt.libvirt.driver [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.650 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.651 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403426.595138, 1585e4de-af24-40bc-8d7d-d715357a957c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.651 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.685 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.688 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.736 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
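The sync_power_state lines above compare the DB power_state (0) against the hypervisor's (1) and skip the sync while a task is pending. Using the numeric codes from nova's power_state module (0=NOSTATE, 1=RUNNING), the decision reduces to a sketch like this (names here are illustrative, not nova's internals):

from typing import Optional

# Numeric codes as in nova.compute.power_state:
# 0=NOSTATE, 1=RUNNING, 3=PAUSED, 4=SHUTDOWN, 6=CRASHED, 7=SUSPENDED.
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_state: int, vm_state: int,
                     task_state: Optional[str]) -> str:
    # Mirrors the "pending task (spawning). Skip." branch in the log.
    if task_state is not None:
        return f"instance has a pending task ({task_state}). Skip."
    if db_state != vm_state:
        return f"update DB power_state {db_state} -> {vm_state}"
    return "in sync"

print(sync_power_state(NOSTATE, RUNNING, "spawning"))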
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.737 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403426.7350812, 885925d1-baac-4205-8817-4d9b92b082de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.737 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.738 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.739 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.744 257641 INFO nova.virt.libvirt.driver [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Instance spawned successfully.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.745 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.785 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.786 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.786 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.787 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.787 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.788 257641 DEBUG nova.virt.libvirt.driver [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
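_register_undefined_instance_details records a default for each image property the image itself did not set; the six DEBUG lines above show the values chosen for this instance. A toy model of that merge (a sketch of the idea, not nova's actual code):

# Defaults discovered for instance 885925d1-... in the DEBUG lines above.
found_defaults = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

def register_undefined(image_props: dict, defaults: dict) -> dict:
    """Only fill in keys the image left unset; explicit values win."""
    merged = dict(image_props)
    for key, value in defaults.items():
        merged.setdefault(key, value)
    return merged

print(register_undefined({"hw_disk_bus": "scsi"}, found_defaults))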
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.794 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.797 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.805 257641 INFO nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Took 3.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.806 257641 DEBUG nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.859 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.860 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403426.7352245, 885925d1-baac-4205-8817-4d9b92b082de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.860 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] VM Started (Lifecycle Event)#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.898 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.902 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.911 257641 INFO nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Took 4.63 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.911 257641 DEBUG nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.913 257641 INFO nova.compute.manager [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Took 5.07 seconds to build instance.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.926 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:03:46 np0005539550 nova_compute[257631]: 2025-11-29 08:03:46.952 257641 DEBUG oslo_concurrency.lockutils [None req-48019c60-7fbd-4267-9cf4-50b1dece19b0 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "1585e4de-af24-40bc-8d7d-d715357a957c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:03:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 93 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 54 KiB/s rd, 2.4 MiB/s wr, 78 op/s
Nov 29 03:03:47 np0005539550 nova_compute[257631]: 2025-11-29 08:03:47.004 257641 INFO nova.compute.manager [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Took 5.76 seconds to build instance.#033[00m
Nov 29 03:03:47 np0005539550 nova_compute[257631]: 2025-11-29 08:03:47.027 257641 DEBUG oslo_concurrency.lockutils [None req-baa21ada-133e-4424-9df3-31245cd8d710 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "885925d1-baac-4205-8817-4d9b92b082de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
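The lockutils lines report how long each per-instance build lock was held (5.212s and 5.868s above). A generic sketch of timing a named lock the same way; this is a stand-in for the pattern, not oslo.concurrency's actual implementation:

import threading
import time
from contextlib import contextmanager

_locks: dict = {}  # one lock per name, as lockutils keys locks by instance UUID

@contextmanager
def timed_lock(name: str):
    lock = _locks.setdefault(name, threading.Lock())
    lock.acquire()
    start = time.monotonic()
    try:
        yield
    finally:
        held = time.monotonic() - start
        lock.release()
        print(f'Lock "{name}" "released" :: held {held:.3f}s')

with timed_lock("885925d1-baac-4205-8817-4d9b92b082de"):
    time.sleep(0.1)  # stand-in for _locked_do_build_and_run_instance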
Nov 29 03:03:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:03:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:47.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:03:48 np0005539550 nova_compute[257631]: 2025-11-29 08:03:48.449 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:48.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 151 MiB data, 503 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.9 MiB/s wr, 175 op/s
Nov 29 03:03:49 np0005539550 nova_compute[257631]: 2025-11-29 08:03:49.419 257641 DEBUG nova.compute.manager [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:03:49 np0005539550 nova_compute[257631]: 2025-11-29 08:03:49.463 257641 INFO nova.compute.manager [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] instance snapshotting#033[00m
Nov 29 03:03:49 np0005539550 nova_compute[257631]: 2025-11-29 08:03:49.720 257641 INFO nova.virt.libvirt.driver [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Beginning live snapshot process#033[00m
Nov 29 03:03:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:49 np0005539550 nova_compute[257631]: 2025-11-29 08:03:49.877 257641 DEBUG nova.virt.libvirt.imagebackend [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:03:50 np0005539550 nova_compute[257631]: 2025-11-29 08:03:50.150 257641 DEBUG nova.storage.rbd_utils [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] creating snapshot(8772112ef9624aa8bb5be2245d8a1d7e) on rbd image(885925d1-baac-4205-8817-4d9b92b082de_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:03:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 29 03:03:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 177 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.1 MiB/s wr, 268 op/s
Nov 29 03:03:51 np0005539550 nova_compute[257631]: 2025-11-29 08:03:51.050 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 29 03:03:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 29 03:03:51 np0005539550 nova_compute[257631]: 2025-11-29 08:03:51.504 257641 DEBUG nova.storage.rbd_utils [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] cloning vms/885925d1-baac-4205-8817-4d9b92b082de_disk@8772112ef9624aa8bb5be2245d8a1d7e to images/be6a1bec-5785-4873-ac75-fe1aa2fbc126 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:03:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:52 np0005539550 nova_compute[257631]: 2025-11-29 08:03:52.033 257641 DEBUG nova.storage.rbd_utils [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] flattening images/be6a1bec-5785-4873-ac75-fe1aa2fbc126 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:03:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:52.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 181 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.4 MiB/s wr, 287 op/s
Nov 29 03:03:53 np0005539550 nova_compute[257631]: 2025-11-29 08:03:53.190 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:03:53.188 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:03:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:03:53.189 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:03:53 np0005539550 nova_compute[257631]: 2025-11-29 08:03:53.450 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:53.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:54 np0005539550 nova_compute[257631]: 2025-11-29 08:03:54.938 257641 DEBUG nova.storage.rbd_utils [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] removing snapshot(8772112ef9624aa8bb5be2245d8a1d7e) on rbd image(885925d1-baac-4205-8817-4d9b92b082de_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:03:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 196 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 7.1 MiB/s wr, 314 op/s
Nov 29 03:03:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:03:55.192 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
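At 08:03:53 the metadata agent announced it would delay the chassis update for 2 seconds, and the DbSetCommand above is the write that follows once that window closes. A generic debounce sketch of the pattern with threading.Timer; the real agent drives ovsdbapp transactions, so this shows only the shape of the logic:

import threading
from typing import Optional

class DelayedChassisUpdate:
    """Toy debounce: wait 2s after the last SB_Global bump, then write once."""

    def __init__(self, delay: float = 2.0) -> None:
        self.delay = delay
        self._timer: Optional[threading.Timer] = None
        self._latest_cfg: Optional[int] = None

    def on_sb_global_update(self, nb_cfg: int) -> None:
        self._latest_cfg = nb_cfg
        if self._timer is not None:
            self._timer.cancel()  # restart the window on every update
        self._timer = threading.Timer(self.delay, self._write)
        self._timer.start()

    def _write(self) -> None:
        # Stand-in for the DbSetCommand on Chassis_Private seen above.
        print({"neutron:ovn-metadata-sb-cfg": str(self._latest_cfg)})

updater = DelayedChassisUpdate()
updater.on_sb_global_update(19)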
Nov 29 03:03:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:55.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:03:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 29 03:03:56 np0005539550 nova_compute[257631]: 2025-11-29 08:03:56.052 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:56.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 29 03:03:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 29 03:03:56 np0005539550 nova_compute[257631]: 2025-11-29 08:03:56.712 257641 DEBUG nova.storage.rbd_utils [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] creating snapshot(snap) on rbd image(be6a1bec-5785-4873-ac75-fe1aa2fbc126) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:03:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 204 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.8 MiB/s wr, 291 op/s
Nov 29 03:03:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 29 03:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:03:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:57.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:03:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 29 03:03:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:03:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:03:58 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 29 03:03:58 np0005539550 nova_compute[257631]: 2025-11-29 08:03:58.453 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:03:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:03:58.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:03:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 234 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 3.0 MiB/s wr, 243 op/s
Nov 29 03:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:03:59
Nov 29 03:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'volumes', 'vms', '.mgr', 'backups', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data']
Nov 29 03:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:03:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:03:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:03:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:03:59.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:00.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 305 MiB data, 582 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 8.9 MiB/s wr, 274 op/s
Nov 29 03:04:01 np0005539550 nova_compute[257631]: 2025-11-29 08:04:01.167 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:01.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:02 np0005539550 podman[291676]: 2025-11-29 08:04:02.3420876 +0000 UTC m=+0.071304097 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Nov 29 03:04:02 np0005539550 podman[291675]: 2025-11-29 08:04:02.344072862 +0000 UTC m=+0.073236678 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:04:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:02.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 365 MiB data, 637 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 13 MiB/s wr, 316 op/s
Nov 29 03:04:03 np0005539550 nova_compute[257631]: 2025-11-29 08:04:03.396 257641 INFO nova.virt.libvirt.driver [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Snapshot image upload complete#033[00m
Nov 29 03:04:03 np0005539550 nova_compute[257631]: 2025-11-29 08:04:03.397 257641 INFO nova.compute.manager [None req-71511785-50f0-4250-827f-4be0c0fa8aa1 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Took 13.93 seconds to snapshot the instance on the hypervisor.#033[00m
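The live snapshot that just completed walked through create_snap, clone, flatten, and remove_snap on RBD (08:03:50 through 08:03:56 above), finishing with a 'snap' snapshot on the clone before the Glance upload. The same flow, sketched against the rbd CLI with the pool, image, and client names from the log; note that depending on the cluster's clone format, the parent snapshot may need "rbd snap protect" first:

import subprocess

def rbd(*args: str) -> None:
    subprocess.run(
        ["rbd", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf", *args],
        check=True,
    )

disk = "vms/885925d1-baac-4205-8817-4d9b92b082de_disk"
snap = f"{disk}@8772112ef9624aa8bb5be2245d8a1d7e"
clone = "images/be6a1bec-5785-4873-ac75-fe1aa2fbc126"

rbd("snap", "create", snap)             # create_snap in the log
rbd("clone", snap, clone)               # clone vms/...@... to images/...
rbd("flatten", clone)                   # detach the clone from its parent
rbd("snap", "rm", snap)                 # remove_snap in the log
rbd("snap", "create", f"{clone}@snap")  # the final 'snap' created at 08:03:56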
Nov 29 03:04:03 np0005539550 nova_compute[257631]: 2025-11-29 08:04:03.454 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:04.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 416 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 15 MiB/s wr, 306 op/s
Nov 29 03:04:05 np0005539550 nova_compute[257631]: 2025-11-29 08:04:05.830 257641 DEBUG nova.compute.manager [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:04:05 np0005539550 nova_compute[257631]: 2025-11-29 08:04:05.872 257641 INFO nova.compute.manager [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] instance snapshotting#033[00m
Nov 29 03:04:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:05.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 29 03:04:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 29 03:04:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 29 03:04:06 np0005539550 nova_compute[257631]: 2025-11-29 08:04:06.135 257641 INFO nova.virt.libvirt.driver [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Beginning live snapshot process#033[00m
Nov 29 03:04:06 np0005539550 nova_compute[257631]: 2025-11-29 08:04:06.169 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:06 np0005539550 nova_compute[257631]: 2025-11-29 08:04:06.294 257641 DEBUG nova.virt.libvirt.imagebackend [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:04:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:04:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:06.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:04:06 np0005539550 nova_compute[257631]: 2025-11-29 08:04:06.484 257641 DEBUG nova.storage.rbd_utils [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] creating snapshot(8391123c111b4750ad457ee75ff9d414) on rbd image(1585e4de-af24-40bc-8d7d-d715357a957c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:04:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 29 03:04:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 29 03:04:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 29 03:04:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 463 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 17 MiB/s wr, 448 op/s
Nov 29 03:04:07 np0005539550 nova_compute[257631]: 2025-11-29 08:04:07.019 257641 DEBUG nova.storage.rbd_utils [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] cloning vms/1585e4de-af24-40bc-8d7d-d715357a957c_disk@8391123c111b4750ad457ee75ff9d414 to images/ecb74341-a505-46e6-a429-f2d9cba0e680 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:04:07 np0005539550 nova_compute[257631]: 2025-11-29 08:04:07.144 257641 DEBUG nova.storage.rbd_utils [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] flattening images/ecb74341-a505-46e6-a429-f2d9cba0e680 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:04:07 np0005539550 nova_compute[257631]: 2025-11-29 08:04:07.562 257641 DEBUG nova.storage.rbd_utils [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] removing snapshot(8391123c111b4750ad457ee75ff9d414) on rbd image(1585e4de-af24-40bc-8d7d-d715357a957c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:04:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:04:08 np0005539550 nova_compute[257631]: 2025-11-29 08:04:08.456 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:04:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:08.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:04:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 29 03:04:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:04:08 np0005539550 nova_compute[257631]: 2025-11-29 08:04:08.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:08 np0005539550 nova_compute[257631]: 2025-11-29 08:04:08.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:04:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 525 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 15 MiB/s wr, 780 op/s
Nov 29 03:04:09 np0005539550 nova_compute[257631]: 2025-11-29 08:04:09.045 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:04:09 np0005539550 nova_compute[257631]: 2025-11-29 08:04:09.080 257641 DEBUG nova.storage.rbd_utils [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] creating snapshot(snap) on rbd image(ecb74341-a505-46e6-a429-f2d9cba0e680) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:04:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 29 03:04:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 29 03:04:09 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 29 03:04:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:04:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:10.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:04:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 544 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 20 MiB/s rd, 15 MiB/s wr, 769 op/s
Nov 29 03:04:11 np0005539550 nova_compute[257631]: 2025-11-29 08:04:11.172 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:12.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:12 np0005539550 nova_compute[257631]: 2025-11-29 08:04:12.728 257641 INFO nova.virt.libvirt.driver [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Snapshot image upload complete
Nov 29 03:04:12 np0005539550 nova_compute[257631]: 2025-11-29 08:04:12.728 257641 INFO nova.compute.manager [None req-314e7178-6183-42c0-9be6-871fc1b3f072 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Took 6.85 seconds to snapshot the instance on the hypervisor.
Nov 29 03:04:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 544 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 17 MiB/s rd, 13 MiB/s wr, 687 op/s
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.458 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:13.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.976 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.977 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.977 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.977 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:04:13 np0005539550 nova_compute[257631]: 2025-11-29 08:04:13.978 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3364690825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.452 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:14.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.741 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.742 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000002c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.746 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.746 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.851 257641 DEBUG nova.compute.manager [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.911 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.911 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.923 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.925 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4273MB free_disk=20.789447784423828GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.925 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.925 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:14 np0005539550 nova_compute[257631]: 2025-11-29 08:04:14.935 257641 INFO nova.compute.manager [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] instance snapshotting
Nov 29 03:04:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 544 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 9.5 MiB/s wr, 562 op/s
Nov 29 03:04:15 np0005539550 nova_compute[257631]: 2025-11-29 08:04:15.182 257641 INFO nova.virt.libvirt.driver [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Beginning live snapshot process
Nov 29 03:04:15 np0005539550 nova_compute[257631]: 2025-11-29 08:04:15.230 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:04:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:04:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:15.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:04:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.173 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 29 03:04:16 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 29 03:04:16 np0005539550 podman[291935]: 2025-11-29 08:04:16.371141041 +0000 UTC m=+0.092854495 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:04:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:16.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.539 257641 DEBUG nova.virt.libvirt.imagebackend [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.557 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 885925d1-baac-4205-8817-4d9b92b082de actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.558 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 1585e4de-af24-40bc-8d7d-d715357a957c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.564 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.768 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 6c0fa93f-be85-425b-9feb-b4328ad6ab59 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.768 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.769 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.862 257641 DEBUG nova.storage.rbd_utils [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] creating snapshot(15627f63cc0e4b87a35368692a6173f6) on rbd image(885925d1-baac-4205-8817-4d9b92b082de_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:04:16 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 29 03:04:16 np0005539550 nova_compute[257631]: 2025-11-29 08:04:16.987 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 544 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 3.9 MiB/s wr, 292 op/s
Nov 29 03:04:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:04:17Z|00177|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 03:04:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3940103236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.455 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.461 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.531 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.623 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.623 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.624 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.624 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.625 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.632 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.632 257641 INFO nova.compute.claims [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:04:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 29 03:04:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:17.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:17 np0005539550 nova_compute[257631]: 2025-11-29 08:04:17.985 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 29 03:04:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.237 257641 DEBUG nova.storage.rbd_utils [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] cloning vms/885925d1-baac-4205-8817-4d9b92b082de_disk@15627f63cc0e4b87a35368692a6173f6 to images/f84b19f0-6dcc-469f-91bc-16d537cf010a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:04:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328948617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.448 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.453 257641 DEBUG nova.compute.provider_tree [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.460 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.469 257641 DEBUG nova.scheduler.client.report [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:04:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.496 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.515 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "12c3101e-b575-4513-8124-77e3c35b2da3" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.515 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "12c3101e-b575-4513-8124-77e3c35b2da3" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.537 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] No node specified, defaulting to compute-0.ctlplane.example.com _get_nodename /usr/lib/python3.9/site-packages/nova/compute/manager.py:10505
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.572 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "12c3101e-b575-4513-8124-77e3c35b2da3" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.573 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.633 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.633 257641 DEBUG nova.network.neutron [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.646 257641 DEBUG nova.storage.rbd_utils [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] flattening images/f84b19f0-6dcc-469f-91bc-16d537cf010a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.770 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.771 257641 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.774 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.774 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.775 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.802 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.803 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-885925d1-baac-4205-8817-4d9b92b082de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.803 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-885925d1-baac-4205-8817-4d9b92b082de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.803 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.803 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 885925d1-baac-4205-8817-4d9b92b082de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.806 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.905 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.907 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.908 257641 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Creating image(s)
Nov 29 03:04:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:04:18.937 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:04:18.938 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:04:18.939 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:18 np0005539550 nova_compute[257631]: 2025-11-29 08:04:18.965 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 552 MiB data, 781 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 142 op/s
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.041 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.069 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.073 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.135 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.136 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.137 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.137 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.166 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.169 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.206 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.350 257641 DEBUG nova.network.neutron [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.350 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.520 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.544 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-885925d1-baac-4205-8817-4d9b92b082de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.544 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.545 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.545 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.545 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.545 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009971551914665922 of space, bias 1.0, pg target 2.9914655743997765 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0050480265912897825 of space, bias 1.0, pg target 1.5043119242043552 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:04:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
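
[Editor's note] The autoscaler arithmetic above checks out against this cluster's 3 OSDs and the default mon_target_pg_per_osd of 100: pg target ≈ usage_ratio × 3 × 100 × bias, then quantized to a power of two but never below the pool's pg_num_min (apparently 32 for ordinary pools here, 16 for the cephfs metadata pool, 1 for .mgr). A rough reconstruction under those assumptions; Ceph's real code works from subtree capacity, so the last digits can differ slightly (as they do for cephfs.cephfs.meta):

    # Rough reconstruction of the pg_autoscaler numbers in the lines above;
    # assumes 3 OSDs, mon_target_pg_per_osd=100, and pg_num_min as noted.
    import math

    def pg_target(usage_ratio, bias=1.0, osds=3, pg_per_osd=100, pg_num_min=32):
        raw = usage_ratio * osds * pg_per_osd * bias
        quantized = 2 ** max(0, round(math.log2(raw))) if raw > 0 else 1
        return raw, max(quantized, pg_num_min)

    print(pg_target(0.009971551914665922))                  # 'vms': ~2.99 -> 32
    print(pg_target(2.0538165363856318e-05, pg_num_min=1))  # '.mgr': ~0.006 -> 1
    print(pg_target(1.4540294062907128e-06, bias=4.0, pg_num_min=16))  # meta -> 16
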
Nov 29 03:04:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:19.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:19 np0005539550 nova_compute[257631]: 2025-11-29 08:04:19.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:04:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 29 03:04:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 29 03:04:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.336 257641 DEBUG nova.storage.rbd_utils [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] removing snapshot(15627f63cc0e4b87a35368692a6173f6) on rbd image(885925d1-baac-4205-8817-4d9b92b082de_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.418 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.494 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] resizing rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
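
[Editor's note] Two steps are visible here: the cached base image is imported flat into the vms pool, then the new image is grown to the flavor's 1 GiB root disk (1073741824 bytes). A sketch of the same sequence with the rbd CLI, reusing the arguments from the log; a reachable cluster and the 'openstack' cephx user are assumed:

    # Sketch of the import-then-resize sequence logged above, via the rbd CLI.
    import subprocess

    BASE = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    IMAGE = "6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMAGE,
                    "--image-format=2", *CEPH], check=True)
    # rbd's --size defaults to megabytes: 1024 MB == the 1073741824 bytes logged.
    subprocess.run(["rbd", "resize", "--pool", "vms", IMAGE, "--size", "1024",
                    *CEPH], check=True)
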
Nov 29 03:04:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:20.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.671 257641 DEBUG nova.objects.instance [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'migration_context' on Instance uuid 6c0fa93f-be85-425b-9feb-b4328ad6ab59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.685 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.686 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Ensure instance console log exists: /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.687 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.687 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.687 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.690 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.695 257641 WARNING nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.701 257641 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.702 257641 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.705 257641 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.706 257641 DEBUG nova.virt.libvirt.host [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.707 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.707 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.708 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.708 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.708 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.709 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.709 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.709 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.709 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.710 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.710 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.710 257641 DEBUG nova.virt.hardware [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
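
[Editor's note] With every preference and limit left at the defaults (0:0:0 preferred, 65536 ceilings), nova enumerates the sockets × cores × threads factorizations of the vCPU count; for 1 vCPU the only factorization is 1:1:1, which is what lands in the <topology> element of the XML further down. A simplified sketch of that enumeration (nova's real version also orders candidates by preference):

    # Simplified sketch of the topology enumeration logged above.
    from itertools import product

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        bound = lambda m: range(1, min(vcpus, m) + 1)
        return [(s, c, t)
                for s, c, t in product(bound(max_s), bound(max_c), bound(max_t))
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single candidate logged
    print(possible_topologies(4))  # (1,1,4), (1,2,2), (1,4,1), (2,1,2), ...
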
Nov 29 03:04:20 np0005539550 nova_compute[257631]: 2025-11-29 08:04:20.713 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 654 MiB data, 833 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 12 MiB/s wr, 200 op/s
Nov 29 03:04:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 29 03:04:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 29 03:04:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.097 257641 DEBUG nova.storage.rbd_utils [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] creating snapshot(snap) on rbd image(f84b19f0-6dcc-469f-91bc-16d537cf010a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:04:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774783828' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.151 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
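
[Editor's note] This mon dump round-trip is how nova learns the monitor endpoints that reappear as the three <host> elements in the disk XML below. A sketch of extracting them from the same command; the JSON field names ("mons", "name", "addr") match the mon dump format I would expect, but verify against your Ceph release:

    # Sketch: recover monitor host:port pairs from `ceph mon dump --format=json`.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    for mon in json.loads(out)["mons"]:
        # "addr" looks like "192.168.122.100:6789/0"; drop the trailing nonce.
        hostport = mon["addr"].split("/")[0]
        print(mon["name"], hostport)
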
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.177 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.181 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.201 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1037591887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.625 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.629 257641 DEBUG nova.objects.instance [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c0fa93f-be85-425b-9feb-b4328ad6ab59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.650 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <uuid>6c0fa93f-be85-425b-9feb-b4328ad6ab59</uuid>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <name>instance-00000034</name>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersOnMultiNodesTest-server-1399942865-2</nova:name>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:04:20</nova:creationTime>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <nova:user uuid="1b85c3911b7c4e558779a15904c3ce58">tempest-ServersOnMultiNodesTest-648608509-project-member</nova:user>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <nova:project uuid="4fe9ef6d6ed6441e87cf5bdb5d40af4b">tempest-ServersOnMultiNodesTest-648608509</nova:project>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <entry name="serial">6c0fa93f-be85-425b-9feb-b4328ad6ab59</entry>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <entry name="uuid">6c0fa93f-be85-425b-9feb-b4328ad6ab59</entry>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/console.log" append="off"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:04:21 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:04:21 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:04:21 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:04:21 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
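
[Editor's note] The dumped domain is easiest to digest as data: two RBD-backed <disk> elements (root disk on virtio/vda, config-drive CDROM on sata/sda), each repeating the three monitor hosts and the cephx secret reference. A small standard-library sketch that pulls those fields back out, assuming the XML has been saved to a local file named instance-00000034.xml:

    # Sketch: extract the rbd disk sources from a domain XML like the one above.
    import xml.etree.ElementTree as ET

    dom = ET.parse("instance-00000034.xml").getroot()  # assumed local copy
    for disk in dom.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            hosts = ["%s:%s" % (h.get("name"), h.get("port"))
                     for h in src.findall("host")]
            print(disk.get("device"), tgt.get("dev"), src.get("name"), hosts)
    # disk  vda vms/6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk        [...]
    # cdrom sda vms/6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config [...]
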
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.711 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.711 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.712 257641 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Using config drive
Nov 29 03:04:21 np0005539550 nova_compute[257631]: 2025-11-29 08:04:21.740 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:21.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 29 03:04:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 29 03:04:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.212 257641 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Creating config drive at /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/disk.config
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.217 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzgc8309_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.346 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzgc8309_" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.380 257641 DEBUG nova.storage.rbd_utils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] rbd image 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.385 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/disk.config 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:22.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.578 257641 DEBUG oslo_concurrency.processutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/disk.config 6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.579 257641 INFO nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Deleting local config drive /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59/disk.config because it was imported into RBD.
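
[Editor's note] The config-drive lifecycle logged here is: render the metadata into a temp directory, pack it with mkisofs (Joliet + Rock Ridge, volume label config-2), import the ISO into the vms pool as <uuid>_disk.config, then remove the local copy. A sketch of the same pipeline; the metadata file written below is a placeholder, not nova's full layout:

    # Sketch of the config-drive pipeline: mkisofs -> rbd import -> delete.
    import pathlib
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp, "cd")
        (src / "openstack" / "latest").mkdir(parents=True)
        (src / "openstack" / "latest" / "meta_data.json").write_text(
            '{"uuid": "demo"}')  # placeholder content

        iso = pathlib.Path(tmp, "disk.config")
        subprocess.run(["/usr/bin/mkisofs", "-o", str(iso), "-ldots",
                        "-allow-lowercase", "-allow-multidot", "-l", "-quiet",
                        "-J", "-r", "-V", "config-2", str(src)], check=True)
        subprocess.run(["rbd", "import", "--pool", "vms", str(iso),
                        "6c0fa93f-be85-425b-9feb-b4328ad6ab59_disk.config",
                        "--image-format=2", "--id", "openstack",
                        "--conf", "/etc/ceph/ceph.conf"], check=True)
        iso.unlink()  # the local ISO is redundant once it lives in RBD
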
Nov 29 03:04:22 np0005539550 systemd-machined[216673]: New machine qemu-26-instance-00000034.
Nov 29 03:04:22 np0005539550 systemd[1]: Started Virtual Machine qemu-26-instance-00000034.
Nov 29 03:04:22 np0005539550 nova_compute[257631]: 2025-11-29 08:04:22.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 715 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 9.5 MiB/s rd, 20 MiB/s wr, 332 op/s
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.054 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403463.053609, 6c0fa93f-be85-425b-9feb-b4328ad6ab59 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.054 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] VM Resumed (Lifecycle Event)
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.056 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.057 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.060 257641 INFO nova.virt.libvirt.driver [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Instance spawned successfully.
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.060 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.088 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.093 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.093 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.094 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.094 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.094 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.095 257641 DEBUG nova.virt.libvirt.driver [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.098 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.142 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.143 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403463.053737, 6c0fa93f-be85-425b-9feb-b4328ad6ab59 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.143 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] VM Started (Lifecycle Event)
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.178 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.182 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.189 257641 INFO nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Took 4.28 seconds to spawn the instance on the hypervisor.
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.189 257641 DEBUG nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.223 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] During sync_power_state the instance has a pending task (spawning). Skip.
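
[Editor's note] The integers in the sync messages are nova.compute.power_state values (0 = NOSTATE, 1 = RUNNING), and the "Skip." is deliberate: while task_state is still 'spawning', the handler leaves reconciliation to the in-flight build rather than racing it. A toy sketch of that guard; the function is illustrative, not nova's code:

    # Toy sketch of the guard behind "During sync_power_state ... Skip."
    # The constants mirror nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            return "pending task (%s), skip" % task_state
        if db_state != vm_state:
            return "update DB power_state %d -> %d" % (db_state, vm_state)
        return "in sync"

    print(sync_power_state(NOSTATE, RUNNING, "spawning"))  # skip, as logged
    print(sync_power_state(NOSTATE, RUNNING, None))        # would update the DB
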
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.275 257641 INFO nova.compute.manager [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Took 7.02 seconds to build instance.
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.295 257641 DEBUG oslo_concurrency.lockutils [None req-f6a8c1e4-9f8b-4be7-94d0-0d7b902e018b 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:23 np0005539550 nova_compute[257631]: 2025-11-29 08:04:23.461 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:23.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:24 np0005539550 nova_compute[257631]: 2025-11-29 08:04:24.241 257641 INFO nova.virt.libvirt.driver [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Snapshot image upload complete
Nov 29 03:04:24 np0005539550 nova_compute[257631]: 2025-11-29 08:04:24.242 257641 INFO nova.compute.manager [None req-57aecf79-8106-4899-91a8-5a980c29507c e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Took 9.31 seconds to snapshot the instance on the hypervisor.
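
[Editor's note] The 9.31-second snapshot job that completes here bracketed the Glance upload with RBD snapshots; the remove_snap and create_snap entries at 08:04:20-08:04:21 above are individual steps of it. The underlying primitives, sketched at the CLI level (image name copied from the log, snapshot name illustrative):

    # Sketch of the RBD snapshot primitives used around the snapshot upload.
    import subprocess

    def rbd(*args):
        subprocess.run(["rbd", *args, "--id", "openstack",
                        "--conf", "/etc/ceph/ceph.conf"], check=True)

    image = "vms/885925d1-baac-4205-8817-4d9b92b082de_disk"
    rbd("snap", "create", image + "@snap")  # freeze a point-in-time view
    # ... the frozen view is exported/uploaded to Glance here ...
    rbd("snap", "rm", image + "@snap")      # drop the snapshot afterwards
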
Nov 29 03:04:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:24 np0005539550 nova_compute[257631]: 2025-11-29 08:04:24.928 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:04:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 817 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 25 MiB/s wr, 536 op/s
Nov 29 03:04:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:25.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.176 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 29 03:04:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 29 03:04:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 29 03:04:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.598 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.599 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.599 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.599 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.600 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
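
[Editor's note] These acquire/release pairs are oslo.concurrency named locks: terminate serializes on the instance UUID (the same lock the build path held for 8.383s above), and event clearing takes a narrower "<uuid>-events" lock inside it. The idiom, sketched directly against lockutils (lock names copied from the log):

    # Sketch of the oslo.concurrency named-lock idiom behind these messages.
    from oslo_concurrency import lockutils

    UUID = "6c0fa93f-be85-425b-9feb-b4328ad6ab59"

    with lockutils.lock(UUID):            # what do_terminate_instance holds
        with lockutils.lock(UUID + "-events"):
            pass                          # clear-events critical section

    @lockutils.synchronized(UUID)         # decorator form of the same lock
    def do_terminate_instance():
        pass
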
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.601 257641 INFO nova.compute.manager [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Terminating instance
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.602 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "refresh_cache-6c0fa93f-be85-425b-9feb-b4328ad6ab59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.602 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquired lock "refresh_cache-6c0fa93f-be85-425b-9feb-b4328ad6ab59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.602 257641 DEBUG nova.network.neutron [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:26 np0005539550 nova_compute[257631]: 2025-11-29 08:04:26.883 257641 DEBUG nova.network.neutron [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 874 MiB data, 961 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 18 MiB/s wr, 695 op/s
Nov 29 03:04:27 np0005539550 nova_compute[257631]: 2025-11-29 08:04:27.276 257641 DEBUG nova.network.neutron [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:27 np0005539550 nova_compute[257631]: 2025-11-29 08:04:27.400 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Releasing lock "refresh_cache-6c0fa93f-be85-425b-9feb-b4328ad6ab59" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:27 np0005539550 nova_compute[257631]: 2025-11-29 08:04:27.401 257641 DEBUG nova.compute.manager [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:04:27 np0005539550 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000034.scope: Deactivated successfully.
Nov 29 03:04:27 np0005539550 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000034.scope: Consumed 4.874s CPU time.
Nov 29 03:04:27 np0005539550 systemd-machined[216673]: Machine qemu-26-instance-00000034 terminated.
Nov 29 03:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:27 np0005539550 nova_compute[257631]: 2025-11-29 08:04:27.852 257641 INFO nova.virt.libvirt.driver [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Instance destroyed successfully.
Nov 29 03:04:27 np0005539550 nova_compute[257631]: 2025-11-29 08:04:27.852 257641 DEBUG nova.objects.instance [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lazy-loading 'resources' on Instance uuid 6c0fa93f-be85-425b-9feb-b4328ad6ab59 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
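[annotation] "Instance destroyed successfully" is logged once nova's libvirt driver has torn down the qemu domain whose machine scope systemd-machined reported terminated just above. A hedged sketch of the equivalent direct libvirt-python calls (domain name taken from the systemd scope; the exact flags nova passes are not shown in this log):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000034')  # name from the machine scope above
    if dom.isActive():
        dom.destroy()   # hard power-off; systemd-machined then logs "terminated"
    dom.undefine()      # drop the domain definition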
Nov 29 03:04:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
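[annotation] The recurring beast lines are radosgw's frontend access log: client address, user, timestamp, request line, HTTP status, bytes sent, and latency (the HEAD / probes here are haproxy health checks). A small parser fitted to the shape of these lines (the regex is an assumption derived from the log, not radosgw's own format definition):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:04:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('addr'), m.group('status'), float(m.group('latency')))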
Nov 29 03:04:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:28 np0005539550 nova_compute[257631]: 2025-11-29 08:04:28.465 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 791 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 14 MiB/s wr, 750 op/s
Nov 29 03:04:29 np0005539550 nova_compute[257631]: 2025-11-29 08:04:29.830 257641 INFO nova.virt.libvirt.driver [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Deleting instance files /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59_del
Nov 29 03:04:29 np0005539550 nova_compute[257631]: 2025-11-29 08:04:29.831 257641 INFO nova.virt.libvirt.driver [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Deletion of /var/lib/nova/instances/6c0fa93f-be85-425b-9feb-b4328ad6ab59_del complete
Nov 29 03:04:29 np0005539550 nova_compute[257631]: 2025-11-29 08:04:29.877 257641 INFO nova.compute.manager [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Took 2.48 seconds to destroy the instance on the hypervisor.
Nov 29 03:04:29 np0005539550 nova_compute[257631]: 2025-11-29 08:04:29.877 257641 DEBUG oslo.service.loopingcall [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:04:29 np0005539550 nova_compute[257631]: 2025-11-29 08:04:29.877 257641 DEBUG nova.compute.manager [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:04:29 np0005539550 nova_compute[257631]: 2025-11-29 08:04:29.878 257641 DEBUG nova.network.neutron [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:04:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.026 257641 DEBUG nova.network.neutron [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.048 257641 DEBUG nova.network.neutron [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.068 257641 INFO nova.compute.manager [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Took 0.19 seconds to deallocate network for instance.
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.123 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.124 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:30 np0005539550 podman[292698]: 2025-11-29 08:04:30.14296977 +0000 UTC m=+0.065773004 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.255 257641 DEBUG oslo_concurrency.processutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:30 np0005539550 podman[292698]: 2025-11-29 08:04:30.26230891 +0000 UTC m=+0.185112144 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:04:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449253743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.704 257641 DEBUG oslo_concurrency.processutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
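[annotation] The capacity probe bracketed by the two processutils lines above can be reproduced directly. A minimal sketch, assuming the same client keyring and conf path are readable (the 'stats' field names are from ceph df's JSON output):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # Cluster-wide totals that the libvirt driver turns into DISK_GB inventory.
    print(stats['total_bytes'], stats['total_avail_bytes'])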
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.710 257641 DEBUG nova.compute.provider_tree [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.772 257641 DEBUG nova.scheduler.client.report [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
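[annotation] As a quick check of the inventory dict above: placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7168 MB of RAM, and roughly 17 GB of disk:

    # Worked from the logged inventory data.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1 (float)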
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.811 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:30 np0005539550 nova_compute[257631]: 2025-11-29 08:04:30.850 257641 INFO nova.scheduler.client.report [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Deleted allocations for instance 6c0fa93f-be85-425b-9feb-b4328ad6ab59
Nov 29 03:04:30 np0005539550 podman[292919]: 2025-11-29 08:04:30.872931361 +0000 UTC m=+0.057713815 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:04:30 np0005539550 podman[292919]: 2025-11-29 08:04:30.880356463 +0000 UTC m=+0.065138877 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 29 03:04:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 697 MiB data, 902 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 750 op/s
Nov 29 03:04:31 np0005539550 podman[292984]: 2025-11-29 08:04:31.080139316 +0000 UTC m=+0.057964041 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=2.2.4, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20)
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 29 03:04:31 np0005539550 podman[292984]: 2025-11-29 08:04:31.095236817 +0000 UTC m=+0.073061542 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Nov 29 03:04:31 np0005539550 nova_compute[257631]: 2025-11-29 08:04:31.132 257641 DEBUG oslo_concurrency.lockutils [None req-ae6534bf-bc3a-45a9-b32a-d259e24ccdba 1b85c3911b7c4e558779a15904c3ce58 4fe9ef6d6ed6441e87cf5bdb5d40af4b - - default default] Lock "6c0fa93f-be85-425b-9feb-b4328ad6ab59" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:31 np0005539550 nova_compute[257631]: 2025-11-29 08:04:31.178 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:04:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:04:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:31.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:32 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b7459f01-2d87-47d1-b5ea-dd446fb8922b does not exist
Nov 29 03:04:32 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2cfe1065-e80f-4cc0-b89f-a647fa9340ad does not exist
Nov 29 03:04:32 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7b5c098f-46e2-47b0-be57-2b404c2b5bf0 does not exist
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 29 03:04:32 np0005539550 podman[293173]: 2025-11-29 08:04:32.900786287 +0000 UTC m=+0.053081715 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:04:32 np0005539550 podman[293172]: 2025-11-29 08:04:32.910958021 +0000 UTC m=+0.064463950 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:04:32 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 29 03:04:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 698 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 1.2 MiB/s wr, 408 op/s
Nov 29 03:04:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:04:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:04:33 np0005539550 podman[293322]: 2025-11-29 08:04:33.357106653 +0000 UTC m=+0.044487743 container create 5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:04:33 np0005539550 systemd[1]: Started libpod-conmon-5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3.scope.
Nov 29 03:04:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:04:33 np0005539550 podman[293322]: 2025-11-29 08:04:33.333459091 +0000 UTC m=+0.020840201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:33 np0005539550 podman[293322]: 2025-11-29 08:04:33.44002196 +0000 UTC m=+0.127403050 container init 5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:04:33 np0005539550 podman[293322]: 2025-11-29 08:04:33.447550595 +0000 UTC m=+0.134931665 container start 5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:33 np0005539550 podman[293322]: 2025-11-29 08:04:33.452877443 +0000 UTC m=+0.140258543 container attach 5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:04:33 np0005539550 nervous_jones[293338]: 167 167
Nov 29 03:04:33 np0005539550 systemd[1]: libpod-5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3.scope: Deactivated successfully.
Nov 29 03:04:33 np0005539550 podman[293322]: 2025-11-29 08:04:33.456078946 +0000 UTC m=+0.143460016 container died 5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:04:33 np0005539550 nova_compute[257631]: 2025-11-29 08:04:33.516 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3668c5a5e2cad9b167ef4d3d7c7914c84c03757c94f390b807be0320d1f22328-merged.mount: Deactivated successfully.
Nov 29 03:04:33 np0005539550 podman[293322]: 2025-11-29 08:04:33.54700697 +0000 UTC m=+0.234388040 container remove 5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:04:33 np0005539550 systemd[1]: libpod-conmon-5941d2a7edb53e5c3422b284c46577b89c6b5389f0f92cc69583ed1b4085ccf3.scope: Deactivated successfully.
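[annotation] The create/init/start/attach/died/remove sequence for nervous_jones above is the footprint of a short-lived podman run that cephadm uses for host probes; the "167 167" it printed is consistent with a uid/gid query inside the Ceph image. A hedged equivalent (the probe command is hypothetical, since only its output appears in this log; the image digest is taken from the podman lines above):

    import subprocess

    img = ('quay.io/ceph/ceph@sha256:'
           '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # Hypothetical probe: report the ceph uid/gid baked into the image.
    out = subprocess.check_output(
        ['podman', 'run', '--rm', img, 'stat', '-c', '%u %g', '/var/lib/ceph'])
    print(out.decode().strip())  # e.g. "167 167", as nervous_jones logged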
Nov 29 03:04:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:04:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3001.4 total, 600.0 interval#012Cumulative writes: 19K writes, 75K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 19K writes, 5879 syncs, 3.26 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6886 writes, 29K keys, 6886 commit groups, 1.0 writes per commit group, ingest: 30.55 MB, 0.05 MB/s#012Interval WAL: 6886 writes, 2590 syncs, 2.66 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
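[annotation] The interval WAL figures in the RocksDB stats dump are internally consistent; the writes-per-sync ratio checks out against the reported counts:

    # From the dump above: Interval WAL: 6886 writes, 2590 syncs.
    print(round(6886 / 2590, 2))  # 2.66, matching "2.66 writes per sync"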
Nov 29 03:04:33 np0005539550 podman[293361]: 2025-11-29 08:04:33.729758082 +0000 UTC m=+0.050731535 container create ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:04:33 np0005539550 systemd[1]: Started libpod-conmon-ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1.scope.
Nov 29 03:04:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:04:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d88cc1605bf73f52afe50c000c7bbf394f9cc42f5cafad5ee95e66cb04102f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d88cc1605bf73f52afe50c000c7bbf394f9cc42f5cafad5ee95e66cb04102f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d88cc1605bf73f52afe50c000c7bbf394f9cc42f5cafad5ee95e66cb04102f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d88cc1605bf73f52afe50c000c7bbf394f9cc42f5cafad5ee95e66cb04102f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:33 np0005539550 podman[293361]: 2025-11-29 08:04:33.708572533 +0000 UTC m=+0.029546006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d88cc1605bf73f52afe50c000c7bbf394f9cc42f5cafad5ee95e66cb04102f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:33 np0005539550 podman[293361]: 2025-11-29 08:04:33.813621063 +0000 UTC m=+0.134594516 container init ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:04:33 np0005539550 podman[293361]: 2025-11-29 08:04:33.821269011 +0000 UTC m=+0.142242464 container start ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:04:33 np0005539550 podman[293361]: 2025-11-29 08:04:33.825386378 +0000 UTC m=+0.146359841 container attach ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:33.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:34 np0005539550 gallant_einstein[293377]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:04:34 np0005539550 gallant_einstein[293377]: --> relative data size: 1.0
Nov 29 03:04:34 np0005539550 gallant_einstein[293377]: --> All data devices are unavailable
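[annotation] gallant_einstein's three lines above are ceph-volume's device report: one LVM-backed candidate was passed in and rejected, so no new OSDs can be created on this host. A hedged sketch of reading the same availability view from ceph-volume's JSON inventory (field names are from ceph-volume's inventory output; assumes ceph-volume is on PATH, e.g. inside a cephadm shell):

    import json
    import subprocess

    devices = json.loads(subprocess.check_output(
        ['ceph-volume', 'inventory', '--format', 'json']))
    for dev in devices:
        # available=False plus rejected_reasons explains messages like
        # "All data devices are unavailable" above.
        print(dev['path'], dev['available'], dev['rejected_reasons'])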
Nov 29 03:04:34 np0005539550 systemd[1]: libpod-ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1.scope: Deactivated successfully.
Nov 29 03:04:34 np0005539550 conmon[293377]: conmon ad4def00f8f8e0cae336 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1.scope/container/memory.events
Nov 29 03:04:34 np0005539550 podman[293361]: 2025-11-29 08:04:34.740069642 +0000 UTC m=+1.061043095 container died ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:04:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-86d88cc1605bf73f52afe50c000c7bbf394f9cc42f5cafad5ee95e66cb04102f-merged.mount: Deactivated successfully.
Nov 29 03:04:34 np0005539550 podman[293361]: 2025-11-29 08:04:34.800506517 +0000 UTC m=+1.121479970 container remove ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:04:34 np0005539550 systemd[1]: libpod-conmon-ad4def00f8f8e0cae336a38e0b165359705074393b49c420a11b92f8e26991b1.scope: Deactivated successfully.
Nov 29 03:04:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 722 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 399 op/s
Nov 29 03:04:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 29 03:04:35 np0005539550 podman[293547]: 2025-11-29 08:04:35.40700303 +0000 UTC m=+0.019245710 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:35 np0005539550 podman[293547]: 2025-11-29 08:04:35.516513385 +0000 UTC m=+0.128756055 container create 754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:04:35 np0005539550 systemd[1]: Started libpod-conmon-754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915.scope.
Nov 29 03:04:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:04:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 29 03:04:35 np0005539550 podman[293547]: 2025-11-29 08:04:35.88977811 +0000 UTC m=+0.502020800 container init 754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:04:35 np0005539550 podman[293547]: 2025-11-29 08:04:35.902441328 +0000 UTC m=+0.514684008 container start 754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:04:35 np0005539550 youthful_cori[293563]: 167 167
Nov 29 03:04:35 np0005539550 systemd[1]: libpod-754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915.scope: Deactivated successfully.
Nov 29 03:04:35 np0005539550 conmon[293563]: conmon 754f19ef38c8a67ff14d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915.scope/container/memory.events
Nov 29 03:04:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:35.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 29 03:04:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 29 03:04:36 np0005539550 podman[293547]: 2025-11-29 08:04:36.071111826 +0000 UTC m=+0.683354526 container attach 754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:04:36 np0005539550 podman[293547]: 2025-11-29 08:04:36.072308127 +0000 UTC m=+0.684550797 container died 754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:04:36 np0005539550 nova_compute[257631]: 2025-11-29 08:04:36.181 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 29 03:04:36 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 29 03:04:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3fba93de90d0bf7612becb50d95c6bf8c6850feff8e374ab99ff223d9f0a64e1-merged.mount: Deactivated successfully.
Nov 29 03:04:36 np0005539550 podman[293547]: 2025-11-29 08:04:36.367992093 +0000 UTC m=+0.980234773 container remove 754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:04:36 np0005539550 systemd[1]: libpod-conmon-754f19ef38c8a67ff14d6fc9227300830814555dbe68e062e432737949df0915.scope: Deactivated successfully.
Nov 29 03:04:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:36 np0005539550 podman[293588]: 2025-11-29 08:04:36.594009187 +0000 UTC m=+0.056850542 container create 1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:04:36 np0005539550 systemd[1]: Started libpod-conmon-1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04.scope.
Nov 29 03:04:36 np0005539550 podman[293588]: 2025-11-29 08:04:36.566925896 +0000 UTC m=+0.029767271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:04:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1020712cbd2f534802d00949e2dd5288e6a2962362c81d02aaaffea4832761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1020712cbd2f534802d00949e2dd5288e6a2962362c81d02aaaffea4832761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1020712cbd2f534802d00949e2dd5288e6a2962362c81d02aaaffea4832761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da1020712cbd2f534802d00949e2dd5288e6a2962362c81d02aaaffea4832761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
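These kernel lines note that the overlay mounts sit on XFS with the legacy timestamp range, which ends at 0x7fffffff seconds after the Unix epoch; converting the quoted limit shows the familiar year-2038 cutoff:

```python
# The kernel quotes the limit as 0x7fffffff seconds since the epoch;
# converting it gives the classic year-2038 boundary.
from datetime import datetime, timezone

print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
```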
Nov 29 03:04:36 np0005539550 podman[293588]: 2025-11-29 08:04:36.690121524 +0000 UTC m=+0.152962899 container init 1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:04:36 np0005539550 podman[293588]: 2025-11-29 08:04:36.696064918 +0000 UTC m=+0.158906273 container start 1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:04:36 np0005539550 podman[293588]: 2025-11-29 08:04:36.700168004 +0000 UTC m=+0.163009389 container attach 1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 631 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 198 op/s
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]: {
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:    "0": [
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:        {
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "devices": [
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "/dev/loop3"
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            ],
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "lv_name": "ceph_lv0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "lv_size": "7511998464",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "name": "ceph_lv0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "tags": {
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.cluster_name": "ceph",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.crush_device_class": "",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.encrypted": "0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.osd_id": "0",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.type": "block",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:                "ceph.vdo": "0"
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            },
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "type": "block",
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:            "vg_name": "ceph_vg0"
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:        }
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]:    ]
Nov 29 03:04:37 np0005539550 zen_heyrovsky[293605]: }
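The JSON the zen_heyrovsky container just printed maps each OSD id to its LVM record; this is the shape `ceph-volume lvm list --format json` produces, though the log shows only the output, not the command. A sketch, under that assumption, of reducing the report to an osd-id → block-device map:

```python
# Sketch: reduce the report above (osd id -> list of LV records) to a
# plain {osd_id: block device} map. Key names mirror the log output.
import json

def osd_block_devices(report: dict) -> dict:
    devices = {}
    for osd_id, lvs in report.items():
        for lv in lvs:
            if lv.get("type") == "block":
                devices[int(osd_id)] = lv["lv_path"]
    return devices

sample = json.loads('{"0": [{"type": "block", "lv_path": "/dev/ceph_vg0/ceph_lv0"}]}')
print(osd_block_devices(sample))  # {0: '/dev/ceph_vg0/ceph_lv0'}
```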
Nov 29 03:04:37 np0005539550 systemd[1]: libpod-1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04.scope: Deactivated successfully.
Nov 29 03:04:37 np0005539550 podman[293588]: 2025-11-29 08:04:37.482569203 +0000 UTC m=+0.945410568 container died 1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:04:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-da1020712cbd2f534802d00949e2dd5288e6a2962362c81d02aaaffea4832761-merged.mount: Deactivated successfully.
Nov 29 03:04:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 29 03:04:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 29 03:04:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 29 03:04:37 np0005539550 podman[293588]: 2025-11-29 08:04:37.54348981 +0000 UTC m=+1.006331165 container remove 1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:04:37 np0005539550 systemd[1]: libpod-conmon-1faea8a65d88f7865bb804bac633667f8257644d81ab591a88dad11b22337b04.scope: Deactivated successfully.
Nov 29 03:04:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:37.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.144 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "1585e4de-af24-40bc-8d7d-d715357a957c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.145 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "1585e4de-af24-40bc-8d7d-d715357a957c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.145 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "1585e4de-af24-40bc-8d7d-d715357a957c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.145 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "1585e4de-af24-40bc-8d7d-d715357a957c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.145 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "1585e4de-af24-40bc-8d7d-d715357a957c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.146 257641 INFO nova.compute.manager [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Terminating instance
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.147 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "refresh_cache-1585e4de-af24-40bc-8d7d-d715357a957c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.147 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquired lock "refresh_cache-1585e4de-af24-40bc-8d7d-d715357a957c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.147 257641 DEBUG nova.network.neutron [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:38 np0005539550 podman[293769]: 2025-11-29 08:04:38.219639348 +0000 UTC m=+0.040628003 container create 2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:04:38 np0005539550 systemd[1]: Started libpod-conmon-2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332.scope.
Nov 29 03:04:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:04:38 np0005539550 podman[293769]: 2025-11-29 08:04:38.28924743 +0000 UTC m=+0.110236115 container init 2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:04:38 np0005539550 podman[293769]: 2025-11-29 08:04:38.295273996 +0000 UTC m=+0.116262671 container start 2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:04:38 np0005539550 podman[293769]: 2025-11-29 08:04:38.29852011 +0000 UTC m=+0.119508795 container attach 2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:04:38 np0005539550 infallible_bardeen[293786]: 167 167
Nov 29 03:04:38 np0005539550 podman[293769]: 2025-11-29 08:04:38.204239509 +0000 UTC m=+0.025228184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:38 np0005539550 systemd[1]: libpod-2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332.scope: Deactivated successfully.
Nov 29 03:04:38 np0005539550 podman[293769]: 2025-11-29 08:04:38.301074887 +0000 UTC m=+0.122063542 container died 2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.323 257641 DEBUG nova.network.neutron [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7122ad447efc3807e17e93db2e91733b57bed84e8a08dfa079fd58d686819f72-merged.mount: Deactivated successfully.
Nov 29 03:04:38 np0005539550 podman[293769]: 2025-11-29 08:04:38.343392832 +0000 UTC m=+0.164381487 container remove 2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:04:38 np0005539550 systemd[1]: libpod-conmon-2fa9a59f6c766985da47f2b75ecf1785512a25fa9527738f8eab73e1472e9332.scope: Deactivated successfully.
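Each of these cephadm probes is a short-lived container: podman logs create, init, start, and attach, the payload prints its result, and the container dies and is removed within about a second, with systemd tearing down the matching libpod and conmon scopes. A rough Python equivalent of the one-shot pattern; the image digest is the one in the log, but the ceph-volume arguments are an assumption, since the log shows only the output:

```python
# Rough sketch of the one-shot container pattern above: run a command in
# a fresh container, capture stdout, and let --rm produce the
# died/remove events. The ceph-volume arguments are illustrative.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

result = subprocess.run(
    ["podman", "run", "--rm", IMAGE,
     "ceph-volume", "lvm", "list", "--format", "json"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```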
Nov 29 03:04:38 np0005539550 podman[293809]: 2025-11-29 08:04:38.509025761 +0000 UTC m=+0.046486825 container create 317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:04:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:38 np0005539550 nova_compute[257631]: 2025-11-29 08:04:38.520 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:38 np0005539550 systemd[1]: Started libpod-conmon-317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba.scope.
Nov 29 03:04:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:04:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e36cb4657c37ea9dada8dab36dfaa008040272eecb4f208c89778b3feb6e61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e36cb4657c37ea9dada8dab36dfaa008040272eecb4f208c89778b3feb6e61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e36cb4657c37ea9dada8dab36dfaa008040272eecb4f208c89778b3feb6e61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4e36cb4657c37ea9dada8dab36dfaa008040272eecb4f208c89778b3feb6e61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:04:38 np0005539550 podman[293809]: 2025-11-29 08:04:38.492449522 +0000 UTC m=+0.029910606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:04:38 np0005539550 podman[293809]: 2025-11-29 08:04:38.594409992 +0000 UTC m=+0.131871076 container init 317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:04:38 np0005539550 podman[293809]: 2025-11-29 08:04:38.60014274 +0000 UTC m=+0.137603804 container start 317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:04:38 np0005539550 podman[293809]: 2025-11-29 08:04:38.605786837 +0000 UTC m=+0.143247931 container attach 317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:04:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 385 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 878 KiB/s rd, 2.3 MiB/s wr, 393 op/s
Nov 29 03:04:39 np0005539550 nova_compute[257631]: 2025-11-29 08:04:39.338 257641 DEBUG nova.network.neutron [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:39 np0005539550 nova_compute[257631]: 2025-11-29 08:04:39.366 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Releasing lock "refresh_cache-1585e4de-af24-40bc-8d7d-d715357a957c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:39 np0005539550 nova_compute[257631]: 2025-11-29 08:04:39.367 257641 DEBUG nova.compute.manager [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]: {
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]:        "osd_id": 0,
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]:        "type": "bluestore"
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]:    }
Nov 29 03:04:39 np0005539550 peaceful_mendel[293826]: }
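This second probe prints one activation record per OSD fsid: the cluster fsid, the device-mapper path for the logical volume, the OSD id, and the store type (bluestore). A quick consistency sketch cross-checking it against the LVM tags printed a few seconds earlier; both dicts are trimmed to fields that appear in the log:

```python
# Sketch: check that the activation record agrees with the earlier LVM
# tags (same osd_fsid and cluster fsid). Values copied from the log.
activation = {
    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "type": "bluestore",
    }
}
lv_tags = {
    "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
    "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
}

record = activation[lv_tags["ceph.osd_fsid"]]
assert record["ceph_fsid"] == lv_tags["ceph.cluster_fsid"]
print(f"osd.{record['osd_id']} consistent on {record['device']}")
```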
Nov 29 03:04:39 np0005539550 systemd[1]: libpod-317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba.scope: Deactivated successfully.
Nov 29 03:04:39 np0005539550 podman[293809]: 2025-11-29 08:04:39.448533727 +0000 UTC m=+0.985994811 container died 317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:04:39 np0005539550 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Nov 29 03:04:39 np0005539550 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000002d.scope: Consumed 17.045s CPU time.
Nov 29 03:04:39 np0005539550 systemd-machined[216673]: Machine qemu-24-instance-0000002d terminated.
Nov 29 03:04:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e4e36cb4657c37ea9dada8dab36dfaa008040272eecb4f208c89778b3feb6e61-merged.mount: Deactivated successfully.
Nov 29 03:04:39 np0005539550 podman[293809]: 2025-11-29 08:04:39.505078331 +0000 UTC m=+1.042539395 container remove 317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:04:39 np0005539550 systemd[1]: libpod-conmon-317766776dde0218f2d234963dd2a731845333c4311aa6fd2b47f1df17d957ba.scope: Deactivated successfully.
Nov 29 03:04:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:04:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:04:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 64d4afa3-504a-4974-a2a9-f32f07273384 does not exist
Nov 29 03:04:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 11a8ecbc-5dde-446a-85eb-03dd12598a0b does not exist
Nov 29 03:04:39 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c6f99f88-b36a-47d5-9741-cf89108877a9 does not exist
Nov 29 03:04:39 np0005539550 nova_compute[257631]: 2025-11-29 08:04:39.588 257641 INFO nova.virt.libvirt.driver [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Instance destroyed successfully.
Nov 29 03:04:39 np0005539550 nova_compute[257631]: 2025-11-29 08:04:39.589 257641 DEBUG nova.objects.instance [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lazy-loading 'resources' on Instance uuid 1585e4de-af24-40bc-8d7d-d715357a957c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:39.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.011 257641 INFO nova.virt.libvirt.driver [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Deleting instance files /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c_del
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.013 257641 INFO nova.virt.libvirt.driver [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Deletion of /var/lib/nova/instances/1585e4de-af24-40bc-8d7d-d715357a957c_del complete
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.070 257641 INFO nova.compute.manager [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Took 0.70 seconds to destroy the instance on the hypervisor.
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.071 257641 DEBUG oslo.service.loopingcall [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.071 257641 DEBUG nova.compute.manager [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.071 257641 DEBUG nova.network.neutron [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.216 257641 DEBUG nova.network.neutron [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.235 257641 DEBUG nova.network.neutron [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.250 257641 INFO nova.compute.manager [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Took 0.18 seconds to deallocate network for instance.
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.299 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.300 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.394 257641 DEBUG oslo_concurrency.processutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:04:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246780988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.827 257641 DEBUG oslo_concurrency.processutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
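nova-compute learns Ceph pool capacity for its resource tracker by shelling out to exactly the `ceph df --format=json` command logged here. A sketch of the same call and of reading one pool's free space from the reply; the pool name "vms" is an assumption, since the log does not show which pool nova uses:

```python
# Sketch of the capacity probe logged above: run `ceph df` with the same
# arguments and report one pool's free bytes. The pool name is assumed.
import json
import subprocess

def pool_avail_bytes(pool: str = "vms") -> int:
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    for entry in json.loads(out)["pools"]:
        if entry["name"] == pool:
            return entry["stats"]["max_avail"]
    raise KeyError(pool)
```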
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.836 257641 DEBUG nova.compute.provider_tree [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.854 257641 DEBUG nova.scheduler.client.report [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
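The inventory dict in the line above translates into effective schedulable capacity as (total - reserved) × allocation_ratio per resource class, which is how placement derives usable capacity. Quick arithmetic for the values shown:

```python
# Effective capacity implied by the inventory above:
# (total - reserved) * allocation_ratio per resource class.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
```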
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.884 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:40 np0005539550 nova_compute[257631]: 2025-11-29 08:04:40.929 257641 INFO nova.scheduler.client.report [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Deleted allocations for instance 1585e4de-af24-40bc-8d7d-d715357a957c
Nov 29 03:04:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 238 MiB data, 649 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 997 KiB/s wr, 459 op/s
Nov 29 03:04:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.013 257641 DEBUG oslo_concurrency.lockutils [None req-725bcd4f-a4ee-4934-8173-26069cb022eb e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "1585e4de-af24-40bc-8d7d-d715357a957c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
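The Acquiring/acquired/released triplets bracketing this delete come from oslo.concurrency's lockutils, which serializes operations per instance UUID; "held 2.869s" is the wall time the whole terminate path kept the lock. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; `do_terminate_instance` here is a stand-in, not nova's real method:

```python
# Minimal sketch of the per-instance serialization seen in the log:
# take a named lock for the duration of the terminate path, so the
# "acquired ... released" pair brackets the whole operation.
from oslo_concurrency import lockutils

def do_terminate_instance(instance_uuid: str) -> None:
    print(f"destroying {instance_uuid}")  # stand-in for the real work

def terminate_instance(instance_uuid: str) -> None:
    with lockutils.lock(instance_uuid):
        do_terminate_instance(instance_uuid)

terminate_instance("1585e4de-af24-40bc-8d7d-d715357a957c")
```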
Nov 29 03:04:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 29 03:04:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.183 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:41.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.968 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "885925d1-baac-4205-8817-4d9b92b082de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.969 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "885925d1-baac-4205-8817-4d9b92b082de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.969 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "885925d1-baac-4205-8817-4d9b92b082de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.969 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "885925d1-baac-4205-8817-4d9b92b082de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.969 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "885925d1-baac-4205-8817-4d9b92b082de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.971 257641 INFO nova.compute.manager [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Terminating instance
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.971 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "refresh_cache-885925d1-baac-4205-8817-4d9b92b082de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.972 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquired lock "refresh_cache-885925d1-baac-4205-8817-4d9b92b082de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:04:41 np0005539550 nova_compute[257631]: 2025-11-29 08:04:41.972 257641 DEBUG nova.network.neutron [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.144 257641 DEBUG nova.network.neutron [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.392 257641 DEBUG nova.network.neutron [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.406 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Releasing lock "refresh_cache-885925d1-baac-4205-8817-4d9b92b082de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.407 257641 DEBUG nova.compute.manager [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:04:42 np0005539550 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Nov 29 03:04:42 np0005539550 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d0000002c.scope: Consumed 15.964s CPU time.
Nov 29 03:04:42 np0005539550 systemd-machined[216673]: Machine qemu-25-instance-0000002c terminated.
Nov 29 03:04:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.629 257641 INFO nova.virt.libvirt.driver [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Instance destroyed successfully.
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.630 257641 DEBUG nova.objects.instance [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lazy-loading 'resources' on Instance uuid 885925d1-baac-4205-8817-4d9b92b082de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.850 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403467.8500302, 6c0fa93f-be85-425b-9feb-b4328ad6ab59 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.851 257641 INFO nova.compute.manager [-] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] VM Stopped (Lifecycle Event)
Nov 29 03:04:42 np0005539550 nova_compute[257631]: 2025-11-29 08:04:42.870 257641 DEBUG nova.compute.manager [None req-df8a2320-06b3-4e50-9f62-49ee45961273 - - - - - -] [instance: 6c0fa93f-be85-425b-9feb-b4328ad6ab59] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 185 MiB data, 612 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 69 KiB/s wr, 386 op/s
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.214 257641 INFO nova.virt.libvirt.driver [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Deleting instance files /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de_del
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.215 257641 INFO nova.virt.libvirt.driver [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Deletion of /var/lib/nova/instances/885925d1-baac-4205-8817-4d9b92b082de_del complete
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.280 257641 INFO nova.compute.manager [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Took 0.87 seconds to destroy the instance on the hypervisor.
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.280 257641 DEBUG oslo.service.loopingcall [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.281 257641 DEBUG nova.compute.manager [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.281 257641 DEBUG nova.network.neutron [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.485 257641 DEBUG nova.network.neutron [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.500 257641 DEBUG nova.network.neutron [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.521 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.539 257641 INFO nova.compute.manager [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Took 0.26 seconds to deallocate network for instance.
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.614 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.615 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:43 np0005539550 nova_compute[257631]: 2025-11-29 08:04:43.662 257641 DEBUG oslo_concurrency.processutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:43.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
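[editor's note] The anonymous "HEAD / HTTP/1.0" probes recurring every second or two from 192.168.122.100 and .102 are most likely load-balancer health checks against radosgw. A throwaway parser for this beast access-log line, with the field layout inferred from these log lines rather than from Ceph documentation:

    import re

    # Hypothetical field names; derived only from the lines in this log.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+)')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:04:43.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('request'), m.group('status'))
    # 192.168.122.100 HEAD / HTTP/1.0 200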
Nov 29 03:04:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2280273898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:44 np0005539550 nova_compute[257631]: 2025-11-29 08:04:44.153 257641 DEBUG oslo_concurrency.processutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:44 np0005539550 nova_compute[257631]: 2025-11-29 08:04:44.160 257641 DEBUG nova.compute.provider_tree [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:44 np0005539550 nova_compute[257631]: 2025-11-29 08:04:44.176 257641 DEBUG nova.scheduler.client.report [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
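[editor's note] The inventory dict above is what Placement prices claims against; usable capacity per resource class follows the standard (total - reserved) * allocation_ratio rule. Plugging in the logged numbers:

    # Back-of-the-envelope check using the inventory from the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)
    # usable: VCPU 32, MEMORY_MB 7168, DISK_GB ~17.1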
Nov 29 03:04:44 np0005539550 nova_compute[257631]: 2025-11-29 08:04:44.206 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:44 np0005539550 nova_compute[257631]: 2025-11-29 08:04:44.233 257641 INFO nova.scheduler.client.report [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Deleted allocations for instance 885925d1-baac-4205-8817-4d9b92b082de#033[00m
Nov 29 03:04:44 np0005539550 nova_compute[257631]: 2025-11-29 08:04:44.319 257641 DEBUG oslo_concurrency.lockutils [None req-5282a401-70a2-41e1-be14-43b298e0f9d8 e34525c38d50445e9771f1b8e18bc428 1a7ab40578b84f6aa0f4d2225a36bf9e - - default default] Lock "885925d1-baac-4205-8817-4d9b92b082de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.350s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
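[editor's note] The Acquiring/acquired/released triples around "compute_resources" and around the instance UUID come from oslo.concurrency's named-lock helpers. A minimal sketch of both forms, assuming nothing about Nova's internals beyond the lock names visible in the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        ...   # body runs with the named semaphore held

    with lockutils.lock('885925d1-baac-4205-8817-4d9b92b082de'):
        ...   # equivalent context-manager form, per-instance lock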
Nov 29 03:04:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 122 MiB data, 580 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 62 KiB/s wr, 410 op/s
Nov 29 03:04:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:45.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 29 03:04:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 29 03:04:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 29 03:04:46 np0005539550 nova_compute[257631]: 2025-11-29 08:04:46.236 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:04:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:46.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:04:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 88 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 37 KiB/s wr, 238 op/s
Nov 29 03:04:47 np0005539550 podman[294001]: 2025-11-29 08:04:47.36822873 +0000 UTC m=+0.108954382 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 03:04:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 29 03:04:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 29 03:04:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 29 03:04:47 np0005539550 nova_compute[257631]: 2025-11-29 08:04:47.732 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:04:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 03:04:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:47.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.325 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "59628432-68dc-48d9-8986-8511c376a62d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.325 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.346 257641 DEBUG nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.446 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.446 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.452 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.453 257641 INFO nova.compute.claims [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:04:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:04:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:48.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.554 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:48 np0005539550 nova_compute[257631]: 2025-11-29 08:04:48.575 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:04:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:04:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3868529053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:04:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 88 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.1 KiB/s wr, 204 op/s
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.011 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.017 257641 DEBUG nova.compute.provider_tree [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.035 257641 DEBUG nova.scheduler.client.report [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.062 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.063 257641 DEBUG nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.121 257641 DEBUG nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.144 257641 INFO nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.163 257641 DEBUG nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.243 257641 DEBUG nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.245 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.245 257641 INFO nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating image(s)#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.273 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.302 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.329 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.332 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.397 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
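[editor's note] Here the cached base image is interrogated with qemu-img under oslo's prlimit wrapper; the --as/--cpu flags cap the helper at 1 GiB of address space and 30 s of CPU so a malformed image cannot wedge the service. A hedged equivalent without the wrapper:

    import json
    import os
    import subprocess

    # Path taken verbatim from the log; keep the parent env so PATH survives.
    out = subprocess.check_output(
        ['qemu-img', 'info',
         '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
         '--force-share', '--output=json'],
        env=dict(os.environ, LC_ALL='C', LANG='C'))
    info = json.loads(out)
    print(info['format'], info['virtual-size'])   # e.g. the format and size in bytes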
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.399 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.399 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.400 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.427 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.431 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 59628432-68dc-48d9-8986-8511c376a62d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.750 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 59628432-68dc-48d9-8986-8511c376a62d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 29 03:04:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 29 03:04:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.849 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] resizing rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
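[editor's note] The base image is pushed into the vms pool and then grown to the flavor's 1 GiB root disk (1073741824 bytes). Nova performs the resize through librbd rather than the CLI, but the same two steps as shell-outs, reusing the exact arguments from the log, would look like:

    import subprocess

    BASE = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    DISK = '59628432-68dc-48d9-8986-8511c376a62d_disk'
    CEPH = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Import the flat base image as an RBD v2 image, then grow it to 1 GiB.
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', BASE, DISK,
                           '--image-format=2'] + CEPH)
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', DISK,
                           '--size', '1024M'] + CEPH)   # 1073741824 bytes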
Nov 29 03:04:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:49.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:49 np0005539550 nova_compute[257631]: 2025-11-29 08:04:49.977 257641 DEBUG nova.objects.instance [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lazy-loading 'migration_context' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.005 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.006 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Ensure instance console log exists: /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.006 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.006 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.007 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.008 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.013 257641 WARNING nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.019 257641 DEBUG nova.virt.libvirt.host [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.020 257641 DEBUG nova.virt.libvirt.host [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.026 257641 DEBUG nova.virt.libvirt.host [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.027 257641 DEBUG nova.virt.libvirt.host [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.028 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.028 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.028 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.029 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.029 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.029 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.029 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.029 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.030 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.030 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.030 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.030 257641 DEBUG nova.virt.hardware [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
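[editor's note] The topology walk above reduces to simple arithmetic for this flavor: with 1 vCPU and the effectively unbounded limits printed (65536 each), the only socket/core/thread triple whose product is 1 is 1:1:1, which is exactly what lands in the <topology> element of the XML below. A toy re-derivation (not Nova's actual algorithm):

    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        # Enumerate triples whose product equals the vCPU count, within limits.
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log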
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.033 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586006942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.468 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.507 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.512 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:04:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:50.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:04:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2185195696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.973 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
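[editor's note] ceph mon dump is run twice in quick succession, apparently once per RBD-backed disk, because its JSON carries the monitor addresses that become the three <host> entries in the domain XML below. A sketch of the extraction, assuming the usual mon-dump JSON shape:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(out)['mons']:
        # public_addr is typically "ip:port/nonce", e.g. 192.168.122.100:6789/0
        print(mon['name'], mon.get('public_addr'))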
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.975 257641 DEBUG nova.objects.instance [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lazy-loading 'pci_devices' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:04:50 np0005539550 nova_compute[257631]: 2025-11-29 08:04:50.997 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:04:50 np0005539550 nova_compute[257631]:  <uuid>59628432-68dc-48d9-8986-8511c376a62d</uuid>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:  <name>instance-00000037</name>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:04:50 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-106050071</nova:name>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:04:50</nova:creationTime>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:04:50 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:04:50 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <nova:user uuid="dc44b9aeabb442f582688b672dd724f3">tempest-UnshelveToHostMultiNodesTest-1345262936-project-member</nova:user>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <nova:project uuid="49ee945ea42e47ad9f070078a4d5179b">tempest-UnshelveToHostMultiNodesTest-1345262936</nova:project>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <entry name="serial">59628432-68dc-48d9-8986-8511c376a62d</entry>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <entry name="uuid">59628432-68dc-48d9-8986-8511c376a62d</entry>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/59628432-68dc-48d9-8986-8511c376a62d_disk">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/59628432-68dc-48d9-8986-8511c376a62d_disk.config">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/console.log" append="off"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:04:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:04:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:04:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:04:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
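[editor's note] With the XML assembled, the next step (visible below as systemd-machined registering qemu-27-instance-00000037) is handing it to libvirt. Nova drives this through its own Guest/Host wrappers; the bare libvirt-python equivalent, assuming the domain document has been saved to a file for the sketch, is:

    import libvirt

    xml = open('instance-00000037.xml').read()   # the <domain> document printed above
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)    # make the domain known to libvirt
    dom.create()                 # boot it, i.e. start instance-00000037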
Nov 29 03:04:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 101 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 1.0 MiB/s wr, 136 op/s
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.069 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.070 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.070 257641 INFO nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Using config drive#033[00m
Nov 29 03:04:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.099 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.238 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.485 257641 INFO nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating config drive at /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.489 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntf35xa_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.613 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntf35xa_" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.644 257641 DEBUG nova.storage.rbd_utils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.647 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config 59628432-68dc-48d9-8986-8511c376a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.814 257641 DEBUG oslo_concurrency.processutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config 59628432-68dc-48d9-8986-8511c376a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:04:51 np0005539550 nova_compute[257631]: 2025-11-29 08:04:51.815 257641 INFO nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deleting local config drive /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config because it was imported into RBD.
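[annotation] The nine lines above trace Nova's Ceph-backed config-drive path end to end: check whether <uuid>_disk.config already exists in RBD, build the ISO 9660 config drive locally with mkisofs, import it into the vms pool as an image-format-2 RBD image, then delete the local copy. A minimal sketch of the two external commands, reusing the pool, Ceph user, and conf path from the log; the helper itself is illustrative, not Nova's actual rbd_utils code:

    import os
    import subprocess

    def import_config_drive(instance_uuid, content_dir,
                            pool="vms", ceph_id="openstack",
                            conf="/etc/ceph/ceph.conf"):
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        # Build the ISO 9660 config drive (volume label config-2, Joliet + Rock Ridge).
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
             "-quiet", "-J", "-r", "-V", "config-2", content_dir],
            check=True)
        # Import into RBD as <uuid>_disk.config, then drop the local file,
        # matching the "Deleting local config drive ... imported into RBD" line.
        subprocess.run(
            ["rbd", "import", "--pool", pool, iso,
             f"{instance_uuid}_disk.config", "--image-format=2",
             "--id", ceph_id, "--conf", conf],
            check=True)
        os.remove(iso)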
Nov 29 03:04:51 np0005539550 systemd-machined[216673]: New machine qemu-27-instance-00000037.
Nov 29 03:04:51 np0005539550 systemd[1]: Started Virtual Machine qemu-27-instance-00000037.
Nov 29 03:04:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:51.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.494 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403492.493691, 59628432-68dc-48d9-8986-8511c376a62d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.494 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Resumed (Lifecycle Event)
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.496 257641 DEBUG nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.497 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.501 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance spawned successfully.
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.501 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.528 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:04:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.534 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.535 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.535 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.535 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.536 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.536 257641 DEBUG nova.virt.libvirt.driver [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
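[annotation] The six "Found default" lines above record the device-model defaults Nova pins as image properties at first boot, so that later reboots and rebuilds keep the same virtual hardware. Collected as a data structure, values straight from the log:

    # Defaults registered for instance 59628432-68dc-48d9-8986-8511c376a62d.
    registered_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }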
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.540 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.575 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.576 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403492.493794, 59628432-68dc-48d9-8986-8511c376a62d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.576 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Started (Lifecycle Event)
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.614 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.618 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.624 257641 INFO nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Took 3.38 seconds to spawn the instance on the hypervisor.
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.625 257641 DEBUG nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.640 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.697 257641 INFO nova.compute.manager [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Took 4.28 seconds to build instance.
Nov 29 03:04:52 np0005539550 nova_compute[257631]: 2025-11-29 08:04:52.711 257641 DEBUG oslo_concurrency.lockutils [None req-96a4970a-caa7-41ad-8ed8-5830be42879e dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
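[annotation] Note the ordering above: the hypervisor already reports the guest running (VM power_state: 1) while the database still says 0 and task_state is spawning, so both lifecycle-event syncs are skipped rather than fighting the in-flight build. A condensed sketch of that decision, with hypothetical names; the real logic lives in nova/compute/manager.py:

    def sync_power_state(instance, vm_power_state):
        # Skip while a task (here "spawning") is in flight, exactly as the
        # "During sync_power_state ... Skip." lines show.
        if instance.task_state is not None:
            return "skip: pending task %s" % instance.task_state
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state  # adopt the hypervisor's view
            instance.save()
            return "updated"
        return "in sync"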
Nov 29 03:04:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 149 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 194 KiB/s rd, 4.3 MiB/s wr, 150 op/s
Nov 29 03:04:53 np0005539550 nova_compute[257631]: 2025-11-29 08:04:53.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:53.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:54 np0005539550 nova_compute[257631]: 2025-11-29 08:04:54.587 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403479.5860524, 1585e4de-af24-40bc-8d7d-d715357a957c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:54 np0005539550 nova_compute[257631]: 2025-11-29 08:04:54.587 257641 INFO nova.compute.manager [-] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] VM Stopped (Lifecycle Event)
Nov 29 03:04:54 np0005539550 nova_compute[257631]: 2025-11-29 08:04:54.607 257641 DEBUG nova.compute.manager [None req-cfd30be2-ca9e-425f-afae-92c1a2094988 - - - - - -] [instance: 1585e4de-af24-40bc-8d7d-d715357a957c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 174 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 5.4 MiB/s wr, 239 op/s
Nov 29 03:04:55 np0005539550 nova_compute[257631]: 2025-11-29 08:04:55.815 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "59628432-68dc-48d9-8986-8511c376a62d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:04:55 np0005539550 nova_compute[257631]: 2025-11-29 08:04:55.815 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:04:55 np0005539550 nova_compute[257631]: 2025-11-29 08:04:55.816 257641 INFO nova.compute.manager [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Shelving
Nov 29 03:04:55 np0005539550 nova_compute[257631]: 2025-11-29 08:04:55.841 257641 DEBUG nova.virt.libvirt.driver [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:04:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:55.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:04:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 29 03:04:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 29 03:04:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 29 03:04:56 np0005539550 nova_compute[257631]: 2025-11-29 08:04:56.239 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:56.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 213 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 8.6 MiB/s wr, 267 op/s
Nov 29 03:04:57 np0005539550 nova_compute[257631]: 2025-11-29 08:04:57.628 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403482.6275134, 885925d1-baac-4205-8817-4d9b92b082de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:04:57 np0005539550 nova_compute[257631]: 2025-11-29 08:04:57.629 257641 INFO nova.compute.manager [-] [instance: 885925d1-baac-4205-8817-4d9b92b082de] VM Stopped (Lifecycle Event)
Nov 29 03:04:57 np0005539550 nova_compute[257631]: 2025-11-29 08:04:57.648 257641 DEBUG nova.compute.manager [None req-f5866b36-4da6-4273-a848-297b3aab10ac - - - - - -] [instance: 885925d1-baac-4205-8817-4d9b92b082de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:57.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:04:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:04:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:04:58.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:04:58 np0005539550 nova_compute[257631]: 2025-11-29 08:04:58.580 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 213 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.4 MiB/s wr, 298 op/s
Nov 29 03:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:04:59
Nov 29 03:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'images', 'default.rgw.meta', 'volumes', 'vms', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 29 03:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
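[annotation] The balancer pass above ran in upmap mode over all eleven pools and prepared 0 of at most 10 changes, i.e. PG placement is already within the 0.05 max-misplaced threshold. The same state can be checked from the CLI; a small wrapper sketch (the ceph balancer subcommands are standard, the wrapper itself is illustrative):

    import json
    import subprocess

    def balancer_status():
        # For the run logged above, expect "mode": "upmap" and an empty plan list.
        out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)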
Nov 29 03:04:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:04:59.761 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:04:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:04:59.762 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:04:59 np0005539550 nova_compute[257631]: 2025-11-29 08:04:59.807 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:04:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 29 03:04:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 29 03:04:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 29 03:04:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:04:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:04:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:04:59.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:00.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:00.764 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
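[annotation] This transaction is the metadata agent acknowledging the SB_Global nb_cfg bump matched one second earlier: it writes neutron:ovn-metadata-sb-cfg=20 into its own Chassis_Private external_ids. Roughly equivalent ovsdbapp usage; the record UUID and value come from the log, and the api object is assumed to be an ovsdbapp backend connected to the OVN southbound DB:

    def ack_sb_cfg(api, chassis_private_uuid, nb_cfg):
        # Generic db_set command; the logged DbSetCommand also passed if_exists=True.
        api.db_set(
            "Chassis_Private", chassis_private_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)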
Nov 29 03:05:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 29 03:05:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 29 03:05:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 29 03:05:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 218 MiB data, 616 MiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.4 MiB/s wr, 347 op/s
Nov 29 03:05:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:01 np0005539550 nova_compute[257631]: 2025-11-29 08:05:01.241 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 29 03:05:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:01.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 29 03:05:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 29 03:05:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 256 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 4.0 MiB/s wr, 315 op/s
Nov 29 03:05:03 np0005539550 podman[294453]: 2025-11-29 08:05:03.376648791 +0000 UTC m=+0.108451509 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:05:03 np0005539550 podman[294454]: 2025-11-29 08:05:03.395878779 +0000 UTC m=+0.124572246 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:05:03 np0005539550 nova_compute[257631]: 2025-11-29 08:05:03.582 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:04.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 277 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 6.1 MiB/s wr, 276 op/s
Nov 29 03:05:05 np0005539550 nova_compute[257631]: 2025-11-29 08:05:05.886 257641 DEBUG nova.virt.libvirt.driver [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:05:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:05.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 29 03:05:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 29 03:05:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 29 03:05:06 np0005539550 nova_compute[257631]: 2025-11-29 08:05:06.243 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 272 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 8.4 MiB/s wr, 203 op/s
Nov 29 03:05:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:07.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:05:08 np0005539550 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 29 03:05:08 np0005539550 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000037.scope: Consumed 13.553s CPU time.
Nov 29 03:05:08 np0005539550 systemd-machined[216673]: Machine qemu-27-instance-00000037 terminated.
Nov 29 03:05:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:08 np0005539550 nova_compute[257631]: 2025-11-29 08:05:08.584 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:05:08 np0005539550 nova_compute[257631]: 2025-11-29 08:05:08.902 257641 INFO nova.virt.libvirt.driver [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance shutdown successfully after 13 seconds.
Nov 29 03:05:08 np0005539550 nova_compute[257631]: 2025-11-29 08:05:08.908 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance destroyed successfully.
Nov 29 03:05:08 np0005539550 nova_compute[257631]: 2025-11-29 08:05:08.908 257641 DEBUG nova.objects.instance [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lazy-loading 'numa_topology' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
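[annotation] Between 08:04:55 and 08:05:08 the shelve path performs a clean shutdown: the ACPI shutdown is requested, the guest is still in state 1 after 10 seconds so the request is re-sent, and the domain exits after 13 seconds, at which point systemd-machined reaps the qemu-27-instance-00000037 scope. A condensed sketch of such a retry loop against a libvirt domain (timings simplified; not Nova's exact _clean_shutdown):

    import time

    def clean_shutdown(domain, timeout=60, retry_interval=10):
        domain.shutdown()                    # polite ACPI power-button request
        for waited in range(1, timeout + 1):
            time.sleep(1)
            if not domain.isActive():        # guest powered off cleanly
                return True
            if waited % retry_interval == 0:
                domain.shutdown()            # guest may have missed the first event
        return False                         # caller falls back to destroy()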
Nov 29 03:05:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 159 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.9 MiB/s wr, 324 op/s
Nov 29 03:05:09 np0005539550 nova_compute[257631]: 2025-11-29 08:05:09.258 257641 INFO nova.virt.libvirt.driver [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Beginning cold snapshot process
Nov 29 03:05:09 np0005539550 nova_compute[257631]: 2025-11-29 08:05:09.407 257641 DEBUG nova.virt.libvirt.imagebackend [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:05:09 np0005539550 nova_compute[257631]: 2025-11-29 08:05:09.880 257641 DEBUG nova.storage.rbd_utils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] creating snapshot(4ab41aa3ca0146a7b1e0e38cc3a9241a) on rbd image(59628432-68dc-48d9-8986-8511c376a62d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:05:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:09.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 29 03:05:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 29 03:05:10 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 29 03:05:10 np0005539550 nova_compute[257631]: 2025-11-29 08:05:10.268 257641 DEBUG nova.storage.rbd_utils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] cloning vms/59628432-68dc-48d9-8986-8511c376a62d_disk@4ab41aa3ca0146a7b1e0e38cc3a9241a to images/f6492822-3635-4784-934e-4aff1e94e9cf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:05:10 np0005539550 nova_compute[257631]: 2025-11-29 08:05:10.407 257641 DEBUG nova.storage.rbd_utils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] flattening images/f6492822-3635-4784-934e-4aff1e94e9cf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:05:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:10.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:10 np0005539550 nova_compute[257631]: 2025-11-29 08:05:10.956 257641 DEBUG nova.storage.rbd_utils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] removing snapshot(4ab41aa3ca0146a7b1e0e38cc3a9241a) on rbd image(59628432-68dc-48d9-8986-8511c376a62d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
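[annotation] The rbd_utils lines from 08:05:09 to 08:05:10 are the whole cold-snapshot dance on RBD: snapshot the instance disk in the vms pool, COW-clone the snapshot into the images pool as the new Glance image, flatten the clone so it no longer depends on its parent, then delete the temporary snapshot. Sketched with the rbd python bindings (the library nova.storage.rbd_utils wraps); error handling and snapshot-protection/clone-v2 details omitted:

    import rados
    import rbd

    SRC = "59628432-68dc-48d9-8986-8511c376a62d_disk"
    SNAP = "4ab41aa3ca0146a7b1e0e38cc3a9241a"
    DST = "f6492822-3635-4784-934e-4aff1e94e9cf"

    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as vms, cluster.open_ioctx("images") as images:
            with rbd.Image(vms, SRC) as src:
                src.create_snap(SNAP)                         # point-in-time snapshot
                rbd.RBD().clone(vms, SRC, SNAP, images, DST)  # COW clone into images
                with rbd.Image(images, DST) as dst:
                    dst.flatten()                             # detach clone from parent
                src.remove_snap(SNAP)                         # temp snapshot done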
Nov 29 03:05:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 121 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 6.1 MiB/s wr, 279 op/s
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.105894) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511105958, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1958, "num_deletes": 263, "total_data_size": 3078870, "memory_usage": 3120232, "flush_reason": "Manual Compaction"}
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511123778, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2065724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30969, "largest_seqno": 32926, "table_properties": {"data_size": 2058157, "index_size": 4257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 19552, "raw_average_key_size": 22, "raw_value_size": 2041730, "raw_average_value_size": 2309, "num_data_blocks": 185, "num_entries": 884, "num_filter_entries": 884, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403375, "oldest_key_time": 1764403375, "file_creation_time": 1764403511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 17999 microseconds, and 6012 cpu microseconds.
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.123891) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2065724 bytes OK
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.123912) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.125327) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.125338) EVENT_LOG_v1 {"time_micros": 1764403511125334, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.125356) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3070385, prev total WAL file size 3070385, number of live WAL files 2.
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.126201) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2017KB)], [68(10MB)]
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511126253, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13228006, "oldest_snapshot_seqno": -1}
Nov 29 03:05:11 np0005539550 nova_compute[257631]: 2025-11-29 08:05:11.153 257641 DEBUG nova.storage.rbd_utils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] creating snapshot(snap) on rbd image(f6492822-3635-4784-934e-4aff1e94e9cf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6695 keys, 10317301 bytes, temperature: kUnknown
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511199078, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 10317301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10273683, "index_size": 25778, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 170763, "raw_average_key_size": 25, "raw_value_size": 10154548, "raw_average_value_size": 1516, "num_data_blocks": 1038, "num_entries": 6695, "num_filter_entries": 6695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.199321) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 10317301 bytes
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.200923) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.5 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 10.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 7167, records dropped: 472 output_compression: NoCompression
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.200940) EVENT_LOG_v1 {"time_micros": 1764403511200931, "job": 38, "event": "compaction_finished", "compaction_time_micros": 72896, "compaction_time_cpu_micros": 27672, "output_level": 6, "num_output_files": 1, "total_output_size": 10317301, "num_input_records": 7167, "num_output_records": 6695, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511201316, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403511203106, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.126098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.203148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.203153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.203154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.203156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:11 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:11.203157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
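[annotation] The compaction summary above internally checks out against its own byte counts: with in(2.0, 10.6) MB and out(9.8) MB, write amplification is 9.8 / 2.0, about 4.9 (logged as 5.0), and read-write amplification is (2.0 + 10.6 + 9.8) / 2.0, about 11.2 (logged as 11.4; the per-term MB figures are themselves rounded). As a worked line:

    # Amplification figures from the "compacted to" summary, values as logged.
    l0_in, l6_in, out = 2.0, 10.6, 9.8                    # MB: in(2.0, 10.6) out(9.8)
    write_amplify = out / l0_in                           # ~4.9 -> logged 5.0
    read_write_amplify = (l0_in + l6_in + out) / l0_in    # ~11.2 -> logged 11.4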
Nov 29 03:05:11 np0005539550 nova_compute[257631]: 2025-11-29 08:05:11.245 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:11.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 29 03:05:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 29 03:05:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 29 03:05:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:12.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 158 MiB data, 599 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 7.2 MiB/s wr, 322 op/s
Nov 29 03:05:13 np0005539550 nova_compute[257631]: 2025-11-29 08:05:13.586 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:13 np0005539550 nova_compute[257631]: 2025-11-29 08:05:13.943 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:13.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.006 257641 INFO nova.virt.libvirt.driver [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Snapshot image upload complete
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.006 257641 DEBUG nova.compute.manager [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.064 257641 INFO nova.compute.manager [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Shelve offloading
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.071 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance destroyed successfully.
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.071 257641 DEBUG nova.compute.manager [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.074 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.074 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquired lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.074 257641 DEBUG nova.network.neutron [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.275 257641 DEBUG nova.network.neutron [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.507 257641 DEBUG nova.network.neutron [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.521 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Releasing lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.528 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance destroyed successfully.
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.529 257641 DEBUG nova.objects.instance [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lazy-loading 'resources' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
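The sequence above (snapshot upload complete, power-state checks, "Shelve offloading", guest destroyed, network-info cache rebuilt empty) is the compute-side half of a shelve-offload; the matching unshelve for the same instance starts further down at 08:05:17. A sketch of how a client drives the same pair of operations through openstacksdk; the cloud name and server name are assumptions:

    # Sketch only: 'mycloud' and the server name are assumptions, not from the log.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    server = conn.compute.find_server("tempest-UnshelveToHostMultiNodesTest-server-106050071")
    conn.compute.shelve_server(server)       # snapshot the guest, then offload it
    conn.compute.wait_for_server(server, status="SHELVED_OFFLOADED")
    conn.compute.unshelve_server(server)     # rebuild the guest from the snapshot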
Nov 29 03:05:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:14.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.944 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.945 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.994 257641 INFO nova.virt.libvirt.driver [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deleting instance files /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d_del
Nov 29 03:05:14 np0005539550 nova_compute[257631]: 2025-11-29 08:05:14.995 257641 INFO nova.virt.libvirt.driver [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deletion of /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d_del complete
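The `ceph df --format=json` subprocess launched at 08:05:14.945 is how nova's RBD image backend samples cluster capacity during the resource audit. A sketch of running and parsing the same command; the "stats"/"total_avail_bytes" fields are the stock `ceph df` JSON layout, but treat the exact field names as an assumption:

    # Run the same command the log shows and pull the cluster-wide totals.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    free_gib = stats["total_avail_bytes"] / 1024 ** 3
    print(f"free: {free_gib:.2f} GiB")   # ~20 GiB, matching the pgmap lines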
Nov 29 03:05:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 223 MiB data, 659 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 11 MiB/s wr, 191 op/s
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.131 257641 INFO nova.scheduler.client.report [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Deleted allocations for instance 59628432-68dc-48d9-8986-8511c376a62d
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.189 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.190 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.210 257641 DEBUG oslo_concurrency.processutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022009575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
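Each `ceph df` run by nova shows up on the monitor as the mon_command dispatch above, audited for entity client.openstack. The same command can also be issued in-process through the librados binding instead of a subprocess; a sketch, assuming the same conf file and user as the log:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        # This is exactly what the mon logs as {"prefix": "df", "format": "json"}.
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "df", "format": "json"}), b"")
        df = json.loads(outbuf)
    finally:
        cluster.shutdown()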
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.401 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.582 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.583 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4605MB free_disk=20.92882537841797GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.583 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163868868' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.673 257641 DEBUG oslo_concurrency.processutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.678 257641 DEBUG nova.compute.provider_tree [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.692 257641 DEBUG nova.scheduler.client.report [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.732 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.735 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.792 257641 DEBUG oslo_concurrency.lockutils [None req-a2f23323-fb19-4da9-8edc-84092691df04 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 19.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.811 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.811 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:05:15 np0005539550 nova_compute[257631]: 2025-11-29 08:05:15.836 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
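The inventory dicts logged at 08:05:15.692 (and again at 08:05:16.387) fix what placement will allow on this node: usable capacity per resource class is (total - reserved) × allocation_ratio. Worked out for the logged numbers:

    # Capacity as placement evaluates it: (total - reserved) * allocation_ratio.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1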
Nov 29 03:05:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:15.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 29 03:05:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 29 03:05:16 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 29 03:05:16 np0005539550 nova_compute[257631]: 2025-11-29 08:05:16.246 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/361921660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:16 np0005539550 nova_compute[257631]: 2025-11-29 08:05:16.324 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:16 np0005539550 nova_compute[257631]: 2025-11-29 08:05:16.329 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:16 np0005539550 nova_compute[257631]: 2025-11-29 08:05:16.387 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:05:16 np0005539550 nova_compute[257631]: 2025-11-29 08:05:16.415 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:05:16 np0005539550 nova_compute[257631]: 2025-11-29 08:05:16.416 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
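The recurring Acquiring/acquired/released triplets with waited/held timings (here "compute_resources" held 0.681s) are emitted by oslo.concurrency's lockutils wrapper around the resource-tracker methods. A minimal sketch of code that produces such lines; the decorated function body is a stand-in:

    from oslo_concurrency import lockutils

    # Same semaphore name as in the log; everything else is hypothetical.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # the audit runs here while other users of the lock wait

    update_available_resource()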
Nov 29 03:05:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 266 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 15 MiB/s wr, 282 op/s
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.114 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "59628432-68dc-48d9-8986-8511c376a62d" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.115 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.116 257641 INFO nova.compute.manager [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Unshelving
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.213 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.214 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.217 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'pci_requests' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.232 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'numa_topology' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.248 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.249 257641 INFO nova.compute.claims [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.349 257641 DEBUG nova.scheduler.client.report [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.366 257641 DEBUG nova.scheduler.client.report [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.366 257641 DEBUG nova.compute.provider_tree [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.388 257641 DEBUG nova.scheduler.client.report [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.407 257641 DEBUG nova.scheduler.client.report [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.416 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.417 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.417 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.432 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.432 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.433 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.442 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
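The trait refresh at 08:05:17.407 pulled the full set of COMPUTE_* and HW_CPU_X86_* traits that placement holds for this provider; the scheduler matches flavor- and image-required traits against that set. A toy membership check over a few of the logged traits; the required set here is invented for illustration:

    provider_traits = {
        "COMPUTE_IMAGE_TYPE_RAW", "COMPUTE_STORAGE_BUS_VIRTIO",
        "COMPUTE_NET_VIF_MODEL_VIRTIO", "HW_CPU_X86_SSE42",
    }  # subset of the set logged above
    required = {"COMPUTE_IMAGE_TYPE_RAW", "HW_CPU_X86_SSE42"}  # hypothetical
    print(required <= provider_traits)  # True -> this host qualifies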
Nov 29 03:05:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986459822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.876 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.881 257641 DEBUG nova.compute.provider_tree [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.895 257641 DEBUG nova.scheduler.client.report [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:05:17 np0005539550 nova_compute[257631]: 2025-11-29 08:05:17.916 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:17.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:18 np0005539550 nova_compute[257631]: 2025-11-29 08:05:18.270 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:05:18 np0005539550 nova_compute[257631]: 2025-11-29 08:05:18.271 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquired lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:05:18 np0005539550 nova_compute[257631]: 2025-11-29 08:05:18.272 257641 DEBUG nova.network.neutron [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:05:18 np0005539550 podman[294802]: 2025-11-29 08:05:18.38304212 +0000 UTC m=+0.119862045 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
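The podman line above is a periodic container health check for ovn_controller, driven by the healthcheck stanza embedded in the container metadata (test '/openstack/healthcheck'); health_status=healthy with a zero failing streak is the result. The same check can be run on demand; a sketch, assuming podman is on PATH and the caller may run it:

    import subprocess

    # 'podman healthcheck run' exits 0 when the container's configured test passes.
    res = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if res.returncode == 0 else "unhealthy")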
Nov 29 03:05:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:18 np0005539550 nova_compute[257631]: 2025-11-29 08:05:18.588 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:18 np0005539550 nova_compute[257631]: 2025-11-29 08:05:18.752 257641 DEBUG nova.network.neutron [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:05:18 np0005539550 nova_compute[257631]: 2025-11-29 08:05:18.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:18.939 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:18.939 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:18.939 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 213 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 11 MiB/s wr, 330 op/s
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.275 257641 DEBUG nova.network.neutron [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.298 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Releasing lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.299 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.299 257641 INFO nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating image(s)
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.329 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.332 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.511 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.538 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.541 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "1665eb3a7d00db2e0dfe3fddcd921b304ef5d103" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.542 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "1665eb3a7d00db2e0dfe3fddcd921b304ef5d103" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.733 257641 DEBUG nova.virt.libvirt.imagebackend [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f6492822-3635-4784-934e-4aff1e94e9cf/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f6492822-3635-4784-934e-4aff1e94e9cf/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.788 257641 DEBUG nova.virt.libvirt.imagebackend [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/f6492822-3635-4784-934e-4aff1e94e9cf/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.789 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] cloning images/f6492822-3635-4784-934e-4aff1e94e9cf@snap to None/59628432-68dc-48d9-8986-8511c376a62d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
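The unshelve path above clones the shelved image's RBD snapshot (images/f6492822-…@snap) into vms/<uuid>_disk, and at 08:05:20 the log shows the follow-up flatten that detaches the child from its parent. A sketch of the same pair of operations through the python rbd bindings; the conf path, user, pools, and names are taken from the log, and the clone assumes the parent snapshot is protected:

    import rados
    import rbd

    IMG = "59628432-68dc-48d9-8986-8511c376a62d_disk"

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        src = cluster.open_ioctx("images")
        dst = cluster.open_ioctx("vms")
        # Copy-on-write clone of the protected snapshot, as logged at 08:05:19.789.
        rbd.RBD().clone(src, "f6492822-3635-4784-934e-4aff1e94e9cf", "snap",
                        dst, IMG)
        with rbd.Image(dst, IMG) as img:
            img.flatten()  # 08:05:20.123: copy parent blocks, drop the dependency
        src.close()
        dst.close()
    finally:
        cluster.shutdown()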
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001990748010887772 of space, bias 1.0, pg target 0.5972244032663316 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066374988367765 of space, bias 1.0, pg target 1.2199124965103294 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:05:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
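The pg_autoscaler block above is reproducible arithmetic: each pool's ideal PG count is its fraction of used space × bias × a cluster-wide PG budget, which for these numbers works out to 300 — consistent with 3 OSDs at a mon_target_pg_per_osd of 100 (that per-OSD target is the usual default, which the log does not state, so treat it as an assumption). The ideal is then quantized to a power of two, and pg_num is left alone unless it is off by more than a factor of 3, which is why every pool here keeps its current value:

    def ideal_pgs(usage_fraction, bias, osds=3, target_pg_per_osd=100):
        return usage_fraction * bias * osds * target_pg_per_osd

    # Numbers straight from the log:
    print(ideal_pgs(0.004066374988367765, 1.0))  # 1.2199... as logged for 'images'
    print(ideal_pgs(0.001990748010887772, 1.0))  # 0.5972... as logged for 'vms'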
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:05:19 np0005539550 nova_compute[257631]: 2025-11-29 08:05:19.927 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "1665eb3a7d00db2e0dfe3fddcd921b304ef5d103" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:19.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.067 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'migration_context' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.123 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] flattening vms/59628432-68dc-48d9-8986-8511c376a62d_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.488 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Image rbd:vms/59628432-68dc-48d9-8986-8511c376a62d_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.489 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.489 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Ensure instance console log exists: /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.490 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.490 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.490 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.492 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:04:55Z,direct_url=<?>,disk_format='raw',id=f6492822-3635-4784-934e-4aff1e94e9cf,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-106050071-shelved',owner='49ee945ea42e47ad9f070078a4d5179b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:05:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.496 257641 WARNING nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.500 257641 DEBUG nova.virt.libvirt.host [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.502 257641 DEBUG nova.virt.libvirt.host [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.504 257641 DEBUG nova.virt.libvirt.host [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.505 257641 DEBUG nova.virt.libvirt.host [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.506 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.506 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:04:55Z,direct_url=<?>,disk_format='raw',id=f6492822-3635-4784-934e-4aff1e94e9cf,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-106050071-shelved',owner='49ee945ea42e47ad9f070078a4d5179b',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:05:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.507 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.507 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.507 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.507 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.507 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.507 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.507 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.508 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.508 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.508 257641 DEBUG nova.virt.hardware [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
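
[Editor's note] The topology walk above starts from flavor and image limits of 0 (unset), substitutes the 65536 maxima, and for one vCPU finds the single possible split: 1 socket x 1 core x 1 thread. An illustrative re-implementation of that enumeration step (nova's real logic lives in nova/virt/hardware.py; this is a simplification):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) splits whose product is vcpus
        # and that stay within the per-dimension maxima.
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching "Got 1 possible topologies"
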
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.508 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.537 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:20.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440706099' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.968 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
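
[Editor's note] The monitor map is fetched with exactly the command logged above. A sketch that reruns it and pulls the monitor endpoints out of the JSON reply (field names per the usual `ceph mon dump --format=json` output; assumes the same client keyring and conf are available):

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    monmap = json.loads(out)
    for mon in monmap["mons"]:
        print(mon["name"], mon.get("addr"))
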
Nov 29 03:05:20 np0005539550 nova_compute[257631]: 2025-11-29 08:05:20.997 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.001 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 217 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 11 MiB/s wr, 423 op/s
Nov 29 03:05:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 29 03:05:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 29 03:05:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.248 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2618749016' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.443 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.445 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.460 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <uuid>59628432-68dc-48d9-8986-8511c376a62d</uuid>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <name>instance-00000037</name>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-106050071</nova:name>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:05:20</nova:creationTime>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <nova:user uuid="dc44b9aeabb442f582688b672dd724f3">tempest-UnshelveToHostMultiNodesTest-1345262936-project-member</nova:user>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <nova:project uuid="49ee945ea42e47ad9f070078a4d5179b">tempest-UnshelveToHostMultiNodesTest-1345262936</nova:project>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="f6492822-3635-4784-934e-4aff1e94e9cf"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <entry name="serial">59628432-68dc-48d9-8986-8511c376a62d</entry>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <entry name="uuid">59628432-68dc-48d9-8986-8511c376a62d</entry>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/59628432-68dc-48d9-8986-8511c376a62d_disk">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/59628432-68dc-48d9-8986-8511c376a62d_disk.config">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/console.log" append="off"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:05:21 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:05:21 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:05:21 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:05:21 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
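
[Editor's note] The generated domain XML above carries everything the guest needs, including the three Ceph monitor endpoints for each RBD-backed disk. A short stdlib sketch that pulls those back out of such a document ("domain.xml" is a hypothetical file holding the XML logged above):

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = ", ".join("%s:%s" % (h.get("name"), h.get("port"))
                              for h in src.findall("host"))
            print(disk.find("target").get("dev"), "->", src.get("name"), "via", hosts)
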
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.504 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.505 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.505 257641 INFO nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Using config drive#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.542 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.568 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.627 257641 DEBUG nova.objects.instance [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lazy-loading 'keypairs' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.854 257641 INFO nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Creating config drive at /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config#033[00m
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.860 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8dqyz4ui execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:21.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:21 np0005539550 nova_compute[257631]: 2025-11-29 08:05:21.994 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8dqyz4ui" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
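
[Editor's note] The config drive is assembled with the mkisofs invocation logged above. Note that oslo logs the argv joined with spaces, so the multi-word -publisher value appears unquoted even though it was passed as a single argument. Rebuilt as an explicit argument list (instance path and staging tempdir are the ones from this run and would differ elsewhere):

    import subprocess

    subprocess.check_call([
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp8dqyz4ui",  # tempdir from this run; regenerated each time
    ])
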
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.024 257641 DEBUG nova.storage.rbd_utils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] rbd image 59628432-68dc-48d9-8986-8511c376a62d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.029 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config 59628432-68dc-48d9-8986-8511c376a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.201 257641 DEBUG oslo_concurrency.processutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config 59628432-68dc-48d9-8986-8511c376a62d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.202 257641 INFO nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deleting local config drive /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config because it was imported into RBD.#033[00m
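
[Editor's note] With the ISO built, it is imported into the vms pool and the local copy removed, exactly as the two messages above describe. The equivalent steps as a sketch, using the command logged at 08:05:22.029:

    import os
    import subprocess

    local = "/var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d/disk.config"
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", local,
        "59628432-68dc-48d9-8986-8511c376a62d_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    os.remove(local)  # "Deleting local config drive ... imported into RBD"
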
Nov 29 03:05:22 np0005539550 systemd-machined[216673]: New machine qemu-28-instance-00000037.
Nov 29 03:05:22 np0005539550 systemd[1]: Started Virtual Machine qemu-28-instance-00000037.
Nov 29 03:05:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:22.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.809 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 59628432-68dc-48d9-8986-8511c376a62d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.809 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403522.8088043, 59628432-68dc-48d9-8986-8511c376a62d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.810 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.812 257641 DEBUG nova.compute.manager [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.813 257641 DEBUG nova.virt.libvirt.driver [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.816 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance spawned successfully.#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.834 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.836 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.883 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.883 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403522.8101494, 59628432-68dc-48d9-8986-8511c376a62d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.883 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Started (Lifecycle Event)#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.904 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.908 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:22 np0005539550 nova_compute[257631]: 2025-11-29 08:05:22.929 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
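
[Editor's note] Both lifecycle events ("Resumed", "Started") resolve the same way: the database still records power_state 4 while libvirt reports 1, but the pending spawning task suppresses any corrective action. Decoded with the numeric constants from nova.compute.power_state (the skip rule below is a simplification of handle_lifecycle_event, for illustration only):

    # Values from nova.compute.power_state.
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    def sync_decision(db_state, vm_state, task_state):
        if task_state is not None:
            # An in-flight task (here 'spawning') owns the instance; skip.
            return "skip: pending task %r" % task_state
        if db_state != vm_state:
            return "resync %s -> %s" % (POWER_STATES[db_state],
                                        POWER_STATES[vm_state])
        return "in sync"

    print(sync_decision(4, 1, "spawning"))  # matches the "Skip." lines above
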
Nov 29 03:05:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 29 03:05:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 250 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 5.6 MiB/s wr, 389 op/s
Nov 29 03:05:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 29 03:05:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 29 03:05:23 np0005539550 nova_compute[257631]: 2025-11-29 08:05:23.643 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 29 03:05:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:24.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 29 03:05:24 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 29 03:05:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 274 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 5.8 MiB/s wr, 330 op/s
Nov 29 03:05:25 np0005539550 nova_compute[257631]: 2025-11-29 08:05:25.412 257641 DEBUG nova.compute.manager [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:25 np0005539550 nova_compute[257631]: 2025-11-29 08:05:25.472 257641 DEBUG oslo_concurrency.lockutils [None req-47b8fe80-6025-418a-befd-5764bb142917 82a66062264749d58d7659df1ac8e620 761fb1f5e11e49f0957cb4ed97553c31 - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:25.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 29 03:05:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 29 03:05:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 29 03:05:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:26 np0005539550 nova_compute[257631]: 2025-11-29 08:05:26.250 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 29 03:05:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 288 MiB data, 713 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 8.7 MiB/s wr, 266 op/s
Nov 29 03:05:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 29 03:05:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 29 03:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:27.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:28.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:28 np0005539550 nova_compute[257631]: 2025-11-29 08:05:28.645 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:28 np0005539550 nova_compute[257631]: 2025-11-29 08:05:28.683 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "59628432-68dc-48d9-8986-8511c376a62d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:28 np0005539550 nova_compute[257631]: 2025-11-29 08:05:28.683 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:28 np0005539550 nova_compute[257631]: 2025-11-29 08:05:28.684 257641 INFO nova.compute.manager [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Shelving#033[00m
Nov 29 03:05:28 np0005539550 nova_compute[257631]: 2025-11-29 08:05:28.705 257641 DEBUG nova.virt.libvirt.driver [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:05:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 260 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 8.3 MiB/s wr, 418 op/s
Nov 29 03:05:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:29.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 29 03:05:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 29 03:05:30 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 29 03:05:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 278 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.7 MiB/s wr, 386 op/s
Nov 29 03:05:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 29 03:05:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 29 03:05:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 29 03:05:31 np0005539550 nova_compute[257631]: 2025-11-29 08:05:31.253 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 257 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.5 MiB/s wr, 380 op/s
Nov 29 03:05:33 np0005539550 nova_compute[257631]: 2025-11-29 08:05:33.647 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:33.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:34 np0005539550 podman[295281]: 2025-11-29 08:05:34.328382339 +0000 UTC m=+0.062405977 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:05:34 np0005539550 podman[295280]: 2025-11-29 08:05:34.331639704 +0000 UTC m=+0.065265671 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
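
[Editor's note] The two health_status=healthy events above come from podman's periodic healthcheck timer running the /openstack/healthcheck probe declared in each container's config_data. The same probe can be triggered on demand; a sketch, assuming podman on PATH and the container names from the log:

    import subprocess

    for name in ("ovn_metadata_agent", "multipathd"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy (rc=%d)" % rc)
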
Nov 29 03:05:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:34.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 241 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.9 MiB/s wr, 314 op/s
Nov 29 03:05:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:35.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 29 03:05:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 29 03:05:36 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 29 03:05:36 np0005539550 nova_compute[257631]: 2025-11-29 08:05:36.255 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:36.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 200 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 680 KiB/s rd, 2.1 MiB/s wr, 176 op/s
Nov 29 03:05:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:37.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:38 np0005539550 nova_compute[257631]: 2025-11-29 08:05:38.651 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:38 np0005539550 nova_compute[257631]: 2025-11-29 08:05:38.753 257641 DEBUG nova.virt.libvirt.driver [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
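
[Editor's note] The shelve request at 08:05:28 triggers a clean shutdown; ten seconds later the guest is still in state 1 (running), so the ACPI request is resent here, and the guest finally powers off at 08:05:41 (logged below). A condensed sketch of that retry pattern with the libvirt-python bindings (the 10-second resend interval matches the log; the 60-second budget and hard-stop fallback are illustrative, not nova's configured behavior):

    import time
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000037")
    deadline = time.time() + 60
    while time.time() < deadline:
        if dom.info()[0] == libvirt.VIR_DOMAIN_SHUTOFF:
            break            # guest powered off cleanly
        dom.shutdown()       # (re)send the ACPI shutdown request
        time.sleep(10)
    else:
        dom.destroy()        # hard stop if the guest never complies
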
Nov 29 03:05:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 228 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.4 MiB/s wr, 215 op/s
Nov 29 03:05:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:39.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:40.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d2035c62-1a9b-4df9-b82c-3729dc8ddc7a does not exist
Nov 29 03:05:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ee6da47d-6f53-430e-b9f0-e9ee821453ad does not exist
Nov 29 03:05:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev cb299e66-ccbd-4df4-b299-861326e513e6 does not exist
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:05:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
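
[Editor's note] This burst of mon_command dispatches is the cephadm mgr module refreshing its state: it regenerates a minimal client conf and re-reads the client.admin and client.bootstrap-osd keys. The first of those is an ordinary CLI command and can be reproduced from any admin client; a sketch:

    import subprocess

    minimal_conf = subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"]).decode()
    print(minimal_conf)  # [global] section with fsid and mon_host
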
Nov 29 03:05:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 246 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.5 MiB/s wr, 193 op/s
Nov 29 03:05:41 np0005539550 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000037.scope: Deactivated successfully.
Nov 29 03:05:41 np0005539550 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000037.scope: Consumed 14.378s CPU time.
Nov 29 03:05:41 np0005539550 systemd-machined[216673]: Machine qemu-28-instance-00000037 terminated.
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:05:41 np0005539550 nova_compute[257631]: 2025-11-29 08:05:41.258 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
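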
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 29 03:05:41 np0005539550 podman[295592]: 2025-11-29 08:05:41.376192417 +0000 UTC m=+0.023146461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:41 np0005539550 podman[295592]: 2025-11-29 08:05:41.483779692 +0000 UTC m=+0.130733706 container create 6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:05:41 np0005539550 systemd[1]: Started libpod-conmon-6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622.scope.
Nov 29 03:05:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:05:41 np0005539550 nova_compute[257631]: 2025-11-29 08:05:41.770 257641 INFO nova.virt.libvirt.driver [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance shutdown successfully after 13 seconds.
Nov 29 03:05:41 np0005539550 nova_compute[257631]: 2025-11-29 08:05:41.775 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance destroyed successfully.
Nov 29 03:05:41 np0005539550 nova_compute[257631]: 2025-11-29 08:05:41.775 257641 DEBUG nova.objects.instance [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lazy-loading 'numa_topology' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:41 np0005539550 podman[295592]: 2025-11-29 08:05:41.802152676 +0000 UTC m=+0.449106710 container init 6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:05:41 np0005539550 podman[295592]: 2025-11-29 08:05:41.810155763 +0000 UTC m=+0.457109787 container start 6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:05:41 np0005539550 sharp_visvesvaraya[295608]: 167 167
Nov 29 03:05:41 np0005539550 systemd[1]: libpod-6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622.scope: Deactivated successfully.
Nov 29 03:05:41 np0005539550 podman[295592]: 2025-11-29 08:05:41.945499338 +0000 UTC m=+0.592453382 container attach 6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:05:41 np0005539550 podman[295592]: 2025-11-29 08:05:41.946969426 +0000 UTC m=+0.593923440 container died 6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.979192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541979315, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 963, "num_deletes": 512, "total_data_size": 756621, "memory_usage": 775440, "flush_reason": "Manual Compaction"}
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541987493, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 745489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32927, "largest_seqno": 33889, "table_properties": {"data_size": 741098, "index_size": 1531, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13227, "raw_average_key_size": 18, "raw_value_size": 730084, "raw_average_value_size": 1038, "num_data_blocks": 67, "num_entries": 703, "num_filter_entries": 703, "num_deletions": 512, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403511, "oldest_key_time": 1764403511, "file_creation_time": 1764403541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 8332 microseconds, and 4391 cpu microseconds.
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.987555) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 745489 bytes OK
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.987579) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.989993) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.990042) EVENT_LOG_v1 {"time_micros": 1764403541990034, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.990069) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 750844, prev total WAL file size 750844, number of live WAL files 2.
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
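
[editor's note] Each rocksdb EVENT_LOG_v1 line above carries a plain JSON payload after the marker, so the flush/compaction telemetry can be pulled straight out of the journal. A small sketch (the sample line is an abridged copy of the JOB 39 event above):

    import json

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541979315, '
            '"job": 39, "event": "flush_started", "num_entries": 963}')
    payload = json.loads(line.split('EVENT_LOG_v1 ', 1)[1])
    print(payload['job'], payload['event'])   # -> 39 flush_started
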
Nov 29 03:05:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.991012) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(728KB)], [71(10075KB)]
Nov 29 03:05:41 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403541991285, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11062790, "oldest_snapshot_seqno": -1}
Nov 29 03:05:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:41.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
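
[editor's note] The radosgw beast access lines repeating through this section have a stable shape; a hypothetical parser for them (field layout inferred from this log, not from radosgw documentation):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:05:41.990 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = BEAST.search(line)
    print(m['ip'], m['status'], float(m['latency']))
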
Nov 29 03:05:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3fe065287e77535cab8c148a5b896f18910dcf8ebe662b8dc1213b221bd2f3cf-merged.mount: Deactivated successfully.
Nov 29 03:05:42 np0005539550 nova_compute[257631]: 2025-11-29 08:05:42.060 257641 INFO nova.virt.libvirt.driver [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Beginning cold snapshot process
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6360 keys, 8908894 bytes, temperature: kUnknown
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542156903, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8908894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8867928, "index_size": 23960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15941, "raw_key_size": 165845, "raw_average_key_size": 26, "raw_value_size": 8754927, "raw_average_value_size": 1376, "num_data_blocks": 950, "num_entries": 6360, "num_filter_entries": 6360, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403541, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:05:42 np0005539550 podman[295592]: 2025-11-29 08:05:42.15837936 +0000 UTC m=+0.805333374 container remove 6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.157189) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8908894 bytes
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.159597) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.8 rd, 53.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.8 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(26.8) write-amplify(12.0) OK, records in: 7398, records dropped: 1038 output_compression: NoCompression
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.159614) EVENT_LOG_v1 {"time_micros": 1764403542159606, "job": 40, "event": "compaction_finished", "compaction_time_micros": 165700, "compaction_time_cpu_micros": 47865, "output_level": 6, "num_output_files": 1, "total_output_size": 8908894, "num_input_records": 7398, "num_output_records": 6360, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542159907, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403542161797, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:41.990760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.161833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.161837) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.161838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.161840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:05:42.161842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:05:42 np0005539550 systemd[1]: libpod-conmon-6aa94d9d125b26393db555331d8603fc424da7a719ad9302a68e8e021af56622.scope: Deactivated successfully.
Nov 29 03:05:42 np0005539550 nova_compute[257631]: 2025-11-29 08:05:42.220 257641 DEBUG nova.virt.libvirt.imagebackend [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:05:42 np0005539550 podman[295665]: 2025-11-29 08:05:42.336567751 +0000 UTC m=+0.047451951 container create 8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:05:42 np0005539550 systemd[1]: Started libpod-conmon-8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184.scope.
Nov 29 03:05:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:05:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf17bfbeb734f5dc079e47f5d31d81a0481ea8c276a9fe60f46db4d3bdbf33f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf17bfbeb734f5dc079e47f5d31d81a0481ea8c276a9fe60f46db4d3bdbf33f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf17bfbeb734f5dc079e47f5d31d81a0481ea8c276a9fe60f46db4d3bdbf33f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf17bfbeb734f5dc079e47f5d31d81a0481ea8c276a9fe60f46db4d3bdbf33f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:42 np0005539550 podman[295665]: 2025-11-29 08:05:42.318083958 +0000 UTC m=+0.028968198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf17bfbeb734f5dc079e47f5d31d81a0481ea8c276a9fe60f46db4d3bdbf33f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
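
[editor's note] The repeated xfs warnings above are benign: 0x7fffffff is the 32-bit time_t ceiling, i.e. 2038-01-19T03:14:07Z.

    from datetime import datetime, timezone

    # The hex value in the xfs warnings is the 32-bit time_t maximum.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
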
Nov 29 03:05:42 np0005539550 podman[295665]: 2025-11-29 08:05:42.425312668 +0000 UTC m=+0.136196878 container init 8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:05:42 np0005539550 podman[295665]: 2025-11-29 08:05:42.433490033 +0000 UTC m=+0.144374243 container start 8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chaum, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:42 np0005539550 podman[295665]: 2025-11-29 08:05:42.437616006 +0000 UTC m=+0.148500236 container attach 8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:05:42 np0005539550 nova_compute[257631]: 2025-11-29 08:05:42.444 257641 DEBUG nova.storage.rbd_utils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] creating snapshot(6945183f11714fe4a8f8579de80540f3) on rbd image(59628432-68dc-48d9-8986-8511c376a62d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:05:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:42.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 248 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 865 KiB/s rd, 2.8 MiB/s wr, 146 op/s
Nov 29 03:05:43 np0005539550 sleepy_chaum[295681]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:05:43 np0005539550 sleepy_chaum[295681]: --> relative data size: 1.0
Nov 29 03:05:43 np0005539550 sleepy_chaum[295681]: --> All data devices are unavailable
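
[editor's note] The sleepy_chaum "-->" lines are cephadm running ceph-volume inside a short-lived container (hence the surrounding podman create/start/died events); it finds one LVM data device and none available for new OSDs. Roughly the same report can be pulled by hand with ceph-volume's inventory subcommand; a hedged sketch (assumes ceph-volume is on PATH, which on this host really means running inside the quay.io/ceph/ceph container):

    import json
    import subprocess

    out = subprocess.run(['ceph-volume', 'inventory', '--format', 'json'],
                         capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        print(dev.get('path'), dev.get('available'), dev.get('rejected_reasons'))
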
Nov 29 03:05:43 np0005539550 systemd[1]: libpod-8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184.scope: Deactivated successfully.
Nov 29 03:05:43 np0005539550 podman[295665]: 2025-11-29 08:05:43.327618004 +0000 UTC m=+1.038502234 container died 8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:05:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.701 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.751 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.751 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.770 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.848 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.849 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.856 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.856 257641 INFO nova.compute.claims [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:05:43 np0005539550 nova_compute[257631]: 2025-11-29 08:05:43.987 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:43.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 29 03:05:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 29 03:05:44 np0005539550 nova_compute[257631]: 2025-11-29 08:05:44.486 257641 DEBUG nova.storage.rbd_utils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] cloning vms/59628432-68dc-48d9-8986-8511c376a62d_disk@6945183f11714fe4a8f8579de80540f3 to images/f953a29b-1cff-4723-9bbc-02d6d0bd3151 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:05:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-daf17bfbeb734f5dc079e47f5d31d81a0481ea8c276a9fe60f46db4d3bdbf33f-merged.mount: Deactivated successfully.
Nov 29 03:05:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:44.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:44 np0005539550 nova_compute[257631]: 2025-11-29 08:05:44.606 257641 DEBUG nova.storage.rbd_utils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] flattening images/f953a29b-1cff-4723-9bbc-02d6d0bd3151 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:05:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454749006' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:44 np0005539550 nova_compute[257631]: 2025-11-29 08:05:44.988 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.001s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:44 np0005539550 nova_compute[257631]: 2025-11-29 08:05:44.996 257641 DEBUG nova.compute.provider_tree [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.014 257641 DEBUG nova.scheduler.client.report [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
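
[editor's note] The DISK_GB inventory above is fed by the "ceph df" call logged a few lines earlier. A sketch of that query and the raw totals it returns (field names per ceph df's JSON output; exactly how nova maps them to DISK_GB lives in nova's RBD driver and is not reproduced here):

    import json
    import subprocess

    # Same command and credentials nova runs above.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'] // 2**30, 'GiB raw,',
          stats['total_avail_bytes'] // 2**30, 'GiB avail')
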
Nov 29 03:05:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 248 MiB data, 668 MiB used, 20 GiB / 21 GiB avail; 769 KiB/s rd, 2.7 MiB/s wr, 116 op/s
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.060 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.061 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:05:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.142 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.142 257641 DEBUG nova.network.neutron [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.179 257641 INFO nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.209 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.308 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.310 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.310 257641 INFO nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Creating image(s)
Nov 29 03:05:45 np0005539550 podman[295665]: 2025-11-29 08:05:45.515143696 +0000 UTC m=+3.226027906 container remove 8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:05:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 29 03:05:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 29 03:05:45 np0005539550 systemd[1]: libpod-conmon-8af7793cbc784b615706c0a0b25661153d80babd01b649f95d0af05546238184.scope: Deactivated successfully.
Nov 29 03:05:45 np0005539550 nova_compute[257631]: 2025-11-29 08:05:45.860 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:45.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:46 np0005539550 podman[295980]: 2025-11-29 08:05:46.14716341 +0000 UTC m=+0.066661203 container create 668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_burnell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.149 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.181 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.184 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:46 np0005539550 podman[295980]: 2025-11-29 08:05:46.101851114 +0000 UTC m=+0.021348937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.215 257641 DEBUG nova.policy [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ee7a93f60394fd9b004c90c25ff5fc1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9e02874a2dc44489adba1420baa460f2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:05:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.252 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.253 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.254 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.254 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
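
[editor's note] The Acquiring/acquired/released triplets around fetch_func_sync come from oslo.concurrency's named locks; the pattern is reproducible with a few lines (lock name copied from the log; the decorated body here is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('f62ef5f82502d01c82174408aec7f3ac942e2488')
    def fetch_func_sync():
        # Held for the duration of the body; oslo logs the acquire and
        # release with the same "Lock ... acquired/released" wording above.
        pass

    fetch_func_sync()
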
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.279 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.283 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 73947417-cf68-4b6b-820b-66f001a8a178_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.309 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:05:46 np0005539550 systemd[1]: Started libpod-conmon-668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c.scope.
Nov 29 03:05:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:05:46 np0005539550 podman[295980]: 2025-11-29 08:05:46.402414505 +0000 UTC m=+0.321912328 container init 668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:05:46 np0005539550 podman[295980]: 2025-11-29 08:05:46.409723838 +0000 UTC m=+0.329221631 container start 668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_burnell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:05:46 np0005539550 loving_burnell[296038]: 167 167
Nov 29 03:05:46 np0005539550 systemd[1]: libpod-668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c.scope: Deactivated successfully.
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.458 257641 DEBUG nova.storage.rbd_utils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] removing snapshot(6945183f11714fe4a8f8579de80540f3) on rbd image(59628432-68dc-48d9-8986-8511c376a62d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
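
[editor's note] This remove_snap completes the cold-snapshot lifecycle logged above: create_snap (08:05:42.444), clone into the images pool (08:05:44.486), flatten the clone (08:05:44.606), then drop the source snapshot. A sketch of the same lifecycle with the Python rbd binding; pool, image, and snapshot names are taken from the log, while credentials and the protect/unprotect steps are assumptions (clone v1 requires a protected snapshot; nova's actual code lives in nova/storage/rbd_utils.py):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    vms = cluster.open_ioctx('vms')
    images = cluster.open_ioctx('images')
    src = rbd.Image(vms, '59628432-68dc-48d9-8986-8511c376a62d_disk')
    try:
        src.create_snap('6945183f11714fe4a8f8579de80540f3')
        src.protect_snap('6945183f11714fe4a8f8579de80540f3')
        rbd.RBD().clone(vms, '59628432-68dc-48d9-8986-8511c376a62d_disk',
                        '6945183f11714fe4a8f8579de80540f3',
                        images, 'f953a29b-1cff-4723-9bbc-02d6d0bd3151')
        with rbd.Image(images, 'f953a29b-1cff-4723-9bbc-02d6d0bd3151') as dst:
            dst.flatten()          # detach the clone from its parent snapshot
        src.unprotect_snap('6945183f11714fe4a8f8579de80540f3')
        src.remove_snap('6945183f11714fe4a8f8579de80540f3')
    finally:
        src.close()
        vms.close()
        images.close()
        cluster.shutdown()
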
Nov 29 03:05:46 np0005539550 podman[295980]: 2025-11-29 08:05:46.553348351 +0000 UTC m=+0.472846174 container attach 668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_burnell, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:05:46 np0005539550 podman[295980]: 2025-11-29 08:05:46.553844334 +0000 UTC m=+0.473342127 container died 668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:05:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.704 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 73947417-cf68-4b6b-820b-66f001a8a178_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:05:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-32f8d13a5c6bc8f5542db74520ce1b1ef8e141e707b0c31f6f641718095ac0cf-merged.mount: Deactivated successfully.
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.779 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] resizing rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
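
[editor's note] The import-then-resize pair above can be replayed with the rbd CLI. The import flags are copied verbatim from the logged command; the resize step is an assumption stated as CLI for symmetry (nova performs it through the binding, per rbd_utils.py:288, and 1073741824 bytes is 1024 MiB, rbd's default size unit):

    import subprocess

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    subprocess.run(['rbd', 'import', '--pool', 'vms', base,
                    '73947417-cf68-4b6b-820b-66f001a8a178_disk',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
    subprocess.run(['rbd', 'resize', '--pool', 'vms',
                    '73947417-cf68-4b6b-820b-66f001a8a178_disk',
                    '--size', '1024', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)
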
Nov 29 03:05:46 np0005539550 podman[295980]: 2025-11-29 08:05:46.872383505 +0000 UTC m=+0.791881298 container remove 668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.891 257641 DEBUG nova.objects.instance [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 73947417-cf68-4b6b-820b-66f001a8a178 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:05:46 np0005539550 systemd[1]: libpod-conmon-668200eca628bab927eb2f743267ce8f865a73d00e4524c7e469941d81c1ec6c.scope: Deactivated successfully.
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.908 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.908 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Ensure instance console log exists: /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.909 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.910 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.910 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
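The Acquiring/acquired/released triple above is oslo.concurrency's standard lock logging; the vgpu_resources lock is held for 0.000s because the m1.nano flavor requests no mediated devices. A sketch of the pattern that produces these lines (the decorated function here is hypothetical):

    from oslo_concurrency import lockutils

    # Same lock name as in the log; the decorator's wrapper emits the
    # "Acquiring"/"acquired"/"released" debug lines seen above.
    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs(requested=None):
        # No vGPUs requested for this flavor, so the body returns
        # immediately and the lock is held for ~0 seconds.
        return []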
Nov 29 03:05:46 np0005539550 nova_compute[257631]: 2025-11-29 08:05:46.918 257641 DEBUG nova.network.neutron [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Successfully created port: 54f58d56-0af4-48e7-9ab3-a22887a4161f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:05:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 248 MiB data, 669 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 85 KiB/s wr, 83 op/s
Nov 29 03:05:47 np0005539550 podman[296168]: 2025-11-29 08:05:47.035065506 +0000 UTC m=+0.041490642 container create 5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:05:47 np0005539550 podman[296168]: 2025-11-29 08:05:47.016478119 +0000 UTC m=+0.022903285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:47 np0005539550 systemd[1]: Started libpod-conmon-5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb.scope.
Nov 29 03:05:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:05:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd1b82bc7273d2db0ca3f284013f55d80bb73d1c36d56d8c37452a5cb5dbf6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd1b82bc7273d2db0ca3f284013f55d80bb73d1c36d56d8c37452a5cb5dbf6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd1b82bc7273d2db0ca3f284013f55d80bb73d1c36d56d8c37452a5cb5dbf6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcd1b82bc7273d2db0ca3f284013f55d80bb73d1c36d56d8c37452a5cb5dbf6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:47 np0005539550 podman[296168]: 2025-11-29 08:05:47.277311633 +0000 UTC m=+0.283736799 container init 5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 29 03:05:47 np0005539550 podman[296168]: 2025-11-29 08:05:47.285778466 +0000 UTC m=+0.292203602 container start 5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:05:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 29 03:05:47 np0005539550 podman[296168]: 2025-11-29 08:05:47.322667221 +0000 UTC m=+0.329092377 container attach 5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:05:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 29 03:05:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.440 257641 DEBUG nova.storage.rbd_utils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] creating snapshot(snap) on rbd image(f953a29b-1cff-4723-9bbc-02d6d0bd3151) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.682 257641 DEBUG nova.network.neutron [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Successfully updated port: 54f58d56-0af4-48e7-9ab3-a22887a4161f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.702 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.702 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquired lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.702 257641 DEBUG nova.network.neutron [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.774 257641 DEBUG nova.compute.manager [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-changed-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.774 257641 DEBUG nova.compute.manager [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Refreshing instance network info cache due to event network-changed-54f58d56-0af4-48e7-9ab3-a22887a4161f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.774 257641 DEBUG oslo_concurrency.lockutils [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:47 np0005539550 nova_compute[257631]: 2025-11-29 08:05:47.870 257641 DEBUG nova.network.neutron [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:47.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
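The beast lines here and elsewhere in this window are anonymous HEAD / probes arriving roughly once a second from the controller addresses, i.e. load-balancer style health checks against radosgw. A hypothetical reproduction of one probe; the RGW frontend port is not visible in the log, so the port below is an assumption:

    import http.client

    # 8080 is an assumption; substitute the deployment's beast endpoint port.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=5)
    conn.request('HEAD', '/')
    resp = conn.getresponse()
    print(resp.status)  # the log shows 200 with a zero-length body
    conn.close()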
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]: {
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:    "0": [
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:        {
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "devices": [
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "/dev/loop3"
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            ],
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "lv_name": "ceph_lv0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "lv_size": "7511998464",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "name": "ceph_lv0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "tags": {
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.cluster_name": "ceph",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.crush_device_class": "",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.encrypted": "0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.osd_id": "0",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.type": "block",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:                "ceph.vdo": "0"
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            },
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "type": "block",
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:            "vg_name": "ceph_vg0"
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:        }
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]:    ]
Nov 29 03:05:48 np0005539550 trusting_hofstadter[296184]: }
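The JSON block printed by the trusting_hofstadter container has the shape of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags flattened into a tags object. A sketch that pulls out the useful fields, assuming raw_output holds the block above as a string (raw_output is hypothetical):

    import json

    report = json.loads(raw_output)  # raw_output: the {...} text logged above
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv['tags']
            # e.g. 0 5dd67027-4f06-4800-93bd-47ed1a74c5e6 ['/dev/loop3'] /dev/ceph_vg0/ceph_lv0
            print(osd_id, tags['ceph.osd_fsid'], lv['devices'], lv['lv_path'])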
Nov 29 03:05:48 np0005539550 systemd[1]: libpod-5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb.scope: Deactivated successfully.
Nov 29 03:05:48 np0005539550 podman[296168]: 2025-11-29 08:05:48.13734943 +0000 UTC m=+1.143774566 container died 5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:05:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:48.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.618 257641 DEBUG nova.network.neutron [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updating instance_info_cache with network_info: [{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.641 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Releasing lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.642 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Instance network_info: |[{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
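Both network_info dumps above carry the same VIF structure, and the pieces Nova needs next (fixed IP, MTU, OVS interface id) sit at fixed paths inside it. A sketch, assuming network_info_json holds the |[...]| payload above as a JSON string (the variable is hypothetical; the key names are as logged):

    import json

    network_info = json.loads(network_info_json)
    vif = network_info[0]
    print(vif['network']['subnets'][0]['ips'][0]['address'])  # 10.100.0.14
    print(vif['network']['meta']['mtu'])                      # 1442
    print(vif['ovs_interfaceid'])                             # 54f58d56-0af4-48e7-9ab3-a22887a4161f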
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.643 257641 DEBUG oslo_concurrency.lockutils [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.643 257641 DEBUG nova.network.neutron [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Refreshing network info cache for port 54f58d56-0af4-48e7-9ab3-a22887a4161f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.645 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Start _get_guest_xml network_info=[{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.651 257641 WARNING nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.660 257641 DEBUG nova.virt.libvirt.host [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.661 257641 DEBUG nova.virt.libvirt.host [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.668 257641 DEBUG nova.virt.libvirt.host [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.668 257641 DEBUG nova.virt.libvirt.host [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.669 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.670 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.670 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.670 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.670 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.671 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.671 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.671 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.671 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.671 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.672 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.672 257641 DEBUG nova.virt.hardware [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
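The hardware.py sequence above is a small constraint search: flavor and image limits of 0:0:0 mean "unset", the limits default to 65536 each, and the candidates are every sockets x cores x threads factorization of the vCPU count. A simplified model of that enumeration (Nova's actual code also applies preferences and sort order), which reproduces the "Got 1 possible topologies" result for 1 vCPU:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every factorization of vcpus within the limits; for vcpus=1 the
        # only answer is (1, 1, 1), matching the log.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]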
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.675 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:48 np0005539550 nova_compute[257631]: 2025-11-29 08:05:48.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 29 03:05:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bdcd1b82bc7273d2db0ca3f284013f55d80bb73d1c36d56d8c37452a5cb5dbf6-merged.mount: Deactivated successfully.
Nov 29 03:05:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 29 03:05:48 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 29 03:05:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 363 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 16 MiB/s rd, 14 MiB/s wr, 361 op/s
Nov 29 03:05:49 np0005539550 podman[296168]: 2025-11-29 08:05:49.075470405 +0000 UTC m=+2.081895541 container remove 5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:05:49 np0005539550 systemd[1]: libpod-conmon-5a75396c9626cc4e08929398aec4003d47593dfa18ef58ab96312b5090e107bb.scope: Deactivated successfully.
Nov 29 03:05:49 np0005539550 podman[296227]: 2025-11-29 08:05:49.189570338 +0000 UTC m=+0.417513596 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:05:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3582646783' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.290 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.320 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.325 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:49 np0005539550 podman[296453]: 2025-11-29 08:05:49.678688879 +0000 UTC m=+0.040251661 container create e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:05:49 np0005539550 podman[296453]: 2025-11-29 08:05:49.659843986 +0000 UTC m=+0.021406788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:49 np0005539550 systemd[1]: Started libpod-conmon-e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1.scope.
Nov 29 03:05:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:05:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141483633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:05:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.793 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
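Nova shells out to `ceph mon dump --format=json` twice in this window (once per RBD driver instantiation) to learn the monitor addresses that end up as the three <host> entries in the disk XML below. The equivalent call and parse, using subprocess directly where Nova goes through oslo_concurrency.processutils; the 'mons'/'name' fields are the standard mon dump JSON layout:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    mon_map = json.loads(out)
    print([m['name'] for m in mon_map['mons']])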
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.794 257641 DEBUG nova.virt.libvirt.vif [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-224129150',display_name='tempest-SecurityGroupsTestJSON-server-224129150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-224129150',id=60,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-hxs7ld9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:45Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=73947417-cf68-4b6b-820b-66f001a8a178,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.795 257641 DEBUG nova.network.os_vif_util [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.796 257641 DEBUG nova.network.os_vif_util [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:59:d5,bridge_name='br-int',has_traffic_filtering=True,id=54f58d56-0af4-48e7-9ab3-a22887a4161f,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f58d56-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.797 257641 DEBUG nova.objects.instance [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 73947417-cf68-4b6b-820b-66f001a8a178 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.810 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <uuid>73947417-cf68-4b6b-820b-66f001a8a178</uuid>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <name>instance-0000003c</name>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <nova:name>tempest-SecurityGroupsTestJSON-server-224129150</nova:name>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:05:48</nova:creationTime>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:user uuid="9ee7a93f60394fd9b004c90c25ff5fc1">tempest-SecurityGroupsTestJSON-1605163301-project-member</nova:user>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:project uuid="9e02874a2dc44489adba1420baa460f2">tempest-SecurityGroupsTestJSON-1605163301</nova:project>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <nova:port uuid="54f58d56-0af4-48e7-9ab3-a22887a4161f">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <entry name="serial">73947417-cf68-4b6b-820b-66f001a8a178</entry>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <entry name="uuid">73947417-cf68-4b6b-820b-66f001a8a178</entry>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/73947417-cf68-4b6b-820b-66f001a8a178_disk">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/73947417-cf68-4b6b-820b-66f001a8a178_disk.config">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:2f:59:d5"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <target dev="tap54f58d56-0a"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/console.log" append="off"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:05:49 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:05:49 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:05:49 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:05:49 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.810 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Preparing to wait for external event network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.810 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.811 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.811 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.811 257641 DEBUG nova.virt.libvirt.vif [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:05:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-224129150',display_name='tempest-SecurityGroupsTestJSON-server-224129150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-224129150',id=60,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-hxs7ld9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:05:45Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=73947417-cf68-4b6b-820b-66f001a8a178,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.811 257641 DEBUG nova.network.os_vif_util [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.812 257641 DEBUG nova.network.os_vif_util [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:59:d5,bridge_name='br-int',has_traffic_filtering=True,id=54f58d56-0af4-48e7-9ab3-a22887a4161f,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f58d56-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.812 257641 DEBUG os_vif [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:59:d5,bridge_name='br-int',has_traffic_filtering=True,id=54f58d56-0af4-48e7-9ab3-a22887a4161f,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f58d56-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.813 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.813 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.813 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.819 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.819 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54f58d56-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.820 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54f58d56-0a, col_values=(('external_ids', {'iface-id': '54f58d56-0af4-48e7-9ab3-a22887a4161f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:59:d5', 'vm-uuid': '73947417-cf68-4b6b-820b-66f001a8a178'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.821 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.822 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:05:49 np0005539550 NetworkManager[49039]: <info>  [1764403549.8234] manager: (tap54f58d56-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.829 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.829 257641 INFO os_vif [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:59:d5,bridge_name='br-int',has_traffic_filtering=True,id=54f58d56-0af4-48e7-9ab3-a22887a4161f,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f58d56-0a')#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.835 257641 DEBUG nova.network.neutron [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updated VIF entry in instance network info cache for port 54f58d56-0af4-48e7-9ab3-a22887a4161f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.836 257641 DEBUG nova.network.neutron [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updating instance_info_cache with network_info: [{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:49 np0005539550 podman[296453]: 2025-11-29 08:05:49.839310339 +0000 UTC m=+0.200873141 container init e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:05:49 np0005539550 podman[296453]: 2025-11-29 08:05:49.848774616 +0000 UTC m=+0.210337398 container start e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:49 np0005539550 systemd[1]: libpod-e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1.scope: Deactivated successfully.
Nov 29 03:05:49 np0005539550 hardcore_brahmagupta[296469]: 167 167
Nov 29 03:05:49 np0005539550 conmon[296469]: conmon e2ee194cf78e2bc22cc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1.scope/container/memory.events
Nov 29 03:05:49 np0005539550 nova_compute[257631]: 2025-11-29 08:05:49.861 257641 DEBUG oslo_concurrency.lockutils [req-ade3b18b-b21a-4238-9e3c-ada2501c2142 req-659beae5-7ff7-4fb6-b6f9-86bf1ef0e001 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 29 03:05:49 np0005539550 podman[296453]: 2025-11-29 08:05:49.998663727 +0000 UTC m=+0.360226509 container attach e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:05:49 np0005539550 podman[296453]: 2025-11-29 08:05:49.999283882 +0000 UTC m=+0.360846654 container died e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:05:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:49.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.048 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.049 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.050 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] No VIF found with MAC fa:16:3e:2f:59:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.050 257641 INFO nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Using config drive#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.128 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 29 03:05:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-176c248e16c4677f5e22237e98f53889af02becfe2366bdb7878ea74cd4028df-merged.mount: Deactivated successfully.
Nov 29 03:05:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 29 03:05:50 np0005539550 podman[296453]: 2025-11-29 08:05:50.521541884 +0000 UTC m=+0.883104676 container remove e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.571 257641 INFO nova.virt.libvirt.driver [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Snapshot image upload complete#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.571 257641 DEBUG nova.compute.manager [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:50.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.614 257641 INFO nova.compute.manager [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Shelve offloading#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.621 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance destroyed successfully.#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.621 257641 DEBUG nova.compute.manager [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.624 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.624 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquired lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:50 np0005539550 nova_compute[257631]: 2025-11-29 08:05:50.624 257641 DEBUG nova.network.neutron [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:05:50 np0005539550 systemd[1]: libpod-conmon-e2ee194cf78e2bc22cc7a3b6e61d1850e4a863c2fb0655916fcbe7cd73c498f1.scope: Deactivated successfully.
Nov 29 03:05:50 np0005539550 podman[296516]: 2025-11-29 08:05:50.666910451 +0000 UTC m=+0.026679070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:05:50 np0005539550 podman[296516]: 2025-11-29 08:05:50.760907949 +0000 UTC m=+0.120676538 container create 662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:05:50 np0005539550 systemd[1]: Started libpod-conmon-662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309.scope.
Nov 29 03:05:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:05:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8aa495c546c72f35bbbc66b14fa9bf125bf5d0f3c3f9c1f502a1d3fa5fb473/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8aa495c546c72f35bbbc66b14fa9bf125bf5d0f3c3f9c1f502a1d3fa5fb473/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8aa495c546c72f35bbbc66b14fa9bf125bf5d0f3c3f9c1f502a1d3fa5fb473/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd8aa495c546c72f35bbbc66b14fa9bf125bf5d0f3c3f9c1f502a1d3fa5fb473/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:50 np0005539550 podman[296516]: 2025-11-29 08:05:50.883590397 +0000 UTC m=+0.243358986 container init 662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:05:50 np0005539550 podman[296516]: 2025-11-29 08:05:50.891909915 +0000 UTC m=+0.251678504 container start 662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:05:51 np0005539550 podman[296516]: 2025-11-29 08:05:51.003224618 +0000 UTC m=+0.362993207 container attach 662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:05:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 433 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 20 MiB/s rd, 20 MiB/s wr, 471 op/s
Nov 29 03:05:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.263 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.319 257641 DEBUG nova.network.neutron [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:05:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.333 257641 INFO nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Creating config drive at /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/disk.config#033[00m
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.338 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0ba68we execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.471 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0ba68we" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.504 257641 DEBUG nova.storage.rbd_utils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] rbd image 73947417-cf68-4b6b-820b-66f001a8a178_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.507 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/disk.config 73947417-cf68-4b6b-820b-66f001a8a178_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 29 03:05:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]: {
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]:        "osd_id": 0,
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]:        "type": "bluestore"
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]:    }
Nov 29 03:05:51 np0005539550 thirsty_heyrovsky[296532]: }
Nov 29 03:05:51 np0005539550 systemd[1]: libpod-662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309.scope: Deactivated successfully.
Nov 29 03:05:51 np0005539550 podman[296516]: 2025-11-29 08:05:51.82798172 +0000 UTC m=+1.187750309 container died 662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.927 257641 DEBUG oslo_concurrency.processutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/disk.config 73947417-cf68-4b6b-820b-66f001a8a178_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.928 257641 INFO nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Deleting local config drive /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178/disk.config because it was imported into RBD.#033[00m
Nov 29 03:05:51 np0005539550 kernel: tap54f58d56-0a: entered promiscuous mode
Nov 29 03:05:51 np0005539550 NetworkManager[49039]: <info>  [1764403551.9839] manager: (tap54f58d56-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 03:05:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:05:51Z|00178|binding|INFO|Claiming lport 54f58d56-0af4-48e7-9ab3-a22887a4161f for this chassis.
Nov 29 03:05:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:05:51Z|00179|binding|INFO|54f58d56-0af4-48e7-9ab3-a22887a4161f: Claiming fa:16:3e:2f:59:d5 10.100.0.14
Nov 29 03:05:51 np0005539550 nova_compute[257631]: 2025-11-29 08:05:51.989 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:51.996 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:59:d5 10.100.0.14'], port_security=['fa:16:3e:2f:59:d5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '73947417-cf68-4b6b-820b-66f001a8a178', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=54f58d56-0af4-48e7-9ab3-a22887a4161f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:51.998 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 54f58d56-0af4-48e7-9ab3-a22887a4161f in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 bound to our chassis#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:51.999 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddaca73e-4e30-4040-a35d-8d63a2e74570#033[00m
Nov 29 03:05:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fd8aa495c546c72f35bbbc66b14fa9bf125bf5d0f3c3f9c1f502a1d3fa5fb473-merged.mount: Deactivated successfully.
Nov 29 03:05:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:52.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.013 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a27551b8-670b-455e-bffe-620b5065c04b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.014 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddaca73e-41 in ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.016 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddaca73e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.016 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[93f674bf-5dc6-4db5-b867-c6bdb9020182]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.017 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[060b73f6-f11c-4ce2-9e89-9458840314e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 systemd-machined[216673]: New machine qemu-29-instance-0000003c.
Nov 29 03:05:52 np0005539550 systemd-udevd[296670]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.029 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[3622dc9d-b2c9-42b0-a62d-3079ba8bba99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 NetworkManager[49039]: <info>  [1764403552.0363] device (tap54f58d56-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:05:52 np0005539550 NetworkManager[49039]: <info>  [1764403552.0376] device (tap54f58d56-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.052 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c047a11a-8994-49a2-acd8-c8223ae87597]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 systemd[1]: Started Virtual Machine qemu-29-instance-0000003c.
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.068 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:05:52Z|00180|binding|INFO|Setting lport 54f58d56-0af4-48e7-9ab3-a22887a4161f ovn-installed in OVS
Nov 29 03:05:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:05:52Z|00181|binding|INFO|Setting lport 54f58d56-0af4-48e7-9ab3-a22887a4161f up in Southbound
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.077 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.083 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[da46abde-4f98-4619-be21-d71e811e4dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 systemd-udevd[296674]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:05:52 np0005539550 NetworkManager[49039]: <info>  [1764403552.0904] manager: (tapddaca73e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.090 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bddee423-6e22-4a17-862b-419e040d65ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.120 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdbabda-4ba7-48aa-8e20-741363d648d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.122 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ab7f66-4365-41d5-93ed-023cfef94e00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 NetworkManager[49039]: <info>  [1764403552.1437] device (tapddaca73e-40): carrier: link connected
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.148 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[02a065ab-9e25-4feb-91f2-1319fefb0a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.164 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5771609e-57d2-4eed-9cbc-dc09f4903845]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddaca73e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:0d:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659980, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296702, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.178 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fbed16e4-54c9-426a-99ea-df4a791d57e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:dcb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 659980, 'tstamp': 659980}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296703, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.194 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[582af15c-9b1a-470a-81ff-7ed78b150c93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddaca73e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:0d:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659980, 'reachable_time': 38556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296704, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.222 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4b3cc397-02e4-4b1b-b739-4de00000168d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.268 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[777a9926-2573-4252-811d-09dbd5832bc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.270 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddaca73e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.270 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.270 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddaca73e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539550 NetworkManager[49039]: <info>  [1764403552.2738] manager: (tapddaca73e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 29 03:05:52 np0005539550 kernel: tapddaca73e-40: entered promiscuous mode
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.274 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddaca73e-40, col_values=(('external_ids', {'iface-id': '1387aa6b-e0e8-4d33-958c-8368713de9bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
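[Editor's note] The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) map onto the public OvsdbIdl API: drop the tap port from br-ex, add it to br-int, and stamp the Interface row with the Neutron port's iface-id. A minimal sketch of issuing the same transaction, assuming a local OVS database at the default unix socket (the socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction carrying the same three commands seen in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapddaca73e-40', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapddaca73e-40', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapddaca73e-40',
            ('external_ids',
             {'iface-id': '1387aa6b-e0e8-4d33-958c-8368713de9bf'})))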
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.275 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:05:52Z|00182|binding|INFO|Releasing lport 1387aa6b-e0e8-4d33-958c-8368713de9bf from this chassis (sb_readonly=0)
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.293 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.293 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.294 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[71260757-d922-4eac-bf09-e203bed034d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.294 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-ddaca73e-4e30-4040-a35d-8d63a2e74570
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/ddaca73e-4e30-4040-a35d-8d63a2e74570.pid.haproxy
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID ddaca73e-4e30-4040-a35d-8d63a2e74570
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:05:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:05:52.295 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'env', 'PROCESS_TAG=haproxy-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddaca73e-4e30-4040-a35d-8d63a2e74570.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
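[Editor's note] The rendered config binds haproxy to 169.254.169.254:80 inside the ovnmeta- namespace and proxies requests to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header on the way through. A minimal spot-check from the node, assuming root, that curl is available, and that the namespace from the log still exists:

    import subprocess

    # Hit the proxy on the metadata address inside the namespace; any
    # HTTP status back (even a 404) proves haproxy is answering.
    out = subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570',
         'curl', '-sS', '-o', '/dev/null', '-w', '%{http_code}',
         'http://169.254.169.254/'],
        capture_output=True, text=True, check=True)
    print(out.stdout)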
Nov 29 03:05:52 np0005539550 podman[296516]: 2025-11-29 08:05:52.426743671 +0000 UTC m=+1.786512260 container remove 662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_heyrovsky, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:05:52 np0005539550 systemd[1]: libpod-conmon-662eee1a92f9cffbe7658a9416a15e4a4946ca7f418cca56ffae2a05ab0b7309.scope: Deactivated successfully.
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.442 257641 DEBUG nova.network.neutron [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.460 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Releasing lock "refresh_cache-59628432-68dc-48d9-8986-8511c376a62d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.476 257641 INFO nova.virt.libvirt.driver [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Instance destroyed successfully.#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.476 257641 DEBUG nova.objects.instance [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lazy-loading 'resources' on Instance uuid 59628432-68dc-48d9-8986-8511c376a62d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.551 257641 DEBUG nova.compute.manager [req-61e8b6eb-1730-43dc-bfe1-b8ad1d673cd9 req-0a7f438b-1da5-48e8-8df5-0f5d19cefff5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.552 257641 DEBUG oslo_concurrency.lockutils [req-61e8b6eb-1730-43dc-bfe1-b8ad1d673cd9 req-0a7f438b-1da5-48e8-8df5-0f5d19cefff5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.552 257641 DEBUG oslo_concurrency.lockutils [req-61e8b6eb-1730-43dc-bfe1-b8ad1d673cd9 req-0a7f438b-1da5-48e8-8df5-0f5d19cefff5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.552 257641 DEBUG oslo_concurrency.lockutils [req-61e8b6eb-1730-43dc-bfe1-b8ad1d673cd9 req-0a7f438b-1da5-48e8-8df5-0f5d19cefff5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.553 257641 DEBUG nova.compute.manager [req-61e8b6eb-1730-43dc-bfe1-b8ad1d673cd9 req-0a7f438b-1da5-48e8-8df5-0f5d19cefff5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Processing event network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:05:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:52.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:52 np0005539550 podman[296785]: 2025-11-29 08:05:52.661990893 +0000 UTC m=+0.023527741 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:05:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:05:52 np0005539550 podman[296785]: 2025-11-29 08:05:52.971483838 +0000 UTC m=+0.333020666 container create 9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.971 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403552.970396, 73947417-cf68-4b6b-820b-66f001a8a178 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.971 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] VM Started (Lifecycle Event)#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.973 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.977 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.983 257641 INFO nova.virt.libvirt.driver [-] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Instance spawned successfully.#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.983 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:05:52 np0005539550 nova_compute[257631]: 2025-11-29 08:05:52.998 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.004 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:53 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6e1ec01e-2a02-4f8d-9951-f3de13d91603 does not exist
Nov 29 03:05:53 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 39ebad2f-23c3-49a0-a554-c92047d2de62 does not exist
Nov 29 03:05:53 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a2644447-c849-4d2f-aaab-22d723abf4d6 does not exist
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.009 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.010 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.010 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.011 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.011 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.012 257641 DEBUG nova.virt.libvirt.driver [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:05:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 433 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 20 MiB/s rd, 19 MiB/s wr, 453 op/s
Nov 29 03:05:53 np0005539550 systemd[1]: Started libpod-conmon-9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9.scope.
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.033 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.035 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403552.9744303, 73947417-cf68-4b6b-820b-66f001a8a178 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.035 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:05:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.055 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.060 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403552.977531, 73947417-cf68-4b6b-820b-66f001a8a178 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.061 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:05:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bbc59c0f25202263a8e14805ff86f8eae7d78ffe03309a46c5740ab5ffe0568/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.065 257641 INFO nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Took 7.76 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.066 257641 DEBUG nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.080 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.083 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.105 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:05:53 np0005539550 podman[296785]: 2025-11-29 08:05:53.106462244 +0000 UTC m=+0.467999092 container init 9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:05:53 np0005539550 podman[296785]: 2025-11-29 08:05:53.113097981 +0000 UTC m=+0.474634809 container start 9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.135 257641 INFO nova.compute.manager [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Took 9.32 seconds to build instance.#033[00m
Nov 29 03:05:53 np0005539550 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[296827]: [NOTICE]   (296861) : New worker (296865) forked
Nov 29 03:05:53 np0005539550 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[296827]: [NOTICE]   (296861) : Loading success.
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.153 257641 DEBUG oslo_concurrency.lockutils [None req-d8f15572-daaa-4796-97d9-5494d03d70c1 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.946 257641 INFO nova.virt.libvirt.driver [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deleting instance files /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d_del#033[00m
Nov 29 03:05:53 np0005539550 nova_compute[257631]: 2025-11-29 08:05:53.948 257641 INFO nova.virt.libvirt.driver [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Deletion of /var/lib/nova/instances/59628432-68dc-48d9-8986-8511c376a62d_del complete#033[00m
Nov 29 03:05:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:05:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:54.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.062 257641 INFO nova.scheduler.client.report [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Deleted allocations for instance 59628432-68dc-48d9-8986-8511c376a62d#033[00m
Nov 29 03:05:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 29 03:05:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.117 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.118 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.160 257641 DEBUG oslo_concurrency.processutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:05:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:05:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:54.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:05:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:05:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721953620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.622 257641 DEBUG oslo_concurrency.processutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
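[Editor's note] nova-compute sizes its RBD-backed disk inventory by shelling out to the same ceph df call logged above. A minimal sketch of running it by hand and reading the cluster totals, assuming the 'openstack' client keyring and /etc/ceph/ceph.conf are in place:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(raw)
    # Cluster-wide totals; per-pool figures live under df['pools'].
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])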
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.630 257641 DEBUG nova.compute.provider_tree [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.636 257641 DEBUG nova.compute.manager [req-fd198358-d8ca-4d76-82c0-29594edd49fe req-9f2b059c-9cc0-4735-8202-979e9c3d9a32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.636 257641 DEBUG oslo_concurrency.lockutils [req-fd198358-d8ca-4d76-82c0-29594edd49fe req-9f2b059c-9cc0-4735-8202-979e9c3d9a32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.637 257641 DEBUG oslo_concurrency.lockutils [req-fd198358-d8ca-4d76-82c0-29594edd49fe req-9f2b059c-9cc0-4735-8202-979e9c3d9a32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.637 257641 DEBUG oslo_concurrency.lockutils [req-fd198358-d8ca-4d76-82c0-29594edd49fe req-9f2b059c-9cc0-4735-8202-979e9c3d9a32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.637 257641 DEBUG nova.compute.manager [req-fd198358-d8ca-4d76-82c0-29594edd49fe req-9f2b059c-9cc0-4735-8202-979e9c3d9a32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] No waiting events found dispatching network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.637 257641 WARNING nova.compute.manager [req-fd198358-d8ca-4d76-82c0-29594edd49fe req-9f2b059c-9cc0-4735-8202-979e9c3d9a32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received unexpected event network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.647 257641 DEBUG nova.scheduler.client.report [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
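[Editor's note] Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the figures logged above work out as follows:

    # Effective capacity per resource class from the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1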
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.665 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.709 257641 DEBUG oslo_concurrency.lockutils [None req-0987313a-93d5-4114-bfd8-29a573245230 dc44b9aeabb442f582688b672dd724f3 49ee945ea42e47ad9f070078a4d5179b - - default default] Lock "59628432-68dc-48d9-8986-8511c376a62d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 26.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:05:54 np0005539550 nova_compute[257631]: 2025-11-29 08:05:54.823 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 424 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 11 MiB/s wr, 338 op/s
Nov 29 03:05:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 29 03:05:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 29 03:05:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 29 03:05:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:56.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:05:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 29 03:05:56 np0005539550 nova_compute[257631]: 2025-11-29 08:05:56.264 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:05:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 29 03:05:56 np0005539550 nova_compute[257631]: 2025-11-29 08:05:56.291 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403541.2899313, 59628432-68dc-48d9-8986-8511c376a62d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:05:56 np0005539550 nova_compute[257631]: 2025-11-29 08:05:56.292 257641 INFO nova.compute.manager [-] [instance: 59628432-68dc-48d9-8986-8511c376a62d] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:05:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 29 03:05:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:05:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:05:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 420 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.3 MiB/s wr, 318 op/s
Nov 29 03:05:57 np0005539550 nova_compute[257631]: 2025-11-29 08:05:57.618 257641 DEBUG nova.compute.manager [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-changed-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:57 np0005539550 nova_compute[257631]: 2025-11-29 08:05:57.619 257641 DEBUG nova.compute.manager [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Refreshing instance network info cache due to event network-changed-54f58d56-0af4-48e7-9ab3-a22887a4161f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:57 np0005539550 nova_compute[257631]: 2025-11-29 08:05:57.619 257641 DEBUG oslo_concurrency.lockutils [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:57 np0005539550 nova_compute[257631]: 2025-11-29 08:05:57.619 257641 DEBUG oslo_concurrency.lockutils [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:57 np0005539550 nova_compute[257631]: 2025-11-29 08:05:57.620 257641 DEBUG nova.network.neutron [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Refreshing network info cache for port 54f58d56-0af4-48e7-9ab3-a22887a4161f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:57 np0005539550 nova_compute[257631]: 2025-11-29 08:05:57.639 257641 DEBUG nova.compute.manager [None req-90d427e5-84e5-4f40-a9e2-29f2d8ccb812 - - - - - -] [instance: 59628432-68dc-48d9-8986-8511c376a62d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:05:58.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:05:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:05:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:05:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:05:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:05:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:05:58 np0005539550 nova_compute[257631]: 2025-11-29 08:05:58.989 257641 DEBUG nova.compute.manager [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-changed-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:05:58 np0005539550 nova_compute[257631]: 2025-11-29 08:05:58.989 257641 DEBUG nova.compute.manager [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Refreshing instance network info cache due to event network-changed-54f58d56-0af4-48e7-9ab3-a22887a4161f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:05:58 np0005539550 nova_compute[257631]: 2025-11-29 08:05:58.990 257641 DEBUG oslo_concurrency.lockutils [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:05:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 433 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.6 MiB/s wr, 388 op/s
Nov 29 03:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:05:59
Nov 29 03:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.log', 'images']
Nov 29 03:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:05:59 np0005539550 nova_compute[257631]: 2025-11-29 08:05:59.626 257641 DEBUG nova.network.neutron [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updated VIF entry in instance network info cache for port 54f58d56-0af4-48e7-9ab3-a22887a4161f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:05:59 np0005539550 nova_compute[257631]: 2025-11-29 08:05:59.626 257641 DEBUG nova.network.neutron [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updating instance_info_cache with network_info: [{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
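[Editor's note] The network_info payload that follows is Nova's serialized VIF model; the instance's fixed IP and MTU sit a few levels down. A minimal sketch of digging them out, with the JSON trimmed to the relevant fields of that log line:

    import json

    # Trimmed from the network_info entry in the surrounding log lines.
    vif_json = ('[{"network": {"meta": {"mtu": 1442}, '
                '"subnets": [{"ips": [{"address": "10.100.0.14"}]}]}}]')
    vif = json.loads(vif_json)[0]
    print(vif['network']['subnets'][0]['ips'][0]['address'],
          vif['network']['meta']['mtu'])  # 10.100.0.14 1442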
Nov 29 03:05:59 np0005539550 nova_compute[257631]: 2025-11-29 08:05:59.653 257641 DEBUG oslo_concurrency.lockutils [req-7bf9ff4b-f719-48ce-a307-b86124db52ab req-65ea1384-ec66-46ad-81b2-70309b32692b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:05:59 np0005539550 nova_compute[257631]: 2025-11-29 08:05:59.654 257641 DEBUG oslo_concurrency.lockutils [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:05:59 np0005539550 nova_compute[257631]: 2025-11-29 08:05:59.654 257641 DEBUG nova.network.neutron [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Refreshing network info cache for port 54f58d56-0af4-48e7-9ab3-a22887a4161f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:05:59 np0005539550 nova_compute[257631]: 2025-11-29 08:05:59.826 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:00.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:00.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:00 np0005539550 nova_compute[257631]: 2025-11-29 08:06:00.882 257641 DEBUG nova.network.neutron [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updated VIF entry in instance network info cache for port 54f58d56-0af4-48e7-9ab3-a22887a4161f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:00 np0005539550 nova_compute[257631]: 2025-11-29 08:06:00.883 257641 DEBUG nova.network.neutron [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updating instance_info_cache with network_info: [{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:00 np0005539550 nova_compute[257631]: 2025-11-29 08:06:00.902 257641 DEBUG oslo_concurrency.lockutils [req-3bff813b-14bb-4ea4-840e-f81472fa95cc req-3d0bf8ee-03cb-4dd5-9b2d-23262e1a9940 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 400 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.3 MiB/s wr, 382 op/s
Nov 29 03:06:01 np0005539550 nova_compute[257631]: 2025-11-29 08:06:01.265 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 29 03:06:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 29 03:06:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 29 03:06:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:02.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 29 03:06:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 29 03:06:02 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 29 03:06:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 400 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.5 MiB/s wr, 246 op/s
Nov 29 03:06:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:04.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:04.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 29 03:06:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 29 03:06:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 29 03:06:04 np0005539550 nova_compute[257631]: 2025-11-29 08:06:04.829 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:04.993 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:04 np0005539550 nova_compute[257631]: 2025-11-29 08:06:04.994 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:04.996 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:06:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 422 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.0 MiB/s wr, 347 op/s
Nov 29 03:06:05 np0005539550 podman[296905]: 2025-11-29 08:06:05.336094209 +0000 UTC m=+0.067962976 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:06:05 np0005539550 podman[296904]: 2025-11-29 08:06:05.343392102 +0000 UTC m=+0.078005748 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:06.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:06 np0005539550 nova_compute[257631]: 2025-11-29 08:06:06.267 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:06.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 415 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 8.8 MiB/s wr, 298 op/s
Nov 29 03:06:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:07Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:59:d5 10.100.0.14
Nov 29 03:06:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:07Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:59:d5 10.100.0.14
Nov 29 03:06:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:08.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:06:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:08.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:06:08 np0005539550 nova_compute[257631]: 2025-11-29 08:06:08.873 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:08 np0005539550 nova_compute[257631]: 2025-11-29 08:06:08.874 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:08 np0005539550 nova_compute[257631]: 2025-11-29 08:06:08.890 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:06:08 np0005539550 nova_compute[257631]: 2025-11-29 08:06:08.963 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:08 np0005539550 nova_compute[257631]: 2025-11-29 08:06:08.964 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:08 np0005539550 nova_compute[257631]: 2025-11-29 08:06:08.970 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:06:08 np0005539550 nova_compute[257631]: 2025-11-29 08:06:08.970 257641 INFO nova.compute.claims [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:06:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 319 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 9.0 MiB/s wr, 377 op/s
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.079 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570604764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.516 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.521 257641 DEBUG nova.compute.provider_tree [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.539 257641 DEBUG nova.scheduler.client.report [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.561 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.562 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.607 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.607 257641 DEBUG nova.network.neutron [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.638 257641 INFO nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.671 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.823 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.825 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.825 257641 INFO nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Creating image(s)#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.853 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.883 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.911 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.914 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.952 257641 DEBUG nova.policy [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '104aea18c5154615b602f032bdb49681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90c23935e0214785a9dc5061b91cf29c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:06:09 np0005539550 nova_compute[257631]: 2025-11-29 08:06:09.954 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.010 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.011 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.012 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.012 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:10.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.041 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.045 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9ffa29cd-6836-4384-84d9-b65c739901cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.358 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9ffa29cd-6836-4384-84d9-b65c739901cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.431 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] resizing rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.545 257641 DEBUG nova.objects.instance [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid 9ffa29cd-6836-4384-84d9-b65c739901cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.699 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.699 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Ensure instance console log exists: /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.700 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.700 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.700 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:10 np0005539550 nova_compute[257631]: 2025-11-29 08:06:10.892 257641 DEBUG nova.network.neutron [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Successfully created port: c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:06:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 299 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 9.1 MiB/s wr, 443 op/s
Nov 29 03:06:11 np0005539550 nova_compute[257631]: 2025-11-29 08:06:11.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 29 03:06:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 29 03:06:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 29 03:06:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:12.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.498 257641 DEBUG nova.network.neutron [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Successfully updated port: c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.519 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-9ffa29cd-6836-4384-84d9-b65c739901cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.520 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-9ffa29cd-6836-4384-84d9-b65c739901cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.520 257641 DEBUG nova.network.neutron [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:06:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.809 257641 DEBUG nova.compute.manager [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received event network-changed-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.810 257641 DEBUG nova.compute.manager [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Refreshing instance network info cache due to event network-changed-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.810 257641 DEBUG oslo_concurrency.lockutils [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9ffa29cd-6836-4384-84d9-b65c739901cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:12 np0005539550 nova_compute[257631]: 2025-11-29 08:06:12.892 257641 DEBUG nova.network.neutron [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:06:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 299 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 328 op/s
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.015 257641 DEBUG nova.network.neutron [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Updating instance_info_cache with network_info: [{"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:14.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.041 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-9ffa29cd-6836-4384-84d9-b65c739901cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.042 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance network_info: |[{"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.042 257641 DEBUG oslo_concurrency.lockutils [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9ffa29cd-6836-4384-84d9-b65c739901cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.042 257641 DEBUG nova.network.neutron [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Refreshing network info cache for port c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.046 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Start _get_guest_xml network_info=[{"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.050 257641 WARNING nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.054 257641 DEBUG nova.virt.libvirt.host [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.055 257641 DEBUG nova.virt.libvirt.host [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.057 257641 DEBUG nova.virt.libvirt.host [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.058 257641 DEBUG nova.virt.libvirt.host [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.060 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.060 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.060 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.061 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.061 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.061 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.061 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.062 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.062 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.062 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.062 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.063 257641 DEBUG nova.virt.hardware [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.066 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1676199999' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.497 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.528 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.532 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:14.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1181862379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:14.998 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:14 np0005539550 nova_compute[257631]: 2025-11-29 08:06:14.998 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.000 257641 DEBUG nova.virt.libvirt.vif [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1929531448',display_name='tempest-DeleteServersTestJSON-server-1929531448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1929531448',id=62,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-0oiay26p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:09Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=9ffa29cd-6836-4384-84d9-b65c739901cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.000 257641 DEBUG nova.network.os_vif_util [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.001 257641 DEBUG nova.network.os_vif_util [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:23:60,bridge_name='br-int',has_traffic_filtering=True,id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a148ee-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
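
The two entries above show nova handing its VIF dict to os-vif: nova_to_osvif_vif() turns the JSON-style port description into a typed VIFOpenVSwitch object before plugging. A minimal sketch of building the same object directly with the os_vif object model, with field values copied from the logged repr (the hand-construction style is illustrative only, not nova's actual code path):

    # Sketch: the os-vif object that nova_to_osvif_vif() produced above.
    # All values are copied from the log; building it by hand like this
    # is for illustration, nova derives it from the Neutron port data.
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    net = osv_network.Network(id="a8be8715-2b74-42ca-9713-7fc1f4a33bc9",
                              bridge="br-int", mtu=1442)
    vif = osv_vif.VIFOpenVSwitch(
        id="c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9",
        address="fa:16:3e:25:23:60",
        network=net,
        bridge_name="br-int",
        has_traffic_filtering=True,      # "port_filter": true in the dict
        preserve_on_delete=False,
        vif_name="tapc1a148ee-c8",       # nova's "devname"
        port_profile=osv_vif.VIFPortProfileOpenVSwitch(
            interface_id="c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9"),
    )
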
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.002 257641 DEBUG nova.objects.instance [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ffa29cd-6836-4384-84d9-b65c739901cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.003 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 317 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 7.7 MiB/s wr, 396 op/s
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.035 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <uuid>9ffa29cd-6836-4384-84d9-b65c739901cc</uuid>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <name>instance-0000003e</name>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <nova:name>tempest-DeleteServersTestJSON-server-1929531448</nova:name>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:06:14</nova:creationTime>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <nova:port uuid="c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <entry name="serial">9ffa29cd-6836-4384-84d9-b65c739901cc</entry>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <entry name="uuid">9ffa29cd-6836-4384-84d9-b65c739901cc</entry>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/9ffa29cd-6836-4384-84d9-b65c739901cc_disk">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/9ffa29cd-6836-4384-84d9-b65c739901cc_disk.config">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:25:23:60"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <target dev="tapc1a148ee-c8"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/console.log" append="off"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:06:15 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:06:15 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:06:15 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:06:15 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
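
The XML ending above is the complete guest definition that _get_guest_xml produced for instance-0000003e. Nova submits it to libvirt through its Guest wrapper (nova.virt.libvirt.guest); a hedged sketch of the raw libvirt-python calls underneath, assuming the qemu:///system URI and a local copy of the XML:

    # Sketch: defining and starting a domain from a <domain> document like
    # the dump above. Nova wraps these calls; they are shown bare here.
    import libvirt

    with open("instance-0000003e.xml") as f:   # the XML logged above (assumed file)
        xml = f.read()
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)                  # persist the definition
    dom.create()                               # boot the guest
    print(dom.name(), dom.UUIDString())
    conn.close()
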
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.037 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Preparing to wait for external event network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.037 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.038 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.038 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.039 257641 DEBUG nova.virt.libvirt.vif [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1929531448',display_name='tempest-DeleteServersTestJSON-server-1929531448',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1929531448',id=62,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-0oiay26p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:09Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=9ffa29cd-6836-4384-84d9-b65c739901cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.039 257641 DEBUG nova.network.os_vif_util [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.039 257641 DEBUG nova.network.os_vif_util [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:23:60,bridge_name='br-int',has_traffic_filtering=True,id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a148ee-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.040 257641 DEBUG os_vif [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:23:60,bridge_name='br-int',has_traffic_filtering=True,id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a148ee-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.040 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.041 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.041 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.045 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.045 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1a148ee-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.046 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1a148ee-c8, col_values=(('external_ids', {'iface-id': 'c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:25:23:60', 'vm-uuid': '9ffa29cd-6836-4384-84d9-b65c739901cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:15 np0005539550 NetworkManager[49039]: <info>  [1764403575.0489] manager: (tapc1a148ee-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.050 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.054 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.055 257641 INFO os_vif [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:23:60,bridge_name='br-int',has_traffic_filtering=True,id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a148ee-c8')#033[00m
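
The plug succeeded: the ovsdbapp transaction at 08:06:15.045-.046 added the tap port to br-int and set the Neutron external_ids on its Interface row. For comparison, the same result expressed as one ovs-vsctl transaction mirroring the logged AddPortCommand and DbSetCommand (running it via subprocess is illustrative; os-vif talks to ovsdb-server directly):

    # Sketch: ovs-vsctl equivalent of the AddPortCommand + DbSetCommand
    # pair logged above, with external_ids copied from the DbSetCommand.
    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapc1a148ee-c8",
        "--", "set", "Interface", "tapc1a148ee-c8",
        "external_ids:iface-id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:25:23:60",
        "external_ids:vm-uuid=9ffa29cd-6836-4384-84d9-b65c739901cc",
    ], check=True)
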
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.138 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.139 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.139 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:25:23:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.139 257641 INFO nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Using config drive#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.163 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.716 257641 INFO nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Creating config drive at /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.721 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83a6f521 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.753 257641 DEBUG nova.network.neutron [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Updated VIF entry in instance network info cache for port c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.754 257641 DEBUG nova.network.neutron [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Updating instance_info_cache with network_info: [{"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.773 257641 DEBUG oslo_concurrency.lockutils [req-402fd9d8-53ac-4df9-a31d-887145c42fd5 req-eb6c97c1-fe7f-4c19-9653-e60dbf1c6e9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9ffa29cd-6836-4384-84d9-b65c739901cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.938 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.939 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.939 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.939 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:06:15 np0005539550 nova_compute[257631]: 2025-11-29 08:06:15.939 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:16.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.271 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.353 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83a6f521" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
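
The mkisofs run that just returned (0.632s) packed the config-drive staging directory into an ISO9660 image labeled config-2. Reproduced below as an argv list matching the logged command; /tmp/tmp83a6f521 is nova's temporary staging directory, whose contents the log does not show:

    # Sketch: the config-drive ISO build logged above, as subprocess argv.
    # Note the publisher string is a single argument despite the spaces.
    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp83a6f521",
    ], check=True)
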
Nov 29 03:06:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227774079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.385 257641 DEBUG nova.storage.rbd_utils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 9ffa29cd-6836-4384-84d9-b65c739901cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.389 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config 9ffa29cd-6836-4384-84d9-b65c739901cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.412 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.489 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.490 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.493 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.493 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.550 257641 DEBUG oslo_concurrency.processutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config 9ffa29cd-6836-4384-84d9-b65c739901cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.551 257641 INFO nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Deleting local config drive /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config because it was imported into RBD.#033[00m
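
Because this deployment backs instance storage with Ceph RBD, the locally built ISO is imported into the vms pool and the local copy removed, as the two entries above record. A sketch of the same sequence, with arguments copied from the logged rbd import command:

    # Sketch: push the config drive into Ceph, then drop the local file,
    # matching the "imported into RBD" message above.
    import os
    import subprocess

    local = "/var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc/disk.config"
    subprocess.run([
        "rbd", "import", "--pool", "vms", local,
        "9ffa29cd-6836-4384-84d9-b65c739901cc_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.remove(local)
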
Nov 29 03:06:16 np0005539550 kernel: tapc1a148ee-c8: entered promiscuous mode
Nov 29 03:06:16 np0005539550 NetworkManager[49039]: <info>  [1764403576.5971] manager: (tapc1a148ee-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Nov 29 03:06:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:16.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:16Z|00183|binding|INFO|Claiming lport c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 for this chassis.
Nov 29 03:06:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:16Z|00184|binding|INFO|c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9: Claiming fa:16:3e:25:23:60 10.100.0.13
Nov 29 03:06:16 np0005539550 systemd-udevd[297344]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.648 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:23:60 10.100.0.13'], port_security=['fa:16:3e:25:23:60 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9ffa29cd-6836-4384-84d9-b65c739901cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.649 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis#033[00m
Nov 29 03:06:16 np0005539550 NetworkManager[49039]: <info>  [1764403576.6502] device (tapc1a148ee-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.651 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9#033[00m
Nov 29 03:06:16 np0005539550 NetworkManager[49039]: <info>  [1764403576.6518] device (tapc1a148ee-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.664 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[22541cfc-1504-4af8-800f-f263ddf766a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.665 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.667 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.667 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[64d30883-7aac-4695-adc6-4d2eae68862c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.669 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e4da1bbc-98c1-43a2-9d88-5d43182ddebf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 systemd-machined[216673]: New machine qemu-30-instance-0000003e.
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.680 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[4f0f50c2-c1c7-47be-b2fe-81d2a881bac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 systemd[1]: Started Virtual Machine qemu-30-instance-0000003e.
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.707 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a058f70c-e8cc-4635-b8a1-427797b97c48]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.723 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:16Z|00185|binding|INFO|Setting lport c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 ovn-installed in OVS
Nov 29 03:06:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:16Z|00186|binding|INFO|Setting lport c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 up in Southbound
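
ovn-controller has now claimed the logical port for this chassis, set ovn-installed on the OVS interface, and marked the port up in the Southbound database. One way to confirm the binding by hand is to query the Port_Binding table with ovn-sbctl; the socket path below is a common default and an assumption for this deployment:

    # Sketch: inspect the Port_Binding row ovn-controller just claimed.
    # The --db endpoint is deployment-specific (assumed here).
    import subprocess

    out = subprocess.run([
        "ovn-sbctl", "--db=unix:/run/ovn/ovnsb_db.sock",
        "find", "Port_Binding",
        "logical_port=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9",
    ], check=True, capture_output=True, text=True)
    print(out.stdout)   # expect the chassis column set and up=[true]
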
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.728 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.741 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[87fe3402-88c7-44bb-a954-2c9597ab1f31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 systemd-udevd[297347]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:06:16 np0005539550 NetworkManager[49039]: <info>  [1764403576.7495] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.748 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4833341c-6929-48bc-a9aa-334936faa257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.776 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.778 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4315MB free_disk=20.863048553466797GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.778 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.778 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.781 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4afac154-3796-410e-9528-e975c7fd68ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.784 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[dc927665-aa04-427b-b30d-3002751574e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 NetworkManager[49039]: <info>  [1764403576.8086] device (tapa8be8715-20): carrier: link connected
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.815 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[274118bf-e8c6-4f97-9f9b-d9685087a994]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.831 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[20b58338-ffd8-4eca-a06b-958b8196ea6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662447, 'reachable_time': 19292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297380, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.847 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[39a48c9d-1630-4414-9558-2063bfb6f5b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662447, 'tstamp': 662447}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297381, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.862 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f945e8a7-10a5-4de7-ae48-e23c0a539610]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662447, 'reachable_time': 19292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297382, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
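The two privsep replies above are netlink dumps run inside the ovnmeta- namespace on the agent's behalf: an RTM_NEWADDR dump showing the tap's IPv6 link-local address, then an RTM_NEWLINK dump for the veth tapa8be8715-21. A minimal pyroute2 sketch of the same two queries, issued directly instead of through the privsep daemon (the namespace name comes from the reply header's 'target' field; requires root):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9')
    try:
        for addr in ns.get_addr(index=2):           # the RTM_NEWADDR reply
            print(addr.get_attr('IFA_ADDRESS'))     # fe80::f816:3eff:fe95:f3b4
        for link in ns.get_links(2):                # the RTM_NEWLINK reply
            print(link.get_attr('IFLA_IFNAME'),     # tapa8be8715-21
                  link.get_attr('IFLA_OPERSTATE'))  # UP
    finally:
        ns.close()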
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.879 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 73947417-cf68-4b6b-820b-66f001a8a178 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.880 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 9ffa29cd-6836-4384-84d9-b65c739901cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.880 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.880 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
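The used figures in this final resource view follow from the two allocations reported just above plus the reserved host memory; a quick check, taking the reserved values from the inventory that appears below at 08:06:17.432 (512 MB RAM reserved; used_disk is counted without its 1 GB reserve):

    # Two instances, each allocated {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}.
    instances = [{'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}] * 2
    reserved_ram_mb = 512  # assumed reserved_host_memory_mb, per the inventory line
    used_ram = reserved_ram_mb + sum(i['MEMORY_MB'] for i in instances)
    used_vcpus = sum(i['VCPU'] for i in instances)
    used_disk = sum(i['DISK_GB'] for i in instances)
    assert (used_ram, used_vcpus, used_disk) == (768, 2, 2)  # matches the log line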
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.890 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[477703b2-52a2-4734-b3cf-268c2f4f5514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.936 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5f6ef7b2-8423-4afd-a985-3b7f37395254]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.938 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.938 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.938 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:16 np0005539550 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:06:16 np0005539550 NetworkManager[49039]: <info>  [1764403576.9410] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.940 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.944 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
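The three ovsdbapp commands above (the del-port was a no-op, hence "Transaction caused no change") map onto standard ovs-vsctl operations; a sketch of the equivalents, shelling out instead of speaking OVSDB directly as the agent does:

    import subprocess

    port = 'tapa8be8715-20'
    iface_id = '307ce936-d5dc-4357-90d6-2b0b2d3d1113'
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],   # DelPortCommand
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],  # AddPortCommand
        ['ovs-vsctl', 'set', 'Interface', port,                    # DbSetCommand
         'external_ids:iface-id=' + iface_id],
    ):
        subprocess.run(cmd, check=True)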
Nov 29 03:06:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:16Z|00187|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.946 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.947 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e70b27aa-b5d2-4b13-afe1-893357f8d159]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.948 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
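The rendered config binds haproxy to 169.254.169.254:80 inside the namespace and forwards every request to the Unix socket /var/lib/neutron/metadata_proxy, adding X-OVN-Network-ID so the metadata service can tell which OVN network a request arrived from. What a guest on that network would issue (a sketch: the path is the conventional metadata endpoint and is not taken from this log, and the address is only reachable from inside the namespace or guest):

    import urllib.request

    url = 'http://169.254.169.254/openstack/latest/meta_data.json'  # assumed path
    print(urllib.request.urlopen(url, timeout=5).read()[:200])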
Nov 29 03:06:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:16.948 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
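Stripped of the rootwrap and env wrappers, the spawn above reduces to running haproxy inside the namespace; a sketch of the bare equivalent (PROCESS_TAG from the logged command is only process-tracking metadata and is dropped here):

    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9',
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'],
        check=True)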
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.966 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.984 257641 DEBUG nova.compute.manager [req-9e4c11f3-1bf8-41af-b9e5-ad8db09d0d11 req-8c2a85b2-7e96-4ae8-822b-a184fccfd8bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received event network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.984 257641 DEBUG oslo_concurrency.lockutils [req-9e4c11f3-1bf8-41af-b9e5-ad8db09d0d11 req-8c2a85b2-7e96-4ae8-822b-a184fccfd8bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.985 257641 DEBUG oslo_concurrency.lockutils [req-9e4c11f3-1bf8-41af-b9e5-ad8db09d0d11 req-8c2a85b2-7e96-4ae8-822b-a184fccfd8bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.985 257641 DEBUG oslo_concurrency.lockutils [req-9e4c11f3-1bf8-41af-b9e5-ad8db09d0d11 req-8c2a85b2-7e96-4ae8-822b-a184fccfd8bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:16 np0005539550 nova_compute[257631]: 2025-11-29 08:06:16.985 257641 DEBUG nova.compute.manager [req-9e4c11f3-1bf8-41af-b9e5-ad8db09d0d11 req-8c2a85b2-7e96-4ae8-822b-a184fccfd8bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Processing event network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:06:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 339 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.4 MiB/s wr, 385 op/s
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.302 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403577.3024018, 9ffa29cd-6836-4384-84d9-b65c739901cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.303 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.306 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.309 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.312 257641 INFO nova.virt.libvirt.driver [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance spawned successfully.#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.312 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:06:17 np0005539550 podman[297473]: 2025-11-29 08:06:17.325740513 +0000 UTC m=+0.062184951 container create 6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.336 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.341 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.344 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.345 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.345 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.345 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.346 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.346 257641 DEBUG nova.virt.libvirt.driver [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
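For reference, the six image-property defaults the driver just registered for this instance, collected from the "Found default for ..." lines above:

    registered_defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }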
Nov 29 03:06:17 np0005539550 systemd[1]: Started libpod-conmon-6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12.scope.
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.374 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.375 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403577.3055913, 9ffa29cd-6836-4384-84d9-b65c739901cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.375 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:06:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127204339' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:17 np0005539550 podman[297473]: 2025-11-29 08:06:17.288824357 +0000 UTC m=+0.025268825 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:06:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:06:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc14beb73aa993a14e442cd16e1413c493fabfa5715890577877539f83a8da3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.403 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.405 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:17 np0005539550 podman[297473]: 2025-11-29 08:06:17.407581757 +0000 UTC m=+0.144026225 container init 6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.409 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.411 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403577.3082464, 9ffa29cd-6836-4384-84d9-b65c739901cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.411 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:06:17 np0005539550 podman[297473]: 2025-11-29 08:06:17.41369359 +0000 UTC m=+0.150138028 container start 6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.414 257641 INFO nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Took 7.59 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.414 257641 DEBUG nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.432 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
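The inventory above is what placement sizes the provider with; usable capacity per resource class works out as (total - reserved) * allocation_ratio (the standard placement arithmetic, stated here as an assumption since the log only shows the inputs):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1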
Nov 29 03:06:17 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [NOTICE]   (297495) : New worker (297497) forked
Nov 29 03:06:17 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [NOTICE]   (297495) : Loading success.
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.457 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.461 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.472 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.472 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.494 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.520 257641 INFO nova.compute.manager [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Took 8.59 seconds to build instance.#033[00m
Nov 29 03:06:17 np0005539550 nova_compute[257631]: 2025-11-29 08:06:17.543 257641 DEBUG oslo_concurrency.lockutils [None req-9c2ffb86-be40-4560-83d8-fbca74b66312 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 29 03:06:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 29 03:06:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 29 03:06:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:06:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:18.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
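These paired radosgw entries, an anonymous "HEAD / HTTP/1.0" from 192.168.122.100 or .102 roughly every two seconds that always returns 200 in under a millisecond, have the shape of load-balancer health probes rather than user traffic. A sketch of an equivalent check (the target host and port are assumptions; the log records only the client side):

    import http.client

    conn = http.client.HTTPConnection('np0005539550', 8080, timeout=2)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200, as in the beast lines
    conn.close()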
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.467 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.468 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.468 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.468 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:06:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:18.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.838 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.839 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.839 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:06:18 np0005539550 nova_compute[257631]: 2025-11-29 08:06:18.839 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 73947417-cf68-4b6b-820b-66f001a8a178 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:18.939 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:18.940 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:18.940 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 311 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.1 MiB/s wr, 282 op/s
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.150 257641 DEBUG nova.compute.manager [req-bbf4e546-d665-4b9d-9a4f-f64b0e269064 req-40cafab5-9b86-4036-a212-51265ba1b2db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received event network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.151 257641 DEBUG oslo_concurrency.lockutils [req-bbf4e546-d665-4b9d-9a4f-f64b0e269064 req-40cafab5-9b86-4036-a212-51265ba1b2db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.151 257641 DEBUG oslo_concurrency.lockutils [req-bbf4e546-d665-4b9d-9a4f-f64b0e269064 req-40cafab5-9b86-4036-a212-51265ba1b2db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.152 257641 DEBUG oslo_concurrency.lockutils [req-bbf4e546-d665-4b9d-9a4f-f64b0e269064 req-40cafab5-9b86-4036-a212-51265ba1b2db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.152 257641 DEBUG nova.compute.manager [req-bbf4e546-d665-4b9d-9a4f-f64b0e269064 req-40cafab5-9b86-4036-a212-51265ba1b2db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] No waiting events found dispatching network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.152 257641 WARNING nova.compute.manager [req-bbf4e546-d665-4b9d-9a4f-f64b0e269064 req-40cafab5-9b86-4036-a212-51265ba1b2db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received unexpected event network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:06:19 np0005539550 podman[297507]: 2025-11-29 08:06:19.35926956 +0000 UTC m=+0.090260026 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.690 257641 DEBUG oslo_concurrency.lockutils [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.690 257641 DEBUG oslo_concurrency.lockutils [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.691 257641 DEBUG nova.compute.manager [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.695 257641 DEBUG nova.compute.manager [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.696 257641 DEBUG nova.objects.instance [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'flavor' on Instance uuid 9ffa29cd-6836-4384-84d9-b65c739901cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:19 np0005539550 nova_compute[257631]: 2025-11-29 08:06:19.722 257641 DEBUG nova.virt.libvirt.driver [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
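This kicks off libvirt's graceful shutdown path: send the guest a shutdown request, poll its power state, and re-send periodically until it stops or the timeout runs out (the resend is visible below at 08:06:29.772). A sketch of that retry shape, not nova's actual implementation; the guest object, timeout, and interval are illustrative:

    import time

    def clean_shutdown(guest, timeout=60, retry_interval=10):
        guest.shutdown()                      # first request, 08:06:19.722 here
        waited = 0
        while waited < timeout:
            time.sleep(retry_interval)
            waited += retry_interval
            if not guest.is_running():
                return True                   # guest powered off cleanly
            guest.shutdown()                  # "resending shutdown", 08:06:29.772
        return False                          # caller falls back to hard power-off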
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006319393553415225 of space, bias 1.0, pg target 1.8958180660245674 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002212669249022892 of space, bias 1.0, pg target 0.6615881054578447 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:06:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
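The run above is self-consistent arithmetic: with the cluster's 3 OSDs (osdmap e241) and an assumed default mon_target_pg_per_osd of 100, each pool's "pg target" is its capacity ratio times its bias times that 300-PG budget, and tiny targets are then clamped up to the pool's pg_num_min, which is the "quantized to" figure (32 by default, 16 for the cephfs metadata pool, 1 for .mgr). Reproducing the 'vms' line under those assumptions:

    usage_ratio, bias = 0.006319393553415225, 1.0
    num_osds, target_pg_per_osd = 3, 100      # osdmap e241; default budget assumed
    pg_target = usage_ratio * bias * target_pg_per_osd * num_osds
    print(pg_target)   # ~1.8958, the logged "pg target" for 'vms'
    # 1.9 ideal PGs < pg_num_min (32), so the pool stays "quantized to 32
    # (current 32)" and no pg_num change is proposed.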
Nov 29 03:06:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:20.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:20 np0005539550 nova_compute[257631]: 2025-11-29 08:06:20.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 264 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 335 op/s
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.273 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.873 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updating instance_info_cache with network_info: [{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
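The cached blob above is plain JSON; pulling out the fields that matter for the heal (device name, MAC, fixed IPs) looks like this, using a slice of the logged structure trimmed to just those keys:

    import json

    nw_info = json.loads('''[{"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f",
      "address": "fa:16:3e:2f:59:d5", "devname": "tap54f58d56-0a",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.14", "type": "fixed"}]}]}}]''')

    for vif in nw_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'], ips)
        # -> tap54f58d56-0a fa:16:3e:2f:59:d5 ['10.100.0.14']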
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.887 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-73947417-cf68-4b6b-820b-66f001a8a178" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.888 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.889 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.889 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.889 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.889 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:21 np0005539550 nova_compute[257631]: 2025-11-29 08:06:21.890 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:06:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:22.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:22.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 264 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.1 MiB/s wr, 325 op/s
Nov 29 03:06:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:24.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:24.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 214 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 1.1 MiB/s wr, 301 op/s
Nov 29 03:06:25 np0005539550 nova_compute[257631]: 2025-11-29 08:06:25.051 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:26.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:26 np0005539550 nova_compute[257631]: 2025-11-29 08:06:26.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 29 03:06:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 29 03:06:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 29 03:06:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:26.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 234 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.3 MiB/s wr, 259 op/s
Nov 29 03:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:28.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 260 MiB data, 675 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 233 op/s
Nov 29 03:06:29 np0005539550 nova_compute[257631]: 2025-11-29 08:06:29.772 257641 DEBUG nova.virt.libvirt.driver [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:06:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:30Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:25:23:60 10.100.0.13
Nov 29 03:06:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:30Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:25:23:60 10.100.0.13
Nov 29 03:06:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:30.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:30 np0005539550 nova_compute[257631]: 2025-11-29 08:06:30.055 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:30.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 271 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 173 op/s
Nov 29 03:06:31 np0005539550 nova_compute[257631]: 2025-11-29 08:06:31.276 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:32.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:32.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 271 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 173 op/s
Nov 29 03:06:33 np0005539550 kernel: tapc1a148ee-c8 (unregistering): left promiscuous mode
Nov 29 03:06:33 np0005539550 NetworkManager[49039]: <info>  [1764403593.0983] device (tapc1a148ee-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:33Z|00188|binding|INFO|Releasing lport c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 from this chassis (sb_readonly=0)
Nov 29 03:06:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:33Z|00189|binding|INFO|Setting lport c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 down in Southbound
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:33Z|00190|binding|INFO|Removing iface tapc1a148ee-c8 ovn-installed in OVS
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.111 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.115 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:25:23:60 10.100.0.13'], port_security=['fa:16:3e:25:23:60 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '9ffa29cd-6836-4384-84d9-b65c739901cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.117 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.119 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.122 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7475c4-bab5-4a34-8e9b-153737e7332a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.124 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore#033[00m
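The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's row-event mechanism at work: the metadata agent registers an event keyed on table and event type, and it fires when the Port_Binding row's up/chassis columns change, triggering the unbind and namespace teardown that follow. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent base as referenced in the log path (the run body is illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire on Port_Binding row updates, as matched in the log above."""

        def __init__(self):
            # Watch UPDATE events on the Port_Binding table, no row conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries previous values of the changed columns (up, chassis).
            if getattr(old, 'chassis', None) and not row.chassis:
                print('port %s unbound from our chassis' % row.logical_port)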
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.135 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:33 np0005539550 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Nov 29 03:06:33 np0005539550 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000003e.scope: Consumed 13.650s CPU time.
Nov 29 03:06:33 np0005539550 systemd-machined[216673]: Machine qemu-30-instance-0000003e terminated.
Nov 29 03:06:33 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [NOTICE]   (297495) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:33 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [NOTICE]   (297495) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:33 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [WARNING]  (297495) : Exiting Master process...
Nov 29 03:06:33 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [WARNING]  (297495) : Exiting Master process...
Nov 29 03:06:33 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [ALERT]    (297495) : Current worker (297497) exited with code 143 (Terminated)
Nov 29 03:06:33 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[297489]: [WARNING]  (297495) : All workers exited. Exiting... (0)
Nov 29 03:06:33 np0005539550 systemd[1]: libpod-6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12.scope: Deactivated successfully.
Nov 29 03:06:33 np0005539550 podman[297617]: 2025-11-29 08:06:33.276409744 +0000 UTC m=+0.052305273 container died 6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:06:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dfc14beb73aa993a14e442cd16e1413c493fabfa5715890577877539f83a8da3-merged.mount: Deactivated successfully.
Nov 29 03:06:33 np0005539550 podman[297617]: 2025-11-29 08:06:33.331841845 +0000 UTC m=+0.107737374 container cleanup 6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.336 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:06:33 np0005539550 systemd[1]: libpod-conmon-6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12.scope: Deactivated successfully.
Nov 29 03:06:33 np0005539550 podman[297652]: 2025-11-29 08:06:33.402325062 +0000 UTC m=+0.047004679 container remove 6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.408 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[844d6cf8-0d08-4fca-aa1d-8f53befac177]: (4, ('Sat Nov 29 08:06:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12)\n6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12\nSat Nov 29 08:06:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12)\n6aa8d2e4d9c3f043d527060e67a8350816d2cfa67e7452f78c355afb6e4bbb12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.410 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3bfccb-b8fb-40fd-8e34-0cada98e478e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.411 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.412 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:33 np0005539550 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.433 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.436 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d644b80-b98f-4d77-a2ab-6688c53adba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.450 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6561a5d2-1c4e-4813-9e81-cab104bc00cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.451 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bf166790-0b40-4138-899a-9505ef635076]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.465 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0bae6f11-972b-49d6-8ee5-15cebe5ce89e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662440, 'reachable_time': 35467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297674, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.468 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:06:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:33.468 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[fa67e6b3-5c3d-425f-9de7-1f6f5f52ac9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:06:33 np0005539550 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
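The remove_netns call referenced above (neutron's privileged ip_lib) deletes the named network namespace, and the systemd line that follows confirms the /run/netns bind mount went away. A minimal sketch of the same operation with pyroute2, which neutron uses underneath (namespace name taken from the log; needs root or an equivalent privsep helper):

    from pyroute2 import netns

    # Deleting a named namespace removes its /run/netns bind mount;
    # requires CAP_SYS_ADMIN, hence the privsep round-trips in the log.
    netns.remove('ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9')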
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.791 257641 INFO nova.virt.libvirt.driver [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance shutdown successfully after 14 seconds.#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.798 257641 INFO nova.virt.libvirt.driver [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance destroyed successfully.#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.798 257641 DEBUG nova.objects.instance [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'numa_topology' on Instance uuid 9ffa29cd-6836-4384-84d9-b65c739901cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.839 257641 DEBUG nova.compute.manager [req-da9c837d-fa90-4faf-b463-675bf804a340 req-c0371740-3245-42b5-ba59-37dcc26d9684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received event network-vif-unplugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.839 257641 DEBUG oslo_concurrency.lockutils [req-da9c837d-fa90-4faf-b463-675bf804a340 req-c0371740-3245-42b5-ba59-37dcc26d9684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.840 257641 DEBUG oslo_concurrency.lockutils [req-da9c837d-fa90-4faf-b463-675bf804a340 req-c0371740-3245-42b5-ba59-37dcc26d9684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.840 257641 DEBUG oslo_concurrency.lockutils [req-da9c837d-fa90-4faf-b463-675bf804a340 req-c0371740-3245-42b5-ba59-37dcc26d9684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
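The acquiring/acquired/released triple above is oslo.concurrency's standard named-lock logging: each event handler serializes on a per-instance "-events" lock. A minimal sketch of the same pattern, assuming oslo.concurrency is installed (lock name taken from the log):

    from oslo_concurrency import lockutils

    # Same pattern as the log: take a named in-process lock, do work, release.
    with lockutils.lock('9ffa29cd-6836-4384-84d9-b65c739901cc-events'):
        pass  # pop or clear instance events here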
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.840 257641 DEBUG nova.compute.manager [req-da9c837d-fa90-4faf-b463-675bf804a340 req-c0371740-3245-42b5-ba59-37dcc26d9684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] No waiting events found dispatching network-vif-unplugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.841 257641 WARNING nova.compute.manager [req-da9c837d-fa90-4faf-b463-675bf804a340 req-c0371740-3245-42b5-ba59-37dcc26d9684 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received unexpected event network-vif-unplugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 for instance with vm_state active and task_state powering-off.#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.856 257641 DEBUG nova.compute.manager [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:06:33 np0005539550 nova_compute[257631]: 2025-11-29 08:06:33.944 257641 DEBUG oslo_concurrency.lockutils [None req-76a9c400-0c77-4fb8-915c-c7090c92cdc8 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
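Taken together, these lines trace Nova's clean-shutdown loop: send an ACPI shutdown, poll the domain, resend after the retry window (the earlier "state 1 after 10 seconds - resending shutdown" line), and report success once the guest powers off, here after 14 seconds. A minimal sketch of that loop against libvirt directly, assuming the libvirt-python bindings (the timings and loop bound are illustrative, not Nova's exact code):

    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000003e')  # domain name from the log

    dom.shutdown()  # graceful ACPI shutdown request
    for waited in range(1, 61):
        time.sleep(1)
        if not dom.isActive():
            print('Instance shutdown successfully after %d seconds.' % waited)
            break
        if waited % 10 == 0:
            dom.shutdown()  # still running: resend, as the log shows Nova doing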
Nov 29 03:06:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:34.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:34.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 295 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.9 MiB/s wr, 193 op/s
Nov 29 03:06:35 np0005539550 nova_compute[257631]: 2025-11-29 08:06:35.058 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.277 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:36 np0005539550 podman[297677]: 2025-11-29 08:06:36.332931766 +0000 UTC m=+0.068870909 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:06:36 np0005539550 podman[297678]: 2025-11-29 08:06:36.35384173 +0000 UTC m=+0.086652395 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
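The health_status=healthy entries above come from podman executing each container's configured healthcheck ('test': '/openstack/healthcheck' in the config_data). The same check can be triggered by hand; a minimal sketch via subprocess (container name from the log):

    import subprocess

    # Exit code 0 means the check passed; non-zero marks the container unhealthy.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'])
    print('healthy' if result.returncode == 0 else 'unhealthy')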
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.469 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.469 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.470 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.470 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.470 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.471 257641 INFO nova.compute.manager [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Terminating instance#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.472 257641 DEBUG nova.compute.manager [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.479 257641 INFO nova.virt.libvirt.driver [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Instance destroyed successfully.#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.480 257641 DEBUG nova.objects.instance [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid 9ffa29cd-6836-4384-84d9-b65c739901cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.498 257641 DEBUG nova.virt.libvirt.vif [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1929531448',display_name='tempest-DeleteServersTestJSON-server-1929531448',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1929531448',id=62,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:06:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-0oiay26p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:06:33Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=9ffa29cd-6836-4384-84d9-b65c739901cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.499 257641 DEBUG nova.network.os_vif_util [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "address": "fa:16:3e:25:23:60", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1a148ee-c8", "ovs_interfaceid": "c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.500 257641 DEBUG nova.network.os_vif_util [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:25:23:60,bridge_name='br-int',has_traffic_filtering=True,id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a148ee-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.501 257641 DEBUG os_vif [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:23:60,bridge_name='br-int',has_traffic_filtering=True,id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a148ee-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.503 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.503 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1a148ee-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.506 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.510 257641 INFO os_vif [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:25:23:60,bridge_name='br-int',has_traffic_filtering=True,id=c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1a148ee-c8')#033[00m
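The DelPortCommand transaction above is ovsdbapp talking to the local OVSDB: os-vif commits a single del-port with if_exists=True, so a repeated unplug is a no-op. A minimal standalone sketch of the same call, assuming the default OVSDB unix socket path (port and bridge names from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database (default socket path assumed).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same command as the log: drop the tap port from br-int if it exists.
    api.del_port('tapc1a148ee-c8', bridge='br-int', if_exists=True).execute(
        check_error=True)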
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.584 257641 DEBUG nova.compute.manager [req-7dba0c8f-3136-4507-8a90-600c792d927e req-9f7afc4c-f8ba-4db3-b56b-5b22cd10a2ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received event network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.585 257641 DEBUG oslo_concurrency.lockutils [req-7dba0c8f-3136-4507-8a90-600c792d927e req-9f7afc4c-f8ba-4db3-b56b-5b22cd10a2ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.585 257641 DEBUG oslo_concurrency.lockutils [req-7dba0c8f-3136-4507-8a90-600c792d927e req-9f7afc4c-f8ba-4db3-b56b-5b22cd10a2ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.585 257641 DEBUG oslo_concurrency.lockutils [req-7dba0c8f-3136-4507-8a90-600c792d927e req-9f7afc4c-f8ba-4db3-b56b-5b22cd10a2ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.585 257641 DEBUG nova.compute.manager [req-7dba0c8f-3136-4507-8a90-600c792d927e req-9f7afc4c-f8ba-4db3-b56b-5b22cd10a2ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] No waiting events found dispatching network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:06:36 np0005539550 nova_compute[257631]: 2025-11-29 08:06:36.586 257641 WARNING nova.compute.manager [req-7dba0c8f-3136-4507-8a90-600c792d927e req-9f7afc4c-f8ba-4db3-b56b-5b22cd10a2ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received unexpected event network-vif-plugged-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 for instance with vm_state stopped and task_state deleting.#033[00m
Nov 29 03:06:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 03:06:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:36.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:06:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 298 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 290 op/s
Nov 29 03:06:37 np0005539550 nova_compute[257631]: 2025-11-29 08:06:37.053 257641 INFO nova.virt.libvirt.driver [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Deleting instance files /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc_del#033[00m
Nov 29 03:06:37 np0005539550 nova_compute[257631]: 2025-11-29 08:06:37.054 257641 INFO nova.virt.libvirt.driver [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Deletion of /var/lib/nova/instances/9ffa29cd-6836-4384-84d9-b65c739901cc_del complete#033[00m
Nov 29 03:06:37 np0005539550 nova_compute[257631]: 2025-11-29 08:06:37.142 257641 INFO nova.compute.manager [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Took 0.67 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:06:37 np0005539550 nova_compute[257631]: 2025-11-29 08:06:37.143 257641 DEBUG oslo.service.loopingcall [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:06:37 np0005539550 nova_compute[257631]: 2025-11-29 08:06:37.143 257641 DEBUG nova.compute.manager [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:06:37 np0005539550 nova_compute[257631]: 2025-11-29 08:06:37.143 257641 DEBUG nova.network.neutron [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:06:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:38.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.107 257641 DEBUG nova.network.neutron [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.202 257641 INFO nova.compute.manager [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.271 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.271 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.349 257641 DEBUG oslo_concurrency.processutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:06:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 29 03:06:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 29 03:06:38 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 29 03:06:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:38.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937906776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.799 257641 DEBUG oslo_concurrency.processutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
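That 0.45 s subprocess is where the resource tracker's disk inventory comes from when instance storage is backed by RBD. A minimal sketch of running the same command and reading the cluster totals, assuming the usual top-level "stats" keys in ceph's JSON output (key names are an assumption, not taken from this log):

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']  # assumed ceph df JSON layout

    gib = 1024 ** 3
    print('total %.0f GiB, avail %.0f GiB' % (
        stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))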
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.806 257641 DEBUG nova.compute.provider_tree [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.831 257641 DEBUG nova.scheduler.client.report [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
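Placement treats each inventory line as (total - reserved) * allocation_ratio of schedulable capacity, so the figures above work out to 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk. A worked check of that arithmetic:

    # capacity = (total - reserved) * allocation_ratio, per the inventory above
    inv = {'VCPU': (8, 0, 4.0),
           'MEMORY_MB': (7680, 512, 1.0),
           'DISK_GB': (20, 1, 0.9)}
    for rc, (total, reserved, ratio) in inv.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1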
Nov 29 03:06:38 np0005539550 nova_compute[257631]: 2025-11-29 08:06:38.884 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 239 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 508 op/s
Nov 29 03:06:39 np0005539550 nova_compute[257631]: 2025-11-29 08:06:39.199 257641 INFO nova.scheduler.client.report [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocations for instance 9ffa29cd-6836-4384-84d9-b65c739901cc#033[00m
Nov 29 03:06:39 np0005539550 nova_compute[257631]: 2025-11-29 08:06:39.406 257641 DEBUG oslo_concurrency.lockutils [None req-5afd320f-6333-4a1a-b790-a499f41b7f73 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "9ffa29cd-6836-4384-84d9-b65c739901cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:06:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 29 03:06:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 29 03:06:39 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 29 03:06:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:40.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:40 np0005539550 nova_compute[257631]: 2025-11-29 08:06:40.176 257641 DEBUG nova.compute.manager [req-5a4962e0-ebdd-4805-b29a-9adb065478bc req-95178a2f-0287-4732-ad87-f1298e9d7620 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Received event network-vif-deleted-c1a148ee-c8f7-4bc3-b5b7-878f4bcc35b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:06:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 29 03:06:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:40.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 29 03:06:40 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 29 03:06:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 231 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.4 MiB/s wr, 694 op/s
Nov 29 03:06:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:41 np0005539550 nova_compute[257631]: 2025-11-29 08:06:41.318 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:41 np0005539550 nova_compute[257631]: 2025-11-29 08:06:41.504 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 29 03:06:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 29 03:06:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 29 03:06:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:42.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:42.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 231 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 153 op/s
Nov 29 03:06:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:44.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.320 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.321 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.340 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.458 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.458 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.465 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.465 257641 INFO nova.compute.claims [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:06:44 np0005539550 nova_compute[257631]: 2025-11-29 08:06:44.618 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:06:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:44.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 231 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 3.6 MiB/s wr, 365 op/s
Nov 29 03:06:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1397706397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.081 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.086 257641 DEBUG nova.compute.provider_tree [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.110 257641 DEBUG nova.scheduler.client.report [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.131 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.131 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.183 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.183 257641 DEBUG nova.network.neutron [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.204 257641 INFO nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.227 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.349 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.351 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.352 257641 INFO nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Creating image(s)
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.379 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.403 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.427 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.431 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.497 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.498 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.499 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.499 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.520 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.524 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.552 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.553 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.553 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.554 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.554 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.556 257641 INFO nova.compute.manager [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Terminating instance
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.556 257641 DEBUG nova.compute.manager [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.587 257641 DEBUG nova.policy [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '104aea18c5154615b602f032bdb49681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90c23935e0214785a9dc5061b91cf29c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:06:45 np0005539550 kernel: tap54f58d56-0a (unregistering): left promiscuous mode
Nov 29 03:06:45 np0005539550 NetworkManager[49039]: <info>  [1764403605.7376] device (tap54f58d56-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:06:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:45Z|00191|binding|INFO|Releasing lport 54f58d56-0af4-48e7-9ab3-a22887a4161f from this chassis (sb_readonly=0)
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.746 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:45Z|00192|binding|INFO|Setting lport 54f58d56-0af4-48e7-9ab3-a22887a4161f down in Southbound
Nov 29 03:06:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:06:45Z|00193|binding|INFO|Removing iface tap54f58d56-0a ovn-installed in OVS
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.749 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.771 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:45.781 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:59:d5 10.100.0.14'], port_security=['fa:16:3e:2f:59:d5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '73947417-cf68-4b6b-820b-66f001a8a178', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9e02874a2dc44489adba1420baa460f2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '06d3cd7a-8161-46d3-8dfb-eba1ecfb9db2 67c54971-4908-4d7c-99da-a2d6ab7441db 9d197e59-5d15-4b55-952d-2197baef6822', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=97bfead0-da72-4782-bfb1-84e12ea4a595, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=54f58d56-0af4-48e7-9ab3-a22887a4161f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:06:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:45.783 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 54f58d56-0af4-48e7-9ab3-a22887a4161f in datapath ddaca73e-4e30-4040-a35d-8d63a2e74570 unbound from our chassis
Nov 29 03:06:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:45.784 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddaca73e-4e30-4040-a35d-8d63a2e74570, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:06:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:45.785 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d282fd9a-c72f-47cc-a1a4-61bca832dce5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:45.785 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 namespace which is not needed anymore
Nov 29 03:06:45 np0005539550 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Nov 29 03:06:45 np0005539550 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000003c.scope: Consumed 15.763s CPU time.
Nov 29 03:06:45 np0005539550 systemd-machined[216673]: Machine qemu-29-instance-0000003c terminated.
Nov 29 03:06:45 np0005539550 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[296827]: [NOTICE]   (296861) : haproxy version is 2.8.14-c23fe91
Nov 29 03:06:45 np0005539550 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[296827]: [NOTICE]   (296861) : path to executable is /usr/sbin/haproxy
Nov 29 03:06:45 np0005539550 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[296827]: [WARNING]  (296861) : Exiting Master process...
Nov 29 03:06:45 np0005539550 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[296827]: [ALERT]    (296861) : Current worker (296865) exited with code 143 (Terminated)
Nov 29 03:06:45 np0005539550 neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570[296827]: [WARNING]  (296861) : All workers exited. Exiting... (0)
Nov 29 03:06:45 np0005539550 systemd[1]: libpod-9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9.scope: Deactivated successfully.
Nov 29 03:06:45 np0005539550 podman[297897]: 2025-11-29 08:06:45.935513723 +0000 UTC m=+0.050681173 container died 9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:06:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:06:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1bbc59c0f25202263a8e14805ff86f8eae7d78ffe03309a46c5740ab5ffe0568-merged.mount: Deactivated successfully.
Nov 29 03:06:45 np0005539550 podman[297897]: 2025-11-29 08:06:45.988902132 +0000 UTC m=+0.104069582 container cleanup 9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:06:45 np0005539550 nova_compute[257631]: 2025-11-29 08:06:45.992 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:06:46 np0005539550 systemd[1]: libpod-conmon-9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9.scope: Deactivated successfully.
Nov 29 03:06:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:46 np0005539550 podman[297943]: 2025-11-29 08:06:46.066622342 +0000 UTC m=+0.046370285 container remove 9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.072 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[97eea771-9e86-4bce-81a2-e33f8d9f2654]: (4, ('Sat Nov 29 08:06:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 (9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9)\n9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9\nSat Nov 29 08:06:46 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 (9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9)\n9c53c5f2583e76c591f1fa87d2eb17eb4478eebc7897d82656ab716a961e55f9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.074 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac1b12f-a85b-43b2-9303-8563ec4feff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.075 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddaca73e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.077 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:46 np0005539550 kernel: tapddaca73e-40: left promiscuous mode
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.084 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] resizing rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.101 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae822bd-a9f6-4a58-9cbc-d3600f9ed8a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.113 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf6edbd-d969-4486-a2ab-14ee3d175a14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.116 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2377d7c9-b081-4ace-8a21-b7ae9cfeeda6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.129 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.131 257641 INFO nova.virt.libvirt.driver [-] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Instance destroyed successfully.
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.131 257641 DEBUG nova.objects.instance [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lazy-loading 'resources' on Instance uuid 73947417-cf68-4b6b-820b-66f001a8a178 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.132 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[484957c8-46bf-4c04-92f2-228464455013]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 659974, 'reachable_time': 38610, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298007, 'error': None, 'target': 'ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:46 np0005539550 systemd[1]: run-netns-ovnmeta\x2dddaca73e\x2d4e30\x2d4040\x2da35d\x2d8d63a2e74570.mount: Deactivated successfully.
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.136 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddaca73e-4e30-4040-a35d-8d63a2e74570 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:06:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:06:46.136 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[96ff4617-6163-4c89-a584-1f94e1dda196]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.150 257641 DEBUG nova.virt.libvirt.vif [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:05:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-224129150',display_name='tempest-SecurityGroupsTestJSON-server-224129150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-224129150',id=60,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:05:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9e02874a2dc44489adba1420baa460f2',ramdisk_id='',reservation_id='r-hxs7ld9x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1605163301',owner_user_name='tempest-SecurityGroupsTestJSON-1605163301-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:05:53Z,user_data=None,user_id='9ee7a93f60394fd9b004c90c25ff5fc1',uuid=73947417-cf68-4b6b-820b-66f001a8a178,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.150 257641 DEBUG nova.network.os_vif_util [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converting VIF {"id": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "address": "fa:16:3e:2f:59:d5", "network": {"id": "ddaca73e-4e30-4040-a35d-8d63a2e74570", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1724696280-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9e02874a2dc44489adba1420baa460f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54f58d56-0a", "ovs_interfaceid": "54f58d56-0af4-48e7-9ab3-a22887a4161f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.151 257641 DEBUG nova.network.os_vif_util [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:59:d5,bridge_name='br-int',has_traffic_filtering=True,id=54f58d56-0af4-48e7-9ab3-a22887a4161f,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f58d56-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.152 257641 DEBUG os_vif [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:59:d5,bridge_name='br-int',has_traffic_filtering=True,id=54f58d56-0af4-48e7-9ab3-a22887a4161f,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f58d56-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.153 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.153 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54f58d56-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.155 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.158 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.160 257641 INFO os_vif [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:59:d5,bridge_name='br-int',has_traffic_filtering=True,id=54f58d56-0af4-48e7-9ab3-a22887a4161f,network=Network(ddaca73e-4e30-4040-a35d-8d63a2e74570),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54f58d56-0a')
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.220 257641 DEBUG nova.objects.instance [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.246 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.246 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Ensure instance console log exists: /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.247 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.248 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.249 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 29 03:06:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.321 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.547 257641 DEBUG nova.compute.manager [req-9abdac1e-b7e5-4c2b-9f36-a183b0b83646 req-e5c21c3b-acc0-451c-9127-c4ff895b5915 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-vif-unplugged-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.547 257641 DEBUG oslo_concurrency.lockutils [req-9abdac1e-b7e5-4c2b-9f36-a183b0b83646 req-e5c21c3b-acc0-451c-9127-c4ff895b5915 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.548 257641 DEBUG oslo_concurrency.lockutils [req-9abdac1e-b7e5-4c2b-9f36-a183b0b83646 req-e5c21c3b-acc0-451c-9127-c4ff895b5915 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.548 257641 DEBUG oslo_concurrency.lockutils [req-9abdac1e-b7e5-4c2b-9f36-a183b0b83646 req-e5c21c3b-acc0-451c-9127-c4ff895b5915 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.548 257641 DEBUG nova.compute.manager [req-9abdac1e-b7e5-4c2b-9f36-a183b0b83646 req-e5c21c3b-acc0-451c-9127-c4ff895b5915 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] No waiting events found dispatching network-vif-unplugged-54f58d56-0af4-48e7-9ab3-a22887a4161f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.549 257641 DEBUG nova.compute.manager [req-9abdac1e-b7e5-4c2b-9f36-a183b0b83646 req-e5c21c3b-acc0-451c-9127-c4ff895b5915 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-vif-unplugged-54f58d56-0af4-48e7-9ab3-a22887a4161f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:06:46 np0005539550 nova_compute[257631]: 2025-11-29 08:06:46.635 257641 DEBUG nova.network.neutron [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Successfully created port: 796f4fca-ba4d-44b1-9886-29e41980cd49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:06:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:46.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 214 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.4 MiB/s wr, 272 op/s
Nov 29 03:06:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:48.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.131 257641 INFO nova.virt.libvirt.driver [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Deleting instance files /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178_del
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.132 257641 INFO nova.virt.libvirt.driver [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Deletion of /var/lib/nova/instances/73947417-cf68-4b6b-820b-66f001a8a178_del complete
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.187 257641 INFO nova.compute.manager [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Took 2.63 seconds to destroy the instance on the hypervisor.
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.188 257641 DEBUG oslo.service.loopingcall [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.188 257641 DEBUG nova.compute.manager [-] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.188 257641 DEBUG nova.network.neutron [-] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.348 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403593.345772, 9ffa29cd-6836-4384-84d9-b65c739901cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.349 257641 INFO nova.compute.manager [-] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] VM Stopped (Lifecycle Event)
Nov 29 03:06:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:48.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:48 np0005539550 nova_compute[257631]: 2025-11-29 08:06:48.989 257641 DEBUG nova.compute.manager [None req-e75ef3bc-ebcb-44eb-8443-599076a39093 - - - - - -] [instance: 9ffa29cd-6836-4384-84d9-b65c739901cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:06:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 133 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 296 op/s
Nov 29 03:06:49 np0005539550 nova_compute[257631]: 2025-11-29 08:06:49.489 257641 DEBUG nova.network.neutron [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Successfully updated port: 796f4fca-ba4d-44b1-9886-29e41980cd49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:06:49 np0005539550 nova_compute[257631]: 2025-11-29 08:06:49.517 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-09dcdda4-714d-4aaf-a92b-e6a80cea0b28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:06:49 np0005539550 nova_compute[257631]: 2025-11-29 08:06:49.517 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-09dcdda4-714d-4aaf-a92b-e6a80cea0b28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:06:49 np0005539550 nova_compute[257631]: 2025-11-29 08:06:49.518 257641 DEBUG nova.network.neutron [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:06:49 np0005539550 nova_compute[257631]: 2025-11-29 08:06:49.851 257641 DEBUG nova.network.neutron [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:06:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:50.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:50 np0005539550 podman[298047]: 2025-11-29 08:06:50.368689561 +0000 UTC m=+0.103806305 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
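The config_data field in the health-check record above is a Python-literal dict that journald has flattened into the message. Assuming the field really is well-formed Python literal syntax, ast.literal_eval can recover it; the sketch below re-types a trimmed subset of the logged values (the volumes list is omitted for brevity):

    import ast

    # Trimmed subset of the config_data shown in the ovn_controller line above.
    config_data = ast.literal_eval(
        "{'depends_on': ['openvswitch.service'], "
        "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
        "'test': '/openstack/healthcheck'}, "
        "'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', "
        "'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root'}"
    )
    print(config_data['healthcheck']['test'])  # /openstack/healthcheck
    print(config_data['privileged'])           # True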
Nov 29 03:06:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:50.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:50 np0005539550 nova_compute[257631]: 2025-11-29 08:06:50.873 257641 DEBUG nova.compute.manager [req-706b3a5f-f4d3-4e56-a3de-9af337d4a81d req-a6dbc56f-90ae-4433-ae4a-533a602a777d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:06:50 np0005539550 nova_compute[257631]: 2025-11-29 08:06:50.873 257641 DEBUG oslo_concurrency.lockutils [req-706b3a5f-f4d3-4e56-a3de-9af337d4a81d req-a6dbc56f-90ae-4433-ae4a-533a602a777d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "73947417-cf68-4b6b-820b-66f001a8a178-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:50 np0005539550 nova_compute[257631]: 2025-11-29 08:06:50.873 257641 DEBUG oslo_concurrency.lockutils [req-706b3a5f-f4d3-4e56-a3de-9af337d4a81d req-a6dbc56f-90ae-4433-ae4a-533a602a777d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:50 np0005539550 nova_compute[257631]: 2025-11-29 08:06:50.874 257641 DEBUG oslo_concurrency.lockutils [req-706b3a5f-f4d3-4e56-a3de-9af337d4a81d req-a6dbc56f-90ae-4433-ae4a-533a602a777d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:50 np0005539550 nova_compute[257631]: 2025-11-29 08:06:50.874 257641 DEBUG nova.compute.manager [req-706b3a5f-f4d3-4e56-a3de-9af337d4a81d req-a6dbc56f-90ae-4433-ae4a-533a602a777d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] No waiting events found dispatching network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:06:50 np0005539550 nova_compute[257631]: 2025-11-29 08:06:50.874 257641 WARNING nova.compute.manager [req-706b3a5f-f4d3-4e56-a3de-9af337d4a81d req-a6dbc56f-90ae-4433-ae4a-533a602a777d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received unexpected event network-vif-plugged-54f58d56-0af4-48e7-9ab3-a22887a4161f for instance with vm_state active and task_state deleting.
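The oslo_concurrency.lockutils trio a few lines up (Acquiring / acquired :: waited / released :: held) brackets a critical section and reports how long the caller waited for the lock and how long it held it. A stand-in using only the standard library to show the same accounting (an illustration of the pattern, not oslo's implementation):

    import threading
    import time

    _lock = threading.Lock()

    def pop_event(name):
        t0 = time.monotonic()
        with _lock:                              # "Acquiring lock" ... "acquired"
            waited = time.monotonic() - t0       # the ":: waited 0.000s" figure
            t1 = time.monotonic()
            # ... critical section: pop the pending instance event ...
            held = time.monotonic() - t1         # the ":: held 0.000s" figure
        print(f'Lock "{name}": waited {waited:.3f}s, held {held:.3f}s')

    pop_event("73947417-cf68-4b6b-820b-66f001a8a178-events")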
Nov 29 03:06:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 88 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.3 MiB/s wr, 323 op/s
Nov 29 03:06:51 np0005539550 nova_compute[257631]: 2025-11-29 08:06:51.155 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 29 03:06:51 np0005539550 nova_compute[257631]: 2025-11-29 08:06:51.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:06:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 29 03:06:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 29 03:06:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:52 np0005539550 nova_compute[257631]: 2025-11-29 08:06:52.639 257641 DEBUG nova.network.neutron [-] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:06:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:52.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:52 np0005539550 nova_compute[257631]: 2025-11-29 08:06:52.687 257641 INFO nova.compute.manager [-] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Took 4.50 seconds to deallocate network for instance.
Nov 29 03:06:52 np0005539550 nova_compute[257631]: 2025-11-29 08:06:52.765 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:06:52 np0005539550 nova_compute[257631]: 2025-11-29 08:06:52.767 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:06:52 np0005539550 nova_compute[257631]: 2025-11-29 08:06:52.855 257641 DEBUG oslo_concurrency.processutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:06:52 np0005539550 nova_compute[257631]: 2025-11-29 08:06:52.999 257641 DEBUG nova.compute.manager [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received event network-changed-796f4fca-ba4d-44b1-9886-29e41980cd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:52.999 257641 DEBUG nova.compute.manager [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Refreshing instance network info cache due to event network-changed-796f4fca-ba4d-44b1-9886-29e41980cd49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.000 257641 DEBUG oslo_concurrency.lockutils [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-09dcdda4-714d-4aaf-a92b-e6a80cea0b28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:06:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 88 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 757 KiB/s rd, 2.7 MiB/s wr, 183 op/s
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.064 257641 DEBUG nova.network.neutron [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Updating instance_info_cache with network_info: [{"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.087 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-09dcdda4-714d-4aaf-a92b-e6a80cea0b28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.088 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Instance network_info: |[{"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.088 257641 DEBUG oslo_concurrency.lockutils [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-09dcdda4-714d-4aaf-a92b-e6a80cea0b28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.089 257641 DEBUG nova.network.neutron [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Refreshing network info cache for port 796f4fca-ba4d-44b1-9886-29e41980cd49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.093 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Start _get_guest_xml network_info=[{"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.099 257641 WARNING nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.105 257641 DEBUG nova.virt.libvirt.host [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.106 257641 DEBUG nova.virt.libvirt.host [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.116 257641 DEBUG nova.virt.libvirt.host [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.117 257641 DEBUG nova.virt.libvirt.host [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
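The two probes above amount to: cgroup v1 exposes a per-controller hierarchy, while on cgroup v2 the root cgroup.controllers file lists what is available. A simplified stand-in for the v2 check (not nova's actual code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        # On a cgroup-v2 host, /sys/fs/cgroup/cgroup.controllers lists the
        # available controllers; "cpu" present corresponds to the
        # "CPU controller found on host" result logged above.
        path = Path("/sys/fs/cgroup/cgroup.controllers")
        return path.exists() and "cpu" in path.read_text().split()

    print(has_cgroupsv2_cpu_controller())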
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.118 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.119 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.119 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.119 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.120 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.120 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.120 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.120 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.121 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.121 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.121 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.121 257641 DEBUG nova.virt.hardware [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
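The topology walk above enumerates every (sockets, cores, threads) factorization of the vCPU count that fits the limits, then sorts by preference; with 1 vCPU only (1, 1, 1) survives. A simplified re-implementation of the enumeration step (nova.virt.hardware applies more ordering and preference logic than this):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product is exactly
        # vcpus, within the limits logged above.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log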
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.124 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2438475044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.325 257641 DEBUG oslo_concurrency.processutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
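Nova shells out to the ceph CLI here via oslo processutils; an equivalent with plain subprocess, using the exact command string from the log. This sketch assumes a reachable cluster, the client.openstack keyring referenced by --id/--conf, and reef-era ceph df JSON field names:

    import json
    import subprocess

    # Same command nova logs above, run directly.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    df = json.loads(out)
    for pool in df.get("pools", []):
        print(pool["name"], pool["stats"].get("bytes_used"))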
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.331 257641 DEBUG nova.compute.provider_tree [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.358 257641 DEBUG nova.scheduler.client.report [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
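The usual placement reading of that inventory is that schedulable capacity per resource class is (total - reserved) * allocation_ratio; that formula is stated here as an assumption, not something the log itself says. Worked through with the logged numbers:

    # Inventory as logged for provider a73c606e-2495-4af4-b703-8d4b3001fdf5.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1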
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.387 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.425 257641 INFO nova.scheduler.client.report [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Deleted allocations for instance 73947417-cf68-4b6b-820b-66f001a8a178
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.525 257641 DEBUG oslo_concurrency.lockutils [None req-0a3b3136-2300-4c7a-aaf7-03f1ac0727c7 9ee7a93f60394fd9b004c90c25ff5fc1 9e02874a2dc44489adba1420baa460f2 - - default default] Lock "73947417-cf68-4b6b-820b-66f001a8a178" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4129105672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.579 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.603 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:06:53 np0005539550 nova_compute[257631]: 2025-11-29 08:06:53.608 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2586273263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:06:54 np0005539550 nova_compute[257631]: 2025-11-29 08:06:54.049 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:06:54 np0005539550 nova_compute[257631]: 2025-11-29 08:06:54.053 257641 DEBUG nova.virt.libvirt.vif [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-692868014',display_name='tempest-DeleteServersTestJSON-server-692868014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-692868014',id=66,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-s4r0xwct',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:45Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=09dcdda4-714d-4aaf-a92b-e6a80cea0b28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:06:54 np0005539550 nova_compute[257631]: 2025-11-29 08:06:54.053 257641 DEBUG nova.network.os_vif_util [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:06:54 np0005539550 nova_compute[257631]: 2025-11-29 08:06:54.054 257641 DEBUG nova.network.os_vif_util [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:3b:27,bridge_name='br-int',has_traffic_filtering=True,id=796f4fca-ba4d-44b1-9886-29e41980cd49,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap796f4fca-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:06:54 np0005539550 nova_compute[257631]: 2025-11-29 08:06:54.055 257641 DEBUG nova.objects.instance [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:06:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:54.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
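The config rm mon commands the mgr dispatches above have a direct CLI equivalent; a sketch issuing the same removal from a shell-out (who/name copied from the log; this needs admin credentials on the node):

    import subprocess

    # Same operation as the dispatched mon_command above: drop the
    # osd_memory_target override scoped to osd/host:compute-0.
    subprocess.run(
        ["ceph", "config", "rm", "osd/host:compute-0", "osd_memory_target"],
        check=True,
    )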
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0367e8ac-1a3e-43c5-a1c8-0e0f7d727790 does not exist
Nov 29 03:06:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ac8a2061-786e-45dc-9393-b200dfabe9ae does not exist
Nov 29 03:06:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev cc6cf350-8b5b-4e77-9ab7-4a107f2d6598 does not exist
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:06:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:06:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 100 KiB/s rd, 2.4 MiB/s wr, 146 op/s
Nov 29 03:06:55 np0005539550 nova_compute[257631]: 2025-11-29 08:06:55.155 257641 DEBUG nova.network.neutron [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Updated VIF entry in instance network info cache for port 796f4fca-ba4d-44b1-9886-29e41980cd49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:06:55 np0005539550 nova_compute[257631]: 2025-11-29 08:06:55.156 257641 DEBUG nova.network.neutron [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Updating instance_info_cache with network_info: [{"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
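The cache-update lines embed a JSON list after the literal text "network_info: ", followed by a function name and source path. json.JSONDecoder.raw_decode stops at the end of the JSON, which makes extraction straightforward (layout assumed from these lines; the sample is a trimmed re-typing of the logged VIF):

    import json

    def network_info_from_log(line):
        # raw_decode parses the JSON list and ignores the trailing
        # "update_instance_cache_with_nw_info /usr/..." source reference.
        start = line.index("network_info: ") + len("network_info: ")
        vifs, _ = json.JSONDecoder().raw_decode(line[start:])
        return vifs

    line = ('... Updating instance_info_cache with network_info: '
            '[{"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", '
            '"address": "fa:16:3e:f6:3b:27", "devname": "tap796f4fca-ba"}] '
            'update_instance_cache_with_nw_info /usr/lib/python3.9/...')
    for vif in network_info_from_log(line):
        print(vif["id"], vif["address"])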
Nov 29 03:06:55 np0005539550 podman[298603]: 2025-11-29 08:06:55.376452394 +0000 UTC m=+0.045175844 container create d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:06:55 np0005539550 systemd[1]: Started libpod-conmon-d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09.scope.
Nov 29 03:06:55 np0005539550 podman[298603]: 2025-11-29 08:06:55.351204981 +0000 UTC m=+0.019928451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:06:55 np0005539550 podman[298603]: 2025-11-29 08:06:55.486541876 +0000 UTC m=+0.155265346 container init d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:06:55 np0005539550 podman[298603]: 2025-11-29 08:06:55.494728301 +0000 UTC m=+0.163451751 container start d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:06:55 np0005539550 podman[298603]: 2025-11-29 08:06:55.498338832 +0000 UTC m=+0.167062302 container attach d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:06:55 np0005539550 compassionate_newton[298618]: 167 167
Nov 29 03:06:55 np0005539550 systemd[1]: libpod-d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09.scope: Deactivated successfully.
Nov 29 03:06:55 np0005539550 podman[298603]: 2025-11-29 08:06:55.50305064 +0000 UTC m=+0.171774090 container died d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:06:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b2737162fe65e23ad565dad04ec4384e61354dfd4e93f0fcae1204fbcf7d425e-merged.mount: Deactivated successfully.
Nov 29 03:06:55 np0005539550 podman[298603]: 2025-11-29 08:06:55.545101955 +0000 UTC m=+0.213825405 container remove d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:06:55 np0005539550 systemd[1]: libpod-conmon-d9ad244ef5e6a649f9c3ceab6b39938fe77de6bb460de0e5885c4aa4a3ec2a09.scope: Deactivated successfully.
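[Annotation] The create/start/attach/died/remove sequence above, finishing in well under a second, is the signature of cephadm launching a short-lived helper container from the ceph image; the lone "167 167" line it printed looks like a uid/gid probe of the ceph user inside the image. A minimal sketch of the same pattern, assuming podman is on PATH and using the image digest from the log; the probe command here is an illustration, not necessarily cephadm's exact invocation:

```python
# Sketch: reproduce the short-lived helper-container pattern from the journal.
# The stat probe is an assumed example command; the image digest is taken
# verbatim from the podman events above.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# "--rm" deletes the container as soon as its entrypoint exits, which is why
# the journal shows "container died" and "container remove" within milliseconds
# of "container start".
result = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True)
print(result.stdout.strip())  # e.g. "167 167", the ceph uid/gid
```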
Nov 29 03:06:55 np0005539550 podman[298642]: 2025-11-29 08:06:55.708569176 +0000 UTC m=+0.046356064 container create 9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:06:55 np0005539550 systemd[1]: Started libpod-conmon-9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f.scope.
Nov 29 03:06:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:06:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6e31c45ce4cb9f8be22a0c1618e149c73bddf73ef653b8b5f35e190adda899/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6e31c45ce4cb9f8be22a0c1618e149c73bddf73ef653b8b5f35e190adda899/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6e31c45ce4cb9f8be22a0c1618e149c73bddf73ef653b8b5f35e190adda899/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6e31c45ce4cb9f8be22a0c1618e149c73bddf73ef653b8b5f35e190adda899/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af6e31c45ce4cb9f8be22a0c1618e149c73bddf73ef653b8b5f35e190adda899/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:55 np0005539550 podman[298642]: 2025-11-29 08:06:55.688675467 +0000 UTC m=+0.026462375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:55 np0005539550 podman[298642]: 2025-11-29 08:06:55.94147133 +0000 UTC m=+0.279258308 container init 9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamarr, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:06:55 np0005539550 podman[298642]: 2025-11-29 08:06:55.970385345 +0000 UTC m=+0.308172233 container start 9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:06:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:56.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
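[Annotation] The beast lines are radosgw's access log. The anonymous "HEAD / HTTP/1.0" requests arriving from 192.168.122.100 and .102 every couple of seconds look like load-balancer health checks. A sketch of that kind of probe, where the target port is an assumption (the journal never shows which port radosgw listens on):

```python
# Sketch: the kind of HEAD-request health probe that produces the beast log
# lines above. Host and port are assumptions taken from context, not the log.
import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")   # anonymous: no auth headers, hence "anonymous" in the log
resp = conn.getresponse()
print(resp.status)          # radosgw answers 200 with an empty body
conn.close()
```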
Nov 29 03:06:56 np0005539550 podman[298642]: 2025-11-29 08:06:56.100473848 +0000 UTC m=+0.438260756 container attach 9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:06:56 np0005539550 nova_compute[257631]: 2025-11-29 08:06:56.157 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:56 np0005539550 nova_compute[257631]: 2025-11-29 08:06:56.324 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:06:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:06:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:56 np0005539550 clever_lamarr[298658]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:06:56 np0005539550 clever_lamarr[298658]: --> relative data size: 1.0
Nov 29 03:06:56 np0005539550 clever_lamarr[298658]: --> All data devices are unavailable
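[Annotation] The three "-->" lines are a ceph-volume device report: one LVM data device and zero physical ones were passed in, and all of them are already consumed, so no new OSD gets created. The same availability information can be queried directly; a sketch assuming ceph-volume is reachable on PATH (on this host it actually runs inside the ceph container, as the surrounding podman events show):

```python
# Sketch: ask ceph-volume which devices are still usable for new OSDs.
# Assumption: ceph-volume is invocable directly rather than via podman.
import json
import subprocess

out = subprocess.run(["ceph-volume", "inventory", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
for dev in json.loads(out):
    # "available" is False for devices already carrying LVs/OSDs, which is
    # why the run above concluded "All data devices are unavailable".
    state = "available" if dev["available"] else "unavailable"
    print(dev["path"], state, dev.get("rejected_reasons", []))
```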
Nov 29 03:06:56 np0005539550 systemd[1]: libpod-9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f.scope: Deactivated successfully.
Nov 29 03:06:56 np0005539550 conmon[298658]: conmon 9d85d2960c3a1456b226 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f.scope/container/memory.events
Nov 29 03:06:56 np0005539550 podman[298642]: 2025-11-29 08:06:56.832591176 +0000 UTC m=+1.170378064 container died 9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:06:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 2.1 MiB/s wr, 127 op/s
Nov 29 03:06:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-af6e31c45ce4cb9f8be22a0c1618e149c73bddf73ef653b8b5f35e190adda899-merged.mount: Deactivated successfully.
Nov 29 03:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:06:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:06:58.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:06:58 np0005539550 podman[298642]: 2025-11-29 08:06:58.148823528 +0000 UTC m=+2.486610416 container remove 9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lamarr, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:06:58 np0005539550 systemd[1]: libpod-conmon-9d85d2960c3a1456b2261c9a9618fe0bb2f1a8b45ca2037cdf7cbacb553e516f.scope: Deactivated successfully.
Nov 29 03:06:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:06:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:06:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:06:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:06:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:06:58.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:06:58 np0005539550 podman[298828]: 2025-11-29 08:06:58.811578724 +0000 UTC m=+0.056999401 container create 468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_golick, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:06:58 np0005539550 systemd[1]: Started libpod-conmon-468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562.scope.
Nov 29 03:06:58 np0005539550 podman[298828]: 2025-11-29 08:06:58.780356191 +0000 UTC m=+0.025776888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:06:58 np0005539550 podman[298828]: 2025-11-29 08:06:58.902710761 +0000 UTC m=+0.148131468 container init 468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_golick, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:06:58 np0005539550 podman[298828]: 2025-11-29 08:06:58.912822344 +0000 UTC m=+0.158243021 container start 468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:06:58 np0005539550 podman[298828]: 2025-11-29 08:06:58.916218529 +0000 UTC m=+0.161639226 container attach 468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_golick, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:06:58 np0005539550 brave_golick[298844]: 167 167
Nov 29 03:06:58 np0005539550 systemd[1]: libpod-468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562.scope: Deactivated successfully.
Nov 29 03:06:58 np0005539550 podman[298828]: 2025-11-29 08:06:58.918709442 +0000 UTC m=+0.164130119 container died 468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:06:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0c74497d127578581d2214eeb529e20c0f101b182fedd45d275c863118dafda4-merged.mount: Deactivated successfully.
Nov 29 03:06:58 np0005539550 podman[298828]: 2025-11-29 08:06:58.969125377 +0000 UTC m=+0.214546054 container remove 468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_golick, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:06:58 np0005539550 systemd[1]: libpod-conmon-468217ddd940a9de8fa27199e4735debb64bb790ac904a91a0f02545c5a8c562.scope: Deactivated successfully.
Nov 29 03:06:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.4 MiB/s wr, 60 op/s
Nov 29 03:06:59 np0005539550 podman[298867]: 2025-11-29 08:06:59.141239105 +0000 UTC m=+0.052402166 container create 63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:06:59 np0005539550 podman[298867]: 2025-11-29 08:06:59.115440968 +0000 UTC m=+0.026604049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:06:59 np0005539550 systemd[1]: Started libpod-conmon-63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade.scope.
Nov 29 03:06:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:06:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e704ad2b278c30852681d0be63fe700ba6786174afdf1a58ec54914f71852f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e704ad2b278c30852681d0be63fe700ba6786174afdf1a58ec54914f71852f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e704ad2b278c30852681d0be63fe700ba6786174afdf1a58ec54914f71852f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e704ad2b278c30852681d0be63fe700ba6786174afdf1a58ec54914f71852f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:06:59
Nov 29 03:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'images', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root']
Nov 29 03:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
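[Annotation] This ceph-mgr block is one automatic balancer pass: it opens a plan named auto_<timestamp>, runs in upmap mode with misplaced objects capped at 5%, walks the listed pools, and ends with "prepared 0/10 changes", i.e. the PG distribution is already even and no upmap entries were injected. The same state can be checked by hand; a sketch assuming admin CLI access (the balancer module prints its status as JSON):

```python
# Sketch: inspect the mgr balancer that produced the log lines above.
# Assumption: the ceph CLI is available with admin credentials.
import json
import subprocess

out = subprocess.run(["ceph", "balancer", "status"],
                     capture_output=True, text=True, check=True).stdout
status = json.loads(out)
print(status["mode"])    # "upmap", matching "Mode upmap" in the log
print(status["active"])  # True while automatic optimization is enabled
```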
Nov 29 03:06:59 np0005539550 podman[298867]: 2025-11-29 08:06:59.555037416 +0000 UTC m=+0.466200477 container init 63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:06:59 np0005539550 podman[298867]: 2025-11-29 08:06:59.5627617 +0000 UTC m=+0.473924751 container start 63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:06:59 np0005539550 podman[298867]: 2025-11-29 08:06:59.724284472 +0000 UTC m=+0.635447523 container attach 63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:07:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:00.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]: {
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:    "0": [
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:        {
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "devices": [
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "/dev/loop3"
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            ],
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "lv_name": "ceph_lv0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "lv_size": "7511998464",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "name": "ceph_lv0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "tags": {
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.cluster_name": "ceph",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.crush_device_class": "",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.encrypted": "0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.osd_id": "0",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.type": "block",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:                "ceph.vdo": "0"
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            },
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "type": "block",
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:            "vg_name": "ceph_vg0"
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:        }
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]:    ]
Nov 29 03:07:00 np0005539550 dreamy_nash[298883]: }
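[Annotation] The JSON that dreamy_nash just printed is a ceph-volume LVM listing: a map from OSD id ("0") to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form under "tags". A minimal sketch of consuming it, assuming the container's stdout was captured to a file:

```python
# Sketch: parse the ceph-volume LVM report shown above.
# Assumption: the JSON block was saved to lvm_list.json.
import json

with open("lvm_list.json") as f:
    report = json.load(f)

for osd_id, lvs in report.items():
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"fsid={tags['ceph.osd_fsid']} "
              f"devices={','.join(lv['devices'])}")
# -> osd.0: /dev/ceph_vg0/ceph_lv0 fsid=5dd67027-... devices=/dev/loop3
```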
Nov 29 03:07:00 np0005539550 systemd[1]: libpod-63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade.scope: Deactivated successfully.
Nov 29 03:07:00 np0005539550 podman[298867]: 2025-11-29 08:07:00.395133423 +0000 UTC m=+1.306296484 container died 63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:07:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-37e704ad2b278c30852681d0be63fe700ba6786174afdf1a58ec54914f71852f-merged.mount: Deactivated successfully.
Nov 29 03:07:00 np0005539550 podman[298867]: 2025-11-29 08:07:00.453979009 +0000 UTC m=+1.365142060 container remove 63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:07:00 np0005539550 systemd[1]: libpod-conmon-63c4464d91d6c386f4fbab82c9ed26e4c3c6e825241e44228b50b2c33392cade.scope: Deactivated successfully.
Nov 29 03:07:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.873 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <uuid>09dcdda4-714d-4aaf-a92b-e6a80cea0b28</uuid>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <name>instance-00000042</name>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <nova:name>tempest-DeleteServersTestJSON-server-692868014</nova:name>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:06:53</nova:creationTime>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <nova:port uuid="796f4fca-ba4d-44b1-9886-29e41980cd49">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <entry name="serial">09dcdda4-714d-4aaf-a92b-e6a80cea0b28</entry>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <entry name="uuid">09dcdda4-714d-4aaf-a92b-e6a80cea0b28</entry>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk.config">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:f6:3b:27"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <target dev="tap796f4fca-ba"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/console.log" append="off"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:07:00 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:07:00 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:07:00 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:07:00 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
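[Annotation] The dump above is the libvirt domain XML nova generated for instance 09dcdda4-714d-4aaf-a92b-e6a80cea0b28: a q35 KVM guest whose root disk and config-drive CD-ROM are RBD images in the "vms" pool served by the three monitors at 192.168.122.100/.101/.102, plus an OVS-backed virtio NIC on tap796f4fca-ba. A sketch of pulling the RBD endpoints back out of such XML, assuming the block between <domain> and </domain> was saved to a file:

```python
# Sketch: extract the RBD disk sources from the nova-generated domain XML.
# Assumption: the XML dump above was saved verbatim to domain.xml.
import xml.etree.ElementTree as ET

root = ET.parse("domain.xml").getroot()
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    if src is not None and src.get("protocol") == "rbd":
        hosts = [f"{h.get('name')}:{h.get('port')}" for h in src.findall("host")]
        print(src.get("name"), "->", ", ".join(hosts))
# vms/09dcdda4-..._disk -> 192.168.122.100:6789, 192.168.122.102:6789, ...
```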
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.874 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Preparing to wait for external event network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.874 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.874 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.874 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
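[Annotation] The Acquiring/acquired/released trio is oslo.concurrency's standard DEBUG trace around a named in-process lock; nova serializes event registration per instance under "<instance-uuid>-events" so that preparing for and delivering external events cannot interleave. A minimal sketch of the same primitive, with the function body left as a placeholder:

```python
# Sketch: the oslo.concurrency pattern behind the lock trace above.
# The lock name mirrors the one in the journal; the body is a placeholder.
from oslo_concurrency import lockutils

@lockutils.synchronized("09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events")
def _create_or_get_event():
    # Runs with the named lock held; oslo emits the acquire/release pairs
    # at DEBUG level, which is exactly what the journal shows.
    ...
```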
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.875 257641 DEBUG nova.virt.libvirt.vif [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-692868014',display_name='tempest-DeleteServersTestJSON-server-692868014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-692868014',id=66,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-s4r0xwct',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:06:45Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=09dcdda4-714d-4aaf-a92b-e6a80cea0b28,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.875 257641 DEBUG nova.network.os_vif_util [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.876 257641 DEBUG nova.network.os_vif_util [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:3b:27,bridge_name='br-int',has_traffic_filtering=True,id=796f4fca-ba4d-44b1-9886-29e41980cd49,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap796f4fca-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.876 257641 DEBUG os_vif [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:3b:27,bridge_name='br-int',has_traffic_filtering=True,id=796f4fca-ba4d-44b1-9886-29e41980cd49,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap796f4fca-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.877 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.877 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.881 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.881 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap796f4fca-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.881 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap796f4fca-ba, col_values=(('external_ids', {'iface-id': '796f4fca-ba4d-44b1-9886-29e41980cd49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:3b:27', 'vm-uuid': '09dcdda4-714d-4aaf-a92b-e6a80cea0b28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.883 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:00 np0005539550 NetworkManager[49039]: <info>  [1764403620.8854] manager: (tap796f4fca-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.886 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.890 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.891 257641 INFO os_vif [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:3b:27,bridge_name='br-int',has_traffic_filtering=True,id=796f4fca-ba4d-44b1-9886-29e41980cd49,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap796f4fca-ba')#033[00m
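[Annotation] To plug the VIF, os-vif issued two ovsdbapp transactions against the local ovsdb: an idempotent AddBridgeCommand for br-int (logged as "Transaction caused no change" because the bridge exists) and then AddPortCommand plus DbSetCommand, creating tap796f4fca-ba and stamping its external_ids with the neutron port id, MAC, and instance uuid; that stamp is what lets OVN bind the port. A sketch of the same calls through ovsdbapp's public API, where the socket path is an assumption and the external_ids are trimmed to two of the four keys from the log:

```python
# Sketch: replay the two logged ovsdbapp transactions via the public API.
# Assumption: the default local ovsdb socket path; external_ids trimmed.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# Transaction 1: idempotent bridge creation (may_exist=True, as in the log).
api.add_br("br-int", may_exist=True, datapath_type="system").execute()

# Transaction 2: add the tap port and stamp its external_ids for OVN.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap796f4fca-ba", may_exist=True))
    txn.add(api.db_set("Interface", "tap796f4fca-ba",
                       ("external_ids",
                        {"iface-id": "796f4fca-ba4d-44b1-9886-29e41980cd49",
                         "attached-mac": "fa:16:3e:f6:3b:27"})))
```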
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.913 257641 DEBUG oslo_concurrency.lockutils [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-09dcdda4-714d-4aaf-a92b-e6a80cea0b28" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.913 257641 DEBUG nova.compute.manager [req-8471da9a-d2cb-4ee8-b64e-95004cb59b63 req-61eb222b-d002-45d5-a337-71d2761ef46a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Received event network-vif-deleted-54f58d56-0af4-48e7-9ab3-a22887a4161f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.952 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.952 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.952 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:f6:3b:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.953 257641 INFO nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Using config drive#033[00m
Nov 29 03:07:00 np0005539550 nova_compute[257631]: 2025-11-29 08:07:00.977 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.035 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403606.0007102, 73947417-cf68-4b6b-820b-66f001a8a178 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.036 257641 INFO nova.compute.manager [-] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:07:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.066 257641 DEBUG nova.compute.manager [None req-699e8b9a-0e92-4a48-90cf-59f1728013e5 - - - - - -] [instance: 73947417-cf68-4b6b-820b-66f001a8a178] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:01 np0005539550 podman[299067]: 2025-11-29 08:07:01.087557904 +0000 UTC m=+0.047009670 container create 8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:07:01 np0005539550 systemd[1]: Started libpod-conmon-8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c.scope.
Nov 29 03:07:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:07:01 np0005539550 podman[299067]: 2025-11-29 08:07:01.067246945 +0000 UTC m=+0.026698731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:01 np0005539550 podman[299067]: 2025-11-29 08:07:01.172777882 +0000 UTC m=+0.132229658 container init 8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:07:01 np0005539550 podman[299067]: 2025-11-29 08:07:01.179229404 +0000 UTC m=+0.138681160 container start 8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:07:01 np0005539550 sweet_robinson[299083]: 167 167
Nov 29 03:07:01 np0005539550 systemd[1]: libpod-8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c.scope: Deactivated successfully.
Nov 29 03:07:01 np0005539550 conmon[299083]: conmon 8650b69d0843856f7291 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c.scope/container/memory.events
Nov 29 03:07:01 np0005539550 podman[299067]: 2025-11-29 08:07:01.184357033 +0000 UTC m=+0.143808829 container attach 8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:07:01 np0005539550 podman[299067]: 2025-11-29 08:07:01.185137082 +0000 UTC m=+0.144588858 container died 8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:07:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fd7179230e476d868a9657efb57ad9f742db5152a72aaa69738cc215cfec7172-merged.mount: Deactivated successfully.
Nov 29 03:07:01 np0005539550 podman[299067]: 2025-11-29 08:07:01.23447247 +0000 UTC m=+0.193924226 container remove 8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_robinson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:07:01 np0005539550 systemd[1]: libpod-conmon-8650b69d0843856f72912b98005cded2d7cbed99878408cf439cc17c35f3482c.scope: Deactivated successfully.
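The create/init/start/attach/died/remove burst above, all inside roughly 150 ms, is the signature of a one-shot "podman run --rm" container; cephadm runs such throwaway containers to execute short ceph commands on the host. The "167 167" printed by sweet_robinson is consistent with a uid/gid probe (167 is the ceph user and group in these images). A minimal sketch of the same pattern; the probed command is an illustrative assumption, not taken from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    def run_oneshot(cmd):
        # --rm removes the container on exit, which is why "container died"
        # and "container remove" follow "container start" within milliseconds.
        res = subprocess.run(["podman", "run", "--rm", IMAGE] + cmd,
                             capture_output=True, text=True, check=True)
        return res.stdout

    # Illustrative only: a uid/gid probe like the "167 167" output above.
    print(run_oneshot(["stat", "-c", "%u %g", "/var/lib/ceph"]))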
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.326 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:01 np0005539550 podman[299106]: 2025-11-29 08:07:01.410403314 +0000 UTC m=+0.048037616 container create cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_moser, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:07:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:01 np0005539550 systemd[1]: Started libpod-conmon-cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274.scope.
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.474 257641 INFO nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Creating config drive at /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/disk.config#033[00m
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.480 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkqaz4ho execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:07:01 np0005539550 podman[299106]: 2025-11-29 08:07:01.389452308 +0000 UTC m=+0.027086630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:07:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c7d42fafbed3c70b30c8a8bf43636b9d808d9e8b1e4b996b9c5130a3e03428/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c7d42fafbed3c70b30c8a8bf43636b9d808d9e8b1e4b996b9c5130a3e03428/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c7d42fafbed3c70b30c8a8bf43636b9d808d9e8b1e4b996b9c5130a3e03428/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c7d42fafbed3c70b30c8a8bf43636b9d808d9e8b1e4b996b9c5130a3e03428/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.619 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfkqaz4ho" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
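The mkisofs invocation above packs a temporary directory of metadata files into an ISO9660 image labelled config-2, which is what guests recognize as a config drive. A runnable sketch of the same step, with the flags copied from the logged command and the directory contents omitted:

    import pathlib, subprocess, tempfile

    def build_config_drive(iso_path, source_dir):
        # Flags copied from the logged command; the -publisher value is
        # abbreviated here (nova appends its full version string).
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute",
             "-quiet", "-J", "-r", "-V", "config-2", source_dir],
            check=True)

    src = tempfile.mkdtemp()
    pathlib.Path(src, "openstack", "latest").mkdir(parents=True)
    build_config_drive("/tmp/disk.config", src)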
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.653 257641 DEBUG nova.storage.rbd_utils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.658 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/disk.config 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:01 np0005539550 podman[299106]: 2025-11-29 08:07:01.679245088 +0000 UTC m=+0.316879410 container init cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_moser, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:07:01 np0005539550 podman[299106]: 2025-11-29 08:07:01.690802538 +0000 UTC m=+0.328436840 container start cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:07:01 np0005539550 podman[299106]: 2025-11-29 08:07:01.711150069 +0000 UTC m=+0.348784391 container attach cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.849 257641 DEBUG oslo_concurrency.processutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/disk.config 09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.850 257641 INFO nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Deleting local config drive /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28/disk.config because it was imported into RBD.#033[00m
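The flow here is: probe whether the RBD image already exists (the "does not exist" lines at 08:07:00.977 and 08:07:01.653), shell out to rbd import to push the ISO into the vms pool, then delete the local copy. A minimal sketch of such an existence probe with the python rbd bindings (python3-rados/python3-rbd), assuming the same pool, client id, and conf file as the logged command:

    import rados
    import rbd

    def rbd_image_exists(pool, name):
        cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                              rados_id="openstack")
        cluster.connect()
        try:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    # Opening the image raises ImageNotFound if absent,
                    # which is the path the rbd_utils lines above log.
                    with rbd.Image(ioctx, name):
                        return True
                except rbd.ImageNotFound:
                    return False
        finally:
            cluster.shutdown()

    print(rbd_image_exists("vms",
                           "09dcdda4-714d-4aaf-a92b-e6a80cea0b28_disk.config"))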
Nov 29 03:07:01 np0005539550 kernel: tap796f4fca-ba: entered promiscuous mode
Nov 29 03:07:01 np0005539550 NetworkManager[49039]: <info>  [1764403621.9087] manager: (tap796f4fca-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Nov 29 03:07:01 np0005539550 systemd-udevd[299177]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:01Z|00194|binding|INFO|Claiming lport 796f4fca-ba4d-44b1-9886-29e41980cd49 for this chassis.
Nov 29 03:07:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:01Z|00195|binding|INFO|796f4fca-ba4d-44b1-9886-29e41980cd49: Claiming fa:16:3e:f6:3b:27 10.100.0.11
Nov 29 03:07:01 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.967 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:01 np0005539550 NetworkManager[49039]: <info>  [1764403621.9795] device (tap796f4fca-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:01.982 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:3b:27 10.100.0.11'], port_security=['fa:16:3e:f6:3b:27 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '09dcdda4-714d-4aaf-a92b-e6a80cea0b28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=796f4fca-ba4d-44b1-9886-29e41980cd49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:01.983 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 796f4fca-ba4d-44b1-9886-29e41980cd49 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis#033[00m
Nov 29 03:07:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:01.985 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9#033[00m
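The "Matched UPDATE: PortBindingUpdatedEvent(...)" line two entries up is ovsdbapp's row-event machinery firing on the OVN southbound Port_Binding table. A stripped-down sketch of such an event class, assuming ovsdbapp's RowEvent interface; the real agent applies more checks (datapath type, requested chassis) before provisioning:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row gets bound to this chassis."""

        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr seen in the log line above.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # 'old' carries only the changed columns; chassis=[] means the
            # port was previously unbound, as in old=Port_Binding(chassis=[]).
            return (getattr(old, "chassis", None) == [] and
                    bool(row.chassis) and
                    row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print(f"Port {row.logical_port} bound to our chassis")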
Nov 29 03:07:01 np0005539550 NetworkManager[49039]: <info>  [1764403621.9872] device (tap796f4fca-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:01Z|00196|binding|INFO|Setting lport 796f4fca-ba4d-44b1-9886-29e41980cd49 ovn-installed in OVS
Nov 29 03:07:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:01Z|00197|binding|INFO|Setting lport 796f4fca-ba4d-44b1-9886-29e41980cd49 up in Southbound
Nov 29 03:07:02 np0005539550 nova_compute[257631]: 2025-11-29 08:07:01.998 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:01.997 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc63839-4ca0-4a4d-8add-43af0d4aa8b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:01.998 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
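The veth creation announced above runs through the privsep daemon (the surrounding reply[...] lines), which performs netlink operations on the agent's behalf. A minimal sketch of the same plumbing using pyroute2 directly, an assumption made for illustration, with the pair and namespace names taken from the log:

    from pyroute2 import IPRoute, netns

    NS = "ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9"
    if NS not in netns.listnetns():
        netns.create(NS)

    ipr = IPRoute()
    # Create the pair: -20 stays in the root namespace (later added to
    # br-int), -21 is pushed into the ovnmeta namespace, as logged above.
    ipr.link("add", ifname="tapa8be8715-20", kind="veth",
             peer="tapa8be8715-21")
    idx = ipr.link_lookup(ifname="tapa8be8715-21")[0]
    ipr.link("set", index=idx, net_ns_fd=NS)
    ipr.close()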
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.000 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.000 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[16543f87-621d-4804-81de-c5d1fdebff1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.002 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6c49d461-68e5-42d3-9f84-a6dd7cdb8d82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 systemd-machined[216673]: New machine qemu-31-instance-00000042.
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.017 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8e4932-23c2-4256-80ad-9e2d254e8667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 systemd[1]: Started Virtual Machine qemu-31-instance-00000042.
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.031 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2be1e18a-ba99-4746-8a12-6be4f1ceb495]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.065 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[425a5677-5cb1-4dec-8098-231f29d93c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:02.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
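The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 look like load-balancer health probes against radosgw's beast frontend; they recur throughout this log. A stdlib sketch of the same probe; the gateway host and port below are placeholders, since the log only shows the probing clients' addresses:

    import http.client

    def rgw_alive(host="127.0.0.1", port=8080):
        # Placeholder endpoint, not a value taken from the log.
        conn = http.client.HTTPConnection(host, port, timeout=5)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200   # 200 means serving
        finally:
            conn.close()

    print(rgw_alive())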
Nov 29 03:07:02 np0005539550 NetworkManager[49039]: <info>  [1764403622.0908] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/86)
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.091 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[25143e5c-e350-452d-a1d7-a74248dfac5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.127 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[390b1ca4-1735-4ab4-9c89-8438181a15f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.130 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a4de8c-8933-41ad-aa35-53eaad5e2300]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 NetworkManager[49039]: <info>  [1764403622.1561] device (tapa8be8715-20): carrier: link connected
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.156 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[06861a7a-02f4-49a3-8fb5-8a999d41c5de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.173 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e746325d-ada4-4c10-a9fd-c97c9507263f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666981, 'reachable_time': 19327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299214, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.190 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8653fc29-6d8e-4864-8f81-6475940034bd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666981, 'tstamp': 666981}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299215, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.206 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[15931cf5-ed2d-4361-a479-069b40b51478]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666981, 'reachable_time': 19327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299216, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.237 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e657b0cd-bbd5-42cb-9970-6a4d9edece9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.295 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[658a07a2-a454-46e2-9f4a-605628ffef58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.297 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.297 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.297 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:02 np0005539550 nova_compute[257631]: 2025-11-29 08:07:02.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:02 np0005539550 NetworkManager[49039]: <info>  [1764403622.3002] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Nov 29 03:07:02 np0005539550 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:07:02 np0005539550 nova_compute[257631]: 2025-11-29 08:07:02.303 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.304 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:02 np0005539550 nova_compute[257631]: 2025-11-29 08:07:02.306 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:02Z|00198|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:07:02 np0005539550 nova_compute[257631]: 2025-11-29 08:07:02.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.324 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.326 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[11b0ba2e-8cb2-41c2-b8ae-40b312e8f683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.326 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:02.329 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
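Taken together, the config above makes haproxy inside the ovnmeta namespace listen on the link-local metadata address, tag each request with the network UUID via X-OVN-Network-ID, and relay it to the agent's unix socket at /var/lib/neutron/metadata_proxy. From a guest on this network the service is reached with a plain HTTP call; a minimal client sketch using only the stdlib (the path queried is the standard OpenStack metadata path, not something shown in this log):

    import http.client

    conn = http.client.HTTPConnection("169.254.169.254", 80, timeout=10)
    conn.request("GET", "/openstack/latest/meta_data.json")
    resp = conn.getresponse()
    # X-OVN-Network-ID is added by the proxy itself, never by the client.
    print(resp.status, resp.read()[:200])
    conn.close()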
Nov 29 03:07:02 np0005539550 loving_moser[299122]: {
Nov 29 03:07:02 np0005539550 loving_moser[299122]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:07:02 np0005539550 loving_moser[299122]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:07:02 np0005539550 loving_moser[299122]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:07:02 np0005539550 loving_moser[299122]:        "osd_id": 0,
Nov 29 03:07:02 np0005539550 loving_moser[299122]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:07:02 np0005539550 loving_moser[299122]:        "type": "bluestore"
Nov 29 03:07:02 np0005539550 loving_moser[299122]:    }
Nov 29 03:07:02 np0005539550 loving_moser[299122]: }
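The JSON block printed by loving_moser resembles "ceph-volume raw list --format json" output: one entry per OSD keyed by osd_uuid, with the backing device and store type. A stdlib sketch that extracts the osd_id-to-device mapping from such output, using the values seen above:

    import json

    raw = """{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }"""

    for osd_uuid, info in json.loads(raw).items():
        print(f"osd.{info['osd_id']} ({info['type']}) -> {info['device']}")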
Nov 29 03:07:02 np0005539550 systemd[1]: libpod-cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274.scope: Deactivated successfully.
Nov 29 03:07:02 np0005539550 podman[299254]: 2025-11-29 08:07:02.659348826 +0000 UTC m=+0.026766652 container died cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:07:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:02.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.205 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403623.2050989, 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.205 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.352 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.356 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403623.2052753, 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.356 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.381 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.384 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:03 np0005539550 nova_compute[257631]: 2025-11-29 08:07:03.415 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
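The power-state numbers in these sync messages come from nova.compute.power_state: the DB still holds 0 (NOSTATE) while the instance is building, while the hypervisor reports 3 (PAUSED) during spawn and then 1 (RUNNING) once the guest is resumed. A small lookup table reproducing those constants:

    # Values from nova.compute.power_state.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    def describe(db_state, vm_state):
        return f"DB={POWER_STATE[db_state]} hypervisor={POWER_STATE[vm_state]}"

    print(describe(0, 3))  # the 08:07:03 "Paused" sync above
    print(describe(0, 1))  # the 08:07:04 "Resumed" sync below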
Nov 29 03:07:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-41c7d42fafbed3c70b30c8a8bf43636b9d808d9e8b1e4b996b9c5130a3e03428-merged.mount: Deactivated successfully.
Nov 29 03:07:03 np0005539550 podman[299254]: 2025-11-29 08:07:03.899264734 +0000 UTC m=+1.266682560 container remove cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_moser, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:07:03 np0005539550 systemd[1]: libpod-conmon-cafa8aea64c169c5806cccb4245a330bd6968eebbd58b1aa6f88d2be88ef8274.scope: Deactivated successfully.
Nov 29 03:07:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:07:04 np0005539550 podman[299270]: 2025-11-29 08:07:03.911023269 +0000 UTC m=+1.259327086 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:04 np0005539550 podman[299270]: 2025-11-29 08:07:04.105726324 +0000 UTC m=+1.454030111 container create f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.127 257641 DEBUG nova.compute.manager [req-9b9cbf2e-5543-4566-aa28-6638b96c74ed req-f1262c95-25d6-4097-875b-63258a057fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received event network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.128 257641 DEBUG oslo_concurrency.lockutils [req-9b9cbf2e-5543-4566-aa28-6638b96c74ed req-f1262c95-25d6-4097-875b-63258a057fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.129 257641 DEBUG oslo_concurrency.lockutils [req-9b9cbf2e-5543-4566-aa28-6638b96c74ed req-f1262c95-25d6-4097-875b-63258a057fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.129 257641 DEBUG oslo_concurrency.lockutils [req-9b9cbf2e-5543-4566-aa28-6638b96c74ed req-f1262c95-25d6-4097-875b-63258a057fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.129 257641 DEBUG nova.compute.manager [req-9b9cbf2e-5543-4566-aa28-6638b96c74ed req-f1262c95-25d6-4097-875b-63258a057fb9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Processing event network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.131 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.136 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403624.1359174, 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.137 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.140 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.143 257641 INFO nova.virt.libvirt.driver [-] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Instance spawned successfully.#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.144 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.169 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.175 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.175 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.176 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.176 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.176 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.177 257641 DEBUG nova.virt.libvirt.driver [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
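Editor's note: the six "Found default" lines show nova backfilling image-property defaults (hw_cdrom_bus=sata, hw_disk_bus=virtio, and so on) for properties the image did not define; the same values later reappear in the instance's system_metadata in the Instance dump at 08:07:11.296. A minimal standalone sketch of that backfill pattern, with an illustrative table and helper rather than nova's actual code:

    # Sketch only: DEFAULT_IMAGE_PROPS and the helper are illustrative,
    # but the values mirror the "Found default" lines above.
    DEFAULT_IMAGE_PROPS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_image_props(system_metadata: dict) -> dict:
        """Backfill image_* keys the image itself did not define."""
        for prop, default in DEFAULT_IMAGE_PROPS.items():
            system_metadata.setdefault(f'image_{prop}', default)
        return system_metadata

    print(register_undefined_image_props({'image_hw_machine_type': 'q35'}))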
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.180 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.206 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
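Editor's note: here the lifecycle handler compares the database power state (0, NOSTATE) with the hypervisor's (1, RUNNING) but skips the sync because the instance still has a pending task; the same skip repeats at 08:07:06 during suspend. A hedged reduction of that decision, with constants matching the values used in these log lines:

    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # power states as logged above

    def sync_power_state(db_state: int, vm_state: int, task_state):
        # Illustrative logic only: a pending task means another operation
        # owns the instance, so the event-driven sync must not interfere.
        if task_state is not None:
            return f'pending task ({task_state}), skip'
        if db_state != vm_state:
            return f'update DB power_state {db_state} -> {vm_state}'
        return 'in sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))    # skip
    print(sync_power_state(RUNNING, PAUSED, 'suspending'))   # skip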
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.235 257641 INFO nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Took 18.89 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.236 257641 DEBUG nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.291 257641 INFO nova.compute.manager [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Took 19.88 seconds to build instance.#033[00m
Nov 29 03:07:04 np0005539550 nova_compute[257631]: 2025-11-29 08:07:04.310 257641 DEBUG oslo_concurrency.lockutils [None req-fce6abf6-2007-4825-a236-4f8db8b5ffc1 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
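Editor's note: the Acquiring/acquired/released triplets throughout this log come from oslo.concurrency; nova serializes build and terminate on a lock named after the instance UUID, which is why the terminate request at 08:07:11.247 waits on the same lock name. A minimal usage sketch with the real lockutils API (the decorated function body is a stand-in):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = '09dcdda4-714d-4aaf-a92b-e6a80cea0b28'

    @lockutils.synchronized(INSTANCE_UUID)
    def locked_do_build_and_run_instance():
        # While this holds the lock (19.990s in the log), any concurrent
        # terminate on the same UUID blocks at its "Acquiring lock" line.
        pass

    locked_do_build_and_run_instance()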
Nov 29 03:07:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:07:04 np0005539550 systemd[1]: Started libpod-conmon-f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b.scope.
Nov 29 03:07:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:07:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fbbcd3bc3b73a95b8e4d0ea6dea0a7f2b48f8971dc3b41161095d5f1c46ec41/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5e612a2f-4619-4a00-9097-f05c2b5476e9 does not exist
Nov 29 03:07:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0bf7945d-6b49-48ce-b36c-2affa3866c4d does not exist
Nov 29 03:07:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 70602a43-ad1c-4a31-aad8-9538d338a95b does not exist
Nov 29 03:07:04 np0005539550 podman[299270]: 2025-11-29 08:07:04.621516364 +0000 UTC m=+1.969820171 container init f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:07:04 np0005539550 podman[299270]: 2025-11-29 08:07:04.628759046 +0000 UTC m=+1.977062833 container start f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
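Editor's note: podman records the container's full label set on every lifecycle event (init and start here; died, cleanup, and remove below). When correlating these entries with a live container, the same labels can be read back directly; a small helper using the standard podman CLI, with the container name taken from this log:

    import json
    import subprocess

    def container_labels(name: str) -> dict:
        # podman inspect emits the label map as JSON via a Go template.
        out = subprocess.check_output(
            ['podman', 'inspect', '--format', '{{json .Config.Labels}}', name])
        return json.loads(out)

    labels = container_labels(
        'neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9')
    print(labels.get('tcib_build_tag'))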
Nov 29 03:07:04 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [NOTICE]   (299342) : New worker (299364) forked
Nov 29 03:07:04 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [NOTICE]   (299342) : Loading success.
Nov 29 03:07:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:04.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:07:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 0 op/s
Nov 29 03:07:05 np0005539550 nova_compute[257631]: 2025-11-29 08:07:05.885 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.077 257641 DEBUG nova.objects.instance [None req-c82c85ac-cbb6-4035-b43f-74a4f9d84d42 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:06.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
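Editor's note: the radosgw "beast:" access lines recur every couple of seconds as anonymous HEAD / probes from 192.168.122.100/.102, consistent with load-balancer health checks. For ad-hoc analysis of this journal, a regex fitted to this log's exact line shape (not an official radosgw format definition):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:07:06.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))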
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.100 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403626.1002765, 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.101 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.127 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.131 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.150 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.287 257641 DEBUG nova.compute.manager [req-9d8d770d-9542-416a-b7c4-0b18cdc58308 req-86b9367b-fc5f-4823-b1b6-dca8107c59b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received event network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.287 257641 DEBUG oslo_concurrency.lockutils [req-9d8d770d-9542-416a-b7c4-0b18cdc58308 req-86b9367b-fc5f-4823-b1b6-dca8107c59b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.288 257641 DEBUG oslo_concurrency.lockutils [req-9d8d770d-9542-416a-b7c4-0b18cdc58308 req-86b9367b-fc5f-4823-b1b6-dca8107c59b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.288 257641 DEBUG oslo_concurrency.lockutils [req-9d8d770d-9542-416a-b7c4-0b18cdc58308 req-86b9367b-fc5f-4823-b1b6-dca8107c59b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.288 257641 DEBUG nova.compute.manager [req-9d8d770d-9542-416a-b7c4-0b18cdc58308 req-86b9367b-fc5f-4823-b1b6-dca8107c59b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] No waiting events found dispatching network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.288 257641 WARNING nova.compute.manager [req-9d8d770d-9542-416a-b7c4-0b18cdc58308 req-86b9367b-fc5f-4823-b1b6-dca8107c59b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received unexpected event network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 for instance with vm_state active and task_state suspending.#033[00m
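Editor's note: this Received event / "No waiting events found" / unexpected-event sequence is nova's external-event dispatch. Neutron delivered network-vif-plugged after the spawn-side waiter (satisfied at 08:07:04.131) was already gone, so the event is popped with no registered consumer and dropped with a warning. An illustrative reduction of the pop pattern, not nova's code:

    import threading

    _waiters: dict = {}                # tag -> threading.Event
    _lock = threading.Lock()           # mirrors the "-events" lock lines

    def prepare_event(tag: str) -> threading.Event:
        with _lock:
            return _waiters.setdefault(tag, threading.Event())

    def pop_event(tag: str) -> None:
        with _lock:
            waiter = _waiters.pop(tag, None)
        if waiter is None:
            print(f'WARNING: unexpected event {tag}')  # nobody was waiting
        else:
            waiter.set()               # wakes the registered waiter

    pop_event('network-vif-plugged-796f4fca')  # prints the warning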
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.329 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:06Z|00199|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:07:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:06.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:06 np0005539550 nova_compute[257631]: 2025-11-29 08:07:06.740 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 758 KiB/s rd, 12 KiB/s wr, 34 op/s
Nov 29 03:07:07 np0005539550 podman[299408]: 2025-11-29 08:07:07.318958368 +0000 UTC m=+0.060144289 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Nov 29 03:07:07 np0005539550 podman[299409]: 2025-11-29 08:07:07.340991141 +0000 UTC m=+0.069031813 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:07:07 np0005539550 nova_compute[257631]: 2025-11-29 08:07:07.568 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:07.568 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:07.570 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
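Editor's note: the metadata agent deliberately waits before acknowledging the SB_Global nb_cfg bump; the matching Chassis_Private write (neutron:ovn-metadata-sb-cfg = 22) appears four seconds later at 08:07:11.572. A hedged sketch of that coalescing delay, with the ack callback standing in for the DbSetCommand transaction:

    import threading

    class ChassisAck:
        """Coalesce rapid nb_cfg updates; ack only the latest after a delay."""
        def __init__(self, delay: float, write_ack):
            self._delay, self._write_ack = delay, write_ack
            self._timer, self._latest = None, None

        def on_sb_global_update(self, nb_cfg: int) -> None:
            self._latest = nb_cfg
            if self._timer:                       # restart: last value wins
                self._timer.cancel()
            self._timer = threading.Timer(
                self._delay, lambda: self._write_ack(self._latest))
            self._timer.start()

    acker = ChassisAck(4.0, lambda cfg: print(f'ack nb_cfg={cfg}'))
    acker.on_sb_global_update(22)                 # acks ~4 seconds later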
Nov 29 03:07:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000050s ======
Nov 29 03:07:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:08.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 29 03:07:08 np0005539550 kernel: tap796f4fca-ba (unregistering): left promiscuous mode
Nov 29 03:07:08 np0005539550 NetworkManager[49039]: <info>  [1764403628.1002] device (tap796f4fca-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.162 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:08Z|00200|binding|INFO|Releasing lport 796f4fca-ba4d-44b1-9886-29e41980cd49 from this chassis (sb_readonly=0)
Nov 29 03:07:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:08Z|00201|binding|INFO|Setting lport 796f4fca-ba4d-44b1-9886-29e41980cd49 down in Southbound
Nov 29 03:07:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:08Z|00202|binding|INFO|Removing iface tap796f4fca-ba ovn-installed in OVS
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.165 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:07:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:08.171 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:3b:27 10.100.0.11'], port_security=['fa:16:3e:f6:3b:27 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '09dcdda4-714d-4aaf-a92b-e6a80cea0b28', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=796f4fca-ba4d-44b1-9886-29e41980cd49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:08.172 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 796f4fca-ba4d-44b1-9886-29e41980cd49 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:07:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:08.173 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:07:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:08.174 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4d81b5ea-1495-45aa-b117-963741f43eef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:08.174 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore#033[00m
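Editor's note: the teardown decision is straightforward: once the PortBindingUpdatedEvent shows the last locally bound VIF gone, the datapath has no ports left to serve and the ovnmeta- namespace (with its haproxy container) is reclaimed. A condensed, illustrative version of that check:

    def on_port_unbound(datapath: str, bound_ports: set, port: str, teardown):
        # Illustrative only: the real bookkeeping lives in the metadata agent.
        bound_ports.discard(port)
        if not bound_ports:
            # "No valid VIF ports were found ... tearing the namespace down"
            teardown(f'ovnmeta-{datapath}')

    ports = {'796f4fca-ba4d-44b1-9886-29e41980cd49'}
    on_port_unbound('a8be8715-2b74-42ca-9713-7fc1f4a33bc9', ports,
                    '796f4fca-ba4d-44b1-9886-29e41980cd49',
                    lambda ns: print(f'cleaning up {ns}'))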
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.182 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:08 np0005539550 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000042.scope: Deactivated successfully.
Nov 29 03:07:08 np0005539550 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000042.scope: Consumed 3.114s CPU time.
Nov 29 03:07:08 np0005539550 systemd-machined[216673]: Machine qemu-31-instance-00000042 terminated.
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.286 257641 DEBUG nova.compute.manager [None req-c82c85ac-cbb6-4035-b43f-74a4f9d84d42 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.511 257641 DEBUG nova.compute.manager [req-8af3eb0d-eda4-4296-a18f-507c4317233d req-a30487de-880a-4d0b-b288-3b6e458d8217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received event network-vif-unplugged-796f4fca-ba4d-44b1-9886-29e41980cd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.511 257641 DEBUG oslo_concurrency.lockutils [req-8af3eb0d-eda4-4296-a18f-507c4317233d req-a30487de-880a-4d0b-b288-3b6e458d8217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.511 257641 DEBUG oslo_concurrency.lockutils [req-8af3eb0d-eda4-4296-a18f-507c4317233d req-a30487de-880a-4d0b-b288-3b6e458d8217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.512 257641 DEBUG oslo_concurrency.lockutils [req-8af3eb0d-eda4-4296-a18f-507c4317233d req-a30487de-880a-4d0b-b288-3b6e458d8217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.512 257641 DEBUG nova.compute.manager [req-8af3eb0d-eda4-4296-a18f-507c4317233d req-a30487de-880a-4d0b-b288-3b6e458d8217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] No waiting events found dispatching network-vif-unplugged-796f4fca-ba4d-44b1-9886-29e41980cd49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:08 np0005539550 nova_compute[257631]: 2025-11-29 08:07:08.512 257641 WARNING nova.compute.manager [req-8af3eb0d-eda4-4296-a18f-507c4317233d req-a30487de-880a-4d0b-b288-3b6e458d8217 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received unexpected event network-vif-unplugged-796f4fca-ba4d-44b1-9886-29e41980cd49 for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:07:08 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [NOTICE]   (299342) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:08 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [NOTICE]   (299342) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:08 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [WARNING]  (299342) : Exiting Master process...
Nov 29 03:07:08 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [WARNING]  (299342) : Exiting Master process...
Nov 29 03:07:08 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [ALERT]    (299342) : Current worker (299364) exited with code 143 (Terminated)
Nov 29 03:07:08 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[299337]: [WARNING]  (299342) : All workers exited. Exiting... (0)
Nov 29 03:07:08 np0005539550 systemd[1]: libpod-f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b.scope: Deactivated successfully.
Nov 29 03:07:08 np0005539550 podman[299471]: 2025-11-29 08:07:08.560979209 +0000 UTC m=+0.295757161 container died f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:07:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:08.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:07:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Nov 29 03:07:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6fbbcd3bc3b73a95b8e4d0ea6dea0a7f2b48f8971dc3b41161095d5f1c46ec41-merged.mount: Deactivated successfully.
Nov 29 03:07:09 np0005539550 podman[299471]: 2025-11-29 08:07:09.54477884 +0000 UTC m=+1.279556792 container cleanup f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:09 np0005539550 systemd[1]: libpod-conmon-f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b.scope: Deactivated successfully.
Nov 29 03:07:09 np0005539550 podman[299513]: 2025-11-29 08:07:09.941127904 +0000 UTC m=+0.373416919 container remove f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:07:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:09.947 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e97f2950-26f6-4c68-8633-74dbefeb62d4]: (4, ('Sat Nov 29 08:07:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b)\nf5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b\nSat Nov 29 08:07:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (f5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b)\nf5e3422f12d0578c85d3ec5ba5999f57a1baf2417ed739c8dbdb7702d7cd244b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:09.949 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a86234b9-53a7-4f68-b038-0082a8352baa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:09.950 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:09 np0005539550 nova_compute[257631]: 2025-11-29 08:07:09.953 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:09 np0005539550 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:07:09 np0005539550 nova_compute[257631]: 2025-11-29 08:07:09.972 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:09.975 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[189e01bc-30f0-432e-98fe-c5ab182c6a13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:09.984 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[57e9f496-07c9-4709-875d-40cad32342d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:09.985 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fda3bb96-139b-40ff-9026-7c07f28251af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:09.999 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8cfadebe-8641-4ddb-ac89-e0b648f3ac24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666972, 'reachable_time': 44942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299533, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:10.001 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:07:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:10.002 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdd8770-ddca-4806-a11f-90df8bd49294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:10 np0005539550 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
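Editor's note: remove_netns at ip_lib.py:607 is neutron's privsep wrapper around pyroute2, and the unlinked bind mount is what systemd just reported as the run-netns-ovnmeta-....mount deactivating. A direct equivalent using the real pyroute2 API (destructive and requires root; the namespace name is taken from this log):

    from pyroute2 import netns

    NS = 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9'

    if NS in netns.listnetns():
        netns.remove(NS)   # unlinks /run/netns/<NS>, releasing the mount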
Nov 29 03:07:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:10.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:10 np0005539550 nova_compute[257631]: 2025-11-29 08:07:10.661 257641 DEBUG nova.compute.manager [req-90cf1e70-2c37-480b-9592-c12d8669751d req-54b8376c-35e0-4f9f-998b-cd3f973b03ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received event network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:10 np0005539550 nova_compute[257631]: 2025-11-29 08:07:10.662 257641 DEBUG oslo_concurrency.lockutils [req-90cf1e70-2c37-480b-9592-c12d8669751d req-54b8376c-35e0-4f9f-998b-cd3f973b03ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:10 np0005539550 nova_compute[257631]: 2025-11-29 08:07:10.662 257641 DEBUG oslo_concurrency.lockutils [req-90cf1e70-2c37-480b-9592-c12d8669751d req-54b8376c-35e0-4f9f-998b-cd3f973b03ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:10 np0005539550 nova_compute[257631]: 2025-11-29 08:07:10.662 257641 DEBUG oslo_concurrency.lockutils [req-90cf1e70-2c37-480b-9592-c12d8669751d req-54b8376c-35e0-4f9f-998b-cd3f973b03ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:10 np0005539550 nova_compute[257631]: 2025-11-29 08:07:10.662 257641 DEBUG nova.compute.manager [req-90cf1e70-2c37-480b-9592-c12d8669751d req-54b8376c-35e0-4f9f-998b-cd3f973b03ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] No waiting events found dispatching network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:10 np0005539550 nova_compute[257631]: 2025-11-29 08:07:10.662 257641 WARNING nova.compute.manager [req-90cf1e70-2c37-480b-9592-c12d8669751d req-54b8376c-35e0-4f9f-998b-cd3f973b03ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received unexpected event network-vif-plugged-796f4fca-ba4d-44b1-9886-29e41980cd49 for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:07:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:10.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:10 np0005539550 nova_compute[257631]: 2025-11-29 08:07:10.886 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.247 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.248 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.248 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.248 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.249 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.250 257641 INFO nova.compute.manager [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Terminating instance#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.251 257641 DEBUG nova.compute.manager [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.256 257641 INFO nova.virt.libvirt.driver [-] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Instance destroyed successfully.#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.257 257641 DEBUG nova.objects.instance [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.296 257641 DEBUG nova.virt.libvirt.vif [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-692868014',display_name='tempest-DeleteServersTestJSON-server-692868014',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-692868014',id=66,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-s4r0xwct',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:08Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=09dcdda4-714d-4aaf-a92b-e6a80cea0b28,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.296 257641 DEBUG nova.network.os_vif_util [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "796f4fca-ba4d-44b1-9886-29e41980cd49", "address": "fa:16:3e:f6:3b:27", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap796f4fca-ba", "ovs_interfaceid": "796f4fca-ba4d-44b1-9886-29e41980cd49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.297 257641 DEBUG nova.network.os_vif_util [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:3b:27,bridge_name='br-int',has_traffic_filtering=True,id=796f4fca-ba4d-44b1-9886-29e41980cd49,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap796f4fca-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.297 257641 DEBUG os_vif [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:3b:27,bridge_name='br-int',has_traffic_filtering=True,id=796f4fca-ba4d-44b1-9886-29e41980cd49,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap796f4fca-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.299 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap796f4fca-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.302 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.304 257641 INFO os_vif [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:3b:27,bridge_name='br-int',has_traffic_filtering=True,id=796f4fca-ba4d-44b1-9886-29e41980cd49,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap796f4fca-ba')#033[00m
Nov 29 03:07:11 np0005539550 nova_compute[257631]: 2025-11-29 08:07:11.330 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:11.572 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
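[annotation] The unplug above is two OVSDB writes seen from ovsdbapp's side: os-vif queues a DelPortCommand to drop the tap port from br-int, and the OVN metadata agent records its new sb-cfg sequence number with a DbSetCommand on Chassis_Private. A minimal sketch of driving the same command classes through ovsdbapp's public API; the socket path is an assumption, the port and bridge names are copied from the log:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed local OVSDB socket; the log only shows the resulting transaction.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# Equivalent of the logged DelPortCommand(port=tap796f4fca-ba,
# bridge=br-int, if_exists=True): remove the instance tap port,
# tolerating a port that is already gone.
api.del_port('tap796f4fca-ba', bridge='br-int', if_exists=True).execute(
    check_error=True)

# The metadata agent's DbSetCommand is the same pattern against the OVN
# Southbound schema, e.g.:
#   api.db_set('Chassis_Private', chassis_uuid,
#              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}))
```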
Nov 29 03:07:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:12.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:12.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
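[annotation] The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102, repeating roughly every two seconds, have the cadence of load-balancer health probes against radosgw's beast frontend; that reading is inferred from the pattern, not stated in the log. Reproducing one probe by hand, with the target host and port as assumptions:

```python
import http.client

# Hypothetical endpoint: the log records only the client address, not
# the port radosgw's beast frontend listens on.
conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)
conn.request('HEAD', '/')
print(conn.getresponse().status)  # the logged probes all return 200
conn.close()
```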
Nov 29 03:07:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 607 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:07:13 np0005539550 nova_compute[257631]: 2025-11-29 08:07:13.709 257641 INFO nova.virt.libvirt.driver [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Deleting instance files /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28_del#033[00m
Nov 29 03:07:13 np0005539550 nova_compute[257631]: 2025-11-29 08:07:13.710 257641 INFO nova.virt.libvirt.driver [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Deletion of /var/lib/nova/instances/09dcdda4-714d-4aaf-a92b-e6a80cea0b28_del complete#033[00m
Nov 29 03:07:13 np0005539550 nova_compute[257631]: 2025-11-29 08:07:13.819 257641 INFO nova.compute.manager [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Took 2.57 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:07:13 np0005539550 nova_compute[257631]: 2025-11-29 08:07:13.820 257641 DEBUG oslo.service.loopingcall [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:07:13 np0005539550 nova_compute[257631]: 2025-11-29 08:07:13.820 257641 DEBUG nova.compute.manager [-] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:07:13 np0005539550 nova_compute[257631]: 2025-11-29 08:07:13.820 257641 DEBUG nova.network.neutron [-] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:07:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:14.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:14.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 68 MiB data, 602 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.306 257641 DEBUG nova.network.neutron [-] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.343 257641 INFO nova.compute.manager [-] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Took 1.52 seconds to deallocate network for instance.#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.442 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.442 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.498 257641 DEBUG nova.compute.manager [req-e92f20bb-8849-4157-9453-5dcf7a96d6d8 req-f3670c95-4e8e-49a1-8000-e772be8992b2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Received event network-vif-deleted-796f4fca-ba4d-44b1-9886-29e41980cd49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.509 257641 DEBUG oslo_concurrency.processutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177352421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.946 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.955 257641 DEBUG oslo_concurrency.processutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
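[annotation] The resource tracker sizes its Ceph-backed disk inventory by shelling out to `ceph df` (the Running cmd / CMD returned pair above) rather than holding a librados connection open. The same probe reduced to the stdlib, arguments copied from the log; it needs the client.openstack keyring that /etc/ceph/ceph.conf points at:

```python
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True,
).stdout

stats = json.loads(out)
# Cluster-wide totals live under 'stats'; per-pool usage under 'pools'.
print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
```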
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.961 257641 DEBUG nova.compute.provider_tree [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:15 np0005539550 nova_compute[257631]: 2025-11-29 08:07:15.978 257641 DEBUG nova.scheduler.client.report [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
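[annotation] Placement derives schedulable capacity from each inventory record as (total - reserved) * allocation_ratio, so the unchanged inventory above amounts to 32 VCPUs, 7168 MB of RAM and 17.1 GB of disk. A worked check of that arithmetic against the logged data:

```python
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f'{rc}: {capacity:g}')
# VCPU: 32  MEMORY_MB: 7168  DISK_GB: 17.1
```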
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.002 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.005 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.005 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
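[annotation] All of the Acquiring/acquired/released chatter in these lines is oslo.concurrency's lockutils serializing resource-tracker work under one shared "compute_resources" lock, with every transition logged at DEBUG. The pattern, as a sketch (the decorated function is a stand-in, not nova code):

```python
from oslo_concurrency import lockutils

# nova builds its decorator the same way (see nova.utils); each call
# produces the Acquiring/acquired/released DEBUG lines seen above.
synchronized = lockutils.synchronized_with_prefix('nova-')

@synchronized('compute_resources')
def update_usage_example():
    pass  # hypothetical critical section

update_usage_example()
```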
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.006 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.006 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.067 257641 INFO nova.scheduler.client.report [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocations for instance 09dcdda4-714d-4aaf-a92b-e6a80cea0b28#033[00m
Nov 29 03:07:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:16.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.142 257641 DEBUG oslo_concurrency.lockutils [None req-2221ae6d-e218-4974-bf19-d229dff3562e 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "09dcdda4-714d-4aaf-a92b-e6a80cea0b28" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.332 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2526878204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.453 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.613 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.615 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4533MB free_disk=20.974720001220703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.616 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.616 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.792 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.792 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:07:16 np0005539550 nova_compute[257631]: 2025-11-29 08:07:16.831 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 41 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 98 op/s
Nov 29 03:07:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/906535081' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:17 np0005539550 nova_compute[257631]: 2025-11-29 08:07:17.264 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:17 np0005539550 nova_compute[257631]: 2025-11-29 08:07:17.269 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:17 np0005539550 nova_compute[257631]: 2025-11-29 08:07:17.286 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:17 np0005539550 nova_compute[257631]: 2025-11-29 08:07:17.312 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:07:17 np0005539550 nova_compute[257631]: 2025-11-29 08:07:17.313 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:18.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:18.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:18.940 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:18.940 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:18.941 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 41 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 65 op/s
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.307 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.308 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.308 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.435 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.435 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.436 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:07:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
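[annotation] Every pg_autoscaler line applies the same arithmetic: a pool's PG target is its share of raw space, times its bias, times a cluster-wide PG budget, and the result is quantized to a power of two; the "(current N)" suffixes show it being left alone when the computed target does not justify a resize. Back-solving from the logged ratios gives a budget of 300 PGs, consistent with mon_target_pg_per_osd=100 across 3 OSDs; both the budget and the quantization rule are inferred from the numbers, not stated in the log. A worked check:

```python
# usage ratios and biases copied from the pg_autoscaler lines above
pools = {
    '.mgr':               (2.0538165363856318e-05, 1.0),
    'vms':                (1.817536757863391e-07,  1.0),
    'images':             (0.0019031427391587568,  1.0),
    'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
}

PG_BUDGET = 300  # assumed: mon_target_pg_per_osd (100) * 3 OSDs

for name, (ratio, bias) in pools.items():
    print(f'{name}: pg target {ratio * bias * PG_BUDGET:.6g}')
# reproduces the logged targets, e.g. images -> 0.570943
```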
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:19 np0005539550 nova_compute[257631]: 2025-11-29 08:07:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:20.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:20.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 41 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 501 KiB/s rd, 1.2 KiB/s wr, 41 op/s
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.268 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.269 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.294 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.303 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:21 np0005539550 podman[299675]: 2025-11-29 08:07:21.334804566 +0000 UTC m=+0.075351911 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
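[annotation] The podman line is the periodic health check of the ovn_controller container; per its config_data, the check mounts /var/lib/openstack/healthchecks/ovn_controller into the container and runs /openstack/healthcheck. Triggering the same check by hand is one subprocess call (a sketch, assuming podman and the container name from the log):

```python
import subprocess

# 'podman healthcheck run' executes the container's configured test
# command and exits 0 when it reports healthy.
result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
print('healthy' if result.returncode == 0 else 'unhealthy')
```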
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.334 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.382 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.382 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.388 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.389 257641 INFO nova.compute.claims [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.503 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751944837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.942 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.949 257641 DEBUG nova.compute.provider_tree [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:07:21 np0005539550 nova_compute[257631]: 2025-11-29 08:07:21.971 257641 DEBUG nova.scheduler.client.report [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.024 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.025 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.094 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.094 257641 DEBUG nova.network.neutron [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:07:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:22.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.118 257641 INFO nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.140 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.255 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.257 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.257 257641 INFO nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Creating image(s)#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.282 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.310 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.335 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.339 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.404 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
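[annotation] Before touching the cached base image, nova probes it with `qemu-img info`, wrapped in oslo.concurrency's prlimit helper so a corrupt or malicious image cannot make qemu-img consume unbounded memory or CPU (here: 1 GiB of address space, 30 s of CPU). The same guarded probe, runnable as-is wherever the base file exists:

```python
import json
import subprocess

base = ('/var/lib/nova/instances/_base/'
        'f62ef5f82502d01c82174408aec7f3ac942e2488')

# Identical command line to the log: prlimit caps the child's address
# space and CPU time before exec'ing qemu-img.
out = subprocess.run(
    ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
     '--as=1073741824', '--cpu=30', '--',
     'env', 'LC_ALL=C', 'LANG=C',
     'qemu-img', 'info', base, '--force-share', '--output=json'],
    check=True, capture_output=True, text=True,
).stdout

info = json.loads(out)
print(info['format'], info['virtual-size'])  # image format and size in bytes
```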
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.404 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.405 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.405 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.430 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.433 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fb74110a-d0bc-4766-a5c7-5def485f388f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.463 257641 DEBUG nova.policy [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '104aea18c5154615b602f032bdb49681', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '90c23935e0214785a9dc5061b91cf29c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
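[annotation] The failed network:attach_external_network check is expected noise during boot: nova evaluates that policy for each requested network, and a token holding only the reader and member roles is denied, so the port is attached as an ordinary tenant port. A toy oslo.policy run showing why those credentials fail; the rule string is the usual upstream default, assumed here rather than read from this deployment:

```python
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(policy.RuleDefault(
    'network:attach_external_network', 'is_admin:True'))  # assumed default

creds = {'is_admin': False, 'roles': ['reader', 'member'],
         'project_id': '90c23935e0214785a9dc5061b91cf29c'}
print(enforcer.enforce('network:attach_external_network', {}, creds))
# -> False, which nova reports as the policy-check failure above
```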
Nov 29 03:07:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:22.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:22 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.850 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 fb74110a-d0bc-4766-a5c7-5def485f388f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:22.999 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.000 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.047 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] resizing rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
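[annotation] With RBD-backed ephemeral storage, "Creating image(s)" comes down to importing the flattened base file into the vms pool and then growing the image to the flavor's root disk; root_gb=1 explains the 1073741824-byte resize (1 GiB = 1024**3 bytes). The two steps as plain CLI calls, arguments copied from the log, with the resize expressed through the rbd CLI rather than nova's rbd_utils binding:

```python
import subprocess

IMAGE = 'fb74110a-d0bc-4766-a5c7-5def485f388f_disk'
BASE = ('/var/lib/nova/instances/_base/'
        'f62ef5f82502d01c82174408aec7f3ac942e2488')
CEPH = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

# Step 1: import the base file into the 'vms' pool; format 2 enables
# cloning and snapshots.
subprocess.run(['rbd', 'import', '--pool', 'vms', BASE, IMAGE,
                '--image-format=2'] + CEPH, check=True)

# Step 2: grow the image to the flavor's 1 GiB root disk
# (1073741824 bytes, as logged by rbd_utils.resize).
subprocess.run(['rbd', 'resize', '--pool', 'vms', IMAGE,
                '--size', '1G'] + CEPH, check=True)
```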
Nov 29 03:07:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 41 MiB data, 586 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.181 257641 DEBUG nova.objects.instance [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.197 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.198 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Ensure instance console log exists: /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.198 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.199 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.199 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.289 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403628.286864, 09dcdda4-714d-4aaf-a92b-e6a80cea0b28 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.290 257641 INFO nova.compute.manager [-] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.329 257641 DEBUG nova.compute.manager [None req-263f7a06-9939-425d-866d-7c3ebc0a13fd - - - - - -] [instance: 09dcdda4-714d-4aaf-a92b-e6a80cea0b28] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
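[annotation] The Stopped lifecycle event is libvirt's domain-shutdown callback surfacing through nova's event pipeline for the instance deleted earlier; "Checking state" then re-reads the domain's power state before the event is acted on. For decoding dumps like the power_state=4 in the instance blob at the top of this excerpt, nova's integer power states map as follows (values from the nova source tree; worth verifying against your release):

```python
# Sketch of nova.compute.power_state's constants, for reading log dumps.
POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

print(POWER_STATES[4])  # the power_state logged in the instance dump
```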
Nov 29 03:07:23 np0005539550 nova_compute[257631]: 2025-11-29 08:07:23.483 257641 DEBUG nova.network.neutron [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Successfully created port: b6dd53f5-33a7-4fde-86d5-27a3e78dc566 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:07:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:24.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:24 np0005539550 nova_compute[257631]: 2025-11-29 08:07:24.733 257641 DEBUG nova.network.neutron [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Successfully updated port: b6dd53f5-33a7-4fde-86d5-27a3e78dc566 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:07:24 np0005539550 nova_compute[257631]: 2025-11-29 08:07:24.755 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:24 np0005539550 nova_compute[257631]: 2025-11-29 08:07:24.756 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:24 np0005539550 nova_compute[257631]: 2025-11-29 08:07:24.756 257641 DEBUG nova.network.neutron [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:24 np0005539550 nova_compute[257631]: 2025-11-29 08:07:24.887 257641 DEBUG nova.compute.manager [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-changed-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:24 np0005539550 nova_compute[257631]: 2025-11-29 08:07:24.887 257641 DEBUG nova.compute.manager [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Refreshing instance network info cache due to event network-changed-b6dd53f5-33a7-4fde-86d5-27a3e78dc566. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:07:24 np0005539550 nova_compute[257631]: 2025-11-29 08:07:24.888 257641 DEBUG oslo_concurrency.lockutils [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:25 np0005539550 nova_compute[257631]: 2025-11-29 08:07:25.007 257641 DEBUG nova.network.neutron [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:07:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 73 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.3 MiB/s wr, 30 op/s
Nov 29 03:07:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:26.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.235 257641 DEBUG nova.network.neutron [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updating instance_info_cache with network_info: [{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
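The network_info payload in the line above is a single JSON document, so it can be lifted straight out of the journal and inspected. A small sketch, with the payload trimmed to just the fields the loop touches:

import json

# Trimmed copy of the network_info blob logged above (one port);
# unused fields are omitted here for brevity.
network_info = json.loads("""
[{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566",
  "address": "fa:16:3e:21:b7:fc",
  "devname": "tapb6dd53f5-33",
  "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9",
              "bridge": "br-int",
              "meta": {"mtu": 1442, "tunneled": true},
              "subnets": [{"cidr": "10.100.0.0/28",
                           "ips": [{"address": "10.100.0.12",
                                    "type": "fixed",
                                    "version": 4}]}]}}]
""")

for vif in network_info:
    subnet = vif["network"]["subnets"][0]
    print(vif["id"], vif["address"], vif["devname"],
          subnet["ips"][0]["address"], subnet["cidr"])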
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.253 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.254 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance network_info: |[{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.254 257641 DEBUG oslo_concurrency.lockutils [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.254 257641 DEBUG nova.network.neutron [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Refreshing network info cache for port b6dd53f5-33a7-4fde-86d5-27a3e78dc566 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.257 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Start _get_guest_xml network_info=[{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.261 257641 WARNING nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.266 257641 DEBUG nova.virt.libvirt.host [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.267 257641 DEBUG nova.virt.libvirt.host [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.271 257641 DEBUG nova.virt.libvirt.host [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.271 257641 DEBUG nova.virt.libvirt.host [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.272 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.273 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.273 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.273 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.274 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.274 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.274 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.274 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.275 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.275 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.275 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.276 257641 DEBUG nova.virt.hardware [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.278 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.305 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.337 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:26.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366246146' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.755 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
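Note that nova shells out to the ceph CLI here rather than using a librados binding; the round trip above cost 0.476s. The same monmap fetch can be reproduced directly. A sketch (field names in the JSON vary slightly across Ceph releases, hence the .get fallback):

import json
import subprocess

# Identical command line to the one logged above; --id/--conf select the
# client.openstack identity and cluster configuration.
out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True)

monmap = json.loads(out.stdout)
for mon in monmap["mons"]:
    print(mon["name"], mon.get("public_addr") or mon.get("addr"))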
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.799 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:26 np0005539550 nova_compute[257631]: 2025-11-29 08:07:26.806 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 88 MiB data, 600 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 29 03:07:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2250539851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.261 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.262 257641 DEBUG nova.virt.libvirt.vif [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-911002665',display_name='tempest-DeleteServersTestJSON-server-911002665',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-911002665',id=67,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-uokat01h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:22Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=fb74110a-d0bc-4766-a5c7-5def485f388f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.263 257641 DEBUG nova.network.os_vif_util [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.264 257641 DEBUG nova.network.os_vif_util [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.265 257641 DEBUG nova.objects.instance [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.295 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <uuid>fb74110a-d0bc-4766-a5c7-5def485f388f</uuid>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <name>instance-00000043</name>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <nova:name>tempest-DeleteServersTestJSON-server-911002665</nova:name>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:07:26</nova:creationTime>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <nova:port uuid="b6dd53f5-33a7-4fde-86d5-27a3e78dc566">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <entry name="serial">fb74110a-d0bc-4766-a5c7-5def485f388f</entry>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <entry name="uuid">fb74110a-d0bc-4766-a5c7-5def485f388f</entry>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/fb74110a-d0bc-4766-a5c7-5def485f388f_disk">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:21:b7:fc"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <target dev="tapb6dd53f5-33"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/console.log" append="off"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:07:27 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:07:27 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:07:27 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:07:27 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
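The domain XML above is the complete definition handed to libvirt: an RBD-backed virtio root disk, an RBD config-drive cdrom on SATA, one OVS-backed tap interface, and a pty serial console appended to console.log. It parses with the standard library; a sketch, assuming the XML has been saved to a local file (e.g. captured with `virsh dumpxml instance-00000043`):

import xml.etree.ElementTree as ET

# instance-00000043.xml is an assumed local copy of the <domain>
# document logged above.
dom = ET.parse("instance-00000043.xml").getroot()

print(dom.findtext("name"), dom.findtext("uuid"))
for disk in dom.findall("./devices/disk"):
    src = disk.find("source")
    hosts = [h.get("name") for h in src.findall("host")]
    # e.g. disk rbd vms/..._disk ['192.168.122.100', '192.168.122.102', ...]
    print(disk.get("device"), src.get("protocol"), src.get("name"), hosts)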
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.297 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Preparing to wait for external event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.297 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.297 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.298 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.298 257641 DEBUG nova.virt.libvirt.vif [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-911002665',display_name='tempest-DeleteServersTestJSON-server-911002665',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-911002665',id=67,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-uokat01h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:22Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=fb74110a-d0bc-4766-a5c7-5def485f388f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.299 257641 DEBUG nova.network.os_vif_util [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.299 257641 DEBUG nova.network.os_vif_util [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.300 257641 DEBUG os_vif [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.300 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.301 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.301 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.305 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.305 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6dd53f5-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.306 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6dd53f5-33, col_values=(('external_ids', {'iface-id': 'b6dd53f5-33a7-4fde-86d5-27a3e78dc566', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:b7:fc', 'vm-uuid': 'fb74110a-d0bc-4766-a5c7-5def485f388f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.307 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539550 NetworkManager[49039]: <info>  [1764403647.3086] manager: (tapb6dd53f5-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.310 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.315 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.316 257641 INFO os_vif [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33')#033[00m
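os-vif drove the plug through ovsdbapp (AddBridgeCommand, AddPortCommand, DbSetCommand in one OVSDB transaction). The same effect can be sketched with the ovs-vsctl CLI, values copied from the logged transaction; this is an illustrative equivalent, not nova's actual code path:

import subprocess

# One ovs-vsctl transaction: ensure br-int exists with the system datapath,
# add the tap port, and set the external_ids that let ovn-controller claim
# the logical port (the "Claiming lport" lines that follow below).
subprocess.run(
    ["ovs-vsctl",
     "--may-exist", "add-br", "br-int",
     "--", "set", "Bridge", "br-int", "datapath_type=system",
     "--", "--may-exist", "add-port", "br-int", "tapb6dd53f5-33",
     "--", "set", "Interface", "tapb6dd53f5-33",
     "external_ids:iface-id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:21:b7:fc",
     "external_ids:vm-uuid=fb74110a-d0bc-4766-a5c7-5def485f388f"],
    check=True)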
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.457 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.458 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.458 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:21:b7:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.458 257641 INFO nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Using config drive#033[00m
Nov 29 03:07:27 np0005539550 nova_compute[257631]: 2025-11-29 08:07:27.483 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:28.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.293 257641 INFO nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Creating config drive at /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/disk.config#033[00m
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.299 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6hkscq25 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.433 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6hkscq25" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.466 257641 DEBUG nova.storage.rbd_utils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] rbd image fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.473 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/disk.config fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.693 257641 DEBUG oslo_concurrency.processutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/disk.config fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.694 257641 INFO nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Deleting local config drive /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/disk.config because it was imported into RBD.#033[00m
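
With the image confirmed absent, nova shells out to rbd import and then deletes the local ISO, so the config drive lives only in the vms pool from here on. A sketch of that import-then-cleanup step, taken directly from the commands logged above:

    import os
    import subprocess

    # Sketch of the logged import + local cleanup. Assumes the rbd CLI and
    # /etc/ceph/ceph.conf with the client.openstack keyring are present.
    local_iso = ('/var/lib/nova/instances/'
                 'fb74110a-d0bc-4766-a5c7-5def485f388f/disk.config')
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local_iso,
         'fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(local_iso)   # matches "Deleting local config drive ..." above
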
Nov 29 03:07:28 np0005539550 virtqemud[256287]: End of file while reading data: Input/output error
Nov 29 03:07:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:28.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:28 np0005539550 kernel: tapb6dd53f5-33: entered promiscuous mode
Nov 29 03:07:28 np0005539550 NetworkManager[49039]: <info>  [1764403648.7627] manager: (tapb6dd53f5-33): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Nov 29 03:07:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:28Z|00203|binding|INFO|Claiming lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for this chassis.
Nov 29 03:07:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:28Z|00204|binding|INFO|b6dd53f5-33a7-4fde-86d5-27a3e78dc566: Claiming fa:16:3e:21:b7:fc 10.100.0.12
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.783 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.791 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:b7:fc 10.100.0.12'], port_security=['fa:16:3e:21:b7:fc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fb74110a-d0bc-4766-a5c7-5def485f388f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=b6dd53f5-33a7-4fde-86d5-27a3e78dc566) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.793 158978 INFO neutron.agent.ovn.metadata.agent [-] Port b6dd53f5-33a7-4fde-86d5-27a3e78dc566 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.795 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9#033[00m
Nov 29 03:07:28 np0005539550 systemd-udevd[300022]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:28Z|00205|binding|INFO|Setting lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 ovn-installed in OVS
Nov 29 03:07:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:28Z|00206|binding|INFO|Setting lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 up in Southbound
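
ovn-controller has now claimed the logical port for this chassis, marked the OVS interface ovn-installed, and flipped the port up in the Southbound database. One way to inspect the resulting Port_Binding row, assuming ovn-sbctl is available on the host:

    import subprocess

    # Sketch: dump the Southbound Port_Binding that was just claimed.
    # Read-only; handy when checking why a port did not come up.
    print(subprocess.check_output(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=b6dd53f5-33a7-4fde-86d5-27a3e78dc566'],
        text=True))
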
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.806 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:28 np0005539550 nova_compute[257631]: 2025-11-29 08:07:28.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.809 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8c9832-de11-49ad-a706-5268358d8d9f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.810 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.812 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.812 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[36e70f99-ece4-4043-98b3-2a7d0ff355f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.813 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[99e37f55-b5e9-45b3-993a-e51ba836820b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 NetworkManager[49039]: <info>  [1764403648.8161] device (tapb6dd53f5-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:28 np0005539550 NetworkManager[49039]: <info>  [1764403648.8174] device (tapb6dd53f5-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:28 np0005539550 systemd-machined[216673]: New machine qemu-32-instance-00000043.
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.827 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a87137-198c-469c-8c95-1ccc0dbe1cf2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 systemd[1]: Started Virtual Machine qemu-32-instance-00000043.
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.846 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d227d34c-9709-4bed-a80c-aebf4b8e5f41]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.879 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8a223f9d-74b8-47cd-a55e-fd6ace7baff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.884 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d70bb3-ce48-4478-b394-b310441eefd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 NetworkManager[49039]: <info>  [1764403648.8862] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.918 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5771e8-1d52-4439-a35c-9bd5509a9367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.921 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f152fd9e-ee53-4ae5-bffb-d0735f31cf55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 NetworkManager[49039]: <info>  [1764403648.9453] device (tapa8be8715-20): carrier: link connected
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.950 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[43b387a0-fda2-4ba0-a24a-b54dfaac38d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.968 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[39661990-bd4a-4a4f-b899-d8011c918315]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669661, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300058, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:28.984 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[14f4b23c-ce04-41c4-8ff0-2673f7ed6a93]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669661, 'tstamp': 669661}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300059, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.001 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[51a44e0f-3257-42c1-92cb-582acf4f6cf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669661, 'reachable_time': 26991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300060, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.029 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cd03a439-939c-49a7-820f-1942242dd700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
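
The privsep replies above are the metadata agent building the VETH pair announced at 08:07:28.810 and then reading tapa8be8715-21 back from inside the ovnmeta namespace (the two RTM_NEWLINK dumps). A minimal pyroute2 sketch of that plumbing, with hypothetical interface and namespace names since the agent's exact helpers are not reproduced here (requires root):

    from pyroute2 import IPRoute, NetNS

    # Sketch: create a veth pair and push one end into a named netns,
    # roughly what the agent's privsep daemon is doing in these replies.
    ns_name = 'ovnmeta-example'          # hypothetical namespace
    ns = NetNS(ns_name)                  # creates /var/run/netns/<ns_name>
    ipr = IPRoute()
    ipr.link('add', ifname='tap-host0', kind='veth', peer='tap-ns0')
    idx = ipr.link_lookup(ifname='tap-ns0')[0]
    ipr.link('set', index=idx, net_ns_fd=ns_name)
    ns_idx = ns.link_lookup(ifname='tap-ns0')[0]
    ns.link('set', index=ns_idx, state='up')
    ipr.close()
    ns.close()
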
Nov 29 03:07:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 141 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 4.0 MiB/s wr, 70 op/s
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.087 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[851fdc6b-481e-43a8-b0aa-056b57091511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.089 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.089 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.090 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.092 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539550 NetworkManager[49039]: <info>  [1764403649.0927] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Nov 29 03:07:29 np0005539550 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.096 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
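
The three transactions above unplug the tap from br-ex (a no-op), add it to br-int, and set external_ids:iface-id to the Neutron port UUID; that iface-id is what ovn-controller matches against Southbound logical ports when binding, as seen in the Releasing/Claiming lines around it. The equivalent single ovs-vsctl call, assuming the CLI is on the host:

    import subprocess

    # Sketch: the same plug as the ovsdbapp transactions logged above.
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tapa8be8715-20',
         '--', 'set', 'Interface', 'tapa8be8715-20',
         'external_ids:iface-id=307ce936-d5dc-4357-90d6-2b0b2d3d1113'],
        check=True)
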
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.094 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.098 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:29Z|00207|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.116 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.118 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.119 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6e523b-48c0-436a-b0cb-38cccfb6ac0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.121 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:07:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:29.122 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
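
The generated haproxy config binds 169.254.169.254:80 inside the ovnmeta namespace, proxies to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve which network (and therefore which instance) is asking. From a guest on this network the well-known endpoint then answers directly; a minimal check one could run inside the instance (not on the hypervisor):

    import json
    import urllib.request

    # Sketch: fetch instance metadata through the proxy just provisioned.
    with urllib.request.urlopen(
            'http://169.254.169.254/openstack/latest/meta_data.json',
            timeout=10) as resp:
        print(json.load(resp)['uuid'])
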
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.169 257641 DEBUG nova.network.neutron [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updated VIF entry in instance network info cache for port b6dd53f5-33a7-4fde-86d5-27a3e78dc566. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.169 257641 DEBUG nova.network.neutron [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updating instance_info_cache with network_info: [{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.212 257641 DEBUG oslo_concurrency.lockutils [req-a774a0ca-d09d-4e91-94c4-81ade599a535 req-350cff32-a4af-4566-82fe-7d9e7abef4a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.227 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403649.226414, fb74110a-d0bc-4766-a5c7-5def485f388f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.228 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.268 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.272 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403649.2270293, fb74110a-d0bc-4766-a5c7-5def485f388f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.272 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.320 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.324 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:29 np0005539550 nova_compute[257631]: 2025-11-29 08:07:29.343 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
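
The sync messages above compare nova's integer power states: the database still holds 0 while libvirt reports 3, and later 1. For decoding these lines, nova.compute.power_state defines:

    # nova.compute.power_state constants, as used in the sync log lines.
    NOSTATE = 0x00    # 0 - DB default while the instance is still building
    RUNNING = 0x01    # 1 - reported once the guest resumes
    PAUSED = 0x03     # 3 - libvirt holds the guest paused during spawn
    SHUTDOWN = 0x04   # 4
    CRASHED = 0x06    # 6
    SUSPENDED = 0x07  # 7
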
Nov 29 03:07:29 np0005539550 podman[300134]: 2025-11-29 08:07:29.502981289 +0000 UTC m=+0.056155290 container create 8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:29 np0005539550 systemd[1]: Started libpod-conmon-8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4.scope.
Nov 29 03:07:29 np0005539550 podman[300134]: 2025-11-29 08:07:29.469012407 +0000 UTC m=+0.022186428 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:07:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bc07595490c0080311cce7513339b59df3ac44932a0de4a40ae3d8193da5a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:29 np0005539550 podman[300134]: 2025-11-29 08:07:29.603181243 +0000 UTC m=+0.156355264 container init 8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:07:29 np0005539550 podman[300134]: 2025-11-29 08:07:29.609373078 +0000 UTC m=+0.162547079 container start 8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:07:29 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300149]: [NOTICE]   (300154) : New worker (300156) forked
Nov 29 03:07:29 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300149]: [NOTICE]   (300154) : Loading success.
Nov 29 03:07:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:30.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.179 257641 DEBUG nova.compute.manager [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.180 257641 DEBUG oslo_concurrency.lockutils [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.180 257641 DEBUG oslo_concurrency.lockutils [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.180 257641 DEBUG oslo_concurrency.lockutils [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.180 257641 DEBUG nova.compute.manager [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Processing event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.181 257641 DEBUG nova.compute.manager [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.181 257641 DEBUG oslo_concurrency.lockutils [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.181 257641 DEBUG oslo_concurrency.lockutils [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.181 257641 DEBUG oslo_concurrency.lockutils [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.181 257641 DEBUG nova.compute.manager [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] No waiting events found dispatching network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.182 257641 WARNING nova.compute.manager [req-6f13f111-f7ab-496c-b481-a9409256fe81 req-0ef967ee-3bc8-4d3b-b22d-ef39e759e892 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received unexpected event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.182 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.186 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403650.1860816, fb74110a-d0bc-4766-a5c7-5def485f388f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.186 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.188 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.191 257641 INFO nova.virt.libvirt.driver [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance spawned successfully.#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.191 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.215 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.218 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.243 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.244 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.244 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.244 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.245 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.245 257641 DEBUG nova.virt.libvirt.driver [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.248 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.326 257641 INFO nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Took 8.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.327 257641 DEBUG nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.417 257641 INFO nova.compute.manager [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Took 9.06 seconds to build instance.#033[00m
Nov 29 03:07:30 np0005539550 nova_compute[257631]: 2025-11-29 08:07:30.445 257641 DEBUG oslo_concurrency.lockutils [None req-4ebbdc5f-c942-4967-8e75-e4f01ade63bf 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:30.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 180 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 5.3 MiB/s wr, 76 op/s
Nov 29 03:07:31 np0005539550 nova_compute[257631]: 2025-11-29 08:07:31.339 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:32.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:32 np0005539550 nova_compute[257631]: 2025-11-29 08:07:32.308 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:32.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 180 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 47 KiB/s rd, 5.3 MiB/s wr, 76 op/s
Nov 29 03:07:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:34.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:34.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 180 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.3 MiB/s wr, 143 op/s
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.522 257641 DEBUG nova.compute.manager [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.627 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.628 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.656 257641 DEBUG nova.objects.instance [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_requests' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.674 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.675 257641 INFO nova.compute.claims [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.676 257641 DEBUG nova.objects.instance [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.687 257641 DEBUG nova.objects.instance [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'pci_devices' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.724 257641 INFO nova.compute.resource_tracker [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updating resource usage from migration b3c3681b-78f9-4a9d-9e5a-f696a97c657b
Nov 29 03:07:35 np0005539550 nova_compute[257631]: 2025-11-29 08:07:35.775 257641 DEBUG oslo_concurrency.processutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:07:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:36.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:07:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150629086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.233 257641 DEBUG oslo_concurrency.processutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.239 257641 DEBUG nova.compute.provider_tree [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.254 257641 DEBUG nova.scheduler.client.report [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.275 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.276 257641 INFO nova.compute.manager [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Migrating
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.341 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.359 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.359 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:07:36 np0005539550 nova_compute[257631]: 2025-11-29 08:07:36.360 257641 DEBUG nova.network.neutron [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:07:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:36.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 181 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.0 MiB/s wr, 185 op/s
Nov 29 03:07:37 np0005539550 nova_compute[257631]: 2025-11-29 08:07:37.311 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:37 np0005539550 nova_compute[257631]: 2025-11-29 08:07:37.711 257641 DEBUG nova.network.neutron [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updating instance_info_cache with network_info: [{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:07:37 np0005539550 nova_compute[257631]: 2025-11-29 08:07:37.732 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:07:37 np0005539550 nova_compute[257631]: 2025-11-29 08:07:37.834 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511
Nov 29 03:07:37 np0005539550 nova_compute[257631]: 2025-11-29 08:07:37.838 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:07:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:38.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:38 np0005539550 podman[300245]: 2025-11-29 08:07:38.318027529 +0000 UTC m=+0.052844286 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:07:38 np0005539550 podman[300244]: 2025-11-29 08:07:38.352042453 +0000 UTC m=+0.088339667 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:07:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:38.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 181 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 248 op/s
Nov 29 03:07:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:40.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:07:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:40.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:07:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 181 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.4 MiB/s wr, 232 op/s
Nov 29 03:07:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 29 03:07:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 29 03:07:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 29 03:07:41 np0005539550 nova_compute[257631]: 2025-11-29 08:07:41.343 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:42.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 29 03:07:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 29 03:07:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 29 03:07:42 np0005539550 nova_compute[257631]: 2025-11-29 08:07:42.315 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:42.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 181 MiB data, 650 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 37 KiB/s wr, 237 op/s
Nov 29 03:07:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 29 03:07:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:43Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:21:b7:fc 10.100.0.12
Nov 29 03:07:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:43Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:21:b7:fc 10.100.0.12
Nov 29 03:07:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 29 03:07:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 29 03:07:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:44.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 29 03:07:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 29 03:07:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 29 03:07:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:44.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 213 MiB data, 674 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 239 op/s
Nov 29 03:07:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 29 03:07:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 29 03:07:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 29 03:07:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:46.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:46 np0005539550 nova_compute[257631]: 2025-11-29 08:07:46.345 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 29 03:07:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 29 03:07:46 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 29 03:07:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:46.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 269 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 13 MiB/s wr, 335 op/s
Nov 29 03:07:47 np0005539550 nova_compute[257631]: 2025-11-29 08:07:47.318 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 29 03:07:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 29 03:07:47 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 29 03:07:47 np0005539550 nova_compute[257631]: 2025-11-29 08:07:47.880 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:07:47 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Nov 29 03:07:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:48.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:48.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 275 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 13 MiB/s wr, 484 op/s
Nov 29 03:07:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:50.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:50 np0005539550 kernel: tapb6dd53f5-33 (unregistering): left promiscuous mode
Nov 29 03:07:50 np0005539550 NetworkManager[49039]: <info>  [1764403670.1527] device (tapb6dd53f5-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:07:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:50Z|00208|binding|INFO|Releasing lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 from this chassis (sb_readonly=0)
Nov 29 03:07:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:50Z|00209|binding|INFO|Setting lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 down in Southbound
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.161 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:50Z|00210|binding|INFO|Removing iface tapb6dd53f5-33 ovn-installed in OVS
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.163 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.168 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:b7:fc 10.100.0.12'], port_security=['fa:16:3e:21:b7:fc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fb74110a-d0bc-4766-a5c7-5def485f388f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=b6dd53f5-33a7-4fde-86d5-27a3e78dc566) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.170 158978 INFO neutron.agent.ovn.metadata.agent [-] Port b6dd53f5-33a7-4fde-86d5-27a3e78dc566 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.171 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.173 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49ae7c40-8670-45f3-9d11-2645870c7a7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.174 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.198 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 29 03:07:50 np0005539550 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000043.scope: Consumed 14.114s CPU time.
Nov 29 03:07:50 np0005539550 systemd-machined[216673]: Machine qemu-32-instance-00000043 terminated.
Nov 29 03:07:50 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300149]: [NOTICE]   (300154) : haproxy version is 2.8.14-c23fe91
Nov 29 03:07:50 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300149]: [NOTICE]   (300154) : path to executable is /usr/sbin/haproxy
Nov 29 03:07:50 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300149]: [WARNING]  (300154) : Exiting Master process...
Nov 29 03:07:50 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300149]: [ALERT]    (300154) : Current worker (300156) exited with code 143 (Terminated)
Nov 29 03:07:50 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300149]: [WARNING]  (300154) : All workers exited. Exiting... (0)
Nov 29 03:07:50 np0005539550 systemd[1]: libpod-8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4.scope: Deactivated successfully.
Nov 29 03:07:50 np0005539550 podman[300317]: 2025-11-29 08:07:50.320043665 +0000 UTC m=+0.044022735 container died 8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:07:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:07:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d0bc07595490c0080311cce7513339b59df3ac44932a0de4a40ae3d8193da5a1-merged.mount: Deactivated successfully.
Nov 29 03:07:50 np0005539550 podman[300317]: 2025-11-29 08:07:50.353995397 +0000 UTC m=+0.077974457 container cleanup 8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:07:50 np0005539550 systemd[1]: libpod-conmon-8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4.scope: Deactivated successfully.
Nov 29 03:07:50 np0005539550 podman[300348]: 2025-11-29 08:07:50.421299346 +0000 UTC m=+0.045721658 container remove 8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.427 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5be309d0-3f8e-41a8-a968-c5bd09c60194]: (4, ('Sat Nov 29 08:07:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4)\n8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4\nSat Nov 29 08:07:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4)\n8bc482593c718748afd67d941d4fe64f151adf17fce1180a568a537f688b24e4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.430 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c42785da-fed5-4e4d-877b-f1775cfbdcc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.431 257641 DEBUG nova.compute.manager [req-5d0697e6-bb16-444c-bbb2-04203782f418 req-ff58017a-7279-49d0-8e7e-99ca2dca1755 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-unplugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.432 257641 DEBUG oslo_concurrency.lockutils [req-5d0697e6-bb16-444c-bbb2-04203782f418 req-ff58017a-7279-49d0-8e7e-99ca2dca1755 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.432 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.432 257641 DEBUG oslo_concurrency.lockutils [req-5d0697e6-bb16-444c-bbb2-04203782f418 req-ff58017a-7279-49d0-8e7e-99ca2dca1755 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.432 257641 DEBUG oslo_concurrency.lockutils [req-5d0697e6-bb16-444c-bbb2-04203782f418 req-ff58017a-7279-49d0-8e7e-99ca2dca1755 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.432 257641 DEBUG nova.compute.manager [req-5d0697e6-bb16-444c-bbb2-04203782f418 req-ff58017a-7279-49d0-8e7e-99ca2dca1755 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] No waiting events found dispatching network-vif-unplugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.432 257641 WARNING nova.compute.manager [req-5d0697e6-bb16-444c-bbb2-04203782f418 req-ff58017a-7279-49d0-8e7e-99ca2dca1755 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received unexpected event network-vif-unplugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for instance with vm_state active and task_state resize_migrating.
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.434 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.452 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.455 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6142794c-06d9-428c-96db-7add2205ed77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.467 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ec16f4a0-c137-4c56-b0b3-199d2dd9190d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.468 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[48c85a5a-5ae8-4fe1-9488-25a16e248a55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.484 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[07986b77-98d9-4348-b732-1d6f34e93ac7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669653, 'reachable_time': 22656, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300374, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.487 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:07:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:50.487 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[8e23d45f-e2e5-4183-a14d-1306f57d45ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 29 03:07:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 29 03:07:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 29 03:07:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:50.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.896 257641 INFO nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance shutdown successfully after 13 seconds.
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.901 257641 INFO nova.virt.libvirt.driver [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance destroyed successfully.
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.902 257641 DEBUG nova.virt.libvirt.vif [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-911002665',display_name='tempest-DeleteServersTestJSON-server-911002665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-911002665',id=67,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-uokat01h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:35Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=fb74110a-d0bc-4766-a5c7-5def485f388f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1820701608-network", "vif_mac": "fa:16:3e:21:b7:fc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.902 257641 DEBUG nova.network.os_vif_util [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1820701608-network", "vif_mac": "fa:16:3e:21:b7:fc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.903 257641 DEBUG nova.network.os_vif_util [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.903 257641 DEBUG os_vif [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.905 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.905 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6dd53f5-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.906 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.908 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.911 257641 INFO os_vif [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33')
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.915 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:07:50 np0005539550 nova_compute[257631]: 2025-11-29 08:07:50.915 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:07:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 221 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 8.7 MiB/s wr, 453 op/s
Nov 29 03:07:51 np0005539550 nova_compute[257631]: 2025-11-29 08:07:51.348 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 29 03:07:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 29 03:07:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 29 03:07:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 29 03:07:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 29 03:07:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 29 03:07:52 np0005539550 podman[300399]: 2025-11-29 08:07:52.016979538 +0000 UTC m=+0.115747215 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:07:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:07:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:52.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:07:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 29 03:07:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 29 03:07:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 29 03:07:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:52.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:52 np0005539550 nova_compute[257631]: 2025-11-29 08:07:52.864 257641 DEBUG nova.network.neutron [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Port b6dd53f5-33a7-4fde-86d5-27a3e78dc566 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:07:52 np0005539550 nova_compute[257631]: 2025-11-29 08:07:52.985 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:52 np0005539550 nova_compute[257631]: 2025-11-29 08:07:52.986 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:52 np0005539550 nova_compute[257631]: 2025-11-29 08:07:52.986 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 221 MiB data, 712 MiB used, 20 GiB / 21 GiB avail; 755 KiB/s rd, 4.0 MiB/s wr, 171 op/s
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.158 257641 DEBUG nova.compute.manager [req-f3c5b15f-4480-49fb-a2cb-ccde2ce2943c req-b02dc0d1-a3fc-47a8-997c-bfeaeff88c20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.159 257641 DEBUG oslo_concurrency.lockutils [req-f3c5b15f-4480-49fb-a2cb-ccde2ce2943c req-b02dc0d1-a3fc-47a8-997c-bfeaeff88c20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.159 257641 DEBUG oslo_concurrency.lockutils [req-f3c5b15f-4480-49fb-a2cb-ccde2ce2943c req-b02dc0d1-a3fc-47a8-997c-bfeaeff88c20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.159 257641 DEBUG oslo_concurrency.lockutils [req-f3c5b15f-4480-49fb-a2cb-ccde2ce2943c req-b02dc0d1-a3fc-47a8-997c-bfeaeff88c20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.160 257641 DEBUG nova.compute.manager [req-f3c5b15f-4480-49fb-a2cb-ccde2ce2943c req-b02dc0d1-a3fc-47a8-997c-bfeaeff88c20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] No waiting events found dispatching network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.160 257641 WARNING nova.compute.manager [req-f3c5b15f-4480-49fb-a2cb-ccde2ce2943c req-b02dc0d1-a3fc-47a8-997c-bfeaeff88c20 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received unexpected event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for instance with vm_state active and task_state resize_migrated.#033[00m
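The sequence above (acquire the "-events" lock, pop, "No waiting events found", then the WARNING) reflects a lock-guarded registry of events each instance is waiting on: pop_instance_event removes a matching waiter, and when nothing registered one (the instance is mid-resize with task_state resize_migrated), the incoming network-vif-plugged is treated as unexpected. A simplified thread-safe analogue of that pattern, standard library only and not Nova's actual classes:

    import threading

    class InstanceEvents:
        """Per-instance map of awaited event names -> threading.Event."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # {instance_uuid: {event_name: Event}}

        def prepare(self, instance, name):
            with self._lock:  # "Acquiring lock ...-events"
                ev = threading.Event()
                self._events.setdefault(instance, {})[name] = ev
                return ev

        def pop(self, instance, name):
            with self._lock:
                return self._events.get(instance, {}).pop(name, None)

    registry = InstanceEvents()
    waiter = registry.pop("fb74110a", "network-vif-plugged-b6dd53f5")
    if waiter is None:
        print("No waiting events found; received unexpected event")  # the WARNING path
    else:
        waiter.set()  # wakes whoever is blocked in waiter.wait()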
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.223 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.224 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:07:53 np0005539550 nova_compute[257631]: 2025-11-29 08:07:53.224 257641 DEBUG nova.network.neutron [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:07:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:54.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:54 np0005539550 nova_compute[257631]: 2025-11-29 08:07:54.762 257641 DEBUG nova.network.neutron [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updating instance_info_cache with network_info: [{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:07:54 np0005539550 nova_compute[257631]: 2025-11-29 08:07:54.794 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
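The network_info blob cached above is plain nested data: one entry per VIF, each with network/subnet/IP substructures. A short sketch pulling out the fields that matter for wiring the port (devname, MAC, fixed IPs, MTU); the literal below is trimmed from the logged cache entry:

    vif = {
        "id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566",
        "address": "fa:16:3e:21:b7:fc",
        "devname": "tapb6dd53f5-33",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"ips": [{"address": "10.100.0.12", "type": "fixed"}]}],
        },
    }

    fixed_ips = [
        ip["address"]
        for subnet in vif["network"]["subnets"]
        for ip in subnet["ips"]
        if ip["type"] == "fixed"
    ]
    print(vif["devname"], vif["address"], fixed_ips, vif["network"]["meta"]["mtu"])
    # tapb6dd53f5-33 fa:16:3e:21:b7:fc ['10.100.0.12'] 1442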
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.050 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.053 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.053 257641 INFO nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Creating image(s)#033[00m
Nov 29 03:07:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 272 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 13 MiB/s wr, 399 op/s
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.105 257641 DEBUG nova.storage.rbd_utils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] creating snapshot(nova-resize) on rbd image(fb74110a-d0bc-4766-a5c7-5def485f388f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
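create_snap above takes an RBD snapshot named nova-resize of the instance's disk image so the pre-resize state can be reverted if the migration is rolled back. The stock rbd CLI equivalent, run the way oslo runs external commands (image and credentials taken from the log; this is the generic CLI, not Nova's internal call path):

    import subprocess

    IMAGE = "vms/fb74110a-d0bc-4766-a5c7-5def485f388f_disk"

    # Equivalent of: creating snapshot(nova-resize) on rbd image(..._disk)
    subprocess.run(
        ["rbd", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
         "snap", "create", f"{IMAGE}@nova-resize"],
        check=True,
    )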
Nov 29 03:07:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 29 03:07:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 29 03:07:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.753 257641 DEBUG nova.objects.instance [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'trusted_certs' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.882 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.883 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Ensure instance console log exists: /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.884 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.884 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.884 257641 DEBUG oslo_concurrency.lockutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.887 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Start _get_guest_xml network_info=[{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1820701608-network", "vif_mac": "fa:16:3e:21:b7:fc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.891 257641 WARNING nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.899 257641 DEBUG nova.virt.libvirt.host [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.900 257641 DEBUG nova.virt.libvirt.host [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.903 257641 DEBUG nova.virt.libvirt.host [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.903 257641 DEBUG nova.virt.libvirt.host [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.904 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.905 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.905 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.905 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.906 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.906 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.906 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.906 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.907 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.907 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.907 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.907 257641 DEBUG nova.virt.hardware [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
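The topology run above shows the selection logic end to end: with no flavor or image constraints (all "0:0:0"), the limits default to 65536 sockets/cores/threads, and for a single vCPU the only factorization is sockets=1, cores=1, threads=1. A simplified reconstruction of that enumeration (illustrative, not the exact nova.virt.hardware code):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """All (sockets, cores, threads) whose product is exactly vcpus, within limits."""
        divisors = [d for d in range(1, vcpus + 1) if vcpus % d == 0]
        return [
            (s, c, t)
            for s, c, t in product(divisors, repeat=3)
            if s * c * t == vcpus
            and s <= max_sockets and c <= max_cores and t <= max_threads
        ]

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"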
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.908 257641 DEBUG nova.objects.instance [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'vcpu_model' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:55 np0005539550 nova_compute[257631]: 2025-11-29 08:07:55.925 257641 DEBUG oslo_concurrency.processutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:56.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1213694125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.349 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.370 257641 DEBUG oslo_concurrency.processutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.419 257641 DEBUG oslo_concurrency.processutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 29 03:07:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:07:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4153691770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.863 257641 DEBUG oslo_concurrency.processutils [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
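The two "ceph mon dump --format=json" subprocess calls (~0.44 s each) are how the driver discovers monitor addresses before embedding them as <host> entries in the guest's RBD disk definitions below. A standalone sketch of the same probe; the "mons" key is the standard mon dump JSON layout:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    mons = json.loads(out)["mons"]  # one entry per monitor, with name and address fields
    print([m["name"] for m in mons])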
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.865 257641 DEBUG nova.virt.libvirt.vif [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-911002665',display_name='tempest-DeleteServersTestJSON-server-911002665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-911002665',id=67,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-uokat01h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:52Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=fb74110a-d0bc-4766-a5c7-5def485f388f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1820701608-network", "vif_mac": "fa:16:3e:21:b7:fc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.865 257641 DEBUG nova.network.os_vif_util [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1820701608-network", "vif_mac": "fa:16:3e:21:b7:fc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.866 257641 DEBUG nova.network.os_vif_util [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.870 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <uuid>fb74110a-d0bc-4766-a5c7-5def485f388f</uuid>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <name>instance-00000043</name>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <memory>196608</memory>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <nova:name>tempest-DeleteServersTestJSON-server-911002665</nova:name>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:07:55</nova:creationTime>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.micro">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:memory>192</nova:memory>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:user uuid="104aea18c5154615b602f032bdb49681">tempest-DeleteServersTestJSON-294503786-project-member</nova:user>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:project uuid="90c23935e0214785a9dc5061b91cf29c">tempest-DeleteServersTestJSON-294503786</nova:project>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <nova:port uuid="b6dd53f5-33a7-4fde-86d5-27a3e78dc566">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <entry name="serial">fb74110a-d0bc-4766-a5c7-5def485f388f</entry>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <entry name="uuid">fb74110a-d0bc-4766-a5c7-5def485f388f</entry>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/fb74110a-d0bc-4766-a5c7-5def485f388f_disk">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/fb74110a-d0bc-4766-a5c7-5def485f388f_disk.config">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:21:b7:fc"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <target dev="tapb6dd53f5-33"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f/console.log" append="off"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:07:56 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:07:56 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:07:56 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:07:56 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
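With _get_guest_xml finished, finish_migration hands the document above to libvirt to recreate the guest on this host. A minimal sketch of that handoff via the libvirt Python bindings (the connection URI is an assumption not shown in the log; xml stands for the <domain> document printed above):

    import libvirt

    xml = "..."  # the <domain>...</domain> document logged above

    conn = libvirt.open("qemu:///system")  # URI assumed
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # and start the guest
    finally:
        conn.close()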
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.872 257641 DEBUG nova.virt.libvirt.vif [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-911002665',display_name='tempest-DeleteServersTestJSON-server-911002665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-911002665',id=67,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-uokat01h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:07:52Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=fb74110a-d0bc-4766-a5c7-5def485f388f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1820701608-network", "vif_mac": "fa:16:3e:21:b7:fc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.873 257641 DEBUG nova.network.os_vif_util [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-DeleteServersTestJSON-1820701608-network", "vif_mac": "fa:16:3e:21:b7:fc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.873 257641 DEBUG nova.network.os_vif_util [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.874 257641 DEBUG os_vif [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.875 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.875 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.876 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.878 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.878 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6dd53f5-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.878 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6dd53f5-33, col_values=(('external_ids', {'iface-id': 'b6dd53f5-33a7-4fde-86d5-27a3e78dc566', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:b7:fc', 'vm-uuid': 'fb74110a-d0bc-4766-a5c7-5def485f388f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
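The three OVSDB commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) commit as one transaction; the external_ids keys are what let OVN map the tap device back to the Neutron port it claims a few lines below. The same change expressed with the stock ovs-vsctl CLI, driven from Python (functionally equivalent to, but not, the ovsdbapp code path; the bridge's datapath_type setting is omitted for brevity):

    import subprocess

    PORT = "tapb6dd53f5-33"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-br", "br-int",
         "--", "--may-exist", "add-port", "br-int", PORT,
         "--", "set", "Interface", PORT,
         "external_ids:iface-id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566",
         "external_ids:iface-status=active",
         'external_ids:attached-mac="fa:16:3e:21:b7:fc"',  # quoted: value contains colons
         "external_ids:vm-uuid=fb74110a-d0bc-4766-a5c7-5def485f388f"],
        check=True,
    )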
Nov 29 03:07:56 np0005539550 NetworkManager[49039]: <info>  [1764403676.9301] manager: (tapb6dd53f5-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.933 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.938 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:07:56 np0005539550 nova_compute[257631]: 2025-11-29 08:07:56.938 257641 INFO os_vif [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33')#033[00m
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.002 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.003 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.003 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] No VIF found with MAC fa:16:3e:21:b7:fc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.004 257641 INFO nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Using config drive
Nov 29 03:07:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 314 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 15 MiB/s wr, 376 op/s
Nov 29 03:07:57 np0005539550 kernel: tapb6dd53f5-33: entered promiscuous mode
Nov 29 03:07:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:57Z|00211|binding|INFO|Claiming lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for this chassis.
Nov 29 03:07:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:57Z|00212|binding|INFO|b6dd53f5-33a7-4fde-86d5-27a3e78dc566: Claiming fa:16:3e:21:b7:fc 10.100.0.12
Nov 29 03:07:57 np0005539550 NetworkManager[49039]: <info>  [1764403677.1169] manager: (tapb6dd53f5-33): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.116 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.122 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:b7:fc 10.100.0.12'], port_security=['fa:16:3e:21:b7:fc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fb74110a-d0bc-4766-a5c7-5def485f388f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=b6dd53f5-33a7-4fde-86d5-27a3e78dc566) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.123 158978 INFO neutron.agent.ovn.metadata.agent [-] Port b6dd53f5-33a7-4fde-86d5-27a3e78dc566 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 bound to our chassis
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.125 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9
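[Annotation] The "Matched UPDATE: PortBindingUpdatedEvent" line above is ovsdbapp's row-event mechanism: the metadata agent watches Port_Binding rows in the OVN Southbound DB and reacts when a port lands on its chassis. A minimal sketch of that pattern, assuming ovsdbapp's RowEvent base class (the print is a stand-in for the agent's provisioning logic):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Subscribe to updates on the Port_Binding table, no row filter.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # Fires when a matched row changes, e.g. the chassis column
            # being set as in the log line above.
            print("port bound:", row.logical_port)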
Nov 29 03:07:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:57Z|00213|binding|INFO|Setting lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 ovn-installed in OVS
Nov 29 03:07:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:57Z|00214|binding|INFO|Setting lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 up in Southbound
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.134 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.137 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9292fbb2-f844-483d-b430-64068de07bb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.138 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa8be8715-21 in ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.139 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa8be8715-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.140 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b24eb46d-0f88-40c4-bffc-573cc39c9e6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.140 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e37376-64c4-4e23-9256-f19d112d84cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 systemd-udevd[300621]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.154 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[42ccef9b-edfa-4489-a7f2-d54834841e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 systemd-machined[216673]: New machine qemu-33-instance-00000043.
Nov 29 03:07:57 np0005539550 NetworkManager[49039]: <info>  [1764403677.1620] device (tapb6dd53f5-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:07:57 np0005539550 NetworkManager[49039]: <info>  [1764403677.1627] device (tapb6dd53f5-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:07:57 np0005539550 systemd[1]: Started Virtual Machine qemu-33-instance-00000043.
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.168 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[299c8ef9-e1ba-4728-8873-a84cd13868ba]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.200 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3a666271-e6ff-4b17-b2a4-3f6a00e1887d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 systemd-udevd[300625]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:07:57 np0005539550 NetworkManager[49039]: <info>  [1764403677.2088] manager: (tapa8be8715-20): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.208 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[771c1828-2edc-403c-af7a-df55ef6d84e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.238 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1f5ac6a5-9669-44f1-bdf0-0922186742d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.240 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5dbfe579-df4f-4d81-830e-faad33ddd6f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 NetworkManager[49039]: <info>  [1764403677.2629] device (tapa8be8715-20): carrier: link connected
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.268 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[67fca261-ae1d-4e74-9087-858a715aec99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.284 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b4351ed7-4601-4c18-b4fa-7e8dba15d986]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672492, 'reachable_time': 21300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300653, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.299 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4b958817-0bef-411e-ad18-db5f15bd9a66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:f3b4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 672492, 'tstamp': 672492}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300654, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.317 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7275c0cf-2fee-432c-9796-043a618c99a7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa8be8715-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:f3:b4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672492, 'reachable_time': 21300, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300655, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.345 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[90bc69af-a3c7-4dd2-b300-366617565f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.379 257641 DEBUG nova.compute.manager [req-143fb9b6-18a4-4d00-b2fd-ece5e28b6e30 req-13a624b3-ea58-4006-badf-9a36265a537c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.379 257641 DEBUG oslo_concurrency.lockutils [req-143fb9b6-18a4-4d00-b2fd-ece5e28b6e30 req-13a624b3-ea58-4006-badf-9a36265a537c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.380 257641 DEBUG oslo_concurrency.lockutils [req-143fb9b6-18a4-4d00-b2fd-ece5e28b6e30 req-13a624b3-ea58-4006-badf-9a36265a537c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.380 257641 DEBUG oslo_concurrency.lockutils [req-143fb9b6-18a4-4d00-b2fd-ece5e28b6e30 req-13a624b3-ea58-4006-badf-9a36265a537c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.381 257641 DEBUG nova.compute.manager [req-143fb9b6-18a4-4d00-b2fd-ece5e28b6e30 req-13a624b3-ea58-4006-badf-9a36265a537c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] No waiting events found dispatching network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.381 257641 WARNING nova.compute.manager [req-143fb9b6-18a4-4d00-b2fd-ece5e28b6e30 req-13a624b3-ea58-4006-badf-9a36265a537c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received unexpected event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for instance with vm_state active and task_state resize_finish.
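[Annotation] The "unexpected event" warning above is the fall-through of nova's instance-event dispatch: an external event from Neutron resolves a registered waiter if one exists, otherwise it is logged and dropped. Here nothing was waiting on network-vif-plugged, so the warning is benign during a resize. A minimal sketch of the pop-or-warn pattern (not nova's actual code; names are illustrative):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance, name):
            ev = threading.Event()
            with self._lock:
                self._waiters[(instance, name)] = ev
            return ev

        def pop(self, instance, name):
            with self._lock:
                return self._waiters.pop((instance, name), None)

    events = InstanceEvents()
    waiter = events.pop("fb74110a-d0bc-4766-a5c7-5def485f388f",
                        "network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566")
    if waiter:
        waiter.set()  # wake whoever is blocked in the wait path
    else:
        print("WARNING: unexpected event, dropping")  # the branch logged above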
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.396 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[48ec6a6f-55a8-443f-b46c-a0dcc0747498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.397 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.398 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.398 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8be8715-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:57 np0005539550 kernel: tapa8be8715-20: entered promiscuous mode
Nov 29 03:07:57 np0005539550 NetworkManager[49039]: <info>  [1764403677.4005] manager: (tapa8be8715-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.402 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa8be8715-20, col_values=(('external_ids', {'iface-id': '307ce936-d5dc-4357-90d6-2b0b2d3d1113'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:07:57Z|00215|binding|INFO|Releasing lport 307ce936-d5dc-4357-90d6-2b0b2d3d1113 from this chassis (sb_readonly=0)
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.406 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.407 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[98900a59-f77a-4cbd-b463-13ae91cb2231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.407 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.pid.haproxy
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID a8be8715-2b74-42ca-9713-7fc1f4a33bc9
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:07:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:07:57.408 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'env', 'PROCESS_TAG=haproxy-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a8be8715-2b74-42ca-9713-7fc1f4a33bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
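[Annotation] The generated config above binds 169.254.169.254:80 inside the ovnmeta- namespace, tags each request with X-OVN-Network-ID, and forwards to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy. A hedged smoke test of the listener just spawned, run from the compute node (an HTTP response, even a 404 when the source address is not an instance IP the agent can resolve, shows the proxy is serving):

    import subprocess

    ns = "ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9"
    # -s silences progress, -i prints the status line and headers.
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "curl", "-si", "http://169.254.169.254/openstack"],
        check=True)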
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.421 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.525 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for fb74110a-d0bc-4766-a5c7-5def485f388f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.531 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403677.5251918, fb74110a-d0bc-4766-a5c7-5def485f388f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.531 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] VM Resumed (Lifecycle Event)
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.534 257641 DEBUG nova.compute.manager [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.537 257641 INFO nova.virt.libvirt.driver [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance running successfully.
Nov 29 03:07:57 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.538 257641 DEBUG nova.virt.libvirt.guest [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.539 257641 DEBUG nova.virt.libvirt.driver [None req-607bbaa0-5881-4a4c-84a7-53a734a391b3 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.569 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.573 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.621 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.621 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403677.5253627, fb74110a-d0bc-4766-a5c7-5def485f388f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.622 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] VM Started (Lifecycle Event)
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.661 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:07:57 np0005539550 nova_compute[257631]: 2025-11-29 08:07:57.664 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:07:57 np0005539550 podman[300729]: 2025-11-29 08:07:57.749946907 +0000 UTC m=+0.052668082 container create 5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:07:57 np0005539550 systemd[1]: Started libpod-conmon-5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a.scope.
Nov 29 03:07:57 np0005539550 podman[300729]: 2025-11-29 08:07:57.717605675 +0000 UTC m=+0.020326870 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:07:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/507014b255929b54358363c817014f129cfe6caa4afaa65e5237d679868a09e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:07:57 np0005539550 podman[300729]: 2025-11-29 08:07:57.854305215 +0000 UTC m=+0.157026410 container init 5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:07:57 np0005539550 podman[300729]: 2025-11-29 08:07:57.860027749 +0000 UTC m=+0.162748924 container start 5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:07:57 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300745]: [NOTICE]   (300749) : New worker (300751) forked
Nov 29 03:07:57 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300745]: [NOTICE]   (300749) : Loading success.
Nov 29 03:07:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:07:58.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:07:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:07:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:07:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:07:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:07:58.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:07:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 206 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 13 MiB/s wr, 466 op/s
Nov 29 03:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:07:59
Nov 29 03:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', 'backups', 'vms']
Nov 29 03:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:07:59 np0005539550 nova_compute[257631]: 2025-11-29 08:07:59.691 257641 DEBUG nova.compute.manager [req-6e6ff485-5478-4c6e-93ca-8eae5272c65e req-12af6ac7-a619-4796-a41d-93458daada78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:07:59 np0005539550 nova_compute[257631]: 2025-11-29 08:07:59.691 257641 DEBUG oslo_concurrency.lockutils [req-6e6ff485-5478-4c6e-93ca-8eae5272c65e req-12af6ac7-a619-4796-a41d-93458daada78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:07:59 np0005539550 nova_compute[257631]: 2025-11-29 08:07:59.692 257641 DEBUG oslo_concurrency.lockutils [req-6e6ff485-5478-4c6e-93ca-8eae5272c65e req-12af6ac7-a619-4796-a41d-93458daada78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:07:59 np0005539550 nova_compute[257631]: 2025-11-29 08:07:59.692 257641 DEBUG oslo_concurrency.lockutils [req-6e6ff485-5478-4c6e-93ca-8eae5272c65e req-12af6ac7-a619-4796-a41d-93458daada78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:07:59 np0005539550 nova_compute[257631]: 2025-11-29 08:07:59.692 257641 DEBUG nova.compute.manager [req-6e6ff485-5478-4c6e-93ca-8eae5272c65e req-12af6ac7-a619-4796-a41d-93458daada78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] No waiting events found dispatching network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:07:59 np0005539550 nova_compute[257631]: 2025-11-29 08:07:59.692 257641 WARNING nova.compute.manager [req-6e6ff485-5478-4c6e-93ca-8eae5272c65e req-12af6ac7-a619-4796-a41d-93458daada78 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received unexpected event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for instance with vm_state resized and task_state None.
Nov 29 03:08:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:00.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:00 np0005539550 nova_compute[257631]: 2025-11-29 08:08:00.727 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:00 np0005539550 nova_compute[257631]: 2025-11-29 08:08:00.728 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:00 np0005539550 nova_compute[257631]: 2025-11-29 08:08:00.728 257641 DEBUG nova.compute.manager [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Nov 29 03:08:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:00.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 167 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 10 MiB/s wr, 430 op/s
Nov 29 03:08:01 np0005539550 nova_compute[257631]: 2025-11-29 08:08:01.389 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:01 np0005539550 nova_compute[257631]: 2025-11-29 08:08:01.509 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:08:01 np0005539550 nova_compute[257631]: 2025-11-29 08:08:01.510 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquired lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:08:01 np0005539550 nova_compute[257631]: 2025-11-29 08:08:01.510 257641 DEBUG nova.network.neutron [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:08:01 np0005539550 nova_compute[257631]: 2025-11-29 08:08:01.511 257641 DEBUG nova.objects.instance [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'info_cache' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:08:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 29 03:08:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 29 03:08:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 29 03:08:01 np0005539550 nova_compute[257631]: 2025-11-29 08:08:01.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:02.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:02.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 167 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 25 KiB/s wr, 189 op/s
Nov 29 03:08:03 np0005539550 nova_compute[257631]: 2025-11-29 08:08:03.798 257641 DEBUG nova.network.neutron [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updating instance_info_cache with network_info: [{"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:08:03 np0005539550 nova_compute[257631]: 2025-11-29 08:08:03.824 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Releasing lock "refresh_cache-fb74110a-d0bc-4766-a5c7-5def485f388f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:08:03 np0005539550 nova_compute[257631]: 2025-11-29 08:08:03.825 257641 DEBUG nova.objects.instance [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'migration_context' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:08:03 np0005539550 nova_compute[257631]: 2025-11-29 08:08:03.926 257641 DEBUG nova.storage.rbd_utils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] removing snapshot(nova-resize) on rbd image(fb74110a-d0bc-4766-a5c7-5def485f388f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
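[Annotation] remove_snap above deletes the nova-resize snapshot on the instance's RBD disk, which appears to be the safety snapshot taken earlier in the resize so it could be reverted. A hedged sketch of the same operation with the python-rbd bindings (the pool name vms is an assumption based on nova's usual images_rbd_pool setting; image and snapshot names come from the log line):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "fb74110a-d0bc-4766-a5c7-5def485f388f_disk") as image:
                if image.is_protected_snap("nova-resize"):
                    # Protected snapshots must be unprotected before removal.
                    image.unprotect_snap("nova-resize")
                image.remove_snap("nova-resize")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()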
Nov 29 03:08:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 29 03:08:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:04.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 29 03:08:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 29 03:08:04 np0005539550 nova_compute[257631]: 2025-11-29 08:08:04.245 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:04 np0005539550 nova_compute[257631]: 2025-11-29 08:08:04.247 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:04 np0005539550 nova_compute[257631]: 2025-11-29 08:08:04.333 257641 DEBUG oslo_concurrency.processutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1861211098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:04.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:04 np0005539550 nova_compute[257631]: 2025-11-29 08:08:04.770 257641 DEBUG oslo_concurrency.processutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
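[Annotation] The ceph df probe above is how the RBD storage backend refreshes cluster capacity for the resource tracker. A minimal sketch that runs the same command and pulls the cluster-wide totals from the standard `ceph df --format=json` stats block:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"] // 2**30, "GiB total,",
          stats["total_avail_bytes"] // 2**30, "GiB avail")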
Nov 29 03:08:04 np0005539550 nova_compute[257631]: 2025-11-29 08:08:04.776 257641 DEBUG nova.compute.provider_tree [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:08:04 np0005539550 nova_compute[257631]: 2025-11-29 08:08:04.793 257641 DEBUG nova.scheduler.client.report [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
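[Annotation] Placement's usable capacity for each resource class in that inventory is (total - reserved) * allocation_ratio, so this provider can schedule up to 32 VCPU (8 * 4.0), 7168 MB of RAM ((7680 - 512) * 1.0), and about 17.1 GB of disk ((20 - 1) * 0.9). The same arithmetic in a few lines, with the values copied from the log:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 2))
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1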
Nov 29 03:08:04 np0005539550 nova_compute[257631]: 2025-11-29 08:08:04.849 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.000 257641 INFO nova.scheduler.client.report [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocation for migration b3c3681b-78f9-4a9d-9e5a-f696a97c657b#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.069 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 167 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 44 KiB/s wr, 257 op/s
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.152 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.152 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.152 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.153 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.153 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
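Every Acquiring/acquired/released triple in these lines is oslo.concurrency's standard pattern: a named in-process lock around a critical section, with the waited/held durations logged on entry and exit. A minimal sketch of both forms of that API (the lock body is a placeholder):

    from oslo_concurrency import lockutils

    # Context-manager form; emits acquire/release debug lines like those above.
    with lockutils.lock("compute_resources"):
        pass  # mutate shared resource-tracker state here

    # Decorator form; the wrapped name then shows up in the "acquired by" log.
    @lockutils.synchronized("compute_resources")
    def drop_move_claim_at_source():
        pass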
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.155 257641 INFO nova.compute.manager [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Terminating instance#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.156 257641 DEBUG nova.compute.manager [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:08:05 np0005539550 kernel: tapb6dd53f5-33 (unregistering): left promiscuous mode
Nov 29 03:08:05 np0005539550 NetworkManager[49039]: <info>  [1764403685.1946] device (tapb6dd53f5-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:05Z|00216|binding|INFO|Releasing lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 from this chassis (sb_readonly=0)
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.210 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:05Z|00217|binding|INFO|Setting lport b6dd53f5-33a7-4fde-86d5-27a3e78dc566 down in Southbound
Nov 29 03:08:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:05Z|00218|binding|INFO|Removing iface tapb6dd53f5-33 ovn-installed in OVS
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.213 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.223 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:b7:fc 10.100.0.12'], port_security=['fa:16:3e:21:b7:fc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fb74110a-d0bc-4766-a5c7-5def485f388f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '90c23935e0214785a9dc5061b91cf29c', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f717601c-d15f-4a2d-a56a-85c60baf3a44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc7b8639-cf64-4f98-aa54-bbd2c9e5fa46, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=b6dd53f5-33a7-4fde-86d5-27a3e78dc566) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.225 158978 INFO neutron.agent.ovn.metadata.agent [-] Port b6dd53f5-33a7-4fde-86d5-27a3e78dc566 in datapath a8be8715-2b74-42ca-9713-7fc1f4a33bc9 unbound from our chassis#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.226 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.227 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[719b4948-d285-4be4-a900-8f871a6fbda2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.229 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 namespace which is not needed anymore#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.236 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539550 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000043.scope: Deactivated successfully.
Nov 29 03:08:05 np0005539550 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000043.scope: Consumed 8.214s CPU time.
Nov 29 03:08:05 np0005539550 systemd-machined[216673]: Machine qemu-33-instance-00000043 terminated.
Nov 29 03:08:05 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300745]: [NOTICE]   (300749) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:05 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300745]: [NOTICE]   (300749) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:05 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300745]: [WARNING]  (300749) : Exiting Master process...
Nov 29 03:08:05 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300745]: [ALERT]    (300749) : Current worker (300751) exited with code 143 (Terminated)
Nov 29 03:08:05 np0005539550 neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9[300745]: [WARNING]  (300749) : All workers exited. Exiting... (0)
Nov 29 03:08:05 np0005539550 systemd[1]: libpod-5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a.scope: Deactivated successfully.
Nov 29 03:08:05 np0005539550 podman[300947]: 2025-11-29 08:08:05.381985059 +0000 UTC m=+0.043767710 container died 5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.396 257641 INFO nova.virt.libvirt.driver [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Instance destroyed successfully.#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.397 257641 DEBUG nova.objects.instance [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lazy-loading 'resources' on Instance uuid fb74110a-d0bc-4766-a5c7-5def485f388f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.411 257641 DEBUG nova.virt.libvirt.vif [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:07:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-911002665',display_name='tempest-DeleteServersTestJSON-server-911002665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-911002665',id=67,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:07:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='90c23935e0214785a9dc5061b91cf29c',ramdisk_id='',reservation_id='r-uokat01h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-294503786',owner_user_name='tempest-DeleteServersTestJSON-294503786-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:07:57Z,user_data=None,user_id='104aea18c5154615b602f032bdb49681',uuid=fb74110a-d0bc-4766-a5c7-5def485f388f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.412 257641 DEBUG nova.network.os_vif_util [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converting VIF {"id": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "address": "fa:16:3e:21:b7:fc", "network": {"id": "a8be8715-2b74-42ca-9713-7fc1f4a33bc9", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1820701608-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "90c23935e0214785a9dc5061b91cf29c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6dd53f5-33", "ovs_interfaceid": "b6dd53f5-33a7-4fde-86d5-27a3e78dc566", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.413 257641 DEBUG nova.network.os_vif_util [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.413 257641 DEBUG os_vif [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-507014b255929b54358363c817014f129cfe6caa4afaa65e5237d679868a09e4-merged.mount: Deactivated successfully.
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.417 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.417 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6dd53f5-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.421 257641 DEBUG nova.compute.manager [req-3eb1d9c1-846e-4098-acfb-6edfa7c9efe0 req-7b63cef8-a0aa-47ee-a306-9a6da079b806 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-unplugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.421 257641 DEBUG oslo_concurrency.lockutils [req-3eb1d9c1-846e-4098-acfb-6edfa7c9efe0 req-7b63cef8-a0aa-47ee-a306-9a6da079b806 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.422 257641 DEBUG oslo_concurrency.lockutils [req-3eb1d9c1-846e-4098-acfb-6edfa7c9efe0 req-7b63cef8-a0aa-47ee-a306-9a6da079b806 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.422 257641 DEBUG oslo_concurrency.lockutils [req-3eb1d9c1-846e-4098-acfb-6edfa7c9efe0 req-7b63cef8-a0aa-47ee-a306-9a6da079b806 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.422 257641 DEBUG nova.compute.manager [req-3eb1d9c1-846e-4098-acfb-6edfa7c9efe0 req-7b63cef8-a0aa-47ee-a306-9a6da079b806 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] No waiting events found dispatching network-vif-unplugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.422 257641 WARNING nova.compute.manager [req-3eb1d9c1-846e-4098-acfb-6edfa7c9efe0 req-7b63cef8-a0aa-47ee-a306-9a6da079b806 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received unexpected event network-vif-unplugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.423 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.424 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:05 np0005539550 podman[300947]: 2025-11-29 08:08:05.424875075 +0000 UTC m=+0.086657726 container cleanup 5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.429 257641 INFO os_vif [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:b7:fc,bridge_name='br-int',has_traffic_filtering=True,id=b6dd53f5-33a7-4fde-86d5-27a3e78dc566,network=Network(a8be8715-2b74-42ca-9713-7fc1f4a33bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6dd53f5-33')#033[00m
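The DelPortCommand at 08:08:05.417 is os-vif's ovsdbapp transaction pulling tapb6dd53f5-33 out of br-int, and this INFO line confirms the unplug completed. Outside of os-vif, the same removal can be reproduced with a plain ovs-vsctl call; a sketch, assuming ovs-vsctl is on PATH (port and bridge names taken from the log):

    import subprocess

    # --if-exists mirrors DelPortCommand(if_exists=True): no error if gone.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapb6dd53f5-33"],
        check=True,
    )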
Nov 29 03:08:05 np0005539550 systemd[1]: libpod-conmon-5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a.scope: Deactivated successfully.
Nov 29 03:08:05 np0005539550 podman[300985]: 2025-11-29 08:08:05.490547272 +0000 UTC m=+0.041200975 container remove 5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.498 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8caa1689-1892-4378-905f-4c22cac16b15]: (4, ('Sat Nov 29 08:08:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a)\n5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a\nSat Nov 29 08:08:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 (5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a)\n5910e7e034388fc13c14bc2ea008b4a3da744a43c03752cc6ca14062d00bf65a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.500 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d8a08b-144a-45ed-a87e-2d9c38f28e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.501 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8be8715-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:05 np0005539550 kernel: tapa8be8715-20: left promiscuous mode
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.504 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539550 nova_compute[257631]: 2025-11-29 08:08:05.520 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.523 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[33e1253e-ad0e-4f6f-bbf6-de3345cc1699]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.538 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[70f54a3e-6597-4388-8f3f-69c5c784eefc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.540 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b88121-a7e0-4fdf-a23f-fce60a419b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.559 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6244b5d4-16d2-4dff-94ba-fd9d112b1606]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 672486, 'reachable_time': 40022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301032, 'error': None, 'target': 'ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.562 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:05.562 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[7eedc8f6-a623-4e75-bd9c-31d8ebebbf34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:05 np0005539550 systemd[1]: run-netns-ovnmeta\x2da8be8715\x2d2b74\x2d42ca\x2d9713\x2d7fc1f4a33bc9.mount: Deactivated successfully.
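That mount deactivation closes out the metadata teardown: the haproxy sidecar stopped and removed, tapa8be8715-20 deleted, and the ovnmeta-... namespace unlinked by neutron's privileged ip_lib, which drives pyroute2 under the hood. A sketch of that final step with pyroute2, assuming root privileges (namespace name from the log):

    from pyroute2 import netns

    NS = "ovnmeta-a8be8715-2b74-42ca-9713-7fc1f4a33bc9"
    # remove() unlinks /run/netns/<name>, which is what the systemd
    # run-netns mount line above reports being cleaned up.
    if NS in netns.listnetns():
        netns.remove(NS)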
Nov 29 03:08:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:08:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:08:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:08:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:08:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:08:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:06.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c1e14536-e048-4fe8-858b-4a9eea1d344e does not exist
Nov 29 03:08:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7ab0317c-bddc-4f8c-836c-6c872afd3fe2 does not exist
Nov 29 03:08:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 565d49cb-dac9-4650-afe8-7676dd916022 does not exist
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
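Each handle_command line shows the JSON payload the monitor actually dispatches: a "prefix" plus typed arguments. The same commands can be issued programmatically through the python-rados binding's mon_command(); a sketch, assuming /etc/ceph/ceph.conf points at this cluster and a usable keyring is available:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same payload the mgr sent above: the OSD tree filtered to destroyed OSDs.
    cmd = {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b"")
    tree = json.loads(outbuf) if ret == 0 else None
    cluster.shutdown()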
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:08:06 np0005539550 nova_compute[257631]: 2025-11-29 08:08:06.390 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:06 np0005539550 nova_compute[257631]: 2025-11-29 08:08:06.472 257641 INFO nova.virt.libvirt.driver [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Deleting instance files /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f_del#033[00m
Nov 29 03:08:06 np0005539550 nova_compute[257631]: 2025-11-29 08:08:06.472 257641 INFO nova.virt.libvirt.driver [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Deletion of /var/lib/nova/instances/fb74110a-d0bc-4766-a5c7-5def485f388f_del complete#033[00m
Nov 29 03:08:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:06 np0005539550 nova_compute[257631]: 2025-11-29 08:08:06.678 257641 INFO nova.compute.manager [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Took 1.52 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:08:06 np0005539550 nova_compute[257631]: 2025-11-29 08:08:06.679 257641 DEBUG oslo.service.loopingcall [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:08:06 np0005539550 nova_compute[257631]: 2025-11-29 08:08:06.679 257641 DEBUG nova.compute.manager [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:08:06 np0005539550 nova_compute[257631]: 2025-11-29 08:08:06.679 257641 DEBUG nova.network.neutron [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:08:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:06.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:06 np0005539550 podman[301194]: 2025-11-29 08:08:06.790139496 +0000 UTC m=+0.039688307 container create b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:08:06 np0005539550 systemd[1]: Started libpod-conmon-b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb.scope.
Nov 29 03:08:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:06 np0005539550 podman[301194]: 2025-11-29 08:08:06.773430946 +0000 UTC m=+0.022979777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:08:06 np0005539550 podman[301194]: 2025-11-29 08:08:06.878470382 +0000 UTC m=+0.128019193 container init b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:08:06 np0005539550 podman[301194]: 2025-11-29 08:08:06.88758918 +0000 UTC m=+0.137137991 container start b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:08:06 np0005539550 podman[301194]: 2025-11-29 08:08:06.890821141 +0000 UTC m=+0.140369952 container attach b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:08:06 np0005539550 sharp_jones[301210]: 167 167
Nov 29 03:08:06 np0005539550 systemd[1]: libpod-b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb.scope: Deactivated successfully.
Nov 29 03:08:06 np0005539550 podman[301194]: 2025-11-29 08:08:06.895372906 +0000 UTC m=+0.144921737 container died b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:08:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5f5008f732c0ad74c6d5e77718c8b7485a789d9ee8610a1525c104649084a2e9-merged.mount: Deactivated successfully.
Nov 29 03:08:06 np0005539550 podman[301194]: 2025-11-29 08:08:06.959400352 +0000 UTC m=+0.208949183 container remove b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:08:06 np0005539550 systemd[1]: libpod-conmon-b191b84ac535d2b9baaf3b1d3ab6f70e0e36dc2c5a591df2f03809cdb31721fb.scope: Deactivated successfully.
Nov 29 03:08:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 203 MiB data, 672 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.5 MiB/s wr, 221 op/s
Nov 29 03:08:07 np0005539550 podman[301236]: 2025-11-29 08:08:07.110629536 +0000 UTC m=+0.039710787 container create 0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:08:07 np0005539550 systemd[1]: Started libpod-conmon-0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194.scope.
Nov 29 03:08:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5d3e0643a9af1e7bff2fb67c87613672d2def7327b01f82c3aba75dcfb0472/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5d3e0643a9af1e7bff2fb67c87613672d2def7327b01f82c3aba75dcfb0472/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5d3e0643a9af1e7bff2fb67c87613672d2def7327b01f82c3aba75dcfb0472/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5d3e0643a9af1e7bff2fb67c87613672d2def7327b01f82c3aba75dcfb0472/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5d3e0643a9af1e7bff2fb67c87613672d2def7327b01f82c3aba75dcfb0472/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
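The 0x7fffffff in these xfs warnings is simply the largest signed 32-bit time_t; converting it shows why the kernel flags 2038 as the limit for this on-disk timestamp format:

    from datetime import datetime, timezone

    # 2**31 - 1 seconds after the epoch: the classic y2038 rollover.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00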
Nov 29 03:08:07 np0005539550 podman[301236]: 2025-11-29 08:08:07.092919592 +0000 UTC m=+0.022000873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:08:07 np0005539550 podman[301236]: 2025-11-29 08:08:07.187163646 +0000 UTC m=+0.116244917 container init 0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:08:07 np0005539550 podman[301236]: 2025-11-29 08:08:07.195902705 +0000 UTC m=+0.124983956 container start 0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:08:07 np0005539550 podman[301236]: 2025-11-29 08:08:07.199646309 +0000 UTC m=+0.128727580 container attach 0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:08:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:08:07 np0005539550 nova_compute[257631]: 2025-11-29 08:08:07.609 257641 DEBUG nova.compute.manager [req-8fc09d88-f43c-4635-bc77-504d7e8228da req-d58adb2e-030f-491a-bc47-2c700c0e7904 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:07 np0005539550 nova_compute[257631]: 2025-11-29 08:08:07.610 257641 DEBUG oslo_concurrency.lockutils [req-8fc09d88-f43c-4635-bc77-504d7e8228da req-d58adb2e-030f-491a-bc47-2c700c0e7904 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:07 np0005539550 nova_compute[257631]: 2025-11-29 08:08:07.611 257641 DEBUG oslo_concurrency.lockutils [req-8fc09d88-f43c-4635-bc77-504d7e8228da req-d58adb2e-030f-491a-bc47-2c700c0e7904 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:07 np0005539550 nova_compute[257631]: 2025-11-29 08:08:07.612 257641 DEBUG oslo_concurrency.lockutils [req-8fc09d88-f43c-4635-bc77-504d7e8228da req-d58adb2e-030f-491a-bc47-2c700c0e7904 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:07 np0005539550 nova_compute[257631]: 2025-11-29 08:08:07.612 257641 DEBUG nova.compute.manager [req-8fc09d88-f43c-4635-bc77-504d7e8228da req-d58adb2e-030f-491a-bc47-2c700c0e7904 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] No waiting events found dispatching network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:07 np0005539550 nova_compute[257631]: 2025-11-29 08:08:07.612 257641 WARNING nova.compute.manager [req-8fc09d88-f43c-4635-bc77-504d7e8228da req-d58adb2e-030f-491a-bc47-2c700c0e7904 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received unexpected event network-vif-plugged-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 for instance with vm_state active and task_state None.#033[00m
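Both "Received unexpected event" warnings come from the same gate: pop_instance_event looks up a registered waiter for the (instance, event) pair, and because the guest was already torn down nothing was waiting for the unplugged/plugged events, so the manager warns instead of dispatching. A toy sketch of that waiter-registry pattern, not nova's actual implementation:

    import threading

    # Toy registry illustrating the waiter/no-waiter branch seen above.
    _waiters: dict[tuple[str, str], threading.Event] = {}

    def prepare_event(instance: str, event: str) -> threading.Event:
        ev = threading.Event()
        _waiters[(instance, event)] = ev
        return ev  # caller blocks on ev.wait() until the event arrives

    def pop_event(instance: str, event: str) -> None:
        waiter = _waiters.pop((instance, event), None)
        if waiter is None:
            print(f"WARNING: received unexpected event {event} for {instance}")
        else:
            waiter.set()  # wake the thread that registered interest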
Nov 29 03:08:08 np0005539550 eloquent_elbakyan[301252]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:08:08 np0005539550 eloquent_elbakyan[301252]: --> relative data size: 1.0
Nov 29 03:08:08 np0005539550 eloquent_elbakyan[301252]: --> All data devices are unavailable
Nov 29 03:08:08 np0005539550 nova_compute[257631]: 2025-11-29 08:08:08.126 257641 DEBUG nova.network.neutron [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:08 np0005539550 systemd[1]: libpod-0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194.scope: Deactivated successfully.
Nov 29 03:08:08 np0005539550 podman[301236]: 2025-11-29 08:08:08.131822176 +0000 UTC m=+1.060903427 container died 0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:08:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:08.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
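The beast lines are radosgw's access log in a fixed layout: request pointer, client IP, user, timestamp, request line, HTTP status, body bytes, and latency. A small regex sketch for pulling the fields out (the pattern is written against the lines above, not an official format specification):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:08:08.149 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST.search(line)
    print(m["ip"], m["status"], m["latency"])  # 192.168.122.100 200 0.001000025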
Nov 29 03:08:08 np0005539550 nova_compute[257631]: 2025-11-29 08:08:08.153 257641 INFO nova.compute.manager [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Took 1.47 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0e5d3e0643a9af1e7bff2fb67c87613672d2def7327b01f82c3aba75dcfb0472-merged.mount: Deactivated successfully.
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:08:08 np0005539550 podman[301236]: 2025-11-29 08:08:08.195119324 +0000 UTC m=+1.124200575 container remove 0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_elbakyan, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:08:08 np0005539550 systemd[1]: libpod-conmon-0f9f1ce740a2d42d805f789203648df366f0f4afa00f0c4e26969a7999a82194.scope: Deactivated successfully.
Nov 29 03:08:08 np0005539550 nova_compute[257631]: 2025-11-29 08:08:08.206 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:08 np0005539550 nova_compute[257631]: 2025-11-29 08:08:08.207 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:08 np0005539550 nova_compute[257631]: 2025-11-29 08:08:08.214 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:08 np0005539550 podman[301330]: 2025-11-29 08:08:08.420717684 +0000 UTC m=+0.053518364 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:08:08 np0005539550 nova_compute[257631]: 2025-11-29 08:08:08.452 257641 INFO nova.scheduler.client.report [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Deleted allocations for instance fb74110a-d0bc-4766-a5c7-5def485f388f
Nov 29 03:08:08 np0005539550 podman[301371]: 2025-11-29 08:08:08.488659168 +0000 UTC m=+0.054016016 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
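The podman health_status entries above come from the containers' configured healthchecks (see the healthcheck/test keys inside config_data). One way to query the same status on demand; the ".State.Health.Status" template path is the usual podman/docker inspect field, so treat it as an assumption and verify against your podman version:

    import subprocess

    # Hedged: assumes podman exposes healthcheck results under
    # .State.Health.Status, as it does for docker-style healthchecks.
    status = subprocess.check_output(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"], text=True).strip()
    print(status)  # expected "healthy", matching health_status above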
Nov 29 03:08:08 np0005539550 nova_compute[257631]: 2025-11-29 08:08:08.614 257641 DEBUG oslo_concurrency.lockutils [None req-2df026e6-8879-4831-a24f-98fd8f10dbec 104aea18c5154615b602f032bdb49681 90c23935e0214785a9dc5061b91cf29c - - default default] Lock "fb74110a-d0bc-4766-a5c7-5def485f388f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.461s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:08.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:08 np0005539550 podman[301461]: 2025-11-29 08:08:08.829070348 +0000 UTC m=+0.042324952 container create 5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:08:08 np0005539550 systemd[1]: Started libpod-conmon-5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996.scope.
Nov 29 03:08:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:08:08 np0005539550 podman[301461]: 2025-11-29 08:08:08.895241909 +0000 UTC m=+0.108496533 container init 5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:08:08 np0005539550 podman[301461]: 2025-11-29 08:08:08.901952757 +0000 UTC m=+0.115207361 container start 5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:08:08 np0005539550 podman[301461]: 2025-11-29 08:08:08.905154307 +0000 UTC m=+0.118408911 container attach 5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:08:08 np0005539550 podman[301461]: 2025-11-29 08:08:08.811394865 +0000 UTC m=+0.024649499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:08:08 np0005539550 exciting_feistel[301478]: 167 167
Nov 29 03:08:08 np0005539550 systemd[1]: libpod-5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996.scope: Deactivated successfully.
Nov 29 03:08:08 np0005539550 podman[301461]: 2025-11-29 08:08:08.907629899 +0000 UTC m=+0.120884503 container died 5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:08:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b9868690fb9b86421040e040b42b7993807586e7d48e9b30be311ed994624c8a-merged.mount: Deactivated successfully.
Nov 29 03:08:08 np0005539550 podman[301461]: 2025-11-29 08:08:08.946012912 +0000 UTC m=+0.159267516 container remove 5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:08:08 np0005539550 systemd[1]: libpod-conmon-5d73fb23fd412b5a3e3a12e4541752ec27149973c0d029d68738f99991e72996.scope: Deactivated successfully.
Nov 29 03:08:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 164 MiB data, 654 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.7 MiB/s wr, 222 op/s
Nov 29 03:08:09 np0005539550 podman[301504]: 2025-11-29 08:08:09.09697243 +0000 UTC m=+0.039842571 container create 08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_khayyam, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:08:09 np0005539550 systemd[1]: Started libpod-conmon-08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802.scope.
Nov 29 03:08:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb4e67eacebd7667b8ef97000569e0270feea93e33c8ec6f7439ac0196f9c2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb4e67eacebd7667b8ef97000569e0270feea93e33c8ec6f7439ac0196f9c2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb4e67eacebd7667b8ef97000569e0270feea93e33c8ec6f7439ac0196f9c2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb4e67eacebd7667b8ef97000569e0270feea93e33c8ec6f7439ac0196f9c2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:09 np0005539550 podman[301504]: 2025-11-29 08:08:09.08066108 +0000 UTC m=+0.023531241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:08:09 np0005539550 podman[301504]: 2025-11-29 08:08:09.1850845 +0000 UTC m=+0.127954661 container init 08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:08:09 np0005539550 podman[301504]: 2025-11-29 08:08:09.192496276 +0000 UTC m=+0.135366417 container start 08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:08:09 np0005539550 podman[301504]: 2025-11-29 08:08:09.196189769 +0000 UTC m=+0.139059930 container attach 08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:08:09 np0005539550 nova_compute[257631]: 2025-11-29 08:08:09.685 257641 DEBUG nova.compute.manager [req-0da354c8-69df-451f-aa86-7fbee57c6270 req-ed4f7fc1-cedf-4f55-9969-ca03280b4aeb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Received event network-vif-deleted-b6dd53f5-33a7-4fde-86d5-27a3e78dc566 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]: {
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:    "0": [
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:        {
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "devices": [
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "/dev/loop3"
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            ],
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "lv_name": "ceph_lv0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "lv_size": "7511998464",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "name": "ceph_lv0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "tags": {
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.cluster_name": "ceph",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.crush_device_class": "",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.encrypted": "0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.osd_id": "0",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.type": "block",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:                "ceph.vdo": "0"
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            },
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "type": "block",
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:            "vg_name": "ceph_vg0"
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:        }
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]:    ]
Nov 29 03:08:09 np0005539550 festive_khayyam[301520]: }
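The JSON printed by festive_khayyam has the shape of ceph-volume "lvm list" output in JSON format: a map of OSD id to logical volumes, each carrying ceph.* tags. A short sketch that extracts the OSD id, type, and backing devices from a blob of that shape; the raw string below is abridged from the block above, and in practice it would be the full captured output:

    import json

    raw = """{"0": [{"devices": ["/dev/loop3"],
                     "lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "tags": {"ceph.osd_id": "0",
                              "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
                              "ceph.type": "block"}}]}"""

    # Walk the osd_id -> [lv, ...] map and print what each LV backs.
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["tags"]["ceph.type"], lv["lv_path"], lv["devices"])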
Nov 29 03:08:09 np0005539550 systemd[1]: libpod-08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802.scope: Deactivated successfully.
Nov 29 03:08:09 np0005539550 podman[301504]: 2025-11-29 08:08:09.97550861 +0000 UTC m=+0.918378751 container died 08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:08:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3eb4e67eacebd7667b8ef97000569e0270feea93e33c8ec6f7439ac0196f9c2a-merged.mount: Deactivated successfully.
Nov 29 03:08:10 np0005539550 podman[301504]: 2025-11-29 08:08:10.035577007 +0000 UTC m=+0.978447148 container remove 08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_khayyam, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:08:10 np0005539550 systemd[1]: libpod-conmon-08db6b36528f233e96cb5b3fa6f7c0f62d8dc198227975d90087fb66afa5c802.scope: Deactivated successfully.
Nov 29 03:08:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:10.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:10 np0005539550 nova_compute[257631]: 2025-11-29 08:08:10.421 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:10 np0005539550 podman[301682]: 2025-11-29 08:08:10.621677201 +0000 UTC m=+0.040570219 container create ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:08:10 np0005539550 systemd[1]: Started libpod-conmon-ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840.scope.
Nov 29 03:08:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:10 np0005539550 podman[301682]: 2025-11-29 08:08:10.604252454 +0000 UTC m=+0.023145492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:08:10 np0005539550 podman[301682]: 2025-11-29 08:08:10.700743074 +0000 UTC m=+0.119636102 container init ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 29 03:08:10 np0005539550 podman[301682]: 2025-11-29 08:08:10.706273643 +0000 UTC m=+0.125166641 container start ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:08:10 np0005539550 podman[301682]: 2025-11-29 08:08:10.709559526 +0000 UTC m=+0.128452544 container attach ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:08:10 np0005539550 wizardly_germain[301698]: 167 167
Nov 29 03:08:10 np0005539550 systemd[1]: libpod-ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840.scope: Deactivated successfully.
Nov 29 03:08:10 np0005539550 podman[301682]: 2025-11-29 08:08:10.71131973 +0000 UTC m=+0.130212738 container died ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:08:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cbba5b70837d55e75a48995d870704716a4fec331b735ddd383fe99891b66738-merged.mount: Deactivated successfully.
Nov 29 03:08:10 np0005539550 podman[301682]: 2025-11-29 08:08:10.744899762 +0000 UTC m=+0.163792770 container remove ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:08:10 np0005539550 systemd[1]: libpod-conmon-ae6ac129c0c9a23546e46e35b7686107c772013aa0ad7b5c9161b605fe7f7840.scope: Deactivated successfully.
Nov 29 03:08:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:10.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:10 np0005539550 podman[301721]: 2025-11-29 08:08:10.927957135 +0000 UTC m=+0.039716658 container create 9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:08:10 np0005539550 systemd[1]: Started libpod-conmon-9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f.scope.
Nov 29 03:08:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0235956e512d874c97dbaf0c2df18ad857fc26f0c83af3b8513a5e105e456179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0235956e512d874c97dbaf0c2df18ad857fc26f0c83af3b8513a5e105e456179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0235956e512d874c97dbaf0c2df18ad857fc26f0c83af3b8513a5e105e456179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0235956e512d874c97dbaf0c2df18ad857fc26f0c83af3b8513a5e105e456179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:11 np0005539550 podman[301721]: 2025-11-29 08:08:10.999791247 +0000 UTC m=+0.111550790 container init 9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:08:11 np0005539550 podman[301721]: 2025-11-29 08:08:11.006585127 +0000 UTC m=+0.118344650 container start 9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:08:11 np0005539550 podman[301721]: 2025-11-29 08:08:10.913261346 +0000 UTC m=+0.025020869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:08:11 np0005539550 podman[301721]: 2025-11-29 08:08:11.010102926 +0000 UTC m=+0.121862469 container attach 9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:08:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 134 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 210 op/s
Nov 29 03:08:11 np0005539550 nova_compute[257631]: 2025-11-29 08:08:11.391 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:11.427 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:08:11 np0005539550 nova_compute[257631]: 2025-11-29 08:08:11.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:11.429 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:08:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 29 03:08:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 29 03:08:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
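The osdmap lines ("3 total, 3 up, 3 in", epochs e264/e265) can be cross-checked from any Ceph client. A hedged equivalent; the field names (num_osds, num_up_osds, num_in_osds) follow common Ceph JSON output, so verify them on your release:

    import json
    import subprocess

    # "ceph osd stat -f json" summarizes the same counts the monitor logs.
    s = json.loads(subprocess.check_output(["ceph", "osd", "stat", "-f", "json"]))
    print(s["num_osds"], s["num_up_osds"], s["num_in_osds"])  # e.g. 3 3 3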
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]: {
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]:        "osd_id": 0,
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]:        "type": "bluestore"
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]:    }
Nov 29 03:08:11 np0005539550 focused_proskuriakova[301737]: }
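focused_proskuriakova prints a second JSON view, keyed by OSD fsid and naming the bluestore device. That fsid should agree with ceph.osd_fsid in the LVM tags printed earlier; a trivial consistency check using values copied from the two blocks in this log:

    # Values copied verbatim from the two JSON blocks above.
    lvm_tag_fsid = "5dd67027-4f06-4800-93bd-47ed1a74c5e6"
    raw_view = {
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "type": "bluestore",
        },
    }
    assert lvm_tag_fsid in raw_view
    assert raw_view[lvm_tag_fsid]["osd_id"] == 0  # matches ceph.osd_id above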
Nov 29 03:08:11 np0005539550 systemd[1]: libpod-9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f.scope: Deactivated successfully.
Nov 29 03:08:11 np0005539550 conmon[301737]: conmon 9a2e38aaf0d5e2a0384f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f.scope/container/memory.events
Nov 29 03:08:11 np0005539550 podman[301721]: 2025-11-29 08:08:11.901751955 +0000 UTC m=+1.013511478 container died 9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:08:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0235956e512d874c97dbaf0c2df18ad857fc26f0c83af3b8513a5e105e456179-merged.mount: Deactivated successfully.
Nov 29 03:08:11 np0005539550 podman[301721]: 2025-11-29 08:08:11.951685458 +0000 UTC m=+1.063444971 container remove 9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:08:11 np0005539550 systemd[1]: libpod-conmon-9a2e38aaf0d5e2a0384f7d0fb4d113147022b5b1605693352eb5ac963257678f.scope: Deactivated successfully.
Nov 29 03:08:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:08:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:08:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d74717fb-d2eb-442f-ae84-13d50f345da9 does not exist
Nov 29 03:08:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 65b05333-d31c-4389-a20a-3a672f34b8f4 does not exist
Nov 29 03:08:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ac1e11f3-1c98-4dd6-b5cc-ebd0b826b192 does not exist
Nov 29 03:08:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:12.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:08:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 134 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 149 op/s
Nov 29 03:08:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:14.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.053 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 134 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 171 op/s
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.423 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:15.431 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.950 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.950 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.951 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.951 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:08:15 np0005539550 nova_compute[257631]: 2025-11-29 08:08:15.951 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
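Nova's resource audit shells out to the exact command in the line above, then reads cluster-level totals from the JSON. The same call can be reproduced directly; the stats key names are hedged, since Ceph releases differ between total_used_bytes and total_used_raw_bytes:

    import json
    import subprocess

    # Same command Nova runs in the log line above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    used = stats.get("total_used_raw_bytes", stats.get("total_used_bytes"))
    print(stats["total_bytes"], used, stats["total_avail_bytes"])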
Nov 29 03:08:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:16.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2359000411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.381 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.393 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.542 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.543 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=20.945255279541016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.544 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.544 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.604 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.605 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:08:16 np0005539550 nova_compute[257631]: 2025-11-29 08:08:16.625 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:16.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 29 03:08:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 29 03:08:16 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 29 03:08:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248941610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:17 np0005539550 nova_compute[257631]: 2025-11-29 08:08:17.055 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
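The ceph df probe above is an ordinary CLI call that the resource tracker shells out to. A minimal stand-alone sketch in Python, with the command line copied from the log (the JSON field names are an assumption based on recent Ceph releases, not something this log shows):

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    res = subprocess.run(cmd, capture_output=True, text=True, check=True)
    df = json.loads(res.stdout)
    # Cluster-wide totals live under "stats"; per-pool usage under "pools".
    print(df["stats"]["total_avail_bytes"], "avail of", df["stats"]["total_bytes"])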
Nov 29 03:08:17 np0005539550 nova_compute[257631]: 2025-11-29 08:08:17.060 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:08:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 157 MiB data, 671 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 180 op/s
Nov 29 03:08:17 np0005539550 nova_compute[257631]: 2025-11-29 08:08:17.084 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:08:17 np0005539550 nova_compute[257631]: 2025-11-29 08:08:17.116 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:08:17 np0005539550 nova_compute[257631]: 2025-11-29 08:08:17.117 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
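Worked example: the usable capacity Placement derives from the inventory reported above follows (total - reserved) * allocation_ratio per resource class. That formula is standard Placement semantics rather than anything printed in this log, so treat the sketch as illustrative:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(cap, 1))  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1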
Nov 29 03:08:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 29 03:08:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 29 03:08:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 29 03:08:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:18.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:18.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 29 03:08:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 29 03:08:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 29 03:08:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:18.940 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:18.941 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:18.941 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
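The acquire/waited/held triplets here, and in the nova_compute entries above, come from oslo.concurrency's named-lock helper. A minimal sketch of the pattern (not Neutron's or Nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Body runs with the named in-process lock held; the library
        # logs the waited/held durations seen in the lines above.
        pass

    check_child_processes()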
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 204 MiB data, 696 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.4 MiB/s wr, 367 op/s
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031597876535455053 of space, bias 1.0, pg target 0.9479362960636516 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0027793772101246976 of space, bias 1.0, pg target 0.8338131630374093 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:08:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
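The pg_autoscaler pass above is reproducible arithmetic: each pool's raw target is its share of raw capacity times its bias times the cluster-wide PG budget. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs this log reports, the logged 'pg target' values fall out directly:

    TARGET_PGS = 100 * 3  # assumed mon_target_pg_per_osd x OSD count
    pools = [
        (".mgr",               2.0538165363856318e-05, 1.0),
        ("vms",                0.0031597876535455053,  1.0),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
    ]
    for name, ratio, bias in pools:
        # Matches the logged 'pg target' values (up to float rounding).
        print(name, ratio * bias * TARGET_PGS)

The quantized value then snaps to a power of two and only moves once the ideal differs from the current pg_num by a threshold factor (3x by default), which is why every pool stays at its current count here.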
Nov 29 03:08:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 29 03:08:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 29 03:08:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.112 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.112 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.112 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.113 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.128 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.129 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.129 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.129 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.130 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
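The burst of "Running periodic task ComputeManager._*" lines is oslo.service's periodic-task machinery walking decorated methods on a manager class. A minimal sketch of that mechanism (method and spacing chosen for illustration, not Nova's real class):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Tasks(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=10)  # run every 10 seconds
        def _check_instance_build_time(self, context):
            pass

    tasks = Tasks(cfg.CONF)
    # Each call logs and dispatches any tasks whose interval has elapsed.
    tasks.run_periodic_tasks(context=None)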
Nov 29 03:08:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:20.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.394 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403685.3924398, fb74110a-d0bc-4766-a5c7-5def485f388f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.395 257641 INFO nova.compute.manager [-] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] VM Stopped (Lifecycle Event)
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.418 257641 DEBUG nova.compute.manager [None req-6ad5f0d8-a1bd-40d7-9fbf-370b4441bef1 - - - - - -] [instance: fb74110a-d0bc-4766-a5c7-5def485f388f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:08:20 np0005539550 nova_compute[257631]: 2025-11-29 08:08:20.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:20.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 204 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.4 MiB/s wr, 288 op/s
Nov 29 03:08:21 np0005539550 nova_compute[257631]: 2025-11-29 08:08:21.394 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:22 np0005539550 podman[301922]: 2025-11-29 08:08:22.352753609 +0000 UTC m=+0.079696561 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller)
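The health_status=healthy event above is podman's periodic healthcheck of the ovn_controller container (the check command is mounted at /openstack/healthcheck per the config_data). The same probe can be triggered by hand with podman-healthcheck-run(1); a small sketch:

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy")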
Nov 29 03:08:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:22.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:22 np0005539550 nova_compute[257631]: 2025-11-29 08:08:22.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:22 np0005539550 nova_compute[257631]: 2025-11-29 08:08:22.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:22 np0005539550 nova_compute[257631]: 2025-11-29 08:08:22.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:08:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 204 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.6 MiB/s wr, 204 op/s
Nov 29 03:08:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:24.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:24.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 138 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.8 MiB/s wr, 232 op/s
Nov 29 03:08:25 np0005539550 nova_compute[257631]: 2025-11-29 08:08:25.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:26.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:26 np0005539550 nova_compute[257631]: 2025-11-29 08:08:26.397 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 29 03:08:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 29 03:08:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 29 03:08:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 52 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 336 KiB/s wr, 123 op/s
Nov 29 03:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:28.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 41 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 294 KiB/s wr, 125 op/s
Nov 29 03:08:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:30.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.440 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.719 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.719 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:30.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.801 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.862934) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710863037, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2388, "num_deletes": 270, "total_data_size": 3830009, "memory_usage": 3882688, "flush_reason": "Manual Compaction"}
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710887010, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3744075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33890, "largest_seqno": 36277, "table_properties": {"data_size": 3733166, "index_size": 7019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23617, "raw_average_key_size": 21, "raw_value_size": 3711127, "raw_average_value_size": 3370, "num_data_blocks": 300, "num_entries": 1101, "num_filter_entries": 1101, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403542, "oldest_key_time": 1764403542, "file_creation_time": 1764403710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 24124 microseconds, and 10273 cpu microseconds.
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.887073) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3744075 bytes OK
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.887094) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.889066) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.889097) EVENT_LOG_v1 {"time_micros": 1764403710889091, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.889127) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3819959, prev total WAL file size 3819959, number of live WAL files 2.
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.890390) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3656KB)], [74(8700KB)]
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710890476, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12652969, "oldest_snapshot_seqno": -1}
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.955 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.955 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6917 keys, 10686417 bytes, temperature: kUnknown
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710961847, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10686417, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10639701, "index_size": 28300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 178689, "raw_average_key_size": 25, "raw_value_size": 10515013, "raw_average_value_size": 1520, "num_data_blocks": 1124, "num_entries": 6917, "num_filter_entries": 6917, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403710, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.962168) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10686417 bytes
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.963559) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.0 rd, 149.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.9) OK, records in: 7461, records dropped: 544 output_compression: NoCompression
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.963848) EVENT_LOG_v1 {"time_micros": 1764403710963590, "job": 42, "event": "compaction_finished", "compaction_time_micros": 71493, "compaction_time_cpu_micros": 32804, "output_level": 6, "num_output_files": 1, "total_output_size": 10686417, "num_input_records": 7461, "num_output_records": 6917, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.963 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:08:30 np0005539550 nova_compute[257631]: 2025-11-29 08:08:30.964 257641 INFO nova.compute.claims [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710965158, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403710968039, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.890260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.968102) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.968107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.968108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.968110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:08:30 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:08:30.968111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
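The compaction summary for JOB 42 above can be checked directly from the logged byte counts: write amplification is output bytes over the freshly flushed L0 input, and read-write amplification adds the re-read L6 input plus the output on top of that:

    in_l0 = 3744075             # table 76, the flushed L0 input
    in_l6 = 12652969 - in_l0    # input_data_size minus L0 -> table 74 (L6)
    out   = 10686417            # table 77, the compacted output
    print(round(out / in_l0, 1))                    # 2.9  write-amplify
    print(round((in_l0 + in_l6 + out) / in_l0, 1))  # 6.2  read-write-amplify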
Nov 29 03:08:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 41 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 19 KiB/s wr, 89 op/s
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.220 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.399 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3232403658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.656 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.662 257641 DEBUG nova.compute.provider_tree [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.681 257641 DEBUG nova.scheduler.client.report [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.715 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.716 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.983 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:08:31 np0005539550 nova_compute[257631]: 2025-11-29 08:08:31.983 257641 DEBUG nova.network.neutron [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.017 257641 INFO nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.041 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:08:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:32.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.272 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.272 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.307 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.352 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.353 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.353 257641 INFO nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Creating image(s)
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.376 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.402 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.431 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.434 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.479 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.479 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.485 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.485 257641 INFO nova.compute.claims [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.489 257641 DEBUG nova.policy [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a010a95085342c5ae9a02f15b334fad', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c57433fd3834430904b1908f24f3f2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.494 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.495 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.495 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.496 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.526 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.532 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 a57a7edf-d886-474e-84d1-03a4c655e64e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:32 np0005539550 nova_compute[257631]: 2025-11-29 08:08:32.781 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:32.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 41 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 19 KiB/s wr, 89 op/s
Nov 29 03:08:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3563426882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.225 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
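Annotation: ceph df --format=json is how the resource tracker measures Ceph-backed disk capacity before claiming resources (the mon dispatch for this command is visible in the ceph-mon lines above). A sketch of running and reading it; the stats key names below are the standard ceph df JSON fields, stated here as an assumption since the log only shows the exit code:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    stats = json.loads(raw)["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.1f} GiB free '
          f'of {stats["total_bytes"] / 2**30:.1f} GiB')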
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.231 257641 DEBUG nova.compute.provider_tree [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.353 257641 DEBUG nova.scheduler.client.report [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
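Annotation: the inventory dict above fixes what Placement will let the scheduler pack onto this node; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out with the numbers from the log line:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1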
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.572 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.573 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.735 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 a57a7edf-d886-474e-84d1-03a4c655e64e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.771 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.772 257641 DEBUG nova.network.neutron [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.827 257641 INFO nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.833 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] resizing rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
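Annotation: the resize target of 1073741824 bytes is the flavor's root disk converted to bytes: this instance uses m1.nano (root_gb=1, per the Flavor dump further down), and Nova grows the freshly imported RBD image up to the flavor size. The arithmetic, plus the CLI equivalent as a hedged sketch (Nova itself calls librbd through rbd_utils rather than the CLI):

    root_gb = 1                      # m1.nano flavor, from the log
    size_bytes = root_gb * 1024**3
    assert size_bytes == 1073741824  # the exact value in the resize line
    # CLI equivalent of the same operation:
    #   rbd resize --pool vms --size 1G a57a7edf-d886-474e-84d1-03a4c655e64e_disk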
Nov 29 03:08:33 np0005539550 nova_compute[257631]: 2025-11-29 08:08:33.922 257641 DEBUG nova.network.neutron [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Successfully created port: a6d55e23-5b85-46b2-9b38-cad1e55da096 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.004 257641 DEBUG nova.objects.instance [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'migration_context' on Instance uuid a57a7edf-d886-474e-84d1-03a4c655e64e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.006 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.045 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.046 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Ensure instance console log exists: /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.046 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.046 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.046 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.126 257641 DEBUG nova.policy [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ab0114aca6149af994da2b9052c1368', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8384e5887c0948f5876c019d50057152', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
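Annotation: this policy failure is benign. network:attach_external_network defaults to an admin-only rule, and the token here carries only reader/member roles, so Nova simply excludes external networks from the port candidates and the build continues. In spirit (the real evaluation goes through oslo.policy; this is a plain-Python illustration, not the actual rule engine):

    creds = {"roles": ["reader", "member"], "is_admin": False}
    allowed = creds["is_admin"] or "admin" in creds["roles"]
    print(allowed)   # False -> the DEBUG line above; the build proceeds anyway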
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.130 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.131 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.131 257641 INFO nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Creating image(s)
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.155 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:34.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.182 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.207 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.210 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.270 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.270 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.271 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.271 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.297 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.301 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c24a72ed-78bb-4305-abf0-04f30042a9ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:34.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:34 np0005539550 nova_compute[257631]: 2025-11-29 08:08:34.990 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 c24a72ed-78bb-4305-abf0-04f30042a9ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.065 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] resizing rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:08:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 74 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.5 MiB/s wr, 49 op/s
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.474 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.554 257641 DEBUG nova.objects.instance [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.590 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.590 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Ensure instance console log exists: /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.591 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.591 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:08:35 np0005539550 nova_compute[257631]: 2025-11-29 08:08:35.591 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:08:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.208 257641 DEBUG nova.network.neutron [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Successfully updated port: a6d55e23-5b85-46b2-9b38-cad1e55da096 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.292 257641 DEBUG nova.compute.manager [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received event network-changed-a6d55e23-5b85-46b2-9b38-cad1e55da096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.292 257641 DEBUG nova.compute.manager [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Refreshing instance network info cache due to event network-changed-a6d55e23-5b85-46b2-9b38-cad1e55da096. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.292 257641 DEBUG oslo_concurrency.lockutils [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a57a7edf-d886-474e-84d1-03a4c655e64e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.293 257641 DEBUG oslo_concurrency.lockutils [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a57a7edf-d886-474e-84d1-03a4c655e64e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.293 257641 DEBUG nova.network.neutron [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Refreshing network info cache for port a6d55e23-5b85-46b2-9b38-cad1e55da096 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.306 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "refresh_cache-a57a7edf-d886-474e-84d1-03a4c655e64e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.462 257641 DEBUG nova.network.neutron [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Successfully created port: d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.531 257641 DEBUG nova.network.neutron [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:08:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:36.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.955 257641 DEBUG nova.network.neutron [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.983 257641 DEBUG oslo_concurrency.lockutils [req-b236aaca-2c58-4eb4-98d3-e6e4778932ad req-1ab4f9a8-6959-4f3b-bb4b-fcdaae937cfb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a57a7edf-d886-474e-84d1-03a4c655e64e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.983 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquired lock "refresh_cache-a57a7edf-d886-474e-84d1-03a4c655e64e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:08:36 np0005539550 nova_compute[257631]: 2025-11-29 08:08:36.984 257641 DEBUG nova.network.neutron [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:08:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 112 MiB data, 635 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 3.1 MiB/s wr, 49 op/s
Nov 29 03:08:37 np0005539550 nova_compute[257631]: 2025-11-29 08:08:37.205 257641 DEBUG nova.network.neutron [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:08:37 np0005539550 nova_compute[257631]: 2025-11-29 08:08:37.998 257641 DEBUG nova.network.neutron [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Successfully updated port: d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.020 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-c24a72ed-78bb-4305-abf0-04f30042a9ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.021 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-c24a72ed-78bb-4305-abf0-04f30042a9ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.021 257641 DEBUG nova.network.neutron [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.159 257641 DEBUG nova.compute.manager [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-changed-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.159 257641 DEBUG nova.compute.manager [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Refreshing instance network info cache due to event network-changed-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.159 257641 DEBUG oslo_concurrency.lockutils [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-c24a72ed-78bb-4305-abf0-04f30042a9ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:08:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:38.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.198 257641 DEBUG nova.network.neutron [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.504 257641 DEBUG nova.network.neutron [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Updating instance_info_cache with network_info: [{"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.637 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Releasing lock "refresh_cache-a57a7edf-d886-474e-84d1-03a4c655e64e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.637 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Instance network_info: |[{"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
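Annotation: the network_info blob that closes the allocation phase carries everything the libvirt VIF driver needs (port ID, MAC, tap device name, subnet, MTU). A sketch of pulling out the commonly needed fields, with the structure abbreviated from the log line above:

    network_info = [{
        "id": "a6d55e23-5b85-46b2-9b38-cad1e55da096",
        "address": "fa:16:3e:21:47:78",
        "devname": "tapa6d55e23-5b",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{"cidr": "10.100.0.0/28",
                         "ips": [{"address": "10.100.0.13"}]}],
        },
    }]
    vif = network_info[0]
    print(vif["devname"], vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"])
    # tapa6d55e23-5b fa:16:3e:21:47:78 10.100.0.13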
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.639 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Start _get_guest_xml network_info=[{"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.643 257641 WARNING nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.649 257641 DEBUG nova.virt.libvirt.host [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.650 257641 DEBUG nova.virt.libvirt.host [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.656 257641 DEBUG nova.virt.libvirt.host [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.656 257641 DEBUG nova.virt.libvirt.host [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
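Annotation: the missing-then-found pair above is the v1-then-v2 cgroups search. On this host the legacy v1 cpu controller is absent and the unified v2 hierarchy advertises one. The v2 half of that probe, as a sketch:

    from pathlib import Path

    controllers = Path("/sys/fs/cgroup/cgroup.controllers")   # exists on cgroup v2 only
    if controllers.exists():
        print("cpu" in controllers.read_text().split())       # True on this host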
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.657 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.657 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.658 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.658 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.658 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.659 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.659 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.659 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.660 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.660 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.660 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.660 257641 DEBUG nova.virt.hardware [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
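Annotation: the topology search above is exhaustive but trivial here. With no flavor or image constraints (all limits and preferences are 0:0:0), every factorisation of the vCPU count into sockets*cores*threads is a candidate, and for one vCPU that leaves only 1:1:1. The same enumeration as a sketch:

    vcpus = 1   # m1.nano
    topologies = [(s, c, t)
                  for s in range(1, vcpus + 1)
                  for c in range(1, vcpus + 1)
                  for t in range(1, vcpus + 1)
                  if s * c * t == vcpus]
    print(topologies)   # [(1, 1, 1)] -- matches "Got 1 possible topologies"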
Nov 29 03:08:38 np0005539550 nova_compute[257631]: 2025-11-29 08:08:38.663 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:38.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141259066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 134 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.5 MiB/s wr, 66 op/s
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.095 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
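Annotation: ceph mon dump gives Nova the monitor addresses it embeds as <host> elements in the rbd disk source of the guest XML it is about to generate. A sketch of the same call; the mons/name/addr key names are the standard mon dump JSON fields, stated here as an assumption since the log only shows the exit code:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    for mon in json.loads(raw)["mons"]:
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))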
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.123 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.129 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:08:39 np0005539550 podman[302448]: 2025-11-29 08:08:39.321684281 +0000 UTC m=+0.053499133 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:08:39 np0005539550 podman[302429]: 2025-11-29 08:08:39.325885307 +0000 UTC m=+0.055952875 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
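Annotation: the two podman records above are periodic container health checks firing on schedule, both reporting health_status=healthy (ovn_metadata_agent and multipathd, with zero failing streak). The same check can be run by hand:

    import subprocess

    for name in ("ovn_metadata_agent", "multipathd"):
        # exit status 0 == healthy, matching health_status=healthy in the log
        subprocess.run(["podman", "healthcheck", "run", name], check=True)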
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.474 257641 DEBUG nova.network.neutron [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Updating instance_info_cache with network_info: [{"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.548 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-c24a72ed-78bb-4305-abf0-04f30042a9ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.549 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance network_info: |[{"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.549 257641 DEBUG oslo_concurrency.lockutils [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-c24a72ed-78bb-4305-abf0-04f30042a9ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.550 257641 DEBUG nova.network.neutron [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Refreshing network info cache for port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.552 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Start _get_guest_xml network_info=[{"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.557 257641 WARNING nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:08:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589913402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.562 257641 DEBUG nova.virt.libvirt.host [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.564 257641 DEBUG nova.virt.libvirt.host [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.568 257641 DEBUG nova.virt.libvirt.host [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.568 257641 DEBUG nova.virt.libvirt.host [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.570 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.570 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.570 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.571 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.571 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.571 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.571 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.572 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.572 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.572 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.572 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.572 257641 DEBUG nova.virt.hardware [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.576 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.609 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.611 257641 DEBUG nova.virt.libvirt.vif [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1631047494',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1631047494',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1631047494',id=72,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-iul0d2r8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:32Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=a57a7edf-d886-474e-84d1-03a4c655e64e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.612 257641 DEBUG nova.network.os_vif_util [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.612 257641 DEBUG nova.network.os_vif_util [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:47:78,bridge_name='br-int',has_traffic_filtering=True,id=a6d55e23-5b85-46b2-9b38-cad1e55da096,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d55e23-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.614 257641 DEBUG nova.objects.instance [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'pci_devices' on Instance uuid a57a7edf-d886-474e-84d1-03a4c655e64e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.629 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <uuid>a57a7edf-d886-474e-84d1-03a4c655e64e</uuid>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <name>instance-00000048</name>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1631047494</nova:name>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:08:38</nova:creationTime>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:user uuid="1a010a95085342c5ae9a02f15b334fad">tempest-ImagesOneServerNegativeTestJSON-167104479-project-member</nova:user>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:project uuid="5c57433fd3834430904b1908f24f3f2f">tempest-ImagesOneServerNegativeTestJSON-167104479</nova:project>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <nova:port uuid="a6d55e23-5b85-46b2-9b38-cad1e55da096">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <entry name="serial">a57a7edf-d886-474e-84d1-03a4c655e64e</entry>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <entry name="uuid">a57a7edf-d886-474e-84d1-03a4c655e64e</entry>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a57a7edf-d886-474e-84d1-03a4c655e64e_disk">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a57a7edf-d886-474e-84d1-03a4c655e64e_disk.config">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:21:47:78"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <target dev="tapa6d55e23-5b"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/console.log" append="off"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:08:39 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:08:39 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:08:39 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:08:39 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.630 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Preparing to wait for external event network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.630 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.630 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.630 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.631 257641 DEBUG nova.virt.libvirt.vif [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1631047494',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1631047494',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1631047494',id=72,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-iul0d2r8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:32Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=a57a7edf-d886-474e-84d1-03a4c655e64e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.632 257641 DEBUG nova.network.os_vif_util [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.632 257641 DEBUG nova.network.os_vif_util [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:47:78,bridge_name='br-int',has_traffic_filtering=True,id=a6d55e23-5b85-46b2-9b38-cad1e55da096,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d55e23-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.632 257641 DEBUG os_vif [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:47:78,bridge_name='br-int',has_traffic_filtering=True,id=a6d55e23-5b85-46b2-9b38-cad1e55da096,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d55e23-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.634 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.634 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.638 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.638 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6d55e23-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.639 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6d55e23-5b, col_values=(('external_ids', {'iface-id': 'a6d55e23-5b85-46b2-9b38-cad1e55da096', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:47:78', 'vm-uuid': 'a57a7edf-d886-474e-84d1-03a4c655e64e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.640 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:39 np0005539550 NetworkManager[49039]: <info>  [1764403719.6415] manager: (tapa6d55e23-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.643 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.646 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.648 257641 INFO os_vif [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:47:78,bridge_name='br-int',has_traffic_filtering=True,id=a6d55e23-5b85-46b2-9b38-cad1e55da096,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d55e23-5b')#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.706 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.706 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.707 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] No VIF found with MAC fa:16:3e:21:47:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.707 257641 INFO nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Using config drive#033[00m
Nov 29 03:08:39 np0005539550 nova_compute[257631]: 2025-11-29 08:08:39.755 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909474644' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.043 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.079 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.086 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:40.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.442 257641 INFO nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Creating config drive at /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/disk.config#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.448 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gxhf46c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:08:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387627607' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.529 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.532 257641 DEBUG nova.virt.libvirt.vif [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-68286740',display_name='tempest-ServerDiskConfigTestJSON-server-68286740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-68286740',id=73,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-usdqo0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:34Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=c24a72ed-78bb-4305-abf0-04f30042a9ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.533 257641 DEBUG nova.network.os_vif_util [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.534 257641 DEBUG nova.network.os_vif_util [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.536 257641 DEBUG nova.objects.instance [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.579 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <uuid>c24a72ed-78bb-4305-abf0-04f30042a9ad</uuid>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <name>instance-00000049</name>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-68286740</nova:name>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:08:39</nova:creationTime>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <nova:port uuid="d57beffb-4cf5-44cf-ba6d-8f78a18b0f51">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <entry name="serial">c24a72ed-78bb-4305-abf0-04f30042a9ad</entry>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <entry name="uuid">c24a72ed-78bb-4305-abf0-04f30042a9ad</entry>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/c24a72ed-78bb-4305-abf0-04f30042a9ad_disk">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:2f:09:6b"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <target dev="tapd57beffb-4c"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/console.log" append="off"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:08:40 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:08:40 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:08:40 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:08:40 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
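[Annotation] The block above is the complete libvirt domain XML that Nova's libvirt driver generated for instance c24a72ed-78bb-4305-abf0-04f30042a9ad (_get_guest_xml): a q35 guest with both disks backed by RBD in the "vms" pool against three Ceph monitors. A minimal sketch of parsing such a dump offline with Python's standard library to pull out the RBD disk sources; the file name domain.xml is hypothetical (it stands in for the XML captured from this log).

    import xml.etree.ElementTree as ET

    # Hypothetical: domain.xml holds the <domain> dump copied from the log above.
    root = ET.parse("domain.xml").getroot()

    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        if src is not None and src.get("protocol") == "rbd":
            mons = [f'{h.get("name")}:{h.get("port")}' for h in src.findall("host")]
            print(tgt.get("dev"), src.get("name"), ",".join(mons))
    # For this domain: vda -> vms/c24a72ed-..._disk and sda -> vms/c24a72ed-..._disk.config,
    # each listing the monitors 192.168.122.100/.102/.101 on port 6789.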
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.580 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Preparing to wait for external event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.581 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.581 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.581 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
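[Annotation] The three oslo_concurrency lines above are one acquire/release cycle of the per-instance "<uuid>-events" lock that guards Nova's event registry while it records the expected network-vif-plugged event. A minimal sketch of the same pattern, assuming only oslo.concurrency's public lockutils.lock() context manager (an in-process lock by default, matching the inner() wrapper in the log):

    from oslo_concurrency import lockutils

    events = {}  # stand-in for Nova's per-instance event registry

    def prepare_event(instance_uuid, event_name):
        # Lock name mirrors the "<instance uuid>-events" convention in the log.
        with lockutils.lock(f"{instance_uuid}-events"):
            events.setdefault(instance_uuid, set()).add(event_name)

    prepare_event("c24a72ed-78bb-4305-abf0-04f30042a9ad",
                  "network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51")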
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.582 257641 DEBUG nova.virt.libvirt.vif [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:08:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-68286740',display_name='tempest-ServerDiskConfigTestJSON-server-68286740',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-68286740',id=73,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-usdqo0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:34Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=c24a72ed-78bb-4305-abf0-04f30042a9ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.582 257641 DEBUG nova.network.os_vif_util [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.583 257641 DEBUG nova.network.os_vif_util [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.583 257641 DEBUG os_vif [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
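[Annotation] Lines above show nova.network.os_vif_util converting Nova's VIF dict into an os-vif VIFOpenVSwitch object and handing it to os_vif.plug(). A minimal sketch of that object model, assuming the usual os-vif versioned-object fields; values are copied from the log, the instance name is hypothetical, and actually calling plug() requires root/privsep and a loaded "ovs" plugin:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the os-vif plugins ("ovs" here, per the log)

    my_vif = vif.VIFOpenVSwitch(
        id="d57beffb-4cf5-44cf-ba6d-8f78a18b0f51",
        address="fa:16:3e:2f:09:6b",
        vif_name="tapd57beffb-4c",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="d57beffb-4cf5-44cf-ba6d-8f78a18b0f51"),
        network=network.Network(id="65f88c5a-8801-4bc1-9eed-15e2bab4717d",
                                bridge="br-int"),
    )
    info = instance_info.InstanceInfo(
        uuid="c24a72ed-78bb-4305-abf0-04f30042a9ad",
        name="instance-00000000")  # hypothetical libvirt domain name
    os_vif.plug(my_vif, info)      # delegates to the "ovs" plugin, as logged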
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.584 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.585 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.585 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
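[Annotation] AddBridgeCommand with may_exist=True is ovsdbapp's idempotent bridge creation; "Transaction caused no change" simply means br-int already existed. A minimal sketch of issuing the same transaction directly with ovsdbapp, assuming a local ovsdb-server at the standard socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Same command as the logged txn: no-op if br-int is already there.
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))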
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.586 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1gxhf46c" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.615 257641 DEBUG nova.storage.rbd_utils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] rbd image a57a7edf-d886-474e-84d1-03a4c655e64e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.619 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/disk.config a57a7edf-d886-474e-84d1-03a4c655e64e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
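[Annotation] The two processutils commands above are the config-drive flow for instance a57a7edf-...: mkisofs builds an ISO9660 volume labelled "config-2" from a temporary metadata directory, then rbd import pushes it into the "vms" pool so the domain's sata cdrom can reference it over RBD. A minimal sketch reproducing those exact commands with subprocess; /tmp/metadata-dir is a hypothetical stand-in for the tmpdir in the log:

    import subprocess

    inst = "a57a7edf-d886-474e-84d1-03a4c655e64e"  # from the log
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # Build the ISO9660 "config-2" volume (same flags as the logged command).
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/metadata-dir"],
                   check=True)

    # Import it into Ceph so libvirt can attach it as the cdrom device.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)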
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.655 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.655 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd57beffb-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.656 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd57beffb-4c, col_values=(('external_ids', {'iface-id': 'd57beffb-4cf5-44cf-ba6d-8f78a18b0f51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:09:6b', 'vm-uuid': 'c24a72ed-78bb-4305-abf0-04f30042a9ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
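[Annotation] AddPortCommand plus DbSetCommand is how the os-vif ovs plugin hands the port to OVN: the tap device is added to br-int, and stamping external_ids:iface-id with the Neutron port UUID is what lets ovn-controller match the OVS interface to its logical port (the claim appears further down). A rough ovs-vsctl equivalent of that one transaction, as a subprocess sketch with the values from the log:

    import subprocess

    subprocess.run([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tapd57beffb-4c", "--",
        "set", "Interface", "tapd57beffb-4c",
        "external_ids:iface-id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:2f:09:6b",
        "external_ids:vm-uuid=c24a72ed-78bb-4305-abf0-04f30042a9ad",
    ], check=True)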
Nov 29 03:08:40 np0005539550 NetworkManager[49039]: <info>  [1764403720.6589] manager: (tapd57beffb-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.661 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.664 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:40 np0005539550 nova_compute[257631]: 2025-11-29 08:08:40.665 257641 INFO os_vif [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c')#033[00m
Nov 29 03:08:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:40.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.017 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.018 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.018 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:2f:09:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.019 257641 INFO nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Using config drive#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.050 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 134 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.134 257641 DEBUG oslo_concurrency.processutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/disk.config a57a7edf-d886-474e-84d1-03a4c655e64e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.135 257641 INFO nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Deleting local config drive /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e/disk.config because it was imported into RBD.#033[00m
Nov 29 03:08:41 np0005539550 NetworkManager[49039]: <info>  [1764403721.1933] manager: (tapa6d55e23-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/98)
Nov 29 03:08:41 np0005539550 kernel: tapa6d55e23-5b: entered promiscuous mode
Nov 29 03:08:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:41Z|00219|binding|INFO|Claiming lport a6d55e23-5b85-46b2-9b38-cad1e55da096 for this chassis.
Nov 29 03:08:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:41Z|00220|binding|INFO|a6d55e23-5b85-46b2-9b38-cad1e55da096: Claiming fa:16:3e:21:47:78 10.100.0.13
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.198 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.206 257641 DEBUG nova.network.neutron [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Updated VIF entry in instance network info cache for port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.207 257641 DEBUG nova.network.neutron [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Updating instance_info_cache with network_info: [{"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:41 np0005539550 systemd-udevd[302646]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:41 np0005539550 systemd-machined[216673]: New machine qemu-34-instance-00000048.
Nov 29 03:08:41 np0005539550 NetworkManager[49039]: <info>  [1764403721.2612] device (tapa6d55e23-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:08:41 np0005539550 NetworkManager[49039]: <info>  [1764403721.2624] device (tapa6d55e23-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:08:41 np0005539550 systemd[1]: Started Virtual Machine qemu-34-instance-00000048.
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.279 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:41Z|00221|binding|INFO|Setting lport a6d55e23-5b85-46b2-9b38-cad1e55da096 ovn-installed in OVS
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:41Z|00222|binding|INFO|Setting lport a6d55e23-5b85-46b2-9b38-cad1e55da096 up in Southbound
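[Annotation] The three ovn-controller lines are the full chassis-claim sequence for lport a6d55e23-...: claim the port for this chassis, mark it ovn-installed in OVS, then set it up in the Southbound DB. A minimal sketch for verifying the result against the Southbound DB, assuming ovn-sbctl is configured to reach the SB connection:

    import json
    import subprocess

    # Query the Port_Binding row for the logical port claimed above;
    # 'up' should read true once the claim has landed.
    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=a6d55e23-5b85-46b2-9b38-cad1e55da096"],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout))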
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.344 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:47:78 10.100.0.13'], port_security=['fa:16:3e:21:47:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a57a7edf-d886-474e-84d1-03a4c655e64e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c57433fd3834430904b1908f24f3f2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ebe351b1-d353-46d5-990d-7ccc905f95cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d7455e-493a-4184-8d60-e2fd6ef2393b, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a6d55e23-5b85-46b2-9b38-cad1e55da096) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.345 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a6d55e23-5b85-46b2-9b38-cad1e55da096 in datapath 0de30c6a-82ca-4f9f-a37d-5949a70a385d bound to our chassis#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.347 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0de30c6a-82ca-4f9f-a37d-5949a70a385d#033[00m
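[Annotation] The "Matched UPDATE: PortBindingUpdatedEvent(...)" line shows the metadata agent's ovsdbapp row event firing: a Port_Binding update whose chassis column went from empty to set triggers metadata provisioning for the datapath. A minimal sketch of that event pattern, assuming ovsdbapp's RowEvent hook names (constructor arguments mirror the repr in the log; registering the event with the IDL is omitted):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None, as logged.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # React only when the chassis column transitions from empty to set
            # (the log shows old=Port_Binding(chassis=[])).
            return bool(row.chassis) and hasattr(old, "chassis") and not old.chassis

        def run(self, event, row, old):
            print(f"Port {row.logical_port} bound to our chassis")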
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.360 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2fcda179-4df2-4074-9553-7af9c566bbd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.362 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0de30c6a-81 in ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.364 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0de30c6a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.364 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2b46d3-4908-4891-a1c8-cb0f2296882a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.365 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a2b231ac-3607-47fe-8590-9c3dab6434a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.371 257641 DEBUG oslo_concurrency.lockutils [req-c2cc6abb-7047-4625-9c68-84279dd07a54 req-2cb48ae6-f4ef-4560-af2e-4491d3d89c1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-c24a72ed-78bb-4305-abf0-04f30042a9ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.379 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[9b60d15f-1750-4464-be20-6e965ab857cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.397 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3add060a-f27c-4df4-b9e4-4ed7b3d95855]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.402 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.426 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ef53c2-de72-4c2c-bc35-ddf31d87a18c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 NetworkManager[49039]: <info>  [1764403721.4342] manager: (tap0de30c6a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/99)
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.435 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f62181e0-7822-47ad-9221-67c15aa4603b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.471 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf2d48a-7e88-40bf-93e7-0890e10b332b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.474 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e58a5d99-ae51-45b8-a625-68098852d251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 NetworkManager[49039]: <info>  [1764403721.4980] device (tap0de30c6a-80): carrier: link connected
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.503 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8666eea2-4daa-491d-b4b0-b73f6988731b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.521 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[36f35a32-6f32-4011-bd74-4dae9ba26bd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c6a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:21:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676916, 'reachable_time': 19562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302684, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.536 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[499d425a-5425-471b-9583-4de14190b909]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:215a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676916, 'tstamp': 676916}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302685, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.557 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[627499f4-2a8e-4af0-b3eb-33ff96675f64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0de30c6a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:21:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676916, 'reachable_time': 19562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302686, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.584 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[034b1322-68b0-4934-b275-0ad0937f17b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.646 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f14b85e1-7ad2-4428-9e93-fce48f0e9076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.647 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c6a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.648 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.648 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0de30c6a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:41 np0005539550 NetworkManager[49039]: <info>  [1764403721.6901] manager: (tap0de30c6a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.690 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 kernel: tap0de30c6a-80: entered promiscuous mode
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.694 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.695 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0de30c6a-80, col_values=(('external_ids', {'iface-id': 'db5f456f-a9cd-44e0-9bf4-deda3979e911'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:41Z|00223|binding|INFO|Releasing lport db5f456f-a9cd-44e0-9bf4-deda3979e911 from this chassis (sb_readonly=0)
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.700 257641 DEBUG nova.compute.manager [req-4ef1090c-b209-4fed-87f1-fbbfe13579ff req-b0529b55-ff8d-4c44-a6b3-3658bbfa50a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received event network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.700 257641 DEBUG oslo_concurrency.lockutils [req-4ef1090c-b209-4fed-87f1-fbbfe13579ff req-b0529b55-ff8d-4c44-a6b3-3658bbfa50a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.700 257641 DEBUG oslo_concurrency.lockutils [req-4ef1090c-b209-4fed-87f1-fbbfe13579ff req-b0529b55-ff8d-4c44-a6b3-3658bbfa50a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.701 257641 DEBUG oslo_concurrency.lockutils [req-4ef1090c-b209-4fed-87f1-fbbfe13579ff req-b0529b55-ff8d-4c44-a6b3-3658bbfa50a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.701 257641 DEBUG nova.compute.manager [req-4ef1090c-b209-4fed-87f1-fbbfe13579ff req-b0529b55-ff8d-4c44-a6b3-3658bbfa50a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Processing event network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.712 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.716 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.717 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.718 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b70f4d2f-9af0-461b-9f24-1f2cb7feef35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.718 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-0de30c6a-82ca-4f9f-a37d-5949a70a385d
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/0de30c6a-82ca-4f9f-a37d-5949a70a385d.pid.haproxy
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 0de30c6a-82ca-4f9f-a37d-5949a70a385d
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:08:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:41.719 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'env', 'PROCESS_TAG=haproxy-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0de30c6a-82ca-4f9f-a37d-5949a70a385d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
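[Annotation] The generated haproxy config above binds 169.254.169.254:80 inside the ovnmeta-0de30c6a-... namespace, tags each request with X-OVN-Network-ID, and proxies to the backend at /var/lib/neutron/metadata_proxy; the rootwrap command then launches haproxy in that namespace. From a guest on that network, the standard OpenStack metadata endpoint is then reachable at the link-local address; a minimal sketch (the URL path is the usual metadata API, not something specific to this log):

    import urllib.request

    # Run from inside a guest on network 0de30c6a-...; haproxy adds the
    # X-OVN-Network-ID header before forwarding to the UNIX-socket backend.
    with urllib.request.urlopen(
            "http://169.254.169.254/openstack/latest/meta_data.json",
            timeout=10) as resp:
        print(resp.read().decode())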
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.726 257641 INFO nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Creating config drive at /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.730 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuvwbw554 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.805 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403721.8052175, a57a7edf-d886-474e-84d1-03a4c655e64e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.806 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.808 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.812 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.815 257641 INFO nova.virt.libvirt.driver [-] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Instance spawned successfully.#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.816 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.861 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuvwbw554" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.891 257641 DEBUG nova.storage.rbd_utils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.895 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:41 np0005539550 nova_compute[257631]: 2025-11-29 08:08:41.994 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.000 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
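The numeric states in the sync_power_state line above decode via Nova's power-state constants (as defined in nova/compute/power_state.py; worth re-checking against the deployed tree). A small decoder, using the values from that line:

    # Nova power-state codes (per nova/compute/power_state.py).
    POWER_STATE = {
        0: "NOSTATE",    # no state recorded yet (instance still building)
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    # "current DB power_state: 0, VM power_state: 1" above decodes to:
    print(POWER_STATE[0], "->", POWER_STATE[1])  # NOSTATE -> RUNNING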
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.004 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.005 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.005 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.006 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.006 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.007 257641 DEBUG nova.virt.libvirt.driver [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
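The six "Found default" entries record the virtual-hardware choices the libvirt driver made for this guest, so they stay stable across later rebuilds and migrations. Collected as the equivalent image properties (values copied from the log lines; pinning them on the image with `openstack image set --property KEY=VALUE` makes the choice explicit rather than driver-derived):

    # Defaults registered for instance a57a7edf-..., per the log above.
    REGISTERED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }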
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.021 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.022 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403721.8054905, a57a7edf-d886-474e-84d1-03a4c655e64e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.022 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:08:42 np0005539550 podman[302806]: 2025-11-29 08:08:42.118740944 +0000 UTC m=+0.050735153 container create 282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.150 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.154 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403721.8112025, a57a7edf-d886-474e-84d1-03a4c655e64e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.155 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.158 257641 DEBUG oslo_concurrency.processutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.159 257641 INFO nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Deleting local config drive /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config because it was imported into RBD.#033[00m
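The config-drive entries above form one complete flow: mkisofs packs the metadata staged in /tmp/tmpuvwbw554 into an ISO 9660 image labelled config-2, rbd import copies that image into the vms pool as <uuid>_disk.config, and the local file is deleted once the import returns 0. A minimal sketch of the same sequence (UUID, flags, and paths copied from the log; the function name and staging_dir argument are illustrative, not Nova's real API):

    import os
    import subprocess

    INSTANCE = "c24a72ed-78bb-4305-abf0-04f30042a9ad"
    ISO = f"/var/lib/nova/instances/{INSTANCE}/disk.config"

    def build_and_import_config_drive(staging_dir: str) -> None:
        # 1. Pack the staged metadata into an ISO 9660 volume labelled
        #    "config-2" (the label cloud-init probes for).
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", ISO, "-ldots", "-allow-lowercase",
             "-allow-multidot", "-l", "-publisher",
             "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
             "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
            check=True)
        # 2. Import the ISO into the Ceph "vms" pool.
        subprocess.run(
            ["rbd", "import", "--pool", "vms", ISO,
             f"{INSTANCE}_disk.config", "--image-format=2",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True)
        # 3. Drop the local copy now that it lives in RBD.
        os.unlink(ISO)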
Nov 29 03:08:42 np0005539550 systemd[1]: Started libpod-conmon-282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7.scope.
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.173 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.178 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:42.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:42 np0005539550 podman[302806]: 2025-11-29 08:08:42.095175873 +0000 UTC m=+0.027170102 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:08:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e40733c12183c8df6fc2c67f442a9c968fa68c78d9276d8b8ed0135077490be2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:42 np0005539550 podman[302806]: 2025-11-29 08:08:42.205351267 +0000 UTC m=+0.137345496 container init 282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:08:42 np0005539550 podman[302806]: 2025-11-29 08:08:42.212159858 +0000 UTC m=+0.144154067 container start 282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:08:42 np0005539550 kernel: tapd57beffb-4c: entered promiscuous mode
Nov 29 03:08:42 np0005539550 NetworkManager[49039]: <info>  [1764403722.2338] manager: (tapd57beffb-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Nov 29 03:08:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:42Z|00224|binding|INFO|Claiming lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for this chassis.
Nov 29 03:08:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:42Z|00225|binding|INFO|d57beffb-4cf5-44cf-ba6d-8f78a18b0f51: Claiming fa:16:3e:2f:09:6b 10.100.0.8
Nov 29 03:08:42 np0005539550 systemd-udevd[302677]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.236 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539550 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[302819]: [NOTICE]   (302829) : New worker (302836) forked
Nov 29 03:08:42 np0005539550 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[302819]: [NOTICE]   (302829) : Loading success.
Nov 29 03:08:42 np0005539550 NetworkManager[49039]: <info>  [1764403722.2515] device (tapd57beffb-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:08:42 np0005539550 NetworkManager[49039]: <info>  [1764403722.2531] device (tapd57beffb-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:08:42 np0005539550 systemd-machined[216673]: New machine qemu-35-instance-00000049.
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.300 257641 INFO nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Took 9.95 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.301 257641 DEBUG nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.302 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:09:6b 10.100.0.8'], port_security=['fa:16:3e:2f:09:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c24a72ed-78bb-4305-abf0-04f30042a9ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.303 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.305 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:08:42 np0005539550 systemd[1]: Started Virtual Machine qemu-35-instance-00000049.
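systemd-machined and the libvirt scope both name the guest instance-00000049, which is libvirt's domain name rather than the Nova UUID. The standard virsh query maps one to the other; in this containerized deployment, virsh would need to run wherever it can reach this libvirtd, e.g. inside the libvirt container (an assumption to verify):

    import subprocess

    # Resolve the libvirt domain name from the log to its instance UUID.
    uuid = subprocess.run(
        ["virsh", "domuuid", "instance-00000049"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(uuid)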
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.315 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06b50320-b9ef-4ab2-8435-d0b034028362]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.316 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.316 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.318 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:42Z|00226|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 ovn-installed in OVS
Nov 29 03:08:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:42Z|00227|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 up in Southbound
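The ovn_controller binding lines (claim, ovn-installed, up in Southbound) are the normal handshake for a freshly plugged port. The result can be confirmed against the Southbound database; a sketch, assuming ovn-sbctl is run somewhere it can reach the SB DB (in this deployment, inside the ovn_controller container):

    import subprocess

    LPORT = "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51"
    # Show which chassis the lport is bound to and whether it is up.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True)
    print(out.stdout)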
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.325 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.325 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1c4cdfe9-9e2c-4665-ab86-f572f55a4781]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.326 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.327 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b737da5a-89cd-4eb1-97a6-a32a1f049eb9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.342 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[926274c6-5a51-4874-a462-e0bdf847c432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.359 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0396aada-d069-49a4-a29d-8c033c5bbc37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.394 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f69576ac-3d72-4202-8fea-439b675c6824]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 NetworkManager[49039]: <info>  [1764403722.4020] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.402 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[987f07d9-8c1e-43f8-b92e-7bfaa2556a5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.437 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[49428423-49f7-47f8-889f-5eeb47f93899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.440 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa7a75d-fcc6-488d-873f-6e2927de7361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 NetworkManager[49039]: <info>  [1764403722.4628] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.467 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[72ef903a-2433-4ccb-88b1-d2784e344704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.473 257641 INFO nova.compute.manager [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Took 11.55 seconds to build instance.#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.484 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a51c7fc5-29fb-4507-a9d8-5e79269b072f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677012, 'reachable_time': 34986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302864, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.495 257641 DEBUG oslo_concurrency.lockutils [None req-2f431128-8e97-4e61-867e-5e748885924e 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.498 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1645de-6a6d-4b10-b2dd-33be032dd592]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677012, 'tstamp': 677012}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302865, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.516 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[216d7099-fbce-40a1-a7da-1ed3c9bf144c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677012, 'reachable_time': 34986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302866, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.543 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[26dc5ae5-b76f-448e-b789-62f1ef912086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.595 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[817c9c32-aee0-446d-8a9c-028b384dd7f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.596 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.597 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.597 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539550 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.601 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539550 NetworkManager[49039]: <info>  [1764403722.6019] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.602 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
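The two ovsdbapp transactions (AddPortCommand, then DbSetCommand) are the metadata agent wiring the host side of its namespace veth into br-int and tagging it with the iface-id that OVN matches the port binding on. The CLI equivalent, purely for illustration:

    import subprocess

    # Same effect as the AddPortCommand + DbSetCommand transactions above.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int",
         "tap65f88c5a-80",
         "--", "set", "Interface", "tap65f88c5a-80",
         "external_ids:iface-id=dd9b6149-e4f7-45dd-a89e-de246cf739ae"],
        check=True)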
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:42Z|00228|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:08:42 np0005539550 nova_compute[257631]: 2025-11-29 08:08:42.619 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.620 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.621 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb099ce-75f4-4130-a32f-903a3298af90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.622 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:08:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:42.624 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
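The generated haproxy_cfg binds 169.254.169.254:80 inside the ovnmeta namespace, forwards every request to the Unix socket /var/lib/neutron/metadata_proxy, and adds the X-OVN-Network-ID header so the metadata agent can resolve which network the request came from. The proxy can be exercised by hand from the host; expect an HTTP error body rather than real metadata, since the agent resolves instances by source IP and the probe's address is not an instance port (command shape is an assumption):

    import subprocess

    NETNS = "ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d"
    # Hit the metadata IP from inside the proxy's namespace.
    subprocess.run(
        ["ip", "netns", "exec", NETNS, "curl", "-sS", "-i",
         "http://169.254.169.254/openstack/latest/meta_data.json"],
        check=False)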
Nov 29 03:08:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:43 np0005539550 podman[302905]: 2025-11-29 08:08:43.042473768 +0000 UTC m=+0.052945539 container create d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:08:43 np0005539550 systemd[1]: Started libpod-conmon-d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6.scope.
Nov 29 03:08:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 134 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Nov 29 03:08:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:08:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605664eb80f70e966471ad0c177be92012755baea1bbedc3a36a7db5c1538197/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:08:43 np0005539550 podman[302905]: 2025-11-29 08:08:43.017778369 +0000 UTC m=+0.028250160 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:08:43 np0005539550 podman[302905]: 2025-11-29 08:08:43.116036894 +0000 UTC m=+0.126508675 container init d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:08:43 np0005539550 podman[302905]: 2025-11-29 08:08:43.122936047 +0000 UTC m=+0.133407808 container start d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:08:43 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[302951]: [NOTICE]   (302958) : New worker (302960) forked
Nov 29 03:08:43 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[302951]: [NOTICE]   (302958) : Loading success.
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.151 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403723.1513493, c24a72ed-78bb-4305-abf0-04f30042a9ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.152 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] VM Started (Lifecycle Event)#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.208 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.212 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403723.1515641, c24a72ed-78bb-4305-abf0-04f30042a9ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.213 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.284 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.287 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.371 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.908 257641 DEBUG nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received event network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.908 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.908 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.909 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.909 257641 DEBUG nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] No waiting events found dispatching network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.909 257641 WARNING nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received unexpected event network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.909 257641 DEBUG nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.909 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.910 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.910 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.910 257641 DEBUG nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Processing event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.910 257641 DEBUG nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.911 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.911 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.911 257641 DEBUG oslo_concurrency.lockutils [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.911 257641 DEBUG nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.912 257641 WARNING nova.compute.manager [req-4d05f6a7-cd51-4441-906f-19bd98bca64a req-2e4e95a5-3a9a-4403-9797-6a923a987902 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.912 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
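The WARNING lines here are benign: Neutron delivered network-vif-plugged twice for d57beffb, the first delivery popped the registered waiter ("Instance event wait completed in 0 seconds"), and the duplicate found nothing left to signal; likewise the a6d55e23 event belongs to a57a7edf, which had already gone active. A minimal sketch of the waiter pattern behind these messages (not Nova's code):

    import threading

    # (instance_uuid, event_name) -> waiter
    _waiters: dict[tuple[str, str], threading.Event] = {}

    def prepare_for_event(instance: str, name: str) -> threading.Event:
        # Called before the operation that triggers the event (e.g. spawn).
        ev = threading.Event()
        _waiters[(instance, name)] = ev
        return ev

    def deliver_event(instance: str, name: str) -> None:
        # Called when the external event arrives from Neutron.
        ev = _waiters.pop((instance, name), None)
        if ev is None:
            # No one is waiting: a late or duplicate delivery.
            print(f"WARNING: unexpected event {name} for {instance}")
        else:
            ev.set()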
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.915 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403723.9155016, c24a72ed-78bb-4305-abf0-04f30042a9ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.915 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.938 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.941 257641 INFO nova.virt.libvirt.driver [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance spawned successfully.#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.942 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.975 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.979 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.979 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.980 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.980 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.980 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.981 257641 DEBUG nova.virt.libvirt.driver [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:08:43 np0005539550 nova_compute[257631]: 2025-11-29 08:08:43.987 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.066 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
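
In the sync message above, "DB power_state: 0, VM power_state: 1" uses nova's power_state constants, where 0 is NOSTATE and 1 is RUNNING. A minimal sketch of the reconciliation decision, with the constants mirrored locally and the helper being illustrative rather than nova's actual _sync_instance_power_state:

    # Mirrored from nova/compute/power_state.py
    NOSTATE = 0x00  # DB value before the first successful sync
    RUNNING = 0x01  # what the hypervisor reports once the guest is up

    def needs_sync(db_power_state, vm_power_state, task_state):
        # Illustrative only: reconciliation is deferred while a task
        # (here 'spawning') is in flight, hence the "Skip." above.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    print(needs_sync(NOSTATE, RUNNING, 'spawning'))  # False -> skipped
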
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.110 257641 INFO nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Took 9.98 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.111 257641 DEBUG nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:44.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.323 257641 INFO nova.compute.manager [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Took 11.86 seconds to build instance.#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.491 257641 DEBUG oslo_concurrency.lockutils [None req-53306800-1592-4201-92bb-5e32cbf96cdc 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
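
The Acquiring/acquired/released triples above come from oslo.concurrency's lockutils; nova serializes build and delete per instance by locking on the instance UUID (the "held 12.219s" is the full build critical section). A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the lock name is taken from the log, the body is a placeholder:

    from oslo_concurrency import lockutils

    # Lock name from the log above; nova keys the semaphore on the
    # instance UUID so _locked_do_build_and_run_instance and
    # do_terminate_instance for the same instance cannot interleave.
    instance_uuid = 'c24a72ed-78bb-4305-abf0-04f30042a9ad'

    with lockutils.lock(instance_uuid):
        # build_and_run_instance body would run here ("held 12.219s")
        pass
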
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.638 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.639 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.640 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.640 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.641 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.643 257641 INFO nova.compute.manager [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Terminating instance#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.645 257641 DEBUG nova.compute.manager [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:08:44 np0005539550 kernel: tapa6d55e23-5b (unregistering): left promiscuous mode
Nov 29 03:08:44 np0005539550 NetworkManager[49039]: <info>  [1764403724.7036] device (tapa6d55e23-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:08:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:44Z|00229|binding|INFO|Releasing lport a6d55e23-5b85-46b2-9b38-cad1e55da096 from this chassis (sb_readonly=0)
Nov 29 03:08:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:44Z|00230|binding|INFO|Setting lport a6d55e23-5b85-46b2-9b38-cad1e55da096 down in Southbound
Nov 29 03:08:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:44Z|00231|binding|INFO|Removing iface tapa6d55e23-5b ovn-installed in OVS
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.715 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.734 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:44 np0005539550 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000048.scope: Deactivated successfully.
Nov 29 03:08:44 np0005539550 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000048.scope: Consumed 3.369s CPU time.
Nov 29 03:08:44 np0005539550 systemd-machined[216673]: Machine qemu-34-instance-00000048 terminated.
Nov 29 03:08:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:44.788 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:47:78 10.100.0.13'], port_security=['fa:16:3e:21:47:78 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'a57a7edf-d886-474e-84d1-03a4c655e64e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c57433fd3834430904b1908f24f3f2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ebe351b1-d353-46d5-990d-7ccc905f95cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07d7455e-493a-4184-8d60-e2fd6ef2393b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a6d55e23-5b85-46b2-9b38-cad1e55da096) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:08:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:44.789 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a6d55e23-5b85-46b2-9b38-cad1e55da096 in datapath 0de30c6a-82ca-4f9f-a37d-5949a70a385d unbound from our chassis#033[00m
Nov 29 03:08:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:44.790 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0de30c6a-82ca-4f9f-a37d-5949a70a385d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:08:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:44.791 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d04b298f-df7c-4ae4-9974-51c596fa4c37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:44.792 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d namespace which is not needed anymore#033[00m
Nov 29 03:08:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:44.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.888 257641 INFO nova.virt.libvirt.driver [-] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Instance destroyed successfully.#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.890 257641 DEBUG nova.objects.instance [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lazy-loading 'resources' on Instance uuid a57a7edf-d886-474e-84d1-03a4c655e64e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.912 257641 DEBUG nova.virt.libvirt.vif [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1631047494',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1631047494',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1631047494',id=72,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c57433fd3834430904b1908f24f3f2f',ramdisk_id='',reservation_id='r-iul0d2r8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-167104479',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-167104479-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:08:42Z,user_data=None,user_id='1a010a95085342c5ae9a02f15b334fad',uuid=a57a7edf-d886-474e-84d1-03a4c655e64e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.913 257641 DEBUG nova.network.os_vif_util [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converting VIF {"id": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "address": "fa:16:3e:21:47:78", "network": {"id": "0de30c6a-82ca-4f9f-a37d-5949a70a385d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-870403960-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c57433fd3834430904b1908f24f3f2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6d55e23-5b", "ovs_interfaceid": "a6d55e23-5b85-46b2-9b38-cad1e55da096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.913 257641 DEBUG nova.network.os_vif_util [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:47:78,bridge_name='br-int',has_traffic_filtering=True,id=a6d55e23-5b85-46b2-9b38-cad1e55da096,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d55e23-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.914 257641 DEBUG os_vif [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:47:78,bridge_name='br-int',has_traffic_filtering=True,id=a6d55e23-5b85-46b2-9b38-cad1e55da096,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d55e23-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.916 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.916 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6d55e23-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.951 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:44 np0005539550 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[302819]: [NOTICE]   (302829) : haproxy version is 2.8.14-c23fe91
Nov 29 03:08:44 np0005539550 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[302819]: [NOTICE]   (302829) : path to executable is /usr/sbin/haproxy
Nov 29 03:08:44 np0005539550 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[302819]: [ALERT]    (302829) : Current worker (302836) exited with code 143 (Terminated)
Nov 29 03:08:44 np0005539550 neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d[302819]: [WARNING]  (302829) : All workers exited. Exiting... (0)
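
Exit code 143 in the haproxy worker alert above is the usual 128 + signal-number encoding: 128 + SIGTERM (15) = 143, i.e. the worker was deliberately terminated by the container stop that follows, not a crash. A one-line check:

    import signal

    # 128 + signal number is the conventional exit status for a process
    # killed by a signal; SIGTERM is 15, so 143 == clean termination.
    assert 128 + int(signal.SIGTERM) == 143
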
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:44 np0005539550 systemd[1]: libpod-282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7.scope: Deactivated successfully.
Nov 29 03:08:44 np0005539550 nova_compute[257631]: 2025-11-29 08:08:44.959 257641 INFO os_vif [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:47:78,bridge_name='br-int',has_traffic_filtering=True,id=a6d55e23-5b85-46b2-9b38-cad1e55da096,network=Network(0de30c6a-82ca-4f9f-a37d-5949a70a385d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6d55e23-5b')#033[00m
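
The unplug above is executed as an ovsdbapp transaction, DelPortCommand(port=tapa6d55e23-5b, bridge=br-int, if_exists=True). A minimal sketch of issuing the same idempotent removal directly with ovsdbapp; the OVSDB socket path is an assumption (the standard local default), not something these logs state:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed default local OVSDB socket; adjust for the deployment.
    OVSDB = 'unix:/run/openvswitch/db.sock'

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Mirrors DelPortCommand(port=..., bridge=br-int, if_exists=True):
    # no error if the port is already gone.
    api.del_port('tapa6d55e23-5b', bridge='br-int',
                 if_exists=True).execute(check_error=True)
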
Nov 29 03:08:44 np0005539550 podman[302996]: 2025-11-29 08:08:44.964134789 +0000 UTC m=+0.077675550 container died 282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:08:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7-userdata-shm.mount: Deactivated successfully.
Nov 29 03:08:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e40733c12183c8df6fc2c67f442a9c968fa68c78d9276d8b8ed0135077490be2-merged.mount: Deactivated successfully.
Nov 29 03:08:45 np0005539550 podman[302996]: 2025-11-29 08:08:45.008602955 +0000 UTC m=+0.122143716 container cleanup 282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:08:45 np0005539550 systemd[1]: libpod-conmon-282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7.scope: Deactivated successfully.
Nov 29 03:08:45 np0005539550 podman[303050]: 2025-11-29 08:08:45.08652476 +0000 UTC m=+0.044442736 container remove 282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.092 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0e7c8ff7-a1ef-4677-aff6-7fcf2c9140d5]: (4, ('Sat Nov 29 08:08:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d (282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7)\n282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7\nSat Nov 29 08:08:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d (282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7)\n282eb1221494722bd447742cbf7237464b25bb72cbdd421587da07bf9f4511f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 134 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 115 op/s
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.095 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d4abbbb5-2d44-4a43-bd24-e4465048829f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.096 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0de30c6a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:08:45 np0005539550 nova_compute[257631]: 2025-11-29 08:08:45.112 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539550 kernel: tap0de30c6a-80: left promiscuous mode
Nov 29 03:08:45 np0005539550 nova_compute[257631]: 2025-11-29 08:08:45.118 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.118 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[94454538-157a-4f5a-8f42-b644b8972b5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.134 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[82d4f382-78be-4ef0-87a3-d5e7152bc39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.135 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[09ec10d4-ab52-4eff-aa7a-1e490b0986d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.150 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a06f610f-505b-43cb-add2-506ee7e5be26]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676908, 'reachable_time': 17086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303065, 'error': None, 'target': 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:08:45 np0005539550 systemd[1]: run-netns-ovnmeta\x2d0de30c6a\x2d82ca\x2d4f9f\x2da37d\x2d5949a70a385d.mount: Deactivated successfully.
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.156 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:08:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:08:45.156 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2dcb91-537e-4bd3-ba3b-a0d4242cce26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
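
The privileged remove_netns call logged above deletes the per-network metadata namespace once its last port is gone. A minimal sketch of the equivalent cleanup, assuming pyroute2 (which neutron's privileged ip_lib wraps) and root privileges:

    from pyroute2 import netns

    NS = 'ovnmeta-0de30c6a-82ca-4f9f-a37d-5949a70a385d'

    # Unlinks /var/run/netns/<NS>; requires CAP_SYS_ADMIN, which is why
    # neutron routes this through its privsep daemon.
    if NS in netns.listnetns():
        netns.remove(NS)
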
Nov 29 03:08:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:46.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:46 np0005539550 nova_compute[257631]: 2025-11-29 08:08:46.404 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:46.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:46 np0005539550 nova_compute[257631]: 2025-11-29 08:08:46.900 257641 INFO nova.virt.libvirt.driver [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Deleting instance files /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e_del#033[00m
Nov 29 03:08:46 np0005539550 nova_compute[257631]: 2025-11-29 08:08:46.901 257641 INFO nova.virt.libvirt.driver [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Deletion of /var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e_del complete#033[00m
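
The two driver lines above show nova's delete pattern for instance storage: the directory is renamed with a _del suffix first, then removed, so an interrupted delete leaves an obviously stale directory instead of a half-deleted live one. A minimal sketch under that reading (paths taken from the log):

    import os
    import shutil

    base = '/var/lib/nova/instances/a57a7edf-d886-474e-84d1-03a4c655e64e'
    target = base + '_del'

    # Rename first (atomic within a filesystem), then delete the renamed
    # tree; ignore_errors mirrors the driver's best-effort retries.
    if os.path.exists(base):
        os.rename(base, target)
    shutil.rmtree(target, ignore_errors=True)
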
Nov 29 03:08:47 np0005539550 nova_compute[257631]: 2025-11-29 08:08:47.068 257641 INFO nova.compute.manager [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Took 2.42 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:08:47 np0005539550 nova_compute[257631]: 2025-11-29 08:08:47.069 257641 DEBUG oslo.service.loopingcall [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:08:47 np0005539550 nova_compute[257631]: 2025-11-29 08:08:47.069 257641 DEBUG nova.compute.manager [-] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:08:47 np0005539550 nova_compute[257631]: 2025-11-29 08:08:47.069 257641 DEBUG nova.network.neutron [-] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:08:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 134 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 165 op/s
Nov 29 03:08:47 np0005539550 nova_compute[257631]: 2025-11-29 08:08:47.974 257641 INFO nova.compute.manager [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Rebuilding instance#033[00m
Nov 29 03:08:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:48.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.212 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.404 257641 DEBUG nova.compute.manager [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.674 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_requests' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.721 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.736 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.747 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.761 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:08:48 np0005539550 nova_compute[257631]: 2025-11-29 08:08:48.765 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:08:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:48.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 101 MiB data, 646 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 905 KiB/s wr, 187 op/s
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.582 257641 DEBUG nova.compute.manager [req-f5e46274-325a-4007-a79b-656bf8aa886e req-5dd8031d-4296-4bb1-a5db-04cee48d55d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received event network-vif-unplugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.582 257641 DEBUG oslo_concurrency.lockutils [req-f5e46274-325a-4007-a79b-656bf8aa886e req-5dd8031d-4296-4bb1-a5db-04cee48d55d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.582 257641 DEBUG oslo_concurrency.lockutils [req-f5e46274-325a-4007-a79b-656bf8aa886e req-5dd8031d-4296-4bb1-a5db-04cee48d55d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.582 257641 DEBUG oslo_concurrency.lockutils [req-f5e46274-325a-4007-a79b-656bf8aa886e req-5dd8031d-4296-4bb1-a5db-04cee48d55d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.583 257641 DEBUG nova.compute.manager [req-f5e46274-325a-4007-a79b-656bf8aa886e req-5dd8031d-4296-4bb1-a5db-04cee48d55d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] No waiting events found dispatching network-vif-unplugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.583 257641 DEBUG nova.compute.manager [req-f5e46274-325a-4007-a79b-656bf8aa886e req-5dd8031d-4296-4bb1-a5db-04cee48d55d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received event network-vif-unplugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.836 257641 DEBUG nova.network.neutron [-] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.892 257641 INFO nova.compute.manager [-] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Took 2.82 seconds to deallocate network for instance.#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.940 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.941 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:49 np0005539550 nova_compute[257631]: 2025-11-29 08:08:49.990 257641 DEBUG nova.compute.manager [req-90e49f4c-6847-4218-9a9c-a437b9e465f4 req-a4271ef8-c800-448f-8d1a-a2ec4b57dd2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received event network-vif-deleted-a6d55e23-5b85-46b2-9b38-cad1e55da096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:50 np0005539550 nova_compute[257631]: 2025-11-29 08:08:50.038 257641 DEBUG oslo_concurrency.processutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:08:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:08:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2830173033' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:08:50 np0005539550 nova_compute[257631]: 2025-11-29 08:08:50.503 257641 DEBUG oslo_concurrency.processutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
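
The ceph df call above is run through oslo.concurrency's processutils, which logs the command, its wall time (0.465s here), and its exit code. A minimal sketch of the same invocation, assuming oslo.concurrency and a reachable Ceph cluster with the 'openstack' keyring:

    from oslo_concurrency import processutils

    # Same command as logged; returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
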
Nov 29 03:08:50 np0005539550 nova_compute[257631]: 2025-11-29 08:08:50.514 257641 DEBUG nova.compute.provider_tree [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:08:50 np0005539550 nova_compute[257631]: 2025-11-29 08:08:50.531 257641 DEBUG nova.scheduler.client.report [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
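
The inventory payload above fixes the host's schedulable capacity via placement's standard formula, capacity = (total - reserved) * allocation_ratio. Worked out for the values logged:

    # Values from the inventory payload logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
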
Nov 29 03:08:50 np0005539550 nova_compute[257631]: 2025-11-29 08:08:50.552 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:50 np0005539550 nova_compute[257631]: 2025-11-29 08:08:50.576 257641 INFO nova.scheduler.client.report [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Deleted allocations for instance a57a7edf-d886-474e-84d1-03a4c655e64e#033[00m
Nov 29 03:08:50 np0005539550 nova_compute[257631]: 2025-11-29 08:08:50.643 257641 DEBUG oslo_concurrency.lockutils [None req-96e39d1e-7df5-4424-8bc6-c99a8d225499 1a010a95085342c5ae9a02f15b334fad 5c57433fd3834430904b1908f24f3f2f - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:50.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 88 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 174 op/s
Nov 29 03:08:51 np0005539550 nova_compute[257631]: 2025-11-29 08:08:51.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:51 np0005539550 nova_compute[257631]: 2025-11-29 08:08:51.753 257641 DEBUG nova.compute.manager [req-7fa05e4f-0191-4420-b191-b1672401058e req-dd314856-a375-4aa1-afc2-d2bbb3764c5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received event network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:08:51 np0005539550 nova_compute[257631]: 2025-11-29 08:08:51.753 257641 DEBUG oslo_concurrency.lockutils [req-7fa05e4f-0191-4420-b191-b1672401058e req-dd314856-a375-4aa1-afc2-d2bbb3764c5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:08:51 np0005539550 nova_compute[257631]: 2025-11-29 08:08:51.754 257641 DEBUG oslo_concurrency.lockutils [req-7fa05e4f-0191-4420-b191-b1672401058e req-dd314856-a375-4aa1-afc2-d2bbb3764c5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:08:51 np0005539550 nova_compute[257631]: 2025-11-29 08:08:51.754 257641 DEBUG oslo_concurrency.lockutils [req-7fa05e4f-0191-4420-b191-b1672401058e req-dd314856-a375-4aa1-afc2-d2bbb3764c5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a57a7edf-d886-474e-84d1-03a4c655e64e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:08:51 np0005539550 nova_compute[257631]: 2025-11-29 08:08:51.754 257641 DEBUG nova.compute.manager [req-7fa05e4f-0191-4420-b191-b1672401058e req-dd314856-a375-4aa1-afc2-d2bbb3764c5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] No waiting events found dispatching network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:08:51 np0005539550 nova_compute[257631]: 2025-11-29 08:08:51.754 257641 WARNING nova.compute.manager [req-7fa05e4f-0191-4420-b191-b1672401058e req-dd314856-a375-4aa1-afc2-d2bbb3764c5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Received unexpected event network-vif-plugged-a6d55e23-5b85-46b2-9b38-cad1e55da096 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:08:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:52.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:52 np0005539550 podman[303142]: 2025-11-29 08:08:52.49927097 +0000 UTC m=+0.089164218 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:08:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:52.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 88 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 173 op/s
Nov 29 03:08:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:54.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:54 np0005539550 nova_compute[257631]: 2025-11-29 08:08:54.958 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 88 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 173 op/s
Nov 29 03:08:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:56 np0005539550 nova_compute[257631]: 2025-11-29 08:08:56.407 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:08:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:08:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:56.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:08:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 88 MiB data, 639 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.9 KiB/s wr, 112 op/s
Nov 29 03:08:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:57Z|00232|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:08:57 np0005539550 nova_compute[257631]: 2025-11-29 08:08:57.363 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:58Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:09:6b 10.100.0.8
Nov 29 03:08:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:08:58Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:09:6b 10.100.0.8
Nov 29 03:08:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:08:58.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:08:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:08:58 np0005539550 nova_compute[257631]: 2025-11-29 08:08:58.820 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
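"Instance in state 1 after 10 seconds - resending shutdown" comes from nova's _clean_shutdown loop: request a guest power-off, poll the domain state once a second, and re-send the shutdown request at each retry interval until the state leaves RUNNING (1) or the timeout budget runs out (the same instance reports "shutdown successfully after 13 seconds" a few lines below). A stripped-down sketch of that shape; get_state, send_shutdown and the timings are placeholders, not nova's actual signatures:

import time

RUNNING = 1          # libvirt VIR_DOMAIN_RUNNING
TIMEOUT = 60         # overall budget, seconds
RETRY_EVERY = 10     # matches "after 10 seconds - resending shutdown"

def clean_shutdown(get_state, send_shutdown):
    send_shutdown()
    for waited in range(1, TIMEOUT + 1):
        time.sleep(1)
        if get_state() != RUNNING:
            print(f'Instance shutdown successfully after {waited} seconds.')
            return True
        if waited % RETRY_EVERY == 0:
            print(f'Instance in state {RUNNING} after {waited} seconds - resending shutdown')
            send_shutdown()
    return False     # caller would fall back to a hard destroy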
Nov 29 03:08:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:08:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:08:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:08:58.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:08:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 109 MiB data, 660 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.7 MiB/s wr, 92 op/s
Nov 29 03:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:08:59
Nov 29 03:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'backups']
Nov 29 03:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
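This balancer pass is fully visible: mode upmap, a misplaced-PG budget of 5%, the candidate pool list, and "prepared 0/10 changes" because a 305-PG cluster that is already all active+clean gives the upmap solver nothing to move. The gating arithmetic reduces to a ratio check; illustrative only, the real logic lives in ceph-mgr's balancer module:

# Illustrative arithmetic only.
max_misplaced = 0.05          # "max misplaced 0.050000"
misplaced_pgs = 0             # all 305 PGs are active+clean in the pgmap lines
total_pgs = 305
max_optimizations = 10        # the "/10" in "prepared 0/10 changes"

if misplaced_pgs / total_pgs >= max_misplaced:
    print('too many PGs already misplaced; skipping this optimization round')
else:
    prepared = 0              # the upmap solver found nothing to move
    print(f'prepared {prepared}/{max_optimizations} changes')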
Nov 29 03:08:59 np0005539550 nova_compute[257631]: 2025-11-29 08:08:59.887 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403724.88626, a57a7edf-d886-474e-84d1-03a4c655e64e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:08:59 np0005539550 nova_compute[257631]: 2025-11-29 08:08:59.888 257641 INFO nova.compute.manager [-] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:08:59 np0005539550 nova_compute[257631]: 2025-11-29 08:08:59.914 257641 DEBUG nova.compute.manager [None req-f7741928-6235-49ea-ba8f-367bc23a8699 - - - - - -] [instance: a57a7edf-d886-474e-84d1-03a4c655e64e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:08:59 np0005539550 nova_compute[257631]: 2025-11-29 08:08:59.960 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:09:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:00.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Nov 29 03:09:01 np0005539550 kernel: tapd57beffb-4c (unregistering): left promiscuous mode
Nov 29 03:09:01 np0005539550 NetworkManager[49039]: <info>  [1764403741.1485] device (tapd57beffb-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00233|binding|INFO|Releasing lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 from this chassis (sb_readonly=0)
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.158 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00234|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 down in Southbound
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00235|binding|INFO|Removing iface tapd57beffb-4c ovn-installed in OVS
Nov 29 03:09:01 np0005539550 systemd[1]: Starting dnf makecache...
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.160 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.167 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:09:6b 10.100.0.8'], port_security=['fa:16:3e:2f:09:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c24a72ed-78bb-4305-abf0-04f30042a9ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.170 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.172 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.174 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2a7c09-7232-4c9c-b633-1a9d001cf626]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.175 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.182 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000049.scope: Deactivated successfully.
Nov 29 03:09:01 np0005539550 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d00000049.scope: Consumed 14.305s CPU time.
Nov 29 03:09:01 np0005539550 systemd-machined[216673]: Machine qemu-35-instance-00000049 terminated.
Nov 29 03:09:01 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[302951]: [NOTICE]   (302958) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:01 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[302951]: [NOTICE]   (302958) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:01 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[302951]: [WARNING]  (302958) : Exiting Master process...
Nov 29 03:09:01 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[302951]: [ALERT]    (302958) : Current worker (302960) exited with code 143 (Terminated)
Nov 29 03:09:01 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[302951]: [WARNING]  (302958) : All workers exited. Exiting... (0)
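haproxy's "exited with code 143 (Terminated)" is the usual shell convention: a process killed by signal N reports exit status 128 + N, and 143 = 128 + 15 is SIGTERM, which is exactly what the master sends its worker on shutdown. Decoding that convention in Python:

import signal

status = 143
if status > 128:
    sig = signal.Signals(status - 128)
    print(f'terminated by {sig.name} ({sig.value})')  # terminated by SIGTERM (15)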
Nov 29 03:09:01 np0005539550 systemd[1]: libpod-d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6.scope: Deactivated successfully.
Nov 29 03:09:01 np0005539550 podman[303199]: 2025-11-29 08:09:01.362678674 +0000 UTC m=+0.053735829 container died d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:09:01 np0005539550 kernel: tapd57beffb-4c: entered promiscuous mode
Nov 29 03:09:01 np0005539550 NetworkManager[49039]: <info>  [1764403741.3815] manager: (tapd57beffb-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/104)
Nov 29 03:09:01 np0005539550 systemd-udevd[303181]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:01 np0005539550 kernel: tapd57beffb-4c (unregistering): left promiscuous mode
Nov 29 03:09:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-605664eb80f70e966471ad0c177be92012755baea1bbedc3a36a7db5c1538197-merged.mount: Deactivated successfully.
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.398 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00236|binding|INFO|Claiming lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for this chassis.
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00237|binding|INFO|d57beffb-4cf5-44cf-ba6d-8f78a18b0f51: Claiming fa:16:3e:2f:09:6b 10.100.0.8
Nov 29 03:09:01 np0005539550 podman[303199]: 2025-11-29 08:09:01.403682593 +0000 UTC m=+0.094739758 container cleanup d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.410 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:09:6b 10.100.0.8'], port_security=['fa:16:3e:2f:09:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c24a72ed-78bb-4305-abf0-04f30042a9ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:01 np0005539550 dnf[303173]: Metadata cache refreshed recently.
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00238|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 ovn-installed in OVS
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00239|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 up in Southbound
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00240|binding|INFO|Releasing lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 from this chassis (sb_readonly=1)
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00241|if_status|INFO|Dropped 2 log messages in last 911 seconds (most recently, 911 seconds ago) due to excessive rate
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00242|if_status|INFO|Not setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 down as sb is readonly
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00243|binding|INFO|Removing iface tapd57beffb-4c ovn-installed in OVS
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.435 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00244|binding|INFO|Releasing lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 from this chassis (sb_readonly=0)
Nov 29 03:09:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:01Z|00245|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 down in Southbound
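The claim/release churn above (lport claimed, set up, then released and set down within the same second) is the OVS/OVN side of nova re-plugging the same port during rebuild; note that the intermediate release hits a read-only Southbound (sb_readonly=1) and is redone once the SB is writable. The if_status line also shows OVN's rate-limited logging, which counts suppressed messages and summarizes them later ("Dropped 2 log messages in last 911 seconds"). A minimal token-bucket version of that logging pattern, as a sketch:

import time

class LogRateLimiter:
    """Allow a burst, then count drops and summarize later, OVS-style."""
    def __init__(self, burst=5, per_seconds=60):
        self.burst, self.per = burst, per_seconds
        self.tokens, self.dropped = float(burst), 0
        self.last = self.first_drop = time.monotonic()

    def log(self, msg):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.burst / self.per)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            if self.dropped:
                print(f'Dropped {self.dropped} log messages in last '
                      f'{int(now - self.first_drop)} seconds due to excessive rate')
                self.dropped = 0
            print(msg)
        else:
            if self.dropped == 0:
                self.first_drop = now
            self.dropped += 1

limiter = LogRateLimiter(burst=2, per_seconds=60)
for i in range(5):
    limiter.log(f'binding change {i}')   # first two print, the rest are counted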
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.449 257641 DEBUG nova.compute.manager [req-1d34e268-e8d6-4417-80d0-832b370f89db req-86b139bd-a320-407c-ba55-1f00ef58dceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.450 257641 DEBUG oslo_concurrency.lockutils [req-1d34e268-e8d6-4417-80d0-832b370f89db req-86b139bd-a320-407c-ba55-1f00ef58dceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.450 257641 DEBUG oslo_concurrency.lockutils [req-1d34e268-e8d6-4417-80d0-832b370f89db req-86b139bd-a320-407c-ba55-1f00ef58dceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.450 257641 DEBUG oslo_concurrency.lockutils [req-1d34e268-e8d6-4417-80d0-832b370f89db req-86b139bd-a320-407c-ba55-1f00ef58dceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.451 257641 DEBUG nova.compute.manager [req-1d34e268-e8d6-4417-80d0-832b370f89db req-86b139bd-a320-407c-ba55-1f00ef58dceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.451 257641 WARNING nova.compute.manager [req-1d34e268-e8d6-4417-80d0-832b370f89db req-86b139bd-a320-407c-ba55-1f00ef58dceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state rebuilding.#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.455 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.458 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:09:6b 10.100.0.8'], port_security=['fa:16:3e:2f:09:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c24a72ed-78bb-4305-abf0-04f30042a9ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:01 np0005539550 systemd[1]: libpod-conmon-d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6.scope: Deactivated successfully.
Nov 29 03:09:01 np0005539550 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 03:09:01 np0005539550 systemd[1]: Finished dnf makecache.
Nov 29 03:09:01 np0005539550 podman[303234]: 2025-11-29 08:09:01.485957037 +0000 UTC m=+0.050652472 container remove d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.491 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7d991888-e58f-4823-a786-b05aa7eab8ec]: (4, ('Sat Nov 29 08:09:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6)\nd4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6\nSat Nov 29 08:09:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (d4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6)\nd4be085870e1b2bcb8eeeb64096c7190ef8457f90f6eb90ddd9f1d5dc06a61e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.494 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4e5d81-89d8-45b5-b53c-a3e7ef054a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.495 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.496 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.517 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[55ea609f-f7da-489d-a744-703641501af1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.531 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06d59b94-1921-493f-972f-31497a43acab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.532 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e4e9526f-5c9a-496d-9b47-742605ec2ae9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.548 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5131d7cf-8911-4eab-a277-0624acfdf835]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677005, 'reachable_time': 17537, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303254, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.552 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.553 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[eba3e9ae-95de-471e-833b-6601da8741e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.554 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.555 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.556 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[981fcd61-2a4b-48aa-b00e-d473ebc28a6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.558 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.560 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:01.561 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5239e2c2-047a-481b-adaf-152de9eb1806]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
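With the last VIF gone from network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, the agent tears the metadata namespace down: privsep executes remove_netns (the neutron.privileged ip_lib call in the DEBUG line above) and systemd confirms the run-netns mount is gone. Neutron does this through pyroute2 inside the privsep daemon; a standalone equivalent, assuming pyroute2's netns helpers and root privileges:

from pyroute2 import netns

# Rough equivalent of neutron.privileged.agent.linux.ip_lib.remove_netns
# (needs CAP_SYS_ADMIN; unlinks /var/run/netns/<name>).
ns_name = 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d'
if ns_name in netns.listnetns():
    netns.remove(ns_name)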
Nov 29 03:09:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.839 257641 INFO nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.845 257641 INFO nova.virt.libvirt.driver [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance destroyed successfully.#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.851 257641 INFO nova.virt.libvirt.driver [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance destroyed successfully.#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.851 257641 DEBUG nova.virt.libvirt.vif [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:08:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-68286740',display_name='tempest-ServerDiskConfigTestJSON-server-68286740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-68286740',id=73,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-usdqo0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:08:47Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=c24a72ed-78bb-4305-abf0-04f30042a9ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.852 257641 DEBUG nova.network.os_vif_util [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.852 257641 DEBUG nova.network.os_vif_util [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.853 257641 DEBUG os_vif [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.855 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.856 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd57beffb-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.857 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.858 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:01 np0005539550 nova_compute[257631]: 2025-11-29 08:09:01.861 257641 INFO os_vif [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c')#033[00m
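The unplug path reads end to end: os-vif converts the nova VIF into a VIFOpenVSwitch object, issues DelPortCommand(port=tapd57beffb-4c, bridge=br-int, if_exists=True) through ovsdbapp, the IDL wakes up on fd 27 as the database change echoes back, and os_vif logs success. Outside nova the same removal is one ovs-vsctl call; a thin wrapper using only standard ovs-vsctl flags:

import subprocess

def del_port(bridge: str, port: str) -> None:
    # Same effect as ovsdbapp's DelPortCommand(..., if_exists=True).
    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', bridge, port],
        check=True,
    )

del_port('br-int', 'tapd57beffb-4c')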
Nov 29 03:09:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.313 257641 INFO nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Deleting instance files /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad_del#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.314 257641 INFO nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Deletion of /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad_del complete#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.466 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.467 257641 INFO nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Creating image(s)#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.494 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.526 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.558 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.561 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.624 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.625 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.626 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.626 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
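The acquire/release pair around 6e1589dfec5abd76868fdc022175780e085b08de is nova's image-cache synchronization: fetch_func_sync wraps the base-image download in a named oslo.concurrency lock so only one request populates /var/lib/nova/instances/_base/<hash> at a time; here the base file already exists, so the critical section does nothing and the lock is held for 0.000s. The same primitive used directly (lockutils.synchronized is the real oslo_concurrency API; the function body is illustrative):

from oslo_concurrency import lockutils

BASE = '6e1589dfec5abd76868fdc022175780e085b08de'  # cache key from the log

@lockutils.synchronized(BASE)
def fetch_base_image():
    # Only one caller runs this section per key; pass external=True
    # (with a configured lock_path) for cross-process file locks.
    print('populating image cache entry', BASE)

fetch_base_image()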
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.658 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.662 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de c24a72ed-78bb-4305-abf0-04f30042a9ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:02.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:02 np0005539550 nova_compute[257631]: 2025-11-29 08:09:02.952 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de c24a72ed-78bb-4305-abf0-04f30042a9ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.024 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] resizing rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
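The rebuild's disk provisioning is spelled out by the logged commands themselves: qemu-img info validates the cached base file under a prlimit sandbox (1 GiB address space, 30 s of CPU), rbd import pushes it into the vms pool as <uuid>_disk, and rbd_utils resizes the result to the flavor's 1 GiB root disk. The same three steps as a shell-out sketch; the import command is copied from the log, while the resize call is a CLI analogue of the librbd resize above and assumes an rbd binary that accepts the 1G size suffix:

import json
import subprocess

base = '/var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de'
image = 'c24a72ed-78bb-4305-abf0-04f30042a9ad_disk'

# 1. Inspect the cached base image (same flags as the logged command).
info = json.loads(subprocess.check_output(
    ['qemu-img', 'info', base, '--force-share', '--output=json']))
print(info['format'], info['virtual-size'])

# 2. Import it into RBD, exactly as logged.
subprocess.check_call(
    ['rbd', 'import', '--pool', 'vms', base, image,
     '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])

# 3. Grow it to the flavor's 1 GiB root disk (CLI analogue of rbd_utils.resize).
subprocess.check_call(
    ['rbd', 'resize', '--pool', 'vms', image, '--size', '1G',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])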
Nov 29 03:09:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 121 MiB data, 682 MiB used, 20 GiB / 21 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.137 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.138 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Ensure instance console log exists: /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.138 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.139 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.139 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.141 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Start _get_guest_xml network_info=[{"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.144 257641 WARNING nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.156 257641 DEBUG nova.virt.libvirt.host [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.157 257641 DEBUG nova.virt.libvirt.host [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.161 257641 DEBUG nova.virt.libvirt.host [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.162 257641 DEBUG nova.virt.libvirt.host [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
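[editor's note] The host probe above looks for a CPU controller first under cgroups v1 (missing) and then under cgroups v2 (found). On a v2 host this boils down to checking for "cpu" in the unified hierarchy's controller list; a rough sketch of that check, not nova's exact code:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        """Return True if the unified cgroup hierarchy exposes the cpu controller."""
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            # No unified hierarchy mounted: the host is cgroups v1 only.
            return False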
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.163 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.163 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.163 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.163 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.164 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.164 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.164 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.164 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.164 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.165 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.165 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.165 257641 DEBUG nova.virt.hardware [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
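[editor's note] With no flavor or image constraints (preferred 0:0:0, maximum 65536 per level), the only topology whose sockets x cores x threads equals the single requested vCPU is 1:1:1, which is why exactly one possibility is logged. A toy enumeration of that search space under the same limits, illustrative only and not nova.virt.hardware's actual algorithm:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product is vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log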
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.165 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.184 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615997777' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.613 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
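[editor's note] The mon-dump round trip above is a plain subprocess: oslo.concurrency shells out to the ceph CLI, and the monitor's audit channel records the dispatch from client.openstack. The equivalent call, with the same argv as the log:

    import json

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    monmap = json.loads(out)  # monmap epoch, fsid, and the mon addresses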
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.648 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
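[editor's note] rbd_utils reports the config-drive image as missing because opening it raises ImageNotFound, which nova translates into the "does not exist" DEBUG line. A bare-bones version of that check with the python-rbd bindings; the pool name 'vms' is taken from the domain XML later in this log, and the helper name is hypothetical:

    import rados
    import rbd

    def rbd_image_exists(name, pool='vms', conf='/etc/ceph/ceph.conf', user='openstack'):
        # Connect as client.openstack, open the pool, and try the image read-only.
        with rados.Rados(conffile=conf, rados_id=user) as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    with rbd.Image(ioctx, name, read_only=True):
                        return True
                except rbd.ImageNotFound:
                    return False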
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.652 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.681 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.681 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.682 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.682 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.682 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.683 257641 WARNING nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.683 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.683 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.684 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.684 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.684 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.684 257641 WARNING nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.685 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.685 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.685 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.685 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.686 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.686 257641 WARNING nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.686 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.687 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.688 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.688 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.688 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.688 257641 WARNING nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.688 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.689 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.689 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.689 257641 DEBUG oslo_concurrency.lockutils [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.689 257641 DEBUG nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:03 np0005539550 nova_compute[257631]: 2025-11-29 08:09:03.689 257641 WARNING nova.compute.manager [req-313a2981-b7b0-419f-b0f5-026d3afe2629 req-c5f3c034-7275-4e99-8ea1-f22333401040 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state rebuild_spawning.#033[00m
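[editor's note] Each vif-plugged/vif-unplugged delivery above follows the same shape: take the per-instance "-events" lock, try to pop a matching waiter, find none ("No waiting events found"), and warn, because the rebuild has not yet registered a waiter via prepare_for_instance_event. Schematically, as a simplification of nova.compute.manager.InstanceEvents (names and structure hypothetical):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}          # {instance_uuid: {event_name: threading.Event}}
            self._lock = threading.Lock()

        def pop_instance_event(self, instance_uuid, name):
            with self._lock:           # the "<uuid>-events" lock in the log
                waiter = self._events.get(instance_uuid, {}).pop(name, None)
            if waiter is None:
                # "No waiting events found dispatching <name>" -> caller warns
                return None
            waiter.set()               # wake the thread that registered the waiter
            return waiter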
Nov 29 03:09:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99812787' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.082 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.083 257641 DEBUG nova.virt.libvirt.vif [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:08:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-68286740',display_name='tempest-ServerDiskConfigTestJSON-server-68286740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-68286740',id=73,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-usdqo0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:02Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=c24a72ed-78bb-4305-abf0-04f30042a9ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.084 257641 DEBUG nova.network.os_vif_util [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.084 257641 DEBUG nova.network.os_vif_util [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
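[editor's note] The conversion above turns nova's VIF dict into a typed os-vif versioned object. Building the same object by hand with the os_vif library looks roughly like this, with field values copied from the logged repr (the network and port_profile sub-objects are omitted for brevity):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='d57beffb-4cf5-44cf-ba6d-8f78a18b0f51',
        address='fa:16:3e:2f:09:6b',
        bridge_name='br-int',
        vif_name='tapd57beffb-4c',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)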
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.087 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <uuid>c24a72ed-78bb-4305-abf0-04f30042a9ad</uuid>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <name>instance-00000049</name>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-68286740</nova:name>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:09:03</nova:creationTime>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <nova:port uuid="d57beffb-4cf5-44cf-ba6d-8f78a18b0f51">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <entry name="serial">c24a72ed-78bb-4305-abf0-04f30042a9ad</entry>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <entry name="uuid">c24a72ed-78bb-4305-abf0-04f30042a9ad</entry>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/c24a72ed-78bb-4305-abf0-04f30042a9ad_disk">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:2f:09:6b"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <target dev="tapd57beffb-4c"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/console.log" append="off"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:09:04 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:09:04 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:09:04 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:09:04 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
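[editor's note] With the guest XML rendered, the driver hands it to libvirt. Stripped of nova's plumbing, the define-and-launch step is essentially the following sketch with the libvirt python binding, not nova's exact call chain:

    import libvirt

    # xml: the <domain> document logged above, e.g. saved to a file
    xml = open('domain.xml').read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot it (what spawn ultimately does)
    finally:
        conn.close()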
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.088 257641 DEBUG nova.compute.manager [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Preparing to wait for external event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.088 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.088 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.088 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.089 257641 DEBUG nova.virt.libvirt.vif [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:08:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-68286740',display_name='tempest-ServerDiskConfigTestJSON-server-68286740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-68286740',id=73,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:08:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-usdqo0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:02Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=c24a72ed-78bb-4305-abf0-04f30042a9ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.089 257641 DEBUG nova.network.os_vif_util [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.089 257641 DEBUG nova.network.os_vif_util [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.090 257641 DEBUG os_vif [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.090 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.091 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.091 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.093 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.094 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd57beffb-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.094 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd57beffb-4c, col_values=(('external_ids', {'iface-id': 'd57beffb-4cf5-44cf-ba6d-8f78a18b0f51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:09:6b', 'vm-uuid': 'c24a72ed-78bb-4305-abf0-04f30042a9ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.096 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:04 np0005539550 NetworkManager[49039]: <info>  [1764403744.0968] manager: (tapd57beffb-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.097 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.101 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.102 257641 INFO os_vif [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c')#033[00m
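[editor's note] The two ovsdbapp transactions above are idempotent: AddBridgeCommand causes no change because br-int already exists, and AddPortCommand plus DbSetCommand wire the tap device and its external_ids in one transaction. Driving the same commands directly with ovsdbapp would look roughly like this; the db.sock endpoint is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=5))

    # No-op if br-int already exists, as in the log.
    api.add_br('br-int', may_exist=True, datapath_type='system').execute(
        check_error=True)

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd57beffb-4c', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd57beffb-4c',
            ('external_ids', {'iface-id': 'd57beffb-4cf5-44cf-ba6d-8f78a18b0f51',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:2f:09:6b'})))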
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.174 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.174 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.174 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:2f:09:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.175 257641 INFO nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Using config drive#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.204 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.238 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.269 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'keypairs' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.725 257641 INFO nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Creating config drive at /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.730 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_z64x2ow execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:04.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.865 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_z64x2ow" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.894 257641 DEBUG nova.storage.rbd_utils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:04 np0005539550 nova_compute[257631]: 2025-11-29 08:09:04.898 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.060 257641 DEBUG oslo_concurrency.processutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config c24a72ed-78bb-4305-abf0-04f30042a9ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.061 257641 INFO nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Deleting local config drive /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad/disk.config because it was imported into RBD.#033[00m
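
Because the instance's disks live in Ceph, the config drive follows the same path: nova builds the ISO locally with mkisofs, imports it into the vms pool as <uuid>_disk.config, then deletes the local file. A rough equivalent of the two commands from the log (nova actually drives them through oslo_concurrency.processutils.execute; the staged metadata directory is a stand-in for the temporary dir in the log):

    import subprocess

    inst = 'c24a72ed-78bb-4305-abf0-04f30042a9ad'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # Build the config-drive ISO; volume label config-2 is what cloud-init
    # probes for inside the guest.
    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-quiet', '-J', '-r',
                    '-V', 'config-2', '/path/to/staged/metadata'], check=True)

    # Import into RBD, after which the local copy is redundant and removed.
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{inst}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
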
Nov 29 03:09:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 100 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.9 MiB/s wr, 98 op/s
Nov 29 03:09:05 np0005539550 kernel: tapd57beffb-4c: entered promiscuous mode
Nov 29 03:09:05 np0005539550 NetworkManager[49039]: <info>  [1764403745.1198] manager: (tapd57beffb-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/106)
Nov 29 03:09:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:05Z|00246|binding|INFO|Claiming lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for this chassis.
Nov 29 03:09:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:05Z|00247|binding|INFO|d57beffb-4cf5-44cf-ba6d-8f78a18b0f51: Claiming fa:16:3e:2f:09:6b 10.100.0.8
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.120 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.129 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:09:6b 10.100.0.8'], port_security=['fa:16:3e:2f:09:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c24a72ed-78bb-4305-abf0-04f30042a9ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.130 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.132 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:09:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:05Z|00248|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 ovn-installed in OVS
Nov 29 03:09:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:05Z|00249|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 up in Southbound
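
At this point ovn-controller has claimed the logical port for this chassis, marked it ovn-installed in the local OVS database, and flipped it up in the Southbound DB; the metadata agent's PortBindingUpdatedEvent above is its view of that same Southbound update. One hedged way to verify the binding by hand, assuming ovn-sbctl is installed on the chassis and can reach the Southbound socket:

    import subprocess

    # Ask the Southbound DB which chassis holds the lport and whether it is up.
    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51'],
        capture_output=True, text=True, check=True)
    print(out.stdout)
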
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.141 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.142 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[909a6274-10e0-4b7c-970e-761885b245b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.143 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.144 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.144 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[10ad63f7-879b-4353-90ce-cda88a10cb48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.145 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.145 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b2d046-ddec-4c76-b56a-6a7f9e58187b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 systemd-machined[216673]: New machine qemu-36-instance-00000049.
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.160 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[a6719eba-d13e-4b77-9ade-6b07cefc9489]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.173 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e62327e0-b6ab-47e3-abda-faaae5ca2c17]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 systemd[1]: Started Virtual Machine qemu-36-instance-00000049.
Nov 29 03:09:05 np0005539550 systemd-udevd[303583]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.204 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2051e62f-460e-4e59-ab9b-89534f681090]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 NetworkManager[49039]: <info>  [1764403745.2071] device (tapd57beffb-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:05 np0005539550 NetworkManager[49039]: <info>  [1764403745.2078] device (tapd57beffb-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:05 np0005539550 NetworkManager[49039]: <info>  [1764403745.2103] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/107)
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.209 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[74d2935c-de83-401c-b63f-107020ba321b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 systemd-udevd[303587]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.236 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[607f053e-e8da-4349-aae9-dd8e3f73789e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.240 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e01894b1-8df0-446f-a879-9b4fe8ee50f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 NetworkManager[49039]: <info>  [1764403745.2608] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.267 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a75ef3ae-9c73-4da2-804d-deeb047534f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.285 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6df09d-e759-4f3f-aeab-844739314080]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679292, 'reachable_time': 24384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303611, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.306 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[583a5a41-037b-487c-b251-b902073cd521]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 679292, 'tstamp': 679292}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303612, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.324 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2054cacb-59e7-41bd-a463-c44776a45928]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679292, 'reachable_time': 24384, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303613, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.359 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ca561bf5-189c-4cda-9dd9-495817dd3bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.415 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1f760cd5-4579-4230-b57f-bad733b5a3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.417 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.417 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.417 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:05 np0005539550 NetworkManager[49039]: <info>  [1764403745.4202] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Nov 29 03:09:05 np0005539550 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.420 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.423 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:05Z|00250|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.424 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.424 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.425 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.426 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5a593d93-f0b8-4a64-82d3-ca1b6d999c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.427 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
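
The rendered config above is the whole metadata path for this network: haproxy runs inside the ovnmeta-65f88c5a namespace, binds the link-local 169.254.169.254:80, stamps each request with X-OVN-Network-ID so the metadata agent can resolve which network it came from, and forwards over the unix socket /var/lib/neutron/metadata_proxy. From a guest on the network the exchange is the standard metadata fetch; a minimal sketch (run inside an instance, not on the host):

    import urllib.request

    # Standard OpenStack metadata endpoint, served through the haproxy above.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())
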
Nov 29 03:09:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:05.429 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.439 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:05 np0005539550 podman[303684]: 2025-11-29 08:09:05.788793736 +0000 UTC m=+0.054799186 container create 8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.816 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for c24a72ed-78bb-4305-abf0-04f30042a9ad due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.817 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403745.8158336, c24a72ed-78bb-4305-abf0-04f30042a9ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.817 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:05 np0005539550 systemd[1]: Started libpod-conmon-8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b.scope.
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.849 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:05 np0005539550 podman[303684]: 2025-11-29 08:09:05.758740252 +0000 UTC m=+0.024745692 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.856 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403745.8161385, c24a72ed-78bb-4305-abf0-04f30042a9ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.856 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c934672acab0dd12696cac369a08fe4f83ca25ec041e54e72a1d2d9202365b04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.882 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.885 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:05 np0005539550 podman[303684]: 2025-11-29 08:09:05.896660533 +0000 UTC m=+0.162665973 container init 8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:09:05 np0005539550 nova_compute[257631]: 2025-11-29 08:09:05.903 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
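
The Paused lifecycle event here is expected: libvirt starts the guest paused, and nova resumes it only after VIF plugging is confirmed. sync_power_state compares the database's power_state 1 (RUNNING) against the hypervisor's 3 (PAUSED), but skips reconciliation while a task owns the instance. A compressed sketch of that decision, using the constant values from nova.compute.power_state (the skip rule is paraphrased, not nova's exact code):

    # Values match nova.compute.power_state.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def should_sync(task_state, db_power_state, vm_power_state):
        # While a task such as 'rebuild_spawning' is in flight, transient
        # states like the spawn-time pause are expected, so nova skips.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    print(should_sync('rebuild_spawning', RUNNING, PAUSED))  # False -> "Skip."
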
Nov 29 03:09:05 np0005539550 podman[303684]: 2025-11-29 08:09:05.904242173 +0000 UTC m=+0.170247593 container start 8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:09:05 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[303700]: [NOTICE]   (303704) : New worker (303706) forked
Nov 29 03:09:05 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[303700]: [NOTICE]   (303704) : Loading success.
Nov 29 03:09:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:06.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:06 np0005539550 nova_compute[257631]: 2025-11-29 08:09:06.455 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:06.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 88 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 352 KiB/s rd, 3.9 MiB/s wr, 124 op/s
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:09:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:08.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:08.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:09:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:09:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 88 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 127 op/s
Nov 29 03:09:09 np0005539550 nova_compute[257631]: 2025-11-29 08:09:09.141 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:10.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:10 np0005539550 podman[303718]: 2025-11-29 08:09:10.320379492 +0000 UTC m=+0.054760494 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:10 np0005539550 podman[303719]: 2025-11-29 08:09:10.320516236 +0000 UTC m=+0.055505614 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 03:09:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:10.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.059 257641 DEBUG nova.compute.manager [req-ff27f8d4-9c89-4258-bc9e-1261ae2f5dce req-24f6ffbd-4aa9-4fd3-8033-740170e2f9cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.060 257641 DEBUG oslo_concurrency.lockutils [req-ff27f8d4-9c89-4258-bc9e-1261ae2f5dce req-24f6ffbd-4aa9-4fd3-8033-740170e2f9cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.060 257641 DEBUG oslo_concurrency.lockutils [req-ff27f8d4-9c89-4258-bc9e-1261ae2f5dce req-24f6ffbd-4aa9-4fd3-8033-740170e2f9cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.061 257641 DEBUG oslo_concurrency.lockutils [req-ff27f8d4-9c89-4258-bc9e-1261ae2f5dce req-24f6ffbd-4aa9-4fd3-8033-740170e2f9cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.061 257641 DEBUG nova.compute.manager [req-ff27f8d4-9c89-4258-bc9e-1261ae2f5dce req-24f6ffbd-4aa9-4fd3-8033-740170e2f9cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Processing event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.062 257641 DEBUG nova.compute.manager [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
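
This closes nova's external-event handshake: the waiter registered before the VIF plug is popped when neutron posts network-vif-plugged through the API (the req-ff27f8d4 lines above), and only then does the driver resume the paused guest. The mechanics reduce to an event registered up front and set from the RPC path; a minimal stand-in using threading (nova's real implementation is the eventlet-based InstanceEvents machinery, and its vif_plugging_timeout defaults to 300s):

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for(key):
        waiters[key] = threading.Event()

    def pop_event(key):
        # Called from the external-event RPC path when neutron reports it.
        waiters.pop(key).set()

    key = ('c24a72ed-78bb-4305-abf0-04f30042a9ad',
           'network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51')
    prepare_for(key)
    event = waiters[key]
    # ... plug the VIF, define and start the (paused) guest ...
    pop_event(key)                  # neutron's notification lands
    assert event.wait(timeout=300)  # resume the guest only after this succeeds
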
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.066 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403751.0666966, c24a72ed-78bb-4305-abf0-04f30042a9ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.067 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.068 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.071 257641 INFO nova.virt.libvirt.driver [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance spawned successfully.#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.072 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.090 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.096 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.096 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.097 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.098 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.098 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.099 257641 DEBUG nova.virt.libvirt.driver [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 88 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 101 op/s
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.104 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.139 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.161 257641 DEBUG nova.compute.manager [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.228 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.229 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.229 257641 DEBUG nova.objects.instance [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.296 257641 DEBUG oslo_concurrency.lockutils [None req-b617cc33-29a1-44ee-aa87-cfcc4307d424 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.067s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
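
The Acquiring/acquired/released triplets around "compute_resources" are oslo.concurrency's lockutils at work: the resource tracker serializes every claim, migration, and evacuation-finish under that one semaphore so usage accounting stays consistent. A sketch of the pattern (prefix and lock name mirror the log; the function body is a placeholder):

    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix('nova-')

    @synchronized('compute_resources')
    def finish_evacuation():
        # All resource-tracker mutations happen under this single lock.
        ...
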
Nov 29 03:09:11 np0005539550 nova_compute[257631]: 2025-11-29 08:09:11.503 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:12.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:09:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:12.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
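[Editor's sketch] The anonymous "HEAD / HTTP/1.0" requests hitting radosgw every couple of seconds look like load-balancer health probes against the beast frontend. A minimal sketch of such a probe, assuming the requests library and a placeholder endpoint (the RGW listen address is not shown in the log; 192.168.122.100/.102 are the probing clients, not the server):

    import requests  # assumption: the URL below is a placeholder endpoint

    resp = requests.head('http://rgw.example.com:8080/', timeout=2)
    print(resp.status_code)  # 200 would match the beast access lines above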
Nov 29 03:09:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 88 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.146 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.149 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.150 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.150 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.151 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.152 257641 INFO nova.compute.manager [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Terminating instance#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.154 257641 DEBUG nova.compute.manager [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.191 257641 DEBUG nova.compute.manager [req-1245f83b-72d6-4e3b-a0e3-40039cddc9f2 req-d5cc8e83-b4fe-4a7e-aa5f-07020e68fccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.191 257641 DEBUG oslo_concurrency.lockutils [req-1245f83b-72d6-4e3b-a0e3-40039cddc9f2 req-d5cc8e83-b4fe-4a7e-aa5f-07020e68fccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.192 257641 DEBUG oslo_concurrency.lockutils [req-1245f83b-72d6-4e3b-a0e3-40039cddc9f2 req-d5cc8e83-b4fe-4a7e-aa5f-07020e68fccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.192 257641 DEBUG oslo_concurrency.lockutils [req-1245f83b-72d6-4e3b-a0e3-40039cddc9f2 req-d5cc8e83-b4fe-4a7e-aa5f-07020e68fccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:13 np0005539550 kernel: tapd57beffb-4c (unregistering): left promiscuous mode
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.192 257641 DEBUG nova.compute.manager [req-1245f83b-72d6-4e3b-a0e3-40039cddc9f2 req-d5cc8e83-b4fe-4a7e-aa5f-07020e68fccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.193 257641 WARNING nova.compute.manager [req-1245f83b-72d6-4e3b-a0e3-40039cddc9f2 req-d5cc8e83-b4fe-4a7e-aa5f-07020e68fccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state deleting.#033[00m
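[Editor's sketch] The lock/pop/WARNING sequence above traces nova's InstanceEvents bookkeeping: Neutron's network-vif-plugged notification arrives while the instance is already deleting, so no waiter exists and the event is dispatched to nobody and logged as unexpected. A minimal threading-based sketch of the pattern (nova's real implementation lives in nova/compute/manager.py and differs in detail):

    import threading
    from collections import defaultdict

    _lock = threading.Lock()
    _events = defaultdict(dict)      # instance uuid -> {event name: Event}

    def prepare_for_event(instance, name):
        with _lock:                  # cf. the "<uuid>-events" lock above
            ev = threading.Event()
            _events[instance][name] = ev
            return ev

    def pop_instance_event(instance, name):
        with _lock:
            ev = _events[instance].pop(name, None)
        if ev is None:               # "No waiting events found dispatching ..."
            print(f'unexpected event {name}')
        else:
            ev.set()                 # wakes whoever called prepare_for_event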
Nov 29 03:09:13 np0005539550 NetworkManager[49039]: <info>  [1764403753.1967] device (tapd57beffb-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:13Z|00251|binding|INFO|Releasing lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 from this chassis (sb_readonly=0)
Nov 29 03:09:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:13Z|00252|binding|INFO|Setting lport d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 down in Southbound
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.204 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:13Z|00253|binding|INFO|Removing iface tapd57beffb-4c ovn-installed in OVS
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.206 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.212 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:09:6b 10.100.0.8'], port_security=['fa:16:3e:2f:09:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c24a72ed-78bb-4305-abf0-04f30042a9ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.214 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.216 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.217 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc362bf-7a1d-455a-ba61-8ce052153853]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.217 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.238 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000049.scope: Deactivated successfully.
Nov 29 03:09:13 np0005539550 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000049.scope: Consumed 2.784s CPU time.
Nov 29 03:09:13 np0005539550 systemd-machined[216673]: Machine qemu-36-instance-00000049 terminated.
Nov 29 03:09:13 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[303700]: [NOTICE]   (303704) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:13 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[303700]: [NOTICE]   (303704) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:13 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[303700]: [WARNING]  (303704) : Exiting Master process...
Nov 29 03:09:13 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[303700]: [ALERT]    (303704) : Current worker (303706) exited with code 143 (Terminated)
Nov 29 03:09:13 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[303700]: [WARNING]  (303704) : All workers exited. Exiting... (0)
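[Editor's note] haproxy's ALERT here is benign: exit code 143 is 128 plus SIGTERM (15), i.e. the worker was deliberately terminated when the metadata namespace's proxy container was stopped. Checked quickly:

    import signal

    print(128 + signal.SIGTERM)  # 143, the code reported above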
Nov 29 03:09:13 np0005539550 systemd[1]: libpod-8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b.scope: Deactivated successfully.
Nov 29 03:09:13 np0005539550 podman[303961]: 2025-11-29 08:09:13.363285482 +0000 UTC m=+0.043730378 container died 8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.373 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.375 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.387 257641 INFO nova.virt.libvirt.driver [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Instance destroyed successfully.#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.387 257641 DEBUG nova.objects.instance [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'resources' on Instance uuid c24a72ed-78bb-4305-abf0-04f30042a9ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c934672acab0dd12696cac369a08fe4f83ca25ec041e54e72a1d2d9202365b04-merged.mount: Deactivated successfully.
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.407 257641 DEBUG nova.virt.libvirt.vif [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:08:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-68286740',display_name='tempest-ServerDiskConfigTestJSON-server-68286740',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-68286740',id=73,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-usdqo0zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:11Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=c24a72ed-78bb-4305-abf0-04f30042a9ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:09:13 np0005539550 podman[303961]: 2025-11-29 08:09:13.407662436 +0000 UTC m=+0.088107332 container cleanup 8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.407 257641 DEBUG nova.network.os_vif_util [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "address": "fa:16:3e:2f:09:6b", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd57beffb-4c", "ovs_interfaceid": "d57beffb-4cf5-44cf-ba6d-8f78a18b0f51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.408 257641 DEBUG nova.network.os_vif_util [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.408 257641 DEBUG os_vif [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.409 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.410 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd57beffb-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.411 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.413 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.415 257641 INFO os_vif [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:09:6b,bridge_name='br-int',has_traffic_filtering=True,id=d57beffb-4cf5-44cf-ba6d-8f78a18b0f51,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd57beffb-4c')#033[00m
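[Editor's sketch] The DelPortCommand transaction a few lines up is os-vif detaching the tap from br-int through ovsdbapp. A minimal standalone sketch of the same call, assuming a local OVSDB socket (the connection string is a guess; the port and bridge names match the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Same operation as DelPortCommand(port=tapd57beffb-4c, bridge=br-int,
    # if_exists=True) traced in the log:
    api.del_port('tapd57beffb-4c', bridge='br-int',
                 if_exists=True).execute(check_error=True)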
Nov 29 03:09:13 np0005539550 systemd[1]: libpod-conmon-8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b.scope: Deactivated successfully.
Nov 29 03:09:13 np0005539550 podman[304000]: 2025-11-29 08:09:13.479408495 +0000 UTC m=+0.045639965 container remove 8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.485 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[78f08ce0-74a4-4bee-a110-3853224fecc4]: (4, ('Sat Nov 29 08:09:13 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b)\n8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b\nSat Nov 29 08:09:13 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b)\n8b2f03245e09d0c08082e0a682ac2dabd3e8d7410699ecf888cd2a7179a9fe1b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.486 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1614535e-cc7e-41eb-aba0-5a9b0b35271d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.487 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.489 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.503 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.506 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4c580f-94c1-4322-a43a-4df965c1164c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.522 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[956457c1-62a1-49c0-98d9-6090fadc5744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.523 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f5b089-26f5-46a2-a63e-44aa83ea744e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.537 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cee03e47-23da-4ee2-8597-eb2abe0b7630]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 679286, 'reachable_time': 42051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304033, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:13 np0005539550 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.542 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:09:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:13.543 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfd88e5-f117-439a-9562-a2a0d192594f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
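[Editor's sketch] The namespace removal above goes through neutron's privsep daemon into pyroute2. A minimal sketch of the underlying call, assuming root privileges (the namespace name matches the log):

    from pyroute2 import netns  # what neutron's ip_lib drives under privsep

    ns = 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d'
    if ns in netns.listnetns():
        netns.remove(ns)        # equivalent to `ip netns delete <ns>`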
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:09:13 np0005539550 nova_compute[257631]: 2025-11-29 08:09:13.937 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:09:14 np0005539550 nova_compute[257631]: 2025-11-29 08:09:14.130 257641 INFO nova.virt.libvirt.driver [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Deleting instance files /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad_del#033[00m
Nov 29 03:09:14 np0005539550 nova_compute[257631]: 2025-11-29 08:09:14.130 257641 INFO nova.virt.libvirt.driver [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Deletion of /var/lib/nova/instances/c24a72ed-78bb-4305-abf0-04f30042a9ad_del complete#033[00m
Nov 29 03:09:14 np0005539550 nova_compute[257631]: 2025-11-29 08:09:14.181 257641 INFO nova.compute.manager [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:09:14 np0005539550 nova_compute[257631]: 2025-11-29 08:09:14.182 257641 DEBUG oslo.service.loopingcall [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:09:14 np0005539550 nova_compute[257631]: 2025-11-29 08:09:14.182 257641 DEBUG nova.compute.manager [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:09:14 np0005539550 nova_compute[257631]: 2025-11-29 08:09:14.182 257641 DEBUG nova.network.neutron [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:09:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:14.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:09:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:09:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:14.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0c634f3f-4ce6-4788-921c-ee2c6a1b36bd does not exist
Nov 29 03:09:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4646ae65-190a-41ee-b136-83bfc136ba1a does not exist
Nov 29 03:09:15 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7f16ec28-5c12-46bb-92c1-640d4f3f46d2 does not exist
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 126 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 126 op/s
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.232 257641 DEBUG nova.network.neutron [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.252 257641 INFO nova.compute.manager [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Took 1.07 seconds to deallocate network for instance.#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.280 257641 DEBUG nova.compute.manager [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.280 257641 DEBUG oslo_concurrency.lockutils [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.281 257641 DEBUG oslo_concurrency.lockutils [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.281 257641 DEBUG oslo_concurrency.lockutils [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.281 257641 DEBUG nova.compute.manager [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.282 257641 DEBUG nova.compute.manager [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-unplugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.282 257641 DEBUG nova.compute.manager [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.282 257641 DEBUG oslo_concurrency.lockutils [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.283 257641 DEBUG oslo_concurrency.lockutils [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.283 257641 DEBUG oslo_concurrency.lockutils [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.283 257641 DEBUG nova.compute.manager [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] No waiting events found dispatching network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.283 257641 WARNING nova.compute.manager [req-ea5b866d-8e52-4eb6-8b9e-a69180d29827 req-b11d9fac-dde2-4fe3-b5ea-2042e65d8745 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received unexpected event network-vif-plugged-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.308 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.308 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:15.335 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.335 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:15.336 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.406 257641 DEBUG oslo_concurrency.processutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:15 np0005539550 podman[304196]: 2025-11-29 08:09:15.687362478 +0000 UTC m=+0.037574434 container create 5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:09:15 np0005539550 systemd[1]: Started libpod-conmon-5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4.scope.
Nov 29 03:09:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:15 np0005539550 podman[304196]: 2025-11-29 08:09:15.671650704 +0000 UTC m=+0.021862680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:15 np0005539550 podman[304196]: 2025-11-29 08:09:15.778125255 +0000 UTC m=+0.128337231 container init 5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:15 np0005539550 podman[304196]: 2025-11-29 08:09:15.785602472 +0000 UTC m=+0.135814428 container start 5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:09:15 np0005539550 podman[304196]: 2025-11-29 08:09:15.789157802 +0000 UTC m=+0.139369778 container attach 5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:15 np0005539550 practical_margulis[304212]: 167 167
Nov 29 03:09:15 np0005539550 systemd[1]: libpod-5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4.scope: Deactivated successfully.
Nov 29 03:09:15 np0005539550 conmon[304212]: conmon 5d23495842cdd20afb27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4.scope/container/memory.events
Nov 29 03:09:15 np0005539550 podman[304196]: 2025-11-29 08:09:15.794786863 +0000 UTC m=+0.144998819 container died 5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:09:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2201328f5705f1ad26f64408afacb3d6f2feeb6f952bd3ae0efc1cfee8958bfd-merged.mount: Deactivated successfully.
Nov 29 03:09:15 np0005539550 podman[304196]: 2025-11-29 08:09:15.833630207 +0000 UTC m=+0.183842203 container remove 5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_margulis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:09:15 np0005539550 systemd[1]: libpod-conmon-5d23495842cdd20afb277d6af4c0ae1de3250125a2be1e7a034f2d0091bce8c4.scope: Deactivated successfully.
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595749492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.882 257641 DEBUG oslo_concurrency.processutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.891 257641 DEBUG nova.compute.provider_tree [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
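
The pair of lines above is nova's resource tracker shelling out to the ceph CLI for pool capacity and feeding the result into the placement provider tree. A minimal sketch of the same probe, reusing the exact flags from the CMD line; the "stats" field names are the ones recent Ceph releases emit and should be treated as an assumption here:

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
        # Mirrors the logged command: ceph df --format=json --id openstack --conf ...
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
        return json.loads(out)

    stats = ceph_df()["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
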
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:15 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.922 257641 DEBUG nova.scheduler.client.report [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:09:15 np0005539550 nova_compute[257631]: 2025-11-29 08:09:15.949 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.002 257641 INFO nova.scheduler.client.report [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Deleted allocations for instance c24a72ed-78bb-4305-abf0-04f30042a9ad
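
The inventory dict logged above is what placement uses to size this provider: for each resource class the schedulable capacity is roughly (total - reserved) * allocation_ratio. A quick check against the logged numbers:

    # Numbers taken verbatim from the inventory line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
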
Nov 29 03:09:16 np0005539550 podman[304237]: 2025-11-29 08:09:16.021096571 +0000 UTC m=+0.047932274 container create fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:09:16 np0005539550 systemd[1]: Started libpod-conmon-fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f.scope.
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.070 257641 DEBUG oslo_concurrency.lockutils [None req-1f3b8bcf-31e4-448c-8697-2afe01173137 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "c24a72ed-78bb-4305-abf0-04f30042a9ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2809ef05fd74b49edf17c43909a54694f5f8f265ba226fdb4169018edfa90a8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2809ef05fd74b49edf17c43909a54694f5f8f265ba226fdb4169018edfa90a8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2809ef05fd74b49edf17c43909a54694f5f8f265ba226fdb4169018edfa90a8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2809ef05fd74b49edf17c43909a54694f5f8f265ba226fdb4169018edfa90a8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2809ef05fd74b49edf17c43909a54694f5f8f265ba226fdb4169018edfa90a8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:16 np0005539550 podman[304237]: 2025-11-29 08:09:16.004089094 +0000 UTC m=+0.030924827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:16 np0005539550 podman[304237]: 2025-11-29 08:09:16.106023931 +0000 UTC m=+0.132859654 container init fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:09:16 np0005539550 podman[304237]: 2025-11-29 08:09:16.111269293 +0000 UTC m=+0.138104996 container start fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:09:16 np0005539550 podman[304237]: 2025-11-29 08:09:16.114594856 +0000 UTC m=+0.141430559 container attach fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
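
The anonymous "HEAD / HTTP/1.0" 200 requests hitting radosgw every second or two from 192.168.122.100 and .102 have the shape of load-balancer health probes against the beast frontend. A minimal sketch of such a probe; the port and target address are assumptions, since the log does not show which endpoint beast is bound to:

    import http.client

    def rgw_healthy(host, port=8080, timeout=2.0):
        # Equivalent of the anonymous "HEAD /" probes logged by beast above.
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()

    print(rgw_healthy("192.168.122.50"))  # hypothetical gateway address
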
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.679 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.680 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.697 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
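
The Acquiring/acquired pair above is oslo.concurrency's lockutils serializing all build and terminate work on a single instance UUID; the ":: waited" and ":: held" timings in the log come from that library for free. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; do_build is a stand-in for _locked_do_build_and_run_instance, not nova's actual code:

    from oslo_concurrency import lockutils

    def do_build():
        print("building instance")   # placeholder for the real build path

    # Process-local named lock, keyed by the instance UUID as in the log above.
    with lockutils.lock("103eab90-5b7f-4550-a756-2a9bd5f68194"):
        do_build()
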
Nov 29 03:09:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:16 np0005539550 awesome_blackwell[304253]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:09:16 np0005539550 awesome_blackwell[304253]: --> relative data size: 1.0
Nov 29 03:09:16 np0005539550 awesome_blackwell[304253]: --> All data devices are unavailable
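
The awesome_blackwell output above is cephadm planning OSDs: a ceph-volume batch pass saw one LVM data device and no physical ones, and "All data devices are unavailable" means nothing is left to prepare (the LV already carries an OSD, as the listing further down confirms). A sketch of asking for that plan directly; the --report/--format json flags are an assumption about the ceph-volume CLI, and in the log this runs inside a cephadm-launched container rather than on the host:

    import json
    import subprocess

    # Hypothetical direct invocation; cephadm wraps this in a ceph container.
    out = subprocess.check_output(
        ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
         "/dev/ceph_vg0/ceph_lv0"])
    plan = json.loads(out)
    print(plan)   # expect an empty plan when all data devices are unavailable
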
Nov 29 03:09:16 np0005539550 systemd[1]: libpod-fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f.scope: Deactivated successfully.
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.944 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.946 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.952 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:09:16 np0005539550 nova_compute[257631]: 2025-11-29 08:09:16.953 257641 INFO nova.compute.claims [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:09:16 np0005539550 podman[304268]: 2025-11-29 08:09:16.969224067 +0000 UTC m=+0.026381642 container died fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:09:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2809ef05fd74b49edf17c43909a54694f5f8f265ba226fdb4169018edfa90a8a-merged.mount: Deactivated successfully.
Nov 29 03:09:17 np0005539550 podman[304268]: 2025-11-29 08:09:17.020882953 +0000 UTC m=+0.078040528 container remove fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:09:17 np0005539550 systemd[1]: libpod-conmon-fb6d8901d41390f26ddcf2c9bdccef5edcd8c4ed82911607f45d5722477c587f.scope: Deactivated successfully.
Nov 29 03:09:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 135 MiB data, 677 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.7 MiB/s wr, 165 op/s
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.153 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.365 257641 DEBUG nova.compute.manager [req-cdaf03af-966d-46e8-a8c5-31447009b17a req-c9e18b6a-18f8-43d2-9cb1-18c121e12cc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Received event network-vif-deleted-d57beffb-4cf5-44cf-ba6d-8f78a18b0f51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3989576070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.609 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.615 257641 DEBUG nova.compute.provider_tree [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.629 257641 DEBUG nova.scheduler.client.report [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:09:17 np0005539550 podman[304444]: 2025-11-29 08:09:17.657067424 +0000 UTC m=+0.044358364 container create 947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.657 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.658 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:09:17 np0005539550 systemd[1]: Started libpod-conmon-947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b.scope.
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.706 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.707 257641 DEBUG nova.network.neutron [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:09:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:17 np0005539550 podman[304444]: 2025-11-29 08:09:17.637273638 +0000 UTC m=+0.024564598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.731 257641 INFO nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:09:17 np0005539550 podman[304444]: 2025-11-29 08:09:17.734111527 +0000 UTC m=+0.121402487 container init 947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:09:17 np0005539550 podman[304444]: 2025-11-29 08:09:17.742166819 +0000 UTC m=+0.129457759 container start 947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:09:17 np0005539550 podman[304444]: 2025-11-29 08:09:17.745873012 +0000 UTC m=+0.133164252 container attach 947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:17 np0005539550 brave_edison[304461]: 167 167
Nov 29 03:09:17 np0005539550 systemd[1]: libpod-947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b.scope: Deactivated successfully.
Nov 29 03:09:17 np0005539550 podman[304444]: 2025-11-29 08:09:17.747989135 +0000 UTC m=+0.135280085 container died 947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.751 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:09:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b79fb122af00e71aa3ab37750f49fc1bdf84f4d8b44c38f5c955a044b7799b28-merged.mount: Deactivated successfully.
Nov 29 03:09:17 np0005539550 podman[304444]: 2025-11-29 08:09:17.783048485 +0000 UTC m=+0.170339425 container remove 947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_edison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:09:17 np0005539550 systemd[1]: libpod-conmon-947727b890970cb528360137222bec45d80c82f7c6ff1261e34e9646f33aca5b.scope: Deactivated successfully.
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.848 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.850 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.850 257641 INFO nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Creating image(s)
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.878 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.903 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.933 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.936 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:17 np0005539550 podman[304502]: 2025-11-29 08:09:17.937751106 +0000 UTC m=+0.043633886 container create b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.967 257641 DEBUG nova.policy [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '623e430e64a04e20a3224da48323ec68', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3915075f6af64d22aacc0d811789b57a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:09:17 np0005539550 nova_compute[257631]: 2025-11-29 08:09:17.976 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:17 np0005539550 systemd[1]: Started libpod-conmon-b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461.scope.
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.006 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.007 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.007 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.007 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.008 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d86fec1bf5b749023f8197b4d175eeab44c519a76fbb439727920f4c42d6e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d86fec1bf5b749023f8197b4d175eeab44c519a76fbb439727920f4c42d6e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d86fec1bf5b749023f8197b4d175eeab44c519a76fbb439727920f4c42d6e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0d86fec1bf5b749023f8197b4d175eeab44c519a76fbb439727920f4c42d6e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:18 np0005539550 podman[304502]: 2025-11-29 08:09:17.919549519 +0000 UTC m=+0.025432329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:18 np0005539550 podman[304502]: 2025-11-29 08:09:18.034676428 +0000 UTC m=+0.140559228 container init b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.034 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.036 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.036 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.037 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
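
A detail worth noting in the qemu-img call above: nova does not run it bare, it wraps it in oslo_concurrency.prlimit to cap the address space at 1073741824 bytes (1 GiB) and CPU time at 30 s, so a malformed or hostile image cannot wedge the compute host. A minimal sketch of the same guard with the flags from the log; the image path below is a placeholder, not the cached base file named above:

    import json
    import subprocess

    cmd = [
        "python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",        # limits as seen in the log
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/path/to/base-image",  # placeholder path
        "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])
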
Nov 29 03:09:18 np0005539550 podman[304502]: 2025-11-29 08:09:18.044169026 +0000 UTC m=+0.150051806 container start b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:09:18 np0005539550 podman[304502]: 2025-11-29 08:09:18.049618662 +0000 UTC m=+0.155501472 container attach b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.066 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.070 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 103eab90-5b7f-4550-a756-2a9bd5f68194_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.348 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 103eab90-5b7f-4550-a756-2a9bd5f68194_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.417 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.422 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] resizing rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
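
The spawn path visible above is: repeated checks that the RBD image does not exist yet, an rbd import of the cached base file into the vms pool with image format 2, then a resize to the flavor's root disk (1073741824 bytes = 1 GiB). A sketch of the same two steps via the CLI, reusing the exact import flags from the log; the resize step uses the standard rbd CLI rather than a transcript of what nova runs (nova resizes through its rbd_utils wrapper):

    import subprocess

    BASE = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    IMAGE = "103eab90-5b7f-4550-a756-2a9bd5f68194_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", BASE, IMAGE, "--image-format=2", *CEPH])
    # Grow the image to the flavor's root disk: 1G == 1073741824 bytes.
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", "--image", IMAGE, "--size", "1G", *CEPH])
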
Nov 29 03:09:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893805271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.457 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.532 257641 DEBUG nova.objects.instance [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lazy-loading 'migration_context' on Instance uuid 103eab90-5b7f-4550-a756-2a9bd5f68194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.549 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.550 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Ensure instance console log exists: /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.551 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.552 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.552 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.695 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.697 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4469MB free_disk=20.96881103515625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.697 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.697 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.770 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 103eab90-5b7f-4550-a756-2a9bd5f68194 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.770 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.770 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:09:18 np0005539550 nova_compute[257631]: 2025-11-29 08:09:18.820 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
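
The final resource view above is internally consistent with the inventory and the new claim: used_ram is the 512 MB host reservation plus the 128 MB MEMORY_MB allocation held by instance 103eab90-5b7f-4550-a756-2a9bd5f68194, and one of the eight vCPUs is allocated. As arithmetic:

    # Consistency check against the final resource view above.
    reserved_mb = 512          # MEMORY_MB 'reserved' in the inventory
    claimed_mb = 128           # the instance's MEMORY_MB allocation
    assert reserved_mb + claimed_mb == 640        # matches used_ram=640MB
    total_vcpus, used_vcpus = 8, 1                # matches the vcpu counts
    print(total_vcpus - used_vcpus, "vcpus free for further claims")
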
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]: {
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:    "0": [
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:        {
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "devices": [
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "/dev/loop3"
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            ],
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "lv_name": "ceph_lv0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "lv_size": "7511998464",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "name": "ceph_lv0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "tags": {
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.cluster_name": "ceph",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.crush_device_class": "",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.encrypted": "0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.osd_id": "0",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.type": "block",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:                "ceph.vdo": "0"
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            },
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "type": "block",
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:            "vg_name": "ceph_vg0"
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:        }
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]:    ]
Nov 29 03:09:18 np0005539550 quizzical_heisenberg[304557]: }
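
The JSON printed by the quizzical_heisenberg container has the shape of ceph-volume lvm list --format json output: a map of OSD id to its logical volumes, with the authoritative metadata carried in the ceph.* LV tags. A minimal sketch of pulling out the block device and OSD fsid per OSD; the listing below is abbreviated from the output above:

    import json

    listing = json.loads("""
    {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {"ceph.type": "block",
                     "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}}]}
    """)
    for osd_id, lvs in listing.items():
        for lv in lvs:
            if lv["tags"].get("ceph.type") == "block":
                print(f"osd.{osd_id}: {lv['lv_path']} ({lv['tags']['ceph.osd_fsid']})")
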
Nov 29 03:09:18 np0005539550 systemd[1]: libpod-b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461.scope: Deactivated successfully.
Nov 29 03:09:18 np0005539550 podman[304502]: 2025-11-29 08:09:18.849975121 +0000 UTC m=+0.955857941 container died b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:09:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:18 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f0d86fec1bf5b749023f8197b4d175eeab44c519a76fbb439727920f4c42d6e8-merged.mount: Deactivated successfully.
Nov 29 03:09:18 np0005539550 podman[304502]: 2025-11-29 08:09:18.927576488 +0000 UTC m=+1.033459278 container remove b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:09:18 np0005539550 systemd[1]: libpod-conmon-b58a68edead8588c01ad8191b3ecb7d37e3bee20aa28dcbdfe223f892e529461.scope: Deactivated successfully.
Nov 29 03:09:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:18.941 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:18.942 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:18.942 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 173 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.8 MiB/s wr, 181 op/s
Nov 29 03:09:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1292604426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:19 np0005539550 nova_compute[257631]: 2025-11-29 08:09:19.278 257641 DEBUG nova.network.neutron [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Successfully created port: d0301393-0ea0-451b-8472-fdc6250c56a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:09:19 np0005539550 nova_compute[257631]: 2025-11-29 08:09:19.283 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:19 np0005539550 nova_compute[257631]: 2025-11-29 08:09:19.289 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:09:19 np0005539550 nova_compute[257631]: 2025-11-29 08:09:19.307 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:09:19 np0005539550 nova_compute[257631]: 2025-11-29 08:09:19.341 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:09:19 np0005539550 nova_compute[257631]: 2025-11-29 08:09:19.342 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
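The inventory line above shows how the resource tracker reports capacity to Placement: for each resource class, the schedulable capacity is (total - reserved) * allocation_ratio. A short sketch reproducing the capacities implied by the logged inventory (the DISK_GB total evidently comes from the "ceph df" call that completed just before):

    # Values copied from the inventory logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1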
Nov 29 03:09:19 np0005539550 podman[304873]: 2025-11-29 08:09:19.578995041 +0000 UTC m=+0.041151414 container create f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:09:19 np0005539550 systemd[1]: Started libpod-conmon-f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad.scope.
Nov 29 03:09:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:19 np0005539550 podman[304873]: 2025-11-29 08:09:19.562250141 +0000 UTC m=+0.024406534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:19 np0005539550 podman[304873]: 2025-11-29 08:09:19.658319911 +0000 UTC m=+0.120476304 container init f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:19 np0005539550 podman[304873]: 2025-11-29 08:09:19.665323097 +0000 UTC m=+0.127479470 container start f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:09:19 np0005539550 podman[304873]: 2025-11-29 08:09:19.66866273 +0000 UTC m=+0.130819103 container attach f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:09:19 np0005539550 sad_golick[304890]: 167 167
Nov 29 03:09:19 np0005539550 systemd[1]: libpod-f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad.scope: Deactivated successfully.
Nov 29 03:09:19 np0005539550 podman[304873]: 2025-11-29 08:09:19.671216204 +0000 UTC m=+0.133372577 container died f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:09:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-063747f334245a23ec682df5c72ba0f6a0928eda1cdab17617853580fc7770e9-merged.mount: Deactivated successfully.
Nov 29 03:09:19 np0005539550 podman[304873]: 2025-11-29 08:09:19.706783807 +0000 UTC m=+0.168940180 container remove f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:09:19 np0005539550 systemd[1]: libpod-conmon-f819aaa237b9ee8cece3e62f6b8245b838f8c146b2cae9919ee1b48ff7b87dad.scope: Deactivated successfully.
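The sad_golick container above lives for roughly 100 ms and walks the full podman lifecycle (create, init, start, attach, died, remove), i.e. a one-shot "podman run --rm" against the pinned ceph image digest. Its only output, "167 167", is consistent with cephadm probing for the ceph user's uid/gid inside the image, though the log does not show the command line. A sketch of an equivalent one-shot run; the stat invocation is an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: created, started, attached, then removed,
    # mirroring the create/start/attach/died/remove sequence above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],   # assumed probe command
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())  # "167 167" would be the ceph uid/gid in the image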
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016961253024381166 of space, bias 1.0, pg target 0.508837590731435 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:09:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
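The pg_autoscaler lines above are internally consistent: each pool's raw pg target is its capacity ratio times its bias times a cluster-wide PG budget, and here that budget is exactly 300 (plausibly mon_target_pg_per_osd = 100 across 3 OSDs, an inference; the OSD count is not shown in this excerpt). The raw target is then quantized to a power of two, and pools already at a sane value are left at "current". Reproducing the logged targets:

    PG_BUDGET = 300  # inferred from the ratio/target pairs logged above

    pools = {  # name: (capacity ratio from the log, bias from the log)
        "vms":                (0.0016961253024381166, 1.0),
        "volumes":            (0.0009896487646566163, 1.0),
        "images":             (0.0019031427391587568, 1.0),
        "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # vms 0.5088..., volumes 0.2968..., images 0.5709...,
    # cephfs.cephfs.meta 0.001744... -- matching the pg targets above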
Nov 29 03:09:19 np0005539550 podman[304913]: 2025-11-29 08:09:19.943980808 +0000 UTC m=+0.071239279 container create 94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:09:19 np0005539550 systemd[1]: Started libpod-conmon-94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070.scope.
Nov 29 03:09:20 np0005539550 podman[304913]: 2025-11-29 08:09:19.913555454 +0000 UTC m=+0.040814005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:09:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0610a21e6613b5336c099df08b4ca8fcedd4fe726ff7d72db0edb4291387a3bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0610a21e6613b5336c099df08b4ca8fcedd4fe726ff7d72db0edb4291387a3bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0610a21e6613b5336c099df08b4ca8fcedd4fe726ff7d72db0edb4291387a3bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0610a21e6613b5336c099df08b4ca8fcedd4fe726ff7d72db0edb4291387a3bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:20 np0005539550 podman[304913]: 2025-11-29 08:09:20.033194446 +0000 UTC m=+0.160452917 container init 94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaplygin, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:09:20 np0005539550 podman[304913]: 2025-11-29 08:09:20.044377896 +0000 UTC m=+0.171636357 container start 94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:09:20 np0005539550 podman[304913]: 2025-11-29 08:09:20.047629378 +0000 UTC m=+0.174887839 container attach 94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.046 257641 DEBUG nova.network.neutron [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Successfully updated port: d0301393-0ea0-451b-8472-fdc6250c56a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.066 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.066 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquired lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.067 257641 DEBUG nova.network.neutron [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.136 257641 DEBUG nova.compute.manager [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-changed-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.136 257641 DEBUG nova.compute.manager [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Refreshing instance network info cache due to event network-changed-d0301393-0ea0-451b-8472-fdc6250c56a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.140 257641 DEBUG oslo_concurrency.lockutils [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:09:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:20.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
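These radosgw requests recur every couple of seconds, always an anonymous "HEAD /" from 192.168.122.100 or .102 answered 200 with near-zero latency: the signature of load-balancer health probes (for example haproxy's httpchk) rather than user traffic; that attribution is an inference from the pattern alone. A probe of the same shape, with an assumed endpoint port since the beast frontend's bind address is not shown in this excerpt:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=2)
    conn.request("HEAD", "/")          # anonymous, no auth headers
    print(conn.getresponse().status)   # a healthy RGW answers 200
    conn.close()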
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.285 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.286 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.286 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.311 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.311 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.312 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.312 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.420 257641 DEBUG nova.network.neutron [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:09:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:20.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:20 np0005539550 nova_compute[257631]: 2025-11-29 08:09:20.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]: {
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]:        "osd_id": 0,
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]:        "type": "bluestore"
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]:    }
Nov 29 03:09:20 np0005539550 vigorous_chaplygin[304931]: }
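This second JSON block, from vigorous_chaplygin, has the shape of "ceph-volume raw list" output: keyed by OSD fsid rather than OSD id, reporting the activated device-mapper path and the bluestore type. A sketch cross-checking it against the LVM report logged earlier, with both blobs abridged from this log:

    # Both dicts are copied (abridged) from the JSON blocks in this log.
    lvm = {"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                  "tags": {"ceph.osd_fsid":
                           "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}}]}
    raw = {"5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0, "type": "bluestore"}}

    for osd_id, lvs in lvm.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        assert raw[fsid]["osd_id"] == int(osd_id)   # same OSD 0
        print(osd_id, fsid, raw[fsid]["device"])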
Nov 29 03:09:20 np0005539550 systemd[1]: libpod-94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070.scope: Deactivated successfully.
Nov 29 03:09:20 np0005539550 conmon[304931]: conmon 94b3fe7555a39d5e9306 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070.scope/container/memory.events
Nov 29 03:09:20 np0005539550 podman[304913]: 2025-11-29 08:09:20.958577692 +0000 UTC m=+1.085836153 container died 94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaplygin, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:09:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0610a21e6613b5336c099df08b4ca8fcedd4fe726ff7d72db0edb4291387a3bd-merged.mount: Deactivated successfully.
Nov 29 03:09:21 np0005539550 podman[304913]: 2025-11-29 08:09:21.011710195 +0000 UTC m=+1.138968656 container remove 94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:09:21 np0005539550 systemd[1]: libpod-conmon-94b3fe7555a39d5e93065d23bde9ff6dee0942c36b33ff6247b5e90f1b3f6070.scope: Deactivated successfully.
Nov 29 03:09:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:09:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:09:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
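The two config-key writes above are the cephadm mgr module persisting this host's device scan into the monitor key/value store. The stored value can be read back with the ceph CLI; a sketch, assuming it is run somewhere with an adequately privileged keyring:

    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    value = subprocess.run(
        ["ceph", "config-key", "get", key],
        capture_output=True, text=True, check=True,
    ).stdout
    print(value[:200])  # cephadm stores its device cache as JSON text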
Nov 29 03:09:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8c051f98-0274-4327-bd52-4a49f250058c does not exist
Nov 29 03:09:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 507f7b63-0e20-4ebf-b376-8f07e4fd77fe does not exist
Nov 29 03:09:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c6e95fe6-51c3-40db-8ada-deb48d045482 does not exist
Nov 29 03:09:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 226 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 217 op/s
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.507 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.574 257641 DEBUG nova.network.neutron [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Updating instance_info_cache with network_info: [{"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.605 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Releasing lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.605 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Instance network_info: |[{"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.605 257641 DEBUG oslo_concurrency.lockutils [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.606 257641 DEBUG nova.network.neutron [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Refreshing network info cache for port d0301393-0ea0-451b-8472-fdc6250c56a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
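The network_info blob nova just cached is a list of VIF dicts; the guest's fixed address sits under vif["network"]["subnets"][n]["ips"][m]. A sketch that pulls the addressing out of a blob shaped like the one above, abridged to the fields actually used:

    network_info = [{
        "id": "d0301393-0ea0-451b-8472-fdc6250c56a6",
        "address": "fa:16:3e:53:0c:8a",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "gateway": {"address": "10.100.0.1"},
            "ips": [{"address": "10.100.0.7", "type": "fixed"}],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(vif["id"], vif["address"], ip["address"])
    # d0301393-... fa:16:3e:53:0c:8a 10.100.0.7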
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.608 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Start _get_guest_xml network_info=[{"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.613 257641 WARNING nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.620 257641 DEBUG nova.virt.libvirt.host [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.621 257641 DEBUG nova.virt.libvirt.host [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.625 257641 DEBUG nova.virt.libvirt.host [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.626 257641 DEBUG nova.virt.libvirt.host [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.627 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.627 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.628 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.628 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.628 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.628 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.628 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.629 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.629 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.629 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.629 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.629 257641 DEBUG nova.virt.hardware [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
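With all flavor and image limits and preferences at 0, the topology search above reduces to enumerating the sockets x cores x threads factorizations of the vCPU count under the 65536 maxima; for one vCPU only 1:1:1 qualifies, hence "Got 1 possible topologies". A simplified sketch of that enumeration (not nova's actual implementation, which additionally orders results by the stated preferences):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) with s*c*t == vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -> one topology
    print(list(possible_topologies(4)))   # six factorizations of 4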
Nov 29 03:09:21 np0005539550 nova_compute[257631]: 2025-11-29 08:09:21.633 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:09:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4185188092' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.118 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.145 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
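The rbd_utils DEBUG above is nova probing whether a config-drive image already exists for the instance before creating it. With the rbd python bindings such a probe is an ImageNotFound catch; a sketch, assuming the 'vms' pool holds nova's ephemeral images here (consistent with the pool list earlier in this log) and reusing the client id and conf from the CLI calls:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")    # assumed images pool
    name = "103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config"
    img = None
    try:
        img = rbd.Image(ioctx, name)
        print("exists")
    except rbd.ImageNotFound:
        print("does not exist")          # matches the DEBUG line above
    finally:
        if img is not None:
            img.close()
        ioctx.close()
        cluster.shutdown()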
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.149 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:22.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:22.338 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:09:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/151948229' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.564 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.567 257641 DEBUG nova.virt.libvirt.vif [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=76,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCDuoImmGlr2ljuzK21E62mghIO3PiaBWY3sLNDR+fM8pncOO4d77sC7RKjPHRhMVLbpNvx8LePqPu6SzStw/orq5X6ip3CzYjWHpyXzP6Ykowc/d9Mu/JQ7Qxo+WylzZg==',key_name='tempest-keypair-553026050',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3915075f6af64d22aacc0d811789b57a',ramdisk_id='',reservation_id='r-ky6ootgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-2037332346',owner_user_name='tempest-ServersTestFqdnHostnames-2037332346-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='623e430e64a04e20a3224da48323ec68',uuid=103eab90-5b7f-4550-a756-2a9bd5f68194,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": false, 
"vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.567 257641 DEBUG nova.network.os_vif_util [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Converting VIF {"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.570 257641 DEBUG nova.network.os_vif_util [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:8a,bridge_name='br-int',has_traffic_filtering=True,id=d0301393-0ea0-451b-8472-fdc6250c56a6,network=Network(f7f55469-acb0-4281-97a2-80c28aeadc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0301393-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
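The two nova_to_osvif_vif entries above show nova translating its legacy VIF dict into an os-vif VIFOpenVSwitch object before the plug step. Below is a minimal sketch of constructing the same object by hand, assuming the os-vif object API exposes exactly the fields printed in the "Converted object" line; all values are copied from the log, and this is not nova's internal code path.

    # Hedged sketch, not nova's code: build the VIFOpenVSwitch object that the
    # "Converted object" log line prints, using the os-vif bindings.
    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the os-vif plugins (ovs, linux_bridge, ...)

    vif = vif_obj.VIFOpenVSwitch(
        id='d0301393-0ea0-451b-8472-fdc6250c56a6',
        address='fa:16:3e:53:0c:8a',
        bridge_name='br-int',
        vif_name='tapd0301393-0e',       # devname of the neutron port
        has_traffic_filtering=True,      # "port_filter": true in the details
        preserve_on_delete=False,
        active=False,
    )
    print(vif)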
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.571 257641 DEBUG nova.objects.instance [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lazy-loading 'pci_devices' on Instance uuid 103eab90-5b7f-4550-a756-2a9bd5f68194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.587 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <uuid>103eab90-5b7f-4550-a756-2a9bd5f68194</uuid>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <name>instance-0000004c</name>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <nova:name>guest-instance-1.domain.com</nova:name>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:09:21</nova:creationTime>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:user uuid="623e430e64a04e20a3224da48323ec68">tempest-ServersTestFqdnHostnames-2037332346-project-member</nova:user>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:project uuid="3915075f6af64d22aacc0d811789b57a">tempest-ServersTestFqdnHostnames-2037332346</nova:project>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <nova:port uuid="d0301393-0ea0-451b-8472-fdc6250c56a6">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <entry name="serial">103eab90-5b7f-4550-a756-2a9bd5f68194</entry>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <entry name="uuid">103eab90-5b7f-4550-a756-2a9bd5f68194</entry>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/103eab90-5b7f-4550-a756-2a9bd5f68194_disk">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:53:0c:8a"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <target dev="tapd0301393-0e"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/console.log" append="off"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:09:22 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:09:22 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:09:22 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:09:22 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
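The block above, from "End _get_guest_xml xml=<domain ...>" down to "</domain>", is a single multi-line debug message containing the full libvirt domain XML nova generated for the guest. Once the domain is defined, the same XML can be read back with "virsh dumpxml instance-0000004c" or via the libvirt Python bindings; a short sketch, assuming qemu:///system access on the compute host:

    # Read the live domain XML back from libvirtd; it should match the XML that
    # _get_guest_xml logged above (modulo elements libvirt fills in itself,
    # such as assigned PCI addresses).
    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('103eab90-5b7f-4550-a756-2a9bd5f68194')
        print(dom.XMLDesc())
    finally:
        conn.close()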
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.587 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Preparing to wait for external event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.588 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.588 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.588 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
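The three lockutils entries above register a waiter for the network-vif-plugged event before the VIF is actually plugged, so neutron's callback cannot race past the waiter. The shape of that prepare-then-wait pattern, as an illustrative stdlib-only sketch (nova's real implementation in nova.compute.manager is eventlet-based, not threading-based):

    # Illustrative only: the prepare/deliver/wait pattern behind
    # prepare_for_instance_event, reduced to threading primitives.
    import threading

    _events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        ev = threading.Event()
        _events[(instance_uuid, event_name)] = ev   # register BEFORE plugging
        return ev

    def deliver_event(instance_uuid, event_name):
        ev = _events.pop((instance_uuid, event_name), None)
        if ev is not None:
            ev.set()

    waiter = prepare_for_instance_event(
        '103eab90-5b7f-4550-a756-2a9bd5f68194',
        'network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6')
    # ... plug the VIF and start the guest, then:
    # waiter.wait(timeout=300)   # nova's vif_plugging_timeout defaults to 300s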
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.589 257641 DEBUG nova.virt.libvirt.vif [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=76,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCDuoImmGlr2ljuzK21E62mghIO3PiaBWY3sLNDR+fM8pncOO4d77sC7RKjPHRhMVLbpNvx8LePqPu6SzStw/orq5X6ip3CzYjWHpyXzP6Ykowc/d9Mu/JQ7Qxo+WylzZg==',key_name='tempest-keypair-553026050',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3915075f6af64d22aacc0d811789b57a',ramdisk_id='',reservation_id='r-ky6ootgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-2037332346',owner_user_name='tempest-ServersTestFqdnHostnames-2037332346-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='623e430e64a04e20a3224da48323ec68',uuid=103eab90-5b7f-4550-a756-2a9bd5f68194,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, 
"active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.589 257641 DEBUG nova.network.os_vif_util [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Converting VIF {"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.590 257641 DEBUG nova.network.os_vif_util [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:8a,bridge_name='br-int',has_traffic_filtering=True,id=d0301393-0ea0-451b-8472-fdc6250c56a6,network=Network(f7f55469-acb0-4281-97a2-80c28aeadc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0301393-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.590 257641 DEBUG os_vif [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:8a,bridge_name='br-int',has_traffic_filtering=True,id=d0301393-0ea0-451b-8472-fdc6250c56a6,network=Network(f7f55469-acb0-4281-97a2-80c28aeadc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0301393-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.591 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.592 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.592 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.597 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.597 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0301393-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.598 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0301393-0e, col_values=(('external_ids', {'iface-id': 'd0301393-0ea0-451b-8472-fdc6250c56a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:0c:8a', 'vm-uuid': '103eab90-5b7f-4550-a756-2a9bd5f68194'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
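The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) are what os-vif issues against the local OVSDB. For manual reproduction they translate roughly to the following idempotent ovs-vsctl calls; a sketch via subprocess, with all values copied from the log:

    # Rough CLI equivalent of the ovsdbapp transactions above; os-vif talks
    # OVSDB directly, this is only the ovs-vsctl translation.
    import subprocess

    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int',
                    '--', 'set', 'Bridge', 'br-int', 'datapath_type=system'],
                   check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int',
                    'tapd0301393-0e',
                    '--', 'set', 'Interface', 'tapd0301393-0e',
                    'external_ids:iface-id=d0301393-0ea0-451b-8472-fdc6250c56a6',
                    'external_ids:iface-status=active',
                    'external_ids:attached-mac=fa:16:3e:53:0c:8a',
                    'external_ids:vm-uuid=103eab90-5b7f-4550-a756-2a9bd5f68194'],
                   check=True)

The iface-id external_id is what lets ovn-controller match the OVS interface to the logical switch port, which is the claim logged a moment later.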
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.627 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539550 NetworkManager[49039]: <info>  [1764403762.6288] manager: (tapd0301393-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.630 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.634 257641 INFO os_vif [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:0c:8a,bridge_name='br-int',has_traffic_filtering=True,id=d0301393-0ea0-451b-8472-fdc6250c56a6,network=Network(f7f55469-acb0-4281-97a2-80c28aeadc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0301393-0e')#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.704 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.704 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.704 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] No VIF found with MAC fa:16:3e:53:0c:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.705 257641 INFO nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Using config drive#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.730 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
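The rbd_utils entry above is an existence probe: nova checks whether the config-drive image is already present in the vms pool before creating it, and the "does not exist" debug line is the negative result. A sketch of the same check with the python-rbd bindings, assuming the same cephx id ("openstack") the log shows:

    # Hedged sketch of the existence probe: try to open the image and treat
    # rbd.ImageNotFound as "does not exist".
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx,
                               '103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config'):
                    exists = True
            except rbd.ImageNotFound:
                exists = False
    print(exists)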
Nov 29 03:09:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.891 257641 DEBUG nova.network.neutron [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Updated VIF entry in instance network info cache for port d0301393-0ea0-451b-8472-fdc6250c56a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.892 257641 DEBUG nova.network.neutron [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Updating instance_info_cache with network_info: [{"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:22 np0005539550 nova_compute[257631]: 2025-11-29 08:09:22.906 257641 DEBUG oslo_concurrency.lockutils [req-dfcdfec0-b078-4131-8bc4-fd5751f6ad80 req-2bf19dea-390d-4b89-ae3d-a532527fdfd0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.070 257641 INFO nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Creating config drive at /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/disk.config#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.078 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkos9c0sz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 226 MiB data, 715 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 208 op/s
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.217 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkos9c0sz" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.266 257641 DEBUG nova.storage.rbd_utils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] rbd image 103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.270 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/disk.config 103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:09:23 np0005539550 podman[305104]: 2025-11-29 08:09:23.390625896 +0000 UTC m=+0.121289714 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.437 257641 DEBUG oslo_concurrency.processutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/disk.config 103eab90-5b7f-4550-a756-2a9bd5f68194_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.439 257641 INFO nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Deleting local config drive /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194/disk.config because it was imported into RBD.#033[00m
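The config-drive flow in the entries above is: render the metadata into a temporary directory, build an ISO9660 image with mkisofs, import it into the vms RBD pool, then delete the local file. Replayed as a sketch with the exact flags from the log; /tmp/metadata-dir stands in for nova's temporary directory (/tmp/tmpkos9c0sz here):

    # Replay of the config-drive build-and-import sequence from the log.
    import os
    import subprocess

    uuid = '103eab90-5b7f-4550-a756-2a9bd5f68194'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'

    subprocess.run(['/usr/bin/mkisofs', '-o', iso,
                    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
                    '-publisher', 'OpenStack Compute',
                    '-quiet', '-J', '-r', '-V', 'config-2',
                    '/tmp/metadata-dir'], check=True)          # placeholder dir
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.remove(iso)  # "Deleting local config drive ... imported into RBD"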
Nov 29 03:09:23 np0005539550 NetworkManager[49039]: <info>  [1764403763.4922] manager: (tapd0301393-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Nov 29 03:09:23 np0005539550 kernel: tapd0301393-0e: entered promiscuous mode
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:23Z|00254|binding|INFO|Claiming lport d0301393-0ea0-451b-8472-fdc6250c56a6 for this chassis.
Nov 29 03:09:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:23Z|00255|binding|INFO|d0301393-0ea0-451b-8472-fdc6250c56a6: Claiming fa:16:3e:53:0c:8a 10.100.0.7
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.514 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0c:8a 10.100.0.7'], port_security=['fa:16:3e:53:0c:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '103eab90-5b7f-4550-a756-2a9bd5f68194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f7f55469-acb0-4281-97a2-80c28aeadc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3915075f6af64d22aacc0d811789b57a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28340978-c56d-4ace-b20c-1036d3f1ec71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9788b87f-0491-4b35-b298-195725982ddc, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d0301393-0ea0-451b-8472-fdc6250c56a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.516 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d0301393-0ea0-451b-8472-fdc6250c56a6 in datapath f7f55469-acb0-4281-97a2-80c28aeadc93 bound to our chassis#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.518 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f7f55469-acb0-4281-97a2-80c28aeadc93#033[00m
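At this point ovn-controller has claimed the logical port for this chassis and the OVN metadata agent begins provisioning the per-network metadata namespace. The claim can be verified against the OVN southbound database; a small helper, assuming ovn-sbctl is on PATH and can reach the SB DB from this host:

    # Confirm the Port_Binding claim logged above from the OVN southbound DB.
    import subprocess

    def port_binding(logical_port):
        out = subprocess.run(
            ['ovn-sbctl', '--columns=chassis,up,mac', 'find', 'Port_Binding',
             f'logical_port={logical_port}'],
            capture_output=True, text=True, check=True)
        return out.stdout

    print(port_binding('d0301393-0ea0-451b-8472-fdc6250c56a6'))

A non-empty chassis column plus up=[true] corresponds to the "Setting lport ... up in Southbound" message that follows below.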
Nov 29 03:09:23 np0005539550 systemd-udevd[305178]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:23 np0005539550 systemd-machined[216673]: New machine qemu-37-instance-0000004c.
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.532 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[83df5a17-1e25-4f71-8277-bc3b9b443802]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.533 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf7f55469-a1 in ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.535 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf7f55469-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.535 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f875be4e-451e-4e7a-a7b4-d3b2ceb0b6c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.536 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[495d26c1-9cfb-4fb2-9c2b-b72193cc5f0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
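The privsep replies above correspond to the agent creating the ovnmeta-<network> namespace and a veth pair: tapf7f55469-a0 stays on the host side, tapf7f55469-a1 goes inside the namespace. Expressed as the equivalent iproute2 commands (the agent itself drives pyroute2 through the privsep daemon rather than shelling out):

    # iproute2 equivalent of the namespace/veth provisioning in the log.
    import subprocess

    ns = 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93'
    cmds = [
        ['ip', 'netns', 'add', ns],
        ['ip', 'link', 'add', 'tapf7f55469-a0', 'type', 'veth',
         'peer', 'name', 'tapf7f55469-a1'],
        ['ip', 'link', 'set', 'tapf7f55469-a1', 'netns', ns],
        ['ip', 'netns', 'exec', ns, 'ip', 'link', 'set', 'tapf7f55469-a1', 'up'],
    ]
    for cmd in cmds:
        subprocess.run(cmd, check=True)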
Nov 29 03:09:23 np0005539550 NetworkManager[49039]: <info>  [1764403763.5447] device (tapd0301393-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:09:23 np0005539550 NetworkManager[49039]: <info>  [1764403763.5462] device (tapd0301393-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.549 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[d1726398-bfbf-475b-82bf-09f8905708c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 systemd[1]: Started Virtual Machine qemu-37-instance-0000004c.
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.572 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[958af3b5-ae8f-4f7c-8d7e-75f3a2cc0f9a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.598 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[80d48cba-754d-4ffa-af04-79f8d0fdbe05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 systemd-udevd[305183]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:23 np0005539550 NetworkManager[49039]: <info>  [1764403763.6056] manager: (tapf7f55469-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.604 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4729e924-36a8-43a3-b236-246ca5b34ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:23Z|00256|binding|INFO|Setting lport d0301393-0ea0-451b-8472-fdc6250c56a6 ovn-installed in OVS
Nov 29 03:09:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:23Z|00257|binding|INFO|Setting lport d0301393-0ea0-451b-8472-fdc6250c56a6 up in Southbound
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.641 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[16979b8e-5102-48f5-944f-2b1a662251e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.645 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9925f253-557c-4612-8f63-bf209944e327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 NetworkManager[49039]: <info>  [1764403763.6666] device (tapf7f55469-a0): carrier: link connected
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.673 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d88250ee-74a9-41c1-937f-c9d6f1dd06ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.689 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fae93917-fdbb-4805-a3e2-675892924510]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf7f55469-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:50:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681133, 'reachable_time': 40341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305213, 'error': None, 'target': 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.706 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4d469452-8479-4dd6-aa5e-40ceac0ffc75]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:500f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681133, 'tstamp': 681133}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305214, 'error': None, 'target': 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.724 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d013f4-e84d-420c-9df8-f45bc49f1ab8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf7f55469-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:50:0f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681133, 'reachable_time': 40341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305215, 'error': None, 'target': 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.749 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0b111386-0396-4dfc-8ebe-80efc55d2b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.798 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d940e00-334a-4e52-9d8e-af722c5a6484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.799 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7f55469-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.800 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.801 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf7f55469-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 kernel: tapf7f55469-a0: entered promiscuous mode
Nov 29 03:09:23 np0005539550 NetworkManager[49039]: <info>  [1764403763.8063] manager: (tapf7f55469-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.809 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf7f55469-a0, col_values=(('external_ids', {'iface-id': '4f7833b2-b9a8-4fd3-b97c-cc5c14aad1a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
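[Editor's note] The three commands above (DelPortCommand with if_exists, AddPortCommand with may_exist, DbSetCommand on Interface.external_ids) form an idempotent sequence: drop the tap from br-ex if it was ever there, plug it into br-int, and stamp it with the Neutron port UUID so ovn-controller can bind it. A hedged sketch of queuing the same sequence through ovsdbapp's vsctl-style API; the socket path and connection setup are assumptions, not taken from this log:

```python
# Sketch, not the agent's code: replay the logged transaction with ovsdbapp's
# Open_vSwitch schema API. Assumes a local ovsdb-server on the default socket.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

PORT = 'tapf7f55469-a0'
with api.transaction(check_error=True) as txn:
    # if_exists/may_exist make the sequence safe to repeat; compare the
    # "Transaction caused no change" reply to the DelPortCommand above.
    txn.add(api.del_port(PORT, bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', PORT, may_exist=True))
    txn.add(api.db_set('Interface', PORT,
                       ('external_ids',
                        {'iface-id': '4f7833b2-b9a8-4fd3-b97c-cc5c14aad1a7'})))
```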
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.811 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:23Z|00258|binding|INFO|Releasing lport 4f7833b2-b9a8-4fd3-b97c-cc5c14aad1a7 from this chassis (sb_readonly=0)
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.812 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.812 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f7f55469-acb0-4281-97a2-80c28aeadc93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f7f55469-acb0-4281-97a2-80c28aeadc93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.813 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5341b831-7219-4d96-bde0-ac678f72abce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.815 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-f7f55469-acb0-4281-97a2-80c28aeadc93
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/f7f55469-acb0-4281-97a2-80c28aeadc93.pid.haproxy
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID f7f55469-acb0-4281-97a2-80c28aeadc93
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
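[Editor's note] The generated config above is the whole metadata path for this network: haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards every request to the agent's unix socket at /var/lib/neutron/metadata_proxy, and adds an X-OVN-Network-ID header so the agent can tell which network the request came from. A sketch that renders an equivalent file from a template; the template text simply mirrors the logged output, and the parameters are illustrative:

```python
# Sketch: render a metadata-proxy haproxy config equivalent to the one logged.
from string import Template

HAPROXY_TMPL = Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    user        root
    group       root
    maxconn     1024
    pidfile     $pidfile
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor
    retries                 3
    timeout http-request    30s
    timeout connect         30s
    timeout client          32s
    timeout server          32s
    timeout http-keep-alive 30s

listen listener
    bind 169.254.169.254:80
    server metadata $socket_path
    http-request add-header X-OVN-Network-ID $network_id
""")

cfg = HAPROXY_TMPL.substitute(
    network_id='f7f55469-acb0-4281-97a2-80c28aeadc93',
    pidfile='/var/lib/neutron/external/pids/'
            'f7f55469-acb0-4281-97a2-80c28aeadc93.pid.haproxy',
    socket_path='/var/lib/neutron/metadata_proxy',  # unix socket to the agent
)
print(cfg)
```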
Nov 29 03:09:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:23.816 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93', 'env', 'PROCESS_TAG=haproxy-f7f55469-acb0-4281-97a2-80c28aeadc93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f7f55469-acb0-4281-97a2-80c28aeadc93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
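[Editor's note] Stripped of the rootwrap indirection, the command above amounts to running haproxy inside the network namespace as root. The same invocation by hand, with all names copied from the log line:

```python
# Sketch: the logged haproxy launch without rootwrap. Requires root.
import subprocess

NETNS = 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93'
CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
        'f7f55469-acb0-4281-97a2-80c28aeadc93.conf')

subprocess.run(
    ['ip', 'netns', 'exec', NETNS,
     'env', 'PROCESS_TAG=haproxy-f7f55469-acb0-4281-97a2-80c28aeadc93',
     'haproxy', '-f', CONF],
    check=True)  # haproxy backgrounds itself via the config's `daemon` option
```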
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.827 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.874 257641 DEBUG nova.compute.manager [req-d5be5b9b-2464-43b6-b701-e20675bb80a8 req-3e231164-d766-4bbb-8b31-7d1a881baa7d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.875 257641 DEBUG oslo_concurrency.lockutils [req-d5be5b9b-2464-43b6-b701-e20675bb80a8 req-3e231164-d766-4bbb-8b31-7d1a881baa7d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.875 257641 DEBUG oslo_concurrency.lockutils [req-d5be5b9b-2464-43b6-b701-e20675bb80a8 req-3e231164-d766-4bbb-8b31-7d1a881baa7d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.875 257641 DEBUG oslo_concurrency.lockutils [req-d5be5b9b-2464-43b6-b701-e20675bb80a8 req-3e231164-d766-4bbb-8b31-7d1a881baa7d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
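[Editor's note] The Acquiring/acquired/released triplet above is oslo.concurrency's standard lock logging around the per-instance event queue. Roughly the underlying primitive, shown here with lockutils directly rather than nova's wrappers:

```python
# Sketch: the lock primitive behind the log triplet above. With debug logging
# enabled, entering and leaving the context emits the same kind of lines.
from oslo_concurrency import lockutils

INSTANCE = '103eab90-5b7f-4550-a756-2a9bd5f68194'

with lockutils.lock(f'{INSTANCE}-events'):
    # critical section: pop or record a waiting network-vif-plugged event
    pass
```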
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.875 257641 DEBUG nova.compute.manager [req-d5be5b9b-2464-43b6-b701-e20675bb80a8 req-3e231164-d766-4bbb-8b31-7d1a881baa7d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Processing event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:23 np0005539550 nova_compute[257631]: 2025-11-29 08:09:23.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:09:24 np0005539550 podman[305248]: 2025-11-29 08:09:24.17798006 +0000 UTC m=+0.049666367 container create 8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:09:24 np0005539550 systemd[1]: Started libpod-conmon-8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4.scope.
Nov 29 03:09:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
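[Editor's note] These radosgw `beast` lines recur throughout the section, alternating anonymous `HEAD /` probes from 192.168.122.100 and 192.168.122.102 roughly every two seconds; they look like external health checks rather than real S3 traffic. A regex sketch for pulling fields out of this access-log format:

```python
# Sketch: parse the beast access-log format shown above.
import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:09:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
print(m.group('client'), m.group('request'), m.group('status'),
      m.group('latency'))
# -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000
```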
Nov 29 03:09:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:09:24 np0005539550 podman[305248]: 2025-11-29 08:09:24.150940951 +0000 UTC m=+0.022627278 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:09:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3530fe84506bd6879263b449d0f7fffbc727a6a892508642af97b58b2c1d424f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:09:24 np0005539550 podman[305248]: 2025-11-29 08:09:24.260331056 +0000 UTC m=+0.132017393 container init 8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:09:24 np0005539550 podman[305248]: 2025-11-29 08:09:24.266337486 +0000 UTC m=+0.138023793 container start 8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:09:24 np0005539550 neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93[305263]: [NOTICE]   (305267) : New worker (305269) forked
Nov 29 03:09:24 np0005539550 neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93[305263]: [NOTICE]   (305267) : Loading success.
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.589 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.590 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403764.5896358, 103eab90-5b7f-4550-a756-2a9bd5f68194 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.590 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] VM Started (Lifecycle Event)#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.593 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.597 257641 INFO nova.virt.libvirt.driver [-] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Instance spawned successfully.#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.597 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.612 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.615 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
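[Editor's note] In the sync message above, "DB power_state: 0, VM power_state: 1" uses nova's numeric power-state codes: the database still says NOSTATE while the hypervisor already reports RUNNING, which is expected mid-spawn. The mapping, reproduced here from nova.compute.power_state as a plain dict:

```python
# Sketch: nova's power-state codes as used in the sync message above.
POWER_STATE = {
    0: 'NOSTATE',
    1: 'RUNNING',
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}

db_state, vm_state = 0, 1  # values from the log line
print(f'DB: {POWER_STATE[db_state]}, hypervisor: {POWER_STATE[vm_state]}')
# -> DB: NOSTATE, hypervisor: RUNNING
```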
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.625 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.625 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.626 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.627 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.627 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.628 257641 DEBUG nova.virt.libvirt.driver [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.666 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.666 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403764.589708, 103eab90-5b7f-4550-a756-2a9bd5f68194 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.667 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.698 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.703 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403764.593028, 103eab90-5b7f-4550-a756-2a9bd5f68194 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.703 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.716 257641 INFO nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Took 6.87 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.717 257641 DEBUG nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.744 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.746 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.775 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.788 257641 INFO nova.compute.manager [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Took 7.98 seconds to build instance.#033[00m
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.803 257641 DEBUG oslo_concurrency.lockutils [None req-b630145e-2408-430c-985b-6fa60ebb8de7 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:09:24 np0005539550 nova_compute[257631]: 2025-11-29 08:09:24.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 227 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 7.1 MiB/s wr, 230 op/s
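[Editor's note] The ceph-mgr pgmap lines are a cluster heartbeat emitted every couple of seconds: all 305 placement groups active+clean, plus capacity and current read/write throughput. A small parser sketch for the headline numbers:

```python
# Sketch: extract the headline numbers from the pgmap debug lines above.
import re

PGMAP_RE = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; '
    r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
    r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail')

line = ('pgmap v1955: 305 pgs: 305 active+clean; 227 MiB data, '
        '721 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 7.1 MiB/s wr, '
        '230 op/s')
m = PGMAP_RE.search(line)
print(m.group('ver'), m.group('pgs'), m.group('used'), m.group('avail'))
# -> 1955 305 721 MiB 20 GiB
```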
Nov 29 03:09:26 np0005539550 nova_compute[257631]: 2025-11-29 08:09:26.010 257641 DEBUG nova.compute.manager [req-da3c2d01-f320-492d-901a-0cce8bdd3a61 req-fb9a031b-9333-4eef-a211-b204a03cf531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:26 np0005539550 nova_compute[257631]: 2025-11-29 08:09:26.011 257641 DEBUG oslo_concurrency.lockutils [req-da3c2d01-f320-492d-901a-0cce8bdd3a61 req-fb9a031b-9333-4eef-a211-b204a03cf531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:26 np0005539550 nova_compute[257631]: 2025-11-29 08:09:26.011 257641 DEBUG oslo_concurrency.lockutils [req-da3c2d01-f320-492d-901a-0cce8bdd3a61 req-fb9a031b-9333-4eef-a211-b204a03cf531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:26 np0005539550 nova_compute[257631]: 2025-11-29 08:09:26.012 257641 DEBUG oslo_concurrency.lockutils [req-da3c2d01-f320-492d-901a-0cce8bdd3a61 req-fb9a031b-9333-4eef-a211-b204a03cf531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:26 np0005539550 nova_compute[257631]: 2025-11-29 08:09:26.012 257641 DEBUG nova.compute.manager [req-da3c2d01-f320-492d-901a-0cce8bdd3a61 req-fb9a031b-9333-4eef-a211-b204a03cf531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] No waiting events found dispatching network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:09:26 np0005539550 nova_compute[257631]: 2025-11-29 08:09:26.012 257641 WARNING nova.compute.manager [req-da3c2d01-f320-492d-901a-0cce8bdd3a61 req-fb9a031b-9333-4eef-a211-b204a03cf531 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received unexpected event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:09:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:26 np0005539550 nova_compute[257631]: 2025-11-29 08:09:26.509 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 227 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.4 MiB/s wr, 267 op/s
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.186 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:27 np0005539550 NetworkManager[49039]: <info>  [1764403767.1872] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Nov 29 03:09:27 np0005539550 NetworkManager[49039]: <info>  [1764403767.1881] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:27Z|00259|binding|INFO|Releasing lport 4f7833b2-b9a8-4fd3-b97c-cc5c14aad1a7 from this chassis (sb_readonly=0)
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.628 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.976 257641 DEBUG nova.compute.manager [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-changed-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.976 257641 DEBUG nova.compute.manager [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Refreshing instance network info cache due to event network-changed-d0301393-0ea0-451b-8472-fdc6250c56a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.977 257641 DEBUG oslo_concurrency.lockutils [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.977 257641 DEBUG oslo_concurrency.lockutils [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:09:27 np0005539550 nova_compute[257631]: 2025-11-29 08:09:27.978 257641 DEBUG nova.network.neutron [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Refreshing network info cache for port d0301393-0ea0-451b-8472-fdc6250c56a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:09:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:28.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:28 np0005539550 nova_compute[257631]: 2025-11-29 08:09:28.386 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403753.3847528, c24a72ed-78bb-4305-abf0-04f30042a9ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:09:28 np0005539550 nova_compute[257631]: 2025-11-29 08:09:28.387 257641 INFO nova.compute.manager [-] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:09:28 np0005539550 nova_compute[257631]: 2025-11-29 08:09:28.413 257641 DEBUG nova.compute.manager [None req-8275e405-7320-4441-803b-850ffb8ffcc4 - - - - - -] [instance: c24a72ed-78bb-4305-abf0-04f30042a9ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:09:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:09:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:09:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 227 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.4 MiB/s wr, 263 op/s
Nov 29 03:09:29 np0005539550 nova_compute[257631]: 2025-11-29 08:09:29.827 257641 DEBUG nova.network.neutron [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Updated VIF entry in instance network info cache for port d0301393-0ea0-451b-8472-fdc6250c56a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:09:29 np0005539550 nova_compute[257631]: 2025-11-29 08:09:29.828 257641 DEBUG nova.network.neutron [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Updating instance_info_cache with network_info: [{"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:09:29 np0005539550 nova_compute[257631]: 2025-11-29 08:09:29.847 257641 DEBUG oslo_concurrency.lockutils [req-52366747-6ea0-4c9d-a3d2-0b0950b55498 req-40bb26df-a18e-4562-b5d3-e7622732463e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-103eab90-5b7f-4550-a756-2a9bd5f68194" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
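[Editor's note] The instance_info_cache update at 08:09:29.828 is a plain JSON document; addressing lives under network -> subnets -> ips, with floating IPs nested per fixed IP. A sketch that walks a trimmed copy of that structure, keeping only fields shown in the log:

```python
# Sketch: extract fixed and floating IPs from the network_info layout above.
vif = {
    'id': 'd0301393-0ea0-451b-8472-fdc6250c56a6',
    'network': {'subnets': [{
        'cidr': '10.100.0.0/28',
        'ips': [{'address': '10.100.0.7',
                 'floating_ips': [{'address': '192.168.122.173'}]}],
    }]},
}

for subnet in vif['network']['subnets']:
    for ip in subnet['ips']:
        fips = [f['address'] for f in ip.get('floating_ips', [])]
        print(vif['id'], ip['address'], 'floating:', fips)
# -> d0301393-... 10.100.0.7 floating: ['192.168.122.173']
```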
Nov 29 03:09:29 np0005539550 nova_compute[257631]: 2025-11-29 08:09:29.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:09:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:30.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:30.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 227 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.3 MiB/s wr, 259 op/s
Nov 29 03:09:31 np0005539550 nova_compute[257631]: 2025-11-29 08:09:31.512 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:32.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:32 np0005539550 nova_compute[257631]: 2025-11-29 08:09:32.631 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:32.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 227 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 41 KiB/s wr, 219 op/s
Nov 29 03:09:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:34.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:34.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 227 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 41 KiB/s wr, 219 op/s
Nov 29 03:09:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:36.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:36 np0005539550 nova_compute[257631]: 2025-11-29 08:09:36.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:36.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 238 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 925 KiB/s wr, 230 op/s
Nov 29 03:09:37 np0005539550 nova_compute[257631]: 2025-11-29 08:09:37.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:37Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:0c:8a 10.100.0.7
Nov 29 03:09:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:37Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:0c:8a 10.100.0.7
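[Editor's note] The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: with OVN native DHCP there is no dnsmasq, and the lease for fa:16:3e:53:0c:8a / 10.100.0.7 is answered from DHCP_Options rows in the northbound database. One way to inspect those rows, assuming the NB DB is reachable from this shell with default settings:

```python
# Sketch: dump OVN's native-DHCP option rows (cidr plus options such as
# server_id, lease_time, router). Assumes ovn-nbctl can reach the NB DB.
import subprocess

out = subprocess.run(['ovn-nbctl', 'list', 'DHCP_Options'],
                     capture_output=True, text=True, check=True).stdout
print(out)
```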
Nov 29 03:09:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:38.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:38.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:09:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 295 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.5 MiB/s wr, 222 op/s
Nov 29 03:09:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:40.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:40.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 322 MiB data, 796 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 6.3 MiB/s wr, 222 op/s
Nov 29 03:09:41 np0005539550 podman[305383]: 2025-11-29 08:09:41.336828611 +0000 UTC m=+0.066664163 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:09:41 np0005539550 podman[305384]: 2025-11-29 08:09:41.33679529 +0000 UTC m=+0.054952029 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
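[Editor's note] The two health_status=healthy records come from podman's healthcheck timer executing the `/openstack/healthcheck` test that the config_data above mounts into each container. The same check can be triggered on demand; container names are taken from the log:

```python
# Sketch: run the same healthchecks podman runs on its timer, by hand.
import subprocess

for name in ('multipathd', 'ovn_metadata_agent'):
    r = subprocess.run(['podman', 'healthcheck', 'run', name])
    print(name, 'healthy' if r.returncode == 0 else f'rc={r.returncode}')
```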
Nov 29 03:09:41 np0005539550 nova_compute[257631]: 2025-11-29 08:09:41.532 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:42.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:42 np0005539550 nova_compute[257631]: 2025-11-29 08:09:42.676 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:42.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 322 MiB data, 796 MiB used, 20 GiB / 21 GiB avail; 972 KiB/s rd, 6.3 MiB/s wr, 184 op/s
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.198 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.199 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.199 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.200 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.200 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.203 257641 INFO nova.compute.manager [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Terminating instance#033[00m
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.205 257641 DEBUG nova.compute.manager [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:09:43 np0005539550 kernel: tapd0301393-0e (unregistering): left promiscuous mode
Nov 29 03:09:43 np0005539550 NetworkManager[49039]: <info>  [1764403783.2614] device (tapd0301393-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00260|binding|INFO|Releasing lport d0301393-0ea0-451b-8472-fdc6250c56a6 from this chassis (sb_readonly=0)
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00261|binding|INFO|Setting lport d0301393-0ea0-451b-8472-fdc6250c56a6 down in Southbound
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00262|binding|INFO|Removing iface tapd0301393-0e ovn-installed in OVS
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.280 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0c:8a 10.100.0.7'], port_security=['fa:16:3e:53:0c:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '103eab90-5b7f-4550-a756-2a9bd5f68194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f7f55469-acb0-4281-97a2-80c28aeadc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3915075f6af64d22aacc0d811789b57a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28340978-c56d-4ace-b20c-1036d3f1ec71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9788b87f-0491-4b35-b298-195725982ddc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d0301393-0ea0-451b-8472-fdc6250c56a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.281 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d0301393-0ea0-451b-8472-fdc6250c56a6 in datapath f7f55469-acb0-4281-97a2-80c28aeadc93 unbound from our chassis
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.283 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f7f55469-acb0-4281-97a2-80c28aeadc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.284 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a9880822-0cec-440f-93bb-dbb2da2e342e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.285 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93 namespace which is not needed anymore
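[editor's note] The metadata agent tears down the per-network namespace through privsep helpers built on pyroute2. A minimal equivalent, assuming pyroute2 is available and the caller has the needed privileges:

    # Hedged sketch: remove the ovnmeta namespace named in the log.
    from pyroute2 import netns

    ns = 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93'
    if ns in netns.listnetns():
        netns.remove(ns)  # deletes the bind mount under /var/run/netns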
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Nov 29 03:09:43 np0005539550 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000004c.scope: Consumed 14.455s CPU time.
Nov 29 03:09:43 np0005539550 systemd-machined[216673]: Machine qemu-37-instance-0000004c terminated.
Nov 29 03:09:43 np0005539550 kernel: tapd0301393-0e: entered promiscuous mode
Nov 29 03:09:43 np0005539550 systemd-udevd[305428]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:09:43 np0005539550 NetworkManager[49039]: <info>  [1764403783.4274] manager: (tapd0301393-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Nov 29 03:09:43 np0005539550 kernel: tapd0301393-0e (unregistering): left promiscuous mode
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00263|binding|INFO|Claiming lport d0301393-0ea0-451b-8472-fdc6250c56a6 for this chassis.
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.429 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00264|binding|INFO|d0301393-0ea0-451b-8472-fdc6250c56a6: Claiming fa:16:3e:53:0c:8a 10.100.0.7
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.441 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0c:8a 10.100.0.7'], port_security=['fa:16:3e:53:0c:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '103eab90-5b7f-4550-a756-2a9bd5f68194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f7f55469-acb0-4281-97a2-80c28aeadc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3915075f6af64d22aacc0d811789b57a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28340978-c56d-4ace-b20c-1036d3f1ec71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9788b87f-0491-4b35-b298-195725982ddc, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d0301393-0ea0-451b-8472-fdc6250c56a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.449 257641 INFO nova.virt.libvirt.driver [-] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Instance destroyed successfully.
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.450 257641 DEBUG nova.objects.instance [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lazy-loading 'resources' on Instance uuid 103eab90-5b7f-4550-a756-2a9bd5f68194 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:43 np0005539550 neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93[305263]: [NOTICE]   (305267) : haproxy version is 2.8.14-c23fe91
Nov 29 03:09:43 np0005539550 neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93[305263]: [NOTICE]   (305267) : path to executable is /usr/sbin/haproxy
Nov 29 03:09:43 np0005539550 neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93[305263]: [WARNING]  (305267) : Exiting Master process...
Nov 29 03:09:43 np0005539550 neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93[305263]: [ALERT]    (305267) : Current worker (305269) exited with code 143 (Terminated)
Nov 29 03:09:43 np0005539550 neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93[305263]: [WARNING]  (305267) : All workers exited. Exiting... (0)
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00265|binding|INFO|Setting lport d0301393-0ea0-451b-8472-fdc6250c56a6 ovn-installed in OVS
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00266|binding|INFO|Setting lport d0301393-0ea0-451b-8472-fdc6250c56a6 up in Southbound
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00267|binding|INFO|Releasing lport d0301393-0ea0-451b-8472-fdc6250c56a6 from this chassis (sb_readonly=1)
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.456 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 systemd[1]: libpod-8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4.scope: Deactivated successfully.
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00268|if_status|INFO|Dropped 2 log messages in last 42 seconds (most recently, 42 seconds ago) due to excessive rate
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00269|if_status|INFO|Not setting lport d0301393-0ea0-451b-8472-fdc6250c56a6 down as sb is readonly
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00270|binding|INFO|Removing iface tapd0301393-0e ovn-installed in OVS
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.459 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 podman[305448]: 2025-11-29 08:09:43.463573376 +0000 UTC m=+0.070224533 container died 8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00271|binding|INFO|Releasing lport d0301393-0ea0-451b-8472-fdc6250c56a6 from this chassis (sb_readonly=0)
Nov 29 03:09:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:09:43Z|00272|binding|INFO|Setting lport d0301393-0ea0-451b-8472-fdc6250c56a6 down in Southbound
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.477 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:0c:8a 10.100.0.7'], port_security=['fa:16:3e:53:0c:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '103eab90-5b7f-4550-a756-2a9bd5f68194', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f7f55469-acb0-4281-97a2-80c28aeadc93', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3915075f6af64d22aacc0d811789b57a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28340978-c56d-4ace-b20c-1036d3f1ec71', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9788b87f-0491-4b35-b298-195725982ddc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d0301393-0ea0-451b-8472-fdc6250c56a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.488 257641 DEBUG nova.virt.libvirt.vif [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=76,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCDuoImmGlr2ljuzK21E62mghIO3PiaBWY3sLNDR+fM8pncOO4d77sC7RKjPHRhMVLbpNvx8LePqPu6SzStw/orq5X6ip3CzYjWHpyXzP6Ykowc/d9Mu/JQ7Qxo+WylzZg==',key_name='tempest-keypair-553026050',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:09:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3915075f6af64d22aacc0d811789b57a',ramdisk_id='',reservation_id='r-ky6ootgo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestFqdnHostnames-2037332346',owner_user_name='tempest-ServersTestFqdnHostnames-2037332346-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:09:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='623e430e64a04e20a3224da48323ec68',uuid=103eab90-5b7f-4550-a756-2a9bd5f68194,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.489 257641 DEBUG nova.network.os_vif_util [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Converting VIF {"id": "d0301393-0ea0-451b-8472-fdc6250c56a6", "address": "fa:16:3e:53:0c:8a", "network": {"id": "f7f55469-acb0-4281-97a2-80c28aeadc93", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1604396431-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3915075f6af64d22aacc0d811789b57a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0301393-0e", "ovs_interfaceid": "d0301393-0ea0-451b-8472-fdc6250c56a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.490 257641 DEBUG nova.network.os_vif_util [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:0c:8a,bridge_name='br-int',has_traffic_filtering=True,id=d0301393-0ea0-451b-8472-fdc6250c56a6,network=Network(f7f55469-acb0-4281-97a2-80c28aeadc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0301393-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.490 257641 DEBUG os_vif [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:0c:8a,bridge_name='br-int',has_traffic_filtering=True,id=d0301393-0ea0-451b-8472-fdc6250c56a6,network=Network(f7f55469-acb0-4281-97a2-80c28aeadc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0301393-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.492 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.493 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0301393-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
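[editor's note] The DelPortCommand above is ovsdbapp's transactional delete of the tap port from br-int. A standalone sketch of the same call, assuming the default local OVS database socket path:

    # Hedged sketch: issue the logged DelPortCommand directly via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tapd0301393-0e', bridge='br-int',
                 if_exists=True).execute(check_error=True)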
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.494 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.500 257641 INFO os_vif [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:0c:8a,bridge_name='br-int',has_traffic_filtering=True,id=d0301393-0ea0-451b-8472-fdc6250c56a6,network=Network(f7f55469-acb0-4281-97a2-80c28aeadc93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0301393-0e')
Nov 29 03:09:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3530fe84506bd6879263b449d0f7fffbc727a6a892508642af97b58b2c1d424f-merged.mount: Deactivated successfully.
Nov 29 03:09:43 np0005539550 podman[305448]: 2025-11-29 08:09:43.515316114 +0000 UTC m=+0.121967251 container cleanup 8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:09:43 np0005539550 systemd[1]: libpod-conmon-8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4.scope: Deactivated successfully.
Nov 29 03:09:43 np0005539550 podman[305492]: 2025-11-29 08:09:43.581373551 +0000 UTC m=+0.043201885 container remove 8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.586 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3ecb8780-fecc-4b76-96af-533e347bf105]: (4, ('Sat Nov 29 08:09:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93 (8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4)\n8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4\nSat Nov 29 08:09:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93 (8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4)\n8bd88c96e7b09eb698219d5698727399f44435732dfa7d502325c85427e10fa4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.588 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1e18fc4e-b53b-4587-9cba-7e7805b8fe08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.589 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf7f55469-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.592 257641 DEBUG nova.compute.manager [req-93c78305-1cb3-4663-a231-3321f73015d1 req-2919366c-2a26-4bec-8541-d82667d8f36f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-unplugged-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:43 np0005539550 kernel: tapf7f55469-a0: left promiscuous mode
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.592 257641 DEBUG oslo_concurrency.lockutils [req-93c78305-1cb3-4663-a231-3321f73015d1 req-2919366c-2a26-4bec-8541-d82667d8f36f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.593 257641 DEBUG oslo_concurrency.lockutils [req-93c78305-1cb3-4663-a231-3321f73015d1 req-2919366c-2a26-4bec-8541-d82667d8f36f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.593 257641 DEBUG oslo_concurrency.lockutils [req-93c78305-1cb3-4663-a231-3321f73015d1 req-2919366c-2a26-4bec-8541-d82667d8f36f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.593 257641 DEBUG nova.compute.manager [req-93c78305-1cb3-4663-a231-3321f73015d1 req-2919366c-2a26-4bec-8541-d82667d8f36f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] No waiting events found dispatching network-vif-unplugged-d0301393-0ea0-451b-8472-fdc6250c56a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.594 257641 DEBUG nova.compute.manager [req-93c78305-1cb3-4663-a231-3321f73015d1 req-2919366c-2a26-4bec-8541-d82667d8f36f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-unplugged-d0301393-0ea0-451b-8472-fdc6250c56a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
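[editor's note] The acquire/release pairs above are nova's per-instance event lock, named "<uuid>-events". The same pattern written with oslo.concurrency directly, as a sketch only:

    # Hedged sketch: serialize work on one instance's external events.
    from oslo_concurrency import lockutils

    with lockutils.lock('103eab90-5b7f-4550-a756-2a9bd5f68194-events'):
        pass  # pop or record a network-vif-* event here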
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.594 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.607 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.610 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4dffed-2899-49bc-ac4e-fe01e7d06a34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.630 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8846b7-0729-47ea-a500-1f165a91802d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.631 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2bffa7-d812-43dd-9cd8-99eb3410566a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.647 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[57edcea0-e58c-4511-ae8e-c24406f26c93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681125, 'reachable_time': 40089, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305510, 'error': None, 'target': 'ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 systemd[1]: run-netns-ovnmeta\x2df7f55469\x2dacb0\x2d4281\x2d97a2\x2d80c28aeadc93.mount: Deactivated successfully.
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.652 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f7f55469-acb0-4281-97a2-80c28aeadc93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.652 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e9425608-8eb8-414a-9116-5323830a014d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.653 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d0301393-0ea0-451b-8472-fdc6250c56a6 in datapath f7f55469-acb0-4281-97a2-80c28aeadc93 unbound from our chassis
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.655 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f7f55469-acb0-4281-97a2-80c28aeadc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.655 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[307098e7-e910-4e54-bfbc-9d12b9e0bcc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.656 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d0301393-0ea0-451b-8472-fdc6250c56a6 in datapath f7f55469-acb0-4281-97a2-80c28aeadc93 unbound from our chassis
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.657 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f7f55469-acb0-4281-97a2-80c28aeadc93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:09:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:09:43.658 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0a1e23-76ee-4530-a93d-8caa48522f3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.929 257641 INFO nova.virt.libvirt.driver [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Deleting instance files /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194_del
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.930 257641 INFO nova.virt.libvirt.driver [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Deletion of /var/lib/nova/instances/103eab90-5b7f-4550-a756-2a9bd5f68194_del complete
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.984 257641 INFO nova.compute.manager [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Took 0.78 seconds to destroy the instance on the hypervisor.
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.985 257641 DEBUG oslo.service.loopingcall [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.985 257641 DEBUG nova.compute.manager [-] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:09:43 np0005539550 nova_compute[257631]: 2025-11-29 08:09:43.985 257641 DEBUG nova.network.neutron [-] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:09:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:09:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
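[editor's note] The radosgw "beast" access lines are regular enough to parse when summarizing health-check traffic. An illustrative regex, not radosgw's own format specification:

    # Hedged sketch: split one beast access-log record into fields.
    import re

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:09:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = re.search(r'beast: \S+: (\S+) - (\S+) \[([^\]]+)\] "([^"]+)" '
                  r'(\d+) (\d+).*latency=([\d.]+)s', line)
    peer, user, ts, request, status, size, latency = m.groups()
    print(peer, request, status, float(latency))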
Nov 29 03:09:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 282 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.4 MiB/s wr, 200 op/s
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.706 257641 DEBUG nova.compute.manager [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.706 257641 DEBUG oslo_concurrency.lockutils [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.707 257641 DEBUG oslo_concurrency.lockutils [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.707 257641 DEBUG oslo_concurrency.lockutils [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.707 257641 DEBUG nova.compute.manager [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] No waiting events found dispatching network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.707 257641 WARNING nova.compute.manager [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received unexpected event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 for instance with vm_state active and task_state deleting.
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.708 257641 DEBUG nova.compute.manager [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.708 257641 DEBUG oslo_concurrency.lockutils [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.708 257641 DEBUG oslo_concurrency.lockutils [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.708 257641 DEBUG oslo_concurrency.lockutils [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.708 257641 DEBUG nova.compute.manager [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] No waiting events found dispatching network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:45 np0005539550 nova_compute[257631]: 2025-11-29 08:09:45.709 257641 WARNING nova.compute.manager [req-044c2d56-219e-4112-abda-8b664c90b0e9 req-70cbf250-a5d2-40a0-9583-20c7ba59b3c1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received unexpected event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 for instance with vm_state active and task_state deleting.
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.129 257641 DEBUG nova.network.neutron [-] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.145 257641 INFO nova.compute.manager [-] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Took 2.16 seconds to deallocate network for instance.
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.198 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.198 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:46.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.256 257641 DEBUG oslo_concurrency.processutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.534 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/196825430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.754 257641 DEBUG oslo_concurrency.processutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
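[editor's note] nova's RBD image backend shells out to ceph df exactly as logged at 08:09:46.256-46.754. Reproducing the call with oslo.concurrency, flags copied from the log line:

    # Hedged sketch: run 'ceph df' and read the cluster totals from JSON.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])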
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.759 257641 DEBUG nova.compute.provider_tree [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.776 257641 DEBUG nova.scheduler.client.report [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
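[editor's note] Placement derives usable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. A worked check of the numbers in the log line above:

    # Hedged sketch: effective capacity for the inventory logged above.
    inv = {'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9}}
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1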
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.806 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.840 257641 INFO nova.scheduler.client.report [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Deleted allocations for instance 103eab90-5b7f-4550-a756-2a9bd5f68194
Nov 29 03:09:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:09:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:09:46 np0005539550 nova_compute[257631]: 2025-11-29 08:09:46.906 257641 DEBUG oslo_concurrency.lockutils [None req-95b97380-3fe4-41a4-a82f-375476dfa7e5 623e430e64a04e20a3224da48323ec68 3915075f6af64d22aacc0d811789b57a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 219 MiB data, 734 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.9 MiB/s wr, 242 op/s
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.799 257641 DEBUG nova.compute.manager [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.799 257641 DEBUG oslo_concurrency.lockutils [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.799 257641 DEBUG oslo_concurrency.lockutils [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.799 257641 DEBUG oslo_concurrency.lockutils [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.799 257641 DEBUG nova.compute.manager [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] No waiting events found dispatching network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.800 257641 WARNING nova.compute.manager [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received unexpected event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 for instance with vm_state deleted and task_state None.
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.800 257641 DEBUG nova.compute.manager [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.800 257641 DEBUG oslo_concurrency.lockutils [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.800 257641 DEBUG oslo_concurrency.lockutils [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.800 257641 DEBUG oslo_concurrency.lockutils [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "103eab90-5b7f-4550-a756-2a9bd5f68194-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.800 257641 DEBUG nova.compute.manager [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] No waiting events found dispatching network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.800 257641 WARNING nova.compute.manager [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received unexpected event network-vif-plugged-d0301393-0ea0-451b-8472-fdc6250c56a6 for instance with vm_state deleted and task_state None.
Nov 29 03:09:47 np0005539550 nova_compute[257631]: 2025-11-29 08:09:47.801 257641 DEBUG nova.compute.manager [req-9596fb43-7252-4d14-8fa8-3f38b48801fc req-2ef8a7de-447a-4666-9386-01c88346f187 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Received event network-vif-deleted-d0301393-0ea0-451b-8472-fdc6250c56a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:09:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:48 np0005539550 nova_compute[257631]: 2025-11-29 08:09:48.497 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:48.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 213 MiB data, 731 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.3 MiB/s wr, 262 op/s
Nov 29 03:09:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 213 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Nov 29 03:09:51 np0005539550 nova_compute[257631]: 2025-11-29 08:09:51.590 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 213 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.9 MiB/s wr, 132 op/s
Nov 29 03:09:53 np0005539550 nova_compute[257631]: 2025-11-29 08:09:53.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:54.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:54 np0005539550 podman[305592]: 2025-11-29 08:09:54.341714764 +0000 UTC m=+0.083432334 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
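[editor note] The podman line above is a health_status event for the ovn_controller container: per its config_data, the configured test runs /openstack/healthcheck inside the container, and the failing streak is zero. The same check can be driven by hand with podman's healthcheck subcommand; a subprocess sketch:

```python
# Run the container's configured healthcheck on demand. "podman healthcheck
# run" exits 0 when the test (here /openstack/healthcheck) reports healthy.
import subprocess

result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout or result.stderr}")
```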
Nov 29 03:09:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 182 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 194 op/s
Nov 29 03:09:55 np0005539550 nova_compute[257631]: 2025-11-29 08:09:55.729 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.062 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.590 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.609 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.609 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.664 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:09:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.837 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.838 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.844 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:09:56 np0005539550 nova_compute[257631]: 2025-11-29 08:09:56.844 257641 INFO nova.compute.claims [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:09:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.021 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 167 MiB data, 719 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Nov 29 03:09:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:09:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717880881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.465 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
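[editor note] This ceph df round trip is how nova's RBD image backend sizes its disk inventory: shell out under the client.openstack cephx identity (the audit dispatch appears on the mon two lines above) and parse the JSON report. A sketch of the same call; the JSON layout assumed here (pools -> stats -> max_avail) matches current Ceph releases but is an assumption, since the log does not show the command's output.

```python
# Re-run the logged "ceph df" call and pull pool capacity from the JSON.
# The "pools"/"stats"/"max_avail" keys are assumed, not shown in this log.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
report = json.loads(out)
for pool in report["pools"]:
    if pool["name"] == "vms":  # the pool the instance disk lands in
        print("vms max_avail bytes:", pool["stats"]["max_avail"])
```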
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.471 257641 DEBUG nova.compute.provider_tree [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.491 257641 DEBUG nova.scheduler.client.report [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
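[editor note] The inventory line above fixes the node's schedulable capacity: placement exposes (total - reserved) * allocation_ratio per resource class, so this host offers 32 vCPUs, 7168 MB of RAM, and 17.1 GB of disk to the scheduler. Worked through for the logged values:

```python
# Capacity arithmetic for the inventory dict logged above:
# usable = (total - reserved) * allocation_ratio per resource class.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, v in inventory.items():
    usable = (v["total"] - v["reserved"]) * v["allocation_ratio"]
    print(rc, usable)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```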
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.510 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.510 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.556 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.556 257641 DEBUG nova.network.neutron [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.580 257641 INFO nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.606 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.741 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.743 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.744 257641 INFO nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Creating image(s)
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.771 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image b373b176-ee91-41a8-a80a-96c957639455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.801 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image b373b176-ee91-41a8-a80a-96c957639455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.831 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image b373b176-ee91-41a8-a80a-96c957639455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.835 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.861 257641 DEBUG nova.policy [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9ab0114aca6149af994da2b9052c1368', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8384e5887c0948f5876c019d50057152', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
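[editor note] This policy "failure" is informational, not an error: the request carries only the member and reader roles, so the network:attach_external_network check does not pass and the build proceeds without external-network privileges. A minimal oslo.policy sketch of such a role-gated check follows; the "role:admin" rule string is an illustrative stand-in, not nova's registered default for this rule.

```python
# Role-gated check sketch with the real oslo.policy API. The rule string is
# assumed for illustration; with only member/reader roles it returns False.
from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_default(
    policy.RuleDefault("network:attach_external_network", "role:admin"))

creds = {"roles": ["member", "reader"],
         "project_id": "8384e5887c0948f5876c019d50057152"}
print(enforcer.enforce("network:attach_external_network", {}, creds))  # False
```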
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.902 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
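[editor note] Note how the qemu-img info call above runs under oslo_concurrency.prlimit with a 1 GiB address-space cap and a 30-second CPU cap, so that parsing a hostile image cannot exhaust the host. processutils.execute builds exactly that wrapper when handed a ProcessLimits object; a sketch with the same limits and path:

```python
# prlimit-guarded qemu-img info, as in the logged command. ProcessLimits
# maps address_space/cpu_time onto the --as/--cpu prlimit flags.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
    "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488",
    "--force-share", "--output=json",
    prlimit=processutils.ProcessLimits(address_space=1 << 30, cpu_time=30))
print(json.loads(out)["virtual-size"])  # standard qemu-img JSON field
```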
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.903 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.904 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.904 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.930 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image b373b176-ee91-41a8-a80a-96c957639455_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:09:57 np0005539550 nova_compute[257631]: 2025-11-29 08:09:57.934 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b373b176-ee91-41a8-a80a-96c957639455_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.203 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b373b176-ee91-41a8-a80a-96c957639455_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
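[editor note] Together with the resize a few lines below, this import is the whole provisioning step for the image-backed root disk: seed the RBD image in the vms pool from the cached base file, then grow it to the flavor's 1 GiB root_gb (1073741824 bytes). A plain-subprocess sketch of the same two commands (nova itself goes through oslo's processutils):

```python
# Seed the instance disk from the base image, then resize to 1 GiB
# (rbd --size is in MiB). Commands mirror the ones logged around here.
import subprocess

base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
disk = "b373b176-ee91-41a8-a80a-96c957639455_disk"
cephx = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

subprocess.check_call(
    ["rbd", "import", "--pool", "vms", base, disk, "--image-format=2", *cephx])
subprocess.check_call(
    ["rbd", "resize", "--pool", "vms", disk, "--size", "1024", *cephx])
```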
Nov 29 03:09:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:09:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:09:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:09:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.276 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] resizing rbd image b373b176-ee91-41a8-a80a-96c957639455_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.366 257641 DEBUG nova.objects.instance [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.402 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.403 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Ensure instance console log exists: /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.403 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.404 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.404 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.447 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403783.4460661, 103eab90-5b7f-4550-a756-2a9bd5f68194 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.448 257641 INFO nova.compute.manager [-] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] VM Stopped (Lifecycle Event)
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.470 257641 DEBUG nova.compute.manager [None req-7679c6cc-fd07-4d83-b4dd-552316d5468c - - - - - -] [instance: 103eab90-5b7f-4550-a756-2a9bd5f68194] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:09:58 np0005539550 nova_compute[257631]: 2025-11-29 08:09:58.501 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:09:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:09:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:09:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:09:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:09:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 193 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Nov 29 03:09:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:09:59
Nov 29 03:09:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:09:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:09:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'backups', '.mgr', 'volumes', 'cephfs.cephfs.meta']
Nov 29 03:09:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:09:59 np0005539550 nova_compute[257631]: 2025-11-29 08:09:59.677 257641 DEBUG nova.network.neutron [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Successfully created port: 6bdd57b3-15f4-46ef-bab3-67925c3606c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:10:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:10:00 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 03:10:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:00.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 213 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 115 op/s
Nov 29 03:10:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:01.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.471 257641 DEBUG nova.network.neutron [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Successfully updated port: 6bdd57b3-15f4-46ef-bab3-67925c3606c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.488 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.489 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.489 257641 DEBUG nova.network.neutron [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.582 257641 DEBUG nova.compute.manager [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-changed-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.583 257641 DEBUG nova.compute.manager [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Refreshing instance network info cache due to event network-changed-6bdd57b3-15f4-46ef-bab3-67925c3606c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.583 257641 DEBUG oslo_concurrency.lockutils [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:01 np0005539550 nova_compute[257631]: 2025-11-29 08:10:01.696 257641 DEBUG nova.network.neutron [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:10:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:02.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 213 MiB data, 717 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.155 257641 DEBUG nova.network.neutron [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.180 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.181 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance network_info: |[{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.182 257641 DEBUG oslo_concurrency.lockutils [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.182 257641 DEBUG nova.network.neutron [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Refreshing network info cache for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.186 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Start _get_guest_xml network_info=[{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.191 257641 WARNING nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.200 257641 DEBUG nova.virt.libvirt.host [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.201 257641 DEBUG nova.virt.libvirt.host [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.204 257641 DEBUG nova.virt.libvirt.host [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.204 257641 DEBUG nova.virt.libvirt.host [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.205 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.206 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.206 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.206 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.207 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.207 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.207 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.207 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.207 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.208 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.208 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.208 257641 DEBUG nova.virt.hardware [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
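[editor note] These topology lines explain the 1:1:1 outcome: flavor and image set no limits or preferences (0 means unset, so the 65536 defaults apply), and nova then enumerates every (sockets, cores, threads) factorization of the vCPU count; for 1 vCPU exactly one exists. A simplified re-implementation of that enumeration, not nova's exact code:

```python
# Enumerate CPU topologies whose product equals the vCPU count, as in the
# "Build topologies ... Got 1 possible topologies" lines above.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)]
```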
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.210 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:03.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.504 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627606840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.656 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.687 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image b373b176-ee91-41a8-a80a-96c957639455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:10:03 np0005539550 nova_compute[257631]: 2025-11-29 08:10:03.693 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110381362' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.133 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.135 257641 DEBUG nova.virt.libvirt.vif [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1182352036',display_name='tempest-ServerDiskConfigTestJSON-server-1182352036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1182352036',id=78,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-i00kkfjg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:57Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=b373b176-ee91-41a8-a80a-96c957639455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.135 257641 DEBUG nova.network.os_vif_util [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.136 257641 DEBUG nova.network.os_vif_util [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.137 257641 DEBUG nova.objects.instance [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'pci_devices' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
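
The "Converting VIF" / "Converted object" pair above is nova translating its legacy VIF dict into an os-vif VIFOpenVSwitch object before handing it to the plugin. A minimal sketch of building the same object directly against the os-vif library (field values are copied from the log; treat the exact constructor usage as an assumption about this os-vif release):

    # Sketch: constructing the VIFOpenVSwitch object seen in the log above.
    import os_vif
    from os_vif import objects

    objects.register_all()  # register the versioned-object classes

    vif = objects.vif.VIFOpenVSwitch(
        id='6bdd57b3-15f4-46ef-bab3-67925c3606c5',
        address='fa:16:3e:47:01:d2',
        bridge_name='br-int',
        vif_name='tap6bdd57b3-15',
        plugin='ovs',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=objects.network.Network(id='65f88c5a-8801-4bc1-9eed-15e2bab4717d'),
        port_profile=objects.vif.VIFPortProfileOpenVSwitch(
            interface_id='6bdd57b3-15f4-46ef-bab3-67925c3606c5'))

    info = objects.instance_info.InstanceInfo(
        uuid='b373b176-ee91-41a8-a80a-96c957639455',
        name='instance-0000004e')

    os_vif.initialize()
    # os_vif.plug(vif, info) would drive the same OVSDB transaction that
    # appears further down in this log.
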
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.153 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <uuid>b373b176-ee91-41a8-a80a-96c957639455</uuid>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <name>instance-0000004e</name>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1182352036</nova:name>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:10:03</nova:creationTime>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:user uuid="9ab0114aca6149af994da2b9052c1368">tempest-ServerDiskConfigTestJSON-767135984-project-member</nova:user>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:project uuid="8384e5887c0948f5876c019d50057152">tempest-ServerDiskConfigTestJSON-767135984</nova:project>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <nova:port uuid="6bdd57b3-15f4-46ef-bab3-67925c3606c5">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <entry name="serial">b373b176-ee91-41a8-a80a-96c957639455</entry>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <entry name="uuid">b373b176-ee91-41a8-a80a-96c957639455</entry>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/b373b176-ee91-41a8-a80a-96c957639455_disk">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/b373b176-ee91-41a8-a80a-96c957639455_disk.config">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:47:01:d2"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <target dev="tap6bdd57b3-15"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/console.log" append="off"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:10:04 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:10:04 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:10:04 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:10:04 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
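
The domain XML dumped by _get_guest_xml above is what nova ultimately hands to libvirt. A minimal sketch of doing that step by hand with the libvirt-python bindings (the connection URI and flag value are assumptions; nova wraps this call in its own Host/Guest helpers):

    # Sketch: defining and starting a domain from the XML logged above.
    import libvirt

    xml = "..."  # the <domain type="kvm"> ... </domain> document above

    conn = libvirt.open('qemu:///system')  # assumed URI for a local KVM host
    try:
        dom = conn.defineXML(xml)   # persist the domain definition
        dom.createWithFlags(0)      # boot it
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
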
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.155 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Preparing to wait for external event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.156 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.156 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.156 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.157 257641 DEBUG nova.virt.libvirt.vif [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:09:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1182352036',display_name='tempest-ServerDiskConfigTestJSON-server-1182352036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1182352036',id=78,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-i00kkfjg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:09:57Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=b373b176-ee91-41a8-a80a-96c957639455,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.158 257641 DEBUG nova.network.os_vif_util [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.158 257641 DEBUG nova.network.os_vif_util [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.159 257641 DEBUG os_vif [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.160 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.160 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.161 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.164 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.165 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6bdd57b3-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.166 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6bdd57b3-15, col_values=(('external_ids', {'iface-id': '6bdd57b3-15f4-46ef-bab3-67925c3606c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:01:d2', 'vm-uuid': 'b373b176-ee91-41a8-a80a-96c957639455'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
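
The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are the OVSDB writes that attach the tap device to br-int and label it so ovn-controller can find it. For illustration, the equivalent operation expressed as an ovs-vsctl call (a sketch; nova itself speaks the OVSDB IDL protocol rather than shelling out):

    # Sketch: ovs-vsctl equivalent of the logged AddPortCommand + DbSetCommand.
    import subprocess

    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', 'tap6bdd57b3-15',
         '--', 'set', 'Interface', 'tap6bdd57b3-15',
         'external_ids:iface-id=6bdd57b3-15f4-46ef-bab3-67925c3606c5',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:47:01:d2',
         'external_ids:vm-uuid=b373b176-ee91-41a8-a80a-96c957639455'],
        check=True)
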
Nov 29 03:10:04 np0005539550 NetworkManager[49039]: <info>  [1764403804.2071] manager: (tap6bdd57b3-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.206 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.210 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.212 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.213 257641 INFO os_vif [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15')#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.264 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.265 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.266 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] No VIF found with MAC fa:16:3e:47:01:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.266 257641 INFO nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Using config drive#033[00m
Nov 29 03:10:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:04.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.292 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image b373b176-ee91-41a8-a80a-96c957639455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
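
The rbd_utils line above is nova probing Ceph for an existing config-drive image before it builds one. The same existence check with the python-rbd bindings, as a sketch (pool name, client id and conf path are copied from commands elsewhere in this log):

    # Sketch: checking whether an RBD image exists, as rbd_utils does above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'b373b176-ee91-41a8-a80a-96c957639455_disk.config'):
                print('image exists')
        except rbd.ImageNotFound:
            print('image does not exist')   # the case logged here
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
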
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.893 257641 INFO nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Creating config drive at /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/disk.config#033[00m
Nov 29 03:10:04 np0005539550 nova_compute[257631]: 2025-11-29 08:10:04.898 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprf6xi85u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.047 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprf6xi85u" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.077 257641 DEBUG nova.storage.rbd_utils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] rbd image b373b176-ee91-41a8-a80a-96c957639455_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.082 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/disk.config b373b176-ee91-41a8-a80a-96c957639455_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.107 257641 DEBUG nova.network.neutron [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updated VIF entry in instance network info cache for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.108 257641 DEBUG nova.network.neutron [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 213 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.134 257641 DEBUG oslo_concurrency.lockutils [req-b5c6b689-ea64-403b-8de4-7742109eeeb4 req-884a59dd-05e2-4545-960c-39074206f7b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.239 257641 DEBUG oslo_concurrency.processutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/disk.config b373b176-ee91-41a8-a80a-96c957639455_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.240 257641 INFO nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Deleting local config drive /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/disk.config because it was imported into RBD.#033[00m
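
The lines above capture the whole config-drive flow: mkisofs builds an ISO9660 image under /var/lib/nova/instances, rbd import pushes it into the vms pool, and the local copy is deleted. Condensed into a sketch that mirrors the logged commands (the -publisher and -quiet flags are omitted for brevity; the /tmp staging directory is the one the log shows):

    # Sketch: the logged config-drive build-and-import sequence.
    import os
    import subprocess

    inst = 'b373b176-ee91-41a8-a80a-96c957639455'
    iso = f'/var/lib/nova/instances/{inst}/disk.config'

    # 1. build the ISO from the staged metadata directory
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
         '/tmp/tmprf6xi85u'],
        check=True)

    # 2. import it into Ceph, then drop the local copy
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.remove(iso)
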
Nov 29 03:10:05 np0005539550 kernel: tap6bdd57b3-15: entered promiscuous mode
Nov 29 03:10:05 np0005539550 NetworkManager[49039]: <info>  [1764403805.2929] manager: (tap6bdd57b3-15): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Nov 29 03:10:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:05.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.340 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539550 systemd-udevd[305946]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:05Z|00273|binding|INFO|Claiming lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 for this chassis.
Nov 29 03:10:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:05Z|00274|binding|INFO|6bdd57b3-15f4-46ef-bab3-67925c3606c5: Claiming fa:16:3e:47:01:d2 10.100.0.4
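
ovn-controller's two "Claiming lport" messages above record the Southbound Port_Binding for this port being bound to the local chassis. One way to inspect that row by hand, wrapped in Python for consistency with the other sketches (assumes ovn-sbctl can reach the Southbound database from this host):

    # Sketch: inspecting the Port_Binding row ovn-controller just claimed.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=6bdd57b3-15f4-46ef-bab3-67925c3606c5'],
        check=True, capture_output=True, text=True)
    print(out.stdout)   # chassis, mac, tunnel_key, up, ...
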
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.348 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.353 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:01:d2 10.100.0.4'], port_security=['fa:16:3e:47:01:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b373b176-ee91-41a8-a80a-96c957639455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=6bdd57b3-15f4-46ef-bab3-67925c3606c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.355 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d bound to our chassis#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.357 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d#033[00m
Nov 29 03:10:05 np0005539550 NetworkManager[49039]: <info>  [1764403805.3594] device (tap6bdd57b3-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:05 np0005539550 NetworkManager[49039]: <info>  [1764403805.3604] device (tap6bdd57b3-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.369 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba54356-f69a-4d1b-bcfc-1950ed5438a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.369 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65f88c5a-81 in ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
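
The agent line above announces a veth pair whose inner end (tap65f88c5a-81) lives in the ovnmeta- namespace; the privsep replies around it are the pyroute2 calls doing the work. The equivalent manual plumbing with iproute2, as a sketch (interface and namespace names copied from the log):

    # Sketch: hand-built version of the veth/namespace plumbing logged above.
    import subprocess

    ns = 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d'
    outer, inner = 'tap65f88c5a-80', 'tap65f88c5a-81'

    subprocess.run(['ip', 'netns', 'add', ns], check=True)
    subprocess.run(['ip', 'link', 'add', outer, 'type', 'veth',
                    'peer', 'name', inner], check=True)
    subprocess.run(['ip', 'link', 'set', inner, 'netns', ns], check=True)
    subprocess.run(['ip', '-n', ns, 'link', 'set', inner, 'up'], check=True)
    subprocess.run(['ip', 'link', 'set', outer, 'up'], check=True)
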
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.372 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65f88c5a-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.372 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5e2e23-cdb9-4e05-a9fe-fb98adbabbe4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 systemd-machined[216673]: New machine qemu-38-instance-0000004e.
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.373 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e86d28a6-7fd6-4468-bf10-f097a6711558]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 systemd[1]: Started Virtual Machine qemu-38-instance-0000004e.
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.387 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[7f5f7f33-9a56-4563-805f-64381f6db374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.417 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6151d0a5-bf23-43ce-b3b4-bdcfff26e9e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:05Z|00275|binding|INFO|Setting lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 ovn-installed in OVS
Nov 29 03:10:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:05Z|00276|binding|INFO|Setting lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 up in Southbound
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.431 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.446 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[28456057-55fb-4ba8-b5b1-933d9e47aee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.451 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4912ade3-bcfa-41d8-8c31-578126d08813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 systemd-udevd[305950]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:05 np0005539550 NetworkManager[49039]: <info>  [1764403805.4522] manager: (tap65f88c5a-80): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.477 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[61613e2b-ab14-45e5-995b-23b3b361170e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.480 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0641a15b-01e0-4a9d-bfac-02193539dfd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 NetworkManager[49039]: <info>  [1764403805.4973] device (tap65f88c5a-80): carrier: link connected
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.501 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1dd2e8-d3a8-47d7-a0c1-1cf6743ff71c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.515 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[758afba8-5235-439a-af99-fe58d5b36907]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685316, 'reachable_time': 41478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305982, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.527 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1429a45d-64d2-446f-bebb-5fa802624427]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:227e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685316, 'tstamp': 685316}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305983, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.540 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[84c8f6a5-3881-42ae-bb13-a299c6844de0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65f88c5a-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:22:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 70], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685316, 'reachable_time': 41478, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305984, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.562 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb4c5b6-75a9-444c-b2ea-be14d55c03b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.605 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e64d96f2-f142-4c22-81f1-c881ec042648]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
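
The recurring "privsep: reply[...]" lines are the unprivileged agent reading results back from its privileged helper daemon. For orientation, a sketch of how such an entrypoint is declared with oslo.privsep (all names here are hypothetical, not neutron's actual privileged modules):

    # Sketch: a hypothetical oslo.privsep entrypoint of the kind answering
    # the reply[...] lines above. Context and function names are illustrative.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    link_admin = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_link',            # hypothetical config section
        pypath=__name__ + '.link_admin',
        capabilities=[caps.CAP_NET_ADMIN])

    @link_admin.entrypoint
    def set_link_up(ifname):
        # runs inside the privileged daemon; the return value is marshalled
        # back to the caller, which shows up as the (status, value) pairs
        # logged above
        pass
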
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.606 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.607 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.607 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65f88c5a-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:05 np0005539550 NetworkManager[49039]: <info>  [1764403805.6092] manager: (tap65f88c5a-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.608 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539550 kernel: tap65f88c5a-80: entered promiscuous mode
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.612 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.613 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65f88c5a-80, col_values=(('external_ids', {'iface-id': 'dd9b6149-e4f7-45dd-a89e-de246cf739ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.614 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:05Z|00277|binding|INFO|Releasing lport dd9b6149-e4f7-45dd-a89e-de246cf739ae from this chassis (sb_readonly=0)
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.629 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.631 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.632 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2f971b-46c5-4d00-b3ea-e18436e3ef6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.633 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/65f88c5a-8801-4bc1-9eed-15e2bab4717d.pid.haproxy
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 65f88c5a-8801-4bc1-9eed-15e2bab4717d
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:05.633 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'env', 'PROCESS_TAG=haproxy-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65f88c5a-8801-4bc1-9eed-15e2bab4717d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
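With the config rendered, the agent launches haproxy inside the ovnmeta- network namespace through neutron-rootwrap; the PROCESS_TAG environment variable tags the process so it can be managed later. A sketch of the underlying invocation, using subprocess as a stand-in for the rootwrap call (which must run as root):

    import subprocess

    NETWORK_ID = '65f88c5a-8801-4bc1-9eed-15e2bab4717d'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{NETWORK_ID}',
         'env', f'PROCESS_TAG=haproxy-{NETWORK_ID}',
         'haproxy', '-f',
         f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf'],
        check=True)

Inside the namespace the proxy binds 169.254.169.254:80 and forwards requests to the /var/lib/neutron/metadata_proxy socket, adding the X-OVN-Network-ID header specified in the config above.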
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.823 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403805.8226569, b373b176-ee91-41a8-a80a-96c957639455 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.823 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] VM Started (Lifecycle Event)#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.845 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.850 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403805.8230069, b373b176-ee91-41a8-a80a-96c957639455 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.850 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.872 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.876 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:05 np0005539550 nova_compute[257631]: 2025-11-29 08:10:05.906 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
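The Paused lifecycle event races the still-running spawn: vm_state is building and task_state is spawning, so the power-state sync is skipped rather than letting a transient hypervisor pause overwrite the build in progress. An illustrative model of the skip rule (not Nova's actual code):

    def maybe_sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:        # e.g. 'spawning' above
            return db_power_state         # pending task: skip, keep the DB value
        return vm_power_state             # no task: adopt the hypervisor's view

    assert maybe_sync_power_state('spawning', 0, 3) == 0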
Nov 29 03:10:06 np0005539550 podman[306058]: 2025-11-29 08:10:06.018563661 +0000 UTC m=+0.064022138 container create 3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:10:06 np0005539550 systemd[1]: Started libpod-conmon-3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e.scope.
Nov 29 03:10:06 np0005539550 podman[306058]: 2025-11-29 08:10:05.980702221 +0000 UTC m=+0.026160748 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e169c9d9c0d7bdd0f90a7a4ab510146ce7c47e7f8f180c9686a4ab9050452b83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:06 np0005539550 podman[306058]: 2025-11-29 08:10:06.103227345 +0000 UTC m=+0.148685822 container init 3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:10:06 np0005539550 podman[306058]: 2025-11-29 08:10:06.110336813 +0000 UTC m=+0.155795290 container start 3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:06 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [NOTICE]   (306078) : New worker (306080) forked
Nov 29 03:10:06 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [NOTICE]   (306078) : Loading success.
Nov 29 03:10:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:06.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.589 257641 DEBUG nova.compute.manager [req-c4e18283-f0b2-4eb0-a19c-a8fe7003beec req-3903cdb7-6cf5-46d6-9f5b-dda15a75745d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.590 257641 DEBUG oslo_concurrency.lockutils [req-c4e18283-f0b2-4eb0-a19c-a8fe7003beec req-3903cdb7-6cf5-46d6-9f5b-dda15a75745d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.590 257641 DEBUG oslo_concurrency.lockutils [req-c4e18283-f0b2-4eb0-a19c-a8fe7003beec req-3903cdb7-6cf5-46d6-9f5b-dda15a75745d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.590 257641 DEBUG oslo_concurrency.lockutils [req-c4e18283-f0b2-4eb0-a19c-a8fe7003beec req-3903cdb7-6cf5-46d6-9f5b-dda15a75745d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.591 257641 DEBUG nova.compute.manager [req-c4e18283-f0b2-4eb0-a19c-a8fe7003beec req-3903cdb7-6cf5-46d6-9f5b-dda15a75745d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Processing event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.591 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.594 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.596 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403806.59632, b373b176-ee91-41a8-a80a-96c957639455 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.597 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.598 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.601 257641 INFO nova.virt.libvirt.driver [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance spawned successfully.#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.601 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.618 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.623 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.626 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.627 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.627 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.628 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.628 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.628 257641 DEBUG nova.virt.libvirt.driver [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.661 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:10:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.745 257641 INFO nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Took 9.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.745 257641 DEBUG nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.830 257641 INFO nova.compute.manager [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Took 10.05 seconds to build instance.#033[00m
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.857 257641 DEBUG oslo_concurrency.lockutils [None req-c7923b2b-42aa-4571-9dfe-3c4061e4b557 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
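The three durations reported for this boot nest as expected: the 9.00 s hypervisor spawn sits inside the 10.05 s end-to-end build, which in turn is bounded by the 10.248 s hold of the per-instance build lock:

    spawn, build, lock_held = 9.00, 10.05, 10.248   # seconds, from the lines above
    assert spawn < build < lock_held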
Nov 29 03:10:06 np0005539550 nova_compute[257631]: 2025-11-29 08:10:06.942 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:06.941 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:06.943 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:10:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 213 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 03:10:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:07.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:10:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:08.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:08 np0005539550 nova_compute[257631]: 2025-11-29 08:10:08.684 257641 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:08 np0005539550 nova_compute[257631]: 2025-11-29 08:10:08.685 257641 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:08 np0005539550 nova_compute[257631]: 2025-11-29 08:10:08.685 257641 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:08 np0005539550 nova_compute[257631]: 2025-11-29 08:10:08.685 257641 DEBUG oslo_concurrency.lockutils [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:08 np0005539550 nova_compute[257631]: 2025-11-29 08:10:08.685 257641 DEBUG nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:08 np0005539550 nova_compute[257631]: 2025-11-29 08:10:08.686 257641 WARNING nova.compute.manager [req-3c1632fb-a5a1-4374-9587-3fa03dd80a89 req-a709c162-db2b-4045-b9d8-c4bf2f92f3a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state None.#033[00m
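Neutron has now delivered network-vif-plugged-6bdd57b3… twice. The first copy (08:10:06) was consumed by the waiter the build registered, hence "Instance event wait completed in 0 seconds"; this second copy arrives after the build finished, no waiter is registered, and pop_instance_event can only log the warning. A toy model of that waiter table (illustrative, not Nova's implementation):

    import threading

    waiters = {}                            # event name -> threading.Event

    def wait_for(name, timeout=300):
        waiters[name] = ev = threading.Event()
        try:
            return ev.wait(timeout)         # build thread blocks here
        finally:
            del waiters[name]

    def deliver(name):
        ev = waiters.get(name)
        if ev is None:                      # the WARNING path above
            print(f'Received unexpected event {name}')
        else:
            ev.set()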
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:10:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:10:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 214 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 959 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 29 03:10:09 np0005539550 nova_compute[257631]: 2025-11-29 08:10:09.207 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:09.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 214 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1022 KiB/s wr, 97 op/s
Nov 29 03:10:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:11.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:11 np0005539550 nova_compute[257631]: 2025-11-29 08:10:11.597 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:12.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:12 np0005539550 podman[306092]: 2025-11-29 08:10:12.322074222 +0000 UTC m=+0.065260788 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:10:12 np0005539550 podman[306093]: 2025-11-29 08:10:12.344987167 +0000 UTC m=+0.084579593 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
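These periodic health_status=healthy events are podman executing each container's configured healthcheck test ('/openstack/healthcheck', mounted read-only into the container per the config_data above). The same check can be triggered by hand; exit status 0 means healthy:

    import subprocess

    for name in ('multipathd', 'ovn_metadata_agent'):
        subprocess.run(['podman', 'healthcheck', 'run', name], check=True)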
Nov 29 03:10:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 214 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 32 KiB/s wr, 95 op/s
Nov 29 03:10:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:13.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:14 np0005539550 nova_compute[257631]: 2025-11-29 08:10:14.209 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:14.946 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
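This is the chassis acknowledgment deferred at 08:10:06.943 ("Delaying updating chassis table for 8 seconds"): nb_cfg=25 from the SB_Global update is copied into Chassis_Private external_ids once the delay elapses, and the timestamps confirm it:

    from datetime import datetime, timedelta

    announced = datetime.fromisoformat('2025-11-29 08:10:06.943')
    acked = datetime.fromisoformat('2025-11-29 08:10:14.946')
    assert timedelta(seconds=8) <= acked - announced < timedelta(seconds=9)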
Nov 29 03:10:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 173 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 33 KiB/s wr, 168 op/s
Nov 29 03:10:15 np0005539550 nova_compute[257631]: 2025-11-29 08:10:15.158 257641 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:15 np0005539550 nova_compute[257631]: 2025-11-29 08:10:15.158 257641 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:15 np0005539550 nova_compute[257631]: 2025-11-29 08:10:15.158 257641 DEBUG nova.network.neutron [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:16 np0005539550 nova_compute[257631]: 2025-11-29 08:10:16.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 703 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 34 KiB/s wr, 180 op/s
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.210 257641 DEBUG nova.network.neutron [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.231 257641 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:17.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.336 257641 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.336 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Creating file /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/d4aa047fe554486caf154e9954d2c949.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.337 257641 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/d4aa047fe554486caf154e9954d2c949.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.710 257641 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/d4aa047fe554486caf154e9954d2c949.tmp" returned: 1 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.711 257641 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455/d4aa047fe554486caf154e9954d2c949.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.712 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Creating directory /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.712 257641 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.920 257641 DEBUG oslo_concurrency.processutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455" returned: 0 in 0.208s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
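migrate_disk_and_power_off begins by probing the destination: the touch over ssh exits 1, most likely because the instance directory does not yet exist on 192.168.122.102, so the driver falls back to creating it with mkdir -p (exit 0). A condensed sketch of that probe-then-create sequence; the marker filename here is a placeholder:

    import subprocess

    DEST = '192.168.122.102'
    INST_DIR = '/var/lib/nova/instances/b373b176-ee91-41a8-a80a-96c957639455'

    def remote(*cmd):
        # BatchMode=yes: fail fast instead of prompting for a password
        return subprocess.run(('ssh', '-o', 'BatchMode=yes', DEST) + cmd).returncode

    if remote('touch', f'{INST_DIR}/probe.tmp') != 0:   # rc=1 above
        remote('mkdir', '-p', INST_DIR)                 # rc=0 above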
Nov 29 03:10:17 np0005539550 nova_compute[257631]: 2025-11-29 08:10:17.925 257641 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:10:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:18 np0005539550 nova_compute[257631]: 2025-11-29 08:10:18.936 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:18.942 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:18.942 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:18.943 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
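The Acquiring/acquired/released triplet (held 0.000s) is the standard oslo.concurrency pattern; in the agent it is typically just a decorated method, roughly:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        """Runs under the named lock; entry and exit emit the
        Acquiring/acquired/released DEBUG lines seen above."""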
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 134 MiB data, 679 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 185 op/s
Nov 29 03:10:19 np0005539550 nova_compute[257631]: 2025-11-29 08:10:19.211 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:19.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00100437081239531 of space, bias 1.0, pg target 0.30131124371859297 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:10:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
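Every pg_autoscaler line above fits one formula: pg target = (share of raw space) × bias × 300, where 300 is consistent with three OSDs at the default mon_target_pg_per_osd of 100 (an inference from the numbers, not stated in the log); the fractional target is then quantized, and these small pools all stay at their current pg_num:

    # Reproducing the 'vms' pool line from the constants above
    using, bias = 0.00100437081239531, 1.0
    assert abs(using * bias * 300 - 0.30131124371859297) < 1e-12

    # The same constant fits 'cephfs.cephfs.meta' with its bias of 4.0
    assert abs(1.4540294062907128e-06 * 4.0 * 300 - 0.0017448352875488555) < 1e-12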
Nov 29 03:10:19 np0005539550 nova_compute[257631]: 2025-11-29 08:10:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:19 np0005539550 nova_compute[257631]: 2025-11-29 08:10:19.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:19 np0005539550 nova_compute[257631]: 2025-11-29 08:10:19.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:19 np0005539550 nova_compute[257631]: 2025-11-29 08:10:19.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:19 np0005539550 nova_compute[257631]: 2025-11-29 08:10:19.946 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:10:19 np0005539550 nova_compute[257631]: 2025-11-29 08:10:19.946 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:20.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813087642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.415 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
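The resource tracker sizes its RBD-backed disk by shelling out to ceph df (the audited mon_command above) and reading the cluster-wide stats from the JSON; the result feeds the free_disk figure in the resource view below. A minimal sketch of that call, assuming the standard ceph df JSON layout:

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] / 1024 ** 3)   # GiB free, cf. free_disk below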
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.495 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.496 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:20Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:01:d2 10.100.0.4
Nov 29 03:10:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:20Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:01:d2 10.100.0.4
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.677 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.678 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4339MB free_disk=20.967201232910156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.678 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.679 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.724 257641 INFO nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating resource usage from migration ab5c62e7-ebd8-47af-8bc8-b3e6db416399
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.748 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Migration ab5c62e7-ebd8-47af-8bc8-b3e6db416399 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.749 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.749 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.764 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.783 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.783 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
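The inventory dict logged twice above fully determines what the scheduler may consume from this node. A minimal sketch of the capacity arithmetic, using placement's standard (total - reserved) * allocation_ratio rule and the exact figures from these lines (illustrative code, not nova's):

    # Illustrative only: recompute the schedulable capacity placement
    # derives from the inventory logged by _refresh_and_get_inventory above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1 -- so the 1 VCPU / 128 MB /
    # 1 GB migration allocation above consumes only a sliver of the node.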
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.795 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.815 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:10:20 np0005539550 nova_compute[257631]: 2025-11-29 08:10:20.877 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 176 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.4 MiB/s wr, 152 op/s
Nov 29 03:10:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2596830881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:21 np0005539550 nova_compute[257631]: 2025-11-29 08:10:21.306 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
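The resource tracker sizes its RBD backend by shelling out to `ceph df`, as the processutils pair above shows (and the ceph-mon audit lines confirm the dispatch). A hedged re-creation of the same probe; the command and its --id/--conf arguments are copied from the log, while the JSON keys ("stats", "total_avail_bytes") are the usual `ceph df --format=json` layout, assumed here rather than read out of nova's source:

    import json
    import subprocess

    # Run the same probe the driver logged above and report free space.
    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]  # key names assumed, see lead-in
    print(f'cluster free: {stats["total_avail_bytes"] / 1024**3:.1f} GiB')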
Nov 29 03:10:21 np0005539550 nova_compute[257631]: 2025-11-29 08:10:21.314 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:10:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:21.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:21 np0005539550 nova_compute[257631]: 2025-11-29 08:10:21.339 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:10:21 np0005539550 nova_compute[257631]: 2025-11-29 08:10:21.372 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:10:21 np0005539550 nova_compute[257631]: 2025-11-29 08:10:21.372 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
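The acquire / waited / held trio bracketing this update is the standard oslo_concurrency.lockutils pattern. A bare-bones re-creation of that bookkeeping with plain threading, assuming nothing about oslo internals:

    import threading
    import time

    # Not oslo code: mimic the acquire/wait/hold timing lockutils logs above.
    _locks = {}

    def synchronized(name):
        lock = _locks.setdefault(name, threading.Lock())
        def decorator(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    print(f'Lock "{name}" acquired :: waited '
                          f'{time.monotonic() - t0:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        print(f'Lock "{name}" released :: held '
                              f'{time.monotonic() - t1:.3f}s')
            return inner
        return decorator

    @synchronized("compute_resources")
    def update_available_resource():
        pass  # the resource-tracker work happens while the lock is held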
Nov 29 03:10:21 np0005539550 nova_compute[257631]: 2025-11-29 08:10:21.601 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
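The anonymous "HEAD / HTTP/1.0" requests arriving from 192.168.122.100 and .102 roughly once per second look like load-balancer health probes against radosgw. One such probe, with a placeholder host and port (neither is stated in the log):

    import http.client

    # Send the same kind of health check the beast lines above record.
    # "rgw.example.com" and 8080 are stand-ins, not values from the log.
    conn = http.client.HTTPConnection("rgw.example.com", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 means the gateway answered
    conn.close()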
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.368 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.369 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.369 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.369 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.400 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.400 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.401 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:10:22 np0005539550 nova_compute[257631]: 2025-11-29 08:10:22.401 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:22 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 736b6d56-516c-4e37-882e-17390dee9dd3 does not exist
Nov 29 03:10:22 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c74e5dfc-cfe3-4636-9627-37f0592469f9 does not exist
Nov 29 03:10:22 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6ded3846-0f08-48ad-937c-84c9ca164f16 does not exist
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:10:23 np0005539550 podman[306499]: 2025-11-29 08:10:23.041380648 +0000 UTC m=+0.041948883 container create 677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:10:23 np0005539550 systemd[1]: Started libpod-conmon-677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96.scope.
Nov 29 03:10:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:23 np0005539550 podman[306499]: 2025-11-29 08:10:23.024240938 +0000 UTC m=+0.024809203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:23 np0005539550 podman[306499]: 2025-11-29 08:10:23.132378501 +0000 UTC m=+0.132946766 container init 677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 176 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 112 op/s
Nov 29 03:10:23 np0005539550 podman[306499]: 2025-11-29 08:10:23.14069319 +0000 UTC m=+0.141261425 container start 677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:10:23 np0005539550 podman[306499]: 2025-11-29 08:10:23.144664769 +0000 UTC m=+0.145233014 container attach 677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:10:23 np0005539550 affectionate_matsumoto[306515]: 167 167
Nov 29 03:10:23 np0005539550 systemd[1]: libpod-677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96.scope: Deactivated successfully.
Nov 29 03:10:23 np0005539550 podman[306499]: 2025-11-29 08:10:23.147204103 +0000 UTC m=+0.147772358 container died 677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 29 03:10:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8b3d8f61889029351ca0a777051684debff264ca62ea6122329fce8f5acc92f9-merged.mount: Deactivated successfully.
Nov 29 03:10:23 np0005539550 podman[306499]: 2025-11-29 08:10:23.190242163 +0000 UTC m=+0.190810408 container remove 677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:10:23 np0005539550 systemd[1]: libpod-conmon-677427686e5a1f42f70061d1557f2a8505349cab0267065a997da2be2e068d96.scope: Deactivated successfully.
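The create → init → start → attach → died → remove sequence above is cephadm running a throwaway container that lives for roughly 150 ms. Approximately the same thing as a one-shot invocation (image digest copied from the log; the real run mounts config and keyring paths this sketch omits):

    import subprocess

    # One-shot container: --rm reproduces the remove-on-exit step logged above.
    subprocess.run(
        ["podman", "run", "--rm",
         "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
         "true"],
        check=True,
    )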
Nov 29 03:10:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:23.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:23 np0005539550 podman[306539]: 2025-11-29 08:10:23.349143109 +0000 UTC m=+0.042557648 container create 6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:10:23 np0005539550 systemd[1]: Started libpod-conmon-6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d.scope.
Nov 29 03:10:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8fbd6e1fa7b28e3647572b0bb2a4f5561cae603b59748de6a07befc5ae832/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8fbd6e1fa7b28e3647572b0bb2a4f5561cae603b59748de6a07befc5ae832/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8fbd6e1fa7b28e3647572b0bb2a4f5561cae603b59748de6a07befc5ae832/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8fbd6e1fa7b28e3647572b0bb2a4f5561cae603b59748de6a07befc5ae832/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcf8fbd6e1fa7b28e3647572b0bb2a4f5561cae603b59748de6a07befc5ae832/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:23 np0005539550 podman[306539]: 2025-11-29 08:10:23.423527596 +0000 UTC m=+0.116942165 container init 6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:10:23 np0005539550 podman[306539]: 2025-11-29 08:10:23.330184194 +0000 UTC m=+0.023598753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:23 np0005539550 podman[306539]: 2025-11-29 08:10:23.433792713 +0000 UTC m=+0.127207242 container start 6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:10:23 np0005539550 podman[306539]: 2025-11-29 08:10:23.436896841 +0000 UTC m=+0.130311380 container attach 6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:10:23 np0005539550 nova_compute[257631]: 2025-11-29 08:10:23.823 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
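The info cache nova just refreshed is plain JSON, and the MAC and fixed IP inside it are the same pair the OVN DHCPOFFER/DHCPACK lines at 08:10:20 served. A small parse of that structure, trimmed to the relevant fields:

    import json

    # network_info as cached above, reduced to the fields used here.
    network_info = json.loads("""
    [{"address": "fa:16:3e:47:01:d2",
      "devname": "tap6bdd57b3-15",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.4"}]}]}}]
    """)
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["address"], vif["devname"], ips)
    # fa:16:3e:47:01:d2 tap6bdd57b3-15 ['10.100.0.4'] -- matching the
    # DHCPOFFER/DHCPACK pair ovn_controller logged earlier in this window.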
Nov 29 03:10:23 np0005539550 nova_compute[257631]: 2025-11-29 08:10:23.839 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:10:23 np0005539550 nova_compute[257631]: 2025-11-29 08:10:23.840 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:10:23 np0005539550 nova_compute[257631]: 2025-11-29 08:10:23.841 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:23 np0005539550 nova_compute[257631]: 2025-11-29 08:10:23.841 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:23 np0005539550 nova_compute[257631]: 2025-11-29 08:10:23.841 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.069 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.070 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.093 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.160 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.160 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.168 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.168 257641 INFO nova.compute.claims [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.214 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:24 np0005539550 gifted_archimedes[306555]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:10:24 np0005539550 gifted_archimedes[306555]: --> relative data size: 1.0
Nov 29 03:10:24 np0005539550 gifted_archimedes[306555]: --> All data devices are unavailable
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.278 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:24.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:24 np0005539550 systemd[1]: libpod-6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d.scope: Deactivated successfully.
Nov 29 03:10:24 np0005539550 podman[306539]: 2025-11-29 08:10:24.30139835 +0000 UTC m=+0.994812929 container died 6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:10:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dcf8fbd6e1fa7b28e3647572b0bb2a4f5561cae603b59748de6a07befc5ae832-merged.mount: Deactivated successfully.
Nov 29 03:10:24 np0005539550 podman[306539]: 2025-11-29 08:10:24.367398496 +0000 UTC m=+1.060813045 container remove 6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_archimedes, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:10:24 np0005539550 systemd[1]: libpod-conmon-6562d2b899e6f40e6c93f3a6561b212d94134ddb5abf55da7315038fe6d00d1d.scope: Deactivated successfully.
Nov 29 03:10:24 np0005539550 podman[306585]: 2025-11-29 08:10:24.522926397 +0000 UTC m=+0.114216896 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
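Note that the `config_data=...` blob in the health_status line above is a Python dict literal (single quotes, bare True), not JSON, so `ast.literal_eval` rather than `json.loads` is the right parser if you need to pull fields out of it. A trimmed sketch:

    import ast

    # config_data excerpted from the podman health_status line above.
    config_data = ast.literal_eval(
        "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', "
        "'test': '/openstack/healthcheck'}, 'privileged': True, 'restart': 'always'}"
    )
    print(config_data["healthcheck"]["test"])  # -> /openstack/healthcheck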
Nov 29 03:10:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1672365699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.764 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.772 257641 DEBUG nova.compute.provider_tree [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.795 257641 DEBUG nova.scheduler.client.report [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.817 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.819 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.871 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.872 257641 DEBUG nova.network.neutron [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.889 257641 INFO nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.908 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:10:24 np0005539550 nova_compute[257631]: 2025-11-29 08:10:24.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:10:24 np0005539550 podman[306769]: 2025-11-29 08:10:24.967405229 +0000 UTC m=+0.038184969 container create e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:10:25 np0005539550 systemd[1]: Started libpod-conmon-e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1.scope.
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.020 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.022 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.022 257641 INFO nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Creating image(s)
Nov 29 03:10:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:25 np0005539550 podman[306769]: 2025-11-29 08:10:25.044720338 +0000 UTC m=+0.115500108 container init e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:10:25 np0005539550 podman[306769]: 2025-11-29 08:10:24.950500295 +0000 UTC m=+0.021280065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:25 np0005539550 podman[306769]: 2025-11-29 08:10:25.054249087 +0000 UTC m=+0.125028837 container start e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:25 np0005539550 podman[306769]: 2025-11-29 08:10:25.05832759 +0000 UTC m=+0.129107340 container attach e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:10:25 np0005539550 friendly_neumann[306785]: 167 167
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.059 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:10:25 np0005539550 systemd[1]: libpod-e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1.scope: Deactivated successfully.
Nov 29 03:10:25 np0005539550 conmon[306785]: conmon e186db77430c79a564c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1.scope/container/memory.events
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.089 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:10:25 np0005539550 podman[306808]: 2025-11-29 08:10:25.103826641 +0000 UTC m=+0.025457910 container died e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.118 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:10:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7256d2313397b2c7f7b82eb49e9d073f922b9e60e9a0e14a1198fe01fe06c14a-merged.mount: Deactivated successfully.
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.123 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:10:25 np0005539550 podman[306808]: 2025-11-29 08:10:25.135233639 +0000 UTC m=+0.056864888 container remove e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:10:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 230 MiB data, 725 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 197 op/s
Nov 29 03:10:25 np0005539550 systemd[1]: libpod-conmon-e186db77430c79a564c40e11d902e8ac62aa7189b19f3708d5f8b4ba241c8af1.scope: Deactivated successfully.
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.151 257641 DEBUG nova.policy [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.189 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
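Before seeding Ceph, nova probes the cached base image with qemu-img, wrapped in oslo's prlimit helper to cap address space and CPU time. The equivalent probe without the resource caps, parsing the JSON it returns (path and flags copied from the log; the "format" and "virtual-size" keys are standard qemu-img JSON output):

    import json
    import subprocess

    # Same probe as the prlimit-wrapped command above, minus the limits.
    info = json.loads(subprocess.run(
        ["qemu-img", "info",
         "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488",
         "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    print(info["format"], info["virtual-size"])  # image format, size in bytes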
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.191 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.193 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.194 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
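
The acquire/acquired/released triple above is the trace oslo_concurrency.lockutils emits around a synchronized section: the image cache locks on the base-image hash so only one request at a time can fetch or materialize that file (here the lock is held 0.000s because the base file already exists). The equivalent pattern, with a hypothetical body and an illustrative lock_path standing in for Nova's fetch_func_sync:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("f62ef5f82502d01c82174408aec7f3ac942e2488",
                            external=True, lock_path="/var/lib/nova/locks")
    def fetch_func_sync():
        # At most one holder per hash: concurrent boots of the same image
        # wait here instead of downloading the base file twice.
        pass

    fetch_func_sync()
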
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.220 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.224 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:25 np0005539550 podman[306887]: 2025-11-29 08:10:25.313017179 +0000 UTC m=+0.050853586 container create 4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_benz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:10:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:25.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
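
The beast lines are radosgw's access log; the anonymous "HEAD / HTTP/1.0" probes arriving about once a second from 192.168.122.100/.102 look like load-balancer health checks. A throwaway parser for this line shape, with the field layout inferred from the samples above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .*latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:10:25.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.match(line)
    print(m.group("ip"), m.group("req"), m.group("status"),
          float(m.group("lat")))
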
Nov 29 03:10:25 np0005539550 systemd[1]: Started libpod-conmon-4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19.scope.
Nov 29 03:10:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:25 np0005539550 podman[306887]: 2025-11-29 08:10:25.290798342 +0000 UTC m=+0.028634779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8234e23ce08ff03a5c4fb3d97c1bab31dc2245cb36452d03ecb9779a95e856f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8234e23ce08ff03a5c4fb3d97c1bab31dc2245cb36452d03ecb9779a95e856f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8234e23ce08ff03a5c4fb3d97c1bab31dc2245cb36452d03ecb9779a95e856f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8234e23ce08ff03a5c4fb3d97c1bab31dc2245cb36452d03ecb9779a95e856f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:25 np0005539550 podman[306887]: 2025-11-29 08:10:25.401550331 +0000 UTC m=+0.139386768 container init 4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 03:10:25 np0005539550 podman[306887]: 2025-11-29 08:10:25.411359077 +0000 UTC m=+0.149195484 container start 4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:10:25 np0005539550 podman[306887]: 2025-11-29 08:10:25.434733033 +0000 UTC m=+0.172569440 container attach 4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_benz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.514 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.589 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] resizing rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
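
The 08:10:25.224-.589 lines are Nova's flat Ceph path for a new root disk: rbd import pushes the cached base file into the vms pool as <instance-uuid>_disk, then the image is grown to the flavor's 1 GiB root size. In the sketch below the import command is verbatim from the log; the resize is shown as a CLI stand-in for what rbd_utils does through librbd:

    import subprocess

    IMG = "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk"
    BASE = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Verbatim from the log: import the base file as an RBD format-2 image.
    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMG,
                    "--image-format=2", *CEPH], check=True)

    # CLI stand-in for rbd_utils.resize(..., 1073741824) seen above.
    subprocess.run(["rbd", "resize", "--pool", "vms", IMG, "--size", "1G",
                    *CEPH], check=True)
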
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.711 257641 DEBUG nova.objects.instance [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'migration_context' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.729 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.729 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Ensure instance console log exists: /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.730 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.730 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.731 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:25 np0005539550 nova_compute[257631]: 2025-11-29 08:10:25.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:26 np0005539550 sharp_benz[306921]: {
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:    "0": [
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:        {
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "devices": [
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "/dev/loop3"
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            ],
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "lv_name": "ceph_lv0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "lv_size": "7511998464",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "name": "ceph_lv0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "tags": {
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.cluster_name": "ceph",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.crush_device_class": "",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.encrypted": "0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.osd_id": "0",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.type": "block",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:                "ceph.vdo": "0"
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            },
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "type": "block",
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:            "vg_name": "ceph_vg0"
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:        }
Nov 29 03:10:26 np0005539550 sharp_benz[306921]:    ]
Nov 29 03:10:26 np0005539550 sharp_benz[306921]: }
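
The JSON the sharp_benz container just printed has the shape of "ceph-volume lvm list --format json" (cephadm runs these one-shot containers under random names): a map of OSD id to logical volumes, with the interesting metadata duplicated into the ceph.* LV tags. A sketch that reduces such a document to one line per OSD, assuming the output above has been captured to lvm_list.json:

    import json

    with open("lvm_list.json") as fh:   # assumed capture of the output above
        doc = json.load(fh)

    for osd_id, lvs in doc.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")
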
Nov 29 03:10:26 np0005539550 systemd[1]: libpod-4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19.scope: Deactivated successfully.
Nov 29 03:10:26 np0005539550 podman[306887]: 2025-11-29 08:10:26.209349056 +0000 UTC m=+0.947185493 container died 4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_benz, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:10:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8234e23ce08ff03a5c4fb3d97c1bab31dc2245cb36452d03ecb9779a95e856f0-merged.mount: Deactivated successfully.
Nov 29 03:10:26 np0005539550 podman[306887]: 2025-11-29 08:10:26.266653593 +0000 UTC m=+1.004489990 container remove 4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:10:26 np0005539550 systemd[1]: libpod-conmon-4fd3c286b9821974eaf2406d86f892ed224709c40dc344db60333160a6b44c19.scope: Deactivated successfully.
Nov 29 03:10:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:26.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:26 np0005539550 nova_compute[257631]: 2025-11-29 08:10:26.456 257641 DEBUG nova.network.neutron [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Successfully created port: c4c22a55-f99c-429f-8ef0-3fa113b99b13 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:10:26 np0005539550 nova_compute[257631]: 2025-11-29 08:10:26.602 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:26 np0005539550 podman[307158]: 2025-11-29 08:10:26.857660801 +0000 UTC m=+0.039379459 container create 8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:10:26 np0005539550 systemd[1]: Started libpod-conmon-8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421.scope.
Nov 29 03:10:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:26 np0005539550 podman[307158]: 2025-11-29 08:10:26.928124358 +0000 UTC m=+0.109843046 container init 8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:26 np0005539550 podman[307158]: 2025-11-29 08:10:26.93576767 +0000 UTC m=+0.117486328 container start 8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_feynman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:10:26 np0005539550 podman[307158]: 2025-11-29 08:10:26.841142646 +0000 UTC m=+0.022861334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:26 np0005539550 busy_feynman[307174]: 167 167
Nov 29 03:10:26 np0005539550 podman[307158]: 2025-11-29 08:10:26.94093947 +0000 UTC m=+0.122658148 container attach 8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_feynman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:10:26 np0005539550 systemd[1]: libpod-8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421.scope: Deactivated successfully.
Nov 29 03:10:26 np0005539550 podman[307158]: 2025-11-29 08:10:26.941945305 +0000 UTC m=+0.123663973 container died 8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:10:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-71ba6de887102c273d7957656ae69e07975ff9d1c5c46289c5ae6429adeca7bd-merged.mount: Deactivated successfully.
Nov 29 03:10:26 np0005539550 podman[307158]: 2025-11-29 08:10:26.980914973 +0000 UTC m=+0.162633631 container remove 8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_feynman, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:26 np0005539550 systemd[1]: libpod-conmon-8a0845005e65c44ea36eee68358d2b6cedd7174c2ea18a90c3da3d50987bf421.scope: Deactivated successfully.
Nov 29 03:10:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 240 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 631 KiB/s rd, 6.0 MiB/s wr, 157 op/s
Nov 29 03:10:27 np0005539550 podman[307198]: 2025-11-29 08:10:27.144811135 +0000 UTC m=+0.041577204 container create dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:10:27 np0005539550 systemd[1]: Started libpod-conmon-dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd.scope.
Nov 29 03:10:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e1ff44b22b0834b980d09f495ffda19ff4258cffa91f576dc7a08a3b9c55de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e1ff44b22b0834b980d09f495ffda19ff4258cffa91f576dc7a08a3b9c55de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e1ff44b22b0834b980d09f495ffda19ff4258cffa91f576dc7a08a3b9c55de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e1ff44b22b0834b980d09f495ffda19ff4258cffa91f576dc7a08a3b9c55de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:27 np0005539550 podman[307198]: 2025-11-29 08:10:27.220277698 +0000 UTC m=+0.117043787 container init dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:10:27 np0005539550 podman[307198]: 2025-11-29 08:10:27.125339106 +0000 UTC m=+0.022105205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:10:27 np0005539550 podman[307198]: 2025-11-29 08:10:27.227306884 +0000 UTC m=+0.124072953 container start dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:27 np0005539550 podman[307198]: 2025-11-29 08:10:27.232292289 +0000 UTC m=+0.129058428 container attach dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:10:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:27.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.870 257641 DEBUG nova.network.neutron [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Successfully updated port: c4c22a55-f99c-429f-8ef0-3fa113b99b13 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.882 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.882 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.883 257641 DEBUG nova.network.neutron [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.975 257641 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
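
The _clean_shutdown line above (instance b373b176... still in state 1 after 10 seconds) is the polite-shutdown retry loop: libvirt's shutdown request is an ACPI hint the guest may ignore, so Nova re-sends it on an interval until the domain stops or the timeout expires. A toy retelling of that loop against a fake domain object, purely illustrative (the real code drives a libvirt virDomain and differs in detail):

    import time

    class FakeDomain:
        """Stand-in for libvirt's virDomain, just for this sketch."""
        def __init__(self, ignores=2):
            self.seen = 0
            self.ignores = ignores
        def shutdown(self):
            self.seen += 1          # guest honors the request eventually
        def is_active(self):
            return self.seen <= self.ignores

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        dom.shutdown()
        waited = 0
        while waited < timeout:
            time.sleep(0.01)        # stand-in for sleeping retry_interval
            if not dom.is_active():
                return True
            dom.shutdown()          # the "resending shutdown" in the log
            waited += retry_interval
        return False

    print(clean_shutdown(FakeDomain()))  # True once the guest powers off
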
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.985 257641 DEBUG nova.compute.manager [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-changed-c4c22a55-f99c-429f-8ef0-3fa113b99b13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.985 257641 DEBUG nova.compute.manager [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing instance network info cache due to event network-changed-c4c22a55-f99c-429f-8ef0-3fa113b99b13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:27 np0005539550 nova_compute[257631]: 2025-11-29 08:10:27.986 257641 DEBUG oslo_concurrency.lockutils [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]: {
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]:        "osd_id": 0,
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]:        "type": "bluestore"
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]:    }
Nov 29 03:10:28 np0005539550 awesome_pascal[307214]: }
Nov 29 03:10:28 np0005539550 systemd[1]: libpod-dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd.scope: Deactivated successfully.
Nov 29 03:10:28 np0005539550 podman[307198]: 2025-11-29 08:10:28.072724964 +0000 UTC m=+0.969491033 container died dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:10:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a4e1ff44b22b0834b980d09f495ffda19ff4258cffa91f576dc7a08a3b9c55de-merged.mount: Deactivated successfully.
Nov 29 03:10:28 np0005539550 podman[307198]: 2025-11-29 08:10:28.144774742 +0000 UTC m=+1.041540801 container remove dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_pascal, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:10:28 np0005539550 systemd[1]: libpod-conmon-dcb2eab3ed5577af8df85ff1ad88ad02d8769c5471a7d8c6e56018502d2129bd.scope: Deactivated successfully.
Nov 29 03:10:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:10:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
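
The two mon_command lines show the cephadm mgr module writing this host's freshly scanned device inventory into the monitor config-key store (mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). Reading one of the cached blobs back, on the assumption that the stored value is the JSON document cephadm serializes:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True)
    inventory = json.loads(out.stdout)   # assumption: value is JSON
    print(json.dumps(inventory, indent=2)[:400])
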
Nov 29 03:10:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:28 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b3005c66-3afc-4720-9acd-15d30ef5ec5d does not exist
Nov 29 03:10:28 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 69b9b946-313d-472d-b226-ed8b41296d7a does not exist
Nov 29 03:10:28 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7a68bb73-0f5f-4c84-ba75-5d104b83e1bd does not exist
Nov 29 03:10:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:28.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:28 np0005539550 nova_compute[257631]: 2025-11-29 08:10:28.579 257641 DEBUG nova.network.neutron [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:10:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:10:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 276 MiB data, 786 MiB used, 20 GiB / 21 GiB avail; 741 KiB/s rd, 7.2 MiB/s wr, 175 op/s
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.216 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:29.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.764 257641 DEBUG nova.network.neutron [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
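
The instance_info_cache update above embeds the entire Neutron VIF description as JSON; the libvirt driver reads the bridge, MTU and fixed IPs straight out of this structure when building the guest XML. A trimmed copy of the one VIF in that blob, and the fields a reader usually wants out of it:

    vif = {  # pared down from the network_info JSON in the log line above
        "id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13",
        "address": "fa:16:3e:f7:34:8d",
        "devname": "tapc4c22a55-f9",
        "network": {
            "bridge": "br-int",
            "meta": {"mtu": 1442},
            "subnets": [{"cidr": "10.100.0.0/28",
                         "ips": [{"address": "10.100.0.6", "version": 4}]}],
        },
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["devname"], vif["network"]["bridge"],
          vif["network"]["meta"]["mtu"], fixed_ips)
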
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.794 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.795 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Instance network_info: |[{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.795 257641 DEBUG oslo_concurrency.lockutils [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.795 257641 DEBUG nova.network.neutron [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing network info cache for port c4c22a55-f99c-429f-8ef0-3fa113b99b13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.798 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Start _get_guest_xml network_info=[{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.803 257641 WARNING nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.809 257641 DEBUG nova.virt.libvirt.host [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.809 257641 DEBUG nova.virt.libvirt.host [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.813 257641 DEBUG nova.virt.libvirt.host [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.813 257641 DEBUG nova.virt.libvirt.host [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.815 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.815 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.815 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.815 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.816 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.816 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.816 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.816 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.816 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.817 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.817 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.817 257641 DEBUG nova.virt.hardware [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
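
The hardware.py run above is Nova sizing the guest CPU topology: flavor and image impose no constraints (limits and preferences all 0:0:0), so for 1 vCPU the only candidate is 1 socket x 1 core x 1 thread. A deliberately simplified version of the enumeration step; Nova's real _get_possible_cpu_topologies also handles preferences and ordering:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Factor the vCPU count into (sockets, cores, threads) triples,
        # echoing the "Build topologies for N vcpu(s)" step in the log.
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            per_socket = vcpus // s
            for c in range(1, min(per_socket, max_cores) + 1):
                if per_socket % c:
                    continue
                t = per_socket // c
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as chosen above
    print(list(possible_topologies(4)))  # several candidates for 4 vCPUs
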
Nov 29 03:10:29 np0005539550 nova_compute[257631]: 2025-11-29 08:10:29.819 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3555498866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.260 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
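
Before rendering the guest XML, Nova shells out to "ceph mon dump --format=json" (the audit dispatch shows up on the mon at 08:10:30) to learn the monitor addresses it will embed in the RBD disk definitions. The same call, and one way to pick the addresses out assuming the modern public_addrs.addrvec layout:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True)
    dump = json.loads(out.stdout)

    for mon in dump["mons"]:
        # Older clusters expose a flat "addr" field instead of addrvec.
        addrs = [a["addr"] for a in mon["public_addrs"]["addrvec"]]
        print(mon["name"], addrs)
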
Nov 29 03:10:30 np0005539550 kernel: tap6bdd57b3-15 (unregistering): left promiscuous mode
Nov 29 03:10:30 np0005539550 NetworkManager[49039]: <info>  [1764403830.2670] device (tap6bdd57b3-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:10:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:30Z|00278|binding|INFO|Releasing lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 from this chassis (sb_readonly=0)
Nov 29 03:10:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:30Z|00279|binding|INFO|Setting lport 6bdd57b3-15f4-46ef-bab3-67925c3606c5 down in Southbound
Nov 29 03:10:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:30Z|00280|binding|INFO|Removing iface tap6bdd57b3-15 ovn-installed in OVS
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.281 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:01:d2 10.100.0.4'], port_security=['fa:16:3e:47:01:d2 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b373b176-ee91-41a8-a80a-96c957639455', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8384e5887c0948f5876c019d50057152', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1e9ed13-b0e1-45c0-9be6-be0f145466a4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c0727149-3377-4d23-9d8d-0006462cd03e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=6bdd57b3-15f4-46ef-bab3-67925c3606c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.282 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 in datapath 65f88c5a-8801-4bc1-9eed-15e2bab4717d unbound from our chassis#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.284 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65f88c5a-8801-4bc1-9eed-15e2bab4717d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.285 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[73e7e9cf-a172-447a-a18c-560e937eae68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.285 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d namespace which is not needed anymore#033[00m
Nov 29 03:10:30 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:10:30 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:10:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.299 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.310 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:30 np0005539550 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Nov 29 03:10:30 np0005539550 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000004e.scope: Consumed 14.033s CPU time.
Nov 29 03:10:30 np0005539550 systemd-machined[216673]: Machine qemu-38-instance-0000004e terminated.
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.343 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [NOTICE]   (306078) : haproxy version is 2.8.14-c23fe91
Nov 29 03:10:30 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [NOTICE]   (306078) : path to executable is /usr/sbin/haproxy
Nov 29 03:10:30 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [WARNING]  (306078) : Exiting Master process...
Nov 29 03:10:30 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [WARNING]  (306078) : Exiting Master process...
Nov 29 03:10:30 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [ALERT]    (306078) : Current worker (306080) exited with code 143 (Terminated)
Nov 29 03:10:30 np0005539550 neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d[306074]: [WARNING]  (306078) : All workers exited. Exiting... (0)
Nov 29 03:10:30 np0005539550 systemd[1]: libpod-3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e.scope: Deactivated successfully.
Nov 29 03:10:30 np0005539550 podman[307368]: 2025-11-29 08:10:30.424272008 +0000 UTC m=+0.043628155 container died 3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:10:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e169c9d9c0d7bdd0f90a7a4ab510146ce7c47e7f8f180c9686a4ab9050452b83-merged.mount: Deactivated successfully.
Nov 29 03:10:30 np0005539550 podman[307368]: 2025-11-29 08:10:30.462756264 +0000 UTC m=+0.082112421 container cleanup 3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:10:30 np0005539550 systemd[1]: libpod-conmon-3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e.scope: Deactivated successfully.
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 podman[307416]: 2025-11-29 08:10:30.528573275 +0000 UTC m=+0.047646337 container remove 3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.535 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5a929a4e-b90c-4882-93a8-0c623d05775c]: (4, ('Sat Nov 29 08:10:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e)\n3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e\nSat Nov 29 08:10:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d (3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e)\n3a7f9075151a675561aa1ad21fc5835a24bc7d4204421d99e53e167fb68d8f7e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.536 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[beaadeab-c53c-49a9-800d-c2f5c2c3c9d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.537 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65f88c5a-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.539 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 kernel: tap65f88c5a-80: left promiscuous mode
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.556 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.559 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[64e4050a-cd3f-494d-823c-6d2f32c569c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.584 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[feeb0476-f139-40c7-996a-407f92d76320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.586 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6f4e82-85ed-41f8-8dbd-aa87452648b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.600 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b78990-6f75-4463-83c6-29035518c691]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685310, 'reachable_time': 35842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307441, 'error': None, 'target': 'ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.603 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:10:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:30.603 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[3de22f15-7667-486a-b5ba-0bad18bed8d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:30 np0005539550 systemd[1]: run-netns-ovnmeta\x2d65f88c5a\x2d8801\x2d4bc1\x2d9eed\x2d15e2bab4717d.mount: Deactivated successfully.
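[editor's note] With the haproxy sidecar container stopped and removed and tap65f88c5a-80 pulled from OVS, the agent's final step above is remove_netns on the ovnmeta- namespace. A rough sketch of that deletion using pyroute2, which neutron drives through its privsep daemon (requires root; the namespace name is taken from the log):

    from pyroute2 import netns

    ns = "ovnmeta-65f88c5a-8801-4bc1-9eed-15e2bab4717d"
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks the netns, as ip_lib.py:607 reports above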
Nov 29 03:10:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:10:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1138613836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.779 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.782 257641 DEBUG nova.virt.libvirt.vif [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": 
"c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.783 257641 DEBUG nova.network.os_vif_util [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.784 257641 DEBUG nova.network.os_vif_util [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:34:8d,bridge_name='br-int',has_traffic_filtering=True,id=c4c22a55-f99c-429f-8ef0-3fa113b99b13,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4c22a55-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.786 257641 DEBUG nova.objects.instance [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_devices' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.811 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <uuid>8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</uuid>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <name>instance-00000050</name>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:10:29</nova:creationTime>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <nova:port uuid="c4c22a55-f99c-429f-8ef0-3fa113b99b13">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <entry name="serial">8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</entry>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <entry name="uuid">8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</entry>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:f7:34:8d"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <target dev="tapc4c22a55-f9"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/console.log" append="off"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:10:30 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:10:30 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:10:30 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:10:30 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
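[editor's note] The <domain> document that _get_guest_xml just emitted is what the libvirt driver hands to libvirtd to boot instance-00000050. A minimal libvirt-python sketch of that handoff; the file name is hypothetical, since Nova actually passes the XML string it built in memory and uses additional flags:

    import libvirt

    # Hypothetical file holding the <domain> XML printed above.
    with open("instance-00000050.xml") as f:
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)  # persist the definition (like 'virsh define')
    dom.create()               # boot the guest (like 'virsh start')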
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.813 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Preparing to wait for external event network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.813 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.814 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.814 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
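[editor's note] prepare_for_instance_event above registers interest in network-vif-plugged-c4c22a55-... *before* the VIF is plugged, so the Neutron notification cannot race past the waiter; the "-events" lock guards the registry. A simplified sketch of that prepare-then-wait pattern, with threading primitives standing in for Nova's eventlet-based implementation:

    import threading

    _events = {}
    _lock = threading.Lock()  # plays the role of the "...-events" lock above

    def prepare(name):
        # Register the event before the action that will trigger it.
        with _lock:
            _events.setdefault(name, threading.Event())

    def notify(name):
        # Called when the external notification (from Neutron) arrives.
        with _lock:
            ev = _events.get(name)
        if ev:
            ev.set()

    name = "network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13"
    prepare(name)
    # ... the VIF is plugged here; later Neutron reports the port up:
    notify(name)
    assert _events[name].wait(timeout=300)  # Nova also bounds its wait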
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.814 257641 DEBUG nova.virt.libvirt.vif [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:10:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": 
"c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.815 257641 DEBUG nova.network.os_vif_util [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.816 257641 DEBUG nova.network.os_vif_util [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:34:8d,bridge_name='br-int',has_traffic_filtering=True,id=c4c22a55-f99c-429f-8ef0-3fa113b99b13,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4c22a55-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.816 257641 DEBUG os_vif [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:34:8d,bridge_name='br-int',has_traffic_filtering=True,id=c4c22a55-f99c-429f-8ef0-3fa113b99b13,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4c22a55-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.817 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.817 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.817 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.820 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.820 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4c22a55-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.820 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4c22a55-f9, col_values=(('external_ids', {'iface-id': 'c4c22a55-f99c-429f-8ef0-3fa113b99b13', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:34:8d', 'vm-uuid': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.822 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 NetworkManager[49039]: <info>  [1764403830.8229] manager: (tapc4c22a55-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.824 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.828 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.829 257641 INFO os_vif [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:34:8d,bridge_name='br-int',has_traffic_filtering=True,id=c4c22a55-f99c-429f-8ef0-3fa113b99b13,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4c22a55-f9')#033[00m
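[editor's note] The two ovsdb transactions above (AddPortCommand, then DbSetCommand on the Interface row) are what "Successfully plugged vif" summarizes. Their CLI equivalent, sketched with ovs-vsctl rather than os-vif's ovsdbapp calls (root and a local ovsdb socket assumed; values copied from the log):

    import subprocess

    port = "tapc4c22a55-f9"
    # AddPortCommand(bridge=br-int, port=..., may_exist=True):
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                   check=True)
    # DbSetCommand(table=Interface, record=..., external_ids={...}), which
    # gives ovn-controller the iface-id it matches against the logical port:
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", port,
         "external_ids:iface-id=c4c22a55-f99c-429f-8ef0-3fa113b99b13",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:f7:34:8d",
         "external_ids:vm-uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496"],
        check=True,
    )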
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.841 257641 DEBUG nova.compute.manager [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.842 257641 DEBUG oslo_concurrency.lockutils [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.842 257641 DEBUG oslo_concurrency.lockutils [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.842 257641 DEBUG oslo_concurrency.lockutils [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.842 257641 DEBUG nova.compute.manager [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.842 257641 WARNING nova.compute.manager [req-a5f2c73d-7332-4e0a-9015-62273c5526bd req-608a98e9-5983-4104-9592-3632c33a660e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-unplugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.890 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.891 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.891 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:f7:34:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.891 257641 INFO nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Using config drive#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.917 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.922 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.990 257641 INFO nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.995 257641 INFO nova.virt.libvirt.driver [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] Instance destroyed successfully.#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.996 257641 DEBUG nova.virt.libvirt.vif [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1182352036',display_name='tempest-ServerDiskConfigTestJSON-server-1182352036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1182352036',id=78,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-i00kkfjg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:14Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=b373b176-ee91-41a8-a80a-96c957639455,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:47:01:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.996 257641 DEBUG nova.network.os_vif_util [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "vif_mac": "fa:16:3e:47:01:d2"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.997 257641 DEBUG nova.network.os_vif_util [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.997 257641 DEBUG os_vif [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.998 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:30 np0005539550 nova_compute[257631]: 2025-11-29 08:10:30.999 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bdd57b3-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
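The DelPortCommand above is ovsdbapp's transactional wrapper around removing a port row from the switch database. A minimal sketch of the same deletion through ovsdbapp's public API, assuming a local ovsdb-server socket at unix:/run/openvswitch/db.sock:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect an OVSDB IDL to the local Open_vSwitch database (path assumed).
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same operation as the logged DelPortCommand(..., if_exists=True):
    # drop tap6bdd57b3-15 from br-int inside a single transaction.
    api.del_port('tap6bdd57b3-15', bridge='br-int',
                 if_exists=True).execute(check_error=True)

if_exists=True makes the transaction a no-op when the port is already gone, which is why this call is safe to issue during teardown races.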
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.000 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.002 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.005 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.006 257641 INFO os_vif [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15')#033[00m
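The whole unplug sequence (nova_to_osvif_vif conversion followed by os_vif.unplug) is reachable through os-vif's public entry points. A minimal sketch, with the VIF and instance fields taken from the records above and all optional fields omitted:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # must run once per process; loads the 'ovs' plugin

    inst = instance_info.InstanceInfo(
        uuid='b373b176-ee91-41a8-a80a-96c957639455',
        name='instance-0000004e')
    vif_obj = vif.VIFOpenVSwitch(
        id='6bdd57b3-15f4-46ef-bab3-67925c3606c5',
        address='fa:16:3e:47:01:d2',
        vif_name='tap6bdd57b3-15',
        bridge_name='br-int',
        network=network.Network(id='65f88c5a-8801-4bc1-9eed-15e2bab4717d'))

    # Delegates to the ovs plugin, which issues the DelPortCommand seen above.
    os_vif.unplug(vif_obj, inst)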
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.009 257641 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.010 257641 DEBUG nova.virt.libvirt.driver [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] skipping disk for instance-0000004e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:10:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 293 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 7.8 MiB/s wr, 239 op/s
Nov 29 03:10:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:31.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.602 257641 DEBUG neutronclient.v2_0.client [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
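The PortBindingNotFound fault is Neutron telling Nova that the inactive source-host binding it tried to remove after the resize is already gone; client-side this surfaces as a 404. A hedged sketch of the call and the tolerant handling (endpoint and token are placeholders, and delete_port_binding is assumed available in this neutronclient version):

    from neutronclient.common import exceptions as neutron_exc
    from neutronclient.v2_0 import client

    neutron = client.Client(endpoint_url='http://controller:9696',  # placeholder
                            token='PLACEHOLDER_TOKEN')               # placeholder
    try:
        # Drop the leftover binding for the resize's source host.
        neutron.delete_port_binding('6bdd57b3-15f4-46ef-bab3-67925c3606c5',
                                    'compute-2.ctlplane.example.com')
    except neutron_exc.NotFound:
        pass  # binding already removed; harmless during resize cleanup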
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.642 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.734 257641 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.735 257641 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.735 257641 DEBUG oslo_concurrency.lockutils [None req-d9966645-bbf5-48fd-9a38-3cae481b3a5a 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
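The acquire/release bracket around _clear_events is oslo.concurrency's standard named-lock pattern; the three DEBUG lines correspond to the decorator below waiting for, entering, and leaving the critical section. A minimal sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('b373b176-ee91-41a8-a80a-96c957639455-events')
    def _clear_events():
        # Runs with the per-instance events lock held; anything still
        # queued for the instance is popped and returned.
        return {}

    _clear_events()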
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.829 257641 INFO nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Creating config drive at /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/disk.config#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.835 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_yd1a2_7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:31 np0005539550 nova_compute[257631]: 2025-11-29 08:10:31.978 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_yd1a2_7" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.010 257641 DEBUG nova.storage.rbd_utils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] rbd image 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
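Before importing the config drive, Nova probes whether the target image already exists in the 'vms' pool; the DEBUG line above is the negative result. A hedged sketch of the same probe with the Ceph python bindings, assuming the same credentials as the logged --id openstack:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx,
                  '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config').close()
    except rbd.ImageNotFound:
        # Matches the "rbd image ... does not exist" DEBUG line above.
        pass
    finally:
        ioctx.close()
        cluster.shutdown()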
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.014 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/disk.config 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.168 257641 DEBUG nova.network.neutron [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updated VIF entry in instance network info cache for port c4c22a55-f99c-429f-8ef0-3fa113b99b13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.169 257641 DEBUG nova.network.neutron [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.188 257641 DEBUG oslo_concurrency.lockutils [req-91fb204f-976d-474b-9987-2a4ea0015eb7 req-c67ed3e3-8c51-4948-b43a-d09dd8cc1e3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
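The network_info blob cached above is plain JSON, so the addresses it carries can be extracted directly. A small sketch over a trimmed copy of the logged structure:

    import json

    cached = json.loads('''[{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.6"}]}]}}]''')
    for vif in cached:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'])  # -> c4c22a55-... 10.100.0.6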
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.260 257641 DEBUG oslo_concurrency.processutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/disk.config 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.261 257641 INFO nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Deleting local config drive /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/disk.config because it was imported into RBD.#033[00m
Nov 29 03:10:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:32 np0005539550 kernel: tapc4c22a55-f9: entered promiscuous mode
Nov 29 03:10:32 np0005539550 systemd-udevd[307342]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:10:32 np0005539550 NetworkManager[49039]: <info>  [1764403832.3088] manager: (tapc4c22a55-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Nov 29 03:10:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:32Z|00281|binding|INFO|Claiming lport c4c22a55-f99c-429f-8ef0-3fa113b99b13 for this chassis.
Nov 29 03:10:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:32Z|00282|binding|INFO|c4c22a55-f99c-429f-8ef0-3fa113b99b13: Claiming fa:16:3e:f7:34:8d 10.100.0.6
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.310 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.320 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:34:8d 10.100.0.6'], port_security=['fa:16:3e:f7:34:8d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '368d0e95-c26a-4918-88ba-665a23cd4b1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4c22a55-f99c-429f-8ef0-3fa113b99b13) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
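PortBindingUpdatedEvent is an ovsdbapp row event: the agent registers a matcher on the Port_Binding table and reacts when a row gains a chassis, as in the UPDATE matched above. A minimal sketch of such an event class, with a simplified match rule standing in for the agent's real one:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        def __init__(self):
            # Watch all updates on Port_Binding; filtering is in match_fn,
            # as in the agent.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event_, row, old=None):
            # Fire only when the row just acquired a chassis.
            return hasattr(old, 'chassis') and bool(row.chassis)

        def run(self, event_, row, old):
            print('port %s bound to our chassis' % row.logical_port)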
Nov 29 03:10:32 np0005539550 NetworkManager[49039]: <info>  [1764403832.3214] device (tapc4c22a55-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:10:32 np0005539550 NetworkManager[49039]: <info>  [1764403832.3226] device (tapc4c22a55-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.322 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4c22a55-f99c-429f-8ef0-3fa113b99b13 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.325 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.335 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[01640a1e-12cf-44b5-abb8-c7a2a10c876c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.336 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapddd8b166-71 in ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.338 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapddd8b166-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.338 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e37868-55e0-4060-953b-ed585f5e11c7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.339 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0043db00-c629-4b4f-b679-ad3b2eabd815]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
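Under the privsep replies above, the agent's privileged helpers (pyroute2-based, per the neutron.privileged ip_lib path in the logs) do ordinary netlink work: create a veth pair, leave one end (tapddd8b166-70) in the root namespace for br-int, and push the peer (tapddd8b166-71) into the ovnmeta namespace. A hedged pyroute2 sketch of that sequence:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8'
    netns.create(ns)  # assumes the namespace does not exist yet

    ip = IPRoute()
    ip.link('add', ifname='tapddd8b166-70', kind='veth',
            peer='tapddd8b166-71')
    idx = ip.link_lookup(ifname='tapddd8b166-71')[0]
    ip.link('set', index=idx, net_ns_fd=ns)  # move the peer end inside
    ip.close()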
Nov 29 03:10:32 np0005539550 systemd-machined[216673]: New machine qemu-39-instance-00000050.
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.353 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[6c810c2b-9dc3-4c83-b0c2-7ce9941a30a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 systemd[1]: Started Virtual Machine qemu-39-instance-00000050.
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.376 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[88d34533-0332-49a5-b61d-f4481fdcd255]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.383 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.390 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:32Z|00283|binding|INFO|Setting lport c4c22a55-f99c-429f-8ef0-3fa113b99b13 ovn-installed in OVS
Nov 29 03:10:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:32Z|00284|binding|INFO|Setting lport c4c22a55-f99c-429f-8ef0-3fa113b99b13 up in Southbound
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.394 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.404 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6ac325-88e9-42fa-b2a1-824e04a49699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 NetworkManager[49039]: <info>  [1764403832.4097] manager: (tapddd8b166-70): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.410 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[82f478a4-9e07-4fa6-994d-c138245b6bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.439 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8d8c2150-7008-4d96-8098-e550848da583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.442 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e314de7b-d044-4dab-b9ce-3ec5ca1f2f2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 NetworkManager[49039]: <info>  [1764403832.4618] device (tapddd8b166-70): carrier: link connected
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.466 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[590d1cf7-7586-4861-826d-d08947ab6160]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.483 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb2ddbd-0171-4364-a9c7-9a77f7862b09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307551, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.497 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[589cc22e-3a51-4a56-97c8-9edb25abe509]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9d:3576'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688012, 'tstamp': 688012}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307552, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.513 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2cd6fb-dad5-4d46-a26c-9ee160b2165d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307553, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.542 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[186c7da6-7963-46b5-90f6-caf5926b8061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.595 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b56d7adf-4c69-4587-8b68-699da839e7f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.597 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.597 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.597 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539550 kernel: tapddd8b166-70: entered promiscuous mode
Nov 29 03:10:32 np0005539550 NetworkManager[49039]: <info>  [1764403832.6002] manager: (tapddd8b166-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.602 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:32Z|00285|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539550 nova_compute[257631]: 2025-11-29 08:10:32.618 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.619 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.620 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[62671f78-c2b4-4536-b169-a358cd96b041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.621 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.pid.haproxy
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:10:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:32.621 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'env', 'PROCESS_TAG=haproxy-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
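The generated configuration is then handed to haproxy inside the ovnmeta namespace via rootwrap, exactly the command logged above. The same launch, sketched with the standard library:

    import subprocess

    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec',
           'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8',
           'env', 'PROCESS_TAG=haproxy-ddd8b166-79ec-408d-b52c-581ad9dd6cb8',
           'haproxy', '-f',
           '/var/lib/neutron/ovn-metadata-proxy/ddd8b166-79ec-408d-b52c-581ad9dd6cb8.conf']
    # haproxy backgrounds itself ('daemon' in the config above), so run()
    # returns once the parent process exits with status 0.
    subprocess.run(cmd, check=True)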
Nov 29 03:10:32 np0005539550 podman[307585]: 2025-11-29 08:10:32.967952394 +0000 UTC m=+0.047057752 container create 0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.013 257641 DEBUG nova.compute.manager [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.014 257641 DEBUG oslo_concurrency.lockutils [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.014 257641 DEBUG oslo_concurrency.lockutils [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.015 257641 DEBUG oslo_concurrency.lockutils [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.015 257641 DEBUG nova.compute.manager [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.015 257641 WARNING nova.compute.manager [req-12f70136-64e7-4206-89c2-c767c90d38e4 req-f89faeea-133e-435b-b770-ccf5ed1bfa5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:10:33 np0005539550 systemd[1]: Started libpod-conmon-0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2.scope.
Nov 29 03:10:33 np0005539550 podman[307585]: 2025-11-29 08:10:32.942045204 +0000 UTC m=+0.021150582 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:10:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:10:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6429e868622f2d619f6a616cd568b27825499c8f48a771215f47eb03c7c2f7e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:10:33 np0005539550 podman[307585]: 2025-11-29 08:10:33.06424116 +0000 UTC m=+0.143346538 container init 0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:10:33 np0005539550 podman[307585]: 2025-11-29 08:10:33.06942471 +0000 UTC m=+0.148530068 container start 0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:10:33 np0005539550 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[307600]: [NOTICE]   (307604) : New worker (307606) forked
Nov 29 03:10:33 np0005539550 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[307600]: [NOTICE]   (307604) : Loading success.
Nov 29 03:10:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 293 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 222 op/s
Nov 29 03:10:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:33.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.578 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403833.5775168, 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.579 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] VM Started (Lifecycle Event)#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.597 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.600 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403833.5778341, 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.600 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.630 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.634 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.656 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
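The sync above compares Nova's DB view (power_state 0) with what libvirt reports (3); those integers come from nova.compute.power_state. A small sketch of the comparison, assuming the constants keep their long-standing values:

    from nova.compute import power_state

    db_state = power_state.NOSTATE  # 0, the logged DB power_state
    vm_state = power_state.PAUSED   # 3, the logged VM power_state

    if db_state != vm_state:
        # With task_state 'spawning' the manager logs the mismatch and
        # skips the sync, as the INFO line above shows.
        print('out of sync:', power_state.STATE_MAP[vm_state])  # 'paused'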
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.870 257641 DEBUG nova.compute.manager [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-changed-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.870 257641 DEBUG nova.compute.manager [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Refreshing instance network info cache due to event network-changed-6bdd57b3-15f4-46ef-bab3-67925c3606c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.870 257641 DEBUG oslo_concurrency.lockutils [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.870 257641 DEBUG oslo_concurrency.lockutils [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:33 np0005539550 nova_compute[257631]: 2025-11-29 08:10:33.871 257641 DEBUG nova.network.neutron [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Refreshing network info cache for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.143 257641 DEBUG nova.compute.manager [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.144 257641 DEBUG oslo_concurrency.lockutils [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.144 257641 DEBUG oslo_concurrency.lockutils [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.144 257641 DEBUG oslo_concurrency.lockutils [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.145 257641 DEBUG nova.compute.manager [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Processing event network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.145 257641 DEBUG nova.compute.manager [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.145 257641 DEBUG oslo_concurrency.lockutils [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.145 257641 DEBUG oslo_concurrency.lockutils [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.145 257641 DEBUG oslo_concurrency.lockutils [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.146 257641 DEBUG nova.compute.manager [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.146 257641 WARNING nova.compute.manager [req-6528ccaf-4c2e-485f-823a-59de7dc04d83 req-7e378c94-a103-4ba0-94f4-e915a14c677b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.146 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
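The lockutils lines above show Nova's per-instance event lock ("<uuid>-events") being taken and dropped around each pop_instance_event call while the spawn path waits on network-vif-plugged. A minimal sketch of that pattern, assuming oslo.concurrency is installed; the _events table and function shape are illustrative, not Nova's actual implementation:

    from oslo_concurrency import lockutils

    # Hypothetical waiter table; Nova keeps a richer structure in
    # nova.compute.manager.InstanceEvents.
    _events = {}

    def pop_instance_event(instance_uuid, event_name):
        # Mirrors the "Acquiring lock" / "acquired" / "released" triplets above:
        # a named lock scoped to one instance serializes event dispatch.
        with lockutils.lock(f"{instance_uuid}-events"):
            return _events.pop((instance_uuid, event_name), None)

When no waiter is registered (as in the "No waiting events found" line), the pop returns nothing and the event is logged as unexpected rather than delivered.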
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.150 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403834.1495867, 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.150 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.152 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.156 257641 INFO nova.virt.libvirt.driver [-] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Instance spawned successfully.#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.156 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.179 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.185 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.191 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.192 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.193 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.193 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.194 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.195 257641 DEBUG nova.virt.libvirt.driver [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.205 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
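For reference, the numeric states in the sync message above ("current DB power_state: 0, VM power_state: 1") follow Nova's power-state enumeration. A small illustrative mapping, with values as defined in nova/compute/power_state.py; the helper itself is hypothetical:

    # Values copied from nova/compute/power_state.py.
    POWER_STATES = {
        0: "NOSTATE",    # DB has no recorded state yet (instance still building)
        1: "RUNNING",    # what libvirt reported once the guest resumed
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    def describe(db_state: int, vm_state: int) -> str:
        # e.g. describe(0, 1) -> "DB=NOSTATE, hypervisor=RUNNING"
        return f"DB={POWER_STATES[db_state]}, hypervisor={POWER_STATES[vm_state]}"

Because the task_state is still "spawning", sync_power_state leaves the mismatch alone, as the "Skip" line records.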
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.253 257641 INFO nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Took 9.23 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.253 257641 DEBUG nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:34.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
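The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100/.102, arriving roughly once per second per client, look like load-balancer health probes against radosgw's beast frontend. A sketch of an equivalent probe; the port (7480, beast's default) is an assumption, since the log lines do not record it:

    import http.client

    # Hypothetical endpoint: host and port are assumptions, not taken from the log.
    conn = http.client.HTTPConnection("np0005539550", 7480, timeout=5)
    conn.request("HEAD", "/")      # same anonymous probe as in the beast lines
    resp = conn.getresponse()
    print(resp.status)             # radosgw answers 200 with an empty body
    conn.close()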
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.311 257641 INFO nova.compute.manager [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Took 10.17 seconds to build instance.#033[00m
Nov 29 03:10:34 np0005539550 nova_compute[257631]: 2025-11-29 08:10:34.330 257641 DEBUG oslo_concurrency.lockutils [None req-df2ce5f0-62d1-4574-82ad-81f6676d99e6 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 293 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.5 MiB/s wr, 241 op/s
Nov 29 03:10:35 np0005539550 nova_compute[257631]: 2025-11-29 08:10:35.278 257641 DEBUG nova.network.neutron [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updated VIF entry in instance network info cache for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:35 np0005539550 nova_compute[257631]: 2025-11-29 08:10:35.279 257641 DEBUG nova.network.neutron [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:35 np0005539550 nova_compute[257631]: 2025-11-29 08:10:35.310 257641 DEBUG oslo_concurrency.lockutils [req-be7811f5-fbb1-43ee-9950-8fe386d082af req-f307c895-4013-4f06-b46e-e99b1968d49a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
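The instance_info_cache update above stores the port's network_info as a JSON list of VIFs. A short sketch of pulling the useful fields back out of such a blob; the literal below is a trimmed-down stand-in for the full entry logged above:

    import json

    # Trimmed stand-in for the network_info logged above (one VIF, one subnet).
    cached = json.loads("""[{
        "id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5",
        "devname": "tap6bdd57b3-15",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.4", "type": "fixed"}]}]}
    }]""")

    for vif in cached:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["devname"], ips)   # -> port UUID, tap device, ['10.100.0.4']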
Nov 29 03:10:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:35.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 29 03:10:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 29 03:10:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.003 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539550 NetworkManager[49039]: <info>  [1764403836.0687] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Nov 29 03:10:36 np0005539550 NetworkManager[49039]: <info>  [1764403836.0697] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.070 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.255 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:36Z|00286|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.277 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.341 257641 DEBUG nova.compute.manager [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-changed-c4c22a55-f99c-429f-8ef0-3fa113b99b13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.342 257641 DEBUG nova.compute.manager [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing instance network info cache due to event network-changed-c4c22a55-f99c-429f-8ef0-3fa113b99b13. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.342 257641 DEBUG oslo_concurrency.lockutils [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.342 257641 DEBUG oslo_concurrency.lockutils [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.342 257641 DEBUG nova.network.neutron [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing network info cache for port c4c22a55-f99c-429f-8ef0-3fa113b99b13 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:10:36 np0005539550 nova_compute[257631]: 2025-11-29 08:10:36.646 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 293 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.3 MiB/s wr, 167 op/s
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.329 257641 DEBUG nova.compute.manager [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.330 257641 DEBUG oslo_concurrency.lockutils [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.330 257641 DEBUG oslo_concurrency.lockutils [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.331 257641 DEBUG oslo_concurrency.lockutils [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.331 257641 DEBUG nova.compute.manager [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.332 257641 WARNING nova.compute.manager [req-6a7af161-c913-4056-bb35-9c49070edbe9 req-660de545-3468-42ce-affa-23e034b4fa05 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:10:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:37.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.752 257641 DEBUG nova.network.neutron [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updated VIF entry in instance network info cache for port c4c22a55-f99c-429f-8ef0-3fa113b99b13. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.753 257641 DEBUG nova.network.neutron [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:37 np0005539550 nova_compute[257631]: 2025-11-29 08:10:37.769 257641 DEBUG oslo_concurrency.lockutils [req-93ced92b-4500-4d4b-9be7-6c344afbc1d5 req-55277f15-b5f7-448a-a438-11a6c5ea5c9d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 293 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 778 KiB/s wr, 211 op/s
Nov 29 03:10:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:39 np0005539550 nova_compute[257631]: 2025-11-29 08:10:39.448 257641 DEBUG nova.compute.manager [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:10:39 np0005539550 nova_compute[257631]: 2025-11-29 08:10:39.449 257641 DEBUG oslo_concurrency.lockutils [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:39 np0005539550 nova_compute[257631]: 2025-11-29 08:10:39.449 257641 DEBUG oslo_concurrency.lockutils [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:39 np0005539550 nova_compute[257631]: 2025-11-29 08:10:39.449 257641 DEBUG oslo_concurrency.lockutils [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:39 np0005539550 nova_compute[257631]: 2025-11-29 08:10:39.449 257641 DEBUG nova.compute.manager [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] No waiting events found dispatching network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:10:39 np0005539550 nova_compute[257631]: 2025-11-29 08:10:39.449 257641 WARNING nova.compute.manager [req-f7a24ce8-364d-443b-b757-f317298fd4c4 req-eaa1aff8-b4bf-45f8-9262-ec69926b58f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Received unexpected event network-vif-plugged-6bdd57b3-15f4-46ef-bab3-67925c3606c5 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:10:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:10:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:40.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:10:40 np0005539550 nova_compute[257631]: 2025-11-29 08:10:40.830 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "b373b176-ee91-41a8-a80a-96c957639455" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:40 np0005539550 nova_compute[257631]: 2025-11-29 08:10:40.831 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:40 np0005539550 nova_compute[257631]: 2025-11-29 08:10:40.831 257641 DEBUG nova.compute.manager [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Going to confirm migration 12 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:10:41 np0005539550 nova_compute[257631]: 2025-11-29 08:10:41.007 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 309 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 187 op/s
Nov 29 03:10:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:41.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:41 np0005539550 nova_compute[257631]: 2025-11-29 08:10:41.650 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:41 np0005539550 nova_compute[257631]: 2025-11-29 08:10:41.830 257641 DEBUG neutronclient.v2_0.client [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
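The neutronclient debug line above carries the server's fault body verbatim. A sketch of how such a NeutronError payload can be inspected before treating it as benign; the payload literal is copied from the log, the handling logic is hypothetical (during confirm_resize the source-host binding is expected to be gone):

    import json

    fault = json.loads("""{"NeutronError": {"type": "PortBindingNotFound",
        "message": "Binding for port 6bdd57b3-15f4-46ef-bab3-67925c3606c5 for host compute-0.ctlplane.example.com could not be found.",
        "detail": ""}}""")

    err = fault["NeutronError"]
    if err["type"] == "PortBindingNotFound":
        # Expected here: the port already moved to the destination host.
        print("ignoring:", err["message"])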
Nov 29 03:10:41 np0005539550 nova_compute[257631]: 2025-11-29 08:10:41.832 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:10:41 np0005539550 nova_compute[257631]: 2025-11-29 08:10:41.833 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquired lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:10:41 np0005539550 nova_compute[257631]: 2025-11-29 08:10:41.834 257641 DEBUG nova.network.neutron [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:10:41 np0005539550 nova_compute[257631]: 2025-11-29 08:10:41.835 257641 DEBUG nova.objects.instance [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'info_cache' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:42.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 309 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 187 op/s
Nov 29 03:10:43 np0005539550 podman[307721]: 2025-11-29 08:10:43.338234523 +0000 UTC m=+0.071835434 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:10:43 np0005539550 podman[307720]: 2025-11-29 08:10:43.344259784 +0000 UTC m=+0.078039979 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Nov 29 03:10:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:43.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.599 257641 DEBUG nova.network.neutron [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] [instance: b373b176-ee91-41a8-a80a-96c957639455] Updating instance_info_cache with network_info: [{"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.621 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Releasing lock "refresh_cache-b373b176-ee91-41a8-a80a-96c957639455" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.622 257641 DEBUG nova.objects.instance [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lazy-loading 'migration_context' on Instance uuid b373b176-ee91-41a8-a80a-96c957639455 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.715 257641 DEBUG nova.storage.rbd_utils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] removing snapshot(nova-resize) on rbd image(b373b176-ee91-41a8-a80a-96c957639455_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:10:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 29 03:10:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 29 03:10:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.966 257641 DEBUG nova.virt.libvirt.vif [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:09:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1182352036',display_name='tempest-ServerDiskConfigTestJSON-server-1182352036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1182352036',id=78,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8384e5887c0948f5876c019d50057152',ramdisk_id='',reservation_id='r-i00kkfjg',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-767135984',owner_user_name='tempest-ServerDiskConfigTestJSON-767135984-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:38Z,user_data=None,user_id='9ab0114aca6149af994da2b9052c1368',uuid=b373b176-ee91-41a8-a80a-96c957639455,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.967 257641 DEBUG nova.network.os_vif_util [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converting VIF {"id": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "address": "fa:16:3e:47:01:d2", "network": {"id": "65f88c5a-8801-4bc1-9eed-15e2bab4717d", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-626539005-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8384e5887c0948f5876c019d50057152", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bdd57b3-15", "ovs_interfaceid": "6bdd57b3-15f4-46ef-bab3-67925c3606c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.968 257641 DEBUG nova.network.os_vif_util [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.968 257641 DEBUG os_vif [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.970 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.971 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bdd57b3-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.971 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.974 257641 INFO os_vif [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:01:d2,bridge_name='br-int',has_traffic_filtering=True,id=6bdd57b3-15f4-46ef-bab3-67925c3606c5,network=Network(65f88c5a-8801-4bc1-9eed-15e2bab4717d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bdd57b3-15')#033[00m
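The DelPortCommand above (with if_exists=True) is ovsdbapp's transactional form of an OVS port removal; the "Transaction caused no change" line means the tap device was already gone from br-int. A rough CLI equivalent, shelling out instead of going through the OVSDB IDL:

    import subprocess

    # --if-exists corresponds to DelPortCommand(if_exists=True): deleting an
    # already-absent port is a no-op rather than an error.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap6bdd57b3-15"],
        check=True,
    )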
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.975 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:10:43 np0005539550 nova_compute[257631]: 2025-11-29 08:10:43.975 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:10:44 np0005539550 nova_compute[257631]: 2025-11-29 08:10:44.057 257641 DEBUG oslo_concurrency.processutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:10:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:10:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079754989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:10:44 np0005539550 nova_compute[257631]: 2025-11-29 08:10:44.497 257641 DEBUG oslo_concurrency.processutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
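As the two processutils lines show, the resource tracker sizes its RBD-backed DISK_GB inventory by shelling out to `ceph df --format=json` (0.440s here, and visible to the monitor as the audit-log dispatch above). A sketch of the same call and of reading the cluster totals back; treat the JSON field names as assumptions about ceph's schema rather than verified output:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)

    # Cluster-wide totals; key names assumed from ceph's JSON output format.
    total = stats["stats"]["total_bytes"]
    avail = stats["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")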
Nov 29 03:10:44 np0005539550 nova_compute[257631]: 2025-11-29 08:10:44.504 257641 DEBUG nova.compute.provider_tree [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:10:44 np0005539550 nova_compute[257631]: 2025-11-29 08:10:44.661 257641 DEBUG nova.scheduler.client.report [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:10:45 np0005539550 nova_compute[257631]: 2025-11-29 08:10:45.134 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 318 MiB data, 805 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 227 op/s
Nov 29 03:10:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:45.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:45 np0005539550 nova_compute[257631]: 2025-11-29 08:10:45.442 257641 INFO nova.scheduler.client.report [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Deleted allocation for migration ab5c62e7-ebd8-47af-8bc8-b3e6db416399#033[00m
Nov 29 03:10:45 np0005539550 nova_compute[257631]: 2025-11-29 08:10:45.494 257641 DEBUG oslo_concurrency.lockutils [None req-942e9bb9-90ee-412d-a483-5b3c2a09666c 9ab0114aca6149af994da2b9052c1368 8384e5887c0948f5876c019d50057152 - - default default] Lock "b373b176-ee91-41a8-a80a-96c957639455" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:10:45 np0005539550 nova_compute[257631]: 2025-11-29 08:10:45.512 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403830.5122185, b373b176-ee91-41a8-a80a-96c957639455 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:10:45 np0005539550 nova_compute[257631]: 2025-11-29 08:10:45.513 257641 INFO nova.compute.manager [-] [instance: b373b176-ee91-41a8-a80a-96c957639455] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:10:45 np0005539550 nova_compute[257631]: 2025-11-29 08:10:45.532 257641 DEBUG nova.compute.manager [None req-2b595b33-b3b3-4c41-a976-081cd6564b50 - - - - - -] [instance: b373b176-ee91-41a8-a80a-96c957639455] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:10:46 np0005539550 nova_compute[257631]: 2025-11-29 08:10:46.010 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:46.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:46 np0005539550 nova_compute[257631]: 2025-11-29 08:10:46.650 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:10:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:46Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:34:8d 10.100.0.6
Nov 29 03:10:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:10:46Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:34:8d 10.100.0.6
Nov 29 03:10:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 326 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.6 MiB/s wr, 246 op/s
Nov 29 03:10:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:47.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:48.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 343 MiB data, 847 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 215 op/s
Nov 29 03:10:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:49.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:50.585 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:10:50 np0005539550 nova_compute[257631]: 2025-11-29 08:10:50.585 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:50.586 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:10:51 np0005539550 nova_compute[257631]: 2025-11-29 08:10:51.013 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 359 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 191 op/s
Nov 29 03:10:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:51.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:51 np0005539550 nova_compute[257631]: 2025-11-29 08:10:51.651 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 29 03:10:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 29 03:10:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 29 03:10:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:52.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 359 MiB data, 865 MiB used, 20 GiB / 21 GiB avail; 648 KiB/s rd, 2.9 MiB/s wr, 133 op/s
Nov 29 03:10:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:53.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:53 np0005539550 nova_compute[257631]: 2025-11-29 08:10:53.489 257641 DEBUG oslo_concurrency.lockutils [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:10:53 np0005539550 nova_compute[257631]: 2025-11-29 08:10:53.490 257641 DEBUG oslo_concurrency.lockutils [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:10:53 np0005539550 nova_compute[257631]: 2025-11-29 08:10:53.490 257641 DEBUG nova.objects.instance [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:10:53 np0005539550 nova_compute[257631]: 2025-11-29 08:10:53.513 257641 DEBUG nova.objects.instance [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_requests' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:10:53 np0005539550 nova_compute[257631]: 2025-11-29 08:10:53.529 257641 DEBUG nova.network.neutron [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:10:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:54.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:54 np0005539550 nova_compute[257631]: 2025-11-29 08:10:54.395 257641 DEBUG nova.policy [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:10:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 328 MiB data, 856 MiB used, 20 GiB / 21 GiB avail; 962 KiB/s rd, 2.7 MiB/s wr, 180 op/s
Nov 29 03:10:55 np0005539550 nova_compute[257631]: 2025-11-29 08:10:55.302 257641 DEBUG nova.network.neutron [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Successfully created port: d1e89475-0d6a-4f19-8d70-45b91897d9c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:10:55 np0005539550 podman[307868]: 2025-11-29 08:10:55.34559649 +0000 UTC m=+0.088977843 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:10:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:55.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:56 np0005539550 nova_compute[257631]: 2025-11-29 08:10:56.053 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:56.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:56 np0005539550 nova_compute[257631]: 2025-11-29 08:10:56.653 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:10:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:10:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 279 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.6 MiB/s wr, 193 op/s
Nov 29 03:10:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:10:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:57.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:10:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:10:57.588 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:10:57 np0005539550 nova_compute[257631]: 2025-11-29 08:10:57.719 257641 DEBUG nova.network.neutron [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Successfully updated port: d1e89475-0d6a-4f19-8d70-45b91897d9c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:10:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:57 np0005539550 nova_compute[257631]: 2025-11-29 08:10:57.865 257641 DEBUG oslo_concurrency.lockutils [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:10:57 np0005539550 nova_compute[257631]: 2025-11-29 08:10:57.866 257641 DEBUG oslo_concurrency.lockutils [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:10:57 np0005539550 nova_compute[257631]: 2025-11-29 08:10:57.866 257641 DEBUG nova.network.neutron [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:10:58 np0005539550 nova_compute[257631]: 2025-11-29 08:10:58.051 257641 WARNING nova.network.neutron [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it
Nov 29 03:10:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:10:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:10:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:10:58.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 241 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 913 KiB/s rd, 1.5 MiB/s wr, 167 op/s
Nov 29 03:10:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:10:59
Nov 29 03:10:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:10:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:10:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups']
Nov 29 03:10:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:10:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:10:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:10:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:10:59.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:10:59 np0005539550 nova_compute[257631]: 2025-11-29 08:10:59.508 257641 DEBUG nova.compute.manager [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-changed-d1e89475-0d6a-4f19-8d70-45b91897d9c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:10:59 np0005539550 nova_compute[257631]: 2025-11-29 08:10:59.508 257641 DEBUG nova.compute.manager [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing instance network info cache due to event network-changed-d1e89475-0d6a-4f19-8d70-45b91897d9c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:10:59 np0005539550 nova_compute[257631]: 2025-11-29 08:10:59.508 257641 DEBUG oslo_concurrency.lockutils [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:11:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:00.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.059 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 200 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 695 KiB/s rd, 33 KiB/s wr, 136 op/s
Nov 29 03:11:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:01.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.754 257641 DEBUG nova.network.neutron [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.756 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.834 257641 DEBUG oslo_concurrency.lockutils [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.835 257641 DEBUG oslo_concurrency.lockutils [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.835 257641 DEBUG nova.network.neutron [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing network info cache for port d1e89475-0d6a-4f19-8d70-45b91897d9c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.839 257641 DEBUG nova.virt.libvirt.vif [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.839 257641 DEBUG nova.network.os_vif_util [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.840 257641 DEBUG nova.network.os_vif_util [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.840 257641 DEBUG os_vif [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.841 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.841 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.842 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.845 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.845 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1e89475-0d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.846 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd1e89475-0d, col_values=(('external_ids', {'iface-id': 'd1e89475-0d6a-4f19-8d70-45b91897d9c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:56:cd', 'vm-uuid': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.847 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 NetworkManager[49039]: <info>  [1764403861.8487] manager: (tapd1e89475-0d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.849 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.854 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.855 257641 INFO os_vif [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d')
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.856 257641 DEBUG nova.virt.libvirt.vif [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.856 257641 DEBUG nova.network.os_vif_util [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.857 257641 DEBUG nova.network.os_vif_util [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.860 257641 DEBUG nova.virt.libvirt.guest [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:11:01 np0005539550 nova_compute[257631]:  <mac address="fa:16:3e:0b:56:cd"/>
Nov 29 03:11:01 np0005539550 nova_compute[257631]:  <model type="virtio"/>
Nov 29 03:11:01 np0005539550 nova_compute[257631]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:01 np0005539550 nova_compute[257631]:  <mtu size="1442"/>
Nov 29 03:11:01 np0005539550 nova_compute[257631]:  <target dev="tapd1e89475-0d"/>
Nov 29 03:11:01 np0005539550 nova_compute[257631]: </interface>
Nov 29 03:11:01 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 29 03:11:01 np0005539550 NetworkManager[49039]: <info>  [1764403861.8731] manager: (tapd1e89475-0d): new Tun device (/org/freedesktop/NetworkManager/Devices/127)
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.874 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:01Z|00287|binding|INFO|Claiming lport d1e89475-0d6a-4f19-8d70-45b91897d9c0 for this chassis.
Nov 29 03:11:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:01Z|00288|binding|INFO|d1e89475-0d6a-4f19-8d70-45b91897d9c0: Claiming fa:16:3e:0b:56:cd 10.100.0.10
Nov 29 03:11:01 np0005539550 kernel: tapd1e89475-0d: entered promiscuous mode
Nov 29 03:11:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:01Z|00289|binding|INFO|Setting lport d1e89475-0d6a-4f19-8d70-45b91897d9c0 ovn-installed in OVS
Nov 29 03:11:01 np0005539550 nova_compute[257631]: 2025-11-29 08:11:01.896 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:01 np0005539550 systemd-udevd[307905]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:01 np0005539550 NetworkManager[49039]: <info>  [1764403861.9154] device (tapd1e89475-0d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:01 np0005539550 NetworkManager[49039]: <info>  [1764403861.9164] device (tapd1e89475-0d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:02Z|00290|binding|INFO|Setting lport d1e89475-0d6a-4f19-8d70-45b91897d9c0 up in Southbound
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.010 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:56:cd 10.100.0.10'], port_security=['fa:16:3e:0b:56:cd 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d1e89475-0d6a-4f19-8d70-45b91897d9c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.011 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d1e89475-0d6a-4f19-8d70-45b91897d9c0 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.013 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.029 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1ddd52c0-b628-48c4-ac88-c1555fc7104f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.068 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab994ba-a030-4dba-9d31-37944598beb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.072 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[43aed85e-e218-42da-a19c-48e47a157be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.102 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8c2d39-b66e-48d2-832e-b7bab6da3f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.125 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e16b23-41e2-4a84-8206-31bd65d3077b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307914, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.141 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[681dd513-27d8-4334-a0ed-8a6c25bc5fb5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688022, 'tstamp': 688022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307915, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688025, 'tstamp': 688025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307915, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.143 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:02 np0005539550 nova_compute[257631]: 2025-11-29 08:11:02.145 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.146 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.147 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.147 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:02.147 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:11:02 np0005539550 nova_compute[257631]: 2025-11-29 08:11:02.184 257641 DEBUG nova.virt.libvirt.driver [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:11:02 np0005539550 nova_compute[257631]: 2025-11-29 08:11:02.184 257641 DEBUG nova.virt.libvirt.driver [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:11:02 np0005539550 nova_compute[257631]: 2025-11-29 08:11:02.185 257641 DEBUG nova.virt.libvirt.driver [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:f7:34:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:11:02 np0005539550 nova_compute[257631]: 2025-11-29 08:11:02.185 257641 DEBUG nova.virt.libvirt.driver [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:0b:56:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:11:02 np0005539550 nova_compute[257631]: 2025-11-29 08:11:02.221 257641 DEBUG nova.virt.libvirt.guest [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  <nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  <nova:creationTime>2025-11-29 08:11:02</nova:creationTime>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  <nova:flavor name="m1.nano">
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:memory>128</nova:memory>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:disk>1</nova:disk>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:swap>0</nova:swap>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  </nova:flavor>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  <nova:owner>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  </nova:owner>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  <nova:ports>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:port uuid="c4c22a55-f99c-429f-8ef0-3fa113b99b13">
Nov 29 03:11:02 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    <nova:port uuid="d1e89475-0d6a-4f19-8d70-45b91897d9c0">
Nov 29 03:11:02 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:02 np0005539550 nova_compute[257631]:  </nova:ports>
Nov 29 03:11:02 np0005539550 nova_compute[257631]: </nova:instance>
Nov 29 03:11:02 np0005539550 nova_compute[257631]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Nov 29 03:11:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:02.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:02 np0005539550 nova_compute[257631]: 2025-11-29 08:11:02.438 257641 DEBUG oslo_concurrency.lockutils [None req-44ed1d39-6727-49e2-ad4c-5982758def68 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:03Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:56:cd 10.100.0.10
Nov 29 03:11:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:03Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:56:cd 10.100.0.10
Nov 29 03:11:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 200 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 633 KiB/s rd, 30 KiB/s wr, 123 op/s
Nov 29 03:11:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:03.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.613 257641 DEBUG nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.613 257641 DEBUG oslo_concurrency.lockutils [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.614 257641 DEBUG oslo_concurrency.lockutils [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.614 257641 DEBUG oslo_concurrency.lockutils [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.614 257641 DEBUG nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.614 257641 WARNING nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.614 257641 DEBUG nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.615 257641 DEBUG oslo_concurrency.lockutils [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.615 257641 DEBUG oslo_concurrency.lockutils [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.615 257641 DEBUG oslo_concurrency.lockutils [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.615 257641 DEBUG nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.615 257641 WARNING nova.compute.manager [req-52be947e-5f56-4209-a1e2-cf6755fddb88 req-d50af7d4-6b23-4754-900c-09c293d697c5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.693 257641 DEBUG oslo_concurrency.lockutils [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.693 257641 DEBUG oslo_concurrency.lockutils [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.694 257641 DEBUG nova.objects.instance [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.926 257641 DEBUG nova.network.neutron [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updated VIF entry in instance network info cache for port d1e89475-0d6a-4f19-8d70-45b91897d9c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.928 257641 DEBUG nova.network.neutron [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:03 np0005539550 nova_compute[257631]: 2025-11-29 08:11:03.951 257641 DEBUG oslo_concurrency.lockutils [req-db13ee2c-f2ed-4fbb-8ad6-0fa7c0588634 req-a0a8aa99-8a91-42ab-9215-7c8f3e23db59 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:04 np0005539550 nova_compute[257631]: 2025-11-29 08:11:04.121 257641 DEBUG nova.objects.instance [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_requests' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:04 np0005539550 nova_compute[257631]: 2025-11-29 08:11:04.152 257641 DEBUG nova.network.neutron [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:11:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:04.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:04 np0005539550 nova_compute[257631]: 2025-11-29 08:11:04.377 257641 DEBUG nova.policy [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:11:05 np0005539550 nova_compute[257631]: 2025-11-29 08:11:05.050 257641 DEBUG nova.network.neutron [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Successfully created port: 3145abd1-9798-4474-af6e-da5917de6dab _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:11:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 200 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 579 KiB/s rd, 28 KiB/s wr, 113 op/s
Nov 29 03:11:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:05.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.323 257641 DEBUG nova.network.neutron [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Successfully updated port: 3145abd1-9798-4474-af6e-da5917de6dab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:11:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:06.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.345 257641 DEBUG oslo_concurrency.lockutils [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.346 257641 DEBUG oslo_concurrency.lockutils [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.346 257641 DEBUG nova.network.neutron [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.466 257641 DEBUG nova.compute.manager [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-changed-3145abd1-9798-4474-af6e-da5917de6dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.467 257641 DEBUG nova.compute.manager [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing instance network info cache due to event network-changed-3145abd1-9798-4474-af6e-da5917de6dab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.467 257641 DEBUG oslo_concurrency.lockutils [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.552 257641 WARNING nova.network.neutron [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.552 257641 WARNING nova.network.neutron [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:11:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:06Z|00291|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.621 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.759 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:06 np0005539550 nova_compute[257631]: 2025-11-29 08:11:06.847 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 226 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 1021 KiB/s wr, 78 op/s
Nov 29 03:11:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:11:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:08.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:11:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:11:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 246 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 29 03:11:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:09.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:10.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.364 257641 DEBUG nova.network.neutron [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.389 257641 DEBUG oslo_concurrency.lockutils [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.390 257641 DEBUG oslo_concurrency.lockutils [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.390 257641 DEBUG nova.network.neutron [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing network info cache for port 3145abd1-9798-4474-af6e-da5917de6dab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.394 257641 DEBUG nova.virt.libvirt.vif [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.394 257641 DEBUG nova.network.os_vif_util [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.395 257641 DEBUG nova.network.os_vif_util [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:e2:8d,bridge_name='br-int',has_traffic_filtering=True,id=3145abd1-9798-4474-af6e-da5917de6dab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3145abd1-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.395 257641 DEBUG os_vif [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:e2:8d,bridge_name='br-int',has_traffic_filtering=True,id=3145abd1-9798-4474-af6e-da5917de6dab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3145abd1-97') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.396 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.396 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.397 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.399 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.399 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3145abd1-97, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.399 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3145abd1-97, col_values=(('external_ids', {'iface-id': '3145abd1-9798-4474-af6e-da5917de6dab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:e2:8d', 'vm-uuid': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.401 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 NetworkManager[49039]: <info>  [1764403870.4019] manager: (tap3145abd1-97): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.407 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.408 257641 INFO os_vif [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:e2:8d,bridge_name='br-int',has_traffic_filtering=True,id=3145abd1-9798-4474-af6e-da5917de6dab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3145abd1-97')#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.409 257641 DEBUG nova.virt.libvirt.vif [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.409 257641 DEBUG nova.network.os_vif_util [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.410 257641 DEBUG nova.network.os_vif_util [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:e2:8d,bridge_name='br-int',has_traffic_filtering=True,id=3145abd1-9798-4474-af6e-da5917de6dab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3145abd1-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.413 257641 DEBUG nova.virt.libvirt.guest [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <mac address="fa:16:3e:5f:e2:8d"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <model type="virtio"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <mtu size="1442"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <target dev="tap3145abd1-97"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]: </interface>
Nov 29 03:11:10 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:11:10 np0005539550 kernel: tap3145abd1-97: entered promiscuous mode
Nov 29 03:11:10 np0005539550 NetworkManager[49039]: <info>  [1764403870.4251] manager: (tap3145abd1-97): new Tun device (/org/freedesktop/NetworkManager/Devices/129)
Nov 29 03:11:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:10Z|00292|binding|INFO|Claiming lport 3145abd1-9798-4474-af6e-da5917de6dab for this chassis.
Nov 29 03:11:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:10Z|00293|binding|INFO|3145abd1-9798-4474-af6e-da5917de6dab: Claiming fa:16:3e:5f:e2:8d 10.100.0.13
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.461 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.463 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.468 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:e2:8d 10.100.0.13'], port_security=['fa:16:3e:5f:e2:8d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=3145abd1-9798-4474-af6e-da5917de6dab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.470 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 3145abd1-9798-4474-af6e-da5917de6dab in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.472 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:11:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:10Z|00294|binding|INFO|Setting lport 3145abd1-9798-4474-af6e-da5917de6dab ovn-installed in OVS
Nov 29 03:11:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:10Z|00295|binding|INFO|Setting lport 3145abd1-9798-4474-af6e-da5917de6dab up in Southbound
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 systemd-udevd[307927]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.490 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f2344364-0e89-42c3-b023-5d1793cebf69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:10 np0005539550 NetworkManager[49039]: <info>  [1764403870.4996] device (tap3145abd1-97): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:10 np0005539550 NetworkManager[49039]: <info>  [1764403870.5011] device (tap3145abd1-97): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.525 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[04de161e-1e3a-4a4c-81f3-f8d16123609d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.529 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f1be6f3d-5ec4-4e05-a2df-3c85806ccff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.550 257641 DEBUG nova.virt.libvirt.driver [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.551 257641 DEBUG nova.virt.libvirt.driver [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.551 257641 DEBUG nova.virt.libvirt.driver [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:f7:34:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.551 257641 DEBUG nova.virt.libvirt.driver [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:0b:56:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.551 257641 DEBUG nova.virt.libvirt.driver [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:5f:e2:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.560 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[acf0f76f-1186-4a72-a9a9-5403c9a45d66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.573 257641 DEBUG nova.virt.libvirt.guest [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <nova:creationTime>2025-11-29 08:11:10</nova:creationTime>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <nova:flavor name="m1.nano">
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:memory>128</nova:memory>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:disk>1</nova:disk>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:swap>0</nova:swap>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  </nova:flavor>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <nova:owner>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  </nova:owner>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  <nova:ports>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:port uuid="c4c22a55-f99c-429f-8ef0-3fa113b99b13">
Nov 29 03:11:10 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:port uuid="d1e89475-0d6a-4f19-8d70-45b91897d9c0">
Nov 29 03:11:10 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    <nova:port uuid="3145abd1-9798-4474-af6e-da5917de6dab">
Nov 29 03:11:10 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:10 np0005539550 nova_compute[257631]:  </nova:ports>
Nov 29 03:11:10 np0005539550 nova_compute[257631]: </nova:instance>
Nov 29 03:11:10 np0005539550 nova_compute[257631]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
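[The <nova:instance> document above is attached to the libvirt domain under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace. A minimal sketch of reading it back with python3-libvirt; the domain name instance-00000050 is borrowed from the resource-tracker lines later in this log, and mapping it to this instance is an assumption.]

# Sketch: read the Nova metadata element back from libvirt.
# Assumes python3-libvirt and access to qemu:///system on the compute host;
# the domain name below is an assumption taken from later log lines.
import libvirt

NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("instance-00000050")
    xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0)
    print(xml)  # the <nova:instance> document set by set_metadata above
finally:
    conn.close()

# Equivalent CLI check:
#   virsh metadata instance-00000050 --uri http://openstack.org/xmlns/libvirt/nova/1.1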
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.577 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49134da9-72dd-4bf9-9d6f-8facc2d76d68]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307934, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.594 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6e2082-a22a-40d2-aa52-df57a6b967c7]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688022, 'tstamp': 688022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307935, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688025, 'tstamp': 688025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307935, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
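[The two privsep replies above are netlink dumps taken inside the ovnmeta-ddd8b166-… namespace: an RTM_NEWLINK for the veth tap (fa:16:3e:9d:35:76, state up) and RTM_NEWADDR records for 169.254.169.254/32 plus 10.100.0.2/28. A sketch of the same dump with pyroute2, which is what the agent drives under privsep; requires root, and the namespace name is copied from the 'target' field above.]

# Sketch: dump link and address state inside the OVN metadata namespace,
# roughly reproducing the privsep replies above. Requires root + pyroute2.
from pyroute2 import NetNS

ns = NetNS("ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8")
try:
    for link in ns.get_links():
        print(link.get_attr("IFLA_IFNAME"),   # e.g. tapddd8b166-71
              link.get_attr("IFLA_ADDRESS"),  # e.g. fa:16:3e:9d:35:76
              link["state"])                  # e.g. up
    for addr in ns.get_addr():
        print(addr.get_attr("IFA_LABEL"),
              addr.get_attr("IFA_ADDRESS"),   # 169.254.169.254, then 10.100.0.2
              addr["prefixlen"])              # 32, then 28
finally:
    ns.close()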
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.594 257641 DEBUG oslo_concurrency.lockutils [None req-9c50e7f8-41bd-4761-8f12-0bbba29e211b b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.595 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.597 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.599 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.600 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.600 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:10.600 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
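[The three transactions above move the metadata tap off br-ex, ensure it sits on br-int, and stamp external_ids:iface-id; all report "Transaction caused no change" because the port was already in the desired state. The same Del/Add/DbSet commands driven directly through ovsdbapp, as a sketch: the from_server()/Connection() bootstrap follows ovsdbapp's README, and the ovsdb-server socket path is an assumption for this host.]

# Sketch: issue the same commands the agent logs above via ovsdbapp.
# The socket path and timeout are assumptions.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                      "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port("tapddd8b166-70", bridge="br-ex", if_exists=True))
    txn.add(api.add_port("br-int", "tapddd8b166-70", may_exist=True))
    txn.add(api.db_set("Interface", "tapddd8b166-70",
                       ("external_ids",
                        {"iface-id": "a9e57abf-e3e4-455b-b4c5-0cda127bd5c1"})))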
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.993 257641 DEBUG nova.compute.manager [req-bdcca750-37a7-4c2c-b827-96f23d0c39d1 req-4436ddf0-b6e1-42b5-af16-12be2254ce4f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.994 257641 DEBUG oslo_concurrency.lockutils [req-bdcca750-37a7-4c2c-b827-96f23d0c39d1 req-4436ddf0-b6e1-42b5-af16-12be2254ce4f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.994 257641 DEBUG oslo_concurrency.lockutils [req-bdcca750-37a7-4c2c-b827-96f23d0c39d1 req-4436ddf0-b6e1-42b5-af16-12be2254ce4f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.994 257641 DEBUG oslo_concurrency.lockutils [req-bdcca750-37a7-4c2c-b827-96f23d0c39d1 req-4436ddf0-b6e1-42b5-af16-12be2254ce4f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
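[The Acquiring/acquired/released triple above, with waited/held timings, is oslo.concurrency's standard lock logging. The pattern behind it, as a sketch; the lock names are copied from the log and the function body is illustrative.]

# Sketch: the lockutils pattern that produces the acquire/release DEBUG
# lines above. Lock names are from the log; the body is illustrative.
from oslo_concurrency import lockutils

@lockutils.synchronized("8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events")
def pop_event():
    pass  # critical section; waited/held durations are logged at DEBUG

# Equivalent context-manager form (name from the resource-tracker lines):
with lockutils.lock("compute_resources"):
    pass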
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.994 257641 DEBUG nova.compute.manager [req-bdcca750-37a7-4c2c-b827-96f23d0c39d1 req-4436ddf0-b6e1-42b5-af16-12be2254ce4f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:10 np0005539550 nova_compute[257631]: 2025-11-29 08:11:10.995 257641 WARNING nova.compute.manager [req-bdcca750-37a7-4c2c-b827-96f23d0c39d1 req-4436ddf0-b6e1-42b5-af16-12be2254ce4f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab for instance with vm_state active and task_state None.#033[00m
Nov 29 03:11:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 246 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 29 03:11:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:11.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
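[Each radosgw request produces the start/done pair plus one beast access line in the fixed layout above: client, user, timestamp, request line, status, bytes, latency. A throwaway parser fitted to these lines; the regex is an assumption based only on the samples here, not a documented format guarantee.]

# Sketch: pull client, request, status and latency out of a beast access line.
# The pattern is fitted to the samples in this log, not a stable format.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:11:11.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("client"), m.group("request"), m.group("status"),
      m.group("latency"))  # 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000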
Nov 29 03:11:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:11 np0005539550 nova_compute[257631]: 2025-11-29 08:11:11.762 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:12 np0005539550 nova_compute[257631]: 2025-11-29 08:11:12.420 257641 DEBUG nova.network.neutron [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updated VIF entry in instance network info cache for port 3145abd1-9798-4474-af6e-da5917de6dab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:11:12 np0005539550 nova_compute[257631]: 2025-11-29 08:11:12.421 257641 DEBUG nova.network.neutron [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:12 np0005539550 nova_compute[257631]: 2025-11-29 08:11:12.437 257641 DEBUG oslo_concurrency.lockutils [req-b92d8598-4225-4cb3-b44f-cea92bdc1483 req-afa17a91-657e-4c41-95fc-178ad5b686f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:12 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:12Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:e2:8d 10.100.0.13
Nov 29 03:11:12 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:12Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:e2:8d 10.100.0.13
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.127 257641 DEBUG nova.compute.manager [req-f39aade1-caf0-4e3c-a458-84f9185ce902 req-d376b62f-85a0-4e0b-a187-f0ac7765a990 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.127 257641 DEBUG oslo_concurrency.lockutils [req-f39aade1-caf0-4e3c-a458-84f9185ce902 req-d376b62f-85a0-4e0b-a187-f0ac7765a990 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.128 257641 DEBUG oslo_concurrency.lockutils [req-f39aade1-caf0-4e3c-a458-84f9185ce902 req-d376b62f-85a0-4e0b-a187-f0ac7765a990 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.128 257641 DEBUG oslo_concurrency.lockutils [req-f39aade1-caf0-4e3c-a458-84f9185ce902 req-d376b62f-85a0-4e0b-a187-f0ac7765a990 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.128 257641 DEBUG nova.compute.manager [req-f39aade1-caf0-4e3c-a458-84f9185ce902 req-d376b62f-85a0-4e0b-a187-f0ac7765a990 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.128 257641 WARNING nova.compute.manager [req-f39aade1-caf0-4e3c-a458-84f9185ce902 req-d376b62f-85a0-4e0b-a187-f0ac7765a990 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab for instance with vm_state active and task_state None.#033[00m
Nov 29 03:11:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 246 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Nov 29 03:11:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:13.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.407 257641 DEBUG oslo_concurrency.lockutils [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-da689592-0a11-44fa-ba74-ffc145f551ab" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.407 257641 DEBUG oslo_concurrency.lockutils [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-da689592-0a11-44fa-ba74-ffc145f551ab" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:13 np0005539550 nova_compute[257631]: 2025-11-29 08:11:13.407 257641 DEBUG nova.objects.instance [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:13 np0005539550 podman[307964]: 2025-11-29 08:11:13.665480749 +0000 UTC m=+0.060134179 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:11:13 np0005539550 podman[307963]: 2025-11-29 08:11:13.703885463 +0000 UTC m=+0.097054326 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
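[The two podman records above are periodic container health checks: each container mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs /openstack/healthcheck as its test, reporting health_status=healthy with a zero failing streak. The same check can be triggered by hand, as a sketch; assumes podman and the container names from the log.]

# Sketch: run the same healthchecks manually and read the result.
# Container names are taken from the log records above.
import subprocess

for name in ("ovn_metadata_agent", "multipathd"):
    res = subprocess.run(["podman", "healthcheck", "run", name])
    print(name, "healthy" if res.returncode == 0 else "unhealthy")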
Nov 29 03:11:14 np0005539550 nova_compute[257631]: 2025-11-29 08:11:14.140 257641 DEBUG nova.objects.instance [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'pci_requests' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:14 np0005539550 nova_compute[257631]: 2025-11-29 08:11:14.160 257641 DEBUG nova.network.neutron [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:11:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:14.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:14 np0005539550 nova_compute[257631]: 2025-11-29 08:11:14.847 257641 DEBUG nova.policy [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b95b3e841be1420c99ee0a04dd0840f1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ff7c805d4242453aa2148a247956391d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:11:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 247 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 927 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Nov 29 03:11:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:15.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:15 np0005539550 nova_compute[257631]: 2025-11-29 08:11:15.402 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:15 np0005539550 nova_compute[257631]: 2025-11-29 08:11:15.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:16.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:16 np0005539550 nova_compute[257631]: 2025-11-29 08:11:16.764 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:16 np0005539550 nova_compute[257631]: 2025-11-29 08:11:16.834 257641 DEBUG nova.network.neutron [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Successfully updated port: da689592-0a11-44fa-ba74-ffc145f551ab _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:11:16 np0005539550 nova_compute[257631]: 2025-11-29 08:11:16.865 257641 DEBUG oslo_concurrency.lockutils [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:16 np0005539550 nova_compute[257631]: 2025-11-29 08:11:16.865 257641 DEBUG oslo_concurrency.lockutils [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:16 np0005539550 nova_compute[257631]: 2025-11-29 08:11:16.866 257641 DEBUG nova.network.neutron [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:17 np0005539550 nova_compute[257631]: 2025-11-29 08:11:17.020 257641 DEBUG nova.compute.manager [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-changed-da689592-0a11-44fa-ba74-ffc145f551ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:17 np0005539550 nova_compute[257631]: 2025-11-29 08:11:17.020 257641 DEBUG nova.compute.manager [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing instance network info cache due to event network-changed-da689592-0a11-44fa-ba74-ffc145f551ab. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:11:17 np0005539550 nova_compute[257631]: 2025-11-29 08:11:17.021 257641 DEBUG oslo_concurrency.lockutils [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:17 np0005539550 nova_compute[257631]: 2025-11-29 08:11:17.125 257641 WARNING nova.network.neutron [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:11:17 np0005539550 nova_compute[257631]: 2025-11-29 08:11:17.126 257641 WARNING nova.network.neutron [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:11:17 np0005539550 nova_compute[257631]: 2025-11-29 08:11:17.126 257641 WARNING nova.network.neutron [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] ddd8b166-79ec-408d-b52c-581ad9dd6cb8 already exists in list: networks containing: ['ddd8b166-79ec-408d-b52c-581ad9dd6cb8']. ignoring it#033[00m
Nov 29 03:11:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 232 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Nov 29 03:11:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:17.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:18.943 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:18.944 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:18.944 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 823 KiB/s wr, 132 op/s
Nov 29 03:11:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:19.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031690570910106086 of space, bias 1.0, pg target 0.9507171273031826 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:11:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
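[Every pool line above is one formula evaluated per pool: raw PG target = (pool's share of raw space) × bias × PG budget, then quantized. The budget works out to 300 here, consistent with the default mon_target_pg_per_osd = 100 on a 3-OSD cluster; that factorization is an inference from the logged products, which is all the log shows. A check against four of the logged values:]

# Sketch: reproduce the pg_autoscaler targets logged above.
# PG budget 300 = mon_target_pg_per_osd (100, default) * 3 OSDs -- an
# inference; the log only shows the resulting products.
PG_BUDGET = 100 * 3

pools = {                      # name: (space ratio, bias, logged pg target)
    ".mgr":               (2.0538165363856318e-05, 1.0, 0.006161449609156895),
    "vms":                (0.0031690570910106086,  1.0, 0.9507171273031826),
    "images":             (0.0019031427391587568,  1.0, 0.570942821747627),
    "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0, 0.0017448352875488555),
}

for name, (ratio, bias, logged) in pools.items():
    target = ratio * bias * PG_BUDGET
    assert abs(target - logged) < 1e-12, (name, target, logged)
    print(f"{name}: pg target {target:.6g} matches the log")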
Nov 29 03:11:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:20.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.972 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.972 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.972 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.972 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:11:20 np0005539550 nova_compute[257631]: 2025-11-29 08:11:20.973 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 118 op/s
Nov 29 03:11:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:21.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531123484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.486 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
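[update_available_resource shells out to the exact command above (0.514 s on this run) to size the RBD-backed disk pool. A sketch of making the same call and reading the fields that matter; the field names follow the standard `ceph df --format=json` layout, and treating 'vms' as Nova's pool on this deployment is an assumption based on the autoscaler lines above.]

# Sketch: run the same `ceph df` the resource tracker logs above and pull
# cluster totals from the JSON. Assumes the ceph CLI and the openstack
# keyring referenced on the command line.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout
df = json.loads(out)

total = df["stats"]["total_bytes"]
avail = df["stats"]["total_avail_bytes"]
print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")  # ~20 of 21 GiB

for pool in df["pools"]:
    if pool["name"] == "vms":   # Nova's RBD pool here (assumption)
        print("vms stored:", pool["stats"]["stored"])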
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.579 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.579 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:11:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.719 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.773 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.774 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4335MB free_disk=20.921768188476562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.774 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.774 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.879 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.880 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.880 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:11:21 np0005539550 nova_compute[257631]: 2025-11-29 08:11:21.927 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:22Z|00296|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:11:22 np0005539550 nova_compute[257631]: 2025-11-29 08:11:22.325 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3239988989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:22 np0005539550 nova_compute[257631]: 2025-11-29 08:11:22.384 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:22 np0005539550 nova_compute[257631]: 2025-11-29 08:11:22.391 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:22 np0005539550 nova_compute[257631]: 2025-11-29 08:11:22.407 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
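
The inventory dict above is enough to recompute what placement will actually schedule: usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check of the logged numbers:

```python
# The formula placement applies to the inventory logged above:
# usable = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {usable:g} schedulable")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1 -- which is why this
# 8-vCPU host can hold far more than 8 guest vCPUs.
```
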
Nov 29 03:11:22 np0005539550 nova_compute[257631]: 2025-11-29 08:11:22.446 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:11:22 np0005539550 nova_compute[257631]: 2025-11-29 08:11:22.446 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
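
The acquire/release pair bracketing _update_available_resource (waited 0.000s, held 0.672s) is oslo.concurrency's named-lock decorator at work; every resource-tracker pass serializes on the "compute_resources" lock. A minimal sketch of the pattern (real lockutils API; the decorated function body is illustrative):

```python
# Sketch of the locking pattern that produces the acquired/
# released lines above.
from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix('nova-')

@synchronized('compute_resources')
def _update_available_resource():
    ...  # read hypervisor stats, reconcile placement, etc.

# the same lock as a context manager:
with lockutils.lock('compute_resources'):
    pass
```
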
Nov 29 03:11:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 112 op/s
Nov 29 03:11:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.869 257641 DEBUG nova.network.neutron [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.888 257641 DEBUG oslo_concurrency.lockutils [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.889 257641 DEBUG oslo_concurrency.lockutils [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.889 257641 DEBUG nova.network.neutron [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Refreshing network info cache for port da689592-0a11-44fa-ba74-ffc145f551ab _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.892 257641 DEBUG nova.virt.libvirt.vif [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.892 257641 DEBUG nova.network.os_vif_util [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.893 257641 DEBUG nova.network.os_vif_util [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:14:ca,bridge_name='br-int',has_traffic_filtering=True,id=da689592-0a11-44fa-ba74-ffc145f551ab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda689592-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.893 257641 DEBUG os_vif [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:14:ca,bridge_name='br-int',has_traffic_filtering=True,id=da689592-0a11-44fa-ba74-ffc145f551ab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda689592-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.893 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.894 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.894 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.897 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.897 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda689592-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.897 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapda689592-0a, col_values=(('external_ids', {'iface-id': 'da689592-0a11-44fa-ba74-ffc145f551ab', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:14:ca', 'vm-uuid': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
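
The AddBridgeCommand / AddPortCommand / DbSetCommand entries are ovsdbapp IDL transactions against the local OVSDB: the programmatic equivalent of ovs-vsctl add-port br-int tapda689592-0a followed by setting external_ids on the Interface row. A sketch using ovsdbapp's documented API (the OVSDB socket path is the stock default and an assumption here):

```python
# Sketch: the same transaction the AddPortCommand / DbSetCommand
# lines above record, via ovsdbapp.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapda689592-0a', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapda689592-0a',
        ('external_ids', {
            'iface-id': 'da689592-0a11-44fa-ba74-ffc145f551ab',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:7d:14:ca'})))
```
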
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.899 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539550 NetworkManager[49039]: <info>  [1764403883.9000] manager: (tapda689592-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.902 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.907 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.908 257641 INFO os_vif [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:14:ca,bridge_name='br-int',has_traffic_filtering=True,id=da689592-0a11-44fa-ba74-ffc145f551ab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda689592-0a')#033[00m
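
Plugging goes through os-vif: nova converts its VIF dict into a versioned VIFOpenVSwitch object (the Converting/Converted lines) and hands it to the ovs plugin, which issues the OVSDB transaction above. A trimmed sketch of those entry points (real os_vif API; nova populates far more fields than shown, so treat the object contents as assumptions):

```python
# Trimmed sketch of the os-vif calls behind the Converting /
# Plugging / Successfully plugged lines above.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()

port_id = 'da689592-0a11-44fa-ba74-ffc145f551ab'
my_vif = vif.VIFOpenVSwitch(
    id=port_id,
    address='fa:16:3e:7d:14:ca',
    bridge_name='br-int',
    vif_name='tapda689592-0a',
    plugin='ovs',
    port_profile=vif.VIFPortProfileOpenVSwitch(interface_id=port_id),
    network=network.Network(id='ddd8b166-79ec-408d-b52c-581ad9dd6cb8',
                            bridge='br-int'))
instance = instance_info.InstanceInfo(
    uuid='8cea7db0-f5d5-4e6a-bdad-72b5c7dae496',
    name='tempest-AttachInterfacesTestJSON-server-62373585')

os_vif.plug(my_vif, instance)   # drives the OVSDB transaction above
```
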
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.909 257641 DEBUG nova.virt.libvirt.vif [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.909 257641 DEBUG nova.network.os_vif_util [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.909 257641 DEBUG nova.network.os_vif_util [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:14:ca,bridge_name='br-int',has_traffic_filtering=True,id=da689592-0a11-44fa-ba74-ffc145f551ab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda689592-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.911 257641 DEBUG nova.virt.libvirt.guest [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] attach device xml: <interface type="ethernet">
Nov 29 03:11:23 np0005539550 nova_compute[257631]:  <mac address="fa:16:3e:7d:14:ca"/>
Nov 29 03:11:23 np0005539550 nova_compute[257631]:  <model type="virtio"/>
Nov 29 03:11:23 np0005539550 nova_compute[257631]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:23 np0005539550 nova_compute[257631]:  <mtu size="1442"/>
Nov 29 03:11:23 np0005539550 nova_compute[257631]:  <target dev="tapda689592-0a"/>
Nov 29 03:11:23 np0005539550 nova_compute[257631]: </interface>
Nov 29 03:11:23 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
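
That <interface> document is handed to libvirt's device-attach call for both the live domain and its persistent definition. With libvirt-python the equivalent is (a sketch, not nova's code):

```python
# Sketch: attaching the interface XML logged above with
# libvirt-python; the flag pair attaches to both the running
# domain and its persistent config.
import libvirt

INTERFACE_XML = """<interface type="ethernet">
  <mac address="fa:16:3e:7d:14:ca"/>
  <model type="virtio"/>
  <driver name="vhost" rx_queue_size="512"/>
  <mtu size="1442"/>
  <target dev="tapda689592-0a"/>
</interface>"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('8cea7db0-f5d5-4e6a-bdad-72b5c7dae496')
dom.attachDeviceFlags(
    INTERFACE_XML,
    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
```
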
Nov 29 03:11:23 np0005539550 kernel: tapda689592-0a: entered promiscuous mode
Nov 29 03:11:23 np0005539550 NetworkManager[49039]: <info>  [1764403883.9222] manager: (tapda689592-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.924 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:23Z|00297|binding|INFO|Claiming lport da689592-0a11-44fa-ba74-ffc145f551ab for this chassis.
Nov 29 03:11:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:23Z|00298|binding|INFO|da689592-0a11-44fa-ba74-ffc145f551ab: Claiming fa:16:3e:7d:14:ca 10.100.0.14
Nov 29 03:11:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:23.930 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:14:ca 10.100.0.14'], port_security=['fa:16:3e:7d:14:ca 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-903222066', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-903222066', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=da689592-0a11-44fa-ba74-ffc145f551ab) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:23.931 158978 INFO neutron.agent.ovn.metadata.agent [-] Port da689592-0a11-44fa-ba74-ffc145f551ab in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 bound to our chassis#033[00m
Nov 29 03:11:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:23.932 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:11:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:23Z|00299|binding|INFO|Setting lport da689592-0a11-44fa-ba74-ffc145f551ab ovn-installed in OVS
Nov 29 03:11:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:23Z|00300|binding|INFO|Setting lport da689592-0a11-44fa-ba74-ffc145f551ab up in Southbound
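
ovn-controller claims the lport because external_ids:iface-id on the new OVS Interface matches a Southbound Port_Binding whose requested-chassis is this host; it then marks the port ovn-installed on the OVS side and up=true in the Southbound DB. One way to confirm that linkage from the host (standard OVS/OVN CLIs; the wrapper function is illustrative):

```python
# Sketch: checking the iface-id <-> Port_Binding linkage behind
# the Claiming / up transitions above.
import subprocess

def sh(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

print('OVS iface-id :',
      sh('ovs-vsctl', 'get', 'Interface', 'tapda689592-0a',
         'external_ids:iface-id'))
print('SB binding up:',
      sh('ovn-sbctl', 'get', 'Port_Binding',
         'da689592-0a11-44fa-ba74-ffc145f551ab', 'up'))
```
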
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.945 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:23.948 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e6597a66-57d1-4528-b5ea-20b87cf47be3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:23 np0005539550 nova_compute[257631]: 2025-11-29 08:11:23.948 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:23 np0005539550 systemd-udevd[308085]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:11:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:23.976 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[53ca9fa7-e229-4169-a37b-b8e9cec657b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:23.979 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[74fedaec-449a-4a39-a107-3323e2f13ba8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:23 np0005539550 NetworkManager[49039]: <info>  [1764403883.9828] device (tapda689592-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:11:23 np0005539550 NetworkManager[49039]: <info>  [1764403883.9838] device (tapda689592-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.007 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c38808fa-2138-4607-b7bb-17bf1f415dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.021 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[abac8af4-1f47-4a02-8321-414159d38157]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308091, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.037 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c86d9722-3819-4c52-93b4-db3929694d4f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688022, 'tstamp': 688022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308092, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688025, 'tstamp': 688025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308092, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
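
Those privsep replies are pyroute2 netlink dumps taken inside the ovnmeta-ddd8b166… namespace: the agent confirms that its veth leg tapddd8b166-71 carries both the metadata address 169.254.169.254/32 and a tenant-subnet address 10.100.0.2/28. The same check by hand (pyroute2 is the library behind these replies; the snippet itself is a sketch):

```python
# Sketch: repeating the agent's in-namespace address check with
# pyroute2; the namespace name is derived from the network UUID.
from pyroute2 import NetNS

with NetNS('ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8') as ns:
    for addr in ns.get_addr(label='tapddd8b166-71'):
        print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
# expected: 169.254.169.254 32 and 10.100.0.2 28, as in the
# RTM_NEWADDR dump above
```
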
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.039 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.040 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.042 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.042 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.043 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:24.043 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.050 257641 DEBUG nova.virt.libvirt.driver [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.050 257641 DEBUG nova.virt.libvirt.driver [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.050 257641 DEBUG nova.virt.libvirt.driver [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:f7:34:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.051 257641 DEBUG nova.virt.libvirt.driver [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:0b:56:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.051 257641 DEBUG nova.virt.libvirt.driver [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:5f:e2:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.051 257641 DEBUG nova.virt.libvirt.driver [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] No VIF found with MAC fa:16:3e:7d:14:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.073 257641 DEBUG nova.virt.libvirt.guest [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  <nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  <nova:creationTime>2025-11-29 08:11:24</nova:creationTime>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  <nova:flavor name="m1.nano">
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:memory>128</nova:memory>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:disk>1</nova:disk>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:swap>0</nova:swap>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  </nova:flavor>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  <nova:owner>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  </nova:owner>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  <nova:ports>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:port uuid="c4c22a55-f99c-429f-8ef0-3fa113b99b13">
Nov 29 03:11:24 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:port uuid="d1e89475-0d6a-4f19-8d70-45b91897d9c0">
Nov 29 03:11:24 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:port uuid="3145abd1-9798-4474-af6e-da5917de6dab">
Nov 29 03:11:24 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    <nova:port uuid="da689592-0a11-44fa-ba74-ffc145f551ab">
Nov 29 03:11:24 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:24 np0005539550 nova_compute[257631]:  </nova:ports>
Nov 29 03:11:24 np0005539550 nova_compute[257631]: </nova:instance>
Nov 29 03:11:24 np0005539550 nova_compute[257631]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
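
The block is stored in the domain's metadata element under nova's namespace, so a libvirt domain can be mapped back to instance, flavor and owner without asking the API. A libvirt-python sketch (the minimal XML and the 'nova' key are stand-ins for the full <nova:instance> document above, not nova's exact call):

```python
# Sketch: storing and reading back an instance-metadata block via
# libvirt's domain-metadata API.
import libvirt

NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'
METADATA_XML = ('<instance xmlns="%s">'
                '<name>tempest-AttachInterfacesTestJSON-server-62373585</name>'
                '</instance>' % NOVA_NS)

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('8cea7db0-f5d5-4e6a-bdad-72b5c7dae496')
dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, METADATA_XML,
                'nova', NOVA_NS,
                libvirt.VIR_DOMAIN_AFFECT_LIVE |
                libvirt.VIR_DOMAIN_AFFECT_CONFIG)
print(dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS))
```
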
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.097 257641 DEBUG oslo_concurrency.lockutils [None req-1f59cefc-45dd-4581-b418-dd303c293c78 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-da689592-0a11-44fa-ba74-ffc145f551ab" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.446 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.446 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.446 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.633 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.906 257641 DEBUG nova.compute.manager [req-a1197f33-5053-415f-b59c-6888b6eb1e97 req-f673260f-0621-4da1-939d-91d4265b6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-da689592-0a11-44fa-ba74-ffc145f551ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.907 257641 DEBUG oslo_concurrency.lockutils [req-a1197f33-5053-415f-b59c-6888b6eb1e97 req-f673260f-0621-4da1-939d-91d4265b6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.907 257641 DEBUG oslo_concurrency.lockutils [req-a1197f33-5053-415f-b59c-6888b6eb1e97 req-f673260f-0621-4da1-939d-91d4265b6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.907 257641 DEBUG oslo_concurrency.lockutils [req-a1197f33-5053-415f-b59c-6888b6eb1e97 req-f673260f-0621-4da1-939d-91d4265b6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.907 257641 DEBUG nova.compute.manager [req-a1197f33-5053-415f-b59c-6888b6eb1e97 req-f673260f-0621-4da1-939d-91d4265b6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-da689592-0a11-44fa-ba74-ffc145f551ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:24 np0005539550 nova_compute[257631]: 2025-11-29 08:11:24.907 257641 WARNING nova.compute.manager [req-a1197f33-5053-415f-b59c-6888b6eb1e97 req-f673260f-0621-4da1-939d-91d4265b6d83 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-da689592-0a11-44fa-ba74-ffc145f551ab for instance with vm_state active and task_state None.#033[00m
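
The warning is benign here: neutron emits network-vif-plugged once the OVN binding comes up, but attach_interface had already completed (its lock was released at 08:11:24.097) and nothing had registered for the event, so pop_instance_event finds no waiter. Schematically (an illustration of the waiter pattern, not nova's actual implementation):

```python
# Schematic: the waiter table behind "No waiting events found".
# An external event only matters if some operation registered
# interest before the event arrived.
import threading

waiters = {}   # (instance_uuid, event_name) -> threading.Event

def prepare_for_event(instance_uuid, event_name):
    waiters[(instance_uuid, event_name)] = threading.Event()

def pop_instance_event(instance_uuid, event_name):
    ev = waiters.pop((instance_uuid, event_name), None)
    if ev is None:
        print('Received unexpected event', event_name)  # the WARNING above
    else:
        ev.set()  # wakes whoever prepared and is waiting on ev
```
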
Nov 29 03:11:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 179 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1020 KiB/s wr, 126 op/s
Nov 29 03:11:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:25.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.088 257641 DEBUG oslo_concurrency.lockutils [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-d1e89475-0d6a-4f19-8d70-45b91897d9c0" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.089 257641 DEBUG oslo_concurrency.lockutils [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-d1e89475-0d6a-4f19-8d70-45b91897d9c0" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.126 257641 DEBUG nova.objects.instance [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'flavor' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.182 257641 DEBUG nova.virt.libvirt.vif [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.183 257641 DEBUG nova.network.os_vif_util [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.184 257641 DEBUG nova.network.os_vif_util [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
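[editor's note] The two DEBUG records above show nova translating its legacy VIF dict into an os-vif VIFOpenVSwitch object before handing it to the os-vif OVS plugin. The field mapping can be read directly off the log; a minimal illustrative sketch in Python (the dataclass and function names here are hypothetical, not nova's actual os_vif_util code):

    from dataclasses import dataclass

    @dataclass
    class OvsVif:
        # Attribute names mirror the VIFOpenVSwitch repr printed above.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        preserve_on_delete: bool
        active: bool

    def nova_vif_to_ovs(vif: dict) -> OvsVif:
        """Map the legacy nova VIF dict onto the os-vif style object."""
        details = vif.get("details", {})
        return OvsVif(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            vif_name=vif["devname"],  # e.g. tapd1e89475-0d
            has_traffic_filtering=details.get("port_filter", False),
            preserve_on_delete=vif.get("preserve_on_delete", False),
            active=vif.get("active", False),
        )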
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.188 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:56:cd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd1e89475-0d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.191 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:56:cd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd1e89475-0d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.193 257641 DEBUG nova.virt.libvirt.driver [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Attempting to detach device tapd1e89475-0d from instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.194 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <mac address="fa:16:3e:0b:56:cd"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <model type="virtio"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <mtu size="1442"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <target dev="tapd1e89475-0d"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
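[editor's note] _detach_from_persistent removes the interface from the stored domain definition only: nova's guest.detach_device passes the interface XML logged above to libvirt's detachDeviceFlags with the config-only flag. A condensed stand-alone equivalent using the libvirt-python bindings (connection URI, instance UUID, and XML taken from this log; error handling omitted):

    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:0b:56:cd"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapd1e89475-0d"/>
    </interface>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("8cea7db0-f5d5-4e6a-bdad-72b5c7dae496")
    # Affect only the persistent definition; the running guest keeps the
    # interface until the separate live detach later in this log.
    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)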
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.275 257641 DEBUG nova.network.neutron [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updated VIF entry in instance network info cache for port da689592-0a11-44fa-ba74-ffc145f551ab. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.276 257641 DEBUG nova.network.neutron [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.299 257641 DEBUG oslo_concurrency.lockutils [req-b44931f1-ff27-41f9-b9ea-6c8b14ddfefc req-102c254c-2f1f-43c7-83f1-52c30a2abd10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
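[editor's note] The instance_info_cache payload logged just above is plain JSON, so it is easy to summarize when debugging. The snippet below (assuming nw_info_json holds the list copied from that log record) prints port id, fixed IPs, and the active flag, which makes it obvious that only port da689592-0a11-44fa-ba74-ffc145f551ab is still inactive and marked preserve_on_delete:

    import json

    # nw_info_json: the JSON list logged by update_instance_cache_with_nw_info
    nw_info = json.loads(nw_info_json)
    for port in nw_info:
        fixed_ips = [ip["address"]
                     for subnet in port["network"]["subnets"]
                     for ip in subnet["ips"]]
        state = "active" if port["active"] else "inactive"
        print(port["id"], fixed_ips, state)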
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.300 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.300 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.300 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:11:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:26Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:14:ca 10.100.0.14
Nov 29 03:11:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:26Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:14:ca 10.100.0.14
Nov 29 03:11:26 np0005539550 podman[308094]: 2025-11-29 08:11:26.350995761 +0000 UTC m=+0.089184859 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:11:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.359 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:56:cd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd1e89475-0d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:11:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
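[editor's note] The radosgw beast frontend writes one access-log line per request; its fields (client, user, timestamp, request line, status, latency) can be pulled apart with a short regex, for example:

    import re

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:11:26.358 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')

    m = re.search(r'(?P<client>\d+\.\d+\.\d+\.\d+) - (?P<user>\S+) '
                  r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
                  r'.*latency=(?P<latency>[\d.]+)s', line)
    if m:
        print(m.group("client"), m.group("req"),
              m.group("status"), float(m.group("latency")))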
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.364 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0b:56:cd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd1e89475-0d"/></interface> not found in domain: <domain type='kvm' id='39'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <name>instance-00000050</name>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <uuid>8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</uuid>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:creationTime>2025-11-29 08:11:24</nova:creationTime>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:flavor name="m1.nano">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:memory>128</nova:memory>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:disk>1</nova:disk>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:swap>0</nova:swap>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:flavor>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:owner>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:owner>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:ports>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="c4c22a55-f99c-429f-8ef0-3fa113b99b13">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="d1e89475-0d6a-4f19-8d70-45b91897d9c0">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="3145abd1-9798-4474-af6e-da5917de6dab">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="da689592-0a11-44fa-ba74-ffc145f551ab">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:ports>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: </nova:instance>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <memory unit='KiB'>131072</memory>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <resource>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <partition>/machine</partition>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </resource>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <sysinfo type='smbios'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='serial'>8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='uuid'>8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <boot dev='hd'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <smbios mode='sysinfo'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <vmcoreinfo state='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <feature policy='require' name='x2apic'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <feature policy='require' name='vme'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <clock offset='utc'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <timer name='hpet' present='no'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <on_reboot>restart</on_reboot>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <on_crash>destroy</on_crash>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <disk type='network' device='disk'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <auth username='openstack'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source protocol='rbd' name='vms/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk' index='2'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='vda' bus='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='virtio-disk0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <disk type='network' device='cdrom'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <auth username='openstack'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source protocol='rbd' name='vms/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config' index='1'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='sda' bus='sata'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <readonly/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='sata0-0-0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pcie.0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='1' port='0x10'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='2' port='0x11'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='3' port='0x12'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='4' port='0x13'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='5' port='0x14'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='6' port='0x15'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='7' port='0x16'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='8' port='0x17'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.8'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='9' port='0x18'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.9'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='10' port='0x19'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.10'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='11' port='0x1a'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.11'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='12' port='0x1b'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.12'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='13' port='0x1c'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.13'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='14' port='0x1d'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.14'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='15' port='0x1e'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.15'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='16' port='0x1f'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.16'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='17' port='0x20'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.17'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='18' port='0x21'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.18'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='19' port='0x22'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.19'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='20' port='0x23'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.20'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='21' port='0x24'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.21'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='22' port='0x25'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.22'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='23' port='0x26'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.23'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='24' port='0x27'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.24'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='25' port='0x28'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.25'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-pci-bridge'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.26'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='usb'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='sata' index='0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='ide'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <interface type='ethernet'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mac address='fa:16:3e:f7:34:8d'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='tapc4c22a55-f9'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mtu size='1442'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='net0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <interface type='ethernet'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mac address='fa:16:3e:0b:56:cd'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='tapd1e89475-0d'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mtu size='1442'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='net1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <interface type='ethernet'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mac address='fa:16:3e:5f:e2:8d'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='tap3145abd1-97'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mtu size='1442'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='net2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <interface type='ethernet'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mac address='fa:16:3e:7d:14:ca'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='tapda689592-0a'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mtu size='1442'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='net3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <serial type='pty'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source path='/dev/pts/0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <log file='/var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/console.log' append='off'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target type='isa-serial' port='0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <model name='isa-serial'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </target>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='serial0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source path='/dev/pts/0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <log file='/var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/console.log' append='off'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target type='serial' port='0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='serial0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </console>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <input type='tablet' bus='usb'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='input0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </input>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <input type='mouse' bus='ps2'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='input1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </input>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <input type='keyboard' bus='ps2'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='input2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </input>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <listen type='address' address='::0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </graphics>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <audio id='1' type='none'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='video0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <watchdog model='itco' action='reset'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='watchdog0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </watchdog>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <memballoon model='virtio'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <stats period='10'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='balloon0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <rng model='virtio'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='rng0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <label>system_u:system_r:svirt_t:s0:c290,c407</label>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c290,c407</imagelabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </seclabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <label>+107:+107</label>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </seclabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
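[editor's note] get_interface_by_cfg parses the domain XML and compares every <interface> element against the requested config; since the vif was already removed from the persistent definition, nothing matched and the whole domain dump above was logged. Nova's real comparison normalizes full config objects, but a rough stand-alone match on MAC plus target dev looks like:

    import xml.etree.ElementTree as ET

    def find_interface(domain_xml: str, mac: str, dev: str):
        """Return the matching <interface> element, or None."""
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            mac_el = iface.find("mac")
            tgt_el = iface.find("target")
            if (mac_el is not None and tgt_el is not None
                    and mac_el.get("address") == mac
                    and tgt_el.get("dev") == dev):
                return iface
        return None

    # With the domain XML above this returns None for the detached port:
    # find_interface(domain_xml, "fa:16:3e:0b:56:cd", "tapd1e89475-0d")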
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.364 257641 INFO nova.virt.libvirt.driver [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapd1e89475-0d from instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 from the persistent domain config.
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.364 257641 DEBUG nova.virt.libvirt.driver [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] (1/8): Attempting to detach device tapd1e89475-0d with device alias net1 from instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.365 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] detach device xml: <interface type="ethernet">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <mac address="fa:16:3e:0b:56:cd"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <model type="virtio"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <mtu size="1442"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <target dev="tapd1e89475-0d"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
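[editor's note] The (1/8) prefix above marks the first of up to eight detach attempts: a live detach is asynchronous, so after issuing it nova waits for libvirt's DEVICE_REMOVED event for alias net1 (which arrives at 08:11:26.478 below). A condensed sketch of that issue-and-wait pattern with libvirt-python's event loop (same interface XML as before; retry and cleanup logic omitted):

    import threading
    import libvirt

    IFACE_XML = """<interface type="ethernet">
      <mac address="fa:16:3e:0b:56:cd"/>
      <model type="virtio"/>
      <driver name="vhost" rx_queue_size="512"/>
      <mtu size="1442"/>
      <target dev="tapd1e89475-0d"/>
    </interface>"""

    libvirt.virEventRegisterDefaultImpl()        # must precede open()
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("8cea7db0-f5d5-4e6a-bdad-72b5c7dae496")
    removed = threading.Event()

    def on_removed(conn, dom, dev_alias, opaque):
        # libvirt reports the device alias, "net1" for this interface.
        if dev_alias == "net1":
            removed.set()

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_removed, None)

    def event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=event_loop, daemon=True).start()

    dom.detachDeviceFlags(IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    if not removed.wait(timeout=20):
        raise TimeoutError("no DEVICE_REMOVED event for net1")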
Nov 29 03:11:26 np0005539550 kernel: tapd1e89475-0d (unregistering): left promiscuous mode
Nov 29 03:11:26 np0005539550 NetworkManager[49039]: <info>  [1764403886.4659] device (tapd1e89475-0d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:11:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:26Z|00301|binding|INFO|Releasing lport d1e89475-0d6a-4f19-8d70-45b91897d9c0 from this chassis (sb_readonly=0)
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.471 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:26Z|00302|binding|INFO|Setting lport d1e89475-0d6a-4f19-8d70-45b91897d9c0 down in Southbound
Nov 29 03:11:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:26Z|00303|binding|INFO|Removing iface tapd1e89475-0d ovn-installed in OVS
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.475 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.478 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764403886.477713, 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.479 257641 DEBUG nova.virt.libvirt.driver [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Start waiting for the detach event from libvirt for device tapd1e89475-0d with device alias net1 for instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.479 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:0b:56:cd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd1e89475-0d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.483 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:0b:56:cd"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapd1e89475-0d"/></interface> not found in domain: <domain type='kvm' id='39'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <name>instance-00000050</name>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <uuid>8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</uuid>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:creationTime>2025-11-29 08:11:24</nova:creationTime>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:flavor name="m1.nano">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:memory>128</nova:memory>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:disk>1</nova:disk>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:swap>0</nova:swap>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:flavor>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:owner>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:owner>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:ports>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="c4c22a55-f99c-429f-8ef0-3fa113b99b13">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="d1e89475-0d6a-4f19-8d70-45b91897d9c0">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="3145abd1-9798-4474-af6e-da5917de6dab">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="da689592-0a11-44fa-ba74-ffc145f551ab">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:ports>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: </nova:instance>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <memory unit='KiB'>131072</memory>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <currentMemory unit='KiB'>131072</currentMemory>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <vcpu placement='static'>1</vcpu>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <resource>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <partition>/machine</partition>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </resource>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <sysinfo type='smbios'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='manufacturer'>RDO</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='product'>OpenStack Compute</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='serial'>8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='uuid'>8cea7db0-f5d5-4e6a-bdad-72b5c7dae496</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <entry name='family'>Virtual Machine</entry>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <boot dev='hd'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <smbios mode='sysinfo'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <vmcoreinfo state='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <cpu mode='custom' match='exact' check='full'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <model fallback='forbid'>Nehalem</model>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <feature policy='require' name='x2apic'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <feature policy='require' name='hypervisor'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <feature policy='require' name='vme'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <clock offset='utc'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <timer name='pit' tickpolicy='delay'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <timer name='rtc' tickpolicy='catchup'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <timer name='hpet' present='no'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <on_poweroff>destroy</on_poweroff>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <on_reboot>restart</on_reboot>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <on_crash>destroy</on_crash>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <disk type='network' device='disk'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <auth username='openstack'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source protocol='rbd' name='vms/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk' index='2'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='vda' bus='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='virtio-disk0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <disk type='network' device='cdrom'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='qemu' type='raw' cache='none'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <auth username='openstack'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <secret type='ceph' uuid='b66774a7-56d9-5535-bd8c-681234404870'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source protocol='rbd' name='vms/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_disk.config' index='1'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.100' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.102' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <host name='192.168.122.101' port='6789'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='sda' bus='sata'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <readonly/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='sata0-0-0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='0' model='pcie-root'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pcie.0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='1' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='1' port='0x10'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='2' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='2' port='0x11'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='3' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='3' port='0x12'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='4' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='4' port='0x13'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='5' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='5' port='0x14'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='6' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='6' port='0x15'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='7' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='7' port='0x16'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='8' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='8' port='0x17'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.8'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='9' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='9' port='0x18'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.9'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='10' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='10' port='0x19'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.10'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='11' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='11' port='0x1a'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.11'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='12' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='12' port='0x1b'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.12'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='13' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='13' port='0x1c'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.13'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='14' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='14' port='0x1d'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.14'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='15' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='15' port='0x1e'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.15'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='16' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='16' port='0x1f'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.16'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='17' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='17' port='0x20'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.17'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='18' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='18' port='0x21'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.18'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='19' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='19' port='0x22'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.19'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='20' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='20' port='0x23'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.20'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='21' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='21' port='0x24'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.21'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='22' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='22' port='0x25'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.22'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='23' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='23' port='0x26'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.23'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='24' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='24' port='0x27'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.24'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='25' model='pcie-root-port'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-root-port'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target chassis='25' port='0x28'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.25'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model name='pcie-pci-bridge'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='pci.26'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='usb' index='0' model='piix3-uhci'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='usb'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <controller type='sata' index='0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='ide'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </controller>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <interface type='ethernet'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mac address='fa:16:3e:f7:34:8d'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='tapc4c22a55-f9'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mtu size='1442'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='net0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <interface type='ethernet'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mac address='fa:16:3e:5f:e2:8d'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='tap3145abd1-97'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mtu size='1442'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='net2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <interface type='ethernet'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mac address='fa:16:3e:7d:14:ca'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target dev='tapda689592-0a'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <driver name='vhost' rx_queue_size='512'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <mtu size='1442'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='net3'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <serial type='pty'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source path='/dev/pts/0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <log file='/var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/console.log' append='off'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target type='isa-serial' port='0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:        <model name='isa-serial'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      </target>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='serial0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <console type='pty' tty='/dev/pts/0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <source path='/dev/pts/0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <log file='/var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496/console.log' append='off'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <target type='serial' port='0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='serial0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </console>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <input type='tablet' bus='usb'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='input0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='usb' bus='0' port='1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </input>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <input type='mouse' bus='ps2'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='input1'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </input>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <input type='keyboard' bus='ps2'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='input2'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </input>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <listen type='address' address='::0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </graphics>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <audio id='1' type='none'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <model type='virtio' heads='1' primary='yes'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='video0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <watchdog model='itco' action='reset'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='watchdog0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </watchdog>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <memballoon model='virtio'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <stats period='10'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='balloon0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <rng model='virtio'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <backend model='random'>/dev/urandom</backend>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <alias name='rng0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <label>system_u:system_r:svirt_t:s0:c290,c407</label>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c290,c407</imagelabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </seclabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <label>+107:+107</label>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <imagelabel>+107:+107</imagelabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </seclabel>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
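
The block above is nova dumping the instance's live libvirt domain XML while guest.get_interface_by_cfg locates the <interface> element for the port being detached. A minimal libvirt-python sketch of the same lookup (the connection URI and the helper name are illustrative, not nova's actual code path):

    import libvirt
    import xml.etree.ElementTree as ET

    def find_interface_xml(dom_uuid, mac):
        # Fetch the live domain XML and return the <interface> element whose
        # <mac address=...> matches -- roughly what get_interface_by_cfg does.
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.lookupByUUIDString(dom_uuid)
            root = ET.fromstring(dom.XMLDesc(0))  # 0 = live config, no flags
            for iface in root.findall("./devices/interface"):
                mac_el = iface.find("mac")
                if mac_el is not None and mac_el.get("address") == mac:
                    return ET.tostring(iface, encoding="unicode")
            return None
        finally:
            conn.close()

    # e.g. find_interface_xml("8cea7db0-f5d5-4e6a-bdad-72b5c7dae496", "fa:16:3e:0b:56:cd")
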
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.483 257641 INFO nova.virt.libvirt.driver [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully detached device tapd1e89475-0d from instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 from the live domain config.#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.484 257641 DEBUG nova.virt.libvirt.vif [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.484 257641 DEBUG nova.network.os_vif_util [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.484 257641 DEBUG nova.network.os_vif_util [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.485 257641 DEBUG os_vif [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.486 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.486 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e89475-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.489 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.494 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.496 257641 INFO os_vif [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d')#033[00m
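
The detach is finished off through the os-vif library: nova converts its VIF dict into the VIFOpenVSwitch object logged above and hands it to os_vif.unplug(), which issues the DelPortCommand OVSDB transaction seen a few lines earlier. A standalone sketch of that call with field values taken from the log (hand-building the objects like this is illustrative only; nova derives them via nova.network.os_vif_util):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, ...)

    ovs_vif = vif.VIFOpenVSwitch(
        id="d1e89475-0d6a-4f19-8d70-45b91897d9c0",
        address="fa:16:3e:0b:56:cd",
        vif_name="tapd1e89475-0d",
        bridge_name="br-int",
    )
    instance = instance_info.InstanceInfo(
        uuid="8cea7db0-f5d5-4e6a-bdad-72b5c7dae496",
        name="tempest-AttachInterfacesTestJSON-server-62373585",
    )
    os_vif.unplug(ovs_vif, instance)  # removes tapd1e89475-0d from br-int
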
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.496 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:56:cd 10.100.0.10'], port_security=['fa:16:3e:0b:56:cd 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d1e89475-0d6a-4f19-8d70-45b91897d9c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.497 257641 DEBUG nova.virt.libvirt.guest [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:creationTime>2025-11-29 08:11:26</nova:creationTime>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:flavor name="m1.nano">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:memory>128</nova:memory>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:disk>1</nova:disk>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:swap>0</nova:swap>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:vcpus>1</nova:vcpus>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:flavor>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:owner>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:user uuid="b95b3e841be1420c99ee0a04dd0840f1">tempest-AttachInterfacesTestJSON-372493183-project-member</nova:user>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:project uuid="ff7c805d4242453aa2148a247956391d">tempest-AttachInterfacesTestJSON-372493183</nova:project>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:owner>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  <nova:ports>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="c4c22a55-f99c-429f-8ef0-3fa113b99b13">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="3145abd1-9798-4474-af6e-da5917de6dab">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    <nova:port uuid="da689592-0a11-44fa-ba74-ffc145f551ab">
Nov 29 03:11:26 np0005539550 nova_compute[257631]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:    </nova:port>
Nov 29 03:11:26 np0005539550 nova_compute[257631]:  </nova:ports>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: </nova:instance>
Nov 29 03:11:26 np0005539550 nova_compute[257631]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
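
With the interface gone, nova rewrites the <nova:instance> metadata element so the <nova:ports> list above no longer carries port d1e89475-0d6a-4f19-8d70-45b91897d9c0; guest.set_metadata ends in libvirt's setMetadata call. A trimmed sketch of that API (metadata body shortened; the exact affect flags nova passes may differ):

    import libvirt

    NOVA_NS = "http://openstack.org/xmlns/libvirt/nova/1.1"
    metadata_xml = (
        '<nova:instance xmlns:nova="%s">'
        '<nova:name>tempest-AttachInterfacesTestJSON-server-62373585</nova:name>'
        '</nova:instance>' % NOVA_NS
    )

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("8cea7db0-f5d5-4e6a-bdad-72b5c7dae496")
    # Replace the per-namespace metadata element on the live domain;
    # "instance" is the XML prefix registered for the nova namespace.
    dom.setMetadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, metadata_xml,
                    "instance", NOVA_NS, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()
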
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.497 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d1e89475-0d6a-4f19-8d70-45b91897d9c0 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.499 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.512 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c5aecb9b-dac4-4e38-8756-fe1095a3ad7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.538 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[46942b49-0b69-411b-a914-7b78d25ec42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.541 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[322d5cc6-23e3-489d-b4d6-3839e69ba03d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.567 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6a36be3b-2936-417f-b944-15bbe33e4d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.585 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[77fbea4a-adac-472e-bda0-98b4a784de97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308130, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.602 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[233c26f4-3f53-45c3-918d-6818b0bcb192]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688022, 'tstamp': 688022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308131, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688025, 'tstamp': 688025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308131, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
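
The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace; the agent drives them through neutron's ip_lib, which wraps pyroute2 behind the privsep daemon. The same data can be read directly with pyroute2 (requires root; the namespace name is taken from the 'target' field in the log):

    import pyroute2

    ns_name = "ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8"
    with pyroute2.NetNS(ns_name) as ns:
        for link in ns.get_links():   # yields RTM_NEWLINK messages
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_OPERSTATE"))
        for addr in ns.get_addr():    # yields RTM_NEWADDR messages
            print(addr.get_attr("IFA_LABEL"),
                  addr.get_attr("IFA_ADDRESS"),
                  addr["prefixlen"])
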
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.603 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.605 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.607 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.607 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.608 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.608 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:26.609 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.769 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.861 257641 DEBUG nova.compute.manager [req-f458f561-0295-4c94-9381-dbda1229a60a req-9032b115-3ae8-401c-9357-cf913cd1fca8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-unplugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.861 257641 DEBUG oslo_concurrency.lockutils [req-f458f561-0295-4c94-9381-dbda1229a60a req-9032b115-3ae8-401c-9357-cf913cd1fca8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.862 257641 DEBUG oslo_concurrency.lockutils [req-f458f561-0295-4c94-9381-dbda1229a60a req-9032b115-3ae8-401c-9357-cf913cd1fca8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.862 257641 DEBUG oslo_concurrency.lockutils [req-f458f561-0295-4c94-9381-dbda1229a60a req-9032b115-3ae8-401c-9357-cf913cd1fca8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.862 257641 DEBUG nova.compute.manager [req-f458f561-0295-4c94-9381-dbda1229a60a req-9032b115-3ae8-401c-9357-cf913cd1fca8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-unplugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:26 np0005539550 nova_compute[257631]: 2025-11-29 08:11:26.862 257641 WARNING nova.compute.manager [req-f458f561-0295-4c94-9381-dbda1229a60a req-9032b115-3ae8-401c-9357-cf913cd1fca8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-unplugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 for instance with vm_state active and task_state None.#033[00m
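
The Acquiring/acquired/released trio and the "No waiting events found" line show how external events are serialized: the compute manager takes an in-process lock named "<instance-uuid>-events" around the event pop, and since nothing registered a waiter for this detach, the event is logged as unexpected and dropped. A reduced sketch of that pattern with oslo.concurrency (the dict layout is illustrative, not nova's actual structure):

    from oslo_concurrency import lockutils

    def pop_instance_event(waiters, instance_uuid, event_name):
        # Serialize all event bookkeeping for one instance under a named
        # lock, mirroring the lock messages in the log above.
        with lockutils.lock(f"{instance_uuid}-events"):
            return waiters.get(instance_uuid, {}).pop(event_name, None)

    # A None result corresponds to "No waiting events found dispatching ...",
    # after which the manager emits the WARNING seen above.
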
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.105 257641 DEBUG nova.compute.manager [req-6bb9717d-0ee4-4ba8-aa8d-d2c83fe49491 req-0695fec9-1aab-4a84-896d-26c02361e420 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-da689592-0a11-44fa-ba74-ffc145f551ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.106 257641 DEBUG oslo_concurrency.lockutils [req-6bb9717d-0ee4-4ba8-aa8d-d2c83fe49491 req-0695fec9-1aab-4a84-896d-26c02361e420 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.106 257641 DEBUG oslo_concurrency.lockutils [req-6bb9717d-0ee4-4ba8-aa8d-d2c83fe49491 req-0695fec9-1aab-4a84-896d-26c02361e420 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.106 257641 DEBUG oslo_concurrency.lockutils [req-6bb9717d-0ee4-4ba8-aa8d-d2c83fe49491 req-0695fec9-1aab-4a84-896d-26c02361e420 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.106 257641 DEBUG nova.compute.manager [req-6bb9717d-0ee4-4ba8-aa8d-d2c83fe49491 req-0695fec9-1aab-4a84-896d-26c02361e420 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-da689592-0a11-44fa-ba74-ffc145f551ab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.106 257641 WARNING nova.compute.manager [req-6bb9717d-0ee4-4ba8-aa8d-d2c83fe49491 req-0695fec9-1aab-4a84-896d-26c02361e420 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-da689592-0a11-44fa-ba74-ffc145f551ab for instance with vm_state active and task_state None.#033[00m
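(The "Acquiring lock ... / acquired / released" triplets above come from oslo.concurrency's lockutils, which nova uses to serialize access to its per-instance event store before popping a network-vif event; the WARNING fires because Neutron delivered an event that no code path on this host had registered a waiter for. A minimal sketch of the same locking pattern using the real oslo_concurrency.lockutils API — the event store and helper below are illustrative, not nova's actual code:)

    from oslo_concurrency import lockutils

    # Illustrative in-memory event store, keyed like the lock names in
    # the log above: "<instance-uuid>-events".
    _pending_events = {}

    def pop_instance_event(instance_uuid, event_name):
        # lockutils.lock() takes the same internal lock; nova wraps it in
        # the lockutils.synchronized decorator, which emits the
        # acquired/released DEBUG lines seen above.
        with lockutils.lock('%s-events' % instance_uuid):
            return _pending_events.get(instance_uuid, {}).pop(event_name, None)

    event = pop_instance_event('8cea7db0-f5d5-4e6a-bdad-72b5c7dae496',
                               'network-vif-unplugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0')
    if event is None:
        # Matches the "No waiting events found" / "Received unexpected
        # event" pair logged by nova.compute.manager.
        print('unexpected event: no waiter registered')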
Nov 29 03:11:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 198 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 29 03:11:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:27Z|00304|binding|INFO|Releasing lport a9e57abf-e3e4-455b-b4c5-0cda127bd5c1 from this chassis (sb_readonly=0)
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.330 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:27.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
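(The radosgw "HEAD / HTTP/1.0" requests from anonymous callers with near-zero latency, arriving steadily from the 192.168.122.x hosts, are load-balancer health probes against the beast frontend rather than real S3 traffic. A minimal sketch of an equivalent probe; the endpoint below is an assumption for illustration, since the log does not record which port beast listens on:)

    import requests

    # Hypothetical RGW endpoint; substitute the beast frontend's real
    # host:port for your deployment.
    resp = requests.head('http://rgw.example.test:8080/')
    print(resp.status_code)  # a healthy anonymous probe returns 200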
Nov 29 03:11:27 np0005539550 nova_compute[257631]: 2025-11-29 08:11:27.683 257641 DEBUG oslo_concurrency.lockutils [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:11:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:28.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.398 158978 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 99cec7b0-aca6-4667-b0de-f4979a355ada with type ""#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.399 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:14:ca 10.100.0.14'], port_security=['fa:16:3e:7d:14:ca 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-903222066', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-903222066', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=da689592-0a11-44fa-ba74-ffc145f551ab) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.400 158978 INFO neutron.agent.ovn.metadata.agent [-] Port da689592-0a11-44fa-ba74-ffc145f551ab in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.401 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00305|binding|INFO|Removing iface tapda689592-0a ovn-installed in OVS
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00306|binding|INFO|Removing lport da689592-0a11-44fa-ba74-ffc145f551ab ovn-installed in OVS
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.416 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[27ac4f0a-994d-4207-b360-b120c577bd62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.419 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.445 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd3bf5e-0afb-4a4c-8945-76e911f9c57b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.448 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[66726885-a41b-4973-88b8-fb35cd8f8bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.477 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[223fcf05-13ed-4642-ab64-19e078647d8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.492 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49779813-9472-4b26-bb70-da1dfc5611cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308138, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.509 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[98a0c726-01a3-46d2-b4c4-da34ead183b2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688022, 'tstamp': 688022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308139, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688025, 'tstamp': 688025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308139, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
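(The two large privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace: an RTM_NEWLINK dump of the metadata veth tapddd8b166-71 and an RTM_NEWADDR dump showing 169.254.169.254/32 plus the in-subnet 10.100.0.2/28. A minimal sketch of the same queries done directly with pyroute2, assuming the namespace still exists and the caller has root privileges:)

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8')
    try:
        # RTM_NEWLINK dump: interface name and operational state,
        # as in the first privsep reply above.
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        # RTM_NEWADDR dump: should list 169.254.169.254 and 10.100.0.2
        # on tapddd8b166-71, as in the second reply.
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'), addr.get_attr('IFA_ADDRESS'))
    finally:
        ns.close()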
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.511 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.512 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.514 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.514 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.515 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.515 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.515 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
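(DelPortCommand, AddPortCommand and DbSetCommand are ovsdbapp commands the metadata agent commits against the local Open vSwitch database; "Transaction caused no change" means the port was already in the desired state, so the commit was a no-op. A minimal sketch of the same three operations through ovsdbapp's Open_vSwitch API; the socket path and timeout are assumptions:)

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; adjust for the host.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction carrying the same commands logged above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapddd8b166-70', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapddd8b166-70', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapddd8b166-70',
            ('external_ids',
             {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'})))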
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.719 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.720 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.720 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.720 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.720 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.722 257641 INFO nova.compute.manager [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Terminating instance#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.723 257641 DEBUG nova.compute.manager [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:11:28 np0005539550 kernel: tapc4c22a55-f9 (unregistering): left promiscuous mode
Nov 29 03:11:28 np0005539550 NetworkManager[49039]: <info>  [1764403888.7927] device (tapc4c22a55-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.795 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00307|binding|INFO|Releasing lport c4c22a55-f99c-429f-8ef0-3fa113b99b13 from this chassis (sb_readonly=0)
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00308|binding|INFO|Setting lport c4c22a55-f99c-429f-8ef0-3fa113b99b13 down in Southbound
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00309|binding|INFO|Removing iface tapc4c22a55-f9 ovn-installed in OVS
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.797 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.805 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:34:8d 10.100.0.6'], port_security=['fa:16:3e:f7:34:8d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '368d0e95-c26a-4918-88ba-665a23cd4b1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.250'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4c22a55-f99c-429f-8ef0-3fa113b99b13) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.806 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4c22a55-f99c-429f-8ef0-3fa113b99b13 in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.808 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.814 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 kernel: tap3145abd1-97 (unregistering): left promiscuous mode
Nov 29 03:11:28 np0005539550 NetworkManager[49039]: <info>  [1764403888.8213] device (tap3145abd1-97): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.827 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6793d7e1-fbfa-4e08-9c0c-3feb93f83570]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00310|binding|INFO|Releasing lport 3145abd1-9798-4474-af6e-da5917de6dab from this chassis (sb_readonly=0)
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00311|binding|INFO|Setting lport 3145abd1-9798-4474-af6e-da5917de6dab down in Southbound
Nov 29 03:11:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:11:28Z|00312|binding|INFO|Removing iface tap3145abd1-97 ovn-installed in OVS
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.831 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 kernel: tapda689592-0a (unregistering): left promiscuous mode
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.837 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.841 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:e2:8d 10.100.0.13'], port_security=['fa:16:3e:5f:e2:8d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8cea7db0-f5d5-4e6a-bdad-72b5c7dae496', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ff7c805d4242453aa2148a247956391d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '837c5830-d55f-47dc-af7f-7cef5a2ab737', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5330ba90-719c-42ae-a31a-dd5fd1d240e2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=3145abd1-9798-4474-af6e-da5917de6dab) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:11:28 np0005539550 NetworkManager[49039]: <info>  [1764403888.8414] device (tapda689592-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.855 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.863 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[041e1021-2f1f-4e3b-b710-55444a103a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.866 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6770117a-ef24-4a70-a7c9-0aba052f23dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.890 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[14154a8b-6999-4fb2-8420-cd6ea60f204a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.906 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3a53820e-a182-4200-b20a-337920a2fca2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapddd8b166-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9d:35:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688012, 'reachable_time': 21056, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308265, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000050.scope: Deactivated successfully.
Nov 29 03:11:28 np0005539550 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000050.scope: Consumed 16.292s CPU time.
Nov 29 03:11:28 np0005539550 systemd-machined[216673]: Machine qemu-39-instance-00000050 terminated.
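(machine-qemu\x2d39\x2dinstance\x2d00000050.scope is the systemd scope unit that held the instance's qemu process; its deactivation plus the machined "terminated" line confirm the guest is fully gone from the host's point of view. A small illustrative check from Python via the machinectl CLI, which ships with systemd; the subprocess wrapper is just for the sketch:)

    import subprocess

    # Lists machines still registered with systemd-machined; after the
    # lines above, qemu-39-instance-00000050 should no longer appear.
    out = subprocess.run(['machinectl', 'list', '--no-pager'],
                         capture_output=True, text=True)
    print(out.stdout)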
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.924 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cebb8b89-3b01-4041-8898-b9ef0cb7fbdb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688022, 'tstamp': 688022}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308266, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapddd8b166-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688025, 'tstamp': 688025}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308266, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.925 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.926 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.935 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.937 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddd8b166-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.937 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.937 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapddd8b166-70, col_values=(('external_ids', {'iface-id': 'a9e57abf-e3e4-455b-b4c5-0cda127bd5c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.937 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.938 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 3145abd1-9798-4474-af6e-da5917de6dab in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound from our chassis#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.940 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.940 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b79c8e-6ecf-42b1-84d9-d2dbd6b6b50d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:28.941 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 namespace which is not needed anymore#033[00m
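(With the last VIF in datapath ddd8b166-79ec-408d-b52c-581ad9dd6cb8 unbound, the agent concludes no port still needs metadata service there and removes the ovnmeta- namespace. A minimal sketch of the namespace half of that cleanup using pyroute2 — the agent also unplugs the veth and the OVS port, which this does not show; run as root, and only against a namespace you actually want gone:)

    from pyroute2 import netns

    target = 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8'
    if target in netns.listnetns():
        # Same end state as "Cleaning up ovnmeta-... namespace" above.
        netns.remove(target)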
Nov 29 03:11:28 np0005539550 NetworkManager[49039]: <info>  [1764403888.9444] manager: (tapc4c22a55-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Nov 29 03:11:28 np0005539550 NetworkManager[49039]: <info>  [1764403888.9575] manager: (tap3145abd1-97): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Nov 29 03:11:28 np0005539550 NetworkManager[49039]: <info>  [1764403888.9714] manager: (tapda689592-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.994 257641 INFO nova.virt.libvirt.driver [-] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Instance destroyed successfully.#033[00m
Nov 29 03:11:28 np0005539550 nova_compute[257631]: 2025-11-29 08:11:28.995 257641 DEBUG nova.objects.instance [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lazy-loading 'resources' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.025 257641 DEBUG nova.virt.libvirt.vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.026 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.026 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:34:8d,bridge_name='br-int',has_traffic_filtering=True,id=c4c22a55-f99c-429f-8ef0-3fa113b99b13,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4c22a55-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.026 257641 DEBUG os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:34:8d,bridge_name='br-int',has_traffic_filtering=True,id=c4c22a55-f99c-429f-8ef0-3fa113b99b13,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4c22a55-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.028 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.028 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4c22a55-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.030 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.032 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.038 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.040 257641 INFO os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:34:8d,bridge_name='br-int',has_traffic_filtering=True,id=c4c22a55-f99c-429f-8ef0-3fa113b99b13,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4c22a55-f9')#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.041 257641 DEBUG nova.virt.libvirt.vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.041 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.042 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.043 257641 DEBUG os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.044 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.044 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e89475-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.044 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.047 257641 INFO os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d')#033[00m
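(The unplug path in these lines is the public os-vif API: nova converts its VIF dict into an os_vif VIFOpenVSwitch object (nova_to_osvif_vif) and hands it to os_vif.unplug(), which removes the tap from br-int. A minimal sketch of that call, reusing the field values printed in the log; an illustration of the library call, not nova's actual code path, and it needs os-vif installed plus privileges to touch OVS:)

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    my_vif = vif.VIFOpenVSwitch(
        id='c4c22a55-f99c-429f-8ef0-3fa113b99b13',
        address='fa:16:3e:f7:34:8d',
        vif_name='tapc4c22a55-f9',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='ddd8b166-79ec-408d-b52c-581ad9dd6cb8'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='c4c22a55-f99c-429f-8ef0-3fa113b99b13'),
    )
    info = instance_info.InstanceInfo(
        uuid='8cea7db0-f5d5-4e6a-bdad-72b5c7dae496',
        name='instance-00000050')

    # Same operation as the "Unplugging vif VIFOpenVSwitch(...)" line.
    os_vif.unplug(my_vif, info)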
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.048 257641 DEBUG nova.virt.libvirt.vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.049 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.049 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:e2:8d,bridge_name='br-int',has_traffic_filtering=True,id=3145abd1-9798-4474-af6e-da5917de6dab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3145abd1-97') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.050 257641 DEBUG os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:e2:8d,bridge_name='br-int',has_traffic_filtering=True,id=3145abd1-9798-4474-af6e-da5917de6dab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3145abd1-97') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.051 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.051 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3145abd1-97, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.053 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.054 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.057 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.059 257641 INFO os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:e2:8d,bridge_name='br-int',has_traffic_filtering=True,id=3145abd1-9798-4474-af6e-da5917de6dab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3145abd1-97')#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.060 257641 DEBUG nova.virt.libvirt.vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:10:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.060 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converting VIF {"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.061 257641 DEBUG nova.network.os_vif_util [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:14:ca,bridge_name='br-int',has_traffic_filtering=True,id=da689592-0a11-44fa-ba74-ffc145f551ab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda689592-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.061 257641 DEBUG os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:14:ca,bridge_name='br-int',has_traffic_filtering=True,id=da689592-0a11-44fa-ba74-ffc145f551ab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda689592-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.063 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.064 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda689592-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.068 257641 DEBUG nova.compute.manager [req-560a6ba0-1e68-46f0-80fc-6ca53c0af452 req-69b663d0-ad62-4cbc-96e9-bf620c9b7433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.068 257641 DEBUG oslo_concurrency.lockutils [req-560a6ba0-1e68-46f0-80fc-6ca53c0af452 req-69b663d0-ad62-4cbc-96e9-bf620c9b7433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.068 257641 DEBUG oslo_concurrency.lockutils [req-560a6ba0-1e68-46f0-80fc-6ca53c0af452 req-69b663d0-ad62-4cbc-96e9-bf620c9b7433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.069 257641 DEBUG oslo_concurrency.lockutils [req-560a6ba0-1e68-46f0-80fc-6ca53c0af452 req-69b663d0-ad62-4cbc-96e9-bf620c9b7433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.069 257641 DEBUG nova.compute.manager [req-560a6ba0-1e68-46f0-80fc-6ca53c0af452 req-69b663d0-ad62-4cbc-96e9-bf620c9b7433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.069 257641 WARNING nova.compute.manager [req-560a6ba0-1e68-46f0-80fc-6ca53c0af452 req-69b663d0-ad62-4cbc-96e9-bf620c9b7433 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-d1e89475-0d6a-4f19-8d70-45b91897d9c0 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.070 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.072 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.074 257641 INFO os_vif [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:14:ca,bridge_name='br-int',has_traffic_filtering=True,id=da689592-0a11-44fa-ba74-ffc145f551ab,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapda689592-0a')#033[00m
Nov 29 03:11:29 np0005539550 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[307600]: [NOTICE]   (307604) : haproxy version is 2.8.14-c23fe91
Nov 29 03:11:29 np0005539550 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[307600]: [NOTICE]   (307604) : path to executable is /usr/sbin/haproxy
Nov 29 03:11:29 np0005539550 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[307600]: [WARNING]  (307604) : Exiting Master process...
Nov 29 03:11:29 np0005539550 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[307600]: [ALERT]    (307604) : Current worker (307606) exited with code 143 (Terminated)
Nov 29 03:11:29 np0005539550 neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8[307600]: [WARNING]  (307604) : All workers exited. Exiting... (0)
Nov 29 03:11:29 np0005539550 systemd[1]: libpod-0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2.scope: Deactivated successfully.
Nov 29 03:11:29 np0005539550 podman[308331]: 2025-11-29 08:11:29.115208979 +0000 UTC m=+0.049536114 container died 0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:11:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2-userdata-shm.mount: Deactivated successfully.
Nov 29 03:11:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6429e868622f2d619f6a616cd568b27825499c8f48a771215f47eb03c7c2f7e1-merged.mount: Deactivated successfully.
Nov 29 03:11:29 np0005539550 podman[308331]: 2025-11-29 08:11:29.164071515 +0000 UTC m=+0.098398640 container cleanup 0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:11:29 np0005539550 systemd[1]: libpod-conmon-0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2.scope: Deactivated successfully.
Nov 29 03:11:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 200 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Nov 29 03:11:29 np0005539550 podman[308380]: 2025-11-29 08:11:29.228015349 +0000 UTC m=+0.041426930 container remove 0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.234 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e22ad500-0e41-47b1-9ad2-2f7b54af33df]: (4, ('Sat Nov 29 08:11:29 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 (0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2)\n0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2\nSat Nov 29 08:11:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 (0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2)\n0e035443fc6685c9261616a27f3f4185dc6ff7df449184ce966c97fdb34fd3a2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.236 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[414f1a54-f6cd-467f-b610-4592471dd44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.237 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddd8b166-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:11:29 np0005539550 kernel: tapddd8b166-70: left promiscuous mode
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.277 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[072f64fb-8872-4b22-97b2-92b28abcc0d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.295 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fc5a007b-1ebc-4e53-8467-fc1481954167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.296 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4973549c-3c40-4005-820a-6bf15e005e94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.311 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[285d7d85-c60a-4db9-b0d7-dc6135954d07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688006, 'reachable_time': 41012, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308398, 'error': None, 'target': 'ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:29 np0005539550 systemd[1]: run-netns-ovnmeta\x2dddd8b166\x2d79ec\x2d408d\x2db52c\x2d581ad9dd6cb8.mount: Deactivated successfully.
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.314 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ddd8b166-79ec-408d-b52c-581ad9dd6cb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:11:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:29.314 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f4f229-5dcc-4d6e-bf23-7a1672b43a77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.337 257641 DEBUG nova.compute.manager [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-deleted-d1e89475-0d6a-4f19-8d70-45b91897d9c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.337 257641 INFO nova.compute.manager [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Neutron deleted interface d1e89475-0d6a-4f19-8d70-45b91897d9c0; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.337 257641 DEBUG nova.network.neutron [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:29.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.552 257641 INFO nova.virt.libvirt.driver [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Deleting instance files /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_del#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.553 257641 INFO nova.virt.libvirt.driver [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Deletion of /var/lib/nova/instances/8cea7db0-f5d5-4e6a-bdad-72b5c7dae496_del complete#033[00m
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f7af963-102a-4f91-b628-c7fc30812a32 does not exist
Nov 29 03:11:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev df4752d6-d47c-4b9e-b41d-908e3f55d00f does not exist
Nov 29 03:11:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0530326d-352d-4465-8e52-bef6312bc89e does not exist
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.693 257641 DEBUG nova.compute.manager [req-1d3d75a0-7fb3-47c9-87ff-18362c2635e6 req-713a903a-fcb4-480e-9e3c-5c877dc042e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-unplugged-3145abd1-9798-4474-af6e-da5917de6dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.694 257641 DEBUG oslo_concurrency.lockutils [req-1d3d75a0-7fb3-47c9-87ff-18362c2635e6 req-713a903a-fcb4-480e-9e3c-5c877dc042e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.695 257641 DEBUG oslo_concurrency.lockutils [req-1d3d75a0-7fb3-47c9-87ff-18362c2635e6 req-713a903a-fcb4-480e-9e3c-5c877dc042e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.695 257641 DEBUG oslo_concurrency.lockutils [req-1d3d75a0-7fb3-47c9-87ff-18362c2635e6 req-713a903a-fcb4-480e-9e3c-5c877dc042e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.695 257641 DEBUG nova.compute.manager [req-1d3d75a0-7fb3-47c9-87ff-18362c2635e6 req-713a903a-fcb4-480e-9e3c-5c877dc042e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-unplugged-3145abd1-9798-4474-af6e-da5917de6dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.695 257641 DEBUG nova.compute.manager [req-1d3d75a0-7fb3-47c9-87ff-18362c2635e6 req-713a903a-fcb4-480e-9e3c-5c877dc042e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-unplugged-3145abd1-9798-4474-af6e-da5917de6dab for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.700 257641 DEBUG nova.objects.instance [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'system_metadata' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.777 257641 DEBUG nova.objects.instance [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lazy-loading 'flavor' on Instance uuid 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.802 257641 DEBUG nova.virt.libvirt.vif [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.803 257641 DEBUG nova.network.os_vif_util [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.804 257641 DEBUG nova.network.os_vif_util [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.808 257641 INFO nova.compute.manager [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.809 257641 DEBUG oslo.service.loopingcall [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.809 257641 DEBUG nova.compute.manager [-] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.810 257641 DEBUG nova.network.neutron [-] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.812 257641 DEBUG nova.virt.libvirt.vif [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:10:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-62373585',display_name='tempest-AttachInterfacesTestJSON-server-62373585',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-62373585',id=80,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLL9o28NlfoVz7F2OwVSn4gfwLkCfAaYryFLaoJxgzhxKW78ygsINwyh2kLQcjn7Xwil5BPn2o8sd5kU9nrVeHXEwOABSKE3JfXOjbRalZiiSINKvDa5Zog4fvRzORXCQ==',key_name='tempest-keypair-2039507493',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:10:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ff7c805d4242453aa2148a247956391d',ramdisk_id='',reservation_id='r-lm4ktprm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-372493183',owner_user_name='tempest-AttachInterfacesTestJSON-372493183-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:11:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b95b3e841be1420c99ee0a04dd0840f1',uuid=8cea7db0-f5d5-4e6a-bdad-72b5c7dae496,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.813 257641 DEBUG nova.network.os_vif_util [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converting VIF {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.813 257641 DEBUG nova.network.os_vif_util [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.814 257641 DEBUG os_vif [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.815 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.815 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e89475-0d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.815 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.818 257641 INFO os_vif [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:56:cd,bridge_name='br-int',has_traffic_filtering=True,id=d1e89475-0d6a-4f19-8d70-45b91897d9c0,network=Network(ddd8b166-79ec-408d-b52c-581ad9dd6cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1e89475-0d')
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.818 257641 DEBUG nova.compute.manager [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Detach interface failed, port_id=d1e89475-0d6a-4f19-8d70-45b91897d9c0, reason: Instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.818 257641 DEBUG nova.compute.manager [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-deleted-da689592-0a11-44fa-ba74-ffc145f551ab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.819 257641 INFO nova.compute.manager [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Neutron deleted interface da689592-0a11-44fa-ba74-ffc145f551ab; detaching it from the instance and deleting it from the info cache
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.819 257641 DEBUG nova.network.neutron [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:11:29 np0005539550 nova_compute[257631]: 2025-11-29 08:11:29.846 257641 DEBUG nova.compute.manager [req-e40b6bb5-9854-4c24-ba2b-4644055a6973 req-425635d4-bed3-4c5d-8e0e-28523558ba53 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Detach interface failed, port_id=da689592-0a11-44fa-ba74-ffc145f551ab, reason: Instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 29 03:11:30 np0005539550 podman[308551]: 2025-11-29 08:11:30.163410006 +0000 UTC m=+0.044291812 container create 1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:30 np0005539550 systemd[1]: Started libpod-conmon-1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671.scope.
Nov 29 03:11:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:11:30 np0005539550 podman[308551]: 2025-11-29 08:11:30.144172703 +0000 UTC m=+0.025054509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:30 np0005539550 podman[308551]: 2025-11-29 08:11:30.252820099 +0000 UTC m=+0.133701935 container init 1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_blackwell, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:30 np0005539550 podman[308551]: 2025-11-29 08:11:30.262709807 +0000 UTC m=+0.143591613 container start 1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:11:30 np0005539550 podman[308551]: 2025-11-29 08:11:30.266371939 +0000 UTC m=+0.147253765 container attach 1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:11:30 np0005539550 quizzical_blackwell[308567]: 167 167
Nov 29 03:11:30 np0005539550 systemd[1]: libpod-1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671.scope: Deactivated successfully.
Nov 29 03:11:30 np0005539550 conmon[308567]: conmon 1d82abcf8fccc05f2682 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671.scope/container/memory.events
Nov 29 03:11:30 np0005539550 podman[308572]: 2025-11-29 08:11:30.309767928 +0000 UTC m=+0.024000693 container died 1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:11:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-01b9995a2b762b60f51b410a6b16a70ac80f87c98fe56d5d8f61452f37f69eef-merged.mount: Deactivated successfully.
Nov 29 03:11:30 np0005539550 podman[308572]: 2025-11-29 08:11:30.343234438 +0000 UTC m=+0.057467183 container remove 1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:30 np0005539550 systemd[1]: libpod-conmon-1d82abcf8fccc05f268241a55075d8212b14860c4bd03ce1d2e23cc9f95c4671.scope: Deactivated successfully.
Nov 29 03:11:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:30.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:30 np0005539550 podman[308594]: 2025-11-29 08:11:30.505533278 +0000 UTC m=+0.039448889 container create 53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:11:30 np0005539550 systemd[1]: Started libpod-conmon-53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34.scope.
Nov 29 03:11:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:11:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52daeb2747f98238c92b4d3a8478a201d9fe4e89e933c5393e509a6dff865745/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52daeb2747f98238c92b4d3a8478a201d9fe4e89e933c5393e509a6dff865745/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52daeb2747f98238c92b4d3a8478a201d9fe4e89e933c5393e509a6dff865745/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52daeb2747f98238c92b4d3a8478a201d9fe4e89e933c5393e509a6dff865745/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52daeb2747f98238c92b4d3a8478a201d9fe4e89e933c5393e509a6dff865745/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:30 np0005539550 podman[308594]: 2025-11-29 08:11:30.489937937 +0000 UTC m=+0.023853568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:30 np0005539550 podman[308594]: 2025-11-29 08:11:30.590257704 +0000 UTC m=+0.124173345 container init 53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:30 np0005539550 podman[308594]: 2025-11-29 08:11:30.599170577 +0000 UTC m=+0.133086188 container start 53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:11:30 np0005539550 podman[308594]: 2025-11-29 08:11:30.602591403 +0000 UTC m=+0.136507014 container attach 53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:11:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 168 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 29 03:11:31 np0005539550 musing_ishizaka[308610]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:11:31 np0005539550 musing_ishizaka[308610]: --> relative data size: 1.0
Nov 29 03:11:31 np0005539550 musing_ishizaka[308610]: --> All data devices are unavailable
Nov 29 03:11:31 np0005539550 systemd[1]: libpod-53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34.scope: Deactivated successfully.
Nov 29 03:11:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:31 np0005539550 podman[308594]: 2025-11-29 08:11:31.451335767 +0000 UTC m=+0.985251378 container died 53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:11:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-52daeb2747f98238c92b4d3a8478a201d9fe4e89e933c5393e509a6dff865745-merged.mount: Deactivated successfully.
Nov 29 03:11:31 np0005539550 nova_compute[257631]: 2025-11-29 08:11:31.619 257641 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port da689592-0a11-44fa-ba74-ffc145f551ab could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Nov 29 03:11:31 np0005539550 nova_compute[257631]: 2025-11-29 08:11:31.621 257641 DEBUG nova.network.neutron [-] Unable to show port da689592-0a11-44fa-ba74-ffc145f551ab as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666
Nov 29 03:11:31 np0005539550 podman[308594]: 2025-11-29 08:11:31.641966659 +0000 UTC m=+1.175882310 container remove 53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ishizaka, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:11:31 np0005539550 systemd[1]: libpod-conmon-53f0d7d78d08bd64b6dfa21f7c9bfa7df340b67d31822a51e4bc3c19068d0a34.scope: Deactivated successfully.
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.734645) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891734935, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1910, "num_deletes": 256, "total_data_size": 3231277, "memory_usage": 3273808, "flush_reason": "Manual Compaction"}
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891763944, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3181158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36278, "largest_seqno": 38187, "table_properties": {"data_size": 3172556, "index_size": 5224, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18594, "raw_average_key_size": 20, "raw_value_size": 3154991, "raw_average_value_size": 3455, "num_data_blocks": 228, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403711, "oldest_key_time": 1764403711, "file_creation_time": 1764403891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 29349 microseconds, and 11039 cpu microseconds.
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.764059) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3181158 bytes OK
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.764090) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.765879) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.765910) EVENT_LOG_v1 {"time_micros": 1764403891765903, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.765930) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3223327, prev total WAL file size 3254095, number of live WAL files 2.
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.767624) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323537' seq:0, type:0; will stop at (end)
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3106KB)], [77(10MB)]
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891767708, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 13867575, "oldest_snapshot_seqno": -1}
Nov 29 03:11:31 np0005539550 nova_compute[257631]: 2025-11-29 08:11:31.771 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7299 keys, 13714699 bytes, temperature: kUnknown
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891852362, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13714699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13662422, "index_size": 32901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 187828, "raw_average_key_size": 25, "raw_value_size": 13528094, "raw_average_value_size": 1853, "num_data_blocks": 1317, "num_entries": 7299, "num_filter_entries": 7299, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.852700) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13714699 bytes
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.854072) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.6 rd, 161.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.2 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.7) write-amplify(4.3) OK, records in: 7830, records dropped: 531 output_compression: NoCompression
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.854087) EVENT_LOG_v1 {"time_micros": 1764403891854080, "job": 44, "event": "compaction_finished", "compaction_time_micros": 84744, "compaction_time_cpu_micros": 31303, "output_level": 6, "num_output_files": 1, "total_output_size": 13714699, "num_input_records": 7830, "num_output_records": 7299, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891854689, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403891857046, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.767449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.857080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.857084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.857086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.857087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:31.857090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:32 np0005539550 podman[308775]: 2025-11-29 08:11:32.24866735 +0000 UTC m=+0.021368137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:32.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.449 257641 DEBUG nova.compute.manager [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-unplugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.449 257641 DEBUG oslo_concurrency.lockutils [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.450 257641 DEBUG oslo_concurrency.lockutils [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.450 257641 DEBUG oslo_concurrency.lockutils [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.450 257641 DEBUG nova.compute.manager [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-unplugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.450 257641 DEBUG nova.compute.manager [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-unplugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.450 257641 DEBUG nova.compute.manager [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.450 257641 DEBUG oslo_concurrency.lockutils [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.451 257641 DEBUG oslo_concurrency.lockutils [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.451 257641 DEBUG oslo_concurrency.lockutils [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.451 257641 DEBUG nova.compute.manager [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.451 257641 WARNING nova.compute.manager [req-dfe62785-b8c8-441b-a507-113f6b42a4e0 req-54a8814a-6690-450d-8ecc-e7bf29994b48 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-c4c22a55-f99c-429f-8ef0-3fa113b99b13 for instance with vm_state active and task_state deleting.
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.453 257641 DEBUG nova.compute.manager [req-61b325e6-3b6e-44e1-9fb8-3798689bd862 req-5272e73b-61fe-4e5d-9f8c-ef266178040c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.453 257641 DEBUG oslo_concurrency.lockutils [req-61b325e6-3b6e-44e1-9fb8-3798689bd862 req-5272e73b-61fe-4e5d-9f8c-ef266178040c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.453 257641 DEBUG oslo_concurrency.lockutils [req-61b325e6-3b6e-44e1-9fb8-3798689bd862 req-5272e73b-61fe-4e5d-9f8c-ef266178040c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.453 257641 DEBUG oslo_concurrency.lockutils [req-61b325e6-3b6e-44e1-9fb8-3798689bd862 req-5272e73b-61fe-4e5d-9f8c-ef266178040c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.454 257641 DEBUG nova.compute.manager [req-61b325e6-3b6e-44e1-9fb8-3798689bd862 req-5272e73b-61fe-4e5d-9f8c-ef266178040c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] No waiting events found dispatching network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:11:32 np0005539550 nova_compute[257631]: 2025-11-29 08:11:32.454 257641 WARNING nova.compute.manager [req-61b325e6-3b6e-44e1-9fb8-3798689bd862 req-5272e73b-61fe-4e5d-9f8c-ef266178040c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received unexpected event network-vif-plugged-3145abd1-9798-4474-af6e-da5917de6dab for instance with vm_state active and task_state deleting.
Nov 29 03:11:32 np0005539550 podman[308775]: 2025-11-29 08:11:32.548286007 +0000 UTC m=+0.320986774 container create 6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:11:32 np0005539550 systemd[1]: Started libpod-conmon-6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423.scope.
Nov 29 03:11:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:11:32 np0005539550 podman[308775]: 2025-11-29 08:11:32.966621982 +0000 UTC m=+0.739322779 container init 6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:11:32 np0005539550 podman[308775]: 2025-11-29 08:11:32.974711905 +0000 UTC m=+0.747412662 container start 6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:11:32 np0005539550 funny_shirley[308791]: 167 167
Nov 29 03:11:32 np0005539550 systemd[1]: libpod-6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423.scope: Deactivated successfully.
Nov 29 03:11:33 np0005539550 podman[308775]: 2025-11-29 08:11:33.043982573 +0000 UTC m=+0.816683360 container attach 6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:33 np0005539550 podman[308775]: 2025-11-29 08:11:33.04506378 +0000 UTC m=+0.817764577 container died 6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-691e5c066051193a17d9e72445f6be03e0e94b2a9f3d78d77a746c9d22c03d82-merged.mount: Deactivated successfully.
Nov 29 03:11:33 np0005539550 podman[308775]: 2025-11-29 08:11:33.125878338 +0000 UTC m=+0.898579105 container remove 6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:11:33 np0005539550 systemd[1]: libpod-conmon-6e60996909344a3ba1ead10dce06f9ba07c8179322163b8b4319392a57ea6423.scope: Deactivated successfully.
Nov 29 03:11:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 168 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Nov 29 03:11:33 np0005539550 podman[308815]: 2025-11-29 08:11:33.285588195 +0000 UTC m=+0.039897552 container create 2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nash, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:11:33 np0005539550 systemd[1]: Started libpod-conmon-2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35.scope.
Nov 29 03:11:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:11:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c14fc9f966957526875c1a5726e945b0a8977196c12a21ce67caea253d507b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c14fc9f966957526875c1a5726e945b0a8977196c12a21ce67caea253d507b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c14fc9f966957526875c1a5726e945b0a8977196c12a21ce67caea253d507b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71c14fc9f966957526875c1a5726e945b0a8977196c12a21ce67caea253d507b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:33 np0005539550 podman[308815]: 2025-11-29 08:11:33.349340934 +0000 UTC m=+0.103650311 container init 2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:11:33 np0005539550 podman[308815]: 2025-11-29 08:11:33.355389576 +0000 UTC m=+0.109698933 container start 2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:33 np0005539550 podman[308815]: 2025-11-29 08:11:33.358689649 +0000 UTC m=+0.112999006 container attach 2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:11:33 np0005539550 podman[308815]: 2025-11-29 08:11:33.270091146 +0000 UTC m=+0.024400503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:34 np0005539550 nova_compute[257631]: 2025-11-29 08:11:34.066 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:34 np0005539550 goofy_nash[308832]: {
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:    "0": [
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:        {
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "devices": [
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "/dev/loop3"
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            ],
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "lv_name": "ceph_lv0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "lv_size": "7511998464",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "name": "ceph_lv0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "tags": {
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.cluster_name": "ceph",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.crush_device_class": "",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.encrypted": "0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.osd_id": "0",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.type": "block",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:                "ceph.vdo": "0"
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            },
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "type": "block",
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:            "vg_name": "ceph_vg0"
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:        }
Nov 29 03:11:34 np0005539550 goofy_nash[308832]:    ]
Nov 29 03:11:34 np0005539550 goofy_nash[308832]: }
Nov 29 03:11:34 np0005539550 systemd[1]: libpod-2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35.scope: Deactivated successfully.
Nov 29 03:11:34 np0005539550 podman[308815]: 2025-11-29 08:11:34.241774213 +0000 UTC m=+0.996083580 container died 2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nash, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:11:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-71c14fc9f966957526875c1a5726e945b0a8977196c12a21ce67caea253d507b-merged.mount: Deactivated successfully.
Nov 29 03:11:34 np0005539550 podman[308815]: 2025-11-29 08:11:34.298924467 +0000 UTC m=+1.053233824 container remove 2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nash, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:11:34 np0005539550 systemd[1]: libpod-conmon-2200cf75f184e3159487862b5539d0729f75083b70361feef0eccbafb14c8c35.scope: Deactivated successfully.
Nov 29 03:11:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:34 np0005539550 podman[309041]: 2025-11-29 08:11:34.919358342 +0000 UTC m=+0.042866086 container create 969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:34 np0005539550 systemd[1]: Started libpod-conmon-969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85.scope.
Nov 29 03:11:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:11:34 np0005539550 podman[309041]: 2025-11-29 08:11:34.899905134 +0000 UTC m=+0.023412898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:34 np0005539550 podman[309041]: 2025-11-29 08:11:34.996047306 +0000 UTC m=+0.119555130 container init 969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:11:35 np0005539550 podman[309041]: 2025-11-29 08:11:35.002692913 +0000 UTC m=+0.126200657 container start 969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:11:35 np0005539550 podman[309041]: 2025-11-29 08:11:35.006030316 +0000 UTC m=+0.129538090 container attach 969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:11:35 np0005539550 thirsty_tu[309056]: 167 167
Nov 29 03:11:35 np0005539550 systemd[1]: libpod-969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85.scope: Deactivated successfully.
Nov 29 03:11:35 np0005539550 podman[309041]: 2025-11-29 08:11:35.00776669 +0000 UTC m=+0.131274434 container died 969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tu, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:11:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1d045c3dc57b1007e47f85032b4da81dd7b3cb0053696eec930536056a8d0bb1-merged.mount: Deactivated successfully.
Nov 29 03:11:35 np0005539550 podman[309041]: 2025-11-29 08:11:35.042464961 +0000 UTC m=+0.165972705 container remove 969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:35 np0005539550 systemd[1]: libpod-conmon-969efcd8b30d0cf55e4ee6b926dd2716294468f25bcc768562af5ed1828dce85.scope: Deactivated successfully.
Nov 29 03:11:35 np0005539550 podman[309080]: 2025-11-29 08:11:35.189306834 +0000 UTC m=+0.038953788 container create 952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:11:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 121 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 2.2 MiB/s wr, 95 op/s
Nov 29 03:11:35 np0005539550 systemd[1]: Started libpod-conmon-952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c.scope.
Nov 29 03:11:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:11:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d48da8f1d9ea76d8b85d685283370f04c24732d38adf4d434fec117ef74892b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d48da8f1d9ea76d8b85d685283370f04c24732d38adf4d434fec117ef74892b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d48da8f1d9ea76d8b85d685283370f04c24732d38adf4d434fec117ef74892b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d48da8f1d9ea76d8b85d685283370f04c24732d38adf4d434fec117ef74892b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:11:35 np0005539550 podman[309080]: 2025-11-29 08:11:35.172306508 +0000 UTC m=+0.021953482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:11:35 np0005539550 podman[309080]: 2025-11-29 08:11:35.274387899 +0000 UTC m=+0.124034873 container init 952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_turing, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:11:35 np0005539550 podman[309080]: 2025-11-29 08:11:35.281689562 +0000 UTC m=+0.131336516 container start 952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:11:35 np0005539550 podman[309080]: 2025-11-29 08:11:35.285295193 +0000 UTC m=+0.134942147 container attach 952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_turing, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:11:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:35.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:36 np0005539550 eager_turing[309097]: {
Nov 29 03:11:36 np0005539550 eager_turing[309097]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:11:36 np0005539550 eager_turing[309097]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:11:36 np0005539550 eager_turing[309097]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:11:36 np0005539550 eager_turing[309097]:        "osd_id": 0,
Nov 29 03:11:36 np0005539550 eager_turing[309097]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:11:36 np0005539550 eager_turing[309097]:        "type": "bluestore"
Nov 29 03:11:36 np0005539550 eager_turing[309097]:    }
Nov 29 03:11:36 np0005539550 eager_turing[309097]: }
Nov 29 03:11:36 np0005539550 systemd[1]: libpod-952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c.scope: Deactivated successfully.
Nov 29 03:11:36 np0005539550 podman[309080]: 2025-11-29 08:11:36.143265028 +0000 UTC m=+0.992911982 container died 952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_turing, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:11:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1d48da8f1d9ea76d8b85d685283370f04c24732d38adf4d434fec117ef74892b-merged.mount: Deactivated successfully.
Nov 29 03:11:36 np0005539550 podman[309080]: 2025-11-29 08:11:36.196750139 +0000 UTC m=+1.046397093 container remove 952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:11:36 np0005539550 systemd[1]: libpod-conmon-952623ae7f905752992407ed3c900861e7687fe1216288329a493572766fba6c.scope: Deactivated successfully.
Nov 29 03:11:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:11:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:36.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:11:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 25f6e1a3-445a-48c6-b950-c2fb01a3c187 does not exist
Nov 29 03:11:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fde53a4f-4410-4943-b8c5-b92571e7309f does not exist
Nov 29 03:11:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 87957b8e-661a-4bbd-87e3-52ae31c8380f does not exist
Nov 29 03:11:36 np0005539550 nova_compute[257631]: 2025-11-29 08:11:36.773 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:36 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:36 np0005539550 nova_compute[257631]: 2025-11-29 08:11:36.877 257641 DEBUG nova.compute.manager [req-9c904e31-bc13-4ffa-8c77-ec1bfd63980e req-b3858907-bc71-475a-b321-76320d8db3f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-deleted-c4c22a55-f99c-429f-8ef0-3fa113b99b13 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:36 np0005539550 nova_compute[257631]: 2025-11-29 08:11:36.877 257641 INFO nova.compute.manager [req-9c904e31-bc13-4ffa-8c77-ec1bfd63980e req-b3858907-bc71-475a-b321-76320d8db3f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Neutron deleted interface c4c22a55-f99c-429f-8ef0-3fa113b99b13; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:11:36 np0005539550 nova_compute[257631]: 2025-11-29 08:11:36.877 257641 DEBUG nova.network.neutron [req-9c904e31-bc13-4ffa-8c77-ec1bfd63980e req-b3858907-bc71-475a-b321-76320d8db3f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:37 np0005539550 nova_compute[257631]: 2025-11-29 08:11:37.128 257641 DEBUG nova.compute.manager [req-9c904e31-bc13-4ffa-8c77-ec1bfd63980e req-b3858907-bc71-475a-b321-76320d8db3f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Detach interface failed, port_id=c4c22a55-f99c-429f-8ef0-3fa113b99b13, reason: Instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:11:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 121 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 308 KiB/s rd, 1.2 MiB/s wr, 80 op/s
Nov 29 03:11:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:37.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:37 np0005539550 nova_compute[257631]: 2025-11-29 08:11:37.569 257641 DEBUG nova.network.neutron [-] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:37 np0005539550 nova_compute[257631]: 2025-11-29 08:11:37.646 257641 INFO nova.compute.manager [-] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Took 7.84 seconds to deallocate network for instance.#033[00m
Nov 29 03:11:37 np0005539550 nova_compute[257631]: 2025-11-29 08:11:37.836 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:37 np0005539550 nova_compute[257631]: 2025-11-29 08:11:37.837 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:11:37 np0005539550 nova_compute[257631]: 2025-11-29 08:11:37.985 257641 DEBUG oslo_concurrency.processutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:38.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415872374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.407 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [{"id": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "address": "fa:16:3e:f7:34:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4c22a55-f9", "ovs_interfaceid": "c4c22a55-f99c-429f-8ef0-3fa113b99b13", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "address": "fa:16:3e:0b:56:cd", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1e89475-0d", "ovs_interfaceid": "d1e89475-0d6a-4f19-8d70-45b91897d9c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3145abd1-9798-4474-af6e-da5917de6dab", "address": "fa:16:3e:5f:e2:8d", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3145abd1-97", "ovs_interfaceid": "3145abd1-9798-4474-af6e-da5917de6dab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "da689592-0a11-44fa-ba74-ffc145f551ab", "address": "fa:16:3e:7d:14:ca", "network": {"id": "ddd8b166-79ec-408d-b52c-581ad9dd6cb8", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1175341869-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ff7c805d4242453aa2148a247956391d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapda689592-0a", "ovs_interfaceid": "da689592-0a11-44fa-ba74-ffc145f551ab", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.419 257641 DEBUG oslo_concurrency.processutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.425 257641 DEBUG nova.compute.provider_tree [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.490 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.491 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.491 257641 DEBUG oslo_concurrency.lockutils [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Acquired lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.491 257641 DEBUG nova.network.neutron [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.493 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.494 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.494 257641 DEBUG nova.scheduler.client.report [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.497 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.498 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.498 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.564 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.627 257641 INFO nova.scheduler.client.report [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Deleted allocations for instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496#033[00m
Nov 29 03:11:38 np0005539550 nova_compute[257631]: 2025-11-29 08:11:38.792 257641 DEBUG oslo_concurrency.lockutils [None req-59d1d99e-daca-48bb-8bb0-f3076cb8ca59 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.069 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 121 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 105 KiB/s wr, 55 op/s
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.292 257641 DEBUG nova.compute.manager [req-bf2287d5-4e84-4f06-9eec-bedc81ccd1d0 req-d763e108-7c46-48b4-8767-f70d17d3b9da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Received event network-vif-deleted-3145abd1-9798-4474-af6e-da5917de6dab external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.292 257641 INFO nova.compute.manager [req-bf2287d5-4e84-4f06-9eec-bedc81ccd1d0 req-d763e108-7c46-48b4-8767-f70d17d3b9da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Neutron deleted interface 3145abd1-9798-4474-af6e-da5917de6dab; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.292 257641 DEBUG nova.network.neutron [req-bf2287d5-4e84-4f06-9eec-bedc81ccd1d0 req-d763e108-7c46-48b4-8767-f70d17d3b9da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.294 257641 DEBUG nova.compute.manager [req-bf2287d5-4e84-4f06-9eec-bedc81ccd1d0 req-d763e108-7c46-48b4-8767-f70d17d3b9da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Detach interface failed, port_id=3145abd1-9798-4474-af6e-da5917de6dab, reason: Instance 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:11:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:39.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.748 257641 INFO nova.network.neutron [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Port c4c22a55-f99c-429f-8ef0-3fa113b99b13 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.748 257641 INFO nova.network.neutron [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Port d1e89475-0d6a-4f19-8d70-45b91897d9c0 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.748 257641 INFO nova.network.neutron [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Port 3145abd1-9798-4474-af6e-da5917de6dab from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.749 257641 INFO nova.network.neutron [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Port da689592-0a11-44fa-ba74-ffc145f551ab from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.749 257641 DEBUG nova.network.neutron [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.823 257641 DEBUG oslo_concurrency.lockutils [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Releasing lock "refresh_cache-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:11:39 np0005539550 nova_compute[257631]: 2025-11-29 08:11:39.857 257641 DEBUG oslo_concurrency.lockutils [None req-aa98dd1a-83e6-4a0d-8a14-0e75fdec66f1 b95b3e841be1420c99ee0a04dd0840f1 ff7c805d4242453aa2148a247956391d - - default default] Lock "interface-8cea7db0-f5d5-4e6a-bdad-72b5c7dae496-d1e89475-0d6a-4f19-8d70-45b91897d9c0" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 13.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:40.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 121 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 27 KiB/s wr, 30 op/s
Nov 29 03:11:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:41.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.488 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.488 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.555 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.698 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.699 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:11:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.720 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.720 257641 INFO nova.compute.claims [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:11:41 np0005539550 nova_compute[257631]: 2025-11-29 08:11:41.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.110 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:11:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:42.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:11:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/248573193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.536 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.542 257641 DEBUG nova.compute.provider_tree [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.648 257641 DEBUG nova.scheduler.client.report [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.712 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.712 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.897 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.898 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:11:42 np0005539550 nova_compute[257631]: 2025-11-29 08:11:42.945 257641 INFO nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.000 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:11:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 121 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 26 KiB/s wr, 25 op/s
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.404 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.406 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.406 257641 INFO nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Creating image(s)#033[00m
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.434 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.464 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:11:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:43.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.580 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.583 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.651 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
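The two oslo.concurrency lines above are nova probing the cached base image: qemu-img info runs under the oslo prlimit wrapper (--as caps the address space at 1 GiB, --cpu at 30 s) so a malformed image cannot wedge the service, and --force-share lets an image that is in use elsewhere still be inspected. A rough standalone equivalent of the probe (sketch only; the prlimit wrapper is dropped for brevity):

    import json
    import subprocess

    def probe_image(path):
        # Mirrors the logged command: C locale for stable output, JSON for
        # safe parsing, --force-share to tolerate concurrent readers.
        out = subprocess.check_output(
            ["env", "LC_ALL=C", "LANG=C", "qemu-img", "info", path,
             "--force-share", "--output=json"])
        return json.loads(out)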
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.652 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.652 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.653 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
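The acquire/release pair above is oslo.concurrency's named-lock pattern: fetches into the image cache are serialized on the base image's hash so concurrent boots do not download the same image twice; here the lock is held for ~0s because the cached file already exists. The shape of the pattern, as a sketch (nova's own wrapper can additionally make this an external, file-based lock):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("f62ef5f82502d01c82174408aec7f3ac942e2488")
    def fetch_func_sync():
        # fetch/convert the base image unless it is already cached
        ...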
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.682 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.686 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.766 257641 DEBUG nova.policy [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '247a4abb59cd459fa66a891e998e548c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.952 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
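With the base image already cached, the instance disk is created by pushing that file into the Ceph vms pool as a format-2 RBD image (format 2 is what later enables snapshot/clone layering); the import completed in 0.266s. The logged command as a standalone call (sketch):

    import subprocess

    def rbd_import(local_path, image_name, pool="vms",
                   client="openstack", conf="/etc/ceph/ceph.conf"):
        # Same invocation nova logged above.
        subprocess.check_call(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", client, "--conf", conf])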
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.993 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403888.9864266, 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:11:43 np0005539550 nova_compute[257631]: 2025-11-29 08:11:43.993 257641 INFO nova.compute.manager [-] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] VM Stopped (Lifecycle Event)
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.033 257641 DEBUG nova.compute.manager [None req-63448ed7-f55a-4bb4-bc07-3a512ae75d01 - - - - - -] [instance: 8cea7db0-f5d5-4e6a-bdad-72b5c7dae496] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.039 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] resizing rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
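The resize grows the imported image to 1073741824 bytes, i.e. the 1 GiB root disk the flavor presumably requests. Through the python rbd binding the same step looks roughly like this (a hypothetical standalone version; nova's RBDDriver manages its own connection lifecycle):

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, "e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk") as img:
                img.resize(1024 ** 3)  # 1073741824 bytes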
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.081 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.159 257641 DEBUG nova.objects.instance [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lazy-loading 'migration_context' on Instance uuid e5a6e53a-bb0e-40f5-838d-3780bd0263b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.184 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.185 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Ensure instance console log exists: /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.186 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.186 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:11:44 np0005539550 nova_compute[257631]: 2025-11-29 08:11:44.187 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:11:44 np0005539550 podman[309395]: 2025-11-29 08:11:44.329554964 +0000 UTC m=+0.061939785 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:11:44 np0005539550 podman[309394]: 2025-11-29 08:11:44.364833119 +0000 UTC m=+0.097682602 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:11:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:44.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 29 03:11:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 29 03:11:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 29 03:11:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 121 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 29 03:11:45 np0005539550 nova_compute[257631]: 2025-11-29 08:11:45.344 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Successfully created port: fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:11:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:45.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:46.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:46 np0005539550 nova_compute[257631]: 2025-11-29 08:11:46.777 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 141 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.1 MiB/s wr, 17 op/s
Nov 29 03:11:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:47.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:47 np0005539550 nova_compute[257631]: 2025-11-29 08:11:47.554 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Successfully created port: 832f7d29-6f46-4a5f-bf46-483717a6c682 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:11:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:48 np0005539550 nova_compute[257631]: 2025-11-29 08:11:48.674 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Successfully created port: ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:11:49 np0005539550 nova_compute[257631]: 2025-11-29 08:11:49.083 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 86 op/s
Nov 29 03:11:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:11:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:49.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:11:50 np0005539550 nova_compute[257631]: 2025-11-29 08:11:50.133 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Successfully updated port: fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:11:50 np0005539550 nova_compute[257631]: 2025-11-29 08:11:50.235 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:50 np0005539550 nova_compute[257631]: 2025-11-29 08:11:50.358 257641 DEBUG nova.compute.manager [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-changed-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:50 np0005539550 nova_compute[257631]: 2025-11-29 08:11:50.359 257641 DEBUG nova.compute.manager [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Refreshing instance network info cache due to event network-changed-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:11:50 np0005539550 nova_compute[257631]: 2025-11-29 08:11:50.359 257641 DEBUG oslo_concurrency.lockutils [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:11:50 np0005539550 nova_compute[257631]: 2025-11-29 08:11:50.359 257641 DEBUG oslo_concurrency.lockutils [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:11:50 np0005539550 nova_compute[257631]: 2025-11-29 08:11:50.359 257641 DEBUG nova.network.neutron [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Refreshing network info cache for port fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:11:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:50.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:51 np0005539550 nova_compute[257631]: 2025-11-29 08:11:51.085 257641 DEBUG nova.network.neutron [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:11:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Nov 29 03:11:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:51.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:51 np0005539550 nova_compute[257631]: 2025-11-29 08:11:51.780 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:52 np0005539550 nova_compute[257631]: 2025-11-29 08:11:52.083 257641 DEBUG nova.network.neutron [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:11:52 np0005539550 nova_compute[257631]: 2025-11-29 08:11:52.147 257641 DEBUG oslo_concurrency.lockutils [req-2aee6d5d-c1a2-4f59-8f99-ee3182397c62 req-f0401f7f-2af6-4ea3-800a-0dbf4fd3bc2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:11:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:52.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:53.047 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.047 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:53.048 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:11:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:11:53.049 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.124 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Successfully updated port: 832f7d29-6f46-4a5f-bf46-483717a6c682 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:11:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.318 257641 DEBUG nova.compute.manager [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-changed-832f7d29-6f46-4a5f-bf46-483717a6c682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.319 257641 DEBUG nova.compute.manager [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Refreshing instance network info cache due to event network-changed-832f7d29-6f46-4a5f-bf46-483717a6c682. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.319 257641 DEBUG oslo_concurrency.lockutils [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.319 257641 DEBUG oslo_concurrency.lockutils [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.319 257641 DEBUG nova.network.neutron [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Refreshing network info cache for port 832f7d29-6f46-4a5f-bf46-483717a6c682 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:11:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:53.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:53 np0005539550 nova_compute[257631]: 2025-11-29 08:11:53.698 257641 DEBUG nova.network.neutron [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:11:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 29 03:11:54 np0005539550 nova_compute[257631]: 2025-11-29 08:11:54.186 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 29 03:11:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 29 03:11:54 np0005539550 nova_compute[257631]: 2025-11-29 08:11:54.285 257641 DEBUG nova.network.neutron [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:11:54 np0005539550 nova_compute[257631]: 2025-11-29 08:11:54.314 257641 DEBUG oslo_concurrency.lockutils [req-5a0f7d79-b8b5-4653-8752-9b076e616bc8 req-1a5bd608-e087-4639-97d0-cf57f14675fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:11:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:54.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:54 np0005539550 nova_compute[257631]: 2025-11-29 08:11:54.450 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Successfully updated port: ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:11:54 np0005539550 nova_compute[257631]: 2025-11-29 08:11:54.472 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:11:54 np0005539550 nova_compute[257631]: 2025-11-29 08:11:54.473 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquired lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:11:54 np0005539550 nova_compute[257631]: 2025-11-29 08:11:54.473 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:11:55 np0005539550 nova_compute[257631]: 2025-11-29 08:11:55.093 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:11:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Nov 29 03:11:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:55.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:55 np0005539550 nova_compute[257631]: 2025-11-29 08:11:55.507 257641 DEBUG nova.compute.manager [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-changed-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:11:55 np0005539550 nova_compute[257631]: 2025-11-29 08:11:55.508 257641 DEBUG nova.compute.manager [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Refreshing instance network info cache due to event network-changed-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:11:55 np0005539550 nova_compute[257631]: 2025-11-29 08:11:55.508 257641 DEBUG oslo_concurrency.lockutils [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.280172) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916280279, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 510, "num_deletes": 251, "total_data_size": 522357, "memory_usage": 533168, "flush_reason": "Manual Compaction"}
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916285024, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 516957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38188, "largest_seqno": 38697, "table_properties": {"data_size": 514062, "index_size": 867, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7000, "raw_average_key_size": 19, "raw_value_size": 508197, "raw_average_value_size": 1403, "num_data_blocks": 37, "num_entries": 362, "num_filter_entries": 362, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403891, "oldest_key_time": 1764403891, "file_creation_time": 1764403916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 4892 microseconds, and 2127 cpu microseconds.
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.285065) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 516957 bytes OK
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.285085) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.286955) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.286969) EVENT_LOG_v1 {"time_micros": 1764403916286965, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.286985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 519436, prev total WAL file size 519436, number of live WAL files 2.
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.287385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(504KB)], [80(13MB)]
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916287414, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 14231656, "oldest_snapshot_seqno": -1}
Nov 29 03:11:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:11:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:56.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7144 keys, 12293979 bytes, temperature: kUnknown
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916425740, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 12293979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12244267, "index_size": 30720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 185333, "raw_average_key_size": 25, "raw_value_size": 12114206, "raw_average_value_size": 1695, "num_data_blocks": 1216, "num_entries": 7144, "num_filter_entries": 7144, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764403916, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.426067) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 12293979 bytes
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.428990) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.8 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.1 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(51.3) write-amplify(23.8) OK, records in: 7661, records dropped: 517 output_compression: NoCompression
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.429018) EVENT_LOG_v1 {"time_micros": 1764403916429007, "job": 46, "event": "compaction_finished", "compaction_time_micros": 138406, "compaction_time_cpu_micros": 27338, "output_level": 6, "num_output_files": 1, "total_output_size": 12293979, "num_input_records": 7661, "num_output_records": 7144, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916429235, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764403916431342, "job": 46, "event": "table_file_deletion", "file_number": 80}
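The rocksdb chatter from ceph-mon above is more useful than it looks: every EVENT_LOG_v1 line carries a complete JSON document (flush_started, table_file_creation, compaction_started/finished, table_file_deletion) with byte counts and timings, so compaction behaviour can be mined straight from a journal dump. A small extractor (sketch):

    import json
    import re

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    def rocksdb_events(lines):
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # e.g. job 46 above yields a compaction_finished event with
    # total_output_size=12293979 and compaction_time_micros=138406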
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.287322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.431445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.431452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.431454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.431455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:11:56.431457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:11:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:11:56 np0005539550 nova_compute[257631]: 2025-11-29 08:11:56.818 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 167 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.0 MiB/s wr, 126 op/s
Nov 29 03:11:57 np0005539550 podman[309492]: 2025-11-29 08:11:57.342402977 +0000 UTC m=+0.084813545 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:11:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:57.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:11:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:11:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:11:58.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:11:58 np0005539550 nova_compute[257631]: 2025-11-29 08:11:58.858 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:59 np0005539550 nova_compute[257631]: 2025-11-29 08:11:59.227 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:11:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 198 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.6 MiB/s wr, 63 op/s
Nov 29 03:11:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:11:59
Nov 29 03:11:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:11:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:11:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log', 'images']
Nov 29 03:11:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:11:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:11:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:11:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:11:59.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:00.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Nov 29 03:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:12:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 8559 writes, 38K keys, 8553 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 8559 writes, 8553 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1856 writes, 8492 keys, 1856 commit groups, 1.0 writes per commit group, ingest: 11.84 MB, 0.02 MB/s
Interval WAL: 1856 writes, 1856 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     26.2      1.94              0.17        23    0.084       0      0       0.0       0.0
  L6      1/0   11.72 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   4.4     25.4     21.6     10.37              0.65        22    0.471    131K    12K       0.0       0.0
 Sum      1/0   11.72 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   5.4     21.4     22.4     12.31              0.82        45    0.274    131K    12K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    102.4    106.4      0.73              0.23        12    0.061     44K   3626       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0     25.4     21.6     10.37              0.65        22    0.471    131K    12K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     26.2      1.94              0.17        22    0.088       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.0 total, 600.0 interval
Flush(GB): cumulative 0.050, interval 0.013
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.26 GB read, 0.07 MB/s read, 12.3 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 29.21 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000258 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1664,28.20 MB,9.27598%) FilterBlock(46,370.98 KB,0.119174%) IndexBlock(46,666.98 KB,0.214261%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
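One way to sanity-check the dump above: the Sum row's write amplification (W-Amp 5.4) is total compaction writes divided by bytes flushed from memtables, and the cumulative counters agree:

    # From the dump: Flush(GB) cumulative 0.050; cumulative compaction 0.27 GB write.
    flush_gb = 0.050
    compaction_write_gb = 0.27
    print(round(compaction_write_gb / flush_gb, 1))  # 5.4, the Sum row's W-Amp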
Nov 29 03:12:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:01.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
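Each radosgw health probe shows up as a three-line group: a start marker, a completion marker, and a beast access line. A rough parser for the access line, assuming the field layout visible above (client IP, user, timestamp, request line, status, byte count, latency); this is inferred from the samples, not an official radosgw log-format specification:

import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:12:01.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
if m:
    print(m.group('ip'), m.group('req'), m.group('status'), m.group('latency'))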
Nov 29 03:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 29 03:12:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 29 03:12:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 29 03:12:01 np0005539550 nova_compute[257631]: 2025-11-29 08:12:01.822 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:02.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.764 257641 DEBUG nova.network.neutron [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Updating instance_info_cache with network_info: [{"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.798 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Releasing lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.798 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Instance network_info: |[{"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.798 257641 DEBUG oslo_concurrency.lockutils [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.799 257641 DEBUG nova.network.neutron [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Refreshing network info cache for port ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.802 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Start _get_guest_xml network_info=[{"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.806 257641 WARNING nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.812 257641 DEBUG nova.virt.libvirt.host [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.813 257641 DEBUG nova.virt.libvirt.host [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.816 257641 DEBUG nova.virt.libvirt.host [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.816 257641 DEBUG nova.virt.libvirt.host [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.817 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.818 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.818 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.818 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.819 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.819 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.819 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.819 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.819 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.820 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.820 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.820 257641 DEBUG nova.virt.hardware [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
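The nova.virt.hardware lines above trace one vCPU being expanded against effectively unbounded limits (65536 sockets, cores, and threads) and arriving at the single topology 1:1:1. A simplified sketch of that enumeration, assuming only the product constraint and the per-dimension maxima visible in the log; the real _get_possible_cpu_topologies also folds in flavor/image preferences and ordering:

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Enumerate (sockets, cores, threads) whose product equals the vCPU count."""
    topos = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    topos.append((s, c, t))
    return topos

print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"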
Nov 29 03:12:02 np0005539550 nova_compute[257631]: 2025-11-29 08:12:02.822 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 261 KiB/s rd, 2.4 MiB/s wr, 54 op/s
Nov 29 03:12:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3558668389' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.253 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
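nova shells out to the ceph CLI exactly as logged (the mon audit lines show the command being dispatched as client.openstack) and parses the JSON monmap it gets back. A sketch of the same round trip; the invocation is copied from the log, while the "mons", "name", and "public_addr" key names are assumptions about this format version's JSON layout:

import json
import subprocess

# Same invocation as the processutils lines above; --id/--conf select client.openstack.
cmd = ["ceph", "mon", "dump", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
monmap = json.loads(out)

# Key names assumed; .get() keeps the sketch tolerant of format differences.
for mon in monmap.get("mons", []):
    print(mon.get("name"), mon.get("public_addr"))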
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.282 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.286 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:03.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2493623856' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.731 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.733 257641 DEBUG nova.virt.libvirt.vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:43Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.733 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.734 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:4c:8f,bridge_name='br-int',has_traffic_filtering=True,id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3d7dc7-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.735 257641 DEBUG nova.virt.libvirt.vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:43Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.736 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.736 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:c5:34,bridge_name='br-int',has_traffic_filtering=True,id=832f7d29-6f46-4a5f-bf46-483717a6c682,network=Network(f0e99e9e-47dd-4bd5-b003-eabd88928a95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832f7d29-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.737 257641 DEBUG nova.virt.libvirt.vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:43Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.737 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.739 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b5:fa,bridge_name='br-int',has_traffic_filtering=True,id=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff5c1c4c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.740 257641 DEBUG nova.objects.instance [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lazy-loading 'pci_devices' on Instance uuid e5a6e53a-bb0e-40f5-838d-3780bd0263b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.768 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <uuid>e5a6e53a-bb0e-40f5-838d-3780bd0263b6</uuid>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <name>instance-00000052</name>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersTestMultiNic-server-734926424</nova:name>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:12:02</nova:creationTime>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:user uuid="247a4abb59cd459fa66a891e998e548c">tempest-ServersTestMultiNic-2054665875-project-member</nova:user>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:project uuid="c5e5a2f42dc64b7cb0a22b666f160b1d">tempest-ServersTestMultiNic-2054665875</nova:project>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:port uuid="fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.68" ipVersion="4"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:port uuid="832f7d29-6f46-4a5f-bf46-483717a6c682">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.1.249" ipVersion="4"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <nova:port uuid="ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <entry name="serial">e5a6e53a-bb0e-40f5-838d-3780bd0263b6</entry>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <entry name="uuid">e5a6e53a-bb0e-40f5-838d-3780bd0263b6</entry>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk.config">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:71:4c:8f"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <target dev="tapfc3d7dc7-8a"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:6e:c5:34"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <target dev="tap832f7d29-6f"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:53:b5:fa"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <target dev="tapff5c1c4c-f4"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/console.log" append="off"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:12:03 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:12:03 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:12:03 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:12:03 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.768 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Preparing to wait for external event network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.769 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.769 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.769 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.769 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Preparing to wait for external event network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.769 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.770 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.770 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.770 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Preparing to wait for external event network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.770 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.770 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.770 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
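The three prepare_for_instance_event calls above register network-vif-plugged-<port_id> events under the per-instance "-events" lock before any VIF is actually plugged, so a Neutron notification that arrives while plugging is still in flight gets recorded rather than lost. A simplified, hypothetical sketch of that prepare-then-wait pattern (a toy model, not Nova's actual implementation):

```python
import threading

class InstanceEvents:
    """Toy model of the prepare/wait/deliver pattern in the log above.

    Events are registered before the action that triggers them, so a
    notification that races ahead of the waiter is still captured.
    """

    def __init__(self):
        self._lock = threading.Lock()   # plays the role of the "-events" lock
        self._events = {}               # event name -> threading.Event

    def prepare(self, name):
        with self._lock:                # "Acquiring lock ... acquired ... released"
            return self._events.setdefault(name, threading.Event())

    def deliver(self, name):
        with self._lock:
            ev = self._events.get(name)
        if ev:
            ev.set()                    # wakes any waiter on this event

events = InstanceEvents()
waiter = events.prepare(
    'network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f')
# ... plug the VIF; eventually Neutron reports the port as up:
events.deliver(
    'network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f')
waiter.wait(timeout=300)   # cf. Nova's vif_plugging_timeout default of 300s
```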
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.771 257641 DEBUG nova.virt.libvirt.vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:43Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.771 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.772 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:4c:8f,bridge_name='br-int',has_traffic_filtering=True,id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3d7dc7-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.772 257641 DEBUG os_vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:4c:8f,bridge_name='br-int',has_traffic_filtering=True,id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3d7dc7-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.773 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.773 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.774 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.777 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.777 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc3d7dc7-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.777 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc3d7dc7-8a, col_values=(('external_ids', {'iface-id': 'fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:4c:8f', 'vm-uuid': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 NetworkManager[49039]: <info>  [1764403923.7802] manager: (tapfc3d7dc7-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.779 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.781 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.789 257641 INFO os_vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:4c:8f,bridge_name='br-int',has_traffic_filtering=True,id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3d7dc7-8a')#033[00m
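The plug above is three idempotent OVSDB operations: AddBridgeCommand(may_exist=True) (a no-op here, hence "Transaction caused no change"), AddPortCommand(may_exist=True), and a DbSetCommand stamping the Interface row's external_ids with the Neutron port ID, MAC, and instance UUID. Roughly the same effect from the CLI, sketched with subprocess and the values from this log (ovsdbapp commits its two port commands in a single transaction; this shell approximation does not):

```python
import subprocess

def plug_ovs_vif(bridge, dev, iface_id, mac, vm_uuid):
    # Idempotent bridge creation, mirroring AddBridgeCommand(may_exist=True).
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-br', bridge,
         '--', 'set', 'Bridge', bridge, 'datapath_type=system'],
        check=True)
    # Port add plus Interface external_ids, mirroring AddPortCommand +
    # DbSetCommand. Values containing ':' (the MAC) need OVSDB-level quoting.
    subprocess.run(
        ['ovs-vsctl', '--may-exist', 'add-port', bridge, dev,
         '--', 'set', 'Interface', dev,
         f'external_ids:iface-id={iface_id}',
         'external_ids:iface-status=active',
         f'external_ids:attached-mac="{mac}"',
         f'external_ids:vm-uuid={vm_uuid}'],
        check=True)

plug_ovs_vif('br-int', 'tapfc3d7dc7-8a',
             'fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f',
             'fa:16:3e:71:4c:8f',
             'e5a6e53a-bb0e-40f5-838d-3780bd0263b6')
```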
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.789 257641 DEBUG nova.virt.libvirt.vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:43Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.790 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.790 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:c5:34,bridge_name='br-int',has_traffic_filtering=True,id=832f7d29-6f46-4a5f-bf46-483717a6c682,network=Network(f0e99e9e-47dd-4bd5-b003-eabd88928a95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832f7d29-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.790 257641 DEBUG os_vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:c5:34,bridge_name='br-int',has_traffic_filtering=True,id=832f7d29-6f46-4a5f-bf46-483717a6c682,network=Network(f0e99e9e-47dd-4bd5-b003-eabd88928a95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832f7d29-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.791 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.791 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.791 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.793 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.793 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap832f7d29-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.794 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap832f7d29-6f, col_values=(('external_ids', {'iface-id': '832f7d29-6f46-4a5f-bf46-483717a6c682', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:c5:34', 'vm-uuid': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.795 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 NetworkManager[49039]: <info>  [1764403923.7959] manager: (tap832f7d29-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.797 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.802 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.803 257641 INFO os_vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:c5:34,bridge_name='br-int',has_traffic_filtering=True,id=832f7d29-6f46-4a5f-bf46-483717a6c682,network=Network(f0e99e9e-47dd-4bd5-b003-eabd88928a95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832f7d29-6f')#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.804 257641 DEBUG nova.virt.libvirt.vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:11:43Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.804 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.805 257641 DEBUG nova.network.os_vif_util [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b5:fa,bridge_name='br-int',has_traffic_filtering=True,id=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff5c1c4c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.805 257641 DEBUG os_vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b5:fa,bridge_name='br-int',has_traffic_filtering=True,id=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff5c1c4c-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.806 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.806 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.806 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.809 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.809 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff5c1c4c-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.810 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff5c1c4c-f4, col_values=(('external_ids', {'iface-id': 'ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:b5:fa', 'vm-uuid': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.811 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 NetworkManager[49039]: <info>  [1764403923.8126] manager: (tapff5c1c4c-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.814 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.819 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.820 257641 INFO os_vif [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b5:fa,bridge_name='br-int',has_traffic_filtering=True,id=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff5c1c4c-f4')#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.887 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.888 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.888 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No VIF found with MAC fa:16:3e:71:4c:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.888 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No VIF found with MAC fa:16:3e:6e:c5:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.888 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] No VIF found with MAC fa:16:3e:53:b5:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.889 257641 INFO nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Using config drive#033[00m
Nov 29 03:12:03 np0005539550 nova_compute[257631]: 2025-11-29 08:12:03.916 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:04.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.115 257641 INFO nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Creating config drive at /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/disk.config#033[00m
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.122 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk207imd4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 179 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 643 KiB/s rd, 2.1 MiB/s wr, 100 op/s
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.258 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk207imd4" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.294 257641 DEBUG nova.storage.rbd_utils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] rbd image e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.298 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/disk.config e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.484 257641 DEBUG oslo_concurrency.processutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/disk.config e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.486 257641 INFO nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Deleting local config drive /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6/disk.config because it was imported into RBD.#033[00m
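The config-drive path above is two verbatim commands: mkisofs builds an ISO9660 image labelled config-2 (the volume ID cloud-init probes for) with Joliet and Rock Ridge extensions from a temporary metadata directory, then rbd import copies it into the vms pool, after which the local file is deleted. The same two steps reproduced as a sketch, with the exact arguments from this log (the staging directory /tmp/tmpk207imd4 is ephemeral and stands in for wherever the metadata files were written):

```python
import subprocess

src_dir = '/tmp/tmpk207imd4'   # ephemeral staging dir, taken from the log
iso = ('/var/lib/nova/instances/'
       'e5a6e53a-bb0e-40f5-838d-3780bd0263b6/disk.config')

# Build the config drive ISO (volume label "config-2").
subprocess.run(
    ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
     '-allow-multidot', '-l', '-publisher',
     'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
     '-quiet', '-J', '-r', '-V', 'config-2', src_dir],
    check=True)

# Import into Ceph; once this succeeds the local ISO is redundant.
subprocess.run(
    ['rbd', 'import', '--pool', 'vms', iso,
     'e5a6e53a-bb0e-40f5-838d-3780bd0263b6_disk.config',
     '--image-format=2', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True)
```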
Nov 29 03:12:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:05.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:05 np0005539550 kernel: tapfc3d7dc7-8a: entered promiscuous mode
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.5487] manager: (tapfc3d7dc7-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00313|binding|INFO|Claiming lport fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f for this chassis.
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00314|binding|INFO|fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f: Claiming fa:16:3e:71:4c:8f 10.100.0.68
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.556 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.5683] manager: (tap832f7d29-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.580 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:4c:8f 10.100.0.68'], port_security=['fa:16:3e:71:4c:8f 10.100.0.68'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.68/24', 'neutron:device_id': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d68765f-77f4-4ef6-a997-f426f146a6a1, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.581 158978 INFO neutron.agent.ovn.metadata.agent [-] Port fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f in datapath 65e8e3ac-03ff-4cf6-bafc-f73830b4558d bound to our chassis#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.583 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65e8e3ac-03ff-4cf6-bafc-f73830b4558d#033[00m
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.5891] manager: (tapff5c1c4c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Nov 29 03:12:05 np0005539550 systemd-udevd[309673]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:05 np0005539550 systemd-udevd[309672]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:05 np0005539550 systemd-udevd[309675]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.594 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1af29d06-1fd6-4692-8852-a30aeeb0743f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.595 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap65e8e3ac-01 in ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.597 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap65e8e3ac-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.597 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[54948ccb-f481-4a9f-885a-0bbdfa3f21fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.598 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b160a122-95e9-4d1d-937d-280709427b02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
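"Provisioning metadata" above means the agent builds a network namespace named ovnmeta-<network_uuid> with a veth pair: tap65e8e3ac-00 stays in the root namespace (and is attached to br-int), while tap65e8e3ac-01 is moved inside, where the metadata proxy will answer on 169.254.169.254. The agent drives this through pyroute2 under privsep; a hypothetical iproute2 equivalent using the names from this log:

```python
import subprocess

ns = 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d'
outer, inner = 'tap65e8e3ac-00', 'tap65e8e3ac-01'

for cmd in (
    ['ip', 'netns', 'add', ns],                           # the namespace
    ['ip', 'link', 'add', outer, 'type', 'veth',
     'peer', 'name', inner],                              # the veth pair
    ['ip', 'link', 'set', inner, 'netns', ns],            # move one end inside
    ['ip', 'link', 'set', outer, 'up'],
    ['ip', '-n', ns, 'link', 'set', inner, 'up'],
):
    subprocess.run(cmd, check=True)
# The outer end is then plugged into br-int (ovs-vsctl add-port) with the
# metadata port's iface-id, much like the instance taps earlier in this log.
```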
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.6061] device (tapfc3d7dc7-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.6074] device (tapfc3d7dc7-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:05 np0005539550 kernel: tap832f7d29-6f: entered promiscuous mode
Nov 29 03:12:05 np0005539550 kernel: tapff5c1c4c-f4: entered promiscuous mode
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.6102] device (tap832f7d29-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.6116] device (tapff5c1c4c-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.612 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[875f1a35-81d4-432c-a1cb-86131e902b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.612 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.6137] device (tap832f7d29-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.6141] device (tapff5c1c4c-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00315|binding|INFO|Claiming lport ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 for this chassis.
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00316|binding|INFO|ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3: Claiming fa:16:3e:53:b5:fa 10.100.0.21
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00317|binding|INFO|Claiming lport 832f7d29-6f46-4a5f-bf46-483717a6c682 for this chassis.
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00318|binding|INFO|832f7d29-6f46-4a5f-bf46-483717a6c682: Claiming fa:16:3e:6e:c5:34 10.100.1.249
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00319|binding|INFO|Setting lport fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f ovn-installed in OVS
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00320|binding|INFO|Setting lport fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f up in Southbound
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.622 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b5:fa 10.100.0.21'], port_security=['fa:16:3e:53:b5:fa 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/24', 'neutron:device_id': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d68765f-77f4-4ef6-a997-f426f146a6a1, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.621 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.624 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:c5:34 10.100.1.249'], port_security=['fa:16:3e:6e:c5:34 10.100.1.249'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.249/24', 'neutron:device_id': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a730f2c7-6b8e-499e-a2e6-b9751c280d74, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=832f7d29-6f46-4a5f-bf46-483717a6c682) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.630 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[26d18185-116e-4e01-9241-c71fafbce496]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 systemd-machined[216673]: New machine qemu-40-instance-00000052.
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.658 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c76ac1da-6d76-4e79-97af-1fd3554a649a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.666 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[060a47ac-45f2-4d8a-be56-a38e895eb269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.6669] manager: (tap65e8e3ac-00): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Nov 29 03:12:05 np0005539550 systemd[1]: Started Virtual Machine qemu-40-instance-00000052.
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00321|binding|INFO|Setting lport 832f7d29-6f46-4a5f-bf46-483717a6c682 ovn-installed in OVS
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00322|binding|INFO|Setting lport 832f7d29-6f46-4a5f-bf46-483717a6c682 up in Southbound
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00323|binding|INFO|Setting lport ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 ovn-installed in OVS
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00324|binding|INFO|Setting lport ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 up in Southbound
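ovn-controller claims each lport by matching the local Interface's external_ids:iface-id (written during the plug steps earlier) against a Southbound Port_Binding whose requested-chassis names this host; on success it stamps external_ids:ovn-installed on the OVS interface and flips the binding up in the Southbound DB, which is what ultimately lets Neutron send the network-vif-plugged events Nova registered for. A quick verification sketch (standard ovs-vsctl/ovn-sbctl invocations; ovn-sbctl must be able to reach the Southbound DB, which is deployment-specific):

```python
import subprocess

port = 'fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f'
dev = 'tapfc3d7dc7-8a'

# iface-id stamped on the local OVS interface during the plug.
subprocess.run(['ovs-vsctl', 'get', 'Interface', dev,
                'external_ids:iface-id'], check=True)

# ovn-controller sets this once the binding is wired up locally.
subprocess.run(['ovs-vsctl', 'get', 'Interface', dev,
                'external_ids:ovn-installed'], check=True)

# Southbound view: the chassis column should now name this host.
subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                f'logical_port={port}'], check=True)
```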
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.682 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.696 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b39749-cf64-48a9-80c5-2e7a436fa442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.699 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[86a0373b-ec10-4719-ab2e-ee365be97359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.7184] device (tap65e8e3ac-00): carrier: link connected
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.723 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[dfeb5132-4cfb-46c4-924d-df7c8740f380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.740 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[10873ed3-e8bd-4cd9-be5b-9e765b0fea93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65e8e3ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:a6:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697338, 'reachable_time': 34525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309710, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.754 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2fe5a9-bbd5-4c48-a05c-b6c9832420e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:a6ac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697338, 'tstamp': 697338}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309711, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.771 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d224e145-302a-4df4-8a08-3319d5e29909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65e8e3ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:a6:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697338, 'reachable_time': 34525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309713, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
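The RTM_NEWLINK replies above are pyroute2 netlink messages marshalled back through the privsep daemon: each carries the interface's attrs list plus a header naming the target namespace. A minimal sketch of reading the same attributes, assuming pyroute2 (the library these replies come from) and using the namespace and index taken from the log:

# Minimal sketch: inspect the veth end inside the OVN metadata namespace,
# mirroring the RTM_NEWLINK replies logged above (names taken from the log).
from pyroute2 import NetNS

with NetNS('ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d') as ns:
    for msg in ns.link('get', index=2):
        print(msg.get_attr('IFLA_IFNAME'),      # 'tap65e8e3ac-01'
              msg.get_attr('IFLA_ADDRESS'),     # 'fa:16:3e:ae:a6:ac'
              msg.get_attr('IFLA_OPERSTATE'))   # 'UP'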
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.802 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d63e58d-b068-4e9a-9f7a-fb8f4d4ab8a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.862 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[13504585-f2c6-4d98-a124-4974f9a5401b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.864 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65e8e3ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.864 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.864 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65e8e3ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:05 np0005539550 kernel: tap65e8e3ac-00: entered promiscuous mode
Nov 29 03:12:05 np0005539550 NetworkManager[49039]: <info>  [1764403925.8670] manager: (tap65e8e3ac-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.868 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.869 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65e8e3ac-00, col_values=(('external_ids', {'iface-id': '7e233840-f009-4337-9f5b-4e18a736769b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
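The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) move the agent-side veth end from br-ex to br-int and stamp it with external_ids:iface-id so ovn-controller can claim the logical port. A hedged sketch of the same sequence with ovsdbapp's Open vSwitch schema API; the socket path is an assumption, substitute your ovsdb-server endpoint:

# Sketch of the same three-step plumbing using ovsdbapp's OVS schema API.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

conn = connection.Connection(
    idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                        'Open_vSwitch'),
    timeout=10)
api = impl_idl.OvsdbIdl(conn)

with api.transaction(check_error=True) as txn:
    # Same commands as logged: drop the stale br-ex port, add the port to
    # br-int, then tag the interface with the OVN logical port id.
    txn.add(api.del_port('tap65e8e3ac-00', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tap65e8e3ac-00', may_exist=True))
    txn.add(api.db_set('Interface', 'tap65e8e3ac-00',
                       ('external_ids',
                        {'iface-id': '7e233840-f009-4337-9f5b-4e18a736769b'})))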
Nov 29 03:12:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:05Z|00325|binding|INFO|Releasing lport 7e233840-f009-4337-9f5b-4e18a736769b from this chassis (sb_readonly=0)
Nov 29 03:12:05 np0005539550 nova_compute[257631]: 2025-11-29 08:12:05.886 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.887 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/65e8e3ac-03ff-4cf6-bafc-f73830b4558d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/65e8e3ac-03ff-4cf6-bafc-f73830b4558d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
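The "Unable to access ... .pid.haproxy" debug line is the expected first-run case: the agent probes for the pidfile of an already-running proxy and treats a missing file as "not running", then proceeds to spawn one. A sketch of that tolerant read; the helper name here is illustrative, not neutron's:

# Illustrative sketch of the tolerant pidfile probe behind the debug line
# above: a missing file simply means no haproxy serves this network yet.
def read_pidfile(path):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError) as err:
        print(f'Unable to access {path}; Error: {err}')
        return None

read_pidfile('/var/lib/neutron/external/pids/'
             '65e8e3ac-03ff-4cf6-bafc-f73830b4558d.pid.haproxy')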
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.887 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[638131e9-64d4-4652-9fe3-422f3fbd2d66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.888 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-65e8e3ac-03ff-4cf6-bafc-f73830b4558d
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/65e8e3ac-03ff-4cf6-bafc-f73830b4558d.pid.haproxy
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 65e8e3ac-03ff-4cf6-bafc-f73830b4558d
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:05.889 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'env', 'PROCESS_TAG=haproxy-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/65e8e3ac-03ff-4cf6-bafc-f73830b4558d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
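The command above runs haproxy inside the ovnmeta namespace via neutron-rootwrap, which validates the command against the filters referenced by /etc/neutron/rootwrap.conf before escalating; the PROCESS_TAG environment variable lets the agent's process monitor find the daemon later. Reproduced as a plain subprocess call (assumes the same rootwrap filters and config file are in place):

# Sketch of the logged invocation: rootwrap checks the command against its
# filters, then execs haproxy inside the ovnmeta namespace. Paths are
# copied from the log lines above.
import subprocess

network = '65e8e3ac-03ff-4cf6-bafc-f73830b4558d'
cmd = [
    'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
    'ip', 'netns', 'exec', f'ovnmeta-{network}',
    'env', f'PROCESS_TAG=haproxy-{network}',
    'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{network}.conf',
]
subprocess.run(cmd, check=True)  # haproxy daemonizes per its config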
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.076 257641 DEBUG nova.compute.manager [req-cad617a0-2300-4fd8-9a3b-5e5be6e6e835 req-2f0e0359-0e0c-4776-849c-370fe0b789f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.077 257641 DEBUG oslo_concurrency.lockutils [req-cad617a0-2300-4fd8-9a3b-5e5be6e6e835 req-2f0e0359-0e0c-4776-849c-370fe0b789f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.077 257641 DEBUG oslo_concurrency.lockutils [req-cad617a0-2300-4fd8-9a3b-5e5be6e6e835 req-2f0e0359-0e0c-4776-849c-370fe0b789f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.078 257641 DEBUG oslo_concurrency.lockutils [req-cad617a0-2300-4fd8-9a3b-5e5be6e6e835 req-2f0e0359-0e0c-4776-849c-370fe0b789f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.078 257641 DEBUG nova.compute.manager [req-cad617a0-2300-4fd8-9a3b-5e5be6e6e835 req-2f0e0359-0e0c-4776-849c-370fe0b789f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Processing event network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.280 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403926.2798233, e5a6e53a-bb0e-40f5-838d-3780bd0263b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.281 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:06 np0005539550 podman[309786]: 2025-11-29 08:12:06.29182117 +0000 UTC m=+0.051633777 container create 84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.310 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.315 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403926.2809527, e5a6e53a-bb0e-40f5-838d-3780bd0263b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.316 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:06 np0005539550 systemd[1]: Started libpod-conmon-84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a.scope.
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.338 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.342 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
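The numeric states in this sync message come from nova.compute.power_state: the database still holds 0 (NOSTATE) while the hypervisor reports 3 (PAUSED), which is normal mid-spawn since libvirt starts the domain paused. The constants, reproduced here for reference (values as commonly defined in nova's source; verify against your tree):

# nova.compute.power_state constants; the sync message above compares
# DB power_state 0 (NOSTATE) with VM power_state 3 (PAUSED).
POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
print(POWER_STATES[0], '->', POWER_STATES[3])  # NOSTATE -> PAUSED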
Nov 29 03:12:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:06 np0005539550 podman[309786]: 2025-11-29 08:12:06.264164386 +0000 UTC m=+0.023977013 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ae33368d183b6f6f19915aa5e012adf29adb05ae20f3345203af892b760e1c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.368 257641 DEBUG nova.network.neutron [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Updated VIF entry in instance network info cache for port ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.369 257641 DEBUG nova.network.neutron [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Updating instance_info_cache with network_info: [{"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.372 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:06 np0005539550 podman[309786]: 2025-11-29 08:12:06.375196698 +0000 UTC m=+0.135009325 container init 84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:12:06 np0005539550 podman[309786]: 2025-11-29 08:12:06.381569741 +0000 UTC m=+0.141382348 container start 84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.387 257641 DEBUG oslo_concurrency.lockutils [req-70dc9e86-6899-42a9-88ae-f5b9edd7fd6f req-9d7ce736-0ca5-4352-88cb-45f00adc799e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e5a6e53a-bb0e-40f5-838d-3780bd0263b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:06.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:06 np0005539550 neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d[309803]: [NOTICE]   (309807) : New worker (309809) forked
Nov 29 03:12:06 np0005539550 neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d[309803]: [NOTICE]   (309807) : Loading success.
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.451 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 in datapath 65e8e3ac-03ff-4cf6-bafc-f73830b4558d unbound from our chassis#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.454 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65e8e3ac-03ff-4cf6-bafc-f73830b4558d#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.471 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1924289f-0e88-40cf-aaac-a6c562327146]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.499 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bc2626-416b-4bc9-aa6f-d78533596cdb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.502 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d449060d-2e4f-4996-a248-c035a06809f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.527 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3b27345c-6f74-4de5-b52e-5e58a39f7756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.544 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4589240d-ffd5-4cd3-ae48-e472b04832e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65e8e3ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:a6:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 4, 'rx_bytes': 266, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697338, 'reachable_time': 34525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309823, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.560 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3325d71e-e680-4092-8151-4206063c5341]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap65e8e3ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697349, 'tstamp': 697349}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309824, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap65e8e3ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697352, 'tstamp': 697352}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309824, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
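The two RTM_NEWADDR records in this reply confirm the namespace-side interface now carries both the datapath address 10.100.0.2/24 and the metadata VIP 169.254.169.254/32 that the haproxy listener binds. A pyroute2 sketch of the equivalent assignment (addresses and names from the log; requires root):

# Sketch: put the datapath address and the metadata VIP on the in-namespace
# veth end, matching the two RTM_NEWADDR events above (requires root).
from pyroute2 import NetNS

with NetNS('ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d') as ns:
    idx = ns.link_lookup(ifname='tap65e8e3ac-01')[0]
    ns.addr('add', index=idx, address='10.100.0.2', prefixlen=24)
    ns.addr('add', index=idx, address='169.254.169.254', prefixlen=32)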
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.563 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65e8e3ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.565 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.565 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.566 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65e8e3ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.566 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.566 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65e8e3ac-00, col_values=(('external_ids', {'iface-id': '7e233840-f009-4337-9f5b-4e18a736769b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.567 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.567 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 832f7d29-6f46-4a5f-bf46-483717a6c682 in datapath f0e99e9e-47dd-4bd5-b003-eabd88928a95 unbound from our chassis#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.569 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0e99e9e-47dd-4bd5-b003-eabd88928a95#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.580 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5e612620-98a4-4dcc-b291-0eea731c0335]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.581 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0e99e9e-41 in ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.582 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0e99e9e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.583 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d605f07b-13c8-4421-9364-68a68671c890]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.583 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[574d8f96-f2ae-484d-a384-c102f283bd3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
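With both existence probes returning False, the agent creates the veth pair and pushes the -41 end into the new namespace. A rough pyroute2 equivalent, under the assumption that net_ns_fd accepts a namespace name (names from the log; requires root):

# Rough sketch: create the veth pair and move the peer into the metadata
# namespace, as the agent does for tapf0e99e9e-40/-41 (requires root).
from pyroute2 import IPRoute, NetNS

NS = 'ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95'
NetNS(NS).close()  # ensure the namespace exists before moving the peer

ipr = IPRoute()
ipr.link('add', ifname='tapf0e99e9e-40', kind='veth', peer='tapf0e99e9e-41')
peer = ipr.link_lookup(ifname='tapf0e99e9e-41')[0]
ipr.link('set', index=peer, net_ns_fd=NS)  # peer leaves the root namespace
ipr.link('set', index=ipr.link_lookup(ifname='tapf0e99e9e-40')[0], state='up')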
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.593 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e12d93b6-9cf7-4535-a3e2-39a8f7171536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.616 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e2473a35-4149-4e03-836f-d46cda91a215]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
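The triple in this reply, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0), is the (stdout, stderr, returncode) of a sysctl invocation inside the new namespace. The equivalent call (requires root):

# The reply above is the (stdout, stderr, returncode) of this sysctl run
# inside the freshly created namespace (requires root).
import subprocess

res = subprocess.run(
    ['ip', 'netns', 'exec', 'ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95',
     'sysctl', '-w', 'net.ipv4.conf.all.promote_secondaries=1'],
    capture_output=True, text=True)
print((res.stdout, res.stderr, res.returncode))
# -> ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)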
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.642 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[63e3c0a3-de38-477a-b5a7-054d99a781db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.648 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b99ffa-dd65-4ba0-b541-ee2f1cb759d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 NetworkManager[49039]: <info>  [1764403926.6488] manager: (tapf0e99e9e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Nov 29 03:12:06 np0005539550 systemd-udevd[309696]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.675 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[db109a10-bed4-4bd7-a6f6-45246c0a2a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.677 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1fffc201-fe13-4b76-8ece-5f57e7646fc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 NetworkManager[49039]: <info>  [1764403926.6957] device (tapf0e99e9e-40): carrier: link connected
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.700 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dd05c2-6cbc-4e8f-9275-01ec8538796c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.714 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1ef576bb-200f-446c-bbe8-051c6a3e5747]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0e99e9e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:a4:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697436, 'reachable_time': 24127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309835, 'error': None, 'target': 'ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.726 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4215a820-fd1d-4383-b838-8c1b7e0895c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:a401'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697436, 'tstamp': 697436}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309836, 'error': None, 'target': 'ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.739 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d15f00ea-ebb1-449f-9e95-81678dff5903]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0e99e9e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:a4:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697436, 'reachable_time': 24127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309837, 'error': None, 'target': 'ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.783 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[185ef724-7b4f-4452-b65f-e816165e736a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.825 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.851 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[76ef4714-c982-45b1-9297-84693ba517a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.852 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0e99e9e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.852 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.852 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0e99e9e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.854 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539550 NetworkManager[49039]: <info>  [1764403926.8549] manager: (tapf0e99e9e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Nov 29 03:12:06 np0005539550 kernel: tapf0e99e9e-40: entered promiscuous mode
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.857 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.858 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0e99e9e-40, col_values=(('external_ids', {'iface-id': 'f6645041-8cc9-4055-9d62-904a0db5d462'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.860 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:06Z|00326|binding|INFO|Releasing lport f6645041-8cc9-4055-9d62-904a0db5d462 from this chassis (sb_readonly=0)
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.862 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0e99e9e-47dd-4bd5-b003-eabd88928a95.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0e99e9e-47dd-4bd5-b003-eabd88928a95.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.864 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b33bfb7a-fdb9-46ba-80fc-deb193a9d2b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.864 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-f0e99e9e-47dd-4bd5-b003-eabd88928a95
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/f0e99e9e-47dd-4bd5-b003-eabd88928a95.pid.haproxy
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID f0e99e9e-47dd-4bd5-b003-eabd88928a95
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:06.865 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'env', 'PROCESS_TAG=haproxy-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0e99e9e-47dd-4bd5-b003-eabd88928a95.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
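[annotation] The haproxy_cfg dump above is rendered from a string template and written to disk before the rootwrap command launches haproxy inside the ovnmeta- namespace. A rough sketch of that render-then-spawn flow, assuming a simplified template and helper name (the real template lives in neutron/agent/ovn/metadata/driver.py and is longer than this):

```python
# Sketch: render a metadata-proxy haproxy config and launch it in the
# network namespace, approximating the agent's flow. The template is
# abbreviated and the helper name is an assumption.
import subprocess

_TEMPLATE = """global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    user        root
    group       root
    maxconn     1024
    pidfile     {pid_path}
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata {socket_path}
    http-request add-header X-OVN-Network-ID {network_id}
"""

def launch_proxy(network_id: str) -> None:
    cfg_path = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'
    with open(cfg_path, 'w') as f:
        f.write(_TEMPLATE.format(
            network_id=network_id,
            pid_path=f'/var/lib/neutron/external/pids/{network_id}.pid.haproxy',
            socket_path='/var/lib/neutron/metadata_proxy'))
    # Same shape as the rootwrap command in the log: enter the ovnmeta-
    # namespace, then daemonize haproxy against the rendered config.
    subprocess.check_call(
        ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
         'ip', 'netns', 'exec', f'ovnmeta-{network_id}',
         'haproxy', '-f', cfg_path])
```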
Nov 29 03:12:06 np0005539550 nova_compute[257631]: 2025-11-29 08:12:06.876 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:07 np0005539550 nova_compute[257631]: 2025-11-29 08:12:07.157 257641 DEBUG nova.compute.manager [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:07 np0005539550 nova_compute[257631]: 2025-11-29 08:12:07.158 257641 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:07 np0005539550 nova_compute[257631]: 2025-11-29 08:12:07.158 257641 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:07 np0005539550 nova_compute[257631]: 2025-11-29 08:12:07.158 257641 DEBUG oslo_concurrency.lockutils [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:07 np0005539550 nova_compute[257631]: 2025-11-29 08:12:07.158 257641 DEBUG nova.compute.manager [req-4ff0031c-b1e2-40e2-87a1-d836231e7f62 req-c32380a8-9cf5-48ca-ae03-1d897faf4209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Processing event network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
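[annotation] Each "Received event" line above is followed by the same acquire/release dance on the "<uuid>-events" lock: nova serializes access to its per-instance event table with oslo.concurrency. A minimal sketch of that pattern, with an illustrative table layout (the function and dict names are assumptions, not nova's actual internals):

```python
# Sketch: serialize reads/writes of a per-instance event table with
# oslo.concurrency, as the "<uuid>-events" lock lines above suggest.
# Table layout and names are illustrative assumptions.
from oslo_concurrency import lockutils

_events = {}  # instance_uuid -> {(event_name, tag): payload}

def pop_instance_event(instance_uuid, name, tag):
    @lockutils.synchronized(f'{instance_uuid}-events')
    def _pop_event():
        # Remove and return the waiter entry, if one was registered.
        return _events.get(instance_uuid, {}).pop((name, tag), None)
    return _pop_event()
```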
Nov 29 03:12:07 np0005539550 podman[309869]: 2025-11-29 08:12:07.221542015 +0000 UTC m=+0.047495753 container create abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:12:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 134 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 678 KiB/s rd, 2.1 MiB/s wr, 121 op/s
Nov 29 03:12:07 np0005539550 systemd[1]: Started libpod-conmon-abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9.scope.
Nov 29 03:12:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68dd1a39eae5951a8857073031083824f6d01d6ab9a536ea123e3d048e8d6e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:07 np0005539550 podman[309869]: 2025-11-29 08:12:07.197679126 +0000 UTC m=+0.023632864 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:07 np0005539550 podman[309869]: 2025-11-29 08:12:07.299565326 +0000 UTC m=+0.125519054 container init abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:12:07 np0005539550 podman[309869]: 2025-11-29 08:12:07.306291098 +0000 UTC m=+0.132244816 container start abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:12:07 np0005539550 neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95[309884]: [NOTICE]   (309888) : New worker (309890) forked
Nov 29 03:12:07 np0005539550 neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95[309884]: [NOTICE]   (309888) : Loading success.
Nov 29 03:12:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
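[annotation] The anonymous "HEAD / HTTP/1.0" requests arriving every second or so from 192.168.122.100 and .102 look like load-balancer health probes against radosgw's beast frontend, which would explain the steady 200s with near-zero latency. A sketch of an equivalent probe; host and port are assumptions:

```python
# Sketch: the kind of HEAD-based health probe that would produce the
# beast log lines above. Host and port are assumptions.
import http.client

conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=5)
conn.request('HEAD', '/')
resp = conn.getresponse()
print(resp.status)   # 200 means the gateway answered the probe
conn.close()
```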
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:12:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:12:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:08.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:12:08 np0005539550 nova_compute[257631]: 2025-11-29 08:12:08.490 257641 DEBUG nova.compute.manager [req-3fe3b5a3-32cd-4d45-b0af-b874ca74ee38 req-3f8f20bf-98d9-45f5-9719-b70cdd15bdba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:08 np0005539550 nova_compute[257631]: 2025-11-29 08:12:08.491 257641 DEBUG oslo_concurrency.lockutils [req-3fe3b5a3-32cd-4d45-b0af-b874ca74ee38 req-3f8f20bf-98d9-45f5-9719-b70cdd15bdba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:08 np0005539550 nova_compute[257631]: 2025-11-29 08:12:08.491 257641 DEBUG oslo_concurrency.lockutils [req-3fe3b5a3-32cd-4d45-b0af-b874ca74ee38 req-3f8f20bf-98d9-45f5-9719-b70cdd15bdba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:08 np0005539550 nova_compute[257631]: 2025-11-29 08:12:08.491 257641 DEBUG oslo_concurrency.lockutils [req-3fe3b5a3-32cd-4d45-b0af-b874ca74ee38 req-3f8f20bf-98d9-45f5-9719-b70cdd15bdba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:08 np0005539550 nova_compute[257631]: 2025-11-29 08:12:08.491 257641 DEBUG nova.compute.manager [req-3fe3b5a3-32cd-4d45-b0af-b874ca74ee38 req-3f8f20bf-98d9-45f5-9719-b70cdd15bdba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No event matching network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 in dict_keys([('network-vif-plugged', '832f7d29-6f46-4a5f-bf46-483717a6c682')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:12:08 np0005539550 nova_compute[257631]: 2025-11-29 08:12:08.491 257641 WARNING nova.compute.manager [req-3fe3b5a3-32cd-4d45-b0af-b874ca74ee38 req-3f8f20bf-98d9-45f5-9719-b70cdd15bdba 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received unexpected event network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:12:08 np0005539550 nova_compute[257631]: 2025-11-29 08:12:08.812 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:12:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:12:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 147 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.1 MiB/s wr, 168 op/s
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.331 257641 DEBUG nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.331 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.332 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.332 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.332 257641 DEBUG nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No event matching network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f in dict_keys([('network-vif-plugged', '832f7d29-6f46-4a5f-bf46-483717a6c682')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.332 257641 WARNING nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received unexpected event network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.332 257641 DEBUG nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.333 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.333 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.333 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.333 257641 DEBUG nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Processing event network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.333 257641 DEBUG nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.334 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.334 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.334 257641 DEBUG oslo_concurrency.lockutils [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.334 257641 DEBUG nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No waiting events found dispatching network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.334 257641 WARNING nova.compute.manager [req-9ff50d6c-f9a1-42b5-ad3e-5e8e8f5ba383 req-4d822ccd-61c9-40f9-9bb8-2ad6c1419fdf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received unexpected event network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.335 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
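[annotation] The "Received unexpected event" warnings at 08:12:08-09 are benign: the dict_keys in the log shows the waiter registered only ('network-vif-plugged', '832f7d29-...'), so plug events for the other two VIFs find no waiter and are logged as unexpected; once the registered event arrives, the wait at 08:12:09.335 completes. A toy reproduction of that keyed lookup, with threading.Event standing in for nova's eventlet machinery (all names here are assumptions):

```python
# Toy reproduction of the waiter/event mismatch behind the
# "Received unexpected event" warnings above. threading.Event stands in
# for nova's eventlet-based machinery; all names are assumptions.
import threading

waiters = {('network-vif-plugged', '832f7d29-6f46-4a5f-bf46-483717a6c682'):
           threading.Event()}

def deliver(name, tag):
    ev = waiters.get((name, tag))
    if ev is None:
        print(f'WARNING: unexpected event {name}-{tag}')  # no one waiting
    else:
        ev.set()                                          # wakes the waiter

deliver('network-vif-plugged', 'fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f')  # warns
deliver('network-vif-plugged', '832f7d29-6f46-4a5f-bf46-483717a6c682')  # wakes
```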
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.339 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403929.339273, e5a6e53a-bb0e-40f5-838d-3780bd0263b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.340 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.341 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.345 257641 INFO nova.virt.libvirt.driver [-] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Instance spawned successfully.#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.345 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:09.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.569 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.573 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.579 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.580 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.580 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.581 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.581 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.582 257641 DEBUG nova.virt.libvirt.driver [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
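[annotation] The run of "Found default for hw_*" lines records the libvirt driver back-filling image properties the image never set, so later lifecycle operations (resize, reboot, migration) see the same device model the guest was spawned with. A sketch of that back-fill step; the metadata dict shape is an assumption, while the chosen values come straight from the log lines above:

```python
# Sketch: back-fill undefined image properties with the defaults the
# driver actually chose at spawn time, as the hw_* lines above show.
# The system_metadata dict shape is an assumption.
CHOSEN_DEFAULTS = {
    'hw_cdrom_bus': 'sata',
    'hw_disk_bus': 'virtio',
    'hw_input_bus': 'usb',
    'hw_pointer_model': 'usbtablet',
    'hw_video_model': 'virtio',
    'hw_vif_model': 'virtio',
}

def register_undefined_details(system_metadata: dict) -> None:
    for prop, value in CHOSEN_DEFAULTS.items():
        # Only record a default when the image left the property unset.
        system_metadata.setdefault(f'image_{prop}', value)
```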
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.613 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.769 257641 INFO nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Took 26.36 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.771 257641 DEBUG nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.931 257641 INFO nova.compute.manager [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Took 28.26 seconds to build instance.#033[00m
Nov 29 03:12:09 np0005539550 nova_compute[257631]: 2025-11-29 08:12:09.959 257641 DEBUG oslo_concurrency.lockutils [None req-dbd32110-c900-4793-878f-c0b8d9fd83d5 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:10.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.076 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.076 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 160 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 168 op/s
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.257 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.382 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.384 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.384 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.384 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.384 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.386 257641 INFO nova.compute.manager [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Terminating instance#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.386 257641 DEBUG nova.compute.manager [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.418 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.418 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:11 np0005539550 kernel: tapfc3d7dc7-8a (unregistering): left promiscuous mode
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.426 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.427 257641 INFO nova.compute.claims [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:12:11 np0005539550 NetworkManager[49039]: <info>  [1764403931.4296] device (tapfc3d7dc7-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00327|binding|INFO|Releasing lport fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f from this chassis (sb_readonly=0)
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00328|binding|INFO|Setting lport fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f down in Southbound
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00329|binding|INFO|Removing iface tapfc3d7dc7-8a ovn-installed in OVS
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.439 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.455 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 kernel: tap832f7d29-6f (unregistering): left promiscuous mode
Nov 29 03:12:11 np0005539550 NetworkManager[49039]: <info>  [1764403931.4620] device (tap832f7d29-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00330|binding|INFO|Releasing lport 832f7d29-6f46-4a5f-bf46-483717a6c682 from this chassis (sb_readonly=1)
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00331|binding|INFO|Removing iface tap832f7d29-6f ovn-installed in OVS
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00332|if_status|INFO|Dropped 2 log messages in last 148 seconds (most recently, 148 seconds ago) due to excessive rate
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00333|if_status|INFO|Not setting lport 832f7d29-6f46-4a5f-bf46-483717a6c682 down as sb is readonly
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 kernel: tapff5c1c4c-f4 (unregistering): left promiscuous mode
Nov 29 03:12:11 np0005539550 NetworkManager[49039]: <info>  [1764403931.4954] device (tapff5c1c4c-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.500 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00334|binding|INFO|Releasing lport ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 from this chassis (sb_readonly=1)
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00335|binding|INFO|Removing iface tapff5c1c4c-f4 ovn-installed in OVS
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.512 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:11.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.524 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00336|binding|INFO|Setting lport 832f7d29-6f46-4a5f-bf46-483717a6c682 down in Southbound
Nov 29 03:12:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:11Z|00337|binding|INFO|Setting lport ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 down in Southbound
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.535 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:4c:8f 10.100.0.68'], port_security=['fa:16:3e:71:4c:8f 10.100.0.68'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.68/24', 'neutron:device_id': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d68765f-77f4-4ef6-a997-f426f146a6a1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.537 158978 INFO neutron.agent.ovn.metadata.agent [-] Port fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f in datapath 65e8e3ac-03ff-4cf6-bafc-f73830b4558d unbound from our chassis#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.538 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 65e8e3ac-03ff-4cf6-bafc-f73830b4558d#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.540 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:b5:fa 10.100.0.21'], port_security=['fa:16:3e:53:b5:fa 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/24', 'neutron:device_id': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d68765f-77f4-4ef6-a997-f426f146a6a1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.542 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:c5:34 10.100.1.249'], port_security=['fa:16:3e:6e:c5:34 10.100.1.249'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.249/24', 'neutron:device_id': 'e5a6e53a-bb0e-40f5-838d-3780bd0263b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c5e5a2f42dc64b7cb0a22b666f160b1d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed2cdfce-a096-4b63-95b4-e5e189367262', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a730f2c7-6b8e-499e-a2e6-b9751c280d74, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=832f7d29-6f46-4a5f-bf46-483717a6c682) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
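[annotation] The three "Matched UPDATE" lines show the metadata agent's PortBindingUpdatedEvent firing as each Port_Binding row loses its chassis. ovsdbapp delivers these through RowEvent subclasses; a minimal sketch of one is below, where the match condition is a simplification of the agent's real checks:

```python
# Sketch: an ovsdbapp RowEvent that fires when a Port_Binding row is
# unbound from a chassis, loosely mirroring PortBindingUpdatedEvent.
# The match condition is a simplified assumption.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortUnboundEvent(row_event.RowEvent):
    def __init__(self):
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Old row still carried a chassis, new row does not: released.
        return bool(getattr(old, 'chassis', None)) and not row.chassis

    def run(self, event, row, old):
        print(f'Port {row.logical_port} unbound from our chassis')
```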
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.553 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[843326fb-52a7-478e-a23a-d6d73eb2c1df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000052.scope: Deactivated successfully.
Nov 29 03:12:11 np0005539550 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000052.scope: Consumed 2.790s CPU time.
Nov 29 03:12:11 np0005539550 systemd-machined[216673]: Machine qemu-40-instance-00000052 terminated.
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.582 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[158cacea-adcf-49f0-bfe7-4724a68dfc74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.585 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[84bc36a3-e6ac-4d38-bc36-04c1e1e78235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 NetworkManager[49039]: <info>  [1764403931.6047] manager: (tapfc3d7dc7-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.611 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.611 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae77b58-ba28-4e80-9830-d8633c273b1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 NetworkManager[49039]: <info>  [1764403931.6168] manager: (tap832f7d29-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/146)
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.625 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 NetworkManager[49039]: <info>  [1764403931.6305] manager: (tapff5c1c4c-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/147)
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.632 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[62ab8eed-fb44-493e-817e-172eebb5fe67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap65e8e3ac-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:a6:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 83], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697338, 'reachable_time': 34525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309936, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.648 257641 INFO nova.virt.libvirt.driver [-] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Instance destroyed successfully.#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.649 257641 DEBUG nova.objects.instance [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lazy-loading 'resources' on Instance uuid e5a6e53a-bb0e-40f5-838d-3780bd0263b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.654 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[edf681b1-4f29-426e-86bf-628b41781c97]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap65e8e3ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697349, 'tstamp': 697349}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309953, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap65e8e3ac-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697352, 'tstamp': 697352}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309953, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.656 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65e8e3ac-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.657 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.664 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.665 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65e8e3ac-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.665 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.665 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap65e8e3ac-00, col_values=(('external_ids', {'iface-id': '7e233840-f009-4337-9f5b-4e18a736769b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.666 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
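
The three OVSDB transactions above move the agent-side tap port off br-ex, ensure it sits on br-int, and stamp external_ids:iface-id so ovn-controller can bind it; "Transaction caused no change" just means the database was already in the desired state. The ovs-vsctl equivalents, sketched via subprocess (the idempotent flags mirror if_exists/may_exist in the log; assumes ovs-vsctl is on PATH):

    # ovs-vsctl equivalents of the DelPort/AddPort/DbSet commands above.
    import subprocess

    port = 'tap65e8e3ac-00'
    iface_id = '7e233840-f009-4337-9f5b-4e18a736769b'
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],
        ['ovs-vsctl', 'set', 'Interface', port,
         f'external_ids:iface-id={iface_id}'],
    ):
        subprocess.run(cmd, check=True)
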
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.667 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 in datapath 65e8e3ac-03ff-4cf6-bafc-f73830b4558d unbound from our chassis#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.668 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 65e8e3ac-03ff-4cf6-bafc-f73830b4558d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.669 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b41225-b62e-4788-8e91-8c032d25e236]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.669 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d namespace which is not needed anymore#033[00m
Nov 29 03:12:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:11 np0005539550 neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d[309803]: [NOTICE]   (309807) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:11 np0005539550 neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d[309803]: [NOTICE]   (309807) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:11 np0005539550 neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d[309803]: [WARNING]  (309807) : Exiting Master process...
Nov 29 03:12:11 np0005539550 neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d[309803]: [ALERT]    (309807) : Current worker (309809) exited with code 143 (Terminated)
Nov 29 03:12:11 np0005539550 neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d[309803]: [WARNING]  (309807) : All workers exited. Exiting... (0)
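
Exit code 143 here is not a crash: processes killed by a signal are conventionally reported as 128 plus the signal number, and 143 - 128 = 15 is SIGTERM, i.e. the haproxy worker was terminated deliberately as the proxy was stopped. A one-liner to decode such codes:

    # Decode "exited with code 143": codes above 128 encode 128 + signal.
    import signal

    code = 143
    if code > 128:
        print(signal.Signals(code - 128).name)  # -> SIGTERM
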
Nov 29 03:12:11 np0005539550 systemd[1]: libpod-84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a.scope: Deactivated successfully.
Nov 29 03:12:11 np0005539550 podman[309981]: 2025-11-29 08:12:11.79649739 +0000 UTC m=+0.045975284 container died 84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:12:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-94ae33368d183b6f6f19915aa5e012adf29adb05ae20f3345203af892b760e1c-merged.mount: Deactivated successfully.
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.826 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 podman[309981]: 2025-11-29 08:12:11.838993484 +0000 UTC m=+0.088471358 container cleanup 84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:11 np0005539550 systemd[1]: libpod-conmon-84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a.scope: Deactivated successfully.
Nov 29 03:12:11 np0005539550 podman[310012]: 2025-11-29 08:12:11.900373461 +0000 UTC m=+0.041216583 container remove 84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.906 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2abceeee-81ca-4697-9488-a5a896615c86]: (4, ('Sat Nov 29 08:12:11 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d (84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a)\n84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a\nSat Nov 29 08:12:11 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d (84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a)\n84716473025b5126de957a188083443ffb45ed953dec03257fed7cb65aed929a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
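
This privsep reply captures the wrapper script's stdout: it stops and then deletes the per-network haproxy container by name, and podman echoes the container ID after each operation. The same two steps sketched with subprocess, assuming podman is on PATH:

    # Stop and remove the per-network metadata haproxy container,
    # mirroring the script output captured in the privsep reply above.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d'
    subprocess.run(['podman', 'stop', name], check=True)  # prints the ID
    subprocess.run(['podman', 'rm', name], check=True)    # prints the ID
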
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.907 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2083a796-4d80-4ba2-bca9-2ed80b55098c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.908 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65e8e3ac-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 kernel: tap65e8e3ac-00: left promiscuous mode
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.932 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.935 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06178f09-110f-4369-9384-b3853307e3b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.942 257641 DEBUG nova.virt.libvirt.vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:09Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.943 257641 DEBUG nova.network.os_vif_util [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "address": "fa:16:3e:71:4c:8f", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.68", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3d7dc7-8a", "ovs_interfaceid": "fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.944 257641 DEBUG nova.network.os_vif_util [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:4c:8f,bridge_name='br-int',has_traffic_filtering=True,id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3d7dc7-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.944 257641 DEBUG os_vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:4c:8f,bridge_name='br-int',has_traffic_filtering=True,id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3d7dc7-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
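
This is the standard os_vif unplug flow: nova converts its own VIF dict into a VIFOpenVSwitch object (the two os_vif_util lines above), then hands it to os_vif.unplug(), which for the 'ovs' plugin removes the port from br-int. A minimal sketch of driving os_vif directly, with field values copied from the converted object; treat it as an illustration of the API shape rather than a drop-in for nova's code:

    # Unplug an OVS VIF via os_vif, as implied by the log above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    net = network.Network(id='65e8e3ac-03ff-4cf6-bafc-f73830b4558d',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(
        id='fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f',
        address='fa:16:3e:71:4c:8f',
        vif_name='tapfc3d7dc7-8a',
        bridge_name='br-int',
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f'))
    inst = instance_info.InstanceInfo(
        uuid='e5a6e53a-bb0e-40f5-838d-3780bd0263b6',
        name='tempest-ServersTestMultiNic-server-734926424')
    os_vif.unplug(v, inst)
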
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.946 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.947 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc3d7dc7-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.949 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.951 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3097d4cc-80fa-4b38-b92d-30ec42815772]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.952 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc40034-09ca-469b-afe1-c02fa88d4985]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.953 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.959 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.961 257641 INFO os_vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:4c:8f,bridge_name='br-int',has_traffic_filtering=True,id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3d7dc7-8a')#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.962 257641 DEBUG nova.virt.libvirt.vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:09Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.963 257641 DEBUG nova.network.os_vif_util [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.963 257641 DEBUG nova.network.os_vif_util [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:c5:34,bridge_name='br-int',has_traffic_filtering=True,id=832f7d29-6f46-4a5f-bf46-483717a6c682,network=Network(f0e99e9e-47dd-4bd5-b003-eabd88928a95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832f7d29-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.964 257641 DEBUG os_vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:c5:34,bridge_name='br-int',has_traffic_filtering=True,id=832f7d29-6f46-4a5f-bf46-483717a6c682,network=Network(f0e99e9e-47dd-4bd5-b003-eabd88928a95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832f7d29-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.965 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.965 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap832f7d29-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.966 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.967 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.968 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1963ce-28fa-49cf-8f89-69f77cd88f0f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697331, 'reachable_time': 28291, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310038, 'error': None, 'target': 'ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 systemd[1]: run-netns-ovnmeta\x2d65e8e3ac\x2d03ff\x2d4cf6\x2dbafc\x2df73830b4558d.mount: Deactivated successfully.
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.971 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.972 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7d74ae-a490-4e76-ad60-1ea7c2b2f714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
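
With no VIF ports left on the network, the privileged helper removes the namespace itself; the run-netns mount unit deactivating above is systemd noticing the /run/netns bind mount go away. Sketched with pyroute2's netns helper, which neutron's privileged ip_lib builds on:

    # Delete a named network namespace, as remove_netns reports above.
    from pyroute2 import netns

    netns.remove('ovnmeta-65e8e3ac-03ff-4cf6-bafc-f73830b4558d')
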
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.972 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 832f7d29-6f46-4a5f-bf46-483717a6c682 in datapath f0e99e9e-47dd-4bd5-b003-eabd88928a95 unbound from our chassis#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.974 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0e99e9e-47dd-4bd5-b003-eabd88928a95, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.974 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.975 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2a1a48-65d2-4f82-98fb-024cec71e043]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:11.975 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95 namespace which is not needed anymore#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.977 257641 INFO os_vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:c5:34,bridge_name='br-int',has_traffic_filtering=True,id=832f7d29-6f46-4a5f-bf46-483717a6c682,network=Network(f0e99e9e-47dd-4bd5-b003-eabd88928a95),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap832f7d29-6f')#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.978 257641 DEBUG nova.virt.libvirt.vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:11:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-734926424',display_name='tempest-ServersTestMultiNic-server-734926424',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-734926424',id=82,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c5e5a2f42dc64b7cb0a22b666f160b1d',ramdisk_id='',reservation_id='r-7mn86exo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-2054665875',owner_user_name='tempest-ServersTestMultiNic-2054665875-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:12:09Z,user_data=None,user_id='247a4abb59cd459fa66a891e998e548c',uuid=e5a6e53a-bb0e-40f5-838d-3780bd0263b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.978 257641 DEBUG nova.network.os_vif_util [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converting VIF {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.979 257641 DEBUG nova.network.os_vif_util [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:b5:fa,bridge_name='br-int',has_traffic_filtering=True,id=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff5c1c4c-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.979 257641 DEBUG os_vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b5:fa,bridge_name='br-int',has_traffic_filtering=True,id=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff5c1c4c-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.980 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.980 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff5c1c4c-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.982 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.983 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:11 np0005539550 nova_compute[257631]: 2025-11-29 08:12:11.984 257641 INFO os_vif [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:b5:fa,bridge_name='br-int',has_traffic_filtering=True,id=ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3,network=Network(65e8e3ac-03ff-4cf6-bafc-f73830b4558d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff5c1c4c-f4')#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.009 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
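
nova's RBD image backend refreshes pool capacity by shelling out to ceph df; oslo_concurrency.processutils logs the command here and its return code further down ("returned: 0 in 0.469s"). A sketch of the same probe, assuming a reachable cluster and the keyring/conf paths shown:

    # Run the same capacity probe nova logs above and parse the JSON.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
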
Nov 29 03:12:12 np0005539550 neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95[309884]: [NOTICE]   (309888) : haproxy version is 2.8.14-c23fe91
Nov 29 03:12:12 np0005539550 neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95[309884]: [NOTICE]   (309888) : path to executable is /usr/sbin/haproxy
Nov 29 03:12:12 np0005539550 neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95[309884]: [WARNING]  (309888) : Exiting Master process...
Nov 29 03:12:12 np0005539550 neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95[309884]: [ALERT]    (309888) : Current worker (309890) exited with code 143 (Terminated)
Nov 29 03:12:12 np0005539550 neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95[309884]: [WARNING]  (309888) : All workers exited. Exiting... (0)
Nov 29 03:12:12 np0005539550 systemd[1]: libpod-abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9.scope: Deactivated successfully.
Nov 29 03:12:12 np0005539550 podman[310077]: 2025-11-29 08:12:12.117305636 +0000 UTC m=+0.051824573 container died abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9-userdata-shm.mount: Deactivated successfully.
Nov 29 03:12:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f68dd1a39eae5951a8857073031083824f6d01d6ab9a536ea123e3d048e8d6e7-merged.mount: Deactivated successfully.
Nov 29 03:12:12 np0005539550 podman[310077]: 2025-11-29 08:12:12.161823712 +0000 UTC m=+0.096342649 container cleanup abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:12:12 np0005539550 systemd[1]: libpod-conmon-abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9.scope: Deactivated successfully.
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.221 257641 DEBUG nova.compute.manager [req-4494fe93-7a49-4104-aca9-872fff439474 req-83b4806c-e908-4b27-9373-51be04d39d44 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-unplugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.222 257641 DEBUG oslo_concurrency.lockutils [req-4494fe93-7a49-4104-aca9-872fff439474 req-83b4806c-e908-4b27-9373-51be04d39d44 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.222 257641 DEBUG oslo_concurrency.lockutils [req-4494fe93-7a49-4104-aca9-872fff439474 req-83b4806c-e908-4b27-9373-51be04d39d44 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.223 257641 DEBUG oslo_concurrency.lockutils [req-4494fe93-7a49-4104-aca9-872fff439474 req-83b4806c-e908-4b27-9373-51be04d39d44 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.224 257641 DEBUG nova.compute.manager [req-4494fe93-7a49-4104-aca9-872fff439474 req-83b4806c-e908-4b27-9373-51be04d39d44 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No waiting events found dispatching network-vif-unplugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.224 257641 DEBUG nova.compute.manager [req-4494fe93-7a49-4104-aca9-872fff439474 req-83b4806c-e908-4b27-9373-51be04d39d44 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-unplugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
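
Each network-vif-unplugged event is dispatched through nova's per-instance event table under a "<uuid>-events" lock; the acquire/release pair held for 0.000s plus the "No waiting events found" line mean nothing was registered to wait on this event, so it is only logged. The locking pattern, sketched with oslo_concurrency (the registry dict below is hypothetical; only the lock naming and usage mirror pop_instance_event):

    # Per-instance event lock pattern visible in the lockutils lines above.
    from oslo_concurrency import lockutils

    waiting_events = {}  # hypothetical: instance uuid -> {event: callback}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(f'{instance_uuid}-events'):
            return waiting_events.get(instance_uuid, {}).pop(event_name, None)
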
Nov 29 03:12:12 np0005539550 podman[310122]: 2025-11-29 08:12:12.231745747 +0000 UTC m=+0.046080647 container remove abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.237 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[28cbc4d4-c10f-466c-8f78-25538c5df235]: (4, ('Sat Nov 29 08:12:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95 (abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9)\nabc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9\nSat Nov 29 08:12:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95 (abc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9)\nabc72f1d67f49bc13f1cfbef4fdfa1adba02b6e026d71ff114e0b119a8a9c1e9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.239 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[453c34d1-3ac3-41da-a7f4-25f3cf993f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.240 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0e99e9e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.249 257641 DEBUG nova.compute.manager [req-d0cb43c6-1d18-4541-af90-acda7252741c req-e81f00cc-4c55-494a-a590-a73af44d2ce8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-unplugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.249 257641 DEBUG oslo_concurrency.lockutils [req-d0cb43c6-1d18-4541-af90-acda7252741c req-e81f00cc-4c55-494a-a590-a73af44d2ce8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.249 257641 DEBUG oslo_concurrency.lockutils [req-d0cb43c6-1d18-4541-af90-acda7252741c req-e81f00cc-4c55-494a-a590-a73af44d2ce8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.249 257641 DEBUG oslo_concurrency.lockutils [req-d0cb43c6-1d18-4541-af90-acda7252741c req-e81f00cc-4c55-494a-a590-a73af44d2ce8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.250 257641 DEBUG nova.compute.manager [req-d0cb43c6-1d18-4541-af90-acda7252741c req-e81f00cc-4c55-494a-a590-a73af44d2ce8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No waiting events found dispatching network-vif-unplugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.250 257641 DEBUG nova.compute.manager [req-d0cb43c6-1d18-4541-af90-acda7252741c req-e81f00cc-4c55-494a-a590-a73af44d2ce8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-unplugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.292 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:12 np0005539550 kernel: tapf0e99e9e-40: left promiscuous mode
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.309 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.310 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.315 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[030590eb-09b5-4b11-ae3a-91df6f5a1ff6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.335 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[be7fd658-91b8-4a39-9dbe-202c8b8b3ada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.336 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3da700-48fc-40c0-8d3c-df08043a1751]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.354 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d7299129-712a-4cb0-92fb-2886c22e4d5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697430, 'reachable_time': 39718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310138, 'error': None, 'target': 'ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.356 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0e99e9e-47dd-4bd5-b003-eabd88928a95 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:12:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:12.356 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9ec11e-cfc7-48c9-9c8a-905fd253ffc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:12.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
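[editor's note] These beast access lines recur roughly once a second per controller address: anonymous HEAD / probes, i.e. load-balancer health checks against radosgw. A small sketch for pulling the fields out of this line format; the regex is written against the line above, not taken from radosgw itself:

```python
import re

# Fields in the beast access-log line: request pointer, remote address,
# user, timestamp, request line, HTTP status, bytes sent, latency.
BEAST = re.compile(
    r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<req>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:12:12.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group('addr'), m.group('status'), m.group('latency'))  # 192.168.122.100 200 0.000000000
```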
Nov 29 03:12:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334762337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.478 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.483 257641 DEBUG nova.compute.provider_tree [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.505 257641 DEBUG nova.scheduler.client.report [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
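[editor's note] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio. A quick check against the values logged above (a worked sketch, not Nova code; min_unit/step_size fields elided):

```python
# Inventory as reported by the resource tracker in the log above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```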
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.512 257641 INFO nova.virt.libvirt.driver [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Deleting instance files /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6_del#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.513 257641 INFO nova.virt.libvirt.driver [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Deletion of /var/lib/nova/instances/e5a6e53a-bb0e-40f5-838d-3780bd0263b6_del complete#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.562 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.563 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.667 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.668 257641 DEBUG nova.network.neutron [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.683 257641 INFO nova.compute.manager [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.683 257641 DEBUG oslo.service.loopingcall [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.684 257641 DEBUG nova.compute.manager [-] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.684 257641 DEBUG nova.network.neutron [-] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.715 257641 INFO nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.762 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:12:12 np0005539550 systemd[1]: run-netns-ovnmeta\x2df0e99e9e\x2d47dd\x2d4bd5\x2db003\x2deabd88928a95.mount: Deactivated successfully.
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.902 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.903 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.904 257641 INFO nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Creating image(s)#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.930 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.960 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.985 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:12 np0005539550 nova_compute[257631]: 2025-11-29 08:12:12.988 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.050 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
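[editor's note] Nova probes the cached base image with qemu-img info under oslo.concurrency's prlimit wrapper, which caps the child's address space (--as, 1 GiB here) and CPU seconds (--cpu, 30) so a malformed image cannot wedge the compute agent. A minimal sketch reproducing the logged invocation with subprocess; the command line and path are copied from the log:

```python
import json
import subprocess

base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'

cmd = [
    '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
    '--as=1073741824', '--cpu=30', '--',          # resource caps, as logged
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info', base, '--force-share', '--output=json',
]
info = json.loads(subprocess.check_output(cmd))
print(info['format'], info['virtual-size'])       # keys per qemu-img's JSON output
```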
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.051 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.052 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.052 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.078 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.081 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.106 257641 DEBUG nova.policy [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5e3ade3963d47be97b545b2e3779b6b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b8899f76f554afc96bb2441424e5a77', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:12:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 160 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 900 KiB/s wr, 146 op/s
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.344 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.421 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] resizing rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
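[editor's note] With the RBD image backend, spawning boots in two steps visible above: the flat base file is imported into the vms pool via the rbd CLI (08:12:13.081-13.344), then the image is grown to the flavor's root disk size through librbd (08:12:13.421, 1073741824 bytes = 1 GiB). A hedged sketch of the equivalent flow; the CLI call mirrors the logged command, and the resize uses the standard python-rados/python-rbd bindings rather than Nova's rbd_utils:

```python
import subprocess
import rados
import rbd

base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
name = '19e85fae-c57e-409b-95f7-b53ddb4c928e_disk'

# Step 1: import the base file as a format-2 RBD image (command as logged).
subprocess.check_call([
    'rbd', 'import', '--pool', 'vms', base, name,
    '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
])

# Step 2: grow it to the requested root disk size (1 GiB in the log).
with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:
        with rbd.Image(ioctx, name) as image:
            image.resize(1073741824)
```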
Nov 29 03:12:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:13.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.524 257641 DEBUG nova.objects.instance [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.547 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.548 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Ensure instance console log exists: /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.548 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.549 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:13 np0005539550 nova_compute[257631]: 2025-11-29 08:12:13.549 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
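[editor's note] The Acquiring/acquired/released triplets throughout this log are emitted by oslo.concurrency's lock wrapper, which records how long the caller waited for and then held a named semaphore. A minimal sketch of the pattern that produces them, with the lock name taken from the lines above:

```python
from oslo_concurrency import lockutils

# Equivalent of the _allocate_mdevs critical section above: the wrapper
# logs 'Acquiring lock ...', 'acquired ... :: waited Ns' and
# '"released" ... :: held Ns' at DEBUG, as seen in the log.
@lockutils.synchronized('vgpu_resources')
def allocate_mdevs():
    pass  # body runs with the named lock held

# The same thing without a decorator:
with lockutils.lock('vgpu_resources'):
    pass
```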
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.366 257641 DEBUG nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.366 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.366 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.367 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.367 257641 DEBUG nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No waiting events found dispatching network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.367 257641 WARNING nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received unexpected event network-vif-plugged-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.367 257641 DEBUG nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-unplugged-832f7d29-6f46-4a5f-bf46-483717a6c682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.367 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.368 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.368 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.368 257641 DEBUG nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No waiting events found dispatching network-vif-unplugged-832f7d29-6f46-4a5f-bf46-483717a6c682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.368 257641 DEBUG nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-unplugged-832f7d29-6f46-4a5f-bf46-483717a6c682 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.368 257641 DEBUG nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.369 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.369 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.369 257641 DEBUG oslo_concurrency.lockutils [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.369 257641 DEBUG nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No waiting events found dispatching network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.369 257641 WARNING nova.compute.manager [req-08133a88-fcb8-4b10-bdbf-1ae33c7cfc7d req-4cea6f10-8feb-4f1f-857a-f8cb6b3b3f04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received unexpected event network-vif-plugged-832f7d29-6f46-4a5f-bf46-483717a6c682 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.387 257641 DEBUG nova.compute.manager [req-170a6e7b-8aed-4841-a454-57a4c2e69f01 req-9daab773-180a-4475-8bd5-a6f997db2245 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.388 257641 DEBUG oslo_concurrency.lockutils [req-170a6e7b-8aed-4841-a454-57a4c2e69f01 req-9daab773-180a-4475-8bd5-a6f997db2245 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.388 257641 DEBUG oslo_concurrency.lockutils [req-170a6e7b-8aed-4841-a454-57a4c2e69f01 req-9daab773-180a-4475-8bd5-a6f997db2245 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.388 257641 DEBUG oslo_concurrency.lockutils [req-170a6e7b-8aed-4841-a454-57a4c2e69f01 req-9daab773-180a-4475-8bd5-a6f997db2245 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.388 257641 DEBUG nova.compute.manager [req-170a6e7b-8aed-4841-a454-57a4c2e69f01 req-9daab773-180a-4475-8bd5-a6f997db2245 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] No waiting events found dispatching network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:12:14 np0005539550 nova_compute[257631]: 2025-11-29 08:12:14.389 257641 WARNING nova.compute.manager [req-170a6e7b-8aed-4841-a454-57a4c2e69f01 req-9daab773-180a-4475-8bd5-a6f997db2245 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received unexpected event network-vif-plugged-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:12:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:14.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 184 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.9 MiB/s wr, 236 op/s
Nov 29 03:12:15 np0005539550 podman[310361]: 2025-11-29 08:12:15.309036003 +0000 UTC m=+0.045035670 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 03:12:15 np0005539550 podman[310360]: 2025-11-29 08:12:15.315074457 +0000 UTC m=+0.053835575 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
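[editor's note] Both health_status lines come from podman's periodic healthchecks: the configured test ('/openstack/healthcheck', mounted read-only into each container per the config_data above) is run inside the container and the verdict is logged with the container metadata. A one-liner to trigger the same check by hand, shown as a sketch; the container name is taken from the log, and exit status 0 corresponds to health_status=healthy:

```python
import subprocess

# Manually run the configured healthcheck for the container logged above.
subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'], check=False)
```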
Nov 29 03:12:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:15.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:15 np0005539550 nova_compute[257631]: 2025-11-29 08:12:15.983 257641 DEBUG nova.compute.manager [req-33899327-d132-434d-9b5a-dcc85718187f req-cbfc433a-bfa6-4502-8dcb-1b142778cf3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-deleted-fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:15 np0005539550 nova_compute[257631]: 2025-11-29 08:12:15.983 257641 INFO nova.compute.manager [req-33899327-d132-434d-9b5a-dcc85718187f req-cbfc433a-bfa6-4502-8dcb-1b142778cf3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Neutron deleted interface fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:12:15 np0005539550 nova_compute[257631]: 2025-11-29 08:12:15.983 257641 DEBUG nova.network.neutron [req-33899327-d132-434d-9b5a-dcc85718187f req-cbfc433a-bfa6-4502-8dcb-1b142778cf3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Updating instance_info_cache with network_info: [{"id": "832f7d29-6f46-4a5f-bf46-483717a6c682", "address": "fa:16:3e:6e:c5:34", "network": {"id": "f0e99e9e-47dd-4bd5-b003-eabd88928a95", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-541108229", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.249", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap832f7d29-6f", "ovs_interfaceid": "832f7d29-6f46-4a5f-bf46-483717a6c682", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "address": "fa:16:3e:53:b5:fa", "network": {"id": "65e8e3ac-03ff-4cf6-bafc-f73830b4558d", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-9264862", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c5e5a2f42dc64b7cb0a22b666f160b1d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff5c1c4c-f4", "ovs_interfaceid": "ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:16 np0005539550 nova_compute[257631]: 2025-11-29 08:12:16.043 257641 DEBUG nova.compute.manager [req-33899327-d132-434d-9b5a-dcc85718187f req-cbfc433a-bfa6-4502-8dcb-1b142778cf3a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Detach interface failed, port_id=fc3d7dc7-8a7d-4bdd-bb9e-cb8f8f99123f, reason: Instance e5a6e53a-bb0e-40f5-838d-3780bd0263b6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:12:16 np0005539550 nova_compute[257631]: 2025-11-29 08:12:16.253 257641 DEBUG nova.network.neutron [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Successfully created port: d8b38a34-8274-43e4-8ebd-3924de5c5ba7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:12:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:16.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:16 np0005539550 nova_compute[257631]: 2025-11-29 08:12:16.828 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:16 np0005539550 nova_compute[257631]: 2025-11-29 08:12:16.982 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 180 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 243 op/s
Nov 29 03:12:17 np0005539550 nova_compute[257631]: 2025-11-29 08:12:17.481 257641 DEBUG nova.network.neutron [-] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:17 np0005539550 nova_compute[257631]: 2025-11-29 08:12:17.509 257641 INFO nova.compute.manager [-] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Took 4.82 seconds to deallocate network for instance.#033[00m
Nov 29 03:12:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:17.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:17 np0005539550 nova_compute[257631]: 2025-11-29 08:12:17.598 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:17 np0005539550 nova_compute[257631]: 2025-11-29 08:12:17.599 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:17 np0005539550 nova_compute[257631]: 2025-11-29 08:12:17.724 257641 DEBUG oslo_concurrency.processutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218698782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.155 257641 DEBUG oslo_concurrency.processutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.161 257641 DEBUG nova.compute.provider_tree [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.190 257641 DEBUG nova.scheduler.client.report [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.217 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.221 257641 DEBUG nova.compute.manager [req-550207ce-2d03-408e-bd0c-e8531d1ed966 req-1612eca6-7799-49b6-9462-730ec667df28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-deleted-832f7d29-6f46-4a5f-bf46-483717a6c682 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.222 257641 DEBUG nova.compute.manager [req-550207ce-2d03-408e-bd0c-e8531d1ed966 req-1612eca6-7799-49b6-9462-730ec667df28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Received event network-vif-deleted-ff5c1c4c-f4e8-4225-96e5-27e39f15c9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.255 257641 INFO nova.scheduler.client.report [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Deleted allocations for instance e5a6e53a-bb0e-40f5-838d-3780bd0263b6#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.354 257641 DEBUG oslo_concurrency.lockutils [None req-3e6363ba-7e55-4648-8f5e-92f1c17d641f 247a4abb59cd459fa66a891e998e548c c5e5a2f42dc64b7cb0a22b666f160b1d - - default default] Lock "e5a6e53a-bb0e-40f5-838d-3780bd0263b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:18.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.569 257641 DEBUG nova.network.neutron [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Successfully updated port: d8b38a34-8274-43e4-8ebd-3924de5c5ba7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.619 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.620 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:18 np0005539550 nova_compute[257631]: 2025-11-29 08:12:18.620 257641 DEBUG nova.network.neutron [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:12:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:18.944 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:18.945 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:18.945 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:19 np0005539550 nova_compute[257631]: 2025-11-29 08:12:19.003 257641 DEBUG nova.network.neutron [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 181 MiB data, 748 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 229 op/s
Nov 29 03:12:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:19.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029813055439233205 of space, bias 1.0, pg target 0.8943916631769961 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:12:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
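[editor's note] Each _maybe_adjust pass above recomputes a raw PG target as usage_ratio x base x bias. From the logged numbers the base works out to exactly 300, which is consistent with 3 OSDs at the default 100 PGs per OSD (an inference; the log does not print it). The raw target is then quantized, subject to per-pool minimums and the autoscaler's change threshold, which is why every pool stays at its current pg_num here. Checking the multiplication against two of the logged pools:

```python
# base = 300 is an assumption: 3 OSDs x mon_target_pg_per_osd (100).
base = 300.0

vms  = 0.0029813055439233205 * base * 1.0   # bias 1.0 -> 0.8943916631769961 (matches log)
meta = 1.4540294062907128e-06 * base * 4.0  # bias 4.0 -> 0.0017448352875488555 (matches log)
print(vms, meta)
```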
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.402 257641 DEBUG nova.compute.manager [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.402 257641 DEBUG nova.compute.manager [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing instance network info cache due to event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.403 257641 DEBUG oslo_concurrency.lockutils [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:12:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:20.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.953 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.954 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.954 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.955 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:12:20 np0005539550 nova_compute[257631]: 2025-11-29 08:12:20.955 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.176 257641 DEBUG nova.network.neutron [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 219 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 241 op/s
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.243 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.245 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.246 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance network_info: |[{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.246 257641 DEBUG oslo_concurrency.lockutils [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.263 257641 DEBUG nova.network.neutron [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.266 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Start _get_guest_xml network_info=[{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.278 257641 WARNING nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.286 257641 DEBUG nova.virt.libvirt.host [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.287 257641 DEBUG nova.virt.libvirt.host [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.290 257641 DEBUG nova.virt.libvirt.host [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.290 257641 DEBUG nova.virt.libvirt.host [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.291 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.292 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.292 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.293 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.293 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.293 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.293 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.294 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.294 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.294 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.294 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.295 257641 DEBUG nova.virt.hardware [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
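
With no topology constraints from flavor or image (every limit and preference above logs as 0:0:0, falling back to caps of 65536), the driver enumerates the (sockets, cores, threads) factorizations of the vCPU count and sorts them by preference; for the 1-vCPU m1.nano that collapses to the single 1:1:1 topology seen in the log. A toy version of the enumeration step, not nova's actual implementation:

    def possible_topologies(vcpus):
        # All (sockets, cores, threads) triples whose product is exactly vcpus.
        topos = []
        for sockets in (s for s in range(1, vcpus + 1) if vcpus % s == 0):
            rest = vcpus // sockets
            for cores in (c for c in range(1, rest + 1) if rest % c == 0):
                topos.append((sockets, cores, rest // cores))
        return topos

    print(possible_topologies(1))   # [(1, 1, 1)] -- the one topology logged
    print(possible_topologies(4))   # six triples, from (1, 1, 4) to (4, 1, 1)
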
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.297 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647786111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.392 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
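
Note how both nova threads gather Ceph state by shelling out to the CLI under the dedicated client identity rather than holding a librados connection: the resource tracker runs ceph df to size the RBD pool backing instance disks, and the ~0.44 s round trip brackets the mon_command dispatch that ceph-mon logs in between. The same probe reduced to its essentials, assuming only the CLI and the openstack keyring visible in the log:

    import json
    import subprocess

    # The exact command the resource tracker logged above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
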
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:21.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.553 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.554 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4473MB free_disk=20.925708770751953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.555 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.555 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.673 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 19e85fae-c57e-409b-95f7-b53ddb4c928e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.674 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.674 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:12:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906560278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.745 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.772 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
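
The rbd_utils line is nova probing the vms pool for an existing config-drive image before it creates one (the guest XML further down attaches the same _disk.config image from the vms pool, so the image gets built right after this check). An equivalent existence check with the python rbd bindings, for illustration only; the pool and client names come from the log, the rest is not nova's code:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            img = rbd.Image(ioctx, "19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config",
                            read_only=True)
            img.close()
            print("image exists")
        except rbd.ImageNotFound:
            print("image does not exist")   # what nova logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
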
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.776 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.830 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.833 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:21 np0005539550 nova_compute[257631]: 2025-11-29 08:12:21.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:12:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723823620' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.226 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.228 257641 DEBUG nova.virt.libvirt.vif [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.229 257641 DEBUG nova.network.os_vif_util [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.230 257641 DEBUG nova.network.os_vif_util [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.231 257641 DEBUG nova.objects.instance [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:12:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:12:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2631974317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.276 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.282 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.290 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <uuid>19e85fae-c57e-409b-95f7-b53ddb4c928e</uuid>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <name>instance-00000055</name>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestOtherB-server-1456117084</nova:name>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:12:21</nova:creationTime>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <nova:port uuid="d8b38a34-8274-43e4-8ebd-3924de5c5ba7">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <entry name="serial">19e85fae-c57e-409b-95f7-b53ddb4c928e</entry>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <entry name="uuid">19e85fae-c57e-409b-95f7-b53ddb4c928e</entry>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:de:2f:2f"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <target dev="tapd8b38a34-82"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/console.log" append="off"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:12:22 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:12:22 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:12:22 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:12:22 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
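
The document above is the complete domain nova hands to libvirt: a q35 KVM guest whose root disk and config drive are both rbd network disks pointing at all three monitors and authenticated through a libvirt ceph secret, with the OVN-managed tap attached as a plain ethernet interface. Defining and starting such a domain via the libvirt python bindings looks roughly like this; the file path is a hypothetical stand-in for the dump above, and this is a sketch of the libvirt step, not nova's driver code:

    import libvirt

    # Hypothetical path holding the <domain> XML printed in the log above.
    xml = open("/tmp/instance-00000055.xml").read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.create()                # boot the guest
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()
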
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.293 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Preparing to wait for external event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.293 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.294 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.294 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.295 257641 DEBUG nova.virt.libvirt.vif [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:12:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.295 257641 DEBUG nova.network.os_vif_util [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.296 257641 DEBUG nova.network.os_vif_util [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.296 257641 DEBUG os_vif [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.296 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.297 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.297 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.299 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8b38a34-82, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.300 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8b38a34-82, col_values=(('external_ids', {'iface-id': 'd8b38a34-8274-43e4-8ebd-3924de5c5ba7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:2f:2f', 'vm-uuid': '19e85fae-c57e-409b-95f7-b53ddb4c928e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
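[annotation] AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp IDL commands run against the local Open_vSwitch database; the log shows them as separate single-command transactions (the first a no-op because br-int already exists). A hedged equivalent, batched into one transaction for brevity (socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapd8b38a34-82', may_exist=True))
        # external_ids on the Interface row are what ovn-controller matches
        # against the Port_Binding's logical_port when claiming the port
        txn.add(api.db_set(
            'Interface', 'tapd8b38a34-82',
            ('external_ids',
             {'iface-id': 'd8b38a34-8274-43e4-8ebd-3924de5c5ba7',
              'attached-mac': 'fa:16:3e:de:2f:2f'})))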
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:22 np0005539550 NetworkManager[49039]: <info>  [1764403942.3020] manager: (tapd8b38a34-82): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.303 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.307 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.308 257641 INFO os_vif [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82')#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.311 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
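[annotation] The inventory record above is what Placement uses to bound scheduling on this node; usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the logged values:

    # capacity = (total - reserved) * allocation_ratio
    vcpu = (8    - 0)   * 4.0   # => 32.0 schedulable vCPUs
    ram  = (7680 - 512) * 1.0   # => 7168 MiB schedulable RAM
    disk = (20   - 1)   * 0.9   # => 17.1 GB schedulable disk
    print(vcpu, ram, disk)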
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.376 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.376 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.385 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.385 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.385 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:de:2f:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.386 257641 INFO nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Using config drive#033[00m
Nov 29 03:12:22 np0005539550 nova_compute[257631]: 2025-11-29 08:12:22.412 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:22.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.191 257641 INFO nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Creating config drive at /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/disk.config#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.198 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl5s3redh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 219 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.9 MiB/s wr, 205 op/s
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.335 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl5s3redh" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
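[annotation] The config drive is built by shelling out to mkisofs over a temporary directory holding the metadata tree, labelled config-2 so cloud-init inside the guest can locate it. A minimal reconstruction of the logged command (paths and version string copied from the log; in the real invocation the -publisher value is passed as a single argument even though the log prints it unquoted):

    import subprocess

    inst = '19e85fae-c57e-409b-95f7-b53ddb4c928e'
    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', f'/var/lib/nova/instances/{inst}/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r',
         '-V', 'config-2',        # volume label cloud-init searches for
         '/tmp/tmpl5s3redh'],     # staging directory from the log
        check=True)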
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.363 257641 DEBUG nova.storage.rbd_utils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.367 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/disk.config 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.394 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.395 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.395 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:23.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.754 257641 DEBUG oslo_concurrency.processutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/disk.config 19e85fae-c57e-409b-95f7-b53ddb4c928e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.755 257641 INFO nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Deleting local config drive /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/disk.config because it was imported into RBD.#033[00m
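[annotation] Because this deployment backs ephemeral storage with Ceph, the freshly built ISO is imported into the vms pool and the local copy deleted, exactly as the two records above say. The equivalent steps, mirroring the logged rbd invocation:

    import os
    import subprocess

    inst = '19e85fae-c57e-409b-95f7-b53ddb4c928e'
    local = f'/var/lib/nova/instances/{inst}/disk.config'

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local, f'{inst}_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(local)  # from here on the config drive lives only in RBD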
Nov 29 03:12:23 np0005539550 kernel: tapd8b38a34-82: entered promiscuous mode
Nov 29 03:12:23 np0005539550 NetworkManager[49039]: <info>  [1764403943.8035] manager: (tapd8b38a34-82): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Nov 29 03:12:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:23Z|00338|binding|INFO|Claiming lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for this chassis.
Nov 29 03:12:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:23Z|00339|binding|INFO|d8b38a34-8274-43e4-8ebd-3924de5c5ba7: Claiming fa:16:3e:de:2f:2f 10.100.0.6
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.816 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:2f:2f 10.100.0.6'], port_security=['fa:16:3e:de:2f:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19e85fae-c57e-409b-95f7-b53ddb4c928e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d8b38a34-8274-43e4-8ebd-3924de5c5ba7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.818 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.819 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
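[annotation] The metadata agent reacts to the Port_Binding update event above (chassis column going from empty to this chassis) by provisioning a metadata namespace for the datapath. The same binding can be inspected from the chassis with standard ovn-sbctl db-ctl syntax; a small sketch, run through subprocess to stay in Python:

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=d8b38a34-8274-43e4-8ebd-3924de5c5ba7'],
        capture_output=True, text=True, check=True).stdout
    print(out)  # chassis, mac, external_ids as in the matched event above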
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.831 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[548d0b60-9e1e-436f-8af0-e78480c0823f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.832 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b704d3a-d1 in ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.834 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b704d3a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.834 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[daccf4b2-e6b9-43c7-92c5-151dec4eb5cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.834 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[564d9d13-0f4c-4b09-bcb4-5a02c6e4041a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
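[annotation] The privsep replies around here carry out the "Creating VETH tap2b704d3a-d1 in ovnmeta-..." step: a veth pair whose d0 end stays in the root namespace (to be plugged into br-int below) while the d1 peer is moved into the ovnmeta namespace. Neutron does this through its privileged ip_lib helpers; a rough pyroute2 equivalent, with names copied from the log:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06'
    netns.create(ns)  # register the namespace under /run/netns
    ip = IPRoute()
    ip.link('add', ifname='tap2b704d3a-d0', kind='veth',
            peer='tap2b704d3a-d1')
    idx = ip.link_lookup(ifname='tap2b704d3a-d1')[0]
    ip.link('set', index=idx, net_ns_fd=ns)  # move peer into the namespace
    ip.close()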
Nov 29 03:12:23 np0005539550 systemd-machined[216673]: New machine qemu-41-instance-00000055.
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.846 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[8d011036-f5f5-4920-985a-a74a180b8155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 systemd[1]: Started Virtual Machine qemu-41-instance-00000055.
Nov 29 03:12:23 np0005539550 systemd-udevd[310608]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.875 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7d53430f-31f4-4cb1-b87d-d41200b6bb3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.882 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539550 NetworkManager[49039]: <info>  [1764403943.8839] device (tapd8b38a34-82): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:12:23 np0005539550 NetworkManager[49039]: <info>  [1764403943.8854] device (tapd8b38a34-82): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:12:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:23Z|00340|binding|INFO|Setting lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 ovn-installed in OVS
Nov 29 03:12:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:23Z|00341|binding|INFO|Setting lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 up in Southbound
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.892 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.911 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[000ffd5c-c0d8-4163-b522-a3b52b11525f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 NetworkManager[49039]: <info>  [1764403943.9188] manager: (tap2b704d3a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.917 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[25de3b29-69a8-493c-9dbe-495926bfdaef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.945 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.945 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:12:23 np0005539550 nova_compute[257631]: 2025-11-29 08:12:23.946 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.951 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[172ac14b-8953-45eb-aab3-ee666a237c9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.956 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3e016edb-cdc6-4484-b65a-bf0ed8c75fd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:23 np0005539550 NetworkManager[49039]: <info>  [1764403943.9781] device (tap2b704d3a-d0): carrier: link connected
Nov 29 03:12:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:23.984 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7067dfef-a0e5-408a-b4f6-8e33d48ae9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.002 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[75c68e98-1626-4de2-a1e3-0894c0cf7eed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699164, 'reachable_time': 20987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310638, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.017 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c8083ce7-b77a-4a3b-8b29-0f8654e129d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d799'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699164, 'tstamp': 699164}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310640, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.032 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[43084fbe-a2fc-43be-8ca4-a81deedcbfb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 89], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699164, 'reachable_time': 20987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310641, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.061 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7103239c-3d75-41c8-9560-80cc824a3cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.117 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1b2e39-94fb-4be2-b7a0-31a3700a7419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.118 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.119 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.119 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.121 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539550 NetworkManager[49039]: <info>  [1764403944.1222] manager: (tap2b704d3a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Nov 29 03:12:24 np0005539550 kernel: tap2b704d3a-d0: entered promiscuous mode
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.123 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.124 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.125 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:24Z|00342|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.127 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.128 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.129 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[55f15f6a-3401-4e2b-83d4-118f53c41aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.130 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:12:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:24.132 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'env', 'PROCESS_TAG=haproxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
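[annotation] The proxy is launched inside the network namespace, so 169.254.169.254:80 exists only for that tenant network; in this podified deployment the 'haproxy' entry point is evidently a wrapper that starts the neutron-haproxy-ovnmeta container seen in the podman records below. Stripped of rootwrap and the wrapper, the launch reduces to roughly:

    import subprocess

    net = '2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{net}',
         'haproxy', '-f',
         f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'],
        check=True)  # haproxy backgrounds itself ('daemon' in the config)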
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.143 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.260 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403944.2597337, 19e85fae-c57e-409b-95f7-b53ddb4c928e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.260 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] VM Started (Lifecycle Event)#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.282 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.291 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403944.2608829, 19e85fae-c57e-409b-95f7-b53ddb4c928e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.291 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.322 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.326 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.348 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
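[annotation] The numeric power states in the sync message above come from nova's power_state module; libvirt reports the guest paused (3) while it is still being assembled, and running (1) once resumed. The relevant constants, for reference:

    # constants as defined in nova/compute/power_state.py
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    assert POWER_STATES[3] == 'PAUSED'   # the "Paused" lifecycle event above
    assert POWER_STATES[1] == 'RUNNING'  # the "Resumed" event that follows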
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.399 257641 DEBUG nova.network.neutron [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updated VIF entry in instance network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.400 257641 DEBUG nova.network.neutron [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.415 257641 DEBUG nova.compute.manager [req-efc18460-3e39-496c-9ae3-9d369fd9d0de req-11f02613-0dcc-4b22-b160-1d2fb0e195b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.416 257641 DEBUG oslo_concurrency.lockutils [req-efc18460-3e39-496c-9ae3-9d369fd9d0de req-11f02613-0dcc-4b22-b160-1d2fb0e195b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.416 257641 DEBUG oslo_concurrency.lockutils [req-efc18460-3e39-496c-9ae3-9d369fd9d0de req-11f02613-0dcc-4b22-b160-1d2fb0e195b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.416 257641 DEBUG oslo_concurrency.lockutils [req-efc18460-3e39-496c-9ae3-9d369fd9d0de req-11f02613-0dcc-4b22-b160-1d2fb0e195b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.417 257641 DEBUG nova.compute.manager [req-efc18460-3e39-496c-9ae3-9d369fd9d0de req-11f02613-0dcc-4b22-b160-1d2fb0e195b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Processing event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.418 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
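[annotation] Spawn blocks on a network-vif-plugged notification that neutron sends once OVN reports the port up; here the event arrives while the waiter is already registered, so the wait completes in 0 seconds. A toy version of that coordination pattern (nova's real implementation lives in InstanceEvents/wait_for_instance_event and runs on eventlet):

    import threading

    class InstanceEvents:
        """Toy sketch of nova's prepare/pop/wait event pattern."""
        def __init__(self):
            self._events = {}

        def prepare(self, tag):                # before plugging the VIF
            self._events[tag] = threading.Event()

        def pop(self, tag):                    # external-event RPC handler
            self._events[tag].set()

        def wait(self, tag, timeout=300):      # the spawning thread
            if not self._events[tag].wait(timeout):
                raise TimeoutError(f'{tag} never arrived')

    ev = InstanceEvents()
    ev.prepare('network-vif-plugged-d8b38a34')
    ev.pop('network-vif-plugged-d8b38a34')   # neutron notification lands
    ev.wait('network-vif-plugged-d8b38a34')  # returns immediately, as logged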
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.419 257641 DEBUG oslo_concurrency.lockutils [req-9caec334-a39f-4fd1-ae0c-2920a225d004 req-2dd91a44-c0fd-45a0-b5d9-edec15ada4fd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:12:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:24.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.423 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764403944.4232347, 19e85fae-c57e-409b-95f7-b53ddb4c928e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.424 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.429 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.433 257641 INFO nova.virt.libvirt.driver [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance spawned successfully.#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.433 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.459 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.460 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.461 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.461 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.461 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.462 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.462 257641 DEBUG nova.virt.libvirt.driver [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.468 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.492 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:12:24 np0005539550 podman[310713]: 2025-11-29 08:12:24.516703335 +0000 UTC m=+0.051980568 container create 31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.533 257641 INFO nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Took 11.63 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.534 257641 DEBUG nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:12:24 np0005539550 systemd[1]: Started libpod-conmon-31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348.scope.
Nov 29 03:12:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:24 np0005539550 podman[310713]: 2025-11-29 08:12:24.488794682 +0000 UTC m=+0.024071935 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:12:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e72b55267421d17145402cebf9fd790f9c93851e89c2f54b0219e156e43a0eb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.626 257641 INFO nova.compute.manager [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Took 13.26 seconds to build instance.#033[00m
Nov 29 03:12:24 np0005539550 nova_compute[257631]: 2025-11-29 08:12:24.649 257641 DEBUG oslo_concurrency.lockutils [None req-cf8ee379-7075-40d9-b1af-ce109b7fba22 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:12:24 np0005539550 podman[310713]: 2025-11-29 08:12:24.677371635 +0000 UTC m=+0.212648888 container init 31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:12:24 np0005539550 podman[310713]: 2025-11-29 08:12:24.683997164 +0000 UTC m=+0.219274397 container start 31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:12:24 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[310728]: [NOTICE]   (310732) : New worker (310734) forked
Nov 29 03:12:24 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[310728]: [NOTICE]   (310732) : Loading success.
Nov 29 03:12:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 254 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.6 MiB/s wr, 298 op/s
Nov 29 03:12:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:25.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
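
The radosgw trios that recur throughout this section (starting new request / req done / beast) are anonymous "HEAD / HTTP/1.0" probes arriving every couple of seconds from 192.168.122.100 and 192.168.122.102, i.e. external load-balancer health checks answered with 200 at sub-millisecond latency. A sketch reproducing one probe with the Python standard library; the target host and port are assumptions, since the log shows the probing clients but not radosgw's listening address:

    import http.client

    # Hypothetical endpoint: replace host/port with the real radosgw frontend.
    conn = http.client.HTTPConnection("radosgw.example", 8080, timeout=2)
    conn.request("HEAD", "/")          # same anonymous probe as the beast lines
    resp = conn.getresponse()
    print(resp.status)                 # 200 while radosgw is healthy
    conn.close()
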
Nov 29 03:12:25 np0005539550 nova_compute[257631]: 2025-11-29 08:12:25.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:25 np0005539550 nova_compute[257631]: 2025-11-29 08:12:25.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:25 np0005539550 nova_compute[257631]: 2025-11-29 08:12:25.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:12:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:26.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.647 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764403931.646085, e5a6e53a-bb0e-40f5-838d-3780bd0263b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.647 257641 INFO nova.compute.manager [-] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] VM Stopped (Lifecycle Event)
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.664 257641 DEBUG nova.compute.manager [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.665 257641 DEBUG oslo_concurrency.lockutils [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.665 257641 DEBUG oslo_concurrency.lockutils [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.665 257641 DEBUG oslo_concurrency.lockutils [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.665 257641 DEBUG nova.compute.manager [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] No waiting events found dispatching network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.665 257641 WARNING nova.compute.manager [req-7e93388a-4f05-47b9-b698-46284e4ab7f8 req-05a3f225-3bc7-4bbd-b35e-e50cfe1ccc24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received unexpected event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for instance with vm_state active and task_state None.
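
The lines above are Nova's external-event path end to end: Neutron delivers network-vif-plugged for port d8b38a34, Nova takes the per-instance "-events" lock, pops any registered waiter, finds none ("No waiting events found"), and downgrades to a WARNING because the instance is already active with no task in flight. A toy sketch of that pop-or-warn dispatch; the module-level dict and names are hypothetical, not Nova's implementation:

    import threading

    _waiters = {}   # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()

    def pop_instance_event(instance_uuid, event_name):
        # Take the events lock, pop the waiter, warn if nobody was waiting,
        # mirroring the acquire/pop/release/WARNING sequence logged above.
        with _lock:
            waiter = _waiters.pop((instance_uuid, event_name), None)
        if waiter is None:
            print(f"WARNING: unexpected event {event_name} for {instance_uuid}")
        else:
            waiter.set()   # wake the thread blocked on the plug event
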
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.704 257641 DEBUG nova.compute.manager [None req-4d7de36a-95cf-46bf-8c04-bababa077e8b - - - - - -] [instance: e5a6e53a-bb0e-40f5-838d-3780bd0263b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:12:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:26 np0005539550 nova_compute[257631]: 2025-11-29 08:12:26.832 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 260 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.6 MiB/s wr, 227 op/s
Nov 29 03:12:27 np0005539550 nova_compute[257631]: 2025-11-29 08:12:27.302 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:27.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:28 np0005539550 podman[310745]: 2025-11-29 08:12:28.353841822 +0000 UTC m=+0.092042630 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
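
The health_status line above is podman running the container's configured check ('test': '/openstack/healthcheck' in the embedded config_data) and recording healthy with a zero failing streak. The same check can be triggered by hand with podman's healthcheck subcommand; a small wrapper, assuming the container name from the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured test command;
    # exit code 0 corresponds to health_status=healthy in the log line above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0 else "unhealthy")
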
Nov 29 03:12:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:28.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:28 np0005539550 nova_compute[257631]: 2025-11-29 08:12:28.655 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:28 np0005539550 NetworkManager[49039]: <info>  [1764403948.6564] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Nov 29 03:12:28 np0005539550 NetworkManager[49039]: <info>  [1764403948.6573] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Nov 29 03:12:28 np0005539550 nova_compute[257631]: 2025-11-29 08:12:28.832 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:28Z|00343|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:12:28 np0005539550 nova_compute[257631]: 2025-11-29 08:12:28.851 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:28 np0005539550 nova_compute[257631]: 2025-11-29 08:12:28.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 226 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 238 op/s
Nov 29 03:12:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:30.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 213 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 263 op/s
Nov 29 03:12:31 np0005539550 nova_compute[257631]: 2025-11-29 08:12:31.385 257641 DEBUG nova.compute.manager [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:12:31 np0005539550 nova_compute[257631]: 2025-11-29 08:12:31.386 257641 DEBUG nova.compute.manager [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing instance network info cache due to event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:12:31 np0005539550 nova_compute[257631]: 2025-11-29 08:12:31.386 257641 DEBUG oslo_concurrency.lockutils [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:12:31 np0005539550 nova_compute[257631]: 2025-11-29 08:12:31.386 257641 DEBUG oslo_concurrency.lockutils [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:12:31 np0005539550 nova_compute[257631]: 2025-11-29 08:12:31.386 257641 DEBUG nova.network.neutron [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:12:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:31.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:31 np0005539550 nova_compute[257631]: 2025-11-29 08:12:31.835 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:32 np0005539550 nova_compute[257631]: 2025-11-29 08:12:32.304 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:32.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:32 np0005539550 nova_compute[257631]: 2025-11-29 08:12:32.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:12:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 213 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 207 op/s
Nov 29 03:12:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:33.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:33 np0005539550 nova_compute[257631]: 2025-11-29 08:12:33.633 257641 DEBUG nova.network.neutron [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updated VIF entry in instance network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:12:33 np0005539550 nova_compute[257631]: 2025-11-29 08:12:33.634 257641 DEBUG nova.network.neutron [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:12:33 np0005539550 nova_compute[257631]: 2025-11-29 08:12:33.668 257641 DEBUG oslo_concurrency.lockutils [req-b31e0e6f-0ec7-45f6-93d0-6a7f2c9d908c req-3ee5b316-57a1-4d60-87f7-20cbab671efb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
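
The Updating instance_info_cache line above carries the instance's full VIF as JSON: port d8b38a34 on network 2b704d3a, fixed IP 10.100.0.6 behind floating IP 192.168.122.246, bound by the ovn driver on br-int. A short walk over that structure; network_info below is a trimmed copy of the cached entry from the log:

    import json

    network_info = json.loads("""[{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7",
     "network": {"subnets": [{"cidr": "10.100.0.0/28",
       "ips": [{"address": "10.100.0.6",
                "floating_ips": [{"address": "192.168.122.246"}]}]}]}}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], fips)
    # -> d8b38a34-8274-43e4-8ebd-3924de5c5ba7 10.100.0.6 ['192.168.122.246']
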
Nov 29 03:12:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:34.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 213 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 214 op/s
Nov 29 03:12:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:35.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:12:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:36.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:12:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:36 np0005539550 nova_compute[257631]: 2025-11-29 08:12:36.837 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 214 MiB data, 773 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 78 KiB/s wr, 175 op/s
Nov 29 03:12:37 np0005539550 nova_compute[257631]: 2025-11-29 08:12:37.306 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:37.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:12:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:12:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:12:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:12:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 67c3f894-0d30-46a2-b85d-f49995a90911 does not exist
Nov 29 03:12:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c024f2f8-db9e-45f9-8850-728fcecb9693 does not exist
Nov 29 03:12:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2dc38abf-b11e-4c2b-b226-e7ffc06d40f1 does not exist
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
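
The handle_command/audit pairs above are cephadm's mgr module driving the mon over its command interface: config generate-minimal-conf, auth get for client.admin and client.bootstrap-osd, and an osd tree query filtered to destroyed OSDs. The same prefix-style commands can be issued from the rados Python binding; the conffile path and available keyring below are assumptions:

    import json
    import rados

    # Assumes a readable /etc/ceph/ceph.conf plus a client.admin keyring.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(ret, outbuf.decode())
    cluster.shutdown()
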
Nov 29 03:12:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:38 np0005539550 podman[311094]: 2025-11-29 08:12:38.586636424 +0000 UTC m=+0.040907575 container create 5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wiles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:12:38 np0005539550 systemd[1]: Started libpod-conmon-5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea.scope.
Nov 29 03:12:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:38 np0005539550 podman[311094]: 2025-11-29 08:12:38.56570249 +0000 UTC m=+0.019973661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:38 np0005539550 podman[311094]: 2025-11-29 08:12:38.6707355 +0000 UTC m=+0.125006671 container init 5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wiles, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:12:38 np0005539550 podman[311094]: 2025-11-29 08:12:38.680939731 +0000 UTC m=+0.135210882 container start 5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wiles, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:12:38 np0005539550 podman[311094]: 2025-11-29 08:12:38.685477707 +0000 UTC m=+0.139748888 container attach 5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:38 np0005539550 exciting_wiles[311110]: 167 167
Nov 29 03:12:38 np0005539550 systemd[1]: libpod-5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea.scope: Deactivated successfully.
Nov 29 03:12:38 np0005539550 conmon[311110]: conmon 5f1d104929a7350f6712 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea.scope/container/memory.events
Nov 29 03:12:38 np0005539550 podman[311094]: 2025-11-29 08:12:38.68835257 +0000 UTC m=+0.142623751 container died 5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wiles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:12:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-40e5c7262ad9601809bd17b50a3f2b129beee60513b9e86bfbd3ade24d80ad0d-merged.mount: Deactivated successfully.
Nov 29 03:12:38 np0005539550 podman[311094]: 2025-11-29 08:12:38.730605008 +0000 UTC m=+0.184876159 container remove 5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:12:38 np0005539550 systemd[1]: libpod-conmon-5f1d104929a7350f6712b0fc573efecfcf70c197ed75e43cca21fcf24818b3ea.scope: Deactivated successfully.
Nov 29 03:12:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:38Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:2f:2f 10.100.0.6
Nov 29 03:12:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:38Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:2f:2f 10.100.0.6
Nov 29 03:12:38 np0005539550 podman[311134]: 2025-11-29 08:12:38.903311805 +0000 UTC m=+0.044189918 container create d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:12:38 np0005539550 systemd[1]: Started libpod-conmon-d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b.scope.
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:12:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b691ffb9b76d9eef063130a9193049fc981962feed0d9d2f8525d58fd4ffea1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b691ffb9b76d9eef063130a9193049fc981962feed0d9d2f8525d58fd4ffea1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b691ffb9b76d9eef063130a9193049fc981962feed0d9d2f8525d58fd4ffea1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b691ffb9b76d9eef063130a9193049fc981962feed0d9d2f8525d58fd4ffea1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b691ffb9b76d9eef063130a9193049fc981962feed0d9d2f8525d58fd4ffea1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
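
The 0x7fffffff in the xfs remount warnings above is the largest 32-bit signed time_t; the kernel is noting that this filesystem's inode timestamps only reach the classic year-2038 limit. Decoding it:

    import datetime

    # 2**31 - 1 seconds after the Unix epoch, the bound the kernel warns about.
    limit = datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc)
    print(hex(0x7fffffff), "->", limit.isoformat())
    # -> 0x7fffffff -> 2038-01-19T03:14:07+00:00
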
Nov 29 03:12:38 np0005539550 podman[311134]: 2025-11-29 08:12:38.8854615 +0000 UTC m=+0.026339653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:38 np0005539550 podman[311134]: 2025-11-29 08:12:38.981353467 +0000 UTC m=+0.122231610 container init d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:38 np0005539550 podman[311134]: 2025-11-29 08:12:38.991899866 +0000 UTC m=+0.132777989 container start d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:12:38 np0005539550 podman[311134]: 2025-11-29 08:12:38.9955581 +0000 UTC m=+0.136436223 container attach d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:12:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 240 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 209 op/s
Nov 29 03:12:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:12:39Z|00344|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:12:39 np0005539550 nova_compute[257631]: 2025-11-29 08:12:39.342 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:39 np0005539550 eloquent_bohr[311151]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:12:39 np0005539550 eloquent_bohr[311151]: --> relative data size: 1.0
Nov 29 03:12:39 np0005539550 eloquent_bohr[311151]: --> All data devices are unavailable
Nov 29 03:12:39 np0005539550 systemd[1]: libpod-d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b.scope: Deactivated successfully.
Nov 29 03:12:39 np0005539550 conmon[311151]: conmon d15272bbe404261bd566 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b.scope/container/memory.events
Nov 29 03:12:39 np0005539550 podman[311134]: 2025-11-29 08:12:39.871599805 +0000 UTC m=+1.012477928 container died d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:12:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3b691ffb9b76d9eef063130a9193049fc981962feed0d9d2f8525d58fd4ffea1-merged.mount: Deactivated successfully.
Nov 29 03:12:39 np0005539550 podman[311134]: 2025-11-29 08:12:39.931560735 +0000 UTC m=+1.072438858 container remove d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bohr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:12:39 np0005539550 systemd[1]: libpod-conmon-d15272bbe404261bd5660390c852cf241507de99fc884e311a88e4eac173e09b.scope: Deactivated successfully.
Nov 29 03:12:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:40.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:40 np0005539550 podman[311319]: 2025-11-29 08:12:40.546095007 +0000 UTC m=+0.038661138 container create e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:12:40 np0005539550 systemd[1]: Started libpod-conmon-e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0.scope.
Nov 29 03:12:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:40 np0005539550 podman[311319]: 2025-11-29 08:12:40.529533064 +0000 UTC m=+0.022099215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:40 np0005539550 podman[311319]: 2025-11-29 08:12:40.630398158 +0000 UTC m=+0.122964309 container init e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:12:40 np0005539550 podman[311319]: 2025-11-29 08:12:40.636936675 +0000 UTC m=+0.129502806 container start e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:12:40 np0005539550 podman[311319]: 2025-11-29 08:12:40.640589998 +0000 UTC m=+0.133156149 container attach e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:12:40 np0005539550 musing_blackburn[311335]: 167 167
Nov 29 03:12:40 np0005539550 systemd[1]: libpod-e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0.scope: Deactivated successfully.
Nov 29 03:12:40 np0005539550 podman[311319]: 2025-11-29 08:12:40.645310189 +0000 UTC m=+0.137876320 container died e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:12:40 np0005539550 systemd[1]: var-lib-containers-storage-overlay-170382c1c48bc6ac90d228a4fc0c7e80d25159d4d6483ca1c6a41f6971e0af4a-merged.mount: Deactivated successfully.
Nov 29 03:12:40 np0005539550 podman[311319]: 2025-11-29 08:12:40.682549099 +0000 UTC m=+0.175115230 container remove e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:12:40 np0005539550 systemd[1]: libpod-conmon-e07cbb253a8b53cd1561fd1c44b40be010cee7f366abb8bf6256bbf4a3c10fe0.scope: Deactivated successfully.
Nov 29 03:12:40 np0005539550 podman[311359]: 2025-11-29 08:12:40.856753094 +0000 UTC m=+0.049710099 container create e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:12:40 np0005539550 systemd[1]: Started libpod-conmon-e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4.scope.
Nov 29 03:12:40 np0005539550 podman[311359]: 2025-11-29 08:12:40.832034574 +0000 UTC m=+0.024991579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e575adb1b3d58ef675c14d38c10e142c0ff11bfbfe8e961164f5c657f99d23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e575adb1b3d58ef675c14d38c10e142c0ff11bfbfe8e961164f5c657f99d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e575adb1b3d58ef675c14d38c10e142c0ff11bfbfe8e961164f5c657f99d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e575adb1b3d58ef675c14d38c10e142c0ff11bfbfe8e961164f5c657f99d23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:40 np0005539550 podman[311359]: 2025-11-29 08:12:40.955157885 +0000 UTC m=+0.148114890 container init e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:12:40 np0005539550 podman[311359]: 2025-11-29 08:12:40.967630944 +0000 UTC m=+0.160587919 container start e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:12:40 np0005539550 podman[311359]: 2025-11-29 08:12:40.970936758 +0000 UTC m=+0.163893733 container attach e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:12:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 293 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 188 op/s
Nov 29 03:12:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:41.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]: {
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:    "0": [
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:        {
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "devices": [
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "/dev/loop3"
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            ],
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "lv_name": "ceph_lv0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "lv_size": "7511998464",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "name": "ceph_lv0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "tags": {
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.cluster_name": "ceph",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.crush_device_class": "",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.encrypted": "0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.osd_id": "0",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.type": "block",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:                "ceph.vdo": "0"
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            },
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "type": "block",
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:            "vg_name": "ceph_vg0"
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:        }
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]:    ]
Nov 29 03:12:41 np0005539550 heuristic_torvalds[311377]: }
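The JSON block printed by the heuristic_torvalds container has the shape of ceph-volume lvm list --format json output, which cephadm gathers when it refreshes host device state: OSD ids keyed to their logical volumes, with the ceph.* LV tags carrying the cluster and OSD identity. A minimal sketch for extracting the interesting fields (structure assumed from the logged output; the input filename is hypothetical):

    import json

    # Assumed structure: {"<osd_id>": [{lv record with "tags": {...}}, ...]}
    with open('ceph-volume-lvm-list.json') as f:
        report = json.load(f)
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid {tags['ceph.osd_fsid']}, type {tags['ceph.type']})")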
Nov 29 03:12:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:41 np0005539550 systemd[1]: libpod-e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4.scope: Deactivated successfully.
Nov 29 03:12:41 np0005539550 podman[311359]: 2025-11-29 08:12:41.756695819 +0000 UTC m=+0.949652814 container died e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:12:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-94e575adb1b3d58ef675c14d38c10e142c0ff11bfbfe8e961164f5c657f99d23-merged.mount: Deactivated successfully.
Nov 29 03:12:41 np0005539550 podman[311359]: 2025-11-29 08:12:41.813038847 +0000 UTC m=+1.005995822 container remove e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_torvalds, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:12:41 np0005539550 systemd[1]: libpod-conmon-e0d9ebb7d438c85c5423ce0c3a81b9c039c9f11d0dfce61001d20a133e4c0ed4.scope: Deactivated successfully.
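The create, init, start, attach, died, remove sequence completing in under a second, with a randomly generated container name (heuristic_torvalds) and no restart, is the footprint of a short-lived podman run --rm invocation; cephadm runs such throwaway containers from the ceph image to probe the host. A sketch for watching these lifecycles live via podman's JSON event stream (event field names assumed from that stream):

    import json
    import subprocess

    # Follow the same container lifecycle events journald recorded above.
    proc = subprocess.Popen(['podman', 'events', '--format', 'json'],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get('Type') == 'container':
            print(ev.get('Status'), str(ev.get('ID', ''))[:12], ev.get('Name'))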
Nov 29 03:12:41 np0005539550 nova_compute[257631]: 2025-11-29 08:12:41.840 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:42 np0005539550 nova_compute[257631]: 2025-11-29 08:12:42.308 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:42 np0005539550 podman[311535]: 2025-11-29 08:12:42.405938556 +0000 UTC m=+0.038382640 container create f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:12:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:42.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:42 np0005539550 systemd[1]: Started libpod-conmon-f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937.scope.
Nov 29 03:12:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:42 np0005539550 podman[311535]: 2025-11-29 08:12:42.390058801 +0000 UTC m=+0.022502905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:42 np0005539550 podman[311535]: 2025-11-29 08:12:42.492575467 +0000 UTC m=+0.125019571 container init f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:12:42 np0005539550 podman[311535]: 2025-11-29 08:12:42.498986281 +0000 UTC m=+0.131430365 container start f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:12:42 np0005539550 podman[311535]: 2025-11-29 08:12:42.502800968 +0000 UTC m=+0.135245062 container attach f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:12:42 np0005539550 nostalgic_lichterman[311551]: 167 167
Nov 29 03:12:42 np0005539550 systemd[1]: libpod-f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937.scope: Deactivated successfully.
Nov 29 03:12:42 np0005539550 conmon[311551]: conmon f150a15462f8c47ff708 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937.scope/container/memory.events
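The conmon warning is benign here: the container exited so quickly that systemd tore the scope's cgroup down before conmon could read its OOM counters. For a still-running scope, the file it wanted is the cgroup v2 memory event counter; a sketch of reading it (path pattern taken from the message above, scope names are host-specific):

    from pathlib import Path

    # memory.events holds cgroup v2 counters (oom, oom_kill, ...) that conmon
    # uses to report OOM kills for the container.
    for f in Path('/sys/fs/cgroup/machine.slice').glob(
            'libpod-*.scope/container/memory.events'):
        print(f, f.read_text(), sep='\n')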
Nov 29 03:12:42 np0005539550 podman[311535]: 2025-11-29 08:12:42.506216335 +0000 UTC m=+0.138660429 container died f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:12:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9c8506a638bab7b88b5d2078396cba4938b116cfd06168bb8fc8e6a2bac3b80b-merged.mount: Deactivated successfully.
Nov 29 03:12:42 np0005539550 podman[311535]: 2025-11-29 08:12:42.541653809 +0000 UTC m=+0.174097893 container remove f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:12:42 np0005539550 systemd[1]: libpod-conmon-f150a15462f8c47ff7083d65c9fb1f2648853050f999cb8ad195cff73acb9937.scope: Deactivated successfully.
Nov 29 03:12:42 np0005539550 podman[311574]: 2025-11-29 08:12:42.733644039 +0000 UTC m=+0.041488950 container create 69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:12:42 np0005539550 systemd[1]: Started libpod-conmon-69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d.scope.
Nov 29 03:12:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:12:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a90aaa4be428baf3c15c7827cf1345c8424c0b7dd41cfd5861767b3577ef0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a90aaa4be428baf3c15c7827cf1345c8424c0b7dd41cfd5861767b3577ef0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a90aaa4be428baf3c15c7827cf1345c8424c0b7dd41cfd5861767b3577ef0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:12:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0a90aaa4be428baf3c15c7827cf1345c8424c0b7dd41cfd5861767b3577ef0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
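The four xfs messages are the kernel noting that these overlay-backed mounts use 32-bit inode timestamps, whose range ends at 0x7fffffff seconds after the Unix epoch, the classic Y2038 limit:

    from datetime import datetime, timezone

    # 0x7fffffff == 2**31 - 1 seconds after the epoch.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00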
Nov 29 03:12:42 np0005539550 podman[311574]: 2025-11-29 08:12:42.7164509 +0000 UTC m=+0.024295841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:12:42 np0005539550 podman[311574]: 2025-11-29 08:12:42.815692852 +0000 UTC m=+0.123537793 container init 69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:12:42 np0005539550 podman[311574]: 2025-11-29 08:12:42.824688082 +0000 UTC m=+0.132533003 container start 69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:12:42 np0005539550 podman[311574]: 2025-11-29 08:12:42.828895629 +0000 UTC m=+0.136740560 container attach 69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:12:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 293 MiB data, 815 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 160 op/s
Nov 29 03:12:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]: {
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]:        "osd_id": 0,
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]:        "type": "bluestore"
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]:    }
Nov 29 03:12:43 np0005539550 peaceful_hamilton[311590]: }
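This second probe prints the inverse view: OSD fsid keyed to its bluestore device, which can be joined with the earlier LVM listing on ceph.osd_fsid. A minimal sketch of that join (both structures assumed from the logged output; the input filenames are hypothetical):

    import json

    with open('raw-list.json') as f:       # this block's structure
        raw_list = json.load(f)
    with open('lvm-list.json') as f:       # the earlier heuristic_torvalds block
        lvm_list = json.load(f)

    for osd_fsid, meta in raw_list.items():
        print(f"osd.{meta['osd_id']} ({meta['type']}) -> {meta['device']}")
        for osd_id, lvs in lvm_list.items():
            for lv in lvs:
                if lv['tags'].get('ceph.osd_fsid') == osd_fsid:
                    print('  backed by', lv['lv_path'], 'on', lv['devices'])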
Nov 29 03:12:43 np0005539550 systemd[1]: libpod-69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d.scope: Deactivated successfully.
Nov 29 03:12:43 np0005539550 podman[311574]: 2025-11-29 08:12:43.706443123 +0000 UTC m=+1.014288044 container died 69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:12:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b0a90aaa4be428baf3c15c7827cf1345c8424c0b7dd41cfd5861767b3577ef0a-merged.mount: Deactivated successfully.
Nov 29 03:12:43 np0005539550 podman[311574]: 2025-11-29 08:12:43.763188661 +0000 UTC m=+1.071033582 container remove 69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:12:43 np0005539550 systemd[1]: libpod-conmon-69f577b328f99598c14fbdd25a6b3d109a61352e0ab72620f081f36ccdf6365d.scope: Deactivated successfully.
Nov 29 03:12:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:12:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:12:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dbc3ab1c-b883-4f25-b9b3-423003485c93 does not exist
Nov 29 03:12:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b41f445d-b569-4c43-82af-ad1b9ce23a3d does not exist
Nov 29 03:12:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a5f38cb1-66db-4c20-a40a-6e7f64afd548 does not exist
Nov 29 03:12:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:12:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:44.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:12:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:12:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 293 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 167 op/s
Nov 29 03:12:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:46 np0005539550 podman[311679]: 2025-11-29 08:12:46.323739911 +0000 UTC m=+0.057047557 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:12:46 np0005539550 podman[311678]: 2025-11-29 08:12:46.331794217 +0000 UTC m=+0.064482717 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
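These two health_status=healthy events are podman executing the healthcheck declared in each container's config_data (the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks). The same state can be read back from inspect data; a sketch (the key holding health state has varied across podman versions, hence the fallback):

    import json
    import subprocess

    out = subprocess.check_output(['podman', 'inspect', 'ovn_metadata_agent'],
                                  text=True)
    state = json.loads(out)[0]['State']
    health = state.get('Health') or state.get('Healthcheck') or {}
    print(health.get('Status'), 'failing streak:', health.get('FailingStreak'))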
Nov 29 03:12:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:46.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:46 np0005539550 nova_compute[257631]: 2025-11-29 08:12:46.840 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 293 MiB data, 819 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Nov 29 03:12:47 np0005539550 nova_compute[257631]: 2025-11-29 08:12:47.310 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 312 MiB data, 832 MiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 5.0 MiB/s wr, 143 op/s
Nov 29 03:12:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:49.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:49 np0005539550 nova_compute[257631]: 2025-11-29 08:12:49.797 257641 DEBUG nova.compute.manager [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:12:49 np0005539550 nova_compute[257631]: 2025-11-29 08:12:49.870 257641 INFO nova.compute.manager [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] instance snapshotting
Nov 29 03:12:49 np0005539550 nova_compute[257631]: 2025-11-29 08:12:49.871 257641 DEBUG nova.objects.instance [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:12:50 np0005539550 nova_compute[257631]: 2025-11-29 08:12:50.230 257641 INFO nova.virt.libvirt.driver [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Beginning live snapshot process
Nov 29 03:12:50 np0005539550 nova_compute[257631]: 2025-11-29 08:12:50.377 257641 DEBUG nova.virt.libvirt.imagebackend [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:12:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:50 np0005539550 nova_compute[257631]: 2025-11-29 08:12:50.558 257641 DEBUG nova.storage.rbd_utils [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(2b57da3a73544f5ab1c68c1bda87d00e) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:12:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 29 03:12:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 29 03:12:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 29 03:12:50 np0005539550 nova_compute[257631]: 2025-11-29 08:12:50.941 257641 DEBUG nova.storage.rbd_utils [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] cloning vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk@2b57da3a73544f5ab1c68c1bda87d00e to images/7e8836eb-bacd-485f-b61b-a89bcd8f6e84 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:12:51 np0005539550 nova_compute[257631]: 2025-11-29 08:12:51.044 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:51 np0005539550 nova_compute[257631]: 2025-11-29 08:12:51.070 257641 DEBUG nova.storage.rbd_utils [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] flattening images/7e8836eb-bacd-485f-b61b-a89bcd8f6e84 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:12:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 326 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 497 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Nov 29 03:12:51 np0005539550 nova_compute[257631]: 2025-11-29 08:12:51.481 257641 DEBUG nova.storage.rbd_utils [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] removing snapshot(2b57da3a73544f5ab1c68c1bda87d00e) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:12:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:51.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:51 np0005539550 nova_compute[257631]: 2025-11-29 08:12:51.843 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 29 03:12:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 29 03:12:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 29 03:12:51 np0005539550 nova_compute[257631]: 2025-11-29 08:12:51.928 257641 DEBUG nova.storage.rbd_utils [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(snap) on rbd image(7e8836eb-bacd-485f-b61b-a89bcd8f6e84) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
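Between 08:12:50 and 08:12:52 the rbd_utils debug lines trace one complete Nova live-snapshot cycle against RBD: snapshot vms/<uuid>_disk, clone the snapshot into the images pool, flatten the clone so it no longer depends on its parent, delete the temporary snapshot, then create the 'snap' snapshot that Glance's RBD store expects on the new image. A condensed sketch of that sequence with the python-rbd bindings (pool, image, and snapshot names taken from the log; conffile and snapshot-protection handling are assumptions, and error paths are omitted):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    vms = cluster.open_ioctx('vms')
    images = cluster.open_ioctx('images')

    src = '19e85fae-c57e-409b-95f7-b53ddb4c928e_disk'
    tmp = '2b57da3a73544f5ab1c68c1bda87d00e'
    dst = '7e8836eb-bacd-485f-b61b-a89bcd8f6e84'

    with rbd.Image(vms, src) as disk:
        disk.create_snap(tmp)
        disk.protect_snap(tmp)              # clone parents must be protected
    rbd.RBD().clone(vms, src, tmp, images, dst)
    with rbd.Image(images, dst) as clone:
        clone.flatten()                     # copy parent data, break dependency
    with rbd.Image(vms, src) as disk:
        disk.unprotect_snap(tmp)
        disk.remove_snap(tmp)               # temporary snapshot no longer needed
    with rbd.Image(images, dst) as clone:
        clone.create_snap('snap')           # the snapshot Glance consumes

    vms.close()
    images.close()
    cluster.shutdown()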
Nov 29 03:12:52 np0005539550 nova_compute[257631]: 2025-11-29 08:12:52.312 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:52.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 29 03:12:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 29 03:12:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 29 03:12:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 326 MiB data, 862 MiB used, 20 GiB / 21 GiB avail; 765 KiB/s rd, 4.3 MiB/s wr, 140 op/s
Nov 29 03:12:53 np0005539550 nova_compute[257631]: 2025-11-29 08:12:53.283 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:53.282 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:12:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:53.284 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:12:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:53.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:54.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:54 np0005539550 nova_compute[257631]: 2025-11-29 08:12:54.710 257641 INFO nova.virt.libvirt.driver [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Snapshot image upload complete
Nov 29 03:12:54 np0005539550 nova_compute[257631]: 2025-11-29 08:12:54.711 257641 INFO nova.compute.manager [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Took 4.81 seconds to snapshot the instance on the hypervisor.
Nov 29 03:12:55 np0005539550 nova_compute[257631]: 2025-11-29 08:12:55.119 257641 DEBUG nova.compute.manager [None req-16848c61-7519-4276-a111-697dc2c3c966 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 29 03:12:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 374 MiB data, 894 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 8.8 MiB/s wr, 255 op/s
Nov 29 03:12:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:55.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:12:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:56.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:12:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:12:56 np0005539550 nova_compute[257631]: 2025-11-29 08:12:56.768 257641 DEBUG nova.compute.manager [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:12:56 np0005539550 nova_compute[257631]: 2025-11-29 08:12:56.845 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:56 np0005539550 nova_compute[257631]: 2025-11-29 08:12:56.856 257641 INFO nova.compute.manager [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] instance snapshotting
Nov 29 03:12:56 np0005539550 nova_compute[257631]: 2025-11-29 08:12:56.857 257641 DEBUG nova.objects.instance [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:12:57 np0005539550 nova_compute[257631]: 2025-11-29 08:12:57.165 257641 INFO nova.virt.libvirt.driver [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Beginning live snapshot process
Nov 29 03:12:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 359 MiB data, 890 MiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.4 MiB/s wr, 328 op/s
Nov 29 03:12:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:12:57.286 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:12:57 np0005539550 nova_compute[257631]: 2025-11-29 08:12:57.330 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:57 np0005539550 nova_compute[257631]: 2025-11-29 08:12:57.335 257641 DEBUG nova.virt.libvirt.imagebackend [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:12:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:57.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:57 np0005539550 nova_compute[257631]: 2025-11-29 08:12:57.592 257641 DEBUG nova.storage.rbd_utils [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(68b3285d010644d491f55bcda7ed6774) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:12:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 29 03:12:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 29 03:12:58 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 29 03:12:58 np0005539550 nova_compute[257631]: 2025-11-29 08:12:58.101 257641 DEBUG nova.storage.rbd_utils [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] cloning vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk@68b3285d010644d491f55bcda7ed6774 to images/3c08af3e-7b9c-4dba-a2c8-3cba120c3814 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:12:58 np0005539550 nova_compute[257631]: 2025-11-29 08:12:58.217 257641 DEBUG nova.storage.rbd_utils [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] flattening images/3c08af3e-7b9c-4dba-a2c8-3cba120c3814 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:12:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:12:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:12:58 np0005539550 nova_compute[257631]: 2025-11-29 08:12:58.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:12:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:12:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:58 np0005539550 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:12:58 np0005539550 nova_compute[257631]: 2025-11-29 08:12:58.658 257641 DEBUG nova.storage.rbd_utils [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] removing snapshot(68b3285d010644d491f55bcda7ed6774) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:12:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 29 03:12:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 29 03:12:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 29 03:12:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 373 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 14 MiB/s rd, 9.0 MiB/s wr, 376 op/s
Nov 29 03:12:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:12:59
Nov 29 03:12:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:12:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:12:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control', 'volumes']
Nov 29 03:12:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
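This balancer run shows upmap mode with a 5% max-misplaced budget walking all eleven pools and preparing 0 of a possible 10 changes, meaning PG placement is already even. The same state is visible from the CLI; a sketch (field names assumed from the JSON output of ceph balancer status):

    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format', 'json'], text=True))
    print('mode:', status.get('mode'), 'active:', status.get('active'))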
Nov 29 03:12:59 np0005539550 podman[312040]: 2025-11-29 08:12:59.416731782 +0000 UTC m=+0.115009675 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller)
Nov 29 03:12:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:12:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:12:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:12:59.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:12:59 np0005539550 nova_compute[257631]: 2025-11-29 08:12:59.597 257641 DEBUG nova.storage.rbd_utils [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(snap) on rbd image(3c08af3e-7b9c-4dba-a2c8-3cba120c3814) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:13:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 29 03:13:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 29 03:13:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 29 03:13:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:00.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 391 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 13 MiB/s rd, 4.2 MiB/s wr, 287 op/s
Nov 29 03:13:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:01.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 29 03:13:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 29 03:13:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 29 03:13:01 np0005539550 nova_compute[257631]: 2025-11-29 08:13:01.847 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:02Z|00345|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:13:02 np0005539550 nova_compute[257631]: 2025-11-29 08:13:02.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:02 np0005539550 nova_compute[257631]: 2025-11-29 08:13:02.202 257641 INFO nova.virt.libvirt.driver [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Snapshot image upload complete
Nov 29 03:13:02 np0005539550 nova_compute[257631]: 2025-11-29 08:13:02.202 257641 INFO nova.compute.manager [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Took 5.32 seconds to snapshot the instance on the hypervisor.
Nov 29 03:13:02 np0005539550 nova_compute[257631]: 2025-11-29 08:13:02.331 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:02.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:02 np0005539550 nova_compute[257631]: 2025-11-29 08:13:02.689 257641 DEBUG nova.compute.manager [None req-f00f9313-0a1c-4c96-a3c5-ef51a724af9d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 29 03:13:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 391 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 3.7 MiB/s wr, 144 op/s
Nov 29 03:13:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:03.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:04.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 438 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.1 MiB/s wr, 122 op/s
Nov 29 03:13:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:05.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:05 np0005539550 nova_compute[257631]: 2025-11-29 08:13:05.991 257641 DEBUG nova.compute.manager [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:13:06 np0005539550 nova_compute[257631]: 2025-11-29 08:13:06.031 257641 INFO nova.compute.manager [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] instance snapshotting
Nov 29 03:13:06 np0005539550 nova_compute[257631]: 2025-11-29 08:13:06.032 257641 DEBUG nova.objects.instance [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:06.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:06 np0005539550 nova_compute[257631]: 2025-11-29 08:13:06.722 257641 INFO nova.virt.libvirt.driver [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Beginning live snapshot process
Nov 29 03:13:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 29 03:13:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 29 03:13:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 29 03:13:06 np0005539550 nova_compute[257631]: 2025-11-29 08:13:06.849 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:06 np0005539550 nova_compute[257631]: 2025-11-29 08:13:06.947 257641 DEBUG nova.virt.libvirt.imagebackend [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 29 03:13:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 458 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.3 MiB/s wr, 144 op/s
Nov 29 03:13:07 np0005539550 nova_compute[257631]: 2025-11-29 08:13:07.333 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:07 np0005539550 nova_compute[257631]: 2025-11-29 08:13:07.347 257641 DEBUG nova.storage.rbd_utils [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(e06e1c4d990c4e11a97c64e7e97e9547) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:13:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:07.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 29 03:13:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 29 03:13:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 29 03:13:07 np0005539550 nova_compute[257631]: 2025-11-29 08:13:07.878 257641 DEBUG nova.storage.rbd_utils [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] cloning vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk@e06e1c4d990c4e11a97c64e7e97e9547 to images/4e132098-a673-4ed7-8cfc-d67f77fc5495 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:13:07 np0005539550 nova_compute[257631]: 2025-11-29 08:13:07.985 257641 DEBUG nova.storage.rbd_utils [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] flattening images/4e132098-a673-4ed7-8cfc-d67f77fc5495 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:13:08 np0005539550 nova_compute[257631]: 2025-11-29 08:13:08.367 257641 DEBUG nova.storage.rbd_utils [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] removing snapshot(e06e1c4d990c4e11a97c64e7e97e9547) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 29 03:13:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:08.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 29 03:13:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 29 03:13:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 29 03:13:08 np0005539550 nova_compute[257631]: 2025-11-29 08:13:08.849 257641 DEBUG nova.storage.rbd_utils [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(snap) on rbd image(4e132098-a673-4ed7-8cfc-d67f77fc5495) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
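The rbd_utils calls logged between 08:13:07.347 and 08:13:08.849 form Nova's direct-to-Ceph snapshot sequence: snapshot the instance disk in the vms pool, clone that snapshot into the images pool, flatten the clone so it stops referencing the parent, drop the temporary snapshot, then stamp the finished clone with its own 'snap'. A rough librbd sketch of the same sequence, reusing the pool, image and snapshot names from the log; the protect/unprotect calls are only needed on clusters without RBD clone v2, and error handling is omitted (the real helper is nova/storage/rbd_utils.py, per the logged paths):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    vms = cluster.open_ioctx('vms')
    images = cluster.open_ioctx('images')

    src_name = '19e85fae-c57e-409b-95f7-b53ddb4c928e_disk'
    tmp_snap = 'e06e1c4d990c4e11a97c64e7e97e9547'
    dst_name = '4e132098-a673-4ed7-8cfc-d67f77fc5495'

    with rbd.Image(vms, src_name) as src:
        src.create_snap(tmp_snap)                              # 08:13:07.347
        src.protect_snap(tmp_snap)                             # pre-clone-v2 only
    rbd.RBD().clone(vms, src_name, tmp_snap, images, dst_name) # 08:13:07.878
    with rbd.Image(images, dst_name) as dst:
        dst.flatten()                                          # 08:13:07.985
    with rbd.Image(vms, src_name) as src:
        src.unprotect_snap(tmp_snap)                           # pre-clone-v2 only
        src.remove_snap(tmp_snap)                              # 08:13:08.367
    with rbd.Image(images, dst_name) as dst:
        dst.create_snap('snap')                                # 08:13:08.849

    images.close()
    vms.close()
    cluster.shutdown()
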
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:13:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:13:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 536 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 13 MiB/s wr, 200 op/s
Nov 29 03:13:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:09.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 29 03:13:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 29 03:13:09 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 29 03:13:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:10.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 517 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 265 op/s
Nov 29 03:13:11 np0005539550 nova_compute[257631]: 2025-11-29 08:13:11.499 257641 INFO nova.virt.libvirt.driver [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Snapshot image upload complete
Nov 29 03:13:11 np0005539550 nova_compute[257631]: 2025-11-29 08:13:11.500 257641 INFO nova.compute.manager [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Took 5.44 seconds to snapshot the instance on the hypervisor.
Nov 29 03:13:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:11.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:11 np0005539550 nova_compute[257631]: 2025-11-29 08:13:11.851 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:12 np0005539550 nova_compute[257631]: 2025-11-29 08:13:12.176 257641 DEBUG nova.compute.manager [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Nov 29 03:13:12 np0005539550 nova_compute[257631]: 2025-11-29 08:13:12.177 257641 DEBUG nova.compute.manager [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Nov 29 03:13:12 np0005539550 nova_compute[257631]: 2025-11-29 08:13:12.177 257641 DEBUG nova.compute.manager [None req-9d2ead88-0bd8-44bc-8e2e-193b907c6e88 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Deleting image 7e8836eb-bacd-485f-b61b-a89bcd8f6e84 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
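Those three manager lines show the whole _rotate_backups contract: with rotation=2 and 3 matching backup images, exactly the oldest one is deleted (the earlier pass at 08:13:02.689 found only 2 images and deleted nothing). A paraphrase of that logic; the image list and delete call are hypothetical stand-ins for the Glance queries the real code performs:

    def rotate_backups(images, rotation):
        """Keep the newest `rotation` backups, delete the rest.

        `images` is a list of dicts with 'id' and 'created_at' keys --
        a stand-in for Nova's Glance lookup by instance and backup_type.
        """
        if len(images) <= rotation:
            return []
        ordered = sorted(images, key=lambda img: img['created_at'])  # oldest first
        doomed = ordered[:len(images) - rotation]   # 3 - 2 = 1 image here
        for image in doomed:
            print('Deleting image %s' % image['id'])  # glance image delete
        return doomed
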
Nov 29 03:13:12 np0005539550 nova_compute[257631]: 2025-11-29 08:13:12.335 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:12.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 517 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 8.9 MiB/s wr, 199 op/s
Nov 29 03:13:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:13.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 29 03:13:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 29 03:13:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 29 03:13:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:14.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:14 np0005539550 nova_compute[257631]: 2025-11-29 08:13:14.551 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 484 MiB data, 1006 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.4 MiB/s wr, 170 op/s
Nov 29 03:13:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:15.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:16.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 29 03:13:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 29 03:13:16 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 29 03:13:16 np0005539550 nova_compute[257631]: 2025-11-29 08:13:16.854 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 447 MiB data, 959 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.1 MiB/s wr, 262 op/s
Nov 29 03:13:17 np0005539550 podman[312287]: 2025-11-29 08:13:17.307734528 +0000 UTC m=+0.048124219 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:13:17 np0005539550 podman[312286]: 2025-11-29 08:13:17.314532221 +0000 UTC m=+0.057876998 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:13:17 np0005539550 nova_compute[257631]: 2025-11-29 08:13:17.336 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:17.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 29 03:13:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 29 03:13:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 29 03:13:18 np0005539550 nova_compute[257631]: 2025-11-29 08:13:18.105 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:18.946 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:18.947 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:18.947 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
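The Acquiring/acquired/released triplet above is the standard oslo.concurrency trace: lockutils logs the wait time on acquisition and the hold time on release. Neutron's ProcessMonitor here and the nova resource tracker later in this log both use the same primitives, roughly:

    from oslo_concurrency import lockutils

    # Decorator form, as used for "_check_child_processes".
    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        pass  # runs with the named in-process lock held

    # Equivalent explicit context-manager form.
    with lockutils.lock('compute_resources'):
        pass  # waited/held durations are logged at DEBUG
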
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 187 MiB data, 855 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 40 KiB/s wr, 412 op/s
Nov 29 03:13:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:19.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002837901893727899 of space, bias 1.0, pg target 0.8513705681183696 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003028925006979341 of space, bias 1.0, pg target 0.9086775020938024 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:13:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
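The autoscaler targets above can be reproduced by hand: each logged "pg target" is usage_ratio x bias x (mon_target_pg_per_osd x OSD count). Assuming the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs (factor 300) matches every logged value to float precision; the ideal count is then quantized to a power of two and left at the current pg_num while it stays inside the autoscaler's 3x tolerance, which is why every pool reports "(current N)" unchanged. Spot-checking three of the lines:

    pools = {
        # pool: (usage ratio from the log, bias from the log)
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'vms':                (0.002837901893727899,   1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
    }
    TARGET_PGS = 100 * 3  # assumed default mon_target_pg_per_osd * 3 OSDs

    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * TARGET_PGS)
    # .mgr               -> 0.006161449609156895  (logged pg target)
    # vms                -> 0.8513705681183696    (logged pg target)
    # cephfs.cephfs.meta -> 0.0017448352875488555 (logged pg target)
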
Nov 29 03:13:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:20.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 121 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 9.5 KiB/s wr, 303 op/s
Nov 29 03:13:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:21.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 29 03:13:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 29 03:13:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 29 03:13:21 np0005539550 nova_compute[257631]: 2025-11-29 08:13:21.856 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:21 np0005539550 nova_compute[257631]: 2025-11-29 08:13:21.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:21 np0005539550 nova_compute[257631]: 2025-11-29 08:13:21.946 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:21 np0005539550 nova_compute[257631]: 2025-11-29 08:13:21.947 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:21 np0005539550 nova_compute[257631]: 2025-11-29 08:13:21.947 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:21 np0005539550 nova_compute[257631]: 2025-11-29 08:13:21.947 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:13:21 np0005539550 nova_compute[257631]: 2025-11-29 08:13:21.947 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.338 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109504117' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.381 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
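The resource audit shells out to `ceph df --format=json` with the openstack keyring to size the RBD pool backing ephemeral disks (two runs of ~0.43s each in this pass, each one visible as a mon_command dispatch in the ceph-mon audit log). The same call and the usual reduction of its output, using the standard `ceph df` JSON fields (stats.total_bytes, stats.total_avail_bytes):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('%.1f GiB free of %.1f GiB'
          % (stats['total_avail_bytes'] / gib, stats['total_bytes'] / gib))
    # should line up with the "20 GiB / 21 GiB avail" figures in the pgmap lines
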
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.462 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.463 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:13:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:22.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.631 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.632 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4291MB free_disk=20.94269561767578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.632 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.633 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.719 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 19e85fae-c57e-409b-95f7-b53ddb4c928e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.719 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.719 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:13:22 np0005539550 nova_compute[257631]: 2025-11-29 08:13:22.753 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:22Z|00346|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:13:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:23Z|00347|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:13:23 np0005539550 nova_compute[257631]: 2025-11-29 08:13:23.062 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:23 np0005539550 nova_compute[257631]: 2025-11-29 08:13:23.088 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914698494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:23 np0005539550 nova_compute[257631]: 2025-11-29 08:13:23.190 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:23 np0005539550 nova_compute[257631]: 2025-11-29 08:13:23.196 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:13:23 np0005539550 nova_compute[257631]: 2025-11-29 08:13:23.214 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
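Placement turns that inventory into schedulable capacity as (total - reserved) x allocation_ratio per resource class, so this host can hand out 32 vCPUs, 7168 MB of RAM and about 17.1 GB of disk. A quick check against the logged figures:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1 (modulo float repr)

The "Final resource view" line a moment earlier is consistent with the single running instance's allocations (DISK_GB: 1, MEMORY_MB: 128, VCPU: 1): used_ram=640MB is the 512 MB reservation plus 128 MB, and used_vcpus=1 leaves free_vcpus=7 of the 8 physical vCPUs.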
Nov 29 03:13:23 np0005539550 nova_compute[257631]: 2025-11-29 08:13:23.234 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:13:23 np0005539550 nova_compute[257631]: 2025-11-29 08:13:23.235 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 121 MiB data, 816 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 8.6 KiB/s wr, 211 op/s
Nov 29 03:13:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:23.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:24 np0005539550 nova_compute[257631]: 2025-11-29 08:13:24.236 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:24 np0005539550 nova_compute[257631]: 2025-11-29 08:13:24.236 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:24.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:24 np0005539550 nova_compute[257631]: 2025-11-29 08:13:24.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:24 np0005539550 nova_compute[257631]: 2025-11-29 08:13:24.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:24 np0005539550 nova_compute[257631]: 2025-11-29 08:13:24.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:13:24 np0005539550 nova_compute[257631]: 2025-11-29 08:13:24.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:13:25 np0005539550 nova_compute[257631]: 2025-11-29 08:13:25.239 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:13:25 np0005539550 nova_compute[257631]: 2025-11-29 08:13:25.239 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:13:25 np0005539550 nova_compute[257631]: 2025-11-29 08:13:25.239 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:13:25 np0005539550 nova_compute[257631]: 2025-11-29 08:13:25.240 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:13:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 121 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 7.0 KiB/s wr, 172 op/s
Nov 29 03:13:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:25.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:26.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:26 np0005539550 nova_compute[257631]: 2025-11-29 08:13:26.674 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 29 03:13:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 29 03:13:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 29 03:13:26 np0005539550 nova_compute[257631]: 2025-11-29 08:13:26.858 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.239 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
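The info_cache payload above is Nova's cached view of the Neutron port: addresses hang off network.subnets[].ips, and each floating IP nests under the fixed IP it maps to. Walking the structure from the log, trimmed to the relevant keys:

    vif = {
        "id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7",
        "address": "fa:16:3e:de:2f:2f",
        "network": {"subnets": [{"cidr": "10.100.0.0/28", "ips": [{
            "address": "10.100.0.6", "type": "fixed",
            "floating_ips": [{"address": "192.168.122.246",
                              "type": "floating"}],
        }]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", floats)
    # 10.100.0.6 -> ['192.168.122.246']
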
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.256 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.256 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.256 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.256 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 121 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.358 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:27.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:27 np0005539550 nova_compute[257631]: 2025-11-29 08:13:27.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:13:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:28.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:28 np0005539550 nova_compute[257631]: 2025-11-29 08:13:28.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:13:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 121 MiB data, 787 MiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 13 KiB/s wr, 1 op/s
Nov 29 03:13:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:29.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:30 np0005539550 podman[312382]: 2025-11-29 08:13:30.366416472 +0000 UTC m=+0.109079675 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
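This podman event records a periodic health check of the ovn_controller container: health_status=healthy with a failing streak of 0, using the /openstack/healthcheck test mounted in from /var/lib/openstack/healthchecks/ovn_controller. A hedged sketch of triggering the same check by hand; the container name is taken from the event above, and exit status 0 corresponds to "healthy".

    # Hedged sketch: run the configured health check manually via podman.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True,
    )
    # Exit status 0 means the test passed; non-zero mirrors health_status=unhealthy.
    print("healthy" if result.returncode == 0 else "unhealthy", result.stdout)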
Nov 29 03:13:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:30.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 138 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 769 KiB/s wr, 18 op/s
Nov 29 03:13:31 np0005539550 nova_compute[257631]: 2025-11-29 08:13:31.447 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:31.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.765572) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011765752, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1312, "num_deletes": 258, "total_data_size": 1923118, "memory_usage": 1961976, "flush_reason": "Manual Compaction"}
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011775197, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1270000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38698, "largest_seqno": 40009, "table_properties": {"data_size": 1264835, "index_size": 2497, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13733, "raw_average_key_size": 21, "raw_value_size": 1253550, "raw_average_value_size": 1974, "num_data_blocks": 109, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764403917, "oldest_key_time": 1764403917, "file_creation_time": 1764404011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 9654 microseconds, and 4751 cpu microseconds.
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.775237) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1270000 bytes OK
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.775258) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777107) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777119) EVENT_LOG_v1 {"time_micros": 1764404011777115, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777147) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1917204, prev total WAL file size 1917204, number of live WAL files 2.
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323532' seq:72057594037927935, type:22 .. '6D6772737461740031353033' seq:0, type:0; will stop at (end)
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1240KB)], [83(11MB)]
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011778011, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 13563979, "oldest_snapshot_seqno": -1}
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7292 keys, 10352334 bytes, temperature: kUnknown
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011850803, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 10352334, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10304771, "index_size": 28251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188833, "raw_average_key_size": 25, "raw_value_size": 10175198, "raw_average_value_size": 1395, "num_data_blocks": 1114, "num_entries": 7292, "num_filter_entries": 7292, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.851147) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 10352334 bytes
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.852731) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.0 rd, 141.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.8) write-amplify(8.2) OK, records in: 7779, records dropped: 487 output_compression: NoCompression
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.852753) EVENT_LOG_v1 {"time_micros": 1764404011852743, "job": 48, "event": "compaction_finished", "compaction_time_micros": 72938, "compaction_time_cpu_micros": 29805, "output_level": 6, "num_output_files": 1, "total_output_size": 10352334, "num_input_records": 7779, "num_output_records": 7292, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
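The amplification figures in the JOB 48 summary can be reproduced from the byte counts in the surrounding events: 1,270,000 B of fresh L0 input (table #85 from the flush), 13,563,979 B of total compaction input, and 10,352,334 B of output. A worked check:

    # Worked check of JOB 48's amplification numbers from the events above.
    l0_input = 1_270_000         # bytes flushed to table #85 by job 47
    total_input = 13_563_979     # "input_data_size" from compaction_started
    output = 10_352_334          # "total_output_size" from compaction_finished

    write_amplify = output / l0_input                       # ~8.2
    read_write_amplify = (total_input + output) / l0_input  # ~18.8
    print(f"write-amplify {write_amplify:.1f}, "
          f"read-write-amplify {read_write_amplify:.1f}")

Both values match the logged write-amplify(8.2) and read-write-amplify(18.8).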
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011853182, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404011856303, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.777777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.856347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.856352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.856354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.856356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:13:31.856358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:13:31 np0005539550 nova_compute[257631]: 2025-11-29 08:13:31.860 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:32 np0005539550 nova_compute[257631]: 2025-11-29 08:13:32.361 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:32.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 138 MiB data, 794 MiB used, 20 GiB / 21 GiB avail; 8.4 KiB/s rd, 731 KiB/s wr, 17 op/s
Nov 29 03:13:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:33.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:34.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 167 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Nov 29 03:13:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:35.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:36.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:36 np0005539550 nova_compute[257631]: 2025-11-29 08:13:36.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:36 np0005539550 nova_compute[257631]: 2025-11-29 08:13:36.978 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.178 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.179 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.215 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:13:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 167 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.331 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.332 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.338 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.338 257641 INFO nova.compute.claims [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.363 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.477 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:37.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439980307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.940 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
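Before claiming disk for the new instance, nova's RBD image backend shells out to ceph df to refresh pool capacity; the ceph-mon handle_command and audit entries a few lines up show that same command arriving on the monitor as a mon_command from client.openstack. A sketch of the equivalent probe, with --id and --conf copied from the log (adjust for your cluster); the top-level "stats" and "pools" keys are the usual layout of ceph df JSON output.

    # Hedged sketch: the capacity probe nova runs, via subprocess.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        text=True,
    )
    stats = json.loads(out)
    # "stats" holds cluster-wide totals; "pools" holds per-pool usage.
    print(stats["stats"]["total_bytes"], len(stats["pools"]))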
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.946 257641 DEBUG nova.compute.provider_tree [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:37 np0005539550 nova_compute[257631]: 2025-11-29 08:13:37.967 257641 DEBUG nova.scheduler.client.report [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
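The inventory data above translates into schedulable capacity as (total - reserved) × allocation_ratio per resource class: 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk for this provider. A worked example:

    # Worked example: effective capacity from the inventory data logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1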
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.002 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.003 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.065 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.066 257641 DEBUG nova.network.neutron [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.087 257641 INFO nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.117 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.223 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.224 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.225 257641 INFO nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Creating image(s)#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.249 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.275 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.314 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.318 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.400 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
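The base-image probe is wrapped in oslo_concurrency.prlimit, which re-execs the command with an address-space cap (--as=1073741824, i.e. 1 GiB) and a CPU-time cap (--cpu=30 seconds) so a malformed image cannot wedge the compute host. A sketch of the same invocation, with the command line reassembled from the log entry above; "format" and "virtual-size" are standard keys in qemu-img's JSON output.

    # Hedged sketch: qemu-img info under oslo's prlimit wrapper, as in the log.
    import json
    import subprocess

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de",
        "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd, text=True))
    print(info["format"], info["virtual-size"])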
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.401 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.401 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.401 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.424 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.427 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.452 257641 DEBUG nova.policy [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '05e59f4debd946ad9b7a4bac0e968bc6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:13:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:38.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.675 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.739 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] resizing rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
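After rbd import copies the cached base image into the vms pool, nova grows the image to the flavor's root-disk size; the target above, 1073741824 bytes, is exactly 1 GiB. A short worked check, with the CLI equivalent noted as an assumption:

    # Worked example: the resize target above in GiB.
    GIB = 1024 ** 3
    target = 1_073_741_824
    print(target / GIB, "GiB")  # 1.0 GiB
    # CLI equivalent (assumption: rbd accepts unit suffixes on --size):
    #   rbd resize --size 1G --id openstack --conf /etc/ceph/ceph.conf \
    #       vms/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk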
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.839 257641 DEBUG nova.objects.instance [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'migration_context' on Instance uuid 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.872 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.872 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Ensure instance console log exists: /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.873 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.873 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:38 np0005539550 nova_compute[257631]: 2025-11-29 08:13:38.874 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
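The Acquiring/acquired/released triples throughout this section come from oslo.concurrency's lockutils, which nova uses for its critical sections and which logs the waited/held durations seen above. A minimal usage sketch of that API; the "vgpu_resources" name is taken from the log, and oslo.concurrency must be installed for it to run.

    # Minimal sketch of the oslo.concurrency lock API behind these messages.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # Critical section: only one thread allocates mediated devices at a time.
        return []

    allocate_mdevs()

By default the lock is in-process only; passing external=True would make it a file-based lock shared across processes.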
Nov 29 03:13:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 202 MiB data, 825 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.2 MiB/s wr, 88 op/s
Nov 29 03:13:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:39.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:39 np0005539550 nova_compute[257631]: 2025-11-29 08:13:39.988 257641 DEBUG nova.network.neutron [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Successfully created port: f890c7dd-6d37-48c2-83ba-547d7fc32c12 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:13:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 238 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 130 op/s
Nov 29 03:13:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.691 257641 DEBUG nova.network.neutron [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Successfully updated port: f890c7dd-6d37-48c2-83ba-547d7fc32c12 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.704 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.705 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquired lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.705 257641 DEBUG nova.network.neutron [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.728 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.729 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.763 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.863 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.924 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.925 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.932 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:13:41 np0005539550 nova_compute[257631]: 2025-11-29 08:13:41.932 257641 INFO nova.compute.claims [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.149 257641 DEBUG nova.network.neutron [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.317 257641 DEBUG nova.compute.manager [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Received event network-changed-f890c7dd-6d37-48c2-83ba-547d7fc32c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.318 257641 DEBUG nova.compute.manager [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Refreshing instance network info cache due to event network-changed-f890c7dd-6d37-48c2-83ba-547d7fc32c12. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.318 257641 DEBUG oslo_concurrency.lockutils [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.320 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.409 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:42.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:13:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806897331' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.770 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.775 257641 DEBUG nova.compute.provider_tree [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.809 257641 DEBUG nova.scheduler.client.report [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.904 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.905 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.996 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:13:42 np0005539550 nova_compute[257631]: 2025-11-29 08:13:42.997 257641 DEBUG nova.network.neutron [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:13:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 238 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 117 op/s
Nov 29 03:13:43 np0005539550 nova_compute[257631]: 2025-11-29 08:13:43.459 257641 DEBUG nova.policy [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5dc4cc7160064e9e82d9d21ebfd05d2f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c89019b6f53547259a833925c95b09c1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
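Both policy failures in this section are expected for a regular project user: network:attach_external_network is, as far as upstream nova's defaults go, an admin-only rule, and the request context shown above carries only the member and reader roles with is_admin False, so nova simply does not request an external-network binding. A toy illustration of the outcome, not oslo.policy's actual engine:

    # Toy illustration (assumption: the rule is admin-only, per upstream defaults).
    def attach_external_network_allowed(creds: dict) -> bool:
        return creds.get("is_admin", False)

    creds = {"is_admin": False, "roles": ["member", "reader"]}
    print(attach_external_network_allowed(creds))  # False -> policy check fails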
Nov 29 03:13:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:43.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:43 np0005539550 nova_compute[257631]: 2025-11-29 08:13:43.923 257641 INFO nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:13:43 np0005539550 nova_compute[257631]: 2025-11-29 08:13:43.954 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.074 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.075 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.076 257641 INFO nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Creating image(s)#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.102 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.135 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.167 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.173 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.240 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.241 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.242 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.242 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
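The lockutils trio above is Nova serializing image-cache population per base-image hash: whoever takes the lock fetches, everyone else waits and then finds the file already present. A minimal sketch of the same pattern with oslo.concurrency; the lock name is the hash from the log, Nova builds this wrapper dynamically inside Image.cache, and real deployments also configure an external lock_path, which this sketch omits:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("f62ef5f82502d01c82174408aec7f3ac942e2488")
    def fetch_func_sync():
        # Download/convert the base image here. Concurrent callers for the
        # same hash block on the semaphore until the first one finishes.
        pass

    fetch_func_sync()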
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.274 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.279 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 eebe492a-dfa8-49b7-83b6-6c8447520afb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.417 257641 DEBUG nova.network.neutron [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Updating instance_info_cache with network_info: [{"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.476 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Releasing lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.477 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Instance network_info: |[{"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.477 257641 DEBUG oslo_concurrency.lockutils [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.478 257641 DEBUG nova.network.neutron [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Refreshing network info cache for port f890c7dd-6d37-48c2-83ba-547d7fc32c12 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.480 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Start _get_guest_xml network_info=[{"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '93eccffb-bacd-407f-af6f-64451dee7b21'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.500 257641 WARNING nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:44.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
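The three radosgw lines above record one anonymous HEAD / health probe, typically from a load balancer. An approximate reproduction in Python; the gateway address is taken from the log, but the port (8080) is an assumption, and this speaks HTTP/1.1 where the probe above used HTTP/1.0:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # 200 while the gateway is serving
    conn.close()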
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.506 257641 DEBUG nova.virt.libvirt.host [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.507 257641 DEBUG nova.virt.libvirt.host [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.515 257641 DEBUG nova.virt.libvirt.host [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.516 257641 DEBUG nova.virt.libvirt.host [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
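The two-step search above (cgroups v1, then v2) is Nova deciding whether CPU quota support exists on the host; here v1 lacks the controller and v2 provides it. One way to perform the same v2 probe, assuming the standard unified-hierarchy mount point:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        # On a cgroup-v2 host the enabled controllers are listed,
        # space-separated, in a single file at the hierarchy root.
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        return controllers.exists() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())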
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.517 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.518 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.518 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.518 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.519 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.519 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.519 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.519 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.520 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.520 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.520 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.520 257641 DEBUG nova.virt.hardware [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
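The hardware.py lines above walk a constraint search: with no flavor or image topology hints (all 0:0:0) and ceiling limits of 65536, the only (sockets, cores, threads) split whose product equals 1 vCPU is 1:1:1. An illustrative brute-force version of that enumeration; Nova's real code also orders candidates by preference, which is omitted here:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Return every (sockets, cores, threads) whose product is vcpus."""
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches the log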
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.523 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.592 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 eebe492a-dfa8-49b7-83b6-6c8447520afb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.685 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] resizing rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
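Taken together, the import at 08:13:44.279/592 and the resize above are the whole root-disk build for this instance: copy the cached base image into the vms pool, then grow it to the flavor's 1 GiB. A CLI-level sketch of both steps; the import command is verbatim from the log, while Nova actually performs the resize through librbd rather than the rbd binary:

    import subprocess

    BASE = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    IMAGE = "eebe492a-dfa8-49b7-83b6-6c8447520afb_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Step 1: import the flat base file as a format-2 RBD image.
    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMAGE,
                    "--image-format=2", *CEPH], check=True)

    # Step 2: grow it to the 1073741824-byte (1 GiB) root disk from the log.
    subprocess.run(["rbd", "resize", "--pool", "vms", IMAGE,
                    "--size", "1G", *CEPH], check=True)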
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.798 257641 DEBUG nova.objects.instance [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lazy-loading 'migration_context' on Instance uuid eebe492a-dfa8-49b7-83b6-6c8447520afb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.832 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.832 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Ensure instance console log exists: /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.833 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.833 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.833 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.901 257641 DEBUG nova.compute.manager [None req-3ef91605-5793-4d48-a89a-05f12dea5d6b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Nov 29 03:13:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069533894' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:44 np0005539550 nova_compute[257631]: 2025-11-29 08:13:44.988 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
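The mon dump round-trip above (dispatched on the mon at 08:13:44, returned to Nova 0.465 s later) is how the driver learns the monitor endpoints that later appear as <host> elements in the guest XML further down. A sketch that runs the same command and pulls out the v1 addresses, assuming the usual mon dump JSON layout (a "mons" list whose entries carry a "public_addr" of the form ip:port/nonce):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    monmap = json.loads(out)
    addrs = [m["public_addr"].rsplit("/", 1)[0] for m in monmap["mons"]]
    print(addrs)   # e.g. ['192.168.122.100:6789', '192.168.122.101:6789', ...]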
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.020 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.025 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:13:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 309 MiB data, 875 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.8 MiB/s wr, 181 op/s
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.309 257641 DEBUG nova.network.neutron [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Successfully created port: d6293fb0-6295-4539-9b70-544701b49154 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
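The port-creation line above is Nova calling Neutron on the instance's behalf. The equivalent call made directly with openstacksdk looks roughly like this; the cloud name and port name are placeholders, and only the network ID is taken from the log:

    import openstack

    conn = openstack.connect(cloud="overcloud")   # hypothetical clouds.yaml entry
    port = conn.network.create_port(
        network_id="5ce08321-9ca9-47d5-b99b-65a439440787",
        name="example-port",
    )
    print(port.id, port.mac_address)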
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334488633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.492 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.495 257641 DEBUG nova.virt.libvirt.vif [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1208248196',display_name='tempest-ListServerFiltersTestJSON-instance-1208248196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1208248196',id=91,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-ropy304h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:38Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=53d2b424-4fdc-47e3-afa1-849bfc2c0b5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.495 257641 DEBUG nova.network.os_vif_util [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.496 257641 DEBUG nova.network.os_vif_util [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=f890c7dd-6d37-48c2-83ba-547d7fc32c12,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf890c7dd-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:13:45 np0005539550 nova_compute[257631]: 2025-11-29 08:13:45.498 257641 DEBUG nova.objects.instance [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'pci_devices' on Instance uuid 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b6601923-59a5-4411-afac-14bd1a932ebe does not exist
Nov 29 03:13:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 03de6c6e-5e9f-4794-ae98-febd464a09ac does not exist
Nov 29 03:13:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d6eb1156-1eaf-4758-b81c-12fda154373c does not exist
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:13:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:13:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:45.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:46 np0005539550 podman[313291]: 2025-11-29 08:13:46.199038742 +0000 UTC m=+0.045025730 container create b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:13:46 np0005539550 systemd[1]: Started libpod-conmon-b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72.scope.
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:13:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:13:46 np0005539550 podman[313291]: 2025-11-29 08:13:46.178244781 +0000 UTC m=+0.024231799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:46 np0005539550 podman[313291]: 2025-11-29 08:13:46.276112469 +0000 UTC m=+0.122099487 container init b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:13:46 np0005539550 podman[313291]: 2025-11-29 08:13:46.284337299 +0000 UTC m=+0.130324297 container start b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:13:46 np0005539550 podman[313291]: 2025-11-29 08:13:46.288215078 +0000 UTC m=+0.134202126 container attach b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:13:46 np0005539550 suspicious_ellis[313307]: 167 167
Nov 29 03:13:46 np0005539550 systemd[1]: libpod-b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72.scope: Deactivated successfully.
Nov 29 03:13:46 np0005539550 podman[313291]: 2025-11-29 08:13:46.290204228 +0000 UTC m=+0.136191226 container died b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:13:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bc06090b8afcaa315414198d7d7cf33b578da267c78d68996cc2a5aaedd9681a-merged.mount: Deactivated successfully.
Nov 29 03:13:46 np0005539550 podman[313291]: 2025-11-29 08:13:46.327380987 +0000 UTC m=+0.173367985 container remove b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:13:46 np0005539550 systemd[1]: libpod-conmon-b35336226d77cfb30af285005828940319eef624579340d456bdcda94bec4c72.scope: Deactivated successfully.
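The podman lines from 08:13:46.199 through the scope deactivation above are one complete throwaway-container lifecycle (create, init, start, attach, die, remove) driven by cephadm; the container's only output was "167 167", the ceph uid/gid that cephadm probes for before laying down files. A sketch of that pattern, assuming a stat entrypoint like cephadm's uid/gid extraction:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Run a short-lived container that just stats /var/lib/ceph and exits;
    # --rm removes it immediately, matching the create/die/remove trio above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, check=True, text=True,
    ).stdout
    print(out.strip())   # expected "167 167" for the ceph user/group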
Nov 29 03:13:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:46.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:46 np0005539550 podman[313329]: 2025-11-29 08:13:46.464743872 +0000 UTC m=+0.021942841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:46 np0005539550 podman[313329]: 2025-11-29 08:13:46.672072372 +0000 UTC m=+0.229271311 container create 640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moser, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:13:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:46 np0005539550 systemd[1]: Started libpod-conmon-640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a.scope.
Nov 29 03:13:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:13:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01c807b2cd629859710845f7101ae682d3af41c0c114b71e8e881c3b9c518e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01c807b2cd629859710845f7101ae682d3af41c0c114b71e8e881c3b9c518e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01c807b2cd629859710845f7101ae682d3af41c0c114b71e8e881c3b9c518e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01c807b2cd629859710845f7101ae682d3af41c0c114b71e8e881c3b9c518e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c01c807b2cd629859710845f7101ae682d3af41c0c114b71e8e881c3b9c518e4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:46 np0005539550 podman[313329]: 2025-11-29 08:13:46.865228321 +0000 UTC m=+0.422427300 container init 640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moser, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:46 np0005539550 nova_compute[257631]: 2025-11-29 08:13:46.866 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:46 np0005539550 podman[313329]: 2025-11-29 08:13:46.874620221 +0000 UTC m=+0.431819170 container start 640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moser, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:13:46 np0005539550 podman[313329]: 2025-11-29 08:13:46.878548891 +0000 UTC m=+0.435747840 container attach 640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.097 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <uuid>53d2b424-4fdc-47e3-afa1-849bfc2c0b5a</uuid>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <name>instance-0000005b</name>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1208248196</nova:name>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:13:44</nova:creationTime>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:user uuid="05e59f4debd946ad9b7a4bac0e968bc6">tempest-ListServerFiltersTestJSON-825347861-project-member</nova:user>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:project uuid="17c0ff0fdeac43fc8fa0d7bedad67c34">tempest-ListServerFiltersTestJSON-825347861</nova:project>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <nova:port uuid="f890c7dd-6d37-48c2-83ba-547d7fc32c12">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <entry name="serial">53d2b424-4fdc-47e3-afa1-849bfc2c0b5a</entry>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <entry name="uuid">53d2b424-4fdc-47e3-afa1-849bfc2c0b5a</entry>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk.config">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:bd:d1:d1"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <target dev="tapf890c7dd-6d"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/console.log" append="off"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:13:47 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:13:47 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:13:47 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:13:47 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.099 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Preparing to wait for external event network-vif-plugged-f890c7dd-6d37-48c2-83ba-547d7fc32c12 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.099 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.100 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.100 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.101 257641 DEBUG nova.virt.libvirt.vif [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1208248196',display_name='tempest-ListServerFiltersTestJSON-instance-1208248196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1208248196',id=91,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-ropy304h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:38Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=53d2b424-4fdc-47e3-afa1-849bfc2c0b5a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.101 257641 DEBUG nova.network.os_vif_util [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.102 257641 DEBUG nova.network.os_vif_util [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=f890c7dd-6d37-48c2-83ba-547d7fc32c12,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf890c7dd-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.102 257641 DEBUG os_vif [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=f890c7dd-6d37-48c2-83ba-547d7fc32c12,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf890c7dd-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.103 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.103 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.104 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.108 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.108 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf890c7dd-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.109 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf890c7dd-6d, col_values=(('external_ids', {'iface-id': 'f890c7dd-6d37-48c2-83ba-547d7fc32c12', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:d1:d1', 'vm-uuid': '53d2b424-4fdc-47e3-afa1-849bfc2c0b5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.110 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:47 np0005539550 NetworkManager[49039]: <info>  [1764404027.1120] manager: (tapf890c7dd-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.112 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.118 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:47 np0005539550 nova_compute[257631]: 2025-11-29 08:13:47.119 257641 INFO os_vif [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=f890c7dd-6d37-48c2-83ba-547d7fc32c12,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf890c7dd-6d')
Nov 29 03:13:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 370 MiB data, 899 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.6 MiB/s wr, 196 op/s
Nov 29 03:13:47 np0005539550 focused_moser[313345]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:13:47 np0005539550 focused_moser[313345]: --> relative data size: 1.0
Nov 29 03:13:47 np0005539550 focused_moser[313345]: --> All data devices are unavailable
Nov 29 03:13:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:47.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:47 np0005539550 systemd[1]: libpod-640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a.scope: Deactivated successfully.
Nov 29 03:13:47 np0005539550 podman[313329]: 2025-11-29 08:13:47.685318119 +0000 UTC m=+1.242517068 container died 640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moser, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c01c807b2cd629859710845f7101ae682d3af41c0c114b71e8e881c3b9c518e4-merged.mount: Deactivated successfully.
Nov 29 03:13:47 np0005539550 podman[313329]: 2025-11-29 08:13:47.741824481 +0000 UTC m=+1.299023430 container remove 640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Nov 29 03:13:47 np0005539550 systemd[1]: libpod-conmon-640659e79b804c4f052a1a271cde42344fd988f3a823822a798032d71b6a242a.scope: Deactivated successfully.
Nov 29 03:13:47 np0005539550 podman[313370]: 2025-11-29 08:13:47.785602798 +0000 UTC m=+0.065932984 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:13:47 np0005539550 podman[313363]: 2025-11-29 08:13:47.790714438 +0000 UTC m=+0.078226787 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 03:13:48 np0005539550 podman[313548]: 2025-11-29 08:13:48.331310713 +0000 UTC m=+0.039615541 container create fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:13:48 np0005539550 systemd[1]: Started libpod-conmon-fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527.scope.
Nov 29 03:13:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:13:48 np0005539550 podman[313548]: 2025-11-29 08:13:48.404423149 +0000 UTC m=+0.112728007 container init fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:13:48 np0005539550 podman[313548]: 2025-11-29 08:13:48.313788266 +0000 UTC m=+0.022093124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:48 np0005539550 podman[313548]: 2025-11-29 08:13:48.411600762 +0000 UTC m=+0.119905590 container start fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:13:48 np0005539550 podman[313548]: 2025-11-29 08:13:48.414992049 +0000 UTC m=+0.123296877 container attach fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:13:48 np0005539550 competent_bhabha[313563]: 167 167
Nov 29 03:13:48 np0005539550 systemd[1]: libpod-fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527.scope: Deactivated successfully.
Nov 29 03:13:48 np0005539550 podman[313548]: 2025-11-29 08:13:48.417994706 +0000 UTC m=+0.126299524 container died fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:13:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f39dee99ae66108e6f2f232f62680c8130d5fa6b8a90d512cb4d6b528b55a407-merged.mount: Deactivated successfully.
Nov 29 03:13:48 np0005539550 podman[313548]: 2025-11-29 08:13:48.452046924 +0000 UTC m=+0.160351752 container remove fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:13:48 np0005539550 systemd[1]: libpod-conmon-fef68c6ffe9b6e75f73509f9ee42c0a1c53327a1dfa559d64dcb967c1f2ad527.scope: Deactivated successfully.
Nov 29 03:13:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:48.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:48 np0005539550 podman[313586]: 2025-11-29 08:13:48.609974485 +0000 UTC m=+0.041165412 container create 622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:48 np0005539550 systemd[1]: Started libpod-conmon-622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75.scope.
Nov 29 03:13:48 np0005539550 nova_compute[257631]: 2025-11-29 08:13:48.671 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:13:48 np0005539550 nova_compute[257631]: 2025-11-29 08:13:48.673 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:13:48 np0005539550 nova_compute[257631]: 2025-11-29 08:13:48.673 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] No VIF found with MAC fa:16:3e:bd:d1:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:13:48 np0005539550 nova_compute[257631]: 2025-11-29 08:13:48.674 257641 INFO nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Using config drive
Nov 29 03:13:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:13:48 np0005539550 podman[313586]: 2025-11-29 08:13:48.591668407 +0000 UTC m=+0.022859344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6761339572c7d6988d330efb424485c09dac6aab380c3ae640651c3895d082d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6761339572c7d6988d330efb424485c09dac6aab380c3ae640651c3895d082d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6761339572c7d6988d330efb424485c09dac6aab380c3ae640651c3895d082d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6761339572c7d6988d330efb424485c09dac6aab380c3ae640651c3895d082d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:48 np0005539550 podman[313586]: 2025-11-29 08:13:48.710658814 +0000 UTC m=+0.141849761 container init 622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kilby, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:13:48 np0005539550 nova_compute[257631]: 2025-11-29 08:13:48.713 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:48 np0005539550 podman[313586]: 2025-11-29 08:13:48.71715961 +0000 UTC m=+0.148350537 container start 622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kilby, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:13:48 np0005539550 podman[313586]: 2025-11-29 08:13:48.720226358 +0000 UTC m=+0.151417305 container attach 622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kilby, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:13:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 399 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 8.9 MiB/s wr, 219 op/s
Nov 29 03:13:49 np0005539550 nova_compute[257631]: 2025-11-29 08:13:49.357 257641 INFO nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Creating config drive at /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/disk.config
Nov 29 03:13:49 np0005539550 nova_compute[257631]: 2025-11-29 08:13:49.361 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcvnypjxd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:49 np0005539550 nova_compute[257631]: 2025-11-29 08:13:49.388 257641 DEBUG nova.network.neutron [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Updated VIF entry in instance network info cache for port f890c7dd-6d37-48c2-83ba-547d7fc32c12. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:13:49 np0005539550 nova_compute[257631]: 2025-11-29 08:13:49.389 257641 DEBUG nova.network.neutron [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Updating instance_info_cache with network_info: [{"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:13:49 np0005539550 nova_compute[257631]: 2025-11-29 08:13:49.494 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcvnypjxd" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:49 np0005539550 clever_kilby[313602]: {
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:    "0": [
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:        {
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "devices": [
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "/dev/loop3"
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            ],
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "lv_name": "ceph_lv0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "lv_size": "7511998464",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "name": "ceph_lv0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "tags": {
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.cluster_name": "ceph",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.crush_device_class": "",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.encrypted": "0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.osd_id": "0",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.type": "block",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:                "ceph.vdo": "0"
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            },
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "type": "block",
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:            "vg_name": "ceph_vg0"
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:        }
Nov 29 03:13:49 np0005539550 clever_kilby[313602]:    ]
Nov 29 03:13:49 np0005539550 clever_kilby[313602]: }
Nov 29 03:13:49 np0005539550 nova_compute[257631]: 2025-11-29 08:13:49.528 257641 DEBUG nova.storage.rbd_utils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] rbd image 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:49 np0005539550 nova_compute[257631]: 2025-11-29 08:13:49.533 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/disk.config 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:49 np0005539550 systemd[1]: libpod-622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75.scope: Deactivated successfully.
Nov 29 03:13:49 np0005539550 podman[313651]: 2025-11-29 08:13:49.591083491 +0000 UTC m=+0.022058664 container died 622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:13:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6761339572c7d6988d330efb424485c09dac6aab380c3ae640651c3895d082d3-merged.mount: Deactivated successfully.
Nov 29 03:13:49 np0005539550 podman[313651]: 2025-11-29 08:13:49.645427318 +0000 UTC m=+0.076402461 container remove 622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:13:49 np0005539550 systemd[1]: libpod-conmon-622422c5c2249708fdd3ce1f1ad1be8ee2cbde18c68f36aec4f0438a6aa4cb75.scope: Deactivated successfully.
Nov 29 03:13:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:49.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:50 np0005539550 nova_compute[257631]: 2025-11-29 08:13:50.187 257641 DEBUG oslo_concurrency.processutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/disk.config 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:50 np0005539550 nova_compute[257631]: 2025-11-29 08:13:50.188 257641 INFO nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Deleting local config drive /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a/disk.config because it was imported into RBD.
Nov 29 03:13:50 np0005539550 kernel: tapf890c7dd-6d: entered promiscuous mode
Nov 29 03:13:50 np0005539550 NetworkManager[49039]: <info>  [1764404030.2368] manager: (tapf890c7dd-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Nov 29 03:13:50 np0005539550 nova_compute[257631]: 2025-11-29 08:13:50.237 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:50Z|00348|binding|INFO|Claiming lport f890c7dd-6d37-48c2-83ba-547d7fc32c12 for this chassis.
Nov 29 03:13:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:50Z|00349|binding|INFO|f890c7dd-6d37-48c2-83ba-547d7fc32c12: Claiming fa:16:3e:bd:d1:d1 10.100.0.14
Nov 29 03:13:50 np0005539550 podman[313825]: 2025-11-29 08:13:50.240458271 +0000 UTC m=+0.042072465 container create bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackwell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:13:50 np0005539550 systemd-udevd[313854]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:13:50 np0005539550 nova_compute[257631]: 2025-11-29 08:13:50.285 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:50Z|00350|binding|INFO|Setting lport f890c7dd-6d37-48c2-83ba-547d7fc32c12 ovn-installed in OVS
Nov 29 03:13:50 np0005539550 nova_compute[257631]: 2025-11-29 08:13:50.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:50 np0005539550 systemd[1]: Started libpod-conmon-bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f.scope.
Nov 29 03:13:50 np0005539550 NetworkManager[49039]: <info>  [1764404030.2902] device (tapf890c7dd-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:13:50 np0005539550 NetworkManager[49039]: <info>  [1764404030.2913] device (tapf890c7dd-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:13:50 np0005539550 systemd-machined[216673]: New machine qemu-42-instance-0000005b.
Nov 29 03:13:50 np0005539550 systemd[1]: Started Virtual Machine qemu-42-instance-0000005b.
Nov 29 03:13:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:13:50 np0005539550 podman[313825]: 2025-11-29 08:13:50.220595494 +0000 UTC m=+0.022209718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:50 np0005539550 podman[313825]: 2025-11-29 08:13:50.322111135 +0000 UTC m=+0.123725359 container init bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:13:50 np0005539550 podman[313825]: 2025-11-29 08:13:50.331283959 +0000 UTC m=+0.132898163 container start bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackwell, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:50 np0005539550 podman[313825]: 2025-11-29 08:13:50.335073615 +0000 UTC m=+0.136687819 container attach bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:13:50 np0005539550 magical_blackwell[313857]: 167 167
Nov 29 03:13:50 np0005539550 systemd[1]: libpod-bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f.scope: Deactivated successfully.
Nov 29 03:13:50 np0005539550 podman[313825]: 2025-11-29 08:13:50.337377384 +0000 UTC m=+0.138991588 container died bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:13:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b54c3b38b4c6881b4ed5f24ef6079e9afde087d2e575b678ec4fd9002e71f332-merged.mount: Deactivated successfully.
Nov 29 03:13:50 np0005539550 podman[313825]: 2025-11-29 08:13:50.378128264 +0000 UTC m=+0.179742458 container remove bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_blackwell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:13:50 np0005539550 systemd[1]: libpod-conmon-bf6fdd6b8c711420d743a4007a3f5f7869fda80de031039848af744908aff48f.scope: Deactivated successfully.
Nov 29 03:13:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:50 np0005539550 podman[313919]: 2025-11-29 08:13:50.555524631 +0000 UTC m=+0.045628845 container create 7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wescoff, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:13:50 np0005539550 systemd[1]: Started libpod-conmon-7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8.scope.
Nov 29 03:13:50 np0005539550 nova_compute[257631]: 2025-11-29 08:13:50.602 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404030.6023312, 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:50 np0005539550 nova_compute[257631]: 2025-11-29 08:13:50.604 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:13:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:13:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3816c7ffcdf9b5efba2df7af6dbe166b6ccb53eb658335d55a0f85551dd6103f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3816c7ffcdf9b5efba2df7af6dbe166b6ccb53eb658335d55a0f85551dd6103f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3816c7ffcdf9b5efba2df7af6dbe166b6ccb53eb658335d55a0f85551dd6103f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3816c7ffcdf9b5efba2df7af6dbe166b6ccb53eb658335d55a0f85551dd6103f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:50 np0005539550 podman[313919]: 2025-11-29 08:13:50.539107992 +0000 UTC m=+0.029212226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:13:50 np0005539550 podman[313919]: 2025-11-29 08:13:50.638797456 +0000 UTC m=+0.128901690 container init 7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wescoff, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 29 03:13:50 np0005539550 podman[313919]: 2025-11-29 08:13:50.646629576 +0000 UTC m=+0.136733790 container start 7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wescoff, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:13:50 np0005539550 podman[313919]: 2025-11-29 08:13:50.649946641 +0000 UTC m=+0.140050855 container attach 7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wescoff, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:13:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 406 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 8.4 MiB/s wr, 201 op/s
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]: {
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]:        "osd_id": 0,
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]:        "type": "bluestore"
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]:    }
Nov 29 03:13:51 np0005539550 charming_wescoff[313943]: }
Nov 29 03:13:51 np0005539550 systemd[1]: libpod-7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8.scope: Deactivated successfully.
Nov 29 03:13:51 np0005539550 podman[313919]: 2025-11-29 08:13:51.514480892 +0000 UTC m=+1.004585106 container died 7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wescoff, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:13:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:51Z|00351|binding|INFO|Setting lport f890c7dd-6d37-48c2-83ba-547d7fc32c12 up in Southbound
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.540 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:d1:d1 10.100.0.14'], port_security=['fa:16:3e:bd:d1:d1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '53d2b424-4fdc-47e3-afa1-849bfc2c0b5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f890c7dd-6d37-48c2-83ba-547d7fc32c12) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.541 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f890c7dd-6d37-48c2-83ba-547d7fc32c12 in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 bound to our chassis#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.543 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5ce08321-9ca9-47d5-b99b-65a439440787#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.555 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3e533f-dc56-4c3d-bbd1-3033fd38c4ae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.556 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5ce08321-91 in ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.557 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5ce08321-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.558 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b0f67d-307c-40e0-8799-1c461fefd001]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.558 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5bc82cb1-3065-42fd-9e6c-2c6e5a5392ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.571 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e68195-9fc5-4a03-89b3-48e43ae5a22a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.597 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dde87071-7c06-4011-970d-6ac32931a43d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3816c7ffcdf9b5efba2df7af6dbe166b6ccb53eb658335d55a0f85551dd6103f-merged.mount: Deactivated successfully.
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.630 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3a1b89-0cd2-4366-aa14-615aff10d5b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.638 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f2c95b-e468-4468-a6e3-919c41583573]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 NetworkManager[49039]: <info>  [1764404031.6395] manager: (tap5ce08321-90): new Veth device (/org/freedesktop/NetworkManager/Devices/156)
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.640 257641 DEBUG oslo_concurrency.lockutils [req-147955ff-e295-448a-9ae8-0684e4dee688 req-5c303070-0508-4670-af46-70129b0709a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.670 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9d821033-46cc-4a20-aac3-d81cbf3477ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.672 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.673 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e151f4d9-512a-45eb-af9d-4496354110d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:51.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.676 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404030.6025622, 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.677 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:13:51 np0005539550 NetworkManager[49039]: <info>  [1764404031.6938] device (tap5ce08321-90): carrier: link connected
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.698 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.698 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[53285ac8-eb0b-4985-9f03-324cb4e6c2c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.701 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.715 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c6406a2c-01f1-44c4-b225-0195c2e2e1e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707935, 'reachable_time': 30499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313999, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.726 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.731 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3c83d9-85b7-498d-8ae1-71bf6c1c26b5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:bc0c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707935, 'tstamp': 707935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314000, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.750 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[facc30c2-1748-4fc7-91cc-5e746e8b2fc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5ce08321-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:bc:0c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707935, 'reachable_time': 30499, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314001, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 podman[313919]: 2025-11-29 08:13:51.755190345 +0000 UTC m=+1.245294559 container remove 7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:13:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:51 np0005539550 systemd[1]: libpod-conmon-7d998fbbf3c6ec28f5eb51ba115d016172ddeea32dd4451138220466a2b229b8.scope: Deactivated successfully.
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.787 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7dbfdf-251e-498b-8ae6-5b87e0b789f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.852 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[652fff6d-411b-4f07-9600-373159742b16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.854 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.854 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.854 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ce08321-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.856 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:51 np0005539550 kernel: tap5ce08321-90: entered promiscuous mode
Nov 29 03:13:51 np0005539550 NetworkManager[49039]: <info>  [1764404031.8568] manager: (tap5ce08321-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Nov 29 03:13:51 np0005539550 auditd[706]: Audit daemon rotating log files
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.859 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5ce08321-90, col_values=(('external_ids', {'iface-id': 'fb53c57a-d19f-4391-add7-afa34095fb59'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.860 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:13:51Z|00352|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:13:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:13:51 np0005539550 nova_compute[257631]: 2025-11-29 08:13:51.878 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.880 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.881 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ea4e8554-9085-4b3e-ae5d-60a0b9090735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.882 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/5ce08321-9ca9-47d5-b99b-65a439440787.pid.haproxy
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 5ce08321-9ca9-47d5-b99b-65a439440787
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:13:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:51.882 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'env', 'PROCESS_TAG=haproxy-5ce08321-9ca9-47d5-b99b-65a439440787', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5ce08321-9ca9-47d5-b99b-65a439440787.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:13:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 82411be0-c624-435b-9ac2-536568bffada does not exist
Nov 29 03:13:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b23da3f5-88e5-4450-a2f5-07b92e5c15aa does not exist
Nov 29 03:13:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev de6b18c1-70b7-46af-bcf4-ab56ca08a24d does not exist
Nov 29 03:13:52 np0005539550 nova_compute[257631]: 2025-11-29 08:13:52.112 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:52 np0005539550 nova_compute[257631]: 2025-11-29 08:13:52.183 257641 DEBUG nova.network.neutron [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Successfully updated port: d6293fb0-6295-4539-9b70-544701b49154 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:13:52 np0005539550 podman[314085]: 2025-11-29 08:13:52.258256502 +0000 UTC m=+0.054749708 container create 8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:13:52 np0005539550 systemd[1]: Started libpod-conmon-8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7.scope.
Nov 29 03:13:52 np0005539550 podman[314085]: 2025-11-29 08:13:52.225369213 +0000 UTC m=+0.021862439 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:13:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:13:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67da18e0fa809e92e989aec858dcb48b56ceaf987d21d3ea1c679d6b45c91cab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:13:52 np0005539550 podman[314085]: 2025-11-29 08:13:52.340559352 +0000 UTC m=+0.137052598 container init 8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:13:52 np0005539550 podman[314085]: 2025-11-29 08:13:52.345636182 +0000 UTC m=+0.142129398 container start 8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:13:52 np0005539550 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[314100]: [NOTICE]   (314104) : New worker (314106) forked
Nov 29 03:13:52 np0005539550 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[314100]: [NOTICE]   (314104) : Loading success.
Nov 29 03:13:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:13:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:52.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:13:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 406 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 7.1 MiB/s wr, 158 op/s
Nov 29 03:13:53 np0005539550 nova_compute[257631]: 2025-11-29 08:13:53.503 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "refresh_cache-eebe492a-dfa8-49b7-83b6-6c8447520afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:53 np0005539550 nova_compute[257631]: 2025-11-29 08:13:53.504 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquired lock "refresh_cache-eebe492a-dfa8-49b7-83b6-6c8447520afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:53 np0005539550 nova_compute[257631]: 2025-11-29 08:13:53.504 257641 DEBUG nova.network.neutron [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:13:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:53 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:13:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:53.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:53 np0005539550 nova_compute[257631]: 2025-11-29 08:13:53.846 257641 DEBUG nova.network.neutron [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:13:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:54 np0005539550 nova_compute[257631]: 2025-11-29 08:13:54.627 257641 DEBUG nova.compute.manager [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received event network-changed-d6293fb0-6295-4539-9b70-544701b49154 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:13:54 np0005539550 nova_compute[257631]: 2025-11-29 08:13:54.627 257641 DEBUG nova.compute.manager [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Refreshing instance network info cache due to event network-changed-d6293fb0-6295-4539-9b70-544701b49154. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:13:54 np0005539550 nova_compute[257631]: 2025-11-29 08:13:54.628 257641 DEBUG oslo_concurrency.lockutils [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-eebe492a-dfa8-49b7-83b6-6c8447520afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:13:54 np0005539550 nova_compute[257631]: 2025-11-29 08:13:54.860 257641 DEBUG nova.network.neutron [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Updating instance_info_cache with network_info: [{"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:13:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 430 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 8.4 MiB/s wr, 245 op/s
Nov 29 03:13:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:55.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:13:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:56.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.881 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Releasing lock "refresh_cache-eebe492a-dfa8-49b7-83b6-6c8447520afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.882 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Instance network_info: |[{"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.882 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.884 257641 DEBUG oslo_concurrency.lockutils [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-eebe492a-dfa8-49b7-83b6-6c8447520afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.884 257641 DEBUG nova.network.neutron [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Refreshing network info cache for port d6293fb0-6295-4539-9b70-544701b49154 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.888 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Start _get_guest_xml network_info=[{"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.893 257641 WARNING nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.903 257641 DEBUG nova.virt.libvirt.host [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.904 257641 DEBUG nova.virt.libvirt.host [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.907 257641 DEBUG nova.virt.libvirt.host [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.907 257641 DEBUG nova.virt.libvirt.host [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.909 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.909 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.910 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.910 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.911 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.911 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.911 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.912 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.912 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.912 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.913 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.913 257641 DEBUG nova.virt.hardware [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
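
The nova.virt.hardware lines above trace how the guest CPU topology is chosen: the flavor and image carry no constraints (preferences 0:0:0, limits 65536 each), so for 1 vCPU the only possible layout is sockets=1, cores=1, threads=1. A rough sketch of that enumeration, illustrative rather than Nova's actual implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) whose product equals vcpus,
        # respecting the per-dimension limits logged above.
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    found.append((s, c, t))
        return found

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches "Got 1 possible topologies"
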
Nov 29 03:13:56 np0005539550 nova_compute[257631]: 2025-11-29 08:13:56.917 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.114 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 432 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.4 MiB/s wr, 195 op/s
Nov 29 03:13:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1493600216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.347 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.369 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.372 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:13:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:57.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
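
The beast lines are radosgw's per-request access log; anonymous HEAD / requests like these are typically load-balancer or health-check probes from the peer hosts (192.168.122.100/102 here). If you need to pull fields out of this format, a regular expression fitted to the exact layout shown in these lines works; this is a hypothetical parser, not part of radosgw:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:13:57.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('ip'), m.group('status'), m.group('lat'))
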
Nov 29 03:13:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:13:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745152505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.789 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
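
Nova's rbd_utils shells out to ceph (via oslo.concurrency processutils, as logged at 08:13:56.917 and here) to discover the monitor addresses before building RBD disk definitions; the matching audit entries appear on the ceph-mon side each time the command is dispatched. The equivalent call with only the standard library, assuming the same client id, conf path, and a reachable cluster:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        # Field names per "ceph mon dump" JSON; exact keys can vary by release.
        print(mon.get("name"), mon.get("public_addr") or mon.get("addr"))
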
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.791 257641 DEBUG nova.virt.libvirt.vif [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-225192304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-225192304',id=94,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c89019b6f53547259a833925c95b09c1',ramdisk_id='',reservation_id='r-ly2816ty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-58601818',owner_user_name='tempest-InstanceActionsV221TestJSON-58601818-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:44Z,user_data=None,user_id='5dc4cc7160064e9e82d9d21ebfd05d2f',uuid=eebe492a-dfa8-49b7-83b6-6c8447520afb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.791 257641 DEBUG nova.network.os_vif_util [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Converting VIF {"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.792 257641 DEBUG nova.network.os_vif_util [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:73:49,bridge_name='br-int',has_traffic_filtering=True,id=d6293fb0-6295-4539-9b70-544701b49154,network=Network(3fa162c3-f2e4-4f96-916a-065ce8aa6f3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6293fb0-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:13:57 np0005539550 nova_compute[257631]: 2025-11-29 08:13:57.794 257641 DEBUG nova.objects.instance [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lazy-loading 'pci_devices' on Instance uuid eebe492a-dfa8-49b7-83b6-6c8447520afb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
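
The vif payload logged by get_config and converted by nova_to_osvif_vif is plain dict data; the bridge, tap device name, MAC, and fixed IPs that os-vif needs are all present in it. A short illustrative walk over that structure, with values copied from this port (a trimmed-down dict, not the full payload):

    vif = {
        "id": "d6293fb0-6295-4539-9b70-544701b49154",
        "address": "fa:16:3e:14:73:49",
        "devname": "tapd6293fb0-62",
        "details": {"bridge_name": "br-int", "datapath_type": "system"},
        "network": {"subnets": [{"ips": [{"address": "10.100.0.5"}]}]},
    }

    bridge = vif["details"]["bridge_name"]
    tap = vif["devname"]
    ips = [ip["address"] for s in vif["network"]["subnets"] for ip in s["ips"]]
    print(f"plug {tap} ({vif['address']}) into {bridge}, fixed ips {ips}")
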
Nov 29 03:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:13:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:13:58 np0005539550 nova_compute[257631]: 2025-11-29 08:13:58.420 257641 DEBUG nova.network.neutron [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Updated VIF entry in instance network info cache for port d6293fb0-6295-4539-9b70-544701b49154. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:13:58 np0005539550 nova_compute[257631]: 2025-11-29 08:13:58.421 257641 DEBUG nova.network.neutron [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Updating instance_info_cache with network_info: [{"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:13:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:13:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:13:58.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:13:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 432 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.5 MiB/s wr, 189 op/s
Nov 29 03:13:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:13:59
Nov 29 03:13:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:13:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:13:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'images']
Nov 29 03:13:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.412 257641 DEBUG nova.compute.manager [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Received event network-vif-plugged-f890c7dd-6d37-48c2-83ba-547d7fc32c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.412 257641 DEBUG oslo_concurrency.lockutils [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.412 257641 DEBUG oslo_concurrency.lockutils [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.412 257641 DEBUG oslo_concurrency.lockutils [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.413 257641 DEBUG nova.compute.manager [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Processing event network-vif-plugged-f890c7dd-6d37-48c2-83ba-547d7fc32c12 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.413 257641 DEBUG nova.compute.manager [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Received event network-vif-plugged-f890c7dd-6d37-48c2-83ba-547d7fc32c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.413 257641 DEBUG oslo_concurrency.lockutils [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.413 257641 DEBUG oslo_concurrency.lockutils [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.413 257641 DEBUG oslo_concurrency.lockutils [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.414 257641 DEBUG nova.compute.manager [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] No waiting events found dispatching network-vif-plugged-f890c7dd-6d37-48c2-83ba-547d7fc32c12 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.414 257641 WARNING nova.compute.manager [req-051a6ec6-9e20-4891-b139-1f2fbb9c974e req-d83775c1-2d89-452a-bab8-77ee2f5d4df0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Received unexpected event network-vif-plugged-f890c7dd-6d37-48c2-83ba-547d7fc32c12 for instance with vm_state building and task_state spawning.
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.415 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
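
These lines show both halves of Nova's external-event handshake for instance 53d2b424...: the spawning thread registers a waiter (prepare_for_instance_event), and the Neutron-driven handler pops it (pop_instance_event) under a per-instance "-events" lock. Here a second network-vif-plugged arrives after the waiter has already been satisfied, hence "No waiting events found" followed by the "Received unexpected event" warning, which is harmless while the instance is still spawning. The rendezvous can be sketched with a plain threading.Event; this is a simplification, not Nova's code:

    import threading

    _events = {}              # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()  # stands in for the "<uuid>-events" lock above

    def prepare_for_instance_event(instance, name):
        with _lock:
            return _events.setdefault((instance, name), threading.Event())

    def pop_instance_event(instance, name):
        with _lock:
            ev = _events.pop((instance, name), None)
        if ev is None:
            return False   # -> "No waiting events found" / unexpected-event path
        ev.set()           # -> wakes the thread blocked in wait_for_instance_event
        return True
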
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.423 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404039.422369, 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.424 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] VM Resumed (Lifecycle Event)
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.427 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.433 257641 INFO nova.virt.libvirt.driver [-] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Instance spawned successfully.
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.434 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:13:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:59.449 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.450 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:13:59.451 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.568 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <uuid>eebe492a-dfa8-49b7-83b6-6c8447520afb</uuid>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <name>instance-0000005e</name>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <nova:name>tempest-InstanceActionsV221TestJSON-server-225192304</nova:name>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:13:56</nova:creationTime>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:user uuid="5dc4cc7160064e9e82d9d21ebfd05d2f">tempest-InstanceActionsV221TestJSON-58601818-project-member</nova:user>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:project uuid="c89019b6f53547259a833925c95b09c1">tempest-InstanceActionsV221TestJSON-58601818</nova:project>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <nova:port uuid="d6293fb0-6295-4539-9b70-544701b49154">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <entry name="serial">eebe492a-dfa8-49b7-83b6-6c8447520afb</entry>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <entry name="uuid">eebe492a-dfa8-49b7-83b6-6c8447520afb</entry>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/eebe492a-dfa8-49b7-83b6-6c8447520afb_disk">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:14:73:49"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <target dev="tapd6293fb0-62"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/console.log" append="off"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:13:59 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:13:59 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:13:59 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:13:59 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
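
That is the complete libvirt domain XML generated for instance eebe492a...: a 128 MiB / 1 vCPU q35 guest with a Nehalem CPU model, an RBD-backed virtio root disk, the config drive attached as a SATA CDROM (also on RBD), one OVS-backed tap interface, and 24 pcie-root-port controllers pre-allocated for hotplug. A quick sanity check that pulls the basics back out with the standard library, assuming the XML has been saved to guest.xml (a hypothetical file name):

    import xml.etree.ElementTree as ET

    dom = ET.parse("guest.xml").getroot()
    print(dom.get("type"))         # kvm
    print(dom.findtext("name"))    # instance-0000005e
    print(dom.findtext("memory"))  # 131072 (libvirt's default unit is KiB = 128 MiB)
    for src in dom.findall("./devices/disk/source"):
        print(src.get("protocol"), src.get("name"))
        # rbd vms/eebe492a-dfa8-49b7-83b6-6c8447520afb_disk
        # rbd vms/eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config
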
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.569 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Preparing to wait for external event network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.569 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.569 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.569 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.570 257641 DEBUG nova.virt.libvirt.vif [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-225192304',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-225192304',id=94,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c89019b6f53547259a833925c95b09c1',ramdisk_id='',reservation_id='r-ly2816ty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-58601818',owner_user_name='tempest-InstanceActionsV221TestJSON-58601818-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:13:44Z,user_data=None,user_id='5dc4cc7160064e9e82d9d21ebfd05d2f',uuid=eebe492a-dfa8-49b7-83b6-6c8447520afb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.570 257641 DEBUG nova.network.os_vif_util [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Converting VIF {"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.571 257641 DEBUG nova.network.os_vif_util [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:73:49,bridge_name='br-int',has_traffic_filtering=True,id=d6293fb0-6295-4539-9b70-544701b49154,network=Network(3fa162c3-f2e4-4f96-916a-065ce8aa6f3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6293fb0-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.571 257641 DEBUG os_vif [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:73:49,bridge_name='br-int',has_traffic_filtering=True,id=d6293fb0-6295-4539-9b70-544701b49154,network=Network(3fa162c3-f2e4-4f96-916a-065ce8aa6f3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6293fb0-62') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.571 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.572 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.572 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.575 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.575 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6293fb0-62, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.575 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6293fb0-62, col_values=(('external_ids', {'iface-id': 'd6293fb0-6295-4539-9b70-544701b49154', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:73:49', 'vm-uuid': 'eebe492a-dfa8-49b7-83b6-6c8447520afb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
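
The two ovsdbapp commands above are the IDL equivalent of an ovs-vsctl add-port plus a set on the Interface row; the external_ids (iface-id, attached-mac, vm-uuid) are what ovn-controller matches against the logical switch port to bind the VIF. A hedged command-line equivalent, run through subprocess rather than the ovsdbapp IDL:

    import subprocess

    port = "tapd6293fb0-62"
    iface_id = "d6293fb0-6295-4539-9b70-544701b49154"
    # Mirrors AddPortCommand(may_exist=True) plus DbSetCommand on Interface.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:14:73:49",
         "external_ids:vm-uuid=eebe492a-dfa8-49b7-83b6-6c8447520afb"],
        check=True)
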
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.577 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:59 np0005539550 NetworkManager[49039]: <info>  [1764404039.5783] manager: (tapd6293fb0-62): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.579 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.583 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.584 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.584 257641 INFO os_vif [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:73:49,bridge_name='br-int',has_traffic_filtering=True,id=d6293fb0-6295-4539-9b70-544701b49154,network=Network(3fa162c3-f2e4-4f96-916a-065ce8aa6f3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6293fb0-62')
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.596 257641 DEBUG oslo_concurrency.lockutils [req-7f0fe67d-f25b-425c-973d-b4af6f8ed6d8 req-d76c95b3-466e-48d4-81be-d97ca784d6cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-eebe492a-dfa8-49b7-83b6-6c8447520afb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.621 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.622 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.622 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.623 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.623 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.624 257641 DEBUG nova.virt.libvirt.driver [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:13:59 np0005539550 nova_compute[257631]: 2025-11-29 08:13:59.626 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] During sync_power_state the instance has a pending task (spawning). Skip.
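
The lifecycle handler above compares the power state recorded in the database (0) with what libvirt reports (1) and, because the task_state is still spawning, skips the sync and leaves it to the normal spawn path. Nova encodes these as small integers; the mapping below reflects nova/compute/power_state.py and is shown for reference only (values may differ in other releases):

    # Integer power states as defined in nova/compute/power_state.py.
    POWER_STATES = {
        0: "NOSTATE",    # DB power_state above: nothing recorded yet
        1: "RUNNING",    # VM power_state reported for the freshly spawned guest
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[0], "->", POWER_STATES[1])   # NOSTATE -> RUNNING
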
Nov 29 03:13:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:13:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:13:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:13:59.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:00.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:00 np0005539550 podman[314236]: 2025-11-29 08:14:00.564447151 +0000 UTC m=+0.091694961 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
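
The podman line is a periodic healthcheck event for the ovn_controller container (health_status=healthy, failing streak 0). The current status can be read back with podman's Go-template inspect; note the template key differs between podman releases (.State.Health.Status on newer versions, .State.Healthcheck.Status on older ones), so treat this as a sketch:

    import subprocess

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_controller"],
        check=True, capture_output=True, text=True).stdout.strip()
    print(status)   # expected: healthy
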
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.567 257641 DEBUG oslo_concurrency.lockutils [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.568 257641 DEBUG oslo_concurrency.lockutils [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.568 257641 DEBUG nova.compute.manager [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.572 257641 DEBUG nova.compute.manager [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.573 257641 DEBUG nova.objects.instance [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
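
The Acquiring/acquired pair above (and the matching "released" later) is oslo.concurrency's named-lock tracing: nova serializes state-changing operations on an instance behind a lock keyed by the instance UUID, so this stop cannot race a concurrent reboot or delete. A minimal sketch of the same pattern, with the UUID from the log and a hypothetical stand-in for the real work:

    from oslo_concurrency import lockutils

    # Everything in the block is serialized against any other holder of
    # the same named lock, which is what do_stop_instance relies on.
    with lockutils.lock('19e85fae-c57e-409b-95f7-b53ddb4c928e'):
        stop_the_instance()  # hypothetical placeholder for the stop logic
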
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.599 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.600 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.600 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] No VIF found with MAC fa:16:3e:14:73:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.600 257641 INFO nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Using config drive#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.626 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.664 257641 DEBUG nova.virt.libvirt.driver [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.690 257641 INFO nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Took 22.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.691 257641 DEBUG nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.845 257641 INFO nova.compute.manager [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Took 23.57 seconds to build instance.#033[00m
Nov 29 03:14:00 np0005539550 nova_compute[257631]: 2025-11-29 08:14:00.880 257641 DEBUG oslo_concurrency.lockutils [None req-4c0a4947-a6af-4044-bbe8-2abd526b7cbf 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.166 257641 INFO nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Creating config drive at /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/disk.config#033[00m
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.175 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpviqhfgny execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 432 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 209 op/s
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.315 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpviqhfgny" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
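
The "Running cmd"/"returned: 0" pair above brackets a single processutils.execute() call: nova builds the config drive by shelling out to mkisofs against a staging directory under /tmp. A hedged sketch of the equivalent call (arguments copied from the log, publisher flag omitted and output path shortened for readability; mkisofs/genisoimage must be installed):

    from oslo_concurrency import processutils

    # Build an ISO9660 config drive labelled "config-2"; a non-zero exit
    # status raises ProcessExecutionError instead of returning.
    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpviqhfgny')
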
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.346 257641 DEBUG nova.storage.rbd_utils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] rbd image eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.350 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/disk.config eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:01.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.823 257641 DEBUG oslo_concurrency.processutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/disk.config eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.825 257641 INFO nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Deleting local config drive /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb/disk.config because it was imported into RBD.#033[00m
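
With the ISO built, nova imports it into the Ceph "vms" pool as <uuid>_disk.config and deletes the local copy, since RBD is now the authoritative store; that is the import/delete pair logged above. A sketch of the same two steps (command copied from the log; assumes the "openstack" cephx user and /etc/ceph/ceph.conf are in place):

    import os
    import subprocess

    src = ('/var/lib/nova/instances/'
           'eebe492a-dfa8-49b7-83b6-6c8447520afb/disk.config')
    # Import the ISO as a format-2 RBD image, then drop the local file.
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', src,
         'eebe492a-dfa8-49b7-83b6-6c8447520afb_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.remove(src)
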
Nov 29 03:14:01 np0005539550 kernel: tapd6293fb0-62: entered promiscuous mode
Nov 29 03:14:01 np0005539550 NetworkManager[49039]: <info>  [1764404041.8793] manager: (tapd6293fb0-62): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Nov 29 03:14:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:01Z|00353|binding|INFO|Claiming lport d6293fb0-6295-4539-9b70-544701b49154 for this chassis.
Nov 29 03:14:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:01Z|00354|binding|INFO|d6293fb0-6295-4539-9b70-544701b49154: Claiming fa:16:3e:14:73:49 10.100.0.5
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.889 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.913 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:73:49 10.100.0.5'], port_security=['fa:16:3e:14:73:49 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'eebe492a-dfa8-49b7-83b6-6c8447520afb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c89019b6f53547259a833925c95b09c1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9793a23-634a-4abd-b678-ad17e2636356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70e5123c-3fbb-458b-a489-845a97846bc3, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d6293fb0-6295-4539-9b70-544701b49154) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.915 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d6293fb0-6295-4539-9b70-544701b49154 in datapath 3fa162c3-f2e4-4f96-916a-065ce8aa6f3a bound to our chassis#033[00m
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.917 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3fa162c3-f2e4-4f96-916a-065ce8aa6f3a#033[00m
Nov 29 03:14:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:01Z|00355|binding|INFO|Setting lport d6293fb0-6295-4539-9b70-544701b49154 up in Southbound
Nov 29 03:14:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:01Z|00356|binding|INFO|Setting lport d6293fb0-6295-4539-9b70-544701b49154 ovn-installed in OVS
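
Messages 00353-00356 are the standard claim sequence: ovn-controller writes its own chassis into the Port_Binding row for the lport, flips the port up in the southbound DB, and marks the OVS interface ovn-installed. As a hedged sketch, the resulting row can be inspected like this (ovn-sbctl must be pointed at this deployment's southbound database, which is TLS-protected here, so --db and certificate options may also be needed):

    import subprocess

    # The chassis column of this Port_Binding row should now reference
    # this host's chassis record.
    subprocess.run(
        ['ovn-sbctl', 'list', 'Port_Binding',
         'd6293fb0-6295-4539-9b70-544701b49154'],
        check=True)
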
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.918 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539550 nova_compute[257631]: 2025-11-29 08:14:01.921 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:01 np0005539550 systemd-machined[216673]: New machine qemu-43-instance-0000005e.
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.928 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c845b922-2de3-49ce-b0d5-f8041456ed1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.929 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3fa162c3-f1 in ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
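
Provisioning the datapath gives the ovnmeta-<network-id> namespace one end of a veth pair, while the other end is plugged into br-int a few lines below (the AddPortCommand on tap3fa162c3-f0). Neutron performs this through pyroute2 under privsep; a rough iproute2 equivalent, with the names taken from the log:

    import subprocess

    ns = 'ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a'
    # Create the namespace, then a veth pair whose -f1 end is moved
    # straight into it; the -f0 end stays in the root namespace for OVS.
    subprocess.run(['ip', 'netns', 'add', ns], check=True)
    subprocess.run(['ip', 'link', 'add', 'tap3fa162c3-f0', 'type', 'veth',
                    'peer', 'name', 'tap3fa162c3-f1', 'netns', ns],
                   check=True)
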
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.930 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3fa162c3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.931 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd39445-345d-4a33-b159-dcd5558323ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.931 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b629eca4-de3f-4d36-a6c1-899214d6103c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539550 systemd[1]: Started Virtual Machine qemu-43-instance-0000005e.
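
libvirt registers every qemu guest with systemd-machined, which is why the domain appears as a machine and its processes run in a dedicated scope unit (the machine-qemu...scope lines further down show the counterpart teardown for another guest). The registration above can be confirmed with machined's CLI; a small sketch:

    import subprocess

    # "machinectl status" prints the scope unit, leader PID and uptime
    # for the registered machine.
    subprocess.run(['machinectl', 'status', 'qemu-43-instance-0000005e'],
                   check=True)
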
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.941 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[73ddbd7c-019e-48df-b732-5a8341e96cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539550 systemd-udevd[314335]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:01 np0005539550 NetworkManager[49039]: <info>  [1764404041.9606] device (tapd6293fb0-62): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:01 np0005539550 NetworkManager[49039]: <info>  [1764404041.9616] device (tapd6293fb0-62): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.967 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a3fdeb7b-86fd-49eb-bd2f-3fdd3f90eecc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:01.995 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[924c7c5b-9bd9-4f92-a285-9b7dca4f632a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.002 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1719010a-7d52-420f-bb32-8caba86f56d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 NetworkManager[49039]: <info>  [1764404042.0026] manager: (tap3fa162c3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.036 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9a302546-7d4d-419e-92e6-e91c1d593e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.039 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[73b8a36a-f1cb-4f1b-ad21-3b1e290994f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 NetworkManager[49039]: <info>  [1764404042.0746] device (tap3fa162c3-f0): carrier: link connected
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.085 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[77127803-9d6e-4b21-b9c5-2c6827f05eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.105 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0faed45c-5b96-438d-8131-acc309c119f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3fa162c3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:fc:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708973, 'reachable_time': 39244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314367, 'error': None, 'target': 'ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.131 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[361c6403-cb57-46b5-aa20-1f99ab9da89e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:fc8e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 708973, 'tstamp': 708973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314368, 'error': None, 'target': 'ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.148 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ce2758-a927-4efc-910b-d13e4164be6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3fa162c3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:fc:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708973, 'reachable_time': 39244, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314369, 'error': None, 'target': 'ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.197 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e4e5dc-2bd2-46dd-8267-d4c88accb8c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.257 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfccfe8-c2ae-4baf-9a7f-19a3c610e109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.259 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fa162c3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.260 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.260 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fa162c3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.262 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539550 NetworkManager[49039]: <info>  [1764404042.2634] manager: (tap3fa162c3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Nov 29 03:14:02 np0005539550 kernel: tap3fa162c3-f0: entered promiscuous mode
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.265 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.268 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3fa162c3-f0, col_values=(('external_ids', {'iface-id': '67d04c42-e306-4bc2-a55b-32fd964645de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:02Z|00357|binding|INFO|Releasing lport 67d04c42-e306-4bc2-a55b-32fd964645de from this chassis (sb_readonly=0)
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.271 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.272 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3fa162c3-f2e4-4f96-916a-065ce8aa6f3a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3fa162c3-f2e4-4f96-916a-065ce8aa6f3a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.275 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[202c941a-66d6-4fce-a0ed-853be0b8e880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.276 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/3fa162c3-f2e4-4f96-916a-065ce8aa6f3a.pid.haproxy
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 3fa162c3-f2e4-4f96-916a-065ce8aa6f3a
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:14:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:02.277 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'env', 'PROCESS_TAG=haproxy-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3fa162c3-f2e4-4f96-916a-065ce8aa6f3a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
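
The generated configuration above makes haproxy listen on 169.254.169.254:80 inside the namespace, stamp requests with X-OVN-Network-ID, and forward them to the metadata agent's socket at /var/lib/neutron/metadata_proxy (haproxy treats an absolute path in a server line as a unix socket address). From inside a guest on that network the proxy can be exercised directly; a minimal check, run in the instance rather than on the host:

    import urllib.request

    # Served by the haproxy spawned above; nova's metadata API produces
    # the JSON at the far end of the chain.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())
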
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.287 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.510 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404042.5094557, eebe492a-dfa8-49b7-83b6-6c8447520afb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.510 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] VM Started (Lifecycle Event)#033[00m
Nov 29 03:14:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:02.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.592 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.596 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404042.5097032, eebe492a-dfa8-49b7-83b6-6c8447520afb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.596 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.657 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.660 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
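
The numeric states in the sync message above are nova.compute.power_state values: the DB still says 0 (NOSTATE) while libvirt reports 3 (PAUSED), because the guest is created paused during spawn and resumed moments later (the Resumed lifecycle event below). For reference:

    # Constants from nova/compute/power_state.py, as referenced by the
    # "current DB power_state: 0, VM power_state: 3" line above.
    NOSTATE = 0
    RUNNING = 1
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7
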
Nov 29 03:14:02 np0005539550 podman[314443]: 2025-11-29 08:14:02.690004991 +0000 UTC m=+0.053784223 container create 18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:14:02 np0005539550 systemd[1]: Started libpod-conmon-18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6.scope.
Nov 29 03:14:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:14:02 np0005539550 podman[314443]: 2025-11-29 08:14:02.663134856 +0000 UTC m=+0.026914108 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:14:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de863b7763249db288334b6a3288d2961fcd6d4cd4fcf436f377004c70db02e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:02 np0005539550 podman[314443]: 2025-11-29 08:14:02.784907313 +0000 UTC m=+0.148686575 container init 18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:14:02 np0005539550 podman[314443]: 2025-11-29 08:14:02.790714821 +0000 UTC m=+0.154494053 container start 18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:14:02 np0005539550 neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a[314459]: [NOTICE]   (314463) : New worker (314465) forked
Nov 29 03:14:02 np0005539550 neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a[314459]: [NOTICE]   (314463) : Loading success.
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.896 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.994 257641 DEBUG nova.compute.manager [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received event network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.994 257641 DEBUG oslo_concurrency.lockutils [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.994 257641 DEBUG oslo_concurrency.lockutils [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.995 257641 DEBUG oslo_concurrency.lockutils [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.995 257641 DEBUG nova.compute.manager [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Processing event network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.995 257641 DEBUG nova.compute.manager [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received event network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.995 257641 DEBUG oslo_concurrency.lockutils [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.995 257641 DEBUG oslo_concurrency.lockutils [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.996 257641 DEBUG oslo_concurrency.lockutils [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.996 257641 DEBUG nova.compute.manager [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] No waiting events found dispatching network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.996 257641 WARNING nova.compute.manager [req-117a02bf-4d46-4446-a2fc-99a9276f4fd6 req-add0d380-b5dd-4b3b-b83a-704a346757dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received unexpected event network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:14:02 np0005539550 nova_compute[257631]: 2025-11-29 08:14:02.997 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
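
The WARNING above is benign during spawn: neutron delivered network-vif-plugged before (and again as) the waiter registered, so one copy found no waiting event while the spawn thread's own wait still completed in 0 seconds. Schematically the handshake is just an event that the external notification sets and the spawn path waits on with a deadline; a simplified sketch, not nova's exact API surface:

    import threading

    vif_plugged = threading.Event()

    def external_instance_event(event_name):
        # Invoked when neutron posts os-server-external-events to nova.
        if event_name.startswith('network-vif-plugged'):
            vif_plugged.set()

    # Spawn path: plug the VIF, then wait for neutron's acknowledgement;
    # nova's vif_plugging_timeout defaults to 300 seconds.
    if not vif_plugged.wait(timeout=300):
        raise TimeoutError('network-vif-plugged never arrived')
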
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.000 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404043.0006363, eebe492a-dfa8-49b7-83b6-6c8447520afb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.001 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.003 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.007 257641 INFO nova.virt.libvirt.driver [-] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Instance spawned successfully.#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.008 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.064 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.068 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:03 np0005539550 kernel: tapd8b38a34-82 (unregistering): left promiscuous mode
Nov 29 03:14:03 np0005539550 NetworkManager[49039]: <info>  [1764404043.1518] device (tapd8b38a34-82): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.157 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:14:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:03Z|00358|binding|INFO|Releasing lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 from this chassis (sb_readonly=0)
Nov 29 03:14:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:03Z|00359|binding|INFO|Setting lport d8b38a34-8274-43e4-8ebd-3924de5c5ba7 down in Southbound
Nov 29 03:14:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:03Z|00360|binding|INFO|Removing iface tapd8b38a34-82 ovn-installed in OVS
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.236 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.238 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.242 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.242 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.243 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.243 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.244 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.244 257641 DEBUG nova.virt.libvirt.driver [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:14:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:03.250 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:2f:2f 10.100.0.6'], port_security=['fa:16:3e:de:2f:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '19e85fae-c57e-409b-95f7-b53ddb4c928e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d8b38a34-8274-43e4-8ebd-3924de5c5ba7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:03.251 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.253 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:03.253 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:03.254 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b6607278-15e2-4e3c-a6ea-f6ee87f9d42d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:03.255 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace which is not needed anymore#033[00m
Nov 29 03:14:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 432 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.3 MiB/s wr, 168 op/s
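
The pgmap debug lines recur throughout this log; a small hedged helper (my own, not Ceph tooling) to pull the headline figures out of one:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail")

    line = ("pgmap v2116: 305 pgs: 305 active+clean; 432 MiB data, "
            "928 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, "
            "1.3 MiB/s wr, 168 op/s")
    m = PGMAP_RE.search(line)
    print(m.group('ver'), m.group('pgs'), m.group('used'), m.group('avail'))
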
Nov 29 03:14:03 np0005539550 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000055.scope: Deactivated successfully.
Nov 29 03:14:03 np0005539550 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000055.scope: Consumed 17.747s CPU time.
Nov 29 03:14:03 np0005539550 systemd-machined[216673]: Machine qemu-41-instance-00000055 terminated.
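
The scope names above are systemd-escaped: "-" in the machine name becomes "\x2d" in the unit name, which is why qemu-41-instance-00000055 shows up as machine-qemu\x2d41\x2dinstance\x2d00000055.scope. A quick Python sketch of the unescaping:

    import re

    def systemd_unescape(name: str) -> str:
        # Turn \xNN sequences back into their characters.
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r'machine-qemu\x2d41\x2dinstance\x2d00000055.scope'))
    # machine-qemu-41-instance-00000055.scope
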
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.323 257641 INFO nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Took 19.25 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.324 257641 DEBUG nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.386 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.402 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.406 257641 INFO nova.compute.manager [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Took 21.52 seconds to build instance.#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.428 257641 DEBUG oslo_concurrency.lockutils [None req-8e39f682-d08e-4d0d-b76c-d4e408223979 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
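
The acquire/release pairs in these logs come from oslo.concurrency: the build path serializes on a lock named after the instance UUID, and the log reports how long it was held (21.700s here, dominated by the 19.25s spawn). A minimal sketch of the pattern, with an illustrative body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('eebe492a-dfa8-49b7-83b6-6c8447520afb')
    def _locked_do_build_and_run_instance():
        ...  # build, plug VIFs, spawn on the hypervisor
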
Nov 29 03:14:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:03.452 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
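
The DbSetCommand above bumps the agent's marker in Chassis_Private so OVN can tell the metadata agent has processed southbound config 29. Roughly the same call through ovsdbapp's db_set, assuming sb_idl is an already-connected southbound API handle (the handle name is mine; the arguments mirror the logged command):

    sb_idl.db_set(
        'Chassis_Private',
        'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),
        if_exists=True,
    ).execute(check_error=True)
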
Nov 29 03:14:03 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[310728]: [NOTICE]   (310732) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:03 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[310728]: [NOTICE]   (310732) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:03 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[310728]: [WARNING]  (310732) : Exiting Master process...
Nov 29 03:14:03 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[310728]: [ALERT]    (310732) : Current worker (310734) exited with code 143 (Terminated)
Nov 29 03:14:03 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[310728]: [WARNING]  (310732) : All workers exited. Exiting... (0)
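
Exit code 143 is not an haproxy failure: it is the conventional 128 + signal number for a process killed by SIGTERM (15), which is exactly what the container stop delivers.

    import signal

    # 128 + SIGTERM(15) == 143, the worker exit code logged above.
    assert 128 + signal.SIGTERM == 143
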
Nov 29 03:14:03 np0005539550 systemd[1]: libpod-31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348.scope: Deactivated successfully.
Nov 29 03:14:03 np0005539550 conmon[310728]: conmon 31d28b78fbc7b5b71649 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348.scope/container/memory.events
Nov 29 03:14:03 np0005539550 podman[314494]: 2025-11-29 08:14:03.563791779 +0000 UTC m=+0.200349164 container died 31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:14:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e72b55267421d17145402cebf9fd790f9c93851e89c2f54b0219e156e43a0eb6-merged.mount: Deactivated successfully.
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.689 257641 INFO nova.virt.libvirt.driver [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:14:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.694 257641 INFO nova.virt.libvirt.driver [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance destroyed successfully.#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.694 257641 DEBUG nova.objects.instance [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:03.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
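
The anonymous "HEAD / HTTP/1.0" requests that repeat through this log look like load-balancer health probes hitting radosgw's beast frontend; the 192.168.122.x addresses are the probing peers. A hedged reproduction of such a probe; the target host and port are assumptions, since only the clients appear in the log:

    import http.client

    conn = http.client.HTTPConnection('np0005539550', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200, as logged
    conn.close()
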
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.720 257641 DEBUG nova.compute.manager [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:03 np0005539550 nova_compute[257631]: 2025-11-29 08:14:03.815 257641 DEBUG oslo_concurrency.lockutils [None req-cb93c52f-5d20-429e-b5ca-567d91389e61 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:04 np0005539550 podman[314494]: 2025-11-29 08:14:04.156071343 +0000 UTC m=+0.792628738 container cleanup 31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:14:04 np0005539550 systemd[1]: libpod-conmon-31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348.scope: Deactivated successfully.
Nov 29 03:14:04 np0005539550 podman[314533]: 2025-11-29 08:14:04.494734394 +0000 UTC m=+0.314002853 container remove 31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.503 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e99ca07f-476e-4468-b2df-8b167377d7df]: (4, ('Sat Nov 29 08:14:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348)\n31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348\nSat Nov 29 08:14:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348)\n31d28b78fbc7b5b716494c444855dcc8643da6c9d6bbcc327157d3e43b3d1348\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.505 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a1517e70-1fc1-4322-bdc3-6a16c09060d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.506 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
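
Namespace teardown ends by removing the metadata tap from the local OVS database; the DelPortCommand above maps to ovsdbapp's del_port. A sketch, where "ovs" stands for an assumed connected ovsdbapp OVS API handle:

    # bridge=None lets ovsdbapp locate the bridge that owns the port,
    # matching the command logged above.
    ovs.del_port('tap2b704d3a-d0', bridge=None, if_exists=True).execute(
        check_error=True)
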
Nov 29 03:14:04 np0005539550 nova_compute[257631]: 2025-11-29 08:14:04.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:04 np0005539550 kernel: tap2b704d3a-d0: left promiscuous mode
Nov 29 03:14:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:04.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:04 np0005539550 nova_compute[257631]: 2025-11-29 08:14:04.528 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.532 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9755b0-e3ca-4673-80b5-8e2c26cd96b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.549 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fd3ce0-2644-4952-89ba-356278061e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.550 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[68eabda9-2bda-473b-9700-740bdd2d9020]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.566 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e46f242a-3e06-4846-88f3-fd48dba7a63f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699157, 'reachable_time': 35673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314549, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:04 np0005539550 systemd[1]: run-netns-ovnmeta\x2d2b704d3a\x2dd3e4\x2d47ce\x2d8a28\x2d10a6f4e6fd06.mount: Deactivated successfully.
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.571 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:04.571 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[1d055f1b-36af-4cc7-a116-deded49b56eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
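
Neutron's privileged ip_lib performs the final namespace removal through pyroute2; the equivalent standalone call (run as root) is roughly:

    from pyroute2 import netns

    # Removes the named network namespace, as remove_netns logged above.
    netns.remove('ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06')
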
Nov 29 03:14:04 np0005539550 nova_compute[257631]: 2025-11-29 08:14:04.577 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 454 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.9 MiB/s wr, 284 op/s
Nov 29 03:14:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:05.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.811 257641 DEBUG nova.compute.manager [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-vif-unplugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.811 257641 DEBUG oslo_concurrency.lockutils [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.812 257641 DEBUG oslo_concurrency.lockutils [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.812 257641 DEBUG oslo_concurrency.lockutils [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.812 257641 DEBUG nova.compute.manager [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] No waiting events found dispatching network-vif-unplugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.812 257641 WARNING nova.compute.manager [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received unexpected event network-vif-unplugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for instance with vm_state stopped and task_state None.#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.812 257641 DEBUG nova.compute.manager [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.813 257641 DEBUG oslo_concurrency.lockutils [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.813 257641 DEBUG oslo_concurrency.lockutils [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.813 257641 DEBUG oslo_concurrency.lockutils [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.813 257641 DEBUG nova.compute.manager [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] No waiting events found dispatching network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:05 np0005539550 nova_compute[257631]: 2025-11-29 08:14:05.813 257641 WARNING nova.compute.manager [req-0b3502e8-928d-4a2c-b20d-041ec9f7b608 req-c72d791c-dad8-4db9-a3ea-e6412f91bdcf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received unexpected event network-vif-plugged-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for instance with vm_state stopped and task_state None.#033[00m
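
The two WARNINGs above are benign: Neutron delivered vif-unplugged/plugged events for an instance that had already stopped, so no task had registered a waiter and the events are dropped after a warning. A toy sketch of that pop-or-warn registry (illustrative, not Nova's code):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            # A task expecting the event registers a waiter first...
            ev = threading.Event()
            self._waiters[(uuid, name)] = ev
            return ev

        def pop_instance_event(self, uuid, name):
            # ...and an incoming external event pops and signals it. With no
            # waiter registered, the caller logs the "unexpected" warning.
            ev = self._waiters.pop((uuid, name), None)
            if ev is not None:
                ev.set()
            return ev
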
Nov 29 03:14:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:06.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:06 np0005539550 nova_compute[257631]: 2025-11-29 08:14:06.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 465 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.2 MiB/s wr, 292 op/s
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.295 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.296 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.296 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.297 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.297 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.298 257641 INFO nova.compute.manager [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Terminating instance#033[00m
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.299 257641 DEBUG nova.compute.manager [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:07.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:07 np0005539550 kernel: tapd6293fb0-62 (unregistering): left promiscuous mode
Nov 29 03:14:07 np0005539550 NetworkManager[49039]: <info>  [1764404047.9405] device (tapd6293fb0-62): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:07Z|00361|binding|INFO|Releasing lport d6293fb0-6295-4539-9b70-544701b49154 from this chassis (sb_readonly=0)
Nov 29 03:14:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:07Z|00362|binding|INFO|Setting lport d6293fb0-6295-4539-9b70-544701b49154 down in Southbound
Nov 29 03:14:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:07Z|00363|binding|INFO|Removing iface tapd6293fb0-62 ovn-installed in OVS
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539550 nova_compute[257631]: 2025-11-29 08:14:07.968 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:07.971 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:73:49 10.100.0.5'], port_security=['fa:16:3e:14:73:49 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'eebe492a-dfa8-49b7-83b6-6c8447520afb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c89019b6f53547259a833925c95b09c1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9793a23-634a-4abd-b678-ad17e2636356', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70e5123c-3fbb-458b-a489-845a97846bc3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d6293fb0-6295-4539-9b70-544701b49154) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:07.972 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d6293fb0-6295-4539-9b70-544701b49154 in datapath 3fa162c3-f2e4-4f96-916a-065ce8aa6f3a unbound from our chassis#033[00m
Nov 29 03:14:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:07.974 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3fa162c3-f2e4-4f96-916a-065ce8aa6f3a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:07.975 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3f5523de-34f8-4762-917a-5e5e26a92897]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:07.976 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a namespace which is not needed anymore#033[00m
Nov 29 03:14:07 np0005539550 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Nov 29 03:14:07 np0005539550 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000005e.scope: Consumed 4.954s CPU time.
Nov 29 03:14:08 np0005539550 systemd-machined[216673]: Machine qemu-43-instance-0000005e terminated.
Nov 29 03:14:08 np0005539550 neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a[314459]: [NOTICE]   (314463) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:08 np0005539550 neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a[314459]: [NOTICE]   (314463) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:08 np0005539550 neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a[314459]: [ALERT]    (314463) : Current worker (314465) exited with code 143 (Terminated)
Nov 29 03:14:08 np0005539550 neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a[314459]: [WARNING]  (314463) : All workers exited. Exiting... (0)
Nov 29 03:14:08 np0005539550 systemd[1]: libpod-18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6.scope: Deactivated successfully.
Nov 29 03:14:08 np0005539550 conmon[314459]: conmon 18fcad769e8de87b0134 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6.scope/container/memory.events
Nov 29 03:14:08 np0005539550 podman[314578]: 2025-11-29 08:14:08.105001791 +0000 UTC m=+0.046238661 container died 18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.123 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.130 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.135 257641 INFO nova.virt.libvirt.driver [-] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Instance destroyed successfully.#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.136 257641 DEBUG nova.objects.instance [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lazy-loading 'resources' on Instance uuid eebe492a-dfa8-49b7-83b6-6c8447520afb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.152 257641 DEBUG nova.virt.libvirt.vif [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-225192304',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-225192304',id=94,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c89019b6f53547259a833925c95b09c1',ramdisk_id='',reservation_id='r-ly2816ty',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsV221TestJSON-58601818',owner_user_name='tempest-InstanceActionsV221TestJSON-58601818-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:03Z,user_data=None,user_id='5dc4cc7160064e9e82d9d21ebfd05d2f',uuid=eebe492a-dfa8-49b7-83b6-6c8447520afb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.153 257641 DEBUG nova.network.os_vif_util [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Converting VIF {"id": "d6293fb0-6295-4539-9b70-544701b49154", "address": "fa:16:3e:14:73:49", "network": {"id": "3fa162c3-f2e4-4f96-916a-065ce8aa6f3a", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-907363348-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c89019b6f53547259a833925c95b09c1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6293fb0-62", "ovs_interfaceid": "d6293fb0-6295-4539-9b70-544701b49154", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.154 257641 DEBUG nova.network.os_vif_util [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:73:49,bridge_name='br-int',has_traffic_filtering=True,id=d6293fb0-6295-4539-9b70-544701b49154,network=Network(3fa162c3-f2e4-4f96-916a-065ce8aa6f3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6293fb0-62') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.154 257641 DEBUG os_vif [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:73:49,bridge_name='br-int',has_traffic_filtering=True,id=d6293fb0-6295-4539-9b70-544701b49154,network=Network(3fa162c3-f2e4-4f96-916a-065ce8aa6f3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6293fb0-62') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.156 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.157 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6293fb0-62, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.158 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.160 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.163 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.165 257641 INFO os_vif [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:73:49,bridge_name='br-int',has_traffic_filtering=True,id=d6293fb0-6295-4539-9b70-544701b49154,network=Network(3fa162c3-f2e4-4f96-916a-065ce8aa6f3a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd6293fb0-62')#033[00m
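
The unplug sequence above (Nova VIF dict -> os_vif object -> OVS port delete) is driven through the os_vif library. A hedged sketch of the call site; constructing the full VIFOpenVSwitch object is elided, so "vif" below stands for the object printed in the log:

    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()  # load the os-vif plugins
    info = instance_info.InstanceInfo(
        uuid='eebe492a-dfa8-49b7-83b6-6c8447520afb',
        name='tempest-InstanceActionsV221TestJSON-server-225192304')
    os_vif.unplug(vif, info)  # vif: the VIFOpenVSwitch shown above
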
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:14:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-de863b7763249db288334b6a3288d2961fcd6d4cd4fcf436f377004c70db02e5-merged.mount: Deactivated successfully.
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:14:08 np0005539550 podman[314578]: 2025-11-29 08:14:08.225534207 +0000 UTC m=+0.166771077 container cleanup 18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:14:08 np0005539550 systemd[1]: libpod-conmon-18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6.scope: Deactivated successfully.
Nov 29 03:14:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:08.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:08 np0005539550 podman[314639]: 2025-11-29 08:14:08.710511143 +0000 UTC m=+0.461764714 container remove 18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.716 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[232820ff-26a0-4cd7-b8d2-f55060d86ea8]: (4, ('Sat Nov 29 08:14:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a (18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6)\n18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6\nSat Nov 29 08:14:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a (18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6)\n18fcad769e8de87b01345725b9ad587f18f11f99628877947d7a62758dc11de6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.719 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9033130-436d-405a-969e-9b1481108dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.720 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fa162c3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.722 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539550 kernel: tap3fa162c3-f0: left promiscuous mode
Nov 29 03:14:08 np0005539550 nova_compute[257631]: 2025-11-29 08:14:08.745 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.747 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[28a66bff-f1df-406a-802a-b562a07a6695]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.758 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb97c2a-77c3-4269-9be4-666458918520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.760 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[202905d3-985a-4ba4-b76b-686bafd08453]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.777 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[109ea512-b309-4846-b12d-29d81700856e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 708965, 'reachable_time': 15366, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314657, 'error': None, 'target': 'ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.780 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3fa162c3-f2e4-4f96-916a-065ce8aa6f3a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:08.780 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b5ce5f4c-e7f4-414d-a3c8-e82c3ed3e512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:08 np0005539550 systemd[1]: run-netns-ovnmeta\x2d3fa162c3\x2df2e4\x2d4f96\x2d916a\x2d065ce8aa6f3a.mount: Deactivated successfully.
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:14:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:14:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 465 MiB data, 954 MiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 2.2 MiB/s wr, 354 op/s
Nov 29 03:14:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:09.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
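The anonymous "HEAD / HTTP/1.0" requests recurring roughly once per second from 192.168.122.100 and .102 have the shape of load-balancer health probes against radosgw's beast frontend. A hypothetical equivalent probe in Python (the port is an assumption; the log does not show it):

    import http.client

    # Hypothetical probe; 8080 is assumed, not taken from the log.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 means the gateway is serving
    conn.close()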
Nov 29 03:14:09 np0005539550 nova_compute[257631]: 2025-11-29 08:14:09.867 257641 DEBUG nova.compute.manager [req-58356878-b337-4a18-8234-354a36d4180e req-a5d5531f-c7f8-49d5-aa33-6284c4eeaf25 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received event network-vif-unplugged-d6293fb0-6295-4539-9b70-544701b49154 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:09 np0005539550 nova_compute[257631]: 2025-11-29 08:14:09.869 257641 DEBUG oslo_concurrency.lockutils [req-58356878-b337-4a18-8234-354a36d4180e req-a5d5531f-c7f8-49d5-aa33-6284c4eeaf25 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:09 np0005539550 nova_compute[257631]: 2025-11-29 08:14:09.869 257641 DEBUG oslo_concurrency.lockutils [req-58356878-b337-4a18-8234-354a36d4180e req-a5d5531f-c7f8-49d5-aa33-6284c4eeaf25 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:09 np0005539550 nova_compute[257631]: 2025-11-29 08:14:09.870 257641 DEBUG oslo_concurrency.lockutils [req-58356878-b337-4a18-8234-354a36d4180e req-a5d5531f-c7f8-49d5-aa33-6284c4eeaf25 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:09 np0005539550 nova_compute[257631]: 2025-11-29 08:14:09.870 257641 DEBUG nova.compute.manager [req-58356878-b337-4a18-8234-354a36d4180e req-a5d5531f-c7f8-49d5-aa33-6284c4eeaf25 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] No waiting events found dispatching network-vif-unplugged-d6293fb0-6295-4539-9b70-544701b49154 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:09 np0005539550 nova_compute[257631]: 2025-11-29 08:14:09.870 257641 DEBUG nova.compute.manager [req-58356878-b337-4a18-8234-354a36d4180e req-a5d5531f-c7f8-49d5-aa33-6284c4eeaf25 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received event network-vif-unplugged-d6293fb0-6295-4539-9b70-544701b49154 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
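The acquire/release pairs around pop_instance_event show oslo.concurrency's named-lock pattern: nova serializes event delivery per instance by locking "<instance-uuid>-events". A minimal sketch of the same pattern (illustrative names, not nova's actual code):

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        # Same named-lock idiom as the acquire/release lines above.
        with lockutils.lock(instance_uuid + '-events'):
            # Look up and remove any waiter registered for event_name;
            # "No waiting events found" above means nothing was registered.
            return None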
Nov 29 03:14:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:10.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 452 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.9 MiB/s wr, 341 op/s
Nov 29 03:14:11 np0005539550 nova_compute[257631]: 2025-11-29 08:14:11.656 257641 INFO nova.virt.libvirt.driver [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Deleting instance files /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb_del#033[00m
Nov 29 03:14:11 np0005539550 nova_compute[257631]: 2025-11-29 08:14:11.657 257641 INFO nova.virt.libvirt.driver [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Deletion of /var/lib/nova/instances/eebe492a-dfa8-49b7-83b6-6c8447520afb_del complete#033[00m
Nov 29 03:14:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:11.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:11 np0005539550 nova_compute[257631]: 2025-11-29 08:14:11.902 257641 INFO nova.compute.manager [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Took 4.60 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:11 np0005539550 nova_compute[257631]: 2025-11-29 08:14:11.903 257641 DEBUG oslo.service.loopingcall [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:11 np0005539550 nova_compute[257631]: 2025-11-29 08:14:11.903 257641 DEBUG nova.compute.manager [-] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:11 np0005539550 nova_compute[257631]: 2025-11-29 08:14:11.903 257641 DEBUG nova.network.neutron [-] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:11 np0005539550 nova_compute[257631]: 2025-11-29 08:14:11.921 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.417 257641 DEBUG nova.compute.manager [req-881a7c31-2cc8-4e39-8475-b09c09b1d1b4 req-ab101321-a0bd-4c1e-93bd-62bbf5f5bbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received event network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.417 257641 DEBUG oslo_concurrency.lockutils [req-881a7c31-2cc8-4e39-8475-b09c09b1d1b4 req-ab101321-a0bd-4c1e-93bd-62bbf5f5bbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.418 257641 DEBUG oslo_concurrency.lockutils [req-881a7c31-2cc8-4e39-8475-b09c09b1d1b4 req-ab101321-a0bd-4c1e-93bd-62bbf5f5bbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.418 257641 DEBUG oslo_concurrency.lockutils [req-881a7c31-2cc8-4e39-8475-b09c09b1d1b4 req-ab101321-a0bd-4c1e-93bd-62bbf5f5bbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.418 257641 DEBUG nova.compute.manager [req-881a7c31-2cc8-4e39-8475-b09c09b1d1b4 req-ab101321-a0bd-4c1e-93bd-62bbf5f5bbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] No waiting events found dispatching network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.419 257641 WARNING nova.compute.manager [req-881a7c31-2cc8-4e39-8475-b09c09b1d1b4 req-ab101321-a0bd-4c1e-93bd-62bbf5f5bbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received unexpected event network-vif-plugged-d6293fb0-6295-4539-9b70-544701b49154 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.476 257641 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.477 257641 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:12 np0005539550 nova_compute[257631]: 2025-11-29 08:14:12.477 257641 DEBUG nova.network.neutron [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:12.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:13 np0005539550 nova_compute[257631]: 2025-11-29 08:14:13.159 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 452 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.9 MiB/s wr, 297 op/s
Nov 29 03:14:13 np0005539550 nova_compute[257631]: 2025-11-29 08:14:13.566 257641 DEBUG nova.network.neutron [-] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:13 np0005539550 nova_compute[257631]: 2025-11-29 08:14:13.585 257641 INFO nova.compute.manager [-] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Took 1.68 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:13 np0005539550 nova_compute[257631]: 2025-11-29 08:14:13.667 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:13 np0005539550 nova_compute[257631]: 2025-11-29 08:14:13.668 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:13.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:13 np0005539550 nova_compute[257631]: 2025-11-29 08:14:13.810 257641 DEBUG oslo_concurrency.processutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520042072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.250 257641 DEBUG oslo_concurrency.processutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.257 257641 DEBUG nova.compute.provider_tree [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.280 257641 DEBUG nova.scheduler.client.report [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
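The inventory nova reports to placement sizes DISK_GB from the `ceph df --format=json` run a few lines earlier. Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio, so the logged inventory implies the following (a quick check, values copied from the line above):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1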
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.314 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.348 257641 INFO nova.scheduler.client.report [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Deleted allocations for instance eebe492a-dfa8-49b7-83b6-6c8447520afb#033[00m
Nov 29 03:14:14 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.438 257641 DEBUG oslo_concurrency.lockutils [None req-ba52b56f-8110-47ab-8402-e6ad95d8f9cc 5dc4cc7160064e9e82d9d21ebfd05d2f c89019b6f53547259a833925c95b09c1 - - default default] Lock "eebe492a-dfa8-49b7-83b6-6c8447520afb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.517 257641 DEBUG nova.compute.manager [req-397054f6-f92b-4afd-939d-a7e6325600c5 req-de770909-724b-4038-80fc-6326be47c6f0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Received event network-vif-deleted-d6293fb0-6295-4539-9b70-544701b49154 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:14.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.534 257641 DEBUG nova.network.neutron [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.560 257641 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.645 257641 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.646 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Creating file /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/8c502d49136d4851b784a75f411fa513.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:14:14 np0005539550 nova_compute[257631]: 2025-11-29 08:14:14.647 257641 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/8c502d49136d4851b784a75f411fa513.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.103 257641 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/8c502d49136d4851b784a75f411fa513.tmp" returned: 1 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.104 257641 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e/8c502d49136d4851b784a75f411fa513.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.105 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Creating directory /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.105 257641 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 446 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.8 MiB/s wr, 360 op/s
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.333 257641 DEBUG oslo_concurrency.processutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
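The failed touch followed by mkdir -p is nova's remotefs helper probing whether the destination instance directory already exists before the cold migration, then creating it when the probe fails; BatchMode=yes makes ssh fail immediately rather than prompt for credentials. A compressed sketch of the sequence (host and directory from the log; the probe filename is illustrative, and retries/cleanup are omitted):

    from oslo_concurrency import processutils

    host = '192.168.122.102'
    inst_dir = '/var/lib/nova/instances/19e85fae-c57e-409b-95f7-b53ddb4c928e'

    def ssh(*cmd):
        # Raises ProcessExecutionError on a nonzero exit, like the touch above.
        return processutils.execute('ssh', '-o', 'BatchMode=yes', host, *cmd)

    try:
        ssh('touch', inst_dir + '/probe.tmp')  # returned 1 in the log
    except processutils.ProcessExecutionError:
        ssh('mkdir', '-p', inst_dir)           # returned 0 in the log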
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.337 257641 INFO nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance already shutdown.#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.343 257641 INFO nova.virt.libvirt.driver [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Instance destroyed successfully.#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.344 257641 DEBUG nova.virt.libvirt.vif [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:12:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:de:2f:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.345 257641 DEBUG nova.network.os_vif_util [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:de:2f:2f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.346 257641 DEBUG nova.network.os_vif_util [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.347 257641 DEBUG os_vif [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.350 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.351 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8b38a34-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.353 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.357 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.361 257641 INFO os_vif [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82')#033[00m
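DelPortCommand is os-vif removing the tap from br-int through ovsdbapp's OVSDB transaction API, the programmatic equivalent of `ovs-vsctl --if-exists del-port br-int tapd8b38a34-82`. A minimal sketch (the db.sock endpoint and the timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Endpoint assumed; production deployments may use a different socket.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port('tapd8b38a34-82', bridge='br-int',
                 if_exists=True).execute(check_error=True)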
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.367 257641 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.367 257641 DEBUG nova.virt.libvirt.driver [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:15Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:d1:d1 10.100.0.14
Nov 29 03:14:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:15Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:d1:d1 10.100.0.14
Nov 29 03:14:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:15.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:15 np0005539550 nova_compute[257631]: 2025-11-29 08:14:15.984 257641 DEBUG neutronclient.v2_0.client [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:14:16 np0005539550 nova_compute[257631]: 2025-11-29 08:14:16.106 257641 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:16 np0005539550 nova_compute[257631]: 2025-11-29 08:14:16.106 257641 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:16 np0005539550 nova_compute[257631]: 2025-11-29 08:14:16.106 257641 DEBUG oslo_concurrency.lockutils [None req-0729e93c-a3dd-4d1f-a557-fadba9399f2d c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:16.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:16 np0005539550 nova_compute[257631]: 2025-11-29 08:14:16.923 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 477 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.5 MiB/s wr, 280 op/s
Nov 29 03:14:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:17.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:18 np0005539550 podman[314742]: 2025-11-29 08:14:18.321100008 +0000 UTC m=+0.055544883 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:18 np0005539550 podman[314741]: 2025-11-29 08:14:18.328554728 +0000 UTC m=+0.062999313 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.368 257641 DEBUG nova.compute.manager [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Received event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.368 257641 DEBUG nova.compute.manager [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing instance network info cache due to event network-changed-d8b38a34-8274-43e4-8ebd-3924de5c5ba7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.369 257641 DEBUG oslo_concurrency.lockutils [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.369 257641 DEBUG oslo_concurrency.lockutils [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.369 257641 DEBUG nova.network.neutron [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Refreshing network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.397 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404043.39627, 19e85fae-c57e-409b-95f7-b53ddb4c928e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.397 257641 INFO nova.compute.manager [-] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.445 257641 DEBUG nova.compute.manager [None req-873d660f-83d1-48e7-a5e4-d6f94be52a04 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.448 257641 DEBUG nova.compute.manager [None req-873d660f-83d1-48e7-a5e4-d6f94be52a04 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_migrated, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.484 257641 INFO nova.compute.manager [None req-873d660f-83d1-48e7-a5e4-d6f94be52a04 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
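The sync compares the hypervisor-reported power state against the DB copy; both are 4 here, which corresponds to SHUTDOWN in nova's power-state enum and is consistent with vm_state stopped mid-resize. A hedged restatement of that mapping (values mirrored from nova.compute.power_state, not from this log):

    POWER_STATES = {0: 'pending', 1: 'running', 3: 'paused',
                    4: 'shutdown', 6: 'crashed', 7: 'suspended'}
    print(POWER_STATES[4])  # 'shutdown' -- the power_state logged above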
Nov 29 03:14:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:18.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:18 np0005539550 nova_compute[257631]: 2025-11-29 08:14:18.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:14:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:18.946 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:18.947 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:18.947 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 507 MiB data, 1024 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.2 MiB/s wr, 248 op/s
Nov 29 03:14:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:19.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.012925957914572864 of space, bias 1.0, pg target 3.877787374371859 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016194252512562814 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:14:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
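Each pg_autoscaler line above follows the same arithmetic: raw pg target = fraction-of-space-used * bias * pg budget, where the budget here works out to roughly 300, consistent with the default mon_target_pg_per_osd of 100 times this cluster's 3 OSDs (an inference from the numbers, not logged directly). Checking the 'vms' pool:

    capacity_ratio = 0.012925957914572864  # 'vms' fraction of space, from the log
    bias = 1.0
    pg_budget = 100 * 3                    # mon_target_pg_per_osd * OSDs (assumed)
    print(capacity_ratio * bias * pg_budget)  # ~3.878, matching the logged target
    # The raw target is quantized to a power of two, and pg_num only changes
    # when it is off by a large factor, so every pool stays at its current value.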
Nov 29 03:14:20 np0005539550 nova_compute[257631]: 2025-11-29 08:14:20.354 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:20.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.014 257641 DEBUG nova.network.neutron [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updated VIF entry in instance network info cache for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.014 257641 DEBUG nova.network.neutron [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.045 257641 DEBUG oslo_concurrency.lockutils [req-2884a826-bf6a-485f-8754-ffc41a7923aa req-bf7e74b4-2272-42ff-8c66-3d29101dfda2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:21Z|00364|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.238 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 514 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 632 KiB/s rd, 6.4 MiB/s wr, 194 op/s
Nov 29 03:14:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 29 03:14:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:21Z|00365|binding|INFO|Releasing lport fb53c57a-d19f-4391-add7-afa34095fb59 from this chassis (sb_readonly=0)
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.926 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.942 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 29 03:14:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.983 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.983 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.983 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.984 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:14:21 np0005539550 nova_compute[257631]: 2025-11-29 08:14:21.984 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3635623637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.455 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
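The resource tracker shells out to the verbatim command in the lines above to size the RBD-backed disk pool. The same call can be reproduced outside nova; the JSON field names below match recent Ceph releases:

    import json
    import subprocess

    # Exact command from the log, run with the same cephx user and conf file.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])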
Nov 29 03:14:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:22.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.689 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.689 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.692 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.692 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.861 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.862 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4239MB free_disk=20.715377807617188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
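The pci_devices field in the resource view above is plain JSON, so the host's device profile can be summarized directly; a short sketch (two entries copied from the line, truncated for brevity):

    import json
    from collections import Counter

    pci_devices = json.loads('''[
      {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1",
       "product_id": "7010", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7010", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0",
       "product_id": "1050", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1050", "dev_type": "type-PCI"}
    ]''')
    # Count vendor:product pairs; 1af4 (virtio) devices dominate on this KVM
    # guest, while 8086 entries are the emulated Intel chipset functions.
    print(Counter(f'{d["vendor_id"]}:{d["product_id"]}' for d in pci_devices))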
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.862 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.863 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.940 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Migration for instance 19e85fae-c57e-409b-95f7-b53ddb4c928e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.975 257641 INFO nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating resource usage from migration 0444e4d7-f620-4d8e-910b-fc76bd03562e#033[00m
Nov 29 03:14:22 np0005539550 nova_compute[257631]: 2025-11-29 08:14:22.975 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Starting to track outgoing migration 0444e4d7-f620-4d8e-910b-fc76bd03562e with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.131 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404048.129775, eebe492a-dfa8-49b7-83b6-6c8447520afb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.132 257641 INFO nova.compute.manager [-] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.155 257641 DEBUG nova.compute.manager [None req-45c4bd61-6322-4dae-8a72-367b788380ba - - - - - -] [instance: eebe492a-dfa8-49b7-83b6-6c8447520afb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.170 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.170 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Migration 0444e4d7-f620-4d8e-910b-fc76bd03562e is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.171 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.171 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:14:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 514 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 756 KiB/s rd, 6.8 MiB/s wr, 220 op/s
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.445 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764695445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.942 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.948 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.965 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
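Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, which is why this 8-vCPU host can carry far more than 8 guest vCPUs. Reproducing the arithmetic with the values from the line above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1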
Nov 29 03:14:23 np0005539550 nova_compute[257631]: 2025-11-29 08:14:23.999 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:14:24 np0005539550 nova_compute[257631]: 2025-11-29 08:14:24.000 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:24.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:24 np0005539550 nova_compute[257631]: 2025-11-29 08:14:24.977 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:24 np0005539550 nova_compute[257631]: 2025-11-29 08:14:24.977 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:14:24 np0005539550 nova_compute[257631]: 2025-11-29 08:14:24.978 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:14:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 517 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 587 KiB/s rd, 4.6 MiB/s wr, 159 op/s
Nov 29 03:14:25 np0005539550 nova_compute[257631]: 2025-11-29 08:14:25.366 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:25 np0005539550 nova_compute[257631]: 2025-11-29 08:14:25.563 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:25 np0005539550 nova_compute[257631]: 2025-11-29 08:14:25.563 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:25 np0005539550 nova_compute[257631]: 2025-11-29 08:14:25.563 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:14:25 np0005539550 nova_compute[257631]: 2025-11-29 08:14:25.564 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:25.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:26.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:26 np0005539550 nova_compute[257631]: 2025-11-29 08:14:26.927 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:27 np0005539550 nova_compute[257631]: 2025-11-29 08:14:27.167 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:27 np0005539550 nova_compute[257631]: 2025-11-29 08:14:27.167 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:27 np0005539550 nova_compute[257631]: 2025-11-29 08:14:27.168 257641 DEBUG nova.compute.manager [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Going to confirm migration 14 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
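"Going to confirm migration 14" is the compute-side handler for a resize confirmation. Presumably tempest drives the same public API here; the equivalent client call, with syntax per recent python-openstackclient releases, would be:

    import subprocess

    # Confirm the pending resize for the instance named in the lock above.
    subprocess.run(
        ["openstack", "server", "resize", "confirm",
         "19e85fae-c57e-409b-95f7-b53ddb4c928e"],
        check=True,
    )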
Nov 29 03:14:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 517 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 472 KiB/s rd, 1.9 MiB/s wr, 133 op/s
Nov 29 03:14:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:27.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.390 257641 DEBUG neutronclient.v2_0.client [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port d8b38a34-8274-43e4-8ebd-3924de5c5ba7 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.390 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.391 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.391 257641 DEBUG nova.network.neutron [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.391 257641 DEBUG nova.objects.instance [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'info_cache' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.470 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Updating instance_info_cache with network_info: [{"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.503 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.504 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.504 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.504 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.505 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.505 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.505 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.505 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.525 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:14:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:28.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.939 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.940 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:28 np0005539550 nova_compute[257631]: 2025-11-29 08:14:28.940 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:14:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 517 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 257 KiB/s wr, 128 op/s
Nov 29 03:14:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:29.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:30 np0005539550 nova_compute[257631]: 2025-11-29 08:14:30.369 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:30.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:30 np0005539550 nova_compute[257631]: 2025-11-29 08:14:30.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.078 257641 DEBUG nova.network.neutron [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Updating instance_info_cache with network_info: [{"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.113 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-19e85fae-c57e-409b-95f7-b53ddb4c928e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.114 257641 DEBUG nova.objects.instance [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 19e85fae-c57e-409b-95f7-b53ddb4c928e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.215 257641 DEBUG nova.storage.rbd_utils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] removing snapshot(nova-resize) on rbd image(19e85fae-c57e-409b-95f7-b53ddb4c928e_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
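On confirm, nova drops the "nova-resize" safety snapshot it took on the instance's RBD disk. A hand-rolled equivalent of that remove_snap call, assuming the conventional 'vms' images_rbd_pool (the pool name is not in the log):

    import subprocess

    # Image and snapshot names come straight from the log line; only the
    # 'vms' pool is an assumption.
    subprocess.run(
        ["rbd", "snap", "rm",
         "vms/19e85fae-c57e-409b-95f7-b53ddb4c928e_disk@nova-resize",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )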
Nov 29 03:14:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 517 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 52 KiB/s wr, 119 op/s
Nov 29 03:14:31 np0005539550 podman[314870]: 2025-11-29 08:14:31.370090007 +0000 UTC m=+0.111713041 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
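The health_status=healthy entry above is podman's periodic healthcheck for the ovn_controller container (the configured test is '/openstack/healthcheck', per the config_data in the same line). The same probe can be run on demand:

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")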
Nov 29 03:14:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 29 03:14:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 29 03:14:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 29 03:14:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:31.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.869 257641 DEBUG nova.virt.libvirt.vif [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:12:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1456117084',display_name='tempest-ServerActionsTestOtherB-server-1456117084',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1456117084',id=85,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-h2yqalhl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=19e85fae-c57e-409b-95f7-b53ddb4c928e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.870 257641 DEBUG nova.network.os_vif_util [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "address": "fa:16:3e:de:2f:2f", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b38a34-82", "ovs_interfaceid": "d8b38a34-8274-43e4-8ebd-3924de5c5ba7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.870 257641 DEBUG nova.network.os_vif_util [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.871 257641 DEBUG os_vif [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.873 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.873 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8b38a34-82, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.874 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.876 257641 INFO os_vif [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:2f:2f,bridge_name='br-int',has_traffic_filtering=True,id=d8b38a34-8274-43e4-8ebd-3924de5c5ba7,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b38a34-82')#033[00m
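The DelPortCommand transaction above is idempotent: if_exists=True makes it a no-op when the tap is already gone, hence "Transaction caused no change". A CLI equivalent (equivalent in effect, not nova's actual code path):

    import subprocess

    # Remove the tap from br-int, succeeding silently if it is already absent.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapd8b38a34-82"],
        check=True,
    )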
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.876 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.877 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:31 np0005539550 nova_compute[257631]: 2025-11-29 08:14:31.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.010 257641 DEBUG oslo_concurrency.processutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290807649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.456 257641 DEBUG oslo_concurrency.processutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.463 257641 DEBUG nova.compute.provider_tree [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.481 257641 DEBUG nova.scheduler.client.report [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:14:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:32.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.553 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.554 257641 DEBUG nova.compute.manager [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 19e85fae-c57e-409b-95f7-b53ddb4c928e] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805#033[00m
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.714 257641 INFO nova.scheduler.client.report [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Deleted allocation for migration 0444e4d7-f620-4d8e-910b-fc76bd03562e#033[00m
Nov 29 03:14:32 np0005539550 nova_compute[257631]: 2025-11-29 08:14:32.805 257641 DEBUG oslo_concurrency.lockutils [None req-d4f85fff-5a56-495c-bff8-68df927ef00e c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "19e85fae-c57e-409b-95f7-b53ddb4c928e" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 5.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 517 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 52 KiB/s wr, 119 op/s
Nov 29 03:14:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:14:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3601.4 total, 600.0 interval
Cumulative writes: 29K writes, 113K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
Cumulative WAL: 29K writes, 10K syncs, 2.93 writes per sync, written: 0.10 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 39.54 MB, 0.07 MB/s
Interval WAL: 10K writes, 4177 syncs, 2.48 writes per sync, written: 0.04 GB, 0.07 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:14:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/493973315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:34.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 517 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 34 KiB/s wr, 106 op/s
Nov 29 03:14:35 np0005539550 nova_compute[257631]: 2025-11-29 08:14:35.370 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:35.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:36.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:36 np0005539550 nova_compute[257631]: 2025-11-29 08:14:36.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:36 np0005539550 nova_compute[257631]: 2025-11-29 08:14:36.930 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 517 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 98 op/s
Nov 29 03:14:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:37.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:38.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 465 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 21 KiB/s wr, 69 op/s
Nov 29 03:14:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:39 np0005539550 nova_compute[257631]: 2025-11-29 08:14:39.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:40 np0005539550 nova_compute[257631]: 2025-11-29 08:14:40.374 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:40.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 438 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 282 KiB/s rd, 19 KiB/s wr, 84 op/s
Nov 29 03:14:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:41.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 29 03:14:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 29 03:14:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 29 03:14:41 np0005539550 nova_compute[257631]: 2025-11-29 08:14:41.933 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:42.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.267 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.268 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.268 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.268 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.269 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.270 257641 INFO nova.compute.manager [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Terminating instance#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.270 257641 DEBUG nova.compute.manager [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:14:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 438 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 816 KiB/s rd, 30 KiB/s wr, 109 op/s
Nov 29 03:14:43 np0005539550 kernel: tapf890c7dd-6d (unregistering): left promiscuous mode
Nov 29 03:14:43 np0005539550 NetworkManager[49039]: <info>  [1764404083.3287] device (tapf890c7dd-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.338 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:43Z|00366|binding|INFO|Releasing lport f890c7dd-6d37-48c2-83ba-547d7fc32c12 from this chassis (sb_readonly=0)
Nov 29 03:14:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:43Z|00367|binding|INFO|Setting lport f890c7dd-6d37-48c2-83ba-547d7fc32c12 down in Southbound
Nov 29 03:14:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:43Z|00368|binding|INFO|Removing iface tapf890c7dd-6d ovn-installed in OVS
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.340 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.345 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:d1:d1 10.100.0.14'], port_security=['fa:16:3e:bd:d1:d1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '53d2b424-4fdc-47e3-afa1-849bfc2c0b5a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5ce08321-9ca9-47d5-b99b-65a439440787', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '17c0ff0fdeac43fc8fa0d7bedad67c34', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e0588e8-cc01-4cf1-ba71-74f90ca3214d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=65c90a62-2d0d-4ced-b7e5-a1b1d91ba84b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f890c7dd-6d37-48c2-83ba-547d7fc32c12) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.346 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f890c7dd-6d37-48c2-83ba-547d7fc32c12 in datapath 5ce08321-9ca9-47d5-b99b-65a439440787 unbound from our chassis#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.348 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5ce08321-9ca9-47d5-b99b-65a439440787, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.349 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2812d658-439f-4001-92e0-d00b33b11ff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.350 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 namespace which is not needed anymore#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.361 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Nov 29 03:14:43 np0005539550 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d0000005b.scope: Consumed 15.918s CPU time.
Nov 29 03:14:43 np0005539550 systemd-machined[216673]: Machine qemu-42-instance-0000005b terminated.
Nov 29 03:14:43 np0005539550 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[314100]: [NOTICE]   (314104) : haproxy version is 2.8.14-c23fe91
Nov 29 03:14:43 np0005539550 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[314100]: [NOTICE]   (314104) : path to executable is /usr/sbin/haproxy
Nov 29 03:14:43 np0005539550 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[314100]: [WARNING]  (314104) : Exiting Master process...
Nov 29 03:14:43 np0005539550 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[314100]: [ALERT]    (314104) : Current worker (314106) exited with code 143 (Terminated)
Nov 29 03:14:43 np0005539550 neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787[314100]: [WARNING]  (314104) : All workers exited. Exiting... (0)
Nov 29 03:14:43 np0005539550 systemd[1]: libpod-8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7.scope: Deactivated successfully.
Nov 29 03:14:43 np0005539550 podman[314999]: 2025-11-29 08:14:43.503466741 +0000 UTC m=+0.051215143 container died 8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.506 257641 INFO nova.virt.libvirt.driver [-] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Instance destroyed successfully.#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.507 257641 DEBUG nova.objects.instance [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lazy-loading 'resources' on Instance uuid 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.528 257641 DEBUG nova.virt.libvirt.vif [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:13:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1208248196',display_name='tempest-ListServerFiltersTestJSON-instance-1208248196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1208248196',id=91,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:14:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='17c0ff0fdeac43fc8fa0d7bedad67c34',ramdisk_id='',reservation_id='r-ropy304h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-825347861',owner_user_name='tempest-ListServerFiltersTestJSON-825347861-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:14:00Z,user_data=None,user_id='05e59f4debd946ad9b7a4bac0e968bc6',uuid=53d2b424-4fdc-47e3-afa1-849bfc2c0b5a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.528 257641 DEBUG nova.network.os_vif_util [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converting VIF {"id": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "address": "fa:16:3e:bd:d1:d1", "network": {"id": "5ce08321-9ca9-47d5-b99b-65a439440787", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1544923692-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "17c0ff0fdeac43fc8fa0d7bedad67c34", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf890c7dd-6d", "ovs_interfaceid": "f890c7dd-6d37-48c2-83ba-547d7fc32c12", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.529 257641 DEBUG nova.network.os_vif_util [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=f890c7dd-6d37-48c2-83ba-547d7fc32c12,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf890c7dd-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.530 257641 DEBUG os_vif [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=f890c7dd-6d37-48c2-83ba-547d7fc32c12,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf890c7dd-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.531 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.531 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf890c7dd-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7-userdata-shm.mount: Deactivated successfully.
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.536 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.537 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.540 257641 INFO os_vif [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:d1:d1,bridge_name='br-int',has_traffic_filtering=True,id=f890c7dd-6d37-48c2-83ba-547d7fc32c12,network=Network(5ce08321-9ca9-47d5-b99b-65a439440787),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf890c7dd-6d')#033[00m
Nov 29 03:14:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-67da18e0fa809e92e989aec858dcb48b56ceaf987d21d3ea1c679d6b45c91cab-merged.mount: Deactivated successfully.
Nov 29 03:14:43 np0005539550 podman[314999]: 2025-11-29 08:14:43.551717097 +0000 UTC m=+0.099465479 container cleanup 8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:14:43 np0005539550 systemd[1]: libpod-conmon-8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7.scope: Deactivated successfully.
Nov 29 03:14:43 np0005539550 podman[315055]: 2025-11-29 08:14:43.616086514 +0000 UTC m=+0.044238826 container remove 8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.621 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[81ab6951-3ed8-4023-b62a-a6f114d3f50d]: (4, ('Sat Nov 29 08:14:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7)\n8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7\nSat Nov 29 08:14:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 (8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7)\n8f2004c47808d0f8ee5f8d65eb1873b9fb8d906a3f529b7030bcb2b5ea9857d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.623 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[00496a6f-193d-409e-be86-bebf988b918c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.624 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce08321-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.626 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 kernel: tap5ce08321-90: left promiscuous mode
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.655 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.657 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[62f4f138-78f8-4adb-bccc-1a452a0902ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.671 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ed98e16e-bd14-4d2e-8a39-8dbe95af9d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.672 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[69e1636f-be33-438c-8c8b-303f7988e83b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.687 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0511f632-5768-4b34-a5a9-01d8daf7da74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707928, 'reachable_time': 16163, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315073, 'error': None, 'target': 'ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.689 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5ce08321-9ca9-47d5-b99b-65a439440787 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:14:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:43.689 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e7fab2bb-32fe-4ed5-b529-6510e5c8f0c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:43 np0005539550 systemd[1]: run-netns-ovnmeta\x2d5ce08321\x2d9ca9\x2d47d5\x2db99b\x2d65a439440787.mount: Deactivated successfully.
Nov 29 03:14:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:43.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.954 257641 INFO nova.virt.libvirt.driver [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Deleting instance files /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_del#033[00m
Nov 29 03:14:43 np0005539550 nova_compute[257631]: 2025-11-29 08:14:43.955 257641 INFO nova.virt.libvirt.driver [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Deletion of /var/lib/nova/instances/53d2b424-4fdc-47e3-afa1-849bfc2c0b5a_del complete#033[00m
Nov 29 03:14:44 np0005539550 nova_compute[257631]: 2025-11-29 08:14:44.003 257641 INFO nova.compute.manager [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:14:44 np0005539550 nova_compute[257631]: 2025-11-29 08:14:44.004 257641 DEBUG oslo.service.loopingcall [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:14:44 np0005539550 nova_compute[257631]: 2025-11-29 08:14:44.005 257641 DEBUG nova.compute.manager [-] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:14:44 np0005539550 nova_compute[257631]: 2025-11-29 08:14:44.005 257641 DEBUG nova.network.neutron [-] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:14:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:44.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 438 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 30 KiB/s wr, 159 op/s
Nov 29 03:14:45 np0005539550 nova_compute[257631]: 2025-11-29 08:14:45.440 257641 DEBUG nova.network.neutron [-] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:45 np0005539550 nova_compute[257631]: 2025-11-29 08:14:45.467 257641 INFO nova.compute.manager [-] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Took 1.46 seconds to deallocate network for instance.#033[00m
Nov 29 03:14:45 np0005539550 nova_compute[257631]: 2025-11-29 08:14:45.539 257641 DEBUG nova.compute.manager [req-086cee12-6ed5-4662-9c24-22502ff9da06 req-0d696e5b-58bc-44c4-b893-00fb3e4e8494 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Received event network-vif-deleted-f890c7dd-6d37-48c2-83ba-547d7fc32c12 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:14:45 np0005539550 nova_compute[257631]: 2025-11-29 08:14:45.553 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:45 np0005539550 nova_compute[257631]: 2025-11-29 08:14:45.554 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:45 np0005539550 nova_compute[257631]: 2025-11-29 08:14:45.613 257641 DEBUG oslo_concurrency.processutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:45.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:45 np0005539550 nova_compute[257631]: 2025-11-29 08:14:45.823 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297258083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:46 np0005539550 nova_compute[257631]: 2025-11-29 08:14:46.055 257641 DEBUG oslo_concurrency.processutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:46 np0005539550 nova_compute[257631]: 2025-11-29 08:14:46.062 257641 DEBUG nova.compute.provider_tree [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:46 np0005539550 nova_compute[257631]: 2025-11-29 08:14:46.077 257641 DEBUG nova.scheduler.client.report [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:46 np0005539550 nova_compute[257631]: 2025-11-29 08:14:46.101 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:46 np0005539550 nova_compute[257631]: 2025-11-29 08:14:46.150 257641 INFO nova.scheduler.client.report [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Deleted allocations for instance 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a#033[00m
Nov 29 03:14:46 np0005539550 nova_compute[257631]: 2025-11-29 08:14:46.217 257641 DEBUG oslo_concurrency.lockutils [None req-a048dae1-acb5-4773-b8cb-08c1d29e3103 05e59f4debd946ad9b7a4bac0e968bc6 17c0ff0fdeac43fc8fa0d7bedad67c34 - - default default] Lock "53d2b424-4fdc-47e3-afa1-849bfc2c0b5a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:46.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:46 np0005539550 nova_compute[257631]: 2025-11-29 08:14:46.937 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 240 MiB data, 885 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 38 KiB/s wr, 245 op/s
Nov 29 03:14:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:47.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 03:14:48 np0005539550 nova_compute[257631]: 2025-11-29 08:14:48.537 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:48.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 202 MiB data, 861 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 33 KiB/s wr, 233 op/s
Nov 29 03:14:49 np0005539550 podman[315101]: 2025-11-29 08:14:49.353683154 +0000 UTC m=+0.049571792 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:14:49 np0005539550 podman[315100]: 2025-11-29 08:14:49.361358299 +0000 UTC m=+0.057231376 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:14:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:49.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:50.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.617 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.617 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.634 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.697 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.698 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.707 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.707 257641 INFO nova.compute.claims [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:14:50 np0005539550 nova_compute[257631]: 2025-11-29 08:14:50.946 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 142 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 33 KiB/s wr, 227 op/s
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.326 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.327 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.345 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:14:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1529092516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.397 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.403 257641 DEBUG nova.compute.provider_tree [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.423 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.430 257641 DEBUG nova.scheduler.client.report [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.457 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.458 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.461 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.469 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.469 257641 INFO nova.compute.claims [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.518 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.518 257641 DEBUG nova.network.neutron [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.541 257641 INFO nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.558 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.602 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.680 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.681 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.682 257641 INFO nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Creating image(s)
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.707 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.741 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:51.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.774 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.779 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.804 257641 DEBUG nova.policy [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5e3ade3963d47be97b545b2e3779b6b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b8899f76f554afc96bb2441424e5a77', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.846 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
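The qemu-img probe above runs under oslo's prlimit wrapper, which caps the child's address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the compute service. A sketch of the same call, with the paths and limits taken from the log:

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info", base, "--force-share", "--output=json"]
    # --force-share lets the probe read an image another process holds open.
    info = json.loads(subprocess.run(cmd, check=True, capture_output=True,
                                     text=True).stdout)
    print(info["format"], info["virtual-size"])  # e.g. image format and bytes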
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.848 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.849 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.849 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.876 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.879 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:51 np0005539550 nova_compute[257631]: 2025-11-29 08:14:51.939 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:14:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028088764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.045 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.052 257641 DEBUG nova.compute.provider_tree [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.068 257641 DEBUG nova.scheduler.client.report [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.118 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.121 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.185 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.186 257641 DEBUG nova.network.neutron [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.264 257641 INFO nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.284 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.287 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.296 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.381 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] resizing rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
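The instance disk is materialized in the two steps visible above: import the cached base file into the vms pool as a format-2 RBD image, then grow it to the flavor's root size (1073741824 bytes = 1 GiB). Sketched here with the rbd CLI (Nova's resize actually goes through librbd; keyring access to the pool is assumed):

    import subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    image = "886e1d81-4445-45c7-8c0a-4838eb595ab1_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # 1. Import the flat base file as a format-2 RBD image.
    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *common], check=True)
    # 2. Grow it to the flavor's root-disk size (1024 MiB = 1 GiB).
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024M",
                    image, *common], check=True)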
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.435 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.437 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.437 257641 INFO nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Creating image(s)
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.463 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.493 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.517 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.521 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.545 257641 DEBUG nova.policy [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f4c89c9953854ecf96a802dc6055db9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
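The policy failures logged for both requests are expected for non-admin users: the credentials carry only the member and reader roles, while attaching an external network defaults to admin-only. A standalone oslo.policy sketch of that denial (the rule string here is illustrative, not Nova's exact default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "is_admin:True"))

    # Credentials shaped like the dict in the log line above.
    creds = {"is_admin": False, "roles": ["member", "reader"],
             "project_id": "4da7fb77734a4135a6f8b5b70bed7a2f"}
    print(enforcer.enforce("network:attach_external_network", {}, creds))
    # -> False, which Nova records as "Policy check ... failed"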
Nov 29 03:14:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:52.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
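The beast access-log lines radosgw emits have a fixed shape; a small parsing sketch pulling out the client, verb, status and latency (the regex is an assumption fitted to the lines seen here, not an official grammar):

    import re

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:14:52.570 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = re.search(
        r'(\d+\.\d+\.\d+\.\d+) .*?\[(.*?)\] "(\w+) (\S+) [^"]*" (\d+)'
        r'.*latency=([\d.]+)s', line)
    ip, when, verb, path, status, latency = m.groups()
    print(ip, verb, path, status, float(latency))
    # -> 192.168.122.100 HEAD / 200 0.001000025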
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.586 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.587 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.588 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.588 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.614 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.617 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 cc3dd9da-bb7d-4885-8555-d724b05677fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.677 257641 DEBUG nova.objects.instance [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.703 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.704 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Ensure instance console log exists: /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.705 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.706 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.706 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.737 257641 DEBUG nova.network.neutron [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Successfully created port: c4a22b7d-1070-4870-a1f6-21f50729504a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.901 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 cc3dd9da-bb7d-4885-8555-d724b05677fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:14:52 np0005539550 nova_compute[257631]: 2025-11-29 08:14:52.983 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] resizing rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:14:53 np0005539550 podman[315654]: 2025-11-29 08:14:53.055700146 +0000 UTC m=+0.059981706 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:14:53 np0005539550 podman[315654]: 2025-11-29 08:14:53.162275925 +0000 UTC m=+0.166557465 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:14:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 121 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 29 KiB/s wr, 226 op/s
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.305 257641 DEBUG nova.objects.instance [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'migration_context' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.318 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.318 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Ensure instance console log exists: /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.319 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.319 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.319 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.402 257641 DEBUG nova.network.neutron [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Successfully created port: 5be36804-6ed4-419f-9b92-e268a97799f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:14:53 np0005539550 nova_compute[257631]: 2025-11-29 08:14:53.539 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
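Each "Successfully created port" entry above corresponds to a Neutron port create issued on the instance's behalf. A minimal openstacksdk sketch of an equivalent call (the cloud name and network UUID are placeholders, and a clouds.yaml entry is assumed):

    import openstack

    conn = openstack.connect(cloud="overcloud")  # assumed clouds.yaml entry
    port = conn.network.create_port(
        network_id="NET_UUID",  # placeholder: the instance's network
        device_id="cc3dd9da-bb7d-4885-8555-d724b05677fd",
        device_owner="compute:nova")
    # Nova then waits for the network-changed event Neutron emits for it.
    print(port.id)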
Nov 29 03:14:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:14:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:14:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:53 np0005539550 podman[315843]: 2025-11-29 08:14:53.741036428 +0000 UTC m=+0.057296997 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:14:53 np0005539550 podman[315843]: 2025-11-29 08:14:53.750965411 +0000 UTC m=+0.067225960 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:14:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:53.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:53 np0005539550 podman[315908]: 2025-11-29 08:14:53.952123415 +0000 UTC m=+0.048029602 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, release=1793, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 29 03:14:53 np0005539550 podman[315908]: 2025-11-29 08:14:53.964231872 +0000 UTC m=+0.060138059 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, release=1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.368 257641 DEBUG nova.network.neutron [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Successfully updated port: c4a22b7d-1070-4870-a1f6-21f50729504a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.402 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.403 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.403 257641 DEBUG nova.network.neutron [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:14:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:54.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.584 257641 DEBUG nova.compute.manager [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-changed-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.585 257641 DEBUG nova.compute.manager [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Refreshing instance network info cache due to event network-changed-c4a22b7d-1070-4870-a1f6-21f50729504a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.585 257641 DEBUG oslo_concurrency.lockutils [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.683 257641 DEBUG nova.network.neutron [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Successfully updated port: 5be36804-6ed4-419f-9b92-e268a97799f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.719 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.720 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquired lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:14:54 np0005539550 nova_compute[257631]: 2025-11-29 08:14:54.720 257641 DEBUG nova.network.neutron [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8a7352d4-a852-4c57-b523-3e00fff456f3 does not exist
Nov 29 03:14:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5627705c-bc96-4274-8fef-0f07d92f8da2 does not exist
Nov 29 03:14:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d9e24588-72a1-47cf-a207-850065df97f0 does not exist
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:14:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
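The mon_command dispatches audited above are the same calls the ceph CLI issues; equivalents of three of them, sketched with subprocess (a keyring with matching caps is assumed):

    import subprocess

    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],
        ["ceph", "auth", "get", "client.bootstrap-osd"],
        ["ceph", "osd", "tree", "destroyed", "--format=json"],
    ):
        out = subprocess.run(cmd, check=True, capture_output=True, text=True)
        # Print the command and the first line of its output.
        print(" ".join(cmd), "->", out.stdout.strip().splitlines()[0])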
Nov 29 03:14:55 np0005539550 nova_compute[257631]: 2025-11-29 08:14:55.030 257641 DEBUG nova.network.neutron [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:14:55 np0005539550 nova_compute[257631]: 2025-11-29 08:14:55.091 257641 DEBUG nova.compute.manager [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-changed-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:14:55 np0005539550 nova_compute[257631]: 2025-11-29 08:14:55.091 257641 DEBUG nova.compute.manager [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Refreshing instance network info cache due to event network-changed-5be36804-6ed4-419f-9b92-e268a97799f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:14:55 np0005539550 nova_compute[257631]: 2025-11-29 08:14:55.091 257641 DEBUG oslo_concurrency.lockutils [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:14:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 121 MiB data, 797 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 196 op/s
Nov 29 03:14:55 np0005539550 nova_compute[257631]: 2025-11-29 08:14:55.369 257641 DEBUG nova.network.neutron [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:14:55 np0005539550 podman[316265]: 2025-11-29 08:14:55.416204784 +0000 UTC m=+0.037610418 container create 24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:14:55 np0005539550 systemd[1]: Started libpod-conmon-24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc.scope.
Nov 29 03:14:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:14:55 np0005539550 podman[316265]: 2025-11-29 08:14:55.399504229 +0000 UTC m=+0.020909873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:55 np0005539550 podman[316265]: 2025-11-29 08:14:55.501704807 +0000 UTC m=+0.123110441 container init 24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:14:55 np0005539550 podman[316265]: 2025-11-29 08:14:55.508032658 +0000 UTC m=+0.129438312 container start 24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:55 np0005539550 nostalgic_merkle[316281]: 167 167
Nov 29 03:14:55 np0005539550 systemd[1]: libpod-24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc.scope: Deactivated successfully.
Nov 29 03:14:55 np0005539550 podman[316265]: 2025-11-29 08:14:55.515058687 +0000 UTC m=+0.136464321 container attach 24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:55 np0005539550 podman[316265]: 2025-11-29 08:14:55.516458122 +0000 UTC m=+0.137863776 container died 24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:14:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d08b24caf79369b76bf5701b4cfdb2d5f641383e0120314040d129335e08db92-merged.mount: Deactivated successfully.
Nov 29 03:14:55 np0005539550 podman[316265]: 2025-11-29 08:14:55.5596278 +0000 UTC m=+0.181033434 container remove 24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:14:55 np0005539550 systemd[1]: libpod-conmon-24b75ace1cf2d5449433069952c53274bb6d0a5131314f03b0f4ca03c4f5ebfc.scope: Deactivated successfully.
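The create/init/start/attach/died/remove trail above is the signature of a one-shot `podman run --rm`; cephadm launches such throwaway containers for host checks. A sketch of the shape of that call with the image digest from the log; the stat command and path are assumptions, since the log only shows the container's output ("167 167", the ceph UID/GID):

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # --rm removes the container once the entrypoint exits, which journald
    # records as "container died" followed by "container remove".
    out = subprocess.run(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # assumed check command
        check=True, capture_output=True, text=True).stdout
    print(out.strip())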
Nov 29 03:14:55 np0005539550 podman[316305]: 2025-11-29 08:14:55.724316196 +0000 UTC m=+0.043011204 container create 52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:14:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:14:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:14:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:14:55 np0005539550 systemd[1]: Started libpod-conmon-52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa.scope.
Nov 29 03:14:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:55.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:14:55 np0005539550 podman[316305]: 2025-11-29 08:14:55.703465226 +0000 UTC m=+0.022160254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecdc40357cbac90413f2c4d95adaa78336ed0aa736d89bd2ec627db9ea19e5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecdc40357cbac90413f2c4d95adaa78336ed0aa736d89bd2ec627db9ea19e5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecdc40357cbac90413f2c4d95adaa78336ed0aa736d89bd2ec627db9ea19e5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecdc40357cbac90413f2c4d95adaa78336ed0aa736d89bd2ec627db9ea19e5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:55 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ecdc40357cbac90413f2c4d95adaa78336ed0aa736d89bd2ec627db9ea19e5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
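These xfs warnings fire as the container's bind mounts are set up: inode timestamps on a non-bigtime XFS filesystem are 32-bit and run out at 0x7fffffff seconds after the epoch. The cutoff the kernel is reporting can be checked directly:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed time_t maximum named in the log lines.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00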
Nov 29 03:14:55 np0005539550 podman[316305]: 2025-11-29 08:14:55.814102969 +0000 UTC m=+0.132798017 container init 52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:14:55 np0005539550 podman[316305]: 2025-11-29 08:14:55.821893717 +0000 UTC m=+0.140588735 container start 52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:14:55 np0005539550 podman[316305]: 2025-11-29 08:14:55.828485275 +0000 UTC m=+0.147180283 container attach 52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:14:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:56.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:56 np0005539550 hungry_lovelace[316321]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:14:56 np0005539550 hungry_lovelace[316321]: --> relative data size: 1.0
Nov 29 03:14:56 np0005539550 hungry_lovelace[316321]: --> All data devices are unavailable
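The `-->` lines are ceph-volume output captured from the hungry_lovelace container: this looks like cephadm doing an OSD placement dry run (one LVM data device passed in, none physical, nothing usable found). A hedged sketch of checking the same device availability by hand, assuming the ceph-volume CLI is reachable on the host or in a shell inside such a container:

    import json, subprocess

    # `ceph-volume inventory --format json` reports, per device, whether it is
    # available for OSD deployment and the rejection reasons if it is not.
    devices = json.loads(subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"]))
    for dev in devices:
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))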
Nov 29 03:14:56 np0005539550 systemd[1]: libpod-52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa.scope: Deactivated successfully.
Nov 29 03:14:56 np0005539550 podman[316305]: 2025-11-29 08:14:56.766347187 +0000 UTC m=+1.085042205 container died 52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:14:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:14:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5ecdc40357cbac90413f2c4d95adaa78336ed0aa736d89bd2ec627db9ea19e5c-merged.mount: Deactivated successfully.
Nov 29 03:14:56 np0005539550 podman[316305]: 2025-11-29 08:14:56.82430686 +0000 UTC m=+1.143001868 container remove 52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:14:56 np0005539550 systemd[1]: libpod-conmon-52cb0c491a2e6c096aa8a1877ee417ce4d585c173c7724e9c01e42589a41bcfa.scope: Deactivated successfully.
Nov 29 03:14:56 np0005539550 nova_compute[257631]: 2025-11-29 08:14:56.941 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.296 257641 DEBUG nova.network.neutron [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Updating instance_info_cache with network_info: [{"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 204 MiB data, 837 MiB used, 20 GiB / 21 GiB avail; 806 KiB/s rd, 3.4 MiB/s wr, 197 op/s
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.340 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Releasing lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.340 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance network_info: |[{"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.341 257641 DEBUG oslo_concurrency.lockutils [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.341 257641 DEBUG nova.network.neutron [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Refreshing network info cache for port 5be36804-6ed4-419f-9b92-e268a97799f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.344 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Start _get_guest_xml network_info=[{"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.350 257641 WARNING nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.361 257641 DEBUG nova.virt.libvirt.host [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.362 257641 DEBUG nova.virt.libvirt.host [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.367 257641 DEBUG nova.virt.libvirt.host [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.367 257641 DEBUG nova.virt.libvirt.host [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.368 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.369 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.369 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.369 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.370 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.370 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.370 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.370 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.371 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.371 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.371 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.371 257641 DEBUG nova.virt.hardware [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
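This DEBUG run-through is nova sizing the guest CPU topology: flavor and image impose no limits or preferences (0:0:0 everywhere), the defaults cap each dimension at 65536, and the only topology whose product covers the single vCPU is 1 socket x 1 core x 1 thread. An illustrative re-derivation of that 1-vCPU case (not nova's actual code path, which lives in nova/virt/hardware.py as logged):

    from itertools import product

    vcpus = 1
    # With no constraints, any (sockets, cores, threads) whose product equals
    # the vCPU count is a candidate; for vcpus=1 that is only (1, 1, 1).
    candidates = [(s, c, t)
                  for s, c, t in product(range(1, vcpus + 1), repeat=3)
                  if s * c * t == vcpus]
    print(candidates)  # [(1, 1, 1)]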
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.376 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.408 257641 DEBUG nova.network.neutron [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.433 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.434 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance network_info: |[{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.434 257641 DEBUG oslo_concurrency.lockutils [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.435 257641 DEBUG nova.network.neutron [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Refreshing network info cache for port c4a22b7d-1070-4870-a1f6-21f50729504a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.437 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Start _get_guest_xml network_info=[{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.441 257641 WARNING nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.450 257641 DEBUG nova.virt.libvirt.host [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.451 257641 DEBUG nova.virt.libvirt.host [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.454 257641 DEBUG nova.virt.libvirt.host [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.455 257641 DEBUG nova.virt.libvirt.host [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.456 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.456 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.457 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.457 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.457 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.458 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.458 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.458 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.458 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.459 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.459 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.459 257641 DEBUG nova.virt.hardware [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.462 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:57 np0005539550 podman[316492]: 2025-11-29 08:14:57.520007926 +0000 UTC m=+0.043634810 container create df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ardinghelli, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:14:57 np0005539550 systemd[1]: Started libpod-conmon-df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4.scope.
Nov 29 03:14:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:14:57 np0005539550 podman[316492]: 2025-11-29 08:14:57.592000587 +0000 UTC m=+0.115627491 container init df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:14:57 np0005539550 podman[316492]: 2025-11-29 08:14:57.503969359 +0000 UTC m=+0.027596263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:57 np0005539550 podman[316492]: 2025-11-29 08:14:57.604881324 +0000 UTC m=+0.128508218 container start df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:14:57 np0005539550 vigorous_ardinghelli[316529]: 167 167
Nov 29 03:14:57 np0005539550 systemd[1]: libpod-df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4.scope: Deactivated successfully.
Nov 29 03:14:57 np0005539550 podman[316492]: 2025-11-29 08:14:57.612289122 +0000 UTC m=+0.135916096 container attach df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ardinghelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:14:57 np0005539550 podman[316492]: 2025-11-29 08:14:57.612821706 +0000 UTC m=+0.136448620 container died df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:14:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4265cee6656375c9c8df3f36b41d7825e94d8b648c64144cbae48bb728ea5d55-merged.mount: Deactivated successfully.
Nov 29 03:14:57 np0005539550 podman[316492]: 2025-11-29 08:14:57.666432049 +0000 UTC m=+0.190058933 container remove df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:14:57 np0005539550 systemd[1]: libpod-conmon-df331e1e9f5453e57f221c4aaac7c2d5971f292ff7ebebfdf079215f77354cd4.scope: Deactivated successfully.
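The vigorous_ardinghelli container existed only long enough to print "167 167": plausibly cephadm probing the uid/gid that owns the ceph data paths inside the image before chowning daemon directories (167:167 is the ceph uid:gid in these images). A hypothetical reproduction of such a probe; the exact entrypoint cephadm uses is an assumption here, not taken from this log:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Run stat inside a throwaway container to read the owner of /var/lib/ceph.
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         image, "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())  # expected: "167 167"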
Nov 29 03:14:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:57.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
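The recurring anonymous "HEAD / HTTP/1.0" requests answered with 200 and near-zero latency are a health-check pattern: 192.168.122.100 and .102 probe this radosgw roughly once per second, which is what a load balancer such as haproxy does. A minimal sketch of the same probe; the host and port are placeholders, since the beast frontend's listen address does not appear in this excerpt:

    import http.client

    # HOST/PORT are assumptions; substitute the beast frontend's bind address.
    conn = http.client.HTTPConnection("127.0.0.1", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # radosgw answers anonymous HEAD / with 200, as logged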
Nov 29 03:14:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604803242' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.836 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
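Each guest build shells out to the ceph CLI, and the mon's audit channel logs the matching dispatch from client.openstack a few lines earlier. A minimal standalone sketch of that round trip, using the same flags as the logged command; this is an illustration, not nova's rbd_utils helper, and the field names come from the monmap JSON that `ceph mon dump --format=json` emits:

    import json, subprocess

    def mon_addrs(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same command nova runs above; parse the monmap and list monitors.
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", user, "--conf", conf])
        monmap = json.loads(out)
        return [(m["name"], m["addr"]) for m in monmap.get("mons", [])]

    print(mon_addrs())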
Nov 29 03:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:57 np0005539550 podman[316570]: 2025-11-29 08:14:57.848067366 +0000 UTC m=+0.051039258 container create 2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.877 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.881 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:57 np0005539550 systemd[1]: Started libpod-conmon-2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e.scope.
Nov 29 03:14:57 np0005539550 podman[316570]: 2025-11-29 08:14:57.820949907 +0000 UTC m=+0.023921809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:14:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a4921f93e24ec028fa8faf9da3d3af17f422e7c74c6eed2d1f71dcc7840a796/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a4921f93e24ec028fa8faf9da3d3af17f422e7c74c6eed2d1f71dcc7840a796/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a4921f93e24ec028fa8faf9da3d3af17f422e7c74c6eed2d1f71dcc7840a796/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a4921f93e24ec028fa8faf9da3d3af17f422e7c74c6eed2d1f71dcc7840a796/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:14:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876194095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:57 np0005539550 podman[316570]: 2025-11-29 08:14:57.941463011 +0000 UTC m=+0.144434893 container init 2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcclintock, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:14:57 np0005539550 podman[316570]: 2025-11-29 08:14:57.94852772 +0000 UTC m=+0.151499602 container start 2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.948 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:57 np0005539550 podman[316570]: 2025-11-29 08:14:57.953349433 +0000 UTC m=+0.156321325 container attach 2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.974 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:57 np0005539550 nova_compute[257631]: 2025-11-29 08:14:57.977 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:14:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:14:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646539737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.318 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.321 257641 DEBUG nova.virt.libvirt.vif [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-513044691',display_name='tempest-ServerRescueTestJSON-server-513044691',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-513044691',id=96,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-b20jon9y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:52Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=cc3dd9da-bb7d-4885-8555-d724b05677fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.321 257641 DEBUG nova.network.os_vif_util [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.322 257641 DEBUG nova.network.os_vif_util [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
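
The pair of "Converting VIF" / "Converted object" lines is nova.network.os_vif_util translating nova's VIF dict into an os-vif versioned object before handing it to the ovs plugin. A rough equivalent built directly from os-vif's public classes (field values copied from the log; nova normally never constructs these by hand):

    # Sketch: build the VIFOpenVSwitch object the converter produced
    # above. In nova this is nova.network.os_vif_util.nova_to_osvif_vif().
    from os_vif.objects import network, vif

    net = network.Network(id='9404f82f-199b-4eec-83ca-0eeb6b2d1ce8',
                          bridge='br-int', mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id='5be36804-6ed4-419f-9b92-e268a97799f5',
        address='fa:16:3e:a3:26:00',
        network=net,
        vif_name='tap5be36804-6e',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
    print(ovs_vif)
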
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.323 257641 DEBUG nova.objects.instance [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'pci_devices' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
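
"Lazy-loading 'pci_devices'" is the oslo.versionedobjects hook firing: Instance fields that were not fetched with the object are pulled in on first attribute access via obj_load_attr. A toy version of that pattern (the Demo class and its fake DB result are invented for illustration):

    # Toy sketch of the lazy-load hook behind the log line above. Nova's
    # Instance.obj_load_attr does the same thing against the database.
    from oslo_versionedobjects import base, fields

    @base.VersionedObjectRegistry.register
    class Demo(base.VersionedObject):
        fields = {'pci_devices': fields.ListOfStringsField()}

        def obj_load_attr(self, attrname):
            # Called automatically when an unset field is first read.
            print("Lazy-loading %r" % attrname)
            setattr(self, attrname, ['0000:00:1f.2'])  # pretend DB result

    d = Demo()
    print(d.pci_devices)  # triggers obj_load_attr, then returns the value
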
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.343 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <uuid>cc3dd9da-bb7d-4885-8555-d724b05677fd</uuid>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <name>instance-00000060</name>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerRescueTestJSON-server-513044691</nova:name>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:14:57</nova:creationTime>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:user uuid="f4c89c9953854ecf96a802dc6055db9d">tempest-ServerRescueTestJSON-640276387-project-member</nova:user>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:project uuid="4da7fb77734a4135a6f8b5b70bed7a2f">tempest-ServerRescueTestJSON-640276387</nova:project>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:port uuid="5be36804-6ed4-419f-9b92-e268a97799f5">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="serial">cc3dd9da-bb7d-4885-8555-d724b05677fd</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="uuid">cc3dd9da-bb7d-4885-8555-d724b05677fd</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/cc3dd9da-bb7d-4885-8555-d724b05677fd_disk">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:a3:26:00"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <target dev="tap5be36804-6e"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/console.log" append="off"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:14:58 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:14:58 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
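
Everything between "End _get_guest_xml xml=" and the "_get_guest_xml" trailer is the literal domain XML nova hands to libvirt: the <nova:instance> block is inert metadata in a nova namespace, the RBD disks carry the three monitors discovered by the mon dump above, and the interface targets the tap device plugged next. A small sketch of pulling the flavor and monitor list back out of a saved copy of this dump (the file name is an assumption):

    # Sketch: pick apart a guest XML like the dump above with lxml.
    # "instance-00000060.xml" is assumed to hold the <domain> block.
    from lxml import etree

    NOVA_NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}

    dom = etree.parse('instance-00000060.xml').getroot()
    flavor = dom.find('.//nova:flavor', namespaces=NOVA_NS)
    print('flavor:', flavor.get('name'),
          'vcpus:', flavor.findtext('nova:vcpus', namespaces=NOVA_NS))

    # Each RBD disk lists every ceph monitor as a <host> element.
    for host in dom.findall('.//disk/source[@protocol="rbd"]/host'):
        print('mon:', host.get('name'), host.get('port'))
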
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.345 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Preparing to wait for external event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.345 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.346 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.346 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
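
The three lockutils lines bracket a tiny critical section: before plugging the VIF, nova registers a waiter for the network-vif-plugged event under the "<uuid>-events" lock, and the spawning thread later blocks on it until neutron reports the port up. A stripped-down illustration of that prepare/deliver handshake in plain threading (real nova keeps this table in nova.compute.manager.InstanceEvents and runs under eventlet; the names below are invented):

    # Simplified stand-in for the prepare_for_instance_event dance above.
    import threading

    _events = {}                # (instance_uuid, event_name) -> Event
    _events_lock = threading.Lock()

    def prepare_for_instance_event(uuid, name):
        with _events_lock:      # the "Acquiring lock ...-events" lines
            return _events.setdefault((uuid, name), threading.Event())

    def deliver(uuid, name):
        with _events_lock:
            ev = _events.pop((uuid, name), None)
        if ev:
            ev.set()            # unblocks the spawning thread

    key = ('cc3dd9da-bb7d-4885-8555-d724b05677fd',
           'network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5')
    waiter = prepare_for_instance_event(*key)
    threading.Timer(0.1, deliver, args=key).start()  # stands in for neutron
    waiter.wait(timeout=300)    # nova's vif_plugging_timeout default
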
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.347 257641 DEBUG nova.virt.libvirt.vif [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-513044691',display_name='tempest-ServerRescueTestJSON-server-513044691',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-513044691',id=96,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-b20jon9y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:52Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=cc3dd9da-bb7d-4885-8555-d724b05677fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.347 257641 DEBUG nova.network.os_vif_util [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.348 257641 DEBUG nova.network.os_vif_util [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.348 257641 DEBUG os_vif [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.349 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.349 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.349 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.353 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.353 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5be36804-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.354 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5be36804-6e, col_values=(('external_ids', {'iface-id': '5be36804-6ed4-419f-9b92-e268a97799f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:26:00', 'vm-uuid': 'cc3dd9da-bb7d-4885-8555-d724b05677fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.355 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 NetworkManager[49039]: <info>  [1764404098.3565] manager: (tap5be36804-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.358 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.363 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.363 257641 INFO os_vif [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e')#033[00m
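
The AddBridgeCommand / AddPortCommand / DbSetCommand entries are ovsdbapp transactions issued by os-vif's ovs plugin: an idempotent bridge add (hence "Transaction caused no change"), then a port add plus the external_ids that let OVN bind the interface to the neutron port. The same three commands through ovsdbapp's public API look roughly like this (the socket path is the usual default and an assumption here):

    # Sketch of the ovsdb transaction sequence logged above, via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap5be36804-6e', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5be36804-6e',
            ('external_ids', {
                'iface-id': '5be36804-6ed4-419f-9b92-e268a97799f5',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:a3:26:00',
                'vm-uuid': 'cc3dd9da-bb7d-4885-8555-d724b05677fd'})))
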
Nov 29 03:14:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:14:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295448258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
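
On the monitor side the same request appears as a mon_command from client.openstack; the JSON envelope in the audit line is exactly what librados accepts. A short equivalent via the python binding (conf path and client name taken from the log):

    # Sketch: issue the audited {"prefix": "mon dump"} command with
    # librados instead of the ceph CLI.
    import json

    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
        if ret == 0:
            print([m['name'] for m in json.loads(outbuf)['mons']])
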
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.413 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.415 257641 DEBUG nova.virt.libvirt.vif [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.415 257641 DEBUG nova.network.os_vif_util [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.416 257641 DEBUG nova.network.os_vif_util [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.420 257641 DEBUG nova.objects.instance [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.427 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.428 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.428 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No VIF found with MAC fa:16:3e:a3:26:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.429 257641 INFO nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Using config drive#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.462 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
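
The "rbd image ... does not exist" line is rbd_utils probing for the config-drive image before creating it: opening a missing image raises ImageNotFound, which the wrapper logs and swallows. Roughly, with the rbd binding (pool and image name copied from the log, connection details assumed):

    # Sketch of the existence probe behind the rbd_utils log line above.
    import rados
    import rbd

    IMG = 'cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx, IMG):
                    print('image exists')
            except rbd.ImageNotFound:
                print('rbd image %s does not exist' % IMG)
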
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.474 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <uuid>886e1d81-4445-45c7-8c0a-4838eb595ab1</uuid>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <name>instance-0000005f</name>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestOtherB-server-431627369</nova:name>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:14:57</nova:creationTime>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <nova:port uuid="c4a22b7d-1070-4870-a1f6-21f50729504a">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="serial">886e1d81-4445-45c7-8c0a-4838eb595ab1</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="uuid">886e1d81-4445-45c7-8c0a-4838eb595ab1</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/886e1d81-4445-45c7-8c0a-4838eb595ab1_disk">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:26:59:1a"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <target dev="tapc4a22b7d-10"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/console.log" append="off"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:14:58 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:14:58 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:14:58 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:14:58 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
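
This second guest (instance-0000005f) gets an XML identical in shape to the first; only the identity fields differ: uuid, name, owner, RBD image names, MAC, and tap device. Once the domain is defined, the <nova:instance> block can be read back through libvirt's metadata API, e.g.:

    # Sketch: read the <nova:instance> metadata back from a defined
    # domain with libvirt-python. Assumes a local libvirt connection.
    import libvirt

    NOVA_URI = 'http://openstack.org/xmlns/libvirt/nova/1.1'

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000005f')
    print(dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_URI))
    conn.close()
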
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.475 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Preparing to wait for external event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.476 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.476 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.476 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.477 257641 DEBUG nova.virt.libvirt.vif [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:14:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.477 257641 DEBUG nova.network.os_vif_util [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.478 257641 DEBUG nova.network.os_vif_util [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.479 257641 DEBUG os_vif [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.480 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.481 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.483 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.484 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a22b7d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.484 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4a22b7d-10, col_values=(('external_ids', {'iface-id': 'c4a22b7d-1070-4870-a1f6-21f50729504a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:59:1a', 'vm-uuid': '886e1d81-4445-45c7-8c0a-4838eb595ab1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.485 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 NetworkManager[49039]: <info>  [1764404098.4867] manager: (tapc4a22b7d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.499 257641 INFO os_vif [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10')#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.504 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404083.5040112, 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.504 257641 INFO nova.compute.manager [-] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.535 257641 DEBUG nova.compute.manager [None req-cbe7a1fa-9e65-4439-8e5e-95aa1483aadc - - - - - -] [instance: 53d2b424-4fdc-47e3-afa1-849bfc2c0b5a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.559 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.560 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.560 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:26:59:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.560 257641 INFO nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Using config drive#033[00m
Nov 29 03:14:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:14:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:14:58.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:14:58 np0005539550 nova_compute[257631]: 2025-11-29 08:14:58.583 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]: {
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:    "0": [
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:        {
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "devices": [
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "/dev/loop3"
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            ],
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "lv_name": "ceph_lv0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "lv_size": "7511998464",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "name": "ceph_lv0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "tags": {
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.cluster_name": "ceph",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.crush_device_class": "",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.encrypted": "0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.osd_id": "0",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.type": "block",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:                "ceph.vdo": "0"
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            },
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "type": "block",
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:            "vg_name": "ceph_vg0"
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:        }
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]:    ]
Nov 29 03:14:58 np0005539550 wizardly_mcclintock[316607]: }
Nov 29 03:14:58 np0005539550 systemd[1]: libpod-2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e.scope: Deactivated successfully.
Nov 29 03:14:58 np0005539550 podman[316721]: 2025-11-29 08:14:58.761200179 +0000 UTC m=+0.024947625 container died 2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:14:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8a4921f93e24ec028fa8faf9da3d3af17f422e7c74c6eed2d1f71dcc7840a796-merged.mount: Deactivated successfully.
Nov 29 03:14:58 np0005539550 podman[316721]: 2025-11-29 08:14:58.82104517 +0000 UTC m=+0.084792596 container remove 2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mcclintock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:14:58 np0005539550 systemd[1]: libpod-conmon-2c482e90483f832168cafc503519ff4bf27603054303f7408bc461789144855e.scope: Deactivated successfully.
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.226 257641 INFO nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Creating config drive at /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/disk.config#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.236 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2xb3yld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.274 257641 INFO nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Creating config drive at /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.279 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5ilpznj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 3.5 MiB/s wr, 120 op/s
Nov 29 03:14:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:14:59
Nov 29 03:14:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:14:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:14:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'images']
Nov 29 03:14:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.382 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2xb3yld" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.411 257641 DEBUG nova.storage.rbd_utils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.416 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/disk.config 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.444 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv5ilpznj" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.482 257641 DEBUG nova.storage.rbd_utils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.490 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:14:59 np0005539550 podman[316902]: 2025-11-29 08:14:59.491966527 +0000 UTC m=+0.047059078 container create 93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ganguly, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:14:59 np0005539550 systemd[1]: Started libpod-conmon-93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6.scope.
Nov 29 03:14:59 np0005539550 podman[316902]: 2025-11-29 08:14:59.471425974 +0000 UTC m=+0.026518555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:14:59 np0005539550 podman[316902]: 2025-11-29 08:14:59.589220299 +0000 UTC m=+0.144312870 container init 93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ganguly, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:14:59 np0005539550 podman[316902]: 2025-11-29 08:14:59.597451238 +0000 UTC m=+0.152543789 container start 93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ganguly, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:14:59 np0005539550 podman[316902]: 2025-11-29 08:14:59.601091511 +0000 UTC m=+0.156184212 container attach 93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ganguly, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:14:59 np0005539550 elated_ganguly[316954]: 167 167
Nov 29 03:14:59 np0005539550 systemd[1]: libpod-93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6.scope: Deactivated successfully.
Nov 29 03:14:59 np0005539550 podman[316902]: 2025-11-29 08:14:59.607103814 +0000 UTC m=+0.162196355 container died 93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:14:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d89837d636aef4eaa653eec80d690ce0fb04aef9487a6864b3cc8e22e38fa3d2-merged.mount: Deactivated successfully.
Nov 29 03:14:59 np0005539550 podman[316902]: 2025-11-29 08:14:59.650488827 +0000 UTC m=+0.205581378 container remove 93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ganguly, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.656 257641 DEBUG oslo_concurrency.processutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/disk.config 886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.656 257641 INFO nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Deleting local config drive /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/disk.config because it was imported into RBD.#033[00m
Nov 29 03:14:59 np0005539550 systemd[1]: libpod-conmon-93475a4cef288961465e63bebd78d90769f53c0204bcce5aff5f88bb5e9d35d6.scope: Deactivated successfully.
Nov 29 03:14:59 np0005539550 NetworkManager[49039]: <info>  [1764404099.7320] manager: (tapc4a22b7d-10): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Nov 29 03:14:59 np0005539550 kernel: tapc4a22b7d-10: entered promiscuous mode
Nov 29 03:14:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:59Z|00369|binding|INFO|Claiming lport c4a22b7d-1070-4870-a1f6-21f50729504a for this chassis.
Nov 29 03:14:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:59Z|00370|binding|INFO|c4a22b7d-1070-4870-a1f6-21f50729504a: Claiming fa:16:3e:26:59:1a 10.100.0.8
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.737 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.753 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:59:1a 10.100.0.8'], port_security=['fa:16:3e:26:59:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.754 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4a22b7d-1070-4870-a1f6-21f50729504a in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.756 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
Nov 29 03:14:59 np0005539550 systemd-udevd[317010]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.769 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[857dcf06-9b99-4f9e-8b94-fe50642c64ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.770 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b704d3a-d1 in ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:14:59 np0005539550 systemd-machined[216673]: New machine qemu-44-instance-0000005f.
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.773 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b704d3a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.773 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7c2fc0e6-7869-49b6-a1ec-719cc29dc602]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.777 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e75aa1ce-e881-4ed4-9d28-bf56deaac501]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 NetworkManager[49039]: <info>  [1764404099.7793] device (tapc4a22b7d-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:14:59 np0005539550 NetworkManager[49039]: <info>  [1764404099.7803] device (tapc4a22b7d-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:14:59 np0005539550 systemd[1]: Started Virtual Machine qemu-44-instance-0000005f.
Nov 29 03:14:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:14:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:14:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:14:59.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.781 257641 DEBUG nova.network.neutron [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Updated VIF entry in instance network info cache for port 5be36804-6ed4-419f-9b92-e268a97799f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.782 257641 DEBUG nova.network.neutron [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Updating instance_info_cache with network_info: [{"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.788 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b6304c34-01c4-4bc7-beea-fe29efb84d99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.811 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[07000243-40fd-4fc5-8435-ae253b733cf8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.822 257641 DEBUG oslo_concurrency.lockutils [req-7c539444-263f-45de-91c9-d0772460bf54 req-45cde6d1-b077-4eeb-8f11-784eb6307f19 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.829 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:59Z|00371|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a ovn-installed in OVS
Nov 29 03:14:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:14:59Z|00372|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a up in Southbound
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.841 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.846 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5508b026-3bc4-4b20-935b-afdb568ea23a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 NetworkManager[49039]: <info>  [1764404099.8532] manager: (tap2b704d3a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/165)
Nov 29 03:14:59 np0005539550 systemd-udevd[317016]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.851 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8a81be0f-2424-4a6c-9655-878e6eb4aae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.864 257641 DEBUG nova.network.neutron [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updated VIF entry in instance network info cache for port c4a22b7d-1070-4870-a1f6-21f50729504a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.864 257641 DEBUG nova.network.neutron [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.883 257641 DEBUG oslo_concurrency.lockutils [req-52c2155e-79df-42cf-853e-515c2bc9ccd1 req-e0511100-e0ae-4747-abc0-e850d81ac5e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.894 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[88b6baaf-2cc5-4248-a4aa-321884984cb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.897 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[763f7129-8142-4057-92b3-fdad8b5830c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 podman[317014]: 2025-11-29 08:14:59.811699455 +0000 UTC m=+0.024583566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:14:59 np0005539550 NetworkManager[49039]: <info>  [1764404099.9223] device (tap2b704d3a-d0): carrier: link connected
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.928 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4cfb6ada-d7ea-4c24-9fda-2f7f32e201a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.943 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[78555ace-da24-439c-bda0-3cb0b216c33d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714758, 'reachable_time': 33487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317060, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.958 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5be910d5-86f6-4d1f-918b-be9db366edb0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d799'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714758, 'tstamp': 714758}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317061, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:14:59 np0005539550 podman[317014]: 2025-11-29 08:14:59.972013621 +0000 UTC m=+0.184897712 container create 4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gould, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.972 257641 DEBUG oslo_concurrency.processutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:14:59 np0005539550 nova_compute[257631]: 2025-11-29 08:14:59.972 257641 INFO nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Deleting local config drive /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config because it was imported into RBD.#033[00m
Nov 29 03:14:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:14:59.974 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a695780-4269-4338-aa25-b265cc4ff31e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714758, 'reachable_time': 33487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317062, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.012 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a49b6097-83f5-4e4b-aab1-252ee1dce8e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:00 np0005539550 kernel: tap5be36804-6e: entered promiscuous mode
Nov 29 03:15:00 np0005539550 NetworkManager[49039]: <info>  [1764404100.0248] manager: (tap5be36804-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/166)
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.026 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:00 np0005539550 systemd-udevd[317040]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:00Z|00373|binding|INFO|Claiming lport 5be36804-6ed4-419f-9b92-e268a97799f5 for this chassis.
Nov 29 03:15:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:00Z|00374|binding|INFO|5be36804-6ed4-419f-9b92-e268a97799f5: Claiming fa:16:3e:a3:26:00 10.100.0.2
Nov 29 03:15:00 np0005539550 NetworkManager[49039]: <info>  [1764404100.0500] device (tap5be36804-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:00 np0005539550 NetworkManager[49039]: <info>  [1764404100.0513] device (tap5be36804-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.063 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:26:00 10.100.0.2'], port_security=['fa:16:3e:a3:26:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'cc3dd9da-bb7d-4885-8555-d724b05677fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5be36804-6ed4-419f-9b92-e268a97799f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.088 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cbbe32d7-0811-4883-aabb-eba98b27e87c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.090 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.090 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.090 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.091 257641 DEBUG nova.compute.manager [req-3b307f04-b59f-4124-80c5-e74b343163bc req-7049c7a1-9ad3-4ef0-bc74-205859f62639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.092 257641 DEBUG oslo_concurrency.lockutils [req-3b307f04-b59f-4124-80c5-e74b343163bc req-7049c7a1-9ad3-4ef0-bc74-205859f62639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.092 257641 DEBUG oslo_concurrency.lockutils [req-3b307f04-b59f-4124-80c5-e74b343163bc req-7049c7a1-9ad3-4ef0-bc74-205859f62639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.092 257641 DEBUG oslo_concurrency.lockutils [req-3b307f04-b59f-4124-80c5-e74b343163bc req-7049c7a1-9ad3-4ef0-bc74-205859f62639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:00 np0005539550 NetworkManager[49039]: <info>  [1764404100.0930] manager: (tap2b704d3a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.092 257641 DEBUG nova.compute.manager [req-3b307f04-b59f-4124-80c5-e74b343163bc req-7049c7a1-9ad3-4ef0-bc74-205859f62639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Processing event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.093 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:00 np0005539550 systemd[1]: Started libpod-conmon-4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b.scope.
Nov 29 03:15:00 np0005539550 systemd-machined[216673]: New machine qemu-45-instance-00000060.
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.122 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:00 np0005539550 kernel: tap2b704d3a-d0: entered promiscuous mode
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.126 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.127 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.128 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:00Z|00375|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.133 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.134 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
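The "Unable to access ... .pid.haproxy" line is the agent probing for an existing proxy pid file before spawning a new one; ENOENT is the normal first-run case, not a failure. The helper behaves roughly like this sketch (names and the converter default are hypothetical):

    def get_value_from_file(path, converter=str):
        # A missing file is expected when no proxy has run yet for this
        # network; log the error and return None rather than raising.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except OSError as err:
            print('Unable to access %s; Error: %s' % (path, err))
            return None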
Nov 29 03:15:00 np0005539550 systemd[1]: Started Virtual Machine qemu-45-instance-00000060.
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.134 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ce43d6a1-a871-47ac-8c40-b6aaa4714f88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.135 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
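The block above is the complete haproxy configuration the metadata driver renders for network 2b704d3a-...: one listener on 169.254.169.254:80, forwarding to the agent's unix socket and tagging requests with X-OVN-Network-ID. A rendered file like this can be syntax-checked independently of the spawn; a small sketch:

    import subprocess

    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           '2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf')
    # 'haproxy -c -f <file>' only parses and validates the configuration;
    # returncode 0 means the file the driver wrote is well-formed.
    res = subprocess.run(['haproxy', '-c', '-f', CFG],
                         capture_output=True, text=True)
    print(res.returncode, (res.stderr or res.stdout).strip())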
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:00.136 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'env', 'PROCESS_TAG=haproxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
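The rootwrap invocation launches haproxy inside the ovnmeta- namespace, so the 169.254.169.254 bind never touches the host network stack. One way to confirm the listener afterwards, assuming curl is installed on the host (this check is not part of the agent):

    import subprocess

    NETNS = 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06'
    # Any HTTP response (even a 404) proves haproxy is bound inside the
    # namespace; 'connection refused' would mean the spawn failed.
    out = subprocess.run(
        ['sudo', 'ip', 'netns', 'exec', NETNS,
         'curl', '-si', 'http://169.254.169.254/'],
        capture_output=True, text=True)
    print(out.stdout.splitlines()[0] if out.stdout else out.stderr.strip())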
Nov 29 03:15:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:15:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1e06ca8c3584731c21d13dbadc7dcdb0ff6202b962257f24a2982e03d0a96a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1e06ca8c3584731c21d13dbadc7dcdb0ff6202b962257f24a2982e03d0a96a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1e06ca8c3584731c21d13dbadc7dcdb0ff6202b962257f24a2982e03d0a96a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b1e06ca8c3584731c21d13dbadc7dcdb0ff6202b962257f24a2982e03d0a96a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:00Z|00376|binding|INFO|Setting lport 5be36804-6ed4-419f-9b92-e268a97799f5 ovn-installed in OVS
Nov 29 03:15:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:00Z|00377|binding|INFO|Setting lport 5be36804-6ed4-419f-9b92-e268a97799f5 up in Southbound
Nov 29 03:15:00 np0005539550 nova_compute[257631]: 2025-11-29 08:15:00.155 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:00 np0005539550 podman[317014]: 2025-11-29 08:15:00.16749349 +0000 UTC m=+0.380377601 container init 4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:15:00 np0005539550 podman[317014]: 2025-11-29 08:15:00.180125992 +0000 UTC m=+0.393010083 container start 4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:15:00 np0005539550 podman[317014]: 2025-11-29 08:15:00.186314309 +0000 UTC m=+0.399198400 container attach 4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:15:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:01.150 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.154 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:01.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
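The radosgw "beast" lines are its access log; the anonymous HEAD / probes arriving every couple of seconds from 192.168.122.100/.102 look like load-balancer health checks. The fixed field layout parses with a short regex, for example:

    import re

    BEAST = re.compile(r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
                       r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
                       r'(?P<status>\d+) (?P<bytes>\d+)')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:15:01.145 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('request'), m.group('status'))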
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.198 257641 DEBUG nova.compute.manager [req-a9fb81ce-ab04-49b4-9dd6-8fabd1409fad req-7ab00a21-1002-4381-8adf-18b56a2b9c21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.199 257641 DEBUG oslo_concurrency.lockutils [req-a9fb81ce-ab04-49b4-9dd6-8fabd1409fad req-7ab00a21-1002-4381-8adf-18b56a2b9c21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.199 257641 DEBUG oslo_concurrency.lockutils [req-a9fb81ce-ab04-49b4-9dd6-8fabd1409fad req-7ab00a21-1002-4381-8adf-18b56a2b9c21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.199 257641 DEBUG oslo_concurrency.lockutils [req-a9fb81ce-ab04-49b4-9dd6-8fabd1409fad req-7ab00a21-1002-4381-8adf-18b56a2b9c21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.199 257641 DEBUG nova.compute.manager [req-a9fb81ce-ab04-49b4-9dd6-8fabd1409fad req-7ab00a21-1002-4381-8adf-18b56a2b9c21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Processing event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:15:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 70 KiB/s rd, 3.6 MiB/s wr, 107 op/s
Nov 29 03:15:01 np0005539550 podman[317155]: 2025-11-29 08:15:01.320260366 +0000 UTC m=+0.078998099 container create 4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:15:01 np0005539550 podman[317155]: 2025-11-29 08:15:01.268912391 +0000 UTC m=+0.027650144 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:15:01 np0005539550 systemd[1]: Started libpod-conmon-4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82.scope.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.386 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404101.385934, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.387 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Started (Lifecycle Event)
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.390 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.396 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:15:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.403 257641 INFO nova.virt.libvirt.driver [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance spawned successfully.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.403 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:15:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb5d8f111b4211045ba2427551fccb9730cd39bd7e7cdc5e6e558049db0144f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.422 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.430 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:01 np0005539550 podman[317155]: 2025-11-29 08:15:01.432645803 +0000 UTC m=+0.191383536 container init 4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.435 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.435 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.436 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.436 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.437 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.437 257641 DEBUG nova.virt.libvirt.driver [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 podman[317155]: 2025-11-29 08:15:01.440643727 +0000 UTC m=+0.199381460 container start 4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:15:01 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[317194]: [NOTICE]   (317224) : New worker (317228) forked
Nov 29 03:15:01 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[317194]: [NOTICE]   (317224) : Loading success.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.487 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.487 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404101.387381, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.487 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Paused (Lifecycle Event)
Nov 29 03:15:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:01.520 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5be36804-6ed4-419f-9b92-e268a97799f5 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis
Nov 29 03:15:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:01.521 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 03:15:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:01.522 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a903738f-3f09-46c0-a11a-338e48813adf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:01.523 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.524 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.541 257641 INFO nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Took 9.86 seconds to spawn the instance on the hypervisor.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.542 257641 DEBUG nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.543 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404101.3946059, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.543 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Resumed (Lifecycle Event)
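The Started -> Paused -> Resumed burst is the normal libvirt spawn sequence (the guest is created paused, then resumed), and each event triggers a power-state sync that is skipped while task_state is still 'spawning'. The guard visible in the log reduces to something like this sketch (the Instance class here is a hypothetical stand-in for Nova's object):

    class Instance:
        def __init__(self, uuid, task_state, power_state):
            self.uuid = uuid
            self.task_state = task_state
            self.power_state = power_state

    def sync_power_state(instance, vm_power_state):
        # While a task is in flight, defer rather than racing the
        # operation that is still mutating the instance.
        if instance.task_state is not None:
            return 'pending task (%s). Skip.' % instance.task_state
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            return 'synchronized'
        return 'in sync'

    print(sync_power_state(Instance('886e1d81', 'spawning', 0), 1))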
Nov 29 03:15:01 np0005539550 podman[317212]: 2025-11-29 08:15:01.56546142 +0000 UTC m=+0.148514027 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.574 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.581 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.607 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.627 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.639 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404101.6059422, cc3dd9da-bb7d-4885-8555-d724b05677fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.640 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] VM Started (Lifecycle Event)
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.643 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.646 257641 INFO nova.compute.manager [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Took 10.97 seconds to build instance.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.652 257641 INFO nova.virt.libvirt.driver [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance spawned successfully.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.652 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.668 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.677 257641 DEBUG oslo_concurrency.lockutils [None req-44461991-0cd9-47cd-810d-321670a9ed58 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.678 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.682 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.682 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.683 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.683 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.684 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.685 257641 DEBUG nova.virt.libvirt.driver [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.697 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.697 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404101.6060398, cc3dd9da-bb7d-4885-8555-d724b05677fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.698 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] VM Paused (Lifecycle Event)
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.719 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.722 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404101.6168365, cc3dd9da-bb7d-4885-8555-d724b05677fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.723 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] VM Resumed (Lifecycle Event)
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.759 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.762 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.770 257641 INFO nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Took 9.33 seconds to spawn the instance on the hypervisor.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.771 257641 DEBUG nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:01.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.801 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.858 257641 INFO nova.compute.manager [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Took 10.47 seconds to build instance.
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.877 257641 DEBUG oslo_concurrency.lockutils [None req-e3672981-aaeb-4b95-b474-6619d79c463c f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:01 np0005539550 nova_compute[257631]: 2025-11-29 08:15:01.942 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:01 np0005539550 amazing_gould[317085]: {
Nov 29 03:15:01 np0005539550 amazing_gould[317085]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:15:01 np0005539550 amazing_gould[317085]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:15:01 np0005539550 amazing_gould[317085]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:15:01 np0005539550 amazing_gould[317085]:        "osd_id": 0,
Nov 29 03:15:01 np0005539550 amazing_gould[317085]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:15:01 np0005539550 amazing_gould[317085]:        "type": "bluestore"
Nov 29 03:15:01 np0005539550 amazing_gould[317085]:    }
Nov 29 03:15:01 np0005539550 amazing_gould[317085]: }
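The JSON printed by the short-lived amazing_gould container is a ceph-volume-style device report (one entry per OSD, keyed by osd_uuid), which cephadm then persists via the config-key commands a few lines below. Consuming it is straightforward; a minimal sketch reusing the values from the log:

    import json

    report = json.loads('''{
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "type": "bluestore"
        }
    }''')
    for osd_uuid, osd in report.items():
        print('osd.%d (%s) on %s' % (osd['osd_id'], osd['type'], osd['device']))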
Nov 29 03:15:02 np0005539550 systemd[1]: libpod-4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b.scope: Deactivated successfully.
Nov 29 03:15:02 np0005539550 systemd[1]: libpod-4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b.scope: Consumed 1.790s CPU time.
Nov 29 03:15:02 np0005539550 podman[317014]: 2025-11-29 08:15:01.999812911 +0000 UTC m=+2.212697002 container died 4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gould, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:15:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0b1e06ca8c3584731c21d13dbadc7dcdb0ff6202b962257f24a2982e03d0a96a-merged.mount: Deactivated successfully.
Nov 29 03:15:02 np0005539550 podman[317014]: 2025-11-29 08:15:02.101109176 +0000 UTC m=+2.313993267 container remove 4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:15:02 np0005539550 systemd[1]: libpod-conmon-4f6355120021d004bc2315e3705fdc65cd57b1d1ebcc563fae7ec24c9e65a01b.scope: Deactivated successfully.
Nov 29 03:15:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:15:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:15:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 91bf4ab0-972f-42b7-a74c-76c6d49b3d51 does not exist
Nov 29 03:15:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bd51b4d7-294c-448c-8413-fd3ec39ff8b8 does not exist
Nov 29 03:15:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 52ac989c-f7b5-40bf-aa12-73a3589e8232 does not exist
Nov 29 03:15:02 np0005539550 nova_compute[257631]: 2025-11-29 08:15:02.223 257641 DEBUG nova.compute.manager [req-0e0acd85-17a7-4e1e-b062-9b5b1d36ad03 req-d3e5f9ec-b6e4-4b4a-9c4b-67b06402455b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:02 np0005539550 nova_compute[257631]: 2025-11-29 08:15:02.223 257641 DEBUG oslo_concurrency.lockutils [req-0e0acd85-17a7-4e1e-b062-9b5b1d36ad03 req-d3e5f9ec-b6e4-4b4a-9c4b-67b06402455b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:02 np0005539550 nova_compute[257631]: 2025-11-29 08:15:02.224 257641 DEBUG oslo_concurrency.lockutils [req-0e0acd85-17a7-4e1e-b062-9b5b1d36ad03 req-d3e5f9ec-b6e4-4b4a-9c4b-67b06402455b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:02 np0005539550 nova_compute[257631]: 2025-11-29 08:15:02.224 257641 DEBUG oslo_concurrency.lockutils [req-0e0acd85-17a7-4e1e-b062-9b5b1d36ad03 req-d3e5f9ec-b6e4-4b4a-9c4b-67b06402455b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:02 np0005539550 nova_compute[257631]: 2025-11-29 08:15:02.224 257641 DEBUG nova.compute.manager [req-0e0acd85-17a7-4e1e-b062-9b5b1d36ad03 req-d3e5f9ec-b6e4-4b4a-9c4b-67b06402455b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:15:02 np0005539550 nova_compute[257631]: 2025-11-29 08:15:02.224 257641 WARNING nova.compute.manager [req-0e0acd85-17a7-4e1e-b062-9b5b1d36ad03 req-d3e5f9ec-b6e4-4b4a-9c4b-67b06402455b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state active and task_state None.
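This WARNING is benign: the instance finished building at 08:15:01, so when the same network-vif-plugged event arrives again at 08:15:02 there is no longer a waiter registered, and the event is dropped with a log line instead of being dispatched. In terms of the pop sketch above, the second delivery simply pops None:

    waiters = {}  # the spawn waiter was already consumed at 08:15:00

    def external_instance_event(instance_uuid, event_name):
        # 'No waiting events found dispatching ...' then the warning.
        if waiters.get(instance_uuid, {}).pop(event_name, None) is None:
            print('Received unexpected event %s for instance %s'
                  % (event_name, instance_uuid))

    external_instance_event(
        '886e1d81-4445-45c7-8c0a-4838eb595ab1',
        'network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a')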
Nov 29 03:15:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:15:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 180 KiB/s rd, 3.6 MiB/s wr, 92 op/s
Nov 29 03:15:03 np0005539550 nova_compute[257631]: 2025-11-29 08:15:03.336 257641 DEBUG nova.compute.manager [req-d61ca869-2390-4e1a-a9f4-86317654ad03 req-8d60365c-f1be-41d5-bf6b-011e60fa1d76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:03 np0005539550 nova_compute[257631]: 2025-11-29 08:15:03.336 257641 DEBUG oslo_concurrency.lockutils [req-d61ca869-2390-4e1a-a9f4-86317654ad03 req-8d60365c-f1be-41d5-bf6b-011e60fa1d76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:03 np0005539550 nova_compute[257631]: 2025-11-29 08:15:03.337 257641 DEBUG oslo_concurrency.lockutils [req-d61ca869-2390-4e1a-a9f4-86317654ad03 req-8d60365c-f1be-41d5-bf6b-011e60fa1d76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:03 np0005539550 nova_compute[257631]: 2025-11-29 08:15:03.337 257641 DEBUG oslo_concurrency.lockutils [req-d61ca869-2390-4e1a-a9f4-86317654ad03 req-8d60365c-f1be-41d5-bf6b-011e60fa1d76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:03 np0005539550 nova_compute[257631]: 2025-11-29 08:15:03.337 257641 DEBUG nova.compute.manager [req-d61ca869-2390-4e1a-a9f4-86317654ad03 req-8d60365c-f1be-41d5-bf6b-011e60fa1d76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] No waiting events found dispatching network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:15:03 np0005539550 nova_compute[257631]: 2025-11-29 08:15:03.337 257641 WARNING nova.compute.manager [req-d61ca869-2390-4e1a-a9f4-86317654ad03 req-8d60365c-f1be-41d5-bf6b-011e60fa1d76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received unexpected event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 for instance with vm_state active and task_state None.
Nov 29 03:15:03 np0005539550 nova_compute[257631]: 2025-11-29 08:15:03.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.596960) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103597066, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1204, "num_deletes": 252, "total_data_size": 1921134, "memory_usage": 1957008, "flush_reason": "Manual Compaction"}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103609744, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 1857339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40010, "largest_seqno": 41213, "table_properties": {"data_size": 1851536, "index_size": 3070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13174, "raw_average_key_size": 20, "raw_value_size": 1839648, "raw_average_value_size": 2856, "num_data_blocks": 135, "num_entries": 644, "num_filter_entries": 644, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404011, "oldest_key_time": 1764404011, "file_creation_time": 1764404103, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 12832 microseconds, and 5717 cpu microseconds.
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.609809) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 1857339 bytes OK
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.609833) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.612092) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.612115) EVENT_LOG_v1 {"time_micros": 1764404103612111, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.612131) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1915634, prev total WAL file size 1915634, number of live WAL files 2.
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.613160) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(1813KB)], [86(10109KB)]
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103613316, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12209673, "oldest_snapshot_seqno": -1}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7413 keys, 10269437 bytes, temperature: kUnknown
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103769068, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10269437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10221334, "index_size": 28471, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 192235, "raw_average_key_size": 25, "raw_value_size": 10089926, "raw_average_value_size": 1361, "num_data_blocks": 1118, "num_entries": 7413, "num_filter_entries": 7413, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404103, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.769534) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10269437 bytes
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.771946) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.3 rd, 65.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.9 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(12.1) write-amplify(5.5) OK, records in: 7936, records dropped: 523 output_compression: NoCompression
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.771992) EVENT_LOG_v1 {"time_micros": 1764404103771972, "job": 50, "event": "compaction_finished", "compaction_time_micros": 155906, "compaction_time_cpu_micros": 30082, "output_level": 6, "num_output_files": 1, "total_output_size": 10269437, "num_input_records": 7936, "num_output_records": 7413, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103772417, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404103774078, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.612925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.774148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.774151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.774153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.774154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:15:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:15:03.774156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
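Since every rocksdb EVENT_LOG_v1 record above is single-line JSON after a fixed marker, flush and compaction activity can be extracted mechanically from a captured journal. A minimal sketch, assuming the excerpt has been saved to a plain-text file (the filename is hypothetical):

import json

MARKER = "EVENT_LOG_v1 "

def iter_rocksdb_events(path):
    """Yield the parsed JSON payload of each rocksdb EVENT_LOG_v1 line."""
    with open(path) as fh:
        for line in fh:
            idx = line.find(MARKER)
            if idx != -1:
                yield json.loads(line[idx + len(MARKER):])

for ev in iter_rocksdb_events("journal-excerpt.txt"):   # hypothetical capture file
    if ev.get("event") in ("flush_started", "compaction_finished"):
        print(ev["job"], ev["event"],
              ev.get("total_data_size") or ev.get("total_output_size"))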
Nov 29 03:15:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:03.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
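The beast access lines follow a fixed layout (client, user, timestamp, request, status, bytes, latency), so per-request latency can be pulled out with a single regular expression. A sketch against the line above; the pattern is inferred from these samples, not from radosgw documentation:

import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+).*latency=(?P<lat>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:15:03.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000025s')
m = BEAST_RE.search(line)
print(m.group("ip"), m.group("status"), float(m.group("lat")))  # 192.168.122.102 200 0.001000025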
Nov 29 03:15:04 np0005539550 nova_compute[257631]: 2025-11-29 08:15:04.220 257641 INFO nova.compute.manager [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Rescuing
Nov 29 03:15:04 np0005539550 nova_compute[257631]: 2025-11-29 08:15:04.221 257641 DEBUG oslo_concurrency.lockutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:04 np0005539550 nova_compute[257631]: 2025-11-29 08:15:04.221 257641 DEBUG oslo_concurrency.lockutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquired lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:15:04 np0005539550 nova_compute[257631]: 2025-11-29 08:15:04.222 257641 DEBUG nova.network.neutron [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:15:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:05.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 157 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Nov 29 03:15:05 np0005539550 NetworkManager[49039]: <info>  [1764404105.7560] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Nov 29 03:15:05 np0005539550 NetworkManager[49039]: <info>  [1764404105.7569] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Nov 29 03:15:05 np0005539550 nova_compute[257631]: 2025-11-29 08:15:05.755 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:05.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:05 np0005539550 nova_compute[257631]: 2025-11-29 08:15:05.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:05Z|00378|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:15:05 np0005539550 nova_compute[257631]: 2025-11-29 08:15:05.819 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.313 257641 DEBUG nova.network.neutron [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Updating instance_info_cache with network_info: [{"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.333 257641 DEBUG oslo_concurrency.lockutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Releasing lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
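The instance_info_cache written above is a JSON list of VIFs, each carrying its network, subnets and fixed IPs, so an instance's addresses can be read straight out of the cached structure. A minimal sketch over the fields used (literal trimmed to those fields, values from the cache entry above; in nova itself this is an object model rather than raw JSON):

network_info = [{
    "id": "5be36804-6ed4-419f-9b92-e268a97799f5",
    "network": {"subnets": [{"ips": [{"address": "10.100.0.2"}]}]},
}]  # trimmed copy of the cached entry logged above

ips = [
    (vif["id"], ip["address"])
    for vif in network_info
    for subnet in vif["network"]["subnets"]
    for ip in subnet["ips"]
]
print(ips)  # [('5be36804-6ed4-419f-9b92-e268a97799f5', '10.100.0.2')]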
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.471 257641 DEBUG nova.compute.manager [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-changed-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.472 257641 DEBUG nova.compute.manager [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Refreshing instance network info cache due to event network-changed-c4a22b7d-1070-4870-a1f6-21f50729504a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.473 257641 DEBUG oslo_concurrency.lockutils [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.474 257641 DEBUG oslo_concurrency.lockutils [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.474 257641 DEBUG nova.network.neutron [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Refreshing network info cache for port c4a22b7d-1070-4870-a1f6-21f50729504a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:15:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.887 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:15:06 np0005539550 nova_compute[257631]: 2025-11-29 08:15:06.950 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:07.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.738 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.761 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.761 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid cc3dd9da-bb7d-4885-8555-d724b05677fd _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.761 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.762 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.762 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.763 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.763 257641 INFO nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.763 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:07.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:07 np0005539550 nova_compute[257631]: 2025-11-29 08:15:07.813 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
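The Acquiring/acquired/released triples above, with their waited and held timings, are emitted by oslo.concurrency's lock wrapper around the per-instance sync function named in the messages. A minimal sketch of the pattern that produces them (body elided; the messages appear only with DEBUG logging enabled):

from oslo_concurrency import lockutils

@lockutils.synchronized("cc3dd9da-bb7d-4885-8555-d724b05677fd")
def query_driver_power_state_and_sync():
    # The wrapper logs 'Acquiring lock "..."', 'Lock "..." acquired :: waited Ns'
    # and 'Lock "..." "released" :: held Ns' around this body.
    pass

query_driver_power_state_and_sync()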
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:15:08 np0005539550 nova_compute[257631]: 2025-11-29 08:15:08.348 257641 DEBUG nova.network.neutron [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updated VIF entry in instance network info cache for port c4a22b7d-1070-4870-a1f6-21f50729504a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:15:08 np0005539550 nova_compute[257631]: 2025-11-29 08:15:08.349 257641 DEBUG nova.network.neutron [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:15:08 np0005539550 nova_compute[257631]: 2025-11-29 08:15:08.368 257641 DEBUG oslo_concurrency.lockutils [req-e11f899b-a7f7-4871-9ad7-9babe2732ae4 req-13872932-21bb-4b19-adf5-3e911eef7c12 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:15:08 np0005539550 nova_compute[257631]: 2025-11-29 08:15:08.490 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:15:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:15:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:09.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 233 KiB/s wr, 150 op/s
Nov 29 03:15:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:09.525 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:15:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:09.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:11.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Nov 29 03:15:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:11.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:11 np0005539550 nova_compute[257631]: 2025-11-29 08:15:11.952 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.6 KiB/s wr, 148 op/s
Nov 29 03:15:13 np0005539550 nova_compute[257631]: 2025-11-29 08:15:13.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:13.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 214 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.5 KiB/s wr, 141 op/s
Nov 29 03:15:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:15.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:16 np0005539550 nova_compute[257631]: 2025-11-29 08:15:16.929 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:15:16 np0005539550 nova_compute[257631]: 2025-11-29 08:15:16.953 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 320 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.9 MiB/s wr, 224 op/s
Nov 29 03:15:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:17.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:18Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:59:1a 10.100.0.8
Nov 29 03:15:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:18Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:59:1a 10.100.0.8
Nov 29 03:15:18 np0005539550 nova_compute[257631]: 2025-11-29 08:15:18.496 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:18.947 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:18.948 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:18.948 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:19.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 354 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 280 KiB/s rd, 7.6 MiB/s wr, 115 op/s
Nov 29 03:15:19 np0005539550 nova_compute[257631]: 2025-11-29 08:15:19.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:19.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008382116019914386 of space, bias 1.0, pg target 2.5146348059743158 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:15:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Nov 29 03:15:20 np0005539550 podman[317404]: 2025-11-29 08:15:20.320124605 +0000 UTC m=+0.058222631 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:15:20 np0005539550 podman[317403]: 2025-11-29 08:15:20.323204293 +0000 UTC m=+0.061376201 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:15:20 np0005539550 kernel: tap5be36804-6e (unregistering): left promiscuous mode
Nov 29 03:15:20 np0005539550 NetworkManager[49039]: <info>  [1764404120.5691] device (tap5be36804-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:20Z|00379|binding|INFO|Releasing lport 5be36804-6ed4-419f-9b92-e268a97799f5 from this chassis (sb_readonly=0)
Nov 29 03:15:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:20Z|00380|binding|INFO|Setting lport 5be36804-6ed4-419f-9b92-e268a97799f5 down in Southbound
Nov 29 03:15:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:20Z|00381|binding|INFO|Removing iface tap5be36804-6e ovn-installed in OVS
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.580 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:20.586 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:26:00 10.100.0.2'], port_security=['fa:16:3e:a3:26:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'cc3dd9da-bb7d-4885-8555-d724b05677fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5be36804-6ed4-419f-9b92-e268a97799f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:15:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:20.587 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5be36804-6ed4-419f-9b92-e268a97799f5 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis
Nov 29 03:15:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:20.588 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 03:15:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:20.590 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0a27de-f081-4e05-857b-dc526adbe78b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:20 np0005539550 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 29 03:15:20 np0005539550 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000060.scope: Consumed 14.744s CPU time.
Nov 29 03:15:20 np0005539550 systemd-machined[216673]: Machine qemu-45-instance-00000060 terminated.
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.818 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.823 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.948 257641 INFO nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance shutdown successfully after 14 seconds.
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.954 257641 INFO nova.virt.libvirt.driver [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance destroyed successfully.
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.955 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'numa_topology' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.976 257641 INFO nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Attempting rescue
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.977 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.982 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Nov 29 03:15:20 np0005539550 nova_compute[257631]: 2025-11-29 08:15:20.982 257641 INFO nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Creating image(s)
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.014 257641 DEBUG nova.storage.rbd_utils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.019 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'trusted_certs' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.067 257641 DEBUG nova.storage.rbd_utils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.094 257641 DEBUG nova.storage.rbd_utils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.098 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.182 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.183 257641 DEBUG oslo_concurrency.lockutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.184 257641 DEBUG oslo_concurrency.lockutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.184 257641 DEBUG oslo_concurrency.lockutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
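The qemu-img probe above runs under oslo_concurrency.prlimit, capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) so a malformed image cannot hang or balloon the compute service. A sketch of issuing the same call through oslo.concurrency, with the arguments copied from the logged command:

from oslo_concurrency import processutils

QEMU_IMG_LIMITS = processutils.ProcessLimits(
    cpu_time=30,                # rendered as --cpu=30
    address_space=1073741824,   # rendered as --as=1073741824 (1 GiB)
)

out, _err = processutils.execute(
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info",
    "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488",
    "--force-share", "--output=json",
    prlimit=QEMU_IMG_LIMITS,
)
print(out)  # JSON description of the cached base image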
Nov 29 03:15:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:21.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.213 257641 DEBUG nova.storage.rbd_utils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.217 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 371 MiB data, 950 MiB used, 20 GiB / 21 GiB avail; 632 KiB/s rd, 7.8 MiB/s wr, 191 op/s
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.717 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.718 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'migration_context' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.756 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.757 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Start _get_guest_xml network_info=[{"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1742590140-network", "vif_mac": "fa:16:3e:a3:26:00"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '4873db8c-b414-4e95-acd9-77caabebe722', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.758 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'resources' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.780 257641 WARNING nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.787 257641 DEBUG nova.virt.libvirt.host [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.788 257641 DEBUG nova.virt.libvirt.host [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:15:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.792 257641 DEBUG nova.virt.libvirt.host [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.792 257641 DEBUG nova.virt.libvirt.host [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
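
The two probes above first look for a cgroup v1 CPU controller (missing) and then find one on the cgroup v2 unified hierarchy. An illustrative check, not nova's actual code: on cgroup v2 the controllers available on the host are listed in /sys/fs/cgroup/cgroup.controllers.

    # Illustrative: detect a cgroup v2 'cpu' controller on the host.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            return 'cpu' in Path(path).read_text().split()
        except OSError:
            return False  # no unified hierarchy mounted
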
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.793 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.794 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.794 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.794 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.795 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.795 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.795 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.795 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.796 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.796 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.796 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.796 257641 DEBUG nova.virt.hardware [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
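
With flavor and image limits and preferences all 0:0:0, any factorization of the vCPU count into sockets*cores*threads within the 65536 caps is admissible, which is why a single vCPU yields exactly one topology, 1:1:1. An illustrative enumeration under that assumption (not nova's actual implementation):

    # Illustrative: enumerate candidate CPU topologies for `vcpus`.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as in the log
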
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.797 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'vcpu_model' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:21.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.827 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:21 np0005539550 nova_compute[257631]: 2025-11-29 08:15:21.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701559049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.325 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.327 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.584 257641 DEBUG nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-unplugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.585 257641 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.585 257641 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.585 257641 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.585 257641 DEBUG nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] No waiting events found dispatching network-vif-unplugged-5be36804-6ed4-419f-9b92-e268a97799f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.585 257641 WARNING nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received unexpected event network-vif-unplugged-5be36804-6ed4-419f-9b92-e268a97799f5 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.586 257641 DEBUG nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.586 257641 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.586 257641 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.586 257641 DEBUG oslo_concurrency.lockutils [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.586 257641 DEBUG nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] No waiting events found dispatching network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.586 257641 WARNING nova.compute.manager [req-d28bc20b-c6f0-44e6-9e13-735cae59508b req-b72e6869-589e-4d41-bd4b-d8fae599b743 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received unexpected event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:15:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/89292974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.775 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:22 np0005539550 nova_compute[257631]: 2025-11-29 08:15:22.776 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:15:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557239899' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:15:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:23.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.215 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.216 257641 DEBUG nova.virt.libvirt.vif [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-513044691',display_name='tempest-ServerRescueTestJSON-server-513044691',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-513044691',id=96,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-b20jon9y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:15:01Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=cc3dd9da-bb7d-4885-8555-d724b05677fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1742590140-network", "vif_mac": "fa:16:3e:a3:26:00"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.217 257641 DEBUG nova.network.os_vif_util [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1742590140-network", "vif_mac": "fa:16:3e:a3:26:00"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.218 257641 DEBUG nova.network.os_vif_util [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.219 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'pci_devices' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.236 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <uuid>cc3dd9da-bb7d-4885-8555-d724b05677fd</uuid>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <name>instance-00000060</name>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerRescueTestJSON-server-513044691</nova:name>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:15:21</nova:creationTime>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:user uuid="f4c89c9953854ecf96a802dc6055db9d">tempest-ServerRescueTestJSON-640276387-project-member</nova:user>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:project uuid="4da7fb77734a4135a6f8b5b70bed7a2f">tempest-ServerRescueTestJSON-640276387</nova:project>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <nova:port uuid="5be36804-6ed4-419f-9b92-e268a97799f5">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <entry name="serial">cc3dd9da-bb7d-4885-8555-d724b05677fd</entry>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <entry name="uuid">cc3dd9da-bb7d-4885-8555-d724b05677fd</entry>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/cc3dd9da-bb7d-4885-8555-d724b05677fd_disk">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config.rescue">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:a3:26:00"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <target dev="tap5be36804-6e"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/console.log" append="off"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:15:23 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:15:23 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:15:23 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:15:23 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
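
The generated domain XML above wires all three disks (rescue root, original root, rescue config drive) to the same Ceph cluster over RBD. A standard-library sketch for pulling those sources back out of such an XML string, assuming domain_xml holds the <domain>...</domain> text logged by _get_guest_xml:

    # Sketch only: list the RBD-backed disks from a libvirt domain XML.
    import xml.etree.ElementTree as ET

    root = ET.fromstring(domain_xml)
    for disk in root.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        if src is not None and tgt is not None and src.get('protocol') == 'rbd':
            mons = [h.get('name') for h in src.findall('host')]
            print(tgt.get('dev'), src.get('name'), mons)
    # e.g. vda vms/cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.rescue [...]
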
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.243 257641 INFO nova.virt.libvirt.driver [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance destroyed successfully.#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.310 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.310 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.310 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.311 257641 DEBUG nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] No VIF found with MAC fa:16:3e:a3:26:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.311 257641 INFO nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Using config drive#033[00m
Nov 29 03:15:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 381 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 8.1 MiB/s wr, 230 op/s
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.342 257641 DEBUG nova.storage.rbd_utils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.360 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'ec2_ids' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.396 257641 DEBUG nova.objects.instance [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'keypairs' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.498 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:23.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.959 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.959 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.959 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:15:23 np0005539550 nova_compute[257631]: 2025-11-29 08:15:23.960 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.177 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.307 257641 INFO nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Creating config drive at /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config.rescue#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.312 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53_fs59b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106794716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.432 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.442 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53_fs59b" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.469 257641 DEBUG nova.storage.rbd_utils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] rbd image cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.473 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config.rescue cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.566 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.567 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.571 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.571 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.572 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.744 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.745 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4237MB free_disk=20.807247161865234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.745 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.745 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.881 257641 DEBUG oslo_concurrency.processutils [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config.rescue cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.881 257641 INFO nova.virt.libvirt.driver [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Deleting local config drive /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config.rescue because it was imported into RBD.
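The two nova_compute lines above show the rescue config drive being imported into the Ceph "vms" pool as a format-2 RBD image and the local copy removed once the import succeeds. A minimal Python sketch of that sequence, using oslo.concurrency's processutils (pool name, paths and client ID are taken from the log; this is an illustration, not Nova's actual code path):

    import os
    from oslo_concurrency import processutils

    src = '/var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd/disk.config.rescue'
    image = 'cc3dd9da-bb7d-4885-8555-d724b05677fd_disk.config.rescue'

    # Import the flat file as a format-2 RBD image, as logged above.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', src, image,
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')

    # The local config drive is only needed until the import succeeds.
    os.unlink(src)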
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.931 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 886e1d81-4445-45c7-8c0a-4838eb595ab1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.932 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance cc3dd9da-bb7d-4885-8555-d724b05677fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.932 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.932 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:15:24 np0005539550 kernel: tap5be36804-6e: entered promiscuous mode
Nov 29 03:15:24 np0005539550 NetworkManager[49039]: <info>  [1764404124.9396] manager: (tap5be36804-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Nov 29 03:15:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:24Z|00382|binding|INFO|Claiming lport 5be36804-6ed4-419f-9b92-e268a97799f5 for this chassis.
Nov 29 03:15:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:24Z|00383|binding|INFO|5be36804-6ed4-419f-9b92-e268a97799f5: Claiming fa:16:3e:a3:26:00 10.100.0.2
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.940 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:24.946 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:26:00 10.100.0.2'], port_security=['fa:16:3e:a3:26:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'cc3dd9da-bb7d-4885-8555-d724b05677fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5be36804-6ed4-419f-9b92-e268a97799f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:15:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:24.948 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5be36804-6ed4-419f-9b92-e268a97799f5 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 bound to our chassis
Nov 29 03:15:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:24.949 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 03:15:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:24.951 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[16e55a32-7b3c-457b-9d12-9e54b7fac698]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:24Z|00384|binding|INFO|Setting lport 5be36804-6ed4-419f-9b92-e268a97799f5 ovn-installed in OVS
Nov 29 03:15:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:24Z|00385|binding|INFO|Setting lport 5be36804-6ed4-419f-9b92-e268a97799f5 up in Southbound
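ovn-controller has now claimed the logical port, installed it in OVS and marked it up in the southbound database. A quick way to verify the binding from the host is to query the Port_Binding table; a sketch wrapping the ovn-sbctl CLI (assuming ovn-sbctl can reach the southbound DB from this shell):

    import subprocess

    lport = '5be36804-6ed4-419f-9b92-e268a97799f5'
    # 'find' matches rows by column value; after the claim above the output
    # should show chassis set to this host and up=true.
    result = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding', 'logical_port=%s' % lport],
        capture_output=True, text=True, check=True)
    print(result.stdout)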
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.965 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 03:15:24 np0005539550 nova_compute[257631]: 2025-11-29 08:15:24.967 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:24 np0005539550 systemd-udevd[317719]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:15:24 np0005539550 systemd-machined[216673]: New machine qemu-46-instance-00000060.
Nov 29 03:15:24 np0005539550 NetworkManager[49039]: <info>  [1764404124.9816] device (tap5be36804-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:15:24 np0005539550 NetworkManager[49039]: <info>  [1764404124.9829] device (tap5be36804-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:15:24 np0005539550 systemd[1]: Started Virtual Machine qemu-46-instance-00000060.
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.012 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.013 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
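The inventory above is what the scheduler actually works from: placement derives schedulable capacity per resource class from total, reserved and allocation_ratio. A worked example with the logged values:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1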
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.045 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.086 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 03:15:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:25.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.220 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 381 MiB data, 953 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 8.1 MiB/s wr, 219 op/s
Nov 29 03:15:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644122743' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.700 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
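The resource tracker sizes the Ceph-backed disk pool by shelling out to ceph df, as the Running cmd / CMD returned pair above shows. A sketch of the same query with the JSON output parsed (field names follow the ceph df JSON schema, which may shift slightly between Ceph releases):

    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'ceph', 'df', '--format=json', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    # Should mirror the ~21 GiB total / ~20 GiB avail in the pgmap lines.
    print('%.1f GiB free of %.1f GiB' % (stats['total_avail_bytes'] / gib,
                                         stats['total_bytes'] / gib))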
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.709 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.736 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.771 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:15:25 np0005539550 nova_compute[257631]: 2025-11-29 08:15:25.772 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:25.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.610 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for cc3dd9da-bb7d-4885-8555-d724b05677fd due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.611 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404126.610085, cc3dd9da-bb7d-4885-8555-d724b05677fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.611 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] VM Resumed (Lifecycle Event)
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.620 257641 DEBUG nova.compute.manager [None req-73b892c3-b0eb-4529-8404-8b75de4c982d f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.635 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.639 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.677 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] During sync_power_state the instance has a pending task (rescuing). Skip.
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.678 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404126.6200914, cc3dd9da-bb7d-4885-8555-d724b05677fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.678 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] VM Started (Lifecycle Event)
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.737 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.747 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
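Both lifecycle events resolve the same way: the database and the hypervisor agree on power state, and a pending task_state blocks any corrective action. A minimal sketch of that comparison using nova's power_state enum, where 1 is RUNNING, matching the two Synchronizing lines:

    from nova.compute import power_state

    db_power_state = 1   # what the instance record holds
    vm_power_state = 1   # what the hypervisor reports

    assert db_power_state == vm_power_state == power_state.RUNNING
    # With task_state 'rescuing' still set, sync_power_state defers to the
    # in-flight task instead of acting, hence the "Skip." INFO line above.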
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.772 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.773 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.774 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.774 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:15:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:26 np0005539550 nova_compute[257631]: 2025-11-29 08:15:26.958 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.128 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.129 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.130 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.130 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:27.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 372 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 9.6 MiB/s wr, 361 op/s
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.398 257641 DEBUG nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.399 257641 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.400 257641 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.400 257641 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.401 257641 DEBUG nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] No waiting events found dispatching network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.402 257641 WARNING nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received unexpected event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 for instance with vm_state rescued and task_state None.
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.402 257641 DEBUG nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.403 257641 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.403 257641 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.404 257641 DEBUG oslo_concurrency.lockutils [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.405 257641 DEBUG nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] No waiting events found dispatching network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:15:27 np0005539550 nova_compute[257631]: 2025-11-29 08:15:27.405 257641 WARNING nova.compute.manager [req-5e32af8e-e2e7-4521-8df9-17ee1302baef req-05c62ba7-2b3b-46ba-be17-b1ea4e506f6e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received unexpected event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 for instance with vm_state rescued and task_state None.
Nov 29 03:15:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:27.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:28 np0005539550 nova_compute[257631]: 2025-11-29 08:15:28.501 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:29.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 357 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 3.8 MiB/s wr, 286 op/s
Nov 29 03:15:29 np0005539550 nova_compute[257631]: 2025-11-29 08:15:29.711 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:15:29 np0005539550 nova_compute[257631]: 2025-11-29 08:15:29.758 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:15:29 np0005539550 nova_compute[257631]: 2025-11-29 08:15:29.759 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:15:29 np0005539550 nova_compute[257631]: 2025-11-29 08:15:29.760 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:29 np0005539550 nova_compute[257631]: 2025-11-29 08:15:29.760 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:29 np0005539550 nova_compute[257631]: 2025-11-29 08:15:29.761 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:29 np0005539550 nova_compute[257631]: 2025-11-29 08:15:29.761 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:29.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:30 np0005539550 nova_compute[257631]: 2025-11-29 08:15:30.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:30 np0005539550 nova_compute[257631]: 2025-11-29 08:15:30.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:15:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:31.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 326 MiB data, 930 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.1 MiB/s wr, 355 op/s
Nov 29 03:15:31 np0005539550 nova_compute[257631]: 2025-11-29 08:15:31.648 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:31.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:31 np0005539550 nova_compute[257631]: 2025-11-29 08:15:31.959 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:32 np0005539550 podman[317814]: 2025-11-29 08:15:32.362018832 +0000 UTC m=+0.093121729 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
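The podman line is the periodic healthcheck for the ovn_controller container; health_status=healthy means the configured test (/openstack/healthcheck, per config_data) exited 0. The same check can be driven by hand, sketched here via subprocess:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # and returns 0 when the container is healthy.
    rc = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_controller']).returncode
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)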
Nov 29 03:15:32 np0005539550 nova_compute[257631]: 2025-11-29 08:15:32.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:15:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:33.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 326 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 284 op/s
Nov 29 03:15:33 np0005539550 nova_compute[257631]: 2025-11-29 08:15:33.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:33.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:35.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 326 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.6 MiB/s wr, 245 op/s
Nov 29 03:15:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:35.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:36 np0005539550 nova_compute[257631]: 2025-11-29 08:15:36.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:15:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:37.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:15:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 326 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 1.6 MiB/s wr, 245 op/s
Nov 29 03:15:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:37.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:38 np0005539550 nova_compute[257631]: 2025-11-29 08:15:38.507 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:15:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2539740427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:15:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:15:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2539740427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:15:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:39.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 326 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.7 KiB/s wr, 104 op/s
Nov 29 03:15:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:39.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:41.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 361 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 166 op/s
Nov 29 03:15:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:41.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:41 np0005539550 nova_compute[257631]: 2025-11-29 08:15:41.964 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.010 257641 DEBUG oslo_concurrency.lockutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.010 257641 DEBUG oslo_concurrency.lockutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.035 257641 DEBUG nova.objects.instance [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.107 257641 DEBUG oslo_concurrency.lockutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.499 257641 DEBUG oslo_concurrency.lockutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.499 257641 DEBUG oslo_concurrency.lockutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.500 257641 INFO nova.compute.manager [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Attaching volume eb8b2675-ae0e-4130-a3f5-06bcfedbadc6 to /dev/vdb
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.717 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.879 257641 DEBUG os_brick.utils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.881 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.893 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.894 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[b7a7ba33-1315-4220-aa1d-97c6bfeedacb]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.895 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.902 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.902 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcdfcb6-fdb6-466c-bad1-4a942f26db6c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.904 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.914 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.914 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[1aeb6306-5e08-4a76-ba7e-d1cdc66616e1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.915 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[8e00cfc1-6e26-4edf-8e1f-05c51fc2f88d]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.916 257641 DEBUG oslo_concurrency.processutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.942 257641 DEBUG oslo_concurrency.processutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.944 257641 DEBUG os_brick.initiator.connectors.lightos [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.944 257641 DEBUG os_brick.initiator.connectors.lightos [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.945 257641 DEBUG os_brick.initiator.connectors.lightos [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.945 257641 DEBUG os_brick.utils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
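The os-brick trace above brackets a single call: get_connector_properties probes multipathd, the iSCSI initiator name and the NVMe host identity, then returns the connector dict that Nova hands to Cinder when creating the volume attachment. A sketch of the call with the arguments shown in the "==>" line (needs root via the given rootwrap helper to run for real):

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # e.g. props['initiator'] -> 'iqn.1994-05.com.redhat:...'
    #      props['nqn']       -> 'nqn.2014-08.org.nvmexpress:uuid:...'
    print(props)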
Nov 29 03:15:42 np0005539550 nova_compute[257631]: 2025-11-29 08:15:42.946 257641 DEBUG nova.virt.block_device [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating existing volume attachment record: 8fdcea1d-3748-4c96-b67d-221e641e9f2c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:15:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:43.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 399 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 785 KiB/s rd, 2.9 MiB/s wr, 94 op/s
Nov 29 03:15:43 np0005539550 nova_compute[257631]: 2025-11-29 08:15:43.510 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:15:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:43.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.139 257641 DEBUG nova.objects.instance [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.166 257641 DEBUG nova.virt.libvirt.driver [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Attempting to attach volume eb8b2675-ae0e-4130-a3f5-06bcfedbadc6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
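
The warning above fires because discard/unmap was requested while the disk sits on the virtio-blk bus, which _check_discard_for_attach_volume does not treat as trim-capable; the usual remedy is to expose disks via virtio-scsi. A hedged sketch using the standard hw_disk_bus/hw_scsi_model Glance image properties (the cloud and image names are hypothetical, and it assumes openstacksdk passes unknown attributes through as image properties):

    import openstack  # openstacksdk

    conn = openstack.connect(cloud='default')        # cloud name is an assumption
    image = conn.image.find_image('cirros-trim')     # hypothetical image name
    # With these properties set, Nova places disks on virtio-scsi,
    # which honours discard and silences this warning.
    conn.image.update_image(image, hw_disk_bus='scsi', hw_scsi_model='virtio-scsi')
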
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.169 257641 DEBUG nova.virt.libvirt.guest [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:15:44 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-eb8b2675-ae0e-4130-a3f5-06bcfedbadc6">
Nov 29 03:15:44 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:15:44 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:15:44 np0005539550 nova_compute[257631]:  <serial>eb8b2675-ae0e-4130-a3f5-06bcfedbadc6</serial>
Nov 29 03:15:44 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:15:44 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
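
Guest.attach_device, referenced above, boils down to a libvirt attachDeviceFlags call with that XML. A minimal standalone sketch against the same domain (instance-0000005f, named later in this log); the XML is a trimmed copy of what was logged, with the auth element and extra monitor hosts omitted for brevity:

    import libvirt

    disk_xml = """\
    <disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-eb8b2675-ae0e-4130-a3f5-06bcfedbadc6">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000005f')
    # Attach to the running domain and persist the change in its config.
    dom.attachDeviceFlags(
        disk_xml,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
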
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.359 257641 DEBUG nova.virt.libvirt.driver [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.359 257641 DEBUG nova.virt.libvirt.driver [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.359 257641 DEBUG nova.virt.libvirt.driver [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.360 257641 DEBUG nova.virt.libvirt.driver [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:26:59:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:15:44 np0005539550 nova_compute[257631]: 2025-11-29 08:15:44.671 257641 DEBUG oslo_concurrency.lockutils [None req-dae2eaf7-9388-459b-88ee-50f3653688a6 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:45.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 399 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 628 KiB/s rd, 2.9 MiB/s wr, 89 op/s
Nov 29 03:15:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:45.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:46 np0005539550 nova_compute[257631]: 2025-11-29 08:15:46.966 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:47.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 466 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 662 KiB/s rd, 5.3 MiB/s wr, 143 op/s
Nov 29 03:15:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:47.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:48 np0005539550 nova_compute[257631]: 2025-11-29 08:15:48.512 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:49.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 466 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 664 KiB/s rd, 5.3 MiB/s wr, 146 op/s
Nov 29 03:15:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:49.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.061 257641 DEBUG nova.compute.manager [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.204 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.204 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.291 257641 DEBUG nova.objects.instance [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_requests' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.318 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.319 257641 INFO nova.compute.claims [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.319 257641 DEBUG nova.objects.instance [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.348 257641 DEBUG nova.objects.instance [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.423 257641 INFO nova.compute.resource_tracker [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating resource usage from migration c1d4f0fd-49dd-403e-a4db-2961eb03ddd6#033[00m
Nov 29 03:15:50 np0005539550 nova_compute[257631]: 2025-11-29 08:15:50.602 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:15:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:15:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3115542486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.062 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
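
The resource tracker shells out to ceph df, as shown above, to size its RBD-backed disk inventory. A small reproduction of that call; the stats field names are assumed from recent Ceph JSON output:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']   # field names assumed from recent Ceph
    avail_gib = stats['total_avail_bytes'] / 1024 ** 3
    total_gib = stats['total_bytes'] / 1024 ** 3
    print(f'{avail_gib:.1f} GiB free of {total_gib:.1f} GiB')
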
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.068 257641 DEBUG nova.compute.provider_tree [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.104 257641 DEBUG nova.scheduler.client.report [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
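
Placement treats usable capacity for each inventory class as (total - reserved) * allocation_ratio, so the figures logged above work out to 32 schedulable VCPUs, 7168 MB of RAM, and 17.1 GB of disk. A worked check (plain arithmetic, not Nova code):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 2))   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
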
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.137 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.138 257641 INFO nova.compute.manager [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Migrating#033[00m
Nov 29 03:15:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:51.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.230 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.230 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.230 257641 DEBUG nova.network.neutron [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:15:51 np0005539550 podman[317947]: 2025-11-29 08:15:51.318812312 +0000 UTC m=+0.054762243 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:15:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 467 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 224 op/s
Nov 29 03:15:51 np0005539550 podman[317946]: 2025-11-29 08:15:51.332751636 +0000 UTC m=+0.071415786 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
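
The health_status=healthy records above come from podman periodically executing each container's configured check ('/openstack/healthcheck'). The same check can be triggered on demand; the container name is taken from the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # and exits 0 when healthy, non-zero otherwise.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)
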
Nov 29 03:15:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:51.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:51 np0005539550 nova_compute[257631]: 2025-11-29 08:15:51.968 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:53.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 467 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 170 op/s
Nov 29 03:15:53 np0005539550 nova_compute[257631]: 2025-11-29 08:15:53.514 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:53.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:15:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:55.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:15:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 467 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.4 MiB/s wr, 152 op/s
Nov 29 03:15:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:55.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:55 np0005539550 nova_compute[257631]: 2025-11-29 08:15:55.932 257641 DEBUG nova.network.neutron [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
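
The instance_info_cache entry above is plain JSON, so the fixed and floating addresses can be pulled out mechanically. A sketch over a trimmed copy of the logged structure:

    # Trimmed copy of the cache entry logged above.
    network_info = [{
        'id': 'c4a22b7d-1070-4870-a1f6-21f50729504a',
        'network': {'subnets': [{'ips': [{
            'address': '10.100.0.8',
            'floating_ips': [{'address': '192.168.122.246'}],
        }]}]},
    }]

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
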
Nov 29 03:15:55 np0005539550 nova_compute[257631]: 2025-11-29 08:15:55.956 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:15:56 np0005539550 nova_compute[257631]: 2025-11-29 08:15:56.065 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:15:56 np0005539550 nova_compute[257631]: 2025-11-29 08:15:56.069 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:15:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:15:56 np0005539550 nova_compute[257631]: 2025-11-29 08:15:56.970 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:57.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 504 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.8 MiB/s wr, 233 op/s
Nov 29 03:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:57.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:15:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:15:58 np0005539550 nova_compute[257631]: 2025-11-29 08:15:58.518 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:58 np0005539550 kernel: tapc4a22b7d-10 (unregistering): left promiscuous mode
Nov 29 03:15:58 np0005539550 NetworkManager[49039]: <info>  [1764404158.6076] device (tapc4a22b7d-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:15:58 np0005539550 nova_compute[257631]: 2025-11-29 08:15:58.617 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:58Z|00386|binding|INFO|Releasing lport c4a22b7d-1070-4870-a1f6-21f50729504a from this chassis (sb_readonly=0)
Nov 29 03:15:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:58Z|00387|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a down in Southbound
Nov 29 03:15:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:15:58Z|00388|binding|INFO|Removing iface tapc4a22b7d-10 ovn-installed in OVS
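
Once ovn-controller releases the lport as above, the Southbound Port_Binding row should show up=false with no chassis. A quick confirmation sketch; on a compute node ovn-sbctl may need an explicit --db=<southbound remote>, which is omitted here as an assumption:

    import subprocess

    out = subprocess.check_output([
        'ovn-sbctl', '--bare', '--columns=up,chassis',
        'find', 'Port_Binding',
        'logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a',
    ])
    print(out.decode())   # expect 'false' and an empty chassis after the release
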
Nov 29 03:15:58 np0005539550 nova_compute[257631]: 2025-11-29 08:15:58.622 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:58.636 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:59:1a 10.100.0.8'], port_security=['fa:16:3e:26:59:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:15:58 np0005539550 nova_compute[257631]: 2025-11-29 08:15:58.638 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:58.638 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4a22b7d-1070-4870-a1f6-21f50729504a in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:15:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:58.641 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:15:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:58.642 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[529598c2-fd95-4b92-85d0-43bb54c67d2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:58.643 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace which is not needed anymore#033[00m
Nov 29 03:15:58 np0005539550 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 29 03:15:58 np0005539550 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000005f.scope: Consumed 17.161s CPU time.
Nov 29 03:15:58 np0005539550 systemd-machined[216673]: Machine qemu-44-instance-0000005f terminated.
Nov 29 03:15:58 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[317194]: [NOTICE]   (317224) : haproxy version is 2.8.14-c23fe91
Nov 29 03:15:58 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[317194]: [NOTICE]   (317224) : path to executable is /usr/sbin/haproxy
Nov 29 03:15:58 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[317194]: [WARNING]  (317224) : Exiting Master process...
Nov 29 03:15:58 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[317194]: [ALERT]    (317224) : Current worker (317228) exited with code 143 (Terminated)
Nov 29 03:15:58 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[317194]: [WARNING]  (317224) : All workers exited. Exiting... (0)
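
The worker's exit code follows the common 128+N convention: 143 = 128 + 15, i.e. the process terminated on SIGTERM, which matches the "(Terminated)" annotation and the deliberate teardown of the metadata namespace above. As a quick check:

    import signal

    exit_code = 143
    # 128 + N convention: 143 - 128 = 15 = SIGTERM.
    assert exit_code - 128 == signal.SIGTERM == 15
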
Nov 29 03:15:58 np0005539550 systemd[1]: libpod-4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82.scope: Deactivated successfully.
Nov 29 03:15:58 np0005539550 podman[318063]: 2025-11-29 08:15:58.987378099 +0000 UTC m=+0.259073036 container died 4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:15:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0cb5d8f111b4211045ba2427551fccb9730cd39bd7e7cdc5e6e558049db0144f-merged.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82-userdata-shm.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.091 257641 INFO nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.097 257641 INFO nova.virt.libvirt.driver [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance destroyed successfully.#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.098 257641 DEBUG nova.virt.libvirt.vif [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:50Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:26:59:1a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.098 257641 DEBUG nova.network.os_vif_util [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:26:59:1a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.100 257641 DEBUG nova.network.os_vif_util [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.100 257641 DEBUG os_vif [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.102 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a22b7d-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
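
The DelPortCommand above is ovsdbapp's transactional equivalent of ovs-vsctl --if-exists del-port br-int tapc4a22b7d-10. Driven standalone it looks roughly like this sketch; the socket path and timeout are assumptions, and os-vif manages its own IDL connection internally:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')   # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # The same operation the transaction log shows: drop the tap port
    # from br-int if it is still present.
    api.del_port('tapc4a22b7d-10', bridge='br-int', if_exists=True).execute(
        check_error=True)
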
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.141 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.143 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.146 257641 INFO os_vif [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10')#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.154 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.155 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.155 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:15:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:15:59.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:15:59 np0005539550 podman[318063]: 2025-11-29 08:15:59.277566966 +0000 UTC m=+0.549261913 container cleanup 4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:15:59 np0005539550 systemd[1]: libpod-conmon-4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82.scope: Deactivated successfully.
Nov 29 03:15:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 513 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Nov 29 03:15:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:15:59
Nov 29 03:15:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:15:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:15:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data']
Nov 29 03:15:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
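
The balancer pass above prepared 0 of 10 candidate changes because the PGs are already evenly placed. Its state can be queried directly; this sketch assumes an admin keyring on the node and that the mgr command accepts the usual JSON format flag:

    import subprocess

    # Reports the active balancer mode and any pending optimize plans.
    print(subprocess.check_output(
        ['ceph', 'balancer', 'status', '-f', 'json']).decode())
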
Nov 29 03:15:59 np0005539550 podman[318105]: 2025-11-29 08:15:59.565820324 +0000 UTC m=+0.263978452 container remove 4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.571 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c09e8c-d2c7-4ed1-830b-b1852315a846]: (4, ('Sat Nov 29 08:15:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82)\n4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82\nSat Nov 29 08:15:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82)\n4615f3b873f1834d6aaad1ed72ad17e70ef926588e2be20823fb491b42cd6f82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.573 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2a23f0a9-1451-4c0b-8fd8-373e62607f09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.575 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.577 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539550 kernel: tap2b704d3a-d0: left promiscuous mode
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.594 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.598 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[da1d7132-bd62-44ea-b966-89fad036f3f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.614 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c365bfff-5f4c-4d09-bda7-957632cc5c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.616 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5f91c8ea-3057-4cef-8c07-9b70161d0fbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.634 257641 DEBUG nova.compute.manager [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.635 257641 DEBUG oslo_concurrency.lockutils [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.635 257641 DEBUG oslo_concurrency.lockutils [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.635 257641 DEBUG oslo_concurrency.lockutils [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.635 257641 DEBUG nova.compute.manager [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.636 257641 WARNING nova.compute.manager [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.636 257641 DEBUG nova.compute.manager [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.636 257641 DEBUG oslo_concurrency.lockutils [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.636 257641 DEBUG oslo_concurrency.lockutils [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.636 257641 DEBUG oslo_concurrency.lockutils [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.637 257641 DEBUG nova.compute.manager [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:15:59 np0005539550 nova_compute[257631]: 2025-11-29 08:15:59.637 257641 WARNING nova.compute.manager [req-1883b368-4e9c-4b83-9b5c-e5b0fca9eadb req-f60cdbd5-f4a6-47e1-b846-cfbcdf14d436 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.641 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaf27eb-5e40-4732-8fa6-45e2a7612286]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714750, 'reachable_time': 37667, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318120, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539550 systemd[1]: run-netns-ovnmeta\x2d2b704d3a\x2dd3e4\x2d47ce\x2d8a28\x2d10a6f4e6fd06.mount: Deactivated successfully.
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.646 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:15:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:15:59.646 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c0b5bd-22aa-49bb-8455-5429de203447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:15:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:15:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:15:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:15:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
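Each radosgw health probe shows up as the three-line life cycle above (request start, request done, beast access line). A small parsing sketch for the beast access-log format exactly as logged (the regex is an assumption derived only from the lines shown here):

import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
    r'latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:15:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m['client'], m['status'], m['latency'])  # 192.168.122.102 200 0.000000000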
Nov 29 03:16:00 np0005539550 nova_compute[257631]: 2025-11-29 08:16:00.534 257641 DEBUG nova.network.neutron [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Port c4a22b7d-1070-4870-a1f6-21f50729504a binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:16:00 np0005539550 nova_compute[257631]: 2025-11-29 08:16:00.697 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:00 np0005539550 nova_compute[257631]: 2025-11-29 08:16:00.698 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:00 np0005539550 nova_compute[257631]: 2025-11-29 08:16:00.698 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
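The Acquiring / "acquired :: waited" / '"released" :: held' triplets above are emitted by oslo_concurrency.lockutils around every critical section. A minimal usage sketch of the same decorator (requires oslo.concurrency; the lock name is copied from the log, any string works):

from oslo_concurrency import lockutils

@lockutils.synchronized('886e1d81-4445-45c7-8c0a-4838eb595ab1-events')
def _clear_events():
    # Runs with the named in-process lock held; lockutils logs the
    # acquire/"waited"/"released"/"held" DEBUG lines seen above.
    return {}

_clear_events()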
Nov 29 03:16:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:00.705 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:00 np0005539550 nova_compute[257631]: 2025-11-29 08:16:00.706 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:00.706 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:16:01 np0005539550 nova_compute[257631]: 2025-11-29 08:16:01.089 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:01 np0005539550 nova_compute[257631]: 2025-11-29 08:16:01.089 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:01 np0005539550 nova_compute[257631]: 2025-11-29 08:16:01.089 257641 DEBUG nova.network.neutron [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:01.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 465 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 3.4 MiB/s wr, 331 op/s
Nov 29 03:16:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:01.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:01 np0005539550 nova_compute[257631]: 2025-11-29 08:16:01.972 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:02 np0005539550 podman[318148]: 2025-11-29 08:16:02.837797114 +0000 UTC m=+0.108024497 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:16:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:03.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 445 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.8 MiB/s wr, 265 op/s
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.718 257641 DEBUG nova.network.neutron [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
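The instance_info_cache payload above is a list of VIF dicts, each carrying subnets with fixed and floating IPs. A short sketch walking that structure, trimmed to the fields shown in the log:

network_info = [{  # trimmed copy of the cache payload above
    'id': 'c4a22b7d-1070-4870-a1f6-21f50729504a',
    'network': {'subnets': [{'ips': [{
        'address': '10.100.0.8',
        'floating_ips': [{'address': '192.168.122.246'}],
    }]}]},
}]

for vif in network_info:
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            floats = [f['address'] for f in ip.get('floating_ips', [])]
            print(vif['id'], ip['address'], floats)
# -> c4a22b7d-1070-4870-a1f6-21f50729504a 10.100.0.8 ['192.168.122.246']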
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.737 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.829 257641 DEBUG os_brick.utils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.830 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.840 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.840 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[f66c59a3-27dc-4977-b81c-7704d905f0bf]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.842 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.850 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.850 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[613b0389-670f-4061-a4ea-1cd53eb939ea]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.852 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.862 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.862 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[ef11a898-7808-4a59-ac66-618c5dcc0afc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.864 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[29968b58-6d21-4eb5-834a-c8726548031b]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.865 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:03.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.892 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.896 257641 DEBUG os_brick.initiator.connectors.lightos [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.897 257641 DEBUG os_brick.initiator.connectors.lightos [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.897 257641 DEBUG os_brick.initiator.connectors.lightos [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:16:03 np0005539550 nova_compute[257631]: 2025-11-29 08:16:03.897 257641 DEBUG os_brick.utils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
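The "==> get_connector_properties: call" / "<== ... return" trace pair above brackets os-brick's connector-properties lookup; the probes in between (multipathd show status, initiatorname.iscsi, findmnt, nvme version) gather its return values. A hedged sketch of the same call, with keyword arguments copied from the logged invocation (needs os-brick installed and the rootwrap privileges it uses):

from os_brick.initiator import connector

props = connector.get_connector_properties(
    root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
    my_ip='192.168.122.100',
    multipath=True,
    enforce_multipath=True,
    host='compute-0.ctlplane.example.com',
)
# `props` is the dict from the "<== get_connector_properties: return" line:
# iSCSI initiator IQN, NVMe host NQN/ID, multipath flags, system uuid, ...
print(props.get('initiator'), props.get('nqn'))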
Nov 29 03:16:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:16:04 np0005539550 podman[318428]: 2025-11-29 08:16:04.0287386 +0000 UTC m=+0.036720614 container create b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:16:04 np0005539550 podman[318428]: 2025-11-29 08:16:04.012746364 +0000 UTC m=+0.020728388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:04 np0005539550 nova_compute[257631]: 2025-11-29 08:16:04.172 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:04 np0005539550 systemd[1]: Started libpod-conmon-b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96.scope.
Nov 29 03:16:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:16:04 np0005539550 podman[318428]: 2025-11-29 08:16:04.317567643 +0000 UTC m=+0.325549667 container init b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:16:04 np0005539550 podman[318428]: 2025-11-29 08:16:04.32649377 +0000 UTC m=+0.334475774 container start b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:16:04 np0005539550 systemd[1]: libpod-b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96.scope: Deactivated successfully.
Nov 29 03:16:04 np0005539550 zen_bassi[318445]: 167 167
Nov 29 03:16:04 np0005539550 conmon[318445]: conmon b452d25f8c16115acccf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96.scope/container/memory.events
Nov 29 03:16:04 np0005539550 podman[318428]: 2025-11-29 08:16:04.364535257 +0000 UTC m=+0.372517271 container attach b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:16:04 np0005539550 podman[318428]: 2025-11-29 08:16:04.365776928 +0000 UTC m=+0.373758932 container died b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3cbae9e9f5d41b24da9d9ccfe7870b0476b99cb5f8376151c227adb1f5ae3422-merged.mount: Deactivated successfully.
Nov 29 03:16:04 np0005539550 podman[318428]: 2025-11-29 08:16:04.412063575 +0000 UTC m=+0.420045579 container remove b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_bassi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:16:04 np0005539550 systemd[1]: libpod-conmon-b452d25f8c16115acccfd543b977529b2943f7f8c449517961f1ff7d975ddc96.scope: Deactivated successfully.
Nov 29 03:16:04 np0005539550 podman[318471]: 2025-11-29 08:16:04.555532472 +0000 UTC m=+0.021253411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:04 np0005539550 podman[318471]: 2025-11-29 08:16:04.835986143 +0000 UTC m=+0.301707072 container create 4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cray, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:04 np0005539550 systemd[1]: Started libpod-conmon-4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba.scope.
Nov 29 03:16:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715f4406e9e6dab9d3e57a4e99b7410071a2da461224c9c4b712106468b1200a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715f4406e9e6dab9d3e57a4e99b7410071a2da461224c9c4b712106468b1200a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715f4406e9e6dab9d3e57a4e99b7410071a2da461224c9c4b712106468b1200a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/715f4406e9e6dab9d3e57a4e99b7410071a2da461224c9c4b712106468b1200a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:05 np0005539550 podman[318471]: 2025-11-29 08:16:05.046774991 +0000 UTC m=+0.512495940 container init 4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:05 np0005539550 podman[318471]: 2025-11-29 08:16:05.055177085 +0000 UTC m=+0.520897994 container start 4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cray, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:16:05 np0005539550 podman[318471]: 2025-11-29 08:16:05.058574601 +0000 UTC m=+0.524295550 container attach 4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cray, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:16:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:05.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 449 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 278 op/s
Nov 29 03:16:05 np0005539550 nova_compute[257631]: 2025-11-29 08:16:05.415 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:16:05 np0005539550 nova_compute[257631]: 2025-11-29 08:16:05.417 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:16:05 np0005539550 nova_compute[257631]: 2025-11-29 08:16:05.417 257641 INFO nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Creating image(s)#033[00m
Nov 29 03:16:05 np0005539550 nova_compute[257631]: 2025-11-29 08:16:05.457 257641 DEBUG nova.storage.rbd_utils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(nova-resize) on rbd image(886e1d81-4445-45c7-8c0a-4838eb595ab1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
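The rbd_utils line above snapshots the instance disk as 'nova-resize' so the resize can be reverted. A rough equivalent using the librbd Python bindings; the 'vms' pool name and the 'openstack' client id are assumptions (typical for such deployments, and the volume attachment logged below authenticates as 'openstack'):

import rados
import rbd

# Assumption: pool 'vms', client id 'openstack', readable keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    with cluster.open_ioctx('vms') as ioctx:
        with rbd.Image(ioctx, '886e1d81-4445-45c7-8c0a-4838eb595ab1_disk') as image:
            image.create_snap('nova-resize')  # same snapshot name as the log
finally:
    cluster.shutdown()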
Nov 29 03:16:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:05.708 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
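The transaction above writes neutron:ovn-metadata-sb-cfg=31 into the agent's Chassis_Private record, acknowledging the SB_Global nb_cfg bump seen a few seconds earlier. A hedged sketch of the same write via ovsdbapp's generic db_set; the socket path is an assumption, and the record uuid and values are copied from the log line:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_southbound import impl_idl

# Assumed local southbound DB socket; adjust for the actual deployment.
idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                      'OVN_Southbound')
api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.db_set(
        'Chassis_Private',
        'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',  # record uuid from the log
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'})))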
Nov 29 03:16:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:05.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.159 257641 DEBUG nova.objects.instance [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:06 np0005539550 laughing_cray[318488]: [
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:    {
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "available": false,
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "ceph_device": false,
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "lsm_data": {},
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "lvs": [],
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "path": "/dev/sr0",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "rejected_reasons": [
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "Has a FileSystem",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "Insufficient space (<5GB)"
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        ],
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        "sys_api": {
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "actuators": null,
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "device_nodes": "sr0",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "devname": "sr0",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "human_readable_size": "482.00 KB",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "id_bus": "ata",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "model": "QEMU DVD-ROM",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "nr_requests": "2",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "parent": "/dev/sr0",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "partitions": {},
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "path": "/dev/sr0",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "removable": "1",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "rev": "2.5+",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "ro": "0",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "rotational": "1",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "sas_address": "",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "sas_device_handle": "",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "scheduler_mode": "mq-deadline",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "sectors": 0,
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "sectorsize": "2048",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "size": 493568.0,
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "support_discard": "2048",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "type": "disk",
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:            "vendor": "QEMU"
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:        }
Nov 29 03:16:06 np0005539550 laughing_cray[318488]:    }
Nov 29 03:16:06 np0005539550 laughing_cray[318488]: ]
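The JSON block above is a ceph-volume style device inventory emitted by the short-lived cephadm container; /dev/sr0 is rejected as an OSD candidate for the two reasons listed. A few lines to filter such a report, with the JSON trimmed to the fields used:

import json

report = '''
[{"available": false, "path": "/dev/sr0",
  "rejected_reasons": ["Has a FileSystem", "Insufficient space (<5GB)"]}]
'''

for dev in json.loads(report):
    if dev['available']:
        print(dev['path'], 'usable for an OSD')
    else:
        print(dev['path'], 'rejected:', ', '.join(dev['rejected_reasons']))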
Nov 29 03:16:06 np0005539550 systemd[1]: libpod-4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba.scope: Deactivated successfully.
Nov 29 03:16:06 np0005539550 systemd[1]: libpod-4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba.scope: Consumed 1.167s CPU time.
Nov 29 03:16:06 np0005539550 podman[318471]: 2025-11-29 08:16:06.219398241 +0000 UTC m=+1.685119160 container died 4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cray, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.276 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.277 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Ensure instance console log exists: /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.277 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.277 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.278 257641 DEBUG oslo_concurrency.lockutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.281 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Start _get_guest_xml network_info=[{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:26:59:1a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '0c268691-38c6-4db8-ae8c-00bc3286be2e', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eb8b2675-ae0e-4130-a3f5-06bcfedbadc6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eb8b2675-ae0e-4130-a3f5-06bcfedbadc6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'attached_at': '2025-11-29T08:16:05.000000', 'detached_at': '', 'volume_id': 'eb8b2675-ae0e-4130-a3f5-06bcfedbadc6', 'serial': 
'eb8b2675-ae0e-4130-a3f5-06bcfedbadc6'}, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': None, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.286 257641 WARNING nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.292 257641 DEBUG nova.virt.libvirt.host [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.293 257641 DEBUG nova.virt.libvirt.host [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-715f4406e9e6dab9d3e57a4e99b7410071a2da461224c9c4b712106468b1200a-merged.mount: Deactivated successfully.
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.301 257641 DEBUG nova.virt.libvirt.host [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.302 257641 DEBUG nova.virt.libvirt.host [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
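Nova's probe above looks for a CPU controller first in cgroups v1 (missing on this host), then in the unified v2 hierarchy (found). A rough equivalent of the v2 check, reading the controller list at the standard cgroup2 mount point (sketch, not Nova's code):

def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
    # cgroup.controllers lists the controllers available in the unified
    # hierarchy, whitespace-separated, e.g. "cpuset cpu io memory pids".
    try:
        with open(path) as f:
            return 'cpu' in f.read().split()
    except FileNotFoundError:
        return False  # no cgroup2 mount

print(has_cgroupsv2_cpu_controller())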
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.303 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.304 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.304 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.304 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.304 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.305 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.305 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.305 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.305 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.305 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.305 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.306 257641 DEBUG nova.virt.hardware [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.306 257641 DEBUG nova.objects.instance [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.321 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:16:06 np0005539550 podman[318471]: 2025-11-29 08:16:06.344439809 +0000 UTC m=+1.810160728 container remove 4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cray, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:16:06 np0005539550 systemd[1]: libpod-conmon-4b8f1e1825225c1a1a185ef980f8897dadc300c39e0bfc0f118035b5dadd8fba.scope: Deactivated successfully.
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3dc485b1-7287-48b6-942c-3c9d1ead0ad4 does not exist
Nov 29 03:16:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 85b9bd3b-f37e-4473-a3a9-45580428c0dd does not exist
Nov 29 03:16:06 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f46487e1-efe2-4951-b264-2fde97e2b1af does not exist
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267153500' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.799 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.844 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:16:06 np0005539550 nova_compute[257631]: 2025-11-29 08:16:06.974 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:07 np0005539550 podman[319879]: 2025-11-29 08:16:07.088949856 +0000 UTC m=+0.042815979 container create a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:16:07 np0005539550 systemd[1]: Started libpod-conmon-a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3.scope.
Nov 29 03:16:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:07 np0005539550 podman[319879]: 2025-11-29 08:16:07.068384173 +0000 UTC m=+0.022250316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:07 np0005539550 podman[319879]: 2025-11-29 08:16:07.16539527 +0000 UTC m=+0.119261403 container init a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:16:07 np0005539550 podman[319879]: 2025-11-29 08:16:07.171747371 +0000 UTC m=+0.125613514 container start a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:16:07 np0005539550 podman[319879]: 2025-11-29 08:16:07.175722922 +0000 UTC m=+0.129589045 container attach a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:07 np0005539550 sad_bell[319895]: 167 167
Nov 29 03:16:07 np0005539550 systemd[1]: libpod-a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3.scope: Deactivated successfully.
Nov 29 03:16:07 np0005539550 podman[319879]: 2025-11-29 08:16:07.177177459 +0000 UTC m=+0.131043582 container died a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7b8148d89c941f8c35205b34196419e467fd37cdd083b1e381d7df9cd655ba4b-merged.mount: Deactivated successfully.
Nov 29 03:16:07 np0005539550 podman[319879]: 2025-11-29 08:16:07.213828211 +0000 UTC m=+0.167694334 container remove a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bell, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:16:07 np0005539550 systemd[1]: libpod-conmon-a9c123b553f425e6dd7b0e8acd60f4de5efff12364e57affb48b7f67d7fe88e3.scope: Deactivated successfully.
Nov 29 03:16:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:07.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971443138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.308 257641 DEBUG oslo_concurrency.processutils [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 452 MiB data, 1018 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.1 MiB/s wr, 245 op/s
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.363 257641 DEBUG nova.virt.libvirt.vif [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:26:59:1a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.363 257641 DEBUG nova.network.os_vif_util [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:26:59:1a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.364 257641 DEBUG nova.network.os_vif_util [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.367 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <uuid>886e1d81-4445-45c7-8c0a-4838eb595ab1</uuid>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <name>instance-0000005f</name>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <memory>196608</memory>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestOtherB-server-431627369</nova:name>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:16:06</nova:creationTime>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.micro">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:memory>192</nova:memory>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <nova:port uuid="c4a22b7d-1070-4870-a1f6-21f50729504a">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <entry name="serial">886e1d81-4445-45c7-8c0a-4838eb595ab1</entry>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <entry name="uuid">886e1d81-4445-45c7-8c0a-4838eb595ab1</entry>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/886e1d81-4445-45c7-8c0a-4838eb595ab1_disk">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-eb8b2675-ae0e-4130-a3f5-06bcfedbadc6">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <serial>eb8b2675-ae0e-4130-a3f5-06bcfedbadc6</serial>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:26:59:1a"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <target dev="tapc4a22b7d-10"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/console.log" append="off"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:16:07 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:16:07 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:16:07 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:16:07 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.368 257641 DEBUG nova.virt.libvirt.vif [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:26:59:1a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.368 257641 DEBUG nova.network.os_vif_util [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-322060255-network", "vif_mac": "fa:16:3e:26:59:1a"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.369 257641 DEBUG nova.network.os_vif_util [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.369 257641 DEBUG os_vif [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.370 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.370 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.371 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.374 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.374 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a22b7d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.374 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4a22b7d-10, col_values=(('external_ids', {'iface-id': 'c4a22b7d-1070-4870-a1f6-21f50729504a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:59:1a', 'vm-uuid': '886e1d81-4445-45c7-8c0a-4838eb595ab1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:16:07 np0005539550 NetworkManager[49039]: <info>  [1764404167.3772] manager: (tapc4a22b7d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.381 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.383 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.384 257641 INFO os_vif [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10')
Nov 29 03:16:07 np0005539550 podman[319920]: 2025-11-29 08:16:07.402775984 +0000 UTC m=+0.046973475 container create 93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:16:07 np0005539550 systemd[1]: Started libpod-conmon-93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607.scope.
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.463 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.463 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.464 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.464 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:26:59:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.464 257641 INFO nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Using config drive
Nov 29 03:16:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc506ba4f2f7f2e884333628250db1a405f44dd2ac05505621b88b9011d190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc506ba4f2f7f2e884333628250db1a405f44dd2ac05505621b88b9011d190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc506ba4f2f7f2e884333628250db1a405f44dd2ac05505621b88b9011d190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc506ba4f2f7f2e884333628250db1a405f44dd2ac05505621b88b9011d190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49cc506ba4f2f7f2e884333628250db1a405f44dd2ac05505621b88b9011d190/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:07 np0005539550 podman[319920]: 2025-11-29 08:16:07.38450815 +0000 UTC m=+0.028705671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:07 np0005539550 podman[319920]: 2025-11-29 08:16:07.49148711 +0000 UTC m=+0.135684631 container init 93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:16:07 np0005539550 podman[319920]: 2025-11-29 08:16:07.498786095 +0000 UTC m=+0.142983596 container start 93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:16:07 np0005539550 podman[319920]: 2025-11-29 08:16:07.501559296 +0000 UTC m=+0.145756797 container attach 93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:16:07 np0005539550 NetworkManager[49039]: <info>  [1764404167.5567] manager: (tapc4a22b7d-10): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Nov 29 03:16:07 np0005539550 kernel: tapc4a22b7d-10: entered promiscuous mode
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.560 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:07Z|00389|binding|INFO|Claiming lport c4a22b7d-1070-4870-a1f6-21f50729504a for this chassis.
Nov 29 03:16:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:07Z|00390|binding|INFO|c4a22b7d-1070-4870-a1f6-21f50729504a: Claiming fa:16:3e:26:59:1a 10.100.0.8
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.570 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:59:1a 10.100.0.8'], port_security=['fa:16:3e:26:59:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.571 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4a22b7d-1070-4870-a1f6-21f50729504a in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.573 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:16:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:07Z|00391|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a ovn-installed in OVS
Nov 29 03:16:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:07Z|00392|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a up in Southbound
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.585 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.585 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3c5db1-b5c4-4f07-8f3a-73ccf26816ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.586 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b704d3a-d1 in ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.588 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b704d3a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.588 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5fdf33-57cd-4060-b2ab-0fdbef358f7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.589 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbc1c70-a0f2-4b4a-a573-8947f19315bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 systemd-udevd[319976]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:07 np0005539550 systemd-machined[216673]: New machine qemu-47-instance-0000005f.
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.605 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[2182224d-debc-4267-8759-d0096b8b2e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 NetworkManager[49039]: <info>  [1764404167.6157] device (tapc4a22b7d-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:07 np0005539550 NetworkManager[49039]: <info>  [1764404167.6166] device (tapc4a22b7d-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:07 np0005539550 systemd[1]: Started Virtual Machine qemu-47-instance-0000005f.
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.629 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[858b984f-8acc-4cb2-8c87-17e74b7484fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.659 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f04087d4-1830-40cb-98ec-b328fa1af876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.664 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[df422e04-251b-49d9-af0d-4160836919ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 NetworkManager[49039]: <info>  [1764404167.6657] manager: (tap2b704d3a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Nov 29 03:16:07 np0005539550 systemd-udevd[319979]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.697 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[747b415b-272c-4704-ab97-a0f96205154e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.699 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8f32f558-8fe6-4642-a8b4-f7084d73a6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 NetworkManager[49039]: <info>  [1764404167.7212] device (tap2b704d3a-d0): carrier: link connected
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.725 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[338767a4-9293-478d-9740-cf17b96f36ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.743 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5ce5f2-286b-4baf-8153-bcf500a8b143]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721538, 'reachable_time': 24169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320007, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.755 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[40ce88f8-ef9f-4e5b-839e-26f2ec50ff6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d799'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 721538, 'tstamp': 721538}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320008, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.777 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a17019c2-f5b1-41b5-908c-55b1c4eeaf00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 104], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721538, 'reachable_time': 24169, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320009, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
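The two privsep replies above are netlink RTM_NEWLINK dumps taken inside the ovnmeta namespace; neutron's privileged ip_lib drives pyroute2 for this. A short sketch of reading the same IFLA_* attributes directly, assuming pyroute2 is installed and the caller has privileges to enter the namespace (the namespace name is the one from the provision_datapath entry above):

    from pyroute2 import NetNS

    # Namespace name taken from the log entries above.
    with NetNS('ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06') as ns:
        for link in ns.get_links():
            # get_attr() resolves entries of the 'attrs' list in the dump.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))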
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.807 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d88153d4-773b-44c1-91d1-7a6fc02fc1e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.867 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[64459b90-12c0-447e-b500-056027a43136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.868 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.868 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.869 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:07.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
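The beast access lines above follow a fixed one-line shape (client, user, timestamp, request, status, size, latency). A hedged sketch of pulling those fields out with a regex; the pattern is derived from the sample line itself, not from any official radosgw format specification:

    import re

    LINE = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:16:07.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    PAT = re.compile(r'beast: \S+: (\S+) - (\S+) \[(.+?)\] "(.+?)" '
                     r'(\d+) (\d+).*latency=([\d.]+)s')
    m = PAT.search(LINE)
    if m:
        client, user, when, request, status, size, latency = m.groups()
        print(client, status, float(latency))  # 192.168.122.102 200 0.001000025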
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.924 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:07 np0005539550 NetworkManager[49039]: <info>  [1764404167.9253] manager: (tap2b704d3a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/174)
Nov 29 03:16:07 np0005539550 kernel: tap2b704d3a-d0: entered promiscuous mode
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.929 257641 DEBUG nova.compute.manager [req-898f52a7-ed34-4ada-9135-479151bfde44 req-cc62706b-3081-402b-85ea-ef8cbdccbba8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.930 257641 DEBUG oslo_concurrency.lockutils [req-898f52a7-ed34-4ada-9135-479151bfde44 req-cc62706b-3081-402b-85ea-ef8cbdccbba8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.930 257641 DEBUG oslo_concurrency.lockutils [req-898f52a7-ed34-4ada-9135-479151bfde44 req-cc62706b-3081-402b-85ea-ef8cbdccbba8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.930 257641 DEBUG oslo_concurrency.lockutils [req-898f52a7-ed34-4ada-9135-479151bfde44 req-cc62706b-3081-402b-85ea-ef8cbdccbba8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.930 257641 DEBUG nova.compute.manager [req-898f52a7-ed34-4ada-9135-479151bfde44 req-cc62706b-3081-402b-85ea-ef8cbdccbba8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.931 257641 WARNING nova.compute.manager [req-898f52a7-ed34-4ada-9135-479151bfde44 req-cc62706b-3081-402b-85ea-ef8cbdccbba8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.931 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.931 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.932 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.933 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:07Z|00393|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.934 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.937 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bf93acb4-8b16-4c8d-a853-46d8bda0e37b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.938 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:16:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:07.939 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'env', 'PROCESS_TAG=haproxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
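The agent writes the haproxy_cfg shown above to the .conf path and then launches haproxy inside the ovnmeta namespace through rootwrap. A minimal sketch of sanity-checking such a rendered config before launch, using haproxy's check-only mode (-c); the path is the one from the command above:

    import subprocess

    CFG = ('/var/lib/neutron/ovn-metadata-proxy/'
           '2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf')
    # 'haproxy -c -f FILE' parses the config and exits non-zero on errors
    # without starting the proxy.
    res = subprocess.run(['haproxy', '-c', '-f', CFG],
                         capture_output=True, text=True)
    print(res.returncode, (res.stdout or res.stderr).strip())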
Nov 29 03:16:07 np0005539550 nova_compute[257631]: 2025-11-29 08:16:07.951 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:16:08 np0005539550 podman[320047]: 2025-11-29 08:16:08.301365878 +0000 UTC m=+0.047208461 container create b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:16:08 np0005539550 determined_jennings[319939]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:16:08 np0005539550 determined_jennings[319939]: --> relative data size: 1.0
Nov 29 03:16:08 np0005539550 determined_jennings[319939]: --> All data devices are unavailable
Nov 29 03:16:08 np0005539550 podman[319920]: 2025-11-29 08:16:08.339381425 +0000 UTC m=+0.983578946 container died 93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:16:08 np0005539550 systemd[1]: Started libpod-conmon-b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52.scope.
Nov 29 03:16:08 np0005539550 systemd[1]: libpod-93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607.scope: Deactivated successfully.
Nov 29 03:16:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:08 np0005539550 podman[320047]: 2025-11-29 08:16:08.277317887 +0000 UTC m=+0.023160500 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:16:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92bbe60167b3724f5f3569f585b850e4e04258418f13910ddf719211a1a8e4c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-49cc506ba4f2f7f2e884333628250db1a405f44dd2ac05505621b88b9011d190-merged.mount: Deactivated successfully.
Nov 29 03:16:08 np0005539550 podman[320047]: 2025-11-29 08:16:08.400501118 +0000 UTC m=+0.146343721 container init b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:16:08 np0005539550 podman[320047]: 2025-11-29 08:16:08.407587749 +0000 UTC m=+0.153430332 container start b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:16:08 np0005539550 podman[319920]: 2025-11-29 08:16:08.42101693 +0000 UTC m=+1.065214431 container remove 93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:16:08 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [NOTICE]   (320135) : New worker (320140) forked
Nov 29 03:16:08 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [NOTICE]   (320135) : Loading success.
Nov 29 03:16:08 np0005539550 systemd[1]: libpod-conmon-93733290237f971f2dadfce2feda3bda0c6452ea7c837e29e8fd9e13cd4b3607.scope: Deactivated successfully.
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.551 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 886e1d81-4445-45c7-8c0a-4838eb595ab1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.552 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404168.551161, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.552 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.554 257641 DEBUG nova.compute.manager [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.557 257641 INFO nova.virt.libvirt.driver [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance running successfully.#033[00m
Nov 29 03:16:08 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.559 257641 DEBUG nova.virt.libvirt.guest [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.560 257641 DEBUG nova.virt.libvirt.driver [None req-02d14462-b04c-4d00-ae54-5502311d50e0 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.574 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.576 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.599 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.600 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404168.5525002, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.600 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.632 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.637 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:16:08 np0005539550 nova_compute[257631]: 2025-11-29 08:16:08.661 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:16:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:16:09 np0005539550 podman[320294]: 2025-11-29 08:16:09.043151596 +0000 UTC m=+0.048700229 container create 6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:16:09 np0005539550 systemd[1]: Started libpod-conmon-6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a.scope.
Nov 29 03:16:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:09 np0005539550 podman[320294]: 2025-11-29 08:16:09.023086956 +0000 UTC m=+0.028635619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:09 np0005539550 podman[320294]: 2025-11-29 08:16:09.130213809 +0000 UTC m=+0.135762462 container init 6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:16:09 np0005539550 podman[320294]: 2025-11-29 08:16:09.140524391 +0000 UTC m=+0.146073034 container start 6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:16:09 np0005539550 podman[320294]: 2025-11-29 08:16:09.144653216 +0000 UTC m=+0.150201879 container attach 6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:09 np0005539550 systemd[1]: libpod-6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a.scope: Deactivated successfully.
Nov 29 03:16:09 np0005539550 quizzical_dijkstra[320310]: 167 167
Nov 29 03:16:09 np0005539550 podman[320294]: 2025-11-29 08:16:09.151366927 +0000 UTC m=+0.156915570 container died 6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:16:09 np0005539550 conmon[320310]: conmon 6e1c86f025e58d5d853f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a.scope/container/memory.events
Nov 29 03:16:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-63f19be12e8388f6a079cff4cdcd8144de72c36850abb162c96dc566c787aeb9-merged.mount: Deactivated successfully.
Nov 29 03:16:09 np0005539550 podman[320294]: 2025-11-29 08:16:09.194432522 +0000 UTC m=+0.199981155 container remove 6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:09 np0005539550 systemd[1]: libpod-conmon-6e1c86f025e58d5d853f5fe7507e9cea1c23018a64c10f4d31fa617548fb363a.scope: Deactivated successfully.
Nov 29 03:16:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:09.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 453 MiB data, 1019 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 212 op/s
Nov 29 03:16:09 np0005539550 podman[320332]: 2025-11-29 08:16:09.383727034 +0000 UTC m=+0.043912607 container create 5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:16:09 np0005539550 systemd[1]: Started libpod-conmon-5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f.scope.
Nov 29 03:16:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f7435b45fafa89e39270d00f3e2770a42f233f058ca883fa5332e7ff7e7b45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f7435b45fafa89e39270d00f3e2770a42f233f058ca883fa5332e7ff7e7b45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f7435b45fafa89e39270d00f3e2770a42f233f058ca883fa5332e7ff7e7b45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03f7435b45fafa89e39270d00f3e2770a42f233f058ca883fa5332e7ff7e7b45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:09 np0005539550 podman[320332]: 2025-11-29 08:16:09.364225588 +0000 UTC m=+0.024411191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:09 np0005539550 podman[320332]: 2025-11-29 08:16:09.468757846 +0000 UTC m=+0.128943439 container init 5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Nov 29 03:16:09 np0005539550 podman[320332]: 2025-11-29 08:16:09.475955389 +0000 UTC m=+0.136140962 container start 5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:16:09 np0005539550 podman[320332]: 2025-11-29 08:16:09.479816707 +0000 UTC m=+0.140002270 container attach 5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:16:09 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:09Z|00394|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:09 np0005539550 nova_compute[257631]: 2025-11-29 08:16:09.713 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:09.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:10 np0005539550 nova_compute[257631]: 2025-11-29 08:16:10.117 257641 DEBUG nova.compute.manager [req-4ffef785-75ab-436a-a267-890276aee5c5 req-7a89f315-c48e-485d-a5e9-0b50e57f6fdd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:10 np0005539550 nova_compute[257631]: 2025-11-29 08:16:10.118 257641 DEBUG oslo_concurrency.lockutils [req-4ffef785-75ab-436a-a267-890276aee5c5 req-7a89f315-c48e-485d-a5e9-0b50e57f6fdd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:10 np0005539550 nova_compute[257631]: 2025-11-29 08:16:10.118 257641 DEBUG oslo_concurrency.lockutils [req-4ffef785-75ab-436a-a267-890276aee5c5 req-7a89f315-c48e-485d-a5e9-0b50e57f6fdd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:10 np0005539550 nova_compute[257631]: 2025-11-29 08:16:10.118 257641 DEBUG oslo_concurrency.lockutils [req-4ffef785-75ab-436a-a267-890276aee5c5 req-7a89f315-c48e-485d-a5e9-0b50e57f6fdd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:10 np0005539550 nova_compute[257631]: 2025-11-29 08:16:10.118 257641 DEBUG nova.compute.manager [req-4ffef785-75ab-436a-a267-890276aee5c5 req-7a89f315-c48e-485d-a5e9-0b50e57f6fdd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:10 np0005539550 nova_compute[257631]: 2025-11-29 08:16:10.119 257641 WARNING nova.compute.manager [req-4ffef785-75ab-436a-a267-890276aee5c5 req-7a89f315-c48e-485d-a5e9-0b50e57f6fdd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]: {
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:    "0": [
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:        {
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "devices": [
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "/dev/loop3"
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            ],
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "lv_name": "ceph_lv0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "lv_size": "7511998464",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "name": "ceph_lv0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "tags": {
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.cluster_name": "ceph",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.crush_device_class": "",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.encrypted": "0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.osd_id": "0",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.type": "block",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:                "ceph.vdo": "0"
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            },
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "type": "block",
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:            "vg_name": "ceph_vg0"
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:        }
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]:    ]
Nov 29 03:16:10 np0005539550 xenodochial_darwin[320349]: }
Nov 29 03:16:10 np0005539550 systemd[1]: libpod-5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f.scope: Deactivated successfully.
Nov 29 03:16:10 np0005539550 podman[320332]: 2025-11-29 08:16:10.328755908 +0000 UTC m=+0.988941481 container died 5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_darwin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:16:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-03f7435b45fafa89e39270d00f3e2770a42f233f058ca883fa5332e7ff7e7b45-merged.mount: Deactivated successfully.
Nov 29 03:16:10 np0005539550 podman[320332]: 2025-11-29 08:16:10.385153141 +0000 UTC m=+1.045338714 container remove 5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_darwin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:16:10 np0005539550 systemd[1]: libpod-conmon-5caef9b26e04a977724bc57f364045247559344a7f4347c3848ce7123d98f15f.scope: Deactivated successfully.
Nov 29 03:16:11 np0005539550 podman[320511]: 2025-11-29 08:16:11.001044079 +0000 UTC m=+0.043430866 container create 0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:16:11 np0005539550 systemd[1]: Started libpod-conmon-0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65.scope.
Nov 29 03:16:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:11 np0005539550 podman[320511]: 2025-11-29 08:16:11.071261194 +0000 UTC m=+0.113648001 container init 0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:11 np0005539550 podman[320511]: 2025-11-29 08:16:10.98105125 +0000 UTC m=+0.023438057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:11 np0005539550 podman[320511]: 2025-11-29 08:16:11.078264052 +0000 UTC m=+0.120650829 container start 0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:16:11 np0005539550 podman[320511]: 2025-11-29 08:16:11.081179256 +0000 UTC m=+0.123566073 container attach 0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:16:11 np0005539550 competent_elgamal[320528]: 167 167
Nov 29 03:16:11 np0005539550 systemd[1]: libpod-0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65.scope: Deactivated successfully.
Nov 29 03:16:11 np0005539550 conmon[320528]: conmon 0a09274e24adb8e34599 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65.scope/container/memory.events
Nov 29 03:16:11 np0005539550 podman[320511]: 2025-11-29 08:16:11.085261099 +0000 UTC m=+0.127647886 container died 0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:16:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f20cf80a81636340b26d24f9a172b78fbb1b5c893464e9af0e285a205d746922-merged.mount: Deactivated successfully.
Nov 29 03:16:11 np0005539550 podman[320511]: 2025-11-29 08:16:11.120142206 +0000 UTC m=+0.162528993 container remove 0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:16:11 np0005539550 systemd[1]: libpod-conmon-0a09274e24adb8e3459953066582b4540069397222305b5da0fd54315fa78b65.scope: Deactivated successfully.
Nov 29 03:16:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:11.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:11 np0005539550 podman[320551]: 2025-11-29 08:16:11.28628794 +0000 UTC m=+0.038687295 container create d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:16:11 np0005539550 systemd[1]: Started libpod-conmon-d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78.scope.
Nov 29 03:16:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 461 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Nov 29 03:16:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89314d6260dea34a30df376129fc47a7083d40513e13f790cfc514f90808f65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89314d6260dea34a30df376129fc47a7083d40513e13f790cfc514f90808f65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89314d6260dea34a30df376129fc47a7083d40513e13f790cfc514f90808f65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:11 np0005539550 podman[320551]: 2025-11-29 08:16:11.26975264 +0000 UTC m=+0.022152005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:16:11 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c89314d6260dea34a30df376129fc47a7083d40513e13f790cfc514f90808f65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:11 np0005539550 podman[320551]: 2025-11-29 08:16:11.380675369 +0000 UTC m=+0.133074764 container init d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:16:11 np0005539550 podman[320551]: 2025-11-29 08:16:11.388316504 +0000 UTC m=+0.140715869 container start d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:16:11 np0005539550 podman[320551]: 2025-11-29 08:16:11.391497164 +0000 UTC m=+0.143896529 container attach d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:16:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:11.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:11 np0005539550 nova_compute[257631]: 2025-11-29 08:16:11.976 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:12 np0005539550 hungry_turing[320567]: {
Nov 29 03:16:12 np0005539550 hungry_turing[320567]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:16:12 np0005539550 hungry_turing[320567]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:16:12 np0005539550 hungry_turing[320567]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:16:12 np0005539550 hungry_turing[320567]:        "osd_id": 0,
Nov 29 03:16:12 np0005539550 hungry_turing[320567]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:16:12 np0005539550 hungry_turing[320567]:        "type": "bluestore"
Nov 29 03:16:12 np0005539550 hungry_turing[320567]:    }
Nov 29 03:16:12 np0005539550 hungry_turing[320567]: }
Nov 29 03:16:12 np0005539550 systemd[1]: libpod-d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78.scope: Deactivated successfully.
Nov 29 03:16:12 np0005539550 podman[320590]: 2025-11-29 08:16:12.27361974 +0000 UTC m=+0.023965481 container died d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:16:12 np0005539550 nova_compute[257631]: 2025-11-29 08:16:12.377 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:12 np0005539550 nova_compute[257631]: 2025-11-29 08:16:12.427 257641 DEBUG nova.network.neutron [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Port c4a22b7d-1070-4870-a1f6-21f50729504a binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:16:12 np0005539550 nova_compute[257631]: 2025-11-29 08:16:12.428 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:12 np0005539550 nova_compute[257631]: 2025-11-29 08:16:12.428 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:12 np0005539550 nova_compute[257631]: 2025-11-29 08:16:12.428 257641 DEBUG nova.network.neutron [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c89314d6260dea34a30df376129fc47a7083d40513e13f790cfc514f90808f65-merged.mount: Deactivated successfully.
Nov 29 03:16:12 np0005539550 podman[320590]: 2025-11-29 08:16:12.475590394 +0000 UTC m=+0.225936115 container remove d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:16:12 np0005539550 systemd[1]: libpod-conmon-d11aaab5a87f1bbe9fcb79cb649a26d824c8d36e17acfc240b1250dc88bc7d78.scope: Deactivated successfully.
Nov 29 03:16:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:16:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:16:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ed019b22-ceab-4f2e-bf64-cc1d8c2b1429 does not exist
Nov 29 03:16:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 776e6176-6406-4939-a87a-3a69e7364a1a does not exist
Nov 29 03:16:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8edb42f2-0f16-4a29-a5c7-832b5c8a6f53 does not exist
Nov 29 03:16:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:16:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:13.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 476 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 141 op/s
Nov 29 03:16:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:13.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.225 257641 DEBUG nova.network.neutron [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.245 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:14 np0005539550 kernel: tapc4a22b7d-10 (unregistering): left promiscuous mode
Nov 29 03:16:14 np0005539550 NetworkManager[49039]: <info>  [1764404174.5422] device (tapc4a22b7d-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:14Z|00395|binding|INFO|Releasing lport c4a22b7d-1070-4870-a1f6-21f50729504a from this chassis (sb_readonly=0)
Nov 29 03:16:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:14Z|00396|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a down in Southbound
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.551 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:14Z|00397|binding|INFO|Removing iface tapc4a22b7d-10 ovn-installed in OVS
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.553 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.559 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:59:1a 10.100.0.8'], port_security=['fa:16:3e:26:59:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.560 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4a22b7d-1070-4870-a1f6-21f50729504a in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.562 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.563 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f488cc-2970-4edf-a124-49e68f985c63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.563 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace which is not needed anymore#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.569 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539550 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 29 03:16:14 np0005539550 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000005f.scope: Consumed 6.635s CPU time.
Nov 29 03:16:14 np0005539550 systemd-machined[216673]: Machine qemu-47-instance-0000005f terminated.
Nov 29 03:16:14 np0005539550 kernel: tapc4a22b7d-10: entered promiscuous mode
Nov 29 03:16:14 np0005539550 kernel: tapc4a22b7d-10 (unregistering): left promiscuous mode
Nov 29 03:16:14 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [NOTICE]   (320135) : haproxy version is 2.8.14-c23fe91
Nov 29 03:16:14 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [NOTICE]   (320135) : path to executable is /usr/sbin/haproxy
Nov 29 03:16:14 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [WARNING]  (320135) : Exiting Master process...
Nov 29 03:16:14 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [WARNING]  (320135) : Exiting Master process...
Nov 29 03:16:14 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [ALERT]    (320135) : Current worker (320140) exited with code 143 (Terminated)
Nov 29 03:16:14 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[320101]: [WARNING]  (320135) : All workers exited. Exiting... (0)
Nov 29 03:16:14 np0005539550 systemd[1]: libpod-b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52.scope: Deactivated successfully.
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.690 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.697 257641 INFO nova.virt.libvirt.driver [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance destroyed successfully.#033[00m
Nov 29 03:16:14 np0005539550 podman[320680]: 2025-11-29 08:16:14.69795139 +0000 UTC m=+0.050330470 container died b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.698 257641 DEBUG nova.objects.instance [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.712 257641 DEBUG nova.virt.libvirt.vif [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.712 257641 DEBUG nova.network.os_vif_util [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.713 257641 DEBUG nova.network.os_vif_util [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.713 257641 DEBUG os_vif [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.715 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.716 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a22b7d-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.719 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.725 257641 INFO os_vif [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10')#033[00m
Nov 29 03:16:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52-userdata-shm.mount: Deactivated successfully.
Nov 29 03:16:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-92bbe60167b3724f5f3569f585b850e4e04258418f13910ddf719211a1a8e4c9-merged.mount: Deactivated successfully.
Nov 29 03:16:14 np0005539550 podman[320680]: 2025-11-29 08:16:14.745322565 +0000 UTC m=+0.097701515 container cleanup b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:16:14 np0005539550 systemd[1]: libpod-conmon-b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52.scope: Deactivated successfully.
Nov 29 03:16:14 np0005539550 podman[320719]: 2025-11-29 08:16:14.80806824 +0000 UTC m=+0.042909832 container remove b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.813 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bcdd2d5a-aa72-45bb-9bb9-d76fdf407ba7]: (4, ('Sat Nov 29 08:16:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52)\nb348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52\nSat Nov 29 08:16:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (b348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52)\nb348e6b228cf256b397cd139e597b939451ab978f299550931a41d061ebebe52\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.815 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ef96d33e-49db-4316-9425-67d46f88a055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.816 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:14 np0005539550 kernel: tap2b704d3a-d0: left promiscuous mode
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.819 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539550 nova_compute[257631]: 2025-11-29 08:16:14.832 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.835 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f840e3e5-3347-4dd6-ba9a-37250d0f222d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.850 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[821691fe-6748-4322-a4dd-71fa97b1d2e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.851 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1c472f21-43c7-4195-8b44-7c80d98e2c8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.864 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d1a998-d96e-4d2f-bef3-ab0e5d2da0ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 721531, 'reachable_time': 16885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320733, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.867 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:16:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:14.867 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e456d014-9ebd-4b90-9092-c83ff09ec751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:14 np0005539550 systemd[1]: run-netns-ovnmeta\x2d2b704d3a\x2dd3e4\x2d47ce\x2d8a28\x2d10a6f4e6fd06.mount: Deactivated successfully.
Nov 29 03:16:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:15.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Nov 29 03:16:15 np0005539550 nova_compute[257631]: 2025-11-29 08:16:15.389 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:15 np0005539550 nova_compute[257631]: 2025-11-29 08:16:15.390 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:15 np0005539550 nova_compute[257631]: 2025-11-29 08:16:15.410 257641 DEBUG nova.objects.instance [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:15 np0005539550 nova_compute[257631]: 2025-11-29 08:16:15.519 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:15.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/744106522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:15 np0005539550 nova_compute[257631]: 2025-11-29 08:16:15.955 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:15 np0005539550 nova_compute[257631]: 2025-11-29 08:16:15.962 257641 DEBUG nova.compute.provider_tree [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:15 np0005539550 nova_compute[257631]: 2025-11-29 08:16:15.990 257641 DEBUG nova.scheduler.client.report [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
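[annotation] Placement sizes a provider roughly as (total - reserved) * allocation_ratio per resource class, so the inventory above advertises 32 schedulable VCPUs, 7168 MB of RAM and about 17 GB of disk; the DISK_GB ratio below 1.0 keeps the shared Ceph pool undercommitted. A quick check with the values copied from the log line:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1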
Nov 29 03:16:16 np0005539550 nova_compute[257631]: 2025-11-29 08:16:16.052 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:16 np0005539550 nova_compute[257631]: 2025-11-29 08:16:16.206 257641 INFO nova.compute.manager [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Swapping old allocation on dict_keys(['a73c606e-2495-4af4-b703-8d4b3001fdf5']) held by migration c1d4f0fd-49dd-403e-a4db-2961eb03ddd6 for instance#033[00m
Nov 29 03:16:16 np0005539550 nova_compute[257631]: 2025-11-29 08:16:16.238 257641 DEBUG nova.scheduler.client.report [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Overwriting current allocation {'allocations': {'a73c606e-2495-4af4-b703-8d4b3001fdf5': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}, 'generation': 65}}, 'project_id': '1b8899f76f554afc96bb2441424e5a77', 'user_id': 'c5e3ade3963d47be97b545b2e3779b6b', 'consumer_generation': 1} on consumer 886e1d81-4445-45c7-8c0a-4838eb595ab1 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Nov 29 03:16:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:16 np0005539550 nova_compute[257631]: 2025-11-29 08:16:16.919 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:16 np0005539550 nova_compute[257631]: 2025-11-29 08:16:16.920 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:16 np0005539550 nova_compute[257631]: 2025-11-29 08:16:16.920 257641 DEBUG nova.network.neutron [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:16 np0005539550 nova_compute[257631]: 2025-11-29 08:16:16.979 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.175 257641 DEBUG nova.compute.manager [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.176 257641 DEBUG oslo_concurrency.lockutils [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.176 257641 DEBUG oslo_concurrency.lockutils [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.177 257641 DEBUG oslo_concurrency.lockutils [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.177 257641 DEBUG nova.compute.manager [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.177 257641 WARNING nova.compute.manager [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.177 257641 DEBUG nova.compute.manager [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.178 257641 DEBUG oslo_concurrency.lockutils [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.178 257641 DEBUG oslo_concurrency.lockutils [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.178 257641 DEBUG oslo_concurrency.lockutils [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.179 257641 DEBUG nova.compute.manager [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:17 np0005539550 nova_compute[257631]: 2025-11-29 08:16:17.179 257641 WARNING nova.compute.manager [req-411d820b-c15e-4822-89f7-078f8d40783d req-0687df75-95b9-4e22-ab19-e63c88002522 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:16:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 146 op/s
Nov 29 03:16:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:17.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:18.948 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:18.949 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:18.949 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:19.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 500 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 150 op/s
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.509 257641 DEBUG nova.network.neutron [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.529 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-886e1d81-4445-45c7-8c0a-4838eb595ab1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.529 257641 DEBUG os_brick.utils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.530 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.540 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.540 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[63966fd7-be95-4328-b7e7-1586da19be8f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.541 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.548 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.548 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[ed32b120-d34c-456d-aed7-cfc9f2325e66]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.549 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.561 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.562 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[2e05b706-fc60-4faf-9b05-4bf8cb3482c9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.564 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[e320be2d-5281-4be3-9516-c0d4cf8d1c00]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.565 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.595 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.598 257641 DEBUG os_brick.initiator.connectors.lightos [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.599 257641 DEBUG os_brick.initiator.connectors.lightos [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.599 257641 DEBUG os_brick.initiator.connectors.lightos [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.599 257641 DEBUG os_brick.utils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
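[annotation] The ==> / <== trace pair above is os-brick assembling initiator-side connection properties (iSCSI IQN, NVMe host NQN/ID, multipath state) by probing multipathd, /etc/iscsi/initiatorname.iscsi and nvme, as the intervening privsep calls show. The entry point it traces is public; a sketch assuming os-brick is installed, with the arguments copied from the call line:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    # Returns the dict shown in the trace: 'initiator', 'nqn', 'nvme_hostid', ...
    print(props["initiator"], props["nqn"])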
Nov 29 03:16:19 np0005539550 nova_compute[257631]: 2025-11-29 08:16:19.718 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:19.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010672212334822259 of space, bias 1.0, pg target 3.2016637004466775 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.2940336447864322 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:16:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
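[annotation] Each pg_autoscaler pair above applies the same rule: the raw pg target is approximately capacity_ratio * bias * (OSD count * mon_target_pg_per_osd). With this cluster's 3 OSDs and Ceph's default mon_target_pg_per_osd of 100, the 'vms' line reproduces almost exactly (a worked check; the budget term is an assumption based on the defaults):

    capacity_ratio = 0.010672212334822259  # 'vms' pool, from the log above
    bias = 1.0
    pg_budget = 3 * 100                    # 3 OSDs x mon_target_pg_per_osd
    print(capacity_ratio * bias * pg_budget)  # ~3.2016637, as logged

The pools still report "quantized to 32 (current 32)" because the autoscaler rounds to a power of two and only resizes a pool when the ideal count differs from the current pg_num by a large factor (3x by default), which none of these tiny pools reach.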
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.483 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.602 257641 DEBUG nova.virt.libvirt.driver [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.677 257641 DEBUG nova.storage.rbd_utils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rolling back rbd image(886e1d81-4445-45c7-8c0a-4838eb595ab1_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.782 257641 DEBUG nova.storage.rbd_utils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] removing snapshot(nova-resize) on rbd image(886e1d81-4445-45c7-8c0a-4838eb595ab1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
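[annotation] The revert path above rolls the ephemeral disk back to the nova-resize snapshot and deletes it, via nova's thin wrappers over the python rbd binding. A minimal standalone equivalent (pool, image and snapshot names copied from the log; the client.openstack credentials are assumed to match the deployment):

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx,
                           "886e1d81-4445-45c7-8c0a-4838eb595ab1_disk") as img:
                img.rollback_to_snap("nova-resize")  # revert to pre-resize state
                img.remove_snap("nova-resize")       # then drop the snapshot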
Nov 29 03:16:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 29 03:16:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 29 03:16:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.903 257641 DEBUG nova.virt.libvirt.driver [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Start _get_guest_xml network_info=[{"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'af39c7eb-4a6e-462e-b129-e7501bb7d547', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eb8b2675-ae0e-4130-a3f5-06bcfedbadc6', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eb8b2675-ae0e-4130-a3f5-06bcfedbadc6', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'attached_at': '2025-11-29T08:16:20.000000', 'detached_at': '', 'volume_id': 'eb8b2675-ae0e-4130-a3f5-06bcfedbadc6', 'serial': 'eb8b2675-ae0e-4130-a3f5-06bcfedbadc6'}, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': None, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.908 257641 WARNING nova.virt.libvirt.driver [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.916 257641 DEBUG nova.virt.libvirt.host [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.917 257641 DEBUG nova.virt.libvirt.host [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.923 257641 DEBUG nova.virt.libvirt.host [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.923 257641 DEBUG nova.virt.libvirt.host [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.924 257641 DEBUG nova.virt.libvirt.driver [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.925 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.925 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.926 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.926 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.926 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.927 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.927 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.927 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.927 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.928 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.928 257641 DEBUG nova.virt.hardware [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.928 257641 DEBUG nova.objects.instance [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:20 np0005539550 nova_compute[257631]: 2025-11-29 08:16:20.953 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 492 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 161 op/s
Nov 29 03:16:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760639676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.426 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.471 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120886049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.937 257641 DEBUG oslo_concurrency.processutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.970 257641 DEBUG nova.virt.libvirt.vif [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.971 257641 DEBUG nova.network.os_vif_util [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.971 257641 DEBUG nova.network.os_vif_util [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.974 257641 DEBUG nova.virt.libvirt.driver [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <uuid>886e1d81-4445-45c7-8c0a-4838eb595ab1</uuid>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <name>instance-0000005f</name>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestOtherB-server-431627369</nova:name>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:16:20</nova:creationTime>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <nova:port uuid="c4a22b7d-1070-4870-a1f6-21f50729504a">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <entry name="serial">886e1d81-4445-45c7-8c0a-4838eb595ab1</entry>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <entry name="uuid">886e1d81-4445-45c7-8c0a-4838eb595ab1</entry>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/886e1d81-4445-45c7-8c0a-4838eb595ab1_disk">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/886e1d81-4445-45c7-8c0a-4838eb595ab1_disk.config">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-eb8b2675-ae0e-4130-a3f5-06bcfedbadc6">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <serial>eb8b2675-ae0e-4130-a3f5-06bcfedbadc6</serial>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:26:59:1a"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <target dev="tapc4a22b7d-10"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1/console.log" append="off"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:16:21 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:16:21 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:16:21 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:16:21 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
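The dump above is the complete libvirt guest definition Nova generated for this instance: three Ceph RBD devices (root disk on vda, config-drive cdrom on sda, and the Cinder volume on vdb), all pointing at the same three monitors on port 6789. A minimal sketch of pulling that disk topology out of such a dump with only the standard library; the XML string is assumed to be the <domain>...</domain> block with the journal prefixes stripped:

    # Sketch: list RBD-backed disks from a libvirt domain XML dump like the
    # one logged above (journal prefixes stripped). Standard library only.
    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml: str):
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            source, target = disk.find("source"), disk.find("target")
            if source is not None and source.get("protocol") == "rbd":
                hosts = [f"{h.get('name')}:{h.get('port')}"
                         for h in source.findall("host")]
                yield target.get("dev"), source.get("name"), hosts

    # Expected here: ("vda", "vms/886e1d81-..._disk", ["192.168.122.100:6789", ...]),
    # then the .config cdrom on sda and the volume-eb8b2675-... disk on vdb.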
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.976 257641 DEBUG nova.compute.manager [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Preparing to wait for external event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.977 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.977 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.977 257641 DEBUG oslo_concurrency.lockutils [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
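The three lockutils lines above bracket the registration of the network-vif-plugged event: the per-instance "<uuid>-events" lock is acquired and released in well under a millisecond. A rough sketch of that pattern using oslo.concurrency's public lock helper; the registry dict is illustrative, not Nova's actual structure:

    # Sketch of the named-lock pattern in the log: serialize writers to a
    # per-instance event registry with oslo_concurrency.lockutils.
    from oslo_concurrency import lockutils

    def register_event(registry, instance_uuid, event_name):
        # lock name mirrors the log: "<instance-uuid>-events"
        with lockutils.lock(f"{instance_uuid}-events"):
            registry.setdefault(instance_uuid, []).append(event_name)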
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.978 257641 DEBUG nova.virt.libvirt.vif [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.979 257641 DEBUG nova.network.os_vif_util [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.979 257641 DEBUG nova.network.os_vif_util [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
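A side note on the mtu of 1442 carried in the VIF above: together with "tunneled": true it marks this as an OVN Geneve network, and assuming the usual 1500-byte underlay MTU, the tenant MTU is the underlay MTU minus the encapsulation overhead of 58 bytes (20 outer IPv4 + 8 UDP + 8 Geneve base header + 8 OVN Geneve option + 14 inner Ethernet): 1500 - 58 = 1442, which is exactly what the interface element in the domain XML requests.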
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.980 257641 DEBUG os_vif [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.980 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.981 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.982 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.983 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.985 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.986 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc4a22b7d-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.986 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc4a22b7d-10, col_values=(('external_ids', {'iface-id': 'c4a22b7d-1070-4870-a1f6-21f50729504a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:59:1a', 'vm-uuid': '886e1d81-4445-45c7-8c0a-4838eb595ab1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.987 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:21 np0005539550 NetworkManager[49039]: <info>  [1764404181.9884] manager: (tapc4a22b7d-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.990 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.993 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:21 np0005539550 nova_compute[257631]: 2025-11-29 08:16:21.993 257641 INFO os_vif [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10')#033[00m
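The successful plug above is the net effect of the two ovsdb transactions logged just before it: an AddBridgeCommand that turned out to be a no-op (br-int already existed) and then AddPortCommand plus a DbSetCommand stamping external_ids so ovn-controller can match iface-id to its logical port. A sketch of the same pair of commands, assuming ovsdbapp's public Open_vSwitch API and the conventional local ovsdb socket path; port name and column values are taken from the log:

    # Sketch: replay the logged AddPortCommand + DbSetCommand with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            'unix:/run/openvswitch/db.sock', 'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapc4a22b7d-10', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapc4a22b7d-10',
            ('external_ids', {
                'iface-id': 'c4a22b7d-1070-4870-a1f6-21f50729504a',
                'attached-mac': 'fa:16:3e:26:59:1a',
                'vm-uuid': '886e1d81-4445-45c7-8c0a-4838eb595ab1'})))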
Nov 29 03:16:22 np0005539550 NetworkManager[49039]: <info>  [1764404182.0751] manager: (tapc4a22b7d-10): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Nov 29 03:16:22 np0005539550 kernel: tapc4a22b7d-10: entered promiscuous mode
Nov 29 03:16:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:22Z|00398|binding|INFO|Claiming lport c4a22b7d-1070-4870-a1f6-21f50729504a for this chassis.
Nov 29 03:16:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:22Z|00399|binding|INFO|c4a22b7d-1070-4870-a1f6-21f50729504a: Claiming fa:16:3e:26:59:1a 10.100.0.8
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.083 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.093 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:59:1a 10.100.0.8'], port_security=['fa:16:3e:26:59:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.095 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4a22b7d-1070-4870-a1f6-21f50729504a in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.098 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.104 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:22Z|00400|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a ovn-installed in OVS
Nov 29 03:16:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:22Z|00401|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a up in Southbound
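ovn-controller reacts to that external_ids stamp within milliseconds: it claims the lport for this chassis, marks it ovn-installed in OVS, and flips it up in the Southbound database. One hedged way to confirm the binding from a node with Southbound access and the ovn-sbctl binary (both assumed here):

    # Sketch: verify the chassis claim the log reports via the OVN SB DB.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # expect this chassis' UUID and up : [true]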
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.107 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539550 systemd-udevd[320984]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.111 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539550 systemd-machined[216673]: New machine qemu-48-instance-0000005f.
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.116 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[32ee694b-fe52-425c-9b8d-b39c67408fc1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.117 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b704d3a-d1 in ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.119 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b704d3a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.119 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4181381b-3fb9-4ed0-a4d2-153ed892c553]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 podman[320937]: 2025-11-29 08:16:22.121006258 +0000 UTC m=+0.082549730 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
Nov 29 03:16:22 np0005539550 NetworkManager[49039]: <info>  [1764404182.1212] device (tapc4a22b7d-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.120 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d27b92a5-9175-401d-ae1e-604a9aae81a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 NetworkManager[49039]: <info>  [1764404182.1220] device (tapc4a22b7d-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:22 np0005539550 systemd[1]: Started Virtual Machine qemu-48-instance-0000005f.
Nov 29 03:16:22 np0005539550 podman[320939]: 2025-11-29 08:16:22.123514981 +0000 UTC m=+0.083289868 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.134 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe8637d-22a3-4351-9c4e-1a89a1311e52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.151 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eaef75-6bbf-427e-8465-03f5a8dabdb9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.180 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[60047807-4ae4-4501-9165-2452a738b3aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 NetworkManager[49039]: <info>  [1764404182.1870] manager: (tap2b704d3a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.186 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dde7ed46-01ce-4607-b290-f0b4dd056885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.217 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff76dc3-0781-44b1-8bd7-b5c0d549bfc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.220 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b8948c06-cf36-4531-bb6d-980cd9b2bb1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 NetworkManager[49039]: <info>  [1764404182.2437] device (tap2b704d3a-d0): carrier: link connected
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.248 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b388a17c-9ddf-4a12-a9ad-7ca435f4daac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.268 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c475b8c7-ca89-485b-a2e2-0c16279975b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722990, 'reachable_time': 22189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321022, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.284 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9ada4315-ee3c-4ef9-b2c5-8776147e861a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d799'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722990, 'tstamp': 722990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321023, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.302 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2c60352f-aa21-4441-b2f9-555654509cb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 107], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722990, 'reachable_time': 22189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321024, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.335 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbb8cea-2ffa-4fe3-bc09-ef8d56bbf4bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.396 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5c376c-6f6c-4f08-9279-9e72df67dc50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
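Behind the privsep chatter above, the metadata agent is provisioning the datapath: it creates the ovnmeta-2b704d3a-... namespace and a veth pair whose -d1 end (the RTM_NEWLINK dump, MAC fa:16:3e:d2:d7:99) sits inside the namespace while the -d0 end stays in the root namespace to be plugged into br-int below. A sketch of that step using pyroute2, which neutron's privileged ip_lib wraps; it assumes the namespace already exists under /var/run/netns:

    # Sketch: create a veth pair with the peer pushed into the ovnmeta
    # namespace, as the agent logs. Names come from the log above.
    from pyroute2 import IPRoute

    NS = 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06'
    with IPRoute() as ipr:
        ipr.link('add', ifname='tap2b704d3a-d0', kind='veth',
                 peer={'ifname': 'tap2b704d3a-d1', 'net_ns_fd': NS})
        idx = ipr.link_lookup(ifname='tap2b704d3a-d0')[0]
        ipr.link('set', index=idx, state='up')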
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.398 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.399 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.399 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.401 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539550 kernel: tap2b704d3a-d0: entered promiscuous mode
Nov 29 03:16:22 np0005539550 NetworkManager[49039]: <info>  [1764404182.4025] manager: (tap2b704d3a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.408 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:22Z|00402|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.410 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.414 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.415 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd37dce-0944-4de0-af59-479c00fa0bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.415 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:16:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:22.416 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'env', 'PROCESS_TAG=haproxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
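The rendered haproxy configuration above binds 169.254.169.254:80 inside the ovnmeta namespace and forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the agent knows which datapath a request arrived from; the rootwrap command then launches haproxy inside that namespace. A hedged probe of the listener from the host (the agent identifies instances by source address, so this mainly confirms the proxy is answering rather than returning real instance metadata):

    # Sketch: hit the metadata listener inside the namespace (assumes the
    # ip and curl binaries are present; run as root on the compute node).
    import subprocess

    NS = 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06'
    subprocess.run(
        ['ip', 'netns', 'exec', NS, 'curl', '-s', '-o', '/dev/null',
         '-w', '%{http_code}\n', 'http://169.254.169.254/'],
        check=True)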
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.426 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.755 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 886e1d81-4445-45c7-8c0a-4838eb595ab1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.756 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404182.7546139, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.756 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:22 np0005539550 podman[321115]: 2025-11-29 08:16:22.772178002 +0000 UTC m=+0.050647909 container create decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.782 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.786 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404182.754755, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.786 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:16:22 np0005539550 systemd[1]: Started libpod-conmon-decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e.scope.
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.817 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.821 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
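The integers in that sync message are nova's power-state constants: the database still records 1 while libvirt reports 3 for the freshly defined, not-yet-resumed domain. A quick check against nova's own table (nova/compute/power_state.py), runnable in any environment with nova installed:

    # The DB says RUNNING (1), libvirt says PAUSED (3); with the pending
    # resize_reverting task the sync deliberately skips, as logged below.
    from nova.compute import power_state

    assert power_state.RUNNING == 1
    assert power_state.PAUSED == 3
    print(power_state.STATE_MAP[power_state.PAUSED])  # 'paused'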
Nov 29 03:16:22 np0005539550 podman[321115]: 2025-11-29 08:16:22.743661757 +0000 UTC m=+0.022131674 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:16:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7e0e33793b8e2a4d49a23d76fe05e7af104ed5ed143460067d9ca3da6008e9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:22 np0005539550 podman[321115]: 2025-11-29 08:16:22.855948021 +0000 UTC m=+0.134417928 container init decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:16:22 np0005539550 podman[321115]: 2025-11-29 08:16:22.861958414 +0000 UTC m=+0.140428301 container start decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:16:22 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321131]: [NOTICE]   (321135) : New worker (321137) forked
Nov 29 03:16:22 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321131]: [NOTICE]   (321135) : Loading success.
Nov 29 03:16:22 np0005539550 nova_compute[257631]: 2025-11-29 08:16:22.891 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] During sync_power_state the instance has a pending task (resize_reverting). Skip.#033[00m
Nov 29 03:16:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 477 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 185 op/s
Nov 29 03:16:23 np0005539550 nova_compute[257631]: 2025-11-29 08:16:23.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:23 np0005539550 nova_compute[257631]: 2025-11-29 08:16:23.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:23 np0005539550 nova_compute[257631]: 2025-11-29 08:16:23.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:23 np0005539550 nova_compute[257631]: 2025-11-29 08:16:23.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:23 np0005539550 nova_compute[257631]: 2025-11-29 08:16:23.945 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:16:23 np0005539550 nova_compute[257631]: 2025-11-29 08:16:23.946 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.201 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611076057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.397 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.487 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.488 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.488 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000005f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.492 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.492 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.492 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000060 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.662 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.663 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4185MB free_disk=20.774791717529297GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.663 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.664 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.806 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance cc3dd9da-bb7d-4885-8555-d724b05677fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.807 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 886e1d81-4445-45c7-8c0a-4838eb595ab1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.807 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.807 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.891 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.937 257641 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.938 257641 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.938 257641 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.938 257641 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.939 257641 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Processing event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.939 257641 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.939 257641 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.939 257641 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.940 257641 DEBUG oslo_concurrency.lockutils [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.940 257641 DEBUG nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.940 257641 WARNING nova.compute.manager [req-c35c31c8-677c-46f0-8efe-e18c87be520a req-5d8ec76d-67a6-476e-81ae-2f6730458d49 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state resized and task_state resize_reverting.
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.941 257641 DEBUG nova.compute.manager [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.944 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404184.9441855, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.944 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Resumed (Lifecycle Event)
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.949 257641 INFO nova.virt.libvirt.driver [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance running successfully.
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.949 257641 DEBUG nova.virt.libvirt.driver [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] finish_revert_migration finished successfully. finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11887
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.970 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:24 np0005539550 nova_compute[257631]: 2025-11-29 08:16:24.989 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: resized, current task_state: resize_reverting, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:16:25 np0005539550 nova_compute[257631]: 2025-11-29 08:16:25.029 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] During sync_power_state the instance has a pending task (resize_reverting). Skip.
Nov 29 03:16:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:25 np0005539550 nova_compute[257631]: 2025-11-29 08:16:25.299 257641 INFO nova.compute.manager [None req-1daa1d3e-fa98-4a9a-b4b7-b99bc4f7b315 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance to original state: 'active'
Nov 29 03:16:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 295 active+clean; 417 MiB data, 1001 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.1 KiB/s wr, 241 op/s
Nov 29 03:16:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046158403' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:25 np0005539550 nova_compute[257631]: 2025-11-29 08:16:25.364 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:16:25 np0005539550 nova_compute[257631]: 2025-11-29 08:16:25.370 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:16:25 np0005539550 nova_compute[257631]: 2025-11-29 08:16:25.408 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:16:25 np0005539550 nova_compute[257631]: 2025-11-29 08:16:25.433 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:16:25 np0005539550 nova_compute[257631]: 2025-11-29 08:16:25.433 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:25.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:26 np0005539550 nova_compute[257631]: 2025-11-29 08:16:26.433 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:16:26 np0005539550 nova_compute[257631]: 2025-11-29 08:16:26.433 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:16:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:26 np0005539550 nova_compute[257631]: 2025-11-29 08:16:26.908 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:16:26 np0005539550 nova_compute[257631]: 2025-11-29 08:16:26.908 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:16:26 np0005539550 nova_compute[257631]: 2025-11-29 08:16:26.908 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:16:26 np0005539550 nova_compute[257631]: 2025-11-29 08:16:26.984 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 394 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 3.6 KiB/s wr, 248 op/s
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.591 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.591 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.592 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.592 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.593 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.595 257641 INFO nova.compute.manager [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Terminating instance
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.597 257641 DEBUG nova.compute.manager [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:16:27 np0005539550 kernel: tap5be36804-6e (unregistering): left promiscuous mode
Nov 29 03:16:27 np0005539550 NetworkManager[49039]: <info>  [1764404187.6681] device (tap5be36804-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:27Z|00403|binding|INFO|Releasing lport 5be36804-6ed4-419f-9b92-e268a97799f5 from this chassis (sb_readonly=0)
Nov 29 03:16:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:27Z|00404|binding|INFO|Setting lport 5be36804-6ed4-419f-9b92-e268a97799f5 down in Southbound
Nov 29 03:16:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:27Z|00405|binding|INFO|Removing iface tap5be36804-6e ovn-installed in OVS
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.679 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:27.701 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:26:00 10.100.0.2'], port_security=['fa:16:3e:a3:26:00 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'cc3dd9da-bb7d-4885-8555-d724b05677fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9404f82f-199b-4eec-83ca-0eeb6b2d1ce8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4da7fb77734a4135a6f8b5b70bed7a2f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5d9eaea9-a53e-4ac6-a82a-9c11849e63d4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eff10a56-99f6-4778-b800-4c9f705b38bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5be36804-6ed4-419f-9b92-e268a97799f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:16:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:27.703 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5be36804-6ed4-419f-9b92-e268a97799f5 in datapath 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 unbound from our chassis
Nov 29 03:16:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:27.705 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9404f82f-199b-4eec-83ca-0eeb6b2d1ce8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 29 03:16:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:27.708 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e2eb33-3f34-4961-b8e8-c198011f3d56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.727 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:27 np0005539550 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000060.scope: Deactivated successfully.
Nov 29 03:16:27 np0005539550 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000060.scope: Consumed 16.708s CPU time.
Nov 29 03:16:27 np0005539550 systemd-machined[216673]: Machine qemu-46-instance-00000060 terminated.
Nov 29 03:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.844 257641 INFO nova.virt.libvirt.driver [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Instance destroyed successfully.
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.845 257641 DEBUG nova.objects.instance [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lazy-loading 'resources' on Instance uuid cc3dd9da-bb7d-4885-8555-d724b05677fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.868 257641 DEBUG nova.virt.libvirt.vif [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-513044691',display_name='tempest-ServerRescueTestJSON-server-513044691',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-513044691',id=96,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:15:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4da7fb77734a4135a6f8b5b70bed7a2f',ramdisk_id='',reservation_id='r-b20jon9y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-640276387',owner_user_name='tempest-ServerRescueTestJSON-640276387-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:15:26Z,user_data=None,user_id='f4c89c9953854ecf96a802dc6055db9d',uuid=cc3dd9da-bb7d-4885-8555-d724b05677fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.869 257641 DEBUG nova.network.os_vif_util [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converting VIF {"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.870 257641 DEBUG nova.network.os_vif_util [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.871 257641 DEBUG os_vif [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.874 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.874 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5be36804-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.878 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.882 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:16:27 np0005539550 nova_compute[257631]: 2025-11-29 08:16:27.885 257641 INFO os_vif [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:26:00,bridge_name='br-int',has_traffic_filtering=True,id=5be36804-6ed4-419f-9b92-e268a97799f5,network=Network(9404f82f-199b-4eec-83ca-0eeb6b2d1ce8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5be36804-6e')
Nov 29 03:16:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:27.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.560 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.561 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.561 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.562 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.562 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.564 257641 INFO nova.compute.manager [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Terminating instance
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.565 257641 DEBUG nova.compute.manager [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:16:28 np0005539550 kernel: tapc4a22b7d-10 (unregistering): left promiscuous mode
Nov 29 03:16:28 np0005539550 NetworkManager[49039]: <info>  [1764404188.6294] device (tapc4a22b7d-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:16:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:28Z|00406|binding|INFO|Releasing lport c4a22b7d-1070-4870-a1f6-21f50729504a from this chassis (sb_readonly=0)
Nov 29 03:16:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:28Z|00407|binding|INFO|Setting lport c4a22b7d-1070-4870-a1f6-21f50729504a down in Southbound
Nov 29 03:16:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:28Z|00408|binding|INFO|Removing iface tapc4a22b7d-10 ovn-installed in OVS
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.637 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.639 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:28.645 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:59:1a 10.100.0.8'], port_security=['fa:16:3e:26:59:1a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '886e1d81-4445-45c7-8c0a-4838eb595ab1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '8', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.246', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c4a22b7d-1070-4870-a1f6-21f50729504a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:16:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:28.646 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c4a22b7d-1070-4870-a1f6-21f50729504a in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis
Nov 29 03:16:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:28.648 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:16:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:28.648 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9203a812-1ebc-4b90-988d-e2fb4ab072c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:16:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:28.649 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace which is not needed anymore
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.659 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:28 np0005539550 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Nov 29 03:16:28 np0005539550 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000005f.scope: Consumed 4.431s CPU time.
Nov 29 03:16:28 np0005539550 systemd-machined[216673]: Machine qemu-48-instance-0000005f terminated.
Nov 29 03:16:28 np0005539550 NetworkManager[49039]: <info>  [1764404188.7797] manager: (tapc4a22b7d-10): new Tun device (/org/freedesktop/NetworkManager/Devices/179)
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.781 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.786 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.798 257641 INFO nova.virt.libvirt.driver [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Instance destroyed successfully.
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.799 257641 DEBUG nova.objects.instance [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 886e1d81-4445-45c7-8c0a-4838eb595ab1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:16:28 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321131]: [NOTICE]   (321135) : haproxy version is 2.8.14-c23fe91
Nov 29 03:16:28 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321131]: [NOTICE]   (321135) : path to executable is /usr/sbin/haproxy
Nov 29 03:16:28 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321131]: [WARNING]  (321135) : Exiting Master process...
Nov 29 03:16:28 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321131]: [ALERT]    (321135) : Current worker (321137) exited with code 143 (Terminated)
Nov 29 03:16:28 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321131]: [WARNING]  (321135) : All workers exited. Exiting... (0)
Nov 29 03:16:28 np0005539550 systemd[1]: libpod-decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e.scope: Deactivated successfully.
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.853 257641 DEBUG nova.virt.libvirt.vif [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:14:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-431627369',display_name='tempest-ServerActionsTestOtherB-server-431627369',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-431627369',id=95,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-w4qpz19d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:16:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=886e1d81-4445-45c7-8c0a-4838eb595ab1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, 
"connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.854 257641 DEBUG nova.network.os_vif_util [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "c4a22b7d-1070-4870-a1f6-21f50729504a", "address": "fa:16:3e:26:59:1a", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc4a22b7d-10", "ovs_interfaceid": "c4a22b7d-1070-4870-a1f6-21f50729504a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.854 257641 DEBUG nova.network.os_vif_util [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.854 257641 DEBUG os_vif [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.855 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.856 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc4a22b7d-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
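The DelPortCommand above is ovsdbapp queueing a single-command OVSDB transaction against the local switch database. The same operation issued standalone through ovsdbapp's documented API, as a sketch (the socket path and timeout are assumptions; nova takes them from configuration):

    # Sketch: DelPortCommand(port=tapc4a22b7d-10, bridge=br-int, if_exists=True)
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed local socket
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes the delete a no-op when the port is already gone.
    api.del_port('tapc4a22b7d-10', bridge='br-int', if_exists=True).execute(
        check_error=True)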
Nov 29 03:16:28 np0005539550 podman[321251]: 2025-11-29 08:16:28.858040015 +0000 UTC m=+0.125837490 container died decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.858 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.860 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:28 np0005539550 nova_compute[257631]: 2025-11-29 08:16:28.862 257641 INFO os_vif [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:59:1a,bridge_name='br-int',has_traffic_filtering=True,id=c4a22b7d-1070-4870-a1f6-21f50729504a,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc4a22b7d-10')#033[00m
Nov 29 03:16:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:29.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.333 257641 DEBUG nova.compute.manager [req-8b751e97-1488-48da-81a0-1029c5e89fa7 req-20980816-7444-4f87-9d7c-c810093234f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.334 257641 DEBUG oslo_concurrency.lockutils [req-8b751e97-1488-48da-81a0-1029c5e89fa7 req-20980816-7444-4f87-9d7c-c810093234f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.334 257641 DEBUG oslo_concurrency.lockutils [req-8b751e97-1488-48da-81a0-1029c5e89fa7 req-20980816-7444-4f87-9d7c-c810093234f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.334 257641 DEBUG oslo_concurrency.lockutils [req-8b751e97-1488-48da-81a0-1029c5e89fa7 req-20980816-7444-4f87-9d7c-c810093234f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.334 257641 DEBUG nova.compute.manager [req-8b751e97-1488-48da-81a0-1029c5e89fa7 req-20980816-7444-4f87-9d7c-c810093234f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.334 257641 DEBUG nova.compute.manager [req-8b751e97-1488-48da-81a0-1029c5e89fa7 req-20980816-7444-4f87-9d7c-c810093234f3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
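The Acquiring/acquired/released triple above is oslo.concurrency serializing access to the per-instance event queue under a "<uuid>-events" lock, so event dispatch and any waiters cannot race. A reduced sketch of the pattern (the handler body is illustrative, not nova's actual bookkeeping):

    # Sketch of the "<instance-uuid>-events" lock pattern in the log.
    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(f'{instance_uuid}-events'):
            # Look up and remove any waiter registered for event_name;
            # returning None corresponds to "No waiting events found".
            return None

    pop_instance_event('886e1d81-4445-45c7-8c0a-4838eb595ab1',
                       'network-vif-unplugged-c4a22b7d-1070-4870-a1f6-21f50729504a')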
Nov 29 03:16:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 374 MiB data, 972 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.3 KiB/s wr, 232 op/s
Nov 29 03:16:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f7e0e33793b8e2a4d49a23d76fe05e7af104ed5ed143460067d9ca3da6008e9e-merged.mount: Deactivated successfully.
Nov 29 03:16:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:16:29 np0005539550 podman[321251]: 2025-11-29 08:16:29.434537731 +0000 UTC m=+0.702335166 container cleanup decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:16:29 np0005539550 systemd[1]: libpod-conmon-decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e.scope: Deactivated successfully.
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.476 257641 INFO nova.virt.libvirt.driver [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Deleting instance files /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd_del#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.478 257641 INFO nova.virt.libvirt.driver [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Deletion of /var/lib/nova/instances/cc3dd9da-bb7d-4885-8555-d724b05677fd_del complete#033[00m
Nov 29 03:16:29 np0005539550 podman[321312]: 2025-11-29 08:16:29.508346618 +0000 UTC m=+0.047509079 container remove decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.515 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f6747e6-d253-417b-a024-3bc525456562]: (4, ('Sat Nov 29 08:16:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e)\ndecdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e\nSat Nov 29 08:16:29 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (decdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e)\ndecdac8546a3e25610954d1829562fa905b49c6444bfb07bbec822140181b21e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.516 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[82ecb4ce-b7a5-4b7b-812f-7930f437ca80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
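These privsep replies are the unprivileged agent receiving return values from its privileged helper daemon over a local channel. The general shape of an oslo.privsep entrypoint, sketched with a hypothetical context (neutron defines its own contexts and capability sets in neutron.privileged):

    # Sketch: a privsep entrypoint whose return value comes back as a
    # "privsep: reply[...]" line like the ones above.
    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(          # hypothetical example context
        'example',
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN, capabilities.CAP_SYS_ADMIN])

    @ctx.entrypoint
    def stop_container(name):
        """Runs inside the privileged daemon; the result travels back in the reply."""
        ...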
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.519 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.521 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539550 kernel: tap2b704d3a-d0: left promiscuous mode
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.534 257641 INFO nova.compute.manager [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Took 1.94 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.536 257641 DEBUG oslo.service.loopingcall [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.537 257641 DEBUG nova.compute.manager [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.537 257641 DEBUG nova.network.neutron [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.542 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[82fa96c7-1358-4f02-b498-fed9a5a0f852]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.560 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[621f00aa-cb08-4361-a211-43193faf3d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.564 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2386d3ab-a45f-4ed6-afc3-6126205a9ccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.570 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Updating instance_info_cache with network_info: [{"id": "5be36804-6ed4-419f-9b92-e268a97799f5", "address": "fa:16:3e:a3:26:00", "network": {"id": "9404f82f-199b-4eec-83ca-0eeb6b2d1ce8", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1742590140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "4da7fb77734a4135a6f8b5b70bed7a2f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5be36804-6e", "ovs_interfaceid": "5be36804-6ed4-419f-9b92-e268a97799f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.582 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9279f659-d22a-4952-a805-398045253972]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722984, 'reachable_time': 16606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321327, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
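The reply above is a raw pyroute2 RTM_NEWLINK message for lo, read inside the ovnmeta- namespace just before teardown. Fetching the same attributes directly with pyroute2, as a sketch (entering the namespace only works while it still exists):

    # Sketch: reading the link attributes carried in the privsep reply above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),   # 'lo'
                  link['state'],                  # 'up'
                  link.get_attr('IFLA_MTU'))      # 65536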
Nov 29 03:16:29 np0005539550 systemd[1]: run-netns-ovnmeta\x2d2b704d3a\x2dd3e4\x2d47ce\x2d8a28\x2d10a6f4e6fd06.mount: Deactivated successfully.
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.586 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:16:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:29.587 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[c2b1a2cf-519b-49ff-988b-64cd31991b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
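remove_netns then deletes the now-empty metadata namespace; under privsep this boils down to pyroute2's netns helper. A one-line sketch:

    # Sketch: the namespace removal neutron's remove_netns performs.
    from pyroute2 import netns

    netns.remove('ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06')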
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.589 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-cc3dd9da-bb7d-4885-8555-d724b05677fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.589 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.589 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.590 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.590 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.590 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.795 257641 INFO nova.virt.libvirt.driver [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Deleting instance files /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1_del#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.796 257641 INFO nova.virt.libvirt.driver [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Deletion of /var/lib/nova/instances/886e1d81-4445-45c7-8c0a-4838eb595ab1_del complete#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.863 257641 INFO nova.compute.manager [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.864 257641 DEBUG oslo.service.loopingcall [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.864 257641 DEBUG nova.compute.manager [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.865 257641 DEBUG nova.network.neutron [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.934 257641 DEBUG nova.compute.manager [req-a1bc5839-d9e5-4b09-9fcc-e7f6308720cb req-db7eb78c-d861-4185-8fea-486e60bfb30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-unplugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.934 257641 DEBUG oslo_concurrency.lockutils [req-a1bc5839-d9e5-4b09-9fcc-e7f6308720cb req-db7eb78c-d861-4185-8fea-486e60bfb30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.934 257641 DEBUG oslo_concurrency.lockutils [req-a1bc5839-d9e5-4b09-9fcc-e7f6308720cb req-db7eb78c-d861-4185-8fea-486e60bfb30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.935 257641 DEBUG oslo_concurrency.lockutils [req-a1bc5839-d9e5-4b09-9fcc-e7f6308720cb req-db7eb78c-d861-4185-8fea-486e60bfb30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.935 257641 DEBUG nova.compute.manager [req-a1bc5839-d9e5-4b09-9fcc-e7f6308720cb req-db7eb78c-d861-4185-8fea-486e60bfb30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] No waiting events found dispatching network-vif-unplugged-5be36804-6ed4-419f-9b92-e268a97799f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:29 np0005539550 nova_compute[257631]: 2025-11-29 08:16:29.935 257641 DEBUG nova.compute.manager [req-a1bc5839-d9e5-4b09-9fcc-e7f6308720cb req-db7eb78c-d861-4185-8fea-486e60bfb30d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-unplugged-5be36804-6ed4-419f-9b92-e268a97799f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:16:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:29.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.600 257641 DEBUG nova.network.neutron [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.672 257641 INFO nova.compute.manager [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Took 1.14 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.793 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.793 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.890 257641 DEBUG oslo_concurrency.processutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:30 np0005539550 nova_compute[257631]: 2025-11-29 08:16:30.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.013 257641 DEBUG nova.compute.manager [req-8866fc4f-a6e9-484b-b93f-0d2d8a3e759c req-864dcd2a-af4f-4fac-8bcc-82a0140adccf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-deleted-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/554844372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.324 257641 DEBUG oslo_concurrency.processutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
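The Running cmd / returned pair above is nova's RBD image backend polling pool capacity by shelling out to ceph df through oslo.concurrency. A minimal reproduction with the same argv (the parsing line is illustrative):

    # Sketch: the "ceph df" round trip logged above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'])  # cluster-wide capacity in bytes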
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.330 257641 DEBUG nova.compute.provider_tree [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 205 MiB data, 916 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.4 KiB/s wr, 296 op/s
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.356 257641 DEBUG nova.scheduler.client.report [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
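The inventory dict above fixes how much placement will hand out on this node: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked through for the logged values:

    # Capacity implied by the logged inventory:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)   # VCPU 32, MEMORY_MB 7168, DISK_GB 17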
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.409 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.467 257641 INFO nova.scheduler.client.report [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Deleted allocations for instance cc3dd9da-bb7d-4885-8555-d724b05677fd#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.604 257641 DEBUG nova.network.neutron [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.609 257641 DEBUG oslo_concurrency.lockutils [None req-e74a7ecd-500e-45b7-a128-98d899b8f6cf f4c89c9953854ecf96a802dc6055db9d 4da7fb77734a4135a6f8b5b70bed7a2f - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.612 257641 DEBUG nova.compute.manager [req-89ddd0a8-5cf3-430f-8bd7-ebad7bff793c req-f64123e9-76a0-4e48-a2d6-4aee037f1378 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.612 257641 DEBUG oslo_concurrency.lockutils [req-89ddd0a8-5cf3-430f-8bd7-ebad7bff793c req-f64123e9-76a0-4e48-a2d6-4aee037f1378 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.612 257641 DEBUG oslo_concurrency.lockutils [req-89ddd0a8-5cf3-430f-8bd7-ebad7bff793c req-f64123e9-76a0-4e48-a2d6-4aee037f1378 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.613 257641 DEBUG oslo_concurrency.lockutils [req-89ddd0a8-5cf3-430f-8bd7-ebad7bff793c req-f64123e9-76a0-4e48-a2d6-4aee037f1378 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.613 257641 DEBUG nova.compute.manager [req-89ddd0a8-5cf3-430f-8bd7-ebad7bff793c req-f64123e9-76a0-4e48-a2d6-4aee037f1378 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] No waiting events found dispatching network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.613 257641 WARNING nova.compute.manager [req-89ddd0a8-5cf3-430f-8bd7-ebad7bff793c req-f64123e9-76a0-4e48-a2d6-4aee037f1378 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received unexpected event network-vif-plugged-c4a22b7d-1070-4870-a1f6-21f50729504a for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.638 257641 INFO nova.compute.manager [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Took 1.77 seconds to deallocate network for instance.#033[00m
Nov 29 03:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 29 03:16:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 29 03:16:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 29 03:16:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:31.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:31 np0005539550 nova_compute[257631]: 2025-11-29 08:16:31.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.099 257641 INFO nova.compute.manager [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Took 0.46 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.186 257641 DEBUG nova.compute.manager [req-5b3831fc-f4bf-42e0-ab63-99f5a4f866b3 req-6aaec0e8-9c83-4ad5-9d09-4723ac51ccde 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.186 257641 DEBUG oslo_concurrency.lockutils [req-5b3831fc-f4bf-42e0-ab63-99f5a4f866b3 req-6aaec0e8-9c83-4ad5-9d09-4723ac51ccde 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.186 257641 DEBUG oslo_concurrency.lockutils [req-5b3831fc-f4bf-42e0-ab63-99f5a4f866b3 req-6aaec0e8-9c83-4ad5-9d09-4723ac51ccde 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.186 257641 DEBUG oslo_concurrency.lockutils [req-5b3831fc-f4bf-42e0-ab63-99f5a4f866b3 req-6aaec0e8-9c83-4ad5-9d09-4723ac51ccde 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cc3dd9da-bb7d-4885-8555-d724b05677fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.187 257641 DEBUG nova.compute.manager [req-5b3831fc-f4bf-42e0-ab63-99f5a4f866b3 req-6aaec0e8-9c83-4ad5-9d09-4723ac51ccde 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] No waiting events found dispatching network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.187 257641 WARNING nova.compute.manager [req-5b3831fc-f4bf-42e0-ab63-99f5a4f866b3 req-6aaec0e8-9c83-4ad5-9d09-4723ac51ccde 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Received unexpected event network-vif-plugged-5be36804-6ed4-419f-9b92-e268a97799f5 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.198 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.199 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.269 257641 DEBUG oslo_concurrency.processutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035138158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.712 257641 DEBUG oslo_concurrency.processutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.719 257641 DEBUG nova.compute.provider_tree [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.758 257641 DEBUG nova.scheduler.client.report [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.853 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:32 np0005539550 nova_compute[257631]: 2025-11-29 08:16:32.911 257641 INFO nova.scheduler.client.report [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Deleted allocations for instance 886e1d81-4445-45c7-8c0a-4838eb595ab1#033[00m
Nov 29 03:16:33 np0005539550 nova_compute[257631]: 2025-11-29 08:16:33.026 257641 DEBUG oslo_concurrency.lockutils [None req-3842af07-65e3-49d3-aff0-3909ce97f801 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "886e1d81-4445-45c7-8c0a-4838eb595ab1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:33.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 177 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 223 KiB/s wr, 291 op/s
Nov 29 03:16:33 np0005539550 podman[321375]: 2025-11-29 08:16:33.367847213 +0000 UTC m=+0.104747204 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:16:33 np0005539550 nova_compute[257631]: 2025-11-29 08:16:33.858 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:33 np0005539550 nova_compute[257631]: 2025-11-29 08:16:33.884 257641 DEBUG nova.compute.manager [req-87a40f06-e413-48f8-a2b1-d2096a813368 req-63f4caa8-6b0f-4c04-922b-03dbd83eb763 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Received event network-vif-deleted-c4a22b7d-1070-4870-a1f6-21f50729504a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:33.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:34 np0005539550 nova_compute[257631]: 2025-11-29 08:16:34.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:35.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 196 MiB data, 887 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 994 KiB/s wr, 229 op/s
Nov 29 03:16:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:35.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:36 np0005539550 nova_compute[257631]: 2025-11-29 08:16:36.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:16:36 np0005539550 nova_compute[257631]: 2025-11-29 08:16:36.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:36 np0005539550 nova_compute[257631]: 2025-11-29 08:16:36.988 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:37.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 213 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 230 op/s
Nov 29 03:16:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.762 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.763 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.790 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.907 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.909 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.915 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:16:38 np0005539550 nova_compute[257631]: 2025-11-29 08:16:38.915 257641 INFO nova.compute.claims [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:16:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:16:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/938037242' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:16:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:16:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/938037242' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.086 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:39.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 213 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 312 op/s
Nov 29 03:16:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:16:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896839922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.549 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
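The ceph df call above is how nova sizes its DISK_GB inventory when the RBD image backend is in use; the "returned: 0 in 0.463s" line is the round trip to the mon. A sketch of the same query outside nova, assuming the logged client.openstack keyring and ceph.conf are available:

    import json
    import subprocess

    # Identical command line to the one logged by oslo_concurrency.processutils.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)
    # Cluster-wide totals sit under "stats"; per-pool usage under "pools".
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])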
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.556 257641 DEBUG nova.compute.provider_tree [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.839 257641 DEBUG nova.scheduler.client.report [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.903 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.904 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.967 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.968 257641 DEBUG nova.network.neutron [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:16:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:39 np0005539550 nova_compute[257631]: 2025-11-29 08:16:39.992 257641 INFO nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.031 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.323 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.324 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.325 257641 INFO nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Creating image(s)#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.356 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.386 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.418 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.423 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.505 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
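The qemu-img probe above is deliberately wrapped in oslo_concurrency.prlimit, which caps the child's address space at 1 GiB and its CPU time at 30 s so a malformed image cannot wedge the compute service. The same invocation, reproduced directly (the base-image path is taken from the log):

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    out = subprocess.check_output([
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # RLIMIT_AS: 1 GiB of address space
        "--cpu=30",          # RLIMIT_CPU: 30 seconds
        "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", base, "--force-share", "--output=json",
    ])
    info = json.loads(out)
    print(info["format"], info["virtual-size"])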
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.507 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.508 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.508 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.537 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.541 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 68c19565-6fe1-4c2c-927d-87f801074e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.853 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 68c19565-6fe1-4c2c-927d-87f801074e18_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:16:40 np0005539550 nova_compute[257631]: 2025-11-29 08:16:40.928 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] resizing rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
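Taken together, the rbd import and the resize line show the flat-to-RBD path: the cached base file is imported into the vms pool as a format-2 image, then grown to the flavor's 1 GiB root disk (1073741824 bytes). nova performs the resize through the librbd binding (rbd_utils.resize); an equivalent CLI sketch, with values copied from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    disk = "68c19565-6fe1-4c2c-927d-87f801074e18_disk"
    creds = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the flat base image as a format-2 RBD image in the "vms" pool...
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, disk, "--image-format=2"] + creds)
    # ...then grow it to the 1 GiB root disk the flavor asks for.
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", "--image", disk, "--size", "1G"] + creds)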
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.026 257641 DEBUG nova.objects.instance [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.044 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.045 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Ensure instance console log exists: /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.046 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.046 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.046 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.212 257641 DEBUG nova.policy [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5e3ade3963d47be97b545b2e3779b6b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b8899f76f554afc96bb2441424e5a77', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:16:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:41.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 239 MiB data, 896 MiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 3.4 MiB/s wr, 336 op/s
Nov 29 03:16:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:41.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:41 np0005539550 nova_compute[257631]: 2025-11-29 08:16:41.990 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:42 np0005539550 nova_compute[257631]: 2025-11-29 08:16:42.607 257641 DEBUG nova.network.neutron [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Successfully created port: 2d17165a-5bc8-402e-a08c-e6f21188fb1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:16:42 np0005539550 nova_compute[257631]: 2025-11-29 08:16:42.843 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404187.8422263, cc3dd9da-bb7d-4885-8555-d724b05677fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:42 np0005539550 nova_compute[257631]: 2025-11-29 08:16:42.844 257641 INFO nova.compute.manager [-] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:42 np0005539550 nova_compute[257631]: 2025-11-29 08:16:42.870 257641 DEBUG nova.compute.manager [None req-f016a121-1ddb-4d8d-a0e6-d5c8d8b2b147 - - - - - -] [instance: cc3dd9da-bb7d-4885-8555-d724b05677fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 248 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 454 KiB/s rd, 3.3 MiB/s wr, 313 op/s
Nov 29 03:16:43 np0005539550 nova_compute[257631]: 2025-11-29 08:16:43.798 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404188.7965944, 886e1d81-4445-45c7-8c0a-4838eb595ab1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:43 np0005539550 nova_compute[257631]: 2025-11-29 08:16:43.799 257641 INFO nova.compute.manager [-] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:16:43 np0005539550 nova_compute[257631]: 2025-11-29 08:16:43.864 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:43.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:44 np0005539550 nova_compute[257631]: 2025-11-29 08:16:44.008 257641 DEBUG nova.compute.manager [None req-1bce0695-5fd4-4916-b60e-0f27a02aa1bd - - - - - -] [instance: 886e1d81-4445-45c7-8c0a-4838eb595ab1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.111 257641 DEBUG nova.network.neutron [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Successfully updated port: 2d17165a-5bc8-402e-a08c-e6f21188fb1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.137 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.137 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.138 257641 DEBUG nova.network.neutron [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.262 257641 DEBUG nova.compute.manager [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.262 257641 DEBUG nova.compute.manager [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing instance network info cache due to event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.263 257641 DEBUG oslo_concurrency.lockutils [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:16:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:45.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 260 MiB data, 909 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.4 MiB/s wr, 296 op/s
Nov 29 03:16:45 np0005539550 nova_compute[257631]: 2025-11-29 08:16:45.748 257641 DEBUG nova.network.neutron [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:16:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:45.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:46 np0005539550 nova_compute[257631]: 2025-11-29 08:16:46.992 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 260 MiB data, 922 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 303 op/s
Nov 29 03:16:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:47.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.265 257641 DEBUG nova.network.neutron [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.343 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.343 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance network_info: |[{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
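The network_info blob above is the VIF description that gets cached on the instance and rendered into the guest XML further down. A sketch of pulling the fields nova cares about out of one entry (structure exactly as logged, truncated to the relevant keys):

    import json

    vif = json.loads("""{
      "id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b",
      "address": "fa:16:3e:ae:bc:67",
      "devname": "tap2d17165a-5b",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.9"}]}]}
    }""")
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["address"], vif["devname"], fixed_ips)  # MAC, tap device, ['10.100.0.9']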
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.344 257641 DEBUG oslo_concurrency.lockutils [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.344 257641 DEBUG nova.network.neutron [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.348 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Start _get_guest_xml network_info=[{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.354 257641 WARNING nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.359 257641 DEBUG nova.virt.libvirt.host [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.360 257641 DEBUG nova.virt.libvirt.host [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.369 257641 DEBUG nova.virt.libvirt.host [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.370 257641 DEBUG nova.virt.libvirt.host [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.372 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.372 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.373 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.373 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.373 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.373 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.374 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.374 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.374 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.374 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.375 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.375 257641 DEBUG nova.virt.hardware [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
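The topology lines above are the default path: with no flavor or image constraints (all limits and preferences 0:0:0), nova enumerates every sockets x cores x threads factorization of the vCPU count under the 65536 caps and sorts the candidates, so one vCPU yields exactly one topology, 1:1:1. A simplified illustration of that enumeration (not nova's actual code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every (sockets, cores, threads) triple whose product equals vcpus.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single candidate logged above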
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.379 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014859703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.797 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.822 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.826 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:48 np0005539550 nova_compute[257631]: 2025-11-29 08:16:48.868 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:16:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2853261430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.242 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
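ceph mon dump --format=json is how nova discovers the monitor endpoints that reappear below as <host name="..." port="6789"/> elements in the disk XML. A sketch of that extraction; the "mons"/"public_addr" keys follow the mon dump JSON format, though stripping a trailing "/nonce" suffix is an assumption about this cluster's output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    # Each mon reports an address like "192.168.122.100:6789/0"; drop the nonce.
    hosts = [m["public_addr"].split("/")[0] for m in monmap["mons"]]
    print(hosts)  # e.g. ['192.168.122.100:6789', '192.168.122.102:6789', ...]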
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.245 257641 DEBUG nova.virt.libvirt.vif [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-671911008',display_name='tempest-ServerActionsTestOtherB-server-671911008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-671911008',id=104,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-1uf85n06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=68c19565-6fe1-4c2c-927d-87f801074e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.245 257641 DEBUG nova.network.os_vif_util [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.246 257641 DEBUG nova.network.os_vif_util [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.248 257641 DEBUG nova.objects.instance [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.272 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <uuid>68c19565-6fe1-4c2c-927d-87f801074e18</uuid>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <name>instance-00000068</name>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestOtherB-server-671911008</nova:name>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:16:48</nova:creationTime>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <nova:port uuid="2d17165a-5bc8-402e-a08c-e6f21188fb1b">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <entry name="serial">68c19565-6fe1-4c2c-927d-87f801074e18</entry>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <entry name="uuid">68c19565-6fe1-4c2c-927d-87f801074e18</entry>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk.config">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:ae:bc:67"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <target dev="tap2d17165a-5b"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/console.log" append="off"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:16:49 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:16:49 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:16:49 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:16:49 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
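
The block above is the complete libvirt domain XML that Nova's driver rendered in _get_guest_xml before defining the guest. The same document can be pulled back from a running hypervisor with the libvirt Python bindings; a minimal sketch, assuming libvirt-python is installed, using the domain name from the <name> element above:

    # Minimal sketch: fetch a guest's live XML via libvirt-python.
    # Assumes libvirt-python is installed; the domain name is taken
    # from the <name> element in the log above.
    import libvirt

    conn = libvirt.open('qemu:///system')          # same URI Nova's libvirt driver uses by default
    dom = conn.lookupByName('instance-00000068')   # <name> from the domain XML
    print(dom.XMLDesc(0))                          # 0 = live definition, no extra flags
    conn.close()

The CLI equivalent is virsh dumpxml instance-00000068.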
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.275 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Preparing to wait for external event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.276 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.277 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.277 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
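
The three oslo.concurrency lines above show Nova serializing access to its per-instance event table: the "<uuid>-events" lock is acquired around _create_or_get_event and released within a millisecond. A minimal sketch of the same lockutils primitive, with the lock name copied from the log:

    # Minimal sketch of the oslo.concurrency pattern visible above:
    # acquire a named in-process lock, do work, release on exit.
    from oslo_concurrency import lockutils

    # Lock name mirrors the log; any string works.
    with lockutils.lock('68c19565-6fe1-4c2c-927d-87f801074e18-events'):
        pass  # critical section: create or fetch the pending event

    # The decorator form used throughout Nova:
    @lockutils.synchronized('some-lock-name')
    def create_or_get_event():
        ...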
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.278 257641 DEBUG nova.virt.libvirt.vif [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-671911008',display_name='tempest-ServerActionsTestOtherB-server-671911008',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-671911008',id=104,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-1uf85n06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:16:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=68c19565-6fe1-4c2c-927d-87f801074e18,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.279 257641 DEBUG nova.network.os_vif_util [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.280 257641 DEBUG nova.network.os_vif_util [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.280 257641 DEBUG os_vif [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
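
At this point Nova has converted its network-info dict into an os-vif VIFOpenVSwitch object (the two nova.network.os_vif_util lines) and handed it to os_vif.plug(). A minimal sketch of that call sequence, with field values copied from the log; this is an illustrative reconstruction under the assumption that the os-vif package and its ovs plugin are installed, not Nova's actual code path:

    # Minimal sketch of the os-vif call sequence logged above.
    import os_vif
    from os_vif.objects import instance_info, network, vif as vif_obj

    os_vif.initialize()  # load the installed plugins (ovs, linux_bridge, ...)

    net = network.Network(id='2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06',
                          bridge='br-int')
    profile = vif_obj.VIFPortProfileOpenVSwitch(
        interface_id='2d17165a-5bc8-402e-a08c-e6f21188fb1b')
    my_vif = vif_obj.VIFOpenVSwitch(
        id='2d17165a-5bc8-402e-a08c-e6f21188fb1b',
        address='fa:16:3e:ae:bc:67',
        vif_name='tap2d17165a-5b',
        bridge_name='br-int',
        port_profile=profile,
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='68c19565-6fe1-4c2c-927d-87f801074e18',
        name='instance-00000068')

    os_vif.plug(my_vif, inst)   # what the "Plugging vif ..." line records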
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.281 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.282 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.282 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:49.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.286 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.286 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d17165a-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.287 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d17165a-5b, col_values=(('external_ids', {'iface-id': '2d17165a-5bc8-402e-a08c-e6f21188fb1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:bc:67', 'vm-uuid': '68c19565-6fe1-4c2c-927d-87f801074e18'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:49 np0005539550 NetworkManager[49039]: <info>  [1764404209.2896] manager: (tap2d17165a-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.291 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.297 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.298 257641 INFO os_vif [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b')#033[00m
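
The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp transactions the os-vif ovs plugin ran against the local ovsdb-server: idempotently ensure br-int exists, add the tap port, and stamp the Interface row with the external_ids that let ovn-controller match the port to its Neutron logical port. A standalone sketch of the same transaction, assuming ovsdbapp is installed and the default ovsdb socket path:

    # Minimal ovsdbapp sketch reproducing the transaction logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/var/run/openvswitch/db.sock'   # assumption: default socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap2d17165a-5b', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2d17165a-5b',
            ('external_ids', {'iface-id': '2d17165a-5bc8-402e-a08c-e6f21188fb1b',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:ae:bc:67'})))

The 'iface-id' in external_ids is what ovn-controller uses a moment later to claim the logical port (the binding|INFO lines further down).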
Nov 29 03:16:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 250 MiB data, 920 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.403 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.405 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.405 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:ae:bc:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.406 257641 INFO nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Using config drive#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.432 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.976 257641 INFO nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Creating config drive at /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config#033[00m
Nov 29 03:16:49 np0005539550 nova_compute[257631]: 2025-11-29 08:16:49.987 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqbnzhpql execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:49.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.126 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqbnzhpql" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.172 257641 DEBUG nova.storage.rbd_utils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.176 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.363 257641 DEBUG oslo_concurrency.processutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config 68c19565-6fe1-4c2c-927d-87f801074e18_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.364 257641 INFO nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Deleting local config drive /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config because it was imported into RBD.#033[00m
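
The lines from 08:16:49.987 to 08:16:50.364 show the config-drive path when disks live in Ceph: Nova builds a config-2 ISO with mkisofs from a staged temporary directory, imports it into the vms pool as <uuid>_disk.config with rbd import, then deletes the local file. The same two commands, driven through oslo.concurrency as in the log (paths copied verbatim; /tmp/tmpqbnzhpql was Nova's transient temp directory at the time):

    # Minimal sketch of the two commands recorded above, via oslo.concurrency.
    from oslo_concurrency import processutils

    iso = '/var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18/disk.config'

    # 1. Build the config-2 ISO9660 image from the staged metadata directory.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpqbnzhpql')

    # 2. Import the ISO into the Ceph 'vms' pool as <uuid>_disk.config.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '68c19565-6fe1-4c2c-927d-87f801074e18_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')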
Nov 29 03:16:50 np0005539550 kernel: tap2d17165a-5b: entered promiscuous mode
Nov 29 03:16:50 np0005539550 NetworkManager[49039]: <info>  [1764404210.4243] manager: (tap2d17165a-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Nov 29 03:16:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:50Z|00409|binding|INFO|Claiming lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b for this chassis.
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.440 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:50Z|00410|binding|INFO|2d17165a-5bc8-402e-a08c-e6f21188fb1b: Claiming fa:16:3e:ae:bc:67 10.100.0.9
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.446 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539550 systemd-udevd[321784]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:16:50 np0005539550 NetworkManager[49039]: <info>  [1764404210.4744] device (tap2d17165a-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:16:50 np0005539550 NetworkManager[49039]: <info>  [1764404210.4752] device (tap2d17165a-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:16:50 np0005539550 systemd-machined[216673]: New machine qemu-49-instance-00000068.
Nov 29 03:16:50 np0005539550 systemd[1]: Started Virtual Machine qemu-49-instance-00000068.
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.503 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bc:67 10.100.0.9'], port_security=['fa:16:3e:ae:bc:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '68c19565-6fe1-4c2c-927d-87f801074e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2d17165a-5bc8-402e-a08c-e6f21188fb1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.505 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2d17165a-5bc8-402e-a08c-e6f21188fb1b in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis#033[00m
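
The metadata agent noticed the binding through ovsdbapp's event framework: a RowEvent subscribed to Port_Binding updates matched the row once its chassis column was set. A minimal sketch of that pattern (the class name mirrors the log; the commented registration line assumes an already-connected southbound IDL exposing a notify handler):

    # Minimal sketch of the ovsdbapp RowEvent pattern behind the
    # "Matched UPDATE: PortBindingUpdatedEvent" line above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to any Port_Binding row (no extra conditions).
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'row' carries the new column values; 'old' only the changed ones.
            print('port %s now bound to chassis %s'
                  % (row.logical_port, row.chassis))

    # idl.notify_handler.watch_event(PortBindingUpdatedEvent())  # registration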
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.506 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
Nov 29 03:16:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:50Z|00411|binding|INFO|Setting lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b ovn-installed in OVS
Nov 29 03:16:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:50Z|00412|binding|INFO|Setting lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b up in Southbound
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.518 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.518 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8eab97de-d92e-4c28-8c9d-9a96cf1174f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.519 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b704d3a-d1 in ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.521 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b704d3a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.521 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd1657c-3e35-4828-ae01-3f82262ee530]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.522 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c2db87df-b78c-4f7f-8c09-d41c59843fd2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.534 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[84e6a494-967c-4aa3-b246-b0f9288e9bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.547 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a63ae924-1c39-477b-9075-e9d40df212a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.577 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[263de01d-7c08-400d-990a-93517809fa4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 NetworkManager[49039]: <info>  [1764404210.5854] manager: (tap2b704d3a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/182)
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.584 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[68b73c75-bb0d-4759-a64c-cd572e20d9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.622 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f93cf4-6ad0-49e9-bb44-5170c8017811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.625 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1d775c-a6b1-4acd-867f-ad192d7166d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 NetworkManager[49039]: <info>  [1764404210.6495] device (tap2b704d3a-d0): carrier: link connected
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.654 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c91016-9453-452b-936c-dfbe30eedcd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.670 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[beed9a4b-68a8-40eb-94ad-6b2d8ebd2d45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725831, 'reachable_time': 35106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321822, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.685 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a1be8d-0b5b-4125-9327-77a96530a652]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:d799'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725831, 'tstamp': 725831}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321823, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.703 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[24fad392-ea24-4ff5-9a30-a7b2ac492cc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725831, 'reachable_time': 35106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321824, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
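
The two RTM_NEWLINK dumps and the RTM_NEWADDR reply above are pyroute2 netlink messages returned through the privsep daemon while the agent wires a veth pair: tap2b704d3a-d0 stays in the root namespace (it is plugged into br-int just below), and tap2b704d3a-d1 ends up inside ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06. A minimal pyroute2 sketch of that step (requires root; interface and namespace names copied from the log):

    # Minimal pyroute2 sketch of the veth-into-namespace step that
    # produced the RTM_NEWLINK dumps above.
    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06'
    if NS not in netns.listnetns():   # the agent had already created it
        netns.create(NS)

    with IPRoute() as ipr:
        # Create the pair, then move the -d1 end into the metadata namespace.
        ipr.link('add', ifname='tap2b704d3a-d0', kind='veth',
                 peer='tap2b704d3a-d1')
        peer = ipr.link_lookup(ifname='tap2b704d3a-d1')[0]
        ipr.link('set', index=peer, net_ns_fd=NS)
        host = ipr.link_lookup(ifname='tap2b704d3a-d0')[0]
        ipr.link('set', index=host, state='up')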
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.734 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b52fa2-32f0-44ab-b6e3-9f5cfcc9d33f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.785 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a2af61-37fd-4c03-b119-b59113b0b141]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.787 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.787 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.788 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.789 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539550 NetworkManager[49039]: <info>  [1764404210.7902] manager: (tap2b704d3a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Nov 29 03:16:50 np0005539550 kernel: tap2b704d3a-d0: entered promiscuous mode
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.793 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.795 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:50Z|00413|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:50 np0005539550 nova_compute[257631]: 2025-11-29 08:16:50.812 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.813 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.814 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[78df7f5f-c8bf-455b-827c-3090d3ad3c68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.814 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.pid.haproxy
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:16:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:16:50.815 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'env', 'PROCESS_TAG=haproxy-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
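
The rootwrap invocation above is the agent spawning a per-network haproxy inside the metadata namespace, using the configuration it just rendered: bind 169.254.169.254:80, forward requests to the /var/lib/neutron/metadata_proxy socket, and tag each one with an X-OVN-Network-ID header. Reduced to plain subprocess (neutron-rootwrap replaced with a direct ip netns exec purely for illustration; requires root):

    # Sketch of the namespace-scoped haproxy launch shown in the
    # rootwrap line above.
    import subprocess

    NET_ID = '2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06'
    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-%s' % NET_ID,
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % NET_ID],
        check=True)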
Nov 29 03:16:51 np0005539550 podman[321889]: 2025-11-29 08:16:51.17718305 +0000 UTC m=+0.048060843 container create e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:16:51 np0005539550 systemd[1]: Started libpod-conmon-e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de.scope.
Nov 29 03:16:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:16:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ed2f6fcff3901942db69a412a6da7462678fcce5e0c9cc3155e11b252b1a55d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:16:51 np0005539550 podman[321889]: 2025-11-29 08:16:51.151513848 +0000 UTC m=+0.022391661 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.248 257641 DEBUG nova.network.neutron [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updated VIF entry in instance network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.249 257641 DEBUG nova.network.neutron [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.256 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404211.2562737, 68c19565-6fe1-4c2c-927d-87f801074e18 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.257 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Started (Lifecycle Event)#033[00m
Nov 29 03:16:51 np0005539550 podman[321889]: 2025-11-29 08:16:51.258382284 +0000 UTC m=+0.129260087 container init e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:16:51 np0005539550 podman[321889]: 2025-11-29 08:16:51.264784597 +0000 UTC m=+0.135662390 container start e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:16:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:16:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:51.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
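
The anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102, which repeat roughly every two seconds through this section, look like load-balancer health checks rather than client traffic. A hypothetical parser for the beast access-log lines (field layout read off the samples here, not from radosgw documentation):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        m = BEAST.search(line)
        return m.groupdict() if m else None
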
Nov 29 03:16:51 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321910]: [NOTICE]   (321917) : New worker (321919) forked
Nov 29 03:16:51 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321910]: [NOTICE]   (321917) : Loading success.
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.302 257641 DEBUG oslo_concurrency.lockutils [req-6f817b97-7abc-4b5c-bfee-06e48792ac1c req-f8f4a2f1-edc6-4e96-9b8b-7aff4dc4cb24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.304 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.309 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404211.2563984, 68c19565-6fe1-4c2c-927d-87f801074e18 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.309 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Paused (Lifecycle Event)
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.343 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.346 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:16:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 224 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 236 op/s
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.381 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] During sync_power_state the instance has a pending task (spawning). Skip.
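
The sync message at 08:16:51.346 compares a DB power_state of 0 with a VM power_state of 3; those integers follow nova's power_state constants (0=NOSTATE, 1=RUNNING, 3=PAUSED, 4=SHUTDOWN, 6=CRASHED, 7=SUSPENDED). A sketch of the skip rule the two messages imply, not nova's actual handler:

    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    def should_sync(db_power_state, vm_power_state, task_state):
        if task_state is not None:      # "spawning" in the lines above
            return False                # -> "pending task ... Skip."
        return db_power_state != vm_power_state
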
Nov 29 03:16:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:51.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:51 np0005539550 nova_compute[257631]: 2025-11-29 08:16:51.994 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:52 np0005539550 podman[321930]: 2025-11-29 08:16:52.311985798 +0000 UTC m=+0.047984380 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:16:52 np0005539550 podman[321929]: 2025-11-29 08:16:52.315807686 +0000 UTC m=+0.054504497 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2)
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.441 257641 DEBUG nova.compute.manager [req-39d3d826-4318-4bc3-8dfe-e97427eb51e9 req-696c1d8a-8ad1-4e70-9926-4917c4984527 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.443 257641 DEBUG oslo_concurrency.lockutils [req-39d3d826-4318-4bc3-8dfe-e97427eb51e9 req-696c1d8a-8ad1-4e70-9926-4917c4984527 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.443 257641 DEBUG oslo_concurrency.lockutils [req-39d3d826-4318-4bc3-8dfe-e97427eb51e9 req-696c1d8a-8ad1-4e70-9926-4917c4984527 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.444 257641 DEBUG oslo_concurrency.lockutils [req-39d3d826-4318-4bc3-8dfe-e97427eb51e9 req-696c1d8a-8ad1-4e70-9926-4917c4984527 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.444 257641 DEBUG nova.compute.manager [req-39d3d826-4318-4bc3-8dfe-e97427eb51e9 req-696c1d8a-8ad1-4e70-9926-4917c4984527 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Processing event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.445 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.449 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404212.4493482, 68c19565-6fe1-4c2c-927d-87f801074e18 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.450 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Resumed (Lifecycle Event)
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.452 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.456 257641 INFO nova.virt.libvirt.driver [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance spawned successfully.
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.457 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.487 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.495 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.500 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.502 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.502 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.503 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.503 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.504 257641 DEBUG nova.virt.libvirt.driver [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.547 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.636 257641 INFO nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Took 12.31 seconds to spawn the instance on the hypervisor.
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.637 257641 DEBUG nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.745 257641 INFO nova.compute.manager [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Took 13.87 seconds to build instance.
Nov 29 03:16:52 np0005539550 nova_compute[257631]: 2025-11-29 08:16:52.814 257641 DEBUG oslo_concurrency.lockutils [None req-91bbad4b-c111-4a09-b5ae-ce810e1c00d4 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
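
Both the "-events" lock and the instance lock released at 08:16:52.814 come from oslo.concurrency's named-lock helpers: everything keyed on the same name serializes. A sketch of the same pattern (assumes oslo.concurrency is installed; the lock names are the ones in this log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("68c19565-6fe1-4c2c-927d-87f801074e18")
    def locked_build():
        pass  # the _locked_do_build_and_run_instance body held this 14.051s

    # or as a context manager, matching the refresh_cache acquire/release pairs:
    with lockutils.lock("refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18"):
        pass  # cache refreshes serialize on this name
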
Nov 29 03:16:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:53.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 237 MiB data, 907 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.5 MiB/s wr, 140 op/s
Nov 29 03:16:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:54 np0005539550 nova_compute[257631]: 2025-11-29 08:16:54.310 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:54 np0005539550 nova_compute[257631]: 2025-11-29 08:16:54.641 257641 DEBUG nova.compute.manager [req-a623c19d-edd4-4c8e-a86b-afa73358d2b9 req-7fb0d29d-4c67-4bb3-ba78-7887e408a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:54 np0005539550 nova_compute[257631]: 2025-11-29 08:16:54.641 257641 DEBUG oslo_concurrency.lockutils [req-a623c19d-edd4-4c8e-a86b-afa73358d2b9 req-7fb0d29d-4c67-4bb3-ba78-7887e408a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:16:54 np0005539550 nova_compute[257631]: 2025-11-29 08:16:54.642 257641 DEBUG oslo_concurrency.lockutils [req-a623c19d-edd4-4c8e-a86b-afa73358d2b9 req-7fb0d29d-4c67-4bb3-ba78-7887e408a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:16:54 np0005539550 nova_compute[257631]: 2025-11-29 08:16:54.642 257641 DEBUG oslo_concurrency.lockutils [req-a623c19d-edd4-4c8e-a86b-afa73358d2b9 req-7fb0d29d-4c67-4bb3-ba78-7887e408a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:16:54 np0005539550 nova_compute[257631]: 2025-11-29 08:16:54.642 257641 DEBUG nova.compute.manager [req-a623c19d-edd4-4c8e-a86b-afa73358d2b9 req-7fb0d29d-4c67-4bb3-ba78-7887e408a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] No waiting events found dispatching network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:16:54 np0005539550 nova_compute[257631]: 2025-11-29 08:16:54.642 257641 WARNING nova.compute.manager [req-a623c19d-edd4-4c8e-a86b-afa73358d2b9 req-7fb0d29d-4c67-4bb3-ba78-7887e408a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received unexpected event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b for instance with vm_state active and task_state None.
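
This sequence shows both sides of nova's external-event handshake: the first network-vif-plugged at 08:16:52 satisfied a registered waiter ("Instance event wait completed"), while the repeat at 08:16:54 found no waiter and was logged as unexpected. An illustration of that handshake with a plain threading.Event (nova's own wait_for_instance_event is eventlet-based; this is not its code):

    import threading

    _pending = {}

    def prepare(tag):                  # registered before plugging the VIF
        ev = threading.Event()
        _pending[tag] = ev
        return ev

    def notify(tag):                   # called when neutron's event arrives
        ev = _pending.pop(tag, None)
        if ev is None:                 # duplicate -> "No waiting events found"
            return False
        ev.set()
        return True

    # usage: ev = prepare("network-vif-plugged-2d17165a-..."); plug; ev.wait(300)
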
Nov 29 03:16:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:55.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 290 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.5 MiB/s wr, 171 op/s
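
The recurring pgmap DBG lines are the cluster heartbeat: pg count and states, data/used/available capacity, and client throughput. A hypothetical reader for them, with the fields inferred from the samples in this log:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
        r"(?P<data>[^,]+) data, (?P<used>[^,]+) used, (?P<avail>[^;]+) avail"
        r"(?:; (?P<rates>.+))?"
    )

    def parse_pgmap(line):
        m = PGMAP.search(line)
        return m.groupdict() if m else None
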
Nov 29 03:16:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:55.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:16:56 np0005539550 nova_compute[257631]: 2025-11-29 08:16:56.996 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:16:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:57.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:16:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 306 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Nov 29 03:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:16:58.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:16:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:16:58 np0005539550 NetworkManager[49039]: <info>  [1764404218.2801] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Nov 29 03:16:58 np0005539550 NetworkManager[49039]: <info>  [1764404218.2811] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.281 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.444 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:58Z|00414|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.463 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.795 257641 DEBUG nova.compute.manager [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.795 257641 DEBUG nova.compute.manager [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing instance network info cache due to event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.795 257641 DEBUG oslo_concurrency.lockutils [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.796 257641 DEBUG oslo_concurrency.lockutils [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:16:58 np0005539550 nova_compute[257631]: 2025-11-29 08:16:58.796 257641 DEBUG nova.network.neutron [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:16:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:16:59Z|00415|binding|INFO|Releasing lport 299ca1be-be1b-47d9-8865-4316d34012e3 from this chassis (sb_readonly=0)
Nov 29 03:16:59 np0005539550 nova_compute[257631]: 2025-11-29 08:16:59.189 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:16:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:16:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:16:59.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:16:59 np0005539550 nova_compute[257631]: 2025-11-29 08:16:59.312 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:16:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 306 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.6 MiB/s wr, 215 op/s
Nov 29 03:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:16:59
Nov 29 03:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'vms', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.control']
Nov 29 03:16:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
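
The balancer pass above ran in upmap mode with a 5% misplaced ceiling, evaluated eleven pools, and prepared 0 of an allowed 10 changes, i.e. placement was already optimal. A minimal sketch for querying the same module (assumes a ceph CLI and a keyring with mgr access on the node):

    import json, subprocess

    def balancer_status():
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)   # e.g. the "mode" field is "upmap" above
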
Nov 29 03:17:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:00.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:00 np0005539550 nova_compute[257631]: 2025-11-29 08:17:00.648 257641 DEBUG nova.network.neutron [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updated VIF entry in instance network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:17:00 np0005539550 nova_compute[257631]: 2025-11-29 08:17:00.649 257641 DEBUG nova.network.neutron [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:17:00 np0005539550 nova_compute[257631]: 2025-11-29 08:17:00.675 257641 DEBUG oslo_concurrency.lockutils [req-a308460a-4a53-45b2-9f5e-f65aaf98b51f req-c31aa70d-3b4a-4309-b2cf-b4fcb80f4537 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:17:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:01.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 306 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 290 op/s
Nov 29 03:17:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:02 np0005539550 nova_compute[257631]: 2025-11-29 08:17:01.999 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:02.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 306 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.4 MiB/s wr, 273 op/s
Nov 29 03:17:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.003000076s ======
Nov 29 03:17:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:04.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
Nov 29 03:17:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:04.032 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:17:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:04.033 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:17:04 np0005539550 nova_compute[257631]: 2025-11-29 08:17:04.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:04 np0005539550 nova_compute[257631]: 2025-11-29 08:17:04.313 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:04 np0005539550 podman[322029]: 2025-11-29 08:17:04.36667985 +0000 UTC m=+0.101498071 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:17:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:05.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 306 MiB data, 943 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.8 MiB/s wr, 248 op/s
Nov 29 03:17:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:06.035 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
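
The lines at 08:17:04.032-033 and 08:17:06.035 are one mechanism: SB_Global's nb_cfg counter moved from 31 to 32, the metadata agent deliberately waited two seconds, then acknowledged the new value by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private external_ids. An illustration of that delayed ack, data shape only, not the agent's ovsdbapp code:

    import threading

    def on_sb_global_update(nb_cfg, chassis, delay=2.0):
        def ack():
            # stands in for the DbSetCommand on Chassis_Private above
            chassis["external_ids"]["neutron:ovn-metadata-sb-cfg"] = str(nb_cfg)
        threading.Timer(delay, ack).start()

    chassis = {"external_ids": {}}
    on_sb_global_update(32, chassis)
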
Nov 29 03:17:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:07 np0005539550 nova_compute[257631]: 2025-11-29 08:17:07.001 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 335 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.7 MiB/s wr, 238 op/s
Nov 29 03:17:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:07Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:bc:67 10.100.0.9
Nov 29 03:17:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:07Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:bc:67 10.100.0.9
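
The two pinctrl lines show OVN answering the guest's DHCP locally: an OFFER then an ACK of 10.100.0.9 to fa:16:3e:ae:bc:67, exactly the fixed IP and MAC nova cached for the port earlier in this section. A hypothetical cross-check of an ACK line against that cache:

    import re

    ACK = re.compile(r"DHCPACK (?P<mac>[0-9a-f:]{17}) (?P<ip>[\d.]+)")
    cached = {"fa:16:3e:ae:bc:67": "10.100.0.9"}   # from the cache entry above

    def ack_matches_cache(line):
        m = ACK.search(line)
        return bool(m) and cached.get(m["mac"]) == m["ip"]
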
Nov 29 03:17:07 np0005539550 nova_compute[257631]: 2025-11-29 08:17:07.834 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:17:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
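
The rbd_support handlers above reload the mirror-snapshot and trash-purge schedules for each RBD pool (vms, volumes, backups, images); the empty start_after markers indicate full reloads. A sketch reading those schedules back (assumes the rbd CLI is available on the node; pool names are the ones from this log):

    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive"],
            capture_output=True, text=True).stdout
        print(pool, out.strip() or "(no schedules)")
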
Nov 29 03:17:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:09.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:09 np0005539550 nova_compute[257631]: 2025-11-29 08:17:09.316 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 352 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 186 op/s
Nov 29 03:17:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:10.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 400 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 6.4 MiB/s wr, 281 op/s
Nov 29 03:17:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:12 np0005539550 nova_compute[257631]: 2025-11-29 08:17:12.004 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:12.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:13.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 404 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.6 MiB/s wr, 224 op/s
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:14.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:14 np0005539550 nova_compute[257631]: 2025-11-29 08:17:14.320 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:17:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:15.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 376 MiB data, 1020 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 7.2 MiB/s wr, 255 op/s
Nov 29 03:17:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:16.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:16 np0005539550 nova_compute[257631]: 2025-11-29 08:17:16.263 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 963bc210-a0b1-4a5f-aec8-1b979a47610c does not exist
Nov 29 03:17:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 569da2e3-3da2-47a7-bd50-139a4826e753 does not exist
Nov 29 03:17:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b3407cb6-0fa2-4a05-a7a6-dc4ee27fe596 does not exist
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:17:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:17:17 np0005539550 nova_compute[257631]: 2025-11-29 08:17:17.006 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 372 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 8.2 MiB/s wr, 287 op/s
Nov 29 03:17:17 np0005539550 podman[322504]: 2025-11-29 08:17:17.571535794 +0000 UTC m=+0.045063046 container create 82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:17 np0005539550 systemd[1]: Started libpod-conmon-82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86.scope.
Nov 29 03:17:17 np0005539550 podman[322504]: 2025-11-29 08:17:17.552050409 +0000 UTC m=+0.025577691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:17:17 np0005539550 podman[322504]: 2025-11-29 08:17:17.669328499 +0000 UTC m=+0.142855781 container init 82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:17:17 np0005539550 podman[322504]: 2025-11-29 08:17:17.676207844 +0000 UTC m=+0.149735096 container start 82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:17:17 np0005539550 podman[322504]: 2025-11-29 08:17:17.680300838 +0000 UTC m=+0.153828130 container attach 82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:17:17 np0005539550 systemd[1]: libpod-82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86.scope: Deactivated successfully.
Nov 29 03:17:17 np0005539550 compassionate_cohen[322521]: 167 167
Nov 29 03:17:17 np0005539550 conmon[322521]: conmon 82c1117995c479b8266e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86.scope/container/memory.events
Nov 29 03:17:17 np0005539550 podman[322504]: 2025-11-29 08:17:17.685345907 +0000 UTC m=+0.158873159 container died 82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:17:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7d03dcbcb1c084f42cc6df96da86b54abc3ce789ea4be17106ecc4d35608ed11-merged.mount: Deactivated successfully.
Nov 29 03:17:17 np0005539550 podman[322504]: 2025-11-29 08:17:17.728943875 +0000 UTC m=+0.202471127 container remove 82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_cohen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:17:17 np0005539550 systemd[1]: libpod-conmon-82c1117995c479b8266ede7656fc4bbd6144845ec75f0cff9e85e71ea481de86.scope: Deactivated successfully.
Nov 29 03:17:17 np0005539550 podman[322545]: 2025-11-29 08:17:17.925582284 +0000 UTC m=+0.047277513 container create 1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:17:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:17:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:17:17 np0005539550 systemd[1]: Started libpod-conmon-1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc.scope.
Nov 29 03:17:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:17:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab55766f790d9402f0b0f0ccac081df32c374d709407b2a973bf954b6725edd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab55766f790d9402f0b0f0ccac081df32c374d709407b2a973bf954b6725edd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab55766f790d9402f0b0f0ccac081df32c374d709407b2a973bf954b6725edd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab55766f790d9402f0b0f0ccac081df32c374d709407b2a973bf954b6725edd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab55766f790d9402f0b0f0ccac081df32c374d709407b2a973bf954b6725edd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:18 np0005539550 podman[322545]: 2025-11-29 08:17:17.905306839 +0000 UTC m=+0.027002088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:18 np0005539550 podman[322545]: 2025-11-29 08:17:18.007070956 +0000 UTC m=+0.128766205 container init 1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:17:18 np0005539550 podman[322545]: 2025-11-29 08:17:18.016087155 +0000 UTC m=+0.137782394 container start 1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:17:18 np0005539550 podman[322545]: 2025-11-29 08:17:18.021348399 +0000 UTC m=+0.143043668 container attach 1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:17:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:18.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.475 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.477 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.543 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.623 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.623 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.632 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.632 257641 INFO nova.compute.claims [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:17:18 np0005539550 nova_compute[257631]: 2025-11-29 08:17:18.791 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:18 np0005539550 awesome_galois[322562]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:17:18 np0005539550 awesome_galois[322562]: --> relative data size: 1.0
Nov 29 03:17:18 np0005539550 awesome_galois[322562]: --> All data devices are unavailable
Nov 29 03:17:18 np0005539550 systemd[1]: libpod-1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc.scope: Deactivated successfully.
Nov 29 03:17:18 np0005539550 podman[322545]: 2025-11-29 08:17:18.848093506 +0000 UTC m=+0.969788735 container died 1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:17:18 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cab55766f790d9402f0b0f0ccac081df32c374d709407b2a973bf954b6725edd-merged.mount: Deactivated successfully.
Nov 29 03:17:18 np0005539550 podman[322545]: 2025-11-29 08:17:18.931002624 +0000 UTC m=+1.052697853 container remove 1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:18 np0005539550 systemd[1]: libpod-conmon-1f90b2f1a915c000e8fdeda3aab58730ba78bb0fc3071740766dec8bfbf982dc.scope: Deactivated successfully.
Nov 29 03:17:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:18.949 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:18.950 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:18.951 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
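The acquire/acquired/released triplet above is one ProcessMonitor health-check pass serialized behind an oslo.concurrency named lock. An illustrative (not neutron's actual source) use of the same primitive:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs under the named lock; concurrent callers produce exactly the
        # acquire/wait/release lines logged by ovn_metadata_agent above.
        pass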
Nov 29 03:17:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031667668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.231 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
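nova's RBD backend refreshes pool capacity by shelling out to ceph df, the 0.439 s round trip logged just above (the corresponding mon-side audit entry for client.openstack appears two lines earlier). A sketch that parses the same report, assuming the reef-era JSON layout; key names can shift between Ceph releases:

    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    report = json.loads(subprocess.run(cmd, capture_output=True,
                                       text=True, check=True).stdout)
    for pool in report["pools"]:
        # "stored" (logical bytes) is present in recent releases; fall back
        # to "bytes_used" on older ones.
        stats = pool["stats"]
        print(pool["name"], stats.get("stored", stats.get("bytes_used")))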
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.239 257641 DEBUG nova.compute.provider_tree [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.258 257641 DEBUG nova.scheduler.client.report [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
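The inventory dict above fixes what placement will schedule on this node: effective capacity per resource class is (total - reserved) * allocation_ratio. Checking the arithmetic:

    # Effective capacity implied by the inventory logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1 schedulable units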
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.282 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.283 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:17:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:19.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.335 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.335 257641 DEBUG nova.network.neutron [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 372 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.0 MiB/s wr, 260 op/s
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.366 257641 INFO nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.371 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.385 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:17:19 np0005539550 podman[322753]: 2025-11-29 08:17:19.539054392 +0000 UTC m=+0.040413669 container create 9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.569 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.570 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.570 257641 INFO nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Creating image(s)
Nov 29 03:17:19 np0005539550 systemd[1]: Started libpod-conmon-9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7.scope.
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.599 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:19 np0005539550 podman[322753]: 2025-11-29 08:17:19.520230143 +0000 UTC m=+0.021589400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.625 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:19 np0005539550 podman[322753]: 2025-11-29 08:17:19.636310904 +0000 UTC m=+0.137670151 container init 9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:17:19 np0005539550 podman[322753]: 2025-11-29 08:17:19.642361098 +0000 UTC m=+0.143720345 container start 9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:17:19 np0005539550 podman[322753]: 2025-11-29 08:17:19.645386635 +0000 UTC m=+0.146745902 container attach 9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:17:19 np0005539550 objective_pare[322781]: 167 167
Nov 29 03:17:19 np0005539550 systemd[1]: libpod-9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7.scope: Deactivated successfully.
Nov 29 03:17:19 np0005539550 conmon[322781]: conmon 9e9beac855eb762708cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7.scope/container/memory.events
Nov 29 03:17:19 np0005539550 podman[322753]: 2025-11-29 08:17:19.652005913 +0000 UTC m=+0.153365190 container died 9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.653 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.657 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-47ba38395c69a6f074ccef2ecdefa6e1987893781707e64a47bcc836d8ab9d5a-merged.mount: Deactivated successfully.
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.689 257641 DEBUG nova.policy [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5e3ade3963d47be97b545b2e3779b6b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b8899f76f554afc96bb2441424e5a77', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:17:19 np0005539550 podman[322753]: 2025-11-29 08:17:19.694696178 +0000 UTC m=+0.196055415 container remove 9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:17:19 np0005539550 systemd[1]: libpod-conmon-9e9beac855eb762708ccf30e63bcae4236321d032cded78a8665c667ae2d99b7.scope: Deactivated successfully.
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.727 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.728 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.729 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.729 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
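The qemu-img probe above runs under oslo's prlimit wrapper so a malformed base image cannot exhaust memory or CPU while being parsed (1 GiB address space, 30 s CPU). A hedged re-creation of that guard with plain setrlimit; the real wrapper re-execs through oslo_concurrency.prlimit rather than using preexec_fn:

    import resource
    import subprocess

    def qemu_img_info(path, as_bytes=1073741824, cpu_seconds=30):
        def set_limits():
            # Applied in the child before exec, mirroring
            # `python3 -m oslo_concurrency.prlimit --as=... --cpu=... -- ...`.
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        proc = subprocess.run(
            ["env", "LC_ALL=C", "LANG=C", "qemu-img", "info", path,
             "--force-share", "--output=json"],
            preexec_fn=set_limits, capture_output=True, text=True, check=True)
        return proc.stdout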
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.757 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:17:19 np0005539550 nova_compute[257631]: 2025-11-29 08:17:19.760 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:17:19 np0005539550 podman[322875]: 2025-11-29 08:17:19.882539914 +0000 UTC m=+0.036456668 container create f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yalow, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:17:19 np0005539550 systemd[1]: Started libpod-conmon-f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591.scope.
Nov 29 03:17:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:17:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfe771d90266e4f64f2e69662249438f208bed227a859ce403ffe9b7470ef9c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfe771d90266e4f64f2e69662249438f208bed227a859ce403ffe9b7470ef9c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfe771d90266e4f64f2e69662249438f208bed227a859ce403ffe9b7470ef9c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cfe771d90266e4f64f2e69662249438f208bed227a859ce403ffe9b7470ef9c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:19 np0005539550 podman[322875]: 2025-11-29 08:17:19.86705065 +0000 UTC m=+0.020967424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:19 np0005539550 podman[322875]: 2025-11-29 08:17:19.972105781 +0000 UTC m=+0.126022565 container init f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yalow, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:17:19 np0005539550 podman[322875]: 2025-11-29 08:17:19.979728185 +0000 UTC m=+0.133644939 container start f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007502973490135865 of space, bias 1.0, pg target 2.250892047040759 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29502365705844036 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:17:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
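The pg_autoscaler pass above is self-consistent if this cluster has 3 OSDs at the default mon_target_pg_per_osd of 100; both values are inferred here, neither is printed in the log. A quick check of the arithmetic against pool 'vms':

    usage_ratio = 0.007502973490135865  # "using ... of space" for pool 'vms'
    bias = 1.0
    # Assumed: 3 OSDs x 100 target PGs per OSD = 300 cluster-wide target PGs.
    pg_target = usage_ratio * bias * 100 * 3
    print(pg_target)  # 2.2508920470407595, matching "pg target 2.250892047040759"
    # Results below pg_num_min are then quantized up, hence "quantized to 32".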
Nov 29 03:17:19 np0005539550 podman[322875]: 2025-11-29 08:17:19.984543907 +0000 UTC m=+0.138460711 container attach f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yalow, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:20.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.079 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.143 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] resizing rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
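Taken with the import command above, the disk path is: import the cached base file into the vms pool as <instance-uuid>_disk, then grow it to the flavor's 1 GiB root disk. nova performs the resize through the librbd bindings (rbd_utils.resize) rather than the CLI; a rough CLI-equivalent sketch:

    import subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    disk = "187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk"
    auth = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *auth], check=True)
    # 1073741824 bytes == 1024 MiB; `rbd resize --size` takes MiB by default.
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", disk,
                    "--size", "1024", *auth], check=True)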
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.236 257641 DEBUG nova.objects.instance [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'migration_context' on Instance uuid 187cf9d2-4ae7-4113-af4c-2ec0ce078494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.252 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.253 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Ensure instance console log exists: /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.253 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.253 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.253 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:17:20 np0005539550 determined_yalow[322904]: {
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:    "0": [
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:        {
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "devices": [
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "/dev/loop3"
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            ],
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "lv_name": "ceph_lv0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "lv_size": "7511998464",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "name": "ceph_lv0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "tags": {
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.cluster_name": "ceph",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.crush_device_class": "",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.encrypted": "0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.osd_id": "0",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.type": "block",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:                "ceph.vdo": "0"
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            },
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "type": "block",
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:            "vg_name": "ceph_vg0"
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:        }
Nov 29 03:17:20 np0005539550 determined_yalow[322904]:    ]
Nov 29 03:17:20 np0005539550 determined_yalow[322904]: }
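The JSON emitted by the determined_yalow container above is a ceph-volume lvm list-style report: one OSD (osd id 0) backed by LV /dev/ceph_vg0/ceph_lv0 on /dev/loop3, with the cluster and OSD fsids carried as LV tags. A small sketch for pulling the OSD-to-device mapping out of such a report (reads the captured JSON from stdin):

    import json
    import sys

    # Feed this the JSON block logged above, e.g. piped in on stdin.
    report = json.load(sys.stdin)
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])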
Nov 29 03:17:20 np0005539550 systemd[1]: libpod-f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591.scope: Deactivated successfully.
Nov 29 03:17:20 np0005539550 conmon[322904]: conmon f6b37866c907100d5f65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591.scope/container/memory.events
Nov 29 03:17:20 np0005539550 podman[322875]: 2025-11-29 08:17:20.767971513 +0000 UTC m=+0.921888267 container died f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:17:20 np0005539550 nova_compute[257631]: 2025-11-29 08:17:20.770 257641 DEBUG nova.network.neutron [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Successfully created port: 105a56d1-8987-4906-83b8-38bf41ce2843 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:17:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5cfe771d90266e4f64f2e69662249438f208bed227a859ce403ffe9b7470ef9c-merged.mount: Deactivated successfully.
Nov 29 03:17:20 np0005539550 podman[322875]: 2025-11-29 08:17:20.928103514 +0000 UTC m=+1.082020258 container remove f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_yalow, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:17:20 np0005539550 systemd[1]: libpod-conmon-f6b37866c907100d5f65f3239f0a74c621fc9ca431f8c69f3c518ffcb6fb6591.scope: Deactivated successfully.
Nov 29 03:17:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:21.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
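The beast access-log lines above show anonymous "HEAD /" probes from 192.168.122.100/.102 returning 200, the usual signature of a load-balancer health check. A sketch reproducing one probe; the host and port are assumptions, since the log does not show where radosgw listens:

    import http.client

    # Anonymous HEAD / against the RGW frontend, as in the beast log.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # expect 200, matching http_status=200
    conn.close()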
Nov 29 03:17:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 350 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.4 MiB/s wr, 365 op/s
Nov 29 03:17:21 np0005539550 podman[323137]: 2025-11-29 08:17:21.605257948 +0000 UTC m=+0.080152479 container create e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chebyshev, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:17:21 np0005539550 systemd[1]: Started libpod-conmon-e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1.scope.
Nov 29 03:17:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:17:21 np0005539550 podman[323137]: 2025-11-29 08:17:21.574719911 +0000 UTC m=+0.049614522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:21 np0005539550 podman[323137]: 2025-11-29 08:17:21.681334722 +0000 UTC m=+0.156229273 container init e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:17:21 np0005539550 podman[323137]: 2025-11-29 08:17:21.688200426 +0000 UTC m=+0.163094947 container start e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:17:21 np0005539550 competent_chebyshev[323153]: 167 167
Nov 29 03:17:21 np0005539550 systemd[1]: libpod-e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1.scope: Deactivated successfully.
Nov 29 03:17:21 np0005539550 podman[323137]: 2025-11-29 08:17:21.693499721 +0000 UTC m=+0.168394262 container attach e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chebyshev, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:17:21 np0005539550 podman[323137]: 2025-11-29 08:17:21.693993954 +0000 UTC m=+0.168888475 container died e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:17:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cc306cf211ebc590d942922baa6bf247c3fa998bdbf43cc11271d9afd7701d62-merged.mount: Deactivated successfully.
Nov 29 03:17:21 np0005539550 podman[323137]: 2025-11-29 08:17:21.733287093 +0000 UTC m=+0.208181624 container remove e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:17:21 np0005539550 systemd[1]: libpod-conmon-e7f00db0b1966dd6bdf81ce1de29d78f0ff80fd5554ae34deadbbc30586f81b1.scope: Deactivated successfully.
Nov 29 03:17:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:21 np0005539550 podman[323176]: 2025-11-29 08:17:21.905840189 +0000 UTC m=+0.036614352 container create de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ardinghelli, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:21 np0005539550 systemd[1]: Started libpod-conmon-de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9.scope.
Nov 29 03:17:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:17:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbbc491c768ec7324575575324d1abca17e9f956e660b3ab62d2247d42dc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbbc491c768ec7324575575324d1abca17e9f956e660b3ab62d2247d42dc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbbc491c768ec7324575575324d1abca17e9f956e660b3ab62d2247d42dc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:17:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da0dbbc491c768ec7324575575324d1abca17e9f956e660b3ab62d2247d42dc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
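The "(0x7fffffff)" on the xfs remount lines above is the 32-bit time_t ceiling; a one-liner confirms which instant that is:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the classic year-2038 limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00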
Nov 29 03:17:21 np0005539550 podman[323176]: 2025-11-29 08:17:21.977556842 +0000 UTC m=+0.108331025 container init de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ardinghelli, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:17:21 np0005539550 podman[323176]: 2025-11-29 08:17:21.984913899 +0000 UTC m=+0.115688062 container start de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:17:21 np0005539550 podman[323176]: 2025-11-29 08:17:21.890068298 +0000 UTC m=+0.020842481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:17:21 np0005539550 podman[323176]: 2025-11-29 08:17:21.988379967 +0000 UTC m=+0.119154160 container attach de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:17:22 np0005539550 nova_compute[257631]: 2025-11-29 08:17:22.008 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]: {
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]:        "osd_id": 0,
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]:        "type": "bluestore"
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]:    }
Nov 29 03:17:22 np0005539550 competent_ardinghelli[323192]: }
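The JSON printed by competent_ardinghelli has the shape of ceph-volume raw/simple list output: OSD fsid keys mapping to the backing device. A sketch indexing it by OSD id; the literal is retyped from the lines above:

    import json

    raw = json.loads("""
    {
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }
    """)
    by_osd = {v["osd_id"]: v["device"] for v in raw.values()}
    print(by_osd[0])   # -> /dev/mapper/ceph_vg0-ceph_lv0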
Nov 29 03:17:22 np0005539550 systemd[1]: libpod-de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9.scope: Deactivated successfully.
Nov 29 03:17:22 np0005539550 podman[323176]: 2025-11-29 08:17:22.80784672 +0000 UTC m=+0.938620973 container died de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:17:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-da0dbbc491c768ec7324575575324d1abca17e9f956e660b3ab62d2247d42dc8-merged.mount: Deactivated successfully.
Nov 29 03:17:22 np0005539550 podman[323176]: 2025-11-29 08:17:22.873428877 +0000 UTC m=+1.004203050 container remove de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:17:22 np0005539550 systemd[1]: libpod-conmon-de98ee893b2a4cb58d0332038382b4546703432893b29315235932e5c9ac07f9.scope: Deactivated successfully.
Nov 29 03:17:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:17:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:17:22 np0005539550 nova_compute[257631]: 2025-11-29 08:17:22.922 257641 DEBUG nova.network.neutron [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Successfully updated port: 105a56d1-8987-4906-83b8-38bf41ce2843 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:17:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:22 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 62d87089-29a4-4178-aa2a-c91613828b23 does not exist
Nov 29 03:17:22 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d54da4b0-0637-4d49-876d-e658f64ee76f does not exist
Nov 29 03:17:22 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 09641558-daf0-432b-a320-450ecead6462 does not exist
Nov 29 03:17:22 np0005539550 nova_compute[257631]: 2025-11-29 08:17:22.936 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:22 np0005539550 nova_compute[257631]: 2025-11-29 08:17:22.936 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:22 np0005539550 nova_compute[257631]: 2025-11-29 08:17:22.936 257641 DEBUG nova.network.neutron [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:17:22 np0005539550 podman[323222]: 2025-11-29 08:17:22.938781769 +0000 UTC m=+0.093523369 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 03:17:22 np0005539550 podman[323215]: 2025-11-29 08:17:22.941324313 +0000 UTC m=+0.095287233 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd)
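Both health_status=healthy lines come from podman periodically running the configured check ('healthcheck': {'test': '/openstack/healthcheck'}). The same check can be forced by hand; a sketch, with the container name taken from the log and assuming the same privileges as the podman service:

    import subprocess

    # "podman healthcheck run" executes the container's configured check
    # and exits 0 when healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if result.returncode == 0 else "unhealthy")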
Nov 29 03:17:23 np0005539550 nova_compute[257631]: 2025-11-29 08:17:23.005 257641 DEBUG nova.compute.manager [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received event network-changed-105a56d1-8987-4906-83b8-38bf41ce2843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:23 np0005539550 nova_compute[257631]: 2025-11-29 08:17:23.006 257641 DEBUG nova.compute.manager [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Refreshing instance network info cache due to event network-changed-105a56d1-8987-4906-83b8-38bf41ce2843. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:23 np0005539550 nova_compute[257631]: 2025-11-29 08:17:23.006 257641 DEBUG oslo_concurrency.lockutils [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:17:23 np0005539550 nova_compute[257631]: 2025-11-29 08:17:23.111 257641 DEBUG nova.network.neutron [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:17:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:23.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 355 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.8 MiB/s wr, 260 op/s
Nov 29 03:17:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.375 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.855 257641 DEBUG nova.network.neutron [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Updating instance_info_cache with network_info: [{"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.898 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.898 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Instance network_info: |[{"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.899 257641 DEBUG oslo_concurrency.lockutils [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.899 257641 DEBUG nova.network.neutron [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Refreshing network info cache for port 105a56d1-8987-4906-83b8-38bf41ce2843 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.904 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Start _get_guest_xml network_info=[{"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
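The network_info blob nova keeps logging above is a JSON list of VIFs, each wrapping a network with subnets and typed IPs. A sketch pulling out the fixed address, with the structure copied from the log and trimmed to the fields used:

    import json

    network_info = json.loads("""
    [{"id": "105a56d1-8987-4906-83b8-38bf41ce2843",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.10", "type": "fixed"}]}]}}]
    """)
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(vif["id"], ip["address"])   # -> ... 10.100.0.10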
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.912 257641 WARNING nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.919 257641 DEBUG nova.virt.libvirt.host [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.920 257641 DEBUG nova.virt.libvirt.host [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.932 257641 DEBUG nova.virt.libvirt.host [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.934 257641 DEBUG nova.virt.libvirt.host [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.936 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.937 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.938 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.939 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.939 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.940 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.941 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.941 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.942 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.942 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.943 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.944 257641 DEBUG nova.virt.hardware [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
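The nova.virt.hardware walk above (limits 65536:65536:65536, 1 vcpu, a single 1:1:1 result) is nova enumerating every sockets x cores x threads factorization of the vcpu count. A simplified sketch of the idea, not nova's exact code:

    # Enumerate (sockets, cores, threads) triples whose product is vcpus,
    # within the given maxima -- the same search the log lines describe.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))   # -> [(1, 1, 1)], as logged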
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.949 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.986 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.986 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.987 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.987 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:17:24 np0005539550 nova_compute[257631]: 2025-11-29 08:17:24.988 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:25.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 340 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 237 op/s
Nov 29 03:17:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3017716401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928350139' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.418 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.429 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
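Both commands that processutils just timed are plain CLI invocations; a sketch of the mon dump call exactly as logged, which needs the same ceph.conf and "openstack" keyring to succeed:

    import json
    import subprocess

    # The subprocess nova ran above; mon dump JSON has a "mons" list.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    print([m["name"] for m in json.loads(out)["mons"]])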
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.461 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.465 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.579 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.580 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.751 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.752 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4185MB free_disk=20.840431213378906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.753 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.753 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.902 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 68c19565-6fe1-4c2c-927d-87f801074e18 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.902 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 187cf9d2-4ae7-4113-af4c-2ec0ce078494 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.903 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.903 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
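The arithmetic behind the final resource view above: free capacity is physical minus allocated, and the 768 MB used_ram is consistent with nova's default 512 MB reserved host memory plus the two 128 MB instances (an inference; the log does not state the reservation):

    # Numbers copied from the "Final resource view" log line.
    phys_ram_mb, used_ram_mb = 7680, 768
    total_vcpus, used_vcpus = 8, 2
    print(phys_ram_mb - used_ram_mb, "MB RAM free;",
          total_vcpus - used_vcpus, "vcpus free")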
Nov 29 03:17:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:17:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1807090561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.934 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.936 257641 DEBUG nova.virt.libvirt.vif [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-296895970',display_name='tempest-ServerActionsTestOtherB-server-296895970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-296895970',id=107,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-50ze3204',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:19Z,user_data=None,user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=187cf9d2-4ae7-4113-af4c-2ec0ce078494,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.937 257641 DEBUG nova.network.os_vif_util [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.938 257641 DEBUG nova.network.os_vif_util [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:85:a8,bridge_name='br-int',has_traffic_filtering=True,id=105a56d1-8987-4906-83b8-38bf41ce2843,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap105a56d1-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
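The two entries above show nova handing its VIF dict to os-vif and getting back a VIFOpenVSwitch. A minimal standard-library sketch of that field mapping, with values copied from the logged payload — the real conversion is nova_to_osvif_vif, as the log paths show; this only mirrors the attributes visible in the "Converted object" line:

    import json

    # Trimmed copy of the VIF payload logged above; only the fields that the
    # resulting VIFOpenVSwitch visibly carries are kept.
    vif = json.loads("""{
      "id": "105a56d1-8987-4906-83b8-38bf41ce2843",
      "address": "fa:16:3e:3a:85:a8",
      "devname": "tap105a56d1-89",
      "active": false,
      "preserve_on_delete": false,
      "details": {"port_filter": true, "bridge_name": "br-int"}
    }""")

    # Field mapping mirrored from the "Converted object" line above.
    osvif_like = {
        "id": vif["id"],
        "address": vif["address"],
        "bridge_name": vif["details"]["bridge_name"],
        "has_traffic_filtering": vif["details"]["port_filter"],
        "vif_name": vif["devname"],
        "active": vif["active"],
        "preserve_on_delete": vif["preserve_on_delete"],
    }
    print(osvif_like)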
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.940 257641 DEBUG nova.objects.instance [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'pci_devices' on Instance uuid 187cf9d2-4ae7-4113-af4c-2ec0ce078494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.959 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <uuid>187cf9d2-4ae7-4113-af4c-2ec0ce078494</uuid>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <name>instance-0000006b</name>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestOtherB-server-296895970</nova:name>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:17:24</nova:creationTime>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:user uuid="c5e3ade3963d47be97b545b2e3779b6b">tempest-ServerActionsTestOtherB-477220446-project-member</nova:user>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:project uuid="1b8899f76f554afc96bb2441424e5a77">tempest-ServerActionsTestOtherB-477220446</nova:project>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <nova:port uuid="105a56d1-8987-4906-83b8-38bf41ce2843">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <entry name="serial">187cf9d2-4ae7-4113-af4c-2ec0ce078494</entry>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <entry name="uuid">187cf9d2-4ae7-4113-af4c-2ec0ce078494</entry>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk.config">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:3a:85:a8"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <target dev="tap105a56d1-89"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/console.log" append="off"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:17:25 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:17:25 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:17:25 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:17:25 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
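The domain XML dumped above is exactly what nova passes to libvirt. A short sketch, using only xml.etree.ElementTree on an abbreviated copy of that dump, pulling the fields most often needed when debugging it (instance name, memory, and the Ceph monitors behind the rbd disk):

    import xml.etree.ElementTree as ET

    # Abbreviated copy of the domain XML logged above.
    doc = """<domain type="kvm">
      <name>instance-0000006b</name>
      <memory>131072</memory>
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
        </disk>
      </devices>
    </domain>"""

    root = ET.fromstring(doc)
    print(root.findtext("name"))                        # instance-0000006b
    print(int(root.findtext("memory")) // 1024, "MiB")  # <memory> is KiB: 128 MiB, matching m1.nano
    for h in root.findall("./devices/disk/source/host"):
        print(h.get("name"), h.get("port"))             # the three Ceph monitors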
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.961 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Preparing to wait for external event network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.962 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.962 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.962 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.963 257641 DEBUG nova.virt.libvirt.vif [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-296895970',display_name='tempest-ServerActionsTestOtherB-server-296895970',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-296895970',id=107,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-50ze3204',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:19Z,user_data=None,user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=187cf9d2-4ae7-4113-af4c-2ec0ce078494,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.963 257641 DEBUG nova.network.os_vif_util [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.964 257641 DEBUG nova.network.os_vif_util [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:85:a8,bridge_name='br-int',has_traffic_filtering=True,id=105a56d1-8987-4906-83b8-38bf41ce2843,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap105a56d1-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.964 257641 DEBUG os_vif [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:85:a8,bridge_name='br-int',has_traffic_filtering=True,id=105a56d1-8987-4906-83b8-38bf41ce2843,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap105a56d1-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.965 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.965 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.966 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.973 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.974 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap105a56d1-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.974 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap105a56d1-89, col_values=(('external_ids', {'iface-id': '105a56d1-8987-4906-83b8-38bf41ce2843', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:85:a8', 'vm-uuid': '187cf9d2-4ae7-4113-af4c-2ec0ce078494'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.976 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:25 np0005539550 NetworkManager[49039]: <info>  [1764404245.9780] manager: (tap105a56d1-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.978 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.984 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.985 257641 INFO os_vif [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:85:a8,bridge_name='br-int',has_traffic_filtering=True,id=105a56d1-8987-4906-83b8-38bf41ce2843,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap105a56d1-89')#033[00m
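The ovsdbapp transactions above (AddPortCommand with may_exist=True, then DbSetCommand on the Interface row) are roughly what ovs-vsctl would do from the shell. A sketch of that equivalence, assuming ovs-vsctl is on PATH and run as root; names and external_ids are copied from the transaction log:

    import subprocess

    port, bridge = "tap105a56d1-89", "br-int"
    external_ids = {
        "iface-id": "105a56d1-8987-4906-83b8-38bf41ce2843",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:3a:85:a8",
        "vm-uuid": "187cf9d2-4ae7-4113-af4c-2ec0ce078494",
    }

    # AddPortCommand(may_exist=True)  ~  `--may-exist add-port`
    # DbSetCommand(Interface, ...)    ~  `set Interface ... external_ids:k=v`
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
           "--", "set", "Interface", port]
    cmd += [f"external_ids:{k}={v}" for k, v in external_ids.items()]
    subprocess.run(cmd, check=True)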
Nov 29 03:17:25 np0005539550 nova_compute[257631]: 2025-11-29 08:17:25.988 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:26.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.112 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.113 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.113 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No VIF found with MAC fa:16:3e:3a:85:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.114 257641 INFO nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Using config drive#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.150 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136080908' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.428 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
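The resource tracker runs `ceph df` here to size the shared DISK_GB pool. A sketch of running the same command with the same client identity and reading the result; the 'stats'/'pools' top-level keys and 'bytes_used' are the JSON schema of recent Ceph releases, not something this log confirms:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    df = json.loads(out)
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])  # 'stored' on newer Ceph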
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.433 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.639 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
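The inventory line above is what placement compares against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out with the logged numbers:

    # Inventory exactly as reported to placement above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1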
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.672 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.673 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.788 257641 DEBUG nova.network.neutron [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Updated VIF entry in instance network info cache for port 105a56d1-8987-4906-83b8-38bf41ce2843. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.789 257641 DEBUG nova.network.neutron [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Updating instance_info_cache with network_info: [{"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.818 257641 DEBUG oslo_concurrency.lockutils [req-cfbc0a19-325d-47a7-9531-249ac5d08f9d req-9aac2bec-7151-4b2c-aba9-613bf2d88945 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.945 257641 INFO nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Creating config drive at /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/disk.config#033[00m
Nov 29 03:17:26 np0005539550 nova_compute[257631]: 2025-11-29 08:17:26.950 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6w25niqn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.010 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.088 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6w25niqn" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
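Note that oslo logs argv joined by spaces, so the publisher string above only looks unquoted. The same mkisofs invocation can be reproduced against a scratch tree; in this sketch the output path and temp directory are placeholders, not paths from this host, and the meta_data.json stub only hints at the config-drive layout:

    import json
    import pathlib
    import subprocess
    import tempfile

    # Placeholder source tree standing in for nova's /tmp/tmp6w25niqn.
    src = pathlib.Path(tempfile.mkdtemp(prefix="cfgdrive-"))
    latest = src / "openstack" / "latest"
    latest.mkdir(parents=True)
    (latest / "meta_data.json").write_text(
        json.dumps({"uuid": "187cf9d2-4ae7-4113-af4c-2ec0ce078494"}))

    subprocess.run([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",  # the volume label cloud-init probes for
        str(src),
    ], check=True)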
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.184 257641 DEBUG nova.storage.rbd_utils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] rbd image 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.188 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/disk.config 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:27.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 322 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 231 op/s
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.553 257641 DEBUG oslo_concurrency.processutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/disk.config 187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.554 257641 INFO nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Deleting local config drive /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/disk.config because it was imported into RBD.#033[00m
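With [rbd_store]/images_type pointing at Ceph, the ISO only lives locally long enough to be imported; the RBD copy is what the domain XML attaches as sda. A sketch of the same import-then-clean-up step, all parameters copied from the log:

    import os
    import subprocess

    iso = "/var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494/disk.config"
    subprocess.run([
        "rbd", "import", "--pool", "vms", iso,
        "187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.remove(iso)  # nova deletes the local ISO once the RBD copy exists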
Nov 29 03:17:27 np0005539550 NetworkManager[49039]: <info>  [1764404247.6028] manager: (tap105a56d1-89): new Tun device (/org/freedesktop/NetworkManager/Devices/187)
Nov 29 03:17:27 np0005539550 kernel: tap105a56d1-89: entered promiscuous mode
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.606 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:27Z|00416|binding|INFO|Claiming lport 105a56d1-8987-4906-83b8-38bf41ce2843 for this chassis.
Nov 29 03:17:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:27Z|00417|binding|INFO|105a56d1-8987-4906-83b8-38bf41ce2843: Claiming fa:16:3e:3a:85:a8 10.100.0.10
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.612 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:85:a8 10.100.0.10'], port_security=['fa:16:3e:3a:85:a8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '187cf9d2-4ae7-4113-af4c-2ec0ce078494', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37becba8-ee73-4915-a6ba-420db31887d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=105a56d1-8987-4906-83b8-38bf41ce2843) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.613 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 105a56d1-8987-4906-83b8-38bf41ce2843 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 bound to our chassis#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.614 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
Nov 29 03:17:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:27Z|00418|binding|INFO|Setting lport 105a56d1-8987-4906-83b8-38bf41ce2843 ovn-installed in OVS
Nov 29 03:17:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:27Z|00419|binding|INFO|Setting lport 105a56d1-8987-4906-83b8-38bf41ce2843 up in Southbound
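The two binding messages above are the chassis claiming the lport and flipping it up in the southbound DB. A quick way to inspect that state, assuming ovn-sbctl on this node can reach the southbound DB:

    import subprocess

    lport = "105a56d1-8987-4906-83b8-38bf41ce2843"
    subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,up,chassis",
         "find", "Port_Binding", f"logical_port={lport}"],
        check=True)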
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.634 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[55fbce38-b61b-4f93-b440-4680b6803814]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:27 np0005539550 systemd-udevd[323497]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.669 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[073b3b2d-3034-4105-bef2-9898e0805214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.672 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[76a0c18e-714a-4547-ac39-e24fb1518b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.674 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.675 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.675 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:17:27 np0005539550 NetworkManager[49039]: <info>  [1764404247.6927] device (tap105a56d1-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.692 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:27 np0005539550 NetworkManager[49039]: <info>  [1764404247.6942] device (tap105a56d1-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:17:27 np0005539550 systemd-machined[216673]: New machine qemu-50-instance-0000006b.
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.700 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac37e03-c9df-45de-b3b4-a910d6c8f2b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.708 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:17:27 np0005539550 systemd[1]: Started Virtual Machine qemu-50-instance-0000006b.
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.721 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6542c622-7f68-449f-a835-67143f5681af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725831, 'reachable_time': 35106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323502, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.740 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[835005dc-50cd-41bd-aaac-efdbe93328cf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725841, 'tstamp': 725841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323503, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725844, 'tstamp': 725844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323503, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
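The RTM_NEWADDR replies above confirm the metadata agent plumbed both 169.254.169.254/32 and 10.100.0.2/28 onto tap2b704d3a-d1 inside the ovnmeta namespace. A quick way to re-check that from the host, with the namespace and device names taken from the netlink reply (needs root, like the privsep daemon that issued the original call):

    import subprocess

    ns = "ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06"
    subprocess.run(["ip", "netns", "exec", ns,
                    "ip", "addr", "show", "dev", "tap2b704d3a-d1"], check=True)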
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.742 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.743 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:27 np0005539550 nova_compute[257631]: 2025-11-29 08:17:27.744 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.745 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.745 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.746 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:27.746 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:28.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.170 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.170 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.170 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.171 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.388 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404248.388104, 187cf9d2-4ae7-4113-af4c-2ec0ce078494 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.389 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] VM Started (Lifecycle Event)#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.431 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.435 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404248.388337, 187cf9d2-4ae7-4113-af4c-2ec0ce078494 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.436 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.448 257641 DEBUG nova.compute.manager [req-55692a74-a06a-461e-8d40-fd2951cbeaed req-f7dd2e90-8141-487d-beb3-d3d198c6e40b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received event network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.448 257641 DEBUG oslo_concurrency.lockutils [req-55692a74-a06a-461e-8d40-fd2951cbeaed req-f7dd2e90-8141-487d-beb3-d3d198c6e40b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.449 257641 DEBUG oslo_concurrency.lockutils [req-55692a74-a06a-461e-8d40-fd2951cbeaed req-f7dd2e90-8141-487d-beb3-d3d198c6e40b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.449 257641 DEBUG oslo_concurrency.lockutils [req-55692a74-a06a-461e-8d40-fd2951cbeaed req-f7dd2e90-8141-487d-beb3-d3d198c6e40b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.449 257641 DEBUG nova.compute.manager [req-55692a74-a06a-461e-8d40-fd2951cbeaed req-f7dd2e90-8141-487d-beb3-d3d198c6e40b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Processing event network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.450 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.454 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.457 257641 INFO nova.virt.libvirt.driver [-] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Instance spawned successfully.#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.457 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.495 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.501 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404248.4527833, 187cf9d2-4ae7-4113-af4c-2ec0ce078494 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.501 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.513 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.513 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.514 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.514 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.514 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.515 257641 DEBUG nova.virt.libvirt.driver [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.566 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.573 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.617 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
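The sync line above compares the power state recorded in the database (0) with what libvirt reports (1), and skips reconciliation because the spawn is still in flight. For reference, a sketch of the numeric states involved; the values and names follow nova.compute.power_state and its STATE_MAP:

    # "current DB power_state: 0, VM power_state: 1" therefore reads as:
    # the DB still has NOSTATE while the hypervisor already reports RUNNING.
    NOSTATE = 0x00
    RUNNING = 0x01
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07

    STATE_MAP = {NOSTATE: 'pending', RUNNING: 'running', PAUSED: 'paused',
                 SHUTDOWN: 'shutdown', CRASHED: 'crashed', SUSPENDED: 'suspended'}
    print(STATE_MAP[0], '->', STATE_MAP[1])  # pending -> running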
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.634 257641 INFO nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Took 9.06 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.634 257641 DEBUG nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.695 257641 INFO nova.compute.manager [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Took 10.11 seconds to build instance.#033[00m
Nov 29 03:17:28 np0005539550 nova_compute[257631]: 2025-11-29 08:17:28.714 257641 DEBUG oslo_concurrency.lockutils [None req-a0f3d6c4-449f-4c1a-8f67-ae9574e54313 c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
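Every Acquiring / acquired / "released" triple in the oslo_concurrency.lockutils lines above brackets one critical section; the build itself held the instance lock for 10.237s. A sketch of the two forms such sections take with oslo.concurrency, reusing the lock names from the log purely for illustration:

    from oslo_concurrency import lockutils

    # Decorator form: serializes all event-popping for this instance and is
    # what produces the 'Acquiring lock "...-events"' / 'acquired ... waited
    # 0.000s' / '"released" ... held 0.000s' DEBUG lines.
    @lockutils.synchronized('187cf9d2-4ae7-4113-af4c-2ec0ce078494-events')
    def pop_event():
        pass  # mutate the per-instance event registry while serialized

    # Context-manager form, as wrapped around _locked_do_build_and_run_instance:
    with lockutils.lock('187cf9d2-4ae7-4113-af4c-2ec0ce078494'):
        pop_event()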
Nov 29 03:17:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 322 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.7 MiB/s wr, 224 op/s
Nov 29 03:17:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:30.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.577 257641 DEBUG nova.compute.manager [req-df451d8f-8a07-41f2-8a05-1d03a364437b req-66270f39-8d6f-41cb-98a0-aebc5b767647 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received event network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.578 257641 DEBUG oslo_concurrency.lockutils [req-df451d8f-8a07-41f2-8a05-1d03a364437b req-66270f39-8d6f-41cb-98a0-aebc5b767647 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.578 257641 DEBUG oslo_concurrency.lockutils [req-df451d8f-8a07-41f2-8a05-1d03a364437b req-66270f39-8d6f-41cb-98a0-aebc5b767647 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.578 257641 DEBUG oslo_concurrency.lockutils [req-df451d8f-8a07-41f2-8a05-1d03a364437b req-66270f39-8d6f-41cb-98a0-aebc5b767647 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.578 257641 DEBUG nova.compute.manager [req-df451d8f-8a07-41f2-8a05-1d03a364437b req-66270f39-8d6f-41cb-98a0-aebc5b767647 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] No waiting events found dispatching network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.579 257641 WARNING nova.compute.manager [req-df451d8f-8a07-41f2-8a05-1d03a364437b req-66270f39-8d6f-41cb-98a0-aebc5b767647 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received unexpected event network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.670 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.685 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.685 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
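The instance_info_cache payload dumped just above is plain JSON. A short sketch of walking it for the fixed and floating addresses, trimmed to the fields actually present in that line:

    import json

    # Structure mirrors the network_info logged above, reduced to the fields used.
    network_info = json.loads('''[{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b",
     "network": {"subnets": [{"ips": [{"address": "10.100.0.9",
       "floating_ips": [{"address": "192.168.122.246"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)
    # -> 2d17165a-5bc8-402e-a08c-e6f21188fb1b 10.100.0.9 ['192.168.122.246']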
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.686 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.686 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.686 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.687 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.880 257641 INFO nova.compute.manager [None req-d7c42e6f-12eb-4c4a-a964-0bc93492bcec c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Pausing#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.881 257641 DEBUG nova.objects.instance [None req-d7c42e6f-12eb-4c4a-a964-0bc93492bcec c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'flavor' on Instance uuid 187cf9d2-4ae7-4113-af4c-2ec0ce078494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.908 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404250.9079444, 187cf9d2-4ae7-4113-af4c-2ec0ce078494 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.908 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.909 257641 DEBUG nova.compute.manager [None req-d7c42e6f-12eb-4c4a-a964-0bc93492bcec c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.933 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.937 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.966 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Nov 29 03:17:30 np0005539550 nova_compute[257631]: 2025-11-29 08:17:30.991 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:31.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 359 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.0 MiB/s wr, 314 op/s
Nov 29 03:17:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:31 np0005539550 nova_compute[257631]: 2025-11-29 08:17:31.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:31 np0005539550 nova_compute[257631]: 2025-11-29 08:17:31.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:31 np0005539550 nova_compute[257631]: 2025-11-29 08:17:31.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:17:32 np0005539550 nova_compute[257631]: 2025-11-29 08:17:32.012 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:32.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 369 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.9 MiB/s wr, 209 op/s
Nov 29 03:17:33 np0005539550 nova_compute[257631]: 2025-11-29 08:17:33.920 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:33 np0005539550 nova_compute[257631]: 2025-11-29 08:17:33.920 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:33 np0005539550 nova_compute[257631]: 2025-11-29 08:17:33.920 257641 INFO nova.compute.manager [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Shelving#033[00m
Nov 29 03:17:33 np0005539550 kernel: tap105a56d1-89 (unregistering): left promiscuous mode
Nov 29 03:17:33 np0005539550 NetworkManager[49039]: <info>  [1764404253.9840] device (tap105a56d1-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:33Z|00420|binding|INFO|Releasing lport 105a56d1-8987-4906-83b8-38bf41ce2843 from this chassis (sb_readonly=0)
Nov 29 03:17:33 np0005539550 nova_compute[257631]: 2025-11-29 08:17:33.993 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:33Z|00421|binding|INFO|Setting lport 105a56d1-8987-4906-83b8-38bf41ce2843 down in Southbound
Nov 29 03:17:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:33Z|00422|binding|INFO|Removing iface tap105a56d1-89 ovn-installed in OVS
Nov 29 03:17:33 np0005539550 nova_compute[257631]: 2025-11-29 08:17:33.994 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.000 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:85:a8 10.100.0.10'], port_security=['fa:16:3e:3a:85:a8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '187cf9d2-4ae7-4113-af4c-2ec0ce078494', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37becba8-ee73-4915-a6ba-420db31887d1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=105a56d1-8987-4906-83b8-38bf41ce2843) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.001 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 105a56d1-8987-4906-83b8-38bf41ce2843 in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.003 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.010 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.019 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dd49ca-903b-4a6d-b9a8-33b00caad9f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:34 np0005539550 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Nov 29 03:17:34 np0005539550 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000006b.scope: Consumed 3.178s CPU time.
Nov 29 03:17:34 np0005539550 systemd-machined[216673]: Machine qemu-50-instance-0000006b terminated.
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.046 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ce9ee9-12b9-4613-8913-bca33c8b575d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.050 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a55c7a87-8721-4a36-8023-b24010bd8b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:34.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.076 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[34c5e8b3-b28b-4129-9f74-e9f88bbd1cec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.092 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d2415b-dae5-457a-b4d6-45e8c2063c83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b704d3a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:d7:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725831, 'reachable_time': 35106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323567, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.110 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2fdb72a1-1dab-4b5b-9fcb-736e457227bf]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725841, 'tstamp': 725841}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323568, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap2b704d3a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 725844, 'tstamp': 725844}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323568, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.111 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.113 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.118 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.118 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b704d3a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.118 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.119 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b704d3a-d0, col_values=(('external_ids', {'iface-id': '299ca1be-be1b-47d9-8865-4316d34012e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:34.119 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
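The three ovsdbapp transactions above re-home the metadata tap: drop tap2b704d3a-d0 from br-ex if it exists, add it to br-int, and point its iface-id at port 299ca1be-be1b-47d9-8865-4316d34012e3 (all no-ops here, as the "caused no change" lines show). Roughly the same operations expressed through ovs-vsctl, as a sketch; the agent itself commits them over the python-ovs IDL rather than shelling out:

    import subprocess

    # ovs-vsctl equivalents of DelPortCommand, AddPortCommand and DbSetCommand
    # from the transaction log above. Idempotent, like the original commands.
    cmds = [
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", "tap2b704d3a-d0"],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap2b704d3a-d0"],
        ["ovs-vsctl", "set", "Interface", "tap2b704d3a-d0",
         "external_ids:iface-id=299ca1be-be1b-47d9-8865-4316d34012e3"],
    ]
    for cmd in cmds:
        subprocess.run(cmd, check=True)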
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.163 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.169 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.180 257641 INFO nova.virt.libvirt.driver [-] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Instance destroyed successfully.#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.181 257641 DEBUG nova.objects.instance [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 187cf9d2-4ae7-4113-af4c-2ec0ce078494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.231 257641 DEBUG nova.compute.manager [req-384d1879-be43-4223-a18f-8cc0839a95b4 req-9ecd28bf-05ae-430d-83d0-98118b6d96fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received event network-vif-unplugged-105a56d1-8987-4906-83b8-38bf41ce2843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.231 257641 DEBUG oslo_concurrency.lockutils [req-384d1879-be43-4223-a18f-8cc0839a95b4 req-9ecd28bf-05ae-430d-83d0-98118b6d96fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.232 257641 DEBUG oslo_concurrency.lockutils [req-384d1879-be43-4223-a18f-8cc0839a95b4 req-9ecd28bf-05ae-430d-83d0-98118b6d96fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.232 257641 DEBUG oslo_concurrency.lockutils [req-384d1879-be43-4223-a18f-8cc0839a95b4 req-9ecd28bf-05ae-430d-83d0-98118b6d96fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.232 257641 DEBUG nova.compute.manager [req-384d1879-be43-4223-a18f-8cc0839a95b4 req-9ecd28bf-05ae-430d-83d0-98118b6d96fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] No waiting events found dispatching network-vif-unplugged-105a56d1-8987-4906-83b8-38bf41ce2843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.232 257641 WARNING nova.compute.manager [req-384d1879-be43-4223-a18f-8cc0839a95b4 req-9ecd28bf-05ae-430d-83d0-98118b6d96fc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received unexpected event network-vif-unplugged-105a56d1-8987-4906-83b8-38bf41ce2843 for instance with vm_state paused and task_state shelving.#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.524 257641 INFO nova.virt.libvirt.driver [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Beginning cold snapshot process#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.702 257641 DEBUG nova.virt.libvirt.imagebackend [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:17:34 np0005539550 nova_compute[257631]: 2025-11-29 08:17:34.956 257641 DEBUG nova.storage.rbd_utils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(ff213555de144d7f80090dff124a511f) on rbd image(187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:17:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 373 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.8 MiB/s wr, 233 op/s
Nov 29 03:17:35 np0005539550 podman[323633]: 2025-11-29 08:17:35.405038252 +0000 UTC m=+0.143241993 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
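The podman line above records a scheduled health check of the ovn_controller container passing (health_status=healthy, health_failing_streak=0) by running its configured test, /openstack/healthcheck. The same probe can be triggered by hand; a sketch via subprocess, where a zero exit status means healthy:

    import subprocess

    # "podman healthcheck run" executes the container's configured healthcheck
    # command and exits 0 when it passes.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")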
Nov 29 03:17:35 np0005539550 nova_compute[257631]: 2025-11-29 08:17:35.733 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:35 np0005539550 nova_compute[257631]: 2025-11-29 08:17:35.993 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 29 03:17:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 29 03:17:36 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.202 257641 DEBUG nova.storage.rbd_utils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] cloning vms/187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk@ff213555de144d7f80090dff124a511f to images/159b7691-923b-4bf5-a065-6be0cf4e75ba clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.336 257641 DEBUG nova.compute.manager [req-ab8e7f34-993e-40f6-8209-3ce29ddf3fe9 req-1ea66a9d-4cfa-4af8-997a-759bbcca34c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received event network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.337 257641 DEBUG oslo_concurrency.lockutils [req-ab8e7f34-993e-40f6-8209-3ce29ddf3fe9 req-1ea66a9d-4cfa-4af8-997a-759bbcca34c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.338 257641 DEBUG oslo_concurrency.lockutils [req-ab8e7f34-993e-40f6-8209-3ce29ddf3fe9 req-1ea66a9d-4cfa-4af8-997a-759bbcca34c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.338 257641 DEBUG oslo_concurrency.lockutils [req-ab8e7f34-993e-40f6-8209-3ce29ddf3fe9 req-1ea66a9d-4cfa-4af8-997a-759bbcca34c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.338 257641 DEBUG nova.compute.manager [req-ab8e7f34-993e-40f6-8209-3ce29ddf3fe9 req-1ea66a9d-4cfa-4af8-997a-759bbcca34c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] No waiting events found dispatching network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.339 257641 WARNING nova.compute.manager [req-ab8e7f34-993e-40f6-8209-3ce29ddf3fe9 req-1ea66a9d-4cfa-4af8-997a-759bbcca34c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received unexpected event network-vif-plugged-105a56d1-8987-4906-83b8-38bf41ce2843 for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.348 257641 DEBUG nova.storage.rbd_utils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] flattening images/159b7691-923b-4bf5-a065-6be0cf4e75ba flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.697 257641 DEBUG nova.storage.rbd_utils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] removing snapshot(ff213555de144d7f80090dff124a511f) on rbd image(187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:17:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:36 np0005539550 nova_compute[257631]: 2025-11-29 08:17:36.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:17:37 np0005539550 nova_compute[257631]: 2025-11-29 08:17:37.014 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 29 03:17:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 29 03:17:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 29 03:17:37 np0005539550 nova_compute[257631]: 2025-11-29 08:17:37.202 257641 DEBUG nova.storage.rbd_utils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(snap) on rbd image(159b7691-923b-4bf5-a065-6be0cf4e75ba) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
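Between 08:17:34.956 and 08:17:37.202 the shelve path ran a full RBD snapshot cycle: snapshot the vms/..._disk image, clone it into the images pool, flatten the clone, drop the working snapshot, then snapshot the clone as 'snap' for upload. A condensed sketch of the same sequence with the python rbd bindings; names are copied from the log, and the protect/unprotect calls are an assumption (cloning requires a protected parent snapshot):

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        vms = cluster.open_ioctx('vms')
        images = cluster.open_ioctx('images')
        src = '187cf9d2-4ae7-4113-af4c-2ec0ce078494_disk'
        snap = 'ff213555de144d7f80090dff124a511f'
        dst = '159b7691-923b-4bf5-a065-6be0cf4e75ba'

        with rbd.Image(vms, src) as disk:
            disk.create_snap(snap)       # creating snapshot(...) on rbd image
            disk.protect_snap(snap)      # assumed: clone parents must be protected
        rbd.RBD().clone(vms, src, snap, images, dst)  # cloning vms/...@... to images/...
        with rbd.Image(images, dst) as clone:
            clone.flatten()              # flattening images/... (detach from parent)
        with rbd.Image(vms, src) as disk:
            disk.unprotect_snap(snap)
            disk.remove_snap(snap)       # removing snapshot(...) on rbd image
        with rbd.Image(images, dst) as clone:
            clone.create_snap('snap')    # creating snapshot(snap) on the clone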
Nov 29 03:17:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:37.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 373 MiB data, 1003 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 4.5 MiB/s wr, 294 op/s
Nov 29 03:17:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:38.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 29 03:17:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 29 03:17:38 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 29 03:17:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:39.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 380 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 247 KiB/s wr, 236 op/s
Nov 29 03:17:39 np0005539550 nova_compute[257631]: 2025-11-29 08:17:39.651 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:40.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.633 257641 INFO nova.virt.libvirt.driver [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Snapshot image upload complete#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.633 257641 DEBUG nova.compute.manager [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.704 257641 INFO nova.compute.manager [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Shelve offloading#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.710 257641 INFO nova.virt.libvirt.driver [-] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Instance destroyed successfully.#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.710 257641 DEBUG nova.compute.manager [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.712 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.713 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.713 257641 DEBUG nova.network.neutron [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:17:40 np0005539550 nova_compute[257631]: 2025-11-29 08:17:40.996 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:41.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 3.6 MiB/s wr, 193 op/s
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.862221) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261862342, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1911, "num_deletes": 257, "total_data_size": 3256787, "memory_usage": 3297472, "flush_reason": "Manual Compaction"}
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261887267, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 3153680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41214, "largest_seqno": 43124, "table_properties": {"data_size": 3144977, "index_size": 5261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18918, "raw_average_key_size": 20, "raw_value_size": 3127233, "raw_average_value_size": 3388, "num_data_blocks": 228, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404104, "oldest_key_time": 1764404104, "file_creation_time": 1764404261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 25102 microseconds, and 7993 cpu microseconds.
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.887329) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 3153680 bytes OK
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.887359) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.889816) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.889842) EVENT_LOG_v1 {"time_micros": 1764404261889834, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.889899) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3248709, prev total WAL file size 3248709, number of live WAL files 2.
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.891504) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323536' seq:72057594037927935, type:22 .. '6C6F676D0031353037' seq:0, type:0; will stop at (end)
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(3079KB)], [89(10028KB)]
Nov 29 03:17:41 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404261891576, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 13423117, "oldest_snapshot_seqno": -1}
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 7802 keys, 13254711 bytes, temperature: kUnknown
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404262001182, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 13254711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13201038, "index_size": 33081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19525, "raw_key_size": 201543, "raw_average_key_size": 25, "raw_value_size": 13060056, "raw_average_value_size": 1673, "num_data_blocks": 1310, "num_entries": 7802, "num_filter_entries": 7802, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404261, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.001515) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 13254711 bytes
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.003096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.4 rd, 120.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.8 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(8.5) write-amplify(4.2) OK, records in: 8336, records dropped: 534 output_compression: NoCompression
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.003113) EVENT_LOG_v1 {"time_micros": 1764404262003104, "job": 52, "event": "compaction_finished", "compaction_time_micros": 109699, "compaction_time_cpu_micros": 59056, "output_level": 6, "num_output_files": 1, "total_output_size": 13254711, "num_input_records": 8336, "num_output_records": 7802, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404262003775, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404262005408, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:41.891374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.005529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.005536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.005539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.005542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:17:42.005544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:17:42 np0005539550 nova_compute[257631]: 2025-11-29 08:17:42.017 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:42.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:42 np0005539550 nova_compute[257631]: 2025-11-29 08:17:42.775 257641 DEBUG nova.network.neutron [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Updating instance_info_cache with network_info: [{"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:42 np0005539550 nova_compute[257631]: 2025-11-29 08:17:42.815 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:43.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.052 257641 INFO nova.virt.libvirt.driver [-] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Instance destroyed successfully.#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.052 257641 DEBUG nova.objects.instance [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 187cf9d2-4ae7-4113-af4c-2ec0ce078494 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.077 257641 DEBUG nova.virt.libvirt.vif [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-296895970',display_name='tempest-ServerActionsTestOtherB-server-296895970',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-296895970',id=107,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-50ze3204',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member',shelved_at='2025-11-29T08:17:40.633641',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='159b7691-923b-4bf5-a065-6be0cf4e75ba'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:34Z,user_data=None,user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=187cf9d2-4ae7-4113-af4c-2ec0ce078494,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.077 257641 DEBUG nova.network.os_vif_util [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap105a56d1-89", "ovs_interfaceid": "105a56d1-8987-4906-83b8-38bf41ce2843", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.078 257641 DEBUG nova.network.os_vif_util [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:85:a8,bridge_name='br-int',has_traffic_filtering=True,id=105a56d1-8987-4906-83b8-38bf41ce2843,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap105a56d1-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.079 257641 DEBUG os_vif [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:85:a8,bridge_name='br-int',has_traffic_filtering=True,id=105a56d1-8987-4906-83b8-38bf41ce2843,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap105a56d1-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.081 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.082 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap105a56d1-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:44.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.084 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.085 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.088 257641 INFO os_vif [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:85:a8,bridge_name='br-int',has_traffic_filtering=True,id=105a56d1-8987-4906-83b8-38bf41ce2843,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap105a56d1-89')#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.390 257641 DEBUG nova.compute.manager [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Received event network-changed-105a56d1-8987-4906-83b8-38bf41ce2843 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.392 257641 DEBUG nova.compute.manager [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Refreshing instance network info cache due to event network-changed-105a56d1-8987-4906-83b8-38bf41ce2843. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.393 257641 DEBUG oslo_concurrency.lockutils [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.393 257641 DEBUG oslo_concurrency.lockutils [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:17:44 np0005539550 nova_compute[257631]: 2025-11-29 08:17:44.393 257641 DEBUG nova.network.neutron [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Refreshing network info cache for port 105a56d1-8987-4906-83b8-38bf41ce2843 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:17:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:45.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.7 MiB/s wr, 160 op/s
Nov 29 03:17:45 np0005539550 nova_compute[257631]: 2025-11-29 08:17:45.785 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:45 np0005539550 nova_compute[257631]: 2025-11-29 08:17:45.941 257641 DEBUG nova.network.neutron [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Updated VIF entry in instance network info cache for port 105a56d1-8987-4906-83b8-38bf41ce2843. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:17:45 np0005539550 nova_compute[257631]: 2025-11-29 08:17:45.942 257641 DEBUG nova.network.neutron [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Updating instance_info_cache with network_info: [{"id": "105a56d1-8987-4906-83b8-38bf41ce2843", "address": "fa:16:3e:3a:85:a8", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": null, "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap105a56d1-89", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:17:45 np0005539550 nova_compute[257631]: 2025-11-29 08:17:45.971 257641 DEBUG oslo_concurrency.lockutils [req-0232cc68-e9f7-4ecb-9b4b-2207ba9382a8 req-83124d27-abab-4832-bf1d-e14e3f6d5af6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-187cf9d2-4ae7-4113-af4c-2ec0ce078494" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:17:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:46.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.149 257641 INFO nova.virt.libvirt.driver [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Deleting instance files /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494_del#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.150 257641 INFO nova.virt.libvirt.driver [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Deletion of /var/lib/nova/instances/187cf9d2-4ae7-4113-af4c-2ec0ce078494_del complete#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.260 257641 INFO nova.scheduler.client.report [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Deleted allocations for instance 187cf9d2-4ae7-4113-af4c-2ec0ce078494#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.299 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.300 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.359 257641 DEBUG oslo_concurrency.processutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/274103729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.836 257641 DEBUG oslo_concurrency.processutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.844 257641 DEBUG nova.compute.provider_tree [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.912 257641 DEBUG nova.scheduler.client.report [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.934 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:46 np0005539550 nova_compute[257631]: 2025-11-29 08:17:46.979 257641 DEBUG oslo_concurrency.lockutils [None req-67fe8f18-0a3f-4025-9cc0-c8e1a53bd9fe c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "187cf9d2-4ae7-4113-af4c-2ec0ce078494" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 13.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:47 np0005539550 nova_compute[257631]: 2025-11-29 08:17:47.017 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:47.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 407 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 86 op/s
Nov 29 03:17:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:48.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:49 np0005539550 nova_compute[257631]: 2025-11-29 08:17:49.086 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:49 np0005539550 nova_compute[257631]: 2025-11-29 08:17:49.180 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404254.1784885, 187cf9d2-4ae7-4113-af4c-2ec0ce078494 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:17:49 np0005539550 nova_compute[257631]: 2025-11-29 08:17:49.180 257641 INFO nova.compute.manager [-] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:17:49 np0005539550 nova_compute[257631]: 2025-11-29 08:17:49.200 257641 DEBUG nova.compute.manager [None req-cef71f64-ddde-46d3-b456-608cb3d18aac - - - - - -] [instance: 187cf9d2-4ae7-4113-af4c-2ec0ce078494] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:17:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:49.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 404 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Nov 29 03:17:49 np0005539550 nova_compute[257631]: 2025-11-29 08:17:49.734 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:50.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:51 np0005539550 nova_compute[257631]: 2025-11-29 08:17:51.023 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:51 np0005539550 nova_compute[257631]: 2025-11-29 08:17:51.024 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:51 np0005539550 nova_compute[257631]: 2025-11-29 08:17:51.024 257641 INFO nova.compute.manager [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Shelving#033[00m
Nov 29 03:17:51 np0005539550 nova_compute[257631]: 2025-11-29 08:17:51.045 257641 DEBUG nova.virt.libvirt.driver [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:17:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:51.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 640 KiB/s rd, 2.6 MiB/s wr, 142 op/s
Nov 29 03:17:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e301 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:52 np0005539550 nova_compute[257631]: 2025-11-29 08:17:52.020 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:53 np0005539550 podman[323856]: 2025-11-29 08:17:53.317114719 +0000 UTC m=+0.055424080 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:17:53 np0005539550 podman[323855]: 2025-11-29 08:17:53.328612491 +0000 UTC m=+0.069969319 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:17:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:53.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 555 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Nov 29 03:17:53 np0005539550 kernel: tap2d17165a-5b (unregistering): left promiscuous mode
Nov 29 03:17:53 np0005539550 NetworkManager[49039]: <info>  [1764404273.3948] device (tap2d17165a-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:17:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:53Z|00423|binding|INFO|Releasing lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b from this chassis (sb_readonly=0)
Nov 29 03:17:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:53Z|00424|binding|INFO|Setting lport 2d17165a-5bc8-402e-a08c-e6f21188fb1b down in Southbound
Nov 29 03:17:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:17:53Z|00425|binding|INFO|Removing iface tap2d17165a-5b ovn-installed in OVS
Nov 29 03:17:53 np0005539550 nova_compute[257631]: 2025-11-29 08:17:53.407 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539550 nova_compute[257631]: 2025-11-29 08:17:53.410 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:53.422 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bc:67 10.100.0.9'], port_security=['fa:16:3e:ae:bc:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '68c19565-6fe1-4c2c-927d-87f801074e18', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b8899f76f554afc96bb2441424e5a77', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8e7cfeb6-8d91-4d68-8970-f480a7e0a619', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0af49baf-9694-4485-99a0-1529dc778e83, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2d17165a-5bc8-402e-a08c-e6f21188fb1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:17:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:53.424 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2d17165a-5bc8-402e-a08c-e6f21188fb1b in datapath 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 unbound from our chassis#033[00m
Nov 29 03:17:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:53.426 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:17:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:53.428 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2b963041-6ce4-4dfa-914e-c02ad9ccaadd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:53.428 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 namespace which is not needed anymore#033[00m
Nov 29 03:17:53 np0005539550 nova_compute[257631]: 2025-11-29 08:17:53.459 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:53 np0005539550 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000068.scope: Deactivated successfully.
Nov 29 03:17:53 np0005539550 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000068.scope: Consumed 17.097s CPU time.
Nov 29 03:17:53 np0005539550 systemd-machined[216673]: Machine qemu-49-instance-00000068 terminated.
Nov 29 03:17:53 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321910]: [NOTICE]   (321917) : haproxy version is 2.8.14-c23fe91
Nov 29 03:17:53 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321910]: [NOTICE]   (321917) : path to executable is /usr/sbin/haproxy
Nov 29 03:17:53 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321910]: [WARNING]  (321917) : Exiting Master process...
Nov 29 03:17:53 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321910]: [ALERT]    (321917) : Current worker (321919) exited with code 143 (Terminated)
Nov 29 03:17:53 np0005539550 neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06[321910]: [WARNING]  (321917) : All workers exited. Exiting... (0)
Nov 29 03:17:53 np0005539550 systemd[1]: libpod-e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de.scope: Deactivated successfully.
Nov 29 03:17:53 np0005539550 podman[323915]: 2025-11-29 08:17:53.56223868 +0000 UTC m=+0.043848506 container died e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:17:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de-userdata-shm.mount: Deactivated successfully.
Nov 29 03:17:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4ed2f6fcff3901942db69a412a6da7462678fcce5e0c9cc3155e11b252b1a55d-merged.mount: Deactivated successfully.
Nov 29 03:17:53 np0005539550 podman[323915]: 2025-11-29 08:17:53.790033741 +0000 UTC m=+0.271643557 container cleanup e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:17:53 np0005539550 systemd[1]: libpod-conmon-e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de.scope: Deactivated successfully.
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.066 257641 INFO nova.virt.libvirt.driver [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.073 257641 INFO nova.virt.libvirt.driver [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance destroyed successfully.#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.073 257641 DEBUG nova.objects.instance [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'numa_topology' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:17:54 np0005539550 podman[323955]: 2025-11-29 08:17:54.081340216 +0000 UTC m=+0.263220462 container remove e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.088 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.088 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8b35c0f4-2748-47ef-b71e-4e0503991f6a]: (4, ('Sat Nov 29 08:17:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de)\ne0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de\nSat Nov 29 08:17:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 (e0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de)\ne0b4546aec3ec79ba06c0e7c8ca8d51e3ff70030e042425bc2dca838099a73de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.090 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8a0dafe4-138a-47f0-b171-4e7653924ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.091 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b704d3a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:17:54 np0005539550 kernel: tap2b704d3a-d0: left promiscuous mode
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.095 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:17:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.115 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.118 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[96f7f837-f54e-4305-aac8-e3c2b90a0599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.140 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c3fce111-c0e4-4c8c-a575-f93562507205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.142 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad937ff-0e17-4046-ab0c-298d0ce828a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.161 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6d52ac1d-7886-439a-8968-b6869c721a61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 725823, 'reachable_time': 38149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323977, 'error': None, 'target': 'ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.165 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:17:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:17:54.165 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[ba94f4bc-4dff-4a30-ad19-faf282f05ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:17:54 np0005539550 systemd[1]: run-netns-ovnmeta\x2d2b704d3a\x2dd3e4\x2d47ce\x2d8a28\x2d10a6f4e6fd06.mount: Deactivated successfully.
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.378 257641 DEBUG nova.compute.manager [req-de8dd81e-3b63-4050-bbed-fc547bcfafae req-2f563663-a520-4a4e-bf2c-f69be95c1380 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-unplugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.379 257641 DEBUG oslo_concurrency.lockutils [req-de8dd81e-3b63-4050-bbed-fc547bcfafae req-2f563663-a520-4a4e-bf2c-f69be95c1380 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.380 257641 DEBUG oslo_concurrency.lockutils [req-de8dd81e-3b63-4050-bbed-fc547bcfafae req-2f563663-a520-4a4e-bf2c-f69be95c1380 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.380 257641 DEBUG oslo_concurrency.lockutils [req-de8dd81e-3b63-4050-bbed-fc547bcfafae req-2f563663-a520-4a4e-bf2c-f69be95c1380 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.380 257641 DEBUG nova.compute.manager [req-de8dd81e-3b63-4050-bbed-fc547bcfafae req-2f563663-a520-4a4e-bf2c-f69be95c1380 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] No waiting events found dispatching network-vif-unplugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.380 257641 WARNING nova.compute.manager [req-de8dd81e-3b63-4050-bbed-fc547bcfafae req-2f563663-a520-4a4e-bf2c-f69be95c1380 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received unexpected event network-vif-unplugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.494 257641 INFO nova.virt.libvirt.driver [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Beginning cold snapshot process#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.638 257641 DEBUG nova.virt.libvirt.imagebackend [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:17:54 np0005539550 nova_compute[257631]: 2025-11-29 08:17:54.925 257641 DEBUG nova.storage.rbd_utils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(e2ae79868b5a46c58f7348430558ba17) on rbd image(68c19565-6fe1-4c2c-927d-87f801074e18_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:17:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 523 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Nov 29 03:17:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 29 03:17:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 29 03:17:56 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 29 03:17:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:17:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:56.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.176 257641 DEBUG nova.storage.rbd_utils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] cloning vms/68c19565-6fe1-4c2c-927d-87f801074e18_disk@e2ae79868b5a46c58f7348430558ba17 to images/c28b4cfd-fa14-4a57-95be-5fa2d667ebc5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.384 257641 DEBUG nova.storage.rbd_utils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] flattening images/c28b4cfd-fa14-4a57-95be-5fa2d667ebc5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.698 257641 DEBUG nova.compute.manager [req-3f493dcb-2b84-4c4f-ba48-12eb63519876 req-ccbb7ba0-7181-4be1-8981-942de87b288c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.698 257641 DEBUG oslo_concurrency.lockutils [req-3f493dcb-2b84-4c4f-ba48-12eb63519876 req-ccbb7ba0-7181-4be1-8981-942de87b288c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.698 257641 DEBUG oslo_concurrency.lockutils [req-3f493dcb-2b84-4c4f-ba48-12eb63519876 req-ccbb7ba0-7181-4be1-8981-942de87b288c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.699 257641 DEBUG oslo_concurrency.lockutils [req-3f493dcb-2b84-4c4f-ba48-12eb63519876 req-ccbb7ba0-7181-4be1-8981-942de87b288c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.699 257641 DEBUG nova.compute.manager [req-3f493dcb-2b84-4c4f-ba48-12eb63519876 req-ccbb7ba0-7181-4be1-8981-942de87b288c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] No waiting events found dispatching network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:17:56 np0005539550 nova_compute[257631]: 2025-11-29 08:17:56.700 257641 WARNING nova.compute.manager [req-3f493dcb-2b84-4c4f-ba48-12eb63519876 req-ccbb7ba0-7181-4be1-8981-942de87b288c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received unexpected event network-vif-plugged-2d17165a-5bc8-402e-a08c-e6f21188fb1b for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:17:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:17:57 np0005539550 nova_compute[257631]: 2025-11-29 08:17:57.095 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:57 np0005539550 nova_compute[257631]: 2025-11-29 08:17:57.213 257641 DEBUG nova.storage.rbd_utils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] removing snapshot(e2ae79868b5a46c58f7348430558ba17) on rbd image(68c19565-6fe1-4c2c-927d-87f801074e18_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:17:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 584 KiB/s rd, 1.6 MiB/s wr, 106 op/s
Nov 29 03:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:17:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.189 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.189 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.216 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:17:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:17:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:17:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 29 03:17:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 29 03:17:58 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.298 257641 DEBUG nova.storage.rbd_utils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] creating snapshot(snap) on rbd image(c28b4cfd-fa14-4a57-95be-5fa2d667ebc5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.332 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.333 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.342 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.342 257641 INFO nova.compute.claims [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.466 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:17:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2866384736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.979 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:58 np0005539550 nova_compute[257631]: 2025-11-29 08:17:58.985 257641 DEBUG nova.compute.provider_tree [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.005 257641 DEBUG nova.scheduler.client.report [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.024 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.025 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.070 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.070 257641 DEBUG nova.network.neutron [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.089 257641 INFO nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.094 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.115 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.217 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.218 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.218 257641 INFO nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Creating image(s)#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.245 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.333 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:17:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:17:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:17:59.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.365 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.369 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:17:59
Nov 29 03:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'backups', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'volumes']
Nov 29 03:17:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:17:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 419 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 19 op/s
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.435 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.436 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.436 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.437 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.462 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.467 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:17:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 29 03:17:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 29 03:17:59 np0005539550 nova_compute[257631]: 2025-11-29 08:17:59.565 257641 DEBUG nova.policy [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb363261bdab40e790d3017750f85069', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4e5f917d899b41e7b5ec4efeeb4fc74f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.001 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.088 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] resizing rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:18:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:00.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.211 257641 DEBUG nova.objects.instance [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lazy-loading 'migration_context' on Instance uuid 75bc7b63-d45e-41a1-893d-ff78f18861f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.239 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.240 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Ensure instance console log exists: /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.240 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.241 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.242 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:00 np0005539550 nova_compute[257631]: 2025-11-29 08:18:00.577 257641 DEBUG nova.network.neutron [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Successfully created port: 47febb89-e1c8-407b-a42f-268eaaf0febd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:18:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:01.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 492 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 8.4 MiB/s wr, 236 op/s
Nov 29 03:18:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:01 np0005539550 nova_compute[257631]: 2025-11-29 08:18:01.933 257641 DEBUG nova.network.neutron [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Successfully updated port: 47febb89-e1c8-407b-a42f-268eaaf0febd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:18:01 np0005539550 nova_compute[257631]: 2025-11-29 08:18:01.955 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "refresh_cache-75bc7b63-d45e-41a1-893d-ff78f18861f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:01 np0005539550 nova_compute[257631]: 2025-11-29 08:18:01.956 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquired lock "refresh_cache-75bc7b63-d45e-41a1-893d-ff78f18861f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:01 np0005539550 nova_compute[257631]: 2025-11-29 08:18:01.956 257641 DEBUG nova.network.neutron [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.023 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.078 257641 DEBUG nova.compute.manager [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received event network-changed-47febb89-e1c8-407b-a42f-268eaaf0febd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.079 257641 DEBUG nova.compute.manager [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Refreshing instance network info cache due to event network-changed-47febb89-e1c8-407b-a42f-268eaaf0febd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.079 257641 DEBUG oslo_concurrency.lockutils [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-75bc7b63-d45e-41a1-893d-ff78f18861f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:02.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.130 257641 DEBUG nova.network.neutron [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.164 257641 INFO nova.virt.libvirt.driver [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Snapshot image upload complete#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.165 257641 DEBUG nova.compute.manager [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.235 257641 INFO nova.compute.manager [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Shelve offloading#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.242 257641 INFO nova.virt.libvirt.driver [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance destroyed successfully.#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.242 257641 DEBUG nova.compute.manager [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.245 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.245 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:02 np0005539550 nova_compute[257631]: 2025-11-29 08:18:02.245 257641 DEBUG nova.network.neutron [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:03.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 515 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 8.3 MiB/s wr, 188 op/s
Nov 29 03:18:03 np0005539550 nova_compute[257631]: 2025-11-29 08:18:03.917 257641 DEBUG nova.network.neutron [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Updating instance_info_cache with network_info: [{"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:03 np0005539550 nova_compute[257631]: 2025-11-29 08:18:03.935 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Releasing lock "refresh_cache-75bc7b63-d45e-41a1-893d-ff78f18861f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:03 np0005539550 nova_compute[257631]: 2025-11-29 08:18:03.935 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Instance network_info: |[{"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:18:03 np0005539550 nova_compute[257631]: 2025-11-29 08:18:03.935 257641 DEBUG oslo_concurrency.lockutils [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-75bc7b63-d45e-41a1-893d-ff78f18861f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:03 np0005539550 nova_compute[257631]: 2025-11-29 08:18:03.936 257641 DEBUG nova.network.neutron [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Refreshing network info cache for port 47febb89-e1c8-407b-a42f-268eaaf0febd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.102 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Start _get_guest_xml network_info=[{"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.103 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.110 257641 WARNING nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:04.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
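Annotation: the anonymous "HEAD / HTTP/1.0" 200 entries that radosgw's beast frontend logs every second or so are health probes arriving from the other nodes (192.168.122.100/102). The probe is trivial to reproduce; a sketch against the local RGW, noting that the frontend port is not visible in the beast log line, so the 8080 below is only a placeholder:

    import http.client

    # Port 8080 is an assumption -- substitute the real beast frontend port.
    conn = http.client.HTTPConnection("localhost", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # the probes above log http_status=200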
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.118 257641 DEBUG nova.virt.libvirt.host [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.118 257641 DEBUG nova.virt.libvirt.host [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.121 257641 DEBUG nova.virt.libvirt.host [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.122 257641 DEBUG nova.virt.libvirt.host [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
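Annotation: the pair of probes above is nova checking for a CPU cgroup controller: the cgroups v1 search comes back empty, then the v2 (unified hierarchy) search succeeds, as expected on an el9 host. The v2 check boils down to reading the controller list from the unified mount; a minimal sketch of that idea, not nova's exact code:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """True if the unified hierarchy exposes the 'cpu' controller."""
        try:
            return "cpu" in Path(root, "cgroup.controllers").read_text().split()
        except FileNotFoundError:
            return False  # no cgroups v2 unified mount at this root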
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.122 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.123 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.123 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.123 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.123 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.123 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.124 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.124 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.124 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.124 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.124 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.124 257641 DEBUG nova.virt.hardware [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
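Annotation: with every flavor/image limit and preference unset (the 0:0:0 lines) and the defaults capped at 65536, the driver enumerates all sockets/cores/threads factorizations of the vCPU count; for a 1-vCPU m1.nano that is exactly one topology, 1:1:1, matching the "Got 1 possible topologies" line. A rough sketch of the factorization step only (nova's real code also weighs preferences and NUMA):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]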
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.127 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.234 257641 DEBUG nova.network.neutron [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036084553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.560 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
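Annotation: each of these ~0.43 s `ceph mon dump` round-trips is the RBD image backend discovering the monitor addresses that later appear as <host> elements in the guest XML. The same lookup works stand-alone; a sketch using the identical CLI invocation from the log:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    # The dump lists one entry per monitor; address field names vary a
    # little between Ceph releases, hence the fallback.
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))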
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.583 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.587 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:04 np0005539550 nova_compute[257631]: 2025-11-29 08:18:04.647 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4103498935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.015 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.017 257641 DEBUG nova.virt.libvirt.vif [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-585833635',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-585833635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-585833635',id=109,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e5f917d899b41e7b5ec4efeeb4fc74f',ramdisk_id='',reservation_id='r-bkbgf0zj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-222518474',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-222518474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:59Z,user_data=None,user_id='eb363261bdab40e790d3017750f85069',uuid=75bc7b63-d45e-41a1-893d-ff78f18861f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.017 257641 DEBUG nova.network.os_vif_util [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Converting VIF {"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.018 257641 DEBUG nova.network.os_vif_util [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e0:4b,bridge_name='br-int',has_traffic_filtering=True,id=47febb89-e1c8-407b-a42f-268eaaf0febd,network=Network(7ac1161c-bccc-48c7-b430-7e1f1ab891df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47febb89-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.019 257641 DEBUG nova.objects.instance [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lazy-loading 'pci_devices' on Instance uuid 75bc7b63-d45e-41a1-893d-ff78f18861f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.211 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <uuid>75bc7b63-d45e-41a1-893d-ff78f18861f0</uuid>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <name>instance-0000006d</name>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-585833635</nova:name>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:18:04</nova:creationTime>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:user uuid="eb363261bdab40e790d3017750f85069">tempest-ServersNegativeTestMultiTenantJSON-222518474-project-member</nova:user>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:project uuid="4e5f917d899b41e7b5ec4efeeb4fc74f">tempest-ServersNegativeTestMultiTenantJSON-222518474</nova:project>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <nova:port uuid="47febb89-e1c8-407b-a42f-268eaaf0febd">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <entry name="serial">75bc7b63-d45e-41a1-893d-ff78f18861f0</entry>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <entry name="uuid">75bc7b63-d45e-41a1-893d-ff78f18861f0</entry>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/75bc7b63-d45e-41a1-893d-ff78f18861f0_disk">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/75bc7b63-d45e-41a1-893d-ff78f18861f0_disk.config">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:e8:e0:4b"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <target dev="tap47febb89-e1"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/console.log" append="off"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:18:05 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:18:05 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:18:05 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:18:05 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
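Annotation: the domain XML dumped above is what the driver hands to libvirt next; the notable parts are the two network disks (the root disk and the not-yet-created config drive) pointing at the vms RBD pool through the three monitors fetched earlier. A dump like this is easy to sanity-check offline; a sketch, assuming the XML was saved to domain.xml:

    import xml.etree.ElementTree as ET

    tree = ET.parse("domain.xml")
    for disk in tree.findall("./devices/disk"):
        src = disk.find("source")
        mons = [h.get("name") for h in src.findall("host")]
        print(disk.get("device"), src.get("protocol"), src.get("name"), mons)
    # disk  rbd vms/75bc7b63-..._disk        ['192.168.122.100', '192.168.122.102', '192.168.122.101']
    # cdrom rbd vms/75bc7b63-..._disk.config ['192.168.122.100', '192.168.122.102', '192.168.122.101']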
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.213 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Preparing to wait for external event network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.214 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.214 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.214 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.215 257641 DEBUG nova.virt.libvirt.vif [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-585833635',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-585833635',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-585833635',id=109,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4e5f917d899b41e7b5ec4efeeb4fc74f',ramdisk_id='',reservation_id='r-bkbgf0zj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-222518474',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-222518474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:17:59Z,user_data=None,user_id='eb363261bdab40e790d3017750f85069',uuid=75bc7b63-d45e-41a1-893d-ff78f18861f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.215 257641 DEBUG nova.network.os_vif_util [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Converting VIF {"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.216 257641 DEBUG nova.network.os_vif_util [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e0:4b,bridge_name='br-int',has_traffic_filtering=True,id=47febb89-e1c8-407b-a42f-268eaaf0febd,network=Network(7ac1161c-bccc-48c7-b430-7e1f1ab891df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47febb89-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.216 257641 DEBUG os_vif [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e0:4b,bridge_name='br-int',has_traffic_filtering=True,id=47febb89-e1c8-407b-a42f-268eaaf0febd,network=Network(7ac1161c-bccc-48c7-b430-7e1f1ab891df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47febb89-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.217 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.217 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.218 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.221 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.222 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47febb89-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.222 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47febb89-e1, col_values=(('external_ids', {'iface-id': '47febb89-e1c8-407b-a42f-268eaaf0febd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:e0:4b', 'vm-uuid': '75bc7b63-d45e-41a1-893d-ff78f18861f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.224 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.226 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:05 np0005539550 NetworkManager[49039]: <info>  [1764404285.2265] manager: (tap47febb89-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/188)
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.231 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.232 257641 INFO os_vif [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e0:4b,bridge_name='br-int',has_traffic_filtering=True,id=47febb89-e1c8-407b-a42f-268eaaf0febd,network=Network(7ac1161c-bccc-48c7-b430-7e1f1ab891df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47febb89-e1')#033[00m
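Annotation: the plug itself is the two-command OVSDB transaction above: AddPortCommand attaches tap47febb89-e1 to br-int, then DbSetCommand stamps the Interface row with the external_ids that ovn-controller matches against the logical switch port. The CLI equivalent is a single ovs-vsctl call; a sketch with the exact values from the transaction log:

    import subprocess

    port = "tap47febb89-e1"
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
        "set", "Interface", port,
        "external_ids:iface-id=47febb89-e1c8-407b-a42f-268eaaf0febd",
        "external_ids:iface-status=active",
        'external_ids:attached-mac="fa:16:3e:e8:e0:4b"',
        "external_ids:vm-uuid=75bc7b63-d45e-41a1-893d-ff78f18861f0",
    ])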
Nov 29 03:18:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:05.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 8.5 MiB/s wr, 178 op/s
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.837 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.838 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.839 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] No VIF found with MAC fa:16:3e:e8:e0:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.840 257641 INFO nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Using config drive#033[00m
Nov 29 03:18:05 np0005539550 nova_compute[257631]: 2025-11-29 08:18:05.880 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:06.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:06 np0005539550 podman[324449]: 2025-11-29 08:18:06.330699749 +0000 UTC m=+0.073216072 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
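Annotation: the health_status=healthy record is podman executing the healthcheck configured in the config_data above ('test': '/openstack/healthcheck', mounted into the container). The same check can be forced by hand; a sketch, using the container name from the log:

    import subprocess

    # Exit status 0 <=> healthy; podman records the result in the
    # container's health log, just like the periodic timer run above.
    subprocess.check_call(["podman", "healthcheck", "run", "ovn_controller"])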
Nov 29 03:18:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 29 03:18:06 np0005539550 nova_compute[257631]: 2025-11-29 08:18:06.979 257641 INFO nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Creating config drive at /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/disk.config#033[00m
Nov 29 03:18:06 np0005539550 nova_compute[257631]: 2025-11-29 08:18:06.985 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppiknu9xv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.021 257641 INFO nova.virt.libvirt.driver [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Instance destroyed successfully.#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.022 257641 DEBUG nova.objects.instance [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lazy-loading 'resources' on Instance uuid 68c19565-6fe1-4c2c-927d-87f801074e18 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.024 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.045 257641 DEBUG nova.virt.libvirt.vif [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-671911008',display_name='tempest-ServerActionsTestOtherB-server-671911008',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-671911008',id=104,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEIWQ7Agoaix0SKEJrKHu4bB1Waq8EgVKfKJ/0RzVkl2dpwZ96ym4a4YEld/N4o6ej04XW7IMisQ29oCITVHbKZxjsHowaHjgF+3UGfTUq2pqZm9EZTJqhsQL0kJWzkKow==',key_name='tempest-keypair-319762409',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:16:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1b8899f76f554afc96bb2441424e5a77',ramdisk_id='',reservation_id='r-1uf85n06',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-477220446',owner_user_name='tempest-ServerActionsTestOtherB-477220446-project-member',shelved_at='2025-11-29T08:18:02.165022',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='c28b4cfd-fa14-4a57-95be-5fa2d667ebc5'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:17:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5e3ade3963d47be97b545b2e3779b6b',uuid=68c19565-6fe1-4c2c-927d-87f801074e18,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.046 257641 DEBUG nova.network.os_vif_util [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converting VIF {"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d17165a-5b", "ovs_interfaceid": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.048 257641 DEBUG nova.network.os_vif_util [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.048 257641 DEBUG os_vif [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.050 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.051 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d17165a-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.052 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.055 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.058 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.060 257641 INFO os_vif [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bc:67,bridge_name='br-int',has_traffic_filtering=True,id=2d17165a-5bc8-402e-a08c-e6f21188fb1b,network=Network(2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d17165a-5b')#033[00m
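[annotation] The unplug sequence above runs in three steps: nova's VIF dict is converted to an os-vif object (nova_to_osvif_vif), os_vif.unplug() dispatches to the 'ovs' plugin, and the plugin removes the tap port from br-int with a DelPortCommand. A minimal sketch of the same call path using the public os-vif API; the field values are copied from the log, and the instance name is a placeholder the log does not record.

    # Sketch: reproduce the unplug the log shows via the os-vif API.
    # os_vif.initialize() / os_vif.unplug() are the real entry points.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin used below

    vif_obj = vif.VIFOpenVSwitch(
        id='2d17165a-5bc8-402e-a08c-e6f21188fb1b',
        address='fa:16:3e:ae:bc:67',
        vif_name='tap2d17165a-5b',
        bridge_name='br-int',
        has_traffic_filtering=True,
        plugin='ovs',
        network=network.Network(id='2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06',
                                bridge='br-int'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='2d17165a-5bc8-402e-a08c-e6f21188fb1b'))
    inst = instance_info.InstanceInfo(
        uuid='68c19565-6fe1-4c2c-927d-87f801074e18',
        name='placeholder')  # instance name is not recorded in the log

    os_vif.unplug(vif_obj, inst)  # ends in DelPortCommand(tap..., br-int)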
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.124 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppiknu9xv" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.168 257641 DEBUG nova.storage.rbd_utils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] rbd image 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.173 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/disk.config 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.215 257641 DEBUG nova.compute.manager [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Received event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.216 257641 DEBUG nova.compute.manager [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing instance network info cache due to event network-changed-2d17165a-5bc8-402e-a08c-e6f21188fb1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.216 257641 DEBUG oslo_concurrency.lockutils [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.217 257641 DEBUG oslo_concurrency.lockutils [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.217 257641 DEBUG nova.network.neutron [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Refreshing network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.301 257641 DEBUG nova.network.neutron [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Updated VIF entry in instance network info cache for port 47febb89-e1c8-407b-a42f-268eaaf0febd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.302 257641 DEBUG nova.network.neutron [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Updating instance_info_cache with network_info: [{"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:07 np0005539550 nova_compute[257631]: 2025-11-29 08:18:07.343 257641 DEBUG oslo_concurrency.lockutils [req-04227191-ae11-4db3-a613-a784b4b8af9e req-547cc562-ffa1-4412-99eb-1a912ad2e8ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-75bc7b63-d45e-41a1-893d-ff78f18861f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
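[annotation] The Acquiring/Acquired/Releasing lines around the Neutron cache refresh are oslo.concurrency's named-lock API: nova serializes all refreshes of one instance's info cache behind a "refresh_cache-<uuid>" lock. A sketch of the same guard; lockutils.lock() is the real API, the function body stands in for nova's _get_instance_nw_info.

    from oslo_concurrency import lockutils

    def refresh_network_cache(instance_uuid):
        with lockutils.lock('refresh_cache-%s' % instance_uuid):
            # query Neutron, rebuild network_info, then persist it,
            # which the log reports as update_instance_cache_with_nw_info
            pass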
Nov 29 03:18:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:07.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 172 op/s
Nov 29 03:18:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 29 03:18:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 29 03:18:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:08.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
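[annotation] The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 that radosgw's beast frontend logs look like load-balancer health checks; the clients are not identified in the log, so that reading is an assumption. An equivalent probe follows (the target host and port are also assumptions, and http.client speaks HTTP/1.1 rather than 1.0):

    import http.client

    conn = http.client.HTTPConnection('localhost', 8080, timeout=2)  # assumed RGW endpoint
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # the access log above shows http_status=200
    conn.close()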
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:18:08 np0005539550 nova_compute[257631]: 2025-11-29 08:18:08.664 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404273.662774, 68c19565-6fe1-4c2c-927d-87f801074e18 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:08 np0005539550 nova_compute[257631]: 2025-11-29 08:18:08.665 257641 INFO nova.compute.manager [-] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:18:08 np0005539550 nova_compute[257631]: 2025-11-29 08:18:08.690 257641 DEBUG nova.compute.manager [None req-6f8d9f5e-a519-4e06-9a61-3358d6c053e6 - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:08 np0005539550 nova_compute[257631]: 2025-11-29 08:18:08.695 257641 DEBUG nova.compute.manager [None req-6f8d9f5e-a519-4e06-9a61-3358d6c053e6 - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:08 np0005539550 nova_compute[257631]: 2025-11-29 08:18:08.725 257641 INFO nova.compute.manager [None req-6f8d9f5e-a519-4e06-9a61-3358d6c053e6 - - - - - -] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
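[annotation] The lifecycle sync above ends in "pending task ... Skip.": nova refuses to reconcile power state while a task is in flight, so the operation that currently owns the instance (here shelving_offloading) is not raced by the event handler. A paraphrase of that guard, not nova's literal code:

    def sync_power_state(instance, vm_power_state):
        if instance.task_state is not None:
            # "During sync_power_state the instance has a pending task ... Skip."
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()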
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:18:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
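[annotation] The mgr rbd_support module periodically reloads its mirror-snapshot and trash-purge schedules for each pool (vms, volumes, backups, images). The loaded schedules can be checked from the CLI; a sketch using the standard rbd subcommands, with the --id/--conf pair mirroring the credentials nova uses elsewhere in this log:

    import subprocess

    for pool in ('vms', 'volumes', 'backups', 'images'):
        for sub in (['mirror', 'snapshot', 'schedule', 'ls'],
                    ['trash', 'purge', 'schedule', 'ls']):
            # check=False: an empty schedule list is not an error condition
            subprocess.run(['rbd', *sub, '--pool', pool, '--recursive',
                            '--id', 'openstack',
                            '--conf', '/etc/ceph/ceph.conf'], check=False)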
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.158 257641 DEBUG oslo_concurrency.processutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/disk.config 75bc7b63-d45e-41a1-893d-ff78f18861f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.985s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.159 257641 INFO nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Deleting local config drive /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0/disk.config because it was imported into RBD.#033[00m
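[annotation] The 03:18:07 to 03:18:09 lines show the whole config-drive path for instance 75bc7b63: mkisofs builds disk.config locally, nova confirms no RBD image of that name exists yet, imports the ISO into the vms pool (1.985s), then removes the local copy. Condensed into one script with the commands from the log (the -publisher and -quiet flags are omitted for brevity):

    import os
    import subprocess

    inst = '75bc7b63-d45e-41a1-893d-ff78f18861f0'
    local = '/var/lib/nova/instances/%s/disk.config' % inst

    subprocess.run(['mkisofs', '-o', local, '-ldots', '-allow-lowercase',
                    '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                    '/tmp/tmppiknu9xv'], check=True)  # staging dir from the log
    subprocess.run(['rbd', 'import', '--pool', 'vms', local,
                    '%s_disk.config' % inst, '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.remove(local)  # "Deleting local config drive ... imported into RBD"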
Nov 29 03:18:09 np0005539550 kernel: tap47febb89-e1: entered promiscuous mode
Nov 29 03:18:09 np0005539550 NetworkManager[49039]: <info>  [1764404289.2120] manager: (tap47febb89-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/189)
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.210 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:09 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:09Z|00426|binding|INFO|Claiming lport 47febb89-e1c8-407b-a42f-268eaaf0febd for this chassis.
Nov 29 03:18:09 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:09Z|00427|binding|INFO|47febb89-e1c8-407b-a42f-268eaaf0febd: Claiming fa:16:3e:e8:e0:4b 10.100.0.4
Nov 29 03:18:09 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:09Z|00428|binding|INFO|Setting lport 47febb89-e1c8-407b-a42f-268eaaf0febd ovn-installed in OVS
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.229 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.232 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:09 np0005539550 systemd-udevd[324549]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:09 np0005539550 systemd-machined[216673]: New machine qemu-51-instance-0000006d.
Nov 29 03:18:09 np0005539550 NetworkManager[49039]: <info>  [1764404289.2533] device (tap47febb89-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:09 np0005539550 systemd[1]: Started Virtual Machine qemu-51-instance-0000006d.
Nov 29 03:18:09 np0005539550 NetworkManager[49039]: <info>  [1764404289.2545] device (tap47febb89-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.364 257641 DEBUG nova.network.neutron [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updated VIF entry in instance network info cache for port 2d17165a-5bc8-402e-a08c-e6f21188fb1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.365 257641 DEBUG nova.network.neutron [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Updating instance_info_cache with network_info: [{"id": "2d17165a-5bc8-402e-a08c-e6f21188fb1b", "address": "fa:16:3e:ae:bc:67", "network": {"id": "2b704d3a-d3e4-47ce-8a28-10a6f4e6fd06", "bridge": null, "label": "tempest-ServerActionsTestOtherB-322060255-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b8899f76f554afc96bb2441424e5a77", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap2d17165a-5b", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:09.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 531 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.0 MiB/s wr, 158 op/s
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.819 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404289.8190577, 75bc7b63-d45e-41a1-893d-ff78f18861f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.819 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:09 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:09Z|00429|binding|INFO|Setting lport 47febb89-e1c8-407b-a42f-268eaaf0febd up in Southbound
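[annotation] ovn-controller's binding lines are the claim sequence for the new port: claim the lport for this chassis, mark the OVS interface ovn-installed, then flip the Port_Binding row up in the Southbound DB. The result can be verified from the chassis; ovn-sbctl's find on the Port_Binding table is standard, while the database endpoint defaults are assumed:

    import subprocess

    lport = '47febb89-e1c8-407b-a42f-268eaaf0febd'
    out = subprocess.run(['ovn-sbctl', 'find', 'Port_Binding',
                          'logical_port=%s' % lport],
                         capture_output=True, text=True, check=True).stdout
    print(out)  # expect up=[true] and a non-empty chassis after the claim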
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.961 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:e0:4b 10.100.0.4'], port_security=['fa:16:3e:e8:e0:4b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '75bc7b63-d45e-41a1-893d-ff78f18861f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e5f917d899b41e7b5ec4efeeb4fc74f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '17d19dd4-3284-4764-8bc1-544e618c9d02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e05d00fb-0101-48b1-abca-11115e27d55e, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=47febb89-e1c8-407b-a42f-268eaaf0febd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.962 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 47febb89-e1c8-407b-a42f-268eaaf0febd in datapath 7ac1161c-bccc-48c7-b430-7e1f1ab891df bound to our chassis#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.964 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ac1161c-bccc-48c7-b430-7e1f1ab891df#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.974 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e5c9c0-9cac-459a-8410-2280c37b2fba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.975 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ac1161c-b1 in ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.977 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ac1161c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.977 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[418ba860-8f91-4b23-9a8a-51493d823c37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.978 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[baa17123-d2f1-41e5-a924-950376c43a3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.987 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.990 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404289.8192422, 75bc7b63-d45e-41a1-893d-ff78f18861f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.990 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:18:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:09.991 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[4755a8e8-8909-482c-84d1-279048c64a76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:09 np0005539550 nova_compute[257631]: 2025-11-29 08:18:09.998 257641 DEBUG oslo_concurrency.lockutils [req-90b5d269-94d8-43f1-8902-1be281a7f69b req-d1c4727f-b9c0-4309-a9a7-a55973965d1f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-68c19565-6fe1-4c2c-927d-87f801074e18" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.009 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[71447459-1397-4000-b3e5-cb6c1d4155fe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.020 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.023 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.043 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[20b0e944-83f6-46cc-86b5-1f764cabcb77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.052 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[897f3b27-979e-4b72-bfc0-b4e387625b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 NetworkManager[49039]: <info>  [1764404290.0529] manager: (tap7ac1161c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/190)
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.059 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.081 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6793f341-def2-4c98-b04a-db5e8559c60e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.085 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b0158f9f-fabb-4007-b26a-e984c5380b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 NetworkManager[49039]: <info>  [1764404290.1064] device (tap7ac1161c-b0): carrier: link connected
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.110 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b6287f-51d3-45ca-8762-3f75b4a9eea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.126 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7c634e7d-77dd-499e-b8ee-89fc0d7759f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ac1161c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:03:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733777, 'reachable_time': 16208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324628, 'error': None, 'target': 'ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
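[annotation] The privsep reply above is a raw pyroute2 RTM_NEWLINK dump for tap7ac1161c-b1, taken inside the ovnmeta- namespace (note the 'target' field in the message header). The same query can be issued directly with pyroute2, which is the library neutron's privileged ip_lib wraps:

    from pyroute2 import NetNS

    ns = 'ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df'
    with NetNS(ns) as ip:
        idx = ip.link_lookup(ifname='tap7ac1161c-b1')[0]
        link = ip.get_links(idx)[0]
        print(link.get_attr('IFLA_OPERSTATE'),  # 'UP' in the dump above
              link.get_attr('IFLA_ADDRESS'))    # fa:16:3e:50:03:50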
Nov 29 03:18:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:10.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.140 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1d6138-c9cc-4a2f-bdcf-6784852dc7c4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:350'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 733777, 'tstamp': 733777}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324629, 'error': None, 'target': 'ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.154 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b40da59b-e9e4-4332-a382-5ee9feecf4e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ac1161c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:03:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733777, 'reachable_time': 16208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324630, 'error': None, 'target': 'ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.180 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3cdd3f-92bc-4af1-899d-c9e23f7d8084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.240 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[84f95fe7-ea4d-4a06-8083-e9eb91dc023b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.241 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ac1161c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.241 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.242 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ac1161c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.244 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539550 NetworkManager[49039]: <info>  [1764404290.2452] manager: (tap7ac1161c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Nov 29 03:18:10 np0005539550 kernel: tap7ac1161c-b0: entered promiscuous mode
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.247 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.249 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ac1161c-b0, col_values=(('external_ids', {'iface-id': '5f9db27f-fa5f-419a-9170-9d234cc3df4b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
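[annotation] The two transactions above move the metadata VETH onto br-int and stamp the Interface record with the Neutron port UUID (iface-id) so ovn-controller can bind it; the preceding DelPortCommand against br-ex was a no-op cleanup ("Transaction caused no change"). The same pair through ovsdbapp's OVS schema API; the socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    api.add_port('br-int', 'tap7ac1161c-b0',
                 may_exist=True).execute(check_error=True)
    api.db_set('Interface', 'tap7ac1161c-b0',
               ('external_ids',
                {'iface-id': '5f9db27f-fa5f-419a-9170-9d234cc3df4b'})
               ).execute(check_error=True)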
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.250 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:10Z|00430|binding|INFO|Releasing lport 5f9db27f-fa5f-419a-9170-9d234cc3df4b from this chassis (sb_readonly=0)
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.250 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.253 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ac1161c-bccc-48c7-b430-7e1f1ab891df.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ac1161c-bccc-48c7-b430-7e1f1ab891df.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.253 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ded5e4ae-3b95-409e-9a0c-9fe419a02088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.254 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-7ac1161c-bccc-48c7-b430-7e1f1ab891df
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/7ac1161c-bccc-48c7-b430-7e1f1ab891df.pid.haproxy
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 7ac1161c-bccc-48c7-b430-7e1f1ab891df
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.255 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'env', 'PROCESS_TAG=haproxy-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ac1161c-bccc-48c7-b430-7e1f1ab891df.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
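[annotation] With the config rendered, the agent spawns haproxy inside the ovnmeta namespace through rootwrap; PROCESS_TAG is how neutron later finds and signals the daemon. Stripped of the rootwrap indirection, the same launch is (requires root):

    import subprocess

    net = '7ac1161c-bccc-48c7-b430-7e1f1ab891df'
    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-%s' % net,
         'env', 'PROCESS_TAG=haproxy-%s' % net,
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net],
        check=True)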
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.268 257641 INFO nova.virt.libvirt.driver [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Deleting instance files /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18_del#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.269 257641 INFO nova.virt.libvirt.driver [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] [instance: 68c19565-6fe1-4c2c-927d-87f801074e18] Deletion of /var/lib/nova/instances/68c19565-6fe1-4c2c-927d-87f801074e18_del complete#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.276 257641 DEBUG nova.compute.manager [req-fe2595c1-b86b-4d4f-b818-c800a8be4c6b req-1b3d9df4-6035-4c20-90db-5e952f6f26ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received event network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.277 257641 DEBUG oslo_concurrency.lockutils [req-fe2595c1-b86b-4d4f-b818-c800a8be4c6b req-1b3d9df4-6035-4c20-90db-5e952f6f26ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.277 257641 DEBUG oslo_concurrency.lockutils [req-fe2595c1-b86b-4d4f-b818-c800a8be4c6b req-1b3d9df4-6035-4c20-90db-5e952f6f26ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.277 257641 DEBUG oslo_concurrency.lockutils [req-fe2595c1-b86b-4d4f-b818-c800a8be4c6b req-1b3d9df4-6035-4c20-90db-5e952f6f26ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.278 257641 DEBUG nova.compute.manager [req-fe2595c1-b86b-4d4f-b818-c800a8be4c6b req-1b3d9df4-6035-4c20-90db-5e952f6f26ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Processing event network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.278 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
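[annotation] The "Instance event wait completed in 0 seconds" line is the tail of nova's external-event handshake: the spawn path registers its interest in network-vif-plugged before plugging, Neutron posts the event through the API (the req-fe2595c1 lines above), and the waiter is released immediately because the event arrived first. A minimal model of that handshake with threading.Event; the real implementation lives in nova.compute.manager.InstanceEvents:

    import threading

    expected = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, name):
        expected[(instance_uuid, name)] = threading.Event()

    def external_event(instance_uuid, name):  # Neutron -> nova-api -> compute
        expected[(instance_uuid, name)].set()

    def wait_for_event(instance_uuid, name, deadline=300):
        if not expected[(instance_uuid, name)].wait(deadline):
            raise TimeoutError(name)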
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.284 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404290.284578, 75bc7b63-d45e-41a1-893d-ff78f18861f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.285 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.286 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.290 257641 INFO nova.virt.libvirt.driver [-] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Instance spawned successfully.#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.290 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.317 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.317 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.318 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.319 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.319 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.319 257641 DEBUG nova.virt.libvirt.driver [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
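[annotation] After a successful spawn the driver records the bus/model defaults it actually chose, so the instance keeps identical virtual hardware across rebuilds even if the image defaults change later. Roughly, the six "Found default" lines above amount to the following mapping (the image_* key naming is an assumption about nova's internal storage):

    defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }
    system_metadata_updates = {'image_%s' % k: v for k, v in defaults.items()}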
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.330 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.332 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.373 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.433 257641 INFO nova.scheduler.client.report [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Deleted allocations for instance 68c19565-6fe1-4c2c-927d-87f801074e18#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.500 257641 INFO nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Took 11.28 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.501 257641 DEBUG nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.503 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:10.505 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.545 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.546 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.590 257641 INFO nova.compute.manager [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Took 12.32 seconds to build instance.#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.605 257641 DEBUG oslo_concurrency.processutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:10 np0005539550 nova_compute[257631]: 2025-11-29 08:18:10.633 257641 DEBUG oslo_concurrency.lockutils [None req-417560e6-d0eb-40df-a3bb-c930fe76fb0a eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.443s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:10 np0005539550 podman[324663]: 2025-11-29 08:18:10.598880054 +0000 UTC m=+0.024900864 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:18:10 np0005539550 podman[324663]: 2025-11-29 08:18:10.756371437 +0000 UTC m=+0.182392217 container create 2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:18:10 np0005539550 systemd[1]: Started libpod-conmon-2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c.scope.
Nov 29 03:18:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab3a913a9c566ef7080822d3d710ae7ba1286dce9f1b75eb92da846a194ea374/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
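The xfs note above is the kernel pointing out that this overlay mount uses 32-bit inode timestamps, valid through 0x7fffffff seconds after the epoch. A quick check of what date that limit corresponds to:

    from datetime import datetime, timezone

    limit = datetime.fromtimestamp(0x7fffffff, tz=timezone.utc)
    print(limit.isoformat())   # -> 2038-01-19T03:14:07+00:00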
Nov 29 03:18:10 np0005539550 podman[324663]: 2025-11-29 08:18:10.881361585 +0000 UTC m=+0.307382385 container init 2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:18:10 np0005539550 podman[324663]: 2025-11-29 08:18:10.888584248 +0000 UTC m=+0.314605028 container start 2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:18:10 np0005539550 neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df[324698]: [NOTICE]   (324702) : New worker (324704) forked
Nov 29 03:18:10 np0005539550 neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df[324698]: [NOTICE]   (324702) : Loading success.
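The container lines above trace the full podman lifecycle for the metadata haproxy: image pull, create, init, start, then haproxy forking its worker. A minimal sketch of watching the same lifecycle from the CLI, wrapped in Python to keep one language; the container name is taken from the log, while the JSON field names (Status, Name) are assumptions about podman's event format:

    import json
    import subprocess

    # Stream podman lifecycle events (pull/create/init/start, as logged above).
    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json', '--filter',
         'container=neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df'],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:               # one JSON object per event
        ev = json.loads(line)
        print(ev.get('Status'), ev.get('Name'))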
Nov 29 03:18:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:11.018 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:18:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474515018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:11 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:18:11 np0005539550 nova_compute[257631]: 2025-11-29 08:18:11.081 257641 DEBUG oslo_concurrency.processutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
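The `ceph df --format=json` call issued at 08:18:10.605 and completing above is how the resource tracker refreshes RBD-backed disk inventory. A minimal sketch of the same invocation and of reading the cluster totals from the JSON (same --id and --conf as logged; key names follow the ceph df JSON schema):

    import json
    import subprocess

    # Same command the resource tracker logged above.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)['stats']        # cluster-wide totals
    gib = 1024 ** 3
    print('total %.1f GiB, avail %.1f GiB' % (stats['total_bytes'] / gib,
                                              stats['total_avail_bytes'] / gib))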
Nov 29 03:18:11 np0005539550 nova_compute[257631]: 2025-11-29 08:18:11.088 257641 DEBUG nova.compute.provider_tree [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:11 np0005539550 nova_compute[257631]: 2025-11-29 08:18:11.108 257641 DEBUG nova.scheduler.client.report [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
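The inventory line above contains everything needed to reproduce the capacity placement schedules against; per resource class the usable capacity is (total - reserved) * allocation_ratio. Checking it with the logged numbers:

    # Values copied from the inventory line above.
    vcpu = (8 - 0) * 4.0        # -> 32.0 schedulable VCPUs
    mem = (7680 - 512) * 1.0    # -> 7168 MB of RAM
    disk = (20 - 1) * 0.9       # -> 17.1 GB of disk

    print(vcpu, mem, disk)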
Nov 29 03:18:11 np0005539550 nova_compute[257631]: 2025-11-29 08:18:11.127 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
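The "Acquiring lock / acquired / released" triplets around "compute_resources" are oslo.concurrency's standard lock tracing. A minimal sketch of the same pattern in decorator form (the function name is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # The body runs under the same named in-process lock whose
        # acquire/release the resource tracker logs above.
        pass

    update_usage()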
Nov 29 03:18:11 np0005539550 nova_compute[257631]: 2025-11-29 08:18:11.195 257641 DEBUG oslo_concurrency.lockutils [None req-7b22c5ab-58bc-4534-8d90-3e02771db71b c5e3ade3963d47be97b545b2e3779b6b 1b8899f76f554afc96bb2441424e5a77 - - default default] Lock "68c19565-6fe1-4c2c-927d-87f801074e18" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 20.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:11.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 411 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 29 03:18:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:12 np0005539550 nova_compute[257631]: 2025-11-29 08:18:12.053 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:12.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:12 np0005539550 nova_compute[257631]: 2025-11-29 08:18:12.766 257641 DEBUG nova.compute.manager [req-07816d68-cbfa-4271-800c-bf7b3301f2bd req-7c00baaa-0d2d-462a-b685-681a1c9da9d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received event network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:12 np0005539550 nova_compute[257631]: 2025-11-29 08:18:12.767 257641 DEBUG oslo_concurrency.lockutils [req-07816d68-cbfa-4271-800c-bf7b3301f2bd req-7c00baaa-0d2d-462a-b685-681a1c9da9d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:12 np0005539550 nova_compute[257631]: 2025-11-29 08:18:12.768 257641 DEBUG oslo_concurrency.lockutils [req-07816d68-cbfa-4271-800c-bf7b3301f2bd req-7c00baaa-0d2d-462a-b685-681a1c9da9d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:12 np0005539550 nova_compute[257631]: 2025-11-29 08:18:12.768 257641 DEBUG oslo_concurrency.lockutils [req-07816d68-cbfa-4271-800c-bf7b3301f2bd req-7c00baaa-0d2d-462a-b685-681a1c9da9d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:12 np0005539550 nova_compute[257631]: 2025-11-29 08:18:12.768 257641 DEBUG nova.compute.manager [req-07816d68-cbfa-4271-800c-bf7b3301f2bd req-7c00baaa-0d2d-462a-b685-681a1c9da9d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] No waiting events found dispatching network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:12 np0005539550 nova_compute[257631]: 2025-11-29 08:18:12.769 257641 WARNING nova.compute.manager [req-07816d68-cbfa-4271-800c-bf7b3301f2bd req-7c00baaa-0d2d-462a-b685-681a1c9da9d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received unexpected event network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:13.020 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
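Taken together, the SB_Global update (nb_cfg 32 -> 33) at 08:18:10.505, the 2-second delay at 08:18:11.018, and the Chassis_Private transaction above form OVN's agent-liveness handshake: northd bumps nb_cfg, and each metadata agent echoes the number into its own Chassis_Private record after a small splay. A minimal way to inspect both sides, wrapped in Python to keep one language; the record UUID is the one from the transaction above:

    import subprocess

    def sbctl(*args):
        return subprocess.run(('ovn-sbctl',) + args, check=True,
                              capture_output=True, text=True).stdout.strip()

    # northd's side: the current configuration sequence number.
    print(sbctl('get', 'SB_Global', '.', 'nb_cfg'))

    # The agent's side: external_ids of the record updated above should
    # now carry neutron:ovn-metadata-sb-cfg=33.
    print(sbctl('--columns=external_ids', 'list', 'Chassis_Private',
                'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8'))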
Nov 29 03:18:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 372 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 745 KiB/s wr, 112 op/s
Nov 29 03:18:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:14.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:15.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
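The anonymous "HEAD / HTTP/1.0" requests repeating every couple of seconds from 192.168.122.100 and .102 look like load-balancer health probes rather than user traffic. A minimal sketch of pulling client, verb, status, and latency out of a beast access line (the regex is illustrative, written against the format shown above):

    import re

    LINE = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:18:15.371 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    pat = re.compile(r'beast: \S+: (?P<ip>\S+) .*?'
                     r'"(?P<verb>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+)'
                     r'.*latency=(?P<latency>[\d.]+)s')
    m = pat.search(LINE)
    print(m.group('ip'), m.group('verb'), m.group('status'), m.group('latency'))
    # -> 192.168.122.100 HEAD 200 0.000000000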
Nov 29 03:18:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 372 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 148 op/s
Nov 29 03:18:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.588541) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296588595, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 568, "num_deletes": 252, "total_data_size": 615178, "memory_usage": 625384, "flush_reason": "Manual Compaction"}
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296594637, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 607843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43125, "largest_seqno": 43692, "table_properties": {"data_size": 604733, "index_size": 1018, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7508, "raw_average_key_size": 19, "raw_value_size": 598457, "raw_average_value_size": 1554, "num_data_blocks": 45, "num_entries": 385, "num_filter_entries": 385, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404262, "oldest_key_time": 1764404262, "file_creation_time": 1764404296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 6134 microseconds, and 2473 cpu microseconds.
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.594674) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 607843 bytes OK
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.594692) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.596354) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.596368) EVENT_LOG_v1 {"time_micros": 1764404296596363, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.596385) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 612018, prev total WAL file size 612018, number of live WAL files 2.
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.596767) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(593KB)], [92(12MB)]
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296596790, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 13862554, "oldest_snapshot_seqno": -1}
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7671 keys, 11981097 bytes, temperature: kUnknown
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296730478, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11981097, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11929330, "index_size": 31495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 199595, "raw_average_key_size": 26, "raw_value_size": 11791509, "raw_average_value_size": 1537, "num_data_blocks": 1237, "num_entries": 7671, "num_filter_entries": 7671, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.731256) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11981097 bytes
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.859416) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.3 rd, 89.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(42.5) write-amplify(19.7) OK, records in: 8187, records dropped: 516 output_compression: NoCompression
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.859455) EVENT_LOG_v1 {"time_micros": 1764404296859440, "job": 54, "event": "compaction_finished", "compaction_time_micros": 134233, "compaction_time_cpu_micros": 32573, "output_level": 6, "num_output_files": 1, "total_output_size": 11981097, "num_input_records": 8187, "num_output_records": 7671, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
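The compaction summary above can be cross-checked against the per-file sizes in the surrounding EVENT_LOG entries: job 54 read table #94 (607,843 B) from L0 plus table #92 from L6 (input_data_size 13,862,554 B total) and wrote table #95 (11,981,097 B). Redoing the arithmetic:

    l0_in = 607_843                # table #94, the freshly flushed L0 file
    l6_in = 13_862_554 - l0_in     # table #92, the existing L6 file
    out = 11_981_097               # table #95, the compacted output

    write_amp = out / l0_in                    # bytes written per new L0 byte
    rw_amp = (l0_in + l6_in + out) / l0_in     # total I/O per new L0 byte
    print('write-amplify %.1f, read-write-amplify %.1f' % (write_amp, rw_amp))
    # -> write-amplify 19.7, read-write-amplify 42.5, matching the log line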
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296859719, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404296861685, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.596682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.861731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.861738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.861739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.861741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:18:16.861743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:18:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.055 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 372 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 142 op/s
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.424 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.424 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.425 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.425 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.425 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.427 257641 INFO nova.compute.manager [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Terminating instance#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.428 257641 DEBUG nova.compute.manager [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:18:17 np0005539550 kernel: tap47febb89-e1 (unregistering): left promiscuous mode
Nov 29 03:18:17 np0005539550 NetworkManager[49039]: <info>  [1764404297.7494] device (tap47febb89-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:18:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:17Z|00431|binding|INFO|Releasing lport 47febb89-e1c8-407b-a42f-268eaaf0febd from this chassis (sb_readonly=0)
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.757 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:17Z|00432|binding|INFO|Setting lport 47febb89-e1c8-407b-a42f-268eaaf0febd down in Southbound
Nov 29 03:18:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:17Z|00433|binding|INFO|Removing iface tap47febb89-e1 ovn-installed in OVS
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.761 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:17.765 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:e0:4b 10.100.0.4'], port_security=['fa:16:3e:e8:e0:4b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '75bc7b63-d45e-41a1-893d-ff78f18861f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4e5f917d899b41e7b5ec4efeeb4fc74f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '17d19dd4-3284-4764-8bc1-544e618c9d02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e05d00fb-0101-48b1-abca-11115e27d55e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=47febb89-e1c8-407b-a42f-268eaaf0febd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:17.767 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 47febb89-e1c8-407b-a42f-268eaaf0febd in datapath 7ac1161c-bccc-48c7-b430-7e1f1ab891df unbound from our chassis#033[00m
Nov 29 03:18:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:17.768 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ac1161c-bccc-48c7-b430-7e1f1ab891df, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:18:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:17.769 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7c0e08-0527-43d8-b0b5-07ae2a7cf822]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:17.770 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df namespace which is not needed anymore#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539550 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Nov 29 03:18:17 np0005539550 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000006d.scope: Consumed 7.954s CPU time.
Nov 29 03:18:17 np0005539550 systemd-machined[216673]: Machine qemu-51-instance-0000006d terminated.
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.869 257641 INFO nova.virt.libvirt.driver [-] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Instance destroyed successfully.#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.870 257641 DEBUG nova.objects.instance [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lazy-loading 'resources' on Instance uuid 75bc7b63-d45e-41a1-893d-ff78f18861f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.887 257641 DEBUG nova.virt.libvirt.vif [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-585833635',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-585833635',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-585833635',id=109,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4e5f917d899b41e7b5ec4efeeb4fc74f',ramdisk_id='',reservation_id='r-bkbgf0zj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-222518474',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-222518474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:10Z,user_data=None,user_id='eb363261bdab40e790d3017750f85069',uuid=75bc7b63-d45e-41a1-893d-ff78f18861f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.888 257641 DEBUG nova.network.os_vif_util [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Converting VIF {"id": "47febb89-e1c8-407b-a42f-268eaaf0febd", "address": "fa:16:3e:e8:e0:4b", "network": {"id": "7ac1161c-bccc-48c7-b430-7e1f1ab891df", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1267455479-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4e5f917d899b41e7b5ec4efeeb4fc74f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47febb89-e1", "ovs_interfaceid": "47febb89-e1c8-407b-a42f-268eaaf0febd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.889 257641 DEBUG nova.network.os_vif_util [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e0:4b,bridge_name='br-int',has_traffic_filtering=True,id=47febb89-e1c8-407b-a42f-268eaaf0febd,network=Network(7ac1161c-bccc-48c7-b430-7e1f1ab891df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47febb89-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.889 257641 DEBUG os_vif [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e0:4b,bridge_name='br-int',has_traffic_filtering=True,id=47febb89-e1c8-407b-a42f-268eaaf0febd,network=Network(7ac1161c-bccc-48c7-b430-7e1f1ab891df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47febb89-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:17 np0005539550 neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df[324698]: [NOTICE]   (324702) : haproxy version is 2.8.14-c23fe91
Nov 29 03:18:17 np0005539550 neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df[324698]: [NOTICE]   (324702) : path to executable is /usr/sbin/haproxy
Nov 29 03:18:17 np0005539550 neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df[324698]: [WARNING]  (324702) : Exiting Master process...
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.891 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.892 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47febb89-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:17 np0005539550 neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df[324698]: [ALERT]    (324702) : Current worker (324704) exited with code 143 (Terminated)
Nov 29 03:18:17 np0005539550 neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df[324698]: [WARNING]  (324702) : All workers exited. Exiting... (0)
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.893 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.896 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:17 np0005539550 systemd[1]: libpod-2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c.scope: Deactivated successfully.
Nov 29 03:18:17 np0005539550 nova_compute[257631]: 2025-11-29 08:18:17.899 257641 INFO os_vif [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:e0:4b,bridge_name='br-int',has_traffic_filtering=True,id=47febb89-e1c8-407b-a42f-268eaaf0febd,network=Network(7ac1161c-bccc-48c7-b430-7e1f1ab891df),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47febb89-e1')#033[00m
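The unplug sequence above (Converting VIF -> Converted object -> Unplugging -> Successfully unplugged) is nova driving the os-vif library. A minimal sketch of the same call pattern; the objects are stand-ins built from the values in the log, and actually running it requires root on a host with OVS and the os-vif ovs plugin installed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    # Stand-ins mirroring the VIFOpenVSwitch object logged above.
    net = network.Network(id='7ac1161c-bccc-48c7-b430-7e1f1ab891df',
                          bridge='br-int')
    port = vif.VIFOpenVSwitch(id='47febb89-e1c8-407b-a42f-268eaaf0febd',
                              address='fa:16:3e:e8:e0:4b',
                              vif_name='tap47febb89-e1',
                              bridge_name='br-int',
                              network=net)
    inst = instance_info.InstanceInfo(
        uuid='75bc7b63-d45e-41a1-893d-ff78f18861f0',
        name='instance-0000006d')

    os_vif.unplug(port, inst)   # removes tap47febb89-e1 from br-int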
Nov 29 03:18:17 np0005539550 podman[324794]: 2025-11-29 08:18:17.902090935 +0000 UTC m=+0.053090481 container died 2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.081 257641 DEBUG nova.compute.manager [req-c93e4443-a46e-423f-b59d-08618bcf83c8 req-1e132f65-81cf-4041-bc95-afdeab9efb0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received event network-vif-unplugged-47febb89-e1c8-407b-a42f-268eaaf0febd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.081 257641 DEBUG oslo_concurrency.lockutils [req-c93e4443-a46e-423f-b59d-08618bcf83c8 req-1e132f65-81cf-4041-bc95-afdeab9efb0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.083 257641 DEBUG oslo_concurrency.lockutils [req-c93e4443-a46e-423f-b59d-08618bcf83c8 req-1e132f65-81cf-4041-bc95-afdeab9efb0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.083 257641 DEBUG oslo_concurrency.lockutils [req-c93e4443-a46e-423f-b59d-08618bcf83c8 req-1e132f65-81cf-4041-bc95-afdeab9efb0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.084 257641 DEBUG nova.compute.manager [req-c93e4443-a46e-423f-b59d-08618bcf83c8 req-1e132f65-81cf-4041-bc95-afdeab9efb0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] No waiting events found dispatching network-vif-unplugged-47febb89-e1c8-407b-a42f-268eaaf0febd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.084 257641 DEBUG nova.compute.manager [req-c93e4443-a46e-423f-b59d-08618bcf83c8 req-1e132f65-81cf-4041-bc95-afdeab9efb0c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received event network-vif-unplugged-47febb89-e1c8-407b-a42f-268eaaf0febd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:18:18 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ab3a913a9c566ef7080822d3d710ae7ba1286dce9f1b75eb92da846a194ea374-merged.mount: Deactivated successfully.
Nov 29 03:18:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:18.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:18 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:18:18 np0005539550 podman[324794]: 2025-11-29 08:18:18.187226504 +0000 UTC m=+0.338226050 container cleanup 2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:18 np0005539550 systemd[1]: libpod-conmon-2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c.scope: Deactivated successfully.
Nov 29 03:18:18 np0005539550 podman[324853]: 2025-11-29 08:18:18.575681978 +0000 UTC m=+0.368009936 container remove 2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.587 257641 INFO nova.virt.libvirt.driver [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Deleting instance files /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0_del#033[00m
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.588 257641 INFO nova.virt.libvirt.driver [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Deletion of /var/lib/nova/instances/75bc7b63-d45e-41a1-893d-ff78f18861f0_del complete#033[00m
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.589 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[422021b2-e80e-45a8-b234-9b73912d4319]: (4, ('Sat Nov 29 08:18:17 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df (2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c)\n2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c\nSat Nov 29 08:18:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df (2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c)\n2590f3dd76c8083bcb1283c52d0259d9a8bc0d314bcb2961b4d5ae7c99cc7b3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.591 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a6eadc36-484c-49e3-88b4-f4543b866574]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.591 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ac1161c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:18 np0005539550 kernel: tap7ac1161c-b0: left promiscuous mode
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.598 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[58e50342-4515-4ba4-8a6e-88f8c721b678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.610 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7ee4b0-061c-42ee-a7a0-014f97afd351]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.612 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa3d52a-cda1-4345-bba2-37dab51988f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.627 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[61c5aa4e-42cc-497a-9088-212f4c7b8e8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 733770, 'reachable_time': 43861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324869, 'error': None, 'target': 'ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:18 np0005539550 systemd[1]: run-netns-ovnmeta\x2d7ac1161c\x2dbccc\x2d48c7\x2db430\x2d7e1f1ab891df.mount: Deactivated successfully.
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.632 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ac1161c-bccc-48c7-b430-7e1f1ab891df deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.632 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[14f290bf-3584-47b7-9ac5-52c5712926c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.657 257641 INFO nova.compute.manager [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Took 1.23 seconds to destroy the instance on the hypervisor.
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.658 257641 DEBUG oslo.service.loopingcall [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.658 257641 DEBUG nova.compute.manager [-] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:18:18 np0005539550 nova_compute[257631]: 2025-11-29 08:18:18.658 257641 DEBUG nova.network.neutron [-] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.950 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.952 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:18.952 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
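The acquiring/acquired/released triplet above is the standard trace that oslo.concurrency's lock wrapper (the "inner" function at lockutils.py:404/409/423) emits around a guarded call. A sketch of the calling pattern that produces it, assuming only that "_check_child_processes" is used as the lock name:

    # Hedged sketch: a function guarded like this yields the three lockutils
    # DEBUG lines above (Acquiring / acquired + waited / released + held).
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # body runs with the named semaphore held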
Nov 29 03:18:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:19.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 372 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 121 op/s
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005337196689465848 of space, bias 1.0, pg target 1.6011590068397543 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005059113565512749 of space, bias 1.0, pg target 1.512674956088312 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:18:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
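The pg_autoscaler lines above embed a simple calculation: pg target = (fraction of cluster space used) x bias x PG budget. A worked check against two of the lines, assuming the budget here is about 300 (consistent with 3 OSDs at the default mon_target_pg_per_osd of 100); this mirrors the arithmetic, it is not the mgr module's code:

    # Worked check of the '.mgr' and 'vms' pg_autoscaler lines; 300 is an
    # assumed budget (3 OSDs x mon_target_pg_per_osd=100), not read from the log.
    budget = 300
    print(2.0538165363856318e-05 * 1.0 * budget)  # 0.006161... ('.mgr' pg target)
    print(0.005337196689465848 * 1.0 * budget)    # 1.601159... ('vms' pg target)
    # The raw target is then rounded to a power of two and clamped by the
    # pool's minimum, which is why 1.6 still reads "quantized to 32" while
    # '.mgr' lands on 1; the autoscaler only acts when the result differs
    # from the current pg_num by more than its threshold (3x by default).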
Nov 29 03:18:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.170 257641 DEBUG nova.compute.manager [req-22d226de-1252-472f-adfb-520ff990050a req-3d4c3199-5299-4155-948e-7f70d9419e28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received event network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.170 257641 DEBUG oslo_concurrency.lockutils [req-22d226de-1252-472f-adfb-520ff990050a req-3d4c3199-5299-4155-948e-7f70d9419e28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.171 257641 DEBUG oslo_concurrency.lockutils [req-22d226de-1252-472f-adfb-520ff990050a req-3d4c3199-5299-4155-948e-7f70d9419e28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.171 257641 DEBUG oslo_concurrency.lockutils [req-22d226de-1252-472f-adfb-520ff990050a req-3d4c3199-5299-4155-948e-7f70d9419e28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.171 257641 DEBUG nova.compute.manager [req-22d226de-1252-472f-adfb-520ff990050a req-3d4c3199-5299-4155-948e-7f70d9419e28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] No waiting events found dispatching network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.171 257641 WARNING nova.compute.manager [req-22d226de-1252-472f-adfb-520ff990050a req-3d4c3199-5299-4155-948e-7f70d9419e28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received unexpected event network-vif-plugged-47febb89-e1c8-407b-a42f-268eaaf0febd for instance with vm_state active and task_state deleting.
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.493 257641 DEBUG nova.network.neutron [-] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.516 257641 INFO nova.compute.manager [-] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Took 1.86 seconds to deallocate network for instance.
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.611 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.612 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.625 257641 DEBUG nova.compute.manager [req-3d6487d4-8580-4a84-a1d1-cf39de679d3e req-45b14915-6558-42d4-8546-b0794c9ce6f5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Received event network-vif-deleted-47febb89-e1c8-407b-a42f-268eaaf0febd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.673 257641 DEBUG oslo_concurrency.processutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:20 np0005539550 nova_compute[257631]: 2025-11-29 08:18:20.756 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/726313573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:21 np0005539550 nova_compute[257631]: 2025-11-29 08:18:21.134 257641 DEBUG oslo_concurrency.processutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:21 np0005539550 nova_compute[257631]: 2025-11-29 08:18:21.140 257641 DEBUG nova.compute.provider_tree [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:18:21 np0005539550 nova_compute[257631]: 2025-11-29 08:18:21.161 257641 DEBUG nova.scheduler.client.report [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
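Placement derives schedulable capacity from that inventory as (total - reserved) x allocation_ratio. Checking the numbers in the line above:

    # Effective capacity implied by the inventory nova just reported.
    vcpu    = (8    - 0)   * 4.0   # 32.0 schedulable vCPUs
    ram_mb  = (7680 - 512) * 1.0   # 7168.0 MB
    disk_gb = (20   - 1)   * 0.9   # 17.1 GB
    print(vcpu, ram_mb, disk_gb)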
Nov 29 03:18:21 np0005539550 nova_compute[257631]: 2025-11-29 08:18:21.182 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:21 np0005539550 nova_compute[257631]: 2025-11-29 08:18:21.228 257641 INFO nova.scheduler.client.report [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Deleted allocations for instance 75bc7b63-d45e-41a1-893d-ff78f18861f0
Nov 29 03:18:21 np0005539550 nova_compute[257631]: 2025-11-29 08:18:21.299 257641 DEBUG oslo_concurrency.lockutils [None req-eb3a8486-17fe-4366-9af2-eb868f0eb9f3 eb363261bdab40e790d3017750f85069 4e5f917d899b41e7b5ec4efeeb4fc74f - - default default] Lock "75bc7b63-d45e-41a1-893d-ff78f18861f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:21.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
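The beast access-log lines recur throughout this capture with a fixed layout (request pointer, client IP, user, timestamp, request line, status, bytes, latency). A throwaway parser, with the field layout inferred from the samples in this log rather than from radosgw documentation:

    import re

    # Field layout inferred from the beast lines in this capture (hedged).
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<lat>[\d.]+)s')

    sample = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
              '[29/Nov/2025:08:18:21.378 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.000000000s')
    m = BEAST.search(sample)
    print(m.group('ip'), m.group('status'), m.group('lat'))
    # -> 192.168.122.100 200 0.000000000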
Nov 29 03:18:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 348 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 17 KiB/s wr, 192 op/s
Nov 29 03:18:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:22 np0005539550 nova_compute[257631]: 2025-11-29 08:18:22.057 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:22.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:22 np0005539550 nova_compute[257631]: 2025-11-29 08:18:22.894 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:23.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 325 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 KiB/s wr, 174 op/s
Nov 29 03:18:23 np0005539550 podman[324920]: 2025-11-29 08:18:23.506652643 +0000 UTC m=+0.060443508 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:18:23 np0005539550 podman[324919]: 2025-11-29 08:18:23.506737015 +0000 UTC m=+0.066459300 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:18:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 16b73989-d5d7-4544-af82-26f3e108703f does not exist
Nov 29 03:18:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a487e1a9-1873-42be-8951-9493f7987c05 does not exist
Nov 29 03:18:24 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 486a4bb8-15d7-4e0f-ad75-3b6294169aa5 does not exist
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:18:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:18:24 np0005539550 podman[325207]: 2025-11-29 08:18:24.879490093 +0000 UTC m=+0.044264276 container create f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rhodes, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:24 np0005539550 systemd[1]: Started libpod-conmon-f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d.scope.
Nov 29 03:18:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:24 np0005539550 podman[325207]: 2025-11-29 08:18:24.858927781 +0000 UTC m=+0.023701984 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:24 np0005539550 podman[325207]: 2025-11-29 08:18:24.966176097 +0000 UTC m=+0.130950310 container init f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:18:24 np0005539550 podman[325207]: 2025-11-29 08:18:24.974756355 +0000 UTC m=+0.139530528 container start f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:18:24 np0005539550 podman[325207]: 2025-11-29 08:18:24.977997668 +0000 UTC m=+0.142771851 container attach f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rhodes, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:18:24 np0005539550 nice_rhodes[325223]: 167 167
Nov 29 03:18:24 np0005539550 systemd[1]: libpod-f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d.scope: Deactivated successfully.
Nov 29 03:18:24 np0005539550 podman[325207]: 2025-11-29 08:18:24.983356224 +0000 UTC m=+0.148130417 container died f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bcfebd963a31abb519acd9646f5c38338c27307fc933764ffeb7e7924cbf3a7b-merged.mount: Deactivated successfully.
Nov 29 03:18:25 np0005539550 podman[325207]: 2025-11-29 08:18:25.027787503 +0000 UTC m=+0.192561686 container remove f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:18:25 np0005539550 systemd[1]: libpod-conmon-f78da4e393f804868912de51541f4988a8cf1d8de31b5d12472696e717216c1d.scope: Deactivated successfully.
Nov 29 03:18:25 np0005539550 podman[325247]: 2025-11-29 08:18:25.203736706 +0000 UTC m=+0.042678966 container create 39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:25 np0005539550 systemd[1]: Started libpod-conmon-39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947.scope.
Nov 29 03:18:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162590a80b38964c1d397f52331ab38e48558f28876ebf158b3a05247b99bc60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162590a80b38964c1d397f52331ab38e48558f28876ebf158b3a05247b99bc60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162590a80b38964c1d397f52331ab38e48558f28876ebf158b3a05247b99bc60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162590a80b38964c1d397f52331ab38e48558f28876ebf158b3a05247b99bc60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162590a80b38964c1d397f52331ab38e48558f28876ebf158b3a05247b99bc60/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
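The 0x7fffffff in the xfs remount messages above is the 32-bit signed time_t limit; the kernel is noting that these mounts only support timestamps up to:

    import datetime
    # 0x7fffffff seconds after the Unix epoch is the "year 2038" boundary.
    print(datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc))
    # -> 2038-01-19 03:14:07+00:00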
Nov 29 03:18:25 np0005539550 podman[325247]: 2025-11-29 08:18:25.186804576 +0000 UTC m=+0.025746856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:25 np0005539550 podman[325247]: 2025-11-29 08:18:25.291242711 +0000 UTC m=+0.130185001 container init 39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:18:25 np0005539550 podman[325247]: 2025-11-29 08:18:25.299376468 +0000 UTC m=+0.138318728 container start 39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:18:25 np0005539550 podman[325247]: 2025-11-29 08:18:25.302660191 +0000 UTC m=+0.141602491 container attach 39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 29 03:18:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:25.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 325 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.5 KiB/s wr, 143 op/s
Nov 29 03:18:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:25 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:18:25 np0005539550 nova_compute[257631]: 2025-11-29 08:18:25.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:25 np0005539550 nova_compute[257631]: 2025-11-29 08:18:25.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:18:25 np0005539550 nova_compute[257631]: 2025-11-29 08:18:25.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:18:25 np0005539550 nova_compute[257631]: 2025-11-29 08:18:25.938 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:18:26 np0005539550 vigorous_matsumoto[325263]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:18:26 np0005539550 vigorous_matsumoto[325263]: --> relative data size: 1.0
Nov 29 03:18:26 np0005539550 vigorous_matsumoto[325263]: --> All data devices are unavailable
Nov 29 03:18:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:26 np0005539550 systemd[1]: libpod-39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947.scope: Deactivated successfully.
Nov 29 03:18:26 np0005539550 conmon[325263]: conmon 39a93fb6d35a75a84b4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947.scope/container/memory.events
Nov 29 03:18:26 np0005539550 podman[325247]: 2025-11-29 08:18:26.170534853 +0000 UTC m=+1.009477113 container died 39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:18:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-162590a80b38964c1d397f52331ab38e48558f28876ebf158b3a05247b99bc60-merged.mount: Deactivated successfully.
Nov 29 03:18:26 np0005539550 podman[325247]: 2025-11-29 08:18:26.227560313 +0000 UTC m=+1.066502573 container remove 39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_matsumoto, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:18:26 np0005539550 systemd[1]: libpod-conmon-39a93fb6d35a75a84b4c89b0a84d1695da9a2aec1b459ceaebca2ea78f2eb947.scope: Deactivated successfully.
Nov 29 03:18:26 np0005539550 podman[325433]: 2025-11-29 08:18:26.809032695 +0000 UTC m=+0.039938996 container create 6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:18:26 np0005539550 systemd[1]: Started libpod-conmon-6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6.scope.
Nov 29 03:18:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:26 np0005539550 podman[325433]: 2025-11-29 08:18:26.793041489 +0000 UTC m=+0.023947810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:26 np0005539550 podman[325433]: 2025-11-29 08:18:26.891398079 +0000 UTC m=+0.122304400 container init 6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:18:26 np0005539550 podman[325433]: 2025-11-29 08:18:26.897414072 +0000 UTC m=+0.128320373 container start 6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_snyder, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:18:26 np0005539550 wonderful_snyder[325449]: 167 167
Nov 29 03:18:26 np0005539550 podman[325433]: 2025-11-29 08:18:26.9012721 +0000 UTC m=+0.132178421 container attach 6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:18:26 np0005539550 systemd[1]: libpod-6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6.scope: Deactivated successfully.
Nov 29 03:18:26 np0005539550 podman[325433]: 2025-11-29 08:18:26.902980134 +0000 UTC m=+0.133886435 container died 6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_snyder, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:18:26 np0005539550 nova_compute[257631]: 2025-11-29 08:18:26.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:18:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6df9ba412f6b5afb423f5961398d7b575f365aa169e1f7cccd1921e2effc8c47-merged.mount: Deactivated successfully.
Nov 29 03:18:26 np0005539550 nova_compute[257631]: 2025-11-29 08:18:26.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:26 np0005539550 nova_compute[257631]: 2025-11-29 08:18:26.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:26 np0005539550 nova_compute[257631]: 2025-11-29 08:18:26.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:26 np0005539550 nova_compute[257631]: 2025-11-29 08:18:26.946 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:18:26 np0005539550 nova_compute[257631]: 2025-11-29 08:18:26.946 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:26 np0005539550 podman[325433]: 2025-11-29 08:18:26.952850301 +0000 UTC m=+0.183756602 container remove 6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:18:26 np0005539550 systemd[1]: libpod-conmon-6e7734eee660b965a17b76cbddf52948163f66c3d51723be85a00055918c6ec6.scope: Deactivated successfully.
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.059 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:18:27 np0005539550 podman[325475]: 2025-11-29 08:18:27.130748784 +0000 UTC m=+0.043337603 container create 53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:18:27 np0005539550 systemd[1]: Started libpod-conmon-53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6.scope.
Nov 29 03:18:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae1eb4b45fa653b90fe44cad73aa17296f2b4ae1f9b8ab2fdb8e26935b51b82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae1eb4b45fa653b90fe44cad73aa17296f2b4ae1f9b8ab2fdb8e26935b51b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae1eb4b45fa653b90fe44cad73aa17296f2b4ae1f9b8ab2fdb8e26935b51b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:27 np0005539550 podman[325475]: 2025-11-29 08:18:27.113187087 +0000 UTC m=+0.025775926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:27 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ae1eb4b45fa653b90fe44cad73aa17296f2b4ae1f9b8ab2fdb8e26935b51b82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:27 np0005539550 podman[325475]: 2025-11-29 08:18:27.225371979 +0000 UTC m=+0.137960828 container init 53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_liskov, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:18:27 np0005539550 podman[325475]: 2025-11-29 08:18:27.231954527 +0000 UTC m=+0.144543346 container start 53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:27 np0005539550 podman[325475]: 2025-11-29 08:18:27.235839065 +0000 UTC m=+0.148427914 container attach 53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:18:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:27.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
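
The three radosgw lines above are the recurring probe pattern in this capture: an open marker, a close marker with op/http status, and a beast access-log line carrying client IP, verb, status, and latency. A minimal parser for the beast line, assuming the exact field layout shown here (field names are mine):

    import re

    # Matches beast access-log lines as radosgw emits them in this capture.
    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s')

    def parse_beast(line):
        """Return the fields of one beast access-log line, or None."""
        m = BEAST_RE.search(line)
        return m.groupdict() if m else None

    sample = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
              '[29/Nov/2025:08:18:27.385 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.001000025s')
    print(parse_beast(sample)["ip"])  # -> 192.168.122.100
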
Nov 29 03:18:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 339 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 656 KiB/s wr, 149 op/s
Nov 29 03:18:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272344465' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.462 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
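
The audit entry on the mon and the CMD line from nova_compute are two ends of the same call: nova's RBD image backend shells out to ceph df with the client.openstack identity to learn pool capacity, here taking 0.516s. A minimal reproduction of that call (same flags as logged; the stats key names are assumed from current ceph df JSON output and may vary by release):

    import json
    import subprocess

    # The exact command nova_compute logged above; needs the openstack keyring.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    df = json.loads(out)

    # Cluster-wide totals; key names assumed, not taken from this log.
    stats = df["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.1f} GiB free '
          f'of {stats["total_bytes"] / 2**30:.1f} GiB')
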
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.642 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.645 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4407MB free_disk=20.897159576416016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.645 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.646 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.796 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.797 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.813 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:27 np0005539550 nova_compute[257631]: 2025-11-29 08:18:27.932 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]: {
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:    "0": [
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:        {
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "devices": [
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "/dev/loop3"
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            ],
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "lv_name": "ceph_lv0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "lv_size": "7511998464",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "name": "ceph_lv0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "tags": {
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.cluster_name": "ceph",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.crush_device_class": "",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.encrypted": "0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.osd_id": "0",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.type": "block",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:                "ceph.vdo": "0"
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            },
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "type": "block",
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:            "vg_name": "ceph_vg0"
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:        }
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]:    ]
Nov 29 03:18:28 np0005539550 trusting_liskov[325509]: }
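
This JSON, printed by the short-lived trusting_liskov container, is the per-LV OSD listing cephadm gathers via ceph-volume; note that the flat lv_tags string and the tags object carry identical key=value pairs. A sketch of that flat-to-dict conversion (hypothetical helper; ceph-volume performs the equivalent internally):

    def parse_lv_tags(lv_tags):
        """Split ceph-volume's comma-separated key=value LV tag string."""
        tags = {}
        for pair in lv_tags.split(","):
            key, _, value = pair.partition("=")  # tolerates empty values
            tags[key] = value
        return tags

    lv_tags = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.cluster_name=ceph,"
               "ceph.cephx_lockbox_secret=,ceph.osd_id=0,ceph.type=block")
    print(parse_lv_tags(lv_tags)["ceph.osd_id"])  # -> 0
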
Nov 29 03:18:28 np0005539550 systemd[1]: libpod-53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6.scope: Deactivated successfully.
Nov 29 03:18:28 np0005539550 podman[325475]: 2025-11-29 08:18:28.059168446 +0000 UTC m=+0.971757285 container died 53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9ae1eb4b45fa653b90fe44cad73aa17296f2b4ae1f9b8ab2fdb8e26935b51b82-merged.mount: Deactivated successfully.
Nov 29 03:18:28 np0005539550 podman[325475]: 2025-11-29 08:18:28.122704821 +0000 UTC m=+1.035293640 container remove 53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_liskov, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:18:28 np0005539550 systemd[1]: libpod-conmon-53e8ed333cedf5340d351d0b399e67ae3a011a52ba377ef181251a70d7f8c7a6.scope: Deactivated successfully.
Nov 29 03:18:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:28.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147257842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:28 np0005539550 nova_compute[257631]: 2025-11-29 08:18:28.281 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:28 np0005539550 nova_compute[257631]: 2025-11-29 08:18:28.289 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:28 np0005539550 nova_compute[257631]: 2025-11-29 08:18:28.320 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:18:28 np0005539550 nova_compute[257631]: 2025-11-29 08:18:28.348 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:18:28 np0005539550 nova_compute[257631]: 2025-11-29 08:18:28.348 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:28 np0005539550 podman[325699]: 2025-11-29 08:18:28.796255104 +0000 UTC m=+0.035410971 container create c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_turing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:28 np0005539550 systemd[1]: Started libpod-conmon-c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780.scope.
Nov 29 03:18:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:28 np0005539550 podman[325699]: 2025-11-29 08:18:28.869789844 +0000 UTC m=+0.108945741 container init c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_turing, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:28 np0005539550 podman[325699]: 2025-11-29 08:18:28.782212897 +0000 UTC m=+0.021368794 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:28 np0005539550 podman[325699]: 2025-11-29 08:18:28.878237869 +0000 UTC m=+0.117393736 container start c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:18:28 np0005539550 podman[325699]: 2025-11-29 08:18:28.881971914 +0000 UTC m=+0.121127801 container attach c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_turing, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:28 np0005539550 jovial_turing[325715]: 167 167
Nov 29 03:18:28 np0005539550 systemd[1]: libpod-c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780.scope: Deactivated successfully.
Nov 29 03:18:28 np0005539550 conmon[325715]: conmon c141ec615825a6bc095c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780.scope/container/memory.events
Nov 29 03:18:28 np0005539550 podman[325699]: 2025-11-29 08:18:28.88378417 +0000 UTC m=+0.122940037 container died c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1fc215808d76bae773867ab03dd31412bf1a6cacb0c89d1630b5bde062ce4269-merged.mount: Deactivated successfully.
Nov 29 03:18:28 np0005539550 podman[325699]: 2025-11-29 08:18:28.920301618 +0000 UTC m=+0.159457485 container remove c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:18:28 np0005539550 systemd[1]: libpod-conmon-c141ec615825a6bc095cbdc176f40e38b2dff423602f044b758d30db629f7780.scope: Deactivated successfully.
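
jovial_turing ran for well under a second and printed only 167 167 before being torn down, which looks like cephadm's uid/gid probe: a one-shot container launched from the ceph image to learn which uid/gid owns the ceph directories (167:167 in these images). A hedged equivalent; the probed path and the stat invocation are assumptions, only the image digest is taken from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container reporting the owner uid/gid of a ceph-owned path.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    uid, gid = out.split()
    print(uid, gid)  # expected: 167 167
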
Nov 29 03:18:29 np0005539550 podman[325739]: 2025-11-29 08:18:29.082517272 +0000 UTC m=+0.046625837 container create 7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:29 np0005539550 systemd[1]: Started libpod-conmon-7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df.scope.
Nov 29 03:18:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee321bff65bde4f66054b53fd661e62e8d0e236320bf781be03b21a4c7d8285b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:29 np0005539550 podman[325739]: 2025-11-29 08:18:29.064645997 +0000 UTC m=+0.028754592 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:18:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee321bff65bde4f66054b53fd661e62e8d0e236320bf781be03b21a4c7d8285b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee321bff65bde4f66054b53fd661e62e8d0e236320bf781be03b21a4c7d8285b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee321bff65bde4f66054b53fd661e62e8d0e236320bf781be03b21a4c7d8285b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:29 np0005539550 podman[325739]: 2025-11-29 08:18:29.169273587 +0000 UTC m=+0.133382162 container init 7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:18:29 np0005539550 podman[325739]: 2025-11-29 08:18:29.179838906 +0000 UTC m=+0.143947471 container start 7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:29 np0005539550 podman[325739]: 2025-11-29 08:18:29.183379836 +0000 UTC m=+0.147488421 container attach 7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:18:29 np0005539550 nova_compute[257631]: 2025-11-29 08:18:29.349 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:29 np0005539550 nova_compute[257631]: 2025-11-29 08:18:29.352 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:29 np0005539550 nova_compute[257631]: 2025-11-29 08:18:29.352 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:29 np0005539550 nova_compute[257631]: 2025-11-29 08:18:29.352 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:29.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 372 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.5 MiB/s wr, 167 op/s
Nov 29 03:18:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 29 03:18:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 29 03:18:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]: {
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]:        "osd_id": 0,
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]:        "type": "bluestore"
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]:    }
Nov 29 03:18:30 np0005539550 dreamy_torvalds[325755]: }
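
The second probe container, dreamy_torvalds, returns the bluestore device listing keyed by OSD fsid; the shape matches ceph-volume's raw/JSON listings, though the exact subcommand isn't in the log. The data below is copied verbatim from the output above; the join on osd_id is the kind of mapping cephadm's inventory consumes:

    import json

    raw = json.loads("""{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }""")

    # Map osd_id -> backing device.
    by_osd = {entry["osd_id"]: entry["device"] for entry in raw.values()}
    print(by_osd)  # -> {0: '/dev/mapper/ceph_vg0-ceph_lv0'}
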
Nov 29 03:18:30 np0005539550 systemd[1]: libpod-7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df.scope: Deactivated successfully.
Nov 29 03:18:30 np0005539550 podman[325739]: 2025-11-29 08:18:30.082322608 +0000 UTC m=+1.046431173 container died 7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:18:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ee321bff65bde4f66054b53fd661e62e8d0e236320bf781be03b21a4c7d8285b-merged.mount: Deactivated successfully.
Nov 29 03:18:30 np0005539550 podman[325739]: 2025-11-29 08:18:30.136238988 +0000 UTC m=+1.100347553 container remove 7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:18:30 np0005539550 systemd[1]: libpod-conmon-7ae247db64395ec8dcdc96f70adb7949851068b55fc0b8de5cc5fca0786fb3df.scope: Deactivated successfully.
Nov 29 03:18:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:30.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:18:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:18:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:30 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 42b98552-1da8-4d18-b631-fe1224a65595 does not exist
Nov 29 03:18:30 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 476c88b4-f95a-4464-9e52-82329f64a35f does not exist
Nov 29 03:18:30 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0515a874-5e8d-422e-ac60-4300a038b77c does not exist
Nov 29 03:18:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:18:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
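
The two config-key set commands from mgr.compute-0.pdhsqi are the cephadm module persisting the device inventory it just collected (the probe containers above) under mgr/cephadm/host.compute-0*. Those keys can be read back for inspection; a minimal sketch using the key exactly as logged (admin privileges assumed; that the stored value is JSON is also an assumption):

    import json
    import subprocess

    key = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(["ceph", "config-key", "get", key],
                         capture_output=True, text=True, check=True).stdout
    devices = json.loads(out)  # assumption: value is a JSON document
    print(json.dumps(devices, indent=2)[:200])
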
Nov 29 03:18:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:31.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 369 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 180 op/s
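
pgmap v2265 briefly shows snapshot trimming in flight: 2 PGs in active+clean+snaptrim, 4 queued behind them in snaptrim_wait, the rest plain active+clean (all 305 are back to active+clean by v2268 below). A small parser for the state breakdown in these pgmap lines, assuming the layout stays as printed here:

    import re

    def pg_states(pgmap_line):
        """Parse '305 pgs: 2 active+clean+snaptrim, ...' into {state: count}."""
        _, _, rest = pgmap_line.partition(" pgs: ")
        breakdown = rest.split(";")[0]
        return {state: int(n)
                for n, state in re.findall(r"(\d+) ([a-z_+]+)", breakdown)}

    line = ("pgmap v2265: 305 pgs: 2 active+clean+snaptrim, "
            "4 active+clean+snaptrim_wait, 299 active+clean; 369 MiB data")
    print(pg_states(line))
    # -> {'active+clean+snaptrim': 2, 'active+clean+snaptrim_wait': 4,
    #     'active+clean': 299}
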
Nov 29 03:18:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:31 np0005539550 nova_compute[257631]: 2025-11-29 08:18:31.917 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:31 np0005539550 nova_compute[257631]: 2025-11-29 08:18:31.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:31 np0005539550 nova_compute[257631]: 2025-11-29 08:18:31.918 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:18:32 np0005539550 nova_compute[257631]: 2025-11-29 08:18:32.060 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:32.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:32 np0005539550 nova_compute[257631]: 2025-11-29 08:18:32.867 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404297.8670595, 75bc7b63-d45e-41a1-893d-ff78f18861f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:32 np0005539550 nova_compute[257631]: 2025-11-29 08:18:32.868 257641 INFO nova.compute.manager [-] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:18:32 np0005539550 nova_compute[257631]: 2025-11-29 08:18:32.899 257641 DEBUG nova.compute.manager [None req-6fc5c86e-2c10-47b2-bfc7-1d36819cf4dd - - - - - -] [instance: 75bc7b63-d45e-41a1-893d-ff78f18861f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:32 np0005539550 nova_compute[257631]: 2025-11-29 08:18:32.935 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:33.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:18:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 355 MiB data, 999 MiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.7 MiB/s wr, 212 op/s
Nov 29 03:18:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 29 03:18:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 29 03:18:33 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 29 03:18:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:34.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:35.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 314 MiB data, 977 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.9 MiB/s wr, 276 op/s
Nov 29 03:18:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:36.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Nov 29 03:18:36 np0005539550 podman[325867]: 2025-11-29 08:18:36.896010076 +0000 UTC m=+0.105646017 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125)
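
The ovn_controller record is podman's periodic healthcheck result (health_status=healthy, failing streak 0); per the embedded config_data, the check is simply the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_controller. The same probe can be fired on demand, using the container name from the log:

    import subprocess

    # Runs the container's configured healthcheck once; exit 0 means healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")
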
Nov 29 03:18:36 np0005539550 nova_compute[257631]: 2025-11-29 08:18:36.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:37 np0005539550 nova_compute[257631]: 2025-11-29 08:18:37.062 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Nov 29 03:18:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Nov 29 03:18:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:37.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 287 MiB data, 963 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.2 MiB/s wr, 247 op/s
Nov 29 03:18:37 np0005539550 nova_compute[257631]: 2025-11-29 08:18:37.937 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:38.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 279 MiB data, 962 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 41 KiB/s wr, 174 op/s
Nov 29 03:18:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:39.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:40.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 235 MiB data, 937 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 56 KiB/s wr, 132 op/s
Nov 29 03:18:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:41.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Nov 29 03:18:42 np0005539550 nova_compute[257631]: 2025-11-29 08:18:42.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:42.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Nov 29 03:18:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Nov 29 03:18:42 np0005539550 nova_compute[257631]: 2025-11-29 08:18:42.939 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 202 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 42 KiB/s wr, 71 op/s
Nov 29 03:18:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:43.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:43 np0005539550 nova_compute[257631]: 2025-11-29 08:18:43.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:18:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:44.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 202 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 61 op/s
Nov 29 03:18:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:18:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:45.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:18:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:46.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.486 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.487 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.500 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.602 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.603 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.610 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.611 257641 INFO nova.compute.claims [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:18:46 np0005539550 nova_compute[257631]: 2025-11-29 08:18:46.781 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.105 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:18:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944244309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.394 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.402 257641 DEBUG nova.compute.provider_tree [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:18:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 202 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 34 KiB/s wr, 52 op/s
Nov 29 03:18:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:47.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.433 257641 DEBUG nova.scheduler.client.report [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
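
From the inventory nova just reported, the schedulable capacity per resource class follows from Placement's usual rule, usable = (total - reserved) * allocation_ratio (assumed to apply unchanged here): 32 VCPUs, 7168 MB of RAM, and 17.1 GB of disk. Worked out with the exact numbers from the log:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
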
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.480 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.481 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.539 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.540 257641 DEBUG nova.network.neutron [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.562 257641 INFO nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.583 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.722 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.724 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.724 257641 INFO nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Creating image(s)#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.753 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.781 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.808 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.812 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.848 257641 DEBUG nova.policy [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c2d81b2cb6a140dc94e2c0965d4219cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '742c4de6c1314ca1aeaf3f5666a16967', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.879 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.880 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.881 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.881 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.911 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.915 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:47 np0005539550 nova_compute[257631]: 2025-11-29 08:18:47.941 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:48.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:48 np0005539550 nova_compute[257631]: 2025-11-29 08:18:48.215 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:48 np0005539550 nova_compute[257631]: 2025-11-29 08:18:48.278 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] resizing rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:18:48 np0005539550 nova_compute[257631]: 2025-11-29 08:18:48.727 257641 DEBUG nova.network.neutron [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Successfully created port: 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.197 257641 DEBUG nova.objects.instance [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lazy-loading 'migration_context' on Instance uuid a258fb8b-4163-4a1d-b0fd-4006ed31fd0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.217 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.218 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Ensure instance console log exists: /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.219 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.219 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.219 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:18:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 172 MiB data, 899 MiB used, 20 GiB / 21 GiB avail; 789 KiB/s rd, 18 KiB/s wr, 98 op/s
Nov 29 03:18:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:49.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.682 257641 DEBUG nova.network.neutron [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Successfully updated port: 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.701 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.702 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquired lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.702 257641 DEBUG nova.network.neutron [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.838 257641 DEBUG nova.compute.manager [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-changed-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.838 257641 DEBUG nova.compute.manager [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Refreshing instance network info cache due to event network-changed-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.839 257641 DEBUG oslo_concurrency.lockutils [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:18:49 np0005539550 nova_compute[257631]: 2025-11-29 08:18:49.904 257641 DEBUG nova.network.neutron [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:18:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:50.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.948 257641 DEBUG nova.network.neutron [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Updating instance_info_cache with network_info: [{"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.965 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Releasing lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.966 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance network_info: |[{"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.966 257641 DEBUG oslo_concurrency.lockutils [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.967 257641 DEBUG nova.network.neutron [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Refreshing network info cache for port 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.971 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Start _get_guest_xml network_info=[{"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.976 257641 WARNING nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.980 257641 DEBUG nova.virt.libvirt.host [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.981 257641 DEBUG nova.virt.libvirt.host [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.987 257641 DEBUG nova.virt.libvirt.host [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.988 257641 DEBUG nova.virt.libvirt.host [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.989 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.989 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.990 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.990 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.990 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.990 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.990 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.991 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.991 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.991 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.992 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.992 257641 DEBUG nova.virt.hardware [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:18:50 np0005539550 nova_compute[257631]: 2025-11-29 08:18:50.995 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 166 MiB data, 888 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 178 op/s
Nov 29 03:18:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:51.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381667580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.452 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.480 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.485 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:18:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1054491824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.914 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.918 257641 DEBUG nova.virt.libvirt.vif [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-292121147',display_name='tempest-InstanceActionsTestJSON-server-292121147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-292121147',id=110,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='742c4de6c1314ca1aeaf3f5666a16967',ramdisk_id='',reservation_id='r-qo0toh0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-2049339947',owner_user_name='tempest-InstanceActionsTestJSON-2049339947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:47Z,user_data=None,user_id='c2d81b2cb6a140dc94e2c0965d4219cb',uuid=a258fb8b-4163-4a1d-b0fd-4006ed31fd0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.919 257641 DEBUG nova.network.os_vif_util [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converting VIF {"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.921 257641 DEBUG nova.network.os_vif_util [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.926 257641 DEBUG nova.objects.instance [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lazy-loading 'pci_devices' on Instance uuid a258fb8b-4163-4a1d-b0fd-4006ed31fd0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.930 257641 DEBUG nova.network.neutron [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Updated VIF entry in instance network info cache for port 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.931 257641 DEBUG nova.network.neutron [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Updating instance_info_cache with network_info: [{"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.957 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <uuid>a258fb8b-4163-4a1d-b0fd-4006ed31fd0a</uuid>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <name>instance-0000006e</name>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <nova:name>tempest-InstanceActionsTestJSON-server-292121147</nova:name>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:18:50</nova:creationTime>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:user uuid="c2d81b2cb6a140dc94e2c0965d4219cb">tempest-InstanceActionsTestJSON-2049339947-project-member</nova:user>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:project uuid="742c4de6c1314ca1aeaf3f5666a16967">tempest-InstanceActionsTestJSON-2049339947</nova:project>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <nova:port uuid="86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <entry name="serial">a258fb8b-4163-4a1d-b0fd-4006ed31fd0a</entry>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <entry name="uuid">a258fb8b-4163-4a1d-b0fd-4006ed31fd0a</entry>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk.config">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:57:e6:36"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <target dev="tap86a9f0a7-7d"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/console.log" append="off"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:18:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:18:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:18:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:18:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.959 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Preparing to wait for external event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.960 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.960 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.960 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
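The three lockutils lines above show nova serializing access to its per-instance event table with a named lock, "<uuid>-events", acquired and released around _create_or_get_event. A minimal sketch of the same oslo.concurrency pattern; the lock name is copied from the log and the critical section is a placeholder:

    from oslo_concurrency import lockutils

    UUID = "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a"

    # Named locks serialize threads touching the same instance's event
    # table; the body below stands in for _create_or_get_event.
    with lockutils.lock(f"{UUID}-events"):
        pass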
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.961 257641 DEBUG nova.virt.libvirt.vif [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-292121147',display_name='tempest-InstanceActionsTestJSON-server-292121147',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-292121147',id=110,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='742c4de6c1314ca1aeaf3f5666a16967',ramdisk_id='',reservation_id='r-qo0toh0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-2049339947',owner_user_name='tempest-InstanceActionsTestJSON-2049339947-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:18:47Z,user_data=None,user_id='c2d81b2cb6a140dc94e2c0965d4219cb',uuid=a258fb8b-4163-4a1d-b0fd-4006ed31fd0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.961 257641 DEBUG nova.network.os_vif_util [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converting VIF {"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.962 257641 DEBUG nova.network.os_vif_util [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.962 257641 DEBUG os_vif [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.963 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.963 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.964 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.965 257641 DEBUG oslo_concurrency.lockutils [req-6644badd-a728-4bff-bf03-7d04b2dafeb0 req-f34adcc9-e69f-495c-b265-cf23808c6f17 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.967 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.967 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86a9f0a7-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.968 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86a9f0a7-7d, col_values=(('external_ids', {'iface-id': '86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:e6:36', 'vm-uuid': 'a258fb8b-4163-4a1d-b0fd-4006ed31fd0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.969 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:51 np0005539550 NetworkManager[49039]: <info>  [1764404331.9706] manager: (tap86a9f0a7-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/192)
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.972 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.977 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:51 np0005539550 nova_compute[257631]: 2025-11-29 08:18:51.978 257641 INFO os_vif [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d')#033[00m
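The plug went through the OVSDB IDL: an AddBridgeCommand that was a no-op (br-int already existed), then an AddPortCommand plus a DbSetCommand writing the Neutron port ID, MAC, and instance UUID into the interface's external_ids. A rough command-line equivalent of those transactions, shelled out from Python (a sketch, not how os-vif itself does it):

    import subprocess

    port = "tap86a9f0a7-7d"
    iface_id = "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3"
    mac = "fa:16:3e:57:e6:36"
    vm_uuid = "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a"

    # --may-exist mirrors may_exist=True in the logged commands.
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int"], check=True)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )

The iface-id value is what ovn-controller matches against Southbound Port_Binding rows when it claims the port moments later.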
Nov 29 03:18:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.029 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.029 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.030 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] No VIF found with MAC fa:16:3e:57:e6:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.030 257641 INFO nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Using config drive#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.113 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.119 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:52.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
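The radosgw "beast" access lines record an anonymous HEAD / answered with 200 at effectively zero latency, the signature of a load-balancer health probe rather than a user request. A roughly equivalent probe using only the standard library (host and port are hypothetical; the log records only the probing client's address):

    import http.client

    # Hypothetical radosgw frontend address; the beast log does not
    # show the listening port.
    conn = http.client.HTTPConnection("rgw.example.com", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 while radosgw is healthy
    conn.close()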
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.449 257641 INFO nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Creating config drive at /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/disk.config#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.456 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7xqdz91 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.588 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7xqdz91" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.615 257641 DEBUG nova.storage.rbd_utils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] rbd image a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.619 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/disk.config a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.767 257641 DEBUG oslo_concurrency.processutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/disk.config a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.768 257641 INFO nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Deleting local config drive /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/disk.config because it was imported into RBD.#033[00m
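The config-drive sequence above, condensed: mkisofs builds an ISO9660 volume labelled config-2 from a temporary metadata directory, rbd import copies it into the Ceph vms pool as <uuid>_disk.config, and the local file is deleted once imported. A sketch with the commands taken from the log (the staging directory is hypothetical; nova used a tmpdir):

    import os
    import subprocess

    inst = "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # Build the ISO9660 config drive (volume label config-2).
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/metadata-dir"],  # hypothetical staging directory
        check=True,
    )
    # Import into the Ceph "vms" pool, then drop the local copy.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(iso)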
Nov 29 03:18:52 np0005539550 kernel: tap86a9f0a7-7d: entered promiscuous mode
Nov 29 03:18:52 np0005539550 NetworkManager[49039]: <info>  [1764404332.8291] manager: (tap86a9f0a7-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/193)
Nov 29 03:18:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:52Z|00434|binding|INFO|Claiming lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for this chassis.
Nov 29 03:18:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:52Z|00435|binding|INFO|86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3: Claiming fa:16:3e:57:e6:36 10.100.0.11
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.830 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.834 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.843 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:e6:36 10.100.0.11'], port_security=['fa:16:3e:57:e6:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a258fb8b-4163-4a1d-b0fd-4006ed31fd0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a2139732-1006-47ec-ac7a-1fc524679243', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '742c4de6c1314ca1aeaf3f5666a16967', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3580c5f6-234a-4467-9b1b-884cdcfc8244', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39238d15-52ac-4b26-9f2b-82be3c84cd26, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.844 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 in datapath a2139732-1006-47ec-ac7a-1fc524679243 bound to our chassis#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.845 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a2139732-1006-47ec-ac7a-1fc524679243#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.857 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c48e3160-2ddd-450e-afc0-d0872f1e706a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.858 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa2139732-11 in ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.860 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa2139732-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.860 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[55dc46bd-cdda-4b64-bdd6-86a9c38d5baa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.861 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[240b5b63-245e-455e-9faf-c02806912f19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 systemd-udevd[326249]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:52 np0005539550 systemd-machined[216673]: New machine qemu-52-instance-0000006e.
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.871 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[81e5c2f3-4918-40ec-a187-11d596de4291]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 NetworkManager[49039]: <info>  [1764404332.8816] device (tap86a9f0a7-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:52 np0005539550 NetworkManager[49039]: <info>  [1764404332.8833] device (tap86a9f0a7-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:52 np0005539550 systemd[1]: Started Virtual Machine qemu-52-instance-0000006e.
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.900 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[750d8c59-73be-4a63-9cf3-0fde6f0304ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:52Z|00436|binding|INFO|Setting lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 ovn-installed in OVS
Nov 29 03:18:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:52Z|00437|binding|INFO|Setting lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 up in Southbound
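ovn-controller has now claimed the logical port for this chassis and flipped it up in the Southbound database. A quick read-only check of that binding, shelled out to ovn-sbctl (assumes the SB database is reachable from where this runs):

    import subprocess

    lport = "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3"
    # Shows which chassis holds the binding and whether the port is up.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         f"logical_port={lport}"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out)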
Nov 29 03:18:52 np0005539550 nova_compute[257631]: 2025-11-29 08:18:52.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.930 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e35c1957-c25f-49b0-9637-f67ea569c5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 NetworkManager[49039]: <info>  [1764404332.9366] manager: (tapa2139732-10): new Veth device (/org/freedesktop/NetworkManager/Devices/194)
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.935 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ef47406d-7f03-4666-ad1d-031060f9fd13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.969 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[58fc56c1-2b87-43d5-9068-e0aa0b5505a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.972 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[238585a0-69ff-4f6f-b311-0e3a54bb80f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:52 np0005539550 NetworkManager[49039]: <info>  [1764404332.9934] device (tapa2139732-10): carrier: link connected
Nov 29 03:18:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:52.998 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[72dfe384-58d1-4a78-9152-6de09048ccb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.014 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cf9844e2-0555-4112-b384-0e21c27f7fcb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa2139732-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:db:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738065, 'reachable_time': 37849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326281, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.028 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7510b33e-25fb-4058-9d25-de82d999d3bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:db48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738065, 'tstamp': 738065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326282, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.053 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[16c5e6f7-c74b-4de4-b70b-5cc3198263c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa2139732-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:db:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738065, 'reachable_time': 37849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326283, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
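The two large privsep replies are pyroute2 netlink dumps (RTM_NEWLINK) for the veth end tapa2139732-11, taken inside the ovnmeta- namespace; they confirm the link is up with MAC fa:16:3e:2f:db:48 and MTU 1500. Roughly the same view without going through privsep (needs root; the namespace name is copied from the log):

    from pyroute2 import NetNS

    ns = NetNS("ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243")
    try:
        for link in ns.get_links():
            # e.g. tapa2139732-11 fa:16:3e:2f:db:48 up
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_ADDRESS"),
                  link["state"])
    finally:
        ns.close()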
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.080 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[11df91c5-1cd3-41e6-9a23-e00773aa017a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.140 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb06ddd-558f-48ca-8fd0-97ee628464bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.141 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2139732-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.142 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.142 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2139732-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.144 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:53 np0005539550 kernel: tapa2139732-10: entered promiscuous mode
Nov 29 03:18:53 np0005539550 NetworkManager[49039]: <info>  [1764404333.1465] manager: (tapa2139732-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.148 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.149 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa2139732-10, col_values=(('external_ids', {'iface-id': 'eb43f9e1-bffe-4017-ae54-6ff13736ca49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.150 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:53Z|00438|binding|INFO|Releasing lport eb43f9e1-bffe-4017-ae54-6ff13736ca49 from this chassis (sb_readonly=0)
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.151 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a2139732-1006-47ec-ac7a-1fc524679243.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a2139732-1006-47ec-ac7a-1fc524679243.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.152 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb5fb99-9139-46dd-8ef3-a03d527e6be7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.152 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-a2139732-1006-47ec-ac7a-1fc524679243
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/a2139732-1006-47ec-ac7a-1fc524679243.pid.haproxy
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID a2139732-1006-47ec-ac7a-1fc524679243
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:18:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:53.153 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'env', 'PROCESS_TAG=haproxy-a2139732-1006-47ec-ac7a-1fc524679243', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a2139732-1006-47ec-ac7a-1fc524679243.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
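The rendered haproxy configuration binds 169.254.169.254:80 inside the namespace, forwards to the metadata UNIX socket, and adds an X-OVN-Network-ID header so the metadata service can tell networks apart. The rootwrap invocation above boils down to running haproxy in the namespace against that file; a bare sketch (needs root; paths copied from the log):

    import subprocess

    netns = "ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243"
    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "a2139732-1006-47ec-ac7a-1fc524679243.conf")

    # haproxy daemonizes itself ("daemon" in the config above).
    subprocess.run(["ip", "netns", "exec", netns, "haproxy", "-f", cfg],
                   check=True)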
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.165 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.299 257641 DEBUG nova.compute.manager [req-8bb93845-e560-4063-b351-0b63e4189900 req-272cdf1b-6c74-42e4-886a-4e3aba875b0b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.300 257641 DEBUG oslo_concurrency.lockutils [req-8bb93845-e560-4063-b351-0b63e4189900 req-272cdf1b-6c74-42e4-886a-4e3aba875b0b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.300 257641 DEBUG oslo_concurrency.lockutils [req-8bb93845-e560-4063-b351-0b63e4189900 req-272cdf1b-6c74-42e4-886a-4e3aba875b0b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.301 257641 DEBUG oslo_concurrency.lockutils [req-8bb93845-e560-4063-b351-0b63e4189900 req-272cdf1b-6c74-42e4-886a-4e3aba875b0b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.301 257641 DEBUG nova.compute.manager [req-8bb93845-e560-4063-b351-0b63e4189900 req-272cdf1b-6c74-42e4-886a-4e3aba875b0b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Processing event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.339 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.341 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404333.3391092, a258fb8b-4163-4a1d-b0fd-4006ed31fd0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.341 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.345 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.350 257641 INFO nova.virt.libvirt.driver [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance spawned successfully.#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.352 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.365 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.370 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.378 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.379 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.379 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.380 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.380 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.381 257641 DEBUG nova.virt.libvirt.driver [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
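The six "Found default" lines record the image-property defaults nova chose for this q35 guest; registering them pins the device models so later lifecycle operations keep using the same buses. Collected as data for reference:

    # Defaults registered for instance a258fb8b-..., per the log above.
    REGISTERED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }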
Nov 29 03:18:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 169 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 184 op/s
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.409 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.410 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404333.3404455, a258fb8b-4163-4a1d-b0fd-4006ed31fd0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.410 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:18:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:53.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.469 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.475 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404333.3440995, a258fb8b-4163-4a1d-b0fd-4006ed31fd0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.475 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.497 257641 INFO nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Took 5.77 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.497 257641 DEBUG nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:53 np0005539550 podman[326358]: 2025-11-29 08:18:53.540250941 +0000 UTC m=+0.053898012 container create 06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.547 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.555 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:53 np0005539550 systemd[1]: Started libpod-conmon-06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3.scope.
Nov 29 03:18:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:53 np0005539550 podman[326358]: 2025-11-29 08:18:53.511410367 +0000 UTC m=+0.025057458 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:18:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297062b6e4e018cc9c8aca6ad2a38e590bb7e348e6889968a08eb1c1fefb5cbb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:53 np0005539550 podman[326358]: 2025-11-29 08:18:53.629338267 +0000 UTC m=+0.142985348 container init 06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.636 257641 INFO nova.compute.manager [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Took 7.06 seconds to build instance.#033[00m
Nov 29 03:18:53 np0005539550 podman[326358]: 2025-11-29 08:18:53.637771131 +0000 UTC m=+0.151418192 container start 06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:18:53 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [NOTICE]   (326408) : New worker (326415) forked
Nov 29 03:18:53 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [NOTICE]   (326408) : Loading success.
Nov 29 03:18:53 np0005539550 nova_compute[257631]: 2025-11-29 08:18:53.663 257641 DEBUG oslo_concurrency.lockutils [None req-9416de8e-cd9e-4969-a267-8a9199882f3f c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:53 np0005539550 podman[326371]: 2025-11-29 08:18:53.667151608 +0000 UTC m=+0.086239964 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Nov 29 03:18:53 np0005539550 podman[326374]: 2025-11-29 08:18:53.677369918 +0000 UTC m=+0.094923265 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:18:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:54.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 169 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 205 op/s
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.413 257641 DEBUG nova.compute.manager [req-d845c594-923d-4727-b07f-cf634d7b7bc2 req-3fde5890-b0e5-4798-bbfc-ce81291249a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.414 257641 DEBUG oslo_concurrency.lockutils [req-d845c594-923d-4727-b07f-cf634d7b7bc2 req-3fde5890-b0e5-4798-bbfc-ce81291249a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.414 257641 DEBUG oslo_concurrency.lockutils [req-d845c594-923d-4727-b07f-cf634d7b7bc2 req-3fde5890-b0e5-4798-bbfc-ce81291249a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.415 257641 DEBUG oslo_concurrency.lockutils [req-d845c594-923d-4727-b07f-cf634d7b7bc2 req-3fde5890-b0e5-4798-bbfc-ce81291249a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.415 257641 DEBUG nova.compute.manager [req-d845c594-923d-4727-b07f-cf634d7b7bc2 req-3fde5890-b0e5-4798-bbfc-ce81291249a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] No waiting events found dispatching network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.415 257641 WARNING nova.compute.manager [req-d845c594-923d-4727-b07f-cf634d7b7bc2 req-3fde5890-b0e5-4798-bbfc-ce81291249a5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received unexpected event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:18:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:55.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.534 257641 DEBUG oslo_concurrency.lockutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.535 257641 DEBUG oslo_concurrency.lockutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.536 257641 INFO nova.compute.manager [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Rebooting instance#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.568 257641 DEBUG oslo_concurrency.lockutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.569 257641 DEBUG oslo_concurrency.lockutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquired lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:18:55 np0005539550 nova_compute[257631]: 2025-11-29 08:18:55.570 257641 DEBUG nova.network.neutron [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:18:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:56.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:56 np0005539550 nova_compute[257631]: 2025-11-29 08:18:56.971 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 169 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 234 op/s
Nov 29 03:18:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:57.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.429 257641 DEBUG nova.network.neutron [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Updating instance_info_cache with network_info: [{"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.446 257641 DEBUG oslo_concurrency.lockutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Releasing lock "refresh_cache-a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.447 257641 DEBUG nova.compute.manager [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:57 np0005539550 kernel: tap86a9f0a7-7d (unregistering): left promiscuous mode
Nov 29 03:18:57 np0005539550 NetworkManager[49039]: <info>  [1764404337.5714] device (tap86a9f0a7-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:18:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:57Z|00439|binding|INFO|Releasing lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 from this chassis (sb_readonly=0)
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.580 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:57Z|00440|binding|INFO|Setting lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 down in Southbound
Nov 29 03:18:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:57Z|00441|binding|INFO|Removing iface tap86a9f0a7-7d ovn-installed in OVS
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.583 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.588 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:e6:36 10.100.0.11'], port_security=['fa:16:3e:57:e6:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a258fb8b-4163-4a1d-b0fd-4006ed31fd0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a2139732-1006-47ec-ac7a-1fc524679243', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '742c4de6c1314ca1aeaf3f5666a16967', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3580c5f6-234a-4467-9b1b-884cdcfc8244', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39238d15-52ac-4b26-9f2b-82be3c84cd26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.589 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 in datapath a2139732-1006-47ec-ac7a-1fc524679243 unbound from our chassis#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.590 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a2139732-1006-47ec-ac7a-1fc524679243, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.592 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[66c78916-0db0-4ade-9c07-619c3cff345f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.593 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 namespace which is not needed anymore#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.602 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 29 03:18:57 np0005539550 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000006e.scope: Consumed 4.831s CPU time.
Nov 29 03:18:57 np0005539550 systemd-machined[216673]: Machine qemu-52-instance-0000006e terminated.
Nov 29 03:18:57 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [NOTICE]   (326408) : haproxy version is 2.8.14-c23fe91
Nov 29 03:18:57 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [NOTICE]   (326408) : path to executable is /usr/sbin/haproxy
Nov 29 03:18:57 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [WARNING]  (326408) : Exiting Master process...
Nov 29 03:18:57 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [WARNING]  (326408) : Exiting Master process...
Nov 29 03:18:57 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [ALERT]    (326408) : Current worker (326415) exited with code 143 (Terminated)
Nov 29 03:18:57 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326386]: [WARNING]  (326408) : All workers exited. Exiting... (0)
Nov 29 03:18:57 np0005539550 systemd[1]: libpod-06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3.scope: Deactivated successfully.
Nov 29 03:18:57 np0005539550 conmon[326386]: conmon 06fd7771f01c55a90ba7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3.scope/container/memory.events
Nov 29 03:18:57 np0005539550 podman[326501]: 2025-11-29 08:18:57.71956391 +0000 UTC m=+0.042240135 container died 06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:18:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3-userdata-shm.mount: Deactivated successfully.
Nov 29 03:18:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-297062b6e4e018cc9c8aca6ad2a38e590bb7e348e6889968a08eb1c1fefb5cbb-merged.mount: Deactivated successfully.
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.769 257641 INFO nova.virt.libvirt.driver [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance destroyed successfully.#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.770 257641 DEBUG nova.objects.instance [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lazy-loading 'resources' on Instance uuid a258fb8b-4163-4a1d-b0fd-4006ed31fd0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:57 np0005539550 podman[326501]: 2025-11-29 08:18:57.776174 +0000 UTC m=+0.098850235 container cleanup 06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:18:57 np0005539550 systemd[1]: libpod-conmon-06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3.scope: Deactivated successfully.
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.794 257641 DEBUG nova.virt.libvirt.vif [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-292121147',display_name='tempest-InstanceActionsTestJSON-server-292121147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-292121147',id=110,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='742c4de6c1314ca1aeaf3f5666a16967',ramdisk_id='',reservation_id='r-qo0toh0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2049339947',owner_user_name='tempest-InstanceActionsTestJSON-2049339947-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:57Z,user_data=None,user_id='c2d81b2cb6a140dc94e2c0965d4219cb',uuid=a258fb8b-4163-4a1d-b0fd-4006ed31fd0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.795 257641 DEBUG nova.network.os_vif_util [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converting VIF {"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.797 257641 DEBUG nova.network.os_vif_util [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.797 257641 DEBUG os_vif [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.802 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86a9f0a7-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.805 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.811 257641 INFO os_vif [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d')#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.820 257641 DEBUG nova.virt.libvirt.driver [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Start _get_guest_xml network_info=[{"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.824 257641 WARNING nova.virt.libvirt.driver [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.831 257641 DEBUG nova.virt.libvirt.host [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.833 257641 DEBUG nova.virt.libvirt.host [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.836 257641 DEBUG nova.virt.libvirt.host [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.836 257641 DEBUG nova.virt.libvirt.host [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.837 257641 DEBUG nova.virt.libvirt.driver [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.838 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.838 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.838 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.838 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.839 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.839 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.839 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.839 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.839 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.840 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.840 257641 DEBUG nova.virt.hardware [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.840 257641 DEBUG nova.objects.instance [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a258fb8b-4163-4a1d-b0fd-4006ed31fd0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:57 np0005539550 podman[326540]: 2025-11-29 08:18:57.851495126 +0000 UTC m=+0.049930731 container remove 06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.857 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[734c0289-be82-4c69-989b-1d5583f391ee]: (4, ('Sat Nov 29 08:18:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 (06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3)\n06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3\nSat Nov 29 08:18:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 (06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3)\n06fd7771f01c55a90ba7666ac7208799b757e762ff30350f109c4ba24a3112b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.858 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e52490f4-1c94-495c-94e8-3f2424a1c882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.859 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2139732-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.861 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.870 257641 DEBUG oslo_concurrency.processutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:57 np0005539550 kernel: tapa2139732-10: left promiscuous mode
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.880 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[744e0743-9bd9-4c42-9d9e-70087130d6dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.891 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e68d67fb-ec94-4db3-ac7f-9a509c69b70b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.892 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[05c7ec6f-43a2-4ee4-9809-14b01c12df05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.906 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e5a4e220-2b6b-41c8-aa90-156c687c4851]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738059, 'reachable_time': 19507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326556, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 systemd[1]: run-netns-ovnmeta\x2da2139732\x2d1006\x2d47ec\x2dac7a\x2d1fc524679243.mount: Deactivated successfully.
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.908 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:18:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:57.909 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[02b7b990-8871-4ac6-ae81-3fa7b28ca35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.959 257641 DEBUG nova.compute.manager [req-6a1d7ea8-f7d3-429e-ad1a-6d174cbb3e1a req-2dcf42fb-7725-4fbd-8714-4437f63bcbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-unplugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.960 257641 DEBUG oslo_concurrency.lockutils [req-6a1d7ea8-f7d3-429e-ad1a-6d174cbb3e1a req-2dcf42fb-7725-4fbd-8714-4437f63bcbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.960 257641 DEBUG oslo_concurrency.lockutils [req-6a1d7ea8-f7d3-429e-ad1a-6d174cbb3e1a req-2dcf42fb-7725-4fbd-8714-4437f63bcbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.960 257641 DEBUG oslo_concurrency.lockutils [req-6a1d7ea8-f7d3-429e-ad1a-6d174cbb3e1a req-2dcf42fb-7725-4fbd-8714-4437f63bcbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.960 257641 DEBUG nova.compute.manager [req-6a1d7ea8-f7d3-429e-ad1a-6d174cbb3e1a req-2dcf42fb-7725-4fbd-8714-4437f63bcbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] No waiting events found dispatching network-vif-unplugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:18:57 np0005539550 nova_compute[257631]: 2025-11-29 08:18:57.961 257641 WARNING nova.compute.manager [req-6a1d7ea8-f7d3-429e-ad1a-6d174cbb3e1a req-2dcf42fb-7725-4fbd-8714-4437f63bcbdc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received unexpected event network-vif-unplugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 29 03:18:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:18:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:18:58.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
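[annotation] The anonymous "HEAD / HTTP/1.0" returning 200 with near-zero latency is the signature of a load-balancer health probe against the radosgw beast frontend. A hypothetical reproduction; the port (8080) is an assumption, since the log line does not show which port beast is bound to:

    import requests

    # Probe the RGW endpoint the way the balancer does; 8080 is a guess
    # for the beast frontend port, which this log does not record.
    resp = requests.head('http://192.168.122.102:8080/', timeout=5)
    print(resp.status_code)  # the log shows 200 for these probes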
Nov 29 03:18:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:18:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:18:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039681052' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.368 257641 DEBUG oslo_concurrency.processutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.409 257641 DEBUG oslo_concurrency.processutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:18:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:18:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3579115858' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.843 257641 DEBUG oslo_concurrency.processutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
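[annotation] Both "ceph mon dump" round trips above come from oslo.concurrency's processutils, which logs the command at processutils.py:384 and the exit code plus duration at :422. A minimal sketch of the same call, assuming the openstack keyring and /etc/ceph/ceph.conf from the log are readable:

    import json
    from oslo_concurrency import processutils

    # processutils.execute wraps subprocess, raises on non-zero exit,
    # and returns (stdout, stderr) as strings.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    monmap = json.loads(out)
    print([mon['name'] for mon in monmap['mons']])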
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.846 257641 DEBUG nova.virt.libvirt.vif [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-292121147',display_name='tempest-InstanceActionsTestJSON-server-292121147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-292121147',id=110,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='742c4de6c1314ca1aeaf3f5666a16967',ramdisk_id='',reservation_id='r-qo0toh0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2049339947',owner_user_name='tempest-InstanceActionsTestJSON-2049339947-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:57Z,user_data=None,user_id='c2d81b2cb6a140dc94e2c0965d4219cb',uuid=a258fb8b-4163-4a1d-b0fd-4006ed31fd0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.847 257641 DEBUG nova.network.os_vif_util [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converting VIF {"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.849 257641 DEBUG nova.network.os_vif_util [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
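[annotation] The "Converted object" line shows the nova VIF dict reduced to an os-vif VIFOpenVSwitch whose vif_name is "tap" plus a truncated port UUID. A toy illustration of that naming rule (nova caps interface names at 14 characters to stay under the kernel's 15-character device-name limit):

    # Derive the tap device name for the OVS VIF, mirroring the
    # tap86a9f0a7-7d seen in the log: "tap" + port UUID, cut to 14 chars.
    vif_id = '86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3'
    devname = ('tap' + vif_id)[:14]
    assert devname == 'tap86a9f0a7-7d'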
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.851 257641 DEBUG nova.objects.instance [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lazy-loading 'pci_devices' on Instance uuid a258fb8b-4163-4a1d-b0fd-4006ed31fd0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.877 257641 DEBUG nova.virt.libvirt.driver [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <uuid>a258fb8b-4163-4a1d-b0fd-4006ed31fd0a</uuid>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <name>instance-0000006e</name>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <nova:name>tempest-InstanceActionsTestJSON-server-292121147</nova:name>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:18:57</nova:creationTime>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:user uuid="c2d81b2cb6a140dc94e2c0965d4219cb">tempest-InstanceActionsTestJSON-2049339947-project-member</nova:user>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:project uuid="742c4de6c1314ca1aeaf3f5666a16967">tempest-InstanceActionsTestJSON-2049339947</nova:project>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <nova:port uuid="86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <entry name="serial">a258fb8b-4163-4a1d-b0fd-4006ed31fd0a</entry>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <entry name="uuid">a258fb8b-4163-4a1d-b0fd-4006ed31fd0a</entry>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk.config">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:57:e6:36"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <target dev="tap86a9f0a7-7d"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a/console.log" append="off"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:18:58 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:18:58 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:18:58 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:18:58 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
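[annotation] The domain XML above is worth machine-parsing when debugging storage wiring: both disks are network/rbd devices pointing at the three mons on port 6789 with the shared libvirt ceph secret. A minimal, self-contained sketch of extracting the RBD sources from such a dump (xml_text below is a cut-down stand-in for the logged <domain> document):

    import xml.etree.ElementTree as ET

    # Cut-down stand-in for the <domain> dump logged above (one rbd disk).
    xml_text = '''<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>'''

    dom = ET.fromstring(xml_text)
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        if src is not None and src.get('protocol') == 'rbd':
            hosts = [h.get('name') for h in src.findall('host')]
            print(disk.get('device'), src.get('name'), hosts)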
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.879 257641 DEBUG nova.virt.libvirt.driver [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.880 257641 DEBUG nova.virt.libvirt.driver [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.881 257641 DEBUG nova.virt.libvirt.vif [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-292121147',display_name='tempest-InstanceActionsTestJSON-server-292121147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-292121147',id=110,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='742c4de6c1314ca1aeaf3f5666a16967',ramdisk_id='',reservation_id='r-qo0toh0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2049339947',owner_user_name='tempest-InstanceActionsTestJSON-2049339947-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:57Z,user_data=None,user_id='c2d81b2cb6a140dc94e2c0965d4219cb',uuid=a258fb8b-4163-4a1d-b0fd-4006ed31fd0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.881 257641 DEBUG nova.network.os_vif_util [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converting VIF {"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.881 257641 DEBUG nova.network.os_vif_util [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.882 257641 DEBUG os_vif [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.882 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.883 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.883 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.885 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.886 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86a9f0a7-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.886 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap86a9f0a7-7d, col_values=(('external_ids', {'iface-id': '86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:e6:36', 'vm-uuid': 'a258fb8b-4163-4a1d-b0fd-4006ed31fd0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
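[annotation] The two ovsdbapp commands above are the whole of the os-vif plug on the OVS side: create the port on br-int if it does not exist, then stamp the Interface row's external_ids so that ovn-controller can match iface-id to the logical port. A rough ovs-vsctl equivalent, for illustration only (os-vif speaks the OVSDB protocol directly rather than shelling out):

    import subprocess

    port = 'tap86a9f0a7-7d'
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],
                   check=True)
    subprocess.run(
        ['ovs-vsctl', 'set', 'Interface', port,
         'external_ids:iface-id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3',
         'external_ids:iface-status=active',
         'external_ids:attached-mac=fa:16:3e:57:e6:36'],
        check=True)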
Nov 29 03:18:58 np0005539550 NetworkManager[49039]: <info>  [1764404338.8883] manager: (tap86a9f0a7-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.887 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.893 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.894 257641 INFO os_vif [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d')#033[00m
Nov 29 03:18:58 np0005539550 kernel: tap86a9f0a7-7d: entered promiscuous mode
Nov 29 03:18:58 np0005539550 NetworkManager[49039]: <info>  [1764404338.9661] manager: (tap86a9f0a7-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Nov 29 03:18:58 np0005539550 systemd-udevd[326480]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:18:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:58Z|00442|binding|INFO|Claiming lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for this chassis.
Nov 29 03:18:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:58Z|00443|binding|INFO|86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3: Claiming fa:16:3e:57:e6:36 10.100.0.11
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.968 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.974 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:e6:36 10.100.0.11'], port_security=['fa:16:3e:57:e6:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a258fb8b-4163-4a1d-b0fd-4006ed31fd0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a2139732-1006-47ec-ac7a-1fc524679243', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '742c4de6c1314ca1aeaf3f5666a16967', 'neutron:revision_number': '5', 'neutron:security_group_ids': '3580c5f6-234a-4467-9b1b-884cdcfc8244', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39238d15-52ac-4b26-9f2b-82be3c84cd26, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.975 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 in datapath a2139732-1006-47ec-ac7a-1fc524679243 bound to our chassis#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.977 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a2139732-1006-47ec-ac7a-1fc524679243#033[00m
Nov 29 03:18:58 np0005539550 NetworkManager[49039]: <info>  [1764404338.9793] device (tap86a9f0a7-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:18:58 np0005539550 NetworkManager[49039]: <info>  [1764404338.9801] device (tap86a9f0a7-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:18:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:58Z|00444|binding|INFO|Setting lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 ovn-installed in OVS
Nov 29 03:18:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:58Z|00445|binding|INFO|Setting lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 up in Southbound
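[annotation] Messages 00442-00445 are the normal OVN claim handshake: ovn-controller sees the iface-id appear on its integration bridge, claims the lport for this chassis, marks it ovn-installed in OVS, and flips the port up in the Southbound DB, which is what ultimately lets neutron emit network-vif-plugged. One way to confirm the binding from the chassis side; a hedged sketch assuming ovn-sbctl is installed and can reach the Southbound DB:

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=chassis', 'find', 'Port_Binding',
         'logical_port=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3'],
        capture_output=True, text=True, check=True).stdout.strip()
    print(out or 'port not bound to any chassis')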
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.985 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539550 nova_compute[257631]: 2025-11-29 08:18:58.989 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.989 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb8896d-f4bc-471e-8ca6-e9c03d46b52b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.990 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa2139732-11 in ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.992 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa2139732-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.992 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0b0735c7-ab38-4042-9c9a-de9bf6177866]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:58.994 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[895e59ed-2c86-48b0-b0b8-9c7370075063]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
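[annotation] Provisioning metadata for the datapath starts with a veth pair: one end (tapa2139732-11) lives inside the ovnmeta namespace, while its peer (tapa2139732-10) stays in the root namespace to be plugged into br-int just below. A rough shell-level equivalent of what the agent performs through privsep (illustration only, run as root; names taken from the log):

    import subprocess

    ns = 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243'
    # Create the pair in the root namespace, then move one end into
    # the metadata namespace, as the agent's log lines describe.
    subprocess.run(['ip', 'link', 'add', 'tapa2139732-10', 'type', 'veth',
                    'peer', 'name', 'tapa2139732-11'], check=True)
    subprocess.run(['ip', 'link', 'set', 'tapa2139732-11', 'netns', ns],
                   check=True)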
Nov 29 03:18:59 np0005539550 systemd-machined[216673]: New machine qemu-53-instance-0000006e.
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.007 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b284032a-f283-4f1f-86d9-8d66b7a56251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 systemd[1]: Started Virtual Machine qemu-53-instance-0000006e.
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.030 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[463e69c2-d386-472b-bae8-facceb431653]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.060 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1007cf-3018-45d5-86b1-abeb6d1f1aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.065 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbf9d50-970c-4407-87dd-664bb1acb09b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 NetworkManager[49039]: <info>  [1764404339.0673] manager: (tapa2139732-10): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.099 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bf67c873-ebe4-4623-92ac-4adff523d732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.104 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[34b255e5-14b5-4fa7-b642-fcb0ceecaf2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 NetworkManager[49039]: <info>  [1764404339.1293] device (tapa2139732-10): carrier: link connected
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.136 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8945fa6d-5989-4fc0-b01b-575c529cb439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.153 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9390ed9e-e7af-4558-98d9-22d65b568616]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa2139732-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:db:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738679, 'reachable_time': 20269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326664, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.168 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1618cd-ffae-42ca-af26-b47ac86ac4c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:db48'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 738679, 'tstamp': 738679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326665, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.184 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2fccaa-dec3-4dde-8e66-3caa5cbb11e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa2139732-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:db:48'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 122], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738679, 'reachable_time': 20269, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 326666, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.213 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b4d5bd8d-2670-4495-85ce-9e0f7d7419eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.266 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[90cb1747-d32c-4143-a140-62c54ca2d8e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.267 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2139732-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.267 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.268 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2139732-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:59 np0005539550 NetworkManager[49039]: <info>  [1764404339.2700] manager: (tapa2139732-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Nov 29 03:18:59 np0005539550 kernel: tapa2139732-10: entered promiscuous mode
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.273 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa2139732-10, col_values=(('external_ids', {'iface-id': 'eb43f9e1-bffe-4017-ae54-6ff13736ca49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:18:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:18:59Z|00446|binding|INFO|Releasing lport eb43f9e1-bffe-4017-ae54-6ff13736ca49 from this chassis (sb_readonly=0)
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.284 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.293 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.294 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a2139732-1006-47ec-ac7a-1fc524679243.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a2139732-1006-47ec-ac7a-1fc524679243.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.295 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c26de28b-1659-401d-9d2b-8b5ccbd194a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.295 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-a2139732-1006-47ec-ac7a-1fc524679243
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/a2139732-1006-47ec-ac7a-1fc524679243.pid.haproxy
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID a2139732-1006-47ec-ac7a-1fc524679243
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:18:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:18:59.296 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'env', 'PROCESS_TAG=haproxy-a2139732-1006-47ec-ac7a-1fc524679243', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a2139732-1006-47ec-ac7a-1fc524679243.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
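[annotation] With the config rendered, the agent launches haproxy inside the namespace through rootwrap; per the config above, the proxy binds 169.254.169.254:80, forwards to the UNIX socket at /var/lib/neutron/metadata_proxy, and tags each request with X-OVN-Network-ID. A hypothetical smoke test once the proxy is up (run as root on the compute node; namespace name taken from the log):

    import subprocess

    ns = 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243'
    # Reaching the proxy at all is the point of the test; the metadata
    # service behind it may well answer "/" with a 404.
    subprocess.run(['ip', 'netns', 'exec', ns, 'curl', '-s', '-o',
                    '/dev/null', '-w', '%{http_code}\n',
                    'http://169.254.169.254/'], check=True)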
Nov 29 03:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:18:59
Nov 29 03:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'vms']
Nov 29 03:18:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:18:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 169 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 240 op/s
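The balancer pass above found nothing to do: upmap mode with max misplaced 0.05 prepared 0/10 changes while all 305 PGs are active+clean. A hedged sketch of querying the same module state from the CLI; `ceph balancer status` is a standard mgr command, but the exact JSON keys printed below are assumed from its usual output:

    # Sketch: query the mgr balancer module that produced the lines above.
    # The "mode" and "active" keys are assumed from the usual status output.
    import json, subprocess

    out = subprocess.check_output(["ceph", "balancer", "status", "--format=json"],
                                  text=True)
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))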
Nov 29 03:18:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:18:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:18:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:18:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
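The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, answered 200 with an empty body in about a millisecond, are load-balancer-style health probes against the radosgw beast frontend. A minimal sketch issuing the same probe; the target host and port are assumptions, since the log records the client address but not the listening endpoint:

    # Sketch: the same anonymous HEAD / health probe radosgw is answering.
    # HOST and PORT are assumptions; the log does not show the listen address.
    import http.client

    HOST, PORT = "192.168.122.100", 8080  # assumed radosgw endpoint
    conn = http.client.HTTPConnection(HOST, PORT, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)      # expect 200 with an empty body
    conn.close()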
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.654 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for a258fb8b-4163-4a1d-b0fd-4006ed31fd0a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.655 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404339.6537054, a258fb8b-4163-4a1d-b0fd-4006ed31fd0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.655 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.658 257641 DEBUG nova.compute.manager [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.663 257641 INFO nova.virt.libvirt.driver [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance rebooted successfully.#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.663 257641 DEBUG nova.compute.manager [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.687 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.690 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.720 257641 DEBUG oslo_concurrency.lockutils [None req-2a83ce48-3190-4e7e-9221-7af432b76ee7 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.723 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404339.6545618, a258fb8b-4163-4a1d-b0fd-4006ed31fd0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.723 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:18:59 np0005539550 podman[326739]: 2025-11-29 08:18:59.726276226 +0000 UTC m=+0.054711753 container create 77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.744 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:18:59 np0005539550 nova_compute[257631]: 2025-11-29 08:18:59.749 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:18:59 np0005539550 systemd[1]: Started libpod-conmon-77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92.scope.
Nov 29 03:18:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:18:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3c454fc3a39e08318899bca2cb05f90d8129653aa9eb5fc9b01dbaacfcb20ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:18:59 np0005539550 podman[326739]: 2025-11-29 08:18:59.698797037 +0000 UTC m=+0.027232584 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:18:59 np0005539550 podman[326739]: 2025-11-29 08:18:59.809287967 +0000 UTC m=+0.137723514 container init 77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:18:59 np0005539550 podman[326739]: 2025-11-29 08:18:59.814826278 +0000 UTC m=+0.143261805 container start 77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:18:59 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326754]: [NOTICE]   (326758) : New worker (326760) forked
Nov 29 03:18:59 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326754]: [NOTICE]   (326758) : Loading success.
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.086 257641 DEBUG nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.087 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.087 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.088 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.088 257641 DEBUG nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] No waiting events found dispatching network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.088 257641 WARNING nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received unexpected event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.089 257641 DEBUG nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.089 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.089 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.090 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.090 257641 DEBUG nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] No waiting events found dispatching network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.090 257641 WARNING nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received unexpected event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.090 257641 DEBUG nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.091 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.091 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.091 257641 DEBUG oslo_concurrency.lockutils [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.092 257641 DEBUG nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] No waiting events found dispatching network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:00 np0005539550 nova_compute[257631]: 2025-11-29 08:19:00.092 257641 WARNING nova.compute.manager [req-dc6490d1-4afb-4ef0-a757-923ebe3235e2 req-0ca6be44-ee1e-4be4-9ef2-7711f0136483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received unexpected event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for instance with vm_state active and task_state None.#033[00m
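The three identical acquire/pop/release triplets above trace nova's per-instance event registry: each network-vif-plugged event is popped under the instance's "-events" lock, and because the hard reboot has already completed (task_state None), no waiter is registered, so the event is logged as unexpected; that warning is benign here. A self-contained sketch of the lock-around-registry shape these DEBUG lines trace; this is an illustration, not nova's actual code:

    # Sketch of the pattern behind the lockutils DEBUG lines: a per-instance
    # lock guards a waiter registry, and popping an event nobody registered
    # for returns None, which nova reports as "unexpected". Illustrative only.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> callback

        def register(self, instance, event, cb):
            with self._lock:
                self._waiters[(instance, event)] = cb

        def pop_instance_event(self, instance, event):
            with self._lock:  # the "acquired ... released" pair in the log
                return self._waiters.pop((instance, event), None)

    events = InstanceEvents()
    cb = events.pop_instance_event("a258fb8b", "network-vif-plugged-86a9f0a7")
    if cb is None:
        print("Received unexpected event")  # matches the WARNING lines above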
Nov 29 03:19:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:00.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 169 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 222 op/s
Nov 29 03:19:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:01.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.112 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:02.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.499 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.500 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.500 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.500 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.500 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.501 257641 INFO nova.compute.manager [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Terminating instance#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.502 257641 DEBUG nova.compute.manager [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:19:02 np0005539550 kernel: tap86a9f0a7-7d (unregistering): left promiscuous mode
Nov 29 03:19:02 np0005539550 NetworkManager[49039]: <info>  [1764404342.5390] device (tap86a9f0a7-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.546 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:02Z|00447|binding|INFO|Releasing lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 from this chassis (sb_readonly=0)
Nov 29 03:19:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:02Z|00448|binding|INFO|Setting lport 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 down in Southbound
Nov 29 03:19:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:02Z|00449|binding|INFO|Removing iface tap86a9f0a7-7d ovn-installed in OVS
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.548 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.561 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:e6:36 10.100.0.11'], port_security=['fa:16:3e:57:e6:36 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a258fb8b-4163-4a1d-b0fd-4006ed31fd0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a2139732-1006-47ec-ac7a-1fc524679243', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '742c4de6c1314ca1aeaf3f5666a16967', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3580c5f6-234a-4467-9b1b-884cdcfc8244', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39238d15-52ac-4b26-9f2b-82be3c84cd26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.564 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 in datapath a2139732-1006-47ec-ac7a-1fc524679243 unbound from our chassis#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.565 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a2139732-1006-47ec-ac7a-1fc524679243, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.566 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[40fd159c-6131-4874-98fd-02a8090d18e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.566 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 namespace which is not needed anymore#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.569 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Nov 29 03:19:02 np0005539550 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000006e.scope: Consumed 3.677s CPU time.
Nov 29 03:19:02 np0005539550 systemd-machined[216673]: Machine qemu-53-instance-0000006e terminated.
Nov 29 03:19:02 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326754]: [NOTICE]   (326758) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:02 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326754]: [NOTICE]   (326758) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:02 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326754]: [WARNING]  (326758) : Exiting Master process...
Nov 29 03:19:02 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326754]: [ALERT]    (326758) : Current worker (326760) exited with code 143 (Terminated)
Nov 29 03:19:02 np0005539550 neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243[326754]: [WARNING]  (326758) : All workers exited. Exiting... (0)
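Worker exit code 143 above is the conventional 128 + signal encoding for SIGTERM (15): the master terminated its worker as part of an orderly shutdown, not a crash:

    # Exit code 143 = 128 + SIGTERM (15): an orderly termination, not a crash.
    # This is the standard shell/Linux convention for death-by-signal.
    import signal
    assert 128 + signal.SIGTERM == 143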
Nov 29 03:19:02 np0005539550 systemd[1]: libpod-77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92.scope: Deactivated successfully.
Nov 29 03:19:02 np0005539550 podman[326795]: 2025-11-29 08:19:02.691826196 +0000 UTC m=+0.044781870 container died 77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e3c454fc3a39e08318899bca2cb05f90d8129653aa9eb5fc9b01dbaacfcb20ce-merged.mount: Deactivated successfully.
Nov 29 03:19:02 np0005539550 podman[326795]: 2025-11-29 08:19:02.727214516 +0000 UTC m=+0.080170190 container cleanup 77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:19:02 np0005539550 systemd[1]: libpod-conmon-77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92.scope: Deactivated successfully.
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.735 257641 INFO nova.virt.libvirt.driver [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Instance destroyed successfully.#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.736 257641 DEBUG nova.objects.instance [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lazy-loading 'resources' on Instance uuid a258fb8b-4163-4a1d-b0fd-4006ed31fd0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.755 257641 DEBUG nova.virt.libvirt.vif [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:18:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-292121147',display_name='tempest-InstanceActionsTestJSON-server-292121147',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-292121147',id=110,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:18:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='742c4de6c1314ca1aeaf3f5666a16967',ramdisk_id='',reservation_id='r-qo0toh0z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-2049339947',owner_user_name='tempest-InstanceActionsTestJSON-2049339947-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:18:59Z,user_data=None,user_id='c2d81b2cb6a140dc94e2c0965d4219cb',uuid=a258fb8b-4163-4a1d-b0fd-4006ed31fd0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.756 257641 DEBUG nova.network.os_vif_util [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converting VIF {"id": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "address": "fa:16:3e:57:e6:36", "network": {"id": "a2139732-1006-47ec-ac7a-1fc524679243", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-948283519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "742c4de6c1314ca1aeaf3f5666a16967", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap86a9f0a7-7d", "ovs_interfaceid": "86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.756 257641 DEBUG nova.network.os_vif_util [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.756 257641 DEBUG os_vif [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.758 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.758 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86a9f0a7-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.759 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.761 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.763 257641 INFO os_vif [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:e6:36,bridge_name='br-int',has_traffic_filtering=True,id=86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3,network=Network(a2139732-1006-47ec-ac7a-1fc524679243),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap86a9f0a7-7d')#033[00m
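The unplug above goes through ovsdbapp's DelPortCommand with if_exists=True against br-int. The CLI equivalent, sketched below, is ovs-vsctl's del-port with --if-exists, which makes the delete a no-op if the port is already gone; invoking ovs-vsctl directly is an illustration, not what os-vif does internally:

    # Sketch: CLI equivalent of DelPortCommand(port=tap86a9f0a7-7d,
    # bridge=br-int, if_exists=True) from the transaction above.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap86a9f0a7-7d"],
        check=True)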
Nov 29 03:19:02 np0005539550 podman[326832]: 2025-11-29 08:19:02.793687217 +0000 UTC m=+0.043472737 container remove 77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.799 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[32dcd8a0-b87e-4d42-a62d-6261c1251bfa]: (4, ('Sat Nov 29 08:19:02 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 (77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92)\n77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92\nSat Nov 29 08:19:02 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 (77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92)\n77ea114ee1117579614e805e3459a148faa26a26ddb5731fdfdab813e488cb92\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.801 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb3ed12-5f38-4fd5-92c9-d65499324784]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.802 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2139732-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 kernel: tapa2139732-10: left promiscuous mode
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.818 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.820 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fcaa1f-4111-4cfa-ba9c-acb5ab58f905]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.835 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1a5435-b24b-4332-abc4-b9a8c1a478eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.836 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7bae4053-0a72-4714-b12c-0768fad1f589]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.850 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[76bd7bf9-e2e3-46de-a3ac-db9b4392b59f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 738671, 'reachable_time': 43889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326865, 'error': None, 'target': 'ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.852 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:02.852 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[52ba44fd-40fc-427b-aee1-8e2b2237218e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:02 np0005539550 systemd[1]: run-netns-ovnmeta\x2da2139732\x2d1006\x2d47ec\x2dac7a\x2d1fc524679243.mount: Deactivated successfully.
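With the last VIF unbound, the agent removes the now-empty ovnmeta- namespace through its privileged remove_netns helper, and systemd drops the matching /run/netns bind mount. The iproute2 equivalent, sketched below; `ip netns delete` is standard but requires root, and running it by hand is an illustration rather than the agent's own code path:

    # Sketch: iproute2 equivalent of the privileged remove_netns call above.
    # Requires root; deleting a namespace that is still in use will fail.
    import subprocess

    NS = "ovnmeta-a2139732-1006-47ec-ac7a-1fc524679243"
    subprocess.run(["ip", "netns", "delete", NS], check=True)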
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.990 257641 DEBUG nova.compute.manager [req-9441424a-08a5-4bfa-99bd-41ad463318dd req-6414343e-43a2-4336-ac81-60ca1a70e36b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-unplugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.991 257641 DEBUG oslo_concurrency.lockutils [req-9441424a-08a5-4bfa-99bd-41ad463318dd req-6414343e-43a2-4336-ac81-60ca1a70e36b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.991 257641 DEBUG oslo_concurrency.lockutils [req-9441424a-08a5-4bfa-99bd-41ad463318dd req-6414343e-43a2-4336-ac81-60ca1a70e36b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.991 257641 DEBUG oslo_concurrency.lockutils [req-9441424a-08a5-4bfa-99bd-41ad463318dd req-6414343e-43a2-4336-ac81-60ca1a70e36b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.991 257641 DEBUG nova.compute.manager [req-9441424a-08a5-4bfa-99bd-41ad463318dd req-6414343e-43a2-4336-ac81-60ca1a70e36b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] No waiting events found dispatching network-vif-unplugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:02 np0005539550 nova_compute[257631]: 2025-11-29 08:19:02.991 257641 DEBUG nova.compute.manager [req-9441424a-08a5-4bfa-99bd-41ad463318dd req-6414343e-43a2-4336-ac81-60ca1a70e36b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-unplugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:19:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 169 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 109 KiB/s wr, 148 op/s
Nov 29 03:19:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:03.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:03 np0005539550 nova_compute[257631]: 2025-11-29 08:19:03.562 257641 INFO nova.virt.libvirt.driver [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Deleting instance files /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_del#033[00m
Nov 29 03:19:03 np0005539550 nova_compute[257631]: 2025-11-29 08:19:03.562 257641 INFO nova.virt.libvirt.driver [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Deletion of /var/lib/nova/instances/a258fb8b-4163-4a1d-b0fd-4006ed31fd0a_del complete#033[00m
Nov 29 03:19:03 np0005539550 nova_compute[257631]: 2025-11-29 08:19:03.633 257641 INFO nova.compute.manager [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:19:03 np0005539550 nova_compute[257631]: 2025-11-29 08:19:03.634 257641 DEBUG oslo.service.loopingcall [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:19:03 np0005539550 nova_compute[257631]: 2025-11-29 08:19:03.634 257641 DEBUG nova.compute.manager [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:19:03 np0005539550 nova_compute[257631]: 2025-11-29 08:19:03.634 257641 DEBUG nova.network.neutron [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:19:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:04 np0005539550 nova_compute[257631]: 2025-11-29 08:19:04.685 257641 DEBUG nova.network.neutron [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:19:04 np0005539550 nova_compute[257631]: 2025-11-29 08:19:04.708 257641 INFO nova.compute.manager [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Took 1.07 seconds to deallocate network for instance.
Nov 29 03:19:04 np0005539550 nova_compute[257631]: 2025-11-29 08:19:04.832 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:04 np0005539550 nova_compute[257631]: 2025-11-29 08:19:04.832 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:04 np0005539550 nova_compute[257631]: 2025-11-29 08:19:04.869 257641 DEBUG nova.compute.manager [req-880c1cdf-b16c-4a72-8118-8927c718f2b4 req-2485f40f-00af-42b3-b065-eef65509e17c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-deleted-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:04 np0005539550 nova_compute[257631]: 2025-11-29 08:19:04.925 257641 DEBUG oslo_concurrency.processutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.106 257641 DEBUG nova.compute.manager [req-a018a55d-dc6c-4a07-a4b8-7fb50148a9fc req-4d7f3d86-2945-4ddc-8ed9-d52ce582afd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.107 257641 DEBUG oslo_concurrency.lockutils [req-a018a55d-dc6c-4a07-a4b8-7fb50148a9fc req-4d7f3d86-2945-4ddc-8ed9-d52ce582afd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.107 257641 DEBUG oslo_concurrency.lockutils [req-a018a55d-dc6c-4a07-a4b8-7fb50148a9fc req-4d7f3d86-2945-4ddc-8ed9-d52ce582afd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.108 257641 DEBUG oslo_concurrency.lockutils [req-a018a55d-dc6c-4a07-a4b8-7fb50148a9fc req-4d7f3d86-2945-4ddc-8ed9-d52ce582afd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.108 257641 DEBUG nova.compute.manager [req-a018a55d-dc6c-4a07-a4b8-7fb50148a9fc req-4d7f3d86-2945-4ddc-8ed9-d52ce582afd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] No waiting events found dispatching network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.108 257641 WARNING nova.compute.manager [req-a018a55d-dc6c-4a07-a4b8-7fb50148a9fc req-4d7f3d86-2945-4ddc-8ed9-d52ce582afd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Received unexpected event network-vif-plugged-86a9f0a7-7dcb-40f4-93a7-033cb85c0fb3 for instance with vm_state deleted and task_state None.
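The WARNING above is the tail end of Nova's external-event plumbing: the handler takes the per-instance "<uuid>-events" lock, tries to pop a waiter registered for network-vif-plugged-86a9f0a7-..., and warns when nothing is waiting (here because the instance was already deleted). A minimal pure-Python sketch of that pop-or-warn pattern; the class and method names mirror the log but are illustrative, not Nova's actual code:

    import threading

    class InstanceEvents:
        """Toy per-instance event registry, shaped like the lock lines above."""
        def __init__(self):
            self._lock = threading.Lock()   # stands in for the "<uuid>-events" lock
            self._waiters = {}              # (instance_uuid, event_name) -> callback

        def register(self, instance_uuid, event_name, callback):
            with self._lock:
                self._waiters[(instance_uuid, event_name)] = callback

        def pop_instance_event(self, instance_uuid, event_name):
            with self._lock:                # "Acquiring lock ... by _pop_event"
                return self._waiters.pop((instance_uuid, event_name), None)

    def dispatch(events, instance_uuid, event_name):
        cb = events.pop_instance_event(instance_uuid, event_name)
        if cb is None:
            # "No waiting events found dispatching ..." -> unexpected-event warning
            print(f"WARNING: unexpected event {event_name} for {instance_uuid}")
        else:
            cb()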
Nov 29 03:19:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/505807939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.364 257641 DEBUG oslo_concurrency.processutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.370 257641 DEBUG nova.compute.provider_tree [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.387 257641 DEBUG nova.scheduler.client.report [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
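The update_usage cycle above sizes its DISK_GB inventory from the `ceph df --format=json` call it just ran. A hedged sketch of that derivation: the `stats.total_bytes` key matches `ceph df` JSON output, but the rounding policy and the reserved/allocation_ratio values are copied from the log rather than being Nova's exact code:

    import json
    import subprocess

    def ceph_total_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
        """Cluster-wide capacity in GiB, from the same command the log shows."""
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        return json.loads(out)["stats"]["total_bytes"] // (1024 ** 3)

    # Placement inventory shaped like the logged one (values taken from the
    # log, not recomputed here).
    disk_inventory = {"total": 20, "reserved": 1, "min_unit": 1,
                      "max_unit": 20, "step_size": 1, "allocation_ratio": 0.9}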
Nov 29 03:19:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 148 MiB data, 877 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 24 KiB/s wr, 202 op/s
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.415 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:05.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.504 257641 INFO nova.scheduler.client.report [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Deleted allocations for instance a258fb8b-4163-4a1d-b0fd-4006ed31fd0a
Nov 29 03:19:05 np0005539550 nova_compute[257631]: 2025-11-29 08:19:05.648 257641 DEBUG oslo_concurrency.lockutils [None req-29159a8a-4603-4fdb-98cf-a982b92ad084 c2d81b2cb6a140dc94e2c0965d4219cb 742c4de6c1314ca1aeaf3f5666a16967 - - default default] Lock "a258fb8b-4163-4a1d-b0fd-4006ed31fd0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:06.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
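The anonymous "HEAD / HTTP/1.0" requests landing every couple of seconds from 192.168.122.100 and .102 are load-balancer-style health probes against the radosgw beast frontend. A minimal probe with the same shape; the port is an assumption, since the log does not show what beast is listening on:

    import http.client

    def rgw_healthy(host="192.168.122.100", port=8080, timeout=2.0):
        """HEAD / against radosgw; healthy iff it answers 200, like the log lines."""
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()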
Nov 29 03:19:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:07 np0005539550 nova_compute[257631]: 2025-11-29 08:19:07.118 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:07 np0005539550 podman[326893]: 2025-11-29 08:19:07.332960069 +0000 UTC m=+0.077917763 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
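The podman line above is a periodic healthcheck event for the ovn_controller container (health_status=healthy, failing streak 0). The recorded state can be read back with `podman inspect`; a small sketch, with the caveat that the template path is an assumption and older podman releases expose it as .State.Healthcheck rather than .State.Health:

    import subprocess

    def container_health(name="ovn_controller"):
        """Return podman's recorded health state for a container, e.g. 'healthy'."""
        out = subprocess.check_output(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name])
        return out.decode().strip()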
Nov 29 03:19:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 126 MiB data, 869 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 24 KiB/s wr, 183 op/s
Nov 29 03:19:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:07.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:07 np0005539550 nova_compute[257631]: 2025-11-29 08:19:07.807 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:08.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:19:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:19:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 123 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 161 op/s
Nov 29 03:19:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:09.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:09 np0005539550 nova_compute[257631]: 2025-11-29 08:19:09.741 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:10.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:10.566 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:19:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:10.567 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:19:10 np0005539550 nova_compute[257631]: 2025-11-29 08:19:10.567 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 123 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 22 KiB/s wr, 154 op/s
Nov 29 03:19:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:11.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:12 np0005539550 nova_compute[257631]: 2025-11-29 08:19:12.167 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:12.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:12 np0005539550 nova_compute[257631]: 2025-11-29 08:19:12.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 123 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 25 KiB/s wr, 123 op/s
Nov 29 03:19:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:13.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:14.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 123 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 116 op/s
Nov 29 03:19:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:16.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:17 np0005539550 nova_compute[257631]: 2025-11-29 08:19:17.167 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 123 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 859 KiB/s rd, 23 KiB/s wr, 39 op/s
Nov 29 03:19:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:17.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:17 np0005539550 nova_compute[257631]: 2025-11-29 08:19:17.734 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404342.7337677, a258fb8b-4163-4a1d-b0fd-4006ed31fd0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:19:17 np0005539550 nova_compute[257631]: 2025-11-29 08:19:17.734 257641 INFO nova.compute.manager [-] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] VM Stopped (Lifecycle Event)
Nov 29 03:19:17 np0005539550 nova_compute[257631]: 2025-11-29 08:19:17.761 257641 DEBUG nova.compute.manager [None req-e1847adc-6ffa-437e-9431-256b433cf007 - - - - - -] [instance: a258fb8b-4163-4a1d-b0fd-4006ed31fd0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:19:17 np0005539550 nova_compute[257631]: 2025-11-29 08:19:17.809 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:18.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:18.951 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:18.951 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:18.951 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
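The acquire/wait/release triple above is the signature of oslo.concurrency's named-lock decorator; at DEBUG level every decorated call site logs those three lines. A minimal reproduction using the same library (the function body is a stand-in, not the neutron code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named in-process lock held; oslo_concurrency logs
        # "Acquiring lock ...", "Lock ... acquired ... waited Ns" and
        # "Lock ... released ... held Ns" around this body at DEBUG level.
        pass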
Nov 29 03:19:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 123 MiB data, 868 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 44 op/s
Nov 29 03:19:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:19.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:19.569 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
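That DbSetCommand is how the metadata agent acknowledges the SB_Global nb_cfg bump seen earlier (33 to 34): after the randomized delay it writes the new value into its own Chassis_Private row. A hedged sketch of the equivalent call through ovsdbapp's db_set, assuming an already-connected southbound backend object named sb_api (the connection setup is omitted):

    # Assumes sb_api is a connected ovsdbapp OVSDB IDL backend; record UUID
    # and the external_ids key/value are copied from the log line above.
    sb_api.db_set(
        "Chassis_Private",
        "a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "34"}),
    ).execute(check_error=True)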
Nov 29 03:19:19 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002176682021217197 of space, bias 1.0, pg target 0.6530046063651591 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:19:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
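The pg_autoscaler targets above are a straight proportion: each pool's share of raw capacity, times its bias, times the cluster PG budget, then quantized to a power of two (and left alone when the change is too small, hence "quantized to 32 (current 32)" even for tiny targets). A budget of 300 PGs, i.e. mon_target_pg_per_osd=100 across 3 OSDs, reproduces the logged numbers exactly; that factor is inferred from the log, not read from configuration:

    def pg_target(usage_ratio, bias, pg_budget=300):
        """Raw pg target before power-of-two quantization."""
        return usage_ratio * bias * pg_budget

    # Pool 'vms': reproduces "pg target 0.6530046063651591"
    print(pg_target(0.002176682021217197, 1.0))
    # Pool 'cephfs.cephfs.meta', bias 4.0: reproduces "pg target 0.0017448352875488555"
    print(pg_target(1.4540294062907128e-06, 4.0))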
Nov 29 03:19:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.533 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.534 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.551 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.629 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.630 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.636 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.637 257641 INFO nova.compute.claims [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:19:20 np0005539550 nova_compute[257631]: 2025-11-29 08:19:20.732 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2595666590' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.201 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.211 257641 DEBUG nova.compute.provider_tree [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.232 257641 DEBUG nova.scheduler.client.report [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.259 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.261 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.334 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.335 257641 DEBUG nova.network.neutron [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.358 257641 INFO nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.377 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:19:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 144 MiB data, 882 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 885 KiB/s wr, 113 op/s
Nov 29 03:19:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:21.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.472 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.474 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.475 257641 INFO nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Creating image(s)
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.502 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.529 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.558 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.562 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.596 257641 DEBUG nova.policy [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.630 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
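Note the qemu-img probe runs under oslo_concurrency.prlimit with --as=1073741824 and --cpu=30, so a hostile or corrupt image cannot exhaust memory or CPU during inspection. The same guard can be sketched with the standard library alone; this is a simplified stand-in for the prlimit helper, not its actual implementation:

    import resource
    import subprocess

    def run_limited(cmd, as_bytes=1 << 30, cpu_seconds=30):
        """Run cmd with address-space and CPU-time caps, like the prlimit wrapper."""
        def cap():  # applied in the child just before exec
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return subprocess.run(cmd, preexec_fn=cap, capture_output=True, check=True)

    info = run_limited(["qemu-img", "info", "--force-share", "--output=json",
                        "/var/lib/nova/instances/_base/"
                        "f62ef5f82502d01c82174408aec7f3ac942e2488"])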
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.631 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.632 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.632 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.665 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:21 np0005539550 nova_compute[257631]: 2025-11-29 08:19:21.671 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.022 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.101 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
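The entries from 08:19:21.562 through 08:19:22.101 trace the RBD image-backend flow: probe the cached base file with qemu-img, `rbd import` it into the vms pool as <uuid>_disk, then resize it to the flavor's 1073741824-byte (1 GiB) root disk. Nova performs the resize through librbd; the CLI-equivalent sketch below shows the same sequence under that assumption, with the pool, image name and credentials copied from the log:

    import subprocess

    BASE = ("/var/lib/nova/instances/_base/"
            "f62ef5f82502d01c82174408aec7f3ac942e2488")
    IMAGE = "cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk"
    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    # Import the flat base file into the 'vms' pool as a format-2 RBD image,
    # then grow it to the flavor's root-disk size.
    subprocess.run(["rbd", "import", "--pool", "vms", BASE, IMAGE,
                    "--image-format=2", *CEPH], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", "--image", IMAGE,
                    "--size", "1G", *CEPH], check=True)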
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.169 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.226 257641 DEBUG nova.objects.instance [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.244 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.244 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Ensure instance console log exists: /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.245 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.245 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.246 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.591 257641 DEBUG nova.network.neutron [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Successfully created port: 0bdc8d4b-e261-4398-8465-58392acd35a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:19:22 np0005539550 nova_compute[257631]: 2025-11-29 08:19:22.852 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 170 MiB data, 893 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Nov 29 03:19:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:23.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:24.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:24 np0005539550 podman[327167]: 2025-11-29 08:19:24.326711563 +0000 UTC m=+0.064758418 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.335 257641 DEBUG nova.network.neutron [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Successfully updated port: 0bdc8d4b-e261-4398-8465-58392acd35a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:19:24 np0005539550 podman[327168]: 2025-11-29 08:19:24.348249621 +0000 UTC m=+0.086140142 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.364 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.364 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.365 257641 DEBUG nova.network.neutron [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.462 257641 DEBUG nova.compute.manager [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.462 257641 DEBUG nova.compute.manager [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing instance network info cache due to event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.463 257641 DEBUG oslo_concurrency.lockutils [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.533 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.534 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.547 257641 DEBUG nova.network.neutron [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.564 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.950 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.951 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.961 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:19:24 np0005539550 nova_compute[257631]: 2025-11-29 08:19:24.961 257641 INFO nova.compute.claims [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.187 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 199 MiB data, 904 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.6 MiB/s wr, 129 op/s
Nov 29 03:19:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:25.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3895627530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.660 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
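
This ceph df probe is how the RBD image backend measures pool capacity, shelling out through oslo.concurrency rather than calling librados for this query. A rough, hedged equivalent, assuming the vms pool named in the rbd commands later in this log:

    import json
    from oslo_concurrency import processutils

    def rbd_pool_stats(pool='vms'):
        # execute() returns (stdout, stderr); a non-zero exit raises
        # ProcessExecutionError.
        out, _err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        data = json.loads(out)
        # Each entry in data['pools'] carries per-pool usage counters.
        return next(p['stats'] for p in data['pools'] if p['name'] == pool)
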
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.667 257641 DEBUG nova.compute.provider_tree [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.689 257641 DEBUG nova.scheduler.client.report [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
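
The inventory payload above is what fixes schedulable capacity in placement: for each resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out for the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
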
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.734 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.735 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.743 257641 DEBUG nova.network.neutron [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.777 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.778 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance network_info: |[{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
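
network_info is a list of VIF dicts shaped like the entry above; consumers walk network -> subnets -> ips to find the fixed addresses. A small helper over that shape (hypothetical, for illustration only):

    def fixed_ips(network_info):
        """Yield (port_id, address) for every fixed IP in a network_info list."""
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        yield vif['id'], ip['address']

    # For the cache entry above this yields:
    # ('0bdc8d4b-e261-4398-8465-58392acd35a8', '10.100.0.3')
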
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.778 257641 DEBUG oslo_concurrency.lockutils [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.778 257641 DEBUG nova.network.neutron [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.781 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Start _get_guest_xml network_info=[{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.785 257641 WARNING nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.789 257641 DEBUG nova.virt.libvirt.host [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.790 257641 DEBUG nova.virt.libvirt.host [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.796 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.797 257641 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.800 257641 DEBUG nova.virt.libvirt.host [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.800 257641 DEBUG nova.virt.libvirt.host [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
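
The cgroups probe above falls through from v1 (controller missing) to v2 (controller found), which is the expected result on an EL9 host running the unified hierarchy. The v2 side of the check reduces to reading one file; a hedged approximation:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # On a cgroup-v2 host the root cgroup lists enabled controllers in
        # a single space-separated file; "cpu" present means the controller
        # is available to libvirt.
        f = Path(root, 'cgroup.controllers')
        return f.exists() and 'cpu' in f.read_text().split()
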
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.801 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.801 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.802 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.802 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.802 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.803 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.803 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.803 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.803 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.804 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.804 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.804 257641 DEBUG nova.virt.hardware [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
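
With every limit and preference unset (the 0:0:0 lines above mean "no constraint"), topology selection reduces to enumerating socket/core/thread factorizations of the vCPU count; for 1 vCPU the only factorization is 1:1:1, matching the single possible topology logged. A simplified sketch of that enumeration (nova's version in nova/virt/hardware.py additionally applies the maxima and preference ordering seen above):

    def possible_topologies(vcpus):
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
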
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.807 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.839 257641 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.861 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.945 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.945 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.945 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.986 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.987 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.987 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.996 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.997 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:19:25 np0005539550 nova_compute[257631]: 2025-11-29 08:19:25.997 257641 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Creating image(s)#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.026 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.071 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.104 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.108 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.171 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
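
The qemu-img info call above is deliberately run under oslo_concurrency.prlimit, capping the child at 1 GiB of address space and 30 s of CPU time so a malformed base image cannot wedge the compute service. The same invocation expressed through the library (a sketch; path taken from the log line above):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824
        cpu_time=30)               # --cpu=30
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        prlimit=limits,                                # wraps via oslo_concurrency.prlimit
        env_variables={'LC_ALL': 'C', 'LANG': 'C'})    # the "env LC_ALL=C LANG=C" prefix
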
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.172 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.173 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.174 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.212 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.216 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/198334771' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.303 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.327 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.331 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.570 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.641 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] resizing rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
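
The rbd import plus the resize above seed the Ceph-backed root disk: the cached base image is pushed into the vms pool via the CLI, then grown to the flavor's 1 GiB root volume. The resize goes through the librbd Python bindings; a hedged equivalent of that step:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, 'bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk') as img:
                img.resize(1073741824)  # bytes; matches "resizing ... to 1073741824"
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
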
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.674 257641 DEBUG nova.policy [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '14d293467f8e498eaa87b6b8976b34d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27fd30263a7f4717b84946720a5770b5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
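
The policy line above is a non-fatal check: network:attach_external_network is evaluated against the request's credentials, the result is False, and the build simply continues (as the following lines show). Stated directly with oslo.policy, the check looks roughly like this (enforcer setup abbreviated; credential fields mirror the log entry):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    creds = {'user_id': '14d293467f8e498eaa87b6b8976b34d9',
             'project_id': '27fd30263a7f4717b84946720a5770b5',
             'roles': ['reader', 'member']}
    # enforce() returns a bool by default instead of raising, which is
    # why a failed check here is only a DEBUG line.
    allowed = enforcer.enforce('network:attach_external_network', {}, creds)
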
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.750 257641 DEBUG nova.objects.instance [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'migration_context' on Instance uuid bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.766 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.767 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Ensure instance console log exists: /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.767 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.767 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.767 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/252694727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.822 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.824 257641 DEBUG nova.virt.libvirt.vif [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1568692522',display_name='tempest-TestNetworkAdvancedServerOps-server-1568692522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1568692522',id=111,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNKUwFrrTjn8atdc6IVHURjdCwbc8WxyLGXpa+LJc5sLs2eoepMjjuqxjn33AoGUMizcXrpPXctDgQXs8T7l76aOuh+gBdm/mktVIbC7S76mvgSpzr3zbuH99OXaXcKFA==',key_name='tempest-TestNetworkAdvancedServerOps-1778483648',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-kpo3ssd1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:21Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=cf3d3db9-f753-47a8-93d5-7f0491bb03fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.824 257641 DEBUG nova.network.os_vif_util [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.825 257641 DEBUG nova.network.os_vif_util [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.827 257641 DEBUG nova.objects.instance [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.857 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <uuid>cf3d3db9-f753-47a8-93d5-7f0491bb03fd</uuid>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <name>instance-0000006f</name>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1568692522</nova:name>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:19:25</nova:creationTime>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <nova:port uuid="0bdc8d4b-e261-4398-8465-58392acd35a8">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <entry name="serial">cf3d3db9-f753-47a8-93d5-7f0491bb03fd</entry>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <entry name="uuid">cf3d3db9-f753-47a8-93d5-7f0491bb03fd</entry>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk.config">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:5a:07:26"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <target dev="tap0bdc8d4b-e2"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/console.log" append="off"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:19:26 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:19:26 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:19:26 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:19:26 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
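
From here the rendered XML is handed to libvirt. Stripped of nova's retry and event plumbing, the define-and-launch step is roughly the following (libvirt-python; guest_xml stands for the <domain> document printed above):

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(guest_xml)  # registers instance-0000006f with libvirtd
        dom.create()                     # boots the defined domain
    finally:
        conn.close()
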
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.858 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Preparing to wait for external event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.858 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.859 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.859 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.859 257641 DEBUG nova.virt.libvirt.vif [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1568692522',display_name='tempest-TestNetworkAdvancedServerOps-server-1568692522',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1568692522',id=111,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNKUwFrrTjn8atdc6IVHURjdCwbc8WxyLGXpa+LJc5sLs2eoepMjjuqxjn33AoGUMizcXrpPXctDgQXs8T7l76aOuh+gBdm/mktVIbC7S76mvgSpzr3zbuH99OXaXcKFA==',key_name='tempest-TestNetworkAdvancedServerOps-1778483648',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-kpo3ssd1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:21Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=cf3d3db9-f753-47a8-93d5-7f0491bb03fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.860 257641 DEBUG nova.network.os_vif_util [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.860 257641 DEBUG nova.network.os_vif_util [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.861 257641 DEBUG os_vif [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.861 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.862 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.862 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.867 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.867 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0bdc8d4b-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.868 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0bdc8d4b-e2, col_values=(('external_ids', {'iface-id': '0bdc8d4b-e261-4398-8465-58392acd35a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5a:07:26', 'vm-uuid': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.869 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:26 np0005539550 NetworkManager[49039]: <info>  [1764404366.8705] manager: (tap0bdc8d4b-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.871 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.876 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.878 257641 INFO os_vif [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2')#033[00m
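The plug that just succeeded is three idempotent OVSDB steps: add the bridge if missing, add the tap port, and tag the Interface row with neutron's iface-id / attached-mac / vm-uuid. A sketch of the equivalent calls through the stock ovs-vsctl CLI (values copied from the transaction logged above; run as root; --may-exist mirrors the may_exist=True flags):

    import subprocess

    # Values copied from the AddBridge/AddPort/DbSet transaction above.
    bridge, port = "br-int", "tap0bdc8d4b-e2"
    external_ids = {
        "iface-id": "0bdc8d4b-e261-4398-8465-58392acd35a8",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:5a:07:26",
        "vm-uuid": "cf3d3db9-f753-47a8-93d5-7f0491bb03fd",
    }

    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port], check=True)
    # Quote each value: MACs and UUIDs contain characters ovs-vsctl treats specially.
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", port]
        + [f'external_ids:{k}="{v}"' for k, v in external_ids.items()],
        check=True,
    )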
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.957 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.958 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.959 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:5a:07:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.959 257641 INFO nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Using config drive#033[00m
Nov 29 03:19:26 np0005539550 nova_compute[257631]: 2025-11-29 08:19:26.987 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.172 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 215 MiB data, 915 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 133 op/s
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.447 257641 INFO nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Creating config drive at /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/disk.config#033[00m
Nov 29 03:19:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:27.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.452 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo25atot5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.587 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo25atot5" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.629 257641 DEBUG nova.storage.rbd_utils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.633 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/disk.config cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.788 257641 DEBUG oslo_concurrency.processutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/disk.config cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.789 257641 INFO nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Deleting local config drive /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/disk.config because it was imported into RBD.#033[00m
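The config-drive sequence above is: build an ISO9660 image with mkisofs, import it into the Ceph vms pool as <uuid>_disk.config, then drop the local copy. A sketch of the same commands with the arguments from the log (the /tmp staging directory is the ephemeral one mkisofs consumed above; in practice a fresh temp dir would be used):

    import os
    import subprocess

    # Sketch of the flow logged above: mkisofs -> rbd import -> unlink.
    uuid = "cf3d3db9-f753-47a8-93d5-7f0491bb03fd"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
         "/tmp/tmpo25atot5"],   # staging dir name taken from the log
        check=True,
    )
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(iso)  # the local copy is only a staging artifact once imported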
Nov 29 03:19:27 np0005539550 kernel: tap0bdc8d4b-e2: entered promiscuous mode
Nov 29 03:19:27 np0005539550 NetworkManager[49039]: <info>  [1764404367.8398] manager: (tap0bdc8d4b-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/201)
Nov 29 03:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:27 np0005539550 systemd-udevd[327526]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.887 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:27Z|00450|binding|INFO|Claiming lport 0bdc8d4b-e261-4398-8465-58392acd35a8 for this chassis.
Nov 29 03:19:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:27Z|00451|binding|INFO|0bdc8d4b-e261-4398-8465-58392acd35a8: Claiming fa:16:3e:5a:07:26 10.100.0.3
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.891 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:27 np0005539550 NetworkManager[49039]: <info>  [1764404367.9016] device (tap0bdc8d4b-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:27 np0005539550 NetworkManager[49039]: <info>  [1764404367.9028] device (tap0bdc8d4b-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.903 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:07:26 10.100.0.3'], port_security=['fa:16:3e:5a:07:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fea4296f-ae17-483b-99ac-9c138bd93045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34584095-7fef-4e24-ba1b-1ecac0c29f47, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0bdc8d4b-e261-4398-8465-58392acd35a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.905 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0bdc8d4b-e261-4398-8465-58392acd35a8 in datapath 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 bound to our chassis#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.907 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309#033[00m
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.918 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5959c372-fde0-4a75-adf8-441de5522265]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.919 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96a9f8d0-91 in ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.924 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96a9f8d0-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.924 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2bbd815f-81ee-43d2-9991-01d03f268d4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:27 np0005539550 systemd-machined[216673]: New machine qemu-54-instance-0000006f.
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.927 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d1033527-86d0-4642-a353-748291ba3e3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.941 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[02449219-c21e-4803-a428-66e8a53d1efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:27 np0005539550 systemd[1]: Started Virtual Machine qemu-54-instance-0000006f.
Nov 29 03:19:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:27Z|00452|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 ovn-installed in OVS
Nov 29 03:19:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:27Z|00453|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 up in Southbound
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.965 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d16f02-0a92-4f21-bcfa-2a2493db3250]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:27 np0005539550 nova_compute[257631]: 2025-11-29 08:19:27.967 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:27.994 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e79148-0cd2-477f-8d6b-2104c302138d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 systemd-udevd[327530]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.001 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eebc75-68d5-44db-bc44-d9f255f058d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 NetworkManager[49039]: <info>  [1764404368.0031] manager: (tap96a9f8d0-90): new Veth device (/org/freedesktop/NetworkManager/Devices/202)
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.032 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[09f72692-ba76-4bee-b171-7b52868e2ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.035 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[80a75f81-956d-41b1-b1b7-d6f8fc9cc206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 NetworkManager[49039]: <info>  [1764404368.0595] device (tap96a9f8d0-90): carrier: link connected
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.066 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[45220fc9-5f59-4ef9-80b9-bcd4c0f9f96b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.086 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06e6c507-f518-40d1-aa69-2778ad35396f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96a9f8d0-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:3f:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741572, 'reachable_time': 18731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327563, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.100 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[22ffc1b6-0284-478b-b35d-ec3c119adfe9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:3f3b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741572, 'tstamp': 741572}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327564, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.115 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0dbf834f-5999-4afd-a2c0-e2185ac87813]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96a9f8d0-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:3f:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741572, 'reachable_time': 18731, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327565, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
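The two privsep replies above are raw RTM_NEWLINK dumps (plus an RTM_NEWADDR for the link-local address) for the veth leg inside the ovnmeta- namespace; neutron gathers them via pyroute2. A minimal sketch of the same query, assuming pyroute2 is installed and the command runs as root on this host:

    from pyroute2 import NetNS

    # Query the veth leg inside the metadata namespace, as the privsep-wrapped
    # pyroute2 calls above do. Namespace name copied from the log.
    ns = NetNS("ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309")
    try:
        for link in ns.get_links():
            print(link.get_attr("IFLA_IFNAME"),
                  link.get_attr("IFLA_ADDRESS"),
                  link.get("state"))
    finally:
        ns.close()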
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.146 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[82d99770-7b9b-4b18-bf85-8db338d0c531]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.203 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9d90602f-1700-46b5-aadd-3153f7602a0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.205 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96a9f8d0-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.205 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.205 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96a9f8d0-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:28 np0005539550 NetworkManager[49039]: <info>  [1764404368.2075] manager: (tap96a9f8d0-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Nov 29 03:19:28 np0005539550 kernel: tap96a9f8d0-90: entered promiscuous mode
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.207 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.209 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.210 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96a9f8d0-90, col_values=(('external_ids', {'iface-id': '74c0e253-5186-4acd-84b9-fb779ff161ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.211 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:28Z|00454|binding|INFO|Releasing lport 74c0e253-5186-4acd-84b9-fb779ff161ee from this chassis (sb_readonly=0)
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.227 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.228 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.229 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[442a5def-dcbe-4cc8-bff7-6aeadf1dbd96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.230 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.pid.haproxy
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
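A config rendered like the one above can be syntax-checked standalone before it is launched; a small sketch using haproxy's -c check mode against the .conf path that the rootwrap command below passes with -f:

    import subprocess

    # haproxy -c only parses the configuration and reports errors; nothing starts.
    conf = "/var/lib/neutron/ovn-metadata-proxy/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.conf"
    check = subprocess.run(["haproxy", "-c", "-f", conf],
                           capture_output=True, text=True)
    print(check.stdout or check.stderr)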
Nov 29 03:19:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:28.230 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'env', 'PROCESS_TAG=haproxy-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96a9f8d0-94cb-4ef1-b5fc-814aeb66b309.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:19:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.368 257641 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Successfully created port: 44cf97f4-db65-445f-bb49-d4da69bd4b75 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:19:28 np0005539550 podman[327597]: 2025-11-29 08:19:28.585563334 +0000 UTC m=+0.049438459 container create 4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:28 np0005539550 systemd[1]: Started libpod-conmon-4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e.scope.
Nov 29 03:19:28 np0005539550 podman[327597]: 2025-11-29 08:19:28.556505695 +0000 UTC m=+0.020380840 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2938f6398261dc8ae5dd412bebd3f0e7ffc96999a7b7b3d82a7a8043a35f2dfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:28 np0005539550 podman[327597]: 2025-11-29 08:19:28.675632854 +0000 UTC m=+0.139507999 container init 4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:19:28 np0005539550 podman[327597]: 2025-11-29 08:19:28.684643644 +0000 UTC m=+0.148518769 container start 4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:19:28 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [NOTICE]   (327634) : New worker (327652) forked
Nov 29 03:19:28 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [NOTICE]   (327634) : Loading success.
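The proxy itself runs as a podman container named after the datapath UUID; once the "Loading success." line appears, its state can be confirmed with the stock podman CLI (container name copied from the create line above):

    import subprocess

    # Sketch: confirm the metadata-proxy container logged above is running.
    name = "neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309"
    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Status}}", name],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(f"{name}: {status}")   # expected: "running"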
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.763 257641 DEBUG nova.network.neutron [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updated VIF entry in instance network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.764 257641 DEBUG nova.network.neutron [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
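Stripped of the log prefix, the cached network_info above is plain JSON. A sketch that pulls the fixed IPs and MTU out of such an entry (the literal below is abbreviated to just the fields the sketch touches):

    # Extract addressing details from a network_info entry like the one
    # cached above (abbreviated to the fields used here).
    network_info = [{
        "id": "0bdc8d4b-e261-4398-8465-58392acd35a8",
        "address": "fa:16:3e:5a:07:26",
        "network": {
            "subnets": [{
                "cidr": "10.100.0.0/28",
                "ips": [{"address": "10.100.0.3", "type": "fixed"}],
            }],
            "meta": {"mtu": 1442},
        },
    }]

    for vif in network_info:
        mtu = vif["network"]["meta"]["mtu"]
        for subnet in vif["network"]["subnets"]:
            fixed = [ip["address"] for ip in subnet["ips"] if ip["type"] == "fixed"]
            print(f'{vif["id"]}: {fixed} in {subnet["cidr"]} (mtu {mtu})')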
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.799 257641 DEBUG oslo_concurrency.lockutils [req-992fae62-0cd1-4bf3-8b64-f60c34589a3a req-6ddbba2f-996e-4e3c-af16-441d5120d800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.824 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404368.8244941, cf3d3db9-f753-47a8-93d5-7f0491bb03fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.825 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.855 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.859 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404368.824667, cf3d3db9-f753-47a8-93d5-7f0491bb03fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.859 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.878 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.881 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.901 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
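The "pending task ... Skip" line is the guard in the power-state sync: while a task such as spawning is in flight, the hypervisor state is transient and the DB record is left alone. A condensed sketch of that decision (constants follow nova's power-state numbering, where 0 is NOSTATE and 3 is PAUSED, matching "DB power_state: 0, VM power_state: 3" above):

    # Sketch of the sync guard behind the "pending task ... Skip" line above.
    NOSTATE, RUNNING, PAUSED = 0, 1, 3   # nova power-state values

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # An operation (here: 'spawning') owns the instance; the
            # hypervisor state is transient, so do not rewrite the DB.
            print(f"pending task ({task_state}); skip")
            return db_power_state
        return vm_power_state  # no task in flight: trust the hypervisor

    sync_power_state(db_power_state=NOSTATE, vm_power_state=PAUSED,
                     task_state="spawning")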
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.940 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.941 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:19:28 np0005539550 nova_compute[257631]: 2025-11-29 08:19:28.942 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
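The resource audit shells out to ceph df --format=json and parses the result. A sketch of the same call and a summary of what it returns (same --id/--conf as the logged invocation; requires a reachable cluster; field names are those emitted by recent Ceph releases, an assumption worth verifying against the actual output):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)

    stats = df["stats"]
    print(f'{stats["total_avail_bytes"] / 2**30:.1f} GiB free of '
          f'{stats["total_bytes"] / 2**30:.1f} GiB')
    for pool in df["pools"]:
        print(f'pool {pool["name"]}: '
              f'{pool["stats"]["bytes_used"] / 2**20:.1f} MiB used')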
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.128 257641 DEBUG nova.compute.manager [req-412f20f4-2816-4dce-8ae7-617062fbfdb7 req-ee12d265-29b9-45ad-a36e-fb84caffff3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.130 257641 DEBUG oslo_concurrency.lockutils [req-412f20f4-2816-4dce-8ae7-617062fbfdb7 req-ee12d265-29b9-45ad-a36e-fb84caffff3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.131 257641 DEBUG oslo_concurrency.lockutils [req-412f20f4-2816-4dce-8ae7-617062fbfdb7 req-ee12d265-29b9-45ad-a36e-fb84caffff3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.131 257641 DEBUG oslo_concurrency.lockutils [req-412f20f4-2816-4dce-8ae7-617062fbfdb7 req-ee12d265-29b9-45ad-a36e-fb84caffff3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.132 257641 DEBUG nova.compute.manager [req-412f20f4-2816-4dce-8ae7-617062fbfdb7 req-ee12d265-29b9-45ad-a36e-fb84caffff3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Processing event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.133 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
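The prepare_for_instance_event / pop_instance_event exchange that just completed is a register-then-wait handshake: the event object is created before the VIF is plugged, so a neutron notification that races ahead of the waiter cannot be lost. A minimal sketch of that pattern with threading.Event (names illustrative):

    import threading

    _events: dict[str, threading.Event] = {}

    def prepare(name: str) -> threading.Event:
        # Register interest BEFORE starting the operation that triggers the
        # event, so a notification that arrives early is not lost.
        return _events.setdefault(name, threading.Event())

    def pop(name: str) -> None:
        ev = _events.pop(name, None)
        if ev:
            ev.set()   # external notification arrived

    waiter = prepare("network-vif-plugged-0bdc8d4b")
    pop("network-vif-plugged-0bdc8d4b")            # neutron's event lands
    print("completed:", waiter.wait(timeout=300))  # plug path observes it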
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.139 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404369.138575, cf3d3db9-f753-47a8-93d5-7f0491bb03fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.139 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.144 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.149 257641 INFO nova.virt.libvirt.driver [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance spawned successfully.#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.150 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.174 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.187 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.188 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.188 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.189 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.190 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.190 257641 DEBUG nova.virt.libvirt.driver [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.195 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.232 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
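The lifecycle handler above compares the database power_state (0) against what the hypervisor reports (1) and then skips reconciliation because a task (spawning) is still pending. A small sketch of that guard; the numeric codes follow the conventional nova.compute.power_state values:

    # Sketch of the power-state sync decision logged above.
    # Codes as conventionally defined in nova/compute/power_state.py.
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    def should_sync(db_state, vm_state, task_state):
        # Mirrors the "pending task ... Skip" guard: never reconcile mid-task.
        if task_state is not None:
            return False
        return db_state != vm_state

    print(should_sync(NOSTATE, RUNNING, 'spawning'))  # False -> skip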
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.292 257641 INFO nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Took 7.82 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.293 257641 DEBUG nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1600139111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.379 257641 INFO nova.compute.manager [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Took 8.77 seconds to build instance.#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.398 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
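The resource-tracker audit that started at 08:19:28.942 shells out to the ceph CLI (here returning in 0.457s) and parses the JSON to size the RBD-backed disk pool. A hedged sketch of the same call using oslo.concurrency's processutils — the client id and conf path are copied from the log, and the ceph CLI is assumed to be on the host:

    # Sketch: run `ceph df` the way the audit above does and read pool totals.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # `ceph df --format=json` reports cluster-wide totals under "stats".
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])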
Nov 29 03:19:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 244 MiB data, 924 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.4 MiB/s wr, 158 op/s
Nov 29 03:19:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:29.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
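The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, recurring roughly every second in the beast access log, are load-balancer-style health probes against the radosgw frontend. An equivalent manual probe — the port is an assumption, not taken from the log:

    # Sketch: reproduce the anonymous health probe seen in the beast log.
    # radosgw answers HEAD / with 200 for anonymous users; port is assumed.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200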
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.509 257641 DEBUG oslo_concurrency.lockutils [None req-476c3b38-293e-4843-b1bd-eb7066571577 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.572 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.573 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.747 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.748 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4324MB free_disk=20.921844482421875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.748 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.748 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.832 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance cf3d3db9-f753-47a8-93d5-7f0491bb03fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.833 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.833 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.834 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:19:29 np0005539550 nova_compute[257631]: 2025-11-29 08:19:29.900 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:30.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.269 257641 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Successfully updated port: 44cf97f4-db65-445f-bb49-d4da69bd4b75 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.287 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "refresh_cache-bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.287 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquired lock "refresh_cache-bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.288 257641 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3315573152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.373 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.379 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.445 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
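The inventory payload above fixes what placement will let the scheduler consume on this node: usable capacity per resource class is (total - reserved) * allocation_ratio. A worked sketch with the exact numbers from the log:

    # Sketch: schedulable capacity implied by the inventory logged above,
    # using placement's capacity formula (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1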
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.465 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:19:30 np0005539550 nova_compute[257631]: 2025-11-29 08:19:30.465 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.248 257641 DEBUG nova.compute.manager [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.250 257641 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.251 257641 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.251 257641 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.252 257641 DEBUG nova.compute.manager [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.252 257641 WARNING nova.compute.manager [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.253 257641 DEBUG nova.compute.manager [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received event network-changed-44cf97f4-db65-445f-bb49-d4da69bd4b75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.253 257641 DEBUG nova.compute.manager [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Refreshing instance network info cache due to event network-changed-44cf97f4-db65-445f-bb49-d4da69bd4b75. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.254 257641 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.255 257641 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:19:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 354 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 8.9 MiB/s wr, 265 op/s
Nov 29 03:19:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:31.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.466 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.466 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.467 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:31 np0005539550 nova_compute[257631]: 2025-11-29 08:19:31.871 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:32 np0005539550 nova_compute[257631]: 2025-11-29 08:19:32.179 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:32.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:19:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:19:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:32 np0005539550 nova_compute[257631]: 2025-11-29 08:19:32.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 354 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.0 MiB/s wr, 206 op/s
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4868aadf-12c4-4a83-a015-44fdd2841e2c does not exist
Nov 29 03:19:33 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 33cd7111-59d8-4f49-81aa-b982558a5fbe does not exist
Nov 29 03:19:33 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev be1f31b7-4af2-4c1e-b5e2-48b3a621d57f does not exist
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:19:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:33.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:33 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:19:33 np0005539550 nova_compute[257631]: 2025-11-29 08:19:33.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:33 np0005539550 nova_compute[257631]: 2025-11-29 08:19:33.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
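The _reclaim_queued_deletes periodic task is a no-op here because reclaim_instance_interval is at its default of 0, so soft-deleted instances are never reclaimed by this loop. A sketch of the guard being logged:

    # Sketch of the guard above: with reclaim_instance_interval <= 0
    # the periodic task returns immediately (0 is the nova default).
    def reclaim_queued_deletes(conf_interval, soft_deleted_instances):
        if conf_interval <= 0:
            return []  # "skipping..."
        return [i for i in soft_deleted_instances if i['age'] >= conf_interval]

    print(reclaim_queued_deletes(0, [{'age': 3600}]))  # []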
Nov 29 03:19:34 np0005539550 podman[327984]: 2025-11-29 08:19:34.035454344 +0000 UTC m=+0.040100841 container create 3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bassi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:19:34 np0005539550 systemd[1]: Started libpod-conmon-3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba.scope.
Nov 29 03:19:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:34 np0005539550 podman[327984]: 2025-11-29 08:19:34.016303547 +0000 UTC m=+0.020950064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:34 np0005539550 podman[327984]: 2025-11-29 08:19:34.115610483 +0000 UTC m=+0.120256980 container init 3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:19:34 np0005539550 podman[327984]: 2025-11-29 08:19:34.12220222 +0000 UTC m=+0.126848717 container start 3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:19:34 np0005539550 podman[327984]: 2025-11-29 08:19:34.125762881 +0000 UTC m=+0.130409378 container attach 3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:19:34 np0005539550 hopeful_bassi[328002]: 167 167
Nov 29 03:19:34 np0005539550 systemd[1]: libpod-3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba.scope: Deactivated successfully.
Nov 29 03:19:34 np0005539550 podman[327984]: 2025-11-29 08:19:34.129934707 +0000 UTC m=+0.134581204 container died 3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bassi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f06cfbc84120f95a4fd3ce730bb27350ae1470efcd2a2fcfee0fff76a687b58a-merged.mount: Deactivated successfully.
Nov 29 03:19:34 np0005539550 podman[327984]: 2025-11-29 08:19:34.1693621 +0000 UTC m=+0.174008597 container remove 3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:19:34 np0005539550 systemd[1]: libpod-conmon-3591e7dc90db9c6a264c204bd018b3e3cf83c7e273b580a010f2967d947b07ba.scope: Deactivated successfully.
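The create/init/start/attach/died/remove burst above is a sub-second, auto-named (`hopeful_bassi`) container that printed "167 167" — consistent with cephadm spinning up the ceph image to probe the ceph user's uid/gid. A hedged equivalent one-off run; the stat entrypoint and path are assumptions, not taken from the log:

    # Sketch: a short-lived probe container like the one traced above.
    # Entrypoint/path are illustrative; only the image digest is from the log.
    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat',
         image, '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "167 167"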
Nov 29 03:19:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:34.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:34 np0005539550 podman[328025]: 2025-11-29 08:19:34.365834856 +0000 UTC m=+0.049610962 container create fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:19:34 np0005539550 systemd[1]: Started libpod-conmon-fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731.scope.
Nov 29 03:19:34 np0005539550 podman[328025]: 2025-11-29 08:19:34.344480493 +0000 UTC m=+0.028256629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe3e2139663b898a86bbad4885c666894a5e2448460096d07b21ca40d451a08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe3e2139663b898a86bbad4885c666894a5e2448460096d07b21ca40d451a08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe3e2139663b898a86bbad4885c666894a5e2448460096d07b21ca40d451a08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe3e2139663b898a86bbad4885c666894a5e2448460096d07b21ca40d451a08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:34 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe3e2139663b898a86bbad4885c666894a5e2448460096d07b21ca40d451a08/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:34 np0005539550 podman[328025]: 2025-11-29 08:19:34.475473745 +0000 UTC m=+0.159249851 container init fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:19:34 np0005539550 podman[328025]: 2025-11-29 08:19:34.482929614 +0000 UTC m=+0.166705700 container start fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:19:34 np0005539550 podman[328025]: 2025-11-29 08:19:34.487177202 +0000 UTC m=+0.170953308 container attach fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bouman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.176 257641 DEBUG nova.network.neutron [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Updating instance_info_cache with network_info: [{"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.213 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Releasing lock "refresh_cache-bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.214 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Instance network_info: |[{"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.215 257641 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.215 257641 DEBUG nova.network.neutron [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Refreshing network info cache for port 44cf97f4-db65-445f-bb49-d4da69bd4b75 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.220 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Start _get_guest_xml network_info=[{"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
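The network_info blob threaded through the cache update and _get_guest_xml above is plain JSON; everything the guest wiring needs (tap device, fixed IP, MTU) can be pulled out of it directly. A sketch over a trimmed copy of the structure from the log — only the variable names are new:

    # Sketch: extract guest addressing from the network_info JSON above.
    import json

    network_info = json.loads("""[{"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75",
      "address": "fa:16:3e:ab:b1:38",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.4", "type": "fixed"}]}],
        "meta": {"mtu": 1442}},
      "devname": "tap44cf97f4-db"}]""")

    vif = network_info[0]
    subnet = vif['network']['subnets'][0]
    print(vif['devname'], subnet['ips'][0]['address'],
          vif['network']['meta']['mtu'])
    # tap44cf97f4-db 10.100.0.4 1442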
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.227 257641 WARNING nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.232 257641 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.234 257641 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.237 257641 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.238 257641 DEBUG nova.virt.libvirt.host [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.241 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.241 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.242 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.242 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.243 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.243 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.243 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.244 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.244 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.245 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.245 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.245 257641 DEBUG nova.virt.hardware [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
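The topology block above shows nova enumerating every sockets:cores:threads split of the vCPU count that fits the (effectively unlimited) 65536 limits; with one vCPU the only product is 1:1:1. A simplified sketch of that enumeration (the NUMA and image-property filtering nova also applies is omitted):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals vcpus
        # and which respect the per-dimension limits.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and s <= max_sockets and c <= max_cores and t <= max_threads:
                yield s, c, t

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
    print(list(possible_topologies(4)))  # (1, 1, 4), (1, 2, 2), (2, 2, 1), ...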
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.250 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:35 np0005539550 nostalgic_bouman[328041]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:19:35 np0005539550 nostalgic_bouman[328041]: --> relative data size: 1.0
Nov 29 03:19:35 np0005539550 nostalgic_bouman[328041]: --> All data devices are unavailable
Nov 29 03:19:35 np0005539550 systemd[1]: libpod-fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731.scope: Deactivated successfully.
Nov 29 03:19:35 np0005539550 podman[328025]: 2025-11-29 08:19:35.339995661 +0000 UTC m=+1.023771777 container died fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bouman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:19:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3fe3e2139663b898a86bbad4885c666894a5e2448460096d07b21ca40d451a08-merged.mount: Deactivated successfully.
Nov 29 03:19:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 355 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.1 MiB/s wr, 287 op/s
Nov 29 03:19:35 np0005539550 podman[328025]: 2025-11-29 08:19:35.427549678 +0000 UTC m=+1.111325774 container remove fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_bouman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:19:35 np0005539550 systemd[1]: libpod-conmon-fba6ec247b58ec72497f91a22236e0d359938bf6b42483e266ea2a16fdd64731.scope: Deactivated successfully.
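The create → init → start → attach → died → remove sequence that just completed (and repeats below for nice_chatterjee and amazing_mcclintock) is the footprint of a short-lived `podman run --rm` container: cephadm launches the ceph image, runs one command inside it, and podman deletes the container the moment it exits. A sketch of the same pattern (image digest taken from the log; the ceph-volume arguments are illustrative, not cephadm's exact invocation):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm is why the journal shows "container died" immediately followed
    # by "container remove" with no separate cleanup step.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True)
    print(result.stdout)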
Nov 29 03:19:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:35.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780027572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.764 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
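`ceph mon dump --format=json` is how nova's rbd backend discovers the monitor endpoints that later appear as the three <host> entries in the guest disk XML below. A sketch of issuing the same command and extracting the v1 (port 6789) addresses, with JSON field names as in recent Ceph releases (an assumption worth verifying against your cluster's output):

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for mon in json.loads(out)["mons"]:
        for addr in mon["public_addrs"]["addrvec"]:
            if addr["type"] == "v1":
                print(mon["name"], addr["addr"])  # e.g. 192.168.122.100:6789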
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.792 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:35 np0005539550 nova_compute[257631]: 2025-11-29 08:19:35.796 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:36 np0005539550 podman[328268]: 2025-11-29 08:19:36.062875766 +0000 UTC m=+0.037417283 container create 3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chatterjee, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:19:36 np0005539550 systemd[1]: Started libpod-conmon-3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674.scope.
Nov 29 03:19:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:36 np0005539550 podman[328268]: 2025-11-29 08:19:36.048620123 +0000 UTC m=+0.023161660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:36 np0005539550 podman[328268]: 2025-11-29 08:19:36.155720357 +0000 UTC m=+0.130261924 container init 3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:19:36 np0005539550 podman[328268]: 2025-11-29 08:19:36.163668569 +0000 UTC m=+0.138210096 container start 3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:19:36 np0005539550 podman[328268]: 2025-11-29 08:19:36.168836041 +0000 UTC m=+0.143377578 container attach 3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:19:36 np0005539550 nice_chatterjee[328285]: 167 167
Nov 29 03:19:36 np0005539550 systemd[1]: libpod-3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674.scope: Deactivated successfully.
Nov 29 03:19:36 np0005539550 podman[328268]: 2025-11-29 08:19:36.170818421 +0000 UTC m=+0.145359958 container died 3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chatterjee, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:19:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-037338d67bc72c52a4fec6bf5048497873196399fcfbe95d533ecc3d1433c0df-merged.mount: Deactivated successfully.
Nov 29 03:19:36 np0005539550 podman[328268]: 2025-11-29 08:19:36.212824189 +0000 UTC m=+0.187365706 container remove 3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:19:36 np0005539550 systemd[1]: libpod-conmon-3c1ba0552ed4ce47d19833b11a804559f96be829b0722b8ddba8dd4bc7db1674.scope: Deactivated successfully.
Nov 29 03:19:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:19:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1927645770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:19:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.284 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.287 257641 DEBUG nova.virt.libvirt.vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-2',id=114,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:25Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=bc3449b1-54d4-4e2e-9390-bde3cc1d61f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.288 257641 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.289 257641 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:b1:38,bridge_name='br-int',has_traffic_filtering=True,id=44cf97f4-db65-445f-bb49-d4da69bd4b75,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44cf97f4-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.290 257641 DEBUG nova.objects.instance [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.305 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <uuid>bc3449b1-54d4-4e2e-9390-bde3cc1d61f6</uuid>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <name>instance-00000072</name>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1670228511-2</nova:name>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:19:35</nova:creationTime>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:user uuid="14d293467f8e498eaa87b6b8976b34d9">tempest-ListServersNegativeTestJSON-1508942438-project-member</nova:user>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:project uuid="27fd30263a7f4717b84946720a5770b5">tempest-ListServersNegativeTestJSON-1508942438</nova:project>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <nova:port uuid="44cf97f4-db65-445f-bb49-d4da69bd4b75">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <entry name="serial">bc3449b1-54d4-4e2e-9390-bde3cc1d61f6</entry>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <entry name="uuid">bc3449b1-54d4-4e2e-9390-bde3cc1d61f6</entry>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk.config">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:ab:b1:38"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <target dev="tap44cf97f4-db"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/console.log" append="off"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:19:36 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:19:36 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:19:36 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:19:36 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
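With _get_guest_xml done, the driver hands this domain document to libvirt to define and boot. Nova goes through its own Guest wrapper, but the underlying step is equivalent to this standalone libvirt-python sketch (assuming the XML above has been saved to instance-00000072.xml):

    import libvirt  # from the libvirt-python package

    with open("instance-00000072.xml") as f:
        domain_xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(domain_xml)  # persist the domain definition
        dom.create()                      # boot it, like 'virsh start'
        print(dom.name(), dom.state())
    finally:
        conn.close()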
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.307 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Preparing to wait for external event network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.307 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.308 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.308 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.309 257641 DEBUG nova.virt.libvirt.vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-2',id=114,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:19:25Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=bc3449b1-54d4-4e2e-9390-bde3cc1d61f6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.309 257641 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.310 257641 DEBUG nova.network.os_vif_util [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:b1:38,bridge_name='br-int',has_traffic_filtering=True,id=44cf97f4-db65-445f-bb49-d4da69bd4b75,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44cf97f4-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.310 257641 DEBUG os_vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:b1:38,bridge_name='br-int',has_traffic_filtering=True,id=44cf97f4-db65-445f-bb49-d4da69bd4b75,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44cf97f4-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.311 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.311 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.311 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.315 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.315 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44cf97f4-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.316 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44cf97f4-db, col_values=(('external_ids', {'iface-id': '44cf97f4-db65-445f-bb49-d4da69bd4b75', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:b1:38', 'vm-uuid': 'bc3449b1-54d4-4e2e-9390-bde3cc1d61f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:19:36 np0005539550 NetworkManager[49039]: <info>  [1764404376.3189] manager: (tap44cf97f4-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.317 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.320 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.324 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.325 257641 INFO os_vif [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:b1:38,bridge_name='br-int',has_traffic_filtering=True,id=44cf97f4-db65-445f-bb49-d4da69bd4b75,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44cf97f4-db')
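The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand on the Interface row) are what os-vif uses to plug the port; together they are equivalent to a single ovs-vsctl call, after which OVN matches the iface-id and binds the logical port. A sketch of the same plug via the CLI, with names and IDs copied from the log:

    import subprocess

    port, bridge = "tap44cf97f4-db", "br-int"
    subprocess.check_call([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", bridge, port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=44cf97f4-db65-445f-bb49-d4da69bd4b75",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:ab:b1:38",
        "external_ids:vm-uuid=bc3449b1-54d4-4e2e-9390-bde3cc1d61f6"])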
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.390 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.391 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.391 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] No VIF found with MAC fa:16:3e:ab:b1:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.392 257641 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Using config drive
Nov 29 03:19:36 np0005539550 podman[328315]: 2025-11-29 08:19:36.410498477 +0000 UTC m=+0.041962408 container create f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:19:36 np0005539550 nova_compute[257631]: 2025-11-29 08:19:36.422 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:19:36 np0005539550 systemd[1]: Started libpod-conmon-f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba.scope.
Nov 29 03:19:36 np0005539550 podman[328315]: 2025-11-29 08:19:36.394843769 +0000 UTC m=+0.026307720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a003a75beac063ffc66edb41422d3b95e5469a97cbb31700dfba884664198e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a003a75beac063ffc66edb41422d3b95e5469a97cbb31700dfba884664198e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a003a75beac063ffc66edb41422d3b95e5469a97cbb31700dfba884664198e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a003a75beac063ffc66edb41422d3b95e5469a97cbb31700dfba884664198e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:36 np0005539550 podman[328315]: 2025-11-29 08:19:36.52583996 +0000 UTC m=+0.157303891 container init f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcclintock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:19:36 np0005539550 podman[328315]: 2025-11-29 08:19:36.53682177 +0000 UTC m=+0.168285701 container start f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcclintock, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:19:36 np0005539550 podman[328315]: 2025-11-29 08:19:36.540869223 +0000 UTC m=+0.172333154 container attach f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:19:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.178 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.317 257641 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Creating config drive at /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/disk.config
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.326 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rfb_hc_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]: {
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:    "0": [
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:        {
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "devices": [
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "/dev/loop3"
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            ],
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "lv_name": "ceph_lv0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "lv_size": "7511998464",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "name": "ceph_lv0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "tags": {
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.cluster_name": "ceph",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.crush_device_class": "",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.encrypted": "0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.osd_id": "0",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.type": "block",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:                "ceph.vdo": "0"
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            },
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "type": "block",
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:            "vg_name": "ceph_vg0"
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:        }
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]:    ]
Nov 29 03:19:37 np0005539550 amazing_mcclintock[328350]: }
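The JSON block printed by the amazing_mcclintock container looks like ceph-volume lvm list --format json output: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form under tags. A sketch of extracting the interesting fields from such a dump; the podman invocation and mounts are assumptions modeled on how cephadm runs these one-shot containers:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "run", "--rm", "--privileged",
         "-v", "/dev:/dev", "-v", "/run/lvm:/run/lvm",
         "quay.io/ceph/ceph:reef",
         "ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])
    # e.g. 0 /dev/ceph_vg0/ceph_lv0 5dd67027-4f06-4800-93bd-47ed1a74c5e6 ['/dev/loop3']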
Nov 29 03:19:37 np0005539550 systemd[1]: libpod-f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba.scope: Deactivated successfully.
Nov 29 03:19:37 np0005539550 podman[328315]: 2025-11-29 08:19:37.394089921 +0000 UTC m=+1.025553852 container died f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcclintock, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:19:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 355 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 6.3 MiB/s wr, 328 op/s
Nov 29 03:19:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:37.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
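This radosgw triplet (start, done, beast access line) has the shape of a load-balancer health probe: an anonymous HEAD / answered 200 with an empty body in well under a millisecond. Roughly the same check with the standard library; the host and port here are assumptions, since the log only shows the probing client addresses:

    import http.client

    conn = http.client.HTTPConnection("localhost", 8080, timeout=5)  # rgw endpoint assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 means the gateway answered
    conn.close()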
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.461 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rfb_hc_" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.494 257641 DEBUG nova.storage.rbd_utils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] rbd image bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.517 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/disk.config bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:37 np0005539550 NetworkManager[49039]: <info>  [1764404377.5189] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Nov 29 03:19:37 np0005539550 NetworkManager[49039]: <info>  [1764404377.5201] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.549 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a003a75beac063ffc66edb41422d3b95e5469a97cbb31700dfba884664198e7f-merged.mount: Deactivated successfully.
Nov 29 03:19:37 np0005539550 podman[328315]: 2025-11-29 08:19:37.675173399 +0000 UTC m=+1.306637330 container remove f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:19:37 np0005539550 systemd[1]: libpod-conmon-f2305b5d8430bfd09b6d6f6a9678d191aa7011a1c9e52838473d0ba051091eba.scope: Deactivated successfully.
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.706 257641 DEBUG oslo_concurrency.processutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/disk.config bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.707 257641 INFO nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Deleting local config drive /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6/disk.config because it was imported into RBD.#033[00m
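These entries are nova's RBD-backed config-drive path in order: probe the vms pool for <uuid>_disk.config (the "does not exist" DEBUG line), rbd import the freshly built ISO, then remove the local copy once it lives in RBD. A standalone sketch of the same sequence, with pool, client id, and conf path copied from the log:

    import os
    import subprocess

    def publish_config_drive(local_iso, image_name):
        base = ["rbd", "--pool", "vms", "--id", "openstack",
                "--conf", "/etc/ceph/ceph.conf"]
        # "rbd info" exits non-zero when the image is absent, so it doubles
        # as the existence probe.
        if subprocess.run(base + ["info", image_name],
                          capture_output=True).returncode != 0:
            subprocess.run(base + ["import", "--image-format=2",
                                   local_iso, image_name], check=True)
        # Mirrors "Deleting local config drive ... imported into RBD" above.
        os.remove(local_iso)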
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.737 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:37Z|00455|binding|INFO|Releasing lport 74c0e253-5186-4acd-84b9-fb779ff161ee from this chassis (sb_readonly=0)
Nov 29 03:19:37 np0005539550 podman[328413]: 2025-11-29 08:19:37.762256944 +0000 UTC m=+0.335518144 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539550 kernel: tap44cf97f4-db: entered promiscuous mode
Nov 29 03:19:37 np0005539550 NetworkManager[49039]: <info>  [1764404377.7693] manager: (tap44cf97f4-db): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Nov 29 03:19:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:37Z|00456|binding|INFO|Claiming lport 44cf97f4-db65-445f-bb49-d4da69bd4b75 for this chassis.
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.771 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:37Z|00457|binding|INFO|44cf97f4-db65-445f-bb49-d4da69bd4b75: Claiming fa:16:3e:ab:b1:38 10.100.0.4
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.788 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:b1:38 10.100.0.4'], port_security=['fa:16:3e:ab:b1:38 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bc3449b1-54d4-4e2e-9390-bde3cc1d61f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26782821-34df-4010-9d17-f8854e221b4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27fd30263a7f4717b84946720a5770b5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '45467816-71fb-46ea-84fa-c25f49ea2a6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c8dbca6-8e74-4fae-981d-344fddfca3c7, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=44cf97f4-db65-445f-bb49-d4da69bd4b75) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:37Z|00458|binding|INFO|Setting lport 44cf97f4-db65-445f-bb49-d4da69bd4b75 ovn-installed in OVS
Nov 29 03:19:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:37Z|00459|binding|INFO|Setting lport 44cf97f4-db65-445f-bb49-d4da69bd4b75 up in Southbound
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.789 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 44cf97f4-db65-445f-bb49-d4da69bd4b75 in datapath 26782821-34df-4010-9d17-f8854e221b4e bound to our chassis#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.791 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 26782821-34df-4010-9d17-f8854e221b4e#033[00m
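The ovn-controller Claiming/Setting lines are this chassis taking ownership of logical port 44cf97f4-... in the OVN Southbound DB (chassis column set, then up=true), which is exactly the Port_Binding update the metadata agent matched above before provisioning the network. One way to inspect the same row from the host, sketched via ovn-sbctl (the command must be pointed at the southbound DB, e.g. with --db; the default varies by deployment):

    import subprocess

    def lport_binding(lport):
        # An empty "chassis" column in the output means no chassis has
        # claimed the port yet.
        return subprocess.run(
            ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
             f"logical_port={lport}"],
            check=True, capture_output=True, text=True).stdout

    print(lport_binding("44cf97f4-db65-445f-bb49-d4da69bd4b75"))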
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.790 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539550 nova_compute[257631]: 2025-11-29 08:19:37.796 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.801 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4e53de53-f7a6-49b1-b212-7fb35efdf808]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.802 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap26782821-31 in ovnmeta-26782821-34df-4010-9d17-f8854e221b4e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.804 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap26782821-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.804 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d02526b9-ab82-4a33-894b-bf405fdd4a55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.805 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7c47bf97-1090-4660-b578-62258c598adb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 systemd-udevd[328526]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:19:37 np0005539550 systemd-machined[216673]: New machine qemu-55-instance-00000072.
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.816 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff3eb1e-9f34-408c-8f1e-5465d325c73a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 NetworkManager[49039]: <info>  [1764404377.8214] device (tap44cf97f4-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:19:37 np0005539550 systemd[1]: Started Virtual Machine qemu-55-instance-00000072.
Nov 29 03:19:37 np0005539550 NetworkManager[49039]: <info>  [1764404377.8238] device (tap44cf97f4-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.843 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[891dfdb1-957f-45f0-a11d-f09bc5209382]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.877 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[21ee44de-f5a0-48cf-96ef-3f103f88fdcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 NetworkManager[49039]: <info>  [1764404377.8866] manager: (tap26782821-30): new Veth device (/org/freedesktop/NetworkManager/Devices/208)
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.886 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b068daad-9c3f-4eaa-ad82-a7bde52a56ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.920 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e41b5d0f-9022-4b5f-a44b-a496c6b53e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.923 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[464afa4c-6018-45fe-abc7-3f264e1c80c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 NetworkManager[49039]: <info>  [1764404377.9440] device (tap26782821-30): carrier: link connected
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.948 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c7613846-f411-45ab-ab08-8289d70be800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.973 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4edf716d-d103-4ea3-93f6-44bfd84e49c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26782821-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:13:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742560, 'reachable_time': 19043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328611, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:37.987 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5a930516-d888-4f7b-a975-a2d4a30f930c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:13dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 742560, 'tstamp': 742560}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328630, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
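Provisioning metadata for a network means building the ovnmeta-<network> namespace and a veth pair: the inner end (tap26782821-31, MAC fa:16:3e:8c:13:dd in the RTM_NEWLINK dump above) lives in the namespace, and the outer end (tap26782821-30) gets plugged into OVS next. The wiring reduces to plain iproute2, sketched here with the names from the log (requires root; the agent itself does this through pyroute2 behind privsep, as the privileged ip_lib lines show):

    import subprocess

    NS = "ovnmeta-26782821-34df-4010-9d17-f8854e221b4e"
    OUTER, INNER = "tap26782821-30", "tap26782821-31"

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("ip", "netns", "add", NS)
    sh("ip", "link", "add", OUTER, "type", "veth", "peer", "name", INNER)
    sh("ip", "link", "set", INNER, "netns", NS)   # inner end into the namespace
    sh("ip", "-n", NS, "link", "set", INNER, "up")
    sh("ip", "link", "set", OUTER, "up")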
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.003 257641 DEBUG nova.compute.manager [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.003 257641 DEBUG nova.compute.manager [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing instance network info cache due to event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.004 257641 DEBUG oslo_concurrency.lockutils [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.004 257641 DEBUG oslo_concurrency.lockutils [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.004 257641 DEBUG nova.network.neutron [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.004 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[411ba13d-4ed1-4bda-b0e9-dd72199133ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap26782821-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:13:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742560, 'reachable_time': 19043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328633, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.048 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[044b714e-b7ca-4b68-870b-2a660fdb5280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.113 257641 DEBUG nova.compute.manager [req-2982b4f9-f7e7-4469-a81c-90a84e2cde04 req-942c877b-02c8-4512-bbd5-6ff2b8853209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received event network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.114 257641 DEBUG oslo_concurrency.lockutils [req-2982b4f9-f7e7-4469-a81c-90a84e2cde04 req-942c877b-02c8-4512-bbd5-6ff2b8853209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.115 257641 DEBUG oslo_concurrency.lockutils [req-2982b4f9-f7e7-4469-a81c-90a84e2cde04 req-942c877b-02c8-4512-bbd5-6ff2b8853209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.115 257641 DEBUG oslo_concurrency.lockutils [req-2982b4f9-f7e7-4469-a81c-90a84e2cde04 req-942c877b-02c8-4512-bbd5-6ff2b8853209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.115 257641 DEBUG nova.compute.manager [req-2982b4f9-f7e7-4469-a81c-90a84e2cde04 req-942c877b-02c8-4512-bbd5-6ff2b8853209 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Processing event network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
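The Acquiring/acquired/released triplet around "bc3449b1-...-events" is oslo.concurrency's lock logging: pop_instance_event takes a per-instance lock so externally delivered events (here network-vif-plugged) are dequeued one at a time. The pattern itself, sketched with the public lockutils API (names are illustrative):

    from oslo_concurrency import lockutils

    def pop_event(instance_uuid, events):
        # Emits the same Acquiring/acquired/released DEBUG lines as above
        # when oslo.concurrency debug logging is enabled.
        with lockutils.lock(f"{instance_uuid}-events"):
            return events.pop() if events else None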
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.117 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[601b4d48-acbe-40d2-affd-8ce94c7c3d1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.119 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26782821-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.119 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.119 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26782821-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:38 np0005539550 NetworkManager[49039]: <info>  [1764404378.1655] manager: (tap26782821-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Nov 29 03:19:38 np0005539550 kernel: tap26782821-30: entered promiscuous mode
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.164 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.169 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap26782821-30, col_values=(('external_ids', {'iface-id': 'a221b966-8231-4ead-a0ed-2978f32e8746'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
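Taken together, the DelPortCommand/AddPortCommand/DbSetCommand transactions re-plug the veth's outer end: drop tap26782821-30 from br-ex if present, add it to br-int, and set external_ids:iface-id so ovn-controller can bind it to lport a221b966-.... The same sequence through ovsdbapp's Open_vSwitch API, as a sketch following the library's documented setup (socket path assumed):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap26782821-30", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap26782821-30", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap26782821-30",
            ("external_ids", {"iface-id": "a221b966-8231-4ead-a0ed-2978f32e8746"})))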
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.170 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:38Z|00460|binding|INFO|Releasing lport a221b966-8231-4ead-a0ed-2978f32e8746 from this chassis (sb_readonly=0)
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.171 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.174 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.174 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[89041b46-5053-47e3-b2fb-cbe27ea209ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.175 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-26782821-34df-4010-9d17-f8854e221b4e
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/26782821-34df-4010-9d17-f8854e221b4e.pid.haproxy
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 26782821-34df-4010-9d17-f8854e221b4e
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:19:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:38.177 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'env', 'PROCESS_TAG=haproxy-26782821-34df-4010-9d17-f8854e221b4e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/26782821-34df-4010-9d17-f8854e221b4e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
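With the config rendered, the agent launches haproxy inside the namespace via rootwrap, so the proxy binds 169.254.169.254:80 there, tags each request with X-OVN-Network-ID, and forwards it to the metadata socket. Stripped of rootwrap and the PROCESS_TAG environment, the launch reduces to the following (paths from the log; requires root):

    import subprocess

    NS = "ovnmeta-26782821-34df-4010-9d17-f8854e221b4e"
    CFG = ("/var/lib/neutron/ovn-metadata-proxy/"
           "26782821-34df-4010-9d17-f8854e221b4e.conf")

    # haproxy self-daemonizes ("daemon" in the global section) and writes the
    # pidfile named in the config, which the agent then polls for.
    subprocess.run(["ip", "netns", "exec", NS, "haproxy", "-f", CFG], check=True)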
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.188 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:38.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:38 np0005539550 podman[328682]: 2025-11-29 08:19:38.337938845 +0000 UTC m=+0.045804976 container create 9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:19:38 np0005539550 systemd[1]: Started libpod-conmon-9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a.scope.
Nov 29 03:19:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:38 np0005539550 podman[328682]: 2025-11-29 08:19:38.314911099 +0000 UTC m=+0.022777280 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:38 np0005539550 podman[328682]: 2025-11-29 08:19:38.412118142 +0000 UTC m=+0.119984293 container init 9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:19:38 np0005539550 podman[328682]: 2025-11-29 08:19:38.418698689 +0000 UTC m=+0.126564840 container start 9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:19:38 np0005539550 podman[328682]: 2025-11-29 08:19:38.4222649 +0000 UTC m=+0.130131051 container attach 9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:19:38 np0005539550 vigorous_williams[328699]: 167 167
Nov 29 03:19:38 np0005539550 systemd[1]: libpod-9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a.scope: Deactivated successfully.
Nov 29 03:19:38 np0005539550 podman[328682]: 2025-11-29 08:19:38.425128793 +0000 UTC m=+0.132994924 container died 9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.439 257641 DEBUG nova.network.neutron [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Updated VIF entry in instance network info cache for port 44cf97f4-db65-445f-bb49-d4da69bd4b75. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.441 257641 DEBUG nova.network.neutron [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Updating instance_info_cache with network_info: [{"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
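Once the log prefix is stripped, the instance_info_cache payload above is ordinary JSON: a list of VIFs, each carrying network.subnets[].ips[]. A sketch of pulling the fixed addresses out of such a blob, using a trimmed literal modeled on the entry above:

    import json

    blob = '''[{"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75",
                "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                         "ips": [{"address": "10.100.0.4",
                                                  "type": "fixed"}]}]}}]'''

    for vif in json.loads(blob):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"], ip["type"])
    # 44cf97f4-db65-445f-bb49-d4da69bd4b75 10.100.0.4 fixed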
Nov 29 03:19:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9a4559ed8588c621a333644ad77f0070f07d0cc1725d15aa573ccba5adf79900-merged.mount: Deactivated successfully.
Nov 29 03:19:38 np0005539550 podman[328682]: 2025-11-29 08:19:38.468619059 +0000 UTC m=+0.176485200 container remove 9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.470 257641 DEBUG oslo_concurrency.lockutils [req-866d990d-c043-4378-939d-0f267cd9edee req-f4f36a95-f68b-4c7b-a1cf-b18f07a77fe7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:19:38 np0005539550 systemd[1]: libpod-conmon-9c977aaa24bd3328422ac9769b0824ddd6b5742dc6f6b260864eeccef3591e7a.scope: Deactivated successfully.
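The whole vigorous_williams lifecycle (create, init, start, attach, died, remove) fits in roughly 130 ms and prints "167 167", which matches cephadm's habit of answering one question per throwaway container; this one looks like the uid/gid probe (167 is the ceph user and group on these images). An assumption-laden sketch of the same one-shot pattern:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm produces exactly the died/remove pair logged above once the
    # command exits; stat prints the owner uid/gid of the image's ceph dir.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())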
Nov 29 03:19:38 np0005539550 podman[328739]: 2025-11-29 08:19:38.585107451 +0000 UTC m=+0.060392487 container create 1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:38 np0005539550 systemd[1]: Started libpod-conmon-1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65.scope.
Nov 29 03:19:38 np0005539550 podman[328739]: 2025-11-29 08:19:38.549384993 +0000 UTC m=+0.024670059 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:19:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01084cf035d9ccb9bfba1c153b266d53a6277fe38be5bf66d049ec567d151e54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:38 np0005539550 podman[328759]: 2025-11-29 08:19:38.66294076 +0000 UTC m=+0.041478725 container create 5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_proskuriakova, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:19:38 np0005539550 podman[328739]: 2025-11-29 08:19:38.678113546 +0000 UTC m=+0.153398612 container init 1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:19:38 np0005539550 podman[328739]: 2025-11-29 08:19:38.6873197 +0000 UTC m=+0.162604726 container start 1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:19:38 np0005539550 systemd[1]: Started libpod-conmon-5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c.scope.
Nov 29 03:19:38 np0005539550 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[328766]: [NOTICE]   (328779) : New worker (328783) forked
Nov 29 03:19:38 np0005539550 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[328766]: [NOTICE]   (328779) : Loading success.
Nov 29 03:19:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:19:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f320ea81575d896dd8c5b1e7b4a8672d7ea384e1821e3020b20c1e64ef141/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f320ea81575d896dd8c5b1e7b4a8672d7ea384e1821e3020b20c1e64ef141/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f320ea81575d896dd8c5b1e7b4a8672d7ea384e1821e3020b20c1e64ef141/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1f320ea81575d896dd8c5b1e7b4a8672d7ea384e1821e3020b20c1e64ef141/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:19:38 np0005539550 podman[328759]: 2025-11-29 08:19:38.645908327 +0000 UTC m=+0.024446302 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:19:38 np0005539550 podman[328759]: 2025-11-29 08:19:38.749178314 +0000 UTC m=+0.127716299 container init 5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:19:38 np0005539550 podman[328759]: 2025-11-29 08:19:38.757751592 +0000 UTC m=+0.136289557 container start 5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:19:38 np0005539550 podman[328759]: 2025-11-29 08:19:38.761685862 +0000 UTC m=+0.140223857 container attach 5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
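[annotation] The four podman events above (image pull, then container init/start/attach for 5a4e22a0...) can be followed live from the same host. A minimal sketch, assuming `podman events --format json` emits one JSON object per line (field names may vary by podman version):

    import json
    import subprocess

    # Stream container lifecycle events like the pull/init/start/attach above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Expected fields (assumption): Status, ID, Image.
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Image"))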
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.988 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.990 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404378.988005, bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.990 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:19:38 np0005539550 nova_compute[257631]: 2025-11-29 08:19:38.994 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.004 257641 INFO nova.virt.libvirt.driver [-] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Instance spawned successfully.#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.005 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.011 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.014 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
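[annotation] The sync decision in that last line hinges on three values: DB power_state 0 (NOSTATE), hypervisor power_state 1 (RUNNING), and task_state 'spawning'. An illustrative sketch (not Nova source) of why the event is skipped a few lines later:

    # power_state constants as used in the log line above
    NOSTATE, RUNNING = 0, 1

    def should_sync_power_state(db_power_state, vm_power_state, task_state):
        # A pending task such as 'spawning' defers the sync -> "Skip." below.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    print(should_sync_power_state(NOSTATE, RUNNING, "spawning"))  # False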
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.024 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.025 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.025 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.025 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.026 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.026 257641 DEBUG nova.virt.libvirt.driver [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
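[annotation] Collected into one mapping, the defaults registered in the six lines above (values verbatim from the log; the helper name is hypothetical):

    IMAGE_PROPERTY_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_defaults(system_metadata):
        # Fill only properties the image left undefined; they later surface as
        # image_hw_* keys in the instance's system_metadata (see the Instance
        # dump further down in this log).
        for prop, default in IMAGE_PROPERTY_DEFAULTS.items():
            system_metadata.setdefault(f"image_{prop}", default)
        return system_metadata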
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.037 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.038 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404378.9891732, bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.038 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.066 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.070 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404378.9933567, bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.070 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.093 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.096 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.101 257641 INFO nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Took 13.11 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.102 257641 DEBUG nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.133 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.162 257641 INFO nova.compute.manager [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Took 14.50 seconds to build instance.#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.180 257641 DEBUG oslo_concurrency.lockutils [None req-63b1cba8-7345-4b26-a01b-936b7a7c49b4 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
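[annotation] The acquire/release bookkeeping above ("waited 0.000s" / "held 14.647s") is oslo.concurrency's standard logging around a named lock. A minimal sketch, assuming the instance UUID as the lock name as in the log:

    import time
    from oslo_concurrency import lockutils

    start = time.monotonic()
    with lockutils.lock("bc3449b1-54d4-4e2e-9390-bde3cc1d61f6"):
        pass  # _locked_do_build_and_run_instance ran here (held 14.647s above)
    print(f"held {time.monotonic() - start:.3f}s")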
Nov 29 03:19:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 355 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 5.4 MiB/s wr, 352 op/s
Nov 29 03:19:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:39.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
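[annotation] The anonymous "HEAD /" requests recur every couple of seconds from 192.168.122.100 and .102, consistent with load-balancer health probes against radosgw's beast frontend. A sketch of an equivalent probe (host and port are assumptions):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200, as in the beast access line above
    conn.close()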
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.493 257641 DEBUG nova.network.neutron [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updated VIF entry in instance network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.495 257641 DEBUG nova.network.neutron [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.513 257641 DEBUG oslo_concurrency.lockutils [req-d294eeac-1584-4843-8059-c124a035f959 req-5a43e0d7-82b6-41ea-b6c8-0ebd0f42a2a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
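[annotation] A sketch of walking the network_info structure from the cache update above to recover fixed and floating addresses (shape copied from the log, trimmed to the relevant keys):

    network_info = [{
        "id": "0bdc8d4b-e261-4398-8465-58392acd35a8",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.3",
                     "floating_ips": [{"address": "192.168.122.176"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)  # 10.100.0.3 -> 192.168.122.176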
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]: {
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]:        "osd_id": 0,
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]:        "type": "bluestore"
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]:    }
Nov 29 03:19:39 np0005539550 zen_proskuriakova[328781]: }
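[annotation] The JSON block emitted by the short-lived ceph container matches the shape of `ceph-volume raw list` output (keyed by OSD UUID); the exact command cephadm ran is not shown in the log. A hedged parsing sketch:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "raw", "list"],  # assumption: prints JSON as above
        capture_output=True, text=True, check=True,
    ).stdout
    for osd_uuid, info in json.loads(out).items():
        print(info["osd_id"], info["type"], info["device"])  # 0 bluestore /dev/mapper/ceph_vg0-ceph_lv0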
Nov 29 03:19:39 np0005539550 systemd[1]: libpod-5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c.scope: Deactivated successfully.
Nov 29 03:19:39 np0005539550 podman[328853]: 2025-11-29 08:19:39.670347261 +0000 UTC m=+0.024808252 container died 5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:19:39 np0005539550 nova_compute[257631]: 2025-11-29 08:19:39.939 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:19:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9d1f320ea81575d896dd8c5b1e7b4a8672d7ea384e1821e3020b20c1e64ef141-merged.mount: Deactivated successfully.
Nov 29 03:19:40 np0005539550 podman[328853]: 2025-11-29 08:19:40.013286883 +0000 UTC m=+0.367747844 container remove 5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_proskuriakova, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:40 np0005539550 systemd[1]: libpod-conmon-5a4e22a006359a5462849797466d784ea5a789d3eb440a452f53468d198e5a6c.scope: Deactivated successfully.
Nov 29 03:19:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:19:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:19:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2a0cf089-a25c-49b9-99c7-96c6986c3931 does not exist
Nov 29 03:19:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2c52347e-255f-4039-be99-44068d0a5cf6 does not exist
Nov 29 03:19:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 544a4815-6bdf-4d7a-9e65-51c5143f5fe3 does not exist
Nov 29 03:19:40 np0005539550 nova_compute[257631]: 2025-11-29 08:19:40.219 257641 DEBUG nova.compute.manager [req-552abdc9-84fe-442e-8056-23e6d8167130 req-d5bdb1dd-ec0c-4d74-9d52-a3545728da1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received event network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:40 np0005539550 nova_compute[257631]: 2025-11-29 08:19:40.219 257641 DEBUG oslo_concurrency.lockutils [req-552abdc9-84fe-442e-8056-23e6d8167130 req-d5bdb1dd-ec0c-4d74-9d52-a3545728da1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:40 np0005539550 nova_compute[257631]: 2025-11-29 08:19:40.220 257641 DEBUG oslo_concurrency.lockutils [req-552abdc9-84fe-442e-8056-23e6d8167130 req-d5bdb1dd-ec0c-4d74-9d52-a3545728da1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:40 np0005539550 nova_compute[257631]: 2025-11-29 08:19:40.220 257641 DEBUG oslo_concurrency.lockutils [req-552abdc9-84fe-442e-8056-23e6d8167130 req-d5bdb1dd-ec0c-4d74-9d52-a3545728da1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:40 np0005539550 nova_compute[257631]: 2025-11-29 08:19:40.220 257641 DEBUG nova.compute.manager [req-552abdc9-84fe-442e-8056-23e6d8167130 req-d5bdb1dd-ec0c-4d74-9d52-a3545728da1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] No waiting events found dispatching network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:40 np0005539550 nova_compute[257631]: 2025-11-29 08:19:40.221 257641 WARNING nova.compute.manager [req-552abdc9-84fe-442e-8056-23e6d8167130 req-d5bdb1dd-ec0c-4d74-9d52-a3545728da1a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received unexpected event network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 for instance with vm_state active and task_state None.#033[00m
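[annotation] The "No waiting events found ... Received unexpected event" pair above reflects Nova's external-event plumbing: a waiter must be registered per (instance, event) before the event arrives. An illustrative sketch (not Nova source):

    import threading

    _events = {}          # {instance_uuid: {event_name: threading.Event}}
    _lock = threading.Lock()

    def pop_instance_event(instance_uuid, name):
        with _lock:  # the "<uuid>-events" lock in the log
            waiter = _events.get(instance_uuid, {}).pop(name, None)
        if waiter is None:
            # -> "No waiting events found" / "Received unexpected event" path
            return None
        waiter.set()  # wakes wait_for_instance_event
        return waiter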
Nov 29 03:19:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:40.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:19:41 np0005539550 nova_compute[257631]: 2025-11-29 08:19:41.319 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 355 MiB data, 979 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 4.6 MiB/s wr, 389 op/s
Nov 29 03:19:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:41.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:42 np0005539550 nova_compute[257631]: 2025-11-29 08:19:42.180 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:42.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 354 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 863 KiB/s wr, 300 op/s
Nov 29 03:19:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:43.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:44.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:44Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5a:07:26 10.100.0.3
Nov 29 03:19:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:44Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5a:07:26 10.100.0.3
Nov 29 03:19:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 345 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 2.3 MiB/s wr, 380 op/s
Nov 29 03:19:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:45.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:46.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.926 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.927 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.927 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.927 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.928 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.929 257641 INFO nova.compute.manager [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Terminating instance#033[00m
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.930 257641 DEBUG nova.compute.manager [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:19:46 np0005539550 kernel: tap44cf97f4-db (unregistering): left promiscuous mode
Nov 29 03:19:46 np0005539550 NetworkManager[49039]: <info>  [1764404386.9775] device (tap44cf97f4-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.987 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:46Z|00461|binding|INFO|Releasing lport 44cf97f4-db65-445f-bb49-d4da69bd4b75 from this chassis (sb_readonly=0)
Nov 29 03:19:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:46Z|00462|binding|INFO|Setting lport 44cf97f4-db65-445f-bb49-d4da69bd4b75 down in Southbound
Nov 29 03:19:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:46Z|00463|binding|INFO|Removing iface tap44cf97f4-db ovn-installed in OVS
Nov 29 03:19:46 np0005539550 nova_compute[257631]: 2025-11-29 08:19:46.990 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.000 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:b1:38 10.100.0.4'], port_security=['fa:16:3e:ab:b1:38 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'bc3449b1-54d4-4e2e-9390-bde3cc1d61f6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-26782821-34df-4010-9d17-f8854e221b4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27fd30263a7f4717b84946720a5770b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '45467816-71fb-46ea-84fa-c25f49ea2a6e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c8dbca6-8e74-4fae-981d-344fddfca3c7, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=44cf97f4-db65-445f-bb49-d4da69bd4b75) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.002 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 44cf97f4-db65-445f-bb49-d4da69bd4b75 in datapath 26782821-34df-4010-9d17-f8854e221b4e unbound from our chassis#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.003 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 26782821-34df-4010-9d17-f8854e221b4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.005 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c679c3ca-2e83-4fa1-8e71-363ec17ec714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.005 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-26782821-34df-4010-9d17-f8854e221b4e namespace which is not needed anymore#033[00m
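[annotation] The PortBindingUpdatedEvent named in the matched-event DEBUG line is an ovsdbapp row event; when the Port_Binding row for the instance's port loses its chassis, the agent concludes no VIFs remain and tears the ovnmeta- namespace down. A hedged sketch of such an event class, after neutron's pattern:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the matched-event debug line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire when the binding's chassis changed (cleared on unbind).
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print(f"Port {row.logical_port} unbound from our chassis")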
Nov 29 03:19:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.012 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539550 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000072.scope: Deactivated successfully.
Nov 29 03:19:47 np0005539550 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000072.scope: Consumed 9.072s CPU time.
Nov 29 03:19:47 np0005539550 systemd-machined[216673]: Machine qemu-55-instance-00000072 terminated.
Nov 29 03:19:47 np0005539550 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[328766]: [NOTICE]   (328779) : haproxy version is 2.8.14-c23fe91
Nov 29 03:19:47 np0005539550 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[328766]: [NOTICE]   (328779) : path to executable is /usr/sbin/haproxy
Nov 29 03:19:47 np0005539550 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[328766]: [WARNING]  (328779) : Exiting Master process...
Nov 29 03:19:47 np0005539550 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[328766]: [ALERT]    (328779) : Current worker (328783) exited with code 143 (Terminated)
Nov 29 03:19:47 np0005539550 neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e[328766]: [WARNING]  (328779) : All workers exited. Exiting... (0)
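[annotation] Worker exit code 143 is the shell convention for death by signal (128 + signum), i.e. SIGTERM from the container stop:

    import signal

    assert 128 + signal.SIGTERM == 143
    print(signal.Signals(143 - 128).name)  # SIGTERM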
Nov 29 03:19:47 np0005539550 systemd[1]: libpod-1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65.scope: Deactivated successfully.
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.176 257641 INFO nova.virt.libvirt.driver [-] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Instance destroyed successfully.#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.176 257641 DEBUG nova.objects.instance [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lazy-loading 'resources' on Instance uuid bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:19:47 np0005539550 podman[328943]: 2025-11-29 08:19:47.178578659 +0000 UTC m=+0.055940223 container died 1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.181 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.189 257641 DEBUG nova.virt.libvirt.vif [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1670228511',display_name='tempest-ListServersNegativeTestJSON-server-1670228511-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1670228511-2',id=114,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2025-11-29T08:19:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27fd30263a7f4717b84946720a5770b5',ramdisk_id='',reservation_id='r-oe57na6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1508942438',owner_user_name='tempest-ListServersNegativeTestJSON-1508942438-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:39Z,user_data=None,user_id='14d293467f8e498eaa87b6b8976b34d9',uuid=bc3449b1-54d4-4e2e-9390-bde3cc1d61f6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.191 257641 DEBUG nova.network.os_vif_util [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converting VIF {"id": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "address": "fa:16:3e:ab:b1:38", "network": {"id": "26782821-34df-4010-9d17-f8854e221b4e", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1086271893-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27fd30263a7f4717b84946720a5770b5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44cf97f4-db", "ovs_interfaceid": "44cf97f4-db65-445f-bb49-d4da69bd4b75", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.193 257641 DEBUG nova.network.os_vif_util [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:b1:38,bridge_name='br-int',has_traffic_filtering=True,id=44cf97f4-db65-445f-bb49-d4da69bd4b75,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44cf97f4-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.194 257641 DEBUG os_vif [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:b1:38,bridge_name='br-int',has_traffic_filtering=True,id=44cf97f4-db65-445f-bb49-d4da69bd4b75,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44cf97f4-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.199 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.199 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44cf97f4-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.203 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.205 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.209 257641 INFO os_vif [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:b1:38,bridge_name='br-int',has_traffic_filtering=True,id=44cf97f4-db65-445f-bb49-d4da69bd4b75,network=Network(26782821-34df-4010-9d17-f8854e221b4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44cf97f4-db')#033[00m
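[annotation] The DelPortCommand transaction above has the same effect as this CLI call (shown as a hedged equivalent; Nova actually drives it through ovsdbapp, not the shell):

    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap44cf97f4-db"],
        check=True,
    )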
Nov 29 03:19:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65-userdata-shm.mount: Deactivated successfully.
Nov 29 03:19:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-01084cf035d9ccb9bfba1c153b266d53a6277fe38be5bf66d049ec567d151e54-merged.mount: Deactivated successfully.
Nov 29 03:19:47 np0005539550 podman[328943]: 2025-11-29 08:19:47.22855788 +0000 UTC m=+0.105919434 container cleanup 1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:19:47 np0005539550 systemd[1]: libpod-conmon-1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65.scope: Deactivated successfully.
Nov 29 03:19:47 np0005539550 podman[328994]: 2025-11-29 08:19:47.302176423 +0000 UTC m=+0.049207643 container remove 1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.308 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[777ed082-dbb7-4fd7-ae3d-d23bcc143d3b]: (4, ('Sat Nov 29 08:19:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e (1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65)\n1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65\nSat Nov 29 08:19:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-26782821-34df-4010-9d17-f8854e221b4e (1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65)\n1aca6529dd1feaabb9ddf0b659690ded4f3d247817b8552055d6a2cd84550f65\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.310 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9187324-458e-49f7-9885-bcc353c8f74d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.311 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26782821-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.313 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539550 kernel: tap26782821-30: left promiscuous mode
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.318 257641 DEBUG nova.compute.manager [req-c69ea90c-67ce-4d9c-b737-38f79c6342d4 req-36364e6d-eb0b-4e48-ace8-9379fc6f3a62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received event network-vif-unplugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.318 257641 DEBUG oslo_concurrency.lockutils [req-c69ea90c-67ce-4d9c-b737-38f79c6342d4 req-36364e6d-eb0b-4e48-ace8-9379fc6f3a62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.318 257641 DEBUG oslo_concurrency.lockutils [req-c69ea90c-67ce-4d9c-b737-38f79c6342d4 req-36364e6d-eb0b-4e48-ace8-9379fc6f3a62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.319 257641 DEBUG oslo_concurrency.lockutils [req-c69ea90c-67ce-4d9c-b737-38f79c6342d4 req-36364e6d-eb0b-4e48-ace8-9379fc6f3a62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.319 257641 DEBUG nova.compute.manager [req-c69ea90c-67ce-4d9c-b737-38f79c6342d4 req-36364e6d-eb0b-4e48-ace8-9379fc6f3a62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] No waiting events found dispatching network-vif-unplugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.319 257641 DEBUG nova.compute.manager [req-c69ea90c-67ce-4d9c-b737-38f79c6342d4 req-36364e6d-eb0b-4e48-ace8-9379fc6f3a62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received event network-vif-unplugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.327 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.330 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1e61dddc-9490-42fe-9629-8d2fed61f63b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.341 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4a38933f-cd0d-46fc-970c-1c142280b377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.342 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5e2efc-b908-441d-80df-0057971f7970]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.357 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[92c0c68c-c486-482c-a26e-bf4cfddb6f32]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 742553, 'reachable_time': 35750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329012, 'error': None, 'target': 'ovnmeta-26782821-34df-4010-9d17-f8854e221b4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 systemd[1]: run-netns-ovnmeta\x2d26782821\x2d34df\x2d4010\x2d9d17\x2df8854e221b4e.mount: Deactivated successfully.
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.361 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-26782821-34df-4010-9d17-f8854e221b4e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:19:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:47.362 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e8773d71-7464-4576-bc79-0e7d33dfe4c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:19:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 367 MiB data, 1008 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 4.3 MiB/s wr, 363 op/s
Nov 29 03:19:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:47.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.601 257641 INFO nova.virt.libvirt.driver [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Deleting instance files /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_del#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.602 257641 INFO nova.virt.libvirt.driver [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Deletion of /var/lib/nova/instances/bc3449b1-54d4-4e2e-9390-bde3cc1d61f6_del complete#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.659 257641 INFO nova.compute.manager [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.660 257641 DEBUG oslo.service.loopingcall [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.660 257641 DEBUG nova.compute.manager [-] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.660 257641 DEBUG nova.network.neutron [-] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:19:47 np0005539550 nova_compute[257631]: 2025-11-29 08:19:47.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:19:48 np0005539550 nova_compute[257631]: 2025-11-29 08:19:48.231 257641 DEBUG nova.network.neutron [-] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:19:48 np0005539550 nova_compute[257631]: 2025-11-29 08:19:48.245 257641 INFO nova.compute.manager [-] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Took 0.58 seconds to deallocate network for instance.#033[00m
Nov 29 03:19:48 np0005539550 nova_compute[257631]: 2025-11-29 08:19:48.285 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:48 np0005539550 nova_compute[257631]: 2025-11-29 08:19:48.286 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:48.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:48 np0005539550 nova_compute[257631]: 2025-11-29 08:19:48.631 257641 DEBUG oslo_concurrency.processutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:19:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:19:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2634404715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.085 257641 DEBUG oslo_concurrency.processutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.092 257641 DEBUG nova.compute.provider_tree [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.114 257641 DEBUG nova.scheduler.client.report [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.138 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.166 257641 INFO nova.scheduler.client.report [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Deleted allocations for instance bc3449b1-54d4-4e2e-9390-bde3cc1d61f6#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.234 257641 DEBUG oslo_concurrency.lockutils [None req-76b6a1fe-cc93-4a0a-b563-8ad1caa935c6 14d293467f8e498eaa87b6b8976b34d9 27fd30263a7f4717b84946720a5770b5 - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 369 MiB data, 1017 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.6 MiB/s wr, 370 op/s
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.432 257641 DEBUG nova.compute.manager [req-82d11121-1e46-4b11-920e-ce0a4e8435f0 req-3bd8506b-910e-4b5b-819e-b17fe9d16334 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received event network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.434 257641 DEBUG oslo_concurrency.lockutils [req-82d11121-1e46-4b11-920e-ce0a4e8435f0 req-3bd8506b-910e-4b5b-819e-b17fe9d16334 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.434 257641 DEBUG oslo_concurrency.lockutils [req-82d11121-1e46-4b11-920e-ce0a4e8435f0 req-3bd8506b-910e-4b5b-819e-b17fe9d16334 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.435 257641 DEBUG oslo_concurrency.lockutils [req-82d11121-1e46-4b11-920e-ce0a4e8435f0 req-3bd8506b-910e-4b5b-819e-b17fe9d16334 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "bc3449b1-54d4-4e2e-9390-bde3cc1d61f6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.436 257641 DEBUG nova.compute.manager [req-82d11121-1e46-4b11-920e-ce0a4e8435f0 req-3bd8506b-910e-4b5b-819e-b17fe9d16334 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] No waiting events found dispatching network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.436 257641 WARNING nova.compute.manager [req-82d11121-1e46-4b11-920e-ce0a4e8435f0 req-3bd8506b-910e-4b5b-819e-b17fe9d16334 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received unexpected event network-vif-plugged-44cf97f4-db65-445f-bb49-d4da69bd4b75 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:19:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:49.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:49 np0005539550 nova_compute[257631]: 2025-11-29 08:19:49.555 257641 DEBUG nova.compute.manager [req-ffd02439-39da-4987-90f7-f9197128dcfb req-559a9bc8-0b2c-4e02-a51d-ace59a2371ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Received event network-vif-deleted-44cf97f4-db65-445f-bb49-d4da69bd4b75 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:19:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:50.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 281 MiB data, 989 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.4 MiB/s wr, 421 op/s
Nov 29 03:19:51 np0005539550 nova_compute[257631]: 2025-11-29 08:19:51.470 257641 INFO nova.compute.manager [None req-2a5d88fb-4bde-44f5-8cf6-856db17f6ae1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Get console output#033[00m
Nov 29 03:19:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:51.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:51 np0005539550 nova_compute[257631]: 2025-11-29 08:19:51.481 257641 INFO oslo.privsep.daemon [None req-2a5d88fb-4bde-44f5-8cf6-856db17f6ae1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp0hfyxha4/privsep.sock']#033[00m
Nov 29 03:19:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.184 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.201 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.238 257641 INFO oslo.privsep.daemon [None req-2a5d88fb-4bde-44f5-8cf6-856db17f6ae1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.110 329043 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.114 329043 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.116 329043 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.116 329043 INFO oslo.privsep.daemon [-] privsep daemon running as pid 329043#033[00m
Nov 29 03:19:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:52.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:52 np0005539550 nova_compute[257631]: 2025-11-29 08:19:52.346 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:19:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 281 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 6.4 MiB/s wr, 359 op/s
Nov 29 03:19:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:53.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:53.895 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:19:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:53.896 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:19:53 np0005539550 nova_compute[257631]: 2025-11-29 08:19:53.896 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:54.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:55 np0005539550 podman[329047]: 2025-11-29 08:19:55.347326125 +0000 UTC m=+0.070681319 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:19:55 np0005539550 podman[329046]: 2025-11-29 08:19:55.354836006 +0000 UTC m=+0.079861022 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:19:55 np0005539550 nova_compute[257631]: 2025-11-29 08:19:55.412 257641 INFO nova.compute.manager [None req-8eb23aa5-3853-4530-8936-65d446dbb8e3 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Get console output#033[00m
Nov 29 03:19:55 np0005539550 nova_compute[257631]: 2025-11-29 08:19:55.417 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:19:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 281 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.6 MiB/s wr, 315 op/s
Nov 29 03:19:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:55.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:19:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:19:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:19:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:19:57Z|00464|binding|INFO|Releasing lport 74c0e253-5186-4acd-84b9-fb779ff161ee from this chassis (sb_readonly=0)
Nov 29 03:19:57 np0005539550 nova_compute[257631]: 2025-11-29 08:19:57.198 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:57 np0005539550 nova_compute[257631]: 2025-11-29 08:19:57.203 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:19:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 281 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 4.2 MiB/s wr, 216 op/s
Nov 29 03:19:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:57.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:19:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:19:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:19:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:19:58.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:19:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:19:58.897 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:19:59
Nov 29 03:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control']
Nov 29 03:19:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:19:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 281 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 566 KiB/s rd, 2.2 MiB/s wr, 150 op/s
Nov 29 03:19:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:19:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:19:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:19:59.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.651 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.652 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.661 257641 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.662 257641 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.662 257641 DEBUG nova.network.neutron [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.668 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.761 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.761 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.772 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.773 257641 INFO nova.compute.claims [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:19:59 np0005539550 nova_compute[257631]: 2025-11-29 08:19:59.903 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:20:00 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 03:20:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:00.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4149250954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.350 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.358 257641 DEBUG nova.compute.provider_tree [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.374 257641 DEBUG nova.scheduler.client.report [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.399 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.400 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.480 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.481 257641 DEBUG nova.network.neutron [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.512 257641 INFO nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.532 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.635 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.636 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.636 257641 INFO nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Creating image(s)#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.660 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.686 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.728 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.734 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.774 257641 DEBUG nova.policy [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '80ceb9112b3a4f119c05f21fd617af11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '26e3508b949a4dbf960d7befc8f27869', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.810 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.811 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.811 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.812 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.833 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:00 np0005539550 nova_compute[257631]: 2025-11-29 08:20:00.836 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.282 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.362 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] resizing rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:20:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 281 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 300 KiB/s rd, 946 KiB/s wr, 96 op/s
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.473 257641 DEBUG nova.objects.instance [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'migration_context' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:01.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.493 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.494 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Ensure instance console log exists: /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.494 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.495 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.495 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.910 257641 DEBUG nova.network.neutron [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.936 257641 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:01 np0005539550 nova_compute[257631]: 2025-11-29 08:20:01.965 257641 DEBUG nova.network.neutron [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Successfully created port: 1ac0f281-7dd4-441c-ad6b-82816fd3c242 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:20:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.125 257641 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.125 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Creating file /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/ac2958e4ca3f456882b0103132b8b249.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.126 257641 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/ac2958e4ca3f456882b0103132b8b249.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.175 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404387.1739118, bc3449b1-54d4-4e2e-9390-bde3cc1d61f6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.175 257641 INFO nova.compute.manager [-] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.201 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.204 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.212 257641 DEBUG nova.compute.manager [None req-5ecd06d2-db9b-42a2-b701-f9686aef4971 - - - - - -] [instance: bc3449b1-54d4-4e2e-9390-bde3cc1d61f6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:02.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.500 257641 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/ac2958e4ca3f456882b0103132b8b249.tmp" returned: 1 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.500 257641 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd/ac2958e4ca3f456882b0103132b8b249.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.501 257641 DEBUG nova.virt.libvirt.volume.remotefs [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Creating directory /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.501 257641 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.728 257641 DEBUG oslo_concurrency.processutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/cf3d3db9-f753-47a8-93d5-7f0491bb03fd" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
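The failed "touch" followed by the successful "mkdir -p" above is nova's remotefs helper probing the migration target: BatchMode=yes makes ssh fail fast instead of prompting, and the exit status 1 from touch simply means the instance directory does not exist yet on 192.168.122.101, so the driver falls back to creating it for a disk copy. A stand-alone sketch of that probe-then-create pattern, with helper names that are mine rather than Nova's:

    import os
    import subprocess

    def ssh(host, *cmd):
        # BatchMode=yes: fail fast instead of prompting, as in the log.
        return subprocess.run(["ssh", "-o", "BatchMode=yes", host, *cmd],
                              capture_output=True, text=True)

    def dest_shares_instance_dir(host, inst_dir, marker="probe.tmp"):
        remote = f"{inst_dir}/{marker}"
        # Touch a marker on the destination; if the same path then shows up
        # locally, both hosts see one shared filesystem.
        shared = (ssh(host, "touch", remote).returncode == 0
                  and os.path.exists(remote))
        if shared:
            os.unlink(remote)
        else:
            # Probe failed (exit 1 above): create the directory on the
            # destination so the disk files can be copied across instead.
            ssh(host, "mkdir", "-p", inst_dir)
        return shared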
Nov 29 03:20:02 np0005539550 nova_compute[257631]: 2025-11-29 08:20:02.736 257641 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.312 257641 DEBUG nova.network.neutron [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Successfully updated port: 1ac0f281-7dd4-441c-ad6b-82816fd3c242 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.331 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.332 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.332 257641 DEBUG nova.network.neutron [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 283 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 7.9 KiB/s rd, 171 KiB/s wr, 11 op/s
Nov 29 03:20:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.521 257641 DEBUG nova.compute.manager [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-changed-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.522 257641 DEBUG nova.compute.manager [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Refreshing instance network info cache due to event network-changed-1ac0f281-7dd4-441c-ad6b-82816fd3c242. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.522 257641 DEBUG oslo_concurrency.lockutils [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:03 np0005539550 nova_compute[257631]: 2025-11-29 08:20:03.597 257641 DEBUG nova.network.neutron [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:20:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:04.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
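The anonymous "HEAD / HTTP/1.0" requests recurring every second from 192.168.122.100 and .102 are load-balancer health probes against radosgw's beast frontend, which is why they carry no user, return 200 with a zero body, and show near-zero latency. A minimal probe of the same shape, as a sketch (host and port here are illustrative, not taken from this log):

    import http.client

    def rgw_healthy(host, port=8080, timeout=2.0):
        # Same probe the balancer sends: HEAD / and accept any 200.
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        finally:
            conn.close()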
Nov 29 03:20:04 np0005539550 kernel: tap0bdc8d4b-e2 (unregistering): left promiscuous mode
Nov 29 03:20:05 np0005539550 NetworkManager[49039]: <info>  [1764404405.0027] device (tap0bdc8d4b-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.016 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:05Z|00465|binding|INFO|Releasing lport 0bdc8d4b-e261-4398-8465-58392acd35a8 from this chassis (sb_readonly=0)
Nov 29 03:20:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:05Z|00466|binding|INFO|Setting lport 0bdc8d4b-e261-4398-8465-58392acd35a8 down in Southbound
Nov 29 03:20:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:05Z|00467|binding|INFO|Removing iface tap0bdc8d4b-e2 ovn-installed in OVS
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.020 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.031 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5a:07:26 10.100.0.3'], port_security=['fa:16:3e:5a:07:26 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fea4296f-ae17-483b-99ac-9c138bd93045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34584095-7fef-4e24-ba1b-1ecac0c29f47, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0bdc8d4b-e261-4398-8465-58392acd35a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.032 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0bdc8d4b-e261-4398-8465-58392acd35a8 in datapath 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 unbound from our chassis#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.033 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.035 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eda6bb20-fb98-4562-a63b-b7e75c457396]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.035 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 namespace which is not needed anymore#033[00m
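Releasing lport 0bdc8d4b-e2... above flipped the metadata agent from provisioning to teardown: a metadata namespace exists only while at least one VIF port on the network's datapath is bound to this chassis, and with the last one gone there is nobody left to serve. Schematically (not Neutron's actual code, just the decision these lines record):

    def on_datapath_update(network_id, bound_vif_ports, provision, teardown):
        # Mirrors the logged decision: no valid VIF ports => tear the
        # ovnmeta namespace (and its haproxy sidecar) down.
        ns = f"ovnmeta-{network_id}"
        if bound_vif_ports:
            provision(ns, bound_vif_ports)
        else:
            teardown(ns)

    # The event above, in miniature:
    on_datapath_update("96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", [],
                       provision=lambda ns, ports: None,
                       teardown=lambda ns: print("tearing down", ns))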
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.042 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Nov 29 03:20:05 np0005539550 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000006f.scope: Consumed 15.595s CPU time.
Nov 29 03:20:05 np0005539550 systemd-machined[216673]: Machine qemu-54-instance-0000006f terminated.
Nov 29 03:20:05 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [NOTICE]   (327634) : haproxy version is 2.8.14-c23fe91
Nov 29 03:20:05 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [NOTICE]   (327634) : path to executable is /usr/sbin/haproxy
Nov 29 03:20:05 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [WARNING]  (327634) : Exiting Master process...
Nov 29 03:20:05 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [WARNING]  (327634) : Exiting Master process...
Nov 29 03:20:05 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [ALERT]    (327634) : Current worker (327652) exited with code 143 (Terminated)
Nov 29 03:20:05 np0005539550 neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309[327612]: [WARNING]  (327634) : All workers exited. Exiting... (0)
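The worker's exit code 143 in the ALERT above is not a crash: statuses above 128 follow the shell's 128-plus-signal convention, and 143 - 128 = 15 is SIGTERM, i.e. the haproxy worker was terminated deliberately as part of the namespace teardown. A tiny decoder:

    import signal

    def describe_exit(code: int) -> str:
        # 128 + N means "killed by signal N" under the usual convention.
        if code > 128:
            return f"killed by {signal.Signals(code - 128).name}"
        return f"exited with status {code}"

    print(describe_exit(143))  # -> killed by SIGTERM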
Nov 29 03:20:05 np0005539550 systemd[1]: libpod-4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e.scope: Deactivated successfully.
Nov 29 03:20:05 np0005539550 podman[329354]: 2025-11-29 08:20:05.168181046 +0000 UTC m=+0.045596231 container died 4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:20:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:20:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2938f6398261dc8ae5dd412bebd3f0e7ffc96999a7b7b3d82a7a8043a35f2dfd-merged.mount: Deactivated successfully.
Nov 29 03:20:05 np0005539550 podman[329354]: 2025-11-29 08:20:05.201518164 +0000 UTC m=+0.078933349 container cleanup 4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:20:05 np0005539550 systemd[1]: libpod-conmon-4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e.scope: Deactivated successfully.
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.239 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 podman[329384]: 2025-11-29 08:20:05.270665282 +0000 UTC m=+0.049355816 container remove 4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.277 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bdec22bd-7313-4948-8ef3-ebaf7af2e1bd]: (4, ('Sat Nov 29 08:20:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 (4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e)\n4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e\nSat Nov 29 08:20:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 (4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e)\n4451b812bf362eea0d368cb367939a28d8c9ab6d57e4533f50773c2e685e6e3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.279 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbc9e31-db24-4240-a6e4-8aa67b971f17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.280 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96a9f8d0-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.282 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 kernel: tap96a9f8d0-90: left promiscuous mode
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.302 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.305 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0aac2644-cefc-447c-b5c9-4e73876dad36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.325 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[60b02980-3d88-44d5-8754-292f4ffb1277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.326 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[456bf274-fbfc-45ec-a780-4664aa341e9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.350 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c13fd85a-9cb7-47a3-8000-144e3f7fd684]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741565, 'reachable_time': 44425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329413, 'error': None, 'target': 'ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 systemd[1]: run-netns-ovnmeta\x2d96a9f8d0\x2d94cb\x2d4ef1\x2db5fc\x2d814aeb66b309.mount: Deactivated successfully.
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.354 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96a9f8d0-94cb-4ef1-b5fc-814aeb66b309 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:20:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:05.354 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[37a6abad-646b-40df-8bc1-0ecffe957504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.434 257641 DEBUG nova.network.neutron [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Updating instance_info_cache with network_info: [{"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 320 MiB data, 987 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.5 MiB/s wr, 44 op/s
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.455 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.456 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance network_info: |[{"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.456 257641 DEBUG oslo_concurrency.lockutils [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.456 257641 DEBUG nova.network.neutron [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Refreshing network info cache for port 1ac0f281-7dd4-441c-ad6b-82816fd3c242 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.460 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Start _get_guest_xml network_info=[{"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.465 257641 WARNING nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.478 257641 DEBUG nova.virt.libvirt.host [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.479 257641 DEBUG nova.virt.libvirt.host [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.483 257641 DEBUG nova.virt.libvirt.host [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.484 257641 DEBUG nova.virt.libvirt.host [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.486 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.486 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.486 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.487 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.487 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.487 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.487 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.488 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.488 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.488 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.489 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.489 257641 DEBUG nova.virt.hardware [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
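The topology walk above (limits 65536:65536:65536, preferences 0:0:0, one vCPU) can only ever produce 1:1:1, since the sockets/cores/threads product must equal the vCPU count. An illustrative reconstruction of that search, not Nova's actual nova.virt.hardware code:

    import itertools

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        # Each dimension is bounded by the vCPU count: no single factor can
        # exceed the product. Yield every split whose product matches.
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # -> [(1, 1, 1)]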
Nov 29 03:20:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:05.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.493 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.755 257641 INFO nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance shutdown successfully after 3 seconds.#033[00m
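"Instance shutdown successfully after 3 seconds" is the tail of the _clean_shutdown call that began at 08:20:02.736: a graceful shutdown request to the guest followed by a poll of the domain state. A rough sketch of such a loop, where 'dom' stands for any libvirt-like handle with shutdown() and is_active() (names illustrative, not Nova's):

    import time

    def clean_shutdown(dom, timeout=60.0, interval=1.0):
        dom.shutdown()                      # graceful (ACPI-style) request
        waited = 0.0
        while waited < timeout:
            if not dom.is_active():
                return waited               # seconds the guest actually took
            time.sleep(interval)
            waited += interval
        raise TimeoutError("guest ignored the shutdown request")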
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.760 257641 INFO nova.virt.libvirt.driver [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Instance destroyed successfully.#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.761 257641 DEBUG nova.virt.libvirt.vif [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1568692522',display_name='tempest-TestNetworkAdvancedServerOps-server-1568692522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1568692522',id=111,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNKUwFrrTjn8atdc6IVHURjdCwbc8WxyLGXpa+LJc5sLs2eoepMjjuqxjn33AoGUMizcXrpPXctDgQXs8T7l76aOuh+gBdm/mktVIbC7S76mvgSpzr3zbuH99OXaXcKFA==',key_name='tempest-TestNetworkAdvancedServerOps-1778483648',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:19:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-kpo3ssd1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:19:58Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=cf3d3db9-f753-47a8-93d5-7f0491bb03fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--589317975", "vif_mac": "fa:16:3e:5a:07:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.761 257641 DEBUG nova.network.os_vif_util [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--589317975", "vif_mac": "fa:16:3e:5a:07:26"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.762 257641 DEBUG nova.network.os_vif_util [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.762 257641 DEBUG os_vif [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.764 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.764 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bdc8d4b-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.768 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.769 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.771 257641 INFO os_vif [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2')#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.776 257641 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.776 257641 DEBUG nova.virt.libvirt.driver [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1964186953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.939 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.964 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:05 np0005539550 nova_compute[257631]: 2025-11-29 08:20:05.968 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.000 257641 DEBUG neutronclient.v2_0.client [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0bdc8d4b-e261-4398-8465-58392acd35a8 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.120 257641 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.120 257641 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.121 257641 DEBUG oslo_concurrency.lockutils [None req-ff755ec3-0448-471c-90d3-fc7c16b2936c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
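The Acquiring/Acquired/Releasing triples throughout this section are oslo.concurrency named locks: every actor that wants a given instance's cache or event queue asks for a lock object keyed by the same string, so work on different instances never serializes against each other. A stripped-down analogue (oslo adds fair queuing, the timing lines seen above, and optional file-based locks):

    import threading
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)   # one lock per name, on demand

    def synchronized(name: str) -> threading.Lock:
        return _locks[name]

    with synchronized("cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events"):
        pass  # pop or clear this instance's pending events here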
Nov 29 03:20:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:06.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715589567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.376 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
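Both "ceph mon dump --format=json" invocations above come from Nova's RBD storage backend, which shells out to the ceph CLI to learn the cluster's monitor addresses before touching images (each run also shows up as a handle_command/dispatch pair on the ceph-mon side). A minimal equivalent, as a sketch: the CLI flags match the log, but the "mons"/"addr" fields are assumptions about the monmap JSON layout, so adjust for your Ceph release:

    import json
    import subprocess

    def monitor_addrs(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command line as in the log; check=True raises on nonzero exit.
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf],
            capture_output=True, text=True, check=True).stdout
        return [mon["addr"] for mon in json.loads(out).get("mons", [])]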
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.378 257641 DEBUG nova.virt.libvirt.vif [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-192669347',display_name='tempest-tempest.common.compute-instance-192669347',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-192669347',id=116,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-e740scar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:00Z,user_data=None,user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5cc9587a-1715-4ff6-80c1-b79d2fc224c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.379 257641 DEBUG nova.network.os_vif_util [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.380 257641 DEBUG nova.network.os_vif_util [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
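The three records above show nova.network.os_vif_util converting Nova's network-info dict for port 1ac0f281-7dd4-441c-ad6b-82816fd3c242 into an os-vif VIFOpenVSwitch object before plugging. A minimal sketch of constructing the equivalent object by hand, assuming the os-vif library is importable on the host; every field value below is copied from the log:

    # Build the same os-vif object that nova_to_osvif_vif produced above.
    # This only constructs the object; it does not plug anything.
    from os_vif.objects import network, vif

    net = network.Network(id='58fd104d-4342-482d-ae9e-dbb4b9fa6788',
                          bridge='br-int', mtu=1442)
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='1ac0f281-7dd4-441c-ad6b-82816fd3c242')
    ovs_vif = vif.VIFOpenVSwitch(
        id='1ac0f281-7dd4-441c-ad6b-82816fd3c242',
        address='fa:16:3e:1d:ee:a4',
        network=net,
        bridge_name='br-int',
        vif_name='tap1ac0f281-7d',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        port_profile=profile)
    print(ovs_vif)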
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.381 257641 DEBUG nova.objects.instance [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.766 257641 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.767 257641 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.767 257641 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.767 257641 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.768 257641 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.768 257641 WARNING nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-unplugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.768 257641 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.769 257641 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.769 257641 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.769 257641 DEBUG oslo_concurrency.lockutils [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.769 257641 DEBUG nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] No waiting events found dispatching network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.770 257641 WARNING nova.compute.manager [req-f79485c5-bbd9-4055-bd3d-713d39264f86 req-0bfed806-3076-4f20-89e2-f92a637f7155 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received unexpected event network-vif-plugged-0bdc8d4b-e261-4398-8465-58392acd35a8 for instance with vm_state active and task_state resize_migrated.#033[00m
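The two warnings above are benign: Neutron delivered network-vif-unplugged/plugged events for instance cf3d3db9-f753-47a8-93d5-7f0491bb03fd while it is mid-resize (vm_state active, task_state resize_migrated), and no code path had registered a waiter, so pop_instance_event found nothing to dispatch. These events arrive through Nova's os-server-external-events API; a hedged sketch of the call Neutron makes, with placeholder endpoint and token:

    # NOVA_URL and TOKEN are placeholders; only the request shape follows
    # the documented os-server-external-events API that produced the
    # records above.
    import requests

    NOVA_URL = 'http://nova-api.example.com/v2.1'  # assumption
    TOKEN = 'gAAAA...'                             # assumption: keystone token

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': 'cf3d3db9-f753-47a8-93d5-7f0491bb03fd',
        'tag': '0bdc8d4b-e261-4398-8465-58392acd35a8',
        'status': 'completed',
    }]}
    resp = requests.post(NOVA_URL + '/os-server-external-events',
                         json=body, headers={'X-Auth-Token': TOKEN})
    print(resp.status_code, resp.json())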
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.780 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <uuid>5cc9587a-1715-4ff6-80c1-b79d2fc224c3</uuid>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <name>instance-00000074</name>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <nova:name>tempest-tempest.common.compute-instance-192669347</nova:name>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:20:05</nova:creationTime>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <nova:port uuid="1ac0f281-7dd4-441c-ad6b-82816fd3c242">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <entry name="serial">5cc9587a-1715-4ff6-80c1-b79d2fc224c3</entry>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <entry name="uuid">5cc9587a-1715-4ff6-80c1-b79d2fc224c3</entry>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:1d:ee:a4"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <target dev="tap1ac0f281-7d"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/console.log" append="off"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:20:06 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:20:06 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:20:06 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:20:06 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
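_get_guest_xml has finished rendering the domain definition for instance-00000074 (q35 machine type, RBD-backed vda, the tap1ac0f281-7d interface, and a config-drive cdrom on sata). Roughly, the driver next hands this XML to libvirt; a minimal sketch with python3-libvirt, assuming the dumped XML has been saved locally as instance-00000074.xml:

    # Define and start the domain from the XML dumped above.
    # 'instance-00000074.xml' is an assumed local copy of that XML.
    import libvirt

    with open('instance-00000074.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persist the definition
        dom.createWithFlags(0)      # power on instance-00000074
        print(dom.name(), dom.ID())
    finally:
        conn.close()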
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.783 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Preparing to wait for external event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.784 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.784 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.784 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.785 257641 DEBUG nova.virt.libvirt.vif [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-192669347',display_name='tempest-tempest.common.compute-instance-192669347',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-192669347',id=116,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-e740scar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:00Z,user_data=None,user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5cc9587a-1715-4ff6-80c1-b79d2fc224c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.785 257641 DEBUG nova.network.os_vif_util [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.786 257641 DEBUG nova.network.os_vif_util [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.787 257641 DEBUG os_vif [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.788 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.789 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.796 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.796 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ac0f281-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.797 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ac0f281-7d, col_values=(('external_ids', {'iface-id': '1ac0f281-7dd4-441c-ad6b-82816fd3c242', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:ee:a4', 'vm-uuid': '5cc9587a-1715-4ff6-80c1-b79d2fc224c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.798 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:06 np0005539550 NetworkManager[49039]: <info>  [1764404406.7995] manager: (tap1ac0f281-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.803 257641 INFO os_vif [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d')#033[00m
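os-vif plugged the port with two OVSDB transactions: an idempotent AddBridgeCommand (which "caused no change" because br-int already exists) and an AddPortCommand plus DbSetCommand that attach tap1ac0f281-7d and tag its Interface row with the Neutron port id. A sketch reproducing the same transactions with ovsdbapp, assuming local access to the OVS db socket; this mirrors the os-vif ovs plugin rather than reusing it:

    # Reproduce the logged AddBridge/AddPort/DbSet transactions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap1ac0f281-7d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap1ac0f281-7d',
            ('external_ids', {
                'iface-id': '1ac0f281-7dd4-441c-ad6b-82816fd3c242',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:1d:ee:a4',
                'vm-uuid': '5cc9587a-1715-4ff6-80c1-b79d2fc224c3'})))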
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.868 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.868 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.869 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No VIF found with MAC fa:16:3e:1d:ee:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.869 257641 INFO nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Using config drive#033[00m
Nov 29 03:20:06 np0005539550 nova_compute[257631]: 2025-11-29 08:20:06.889 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.203 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 327 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.8 MiB/s wr, 46 op/s
Nov 29 03:20:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:07.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.613 257641 INFO nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Creating config drive at /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.623 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdrmy6so execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.775 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdrmy6so" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.806 257641 DEBUG nova.storage.rbd_utils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.809 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.908 257641 DEBUG nova.network.neutron [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Updated VIF entry in instance network info cache for port 1ac0f281-7dd4-441c-ad6b-82816fd3c242. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.909 257641 DEBUG nova.network.neutron [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Updating instance_info_cache with network_info: [{"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.942 257641 DEBUG oslo_concurrency.lockutils [req-a698beda-6480-46eb-b167-7b9b81c6ad30 req-a76cab9c-59fd-4f34-80fe-665c12d709d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.981 257641 DEBUG oslo_concurrency.processutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:07 np0005539550 nova_compute[257631]: 2025-11-29 08:20:07.982 257641 INFO nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Deleting local config drive /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config because it was imported into RBD.#033[00m
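The config-drive sequence above probes RBD first (the two "does not exist" records are rbd_utils checking before creating), builds the ISO with mkisofs, imports it into the vms pool as 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config, then deletes the local file. A sketch of the same import plus a python-rbd existence check, assuming the ceph client packages and the 'openstack' cephx user shown in the log:

    # The import command the driver ran, followed by a check that the
    # image now exists in the vms pool.
    import subprocess

    import rados
    import rbd

    name = '5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config'
    iso = ('/var/lib/nova/instances/'
           '5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config')

    subprocess.run(['rbd', 'import', '--pool', 'vms', iso, name,
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            # rbd.ImageNotFound is raised here if the import failed.
            with rbd.Image(ioctx, name, read_only=True) as image:
                print(name, image.size(), 'bytes')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()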
Nov 29 03:20:08 np0005539550 kernel: tap1ac0f281-7d: entered promiscuous mode
Nov 29 03:20:08 np0005539550 NetworkManager[49039]: <info>  [1764404408.0285] manager: (tap1ac0f281-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Nov 29 03:20:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:08Z|00468|binding|INFO|Claiming lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 for this chassis.
Nov 29 03:20:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:08Z|00469|binding|INFO|1ac0f281-7dd4-441c-ad6b-82816fd3c242: Claiming fa:16:3e:1d:ee:a4 10.100.0.13
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.032 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.043 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:ee:a4 10.100.0.13'], port_security=['fa:16:3e:1d:ee:a4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cc9587a-1715-4ff6-80c1-b79d2fc224c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e7c0561a-dbc2-4575-a537-f1269d068f5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1ac0f281-7dd4-441c-ad6b-82816fd3c242) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.044 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1ac0f281-7dd4-441c-ad6b-82816fd3c242 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.045 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:20:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:08Z|00470|binding|INFO|Setting lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 ovn-installed in OVS
Nov 29 03:20:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:08Z|00471|binding|INFO|Setting lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 up in Southbound
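ovn-controller has now claimed lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 for this chassis, set ovn-installed in OVS, and marked the port up in the Southbound DB. A quick way to confirm the binding from the host, assuming ovn-sbctl can reach the Southbound DB; the find syntax is the stock ovn-sbctl database command:

    # Ask the OVN Southbound DB which chassis holds the lport and
    # whether it is up, matching the claim messages above.
    import subprocess

    lport = '1ac0f281-7dd4-441c-ad6b-82816fd3c242'
    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=' + lport],
        check=True, capture_output=True, text=True).stdout
    print(out)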
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.051 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.055 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.057 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b447cd3f-c50e-438a-b30a-cd7f1a29ef3a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.058 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.060 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.060 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec2d10b-ac86-4fe2-b888-5c0507d7a02d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.061 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3b060a81-da71-48ea-9427-61ef93d1ff36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 systemd-machined[216673]: New machine qemu-56-instance-00000074.
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.070 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b1515c1e-43c0-4de9-ab84-94fc213caf0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 systemd[1]: Started Virtual Machine qemu-56-instance-00000074.
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.089 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49bad701-4d61-45bc-8c2e-82e73fa8bd80]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 systemd-udevd[329565]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:08 np0005539550 NetworkManager[49039]: <info>  [1764404408.1201] device (tap1ac0f281-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:08 np0005539550 NetworkManager[49039]: <info>  [1764404408.1210] device (tap1ac0f281-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.131 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a1eab3-ee0e-4b6b-8beb-27a6ccf1b6d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.137 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7ca25a-b011-4b7e-a769-19404e3a7a7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 systemd-udevd[329575]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:08 np0005539550 NetworkManager[49039]: <info>  [1764404408.1420] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/212)
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.154 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539550 podman[329549]: 2025-11-29 08:20:08.158783572 +0000 UTC m=+0.098881606 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.170 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[28b6f20a-764d-45ed-9109-65da4137deff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.173 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[be6e7f66-fb71-4cfc-9c32-c1ddcb28601e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 NetworkManager[49039]: <info>  [1764404408.1993] device (tap58fd104d-40): carrier: link connected
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.205 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4b453bf2-4f0c-4162-a747-8f1686914c49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.221 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[26c3565e-7d55-4dc1-8cd7-eb08a047812b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745586, 'reachable_time': 36250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329608, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.234 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2faa70-4724-4793-b786-04fb8c0be9c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 745586, 'tstamp': 745586}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329609, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.250 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[86b9120c-8d30-44c9-83ad-0bea7b2ec3c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 131], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745586, 'reachable_time': 36250, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329610, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
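The two large privsep replies above are netlink RTM_NEWLINK dumps (plus one RTM_NEWADDR showing the link-local fe80:: address) that the OVN metadata agent runs inside the ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace to confirm that the tap58fd104d-41 veth end is up before plumbing the metadata proxy; the attrs lists are pyroute2's parsed IFLA_* attributes. A minimal sketch of the same dump, assuming pyroute2 is installed and the namespace already exists:

    # Minimal sketch (assumes pyroute2 and an existing ovnmeta namespace);
    # reproduces the RTM_NEWLINK dump seen in the privsep replies above.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788')
    try:
        for msg in ns.get_links():                 # one RTM_NEWLINK message per link
            print(msg.get_attr('IFLA_IFNAME'),     # e.g. 'tap58fd104d-41'
                  msg.get_attr('IFLA_OPERSTATE'),  # e.g. 'UP'
                  msg.get_attr('IFLA_ADDRESS'),    # e.g. 'fa:16:3e:a8:26:1e'
                  msg.get_attr('IFLA_MTU'))        # e.g. 1500
    finally:
        ns.close()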
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.279 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[87a166f1-cd19-4c2d-a2ef-bc536417b63e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.322 257641 DEBUG nova.compute.manager [req-912a0ffe-bfbe-42c5-94cb-b2b3d4a4f3c3 req-44273ba1-b069-48ed-a31f-f2256a209d89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:08.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.323 257641 DEBUG oslo_concurrency.lockutils [req-912a0ffe-bfbe-42c5-94cb-b2b3d4a4f3c3 req-44273ba1-b069-48ed-a31f-f2256a209d89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.323 257641 DEBUG oslo_concurrency.lockutils [req-912a0ffe-bfbe-42c5-94cb-b2b3d4a4f3c3 req-44273ba1-b069-48ed-a31f-f2256a209d89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.323 257641 DEBUG oslo_concurrency.lockutils [req-912a0ffe-bfbe-42c5-94cb-b2b3d4a4f3c3 req-44273ba1-b069-48ed-a31f-f2256a209d89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.324 257641 DEBUG nova.compute.manager [req-912a0ffe-bfbe-42c5-94cb-b2b3d4a4f3c3 req-44273ba1-b069-48ed-a31f-f2256a209d89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Processing event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
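Nova serializes external event delivery per instance: the "<uuid>-events" lock acquired and released above guards the map of registered waiters so that pop_instance_event can hand network-vif-plugged to the thread blocked in wait_for_instance_event. An illustrative sketch of that acquire/pop/release pattern (not Nova's actual implementation; 'pending_events' is a hypothetical stand-in), assuming oslo.concurrency is available:

    # Illustrative per-instance event lock; the real logic lives in
    # nova.compute.manager.InstanceEvents.
    from oslo_concurrency import lockutils

    instance_uuid = '5cc9587a-1715-4ff6-80c1-b79d2fc224c3'
    pending_events = {}  # hypothetical: event name -> waiter object

    with lockutils.lock(f'{instance_uuid}-events'):
        waiter = pending_events.pop(
            'network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242', None)
    if waiter is None:
        print('no waiter registered; the event is logged as unexpected/late')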
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.349 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d58dfd1b-38b8-443b-8f9e-fb55473ed0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.350 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.350 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.351 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.352 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539550 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:20:08 np0005539550 NetworkManager[49039]: <info>  [1764404408.3535] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.354 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
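The three ovsdbapp transactions above re-plug the metadata tap: drop it from br-ex if present (a no-op here, per "Transaction caused no change"), add it to br-int, and set the OVN iface-id on the Interface so ovn-controller can bind the logical port. Roughly the same operations expressed as ovs-vsctl calls (illustrative only; the agent talks OVSDB directly via ovsdbapp rather than shelling out):

    # ovs-vsctl equivalents of DelPortCommand / AddPortCommand / DbSetCommand.
    import subprocess

    port = 'tap58fd104d-40'
    iface_id = '49c2d2fc-d147-42b8-8b87-df4d04283e61'
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],   # DelPortCommand(if_exists=True)
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port],  # AddPortCommand(may_exist=True)
        ['ovs-vsctl', 'set', 'Interface', port,
         f'external_ids:iface-id={iface_id}'],                     # DbSetCommand on external_ids
    ):
        subprocess.run(cmd, check=True)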
Nov 29 03:20:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:08Z|00472|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.370 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.371 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.372 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccdc8e6-ff80-41c3-a70b-5ee5f5e6e39a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.372 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:20:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:08.373 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
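create_config_file writes the haproxy_cfg shown above to the .conf path named in the command, then the agent launches haproxy inside the network namespace through sudo/neutron-rootwrap (the PROCESS_TAG variable only tags the process for later lookup). A sketch of the direct equivalent without rootwrap, assuming root privileges and the paths from the log:

    # Validate, then start, the metadata haproxy inside the ovnmeta namespace
    # (illustrative; the agent does this via neutron-rootwrap).
    import subprocess

    ns = 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788'
    conf = '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'

    # 'haproxy -c' only parses and validates the configuration file.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-c', '-f', conf], check=True)
    # Starting with '-f' backgrounds the proxy, since the config sets 'daemon'.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', conf], check=True)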
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.455 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404408.4554772, 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.456 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.458 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.462 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.466 257641 INFO nova.virt.libvirt.driver [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance spawned successfully.#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.466 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.485 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.488 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.502 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.502 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.503 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.503 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.504 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.504 257641 DEBUG nova.virt.libvirt.driver [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.546 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.553 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404408.4564505, 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.554 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.584 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.588 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404408.4615674, 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.588 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.601 257641 INFO nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Took 7.97 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.602 257641 DEBUG nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.630 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.633 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.669 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
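The Started/Paused/Resumed burst above is normal libvirt lifecycle chatter while a guest is defined, briefly paused for setup, and resumed during spawn. sync_power_state compares the database value (0) against the hypervisor value (1) and skips the update while the spawning task is pending. Decoding those integers, using the constants as defined in nova/compute/power_state.py:

    # Power-state integers as they appear in the sync messages above.
    POWER_STATES = {
        0x00: 'NOSTATE',
        0x01: 'RUNNING',
        0x03: 'PAUSED',
        0x04: 'SHUTDOWN',
        0x06: 'CRASHED',
        0x07: 'SUSPENDED',
    }
    db_state, vm_state = 0, 1
    print(POWER_STATES[db_state], '->', POWER_STATES[vm_state])  # NOSTATE -> RUNNING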
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.693 257641 INFO nova.compute.manager [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Took 8.98 seconds to build instance.#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.713 257641 DEBUG oslo_concurrency.lockutils [None req-e6211bd9-6d51-4d59-b9ff-db057cc7b96d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:08 np0005539550 podman[329684]: 2025-11-29 08:20:08.778428011 +0000 UTC m=+0.071954431 container create a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:20:08 np0005539550 systemd[1]: Started libpod-conmon-a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854.scope.
Nov 29 03:20:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:08 np0005539550 podman[329684]: 2025-11-29 08:20:08.751494956 +0000 UTC m=+0.045021466 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:20:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb467b6d2b9a49908ec908dac535312017ae59dbac1bcb09c776d8f1bcb3f77f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:08 np0005539550 podman[329684]: 2025-11-29 08:20:08.859582025 +0000 UTC m=+0.153108475 container init a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:20:08 np0005539550 podman[329684]: 2025-11-29 08:20:08.864536231 +0000 UTC m=+0.158062661 container start a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
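The create/init/start triplet above is podman wrapping the metadata haproxy in the neutron-haproxy-ovnmeta-58fd104d-... container (its NOTICE lines about the forked worker appear further down under that name). One way to confirm the wrapper's state afterwards (illustrative; assumes podman on the host and the container from the log):

    # Query the runtime state of the haproxy wrapper container.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788'
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Status}}', name],
        check=True, capture_output=True, text=True)
    print(out.stdout.strip())  # expected 'running' once the start event has fired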
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.883 257641 DEBUG nova.compute.manager [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Received event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.884 257641 DEBUG nova.compute.manager [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing instance network info cache due to event network-changed-0bdc8d4b-e261-4398-8465-58392acd35a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.884 257641 DEBUG oslo_concurrency.lockutils [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.884 257641 DEBUG oslo_concurrency.lockutils [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:08 np0005539550 nova_compute[257631]: 2025-11-29 08:20:08.884 257641 DEBUG nova.network.neutron [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Refreshing network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:08 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [NOTICE]   (329704) : New worker (329706) forked
Nov 29 03:20:08 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [NOTICE]   (329704) : Loading success.
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:20:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:20:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 305 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 61 op/s
Nov 29 03:20:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:10.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:10 np0005539550 nova_compute[257631]: 2025-11-29 08:20:10.478 257641 DEBUG nova.compute.manager [req-125c8f41-a58a-49fb-98bb-96ccde765c6a req-aa5aacd3-5c62-48f3-93c4-0475fb6f6bc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:10 np0005539550 nova_compute[257631]: 2025-11-29 08:20:10.479 257641 DEBUG oslo_concurrency.lockutils [req-125c8f41-a58a-49fb-98bb-96ccde765c6a req-aa5aacd3-5c62-48f3-93c4-0475fb6f6bc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:10 np0005539550 nova_compute[257631]: 2025-11-29 08:20:10.480 257641 DEBUG oslo_concurrency.lockutils [req-125c8f41-a58a-49fb-98bb-96ccde765c6a req-aa5aacd3-5c62-48f3-93c4-0475fb6f6bc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:10 np0005539550 nova_compute[257631]: 2025-11-29 08:20:10.480 257641 DEBUG oslo_concurrency.lockutils [req-125c8f41-a58a-49fb-98bb-96ccde765c6a req-aa5aacd3-5c62-48f3-93c4-0475fb6f6bc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:10 np0005539550 nova_compute[257631]: 2025-11-29 08:20:10.480 257641 DEBUG nova.compute.manager [req-125c8f41-a58a-49fb-98bb-96ccde765c6a req-aa5aacd3-5c62-48f3-93c4-0475fb6f6bc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] No waiting events found dispatching network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:10 np0005539550 nova_compute[257631]: 2025-11-29 08:20:10.481 257641 WARNING nova.compute.manager [req-125c8f41-a58a-49fb-98bb-96ccde765c6a req-aa5aacd3-5c62-48f3-93c4-0475fb6f6bc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received unexpected event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 295 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 141 op/s
Nov 29 03:20:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:11.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Nov 29 03:20:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Nov 29 03:20:11 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Nov 29 03:20:11 np0005539550 nova_compute[257631]: 2025-11-29 08:20:11.666 257641 INFO nova.compute.manager [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Rebuilding instance#033[00m
Nov 29 03:20:11 np0005539550 nova_compute[257631]: 2025-11-29 08:20:11.671 257641 DEBUG nova.network.neutron [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updated VIF entry in instance network info cache for port 0bdc8d4b-e261-4398-8465-58392acd35a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:11 np0005539550 nova_compute[257631]: 2025-11-29 08:20:11.671 257641 DEBUG nova.network.neutron [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:11 np0005539550 nova_compute[257631]: 2025-11-29 08:20:11.714 257641 DEBUG oslo_concurrency.lockutils [req-35e296b9-a6c3-402d-bdfd-61c633ff6928 req-b6d5d505-6205-4581-825a-318bdaa487be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:11 np0005539550 nova_compute[257631]: 2025-11-29 08:20:11.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.093 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.118 257641 DEBUG nova.compute.manager [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.227 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.306 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.321 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:12.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.336 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.412 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'migration_context' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.426 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:20:12 np0005539550 nova_compute[257631]: 2025-11-29 08:20:12.429 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:20:13 np0005539550 nova_compute[257631]: 2025-11-29 08:20:13.064 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 295 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 4.2 MiB/s wr, 167 op/s
Nov 29 03:20:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:13.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:14.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.042 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.044 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.044 257641 DEBUG nova.compute.manager [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Going to confirm migration 16 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Nov 29 03:20:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 295 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.6 MiB/s wr, 199 op/s
Nov 29 03:20:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:15.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.657 257641 DEBUG neutronclient.v2_0.client [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 0bdc8d4b-e261-4398-8465-58392acd35a8 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.658 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.658 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.659 257641 DEBUG nova.network.neutron [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:15 np0005539550 nova_compute[257631]: 2025-11-29 08:20:15.659 257641 DEBUG nova.objects.instance [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'info_cache' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
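The PortBindingNotFound message above surfaces during confirm_resize: once migration 16 is confirmed, the source host asks Neutron to delete its binding for port 0bdc8d4b-... on compute-0, and a 404 simply means the binding is already gone; here it is logged at DEBUG and the flow continues with the network-info cache rebuild. A hedged sketch of the underlying Neutron port-bindings REST call, with placeholder endpoint and token:

    # Sketch of the binding delete behind the PortBindingNotFound message
    # (endpoint and token are hypothetical placeholders).
    import requests

    neutron = 'http://neutron.example.com:9696'   # hypothetical endpoint
    port_id = '0bdc8d4b-e261-4398-8465-58392acd35a8'
    host = 'compute-0.ctlplane.example.com'
    resp = requests.delete(
        f'{neutron}/v2.0/ports/{port_id}/bindings/{host}',
        headers={'X-Auth-Token': 'ADMIN_TOKEN'})  # hypothetical token
    if resp.status_code == 404:
        print('binding already removed (PortBindingNotFound) - treated as benign')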
Nov 29 03:20:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:16.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:16 np0005539550 nova_compute[257631]: 2025-11-29 08:20:16.840 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:17 np0005539550 nova_compute[257631]: 2025-11-29 08:20:17.229 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 295 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 2.2 MiB/s wr, 269 op/s
Nov 29 03:20:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:17 np0005539550 nova_compute[257631]: 2025-11-29 08:20:17.526 257641 DEBUG nova.network.neutron [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Updating instance_info_cache with network_info: [{"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:17 np0005539550 nova_compute[257631]: 2025-11-29 08:20:17.549 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-cf3d3db9-f753-47a8-93d5-7f0491bb03fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:17 np0005539550 nova_compute[257631]: 2025-11-29 08:20:17.549 257641 DEBUG nova.objects.instance [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid cf3d3db9-f753-47a8-93d5-7f0491bb03fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:17 np0005539550 nova_compute[257631]: 2025-11-29 08:20:17.647 257641 DEBUG nova.storage.rbd_utils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] removing snapshot(nova-resize) on rbd image(cf3d3db9-f753-47a8-93d5-7f0491bb03fd_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:20:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Nov 29 03:20:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Nov 29 03:20:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.010 257641 DEBUG nova.virt.libvirt.vif [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1568692522',display_name='tempest-TestNetworkAdvancedServerOps-server-1568692522',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1568692522',id=111,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPNKUwFrrTjn8atdc6IVHURjdCwbc8WxyLGXpa+LJc5sLs2eoepMjjuqxjn33AoGUMizcXrpPXctDgQXs8T7l76aOuh+gBdm/mktVIbC7S76mvgSpzr3zbuH99OXaXcKFA==',key_name='tempest-TestNetworkAdvancedServerOps-1778483648',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-kpo3ssd1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:13Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=cf3d3db9-f753-47a8-93d5-7f0491bb03fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.011 257641 DEBUG nova.network.os_vif_util [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "0bdc8d4b-e261-4398-8465-58392acd35a8", "address": "fa:16:3e:5a:07:26", "network": {"id": "96a9f8d0-94cb-4ef1-b5fc-814aeb66b309", "bridge": "br-int", "label": "tempest-network-smoke--589317975", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0bdc8d4b-e2", "ovs_interfaceid": "0bdc8d4b-e261-4398-8465-58392acd35a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.013 257641 DEBUG nova.network.os_vif_util [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.014 257641 DEBUG os_vif [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
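[annotation] The unplug above goes through the os-vif library's public entry points. A minimal sketch of the same round trip follows, with field values copied from the logged VIFOpenVSwitch object; the InstanceInfo name is hypothetical, and a real caller would also carry the full Network/port-profile data shown in the log:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # registers the 'ovs' plugin this port uses

    port = vif.VIFOpenVSwitch(
        id='0bdc8d4b-e261-4398-8465-58392acd35a8',
        address='fa:16:3e:5a:07:26',
        vif_name='tap0bdc8d4b-e2',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='96a9f8d0-94cb-4ef1-b5fc-814aeb66b309',
                                bridge='br-int'))
    info = instance_info.InstanceInfo(
        uuid='cf3d3db9-f753-47a8-93d5-7f0491bb03fd',
        name='instance-00000038')  # hypothetical libvirt domain name

    # produces exactly the "Unplugging vif" / "Successfully unplugged" pair
    os_vif.unplug(port, info)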
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.016 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.016 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0bdc8d4b-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.016 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
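[annotation] The DelPortCommand/do_commit pair above is ovsdbapp's idempotent port removal; "Transaction caused no change" means the tap device was already gone. A minimal sketch of driving the same command through the library, assuming the local ovsdb-server unix socket and a 10 s timeout:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # assumption: default local OVS db socket
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # if_exists=True makes this a no-op when the port is already removed,
    # which is the "no change" outcome logged above
    ovs.del_port('tap0bdc8d4b-e2', bridge='br-int',
                 if_exists=True).execute(check_error=True)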
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.018 257641 INFO os_vif [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5a:07:26,bridge_name='br-int',has_traffic_filtering=True,id=0bdc8d4b-e261-4398-8465-58392acd35a8,network=Network(96a9f8d0-94cb-4ef1-b5fc-814aeb66b309),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0bdc8d4b-e2')#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.018 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.019 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.261 257641 DEBUG oslo_concurrency.processutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:18.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491032700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.705 257641 DEBUG oslo_concurrency.processutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
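[annotation] nova's resource tracker shells out to ceph df to size the RBD-backed DISK_GB inventory it reports a few lines below. A minimal sketch of the same probe, assuming the usual ceph df JSON layout (a top-level "stats" object with total/avail byte counters):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)['stats']
    # assumption: key names match this Ceph release
    total_gb = stats['total_bytes'] / 1024 ** 3
    avail_gb = stats['total_avail_bytes'] / 1024 ** 3
    print(f'{avail_gb:.0f} GiB free of {total_gb:.0f} GiB')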
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.710 257641 DEBUG nova.compute.provider_tree [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.845 257641 DEBUG nova.scheduler.client.report [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:18 np0005539550 nova_compute[257631]: 2025-11-29 08:20:18.893 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:18.952 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:18.953 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:18.953 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:19 np0005539550 nova_compute[257631]: 2025-11-29 08:20:19.005 257641 INFO nova.scheduler.client.report [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocation for migration 96a5768a-dd2b-42ce-a3a7-1085b8029fe4#033[00m
Nov 29 03:20:19 np0005539550 nova_compute[257631]: 2025-11-29 08:20:19.051 257641 DEBUG oslo_concurrency.lockutils [None req-741e69bf-8a19-4d70-9eed-a0d457a380c1 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "cf3d3db9-f753-47a8-93d5-7f0491bb03fd" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 4.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
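[annotation] Every Acquiring/acquired/released triplet in this stream (compute_resources, the instance UUID lock, vgpu_resources further down) is emitted by oslo.concurrency's lock decorator. A minimal sketch of guarding a critical section the same way; the function name is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def drop_move_claim_at_source():
        # runs under the same kind of in-process lock the resource tracker
        # holds above; lockutils logs the waited/held durations at DEBUG
        pass

    drop_move_claim_at_source()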
Nov 29 03:20:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 295 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 22 KiB/s wr, 216 op/s
Nov 29 03:20:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0053557355643960546 of space, bias 1.0, pg target 1.6067206693188163 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29568760323608784 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:20:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
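[annotation] The autoscaler arithmetic in the block above is reproducible from the printed numbers: pg target = capacity ratio x bias x (target PGs per OSD x OSD count), then quantized to a power of two with a per-pool floor. A sketch of that math, assuming mon_target_pg_per_osd=100, 3 OSDs, a default floor of 32, and pg_num_min overrides of 1 for '.mgr' and 16 for the CephFS metadata pool; the floors are inferred from the printed results, not read from this cluster's configuration:

    import math

    def nearest_power_of_two(n):
        return 2 ** int(round(math.log2(n))) if n > 0 else 1

    def quantized_pg_target(capacity_ratio, bias, osd_count=3,
                            pg_num_min=32, target_pg_per_osd=100):
        # e.g. 'vms': 0.0053557... * 1.0 * 100 * 3 = 1.6067..., as logged
        raw = capacity_ratio * bias * target_pg_per_osd * osd_count
        return max(pg_num_min, nearest_power_of_two(raw))

    print(quantized_pg_target(0.0053557355643960546, 1.0))               # 32 ('vms')
    print(quantized_pg_target(2.0538165363856318e-05, 1.0, pg_num_min=1))    # 1 ('.mgr')
    print(quantized_pg_target(1.4540294062907128e-06, 4.0, pg_num_min=16))   # 16 ('cephfs.cephfs.meta')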
Nov 29 03:20:20 np0005539550 nova_compute[257631]: 2025-11-29 08:20:20.251 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404405.2497969, cf3d3db9-f753-47a8-93d5-7f0491bb03fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:20 np0005539550 nova_compute[257631]: 2025-11-29 08:20:20.251 257641 INFO nova.compute.manager [-] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:20:20 np0005539550 nova_compute[257631]: 2025-11-29 08:20:20.272 257641 DEBUG nova.compute.manager [None req-bfa48197-9613-4df1-be8a-97448e58b9fb - - - - - -] [instance: cf3d3db9-f753-47a8-93d5-7f0491bb03fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:20.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 311 MiB data, 974 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 671 KiB/s wr, 251 op/s
Nov 29 03:20:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:21.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:21 np0005539550 nova_compute[257631]: 2025-11-29 08:20:21.844 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:22 np0005539550 nova_compute[257631]: 2025-11-29 08:20:22.231 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:22.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:22Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:ee:a4 10.100.0.13
Nov 29 03:20:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:22Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:ee:a4 10.100.0.13
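[annotation] The DHCPOFFER/DHCPACK pair is answered by ovn-controller itself (the pinctrl thread), from DHCP_Options rows in the OVN northbound database rather than a dnsmasq process. A hedged sketch of wiring such options up by hand with ovn-nbctl; the option values and lease time are illustrative, and the logical port UUID is the one released below:

    import subprocess

    def nbctl(*args):
        return subprocess.run(['ovn-nbctl', *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    # create a DHCPv4 options row for the subnet and point the port at it
    opts = nbctl('dhcp-options-create', '10.100.0.0/28')
    nbctl('dhcp-options-set-options', opts,
          'router=10.100.0.1', 'server_id=10.100.0.1',
          'server_mac=fa:16:3e:00:00:01', 'lease_time=43200')  # illustrative
    nbctl('lsp-set-dhcpv4-options',
          '1ac0f281-7dd4-441c-ad6b-82816fd3c242', opts)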
Nov 29 03:20:22 np0005539550 nova_compute[257631]: 2025-11-29 08:20:22.471 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
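[annotation] The "resending shutdown" line comes from nova's clean-shutdown loop, which keeps nudging the guest with ACPI shutdown requests until it powers off or the timeout lapses (here it succeeds after 13 seconds, a few lines below). A toy sketch of the pattern against libvirt-python; the loop is illustrative, not nova's code, and dom is a libvirt.virDomain looked up by the caller:

    import time
    import libvirt

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        dom.shutdown()  # polite ACPI power-button press
        for waited in range(timeout):
            if dom.state()[0] == libvirt.VIR_DOMAIN_SHUTOFF:
                return True
            if waited and waited % retry_interval == 0:
                dom.shutdown()  # "Instance in state 1 ... resending shutdown"
            time.sleep(1)
        return False  # caller falls back to a hard destroy()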
Nov 29 03:20:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 337 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Nov 29 03:20:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:23.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:24 np0005539550 kernel: tap1ac0f281-7d (unregistering): left promiscuous mode
Nov 29 03:20:24 np0005539550 NetworkManager[49039]: <info>  [1764404424.9419] device (tap1ac0f281-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:24 np0005539550 nova_compute[257631]: 2025-11-29 08:20:24.952 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:24Z|00473|binding|INFO|Releasing lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 from this chassis (sb_readonly=0)
Nov 29 03:20:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:24Z|00474|binding|INFO|Setting lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 down in Southbound
Nov 29 03:20:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:24Z|00475|binding|INFO|Removing iface tap1ac0f281-7d ovn-installed in OVS
Nov 29 03:20:24 np0005539550 nova_compute[257631]: 2025-11-29 08:20:24.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:24.959 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:ee:a4 10.100.0.13'], port_security=['fa:16:3e:1d:ee:a4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cc9587a-1715-4ff6-80c1-b79d2fc224c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e7c0561a-dbc2-4575-a537-f1269d068f5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1ac0f281-7dd4-441c-ad6b-82816fd3c242) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:24.960 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1ac0f281-7dd4-441c-ad6b-82816fd3c242 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:20:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:24.961 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:24.962 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9eb2e9-5b6c-4cf5-b726-531f731db7bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:24.963 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:20:24 np0005539550 nova_compute[257631]: 2025-11-29 08:20:24.983 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:25 np0005539550 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 29 03:20:25 np0005539550 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000074.scope: Consumed 14.213s CPU time.
Nov 29 03:20:25 np0005539550 systemd-machined[216673]: Machine qemu-56-instance-00000074 terminated.
Nov 29 03:20:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [NOTICE]   (329704) : haproxy version is 2.8.14-c23fe91
Nov 29 03:20:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [NOTICE]   (329704) : path to executable is /usr/sbin/haproxy
Nov 29 03:20:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [WARNING]  (329704) : Exiting Master process...
Nov 29 03:20:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [WARNING]  (329704) : Exiting Master process...
Nov 29 03:20:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [ALERT]    (329704) : Current worker (329706) exited with code 143 (Terminated)
Nov 29 03:20:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[329700]: [WARNING]  (329704) : All workers exited. Exiting... (0)
Nov 29 03:20:25 np0005539550 systemd[1]: libpod-a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854.scope: Deactivated successfully.
Nov 29 03:20:25 np0005539550 podman[329858]: 2025-11-29 08:20:25.10007005 +0000 UTC m=+0.044074722 container died a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:20:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854-userdata-shm.mount: Deactivated successfully.
Nov 29 03:20:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fb467b6d2b9a49908ec908dac535312017ae59dbac1bcb09c776d8f1bcb3f77f-merged.mount: Deactivated successfully.
Nov 29 03:20:25 np0005539550 podman[329858]: 2025-11-29 08:20:25.148545723 +0000 UTC m=+0.092550335 container cleanup a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:20:25 np0005539550 systemd[1]: libpod-conmon-a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854.scope: Deactivated successfully.
Nov 29 03:20:25 np0005539550 podman[329890]: 2025-11-29 08:20:25.220292118 +0000 UTC m=+0.048003782 container remove a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.227 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[645b5d4b-41d7-4a73-9f63-6b7c3c0077c3]: (4, ('Sat Nov 29 08:20:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854)\na2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854\nSat Nov 29 08:20:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (a2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854)\na2fd2957be036e58dd55b14e64dfe6c538ed1f4d629850a8874990ddc721f854\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
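[annotation] The privsep reply above captures the agent's haproxy kill script stopping and then deleting the per-network container. A minimal sketch of that teardown; the container name is copied from the log and the stop timeout is an assumption:

    import subprocess

    name = 'neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788'
    # podman stop sends SIGTERM, hence the worker's exit code 143 (128+15)
    subprocess.run(['podman', 'stop', '-t', '10', name], check=True)
    subprocess.run(['podman', 'rm', name], check=True)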
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.229 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f305ee01-c699-40d6-8a12-b5ef416a8b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.231 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.233 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:25 np0005539550 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.254 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.256 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[55b92359-638f-4b33-979a-fd9389618b16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.272 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d3f0df-df58-4b6c-b4e5-a5e84bdcccfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.274 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0abe8a58-ea99-4d0f-ba06-68c96fd76227]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.294 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb1e1b7-6dc4-49de-9e82-196b0d86b4bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 745579, 'reachable_time': 23207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329916, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.297 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:20:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:25.298 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc1e637-6c8c-4cc7-986c-f432ceb54064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:25 np0005539550 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
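[annotation] remove_netns in neutron's privileged ip_lib rides on pyroute2, which is what produced the netlink dump and the "Namespace ... deleted" line above. A minimal sketch of deleting the same namespace directly, tolerating the already-gone case:

    import errno
    from pyroute2 import netns

    try:
        netns.remove('ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788')
    except OSError as e:
        if e.errno != errno.ENOENT:  # already gone: deletion is idempotent here
            raise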
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.312 257641 DEBUG nova.compute.manager [req-a4e33bba-5623-4998-b46c-0435dd364d8e req-51f7e0a1-dbdc-4389-84dd-cc951400f5d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-unplugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.312 257641 DEBUG oslo_concurrency.lockutils [req-a4e33bba-5623-4998-b46c-0435dd364d8e req-51f7e0a1-dbdc-4389-84dd-cc951400f5d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.313 257641 DEBUG oslo_concurrency.lockutils [req-a4e33bba-5623-4998-b46c-0435dd364d8e req-51f7e0a1-dbdc-4389-84dd-cc951400f5d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.313 257641 DEBUG oslo_concurrency.lockutils [req-a4e33bba-5623-4998-b46c-0435dd364d8e req-51f7e0a1-dbdc-4389-84dd-cc951400f5d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.313 257641 DEBUG nova.compute.manager [req-a4e33bba-5623-4998-b46c-0435dd364d8e req-51f7e0a1-dbdc-4389-84dd-cc951400f5d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] No waiting events found dispatching network-vif-unplugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.313 257641 WARNING nova.compute.manager [req-a4e33bba-5623-4998-b46c-0435dd364d8e req-51f7e0a1-dbdc-4389-84dd-cc951400f5d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received unexpected event network-vif-unplugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 for instance with vm_state active and task_state rebuilding.#033[00m
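[annotation] The "No waiting events found" / "Received unexpected event" pair means Neutron's network-vif-unplugged notification arrived while no thread was registered to wait for it, which is harmless during a rebuild. A toy model of that pop-or-warn dispatch; this class is hypothetical, not nova's implementation:

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = defaultdict(dict)  # instance uuid -> {event name: Event}

        def expect(self, uuid, name):
            ev = threading.Event()
            with self._lock:
                self._waiters[uuid][name] = ev
            return ev  # a worker blocks on ev.wait(timeout)

        def pop_instance_event(self, uuid, name):
            with self._lock:
                ev = self._waiters[uuid].pop(name, None)
            if ev is None:
                print(f'unexpected event {name} for {uuid}')  # the WARNING path
                return False
            ev.set()  # wakes the waiter, the expected path
            return True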
Nov 29 03:20:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 371 MiB data, 1013 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.7 MiB/s wr, 257 op/s
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.485 257641 INFO nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance shutdown successfully after 13 seconds.#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.493 257641 INFO nova.virt.libvirt.driver [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance destroyed successfully.#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.499 257641 INFO nova.virt.libvirt.driver [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance destroyed successfully.#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.501 257641 DEBUG nova.virt.libvirt.vif [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-192669347',display_name='tempest-ServerActionsTestJSON-server-1922181223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-192669347',id=116,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-e740scar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:11Z,user_data=None,user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5cc9587a-1715-4ff6-80c1-b79d2fc224c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.502 257641 DEBUG nova.network.os_vif_util [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.503 257641 DEBUG nova.network.os_vif_util [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.504 257641 DEBUG os_vif [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.507 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.507 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ac0f281-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.509 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.510 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.515 257641 INFO os_vif [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d')#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.898 257641 INFO nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Deleting instance files /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_del#033[00m
Nov 29 03:20:25 np0005539550 nova_compute[257631]: 2025-11-29 08:20:25.900 257641 INFO nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Deletion of /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_del complete#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.038 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.038 257641 INFO nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Creating image(s)#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.059 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.084 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.107 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.111 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.180 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
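[annotation] The prlimit wrapper on that qemu-img call is oslo.concurrency's guard against a malformed image making the probe chew CPU or memory. A minimal sketch that produces the same command line, with the limits copied from the log:

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3,  # --as=1073741824
                                        cpu_time=30)              # --cpu=30
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de',
        '--force-share', '--output=json', prlimit=limits)
    info = json.loads(out)
    print(info['virtual-size'], info['format'])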
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.181 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.183 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.183 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.214 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.217 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:26 np0005539550 podman[330014]: 2025-11-29 08:20:26.329701713 +0000 UTC m=+0.064243675 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:20:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:26 np0005539550 podman[330013]: 2025-11-29 08:20:26.354943225 +0000 UTC m=+0.082160921 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
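[annotation] The periodic health_status records above are podman running each container's configured healthcheck (the /openstack/healthcheck mount in the config_data). A one-liner sketch of triggering the same probe by hand; exit status 0 means healthy:

    import subprocess

    # same probe podman schedules for the ovn_metadata_agent container
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_metadata_agent']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')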
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.500 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.560 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] resizing rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
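
The two steps above are the rebuild's disk preparation: the cached base image is imported into the vms pool as the instance disk, then grown to the flavor's 1 GiB root size (1073741824 bytes). A minimal sketch of the same resize through the python-rados/python-rbd bindings, assuming they are installed and reusing the client.openstack credentials from the log:

    import rados
    import rbd

    # Connect as the same Ceph client the compute service uses above.
    with rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack") as cluster:
        with cluster.open_ioctx("vms") as ioctx:
            with rbd.Image(ioctx, "5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk") as image:
                image.resize(1073741824)  # grow the imported disk to 1 GiB
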
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.648 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.649 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Ensure instance console log exists: /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.649 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.650 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.650 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
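
The acquire/release pair around _allocate_mdevs (waited 0.000s, held 0.000s) is oslo.concurrency's lock helper at work; it emits exactly these "Acquiring lock" / "acquired" / "released" DEBUG lines with the wait and hold timings. A minimal sketch of how such a guarded section is written, not the driver's actual code:

    from oslo_concurrency import lockutils

    # Serialize a critical section under a named lock; lockutils logs the
    # acquire/release DEBUG lines with timings, as seen above.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # ...select free mediated devices while holding the lock...
        return []

    allocate_mdevs()
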
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.652 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Start _get_guest_xml network_info=[{"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.655 257641 WARNING nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.662 257641 DEBUG nova.virt.libvirt.host [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.663 257641 DEBUG nova.virt.libvirt.host [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.667 257641 DEBUG nova.virt.libvirt.host [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.668 257641 DEBUG nova.virt.libvirt.host [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
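
The probe above tries cgroups v1 first ("CPU controller missing"), then succeeds on cgroups v2. On a v2 host that amounts to checking whether "cpu" is listed in the unified hierarchy's controller file; a sketch of that check (not nova's actual code path):

    from pathlib import Path

    # On a cgroups-v2 host the enabled controllers are listed in one file;
    # the probe above reports "CPU controller found" when "cpu" appears here.
    controllers = Path("/sys/fs/cgroup/cgroup.controllers").read_text().split()
    print("cpu controller present:", "cpu" in controllers)
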
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.669 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.669 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.670 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.670 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.670 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.670 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.671 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.671 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.671 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.671 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.672 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.672 257641 DEBUG nova.virt.hardware [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
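
With flavor and image limits of 0:0:0, the topology search simply enumerates (sockets, cores, threads) factorizations of the vCPU count under the 65536 caps, so 1 vCPU yields the single topology 1:1:1 chosen above. A toy re-derivation of that enumeration, not nova's implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product is vcpus."""
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log above
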
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.672 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:26 np0005539550 nova_compute[257631]: 2025-11-29 08:20:26.686 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1030198103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.174 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
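
The mon dump that just returned is how the driver learns the cluster's monitor addresses; those become the three <host> entries in the guest XML's rbd <source> elements further down. A sketch of the same call through oslo's processutils, assuming the JSON layout of `ceph mon dump --format=json` (field names vary somewhat across Ceph releases):

    import json

    from oslo_concurrency import processutils

    # Same command the compute service runs above; pull out the monitor
    # addresses that become <host> entries in the libvirt disk definition.
    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    mons = json.loads(out)["mons"]
    print([mon["public_addr"].split("/")[0] for mon in mons])
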
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.202 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.207 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.236 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.412 257641 DEBUG nova.compute.manager [req-347c5677-87b6-4ce9-9b95-9165df52c178 req-3b22d777-ffe3-41ce-8fa0-be22133f9d9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.413 257641 DEBUG oslo_concurrency.lockutils [req-347c5677-87b6-4ce9-9b95-9165df52c178 req-3b22d777-ffe3-41ce-8fa0-be22133f9d9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.413 257641 DEBUG oslo_concurrency.lockutils [req-347c5677-87b6-4ce9-9b95-9165df52c178 req-3b22d777-ffe3-41ce-8fa0-be22133f9d9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.413 257641 DEBUG oslo_concurrency.lockutils [req-347c5677-87b6-4ce9-9b95-9165df52c178 req-3b22d777-ffe3-41ce-8fa0-be22133f9d9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.413 257641 DEBUG nova.compute.manager [req-347c5677-87b6-4ce9-9b95-9165df52c178 req-3b22d777-ffe3-41ce-8fa0-be22133f9d9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] No waiting events found dispatching network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.413 257641 WARNING nova.compute.manager [req-347c5677-87b6-4ce9-9b95-9165df52c178 req-3b22d777-ffe3-41ce-8fa0-be22133f9d9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received unexpected event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:20:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 363 MiB data, 1016 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.0 MiB/s wr, 204 op/s
Nov 29 03:20:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:27.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3370199840' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.692 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.694 257641 DEBUG nova.virt.libvirt.vif [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-192669347',display_name='tempest-ServerActionsTestJSON-server-1922181223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-192669347',id=116,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-e740scar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:25Z,user_data=None,user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5cc9587a-1715-4ff6-80c1-b79d2fc224c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.695 257641 DEBUG nova.network.os_vif_util [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.695 257641 DEBUG nova.network.os_vif_util [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.699 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <uuid>5cc9587a-1715-4ff6-80c1-b79d2fc224c3</uuid>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <name>instance-00000074</name>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestJSON-server-1922181223</nova:name>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:20:26</nova:creationTime>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <nova:port uuid="1ac0f281-7dd4-441c-ad6b-82816fd3c242">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <entry name="serial">5cc9587a-1715-4ff6-80c1-b79d2fc224c3</entry>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <entry name="uuid">5cc9587a-1715-4ff6-80c1-b79d2fc224c3</entry>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:1d:ee:a4"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <target dev="tap1ac0f281-7d"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/console.log" append="off"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:20:27 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:20:27 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:20:27 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:20:27 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
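
The XML dumped above is the domain definition handed to libvirt for the rebuilt guest. A quick sanity check is to parse it and list the rbd-backed disks with their monitor endpoints; a sketch using the standard library, assuming the dump has been saved locally as domain.xml (a hypothetical file name):

    import xml.etree.ElementTree as ET

    # Parse the dumped domain XML and list each rbd disk source with its
    # Ceph monitor endpoints, mirroring the <disk> elements shown above.
    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        if source is not None and source.get("protocol") == "rbd":
            hosts = ", ".join("%s:%s" % (h.get("name"), h.get("port"))
                              for h in source.findall("host"))
            print(source.get("name"), "->", hosts)
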
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.700 257641 DEBUG nova.compute.manager [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Preparing to wait for external event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.701 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.702 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.702 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.703 257641 DEBUG nova.virt.libvirt.vif [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-192669347',display_name='tempest-ServerActionsTestJSON-server-1922181223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-192669347',id=116,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-e740scar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:25Z,user_data=None,user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5cc9587a-1715-4ff6-80c1-b79d2fc224c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.704 257641 DEBUG nova.network.os_vif_util [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.705 257641 DEBUG nova.network.os_vif_util [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.706 257641 DEBUG os_vif [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.707 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.708 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.709 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.713 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.713 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ac0f281-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.714 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ac0f281-7d, col_values=(('external_ids', {'iface-id': '1ac0f281-7dd4-441c-ad6b-82816fd3c242', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:ee:a4', 'vm-uuid': '5cc9587a-1715-4ff6-80c1-b79d2fc224c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
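
The two-command transaction above is os-vif's plug step: add the tap device to br-int, then stamp the interface's external_ids with the Neutron port id, MAC, and instance UUID; the iface-id is what ovn-controller matches when it claims the port a moment later. A sketch of an equivalent transaction with ovsdbapp, assuming the local OVS database socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Mirror the AddPortCommand + DbSetCommand transaction from the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap1ac0f281-7d", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap1ac0f281-7d",
            ("external_ids", {
                "iface-id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:1d:ee:a4",
                "vm-uuid": "5cc9587a-1715-4ff6-80c1-b79d2fc224c3",
            })))
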
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:27 np0005539550 NetworkManager[49039]: <info>  [1764404427.7668] manager: (tap1ac0f281-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.769 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.773 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.774 257641 INFO os_vif [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d')#033[00m
Nov 29 03:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.916 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.917 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.917 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No VIF found with MAC fa:16:3e:1d:ee:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.917 257641 INFO nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Using config drive#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.943 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.951 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.951 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.951 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.979 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.983 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.983 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.983 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:20:27 np0005539550 nova_compute[257631]: 2025-11-29 08:20:27.984 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.013 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'keypairs' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:28.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.355 257641 INFO nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Creating config drive at /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config#033[00m
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.361 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpcc4tny execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.502 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpcc4tny" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
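[editor's note] The config drive is burned with a plain mkisofs run. A sketch mirroring the logged invocation; /tmp/tmphpcc4tny is the staging directory nova created, and the multi-word -publisher value is passed as a single argument:

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', '/var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/tmphpcc4tny'],
        check=True)  # the logged run returned 0 in 0.141s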
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.538 257641 DEBUG nova.storage.rbd_utils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.542 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.724 257641 DEBUG oslo_concurrency.processutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config 5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.725 257641 INFO nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Deleting local config drive /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config because it was imported into RBD.#033[00m
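[editor's note] The rbd_utils lines indicate an RBD-backed image backend: the ISO is pushed into the vms pool and the local copy is then dropped. A sketch mirroring the logged rbd import and the deletion announced above:

    import os
    import subprocess

    disk = '/var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3/disk.config'
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', disk,
         '5cc9587a-1715-4ff6-80c1-b79d2fc224c3_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(disk)  # matches "Deleting local config drive ..." above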
Nov 29 03:20:28 np0005539550 kernel: tap1ac0f281-7d: entered promiscuous mode
Nov 29 03:20:28 np0005539550 NetworkManager[49039]: <info>  [1764404428.7804] manager: (tap1ac0f281-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.781 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:28Z|00476|binding|INFO|Claiming lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 for this chassis.
Nov 29 03:20:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:28Z|00477|binding|INFO|1ac0f281-7dd4-441c-ad6b-82816fd3c242: Claiming fa:16:3e:1d:ee:a4 10.100.0.13
Nov 29 03:20:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:28Z|00478|binding|INFO|Setting lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 ovn-installed in OVS
Nov 29 03:20:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:28Z|00479|binding|INFO|Setting lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 up in Southbound
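[editor's note] ovn-controller's claim sequence (Claiming lport, setting ovn-installed, marking it up in Southbound) can be checked against the Southbound DB afterwards. A sketch using the standard ovn-sbctl client, run wherever the SB DB is reachable:

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=1ac0f281-7dd4-441c-ad6b-82816fd3c242'],
        capture_output=True, text=True, check=True).stdout
    print(out)  # the chassis column should name this host once the claim lands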
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.803 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:ee:a4 10.100.0.13'], port_security=['fa:16:3e:1d:ee:a4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cc9587a-1715-4ff6-80c1-b79d2fc224c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e7c0561a-dbc2-4575-a537-f1269d068f5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1ac0f281-7dd4-441c-ad6b-82816fd3c242) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:28 np0005539550 nova_compute[257631]: 2025-11-29 08:20:28.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.804 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1ac0f281-7dd4-441c-ad6b-82816fd3c242 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.806 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
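[editor's note] The agent reacts to that claim through an ovsdbapp row event matched against the SB Port_Binding table, as the "Matched UPDATE: PortBindingUpdatedEvent" line shows. A minimal sketch of such an event class; the handler body is illustrative, not neutron's actual code:

    from ovsdbapp.event import RowEvent

    class PortBindingUpdatedEvent(RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the parameters printed in the log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)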
Nov 29 03:20:28 np0005539550 systemd-udevd[330278]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.817 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[778d80c3-b696-458e-a82a-a640610b65a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.818 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.821 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.821 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[79c8857e-00ee-48f1-aafa-e1112cdeed7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.824 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fe0916a4-adc3-47d9-b7f9-3627284c6495]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
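[editor's note] Provisioning the datapath means building a veth pair with one end inside the ovnmeta- namespace; neutron does this through privsep'd pyroute2 calls, which is what the reply lines above are. A rough sketch of the equivalent (requires root and an existing named namespace; not neutron's exact code):

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # Create tap58fd104d-40 <-> tap58fd104d-41, then move the -41 end
        # into the ovnmeta namespace, as the agent log describes.
        ipr.link('add', ifname='tap58fd104d-40', kind='veth',
                 peer='tap58fd104d-41')
        idx = ipr.link_lookup(ifname='tap58fd104d-41')[0]
        ipr.link('set', index=idx,
                 net_ns_fd='ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788')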
Nov 29 03:20:28 np0005539550 NetworkManager[49039]: <info>  [1764404428.8257] device (tap1ac0f281-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:28 np0005539550 systemd-machined[216673]: New machine qemu-57-instance-00000074.
Nov 29 03:20:28 np0005539550 NetworkManager[49039]: <info>  [1764404428.8272] device (tap1ac0f281-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.837 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[62bfa08d-f106-48b9-a90e-fe72d867b7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 systemd[1]: Started Virtual Machine qemu-57-instance-00000074.
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.863 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d824ef6-77d6-4f62-80f1-c2b5ca05c515]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.888 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[894335c3-aba1-4b75-9511-d1e216c9f45c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.894 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6f6f70-3eff-4234-9498-6095d34fe399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 NetworkManager[49039]: <info>  [1764404428.8953] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.928 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[059f43b8-c888-42c0-b473-e63574c7ac18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.931 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[de15303a-3a92-4bb4-8815-73765002ba26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 NetworkManager[49039]: <info>  [1764404428.9562] device (tap58fd104d-40): carrier: link connected
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.961 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ce316cd4-2561-4a8e-bfac-587c62b922c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.978 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b33de7-dce6-48a9-8fa6-a5492c0db9a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747662, 'reachable_time': 21886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330314, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:28.991 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[882c2ed0-47ca-4b21-99f8-8c1581f83d48]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 747662, 'tstamp': 747662}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330315, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.006 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[114496dd-ed5c-4e76-9e44-b8d08836d186]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 134], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747662, 'reachable_time': 21886, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 330316, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.040 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[381fdf45-11a9-4174-a897-b0c86eccc45f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.115 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b01920-e65b-4fea-a3e7-65e43f256e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.117 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.117 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.117 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.119 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:29 np0005539550 NetworkManager[49039]: <info>  [1764404429.1207] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Nov 29 03:20:29 np0005539550 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.123 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.124 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
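[editor's note] The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) plug the root-namespace veth end into br-int and tag it with the metadata port's iface-id. A sketch of the equivalent ovs-vsctl calls:

    import subprocess

    subprocess.run(['ovs-vsctl', '--if-exists', 'del-port',
                    'br-ex', 'tap58fd104d-40'], check=True)
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port',
                    'br-int', 'tap58fd104d-40',
                    '--', 'set', 'Interface', 'tap58fd104d-40',
                    'external_ids:iface-id=49c2d2fc-d147-42b8-8b87-df4d04283e61'],
                   check=True)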
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.125 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:29Z|00480|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.140 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.141 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.142 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2faee05a-d8ae-4ac6-9998-edbacfebb3e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.144 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:20:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:29.145 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
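[editor's note] The rendered config above is handed to haproxy inside the ovnmeta namespace; the logged command wraps the launch in neutron-rootwrap and a PROCESS_TAG environment variable. A stripped-down sketch of the same launch, with an optional syntax check first (haproxy's -c flag validates without starting):

    import subprocess

    cfg = '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'
    subprocess.run(['haproxy', '-c', '-f', cfg], check=True)  # validate only
    subprocess.run(['ip', 'netns', 'exec',
                    'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788',
                    'haproxy', '-f', cfg], check=True)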
Nov 29 03:20:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 361 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.5 MiB/s wr, 269 op/s
Nov 29 03:20:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.545 257641 DEBUG nova.compute.manager [req-2cc4d87c-3e61-4b5f-962d-27e84ef4ada5 req-29cc7b4b-ca19-4cd6-a4ff-1b4009698cd3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.546 257641 DEBUG oslo_concurrency.lockutils [req-2cc4d87c-3e61-4b5f-962d-27e84ef4ada5 req-29cc7b4b-ca19-4cd6-a4ff-1b4009698cd3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.547 257641 DEBUG oslo_concurrency.lockutils [req-2cc4d87c-3e61-4b5f-962d-27e84ef4ada5 req-29cc7b4b-ca19-4cd6-a4ff-1b4009698cd3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.547 257641 DEBUG oslo_concurrency.lockutils [req-2cc4d87c-3e61-4b5f-962d-27e84ef4ada5 req-29cc7b4b-ca19-4cd6-a4ff-1b4009698cd3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.547 257641 DEBUG nova.compute.manager [req-2cc4d87c-3e61-4b5f-962d-27e84ef4ada5 req-29cc7b4b-ca19-4cd6-a4ff-1b4009698cd3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Processing event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:20:29 np0005539550 podman[330366]: 2025-11-29 08:20:29.556053485 +0000 UTC m=+0.053076371 container create 65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 03:20:29 np0005539550 systemd[1]: Started libpod-conmon-65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c.scope.
Nov 29 03:20:29 np0005539550 podman[330366]: 2025-11-29 08:20:29.530585377 +0000 UTC m=+0.027608273 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:20:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1fd56848cae252ff30538f08f8c4dc2ae6c985b295ba42a402e9456c01682c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.639 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.640 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404429.6390126, 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.641 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.644 257641 DEBUG nova.compute.manager [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.648 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.653 257641 INFO nova.virt.libvirt.driver [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance spawned successfully.#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.653 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.660 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539550 podman[330366]: 2025-11-29 08:20:29.661267881 +0000 UTC m=+0.158290817 container init 65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.665 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:29 np0005539550 podman[330366]: 2025-11-29 08:20:29.669005327 +0000 UTC m=+0.166028213 container start 65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
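[editor's note] The create/init/start trio above is the podman lifecycle for the per-network haproxy container. A quick sketch for confirming it came up; the container name is taken from the log:

    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect',
         'neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788'],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out)[0]['State']['Status'])  # "running" after the start event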
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.679 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.680 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.681 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.682 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.683 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.683 257641 DEBUG nova.virt.libvirt.driver [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.693 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.709 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404429.63934, 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.709 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:20:29 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[330404]: [NOTICE]   (330409) : New worker (330411) forked
Nov 29 03:20:29 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[330404]: [NOTICE]   (330409) : Loading success.
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.758 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.761 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404429.6475396, 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.762 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.768 257641 DEBUG nova.compute.manager [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.800 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.804 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.818 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Updating instance_info_cache with network_info: [{"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.840 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.841 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.841 257641 DEBUG nova.objects.instance [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.865 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-5cc9587a-1715-4ff6-80c1-b79d2fc224c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.866 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.866 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.922 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.922 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.923 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.936 257641 DEBUG oslo_concurrency.lockutils [None req-419a16b8-46c6-44c0-9b1b-b755b3f6d321 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.964 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.965 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.965 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.965 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:20:29 np0005539550 nova_compute[257631]: 2025-11-29 08:20:29.966 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:30.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210349605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.464 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
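[editor's note] The resource audit shells out to ceph df to size the RBD-backed storage, as the CMD lines above show. A sketch of running and parsing the same command; total_bytes/total_avail_bytes are the standard top-level stats keys in ceph's JSON output:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])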
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.542 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.543 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.715 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.717 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4255MB free_disk=20.839256286621094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.717 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.718 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.804 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.805 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.805 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.823 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.842 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.843 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.868 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.892 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:20:30 np0005539550 nova_compute[257631]: 2025-11-29 08:20:30.936 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/210966611' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.394 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.400 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.417 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.440 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.441 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 374 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 8.8 MiB/s wr, 350 op/s
Nov 29 03:20:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.646 257641 DEBUG nova.compute.manager [req-79bff6bd-6cb5-44c6-a9b6-09e460197ba1 req-402f6331-83fb-44be-a304-00280f26d4ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.646 257641 DEBUG oslo_concurrency.lockutils [req-79bff6bd-6cb5-44c6-a9b6-09e460197ba1 req-402f6331-83fb-44be-a304-00280f26d4ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.647 257641 DEBUG oslo_concurrency.lockutils [req-79bff6bd-6cb5-44c6-a9b6-09e460197ba1 req-402f6331-83fb-44be-a304-00280f26d4ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.648 257641 DEBUG oslo_concurrency.lockutils [req-79bff6bd-6cb5-44c6-a9b6-09e460197ba1 req-402f6331-83fb-44be-a304-00280f26d4ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.648 257641 DEBUG nova.compute.manager [req-79bff6bd-6cb5-44c6-a9b6-09e460197ba1 req-402f6331-83fb-44be-a304-00280f26d4ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] No waiting events found dispatching network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:31 np0005539550 nova_compute[257631]: 2025-11-29 08:20:31.648 257641 WARNING nova.compute.manager [req-79bff6bd-6cb5-44c6-a9b6-09e460197ba1 req-402f6331-83fb-44be-a304-00280f26d4ac 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received unexpected event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:20:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:32 np0005539550 nova_compute[257631]: 2025-11-29 08:20:32.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:32 np0005539550 nova_compute[257631]: 2025-11-29 08:20:32.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 374 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 7.6 MiB/s wr, 375 op/s
Nov 29 03:20:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:33.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.050 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.051 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.051 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.051 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.051 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.053 257641 INFO nova.compute.manager [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Terminating instance#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.054 257641 DEBUG nova.compute.manager [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:20:34 np0005539550 kernel: tap1ac0f281-7d (unregistering): left promiscuous mode
Nov 29 03:20:34 np0005539550 NetworkManager[49039]: <info>  [1764404434.0928] device (tap1ac0f281-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:20:34 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:34Z|00481|binding|INFO|Releasing lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 from this chassis (sb_readonly=0)
Nov 29 03:20:34 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:34Z|00482|binding|INFO|Setting lport 1ac0f281-7dd4-441c-ad6b-82816fd3c242 down in Southbound
Nov 29 03:20:34 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:34Z|00483|binding|INFO|Removing iface tap1ac0f281-7d ovn-installed in OVS
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.103 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.112 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:ee:a4 10.100.0.13'], port_security=['fa:16:3e:1d:ee:a4 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5cc9587a-1715-4ff6-80c1-b79d2fc224c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e7c0561a-dbc2-4575-a537-f1269d068f5e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1ac0f281-7dd4-441c-ad6b-82816fd3c242) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.113 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1ac0f281-7dd4-441c-ad6b-82816fd3c242 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.114 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.117 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2d172fd1-18ad-445c-b94a-8a07eb866a35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.118 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.129 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:34 np0005539550 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000074.scope: Deactivated successfully.
Nov 29 03:20:34 np0005539550 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000074.scope: Consumed 5.345s CPU time.
Nov 29 03:20:34 np0005539550 systemd-machined[216673]: Machine qemu-57-instance-00000074 terminated.
Nov 29 03:20:34 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[330404]: [NOTICE]   (330409) : haproxy version is 2.8.14-c23fe91
Nov 29 03:20:34 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[330404]: [NOTICE]   (330409) : path to executable is /usr/sbin/haproxy
Nov 29 03:20:34 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[330404]: [WARNING]  (330409) : Exiting Master process...
Nov 29 03:20:34 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[330404]: [ALERT]    (330409) : Current worker (330411) exited with code 143 (Terminated)
Nov 29 03:20:34 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[330404]: [WARNING]  (330409) : All workers exited. Exiting... (0)
Nov 29 03:20:34 np0005539550 systemd[1]: libpod-65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c.scope: Deactivated successfully.
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:34 np0005539550 podman[330492]: 2025-11-29 08:20:34.279309916 +0000 UTC m=+0.055035791 container died 65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.285 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.294 257641 INFO nova.virt.libvirt.driver [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Instance destroyed successfully.#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.296 257641 DEBUG nova.objects.instance [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:20:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4f1fd56848cae252ff30538f08f8c4dc2ae6c985b295ba42a402e9456c01682c-merged.mount: Deactivated successfully.
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.320 257641 DEBUG nova.virt.libvirt.vif [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:19:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-192669347',display_name='tempest-ServerActionsTestJSON-server-1922181223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-192669347',id=116,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-e740scar',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:20:29Z,user_data=None,user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5cc9587a-1715-4ff6-80c1-b79d2fc224c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.320 257641 DEBUG nova.network.os_vif_util [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "address": "fa:16:3e:1d:ee:a4", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ac0f281-7d", "ovs_interfaceid": "1ac0f281-7dd4-441c-ad6b-82816fd3c242", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.321 257641 DEBUG nova.network.os_vif_util [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.322 257641 DEBUG os_vif [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.324 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ac0f281-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:34 np0005539550 podman[330492]: 2025-11-29 08:20:34.324809403 +0000 UTC m=+0.100535278 container cleanup 65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.327 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.331 257641 INFO os_vif [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ee:a4,bridge_name='br-int',has_traffic_filtering=True,id=1ac0f281-7dd4-441c-ad6b-82816fd3c242,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ac0f281-7d')#033[00m
Nov 29 03:20:34 np0005539550 systemd[1]: libpod-conmon-65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c.scope: Deactivated successfully.
Nov 29 03:20:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:34.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:34 np0005539550 podman[330531]: 2025-11-29 08:20:34.394952817 +0000 UTC m=+0.045525569 container remove 65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.401 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aeba7b86-8eb1-4b6d-b06b-d52063be8c81]: (4, ('Sat Nov 29 08:20:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c)\n65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c\nSat Nov 29 08:20:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c)\n65d7fac3144ef0f146a4644a6ae39202fed5612e44478bfec09520b25eeebc8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.404 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[880dc442-b981-4d19-abc4-27a661e2724f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.405 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.407 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:34 np0005539550 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.424 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.428 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[42484f11-0e20-4e9f-b03a-f6ce5af474e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.445 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[52a2a408-e8bb-43e7-9b06-d843776afa97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.446 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fbae4fbb-689f-4f2f-a08d-0ac6739b8870]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.463 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9fac405b-3805-41ca-af43-7f5f421e7ff2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 747654, 'reachable_time': 15985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330564, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.467 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:20:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:34.467 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[cb4c556d-0249-4b89-85cf-025ec313220f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.595 257641 DEBUG nova.compute.manager [req-e05be91f-5b26-4568-8370-797636bff80e req-9168bcb0-8d04-4cfb-b8ee-b35671180318 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-unplugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.595 257641 DEBUG oslo_concurrency.lockutils [req-e05be91f-5b26-4568-8370-797636bff80e req-9168bcb0-8d04-4cfb-b8ee-b35671180318 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.596 257641 DEBUG oslo_concurrency.lockutils [req-e05be91f-5b26-4568-8370-797636bff80e req-9168bcb0-8d04-4cfb-b8ee-b35671180318 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.596 257641 DEBUG oslo_concurrency.lockutils [req-e05be91f-5b26-4568-8370-797636bff80e req-9168bcb0-8d04-4cfb-b8ee-b35671180318 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.596 257641 DEBUG nova.compute.manager [req-e05be91f-5b26-4568-8370-797636bff80e req-9168bcb0-8d04-4cfb-b8ee-b35671180318 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] No waiting events found dispatching network-vif-unplugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.596 257641 DEBUG nova.compute.manager [req-e05be91f-5b26-4568-8370-797636bff80e req-9168bcb0-8d04-4cfb-b8ee-b35671180318 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-unplugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.791 257641 INFO nova.virt.libvirt.driver [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Deleting instance files /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_del#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.792 257641 INFO nova.virt.libvirt.driver [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Deletion of /var/lib/nova/instances/5cc9587a-1715-4ff6-80c1-b79d2fc224c3_del complete#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.881 257641 INFO nova.compute.manager [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.882 257641 DEBUG oslo.service.loopingcall [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.882 257641 DEBUG nova.compute.manager [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:20:34 np0005539550 nova_compute[257631]: 2025-11-29 08:20:34.882 257641 DEBUG nova.network.neutron [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:20:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 362 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.8 MiB/s wr, 377 op/s
Nov 29 03:20:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:35.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:35 np0005539550 nova_compute[257631]: 2025-11-29 08:20:35.775 257641 DEBUG nova.network.neutron [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:35 np0005539550 nova_compute[257631]: 2025-11-29 08:20:35.810 257641 INFO nova.compute.manager [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Took 0.93 seconds to deallocate network for instance.#033[00m
Nov 29 03:20:35 np0005539550 nova_compute[257631]: 2025-11-29 08:20:35.870 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:35 np0005539550 nova_compute[257631]: 2025-11-29 08:20:35.870 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:35 np0005539550 nova_compute[257631]: 2025-11-29 08:20:35.918 257641 DEBUG oslo_concurrency.processutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146030209' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.365 257641 DEBUG oslo_concurrency.processutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.372 257641 DEBUG nova.compute.provider_tree [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.387 257641 DEBUG nova.scheduler.client.report [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.410 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.434 257641 INFO nova.scheduler.client.report [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Deleted allocations for instance 5cc9587a-1715-4ff6-80c1-b79d2fc224c3#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.436 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.436 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.436 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.502 257641 DEBUG oslo_concurrency.lockutils [None req-2f9cb262-1284-4983-9aa3-ce4165bcea6d 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.608 257641 DEBUG nova.compute.manager [req-6b56a04a-3ecb-4fa5-9699-879919b667f5 req-5ed3e1d0-3035-486c-96b2-e2d2f7d9eb0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-deleted-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.745 257641 DEBUG nova.compute.manager [req-b93f2f7f-25fe-488e-970f-4de46f769aab req-374fd2d1-6895-49a7-940d-ba7a520f488f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.746 257641 DEBUG oslo_concurrency.lockutils [req-b93f2f7f-25fe-488e-970f-4de46f769aab req-374fd2d1-6895-49a7-940d-ba7a520f488f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.747 257641 DEBUG oslo_concurrency.lockutils [req-b93f2f7f-25fe-488e-970f-4de46f769aab req-374fd2d1-6895-49a7-940d-ba7a520f488f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.747 257641 DEBUG oslo_concurrency.lockutils [req-b93f2f7f-25fe-488e-970f-4de46f769aab req-374fd2d1-6895-49a7-940d-ba7a520f488f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5cc9587a-1715-4ff6-80c1-b79d2fc224c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.747 257641 DEBUG nova.compute.manager [req-b93f2f7f-25fe-488e-970f-4de46f769aab req-374fd2d1-6895-49a7-940d-ba7a520f488f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] No waiting events found dispatching network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:36 np0005539550 nova_compute[257631]: 2025-11-29 08:20:36.748 257641 WARNING nova.compute.manager [req-b93f2f7f-25fe-488e-970f-4de46f769aab req-374fd2d1-6895-49a7-940d-ba7a520f488f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Received unexpected event network-vif-plugged-1ac0f281-7dd4-441c-ad6b-82816fd3c242 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:20:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:37 np0005539550 nova_compute[257631]: 2025-11-29 08:20:37.275 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 343 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.6 MiB/s wr, 362 op/s
Nov 29 03:20:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:37.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:38.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
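The anonymous "HEAD / HTTP/1.0" requests arriving every second or two from 192.168.122.100 and 192.168.122.102 look like load-balancer health probes against radosgw's beast frontend. A minimal way to issue the same probe from Python; the port (8080) is an assumption, the access lines do not record it:

    import http.client

    # Probe the local RGW the way the 200-status HEAD requests above do.
    conn = http.client.HTTPConnection("np0005539550", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # expect 200, matching http_status=200 above
    conn.close()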
Nov 29 03:20:38 np0005539550 podman[330642]: 2025-11-29 08:20:38.374816111 +0000 UTC m=+0.104001726 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
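The config_data payload embedded in the ovn_controller health event is a Python literal (single quotes, bare True), not JSON, so ast.literal_eval rather than json.loads is the right parser if you want fields out of it. A short sketch over a trimmed stand-in for the full value:

    import ast

    # Trimmed stand-in for the config_data=... value in the log line above.
    config_data = ("{'depends_on': ['openvswitch.service'], "
                   "'healthcheck': {'test': '/openstack/healthcheck'}, "
                   "'net': 'host', 'privileged': True}")
    cfg = ast.literal_eval(config_data)  # json.loads would reject single quotes
    print(cfg["healthcheck"]["test"])     # -> /openstack/healthcheck
    print(cfg["net"], cfg["privileged"])  # -> host True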
Nov 29 03:20:38 np0005539550 nova_compute[257631]: 2025-11-29 08:20:38.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:20:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:20:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1595497319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:20:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:20:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1595497319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
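The df and osd pool get-quota dispatches above are mon commands sent by a librados client authenticating as client.openstack. A sketch of the same df call via python-rados, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable from where this runs:

    import json
    import rados

    # conffile path and client name taken from the audit lines above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "df", "format": "json"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    if ret == 0:
        df = json.loads(outbuf)
        print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
    cluster.shutdown()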
Nov 29 03:20:39 np0005539550 nova_compute[257631]: 2025-11-29 08:20:39.327 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 311 MiB data, 1009 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.0 MiB/s wr, 323 op/s
Nov 29 03:20:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:39.897 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:20:39 np0005539550 nova_compute[257631]: 2025-11-29 08:20:39.898 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:39.899 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:20:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:40.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.613 257641 DEBUG nova.compute.manager [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.673 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.674 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.692 257641 DEBUG nova.objects.instance [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_requests' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.706 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.706 257641 INFO nova.compute.claims [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.707 257641 DEBUG nova.objects.instance [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.720 257641 DEBUG nova.objects.instance [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.761 257641 INFO nova.compute.resource_tracker [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating resource usage from migration 638f6982-41f5-48ad-af75-0ac5e6748344
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.762 257641 DEBUG nova.compute.resource_tracker [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Starting to track incoming migration 638f6982-41f5-48ad-af75-0ac5e6748344 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 03:20:40 np0005539550 nova_compute[257631]: 2025-11-29 08:20:40.821 257641 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1091310843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:20:41 np0005539550 nova_compute[257631]: 2025-11-29 08:20:41.321 257641 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
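Here nova shells out through oslo.concurrency's processutils rather than librados; a minimal equivalent of the logged call, using exactly the flags shown in the command line above:

    from oslo_concurrency import processutils

    # Same command the log records, one argv element per token.
    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    )
    print(out[:120])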
Nov 29 03:20:41 np0005539550 nova_compute[257631]: 2025-11-29 08:20:41.327 257641 DEBUG nova.compute.provider_tree [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:20:41 np0005539550 nova_compute[257631]: 2025-11-29 08:20:41.347 257641 DEBUG nova.scheduler.client.report [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:20:41 np0005539550 nova_compute[257631]: 2025-11-29 08:20:41.383 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:41 np0005539550 nova_compute[257631]: 2025-11-29 08:20:41.384 257641 INFO nova.compute.manager [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Migrating
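For the inventory payload a few lines above, placement derives usable capacity per resource class as (total - reserved) * allocation_ratio; working that arithmetic through for this provider:

    # Capacity implied by the inventory data logged above. min_unit/max_unit/
    # step_size constrain individual allocations, not the total.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1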
Nov 29 03:20:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 249 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.3 MiB/s wr, 290 op/s
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:20:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:41 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 73af7ad9-ffc8-4163-8385-a3d1f48be8d2 does not exist
Nov 29 03:20:41 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 44161181-30cc-4174-86ab-8db80bc9f96d does not exist
Nov 29 03:20:41 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 76bf37a8-1649-4fae-94d4-b165a99f54b7 does not exist
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:20:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:41.901 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:20:41 np0005539550 nova_compute[257631]: 2025-11-29 08:20:41.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:42 np0005539550 podman[330963]: 2025-11-29 08:20:42.162391746 +0000 UTC m=+0.040469380 container create 76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:20:42 np0005539550 nova_compute[257631]: 2025-11-29 08:20:42.218 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:42 np0005539550 systemd[1]: Started libpod-conmon-76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b.scope.
Nov 29 03:20:42 np0005539550 podman[330963]: 2025-11-29 08:20:42.142853389 +0000 UTC m=+0.020931033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:42 np0005539550 nova_compute[257631]: 2025-11-29 08:20:42.276 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:42 np0005539550 podman[330963]: 2025-11-29 08:20:42.280833048 +0000 UTC m=+0.158910692 container init 76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:20:42 np0005539550 podman[330963]: 2025-11-29 08:20:42.28760199 +0000 UTC m=+0.165679614 container start 76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:20:42 np0005539550 podman[330963]: 2025-11-29 08:20:42.291478909 +0000 UTC m=+0.169556533 container attach 76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:20:42 np0005539550 hopeful_heyrovsky[330980]: 167 167
Nov 29 03:20:42 np0005539550 systemd[1]: libpod-76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b.scope: Deactivated successfully.
Nov 29 03:20:42 np0005539550 podman[330963]: 2025-11-29 08:20:42.293987963 +0000 UTC m=+0.172065587 container died 76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:20:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b42ef7d51dd639296060d8dc23970db7fe4636c93908d8e4e1c5b1b6e3adc167-merged.mount: Deactivated successfully.
Nov 29 03:20:42 np0005539550 podman[330963]: 2025-11-29 08:20:42.332108002 +0000 UTC m=+0.210185626 container remove 76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:20:42 np0005539550 systemd[1]: libpod-conmon-76938d5a61fadf99bc63d40b5b0f9c44bf26ec50624f354822227e720222559b.scope: Deactivated successfully.
Nov 29 03:20:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:42.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:42 np0005539550 podman[331006]: 2025-11-29 08:20:42.496951345 +0000 UTC m=+0.041905777 container create 6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:42 np0005539550 systemd[1]: Started libpod-conmon-6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39.scope.
Nov 29 03:20:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce05f5b6dbf0d77767ddae50b757543b63ee21b28d41dfc47a1665472a4bc5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:42 np0005539550 podman[331006]: 2025-11-29 08:20:42.477023368 +0000 UTC m=+0.021977820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce05f5b6dbf0d77767ddae50b757543b63ee21b28d41dfc47a1665472a4bc5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce05f5b6dbf0d77767ddae50b757543b63ee21b28d41dfc47a1665472a4bc5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce05f5b6dbf0d77767ddae50b757543b63ee21b28d41dfc47a1665472a4bc5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cce05f5b6dbf0d77767ddae50b757543b63ee21b28d41dfc47a1665472a4bc5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:42 np0005539550 podman[331006]: 2025-11-29 08:20:42.590283268 +0000 UTC m=+0.135237750 container init 6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:42 np0005539550 podman[331006]: 2025-11-29 08:20:42.597178464 +0000 UTC m=+0.142132896 container start 6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:42 np0005539550 podman[331006]: 2025-11-29 08:20:42.600337674 +0000 UTC m=+0.145292106 container attach 6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:43 np0005539550 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:20:43 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:20:43 np0005539550 systemd-logind[788]: New session 57 of user nova.
Nov 29 03:20:43 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:20:43 np0005539550 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:20:43 np0005539550 systemd[331032]: Queued start job for default target Main User Target.
Nov 29 03:20:43 np0005539550 systemd[331032]: Created slice User Application Slice.
Nov 29 03:20:43 np0005539550 systemd[331032]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:20:43 np0005539550 systemd[331032]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:20:43 np0005539550 systemd[331032]: Reached target Paths.
Nov 29 03:20:43 np0005539550 systemd[331032]: Reached target Timers.
Nov 29 03:20:43 np0005539550 systemd[331032]: Starting D-Bus User Message Bus Socket...
Nov 29 03:20:43 np0005539550 systemd[331032]: Starting Create User's Volatile Files and Directories...
Nov 29 03:20:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 259 MiB data, 972 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 459 KiB/s wr, 171 op/s
Nov 29 03:20:43 np0005539550 systemd[331032]: Finished Create User's Volatile Files and Directories.
Nov 29 03:20:43 np0005539550 systemd[331032]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:20:43 np0005539550 systemd[331032]: Reached target Sockets.
Nov 29 03:20:43 np0005539550 systemd[331032]: Reached target Basic System.
Nov 29 03:20:43 np0005539550 systemd[331032]: Reached target Main User Target.
Nov 29 03:20:43 np0005539550 systemd[331032]: Startup finished in 139ms.
Nov 29 03:20:43 np0005539550 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:20:43 np0005539550 relaxed_lamarr[331023]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:20:43 np0005539550 relaxed_lamarr[331023]: --> relative data size: 1.0
Nov 29 03:20:43 np0005539550 relaxed_lamarr[331023]: --> All data devices are unavailable
Nov 29 03:20:43 np0005539550 systemd[1]: Started Session 57 of User nova.
Nov 29 03:20:43 np0005539550 systemd[1]: libpod-6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39.scope: Deactivated successfully.
Nov 29 03:20:43 np0005539550 podman[331006]: 2025-11-29 08:20:43.499204674 +0000 UTC m=+1.044159106 container died 6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:20:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cce05f5b6dbf0d77767ddae50b757543b63ee21b28d41dfc47a1665472a4bc5e-merged.mount: Deactivated successfully.
Nov 29 03:20:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:43.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:43 np0005539550 systemd[1]: session-57.scope: Deactivated successfully.
Nov 29 03:20:43 np0005539550 systemd-logind[788]: Session 57 logged out. Waiting for processes to exit.
Nov 29 03:20:43 np0005539550 systemd-logind[788]: Removed session 57.
Nov 29 03:20:43 np0005539550 podman[331006]: 2025-11-29 08:20:43.557788714 +0000 UTC m=+1.102743146 container remove 6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:20:43 np0005539550 systemd[1]: libpod-conmon-6090aaf9bf9c606dfcae98416f3fb98602665a5f5ef096af8d51e968dcee8f39.scope: Deactivated successfully.
Nov 29 03:20:43 np0005539550 systemd-logind[788]: New session 59 of user nova.
Nov 29 03:20:43 np0005539550 systemd[1]: Started Session 59 of User nova.
Nov 29 03:20:43 np0005539550 systemd[1]: session-59.scope: Deactivated successfully.
Nov 29 03:20:43 np0005539550 systemd-logind[788]: Session 59 logged out. Waiting for processes to exit.
Nov 29 03:20:43 np0005539550 systemd-logind[788]: Removed session 59.
Nov 29 03:20:44 np0005539550 podman[331215]: 2025-11-29 08:20:44.157225689 +0000 UTC m=+0.049762957 container create 7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:20:44 np0005539550 systemd[1]: Started libpod-conmon-7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002.scope.
Nov 29 03:20:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:44 np0005539550 podman[331215]: 2025-11-29 08:20:44.136971194 +0000 UTC m=+0.029508502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:44 np0005539550 podman[331215]: 2025-11-29 08:20:44.238327821 +0000 UTC m=+0.130865109 container init 7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:20:44 np0005539550 podman[331215]: 2025-11-29 08:20:44.247291799 +0000 UTC m=+0.139829067 container start 7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_snyder, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:20:44 np0005539550 podman[331215]: 2025-11-29 08:20:44.251744563 +0000 UTC m=+0.144281851 container attach 7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_snyder, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:20:44 np0005539550 unruffled_snyder[331231]: 167 167
Nov 29 03:20:44 np0005539550 systemd[1]: libpod-7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002.scope: Deactivated successfully.
Nov 29 03:20:44 np0005539550 podman[331215]: 2025-11-29 08:20:44.254538204 +0000 UTC m=+0.147075482 container died 7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_snyder, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:20:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4342ae332980c5c110a2441f97046a75c6eaafabbe73967edbfaef5e63e08b79-merged.mount: Deactivated successfully.
Nov 29 03:20:44 np0005539550 podman[331215]: 2025-11-29 08:20:44.29334015 +0000 UTC m=+0.185877428 container remove 7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:20:44 np0005539550 systemd[1]: libpod-conmon-7a7d4e187e84b9b128fcb8dc2c690d217acc60a47aa642eb9a8b32ab1a6bb002.scope: Deactivated successfully.
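The create/init/start/attach/died/remove sequences for these short-lived cephadm helper containers (hopeful_heyrovsky, relaxed_lamarr, unruffled_snyder) can be followed live from podman's event stream. A sketch shelling out to the podman CLI; --format json and the Status/Name key names are assumptions about the installed podman version:

    import json
    import subprocess

    # Streams events like the container lifecycle lines above, one JSON
    # object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "type=container", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), ev.get("Name"))  # e.g. died hopeful_heyrovsky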
Nov 29 03:20:44 np0005539550 nova_compute[257631]: 2025-11-29 08:20:44.331 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:44.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:44 np0005539550 podman[331254]: 2025-11-29 08:20:44.473320298 +0000 UTC m=+0.045970700 container create f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_engelbart, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:20:44 np0005539550 systemd[1]: Started libpod-conmon-f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d.scope.
Nov 29 03:20:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f10d67f795f307b5fcfa19f57e528c51419905f310478034071d58be45c32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f10d67f795f307b5fcfa19f57e528c51419905f310478034071d58be45c32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f10d67f795f307b5fcfa19f57e528c51419905f310478034071d58be45c32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c4f10d67f795f307b5fcfa19f57e528c51419905f310478034071d58be45c32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:44 np0005539550 podman[331254]: 2025-11-29 08:20:44.548809298 +0000 UTC m=+0.121459740 container init f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:44 np0005539550 podman[331254]: 2025-11-29 08:20:44.455937706 +0000 UTC m=+0.028588128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:44 np0005539550 podman[331254]: 2025-11-29 08:20:44.557133459 +0000 UTC m=+0.129783861 container start f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:20:44 np0005539550 podman[331254]: 2025-11-29 08:20:44.561370467 +0000 UTC m=+0.134020869 container attach f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]: {
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:    "0": [
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:        {
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "devices": [
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "/dev/loop3"
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            ],
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "lv_name": "ceph_lv0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "lv_size": "7511998464",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "name": "ceph_lv0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "tags": {
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.cluster_name": "ceph",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.crush_device_class": "",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.encrypted": "0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.osd_id": "0",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.type": "block",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:                "ceph.vdo": "0"
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            },
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "type": "block",
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:            "vg_name": "ceph_vg0"
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:        }
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]:    ]
Nov 29 03:20:45 np0005539550 angry_engelbart[331271]: }
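The JSON the angry_engelbart container just printed is ceph-volume lvm list-style output keyed by OSD id. A sketch pulling the OSD-to-LV/device mapping out of it, assuming the block has been captured into a string (raw below holds a trimmed stand-in):

    import json

    # Trimmed stand-in for the JSON block above; in practice read it from
    # the ceph-volume output stream or a file.
    raw = """{"0": [{"devices": ["/dev/loop3"],
                     "lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "tags": {"ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}}]}"""
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print("osd.%s: %s on %s (fsid %s)" % (
                osd_id, lv["lv_path"], lv["devices"][0],
                lv["tags"]["ceph.osd_fsid"]))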
Nov 29 03:20:45 np0005539550 systemd[1]: libpod-f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d.scope: Deactivated successfully.
Nov 29 03:20:45 np0005539550 podman[331254]: 2025-11-29 08:20:45.352986329 +0000 UTC m=+0.925636751 container died f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_engelbart, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2c4f10d67f795f307b5fcfa19f57e528c51419905f310478034071d58be45c32-merged.mount: Deactivated successfully.
Nov 29 03:20:45 np0005539550 podman[331254]: 2025-11-29 08:20:45.418409722 +0000 UTC m=+0.991060134 container remove f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:20:45 np0005539550 systemd[1]: libpod-conmon-f3392d12846c47316d15be383c4b2c00f7efd14d1dee11678bb9ca9f5c55600d.scope: Deactivated successfully.
Nov 29 03:20:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 272 MiB data, 981 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 145 op/s
Nov 29 03:20:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:46 np0005539550 podman[331435]: 2025-11-29 08:20:46.078329026 +0000 UTC m=+0.044319569 container create 1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:46 np0005539550 systemd[1]: Started libpod-conmon-1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522.scope.
Nov 29 03:20:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:46 np0005539550 podman[331435]: 2025-11-29 08:20:46.060208295 +0000 UTC m=+0.026198848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:46 np0005539550 podman[331435]: 2025-11-29 08:20:46.153699782 +0000 UTC m=+0.119690345 container init 1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:20:46 np0005539550 podman[331435]: 2025-11-29 08:20:46.162983108 +0000 UTC m=+0.128973661 container start 1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:20:46 np0005539550 elegant_gagarin[331452]: 167 167
Nov 29 03:20:46 np0005539550 podman[331435]: 2025-11-29 08:20:46.167524304 +0000 UTC m=+0.133514937 container attach 1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:20:46 np0005539550 systemd[1]: libpod-1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522.scope: Deactivated successfully.
Nov 29 03:20:46 np0005539550 podman[331435]: 2025-11-29 08:20:46.168788246 +0000 UTC m=+0.134778809 container died 1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1cc82069b625673336856cbd2f40433faa52a391919111501dc6cb526ad66ade-merged.mount: Deactivated successfully.
Nov 29 03:20:46 np0005539550 podman[331435]: 2025-11-29 08:20:46.211061551 +0000 UTC m=+0.177052114 container remove 1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_gagarin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:20:46 np0005539550 systemd[1]: libpod-conmon-1e6da07f162bccfe3ecd237bb26bc1fbbbb164deebce8843c3f0d83a9bef6522.scope: Deactivated successfully.
Nov 29 03:20:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:46.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:46 np0005539550 podman[331476]: 2025-11-29 08:20:46.40134006 +0000 UTC m=+0.052647360 container create 625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wilbur, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:20:46 np0005539550 systemd[1]: Started libpod-conmon-625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905.scope.
Nov 29 03:20:46 np0005539550 podman[331476]: 2025-11-29 08:20:46.375969105 +0000 UTC m=+0.027276435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:20:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8319f109fd1a85a890b8172d554ef422229674aa61a16f58080d24bd90221b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8319f109fd1a85a890b8172d554ef422229674aa61a16f58080d24bd90221b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8319f109fd1a85a890b8172d554ef422229674aa61a16f58080d24bd90221b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d8319f109fd1a85a890b8172d554ef422229674aa61a16f58080d24bd90221b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:46 np0005539550 nova_compute[257631]: 2025-11-29 08:20:46.497 257641 DEBUG nova.compute.manager [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:46 np0005539550 nova_compute[257631]: 2025-11-29 08:20:46.499 257641 DEBUG oslo_concurrency.lockutils [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:46 np0005539550 nova_compute[257631]: 2025-11-29 08:20:46.500 257641 DEBUG oslo_concurrency.lockutils [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:46 np0005539550 nova_compute[257631]: 2025-11-29 08:20:46.500 257641 DEBUG oslo_concurrency.lockutils [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:46 np0005539550 nova_compute[257631]: 2025-11-29 08:20:46.501 257641 DEBUG nova.compute.manager [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:46 np0005539550 nova_compute[257631]: 2025-11-29 08:20:46.501 257641 WARNING nova.compute.manager [req-05966754-7c54-4e50-af0e-7f4d2358513a req-d1068a83-b6f6-4984-bd42-9b22956cfceb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:20:46 np0005539550 podman[331476]: 2025-11-29 08:20:46.509662345 +0000 UTC m=+0.160969665 container init 625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:20:46 np0005539550 podman[331476]: 2025-11-29 08:20:46.517266559 +0000 UTC m=+0.168573859 container start 625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wilbur, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:20:46 np0005539550 podman[331476]: 2025-11-29 08:20:46.520990363 +0000 UTC m=+0.172297663 container attach 625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wilbur, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 03:20:46 np0005539550 nova_compute[257631]: 2025-11-29 08:20:46.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.298 257641 INFO nova.network.neutron [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating port 524180cf-279c-48d6-8bf1-04f8f159aef6 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]: {
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]:        "osd_id": 0,
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]:        "type": "bluestore"
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]:    }
Nov 29 03:20:47 np0005539550 suspicious_wilbur[331492]: }
Nov 29 03:20:47 np0005539550 systemd[1]: libpod-625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905.scope: Deactivated successfully.
Nov 29 03:20:47 np0005539550 podman[331476]: 2025-11-29 08:20:47.426246016 +0000 UTC m=+1.077553316 container died 625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:20:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9d8319f109fd1a85a890b8172d554ef422229674aa61a16f58080d24bd90221b-merged.mount: Deactivated successfully.
Nov 29 03:20:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 279 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 639 KiB/s rd, 2.1 MiB/s wr, 126 op/s
Nov 29 03:20:47 np0005539550 podman[331476]: 2025-11-29 08:20:47.480816834 +0000 UTC m=+1.132124134 container remove 625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:20:47 np0005539550 systemd[1]: libpod-conmon-625d2516709b444de3a457cf3cf05866463fcf52b368a2f098ec04aa8f1d3905.scope: Deactivated successfully.
Nov 29 03:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:20:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:47 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 34147dc4-ac9f-4107-b840-88152f911879 does not exist
Nov 29 03:20:47 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9224539c-41e4-4f9e-ad27-d7b01b48642f does not exist
Nov 29 03:20:47 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0928b9f0-1b38-4c8e-9195-f1600f881a3b does not exist
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.881 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.882 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.882 257641 DEBUG nova.network.neutron [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.994 257641 DEBUG nova.compute.manager [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.995 257641 DEBUG nova.compute.manager [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing instance network info cache due to event network-changed-524180cf-279c-48d6-8bf1-04f8f159aef6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:20:47 np0005539550 nova_compute[257631]: 2025-11-29 08:20:47.996 257641 DEBUG oslo_concurrency.lockutils [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:20:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:48.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:20:48 np0005539550 nova_compute[257631]: 2025-11-29 08:20:48.784 257641 DEBUG nova.compute.manager [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:20:48 np0005539550 nova_compute[257631]: 2025-11-29 08:20:48.785 257641 DEBUG oslo_concurrency.lockutils [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:48 np0005539550 nova_compute[257631]: 2025-11-29 08:20:48.785 257641 DEBUG oslo_concurrency.lockutils [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:48 np0005539550 nova_compute[257631]: 2025-11-29 08:20:48.785 257641 DEBUG oslo_concurrency.lockutils [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:48 np0005539550 nova_compute[257631]: 2025-11-29 08:20:48.785 257641 DEBUG nova.compute.manager [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:20:48 np0005539550 nova_compute[257631]: 2025-11-29 08:20:48.786 257641 WARNING nova.compute.manager [req-a72075e0-9d7b-4452-aab0-e96f611fd8d5 req-4b6de820-94cf-4da0-bd3a-ecc1c72a4d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.145 257641 DEBUG nova.network.neutron [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.168 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.172 257641 DEBUG oslo_concurrency.lockutils [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.172 257641 DEBUG nova.network.neutron [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Refreshing network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.272 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.274 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.274 257641 INFO nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Creating image(s)#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.423 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404434.2920644, 5cc9587a-1715-4ff6-80c1-b79d2fc224c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.423 257641 INFO nova.compute.manager [-] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.430 257641 DEBUG nova.storage.rbd_utils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] creating snapshot(nova-resize) on rbd image(258dfc76-0ea9-4521-a3fc-5d64b3632451_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:20:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 281 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.461 257641 DEBUG nova.compute.manager [None req-79ccd5a6-88fe-4cb1-9b0b-0e021b79814a - - - - - -] [instance: 5cc9587a-1715-4ff6-80c1-b79d2fc224c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Nov 29 03:20:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Nov 29 03:20:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.606 257641 DEBUG nova.objects.instance [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.725 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.725 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Ensure instance console log exists: /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.726 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.726 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.726 257641 DEBUG oslo_concurrency.lockutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.729 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start _get_guest_xml network_info=[{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:b8:49:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.734 257641 WARNING nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.739 257641 DEBUG nova.virt.libvirt.host [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.740 257641 DEBUG nova.virt.libvirt.host [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.743 257641 DEBUG nova.virt.libvirt.host [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.743 257641 DEBUG nova.virt.libvirt.host [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.744 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.745 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.745 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.746 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.746 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.746 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.746 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.747 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.747 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.747 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.747 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.748 257641 DEBUG nova.virt.hardware [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.748 257641 DEBUG nova.objects.instance [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:20:49 np0005539550 nova_compute[257631]: 2025-11-29 08:20:49.770 257641 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/937499402' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.238 257641 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.270 257641 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:20:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:50.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:20:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131974754' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.732 257641 DEBUG oslo_concurrency.processutils [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.734 257641 DEBUG nova.virt.libvirt.vif [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:b8:49:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.734 257641 DEBUG nova.network.os_vif_util [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:b8:49:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.735 257641 DEBUG nova.network.os_vif_util [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.738 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <uuid>258dfc76-0ea9-4521-a3fc-5d64b3632451</uuid>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <name>instance-0000006c</name>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <memory>196608</memory>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestJSON-server-1950416616</nova:name>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:20:49</nova:creationTime>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.micro">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:memory>192</nova:memory>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <nova:port uuid="524180cf-279c-48d6-8bf1-04f8f159aef6">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <entry name="serial">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <entry name="uuid">258dfc76-0ea9-4521-a3fc-5d64b3632451</entry>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/258dfc76-0ea9-4521-a3fc-5d64b3632451_disk.config">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:b8:49:96"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <target dev="tap524180cf-27"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451/console.log" append="off"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:20:50 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:20:50 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:20:50 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:20:50 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
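Everything between "End _get_guest_xml" and the trailer above is the domain definition nova hands to libvirt: 192 MiB of RAM (196608 KiB), one vCPU, a q35 machine type, an RBD-backed vda disk plus a SATA config-drive cdrom, and the tap524180cf-27 interface. A short standard-library sketch for pulling those fields back out of such a dump when auditing logs; xml_text is assumed to hold the <domain>...</domain> block captured above.

# Sketch: summarize a domain XML dump like the one above.
# xml_text is assumed to contain the <domain>...</domain> document.
import xml.etree.ElementTree as ET

def summarize_domain(xml_text):
    dom = ET.fromstring(xml_text)
    return {
        'name': dom.findtext('name'),                  # instance-0000006c
        'uuid': dom.findtext('uuid'),
        'memory_kib': int(dom.findtext('memory')),     # 196608
        'vcpus': int(dom.findtext('vcpu')),
        'machine': dom.find('os/type').get('machine'), # q35
        'disks': [d.find('target').get('dev')          # ['vda', 'sda']
                  for d in dom.findall('devices/disk')],
    }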
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.739 257641 DEBUG nova.virt.libvirt.vif [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:17:32Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:20:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:b8:49:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.740 257641 DEBUG nova.network.os_vif_util [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:b8:49:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.741 257641 DEBUG nova.network.os_vif_util [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.742 257641 DEBUG os_vif [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
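os_vif/__init__.py:76 is the library's public plug() entry point. A sketch of calling it directly, assuming the my_vif object from the earlier sketch; the InstanceInfo values are taken from the instance lines above, and os_vif.initialize() must run first to load the plugins.

# Sketch: the public os-vif call logged above. Assumes `my_vif` from the
# earlier sketch; needs the ovs plugin installed and privileges to use it.
import os_vif
from os_vif.objects.instance_info import InstanceInfo

os_vif.initialize()
info = InstanceInfo(uuid='258dfc76-0ea9-4521-a3fc-5d64b3632451',
                    name='instance-0000006c')
os_vif.plug(my_vif, info)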
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.743 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.743 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.744 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.747 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.747 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap524180cf-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.748 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap524180cf-27, col_values=(('external_ids', {'iface-id': '524180cf-279c-48d6-8bf1-04f8f159aef6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:49:96', 'vm-uuid': '258dfc76-0ea9-4521-a3fc-5d64b3632451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
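The two transactions above are os-vif's OVSDB writes: an idempotent AddPortCommand for tap524180cf-27 on br-int, then a DbSetCommand stamping the Interface row with Neutron's iface-id and the instance MAC/UUID so ovn-controller can claim the port. The same state can be reproduced by hand with ovs-vsctl; a sketch, with values copied from the log (needs root and Open vSwitch installed):

# Sketch: ovs-vsctl equivalent of the logged AddPortCommand + DbSetCommand.
import subprocess

port = 'tap524180cf-27'
iface_id = '524180cf-279c-48d6-8bf1-04f8f159aef6'
subprocess.run(
    ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
     'set', 'Interface', port,
     f'external_ids:iface-id={iface_id}',
     'external_ids:iface-status=active',
     'external_ids:attached-mac=fa:16:3e:b8:49:96',
     'external_ids:vm-uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451'],
    check=True)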
Nov 29 03:20:50 np0005539550 NetworkManager[49039]: <info>  [1764404450.7513] manager: (tap524180cf-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/218)
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.753 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.758 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.758 257641 INFO os_vif [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.813 257641 DEBUG nova.network.neutron [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updated VIF entry in instance network info cache for port 524180cf-279c-48d6-8bf1-04f8f159aef6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.814 257641 DEBUG nova.network.neutron [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [{"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.835 257641 DEBUG oslo_concurrency.lockutils [req-ddc13388-d0a1-490d-97f6-432e4e021666 req-e71f8f3f-6e07-4a38-a952-2eab17d7cf0a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-258dfc76-0ea9-4521-a3fc-5d64b3632451" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.842 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.843 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.843 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No VIF found with MAC fa:16:3e:b8:49:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.844 257641 INFO nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Using config drive#033[00m
Nov 29 03:20:50 np0005539550 kernel: tap524180cf-27: entered promiscuous mode
Nov 29 03:20:50 np0005539550 NetworkManager[49039]: <info>  [1764404450.9435] manager: (tap524180cf-27): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Nov 29 03:20:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:50Z|00484|binding|INFO|Claiming lport 524180cf-279c-48d6-8bf1-04f8f159aef6 for this chassis.
Nov 29 03:20:50 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:50Z|00485|binding|INFO|524180cf-279c-48d6-8bf1-04f8f159aef6: Claiming fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.945 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539550 nova_compute[257631]: 2025-11-29 08:20:50.968 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:50 np0005539550 NetworkManager[49039]: <info>  [1764404450.9703] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Nov 29 03:20:50 np0005539550 NetworkManager[49039]: <info>  [1764404450.9712] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Nov 29 03:20:50 np0005539550 systemd-udevd[331744]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:20:50 np0005539550 systemd-machined[216673]: New machine qemu-58-instance-0000006c.
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.978 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '12', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.980 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.982 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
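Because this is the first port from network 58fd104d-... bound on the chassis, the agent provisions the metadata datapath: an ovnmeta-<network-uuid> namespace with a veth pair whose names are derived from the network UUID. The convention below is inferred from the names appearing in this log, not quoted from the neutron source:

# Naming convention inferred from this log (illustrative, not neutron code):
NETWORK_ID = '58fd104d-4342-482d-ae9e-dbb4b9fa6788'

namespace = 'ovnmeta-' + NETWORK_ID
veths = ['tap' + NETWORK_ID[:10] + str(i) for i in (0, 1)]
# -> ['tap58fd104d-40', 'tap58fd104d-41']; -40 stays in the root namespace
#    and is plugged into br-int, -41 moves inside the ovnmeta namespace.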
Nov 29 03:20:50 np0005539550 NetworkManager[49039]: <info>  [1764404450.9886] device (tap524180cf-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:20:50 np0005539550 NetworkManager[49039]: <info>  [1764404450.9893] device (tap524180cf-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.994 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[275810b8-2738-4e1d-9cf4-519727052086]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.996 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.997 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.998 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8716b869-6a19-44f2-807f-24a36fe9c35b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:50.998 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[28e0ca29-ebe8-4a31-8042-1d647ba3da09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.011 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[74090662-665a-49f0-9213-539d55be765d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 systemd[1]: Started Virtual Machine qemu-58-instance-0000006c.
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.036 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a838f1d8-c010-47e5-bed4-86ee5d8027e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.068 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[53f72095-fe84-47db-b6d5-ee46214c52e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.075 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a6d9ef-9c00-4ebf-bca7-b94566cfa7b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 NetworkManager[49039]: <info>  [1764404451.0767] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.118 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4505d291-bb40-4f76-b56b-d208d5ed2e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.123 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd58e22-4f5b-4c79-ae8f-eb3ee2448c76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 NetworkManager[49039]: <info>  [1764404451.1473] device (tap58fd104d-40): carrier: link connected
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.153 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e24c445d-1511-4a2b-a72e-2ff1005b292d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.169 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d91695-7773-4e82-8103-357b47586077]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749881, 'reachable_time': 16165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331777, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.185 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[654900ae-6821-4c63-912c-798df391a822]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749881, 'tstamp': 749881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331778, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.201 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe92674-2c07-42c4-a497-5db811a0c22e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749881, 'reachable_time': 16165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331779, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
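The two oversized privsep replies above are raw pyroute2 RTM_NEWLINK dumps for tap58fd104d-41 inside the new namespace (MTU 1500, carrier up, MAC fa:16:3e:a8:26:1e). A sketch of fetching the same attributes directly with pyroute2, assuming the namespace from the log exists and the caller has root:

# Sketch: read the link attributes shown in the netlink dumps above.
from pyroute2 import NetNS

with NetNS('ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788') as ns:
    for link in ns.get_links():
        print(link.get_attr('IFLA_IFNAME'),
              link.get_attr('IFLA_ADDRESS'),
              link.get_attr('IFLA_MTU'),
              link['state'])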
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.204 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.226 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.235 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e99054fa-5d61-4594-a7b6-6b94bc7393be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:51Z|00486|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 ovn-installed in OVS
Nov 29 03:20:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:51Z|00487|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 up in Southbound
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.264 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.293 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[daa3804d-858f-4722-a639-6d80fe01fc85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.294 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.295 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.295 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.296 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539550 NetworkManager[49039]: <info>  [1764404451.2973] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Nov 29 03:20:51 np0005539550 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.300 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:20:51Z|00488|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.316 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.318 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.319 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e146486e-95ed-4959-bb32-8569bcbe3692]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.320 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:20:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:20:51.320 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
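With the config rendered (global/defaults plus a listener binding 169.254.169.254:80, forwarding to the /var/lib/neutron/metadata_proxy UNIX socket and tagging requests with X-OVN-Network-ID), the agent launches haproxy inside the namespace through rootwrap. A rootwrap-free equivalent of that command line, with paths copied from the log:

# Sketch: the logged haproxy launch without the neutron-rootwrap wrapper
# (needs root; namespace and config path are taken from the log above).
import subprocess

net = '58fd104d-4342-482d-ae9e-dbb4b9fa6788'
subprocess.run(
    ['ip', 'netns', 'exec', f'ovnmeta-{net}',
     'env', f'PROCESS_TAG=haproxy-{net}',
     'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'],
    check=True)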
Nov 29 03:20:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 248 MiB data, 1010 MiB used, 20 GiB / 21 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 125 op/s
Nov 29 03:20:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:51.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:51 np0005539550 podman[331852]: 2025-11-29 08:20:51.708351908 +0000 UTC m=+0.047345855 container create 878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.711 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404451.710743, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.712 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.719 257641 DEBUG nova.compute.manager [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.723 257641 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance running successfully.#033[00m
Nov 29 03:20:51 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.726 257641 DEBUG nova.virt.libvirt.guest [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
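After the domain resumes, nova tries to sync the guest clock and fails because no qemu-guest-agent channel is defined in the domain XML above. The underlying libvirt call is the domain setTime API; a sketch with python3-libvirt, where the domain name comes from the XML and the error branch matches the two log lines above (the exact binding signature is an assumption):

# Sketch: the guest-time sync that fails above when no agent is configured.
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-0000006c')
try:
    dom.setTime(time={'seconds': int(time.time()), 'nseconds': 0})
except libvirt.libvirtError as err:
    print('Failed to set time:', err)  # "argument unsupported: QEMU guest
                                       #  agent is not configured"
finally:
    conn.close()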
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.727 257641 DEBUG nova.virt.libvirt.driver [None req-8c99d7b7-9af7-4352-a4b9-c95f331e6db4 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.731 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.734 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:20:51 np0005539550 systemd[1]: Started libpod-conmon-878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c.scope.
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.756 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
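The manager deliberately ignores power-state drift while a task is in flight: vm_state is active, the VM power_state matches the DB (1 = RUNNING), and task_state is still resize_finish, so the sync is skipped. A condensed sketch of that guard with illustrative names, not nova's exact code:

# Illustrative sketch of the sync guard logged above (not nova's code):
def sync_power_state(db_task_state, db_power_state, vm_power_state):
    if db_task_state is not None:        # e.g. 'resize_finish'
        return 'skip'                    # "... has a pending task ... Skip."
    if db_power_state != vm_power_state:
        return 'resync'                  # would correct the DB or react
    return 'in_sync'

assert sync_power_state('resize_finish', 1, 1) == 'skip'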
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.757 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404451.7175808, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.757 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Started (Lifecycle Event)#033[00m
Nov 29 03:20:51 np0005539550 podman[331852]: 2025-11-29 08:20:51.682248074 +0000 UTC m=+0.021242041 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:20:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:20:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2410e0dfab65a6e7b63c66f075a1481b72f1216f715280b4124eab35614e791/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:20:51 np0005539550 podman[331852]: 2025-11-29 08:20:51.807795867 +0000 UTC m=+0.146789824 container init 878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:20:51 np0005539550 podman[331852]: 2025-11-29 08:20:51.814069646 +0000 UTC m=+0.153063593 container start 878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.816 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.820 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:20:51 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[331869]: [NOTICE]   (331873) : New worker (331875) forked
Nov 29 03:20:51 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[331869]: [NOTICE]   (331873) : Loading success.
Nov 29 03:20:51 np0005539550 nova_compute[257631]: 2025-11-29 08:20:51.859 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] During sync_power_state the instance has a pending task (resize_finish). Skip.
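[Annotation] The block above interleaves two threads: podman bringing up the neutron-haproxy-ovnmeta-58fd104d sidecar (container create at 08:20:51.708, init at .807, start at .814) and nova-compute finishing a resize on instance 258dfc76 (finish_migration, then the Resumed/Started lifecycle events). A minimal sketch, assuming only the podman line format shown above (the label order after the container ID is just what these samples show), for pulling the container lifecycle events out of a dump like this one:

    import re

    # Matches lines such as:
    #   ... podman[331852]: 2025-11-29 08:20:51.708... +0000 UTC m=+0.047... container create <64-hex-id> (image=..., name=..., ...)
    PODMAN_EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC m=\+\S+"
        r" container (?P<action>\w+) (?P<cid>[0-9a-f]{64}) \(image=(?P<image>[^,]+), name=(?P<name>[^,)]+)"
    )

    def podman_events(lines):
        """Yield (timestamp, action, short id, container name) for podman container events."""
        for line in lines:
            m = PODMAN_EVENT.search(line)
            if m:
                yield m["ts"], m["action"], m["cid"][:12], m["name"]

    # Constructed sample in the format of the lines above (labels truncated for brevity).
    sample = ("Nov 29 03:20:51 np0005539550 podman[331852]: 2025-11-29 08:20:51.814069646 "
              "+0000 UTC m=+0.153063593 container start "
              "878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c "
              "(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, "
              "name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788)")
    for ev in podman_events([sample]):
        print(ev)  # ('2025-11-29 08:20:51.814069646', 'start', '878284131d0e', 'neutron-haproxy-ovnmeta-...')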
Nov 29 03:20:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:52 np0005539550 nova_compute[257631]: 2025-11-29 08:20:52.305 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:52.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
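[Annotation] radosgw's beast frontend logs each request three times: a start marker, a done marker with op status/HTTP status/latency, and a combined access line. The anonymous HEAD / probes alternating from 192.168.122.100 and .102 roughly every second look like load-balancer health checks. A minimal sketch, assuming the beast access-line format shown above, for aggregating latency per client:

    import re
    from collections import defaultdict

    BEAST = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    def latency_by_client(lines):
        """Return {client_ip: [latencies in seconds]} from beast access-log lines."""
        out = defaultdict(list)
        for line in lines:
            m = BEAST.search(line)
            if m:
                out[m["ip"]].append(float(m["latency"]))
        return dict(out)

    sample = ('Nov 29 03:20:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: '
              '192.168.122.102 - anonymous [29/Nov/2025:08:20:52.398 +0000] '
              '"HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s')
    print(latency_by_client([sample]))  # {'192.168.122.102': [0.0]}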
Nov 29 03:20:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 231 MiB data, 1004 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Nov 29 03:20:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:53.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:53 np0005539550 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:20:53 np0005539550 systemd[331032]: Activating special unit Exit the Session...
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped target Main User Target.
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped target Basic System.
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped target Paths.
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped target Sockets.
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped target Timers.
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:20:53 np0005539550 systemd[331032]: Closed D-Bus User Message Bus Socket.
Nov 29 03:20:53 np0005539550 systemd[331032]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:20:53 np0005539550 systemd[331032]: Removed slice User Application Slice.
Nov 29 03:20:53 np0005539550 systemd[331032]: Reached target Shutdown.
Nov 29 03:20:53 np0005539550 systemd[331032]: Finished Exit the Session.
Nov 29 03:20:53 np0005539550 systemd[331032]: Reached target Exit the Session.
Nov 29 03:20:54 np0005539550 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:20:54 np0005539550 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:20:54 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:20:54 np0005539550 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:20:54 np0005539550 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:20:54 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:20:54 np0005539550 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:20:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 03:20:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:54.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 03:20:54 np0005539550 nova_compute[257631]: 2025-11-29 08:20:54.547 257641 DEBUG nova.compute.manager [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:54 np0005539550 nova_compute[257631]: 2025-11-29 08:20:54.549 257641 DEBUG oslo_concurrency.lockutils [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:54 np0005539550 nova_compute[257631]: 2025-11-29 08:20:54.549 257641 DEBUG oslo_concurrency.lockutils [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:54 np0005539550 nova_compute[257631]: 2025-11-29 08:20:54.549 257641 DEBUG oslo_concurrency.lockutils [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:54 np0005539550 nova_compute[257631]: 2025-11-29 08:20:54.550 257641 DEBUG nova.compute.manager [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:54 np0005539550 nova_compute[257631]: 2025-11-29 08:20:54.550 257641 WARNING nova.compute.manager [req-6a5c5593-a91d-4ce9-a612-f2bc6014b332 req-933553b1-65f7-43a3-8de9-318e3c5f572b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state resized and task_state None.
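[Annotation] The Acquiring/acquired/released triple around pop_instance_event is oslo.concurrency's lock wrapper logging at lockutils.py:404/409/423: nova serializes event delivery per instance on a "<uuid>-events" lock, and because no waiter was registered, the late network-vif-plugged event is dropped with the WARNING above (the resize had already completed). A minimal sketch of the same helper, assuming only that oslo.concurrency is installed; the lock name is illustrative:

    import logging

    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)  # makes the lock DEBUG messages visible

    # Context-manager form:
    with lockutils.lock("258dfc76-0ea9-4521-a3fc-5d64b3632451-events"):
        pass  # critical section: look up / pop the per-instance event

    # Decorator form; its wrapper is the "inner" function named in the log paths above:
    @lockutils.synchronized("258dfc76-0ea9-4521-a3fc-5d64b3632451-events")
    def _pop_event():
        return None

    _pop_event()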
Nov 29 03:20:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 202 MiB data, 985 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 166 op/s
Nov 29 03:20:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:55 np0005539550 nova_compute[257631]: 2025-11-29 08:20:55.787 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:20:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:56.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:20:56 np0005539550 nova_compute[257631]: 2025-11-29 08:20:56.887 257641 DEBUG nova.compute.manager [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:20:56 np0005539550 nova_compute[257631]: 2025-11-29 08:20:56.887 257641 DEBUG oslo_concurrency.lockutils [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:20:56 np0005539550 nova_compute[257631]: 2025-11-29 08:20:56.887 257641 DEBUG oslo_concurrency.lockutils [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:20:56 np0005539550 nova_compute[257631]: 2025-11-29 08:20:56.888 257641 DEBUG oslo_concurrency.lockutils [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:20:56 np0005539550 nova_compute[257631]: 2025-11-29 08:20:56.888 257641 DEBUG nova.compute.manager [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:20:56 np0005539550 nova_compute[257631]: 2025-11-29 08:20:56.888 257641 WARNING nova.compute.manager [req-b37d265c-3f1d-4387-9180-972bbe19d0c5 req-fd80acaa-7f8a-4bba-980d-0980f1f167a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state resized and task_state None.
Nov 29 03:20:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:20:57 np0005539550 nova_compute[257631]: 2025-11-29 08:20:57.307 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:20:57 np0005539550 podman[331889]: 2025-11-29 08:20:57.355962648 +0000 UTC m=+0.083452104 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:20:57 np0005539550 podman[331888]: 2025-11-29 08:20:57.366518116 +0000 UTC m=+0.090214275 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
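[Annotation] The health_status=healthy fields above come from podman running each container's configured healthcheck (config_data shows 'test': '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/...). A minimal sketch for querying the same state by hand, assuming the podman CLI is available and the container defines a healthcheck:

    import json
    import subprocess

    def container_health(name: str) -> str:
        """Return podman's health status ('healthy', 'unhealthy', ...) for a container."""
        out = subprocess.run(["podman", "inspect", name],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    print(container_health("ovn_metadata_agent"))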
Nov 29 03:20:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 202 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 24 KiB/s wr, 141 op/s
Nov 29 03:20:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:20:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:57.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:20:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:20:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:20:58.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:20:59
Nov 29 03:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', '.mgr', 'vms', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 03:20:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
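[Annotation] This balancer pass is a no-op: upmap mode with a misplaced-ratio throttle of 0.05, and "prepared 0/10 changes" means no upmap adjustments were needed across the eleven pools listed. As a rough worked number, assuming the throttle is read as a fraction of the 305 PGs reported in the surrounding pgmap lines (my reading of the throttle, not something this log states):

    pgs = 305             # from the pgmap lines around this pass
    max_misplaced = 0.05  # "max misplaced 0.050000" above
    print(int(pgs * max_misplaced))  # ~15 PGs misplaced before the balancer throttles itself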
Nov 29 03:20:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 202 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 141 op/s
Nov 29 03:20:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:20:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:20:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:20:59.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Nov 29 03:21:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:00.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Nov 29 03:21:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Nov 29 03:21:00 np0005539550 nova_compute[257631]: 2025-11-29 08:21:00.789 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 202 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 KiB/s wr, 104 op/s
Nov 29 03:21:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:01.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:01Z|00489|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:21:01 np0005539550 nova_compute[257631]: 2025-11-29 08:21:01.989 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:02 np0005539550 nova_compute[257631]: 2025-11-29 08:21:02.308 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:02.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 222 MiB data, 964 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 887 KiB/s wr, 70 op/s
Nov 29 03:21:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:03.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:04.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 254 MiB data, 983 MiB used, 20 GiB / 21 GiB avail; 845 KiB/s rd, 2.3 MiB/s wr, 87 op/s
Nov 29 03:21:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:05Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:49:96 10.100.0.5
Nov 29 03:21:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:05.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:05 np0005539550 nova_compute[257631]: 2025-11-29 08:21:05.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.128 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.129 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.129 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.129 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.130 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.131 257641 INFO nova.compute.manager [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Terminating instance
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.132 257641 DEBUG nova.compute.manager [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:21:06 np0005539550 kernel: tap524180cf-27 (unregistering): left promiscuous mode
Nov 29 03:21:06 np0005539550 NetworkManager[49039]: <info>  [1764404466.1843] device (tap524180cf-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:06Z|00490|binding|INFO|Releasing lport 524180cf-279c-48d6-8bf1-04f8f159aef6 from this chassis (sb_readonly=0)
Nov 29 03:21:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:06Z|00491|binding|INFO|Setting lport 524180cf-279c-48d6-8bf1-04f8f159aef6 down in Southbound
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.188 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:06Z|00492|binding|INFO|Removing iface tap524180cf-27 ovn-installed in OVS
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.209 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Nov 29 03:21:06 np0005539550 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d0000006c.scope: Consumed 13.828s CPU time.
Nov 29 03:21:06 np0005539550 systemd-machined[216673]: Machine qemu-58-instance-0000006c terminated.
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.291 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:49:96 10.100.0.5'], port_security=['fa:16:3e:b8:49:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '258dfc76-0ea9-4521-a3fc-5d64b3632451', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '14', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=524180cf-279c-48d6-8bf1-04f8f159aef6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.292 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 524180cf-279c-48d6-8bf1-04f8f159aef6 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.293 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.294 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d4a31e-d2a2-4a05-8230-d68da847e0c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.295 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.355 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.360 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.368 257641 INFO nova.virt.libvirt.driver [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Instance destroyed successfully.
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.369 257641 DEBUG nova.objects.instance [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 258dfc76-0ea9-4521-a3fc-5d64b3632451 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:06.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:06 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[331869]: [NOTICE]   (331873) : haproxy version is 2.8.14-c23fe91
Nov 29 03:21:06 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[331869]: [NOTICE]   (331873) : path to executable is /usr/sbin/haproxy
Nov 29 03:21:06 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[331869]: [WARNING]  (331873) : Exiting Master process...
Nov 29 03:21:06 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[331869]: [ALERT]    (331873) : Current worker (331875) exited with code 143 (Terminated)
Nov 29 03:21:06 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[331869]: [WARNING]  (331873) : All workers exited. Exiting... (0)
Nov 29 03:21:06 np0005539550 systemd[1]: libpod-878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c.scope: Deactivated successfully.
Nov 29 03:21:06 np0005539550 podman[332019]: 2025-11-29 08:21:06.438670458 +0000 UTC m=+0.043253851 container died 878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:21:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:21:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f2410e0dfab65a6e7b63c66f075a1481b72f1216f715280b4124eab35614e791-merged.mount: Deactivated successfully.
Nov 29 03:21:06 np0005539550 podman[332019]: 2025-11-29 08:21:06.4981339 +0000 UTC m=+0.102717323 container cleanup 878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:21:06 np0005539550 systemd[1]: libpod-conmon-878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c.scope: Deactivated successfully.
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.579 257641 DEBUG nova.virt.libvirt.vif [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:17:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1950416616',display_name='tempest-ServerActionsTestJSON-server-1950416616',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1950416616',id=108,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:20:51Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-f50l1w69',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=258dfc76-0ea9-4521-a3fc-5d64b3632451,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.579 257641 DEBUG nova.network.os_vif_util [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "524180cf-279c-48d6-8bf1-04f8f159aef6", "address": "fa:16:3e:b8:49:96", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap524180cf-27", "ovs_interfaceid": "524180cf-279c-48d6-8bf1-04f8f159aef6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.580 257641 DEBUG nova.network.os_vif_util [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.580 257641 DEBUG os_vif [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.583 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.583 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap524180cf-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.585 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.590 257641 INFO os_vif [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:49:96,bridge_name='br-int',has_traffic_filtering=True,id=524180cf-279c-48d6-8bf1-04f8f159aef6,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap524180cf-27')
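[Annotation] The unplug path above is a single ovsdbapp transaction, DelPortCommand(port=tap524180cf-27, bridge=br-int, if_exists=True), against the local Open vSwitch database. A sketch of issuing the same command by hand with ovsdbapp; the socket path and timeout are assumptions for a typical compute node, not values taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local OVS database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same operation as the DelPortCommand in the transaction logged above.
    api.del_port("tap524180cf-27", bridge="br-int", if_exists=True).execute(check_error=True)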
Nov 29 03:21:06 np0005539550 podman[332049]: 2025-11-29 08:21:06.600192126 +0000 UTC m=+0.068148514 container remove 878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.606 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[45c3a5f3-2e92-437c-a8f8-32af79a7b0b6]: (4, ('Sat Nov 29 08:21:06 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c)\n878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c\nSat Nov 29 08:21:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c)\n878284131d0eecf4884d6d522188c7ad98a043200949ce2b1d293c132ac3679c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
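[Annotation] The privsep reply above is the captured output of neutron's haproxy kill script stopping and then deleting the per-network sidecar container. The log shows only the script's echo output, not its exact commands; a minimal equivalent by hand, assuming it boils down to podman stop/rm:

    import subprocess

    name = "neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788"
    subprocess.run(["podman", "stop", name], check=True)  # workers exit with 143 (SIGTERM), as logged above
    subprocess.run(["podman", "rm", name], check=True)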
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.609 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ec36061f-9f59-4e20-ac6b-d2e7c5b5a3db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.610 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.612 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:21:06 np0005539550 nova_compute[257631]: 2025-11-29 08:21:06.634 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.637 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0c589510-0726-4114-a769-3f8b3cb8ad91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.653 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[57ddd3fc-275a-4e2d-85b9-7e29a5280124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.656 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[79e01441-612b-403f-982c-23255d439405]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.671 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eac2025c-d2cb-4416-b9b4-ed9062e2536d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749872, 'reachable_time': 21757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332082, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:06 np0005539550 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.675 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:21:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:06.675 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[146b2054-e5fe-4916-a062-541c1fb82f3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
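[Annotation] With the last VIF gone, the agent tears down the ovnmeta-58fd104d namespace: the RTM_NEWLINK dump above is its privsep daemon reading the namespace's loopback link state on the way out, and remove_netns at ip_lib.py:607 performs the deletion. A minimal sketch of the same cleanup via pyroute2, the library neutron's ip_lib uses underneath (run as root; the namespace name is taken from the log):

    from pyroute2 import netns

    ns = "ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788"
    if ns in netns.listnetns():  # only remove the namespace if it still exists
        netns.remove(ns)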
Nov 29 03:21:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.055 257641 DEBUG nova.compute.manager [req-0e12195d-82cd-4318-ac0e-dc706027d612 req-4c0115c5-76e3-49d1-82b5-c82b50094296 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.055 257641 DEBUG oslo_concurrency.lockutils [req-0e12195d-82cd-4318-ac0e-dc706027d612 req-4c0115c5-76e3-49d1-82b5-c82b50094296 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.056 257641 DEBUG oslo_concurrency.lockutils [req-0e12195d-82cd-4318-ac0e-dc706027d612 req-4c0115c5-76e3-49d1-82b5-c82b50094296 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.056 257641 DEBUG oslo_concurrency.lockutils [req-0e12195d-82cd-4318-ac0e-dc706027d612 req-4c0115c5-76e3-49d1-82b5-c82b50094296 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.056 257641 DEBUG nova.compute.manager [req-0e12195d-82cd-4318-ac0e-dc706027d612 req-4c0115c5-76e3-49d1-82b5-c82b50094296 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.056 257641 DEBUG nova.compute.manager [req-0e12195d-82cd-4318-ac0e-dc706027d612 req-4c0115c5-76e3-49d1-82b5-c82b50094296 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
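The acquire/release pair above is oslo.concurrency serializing access to the per-instance event table: pop_instance_event either finds a waiter or, as here, reports that nothing is waiting and dispatches immediately. A minimal sketch of that pattern, with the event table reduced to a plain dict:

    from oslo_concurrency import lockutils

    instance_uuid = '258dfc76-0ea9-4521-a3fc-5d64b3632451'   # from the log
    events = {}   # stand-in for InstanceEvents' internal mapping

    @lockutils.synchronized(instance_uuid + '-events')
    def _pop_event(name):
        return events.pop(name, None)

    if _pop_event('network-vif-unplugged-524180cf-279c-48d6-8bf1-04f8f159aef6') is None:
        print('No waiting events found, dispatching')   # matches the DEBUG line above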
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.187 257641 INFO nova.virt.libvirt.driver [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Deleting instance files /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451_del#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.188 257641 INFO nova.virt.libvirt.driver [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Deletion of /var/lib/nova/instances/258dfc76-0ea9-4521-a3fc-5d64b3632451_del complete#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.262 257641 INFO nova.compute.manager [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Took 1.13 seconds to destroy the instance on the hypervisor.#033[00m
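The Deleting/Deletion pair above reflects the libvirt driver's rename-then-remove cleanup: the instance directory is first moved aside to <uuid>_del so an interrupted delete leaves an obviously stale directory rather than a half-removed live one. A sketch under that assumption:

    import os
    import shutil

    base = '/var/lib/nova/instances'
    uuid = '258dfc76-0ea9-4521-a3fc-5d64b3632451'
    live = os.path.join(base, uuid)
    marker = live + '_del'              # the path named in the log above

    if os.path.isdir(live):
        os.rename(live, marker)         # atomic on the same filesystem
    shutil.rmtree(marker, ignore_errors=True)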
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.262 257641 DEBUG oslo.service.loopingcall [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.263 257641 DEBUG nova.compute.manager [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.263 257641 DEBUG nova.network.neutron [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
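The loopingcall line above shows the manager blocking on a retried network deallocation: the worker function is wrapped in an oslo.service BackOffLoopingCall and the caller waits on its event. A compressed sketch of that wait pattern (the retry count is invented for illustration):

    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _deallocate_network_with_retries():
        attempts['n'] += 1
        if attempts['n'] < 3:                     # pretend Neutron was briefly busy
            return                                # back off and loop again
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.BackOffLoopingCall(_deallocate_network_with_retries)
    print(timer.start(starting_interval=1).wait())   # True once the call completes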
Nov 29 03:21:07 np0005539550 nova_compute[257631]: 2025-11-29 08:21:07.311 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 274 MiB data, 992 MiB used, 20 GiB / 21 GiB avail; 428 KiB/s rd, 3.2 MiB/s wr, 93 op/s
Nov 29 03:21:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:07.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
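The beast lines above are anonymous load-balancer health probes: a bare HEAD / answered with 200 in about a millisecond. The probe can be reproduced with the standard library; host and port are placeholders, since the log records only the client address:

    import http.client

    conn = http.client.HTTPConnection('rgw.example.com', 8080, timeout=2)  # host/port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # 200 while radosgw is serving
    conn.close()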
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.348 257641 DEBUG nova.network.neutron [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.365 257641 INFO nova.compute.manager [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Took 1.10 seconds to deallocate network for instance.#033[00m
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.416 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.416 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.421 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:08.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.500 257641 DEBUG nova.compute.manager [req-22dd6603-0da8-40fd-a78d-aae4ea0146f1 req-3d8067b7-5828-43f7-85d9-b2693a9ebbbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-deleted-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.512 257641 INFO nova.scheduler.client.report [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Deleted allocations for instance 258dfc76-0ea9-4521-a3fc-5d64b3632451#033[00m
Nov 29 03:21:08 np0005539550 nova_compute[257631]: 2025-11-29 08:21:08.612 257641 DEBUG oslo_concurrency.lockutils [None req-100c17fe-399e-4e97-b283-7c460f2a4da7 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
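Deleting the allocations (two lines up) is the final placement-side step of terminate_instance: the report client issues a DELETE for the instance's consumer record. A sketch of the equivalent raw call; endpoint and token are placeholders:

    import requests

    placement = 'http://placement.example.com'          # assumed endpoint
    consumer = '258dfc76-0ea9-4521-a3fc-5d64b3632451'   # instance uuid from the log
    resp = requests.delete(
        '%s/allocations/%s' % (placement, consumer),
        headers={'X-Auth-Token': 'TOKEN',
                 'OpenStack-API-Version': 'placement 1.28'},
        timeout=10,
    )
    assert resp.status_code == 204   # placement answers 204 No Content on success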
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:21:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:21:09 np0005539550 nova_compute[257631]: 2025-11-29 08:21:09.173 257641 DEBUG nova.compute.manager [req-7d026b4b-ee4f-4e66-b91c-355f8d6d2f01 req-b3287e9b-51c3-4908-bc2c-8a9d9c263852 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:09 np0005539550 nova_compute[257631]: 2025-11-29 08:21:09.173 257641 DEBUG oslo_concurrency.lockutils [req-7d026b4b-ee4f-4e66-b91c-355f8d6d2f01 req-b3287e9b-51c3-4908-bc2c-8a9d9c263852 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:09 np0005539550 nova_compute[257631]: 2025-11-29 08:21:09.174 257641 DEBUG oslo_concurrency.lockutils [req-7d026b4b-ee4f-4e66-b91c-355f8d6d2f01 req-b3287e9b-51c3-4908-bc2c-8a9d9c263852 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:09 np0005539550 nova_compute[257631]: 2025-11-29 08:21:09.174 257641 DEBUG oslo_concurrency.lockutils [req-7d026b4b-ee4f-4e66-b91c-355f8d6d2f01 req-b3287e9b-51c3-4908-bc2c-8a9d9c263852 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "258dfc76-0ea9-4521-a3fc-5d64b3632451-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:09 np0005539550 nova_compute[257631]: 2025-11-29 08:21:09.174 257641 DEBUG nova.compute.manager [req-7d026b4b-ee4f-4e66-b91c-355f8d6d2f01 req-b3287e9b-51c3-4908-bc2c-8a9d9c263852 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] No waiting events found dispatching network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:09 np0005539550 nova_compute[257631]: 2025-11-29 08:21:09.174 257641 WARNING nova.compute.manager [req-7d026b4b-ee4f-4e66-b91c-355f8d6d2f01 req-b3287e9b-51c3-4908-bc2c-8a9d9c263852 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Received unexpected event network-vif-plugged-524180cf-279c-48d6-8bf1-04f8f159aef6 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:21:09 np0005539550 podman[332085]: 2025-11-29 08:21:09.369750231 +0000 UTC m=+0.110257315 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2)
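The podman event above is a healthcheck run for the ovn_controller container, configured (per config_data) to execute /openstack/healthcheck from a bind-mounted directory. One way to read the resulting health state back, assuming a podman release where it lives under .State.Health (older releases used .State.Healthcheck):

    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', 'ovn_controller',
         '--format', '{{json .State.Health}}'],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    print(health['Status'], health['FailingStreak'])   # e.g. healthy 0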
Nov 29 03:21:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 260 MiB data, 991 MiB used, 20 GiB / 21 GiB avail; 551 KiB/s rd, 3.7 MiB/s wr, 129 op/s
Nov 29 03:21:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:09.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:10.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 213 MiB data, 960 MiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 3.9 MiB/s wr, 163 op/s
Nov 29 03:21:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:11.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:11 np0005539550 nova_compute[257631]: 2025-11-29 08:21:11.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Nov 29 03:21:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Nov 29 03:21:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.138 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "1439a5a5-effe-49c7-b030-71c47af51dc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.138 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "1439a5a5-effe-49c7-b030-71c47af51dc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.177 257641 DEBUG nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.313 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:12.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.510 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.510 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.517 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.517 257641 INFO nova.compute.claims [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:21:12 np0005539550 nova_compute[257631]: 2025-11-29 08:21:12.613 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1498486277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.024 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.029 257641 DEBUG nova.compute.provider_tree [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.044 257641 DEBUG nova.scheduler.client.report [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.066 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
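The claim sequence above (instance_claim under the compute_resources lock, a ceph df probe, then an unchanged-inventory report) is how the resource tracker reconciles RBD capacity with placement; the DISK_GB figure in the inventory dict is derived from the pool stats. A sketch of the capacity probe itself:

    import json
    import subprocess

    raw = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw)['stats']
    print('total GiB:', stats['total_bytes'] // 1024 ** 3)   # the 21 GiB cluster above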
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.068 257641 DEBUG nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.156 257641 DEBUG nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.357 257641 INFO nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.381 257641 DEBUG nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:21:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 213 MiB data, 960 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.4 MiB/s wr, 208 op/s
Nov 29 03:21:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:13.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.950 257641 DEBUG nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.952 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.952 257641 INFO nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Creating image(s)#033[00m
Nov 29 03:21:13 np0005539550 nova_compute[257631]: 2025-11-29 08:21:13.978 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.005 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.034 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.038 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.113 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
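The guarded qemu-img probe above runs under oslo_concurrency.prlimit so that parsing an untrusted image header cannot exceed 1 GiB of address space or 30 s of CPU. The same guard is available directly from processutils:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30),
    )
    print(json.loads(out)['virtual-size'])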
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.115 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.115 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.116 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.146 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.150 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1439a5a5-effe-49c7-b030-71c47af51dc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.430 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1439a5a5-effe-49c7-b030-71c47af51dc9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.499 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] resizing rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
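The import-then-resize pair above seeds the instance's root disk: the cached base image is imported into the vms pool as a format-2 RBD image and then grown to the flavor's 1 GiB root size. The same two steps, sketched through processutils:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    disk = '1439a5a5-effe-49c7-b030-71c47af51dc9_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    processutils.execute('rbd', 'import', '--pool', 'vms', base, disk,
                         '--image-format=2', *ceph)
    processutils.execute('rbd', 'resize', '--pool', 'vms', disk,
                         '--size', '1G', *ceph)        # 1073741824 bytes, as logged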
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.623 257641 DEBUG nova.objects.instance [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lazy-loading 'migration_context' on Instance uuid 1439a5a5-effe-49c7-b030-71c47af51dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.648 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.649 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Ensure instance console log exists: /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.650 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.650 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.651 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.653 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.659 257641 WARNING nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.666 257641 DEBUG nova.virt.libvirt.host [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.667 257641 DEBUG nova.virt.libvirt.host [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.672 257641 DEBUG nova.virt.libvirt.host [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.673 257641 DEBUG nova.virt.libvirt.host [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.675 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.675 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.676 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.676 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.677 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.677 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.678 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.678 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.679 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.679 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.679 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.680 257641 DEBUG nova.virt.hardware [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
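The topology walk above starts from no flavor or image constraints (limits and preferences all 0:0:0, caps of 65536) and enumerates sockets*cores*threads factorizations of the vCPU count; with 1 vCPU the only candidate is 1:1:1. A simplified sketch of that enumeration:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log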
Nov 29 03:21:14 np0005539550 nova_compute[257631]: 2025-11-29 08:21:14.684 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1183623155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.099 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.125 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.130 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 231 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.8 MiB/s wr, 244 op/s
Nov 29 03:21:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797734271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.555 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.559 257641 DEBUG nova.objects.instance [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1439a5a5-effe-49c7-b030-71c47af51dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.590 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <uuid>1439a5a5-effe-49c7-b030-71c47af51dc9</uuid>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <name>instance-00000078</name>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersAaction247Test-server-2097250888</nova:name>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:21:14</nova:creationTime>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <nova:user uuid="464a845eba164483a26ad56dca49af3d">tempest-ServersAaction247Test-639981495-project-member</nova:user>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <nova:project uuid="6aa17a00c635424f98f28f89b9229ee2">tempest-ServersAaction247Test-639981495</nova:project>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <entry name="serial">1439a5a5-effe-49c7-b030-71c47af51dc9</entry>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <entry name="uuid">1439a5a5-effe-49c7-b030-71c47af51dc9</entry>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1439a5a5-effe-49c7-b030-71c47af51dc9_disk">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1439a5a5-effe-49c7-b030-71c47af51dc9_disk.config">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/console.log" append="off"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:21:15 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:21:15 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:21:15 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:21:15 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
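The XML above is the guest definition rendered by _get_guest_xml just before spawn; libvirt receives it as a single document over the local qemu connection. A minimal sketch of that hand-off using the libvirt-python binding, assuming the full XML (the log shows only an excerpt) has been saved to a file and that the URI is qemu:///system; Nova itself goes through its Guest/Host wrappers rather than raw calls:

```python
# Minimal sketch: feeding a guest XML like the one logged above to libvirtd.
import libvirt

with open("domain.xml") as f:          # the XML emitted by _get_guest_xml
    xml = f.read()

conn = libvirt.open("qemu:///system")  # assumed URI for the local hypervisor
try:
    dom = conn.createXML(xml, 0)       # create-and-boot, as in instance spawn
    print(dom.name(), dom.ID())
finally:
    conn.close()
```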
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.637 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.638 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.638 257641 INFO nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Using config drive#033[00m
Nov 29 03:21:15 np0005539550 nova_compute[257631]: 2025-11-29 08:21:15.659 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.267 257641 INFO nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Creating config drive at /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/disk.config#033[00m
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.272 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbn9uohi1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.403 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbn9uohi1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
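The pair of oslo_concurrency lines above record the config drive being mastered with mkisofs, and every flag is visible in the logged command. The same invocation as an argv list; the output path and staging directory are placeholders for the instance-directory and tempdir paths in the log:

```python
# Re-running the logged mkisofs command; flags copied from the log line,
# paths replaced with placeholders.
import subprocess

subprocess.run(
    ["/usr/bin/mkisofs",
     "-o", "/tmp/disk.config",              # Nova: <instance dir>/disk.config
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute",     # Nova passes its full version string
     "-quiet", "-J", "-r",
     "-V", "config-2",                      # volume label cloud-init looks for
     "/tmp/metadata-staging"],              # the log used a tempdir (/tmp/tmpbn9uohi1)
    check=True,
)
```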
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.437 257641 DEBUG nova.storage.rbd_utils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] rbd image 1439a5a5-effe-49c7-b030-71c47af51dc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:16.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.441 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/disk.config 1439a5a5-effe-49c7-b030-71c47af51dc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.590 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.606 257641 DEBUG oslo_concurrency.processutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/disk.config 1439a5a5-effe-49c7-b030-71c47af51dc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:16 np0005539550 nova_compute[257631]: 2025-11-29 08:21:16.607 257641 INFO nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Deleting local config drive /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9/disk.config because it was imported into RBD.#033[00m
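On either side of the import, nova.storage.rbd_utils probes whether the _disk.config image already exists in the vms pool (the two "does not exist" lines). A sketch of that probe with the python-rados and python-rbd bindings, reusing the pool, client id, and conf path from the logged rbd import command; the bindings themselves are assumed installed:

```python
# Existence check for the config-drive image, as rbd_utils does around the
# "rbd import" above.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vms")
    try:
        img = rbd.Image(ioctx, "1439a5a5-effe-49c7-b030-71c47af51dc9_disk.config")
        img.close()
        print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")   # matches the rbd_utils.py:80 message
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```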
Nov 29 03:21:16 np0005539550 systemd-machined[216673]: New machine qemu-59-instance-00000078.
Nov 29 03:21:16 np0005539550 systemd[1]: Started Virtual Machine qemu-59-instance-00000078.
Nov 29 03:21:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.316 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.459 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404477.4582744, 1439a5a5-effe-49c7-b030-71c47af51dc9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.460 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.462 257641 DEBUG nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.463 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.466 257641 INFO nova.virt.libvirt.driver [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Instance spawned successfully.#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.467 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 252 MiB data, 973 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.5 MiB/s wr, 264 op/s
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.485 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.490 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.490 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.491 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.491 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.492 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.492 257641 DEBUG nova.virt.libvirt.driver [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.496 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.530 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.530 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404477.4590175, 1439a5a5-effe-49c7-b030-71c47af51dc9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.531 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.552 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.556 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
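The "Synchronizing instance power state" records compare three inputs: the DB power_state (0, i.e. NOSTATE, written before boot), the hypervisor's state (1, RUNNING), and the task_state. Because task_state is still spawning, the handler logs "pending task ... Skip." rather than correcting the DB. A toy version of that decision; the numeric constants follow nova.compute.power_state:

```python
# Simplified power-state sync decision, mirroring the two log records above.
NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

def sync_action(db_state, vm_state, task_state):
    if task_state is not None:          # e.g. "spawning"
        return "skip: pending task"     # the "Skip." lines above
    if db_state != vm_state:
        return "update DB to %d" % vm_state
    return "in sync"

print(sync_action(NOSTATE, RUNNING, "spawning"))  # skip: pending task
```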
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.563 257641 INFO nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Took 3.61 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.564 257641 DEBUG nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.577 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.630 257641 INFO nova.compute.manager [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Took 5.13 seconds to build instance.#033[00m
Nov 29 03:21:17 np0005539550 nova_compute[257631]: 2025-11-29 08:21:17.650 257641 DEBUG oslo_concurrency.lockutils [None req-43a15db9-32ca-452e-8e96-a81d46eb2013 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "1439a5a5-effe-49c7-b030-71c47af51dc9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:18.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
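The recurring radosgw triples (starting new request, req done, beast access line) are anonymous "HEAD /" probes from the controllers at 192.168.122.100 and .102, each answered 200 in well under a millisecond; they are load-balancer health checks, not client traffic. A probe of the same shape against the local gateway; the RGW port is not in the log, so 8080 here is an assumption, and http.client speaks HTTP/1.1 where the probe uses 1.0:

```python
# Reproducing the health probe seen in the beast access lines above.
import http.client

conn = http.client.HTTPConnection("localhost", 8080, timeout=2)
conn.request("HEAD", "/")
print(conn.getresponse().status)   # 200, matching the logged http_status
conn.close()
```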
Nov 29 03:21:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:18.953 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:18.954 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:18.954 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 271 MiB data, 982 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.9 MiB/s wr, 259 op/s
Nov 29 03:21:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:19.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.758 257641 DEBUG nova.compute.manager [None req-5896f5f9-77fd-40c6-aab6-21e9ea0bbb85 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.799 257641 INFO nova.compute.manager [None req-5896f5f9-77fd-40c6-aab6-21e9ea0bbb85 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] instance snapshotting#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.800 257641 DEBUG nova.objects.instance [None req-5896f5f9-77fd-40c6-aab6-21e9ea0bbb85 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lazy-loading 'flavor' on Instance uuid 1439a5a5-effe-49c7-b030-71c47af51dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.911 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "1439a5a5-effe-49c7-b030-71c47af51dc9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.912 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "1439a5a5-effe-49c7-b030-71c47af51dc9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.912 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "1439a5a5-effe-49c7-b030-71c47af51dc9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.912 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "1439a5a5-effe-49c7-b030-71c47af51dc9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.912 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "1439a5a5-effe-49c7-b030-71c47af51dc9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
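Every Acquiring/acquired/released triple in these records is emitted from inner() in oslo_concurrency's lockutils, i.e. from code wrapped by its lock helpers. A minimal example of how such a guarded section is usually written; the function body here is illustrative:

```python
# How the lock triples above are produced: oslo's synchronized decorator
# logs acquire/release around the wrapped call.
from oslo_concurrency import lockutils

@lockutils.synchronized("1439a5a5-effe-49c7-b030-71c47af51dc9-events")
def _clear_events():
    pass  # runs with the named lock held; entry and exit produce the log lines

_clear_events()
```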
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.913 257641 INFO nova.compute.manager [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Terminating instance#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.914 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "refresh_cache-1439a5a5-effe-49c7-b030-71c47af51dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.914 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquired lock "refresh_cache-1439a5a5-effe-49c7-b030-71c47af51dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:19 np0005539550 nova_compute[257631]: 2025-11-29 08:21:19.914 257641 DEBUG nova.network.neutron [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005324110424809231 of space, bias 1.0, pg target 1.5972331274427694 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:21:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
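Each pg_autoscaler pair above logs a pool's share of raw capacity, its bias, and the resulting PG target. The logged targets are consistent with usage_ratio x bias x a budget of roughly 300 PGs, which matches the default mon_target_pg_per_osd of 100 across what appears to be 3 OSDs; that budget is inferred, not logged. The target is then quantized (to a power of two, subject to pool minimums), which is why 1.597 for 'vms' still reads "quantized to 32". A rough reconstruction:

```python
# Rough reconstruction of the pg_autoscaler numbers above. The 300-PG budget
# (3 OSDs x mon_target_pg_per_osd=100) is an inference, and the min/max
# clamping the real module applies is omitted.
def pg_target(usage_ratio, bias, budget=300):
    return usage_ratio * bias * budget

print(pg_target(0.005324110424809231, 1.0))     # ~1.597  -> pool 'vms'
print(pg_target(2.0538165363856318e-05, 1.0))   # ~0.0062 -> pool '.mgr'
print(pg_target(1.4540294062907128e-06, 4.0))   # ~0.0017 -> cephfs meta (bias 4)
```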
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.047 257641 DEBUG nova.network.neutron [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.076 257641 INFO nova.virt.libvirt.driver [None req-5896f5f9-77fd-40c6-aab6-21e9ea0bbb85 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Beginning live snapshot process#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.137 257641 DEBUG nova.compute.manager [None req-5896f5f9-77fd-40c6-aab6-21e9ea0bbb85 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.308 257641 DEBUG nova.network.neutron [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.330 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Releasing lock "refresh_cache-1439a5a5-effe-49c7-b030-71c47af51dc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.331 257641 DEBUG nova.compute.manager [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:21:20 np0005539550 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000078.scope: Deactivated successfully.
Nov 29 03:21:20 np0005539550 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000078.scope: Consumed 3.553s CPU time.
Nov 29 03:21:20 np0005539550 systemd-machined[216673]: Machine qemu-59-instance-00000078 terminated.
Nov 29 03:21:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:20.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.548 257641 INFO nova.virt.libvirt.driver [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Instance destroyed successfully.#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.549 257641 DEBUG nova.objects.instance [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lazy-loading 'resources' on Instance uuid 1439a5a5-effe-49c7-b030-71c47af51dc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.562 257641 DEBUG nova.compute.manager [None req-5896f5f9-77fd-40c6-aab6-21e9ea0bbb85 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Found 0 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.931 257641 INFO nova.virt.libvirt.driver [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Deleting instance files /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9_del#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.932 257641 INFO nova.virt.libvirt.driver [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Deletion of /var/lib/nova/instances/1439a5a5-effe-49c7-b030-71c47af51dc9_del complete#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.977 257641 INFO nova.compute.manager [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Took 0.65 seconds to destroy the instance on the hypervisor.#033[00m
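Between "Start destroying the instance on the hypervisor" and "Took 0.65 seconds to destroy", the work at the libvirt level is a forced power-off followed by an undefine. A sketch with libvirt-python; the domain name comes from the machine scope above (instance-00000078), while the undefine flag is a common choice rather than something read from the log:

```python
# Sketch of the hypervisor-side teardown logged above: force off, undefine.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("instance-00000078")
    dom.destroy()      # hard power-off ("Instance destroyed successfully.")
    dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)
finally:
    conn.close()
```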
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.977 257641 DEBUG oslo.service.loopingcall [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.978 257641 DEBUG nova.compute.manager [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:21:20 np0005539550 nova_compute[257631]: 2025-11-29 08:21:20.978 257641 DEBUG nova.network.neutron [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.257 257641 DEBUG nova.network.neutron [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.270 257641 DEBUG nova.network.neutron [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.291 257641 INFO nova.compute.manager [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Took 0.31 seconds to deallocate network for instance.#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.329 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.330 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.366 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404466.3663163, 258dfc76-0ea9-4521-a3fc-5d64b3632451 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.367 257641 INFO nova.compute.manager [-] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.377 257641 DEBUG oslo_concurrency.processutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.405 257641 DEBUG nova.compute.manager [None req-a96bfced-9031-4ba4-9665-883c66786489 - - - - - -] [instance: 258dfc76-0ea9-4521-a3fc-5d64b3632451] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 289 MiB data, 1002 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.3 MiB/s wr, 300 op/s
Nov 29 03:21:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:21.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.596 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915661197' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.802 257641 DEBUG oslo_concurrency.processutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
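The ceph-mon audit lines show where the "ceph df" subprocess lands: as a mon_command carrying {"prefix": "df", "format": "json"} from client.openstack. The same round trip without shelling out, via the python-rados binding (assumed installed):

```python
# The logged "ceph df --format=json" as a direct monitor command, matching
# the mon_command payload in the audit line above.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
cluster.connect()
try:
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
finally:
    cluster.shutdown()
```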
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.808 257641 DEBUG nova.compute.provider_tree [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.833 257641 DEBUG nova.scheduler.client.report [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.866 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
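The inventory blob in the report line above is what Placement schedules against; effective capacity per resource class is (total - reserved) x allocation_ratio. Worked out for the logged values:

```python
# Effective schedulable capacity from the inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```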
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.898 257641 INFO nova.scheduler.client.report [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Deleted allocations for instance 1439a5a5-effe-49c7-b030-71c47af51dc9#033[00m
Nov 29 03:21:21 np0005539550 nova_compute[257631]: 2025-11-29 08:21:21.959 257641 DEBUG oslo_concurrency.lockutils [None req-2af0eecf-181f-4e75-aa6f-831115d12f45 464a845eba164483a26ad56dca49af3d 6aa17a00c635424f98f28f89b9229ee2 - - default default] Lock "1439a5a5-effe-49c7-b030-71c47af51dc9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:22 np0005539550 nova_compute[257631]: 2025-11-29 08:21:22.332 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 278 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.8 MiB/s wr, 291 op/s
Nov 29 03:21:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:23.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:24 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Nov 29 03:21:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:24.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 278 MiB data, 1021 MiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.0 MiB/s wr, 314 op/s
Nov 29 03:21:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:25.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:26.005 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:26.006 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:21:26 np0005539550 nova_compute[257631]: 2025-11-29 08:21:26.054 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:26.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:26 np0005539550 nova_compute[257631]: 2025-11-29 08:21:26.598 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:27 np0005539550 nova_compute[257631]: 2025-11-29 08:21:27.334 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 302 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 6.0 MiB/s wr, 311 op/s
Nov 29 03:21:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:27.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:28 np0005539550 nova_compute[257631]: 2025-11-29 08:21:28.321 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:28 np0005539550 podman[332580]: 2025-11-29 08:21:28.326782633 +0000 UTC m=+0.063031464 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:21:28 np0005539550 podman[332579]: 2025-11-29 08:21:28.33609416 +0000 UTC m=+0.073331226 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible)
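The two podman records above are periodic container health checks: each runs the /openstack/healthcheck script mounted into the container and reports health_status=healthy with a zero failing streak. The same check can be triggered by hand; the container name is taken from the log:

```python
# Manually running the health check that produced the health_status events.
import subprocess

result = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
print("healthy" if result.returncode == 0 else "unhealthy")
```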
Nov 29 03:21:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 317 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.6 MiB/s wr, 317 op/s
Nov 29 03:21:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:29.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.937 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.937 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.938 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.938 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.958 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:21:29 np0005539550 nova_compute[257631]: 2025-11-29 08:21:29.959 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:30.008 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:21:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451607604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.400 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.603 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.604 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4343MB free_disk=20.831424713134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.605 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.605 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.660 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.661 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:21:30 np0005539550 nova_compute[257631]: 2025-11-29 08:21:30.686 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1728752954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:31 np0005539550 nova_compute[257631]: 2025-11-29 08:21:31.132 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:31 np0005539550 nova_compute[257631]: 2025-11-29 08:21:31.138 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:21:31 np0005539550 nova_compute[257631]: 2025-11-29 08:21:31.162 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:21:31 np0005539550 nova_compute[257631]: 2025-11-29 08:21:31.192 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:21:31 np0005539550 nova_compute[257631]: 2025-11-29 08:21:31.192 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.0 MiB/s wr, 324 op/s
Nov 29 03:21:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:31.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:31 np0005539550 nova_compute[257631]: 2025-11-29 08:21:31.601 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:32 np0005539550 nova_compute[257631]: 2025-11-29 08:21:32.174 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:32 np0005539550 nova_compute[257631]: 2025-11-29 08:21:32.175 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:32 np0005539550 nova_compute[257631]: 2025-11-29 08:21:32.336 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:32.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.608099) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492608153, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 2216, "num_deletes": 255, "total_data_size": 3694712, "memory_usage": 3760568, "flush_reason": "Manual Compaction"}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492627736, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3625781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43693, "largest_seqno": 45908, "table_properties": {"data_size": 3615901, "index_size": 6182, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21788, "raw_average_key_size": 20, "raw_value_size": 3595663, "raw_average_value_size": 3454, "num_data_blocks": 268, "num_entries": 1041, "num_filter_entries": 1041, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404297, "oldest_key_time": 1764404297, "file_creation_time": 1764404492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 19679 microseconds, and 7483 cpu microseconds.
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.627774) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3625781 bytes OK
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.627792) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.629732) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.629745) EVENT_LOG_v1 {"time_micros": 1764404492629740, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.629770) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3685557, prev total WAL file size 3685557, number of live WAL files 2.
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.630637) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3540KB)], [95(11MB)]
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492630668, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 15606878, "oldest_snapshot_seqno": -1}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 8185 keys, 13656331 bytes, temperature: kUnknown
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492727003, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 13656331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13599847, "index_size": 34932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 211423, "raw_average_key_size": 25, "raw_value_size": 13451829, "raw_average_value_size": 1643, "num_data_blocks": 1376, "num_entries": 8185, "num_filter_entries": 8185, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404492, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.727352) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13656331 bytes
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.728795) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.8 rd, 141.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 11.4 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 8712, records dropped: 527 output_compression: NoCompression
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.728825) EVENT_LOG_v1 {"time_micros": 1764404492728811, "job": 56, "event": "compaction_finished", "compaction_time_micros": 96440, "compaction_time_cpu_micros": 30004, "output_level": 6, "num_output_files": 1, "total_output_size": 13656331, "num_input_records": 8712, "num_output_records": 8185, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492730299, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404492734226, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.630578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.734267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.734271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.734272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.734274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:32.734276) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 326 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 248 op/s
Nov 29 03:21:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:33.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:34.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 325 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 265 op/s
Nov 29 03:21:35 np0005539550 nova_compute[257631]: 2025-11-29 08:21:35.548 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404480.5466309, 1439a5a5-effe-49c7-b030-71c47af51dc9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:21:35 np0005539550 nova_compute[257631]: 2025-11-29 08:21:35.548 257641 INFO nova.compute.manager [-] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] VM Stopped (Lifecycle Event)
Nov 29 03:21:35 np0005539550 nova_compute[257631]: 2025-11-29 08:21:35.572 257641 DEBUG nova.compute.manager [None req-381e132a-b587-4d20-9a0e-706838d52a95 - - - - - -] [instance: 1439a5a5-effe-49c7-b030-71c47af51dc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:21:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:35.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:35 np0005539550 nova_compute[257631]: 2025-11-29 08:21:35.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:35 np0005539550 nova_compute[257631]: 2025-11-29 08:21:35.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:35 np0005539550 nova_compute[257631]: 2025-11-29 08:21:35.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:21:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:36.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:36 np0005539550 nova_compute[257631]: 2025-11-29 08:21:36.572 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:36 np0005539550 nova_compute[257631]: 2025-11-29 08:21:36.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.339 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 314 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Nov 29 03:21:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:37.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.725 257641 DEBUG nova.compute.manager [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.860 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.860 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.881 257641 DEBUG nova.objects.instance [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'pci_requests' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.914 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.914 257641 INFO nova.compute.claims [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.915 257641 DEBUG nova.objects.instance [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'resources' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.940 257641 DEBUG nova.objects.instance [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'numa_topology' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:37 np0005539550 nova_compute[257631]: 2025-11-29 08:21:37.966 257641 DEBUG nova.objects.instance [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'pci_devices' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.025 257641 INFO nova.compute.resource_tracker [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating resource usage from migration 485f1b6d-5a0c-41df-a6c5-8c0287fb9ee9
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.026 257641 DEBUG nova.compute.resource_tracker [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Starting to track incoming migration 485f1b6d-5a0c-41df-a6c5-8c0287fb9ee9 with flavor b4d0f3a6-e3dc-4216-aee8-148280e428cc _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.104 257641 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:38.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/135087682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.560 257641 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.567 257641 DEBUG nova.compute.provider_tree [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.592 257641 DEBUG nova.scheduler.client.report [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.631 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.632 257641 INFO nova.compute.manager [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Migrating
Nov 29 03:21:38 np0005539550 nova_compute[257631]: 2025-11-29 08:21:38.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:21:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 297 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1002 KiB/s rd, 3.4 MiB/s wr, 138 op/s
Nov 29 03:21:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:39.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:40 np0005539550 podman[332744]: 2025-11-29 08:21:40.375036892 +0000 UTC m=+0.105567866 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:21:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:40.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 320 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 878 KiB/s rd, 4.0 MiB/s wr, 162 op/s
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.592 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.592 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:41.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.606 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.611 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:21:41 np0005539550 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:21:41 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:21:41 np0005539550 systemd-logind[788]: New session 60 of user nova.
Nov 29 03:21:41 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:21:41 np0005539550 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.685 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.686 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.698 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.699 257641 INFO nova.compute.claims [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:21:41 np0005539550 systemd[332774]: Queued start job for default target Main User Target.
Nov 29 03:21:41 np0005539550 systemd[332774]: Created slice User Application Slice.
Nov 29 03:21:41 np0005539550 systemd[332774]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:21:41 np0005539550 systemd[332774]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:21:41 np0005539550 systemd[332774]: Reached target Paths.
Nov 29 03:21:41 np0005539550 systemd[332774]: Reached target Timers.
Nov 29 03:21:41 np0005539550 systemd[332774]: Starting D-Bus User Message Bus Socket...
Nov 29 03:21:41 np0005539550 systemd[332774]: Starting Create User's Volatile Files and Directories...
Nov 29 03:21:41 np0005539550 systemd[332774]: Finished Create User's Volatile Files and Directories.
Nov 29 03:21:41 np0005539550 systemd[332774]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:21:41 np0005539550 systemd[332774]: Reached target Sockets.
Nov 29 03:21:41 np0005539550 systemd[332774]: Reached target Basic System.
Nov 29 03:21:41 np0005539550 systemd[332774]: Reached target Main User Target.
Nov 29 03:21:41 np0005539550 systemd[332774]: Startup finished in 163ms.
Nov 29 03:21:41 np0005539550 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:21:41 np0005539550 systemd[1]: Started Session 60 of User nova.
Nov 29 03:21:41 np0005539550 nova_compute[257631]: 2025-11-29 08:21:41.866 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:41 np0005539550 systemd[1]: session-60.scope: Deactivated successfully.
Nov 29 03:21:41 np0005539550 systemd-logind[788]: Session 60 logged out. Waiting for processes to exit.
Nov 29 03:21:41 np0005539550 systemd-logind[788]: Removed session 60.
Nov 29 03:21:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:42 np0005539550 systemd-logind[788]: New session 62 of user nova.
Nov 29 03:21:42 np0005539550 systemd[1]: Started Session 62 of User nova.
Nov 29 03:21:42 np0005539550 systemd[1]: session-62.scope: Deactivated successfully.
Nov 29 03:21:42 np0005539550 systemd-logind[788]: Session 62 logged out. Waiting for processes to exit.
Nov 29 03:21:42 np0005539550 systemd-logind[788]: Removed session 62.
Nov 29 03:21:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2181825109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.317 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.327 257641 DEBUG nova.compute.provider_tree [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.348 257641 DEBUG nova.scheduler.client.report [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.382 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.387 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.388 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.434 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.434 257641 DEBUG nova.network.neutron [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.452 257641 INFO nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.470 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:21:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:42.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.562 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.564 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.564 257641 INFO nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Creating image(s)#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.595 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.628 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.663 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.668 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.698 257641 DEBUG nova.policy [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0662b63a7f1f4a00960875249475e54a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '36e3df454d054a1abd33ac1e9ae39e7f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.734 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.735 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.736 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.736 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
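
These acquire/wait/release triples are oslo.concurrency's named locks: the lock name is the base image's cache key, so only one greenthread fetches a given base image at a time (here it is released after 0.000s because the base file already exists). The same pattern in miniature, with illustrative body:

    from oslo_concurrency import lockutils

    def fetch_func_sync(base_file):
        # One fetch per cached base image at a time, as in the log above;
        # lockutils.lock() is a context manager keyed by name.
        with lockutils.lock(base_file):
            pass  # download/convert the base image here if missing
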
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.761 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:42 np0005539550 nova_compute[257631]: 2025-11-29 08:21:42.765 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 24ab8319-0576-4b43-a61b-63b34b98158a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.171 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 24ab8319-0576-4b43-a61b-63b34b98158a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.266 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] resizing rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
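
After the CLI import lands the base image in the "vms" pool, nova grows it to the flavor's 1 GiB root disk (1073741824 bytes, matching m1.nano's root_gb=1 seen later). A sketch of that resize step via the python-rbd bindings; identifiers are copied from the log lines above, the plumbing is illustrative:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "24ab8319-0576-4b43-a61b-63b34b98158a_disk") as image:
            image.resize(1073741824)  # "resizing rbd image ... to 1073741824"
    finally:
        ioctx.close()
        cluster.shutdown()
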
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.378 257641 DEBUG nova.objects.instance [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lazy-loading 'migration_context' on Instance uuid 24ab8319-0576-4b43-a61b-63b34b98158a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.399 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.399 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Ensure instance console log exists: /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.400 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.400 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.401 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 322 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 371 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Nov 29 03:21:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:43.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
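The anonymous "HEAD / HTTP/1.0" requests arriving every second or so from 192.168.122.100 and .102 have the shape of load-balancer health checks against radosgw's beast frontend. A probe of the same shape (the RGW port is an assumption; the log does not record it):

    import http.client

    # Issue the same anonymous HEAD / that the beast access lines above show;
    # host from the log, port 8080 assumed for the RGW endpoint.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # 200, as in the access-log entries above
    conn.close()
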
Nov 29 03:21:43 np0005539550 nova_compute[257631]: 2025-11-29 08:21:43.741 257641 DEBUG nova.network.neutron [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Successfully created port: 891c612b-0aae-4bfc-af98-c0c12c5fe5fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
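
Here _create_port_minimal asks Neutron for a port on the tenant network; roughly the same call made directly through openstacksdk (cloud name is an assumption, the network id is the one that appears in the cache update further down):

    import openstack

    conn = openstack.connect(cloud="openstack")  # cloud entry name assumed
    port = conn.network.create_port(
        network_id="4e0c4650-e56a-4802-81ca-91c563de7d3e",
    )
    print(port.id)  # e.g. 891c612b-0aae-4bfc-af98-c0c12c5fe5fc in this log
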
Nov 29 03:21:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:44.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.707 257641 DEBUG nova.compute.manager [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.708 257641 DEBUG oslo_concurrency.lockutils [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.708 257641 DEBUG oslo_concurrency.lockutils [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.709 257641 DEBUG oslo_concurrency.lockutils [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.709 257641 DEBUG nova.compute.manager [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.710 257641 WARNING nova.compute.manager [req-6e8cf0c9-580c-4018-96c1-b0f9f11f2382 req-9bcb63e4-8ce4-4c53-955c-4d1b2517f6c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state resize_migrating.
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.828 257641 DEBUG nova.network.neutron [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Successfully updated port: 891c612b-0aae-4bfc-af98-c0c12c5fe5fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.853 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "refresh_cache-24ab8319-0576-4b43-a61b-63b34b98158a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.854 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquired lock "refresh_cache-24ab8319-0576-4b43-a61b-63b34b98158a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:21:44 np0005539550 nova_compute[257631]: 2025-11-29 08:21:44.854 257641 DEBUG nova.network.neutron [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:21:45 np0005539550 nova_compute[257631]: 2025-11-29 08:21:45.135 257641 DEBUG nova.network.neutron [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:21:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 313 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 389 KiB/s rd, 4.4 MiB/s wr, 161 op/s
Nov 29 03:21:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:45.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:45 np0005539550 nova_compute[257631]: 2025-11-29 08:21:45.613 257641 DEBUG nova.compute.manager [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received event network-changed-891c612b-0aae-4bfc-af98-c0c12c5fe5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:45 np0005539550 nova_compute[257631]: 2025-11-29 08:21:45.614 257641 DEBUG nova.compute.manager [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Refreshing instance network info cache due to event network-changed-891c612b-0aae-4bfc-af98-c0c12c5fe5fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:21:45 np0005539550 nova_compute[257631]: 2025-11-29 08:21:45.614 257641 DEBUG oslo_concurrency.lockutils [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-24ab8319-0576-4b43-a61b-63b34b98158a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:45 np0005539550 nova_compute[257631]: 2025-11-29 08:21:45.865 257641 INFO nova.network.neutron [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating port f435ee76-ed2f-4ad8-a9e1-bda955080b3e with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.335 257641 DEBUG nova.network.neutron [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Updating instance_info_cache with network_info: [{"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
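
The instance_info_cache payload logged above is plain JSON. A small helper for pulling the fixed addresses out of such a network_info list (helper name is illustrative):

    import json

    def fixed_ips(network_info_json):
        """Return all fixed IP addresses from a nova network_info JSON dump."""
        vifs = json.loads(network_info_json)
        return [
            ip["address"]
            for vif in vifs
            for subnet in vif["network"]["subnets"]
            for ip in subnet["ips"]
            if ip["type"] == "fixed"
        ]

    # Applied to the payload above, this returns ["10.100.0.4"].
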
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.358 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Releasing lock "refresh_cache-24ab8319-0576-4b43-a61b-63b34b98158a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.358 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Instance network_info: |[{"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.359 257641 DEBUG oslo_concurrency.lockutils [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-24ab8319-0576-4b43-a61b-63b34b98158a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.359 257641 DEBUG nova.network.neutron [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Refreshing network info cache for port 891c612b-0aae-4bfc-af98-c0c12c5fe5fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.364 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Start _get_guest_xml network_info=[{"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.371 257641 WARNING nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.377 257641 DEBUG nova.virt.libvirt.host [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.378 257641 DEBUG nova.virt.libvirt.host [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.384 257641 DEBUG nova.virt.libvirt.host [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.385 257641 DEBUG nova.virt.libvirt.host [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.386 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.386 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.386 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.386 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.386 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.387 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.387 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.387 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.387 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.387 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.387 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.388 257641 DEBUG nova.virt.hardware [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
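
With no constraints from flavor or image (limits default to 65536 each), the topology search amounts to enumerating every (sockets, cores, threads) factorization of the vCPU count; for 1 vCPU the only split is 1:1:1, which is why exactly one topology is logged. A sketch of that search (function name illustrative, not nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product is vcpus."""
        found = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- the single topology in the log
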
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.390 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:46.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/758549609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.867 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
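
The `ceph mon dump --format=json` call (dispatched by the monitor as the audit lines above show) is how nova discovers the monitor addresses that later appear as the three <host> entries in the disk XML. A sketch of issuing and parsing the same command:

    import json
    import subprocess

    # Same command as the logged CMD; the mon map JSON carries a "mons" list.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)
    names = [m["name"] for m in mon_map["mons"]]
    print(names)  # monitor names; their addresses feed the libvirt <source> hosts
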
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.907 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.912 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.948 257641 DEBUG nova.compute.manager [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.949 257641 DEBUG oslo_concurrency.lockutils [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.950 257641 DEBUG oslo_concurrency.lockutils [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.950 257641 DEBUG oslo_concurrency.lockutils [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.950 257641 DEBUG nova.compute.manager [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:21:46 np0005539550 nova_compute[257631]: 2025-11-29 08:21:46.951 257641 WARNING nova.compute.manager [req-2bd66353-d7a3-482d-9b89-82b5a61c8963 req-f011d0c0-0171-46ee-a0ce-b6fc365b60a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state resize_migrated.
Nov 29 03:21:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.260 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.261 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.261 257641 DEBUG nova.network.neutron [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:21:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350603526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.363 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.364 257641 DEBUG nova.virt.libvirt.vif [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1933298515',display_name='tempest-ServerAddressesTestJSON-server-1933298515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1933298515',id=122,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36e3df454d054a1abd33ac1e9ae39e7f',ramdisk_id='',reservation_id='r-9w6voqvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1656158748',owner_user_name='tempest-ServerAddressesTestJSON-1656158748-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:42Z,user_data=None,user_id='0662b63a7f1f4a00960875249475e54a',uuid=24ab8319-0576-4b43-a61b-63b34b98158a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.365 257641 DEBUG nova.network.os_vif_util [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Converting VIF {"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.366 257641 DEBUG nova.network.os_vif_util [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:a0:75,bridge_name='br-int',has_traffic_filtering=True,id=891c612b-0aae-4bfc-af98-c0c12c5fe5fc,network=Network(4e0c4650-e56a-4802-81ca-91c563de7d3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891c612b-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
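
The conversion above turns nova's VIF dict into an os-vif VIFOpenVSwitch object, which is what the plug step further down consumes. A minimal sketch of that plug with the os-vif library, assuming a converted vif object and an instance like the one in the log (the InstanceInfo fields shown are the common ones; details of nova's own call differ):

    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()  # load the registered plugins (ovs, linux_bridge, ...)

    def plug(vif, instance):
        # Wrap the bare instance identity the way os-vif expects it.
        info = instance_info.InstanceInfo(uuid=instance.uuid, name=instance.name)
        os_vif.plug(vif, info)  # for VIFOpenVSwitch: create the tap/OVS port
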
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.368 257641 DEBUG nova.objects.instance [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lazy-loading 'pci_devices' on Instance uuid 24ab8319-0576-4b43-a61b-63b34b98158a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.382 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <uuid>24ab8319-0576-4b43-a61b-63b34b98158a</uuid>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <name>instance-0000007a</name>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerAddressesTestJSON-server-1933298515</nova:name>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:21:46</nova:creationTime>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:user uuid="0662b63a7f1f4a00960875249475e54a">tempest-ServerAddressesTestJSON-1656158748-project-member</nova:user>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:project uuid="36e3df454d054a1abd33ac1e9ae39e7f">tempest-ServerAddressesTestJSON-1656158748</nova:project>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <nova:port uuid="891c612b-0aae-4bfc-af98-c0c12c5fe5fc">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <entry name="serial">24ab8319-0576-4b43-a61b-63b34b98158a</entry>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <entry name="uuid">24ab8319-0576-4b43-a61b-63b34b98158a</entry>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/24ab8319-0576-4b43-a61b-63b34b98158a_disk">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/24ab8319-0576-4b43-a61b-63b34b98158a_disk.config">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:6c:a0:75"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <target dev="tap891c612b-0a"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/console.log" append="off"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:21:47 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:21:47 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:21:47 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:21:47 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
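
The <domain> document above is what the driver hands to libvirt next. Defining and booting such a guest with libvirt-python looks roughly like this (nova's actual launch path passes flags and handles rollback; this is only the shape of the call):

    import libvirt

    def launch(xml):
        """Define and start a guest from a <domain> XML string like the dump above."""
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persist the domain definition
            dom.createWithFlags(0)     # boot the defined guest
        finally:
            conn.close()
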
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.384 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Preparing to wait for external event network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.384 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.384 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.385 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
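
This is the other half of the event machinery seen earlier: compute registers the network-vif-plugged event *before* plugging the VIF, then blocks until neutron's notification pops it; the earlier "No waiting events found dispatching ..." warnings are exactly what happens when a notification arrives with nothing registered. A sketch of the prepare-then-pop pattern (threading used for brevity; nova's own implementation differs in detail):

    import threading

    _events = {}
    _events_lock = threading.Lock()  # cf. the "<uuid>-events" lock in the log

    def prepare_for_instance_event(instance_uuid, event_name):
        with _events_lock:
            key = (instance_uuid, event_name)
            return _events.setdefault(key, threading.Event())

    def pop_instance_event(instance_uuid, event_name):
        with _events_lock:
            ev = _events.pop((instance_uuid, event_name), None)
        if ev is None:
            return False  # "No waiting events found dispatching ..."
        ev.set()          # wakes whoever is blocked in ev.wait(timeout)
        return True
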
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.385 257641 DEBUG nova.virt.libvirt.vif [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1933298515',display_name='tempest-ServerAddressesTestJSON-server-1933298515',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1933298515',id=122,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='36e3df454d054a1abd33ac1e9ae39e7f',ramdisk_id='',reservation_id='r-9w6voqvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1656158748',owner_user_name='tempest-ServerAddressesTestJSON-1656158748-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:42Z,user_data=None,user_id='0662b63a7f1f4a00960875249475e54a',uuid=24ab8319-0576-4b43-a61b-63b34b98158a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.386 257641 DEBUG nova.network.os_vif_util [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Converting VIF {"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.386 257641 DEBUG nova.network.os_vif_util [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:a0:75,bridge_name='br-int',has_traffic_filtering=True,id=891c612b-0aae-4bfc-af98-c0c12c5fe5fc,network=Network(4e0c4650-e56a-4802-81ca-91c563de7d3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891c612b-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.386 257641 DEBUG os_vif [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:a0:75,bridge_name='br-int',has_traffic_filtering=True,id=891c612b-0aae-4bfc-af98-c0c12c5fe5fc,network=Network(4e0c4650-e56a-4802-81ca-91c563de7d3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891c612b-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.387 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.389 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.389 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.389 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.392 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.393 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap891c612b-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.393 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap891c612b-0a, col_values=(('external_ids', {'iface-id': '891c612b-0aae-4bfc-af98-c0c12c5fe5fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:a0:75', 'vm-uuid': '24ab8319-0576-4b43-a61b-63b34b98158a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.394 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
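The two ovsdbapp transactions above are the whole plug at the OVSDB level: idempotently ensure br-int exists (note "Transaction caused no change"), add the tap port, and stamp the Interface with the external_ids that OVN will match on. Roughly the same calls driven through ovsdbapp directly; the socket path and timeout here are assumptions, not values from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap891c612b-0a", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap891c612b-0a",
            ("external_ids", {
                "iface-id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:6c:a0:75",
                "vm-uuid": "24ab8319-0576-4b43-a61b-63b34b98158a"})))
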
Nov 29 03:21:47 np0005539550 NetworkManager[49039]: <info>  [1764404507.3957] manager: (tap891c612b-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.397 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.402 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.403 257641 INFO os_vif [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:a0:75,bridge_name='br-int',has_traffic_filtering=True,id=891c612b-0aae-4bfc-af98-c0c12c5fe5fc,network=Network(4e0c4650-e56a-4802-81ca-91c563de7d3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891c612b-0a')#033[00m
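At this point the plug is complete end to end: Nova's VIF dict was converted to a VIFOpenVSwitch object, os_vif.plug() dispatched to the 'ovs' plugin, and the transactions above wired the port. A trimmed, hypothetical sketch of driving that same entry point directly; only the fields visible in the log are filled in, and a real call also needs the full Network/Subnet objects:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    inst = instance_info.InstanceInfo(
        uuid="24ab8319-0576-4b43-a61b-63b34b98158a",
        name="instance-0000007a")
    v = vif.VIFOpenVSwitch(
        id="891c612b-0aae-4bfc-af98-c0c12c5fe5fc",
        address="fa:16:3e:6c:a0:75",
        bridge_name="br-int",
        vif_name="tap891c612b-0a",
        network=network.Network(id="4e0c4650-e56a-4802-81ca-91c563de7d3e"))
    os_vif.plug(v, inst)  # dispatches to the 'ovs' plugin named above
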
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.454 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.455 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.455 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] No VIF found with MAC fa:16:3e:6c:a0:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.456 257641 INFO nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Using config drive#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.484 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 316 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 4.0 MiB/s wr, 143 op/s
Nov 29 03:21:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:47.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.836 257641 DEBUG nova.compute.manager [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.836 257641 DEBUG nova.compute.manager [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing instance network info cache due to event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.837 257641 DEBUG oslo_concurrency.lockutils [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.843 257641 INFO nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Creating config drive at /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/disk.config#033[00m
Nov 29 03:21:47 np0005539550 nova_compute[257631]: 2025-11-29 08:21:47.852 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeq7ef82o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.003 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeq7ef82o" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.032 257641 DEBUG nova.storage.rbd_utils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] rbd image 24ab8319-0576-4b43-a61b-63b34b98158a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.038 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/disk.config 24ab8319-0576-4b43-a61b-63b34b98158a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:48.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.495 257641 DEBUG oslo_concurrency.processutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/disk.config 24ab8319-0576-4b43-a61b-63b34b98158a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.498 257641 INFO nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Deleting local config drive /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a/disk.config because it was imported into RBD.#033[00m
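The config-drive sequence above is two shell-outs: mkisofs packs the metadata tree from the tempdir into an ISO9660 image labelled config-2, and rbd import moves it into the vms pool, after which the local file is deleted. The same pair reduced to a sketch around the oslo.concurrency helper the log itself uses (arguments copied from the two CMD lines, the -publisher string omitted; the tempdir is whatever Nova generated for this boot):

    from oslo_concurrency import processutils

    iso = ("/var/lib/nova/instances/"
           "24ab8319-0576-4b43-a61b-63b34b98158a/disk.config")
    processutils.execute(
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-quiet", "-J", "-r",
        "-V", "config-2", "/tmp/tmpeq7ef82o")
    processutils.execute(
        "rbd", "import", "--pool", "vms", iso,
        "24ab8319-0576-4b43-a61b-63b34b98158a_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")
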
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.553 257641 DEBUG nova.network.neutron [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Updated VIF entry in instance network info cache for port 891c612b-0aae-4bfc-af98-c0c12c5fe5fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.554 257641 DEBUG nova.network.neutron [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Updating instance_info_cache with network_info: [{"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:48 np0005539550 kernel: tap891c612b-0a: entered promiscuous mode
Nov 29 03:21:48 np0005539550 NetworkManager[49039]: <info>  [1764404508.5719] manager: (tap891c612b-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Nov 29 03:21:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:48Z|00493|binding|INFO|Claiming lport 891c612b-0aae-4bfc-af98-c0c12c5fe5fc for this chassis.
Nov 29 03:21:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:48Z|00494|binding|INFO|891c612b-0aae-4bfc-af98-c0c12c5fe5fc: Claiming fa:16:3e:6c:a0:75 10.100.0.4
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.681 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.683 257641 DEBUG oslo_concurrency.lockutils [req-5e337d5a-37a5-4e1c-9dea-16a3f89cf358 req-8e5d9903-8c4b-4d53-90b7-3f3e8d2ce337 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-24ab8319-0576-4b43-a61b-63b34b98158a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.693 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:a0:75 10.100.0.4'], port_security=['fa:16:3e:6c:a0:75 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '24ab8319-0576-4b43-a61b-63b34b98158a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36e3df454d054a1abd33ac1e9ae39e7f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1af7b1f9-1ea8-4e02-bfb0-43bff1d2856a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=069fe775-5708-4115-b88b-075f10fee5ff, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=891c612b-0aae-4bfc-af98-c0c12c5fe5fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.694 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 891c612b-0aae-4bfc-af98-c0c12c5fe5fc in datapath 4e0c4650-e56a-4802-81ca-91c563de7d3e bound to our chassis#033[00m
Nov 29 03:21:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:48Z|00495|binding|INFO|Setting lport 891c612b-0aae-4bfc-af98-c0c12c5fe5fc ovn-installed in OVS
Nov 29 03:21:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:48Z|00496|binding|INFO|Setting lport 891c612b-0aae-4bfc-af98-c0c12c5fe5fc up in Southbound
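The claim sequence above is the OVN half of the handshake: ovn-controller sees an OVS Interface whose external_ids:iface-id matches a Southbound Port_Binding whose requested-chassis is this host (visible in the Port_Binding row below), claims the lport, marks it ovn-installed, and flips it up in the Southbound DB, which is what ultimately lets Neutron send network-vif-plugged back to Nova. One way to verify the binding after the fact, as a sketch shelling out to ovn-sbctl (must run where the Southbound DB is reachable):

    import json
    import subprocess

    out = subprocess.run(
        ["ovn-sbctl", "--format=json", "find", "Port_Binding",
         "logical_port=891c612b-0aae-4bfc-af98-c0c12c5fe5fc"],
        capture_output=True, text=True, check=True).stdout
    print(json.loads(out)["data"])  # the chassis column should now be set
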
Nov 29 03:21:48 np0005539550 nova_compute[257631]: 2025-11-29 08:21:48.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.696 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e0c4650-e56a-4802-81ca-91c563de7d3e#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.707 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c52e7cfc-dc15-48ce-b3b4-b696b09c96bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.708 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e0c4650-e1 in ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.710 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e0c4650-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.710 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8c0678-6513-4fd0-bd98-dfc035e77bd4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.711 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[21c618a0-a83f-49fa-8447-f92005e02e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
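Provisioning metadata for the datapath means building a per-network namespace with a veth pair: tap4e0c4650-e0 stays in the root namespace and is plugged into br-int, while its peer tap4e0c4650-e1 moves into ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e. A rough standalone equivalent with pyroute2, the library the agent drives through the privsep replies above (a sketch, not the agent's own code):

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e"
    if ns not in netns.listnetns():
        netns.create(ns)

    ipr = IPRoute()
    ipr.link("add", ifname="tap4e0c4650-e0", kind="veth",
             peer="tap4e0c4650-e1")
    peer = ipr.link_lookup(ifname="tap4e0c4650-e1")[0]
    ipr.link("set", index=peer, net_ns_fd=ns)  # move peer into the namespace
    ipr.close()
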
Nov 29 03:21:48 np0005539550 systemd-machined[216673]: New machine qemu-60-instance-0000007a.
Nov 29 03:21:48 np0005539550 systemd-udevd[333241]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:48 np0005539550 systemd[1]: Started Virtual Machine qemu-60-instance-0000007a.
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.724 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[1459e129-34fc-46f0-b86c-5ca52f8e5041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 NetworkManager[49039]: <info>  [1764404508.7433] device (tap891c612b-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:48 np0005539550 NetworkManager[49039]: <info>  [1764404508.7440] device (tap891c612b-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.748 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fed7859b-0442-4cee-a90c-271b7e7dd566]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.784 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4522e345-90bc-478c-bff8-76ef19405667]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 systemd-udevd[333245]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.792 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30d2df74-9d37-4b5a-a852-41c437a3d52b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 NetworkManager[49039]: <info>  [1764404508.7928] manager: (tap4e0c4650-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.829 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[94d651ce-e7d2-424c-8156-c0cb9bfa20a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.833 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[38dd7f67-bfda-4d25-939f-03ea29513b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 NetworkManager[49039]: <info>  [1764404508.8561] device (tap4e0c4650-e0): carrier: link connected
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.865 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d70ed64a-e46c-4573-b1cc-1d182138ae01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.883 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[31cdcf4b-daf6-4449-b0ea-5f82a6088d11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e0c4650-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ff:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755652, 'reachable_time': 31224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333287, 'error': None, 'target': 'ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.899 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[56747519-e358-489b-8bba-c6a17f4186f8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:ffb7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 755652, 'tstamp': 755652}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333288, 'error': None, 'target': 'ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:21:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.915 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[be75a7d1-c6d8-4027-ad73-a526ec7fc194]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e0c4650-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:ff:b7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755652, 'reachable_time': 31224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333289, 'error': None, 'target': 'ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:21:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:21:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:21:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:48.944 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b2636e93-15c1-466e-8faa-ed422ad69a13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.001 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[16acd291-1d8c-41bc-956a-c54848fcdf74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.003 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e0c4650-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.004 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.004 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e0c4650-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.007 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:49 np0005539550 kernel: tap4e0c4650-e0: entered promiscuous mode
Nov 29 03:21:49 np0005539550 NetworkManager[49039]: <info>  [1764404509.0068] manager: (tap4e0c4650-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.008 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e0c4650-e0, col_values=(('external_ids', {'iface-id': '9668ea6a-8718-434f-a9e7-f9e4d98d4063'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.010 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:49 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:49Z|00497|binding|INFO|Releasing lport 9668ea6a-8718-434f-a9e7-f9e4d98d4063 from this chassis (sb_readonly=0)
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.025 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.026 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e0c4650-e56a-4802-81ca-91c563de7d3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e0c4650-e56a-4802-81ca-91c563de7d3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.027 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[973b4833-1447-4ede-85e0-a89a07012bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.028 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-4e0c4650-e56a-4802-81ca-91c563de7d3e
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/4e0c4650-e56a-4802-81ca-91c563de7d3e.pid.haproxy
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 4e0c4650-e56a-4802-81ca-91c563de7d3e
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:21:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:49.028 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'env', 'PROCESS_TAG=haproxy-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e0c4650-e56a-4802-81ca-91c563de7d3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
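The generated haproxy config above is the entire metadata path for this network: inside the namespace it binds the link-local 169.254.169.254:80, adds X-OVN-Network-ID so the agent can attribute the request to the right network, and forwards to the metadata UNIX socket at /var/lib/neutron/metadata_proxy. Once the proxy spawned by the rootwrap command above is running, the plumbing can be exercised from the host; a sketch, assuming curl is installed:

    import subprocess

    ns = "ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e"
    r = subprocess.run(
        ["ip", "netns", "exec", ns, "curl", "-s",
         "http://169.254.169.254/openstack/latest/meta_data.json"],
        capture_output=True, text=True)
    print(r.stdout or r.stderr)
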
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.106 257641 DEBUG nova.network.neutron [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.125 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.129 257641 DEBUG oslo_concurrency.lockutils [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.130 257641 DEBUG nova.network.neutron [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.192 257641 DEBUG nova.compute.manager [req-eaec364c-54a4-4760-82d9-395b559eb64d req-6e84af65-2380-420b-aa22-52af514455a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received event network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.193 257641 DEBUG oslo_concurrency.lockutils [req-eaec364c-54a4-4760-82d9-395b559eb64d req-6e84af65-2380-420b-aa22-52af514455a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.193 257641 DEBUG oslo_concurrency.lockutils [req-eaec364c-54a4-4760-82d9-395b559eb64d req-6e84af65-2380-420b-aa22-52af514455a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.193 257641 DEBUG oslo_concurrency.lockutils [req-eaec364c-54a4-4760-82d9-395b559eb64d req-6e84af65-2380-420b-aa22-52af514455a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.194 257641 DEBUG nova.compute.manager [req-eaec364c-54a4-4760-82d9-395b559eb64d req-6e84af65-2380-420b-aa22-52af514455a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Processing event network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.210 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.212 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.212 257641 INFO nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Creating image(s)#033[00m
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bfb3aea7-eeb0-4c6e-a81d-e933c125391c does not exist
Nov 29 03:21:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0a9924ba-c9c0-49f9-b172-fb6e50d6ab09 does not exist
Nov 29 03:21:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 08a69a01-2fb4-4434-8839-14ba693afc3b does not exist
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.256 257641 DEBUG nova.storage.rbd_utils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] creating snapshot(nova-resize) on rbd image(37bf3f0c-b49b-457b-81be-b4b31f32d872_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.356 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404509.3556492, 24ab8319-0576-4b43-a61b-63b34b98158a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.356 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.358 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.364 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.368 257641 INFO nova.virt.libvirt.driver [-] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Instance spawned successfully.#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.368 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.386 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.389 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:49 np0005539550 podman[333429]: 2025-11-29 08:21:49.394812542 +0000 UTC m=+0.057537424 container create 951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.398 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.398 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.399 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.399 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.399 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.400 257641 DEBUG nova.virt.libvirt.driver [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.427 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.428 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404509.3565712, 24ab8319-0576-4b43-a61b-63b34b98158a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.428 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:21:49 np0005539550 systemd[1]: Started libpod-conmon-951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4.scope.
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.451 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:49 np0005539550 podman[333429]: 2025-11-29 08:21:49.35818101 +0000 UTC m=+0.020905912 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.461 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404509.3610842, 24ab8319-0576-4b43-a61b-63b34b98158a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.462 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.464 257641 INFO nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Took 6.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.465 257641 DEBUG nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b36a6e46871ee488701e985a21b302790df6a2eb9da90f5e03be535c7ccd1c3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 373 KiB/s rd, 4.3 MiB/s wr, 135 op/s
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.493 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:49 np0005539550 podman[333429]: 2025-11-29 08:21:49.496002615 +0000 UTC m=+0.158727517 container init 951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.497 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:49 np0005539550 podman[333429]: 2025-11-29 08:21:49.50327684 +0000 UTC m=+0.166001722 container start 951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.523 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:21:49 np0005539550 neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e[333491]: [NOTICE]   (333518) : New worker (333522) forked
Nov 29 03:21:49 np0005539550 neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e[333491]: [NOTICE]   (333518) : Loading success.
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.544 257641 INFO nova.compute.manager [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Took 7.89 seconds to build instance.#033[00m
Nov 29 03:21:49 np0005539550 nova_compute[257631]: 2025-11-29 08:21:49.572 257641 DEBUG oslo_concurrency.lockutils [None req-3ee0e7ba-a45c-4d25-850f-11cef45798f6 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:49.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:49 np0005539550 podman[333570]: 2025-11-29 08:21:49.869569555 +0000 UTC m=+0.044075962 container create 7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:21:49 np0005539550 systemd[1]: Started libpod-conmon-7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f.scope.
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:21:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:49 np0005539550 podman[333570]: 2025-11-29 08:21:49.850969032 +0000 UTC m=+0.025475449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:49 np0005539550 podman[333570]: 2025-11-29 08:21:49.955244534 +0000 UTC m=+0.129750971 container init 7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:21:49 np0005539550 podman[333570]: 2025-11-29 08:21:49.963217726 +0000 UTC m=+0.137724143 container start 7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ellis, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:49 np0005539550 podman[333570]: 2025-11-29 08:21:49.96766743 +0000 UTC m=+0.142173847 container attach 7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:21:49 np0005539550 systemd[1]: libpod-7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f.scope: Deactivated successfully.
Nov 29 03:21:49 np0005539550 sharp_ellis[333587]: 167 167
Nov 29 03:21:49 np0005539550 conmon[333587]: conmon 7f5f0b1c7f56a1ee44df <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f.scope/container/memory.events
Nov 29 03:21:49 np0005539550 podman[333570]: 2025-11-29 08:21:49.970966464 +0000 UTC m=+0.145472861 container died 7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:21:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8ee5b8c72b2f192fcd3d3b64d53e80ab6c4c74fcaf00b43b77f8f9164df125fb-merged.mount: Deactivated successfully.
Nov 29 03:21:50 np0005539550 podman[333570]: 2025-11-29 08:21:50.013794313 +0000 UTC m=+0.188300710 container remove 7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ellis, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:21:50 np0005539550 systemd[1]: libpod-conmon-7f5f0b1c7f56a1ee44dfc793fc5a8c19f39a04e11de90a28ced612d05e82df3f.scope: Deactivated successfully.
Nov 29 03:21:50 np0005539550 podman[333612]: 2025-11-29 08:21:50.207075178 +0000 UTC m=+0.047106479 container create 58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:21:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Nov 29 03:21:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Nov 29 03:21:50 np0005539550 systemd[1]: Started libpod-conmon-58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413.scope.
Nov 29 03:21:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Nov 29 03:21:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d409c389a9d2ba06dc037f4cb46a3afdbad765b6a2d228149bc9b88480d426be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d409c389a9d2ba06dc037f4cb46a3afdbad765b6a2d228149bc9b88480d426be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d409c389a9d2ba06dc037f4cb46a3afdbad765b6a2d228149bc9b88480d426be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d409c389a9d2ba06dc037f4cb46a3afdbad765b6a2d228149bc9b88480d426be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d409c389a9d2ba06dc037f4cb46a3afdbad765b6a2d228149bc9b88480d426be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:50 np0005539550 podman[333612]: 2025-11-29 08:21:50.188928467 +0000 UTC m=+0.028959788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:50 np0005539550 podman[333612]: 2025-11-29 08:21:50.293346962 +0000 UTC m=+0.133378293 container init 58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bartik, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:50 np0005539550 podman[333612]: 2025-11-29 08:21:50.301172891 +0000 UTC m=+0.141204192 container start 58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bartik, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.299 257641 DEBUG nova.objects.instance [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'trusted_certs' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:50 np0005539550 podman[333612]: 2025-11-29 08:21:50.305627855 +0000 UTC m=+0.145659166 container attach 58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bartik, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.404 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.404 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Ensure instance console log exists: /var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.405 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.405 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.405 257641 DEBUG oslo_concurrency.lockutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.407 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Start _get_guest_xml network_info=[{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--788617349", "vif_mac": "fa:16:3e:a0:40:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.413 257641 WARNING nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.419 257641 DEBUG nova.virt.libvirt.host [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.420 257641 DEBUG nova.virt.libvirt.host [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.423 257641 DEBUG nova.virt.libvirt.host [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.424 257641 DEBUG nova.virt.libvirt.host [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.425 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.425 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.425 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.426 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.426 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.426 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.426 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.426 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.427 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.427 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.427 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.427 257641 DEBUG nova.virt.hardware [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.427 257641 DEBUG nova.objects.instance [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.445 257641 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.489 257641 DEBUG nova.network.neutron [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updated VIF entry in instance network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.490 257641 DEBUG nova.network.neutron [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.510 257641 DEBUG oslo_concurrency.lockutils [req-dea5e7e1-2281-422a-95e3-6f88c13ee4f8 req-3f62d195-9548-407f-bb15-d41931ae956e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305810107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.902 257641 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:50 np0005539550 nova_compute[257631]: 2025-11-29 08:21:50.946 257641 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.015 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.016 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.016 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.016 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.017 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.018 257641 INFO nova.compute.manager [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Terminating instance#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.019 257641 DEBUG nova.compute.manager [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:21:51 np0005539550 kernel: tap891c612b-0a (unregistering): left promiscuous mode
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.0682] device (tap891c612b-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00498|binding|INFO|Releasing lport 891c612b-0aae-4bfc-af98-c0c12c5fe5fc from this chassis (sb_readonly=0)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.114 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00499|binding|INFO|Setting lport 891c612b-0aae-4bfc-af98-c0c12c5fe5fc down in Southbound
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00500|binding|INFO|Removing iface tap891c612b-0a ovn-installed in OVS
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.117 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.123 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:a0:75 10.100.0.4'], port_security=['fa:16:3e:6c:a0:75 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '24ab8319-0576-4b43-a61b-63b34b98158a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '36e3df454d054a1abd33ac1e9ae39e7f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1af7b1f9-1ea8-4e02-bfb0-43bff1d2856a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=069fe775-5708-4115-b88b-075f10fee5ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=891c612b-0aae-4bfc-af98-c0c12c5fe5fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.125 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 891c612b-0aae-4bfc-af98-c0c12c5fe5fc in datapath 4e0c4650-e56a-4802-81ca-91c563de7d3e unbound from our chassis#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.127 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e0c4650-e56a-4802-81ca-91c563de7d3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.128 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eefd3a0c-d12d-4119-975b-5aeda80082c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.129 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e namespace which is not needed anymore#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Nov 29 03:21:51 np0005539550 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d0000007a.scope: Consumed 2.189s CPU time.
Nov 29 03:21:51 np0005539550 systemd-machined[216673]: Machine qemu-60-instance-0000007a terminated.
Nov 29 03:21:51 np0005539550 distracted_bartik[333628]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:21:51 np0005539550 distracted_bartik[333628]: --> relative data size: 1.0
Nov 29 03:21:51 np0005539550 distracted_bartik[333628]: --> All data devices are unavailable
Nov 29 03:21:51 np0005539550 systemd[1]: libpod-58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413.scope: Deactivated successfully.
Nov 29 03:21:51 np0005539550 podman[333612]: 2025-11-29 08:21:51.208786634 +0000 UTC m=+1.048817945 container died 58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:21:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d409c389a9d2ba06dc037f4cb46a3afdbad765b6a2d228149bc9b88480d426be-merged.mount: Deactivated successfully.
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.256 257641 INFO nova.virt.libvirt.driver [-] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Instance destroyed successfully.#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.257 257641 DEBUG nova.objects.instance [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lazy-loading 'resources' on Instance uuid 24ab8319-0576-4b43-a61b-63b34b98158a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.260 257641 DEBUG nova.compute.manager [req-ffff3d92-2db9-447d-b067-d923f952560a req-fd059a58-9382-41ea-b905-59f00dca9103 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received event network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.260 257641 DEBUG oslo_concurrency.lockutils [req-ffff3d92-2db9-447d-b067-d923f952560a req-fd059a58-9382-41ea-b905-59f00dca9103 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.260 257641 DEBUG oslo_concurrency.lockutils [req-ffff3d92-2db9-447d-b067-d923f952560a req-fd059a58-9382-41ea-b905-59f00dca9103 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.260 257641 DEBUG oslo_concurrency.lockutils [req-ffff3d92-2db9-447d-b067-d923f952560a req-fd059a58-9382-41ea-b905-59f00dca9103 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.261 257641 DEBUG nova.compute.manager [req-ffff3d92-2db9-447d-b067-d923f952560a req-fd059a58-9382-41ea-b905-59f00dca9103 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] No waiting events found dispatching network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.261 257641 WARNING nova.compute.manager [req-ffff3d92-2db9-447d-b067-d923f952560a req-fd059a58-9382-41ea-b905-59f00dca9103 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received unexpected event network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:21:51 np0005539550 neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e[333491]: [NOTICE]   (333518) : haproxy version is 2.8.14-c23fe91
Nov 29 03:21:51 np0005539550 podman[333612]: 2025-11-29 08:21:51.268539683 +0000 UTC m=+1.108570984 container remove 58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_bartik, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.270 257641 DEBUG nova.virt.libvirt.vif [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-1933298515',display_name='tempest-ServerAddressesTestJSON-server-1933298515',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-1933298515',id=122,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='36e3df454d054a1abd33ac1e9ae39e7f',ramdisk_id='',reservation_id='r-9w6voqvy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1656158748',owner_user_name='tempest-ServerAddressesTestJSON-1656158748-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:49Z,user_data=None,user_id='0662b63a7f1f4a00960875249475e54a',uuid=24ab8319-0576-4b43-a61b-63b34b98158a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.271 257641 DEBUG nova.network.os_vif_util [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Converting VIF {"id": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "address": "fa:16:3e:6c:a0:75", "network": {"id": "4e0c4650-e56a-4802-81ca-91c563de7d3e", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1197409691-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "36e3df454d054a1abd33ac1e9ae39e7f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap891c612b-0a", "ovs_interfaceid": "891c612b-0aae-4bfc-af98-c0c12c5fe5fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.271 257641 DEBUG nova.network.os_vif_util [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:a0:75,bridge_name='br-int',has_traffic_filtering=True,id=891c612b-0aae-4bfc-af98-c0c12c5fe5fc,network=Network(4e0c4650-e56a-4802-81ca-91c563de7d3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891c612b-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.271 257641 DEBUG os_vif [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:a0:75,bridge_name='br-int',has_traffic_filtering=True,id=891c612b-0aae-4bfc-af98-c0c12c5fe5fc,network=Network(4e0c4650-e56a-4802-81ca-91c563de7d3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891c612b-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.273 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e[333491]: [NOTICE]   (333518) : path to executable is /usr/sbin/haproxy
Nov 29 03:21:51 np0005539550 neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e[333491]: [WARNING]  (333518) : Exiting Master process...
Nov 29 03:21:51 np0005539550 neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e[333491]: [ALERT]    (333518) : Current worker (333522) exited with code 143 (Terminated)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.274 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap891c612b-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:51 np0005539550 neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e[333491]: [WARNING]  (333518) : All workers exited. Exiting... (0)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.275 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 systemd[1]: libpod-951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4.scope: Deactivated successfully.
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.279 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 podman[333761]: 2025-11-29 08:21:51.279469641 +0000 UTC m=+0.058818706 container died 951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.284 257641 INFO os_vif [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:a0:75,bridge_name='br-int',has_traffic_filtering=True,id=891c612b-0aae-4bfc-af98-c0c12c5fe5fc,network=Network(4e0c4650-e56a-4802-81ca-91c563de7d3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap891c612b-0a')#033[00m
Nov 29 03:21:51 np0005539550 systemd[1]: libpod-conmon-58972ba5eaa8c784db1cc714b6cc59d782b606eadacb39d3acb7c272b69b5413.scope: Deactivated successfully.
Nov 29 03:21:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:21:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7b36a6e46871ee488701e985a21b302790df6a2eb9da90f5e03be535c7ccd1c3-merged.mount: Deactivated successfully.
Nov 29 03:21:51 np0005539550 podman[333761]: 2025-11-29 08:21:51.331418673 +0000 UTC m=+0.110767738 container cleanup 951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:21:51 np0005539550 systemd[1]: libpod-conmon-951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4.scope: Deactivated successfully.
Nov 29 03:21:51 np0005539550 podman[333840]: 2025-11-29 08:21:51.392476345 +0000 UTC m=+0.040745207 container remove 951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.403 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[45c9a678-2e45-4f60-9664-a45c1338709e]: (4, ('Sat Nov 29 08:21:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e (951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4)\n951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4\nSat Nov 29 08:21:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e (951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4)\n951e860794ac422b8e8151ab08bed0e181b7bfc275302d5108e4427df00e8eb4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.405 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6d3bd2-7241-4324-9ec2-2bddc9ff81ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.406 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e0c4650-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:51 np0005539550 kernel: tap4e0c4650-e0: left promiscuous mode
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.409 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.424 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.426 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a302633e-011b-48f5-b182-8d77b2dfc35b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:21:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632512578' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.439 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8d84f463-e5ec-441d-80d6-603d677b3e70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.444 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8848af31-6fb3-4995-8060-8eb2d4070ece]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.453 257641 DEBUG oslo_concurrency.processutils [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.454 257641 DEBUG nova.virt.libvirt.vif [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:45Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--788617349", "vif_mac": "fa:16:3e:a0:40:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.455 257641 DEBUG nova.network.os_vif_util [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--788617349", "vif_mac": "fa:16:3e:a0:40:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.455 257641 DEBUG nova.network.os_vif_util [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.457 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <uuid>37bf3f0c-b49b-457b-81be-b4b31f32d872</uuid>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <name>instance-00000077</name>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-700901307</nova:name>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:21:50</nova:creationTime>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <nova:port uuid="f435ee76-ed2f-4ad8-a9e1-bda955080b3e">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <entry name="serial">37bf3f0c-b49b-457b-81be-b4b31f32d872</entry>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <entry name="uuid">37bf3f0c-b49b-457b-81be-b4b31f32d872</entry>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/37bf3f0c-b49b-457b-81be-b4b31f32d872_disk">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/37bf3f0c-b49b-457b-81be-b4b31f32d872_disk.config">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:a0:40:0c"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <target dev="tapf435ee76-ed"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/37bf3f0c-b49b-457b-81be-b4b31f32d872/console.log" append="off"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:21:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:21:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:21:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:21:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.458 257641 DEBUG nova.virt.libvirt.vif [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:12Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:21:45Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--788617349", "vif_mac": "fa:16:3e:a0:40:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.458 257641 DEBUG nova.network.os_vif_util [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--788617349", "vif_mac": "fa:16:3e:a0:40:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.458 257641 DEBUG nova.network.os_vif_util [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.458 257641 DEBUG os_vif [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.459 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.459 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.459 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.458 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0338f761-2c44-41eb-ac2b-315e95127318]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755644, 'reachable_time': 34532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333892, 'error': None, 'target': 'ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.462 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.463 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf435ee76-ed, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.463 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf435ee76-ed, col_values=(('external_ids', {'iface-id': 'f435ee76-ed2f-4ad8-a9e1-bda955080b3e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:40:0c', 'vm-uuid': '37bf3f0c-b49b-457b-81be-b4b31f32d872'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:51 np0005539550 systemd[1]: run-netns-ovnmeta\x2d4e0c4650\x2de56a\x2d4802\x2d81ca\x2d91c563de7d3e.mount: Deactivated successfully.
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.463 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e0c4650-e56a-4802-81ca-91c563de7d3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.463 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[04a201f7-1130-4b89-b2e4-ff993735f343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.464 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.4656] manager: (tapf435ee76-ed): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.467 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.471 257641 INFO os_vif [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed')#033[00m
Nov 29 03:21:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 901 KiB/s rd, 2.2 MiB/s wr, 115 op/s
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.519 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.520 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.520 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] No VIF found with MAC fa:16:3e:a0:40:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.521 257641 INFO nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Using config drive#033[00m
Nov 29 03:21:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:51.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:51 np0005539550 kernel: tapf435ee76-ed: entered promiscuous mode
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.6159] manager: (tapf435ee76-ed): new Tun device (/org/freedesktop/NetworkManager/Devices/229)
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00501|binding|INFO|Claiming lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e for this chassis.
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00502|binding|INFO|f435ee76-ed2f-4ad8-a9e1-bda955080b3e: Claiming fa:16:3e:a0:40:0c 10.100.0.5
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.616 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.623 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:40:0c 10.100.0.5'], port_security=['fa:16:3e:a0:40:0c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '37bf3f0c-b49b-457b-81be-b4b31f32d872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b381cec-57a8-4697-a273-a320681301f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f7e4462f-71ed-420d-b2ac-83fad8b034b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.228'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b87ec03-3fc0-4efd-b28c-90cfac0d10cf, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f435ee76-ed2f-4ad8-a9e1-bda955080b3e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.625 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f435ee76-ed2f-4ad8-a9e1-bda955080b3e in datapath 2b381cec-57a8-4697-a273-a320681301f8 bound to our chassis#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.627 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2b381cec-57a8-4697-a273-a320681301f8#033[00m
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00503|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e ovn-installed in OVS
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00504|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e up in Southbound
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.636 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.639 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[78ee2c72-213c-4760-8494-81be1771b9b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.640 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2b381cec-51 in ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.643 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2b381cec-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.644 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30024c3f-977e-4153-8234-25d3c42f8fa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.645 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a272d6fa-c3bb-4215-98ea-efc419e66940]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 systemd-udevd[333976]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.661 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[708effc4-5659-4079-9dc8-b4224d8ae35c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 systemd-machined[216673]: New machine qemu-61-instance-00000077.
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.6674] device (tapf435ee76-ed): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.6682] device (tapf435ee76-ed): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:21:51 np0005539550 systemd[1]: Started Virtual Machine qemu-61-instance-00000077.
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.692 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2cfeff52-5221-4117-8348-b822a4b46b43]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.728 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7bf527-5b09-4af5-a8e0-b719854e8b0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.737 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1262c782-0e03-4a29-8b51-ea358a857c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.7378] manager: (tap2b381cec-50): new Veth device (/org/freedesktop/NetworkManager/Devices/230)
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.768 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[deb1c204-346a-4b69-b68d-fcde638c74ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.772 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[018d86ca-b3cc-44cb-9daa-d37a654082a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.7958] device (tap2b381cec-50): carrier: link connected
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.800 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[54c5f703-311d-4a3f-b080-78cf821692f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.821 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[595bfa16-4d16-4116-a789-8e6fe63ed5b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b381cec-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:c0:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755946, 'reachable_time': 35007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334034, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.845 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3054ab58-3ed9-44ea-bd76-e267dd708055]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:c0a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 755946, 'tstamp': 755946}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334044, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.863 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fd203147-5da1-4b7c-9d64-3cb097ecd161]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2b381cec-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:c0:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 143], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755946, 'reachable_time': 35007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334048, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
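
The two large privsep replies above are pyroute2 RTM_NEWLINK messages for the freshly moved tap2b381cec-51, serialized straight into the debug log. Reading the same attributes back programmatically looks roughly like this (attribute names as they appear in the dump):

    from pyroute2 import NetNS

    with NetNS('ovnmeta-2b381cec-57a8-4697-a273-a320681301f8') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),    # tap2b381cec-51
                  link.get_attr('IFLA_ADDRESS'),   # fa:16:3e:c1:c0:a0
                  link.get_attr('IFLA_MTU'),       # 1500
                  link['state'])                   # up
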
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.880 257641 INFO nova.virt.libvirt.driver [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Deleting instance files /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a_del#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.881 257641 INFO nova.virt.libvirt.driver [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Deletion of /var/lib/nova/instances/24ab8319-0576-4b43-a61b-63b34b98158a_del complete#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.898 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[91b85400-d53d-4ab6-ac03-bceaf6ddd07d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 podman[334049]: 2025-11-29 08:21:51.910390197 +0000 UTC m=+0.037851294 container create 482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cray, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.927 257641 DEBUG nova.compute.manager [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.928 257641 DEBUG oslo_concurrency.lockutils [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.928 257641 DEBUG oslo_concurrency.lockutils [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.929 257641 DEBUG oslo_concurrency.lockutils [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.929 257641 DEBUG nova.compute.manager [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.929 257641 WARNING nova.compute.manager [req-e300093e-5347-4045-8868-63d5afb052e1 req-ad047e98-3fa4-4026-b6cb-f9b3a32bec81 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state active and task_state resize_finish.#033[00m
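
The lock/pop/warning sequence above is nova's per-instance event bookkeeping: code waiting on a named event such as network-vif-plugged-<port> registers a waiter, and incoming external events pop it under a per-instance lock; when no waiter exists and the task state does not expect the event, nova logs the "unexpected event" warning. The pattern, reduced to generic Python (not nova's actual implementation):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._waiters = defaultdict(dict)   # instance uuid -> {name: Event}
            self._lock = threading.Lock()

        def prepare(self, instance, name):
            waiter = threading.Event()
            with self._lock:
                self._waiters[instance][name] = waiter
            return waiter                       # caller blocks on waiter.wait()

        def pop(self, instance, name):
            with self._lock:
                waiter = self._waiters[instance].pop(name, None)
            if waiter is None:
                return False                    # "No waiting events found ..."
            waiter.set()
            return True
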
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.941 257641 INFO nova.compute.manager [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.942 257641 DEBUG oslo.service.loopingcall [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.942 257641 DEBUG nova.compute.manager [-] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.942 257641 DEBUG nova.network.neutron [-] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:21:51 np0005539550 systemd[1]: Started libpod-conmon-482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5.scope.
Nov 29 03:21:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.969 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7554f0-6289-4921-9b42-dbc64684a75e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.970 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b381cec-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.970 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.971 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b381cec-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:21:51 np0005539550 kernel: tap2b381cec-50: entered promiscuous mode
Nov 29 03:21:51 np0005539550 NetworkManager[49039]: <info>  [1764404511.9742] manager: (tap2b381cec-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Nov 29 03:21:51 np0005539550 podman[334049]: 2025-11-29 08:21:51.976130849 +0000 UTC m=+0.103591966 container init 482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cray, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.980 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2b381cec-50, col_values=(('external_ids', {'iface-id': '7127038e-90ca-4039-8404-4b8a2152df71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
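
The three transactions above re-plug the veth's root-namespace end: delete tap2b381cec-50 from br-ex if present, add it to br-int, and tag the Interface with the iface-id under which ovn-controller will bind the metadata port. With ovsdbapp's Open_vSwitch schema API the same sequence looks roughly like this (the socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap2b381cec-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap2b381cec-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2b381cec-50',
            ('external_ids',
             {'iface-id': '7127038e-90ca-4039-8404-4b8a2152df71'})))
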
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.973 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:51Z|00505|binding|INFO|Releasing lport 7127038e-90ca-4039-8404-4b8a2152df71 from this chassis (sb_readonly=0)
Nov 29 03:21:51 np0005539550 nova_compute[257631]: 2025-11-29 08:21:51.982 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.985 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:21:51 np0005539550 podman[334049]: 2025-11-29 08:21:51.8943711 +0000 UTC m=+0.021832227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:51 np0005539550 podman[334049]: 2025-11-29 08:21:51.992062504 +0000 UTC m=+0.119523601 container start 482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.992 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6f006cd7-c1fd-4941-b9ff-beedc7b86a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.993 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-2b381cec-57a8-4697-a273-a320681301f8
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/2b381cec-57a8-4697-a273-a320681301f8.pid.haproxy
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 2b381cec-57a8-4697-a273-a320681301f8
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:21:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:51.994 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'env', 'PROCESS_TAG=haproxy-2b381cec-57a8-4697-a273-a320681301f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2b381cec-57a8-4697-a273-a320681301f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
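
The rendered haproxy_cfg above is written to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf, and haproxy is then started inside the ovnmeta namespace via rootwrap (previous line). Stripped of rootwrap and process tracking, the step amounts to the following sketch, where cfg_text stands for the config dump captured above:

    import subprocess

    network_id = '2b381cec-57a8-4697-a273-a320681301f8'
    cfg_path = f'/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf'

    with open(cfg_path, 'w') as f:
        f.write(cfg_text)          # the haproxy_cfg logged above

    subprocess.check_call(['ip', 'netns', 'exec', f'ovnmeta-{network_id}',
                           'haproxy', '-f', cfg_path])

Inside the namespace, haproxy binds 169.254.169.254:80, adds the X-OVN-Network-ID header, and forwards requests to the metadata agent's UNIX socket at /var/lib/neutron/metadata_proxy.
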
Nov 29 03:21:51 np0005539550 condescending_cray[334070]: 167 167
Nov 29 03:21:51 np0005539550 systemd[1]: libpod-482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5.scope: Deactivated successfully.
Nov 29 03:21:51 np0005539550 podman[334049]: 2025-11-29 08:21:51.998011775 +0000 UTC m=+0.125472892 container attach 482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cray, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:21:51 np0005539550 podman[334049]: 2025-11-29 08:21:51.998521488 +0000 UTC m=+0.125982605 container died 482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.000 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:52 np0005539550 podman[334049]: 2025-11-29 08:21:52.040438165 +0000 UTC m=+0.167899262 container remove 482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_cray, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
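
The create/init/start/attach/died/remove burst above, a randomly named container (condescending_cray) that lives for roughly 0.1 s and prints "167 167", is the footprint of a one-shot helper container; it plausibly matches cephadm's uid/gid probe of the ceph image (ceph's uid and gid are 167). A generic equivalent, with the exact command an assumption and the image digest taken from the log:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # --rm reproduces the create/start/died/remove lifecycle seen above
    out = subprocess.run(['podman', 'run', '--rm', '--entrypoint', 'stat',
                          IMAGE, '-c', '%u %g', '/var/lib/ceph'],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())    # e.g. "167 167"
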
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:52 np0005539550 systemd[1]: libpod-conmon-482452ea889106513c8cd12a0497659b4ebd29a802744fa557f07ab76abbddb5.scope: Deactivated successfully.
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.049987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512050033, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 465, "num_deletes": 250, "total_data_size": 375718, "memory_usage": 385280, "flush_reason": "Manual Compaction"}
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512053582, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 309886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45909, "largest_seqno": 46373, "table_properties": {"data_size": 307356, "index_size": 566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7141, "raw_average_key_size": 20, "raw_value_size": 302043, "raw_average_value_size": 875, "num_data_blocks": 25, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404493, "oldest_key_time": 1764404493, "file_creation_time": 1764404512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 3621 microseconds, and 1554 cpu microseconds.
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.053616) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 309886 bytes OK
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.053633) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.054543) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.054557) EVENT_LOG_v1 {"time_micros": 1764404512054552, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.054575) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 372945, prev total WAL file size 372945, number of live WAL files 2.
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.054974) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373533' seq:0, type:0; will stop at (end)
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(302KB)], [98(13MB)]
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512055008, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 13966217, "oldest_snapshot_seqno": -1}
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 8020 keys, 10180439 bytes, temperature: kUnknown
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512126915, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10180439, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10129641, "index_size": 29630, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20101, "raw_key_size": 208240, "raw_average_key_size": 25, "raw_value_size": 9989124, "raw_average_value_size": 1245, "num_data_blocks": 1154, "num_entries": 8020, "num_filter_entries": 8020, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404512, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.127223) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10180439 bytes
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.129226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.9 rd, 141.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(77.9) write-amplify(32.9) OK, records in: 8530, records dropped: 510 output_compression: NoCompression
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.129252) EVENT_LOG_v1 {"time_micros": 1764404512129240, "job": 58, "event": "compaction_finished", "compaction_time_micros": 72030, "compaction_time_cpu_micros": 26433, "output_level": 6, "num_output_files": 1, "total_output_size": 10180439, "num_input_records": 8530, "num_output_records": 8020, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
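
The throughput and amplification figures in the compaction summary can be re-derived from the EVENT_LOG_v1 numbers above; bytes per microsecond conveniently equals MB/s:

    flushed  = 309886      # table #100, the L0 flush output from JOB 57
    comp_in  = 13966217    # input_data_size: table #100 + old L6 table #98
    comp_out = 10180439    # new L6 table #101
    micros   = 72030       # compaction_time_micros

    comp_in / micros                  # 193.9 -> "MB/sec: 193.9 rd"
    comp_out / micros                 # 141.3 -> "141.3 wr"
    comp_out / flushed                # 32.9  -> "write-amplify(32.9)"
    (comp_in + comp_out) / flushed    # 77.9  -> "read-write-amplify(77.9)"
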
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512129598, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404512131675, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.054922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.131779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.131784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.131786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.131787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:21:52.131789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.183 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404512.1825898, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.184 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.186 257641 DEBUG nova.compute.manager [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.195 257641 INFO nova.virt.libvirt.driver [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance running successfully.#033[00m
Nov 29 03:21:52 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.197 257641 DEBUG nova.virt.libvirt.guest [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
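
The "agent not configured" pair above is benign: after finish_migration, nova asks libvirt to sync the guest clock, which needs a qemu-guest-agent channel in the instance. A minimal reproduction with the libvirt Python binding; the domain name follows the instance-0000007x pattern seen earlier and is illustrative only:

    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000077')
    try:
        dom.setTime({'seconds': int(time.time()), 'nseconds': 0})
    except libvirt.libvirtError as err:
        print(err)   # argument unsupported: QEMU guest agent is not configured
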
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.198 257641 DEBUG nova.virt.libvirt.driver [None req-fab2b6d4-dd55-4892-a868-7703f0931d4c 5da137b03369494c991c9e0197471f42 cc9ff77d04cd4758aad09958f24a7a9a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.205 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.208 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:52 np0005539550 podman[334142]: 2025-11-29 08:21:52.225604184 +0000 UTC m=+0.047148100 container create 8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.230 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.231 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404512.184341, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.231 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Started (Lifecycle Event)#033[00m
Nov 29 03:21:52 np0005539550 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:21:52 np0005539550 systemd[332774]: Activating special unit Exit the Session...
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped target Main User Target.
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped target Basic System.
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped target Paths.
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped target Sockets.
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped target Timers.
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:21:52 np0005539550 systemd[332774]: Closed D-Bus User Message Bus Socket.
Nov 29 03:21:52 np0005539550 systemd[332774]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:21:52 np0005539550 systemd[332774]: Removed slice User Application Slice.
Nov 29 03:21:52 np0005539550 systemd[332774]: Reached target Shutdown.
Nov 29 03:21:52 np0005539550 systemd[332774]: Finished Exit the Session.
Nov 29 03:21:52 np0005539550 systemd[332774]: Reached target Exit the Session.
Nov 29 03:21:52 np0005539550 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:21:52 np0005539550 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:21:52 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:21:52 np0005539550 systemd[1]: Started libpod-conmon-8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c.scope.
Nov 29 03:21:52 np0005539550 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:21:52 np0005539550 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:21:52 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:21:52 np0005539550 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.283 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.288 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:21:52 np0005539550 podman[334142]: 2025-11-29 08:21:52.20777443 +0000 UTC m=+0.029318376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a07aaf708420c7e33e7d0efada63e0af971725680ddaded23fabc3830868149/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a07aaf708420c7e33e7d0efada63e0af971725680ddaded23fabc3830868149/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a07aaf708420c7e33e7d0efada63e0af971725680ddaded23fabc3830868149/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a07aaf708420c7e33e7d0efada63e0af971725680ddaded23fabc3830868149/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.317 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:21:52 np0005539550 podman[334142]: 2025-11-29 08:21:52.320544878 +0000 UTC m=+0.142088814 container init 8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:21:52 np0005539550 podman[334142]: 2025-11-29 08:21:52.329131207 +0000 UTC m=+0.150675123 container start 8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:21:52 np0005539550 podman[334142]: 2025-11-29 08:21:52.341023849 +0000 UTC m=+0.162567765 container attach 8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:21:52 np0005539550 podman[334187]: 2025-11-29 08:21:52.411973303 +0000 UTC m=+0.051437069 container create 6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.415 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:52 np0005539550 systemd[1]: Started libpod-conmon-6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f.scope.
Nov 29 03:21:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:52 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/497fc328516c4dca6e02bf8fc026cecadb7aceadef6df839d05060401f1a2120/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:52 np0005539550 podman[334187]: 2025-11-29 08:21:52.386184458 +0000 UTC m=+0.025648244 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:21:52 np0005539550 podman[334187]: 2025-11-29 08:21:52.493663021 +0000 UTC m=+0.133126827 container init 6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:21:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:52.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:52 np0005539550 podman[334187]: 2025-11-29 08:21:52.501970282 +0000 UTC m=+0.141434068 container start 6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:21:52 np0005539550 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[334202]: [NOTICE]   (334206) : New worker (334208) forked
Nov 29 03:21:52 np0005539550 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[334202]: [NOTICE]   (334206) : Loading success.
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.547 257641 DEBUG nova.network.neutron [-] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.569 257641 INFO nova.compute.manager [-] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Took 0.63 seconds to deallocate network for instance.#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.613 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.613 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.641 257641 DEBUG nova.compute.manager [req-011587b4-749c-464d-aa93-1d302010ce3e req-b9fa8f50-67df-4ef1-914f-b5969e897233 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received event network-vif-deleted-891c612b-0aae-4bfc-af98-c0c12c5fe5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:52 np0005539550 nova_compute[257631]: 2025-11-29 08:21:52.713 257641 DEBUG oslo_concurrency.processutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]: {
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:    "0": [
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:        {
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "devices": [
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "/dev/loop3"
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            ],
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "lv_name": "ceph_lv0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "lv_size": "7511998464",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "name": "ceph_lv0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "tags": {
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.cluster_name": "ceph",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.crush_device_class": "",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.encrypted": "0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.osd_id": "0",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.type": "block",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:                "ceph.vdo": "0"
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            },
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "type": "block",
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:            "vg_name": "ceph_vg0"
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:        }
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]:    ]
Nov 29 03:21:53 np0005539550 infallible_hamilton[334162]: }
Nov 29 03:21:53 np0005539550 systemd[1]: libpod-8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c.scope: Deactivated successfully.
Nov 29 03:21:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/823771253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:53 np0005539550 podman[334241]: 2025-11-29 08:21:53.203917304 +0000 UTC m=+0.032863416 container died 8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.205 257641 DEBUG oslo_concurrency.processutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.213 257641 DEBUG nova.compute.provider_tree [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.230 257641 DEBUG nova.scheduler.client.report [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:21:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0a07aaf708420c7e33e7d0efada63e0af971725680ddaded23fabc3830868149-merged.mount: Deactivated successfully.
Nov 29 03:21:53 np0005539550 podman[334241]: 2025-11-29 08:21:53.26709104 +0000 UTC m=+0.096037102 container remove 8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hamilton, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:21:53 np0005539550 systemd[1]: libpod-conmon-8335c94f20ccb5b6e504f46693b0d95755ab729932446d527bce1d9b533bd73c.scope: Deactivated successfully.
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.357 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 316 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.553 257641 DEBUG nova.compute.manager [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received event network-vif-unplugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.553 257641 DEBUG oslo_concurrency.lockutils [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.554 257641 DEBUG oslo_concurrency.lockutils [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.554 257641 DEBUG oslo_concurrency.lockutils [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.554 257641 DEBUG nova.compute.manager [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] No waiting events found dispatching network-vif-unplugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.554 257641 WARNING nova.compute.manager [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received unexpected event network-vif-unplugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.554 257641 DEBUG nova.compute.manager [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received event network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.554 257641 DEBUG oslo_concurrency.lockutils [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.555 257641 DEBUG oslo_concurrency.lockutils [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.555 257641 DEBUG oslo_concurrency.lockutils [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.555 257641 DEBUG nova.compute.manager [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] No waiting events found dispatching network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.555 257641 WARNING nova.compute.manager [req-4e7aadd0-7fc4-4a5a-beaf-2e4a45c0f068 req-5d4c792b-6714-4c46-b3f4-df8a87dd6e98 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Received unexpected event network-vif-plugged-891c612b-0aae-4bfc-af98-c0c12c5fe5fc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.564 257641 INFO nova.scheduler.client.report [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Deleted allocations for instance 24ab8319-0576-4b43-a61b-63b34b98158a#033[00m
Nov 29 03:21:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:21:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:53.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:21:53 np0005539550 nova_compute[257631]: 2025-11-29 08:21:53.701 257641 DEBUG oslo_concurrency.lockutils [None req-5ab4572b-6e72-4860-9f4c-1eb9baaab707 0662b63a7f1f4a00960875249475e54a 36e3df454d054a1abd33ac1e9ae39e7f - - default default] Lock "24ab8319-0576-4b43-a61b-63b34b98158a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:53 np0005539550 podman[334392]: 2025-11-29 08:21:53.894906537 +0000 UTC m=+0.036889540 container create 01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_banach, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:53 np0005539550 systemd[1]: Started libpod-conmon-01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94.scope.
Nov 29 03:21:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:53 np0005539550 podman[334392]: 2025-11-29 08:21:53.96462288 +0000 UTC m=+0.106605893 container init 01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_banach, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:21:53 np0005539550 podman[334392]: 2025-11-29 08:21:53.970451738 +0000 UTC m=+0.112434731 container start 01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_banach, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:21:53 np0005539550 podman[334392]: 2025-11-29 08:21:53.973600768 +0000 UTC m=+0.115583751 container attach 01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:21:53 np0005539550 funny_banach[334408]: 167 167
Nov 29 03:21:53 np0005539550 systemd[1]: libpod-01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94.scope: Deactivated successfully.
Nov 29 03:21:53 np0005539550 podman[334392]: 2025-11-29 08:21:53.974381738 +0000 UTC m=+0.116364731 container died 01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:21:53 np0005539550 podman[334392]: 2025-11-29 08:21:53.880465179 +0000 UTC m=+0.022448182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7228ca198b67f199cf82977093011ec7e799539cd101f2148b143d931a6a81d9-merged.mount: Deactivated successfully.
Nov 29 03:21:54 np0005539550 podman[334392]: 2025-11-29 08:21:54.014459457 +0000 UTC m=+0.156442450 container remove 01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_banach, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:21:54 np0005539550 systemd[1]: libpod-conmon-01c20133b6a411ffe820c407721fc5b56870460ec79d27cc64bb238c5d7b7d94.scope: Deactivated successfully.
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.165 257641 DEBUG nova.compute.manager [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.166 257641 DEBUG oslo_concurrency.lockutils [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.166 257641 DEBUG oslo_concurrency.lockutils [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.166 257641 DEBUG oslo_concurrency.lockutils [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.166 257641 DEBUG nova.compute.manager [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.167 257641 WARNING nova.compute.manager [req-f2b48c38-e19f-4860-b443-ef3dde378ac5 req-d445fe45-2b0b-4d57-8821-ccb1b4fcb196 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:21:54 np0005539550 podman[334432]: 2025-11-29 08:21:54.209246621 +0000 UTC m=+0.054426055 container create dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:21:54 np0005539550 systemd[1]: Started libpod-conmon-dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f.scope.
Nov 29 03:21:54 np0005539550 podman[334432]: 2025-11-29 08:21:54.190487614 +0000 UTC m=+0.035667078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:21:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:21:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e448ddab95c91ba262970104892a2539770e44d415e9fb4c1d127e15a4a5424f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e448ddab95c91ba262970104892a2539770e44d415e9fb4c1d127e15a4a5424f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e448ddab95c91ba262970104892a2539770e44d415e9fb4c1d127e15a4a5424f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e448ddab95c91ba262970104892a2539770e44d415e9fb4c1d127e15a4a5424f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:21:54 np0005539550 podman[334432]: 2025-11-29 08:21:54.313620905 +0000 UTC m=+0.158800369 container init dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:21:54 np0005539550 podman[334432]: 2025-11-29 08:21:54.325513258 +0000 UTC m=+0.170692682 container start dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:54 np0005539550 podman[334432]: 2025-11-29 08:21:54.329060078 +0000 UTC m=+0.174239532 container attach dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:21:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.967 257641 DEBUG nova.network.neutron [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Port f435ee76-ed2f-4ad8-a9e1-bda955080b3e binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.968 257641 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.969 257641 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:21:54 np0005539550 nova_compute[257631]: 2025-11-29 08:21:54.969 257641 DEBUG nova.network.neutron [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]: {
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]:        "osd_id": 0,
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]:        "type": "bluestore"
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]:    }
Nov 29 03:21:55 np0005539550 dazzling_banach[334448]: }
Nov 29 03:21:55 np0005539550 systemd[1]: libpod-dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f.scope: Deactivated successfully.
Nov 29 03:21:55 np0005539550 podman[334432]: 2025-11-29 08:21:55.183668893 +0000 UTC m=+1.028848367 container died dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:21:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e448ddab95c91ba262970104892a2539770e44d415e9fb4c1d127e15a4a5424f-merged.mount: Deactivated successfully.
Nov 29 03:21:55 np0005539550 podman[334432]: 2025-11-29 08:21:55.246631584 +0000 UTC m=+1.091811038 container remove dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:21:55 np0005539550 systemd[1]: libpod-conmon-dee580c06f0a66e6787e1572cdf83b5c85c22ef26895145e0fc0eed97b2fdd7f.scope: Deactivated successfully.
Nov 29 03:21:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:21:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:21:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d668110f-5211-4a42-a0bc-424ca7a3fc79 does not exist
Nov 29 03:21:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5b9dd41a-abdf-4e43-b14c-54509da22af1 does not exist
Nov 29 03:21:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c58d0542-1a66-4066-9055-8cfe23d52e13 does not exist
Nov 29 03:21:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 315 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 230 op/s
Nov 29 03:21:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:55.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:21:56 np0005539550 nova_compute[257631]: 2025-11-29 08:21:56.466 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:56.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:21:57 np0005539550 nova_compute[257631]: 2025-11-29 08:21:57.326 257641 DEBUG nova.network.neutron [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:21:57 np0005539550 nova_compute[257631]: 2025-11-29 08:21:57.404 257641 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:21:57 np0005539550 nova_compute[257631]: 2025-11-29 08:21:57.417 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 307 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.0 MiB/s wr, 239 op/s
Nov 29 03:21:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:21:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:57.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:57 np0005539550 kernel: tapf435ee76-ed (unregistering): left promiscuous mode
Nov 29 03:21:57 np0005539550 NetworkManager[49039]: <info>  [1764404517.9711] device (tapf435ee76-ed): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:21:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:57Z|00506|binding|INFO|Releasing lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e from this chassis (sb_readonly=0)
Nov 29 03:21:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:57Z|00507|binding|INFO|Setting lport f435ee76-ed2f-4ad8-a9e1-bda955080b3e down in Southbound
Nov 29 03:21:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:21:57Z|00508|binding|INFO|Removing iface tapf435ee76-ed ovn-installed in OVS
Nov 29 03:21:57 np0005539550 nova_compute[257631]: 2025-11-29 08:21:57.981 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:57.992 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:40:0c 10.100.0.5'], port_security=['fa:16:3e:a0:40:0c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '37bf3f0c-b49b-457b-81be-b4b31f32d872', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2b381cec-57a8-4697-a273-a320681301f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f7e4462f-71ed-420d-b2ac-83fad8b034b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.228', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b87ec03-3fc0-4efd-b28c-90cfac0d10cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f435ee76-ed2f-4ad8-a9e1-bda955080b3e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:21:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:57.994 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f435ee76-ed2f-4ad8-a9e1-bda955080b3e in datapath 2b381cec-57a8-4697-a273-a320681301f8 unbound from our chassis#033[00m
Nov 29 03:21:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:57.995 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2b381cec-57a8-4697-a273-a320681301f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:21:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:57.996 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2446e3e7-fef8-4e1f-b855-a8a6334dcb7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:57.996 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 namespace which is not needed anymore#033[00m
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.001 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:21:58 np0005539550 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000077.scope: Deactivated successfully.
Nov 29 03:21:58 np0005539550 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000077.scope: Consumed 5.861s CPU time.
Nov 29 03:21:58 np0005539550 systemd-machined[216673]: Machine qemu-61-instance-00000077 terminated.
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.055 257641 INFO nova.virt.libvirt.driver [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Instance destroyed successfully.#033[00m
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.057 257641 DEBUG nova.objects.instance [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.072 257641 DEBUG nova.virt.libvirt.vif [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:20:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-700901307',display_name='tempest-TestNetworkAdvancedServerOps-server-700901307',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-700901307',id=119,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHTZdF339uG4GTcdjaqWUNyl9tCN2Ihz0tT1aABynGHxCfjrTplPF8A9td3DkI7lqNybnYi0rKYsiF72+HnhHVmKPriLXx/cBMbe2eRLXVh9VLRo2vvXjsLkBGMzWqs3qw==',key_name='tempest-TestNetworkAdvancedServerOps-236637179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:52Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-rb72np4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:21:52Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=37bf3f0c-b49b-457b-81be-b4b31f32d872,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.073 257641 DEBUG nova.network.os_vif_util [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.074 257641 DEBUG nova.network.os_vif_util [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.075 257641 DEBUG os_vif [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.077 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.078 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf435ee76-ed, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.127 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.129 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.131 257641 INFO os_vif [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:40:0c,bridge_name='br-int',has_traffic_filtering=True,id=f435ee76-ed2f-4ad8-a9e1-bda955080b3e,network=Network(2b381cec-57a8-4697-a273-a320681301f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf435ee76-ed')
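The unplug above is a three-step conversion: nova's VIF dict becomes an os-vif VIFOpenVSwitch object, os_vif.unplug() hands it to the ovs plugin, and the plugin removes the tap port from br-int via the DelPortCommand transaction logged on the ovsdbapp connection. A minimal sketch of that final OVSDB step, assuming a local database on the default unix socket (the endpoint is an assumption; the port and bridge names are taken from the log):

```python
# Minimal sketch of the DelPortCommand transaction logged above, using
# ovsdbapp directly; this is not nova's own code path.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed endpoint
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# Equivalent of: DelPortCommand(port=tapf435ee76-ed, bridge=br-int, if_exists=True)
api.del_port('tapf435ee76-ed', bridge='br-int', if_exists=True).execute(
    check_error=True)
```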
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.135 257641 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.135 257641 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
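The Acquiring/acquired/released triplets are oslo.concurrency's standard tracing around a named semaphore; nova serializes all resource-tracker bookkeeping under a single "compute_resources" lock, so the waited/held durations logged here directly measure contention on claim updates. A sketch of the decorator pattern that produces this logging, with a placeholder function standing in for nova's method:

```python
# Sketch of the lock pattern behind the "compute_resources" messages.
# The lock name matches the log; the function body is a placeholder.
from oslo_concurrency import lockutils

synchronized = lockutils.synchronized_with_prefix('nova-')

@synchronized('compute_resources')
def drop_move_claim_at_dest(instance, migration):
    # Runs with the "compute_resources" semaphore held; oslo logs the
    # waited/held durations around this call, as seen above.
    pass
```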
Nov 29 03:21:58 np0005539550 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[334202]: [NOTICE]   (334206) : haproxy version is 2.8.14-c23fe91
Nov 29 03:21:58 np0005539550 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[334202]: [NOTICE]   (334206) : path to executable is /usr/sbin/haproxy
Nov 29 03:21:58 np0005539550 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[334202]: [WARNING]  (334206) : Exiting Master process...
Nov 29 03:21:58 np0005539550 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[334202]: [ALERT]    (334206) : Current worker (334208) exited with code 143 (Terminated)
Nov 29 03:21:58 np0005539550 neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8[334202]: [WARNING]  (334206) : All workers exited. Exiting... (0)
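The ALERT above is expected during a clean stop rather than a crash: exit code 143 is 128 + 15, the conventional encoding for a process killed by SIGTERM. A one-line check:

```python
# Why the haproxy worker reports 143: process managers encode signal
# deaths as 128 + signum, and SIGTERM is 15.
import signal

assert 128 + signal.SIGTERM == 143  # "Terminated", not a crash
```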
Nov 29 03:21:58 np0005539550 systemd[1]: libpod-6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f.scope: Deactivated successfully.
Nov 29 03:21:58 np0005539550 podman[334568]: 2025-11-29 08:21:58.171664403 +0000 UTC m=+0.044400560 container died 6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.182 257641 DEBUG nova.objects.instance [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid 37bf3f0c-b49b-457b-81be-b4b31f32d872 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:21:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:21:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-497fc328516c4dca6e02bf8fc026cecadb7aceadef6df839d05060401f1a2120-merged.mount: Deactivated successfully.
Nov 29 03:21:58 np0005539550 podman[334568]: 2025-11-29 08:21:58.204350344 +0000 UTC m=+0.077086501 container cleanup 6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:21:58 np0005539550 systemd[1]: libpod-conmon-6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f.scope: Deactivated successfully.
Nov 29 03:21:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:21:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:21:58 np0005539550 podman[334601]: 2025-11-29 08:21:58.259780294 +0000 UTC m=+0.035524595 container remove 6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.263 257641 DEBUG oslo_concurrency.processutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.264 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5e05a5b8-c3a3-484f-95e5-fd2301001741]: (4, ('Sat Nov 29 08:21:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 (6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f)\n6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f\nSat Nov 29 08:21:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 (6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f)\n6a458549a60071c322e19cceaeea5e03d7333abcc69d39d7830473b8589ff06f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.266 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2a91f748-45e9-4069-9789-807ea52778b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.267 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b381cec-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:21:58 np0005539550 kernel: tap2b381cec-50: left promiscuous mode
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.273 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8554b563-8674-4ba2-a43a-7f931e08d93f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.293 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2de78211-3e60-4986-856b-d002d2c5dc06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.292 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.294 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8c229eaa-869d-4a4a-b88d-f62dd2d565af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.308 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0ecb55-fa2b-4b18-97b6-2022880fea88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 755938, 'reachable_time': 23557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334658, 'error': None, 'target': 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.312 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2b381cec-57a8-4697-a273-a320681301f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:21:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:21:58.313 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[98bd86b4-140f-4280-beca-aac3ffb5b927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:21:58 np0005539550 systemd[1]: run-netns-ovnmeta\x2d2b381cec\x2d57a8\x2d4697\x2da273\x2da320681301f8.mount: Deactivated successfully.
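With the haproxy sidecar gone, the agent deletes the ovnmeta- namespace through its privsep daemon (neutron's ip_lib drives pyroute2 under the hood), and systemd then reaps the /run/netns bind mount. A rough equivalent of that removal, which needs the same privileges privsep grants the agent:

```python
# Rough equivalent of the namespace teardown logged by
# neutron.privileged.agent.linux.ip_lib above, via pyroute2.
from pyroute2 import netns

NS = 'ovnmeta-2b381cec-57a8-4697-a273-a320681301f8'  # name from the log
if NS in netns.listnetns():
    netns.remove(NS)  # unlinks /run/netns/<NS>; systemd then logs the unmount
```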
Nov 29 03:21:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:21:58.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:21:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3056012327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.697 257641 DEBUG oslo_concurrency.processutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
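The 0.434 s accounted here is an external `ceph df` call; nova's RBD image backend polls it to size the DISK_GB inventory it reports to placement. A minimal sketch of the same call with a parse of the cluster-wide totals (command flags copied from the log line; the `stats` keys are standard `ceph df --format=json` output):

```python
# Sketch of the "ceph df --format=json" call above, with a minimal
# parse of the cluster-wide totals.
import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'])
stats = json.loads(out)['stats']

gib = 2 ** 30
print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free "
      f"of {stats['total_bytes'] / gib:.1f} GiB")
```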
Nov 29 03:21:58 np0005539550 nova_compute[257631]: 2025-11-29 08:21:58.703 257641 DEBUG nova.compute.provider_tree [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.180 257641 DEBUG nova.scheduler.client.report [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
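The inventory dict reads most easily through placement's capacity rule, capacity = (total - reserved) × allocation_ratio: this node exposes 32 schedulable VCPUs (8 × 4.0), 7168 MB of RAM ((7680 - 512) × 1.0), and about 17 GB of disk ((20 - 1) × 0.9). A worked check:

```python
# Worked example of placement's effective capacity from the inventory
# logged above: capacity = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f'{rc}: {cap:g} consumable units')
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
```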
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.254 257641 DEBUG oslo_concurrency.lockutils [None req-2b7251d2-b66c-4de2-ad64-d4448b52f47e fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:59 np0005539550 podman[334685]: 2025-11-29 08:21:59.318000137 +0000 UTC m=+0.052666651 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:21:59 np0005539550 podman[334684]: 2025-11-29 08:21:59.326355439 +0000 UTC m=+0.060740695 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:21:59
Nov 29 03:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups', '.mgr']
Nov 29 03:21:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.473 257641 DEBUG nova.compute.manager [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.473 257641 DEBUG oslo_concurrency.lockutils [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.474 257641 DEBUG oslo_concurrency.lockutils [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.474 257641 DEBUG oslo_concurrency.lockutils [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.475 257641 DEBUG nova.compute.manager [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.475 257641 WARNING nova.compute.manager [req-ed67b180-22fa-4fd6-b3a3-a545be2767c9 req-0b14ef01-2c9f-44af-8bce-6b3df8bf918d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state resized and task_state resize_reverting.
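The warning is benign. External events from neutron only complete a waiter that some nova codepath registered beforehand, and during a resize revert the unplug is initiated by nova itself, so (judging from this trace) nothing was waiting on the event. A toy model of the pop-or-warn flow, not nova's implementation:

```python
# Toy model of pop_instance_event: external events complete a waiter
# only if one was registered first; otherwise nova logs the warning.
import threading

_waiters: dict[str, threading.Event] = {}

def prepare_for_event(name: str) -> threading.Event:
    ev = _waiters[name] = threading.Event()
    return ev

def pop_instance_event(name: str) -> None:
    ev = _waiters.pop(name, None)
    if ev is None:
        print(f'WARNING: received unexpected event {name}')
    else:
        ev.set()

pop_instance_event('network-vif-unplugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e')
```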
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.490 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:21:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 257 op/s
Nov 29 03:21:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:21:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:21:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:21:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:21:59 np0005539550 nova_compute[257631]: 2025-11-29 08:21:59.738 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:00.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
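The once-per-second anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102 are load-balancer health probes against radosgw. The beast access lines are regular enough to parse; a sketch with a regex fitted to the visible fields (an assumption, not an official format spec):

```python
# Sketch: parsing the radosgw "beast" access-log lines above. The
# regex is an assumption fitted to the fields shown in this journal.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:22:00.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000025s')
m = BEAST.search(line)
print(m['ip'], m['req'], m['status'], m['latency'])
# 192.168.122.102 HEAD / HTTP/1.0 200 0.001000025
```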
Nov 29 03:22:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:22:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 10K writes, 46K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1739 writes, 7737 keys, 1739 commit groups, 1.0 writes per commit group, ingest: 11.32 MB, 0.02 MB/s
Interval WAL: 1739 writes, 1739 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     30.3      2.02              0.20        29    0.070       0      0       0.0       0.0
  L6      1/0    9.71 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.8     31.1     26.4     11.01              0.86        28    0.393    180K    15K       0.0       0.0
 Sum      1/0    9.71 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.8     26.3     27.0     13.03              1.06        57    0.229    180K    15K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4    109.7    106.9      0.72              0.24        12    0.060     49K   3097       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     31.1     26.4     11.01              0.86        28    0.393    180K    15K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     30.4      2.01              0.20        28    0.072       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4200.0 total, 600.0 interval
Flush(GB): cumulative 0.060, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.34 GB write, 0.08 MB/s write, 0.33 GB read, 0.08 MB/s read, 13.0 seconds
Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 37.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000578 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2136,36.05 MB,11.859%) FilterBlock(58,506.23 KB,0.162622%) IndexBlock(58,895.48 KB,0.287663%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
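This dump reaches syslog as one flattened message with control bytes escaped as #NNN octal: #012 is newline (restored above) and #033 is ESC, which is how colour resets from oslo's log formatter show up as #033[00m residue in the raw journal. A small decoder for the raw form:

```python
# Decode the rsyslog-style #NNN octal escapes (#012 = newline,
# #033 = ESC) seen in raw messages like the RocksDB dump above.
import re

def decode_octal_escapes(line: str) -> str:
    return re.sub(r'#([0-7]{3})', lambda m: chr(int(m.group(1), 8)), line)

print(decode_octal_escapes('600.0 interval#012Cumulative writes: 10K writes'))
# 600.0 interval
# Cumulative writes: 10K writes
```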
Nov 29 03:22:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.9 MiB/s wr, 224 op/s
Nov 29 03:22:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:01.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.421 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.449 257641 DEBUG nova.compute.manager [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.449 257641 DEBUG oslo_concurrency.lockutils [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.450 257641 DEBUG oslo_concurrency.lockutils [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.450 257641 DEBUG oslo_concurrency.lockutils [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.450 257641 DEBUG nova.compute.manager [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.450 257641 WARNING nova.compute.manager [req-28025659-f680-47bd-9e60-5c8ad7ae9de8 req-e45b3d7e-0605-4e09-a81d-49a0001443be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state resized and task_state resize_reverting.
Nov 29 03:22:02 np0005539550 nova_compute[257631]: 2025-11-29 08:22:02.566 257641 DEBUG nova.compute.manager [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 03:22:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:02.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:03 np0005539550 nova_compute[257631]: 2025-11-29 08:22:03.129 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Nov 29 03:22:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:03.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:03 np0005539550 nova_compute[257631]: 2025-11-29 08:22:03.746 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:03 np0005539550 nova_compute[257631]: 2025-11-29 08:22:03.746 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:03 np0005539550 nova_compute[257631]: 2025-11-29 08:22:03.789 257641 DEBUG nova.objects.instance [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:22:03 np0005539550 nova_compute[257631]: 2025-11-29 08:22:03.812 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:22:03 np0005539550 nova_compute[257631]: 2025-11-29 08:22:03.812 257641 INFO nova.compute.claims [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:22:03 np0005539550 nova_compute[257631]: 2025-11-29 08:22:03.813 257641 DEBUG nova.objects.instance [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.030 257641 DEBUG nova.objects.instance [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.148 257641 INFO nova.compute.resource_tracker [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating resource usage from migration fa6c73cf-52c4-43d6-af42-5901aa2f0931
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.149 257641 DEBUG nova.compute.resource_tracker [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Starting to track incoming migration fa6c73cf-52c4-43d6-af42-5901aa2f0931 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.255 257641 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:22:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:04.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/971629518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.667 257641 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.675 257641 DEBUG nova.compute.provider_tree [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.809 257641 DEBUG nova.scheduler.client.report [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.858 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:04 np0005539550 nova_compute[257631]: 2025-11-29 08:22:04.859 257641 INFO nova.compute.manager [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Migrating
Nov 29 03:22:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Nov 29 03:22:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:05.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:06 np0005539550 nova_compute[257631]: 2025-11-29 08:22:06.255 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404511.2521024, 24ab8319-0576-4b43-a61b-63b34b98158a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:06 np0005539550 nova_compute[257631]: 2025-11-29 08:22:06.255 257641 INFO nova.compute.manager [-] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] VM Stopped (Lifecycle Event)
Nov 29 03:22:06 np0005539550 nova_compute[257631]: 2025-11-29 08:22:06.287 257641 DEBUG nova.compute.manager [None req-34569570-94d6-45db-bbd4-f042ac32b0d9 - - - - - -] [instance: 24ab8319-0576-4b43-a61b-63b34b98158a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:06.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e316 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:07 np0005539550 nova_compute[257631]: 2025-11-29 08:22:07.163 257641 DEBUG nova.compute.manager [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:07 np0005539550 nova_compute[257631]: 2025-11-29 08:22:07.163 257641 DEBUG nova.compute.manager [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing instance network info cache due to event network-changed-f435ee76-ed2f-4ad8-a9e1-bda955080b3e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:22:07 np0005539550 nova_compute[257631]: 2025-11-29 08:22:07.164 257641 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:22:07 np0005539550 nova_compute[257631]: 2025-11-29 08:22:07.164 257641 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:22:07 np0005539550 nova_compute[257631]: 2025-11-29 08:22:07.164 257641 DEBUG nova.network.neutron [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Refreshing network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:22:07 np0005539550 nova_compute[257631]: 2025-11-29 08:22:07.423 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.1 MiB/s wr, 60 op/s
Nov 29 03:22:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:07.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:08 np0005539550 nova_compute[257631]: 2025-11-29 08:22:08.193 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:22:08 np0005539550 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:22:08 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:22:08 np0005539550 systemd-logind[788]: New session 63 of user nova.
Nov 29 03:22:08 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:22:08 np0005539550 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:22:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:08.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:08 np0005539550 systemd[334755]: Queued start job for default target Main User Target.
Nov 29 03:22:08 np0005539550 systemd[334755]: Created slice User Application Slice.
Nov 29 03:22:08 np0005539550 systemd[334755]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:22:08 np0005539550 systemd[334755]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:22:08 np0005539550 systemd[334755]: Reached target Paths.
Nov 29 03:22:08 np0005539550 systemd[334755]: Reached target Timers.
Nov 29 03:22:08 np0005539550 systemd[334755]: Starting D-Bus User Message Bus Socket...
Nov 29 03:22:08 np0005539550 systemd[334755]: Starting Create User's Volatile Files and Directories...
Nov 29 03:22:08 np0005539550 systemd[334755]: Finished Create User's Volatile Files and Directories.
Nov 29 03:22:08 np0005539550 systemd[334755]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:22:08 np0005539550 systemd[334755]: Reached target Sockets.
Nov 29 03:22:08 np0005539550 systemd[334755]: Reached target Basic System.
Nov 29 03:22:08 np0005539550 systemd[334755]: Reached target Main User Target.
Nov 29 03:22:08 np0005539550 systemd[334755]: Startup finished in 143ms.
Nov 29 03:22:08 np0005539550 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:22:08 np0005539550 systemd[1]: Started Session 63 of User nova.
Nov 29 03:22:08 np0005539550 systemd[1]: session-63.scope: Deactivated successfully.
Nov 29 03:22:08 np0005539550 systemd-logind[788]: Session 63 logged out. Waiting for processes to exit.
Nov 29 03:22:08 np0005539550 systemd-logind[788]: Removed session 63.
Nov 29 03:22:08 np0005539550 nova_compute[257631]: 2025-11-29 08:22:08.963 257641 DEBUG nova.network.neutron [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updated VIF entry in instance network info cache for port f435ee76-ed2f-4ad8-a9e1-bda955080b3e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:22:08 np0005539550 nova_compute[257631]: 2025-11-29 08:22:08.964 257641 DEBUG nova.network.neutron [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Updating instance_info_cache with network_info: [{"id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "address": "fa:16:3e:a0:40:0c", "network": {"id": "2b381cec-57a8-4697-a273-a320681301f8", "bridge": "br-int", "label": "tempest-network-smoke--788617349", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf435ee76-ed", "ovs_interfaceid": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
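This cached network_info is nova's per-instance view of its ports; note "active": false now that the port is unbound here mid-revert. Extracting the fixed and floating addresses is a short traversal (the structure below is trimmed from the log entry; only the loop is new):

```python
# Sketch: pulling fixed and floating IPs out of the network_info
# structure cached above (trimmed to the fields the loop touches).
network_info = [{
    "id": "f435ee76-ed2f-4ad8-a9e1-bda955080b3e",
    "network": {"subnets": [{
        "ips": [{"address": "10.100.0.5", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.228",
                                   "type": "floating"}]}],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", floats)
# f435ee76-ed2f-4ad8-a9e1-bda955080b3e 10.100.0.5 -> ['192.168.122.228']
```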
Nov 29 03:22:08 np0005539550 systemd-logind[788]: New session 65 of user nova.
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:22:08 np0005539550 systemd[1]: Started Session 65 of User nova.
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:22:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:22:08 np0005539550 nova_compute[257631]: 2025-11-29 08:22:08.981 257641 DEBUG oslo_concurrency.lockutils [req-b9f23350-fc2c-435c-aa08-8b36bdd4ae31 req-2919f597-37df-4f16-abb6-90c341fb3998 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-37bf3f0c-b49b-457b-81be-b4b31f32d872" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:22:09 np0005539550 systemd[1]: session-65.scope: Deactivated successfully.
Nov 29 03:22:09 np0005539550 systemd-logind[788]: Session 65 logged out. Waiting for processes to exit.
Nov 29 03:22:09 np0005539550 systemd-logind[788]: Removed session 65.
Nov 29 03:22:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 870 KiB/s wr, 60 op/s
Nov 29 03:22:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Nov 29 03:22:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Nov 29 03:22:09 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Nov 29 03:22:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:09.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:10.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:11 np0005539550 nova_compute[257631]: 2025-11-29 08:22:11.122 257641 DEBUG nova.compute.manager [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:11 np0005539550 nova_compute[257631]: 2025-11-29 08:22:11.122 257641 DEBUG oslo_concurrency.lockutils [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:11 np0005539550 nova_compute[257631]: 2025-11-29 08:22:11.123 257641 DEBUG oslo_concurrency.lockutils [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:11 np0005539550 nova_compute[257631]: 2025-11-29 08:22:11.123 257641 DEBUG oslo_concurrency.lockutils [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "37bf3f0c-b49b-457b-81be-b4b31f32d872-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:11 np0005539550 nova_compute[257631]: 2025-11-29 08:22:11.123 257641 DEBUG nova.compute.manager [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] No waiting events found dispatching network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:11 np0005539550 nova_compute[257631]: 2025-11-29 08:22:11.123 257641 WARNING nova.compute.manager [req-15bc7495-ad51-4220-a117-75766b2ca4c0 req-20726506-0b67-42d4-9827-d6cd047cbf24 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Received unexpected event network-vif-plugged-f435ee76-ed2f-4ad8-a9e1-bda955080b3e for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:11 np0005539550 podman[334779]: 2025-11-29 08:22:11.354631299 +0000 UTC m=+0.092546875 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 03:22:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 33 KiB/s wr, 93 op/s
Nov 29 03:22:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:11.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:12 np0005539550 nova_compute[257631]: 2025-11-29 08:22:12.425 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:12.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:12 np0005539550 nova_compute[257631]: 2025-11-29 08:22:12.627 257641 INFO nova.network.neutron [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.054 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404518.053841, 37bf3f0c-b49b-457b-81be-b4b31f32d872 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.055 257641 INFO nova.compute.manager [-] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.078 257641 DEBUG nova.compute.manager [None req-787b5aa8-9032-47bb-be1c-4a4dfb2073d6 - - - - - -] [instance: 37bf3f0c-b49b-457b-81be-b4b31f32d872] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.196 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.244 257641 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.244 257641 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.244 257641 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.245 257641 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.245 257641 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.245 257641 WARNING nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.245 257641 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.246 257641 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.246 257641 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.246 257641 DEBUG oslo_concurrency.lockutils [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.247 257641 DEBUG nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.247 257641 WARNING nova.compute.manager [req-2a525007-efe5-40df-bfe7-a829f9126d88 req-c9faee5e-00de-4c87-9d19-44ab6ad4d497 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.436 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.437 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.438 257641 DEBUG nova.network.neutron [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 40 KiB/s wr, 125 op/s
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.568 257641 DEBUG nova.compute.manager [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.568 257641 DEBUG nova.compute.manager [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing instance network info cache due to event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:13 np0005539550 nova_compute[257631]: 2025-11-29 08:22:13.568 257641 DEBUG oslo_concurrency.lockutils [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:13.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:14.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:14 np0005539550 nova_compute[257631]: 2025-11-29 08:22:14.732 257641 DEBUG nova.network.neutron [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:14 np0005539550 nova_compute[257631]: 2025-11-29 08:22:14.890 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:14 np0005539550 nova_compute[257631]: 2025-11-29 08:22:14.893 257641 DEBUG oslo_concurrency.lockutils [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:14 np0005539550 nova_compute[257631]: 2025-11-29 08:22:14.893 257641 DEBUG nova.network.neutron [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.270 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.271 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.271 257641 INFO nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Creating image(s)#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.312 257641 DEBUG nova.storage.rbd_utils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] creating snapshot(nova-resize) on rbd image(5004bd0f-c699-46d7-b535-b3a7db186a87_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:22:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Nov 29 03:22:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Nov 29 03:22:15 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Nov 29 03:22:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 29 KiB/s wr, 195 op/s
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.509 257641 DEBUG nova.objects.instance [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.610 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.611 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Ensure instance console log exists: /var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.611 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.612 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.612 257641 DEBUG oslo_concurrency.lockutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.616 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Start _get_guest_xml network_info=[{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:47:25:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.621 257641 WARNING nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.629 257641 DEBUG nova.virt.libvirt.host [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:15.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.630 257641 DEBUG nova.virt.libvirt.host [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.634 257641 DEBUG nova.virt.libvirt.host [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.635 257641 DEBUG nova.virt.libvirt.host [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.636 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.636 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.637 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.637 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.638 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.638 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.638 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.639 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.639 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.639 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.640 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.640 257641 DEBUG nova.virt.hardware [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.640 257641 DEBUG nova.objects.instance [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:15 np0005539550 nova_compute[257631]: 2025-11-29 08:22:15.665 257641 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.124 257641 DEBUG nova.network.neutron [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updated VIF entry in instance network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.125 257641 DEBUG nova.network.neutron [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/822752538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.142 257641 DEBUG oslo_concurrency.lockutils [req-9ad72338-5840-4bb4-b53a-ced3db456e5a req-0eda5bf9-472c-425e-ae4d-0f4984de3aae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.163 257641 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.198 257641 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451498365' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:16.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.614 257641 DEBUG oslo_concurrency.processutils [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.616 257641 DEBUG nova.virt.libvirt.vif [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:47:25:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.617 257641 DEBUG nova.network.os_vif_util [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:47:25:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.618 257641 DEBUG nova.network.os_vif_util [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.620 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <uuid>5004bd0f-c699-46d7-b535-b3a7db186a87</uuid>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <name>instance-00000079</name>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <memory>196608</memory>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestJSON-server-1253218278</nova:name>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:22:15</nova:creationTime>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.micro">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:memory>192</nova:memory>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <nova:port uuid="d5e26252-e0d3-4a6b-8b18-b2f4cb7db432">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <entry name="serial">5004bd0f-c699-46d7-b535-b3a7db186a87</entry>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <entry name="uuid">5004bd0f-c699-46d7-b535-b3a7db186a87</entry>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5004bd0f-c699-46d7-b535-b3a7db186a87_disk">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5004bd0f-c699-46d7-b535-b3a7db186a87_disk.config">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:47:25:11"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <target dev="tapd5e26252-e0"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/5004bd0f-c699-46d7-b535-b3a7db186a87/console.log" append="off"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:22:16 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:22:16 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:22:16 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:22:16 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
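
[annotation] The domain XML above is what Nova's _get_guest_xml generated for this boot. Once the domain is defined, the same XML can be read back from libvirt for comparison; a minimal read-only sketch with libvirt-python (python3-libvirt), using the domain name instance-00000079 that appears in the systemd-machined line further below:

    import libvirt

    # Read-only connection to the local system libvirt daemon.
    conn = libvirt.openReadOnly("qemu:///system")
    # instance-00000079 is the libvirt domain name Nova assigned to
    # instance 5004bd0f-c699-46d7-b535-b3a7db186a87 (see "New machine
    # qemu-62-instance-00000079" below).
    dom = conn.lookupByName("instance-00000079")
    # XMLDesc() returns the live domain XML, comparable to the block
    # Nova logged from _get_guest_xml above.
    print(dom.XMLDesc(0))
    conn.close()
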
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.622 257641 DEBUG nova.virt.libvirt.vif [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:21:24Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:47:25:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.623 257641 DEBUG nova.network.os_vif_util [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-1145729544-network", "vif_mac": "fa:16:3e:47:25:11"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.623 257641 DEBUG nova.network.os_vif_util [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.624 257641 DEBUG os_vif [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.624 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.625 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.625 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.628 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.628 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e26252-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.629 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5e26252-e0, col_values=(('external_ids', {'iface-id': 'd5e26252-e0d3-4a6b-8b18-b2f4cb7db432', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:25:11', 'vm-uuid': '5004bd0f-c699-46d7-b535-b3a7db186a87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.630 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 NetworkManager[49039]: <info>  [1764404536.6314] manager: (tapd5e26252-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/232)
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.637 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.637 257641 INFO os_vif [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0')#033[00m
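
[annotation] The os-vif plug above is two idempotent OVSDB transactions: AddPortCommand(may_exist=True) on br-int, then a DbSetCommand writing the Neutron external_ids onto the Interface row (the earlier AddBridgeCommand commit reported "Transaction caused no change" precisely because of that idempotency). A hand-rolled equivalent via ovs-vsctl, sketched with subprocess and values copied from the log:

    import subprocess

    def plug_vif(bridge, dev, iface_id, mac, vm_uuid):
        # --may-exist mirrors AddPortCommand(may_exist=True); the trailing
        # "set Interface" clause mirrors the DbSetCommand external_ids.
        subprocess.run(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
             "--", "set", "Interface", dev,
             f"external_ids:iface-id={iface_id}",
             "external_ids:iface-status=active",
             f"external_ids:attached-mac={mac}",
             f"external_ids:vm-uuid={vm_uuid}"],
            check=True)

    plug_vif("br-int", "tapd5e26252-e0",
             "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432",
             "fa:16:3e:47:25:11",
             "5004bd0f-c699-46d7-b535-b3a7db186a87")
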
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.748 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.749 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.749 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No VIF found with MAC fa:16:3e:47:25:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.750 257641 INFO nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Using config drive#033[00m
Nov 29 03:22:16 np0005539550 kernel: tapd5e26252-e0: entered promiscuous mode
Nov 29 03:22:16 np0005539550 NetworkManager[49039]: <info>  [1764404536.8388] manager: (tapd5e26252-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.838 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:16Z|00509|binding|INFO|Claiming lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for this chassis.
Nov 29 03:22:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:16Z|00510|binding|INFO|d5e26252-e0d3-4a6b-8b18-b2f4cb7db432: Claiming fa:16:3e:47:25:11 10.100.0.14
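
[annotation] Once ovn-controller claims the lport, the claim is recorded in the southbound Port_Binding table (the chassis column points at this node's chassis row, and "up" flips once the binding is installed). A quick check, sketched via subprocess; ovn-sbctl must be able to reach the SB database, which is deployment-specific:

    import subprocess

    # Find the Port_Binding row for the logical port claimed above; the
    # chassis column should now reference this node's chassis record.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432"],
        capture_output=True, text=True, check=True)
    print(out.stdout)
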
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.843 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.848 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 nova_compute[257631]: 2025-11-29 08:22:16.853 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:16 np0005539550 NetworkManager[49039]: <info>  [1764404536.8560] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Nov 29 03:22:16 np0005539550 NetworkManager[49039]: <info>  [1764404536.8565] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Nov 29 03:22:16 np0005539550 systemd-udevd[334977]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.868 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:25:11 10.100.0.14'], port_security=['fa:16:3e:47:25:11 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5004bd0f-c699-46d7-b535-b3a7db186a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.870 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.871 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:22:16 np0005539550 systemd-machined[216673]: New machine qemu-62-instance-00000079.
Nov 29 03:22:16 np0005539550 NetworkManager[49039]: <info>  [1764404536.8839] device (tapd5e26252-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.883 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[adafec48-42c8-4fcf-85a6-9e31a84c29fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:16 np0005539550 NetworkManager[49039]: <info>  [1764404536.8847] device (tapd5e26252-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.884 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
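
[annotation] The metadata provisioning above boils down to a veth pair with one end moved into the ovnmeta- namespace: tap58fd104d-40 stays in the root namespace (and is added to br-int a few lines below), tap58fd104d-41 becomes the namespace side that haproxy binds. A rough, simplified equivalent in ip(8) calls, sketched via subprocess; the real agent also wires up additional addressing and the OVS external_ids, so this is an illustration, not the agent's exact procedure:

    import subprocess

    ns = "ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788"
    cmds = [
        # veth pair: the -40 end stays in the root namespace,
        # the -41 end is pushed into the metadata namespace.
        ["ip", "link", "add", "tap58fd104d-40", "type", "veth",
         "peer", "name", "tap58fd104d-41"],
        ["ip", "netns", "add", ns],
        ["ip", "link", "set", "tap58fd104d-41", "netns", ns],
        ["ip", "-n", ns, "link", "set", "tap58fd104d-41", "up"],
        # The well-known metadata address that haproxy later binds
        # (see the generated config below).
        ["ip", "-n", ns, "addr", "add", "169.254.169.254/32",
         "dev", "tap58fd104d-41"],
    ]
    for cmd in cmds:
        subprocess.run(cmd, check=True)
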
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.886 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.886 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f98cc801-4290-455c-ac68-f0616e73971f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.887 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9b1f5e86-8c7e-48f1-b83e-888b65904249]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.898 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[ae86cef1-705b-46dd-a21a-0f4b7f4d7a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:16 np0005539550 systemd[1]: Started Virtual Machine qemu-62-instance-00000079.
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.922 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb5d869-af5e-403d-9b6f-61b5f7844187]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.956 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c8c59af9-ccf9-479b-af50-04d3a358a9ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:16 np0005539550 NetworkManager[49039]: <info>  [1764404536.9659] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Nov 29 03:22:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.965 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[460eed0b-aa69-43e7-99b9-c0cafacfef7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:16.999 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d8039ddc-6a70-4812-b6d0-6a18d6a90c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.003 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f4fdde-d48e-4ab6-ae85-fb97d01aabcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 NetworkManager[49039]: <info>  [1764404537.0261] device (tap58fd104d-40): carrier: link connected
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.030 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d868198f-fb72-4be8-986a-193eb2e6d2fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.046 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e7aaebb6-5362-40fb-96be-eee29cab39dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758469, 'reachable_time': 44985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335011, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Nov 29 03:22:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.063 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[149218f9-b438-4bc4-aa3e-e72b133c4015]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758469, 'tstamp': 758469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335012, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
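
[annotation] The mon and mgr lines interleaved here report the cluster view: osdmap e319 with 3 OSDs total/up/in, and a pgmap of 305 active+clean PGs. The same figures can be pulled programmatically; a sketch assuming the ceph CLI and admin keyring are available on this node (JSON key layout varies slightly across Ceph releases):

    import json
    import subprocess

    # "ceph -s --format json" reports the same osdmap/pgmap state the
    # mon and mgr are logging above.
    status = json.loads(
        subprocess.run(["ceph", "-s", "--format", "json"],
                       capture_output=True, text=True,
                       check=True).stdout)
    print(status["osdmap"])              # num_osds / num_up_osds / num_in_osds
    print(status["pgmap"]["num_pgs"])    # expect 305 here
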
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.084 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ccefb612-0d68-4a1e-a7e1-90edb7277254]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 146], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758469, 'reachable_time': 44985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335013, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.111 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1eca06aa-9d99-410e-a5f2-e92aeadf6a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.121 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.151 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:17Z|00511|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 ovn-installed in OVS
Nov 29 03:22:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:17Z|00512|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 up in Southbound
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.165 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.172 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[256e2131-c321-4b52-b11f-425491e76644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.174 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.174 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.174 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.176 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:22:17 np0005539550 NetworkManager[49039]: <info>  [1764404537.1772] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.178 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.181 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.182 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:17Z|00513|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.183 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.185 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.186 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e1aa77ee-82ce-408d-9fb1-849fe90b4bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.186 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:22:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:17.188 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
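
[annotation] Before a config like the one just logged is handed to haproxy, it can be syntax-checked without starting the daemon; haproxy -c parses the file and exits non-zero on errors. A minimal sketch using the same config path as the rootwrap command above:

    import subprocess

    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf")
    # -c validates the configuration only: nothing binds
    # 169.254.169.254 and no worker is forked.
    subprocess.run(["haproxy", "-c", "-f", cfg], check=True)
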
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.198 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.430 257641 DEBUG nova.compute.manager [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.431 257641 DEBUG oslo_concurrency.lockutils [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.431 257641 DEBUG oslo_concurrency.lockutils [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.432 257641 DEBUG oslo_concurrency.lockutils [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.432 257641 DEBUG nova.compute.manager [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.432 257641 WARNING nova.compute.manager [req-94c47667-3d08-4db5-883a-87a5f444510f req-e1cfefe4-4faf-4344-b679-e12552a1dd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.433 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 27 KiB/s wr, 178 op/s
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.544 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404537.5441077, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.545 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.547 257641 DEBUG nova.compute.manager [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.551 257641 INFO nova.virt.libvirt.driver [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance running successfully.#033[00m
Nov 29 03:22:17 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.553 257641 DEBUG nova.virt.libvirt.guest [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.554 257641 DEBUG nova.virt.libvirt.driver [None req-934fe698-6203-4f68-a87c-e1ee0f7123bc 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:22:17 np0005539550 podman[335087]: 2025-11-29 08:22:17.56256708 +0000 UTC m=+0.052466065 container create e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.576 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.584 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:17 np0005539550 systemd[1]: Started libpod-conmon-e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5.scope.
Nov 29 03:22:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:22:17 np0005539550 podman[335087]: 2025-11-29 08:22:17.531188892 +0000 UTC m=+0.021087907 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:22:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ead335c7511cac11a0dcb9327b45bab67b5173dc807faa34f404582bd77953d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:17.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.637 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.638 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404537.5466418, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.638 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:17 np0005539550 podman[335087]: 2025-11-29 08:22:17.642032431 +0000 UTC m=+0.131931436 container init e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:22:17 np0005539550 podman[335087]: 2025-11-29 08:22:17.648459065 +0000 UTC m=+0.138358050 container start e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:22:17 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [NOTICE]   (335107) : New worker (335109) forked
Nov 29 03:22:17 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [NOTICE]   (335107) : Loading success.
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.679 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:17 np0005539550 nova_compute[257631]: 2025-11-29 08:22:17.684 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Synchronizing instance power state after lifecycle event "Started"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
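
[annotation] At this point finish_migration has completed and the instance sits in vm_state resized (API status VERIFY_RESIZE), waiting for the resize to be confirmed or reverted; the earlier "unexpected event ... task_state resize_finish" warning was just the vif-plugged notification arriving while no waiter was registered. The confirm step a tempest-style client performs next looks roughly like this with openstacksdk (the cloud name "overcloud" is a hypothetical clouds.yaml entry):

    import openstack

    conn = openstack.connect(cloud="overcloud")  # hypothetical cloud name
    server = conn.compute.get_server(
        "5004bd0f-c699-46d7-b535-b3a7db186a87")
    # Confirms the resize, releasing the old flavor's resources; the
    # server then returns to ACTIVE.
    conn.compute.confirm_server_resize(server)
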
Nov 29 03:22:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:18.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:18.954 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:18.956 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:18.957 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
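
These three lines are one pass through oslo.concurrency's in-process named lock: acquire, run, release, with the wait/hold times reported by the inner() wrapper named in each message. The decorator form is the usual way to get exactly this pattern:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Body runs with the named lock held; the "waited 0.002s" and
        # "held 0.000s" figures above come from the synchronized wrapper.
        ...
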
Nov 29 03:22:19 np0005539550 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:22:19 np0005539550 systemd[334755]: Activating special unit Exit the Session...
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped target Main User Target.
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped target Basic System.
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped target Paths.
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped target Sockets.
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped target Timers.
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:22:19 np0005539550 systemd[334755]: Closed D-Bus User Message Bus Socket.
Nov 29 03:22:19 np0005539550 systemd[334755]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:22:19 np0005539550 systemd[334755]: Removed slice User Application Slice.
Nov 29 03:22:19 np0005539550 systemd[334755]: Reached target Shutdown.
Nov 29 03:22:19 np0005539550 systemd[334755]: Finished Exit the Session.
Nov 29 03:22:19 np0005539550 systemd[334755]: Reached target Exit the Session.
Nov 29 03:22:19 np0005539550 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:22:19 np0005539550 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:22:19 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:22:19 np0005539550 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:22:19 np0005539550 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:22:19 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:22:19 np0005539550 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:22:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 9.7 KiB/s wr, 149 op/s
Nov 29 03:22:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:19.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007518059045226131 of space, bias 1.0, pg target 2.255417713567839 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:22:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
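
The _maybe_adjust pass above repeats one computation per pool: the pool's share of raw space, times its bias, times the root's PG budget, rounded to a power of two with a per-pool floor. The budget works out to 300 here (consistent with 3 OSDs times the default mon_target_pg_per_osd=100), which is why 'vms' at ~0.0075 of space gets a raw target of ~2.26. A simplified sketch, assuming default floors of 32 for data pools, 16 for cephfs metadata, and 1 for .mgr (the real module additionally applies a change only when the ideal differs from the current pg_num by roughly 3x):

    import math

    def autoscaler_target(usage_fraction, bias, pg_num_min=32, pg_budget=300):
        raw = usage_fraction * bias * pg_budget        # the logged "pg target"
        pow2 = 2 ** round(math.log2(raw)) if raw > 0 else 1
        return max(pg_num_min, pow2)                   # the logged "quantized to"

    assert autoscaler_target(0.007518059045226131, 1.0) == 32        # 'vms'
    assert autoscaler_target(1.4540294062907128e-06, 4.0, 16) == 16  # cephfs meta
    assert autoscaler_target(2.0538165363856318e-05, 1.0, 1) == 1    # '.mgr'
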
Nov 29 03:22:20 np0005539550 nova_compute[257631]: 2025-11-29 08:22:20.413 257641 DEBUG nova.compute.manager [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:20 np0005539550 nova_compute[257631]: 2025-11-29 08:22:20.414 257641 DEBUG oslo_concurrency.lockutils [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:20 np0005539550 nova_compute[257631]: 2025-11-29 08:22:20.414 257641 DEBUG oslo_concurrency.lockutils [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:20 np0005539550 nova_compute[257631]: 2025-11-29 08:22:20.414 257641 DEBUG oslo_concurrency.lockutils [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:20 np0005539550 nova_compute[257631]: 2025-11-29 08:22:20.415 257641 DEBUG nova.compute.manager [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:20 np0005539550 nova_compute[257631]: 2025-11-29 08:22:20.415 257641 WARNING nova.compute.manager [req-44f3752c-0c4c-4c6a-ac35-ad70a2bdf619 req-9152ca51-e4bf-48a2-a6c8-b458606401ef 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state None.#033[00m
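
The "No waiting events found" / "Received unexpected event" pair shows nova's event latch at work: external events from Neutron only matter if an in-flight operation registered a waiter for them first. During this resize-revert nothing is waiting on network-vif-plugged, so the event is popped into the void and logged as a warning. The pattern, reduced to a sketch:

    import threading

    waiters = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, name):
        waiters[(instance_uuid, name)] = latch = threading.Event()
        return latch

    def pop_instance_event(instance_uuid, name):
        latch = waiters.pop((instance_uuid, name), None)
        if latch is None:
            print('Received unexpected event %s' % name)  # the WARNING above
        else:
            latch.set()   # wakes the thread blocked in latch.wait(timeout)
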
Nov 29 03:22:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 343 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 2.4 MiB/s wr, 245 op/s
Nov 29 03:22:21 np0005539550 nova_compute[257631]: 2025-11-29 08:22:21.632 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:21.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:22 np0005539550 nova_compute[257631]: 2025-11-29 08:22:22.274 257641 DEBUG nova.network.neutron [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Nov 29 03:22:22 np0005539550 nova_compute[257631]: 2025-11-29 08:22:22.275 257641 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:22 np0005539550 nova_compute[257631]: 2025-11-29 08:22:22.275 257641 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:22 np0005539550 nova_compute[257631]: 2025-11-29 08:22:22.275 257641 DEBUG nova.network.neutron [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:22 np0005539550 nova_compute[257631]: 2025-11-29 08:22:22.476 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:22.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 350 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.2 MiB/s wr, 241 op/s
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.593 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.594 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.615 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:22:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:23.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.704 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.705 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.712 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.712 257641 INFO nova.compute.claims [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:22:23 np0005539550 nova_compute[257631]: 2025-11-29 08:22:23.835 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3872115490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.296 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
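
The 0.46 s subprocess above is how nova's RBD image backend sizes its DISK_GB inventory: it shells out to ceph df and reads the cluster totals from the JSON. Roughly:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # 21 GiB total / 20 GiB avail, per the pgmap lines above
    print(stats['total_bytes'], stats['total_avail_bytes'])
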
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.302 257641 DEBUG nova.compute.provider_tree [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.317 257641 DEBUG nova.scheduler.client.report [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
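
The inventory dict above is what placement turns into schedulable capacity, per resource class: usable = (total - reserved) * allocation_ratio. For this node:

    vcpu_limit   = (8    - 0)   * 4.0   # 32 schedulable vCPUs
    memory_limit = (7680 - 512) * 1.0   # 7168 MB of RAM
    disk_limit   = (20   - 1)   * 0.9   # 17.1 GB (Ceph-backed, hence "ceph df")
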
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.343 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.359 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "a42f6a76-bc34-4cda-9e3f-b809b959654d" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.359 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "a42f6a76-bc34-4cda-9e3f-b809b959654d" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.368 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "a42f6a76-bc34-4cda-9e3f-b809b959654d" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.369 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.426 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.427 257641 DEBUG nova.network.neutron [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.448 257641 INFO nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.466 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.551 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.552 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.553 257641 INFO nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Creating image(s)#033[00m
Nov 29 03:22:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:24.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.627 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.661 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.688 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.693 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.725 257641 DEBUG nova.policy [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0a8754f9da5640b784a1a46ae3b4d9e2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '44709b8542a14e20970b111bd3fce127', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.729 257641 DEBUG nova.network.neutron [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.749 257641 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
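
The instance_info_cache update above carries nova's cached Neutron view of the port as one JSON list. Pulling the addressing out of it is plain dict-walking; a sketch, where blob stands for the [...] payload logged above:

    import json

    def addresses(blob):
        out = []
        for vif in json.loads(blob):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    out.append((ip['address'],
                                [f['address'] for f in ip['floating_ips']]))
        return out

    # For the cache entry above: [('10.100.0.14', ['192.168.122.241'])]
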
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.774 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
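
The qemu-img probe is deliberately sandboxed: oslo.concurrency re-execs itself as the prlimit wrapper seen in the command line, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed image cannot wedge the compute service. The equivalent direct call, sketched with the public processutils API:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json', prlimit=limits)
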
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.775 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.776 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.776 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.809 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.816 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 50832848-4c11-4eab-8274-8582250e1b30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
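
The repeated "rbd image ... does not exist" lines are existence probes: nova.storage.rbd_utils opens the image and treats ImageNotFound as "not created yet", which clears the way for the rbd import of the cached base file. The probe, approximately, with the Ceph Python bindings:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        rbd.Image(ioctx, '50832848-4c11-4eab-8274-8582250e1b30_disk').close()
        print('image already exists')
    except rbd.ImageNotFound:
        print('does not exist -> import the _base file')  # the DEBUG lines above
    finally:
        ioctx.close()
        cluster.shutdown()
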
Nov 29 03:22:24 np0005539550 kernel: tapd5e26252-e0 (unregistering): left promiscuous mode
Nov 29 03:22:24 np0005539550 NetworkManager[49039]: <info>  [1764404544.8458] device (tapd5e26252-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.885 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:24Z|00514|binding|INFO|Releasing lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 from this chassis (sb_readonly=0)
Nov 29 03:22:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:24Z|00515|binding|INFO|Setting lport d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 down in Southbound
Nov 29 03:22:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:24Z|00516|binding|INFO|Removing iface tapd5e26252-e0 ovn-installed in OVS
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.888 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:24.892 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:25:11 10.100.0.14'], port_security=['fa:16:3e:47:25:11 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '5004bd0f-c699-46d7-b535-b3a7db186a87', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:24.894 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
Nov 29 03:22:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:24.895 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:24.896 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1464cc9a-d553-4536-a1ea-af686259351c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:24.897 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
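
The namespace name in these lines is mechanical: "ovnmeta-" plus the Neutron network UUID. The cleanup the agent performs through privsep (it uses pyroute2 internally) amounts to deleting that namespace; the CLI-level equivalent, as a rough sketch:

    import subprocess

    net_id = '58fd104d-4342-482d-ae9e-dbb4b9fa6788'
    subprocess.run(['ip', 'netns', 'delete', f'ovnmeta-{net_id}'], check=False)
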
Nov 29 03:22:24 np0005539550 nova_compute[257631]: 2025-11-29 08:22:24.911 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:24 np0005539550 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000079.scope: Deactivated successfully.
Nov 29 03:22:24 np0005539550 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000079.scope: Consumed 8.203s CPU time.
Nov 29 03:22:24 np0005539550 systemd-machined[216673]: Machine qemu-62-instance-00000079 terminated.
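
The scope name two lines up is systemd-escaped: "\x2d" is an escaped "-", so the unit is really machine-qemu-62-instance-00000079.scope, the libvirt machine slice for domain instance-00000079. systemd-escape --unescape does the decoding; in Python:

    import re

    name = r'machine-qemu\x2d62\x2dinstance\x2d00000079.scope'
    print(re.sub(r'\\x([0-9a-f]{2})',
                 lambda m: chr(int(m.group(1), 16)), name))
    # -> machine-qemu-62-instance-00000079.scope
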
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.011 257641 INFO nova.virt.libvirt.driver [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Instance destroyed successfully.#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.011 257641 DEBUG nova.objects.instance [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.025 257641 DEBUG nova.virt.libvirt.vif [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:21:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1253218278',display_name='tempest-ServerActionsTestJSON-server-1253218278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1253218278',id=121,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:17Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-84wu0jt1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=5004bd0f-c699-46d7-b535-b3a7db186a87,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.026 257641 DEBUG nova.network.os_vif_util [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.027 257641 DEBUG nova.network.os_vif_util [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.028 257641 DEBUG os_vif [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.030 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.031 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e26252-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.032 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.036 257641 INFO os_vif [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:25:11,bridge_name='br-int',has_traffic_filtering=True,id=d5e26252-e0d3-4a6b-8b18-b2f4cb7db432,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e26252-e0')#033[00m
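
The unplug boils down to the single ovsdb transaction logged above as DelPortCommand. Reproduced with ovsdbapp's Open_vSwitch API (the socket path is an assumption; os-vif wires this connection up itself):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Same semantics as the logged command; idempotent thanks to if_exists.
    api.del_port('tapd5e26252-e0', bridge='br-int',
                 if_exists=True).execute(check_error=True)
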
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.039 257641 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.040 257641 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [NOTICE]   (335107) : haproxy version is 2.8.14-c23fe91
Nov 29 03:22:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [NOTICE]   (335107) : path to executable is /usr/sbin/haproxy
Nov 29 03:22:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [WARNING]  (335107) : Exiting Master process...
Nov 29 03:22:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [WARNING]  (335107) : Exiting Master process...
Nov 29 03:22:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [ALERT]    (335107) : Current worker (335109) exited with code 143 (Terminated)
Nov 29 03:22:25 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[335103]: [WARNING]  (335107) : All workers exited. Exiting... (0)
Nov 29 03:22:25 np0005539550 systemd[1]: libpod-e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5.scope: Deactivated successfully.
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.058 257641 DEBUG nova.objects.instance [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'migration_context' on Instance uuid 5004bd0f-c699-46d7-b535-b3a7db186a87 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:25 np0005539550 podman[335313]: 2025-11-29 08:22:25.059541983 +0000 UTC m=+0.066982074 container died e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.154 257641 DEBUG oslo_concurrency.processutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5-userdata-shm.mount: Deactivated successfully.
Nov 29 03:22:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4ead335c7511cac11a0dcb9327b45bab67b5173dc807faa34f404582bd77953d-merged.mount: Deactivated successfully.
Nov 29 03:22:25 np0005539550 podman[335313]: 2025-11-29 08:22:25.194212368 +0000 UTC m=+0.201652459 container cleanup e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:22:25 np0005539550 systemd[1]: libpod-conmon-e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5.scope: Deactivated successfully.
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.273 257641 DEBUG nova.compute.manager [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.275 257641 DEBUG oslo_concurrency.lockutils [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.276 257641 DEBUG oslo_concurrency.lockutils [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.276 257641 DEBUG oslo_concurrency.lockutils [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.276 257641 DEBUG nova.compute.manager [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.277 257641 WARNING nova.compute.manager [req-97866b7f-9572-4eac-b9d5-1c73598fb066 req-7a2b315f-79c4-431e-8811-f83f474f28fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-unplugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:25 np0005539550 podman[335353]: 2025-11-29 08:22:25.367963207 +0000 UTC m=+0.151997767 container remove e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.374 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[53a69bdf-fd0d-468e-98e8-2d7d5968c07b]: (4, ('Sat Nov 29 08:22:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5)\ne707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5\nSat Nov 29 08:22:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (e707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5)\ne707706e6e35933251f954ef7caa9acc02d72d21a9beb189b56b18227e93c4c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.375 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[55cc6eaa-c1b8-4f58-a69c-ac435c6bda3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.376 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.378 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539550 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.395 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.398 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[35edb73a-e7cb-4ef4-829d-15f3ad90c1e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.416 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6346ae10-58c1-490b-821a-38ad71dc53fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.417 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b823e2d4-344c-4c25-acb1-755721fa8040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.433 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8f20396c-2401-44ab-af7f-c6aaae086995]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758461, 'reachable_time': 15835, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335387, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539550 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.437 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:22:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:25.438 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[3abd2bd2-b3b3-4ac0-bbaf-0b146ce3836d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 352 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.6 MiB/s wr, 235 op/s
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.556 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 50832848-4c11-4eab-8274-8582250e1b30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
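The CMD line above is Nova shelling out to the rbd CLI through oslo.concurrency rather than calling librbd directly. A minimal sketch of that call pattern, assuming oslo.concurrency is installed and the rbd binary, pool, and paths from the log entry exist on the host (this is an illustration, not Nova's source):

    # Sketch of the shell-out pattern logged above. processutils.execute
    # raises ProcessExecutionError on a non-zero exit code, which is why
    # the log only ever records 'returned: 0' on the success path.
    from oslo_concurrency import processutils

    stdout, stderr = processutils.execute(
        'rbd', 'import',
        '--pool', 'vms',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '50832848-4c11-4eab-8274-8582250e1b30_disk',
        '--image-format=2',
        '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')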
Nov 29 03:22:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3949824456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.640 257641 DEBUG nova.network.neutron [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Successfully created port: a9e43e6c-9faa-46cf-8f94-656f7a6469ad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.643 257641 DEBUG oslo_concurrency.processutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.649 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] resizing rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
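The resize target above is the flavor's 1 GiB root disk expressed in bytes (rbd_utils.resize takes a byte count; the m1.nano flavor with root_gb=1 appears later in this section). A one-line check of the arithmetic:

    root_gb = 1                               # m1.nano root_gb, per the flavor dump below
    assert root_gb * 1024 ** 3 == 1073741824  # the byte count logged above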
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.744 257641 DEBUG nova.compute.provider_tree [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.759 257641 DEBUG nova.scheduler.client.report [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
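The inventory record above is what the resource tracker reports to placement; usable capacity per resource class works out to (total - reserved) * allocation_ratio, which placement effectively truncates to an integer. A sketch of that arithmetic over the logged values (the dict is copied from the entry above; the formula is the standard placement capacity calculation, stated here from memory):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)   # VCPU 32, MEMORY_MB 7168, DISK_GB 17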
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.858 257641 DEBUG oslo_concurrency.lockutils [None req-fff08c37-bac7-40d8-a808-89da9ad85805 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.925 257641 DEBUG nova.objects.instance [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lazy-loading 'migration_context' on Instance uuid 50832848-4c11-4eab-8274-8582250e1b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.943 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.943 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Ensure instance console log exists: /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.944 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.944 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:25 np0005539550 nova_compute[257631]: 2025-11-29 08:22:25.944 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.355 257641 DEBUG nova.network.neutron [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Successfully updated port: a9e43e6c-9faa-46cf-8f94-656f7a6469ad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.384 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.384 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquired lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.384 257641 DEBUG nova.network.neutron [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.460 257641 DEBUG nova.compute.manager [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-changed-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.460 257641 DEBUG nova.compute.manager [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Refreshing instance network info cache due to event network-changed-a9e43e6c-9faa-46cf-8f94-656f7a6469ad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.460 257641 DEBUG oslo_concurrency.lockutils [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:26 np0005539550 nova_compute[257631]: 2025-11-29 08:22:26.555 257641 DEBUG nova.network.neutron [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:22:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:26.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.477 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.2 MiB/s wr, 230 op/s
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.621 257641 DEBUG nova.network.neutron [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Updating instance_info_cache with network_info: [{"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
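The network_info blob above is serialized as JSON (note the lowercase true/false/null), so it can be inspected directly. A sketch, assuming network_info_json holds the [...] payload from the entry above (the variable name is illustrative):

    import json

    vifs = json.loads(network_info_json)
    vif = vifs[0]
    print(vif['devname'],                                    # tapa9e43e6c-9f
          vif['address'],                                    # fa:16:3e:09:4c:c0
          vif['network']['subnets'][0]['ips'][0]['address'], # 10.100.0.13
          vif['network']['meta']['mtu'])                     # 1442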
Nov 29 03:22:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:27.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.654 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Releasing lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.654 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Instance network_info: |[{"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.654 257641 DEBUG oslo_concurrency.lockutils [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.654 257641 DEBUG nova.network.neutron [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Refreshing network info cache for port a9e43e6c-9faa-46cf-8f94-656f7a6469ad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.657 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Start _get_guest_xml network_info=[{"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.662 257641 WARNING nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.668 257641 DEBUG nova.virt.libvirt.host [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.668 257641 DEBUG nova.virt.libvirt.host [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.672 257641 DEBUG nova.virt.libvirt.host [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.673 257641 DEBUG nova.virt.libvirt.host [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.674 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.675 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.675 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.676 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.676 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.676 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.676 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.677 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.677 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.677 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.677 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.678 257641 DEBUG nova.virt.hardware [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
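The run of hardware.py lines above shows the topology search degenerating for a 1-vCPU guest: with no flavor or image constraints (0 means unset), each dimension's limit defaults to 65536, and the only factorization of 1 vCPU is 1 socket x 1 core x 1 thread. A simplified stand-in for that enumeration (not the Nova implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is the vCPU count
        # and which fits inside the per-dimension limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -- matches the log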
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.681 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.722 257641 DEBUG nova.compute.manager [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.722 257641 DEBUG oslo_concurrency.lockutils [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.723 257641 DEBUG oslo_concurrency.lockutils [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.723 257641 DEBUG oslo_concurrency.lockutils [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.724 257641 DEBUG nova.compute.manager [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:27 np0005539550 nova_compute[257631]: 2025-11-29 08:22:27.724 257641 WARNING nova.compute.manager [req-a3a51699-2f08-4744-b1db-dab191547d29 req-4861dcbb-86a8-483c-b26d-c72286ec44c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2377823327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.157 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.185 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.189 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888830529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:28.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.643 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.645 257641 DEBUG nova.virt.libvirt.vif [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-678096733',display_name='tempest-ServerGroupTestJSON-server-678096733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-678096733',id=124,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44709b8542a14e20970b111bd3fce127',ramdisk_id='',reservation_id='r-cea7q5id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1892233603',owner_user_name='tempest-ServerGroupTestJSON-1892233603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:24Z,user_data=None,user_id='0a8754f9da5640b784a1a46ae3b4d9e2',uuid=50832848-4c11-4eab-8274-8582250e1b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.645 257641 DEBUG nova.network.os_vif_util [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Converting VIF {"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.646 257641 DEBUG nova.network.os_vif_util [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:4c:c0,bridge_name='br-int',has_traffic_filtering=True,id=a9e43e6c-9faa-46cf-8f94-656f7a6469ad,network=Network(1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9e43e6c-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.647 257641 DEBUG nova.objects.instance [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lazy-loading 'pci_devices' on Instance uuid 50832848-4c11-4eab-8274-8582250e1b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.672 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <uuid>50832848-4c11-4eab-8274-8582250e1b30</uuid>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <name>instance-0000007c</name>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerGroupTestJSON-server-678096733</nova:name>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:22:27</nova:creationTime>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:user uuid="0a8754f9da5640b784a1a46ae3b4d9e2">tempest-ServerGroupTestJSON-1892233603-project-member</nova:user>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:project uuid="44709b8542a14e20970b111bd3fce127">tempest-ServerGroupTestJSON-1892233603</nova:project>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <nova:port uuid="a9e43e6c-9faa-46cf-8f94-656f7a6469ad">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <entry name="serial">50832848-4c11-4eab-8274-8582250e1b30</entry>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <entry name="uuid">50832848-4c11-4eab-8274-8582250e1b30</entry>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/50832848-4c11-4eab-8274-8582250e1b30_disk">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/50832848-4c11-4eab-8274-8582250e1b30_disk.config">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:09:4c:c0"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <target dev="tapa9e43e6c-9f"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/console.log" append="off"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:22:28 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:22:28 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:22:28 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:22:28 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
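The XML block above is the full libvirt domain definition that Nova's driver logs from _get_guest_xml before defining the guest. A minimal sketch of fetching and inspecting the same definition through the libvirt Python bindings (the connection URI is an assumption; the domain name instance-0000007c is the one systemd-machined reports below when the machine starts):

    # Sketch, assuming libvirt-python and the domain name seen later in this log.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")           # URI is an assumption
    dom = conn.lookupByName("instance-0000007c")    # name from the machined record below
    root = ET.fromstring(dom.XMLDesc())

    # Count the pcie-root-port controllers Nova pre-allocates for device hotplug,
    # matching the long run of identical <controller> elements above.
    ports = [c for c in root.findall("./devices/controller")
             if c.get("model") == "pcie-root-port"]
    print(len(ports), "pcie-root-port controllers")
    conn.close()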
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.674 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Preparing to wait for external event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.675 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.675 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.675 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.676 257641 DEBUG nova.virt.libvirt.vif [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-678096733',display_name='tempest-ServerGroupTestJSON-server-678096733',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-678096733',id=124,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='44709b8542a14e20970b111bd3fce127',ramdisk_id='',reservation_id='r-cea7q5id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-1892233603',owner_user_name='tempest-ServerGroupTestJSON-1892233603-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:24Z,user_data=None,user_id='0a8754f9da5640b784a1a46ae3b4d9e2',uuid=50832848-4c11-4eab-8274-8582250e1b30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.676 257641 DEBUG nova.network.os_vif_util [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Converting VIF {"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.677 257641 DEBUG nova.network.os_vif_util [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:4c:c0,bridge_name='br-int',has_traffic_filtering=True,id=a9e43e6c-9faa-46cf-8f94-656f7a6469ad,network=Network(1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9e43e6c-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.677 257641 DEBUG os_vif [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4c:c0,bridge_name='br-int',has_traffic_filtering=True,id=a9e43e6c-9faa-46cf-8f94-656f7a6469ad,network=Network(1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9e43e6c-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
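At this point the Nova network model has been converted to an os-vif VIFOpenVSwitch object and handed to os_vif.plug(), which dispatches to the "ovs" plugin. A minimal sketch of that call path with values copied from the log record (instance details abbreviated; this is an illustration, not Nova's actual call site):

    # Sketch of the os-vif plug call; field values are taken from the log.
    import os_vif
    from os_vif import objects

    os_vif.initialize()
    objects.register_all()

    network = objects.network.Network(
        id="1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", bridge="br-int")
    vif = objects.vif.VIFOpenVSwitch(
        id="a9e43e6c-9faa-46cf-8f94-656f7a6469ad",
        address="fa:16:3e:09:4c:c0",
        bridge_name="br-int",
        vif_name="tapa9e43e6c-9f",
        plugin="ovs",
        network=network)
    info = objects.instance_info.InstanceInfo(
        uuid="50832848-4c11-4eab-8274-8582250e1b30",
        name="instance-0000007c")

    os_vif.plug(vif, info)   # resolves the "ovs" plugin named in vif.plugin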
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.678 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.678 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.679 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.682 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.683 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9e43e6c-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.683 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa9e43e6c-9f, col_values=(('external_ids', {'iface-id': 'a9e43e6c-9faa-46cf-8f94-656f7a6469ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:4c:c0', 'vm-uuid': '50832848-4c11-4eab-8274-8582250e1b30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
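The two transactions above first ensure br-int exists, then add the tap interface and stamp its Interface row with the external_ids that OVN matches against the logical port. A rough ovsdbapp equivalent of the AddPortCommand/DbSetCommand pair (the OVSDB socket path is an assumption):

    # Sketch with ovsdbapp; the connection string is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapa9e43e6c-9f", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapa9e43e6c-9f",
            ("external_ids", {
                "iface-id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:09:4c:c0",
                "vm-uuid": "50832848-4c11-4eab-8274-8582250e1b30"})))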
Nov 29 03:22:28 np0005539550 NetworkManager[49039]: <info>  [1764404548.6854] manager: (tapa9e43e6c-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.688 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.690 257641 INFO os_vif [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4c:c0,bridge_name='br-int',has_traffic_filtering=True,id=a9e43e6c-9faa-46cf-8f94-656f7a6469ad,network=Network(1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9e43e6c-9f')#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.735 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.736 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.736 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] No VIF found with MAC fa:16:3e:09:4c:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.736 257641 INFO nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Using config drive#033[00m
Nov 29 03:22:28 np0005539550 nova_compute[257631]: 2025-11-29 08:22:28.763 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.325 257641 INFO nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Creating config drive at /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/disk.config#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.331 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2rma3w6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
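The config drive is built with mkisofs as an ISO9660 volume labelled config-2 (the label cloud-init and other consumers search for), from a temporary directory holding the staged metadata. A hypothetical re-creation of the invocation above through oslo.concurrency's processutils:

    # Hypothetical re-creation of the mkisofs run above (paths from the log).
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",         # the volume label metadata consumers look for
        "/tmp/tmpc2rma3w6")       # staged metadata tree (from the log)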
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.358 257641 DEBUG nova.network.neutron [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Updated VIF entry in instance network info cache for port a9e43e6c-9faa-46cf-8f94-656f7a6469ad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.359 257641 DEBUG nova.network.neutron [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Updating instance_info_cache with network_info: [{"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.376 257641 DEBUG oslo_concurrency.lockutils [req-fab21003-703a-4a20-8584-697c878889e1 req-b2a9b55e-1fd2-4c92-98b7-2516f357f0c9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.465 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2rma3w6" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.499 257641 DEBUG nova.storage.rbd_utils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] rbd image 50832848-4c11-4eab-8274-8582250e1b30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.502 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/disk.config 50832848-4c11-4eab-8274-8582250e1b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 389 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.1 MiB/s wr, 220 op/s
Nov 29 03:22:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Nov 29 03:22:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Nov 29 03:22:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Nov 29 03:22:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.671 257641 DEBUG oslo_concurrency.processutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/disk.config 50832848-4c11-4eab-8274-8582250e1b30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.672 257641 INFO nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Deleting local config drive /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30/disk.config because it was imported into RBD.#033[00m
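Because this deployment backs instance storage with Ceph, the freshly built ISO is imported into the vms pool as 50832848-4c11-4eab-8274-8582250e1b30_disk.config and the local copy deleted. A small sketch that verifies the imported image via the python-rados/python-rbd bindings (assumes the same ceph.conf and the "openstack" client id shown in the rbd import command):

    # Sketch, assuming python-rados/python-rbd and the credentials in the log.
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")   # matches --id openstack
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")             # matches --pool vms
    try:
        names = rbd.RBD().list(ioctx)
        print("50832848-4c11-4eab-8274-8582250e1b30_disk.config" in names)
    finally:
        ioctx.close()
        cluster.shutdown()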
Nov 29 03:22:29 np0005539550 kernel: tapa9e43e6c-9f: entered promiscuous mode
Nov 29 03:22:29 np0005539550 NetworkManager[49039]: <info>  [1764404549.7336] manager: (tapa9e43e6c-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Nov 29 03:22:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:29Z|00517|binding|INFO|Claiming lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad for this chassis.
Nov 29 03:22:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:29Z|00518|binding|INFO|a9e43e6c-9faa-46cf-8f94-656f7a6469ad: Claiming fa:16:3e:09:4c:c0 10.100.0.13
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.734 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.742 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4c:c0 10.100.0.13'], port_security=['fa:16:3e:09:4c:c0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '50832848-4c11-4eab-8274-8582250e1b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44709b8542a14e20970b111bd3fce127', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ba86d27-2710-41ad-933f-0ae7dfc7893a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec2da06-948c-46a4-a5cd-87ffc08858a9, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a9e43e6c-9faa-46cf-8f94-656f7a6469ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.743 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a9e43e6c-9faa-46cf-8f94-656f7a6469ad in datapath 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 bound to our chassis#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.744 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.755 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9502a415-2dd9-492a-b153-8ea48421997e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.756 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ac17c47-21 in ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.758 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ac17c47-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.758 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[66e263bb-ee99-48bb-a8a3-b6b15464bc7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.759 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fc975576-64e3-4675-be82-d0b6d7dfe79b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
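Provisioning metadata for the datapath means building a veth pair whose inner end (tap1ac17c47-21) lives in the ovnmeta- namespace and whose outer end (tap1ac17c47-20) is plugged into br-int a few records below. A rough pyroute2 sketch of that step (requires root; the agent actually performs it through privsep, as the replies above show):

    # Rough sketch with pyroute2; names mirror the log records.
    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4"
    netns.create(ns)

    ipr = IPRoute()
    ipr.link("add", ifname="tap1ac17c47-20", kind="veth",
             peer="tap1ac17c47-21")
    idx = ipr.link_lookup(ifname="tap1ac17c47-21")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)   # move the inner end into the netns
    ipr.close()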
Nov 29 03:22:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:29Z|00519|binding|INFO|Setting lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad up in Southbound
Nov 29 03:22:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:29Z|00520|binding|INFO|Setting lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad ovn-installed in OVS
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.761 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.772 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7ad9ec-6d5d-404e-98cd-6e621704b8fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.771 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.776 257641 DEBUG nova.compute.manager [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.777 257641 DEBUG nova.compute.manager [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing instance network info cache due to event network-changed-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.777 257641 DEBUG oslo_concurrency.lockutils [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.777 257641 DEBUG oslo_concurrency.lockutils [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.777 257641 DEBUG nova.network.neutron [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Refreshing network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.787 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1f387303-0603-46f5-aaa5-4c8c0dec1677]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 systemd-machined[216673]: New machine qemu-63-instance-0000007c.
Nov 29 03:22:29 np0005539550 systemd-udevd[335624]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:29 np0005539550 systemd[1]: Started Virtual Machine qemu-63-instance-0000007c.
Nov 29 03:22:29 np0005539550 NetworkManager[49039]: <info>  [1764404549.8200] device (tapa9e43e6c-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:29 np0005539550 NetworkManager[49039]: <info>  [1764404549.8210] device (tapa9e43e6c-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.821 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bf29be1f-bbca-410f-a637-f851761fab02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 NetworkManager[49039]: <info>  [1764404549.8278] manager: (tap1ac17c47-20): new Veth device (/org/freedesktop/NetworkManager/Devices/240)
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.827 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1a2eb01f-1aa9-4d02-adee-c0df04a24773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 podman[335601]: 2025-11-29 08:22:29.845757615 +0000 UTC m=+0.074354612 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:22:29 np0005539550 podman[335599]: 2025-11-29 08:22:29.850322681 +0000 UTC m=+0.081774551 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.859 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7f914a70-29e6-43da-b4ef-aee23b0fc7b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.862 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c45dd05b-159b-4a4f-8dd8-5c7d9d020caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 NetworkManager[49039]: <info>  [1764404549.8821] device (tap1ac17c47-20): carrier: link connected
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.887 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9e67ff5f-2999-4f9a-a238-c63f3f8b4aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.902 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a5042c9c-eb94-44b6-8cdf-da893c2dc787]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ac17c47-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:fd:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759754, 'reachable_time': 43118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335674, 'error': None, 'target': 'ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.916 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[03afdc74-7395-4af2-96eb-10cd09259bed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:fd94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 759754, 'tstamp': 759754}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335675, 'error': None, 'target': 'ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.931 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a91a38a7-a09d-49d8-b041-7a955a28b21f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ac17c47-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:14:fd:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759754, 'reachable_time': 43118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335676, 'error': None, 'target': 'ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:29.961 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[348add3d-ef28-417c-a1cb-d7fdd903157f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.987 257641 DEBUG nova.compute.manager [req-b12d92fb-bf1a-401e-97d6-ff331bfd1938 req-77894c93-b56c-49b8-952e-37eba33d4800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.988 257641 DEBUG oslo_concurrency.lockutils [req-b12d92fb-bf1a-401e-97d6-ff331bfd1938 req-77894c93-b56c-49b8-952e-37eba33d4800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.988 257641 DEBUG oslo_concurrency.lockutils [req-b12d92fb-bf1a-401e-97d6-ff331bfd1938 req-77894c93-b56c-49b8-952e-37eba33d4800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.988 257641 DEBUG oslo_concurrency.lockutils [req-b12d92fb-bf1a-401e-97d6-ff331bfd1938 req-77894c93-b56c-49b8-952e-37eba33d4800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:29 np0005539550 nova_compute[257631]: 2025-11-29 08:22:29.989 257641 DEBUG nova.compute.manager [req-b12d92fb-bf1a-401e-97d6-ff331bfd1938 req-77894c93-b56c-49b8-952e-37eba33d4800 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Processing event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.016 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a5202009-8028-404b-9bfa-8957030a715e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.017 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ac17c47-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.017 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.017 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ac17c47-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.019 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539550 kernel: tap1ac17c47-20: entered promiscuous mode
Nov 29 03:22:30 np0005539550 NetworkManager[49039]: <info>  [1764404550.0199] manager: (tap1ac17c47-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.021 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.024 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ac17c47-20, col_values=(('external_ids', {'iface-id': 'b852a6b2-b71a-439f-9d71-08b79c164d5a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:30Z|00521|binding|INFO|Releasing lport b852a6b2-b71a-439f-9d71-08b79c164d5a from this chassis (sb_readonly=0)
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.025 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.029 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.036 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[00cc87d6-550b-40a8-975d-fc6e6e2e3ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.037 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4.pid.haproxy
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:22:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:30.037 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'env', 'PROCESS_TAG=haproxy-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
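With the config rendered, the agent launches haproxy inside the network namespace through rootwrap; the PROCESS_TAG environment variable lets its kill scripts identify the process later. A plain-subprocess equivalent of the command logged above (requires root; this is a sketch of the same invocation, not the agent's code):

    # Plain-subprocess equivalent of the rootwrap command above.
    import subprocess

    ns = "ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4"
    conf = "/var/lib/neutron/ovn-metadata-proxy/1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4.conf"
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "env", "PROCESS_TAG=haproxy-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4",
         "haproxy", "-f", conf],
        check=True)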
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.247 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.248 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404550.2476757, 50832848-4c11-4eab-8274-8582250e1b30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.248 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] VM Started (Lifecycle Event)#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.253 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.257 257641 INFO nova.virt.libvirt.driver [-] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Instance spawned successfully.#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.257 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.268 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.274 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.278 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.279 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.279 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.280 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.280 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.281 257641 DEBUG nova.virt.libvirt.driver [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.306 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.306 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404550.247774, 50832848-4c11-4eab-8274-8582250e1b30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.306 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.336 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.341 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404550.2530482, 50832848-4c11-4eab-8274-8582250e1b30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.341 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.347 257641 INFO nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Took 5.80 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.347 257641 DEBUG nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.389 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.392 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:22:30 np0005539550 podman[335750]: 2025-11-29 08:22:30.394509241 +0000 UTC m=+0.048233578 container create ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:22:30 np0005539550 systemd[1]: Started libpod-conmon-ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97.scope.
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.427 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.440 257641 INFO nova.compute.manager [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Took 6.76 seconds to build instance.#033[00m
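The Started/Paused/Resumed burst above is normal libvirt lifecycle noise during a spawn: handle_lifecycle_event compares the DB power_state (0, NOSTATE) against the hypervisor's (1, RUNNING), but the sync is skipped while a task_state is set, which is exactly what the two "pending task (spawning). Skip." lines record. A toy model of that decision, as an illustration rather than nova's code:

    # Toy model (not nova's implementation) of the sync decision above.
    NOSTATE, RUNNING = 0, 1   # nova power-state constants, by value

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Return the action the manager takes for a lifecycle event."""
        if task_state is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip."
            return "skip: pending task %s" % task_state
        if db_power_state != vm_power_state:
            return "update DB: %s -> %s" % (db_power_state, vm_power_state)
        return "in sync"

    assert sync_power_state(NOSTATE, RUNNING, "spawning").startswith("skip")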
Nov 29 03:22:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:22:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c71e7b57475e6945d02ae46006627bddef2202fa94cf8611bd12dc2de570b35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.458 257641 DEBUG oslo_concurrency.lockutils [None req-7544bf86-e9e4-4504-9c3b-4c0ae26bb090 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:30 np0005539550 podman[335750]: 2025-11-29 08:22:30.369904665 +0000 UTC m=+0.023629022 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:22:30 np0005539550 podman[335750]: 2025-11-29 08:22:30.468481782 +0000 UTC m=+0.122206139 container init ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:22:30 np0005539550 podman[335750]: 2025-11-29 08:22:30.475164952 +0000 UTC m=+0.128889289 container start ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 03:22:30 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [NOTICE]   (335770) : New worker (335772) forked
Nov 29 03:22:30 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [NOTICE]   (335770) : Loading success.
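podman stamps every event with a monotonic offset (m=+0.048..., m=+0.122..., m=+0.128...), so the create -> init -> start gaps give the proxy container's startup cost, here roughly 80 ms; note the image-pull event is flushed to the journal out of order a few lines further down. A sketch of pulling those deltas out of a journal dump, where journal.txt is a hypothetical capture of these lines:

    import re

    # m=+<seconds> is podman's monotonic offset; "container create/init/
    # start" name the lifecycle step.
    EVENT = re.compile(r'm=\+(?P<t>[\d.]+) (?:container|image) (?P<step>\w+)')

    events = {}
    with open("journal.txt") as fh:          # hypothetical dump of these lines
        for line in fh:
            m = EVENT.search(line)
            if m:
                events.setdefault(m.group("step"), float(m.group("t")))

    if {"create", "start"} <= events.keys():
        print("create -> start: %.3f s" % (events["start"] - events["create"]))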
Nov 29 03:22:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:22:30 np0005539550 nova_compute[257631]: 2025-11-29 08:22:30.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.314 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.315 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.315 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.315 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 50832848-4c11-4eab-8274-8582250e1b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 151 op/s
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.573 257641 DEBUG nova.network.neutron [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updated VIF entry in instance network info cache for port d5e26252-e0d3-4a6b-8b18-b2f4cb7db432. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.574 257641 DEBUG nova.network.neutron [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Updating instance_info_cache with network_info: [{"id": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "address": "fa:16:3e:47:25:11", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e26252-e0", "ovs_interfaceid": "d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
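The instance_info_cache payload above is a JSON list of VIF dicts nested as vif -> network -> subnets -> ips -> floating_ips. A small reader for entries shaped like the one just logged, with the structure taken only from that line:

    # Minimal reader for a network_info cache entry shaped like the one
    # above; the nesting is taken directly from the logged JSON.
    def addresses(network_info):
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield (vif["id"], ip["address"],
                           [fip["address"]
                            for fip in ip.get("floating_ips", [])])

    # For the entry above this yields
    # ("d5e26252-e0d3-4a6b-8b18-b2f4cb7db432", "10.100.0.14",
    #  ["192.168.122.241"]).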
Nov 29 03:22:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
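The anonymous "HEAD / HTTP/1.0" requests arriving roughly once per second from 192.168.122.100/.102 are load-balancer health probes against radosgw's beast frontend. The access-line layout is regular enough to mine for status and latency; the field layout below is inferred from these lines only:

    import re

    # Field layout inferred from the beast access lines in this log.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+)'
        r'.*latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:22:31.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.match(line)
    assert m and m.group("status") == "200" and float(m.group("lat")) == 0.0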
Nov 29 03:22:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:31.832 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.833 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:31.834 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.845 257641 DEBUG oslo_concurrency.lockutils [req-2e6f97ef-83d0-4c61-ac32-3ccf8d6a6417 req-f4972280-43bc-4c22-b3c8-7fe1ef9cdf9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5004bd0f-c699-46d7-b535-b3a7db186a87" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.938 257641 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.938 257641 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.939 257641 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.939 257641 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.939 257641 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.939 257641 WARNING nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.940 257641 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.940 257641 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.940 257641 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.940 257641 DEBUG oslo_concurrency.lockutils [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5004bd0f-c699-46d7-b535-b3a7db186a87-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.940 257641 DEBUG nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] No waiting events found dispatching network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:31 np0005539550 nova_compute[257631]: 2025-11-29 08:22:31.941 257641 WARNING nova.compute.manager [req-89b71a96-ee0b-42d3-bb91-5a5b4d83af04 req-b77175aa-0016-4f73-9324-e56530abc5ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Received unexpected event network-vif-plugged-d5e26252-e0d3-4a6b-8b18-b2f4cb7db432 for instance with vm_state resized and task_state resize_reverting.#033[00m
Nov 29 03:22:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.072 257641 DEBUG nova.compute.manager [req-62b09b35-cdce-47ce-b339-b9ed4cad51d2 req-9e10a50d-0864-4a53-92d1-e79732d1cd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.072 257641 DEBUG oslo_concurrency.lockutils [req-62b09b35-cdce-47ce-b339-b9ed4cad51d2 req-9e10a50d-0864-4a53-92d1-e79732d1cd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.072 257641 DEBUG oslo_concurrency.lockutils [req-62b09b35-cdce-47ce-b339-b9ed4cad51d2 req-9e10a50d-0864-4a53-92d1-e79732d1cd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.073 257641 DEBUG oslo_concurrency.lockutils [req-62b09b35-cdce-47ce-b339-b9ed4cad51d2 req-9e10a50d-0864-4a53-92d1-e79732d1cd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.073 257641 DEBUG nova.compute.manager [req-62b09b35-cdce-47ce-b339-b9ed4cad51d2 req-9e10a50d-0864-4a53-92d1-e79732d1cd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] No waiting events found dispatching network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.073 257641 WARNING nova.compute.manager [req-62b09b35-cdce-47ce-b339-b9ed4cad51d2 req-9e10a50d-0864-4a53-92d1-e79732d1cd5b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received unexpected event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad for instance with vm_state active and task_state None.#033[00m
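All three "Received unexpected event network-vif-plugged-..." warnings above follow the same pattern: an external event arrives from neutron, pop_instance_event finds no registered waiter under the per-instance events lock ("No waiting events found"), and the event is logged as unexpected rather than dispatched, because the instances are already past the point of waiting (resize_reverting, or active with no task). A simplified model of that waiter registry, as an assumption-level sketch rather than nova's class:

    import threading

    class InstanceEvents:
        """Assumption-level model of nova's per-instance event waiters."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            # What wait_for_instance_event registers before plugging a VIF.
            with self._lock:
                ev = self._waiters[(uuid, name)] = threading.Event()
            return ev

        def pop(self, uuid, name):
            # The "Acquiring lock ...-events" / "released" pair in the log.
            with self._lock:
                waiter = self._waiters.pop((uuid, name), None)
            if waiter is None:
                return "unexpected"  # "No waiting events found dispatching"
            waiter.set()
            return "dispatched"

    ev = InstanceEvents()
    assert ev.pop("5004bd0f", "network-vif-plugged") == "unexpected"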
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.349 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.349 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.350 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.350 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.350 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.351 257641 INFO nova.compute.manager [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Terminating instance#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.353 257641 DEBUG nova.compute.manager [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:22:32 np0005539550 kernel: tapa9e43e6c-9f (unregistering): left promiscuous mode
Nov 29 03:22:32 np0005539550 NetworkManager[49039]: <info>  [1764404552.3951] device (tapa9e43e6c-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00522|binding|INFO|Releasing lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad from this chassis (sb_readonly=0)
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00523|binding|INFO|Setting lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad down in Southbound
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00524|binding|INFO|Removing iface tapa9e43e6c-9f ovn-installed in OVS
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.407 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.420 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4c:c0 10.100.0.13'], port_security=['fa:16:3e:09:4c:c0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '50832848-4c11-4eab-8274-8582250e1b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44709b8542a14e20970b111bd3fce127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ba86d27-2710-41ad-933f-0ae7dfc7893a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec2da06-948c-46a4-a5cd-87ffc08858a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a9e43e6c-9faa-46cf-8f94-656f7a6469ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.422 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a9e43e6c-9faa-46cf-8f94-656f7a6469ad in datapath 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 unbound from our chassis#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.423 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.424 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c974dbb-59fe-4c29-9c58-0cabb397aa89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.424 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 namespace which is not needed anymore#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.426 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Nov 29 03:22:32 np0005539550 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000007c.scope: Consumed 2.617s CPU time.
Nov 29 03:22:32 np0005539550 systemd-machined[216673]: Machine qemu-63-instance-0000007c terminated.
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.518 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 kernel: tapa9e43e6c-9f: entered promiscuous mode
Nov 29 03:22:32 np0005539550 NetworkManager[49039]: <info>  [1764404552.5705] manager: (tapa9e43e6c-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00525|binding|INFO|Claiming lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad for this chassis.
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00526|binding|INFO|a9e43e6c-9faa-46cf-8f94-656f7a6469ad: Claiming fa:16:3e:09:4c:c0 10.100.0.13
Nov 29 03:22:32 np0005539550 kernel: tapa9e43e6c-9f (unregistering): left promiscuous mode
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.573 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [NOTICE]   (335770) : haproxy version is 2.8.14-c23fe91
Nov 29 03:22:32 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [NOTICE]   (335770) : path to executable is /usr/sbin/haproxy
Nov 29 03:22:32 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [WARNING]  (335770) : Exiting Master process...
Nov 29 03:22:32 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [WARNING]  (335770) : Exiting Master process...
Nov 29 03:22:32 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [ALERT]    (335770) : Current worker (335772) exited with code 143 (Terminated)
Nov 29 03:22:32 np0005539550 neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4[335766]: [WARNING]  (335770) : All workers exited. Exiting... (0)
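The worker's exit code 143 in the ALERT line is the usual 128 + signal-number encoding: stopping the container delivered SIGTERM (15), the worker exited, and the master followed. For reference:

    import signal

    def describe_exit(status):
        """Decode a shell-style exit status such as the 143 above."""
        if status > 128:
            return "killed by %s" % signal.Signals(status - 128).name
        return "exited with %d" % status

    assert describe_exit(143) == "killed by SIGTERM"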
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.579 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4c:c0 10.100.0.13'], port_security=['fa:16:3e:09:4c:c0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '50832848-4c11-4eab-8274-8582250e1b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44709b8542a14e20970b111bd3fce127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ba86d27-2710-41ad-933f-0ae7dfc7893a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec2da06-948c-46a4-a5cd-87ffc08858a9, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a9e43e6c-9faa-46cf-8f94-656f7a6469ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:32 np0005539550 systemd[1]: libpod-ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97.scope: Deactivated successfully.
Nov 29 03:22:32 np0005539550 conmon[335766]: conmon ff65e1e681bd9e1705cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97.scope/container/memory.events
Nov 29 03:22:32 np0005539550 podman[335801]: 2025-11-29 08:22:32.586851977 +0000 UTC m=+0.045788536 container died ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.604 257641 INFO nova.virt.libvirt.driver [-] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Instance destroyed successfully.#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.604 257641 DEBUG nova.objects.instance [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lazy-loading 'resources' on Instance uuid 50832848-4c11-4eab-8274-8582250e1b30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00527|binding|INFO|Setting lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad ovn-installed in OVS
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00528|binding|INFO|Setting lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad up in Southbound
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00529|binding|INFO|Releasing lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad from this chassis (sb_readonly=1)
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00530|if_status|INFO|Dropped 16 log messages in last 621 seconds (most recently, 621 seconds ago) due to excessive rate
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00531|if_status|INFO|Not setting lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad down as sb is readonly
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00532|binding|INFO|Removing iface tapa9e43e6c-9f ovn-installed in OVS
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00533|binding|INFO|Releasing lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad from this chassis (sb_readonly=0)
Nov 29 03:22:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:32Z|00534|binding|INFO|Setting lport a9e43e6c-9faa-46cf-8f94-656f7a6469ad down in Southbound
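The release/claim/release churn on lport a9e43e6c-... above is the delete path: removing the tap from br-int triggers the first release, the interface briefly reappears and is re-claimed (00525/00526), the controller defers the "down" write while the Southbound DB is read-only (sb_readonly=1), and the release completes on the next iteration. One way to check the resulting binding state from the host, as a sketch that assumes ovn-sbctl can reach the Southbound DB and that --bare prints the requested columns one per line:

    import subprocess

    def port_binding_state(logical_port):
        # Generic OVSDB "find" via ovn-sbctl.
        out = subprocess.check_output(
            ["ovn-sbctl", "--bare", "--columns=chassis,up",
             "find", "Port_Binding", "logical_port=%s" % logical_port],
            text=True)
        chassis, up = (out.splitlines() + ["", ""])[:2]
        return {"chassis": chassis or None, "up": up == "true"}

    # port_binding_state("a9e43e6c-9faa-46cf-8f94-656f7a6469ad")
    # -> {"chassis": None, "up": False} once the release above completes.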
Nov 29 03:22:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97-userdata-shm.mount: Deactivated successfully.
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.623 257641 DEBUG nova.virt.libvirt.vif [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-678096733',display_name='tempest-ServerGroupTestJSON-server-678096733',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-678096733',id=124,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='44709b8542a14e20970b111bd3fce127',ramdisk_id='',reservation_id='r-cea7q5id',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-1892233603',owner_user_name='tempest-ServerGroupTestJSON-1892233603-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:22:30Z,user_data=None,user_id='0a8754f9da5640b784a1a46ae3b4d9e2',uuid=50832848-4c11-4eab-8274-8582250e1b30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.623 257641 DEBUG nova.network.os_vif_util [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Converting VIF {"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.624 257641 DEBUG nova.network.os_vif_util [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:4c:c0,bridge_name='br-int',has_traffic_filtering=True,id=a9e43e6c-9faa-46cf-8f94-656f7a6469ad,network=Network(1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9e43e6c-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.625 257641 DEBUG os_vif [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4c:c0,bridge_name='br-int',has_traffic_filtering=True,id=a9e43e6c-9faa-46cf-8f94-656f7a6469ad,network=Network(1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9e43e6c-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:22:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3c71e7b57475e6945d02ae46006627bddef2202fa94cf8611bd12dc2de570b35-merged.mount: Deactivated successfully.
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.623 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:4c:c0 10.100.0.13'], port_security=['fa:16:3e:09:4c:c0 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '50832848-4c11-4eab-8274-8582250e1b30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '44709b8542a14e20970b111bd3fce127', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ba86d27-2710-41ad-933f-0ae7dfc7893a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7ec2da06-948c-46a4-a5cd-87ffc08858a9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a9e43e6c-9faa-46cf-8f94-656f7a6469ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.627 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.628 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9e43e6c-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.630 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.634 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:32 np0005539550 podman[335801]: 2025-11-29 08:22:32.636683983 +0000 UTC m=+0.095620522 container cleanup ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:22:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:32.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.638 257641 INFO os_vif [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:4c:c0,bridge_name='br-int',has_traffic_filtering=True,id=a9e43e6c-9faa-46cf-8f94-656f7a6469ad,network=Network(1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa9e43e6c-9f')#033[00m
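The os-vif unplug reduces to the single OVSDB transaction logged at 08:22:32.628: delete port tapa9e43e6c-9f from br-int, tolerating its absence. The CLI equivalent, as a sketch that assumes ovs-vsctl is available on the host:

    import subprocess

    def unplug_vif(devname, bridge="br-int"):
        # --if-exists mirrors the if_exists=True flag in the logged
        # DelPortCommand, so a second call is a harmless no-op.
        subprocess.check_call(
            ["ovs-vsctl", "--if-exists", "del-port", bridge, devname])

    # unplug_vif("tapa9e43e6c-9f")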
Nov 29 03:22:32 np0005539550 systemd[1]: libpod-conmon-ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97.scope: Deactivated successfully.
Nov 29 03:22:32 np0005539550 podman[335837]: 2025-11-29 08:22:32.700028214 +0000 UTC m=+0.041855056 container remove ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.706 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[efc5986a-447d-4e9e-99ce-d97c578810c1]: (4, ('Sat Nov 29 08:22:32 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 (ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97)\nff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97\nSat Nov 29 08:22:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 (ff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97)\nff65e1e681bd9e1705cc5db9cf0f83c2f8b53d3cf3303206cc7acada2e563d97\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.707 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a596221-3891-497a-b808-b879dee451f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.708 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ac17c47-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.710 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 kernel: tap1ac17c47-20: left promiscuous mode
Nov 29 03:22:32 np0005539550 nova_compute[257631]: 2025-11-29 08:22:32.725 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.727 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[86db1c39-492c-4e04-891b-064aebffbf19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.739 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[430dbd26-d143-4e8a-a47c-7de3d6d0257a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.741 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[52f8737f-042b-4faf-a916-798eb35bd519]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.757 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[19de3977-026b-4f00-b781-d13ac4db2c85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 759747, 'reachable_time': 21939, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335868, 'error': None, 'target': 'ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 systemd[1]: run-netns-ovnmeta\x2d1ac17c47\x2d2bdd\x2d4fe3\x2d91ae\x2db20e16a9a2a4.mount: Deactivated successfully.
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.762 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.762 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e11192-b164-468f-bb51-5636e85117b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
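The remove_netns call logged just above is a privileged wrapper around pyroute2's network-namespace helpers. A minimal sketch of the equivalent call, with the namespace name taken from the log:

    from pyroute2 import netns

    ns = 'ovnmeta-1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4'
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks the bind mount under /var/run/netns, as logged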
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.763 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a9e43e6c-9faa-46cf-8f94-656f7a6469ad in datapath 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 unbound from our chassis#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.764 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.765 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dbb12473-a676-49e9-ad06-1b54306c932f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.765 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a9e43e6c-9faa-46cf-8f94-656f7a6469ad in datapath 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4 unbound from our chassis#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.766 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:22:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:32.767 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0b221dd0-b133-479f-93e9-f1036abd3f69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.008 257641 INFO nova.virt.libvirt.driver [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Deleting instance files /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30_del#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.010 257641 INFO nova.virt.libvirt.driver [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Deletion of /var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30_del complete#033[00m
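The two INFO lines above show nova's usual on-disk cleanup pattern: the instance directory is first renamed with a "_del" suffix, then the tree is removed. A sketch of that pattern, with the path taken from the log:

    import os
    import shutil

    inst = '/var/lib/nova/instances/50832848-4c11-4eab-8274-8582250e1b30'
    if os.path.isdir(inst):
        os.rename(inst, inst + '_del')                    # "Deleting instance files ..._del"
        shutil.rmtree(inst + '_del', ignore_errors=True)  # "Deletion ... complete"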
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.069 257641 INFO nova.compute.manager [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.071 257641 DEBUG oslo.service.loopingcall [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.071 257641 DEBUG nova.compute.manager [-] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.072 257641 DEBUG nova.network.neutron [-] [instance: 50832848-4c11-4eab-8274-8582250e1b30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:22:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 407 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.2 MiB/s wr, 162 op/s
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.591 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Updating instance_info_cache with network_info: [{"id": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "address": "fa:16:3e:09:4c:c0", "network": {"id": "1ac17c47-2bdd-4fe3-91ae-b20e16a9a2a4", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-933781785-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "44709b8542a14e20970b111bd3fce127", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa9e43e6c-9f", "ovs_interfaceid": "a9e43e6c-9faa-46cf-8f94-656f7a6469ad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.633 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-50832848-4c11-4eab-8274-8582250e1b30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.634 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.634 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.634 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.635 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:33.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
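These beast access-log lines repeat with a stable layout (peer address, user, timestamp, request, status, byte count, latency). A small parser for them; the field layout is inferred from the samples in this log, not from rgw documentation:

    import re

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:22:33.645 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = re.search(r'^beast: \S+: (?P<ip>\S+) .* "(?P<req>[^"]+)" '
                  r'(?P<status>\d+) \d+ .* latency=(?P<lat>[\d.]+)s$', line)
    print(m['ip'], m['req'], m['status'], float(m['lat']))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.001000025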
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.666 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.666 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.667 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.667 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:22:33 np0005539550 nova_compute[257631]: 2025-11-29 08:22:33.667 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.086 257641 DEBUG nova.network.neutron [-] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302020932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.108 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
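As the CMD lines show, nova gets its Ceph-backed disk stats by shelling out to ceph df and parsing the JSON. A sketch of the same call via oslo.concurrency; treat the 'stats'/'total_avail_bytes' key names as an assumption about the ceph df output format:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / 1024 ** 3
    print(round(free_gb, 1), 'GiB available')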
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.110 257641 INFO nova.compute.manager [-] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Took 1.04 seconds to deallocate network for instance.#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.157 257641 DEBUG nova.compute.manager [req-287bf057-99b2-40a1-ad83-d3571e7da442 req-1748f269-fedc-4dbf-9443-f41a686c8d9e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-deleted-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.159 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.159 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.209 257641 DEBUG oslo_concurrency.processutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.241 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-unplugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.241 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.241 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.242 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.242 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] No waiting events found dispatching network-vif-unplugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.242 257641 WARNING nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received unexpected event network-vif-unplugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.242 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.243 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.243 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.243 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.243 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] No waiting events found dispatching network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.244 257641 WARNING nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received unexpected event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.244 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.244 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.244 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.245 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.245 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] No waiting events found dispatching network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.245 257641 WARNING nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received unexpected event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.245 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.246 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.246 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.246 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.247 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] No waiting events found dispatching network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.247 257641 WARNING nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received unexpected event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.247 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-unplugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.247 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.248 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.248 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.248 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] No waiting events found dispatching network-vif-unplugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.248 257641 WARNING nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received unexpected event network-vif-unplugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.248 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.249 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "50832848-4c11-4eab-8274-8582250e1b30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.249 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.249 257641 DEBUG oslo_concurrency.lockutils [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.249 257641 DEBUG nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] No waiting events found dispatching network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.250 257641 WARNING nova.compute.manager [req-cce66be7-31ea-4e66-8867-94b99bf2a256 req-24eda958-7bde-4b8d-bd5a-f1d7085dd166 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Received unexpected event network-vif-plugged-a9e43e6c-9faa-46cf-8f94-656f7a6469ad for instance with vm_state deleted and task_state None.#033[00m
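The run of "Received unexpected event ... vm_state deleted" warnings above is benign during teardown: neutron keeps emitting vif-plugged/unplugged notifications while the port is dismantled, but the instance is already deleted, so no thread has registered a waiter and pop_instance_event has nothing to dispatch to. A toy sketch of that pattern (names and structure illustrative, not nova's actual classes):

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def pop_instance_event(instance_uuid, event_name):
        waiter = _waiters.pop((instance_uuid, event_name), None)
        if waiter is None:
            # "No waiting events found dispatching ..." -> warn and drop
            return None
        waiter.set()  # wake the thread blocked on this external event
        return waiter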
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.345 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.348 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4311MB free_disk=20.787078857421875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.348 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454565397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:34.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.646 257641 DEBUG oslo_concurrency.processutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.651 257641 DEBUG nova.compute.provider_tree [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.666 257641 DEBUG nova.scheduler.client.report [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
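Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio. Worked out for the values in the log line above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1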
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.687 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.690 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.718 257641 INFO nova.scheduler.client.report [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Deleted allocations for instance 50832848-4c11-4eab-8274-8582250e1b30#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.753 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.754 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.773 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:34 np0005539550 nova_compute[257631]: 2025-11-29 08:22:34.815 257641 DEBUG oslo_concurrency.lockutils [None req-c43fd105-f999-4196-a607-fce7173fac49 0a8754f9da5640b784a1a46ae3b4d9e2 44709b8542a14e20970b111bd3fce127 - - default default] Lock "50832848-4c11-4eab-8274-8582250e1b30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3920427718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:35 np0005539550 nova_compute[257631]: 2025-11-29 08:22:35.210 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:35 np0005539550 nova_compute[257631]: 2025-11-29 08:22:35.214 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:35 np0005539550 nova_compute[257631]: 2025-11-29 08:22:35.234 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:35 np0005539550 nova_compute[257631]: 2025-11-29 08:22:35.274 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:22:35 np0005539550 nova_compute[257631]: 2025-11-29 08:22:35.275 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 351 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 188 op/s
Nov 29 03:22:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:35.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:22:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Nov 29 03:22:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Nov 29 03:22:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Nov 29 03:22:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 278 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.4 MiB/s wr, 316 op/s
Nov 29 03:22:37 np0005539550 nova_compute[257631]: 2025-11-29 08:22:37.520 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:37 np0005539550 nova_compute[257631]: 2025-11-29 08:22:37.631 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:37.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:38 np0005539550 nova_compute[257631]: 2025-11-29 08:22:38.563 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:38.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:38 np0005539550 nova_compute[257631]: 2025-11-29 08:22:38.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:38.836 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
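That DbSetCommand is the metadata agent's liveness ack: it writes the processed sequence number into the chassis record's external_ids. Roughly the same write via ovsdbapp against the OVN southbound DB; the endpoint is an assumption, the record UUID and value come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))
    sb.db_set('Chassis_Private', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'})).execute(
        check_error=True)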
Nov 29 03:22:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:22:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/834375898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:22:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:22:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/834375898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:22:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 220 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.1 MiB/s wr, 287 op/s
Nov 29 03:22:39 np0005539550 nova_compute[257631]: 2025-11-29 08:22:39.559 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:39 np0005539550 nova_compute[257631]: 2025-11-29 08:22:39.560 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:39 np0005539550 nova_compute[257631]: 2025-11-29 08:22:39.560 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:22:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:39.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:39 np0005539550 nova_compute[257631]: 2025-11-29 08:22:39.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:40 np0005539550 nova_compute[257631]: 2025-11-29 08:22:40.009 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404545.0074728, 5004bd0f-c699-46d7-b535-b3a7db186a87 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:40 np0005539550 nova_compute[257631]: 2025-11-29 08:22:40.009 257641 INFO nova.compute.manager [-] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:40 np0005539550 nova_compute[257631]: 2025-11-29 08:22:40.033 257641 DEBUG nova.compute.manager [None req-95dbfa0d-b5d2-4906-8cf4-f46567233fb3 - - - - - -] [instance: 5004bd0f-c699-46d7-b535-b3a7db186a87] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:40.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:22:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 17 KiB/s wr, 272 op/s
Nov 29 03:22:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:22:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:41.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:22:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:42 np0005539550 podman[335996]: 2025-11-29 08:22:42.411376144 +0000 UTC m=+0.155052925 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller)
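The health_status=healthy field in that event comes from podman's periodic healthcheck, which runs the 'test': '/openstack/healthcheck' command from config_data inside the container. The same check can be triggered by hand; a sketch, with the container name taken from the log:

    import subprocess

    # Runs the container's configured healthcheck once; exit 0 means healthy.
    rc = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller']).returncode
    print('healthy' if rc == 0 else 'unhealthy')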
Nov 29 03:22:42 np0005539550 nova_compute[257631]: 2025-11-29 08:22:42.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539550 nova_compute[257631]: 2025-11-29 08:22:42.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:42.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:42 np0005539550 nova_compute[257631]: 2025-11-29 08:22:42.937 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:42 np0005539550 nova_compute[257631]: 2025-11-29 08:22:42.938 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:42 np0005539550 nova_compute[257631]: 2025-11-29 08:22:42.961 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.061 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.062 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.076 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.077 257641 INFO nova.compute.claims [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.316 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.4 KiB/s wr, 223 op/s
Nov 29 03:22:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:43.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:22:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:22:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1556173368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.786 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.792 257641 DEBUG nova.compute.provider_tree [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.813 257641 DEBUG nova.scheduler.client.report [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.848 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.849 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.934 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.934 257641 DEBUG nova.network.neutron [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.955 257641 INFO nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:22:43 np0005539550 nova_compute[257631]: 2025-11-29 08:22:43.978 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.111 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.113 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.113 257641 INFO nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Creating image(s)#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.147 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.176 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.201 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.204 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.241 257641 DEBUG nova.policy [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '80ceb9112b3a4f119c05f21fd617af11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '26e3508b949a4dbf960d7befc8f27869', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.289 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.290 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.290 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.291 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.313 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.316 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.605 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:44.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.677 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] resizing rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.777 257641 DEBUG nova.objects.instance [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'migration_context' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.800 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.800 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Ensure instance console log exists: /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.801 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.802 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:44 np0005539550 nova_compute[257631]: 2025-11-29 08:22:44.802 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 224 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 953 KiB/s wr, 170 op/s
Nov 29 03:22:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:45.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:45 np0005539550 nova_compute[257631]: 2025-11-29 08:22:45.950 257641 DEBUG nova.network.neutron [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Successfully created port: 573e5085-7652-4c15-b353-cd8eff879375 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:22:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:46.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 232 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.603 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404552.6022086, 50832848-4c11-4eab-8274-8582250e1b30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.603 257641 INFO nova.compute.manager [-] [instance: 50832848-4c11-4eab-8274-8582250e1b30] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.635 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:47.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.824 257641 DEBUG nova.network.neutron [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Successfully updated port: 573e5085-7652-4c15-b353-cd8eff879375 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.834 257641 DEBUG nova.compute.manager [None req-1dc1b651-aa15-4d27-b887-910158d13cc6 - - - - - -] [instance: 50832848-4c11-4eab-8274-8582250e1b30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.838 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.839 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.839 257641 DEBUG nova.network.neutron [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.934 257641 DEBUG nova.compute.manager [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-changed-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.934 257641 DEBUG nova.compute.manager [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Refreshing instance network info cache due to event network-changed-573e5085-7652-4c15-b353-cd8eff879375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:22:47 np0005539550 nova_compute[257631]: 2025-11-29 08:22:47.935 257641 DEBUG oslo_concurrency.lockutils [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:22:48 np0005539550 nova_compute[257631]: 2025-11-29 08:22:48.036 257641 DEBUG nova.network.neutron [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:22:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:48.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:48 np0005539550 nova_compute[257631]: 2025-11-29 08:22:48.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:22:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Nov 29 03:22:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:22:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:49.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.313 257641 DEBUG nova.network.neutron [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updating instance_info_cache with network_info: [{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.343 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.343 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance network_info: |[{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.343 257641 DEBUG oslo_concurrency.lockutils [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.344 257641 DEBUG nova.network.neutron [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Refreshing network info cache for port 573e5085-7652-4c15-b353-cd8eff879375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.346 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Start _get_guest_xml network_info=[{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.351 257641 WARNING nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.355 257641 DEBUG nova.virt.libvirt.host [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.356 257641 DEBUG nova.virt.libvirt.host [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.360 257641 DEBUG nova.virt.libvirt.host [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.361 257641 DEBUG nova.virt.libvirt.host [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.362 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.363 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.363 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.363 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.364 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.365 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.365 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.365 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.365 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.366 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.366 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.366 257641 DEBUG nova.virt.hardware [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.369 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:50.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926995944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.821 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.855 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:50 np0005539550 nova_compute[257631]: 2025-11-29 08:22:50.860 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:22:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/449476727' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.286 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.289 257641 DEBUG nova.virt.libvirt.vif [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-161781417',display_name='tempest-ServerActionsTestJSON-server-161781417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-161781417',id=125,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-u6xh7rnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.290 257641 DEBUG nova.network.os_vif_util [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.291 257641 DEBUG nova.network.os_vif_util [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.292 257641 DEBUG nova.objects.instance [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.313 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <uuid>f09f0e1a-bc69-4cd4-b504-3ba084ffa875</uuid>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <name>instance-0000007d</name>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestJSON-server-161781417</nova:name>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:22:50</nova:creationTime>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <nova:port uuid="573e5085-7652-4c15-b353-cd8eff879375">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <entry name="serial">f09f0e1a-bc69-4cd4-b504-3ba084ffa875</entry>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <entry name="uuid">f09f0e1a-bc69-4cd4-b504-3ba084ffa875</entry>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk.config">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:43:4e:b0"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <target dev="tap573e5085-76"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/console.log" append="off"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:22:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:22:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:22:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:22:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.314 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Preparing to wait for external event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.314 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.315 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.315 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.316 257641 DEBUG nova.virt.libvirt.vif [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-161781417',display_name='tempest-ServerActionsTestJSON-server-161781417',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-161781417',id=125,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-u6xh7rnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:22:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.316 257641 DEBUG nova.network.os_vif_util [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.317 257641 DEBUG nova.network.os_vif_util [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.317 257641 DEBUG os_vif [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.318 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.319 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.319 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.322 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.322 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap573e5085-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.323 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap573e5085-76, col_values=(('external_ids', {'iface-id': '573e5085-7652-4c15-b353-cd8eff879375', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:4e:b0', 'vm-uuid': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.324 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539550 NetworkManager[49039]: <info>  [1764404571.3252] manager: (tap573e5085-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.326 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.335 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.336 257641 INFO os_vif [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76')#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.387 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.388 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.388 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] No VIF found with MAC fa:16:3e:43:4e:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.389 257641 INFO nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Using config drive#033[00m
Nov 29 03:22:51 np0005539550 nova_compute[257631]: 2025-11-29 08:22:51.419 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 29 03:22:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:51.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.005 257641 INFO nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Creating config drive at /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/disk.config#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.010 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr18omm4y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.168 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr18omm4y" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.200 257641 DEBUG nova.storage.rbd_utils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] rbd image f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.204 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/disk.config f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.385 257641 DEBUG oslo_concurrency.processutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/disk.config f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.386 257641 INFO nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Deleting local config drive /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/disk.config because it was imported into RBD.#033[00m
Nov 29 03:22:52 np0005539550 kernel: tap573e5085-76: entered promiscuous mode
Nov 29 03:22:52 np0005539550 NetworkManager[49039]: <info>  [1764404572.4450] manager: (tap573e5085-76): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Nov 29 03:22:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:52Z|00535|binding|INFO|Claiming lport 573e5085-7652-4c15-b353-cd8eff879375 for this chassis.
Nov 29 03:22:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:52Z|00536|binding|INFO|573e5085-7652-4c15-b353-cd8eff879375: Claiming fa:16:3e:43:4e:b0 10.100.0.6
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.501 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.506 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:4e:b0 10.100.0.6'], port_security=['fa:16:3e:43:4e:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=573e5085-7652-4c15-b353-cd8eff879375) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.507 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 573e5085-7652-4c15-b353-cd8eff879375 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.508 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:22:52 np0005539550 systemd-machined[216673]: New machine qemu-64-instance-0000007d.
Nov 29 03:22:52 np0005539550 systemd-udevd[336354]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.520 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6de38fb9-36d7-4704-855d-2b461cbe3fd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.521 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.523 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.523 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b071d6d4-417e-4fe9-bc55-5beeef408df5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.523 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e21fb5-6ca9-4d1e-885a-e9436dbd8892]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 NetworkManager[49039]: <info>  [1764404572.5335] device (tap573e5085-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:22:52 np0005539550 NetworkManager[49039]: <info>  [1764404572.5350] device (tap573e5085-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.534 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[9c270854-4b78-467b-98dd-579f13988e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 systemd[1]: Started Virtual Machine qemu-64-instance-0000007d.
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.557 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[12cd6861-54ea-4e70-94e6-7bceb7f3603e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.569 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:52Z|00537|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 ovn-installed in OVS
Nov 29 03:22:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:52Z|00538|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 up in Southbound
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.574 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.585 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d7eae80a-eeb2-4397-a4a3-330837cbdd4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 NetworkManager[49039]: <info>  [1764404572.5900] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/245)
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.589 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a02e443-6f50-493b-978b-65ed2baf7424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.616 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7a612e88-27a8-4bdd-ab5c-7a0da0976e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.618 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5cb3b2-b43e-41c8-ba46-5807a02bc095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 NetworkManager[49039]: <info>  [1764404572.6385] device (tap58fd104d-40): carrier: link connected
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.644 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c5825df2-ef97-4981-a103-92fb11292e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.662 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6a5480-b543-4cec-a999-4490d15bc230]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762030, 'reachable_time': 41689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336387, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.678 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d53d6382-9d8c-4d0e-aa78-7517de8301e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 762030, 'tstamp': 762030}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336388, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.702 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[581f730e-1b2e-465b-8ef8-7e281f23cc65]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762030, 'reachable_time': 41689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336389, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.738 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[69561e31-cdcf-4984-9ae1-16349723e76e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.793 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0fb3f6-14a4-4436-ade8-4f24e7140b55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.796 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.796 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.797 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 NetworkManager[49039]: <info>  [1764404572.8000] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/246)
Nov 29 03:22:52 np0005539550 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.803 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.804 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:52Z|00539|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.825 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.826 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.827 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[36b95261-25f7-4676-bd71-068b6cea1b36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.828 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:22:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:22:52.829 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.996 257641 DEBUG nova.compute.manager [req-98a60a25-7fd6-45af-b211-06c02885b464 req-fbdb4327-fbb9-49ce-917b-8ec6e530c71b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.997 257641 DEBUG oslo_concurrency.lockutils [req-98a60a25-7fd6-45af-b211-06c02885b464 req-fbdb4327-fbb9-49ce-917b-8ec6e530c71b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.997 257641 DEBUG oslo_concurrency.lockutils [req-98a60a25-7fd6-45af-b211-06c02885b464 req-fbdb4327-fbb9-49ce-917b-8ec6e530c71b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.998 257641 DEBUG oslo_concurrency.lockutils [req-98a60a25-7fd6-45af-b211-06c02885b464 req-fbdb4327-fbb9-49ce-917b-8ec6e530c71b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:52 np0005539550 nova_compute[257631]: 2025-11-29 08:22:52.998 257641 DEBUG nova.compute.manager [req-98a60a25-7fd6-45af-b211-06c02885b464 req-fbdb4327-fbb9-49ce-917b-8ec6e530c71b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Processing event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.130 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404573.130063, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.131 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Started (Lifecycle Event)
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.135 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.138 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.143 257641 INFO nova.virt.libvirt.driver [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance spawned successfully.
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.143 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.161 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.167 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.173 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.174 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.175 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.176 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.176 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.177 257641 DEBUG nova.virt.libvirt.driver [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.203 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.204 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404573.1303728, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.204 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Paused (Lifecycle Event)
Nov 29 03:22:53 np0005539550 podman[336463]: 2025-11-29 08:22:53.222365385 +0000 UTC m=+0.051771904 container create 9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.234 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.239 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404573.1381156, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.239 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Resumed (Lifecycle Event)
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.268 257641 INFO nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Took 9.16 seconds to spawn the instance on the hypervisor.
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.268 257641 DEBUG nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:53 np0005539550 systemd[1]: Started libpod-conmon-9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c.scope.
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.277 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.281 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:22:53 np0005539550 podman[336463]: 2025-11-29 08:22:53.194039405 +0000 UTC m=+0.023445944 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.300 257641 DEBUG nova.network.neutron [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updated VIF entry in instance network info cache for port 573e5085-7652-4c15-b353-cd8eff879375. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.301 257641 DEBUG nova.network.neutron [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updating instance_info_cache with network_info: [{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.307 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:22:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:22:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a25ec6770a3b297ad5f05ad3c2f263662057e8fefa6d2b1fc5a7d67fa8eb17c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.324 257641 DEBUG oslo_concurrency.lockutils [req-1a4406c6-8009-4f6a-825e-ff6b932fc775 req-1a3897d9-88ce-43fc-acbf-5a47934b1e80 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:22:53 np0005539550 podman[336463]: 2025-11-29 08:22:53.334146428 +0000 UTC m=+0.163552967 container init 9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:22:53 np0005539550 podman[336463]: 2025-11-29 08:22:53.340538911 +0000 UTC m=+0.169945440 container start 9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.342 257641 INFO nova.compute.manager [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Took 10.32 seconds to build instance.
Nov 29 03:22:53 np0005539550 nova_compute[257631]: 2025-11-29 08:22:53.359 257641 DEBUG oslo_concurrency.lockutils [None req-7707a38d-7c70-488f-9989-d8e05214b5a6 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:53 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [NOTICE]   (336482) : New worker (336484) forked
Nov 29 03:22:53 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [NOTICE]   (336482) : Loading success.
Nov 29 03:22:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 29 03:22:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:53.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:54.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:55 np0005539550 nova_compute[257631]: 2025-11-29 08:22:55.126 257641 DEBUG nova.compute.manager [req-e1c0ab3d-37c0-4a5c-afbf-077334629b6a req-dbca162f-8469-4048-af79-994f89ffe48e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:55 np0005539550 nova_compute[257631]: 2025-11-29 08:22:55.127 257641 DEBUG oslo_concurrency.lockutils [req-e1c0ab3d-37c0-4a5c-afbf-077334629b6a req-dbca162f-8469-4048-af79-994f89ffe48e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:22:55 np0005539550 nova_compute[257631]: 2025-11-29 08:22:55.127 257641 DEBUG oslo_concurrency.lockutils [req-e1c0ab3d-37c0-4a5c-afbf-077334629b6a req-dbca162f-8469-4048-af79-994f89ffe48e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:22:55 np0005539550 nova_compute[257631]: 2025-11-29 08:22:55.128 257641 DEBUG oslo_concurrency.lockutils [req-e1c0ab3d-37c0-4a5c-afbf-077334629b6a req-dbca162f-8469-4048-af79-994f89ffe48e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:22:55 np0005539550 nova_compute[257631]: 2025-11-29 08:22:55.128 257641 DEBUG nova.compute.manager [req-e1c0ab3d-37c0-4a5c-afbf-077334629b6a req-dbca162f-8469-4048-af79-994f89ffe48e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:22:55 np0005539550 nova_compute[257631]: 2025-11-29 08:22:55.129 257641 WARNING nova.compute.manager [req-e1c0ab3d-37c0-4a5c-afbf-077334629b6a req-dbca162f-8469-4048-af79-994f89ffe48e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state active and task_state None.
Nov 29 03:22:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 217 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 834 KiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 29 03:22:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:55.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:56 np0005539550 nova_compute[257631]: 2025-11-29 08:22:56.326 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:56.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:22:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5b3b201f-630d-4266-b0f9-98ab882562e7 does not exist
Nov 29 03:22:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 06901bda-a8e5-415f-ab7c-c9dacf8a6e3a does not exist
Nov 29 03:22:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d3c62797-b982-4208-b6d3-a3b3599704b9 does not exist
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:22:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:22:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.178 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:57 np0005539550 NetworkManager[49039]: <info>  [1764404577.1804] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Nov 29 03:22:57 np0005539550 NetworkManager[49039]: <info>  [1764404577.1818] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Nov 29 03:22:57 np0005539550 podman[336763]: 2025-11-29 08:22:57.260058677 +0000 UTC m=+0.043339751 container create dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:22:57Z|00540|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:22:57 np0005539550 systemd[1]: Started libpod-conmon-dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056.scope.
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.317 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:57 np0005539550 podman[336763]: 2025-11-29 08:22:57.236365358 +0000 UTC m=+0.019646462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:22:57 np0005539550 podman[336763]: 2025-11-29 08:22:57.362902146 +0000 UTC m=+0.146183250 container init dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:22:57 np0005539550 podman[336763]: 2025-11-29 08:22:57.376909732 +0000 UTC m=+0.160190846 container start dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:22:57 np0005539550 podman[336763]: 2025-11-29 08:22:57.381594244 +0000 UTC m=+0.164875348 container attach dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:22:57 np0005539550 awesome_archimedes[336781]: 167 167
Nov 29 03:22:57 np0005539550 systemd[1]: libpod-dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056.scope: Deactivated successfully.
Nov 29 03:22:57 np0005539550 podman[336763]: 2025-11-29 08:22:57.384151206 +0000 UTC m=+0.167432310 container died dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:22:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-39bfa3774774668652f28499c684809efd260dc00b08fb8286f129a56013d7ee-merged.mount: Deactivated successfully.
Nov 29 03:22:57 np0005539550 podman[336763]: 2025-11-29 08:22:57.43559126 +0000 UTC m=+0.218872334 container remove dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:22:57 np0005539550 systemd[1]: libpod-conmon-dccd4bd94557b49e9462cc769a86399c9ddafa1d7bdf5f56f0fa3147eb117056.scope: Deactivated successfully.
Nov 29 03:22:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 205 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.5 MiB/s wr, 114 op/s
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.572 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:57 np0005539550 podman[336804]: 2025-11-29 08:22:57.620480998 +0000 UTC m=+0.050735829 container create 1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:22:57 np0005539550 systemd[1]: Started libpod-conmon-1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846.scope.
Nov 29 03:22:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:22:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:57.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:22:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:22:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bffd63b3693a74d3f7a5e39536442dc1bf39483ce87941b3f5f87ec8d1d401/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bffd63b3693a74d3f7a5e39536442dc1bf39483ce87941b3f5f87ec8d1d401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bffd63b3693a74d3f7a5e39536442dc1bf39483ce87941b3f5f87ec8d1d401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bffd63b3693a74d3f7a5e39536442dc1bf39483ce87941b3f5f87ec8d1d401/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bffd63b3693a74d3f7a5e39536442dc1bf39483ce87941b3f5f87ec8d1d401/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:57 np0005539550 podman[336804]: 2025-11-29 08:22:57.598790217 +0000 UTC m=+0.029045098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:57 np0005539550 podman[336804]: 2025-11-29 08:22:57.713436088 +0000 UTC m=+0.143690949 container init 1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keldysh, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:22:57 np0005539550 podman[336804]: 2025-11-29 08:22:57.719971005 +0000 UTC m=+0.150225836 container start 1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:22:57 np0005539550 podman[336804]: 2025-11-29 08:22:57.723769086 +0000 UTC m=+0.154023937 container attach 1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keldysh, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:22:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:22:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:22:57 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.841 257641 DEBUG nova.compute.manager [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-changed-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.841 257641 DEBUG nova.compute.manager [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Refreshing instance network info cache due to event network-changed-573e5085-7652-4c15-b353-cd8eff879375. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.841 257641 DEBUG oslo_concurrency.lockutils [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.842 257641 DEBUG oslo_concurrency.lockutils [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:22:57 np0005539550 nova_compute[257631]: 2025-11-29 08:22:57.842 257641 DEBUG nova.network.neutron [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Refreshing network info cache for port 573e5085-7652-4c15-b353-cd8eff879375 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:22:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:22:58 np0005539550 dreamy_keldysh[336821]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:22:58 np0005539550 dreamy_keldysh[336821]: --> relative data size: 1.0
Nov 29 03:22:58 np0005539550 dreamy_keldysh[336821]: --> All data devices are unavailable
Nov 29 03:22:58 np0005539550 systemd[1]: libpod-1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846.scope: Deactivated successfully.
Nov 29 03:22:58 np0005539550 podman[336804]: 2025-11-29 08:22:58.571714239 +0000 UTC m=+1.001969070 container died 1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:22:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e5bffd63b3693a74d3f7a5e39536442dc1bf39483ce87941b3f5f87ec8d1d401-merged.mount: Deactivated successfully.
Nov 29 03:22:58 np0005539550 podman[336804]: 2025-11-29 08:22:58.622093598 +0000 UTC m=+1.052348429 container remove 1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:22:58 np0005539550 systemd[1]: libpod-conmon-1f5ea242cc4c701644243b16e2904298f36e545ba8cfd5c3e5a74bd5d6e4f846.scope: Deactivated successfully.
Nov 29 03:22:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 03:22:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:22:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 03:22:59 np0005539550 podman[337039]: 2025-11-29 08:22:59.184324703 +0000 UTC m=+0.040496103 container create bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:22:59 np0005539550 systemd[1]: Started libpod-conmon-bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02.scope.
Nov 29 03:22:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:22:59 np0005539550 podman[337039]: 2025-11-29 08:22:59.256226308 +0000 UTC m=+0.112397738 container init bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:22:59 np0005539550 podman[337039]: 2025-11-29 08:22:59.262028738 +0000 UTC m=+0.118200138 container start bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:22:59 np0005539550 podman[337039]: 2025-11-29 08:22:59.166934565 +0000 UTC m=+0.023105985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:59 np0005539550 podman[337039]: 2025-11-29 08:22:59.265247645 +0000 UTC m=+0.121419065 container attach bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:22:59 np0005539550 trusting_dewdney[337056]: 167 167
Nov 29 03:22:59 np0005539550 systemd[1]: libpod-bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02.scope: Deactivated successfully.
Nov 29 03:22:59 np0005539550 podman[337039]: 2025-11-29 08:22:59.267590781 +0000 UTC m=+0.123762181 container died bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dewdney, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:22:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-15f87704cbd82d32e3c27ec50f33cf424b8c24ea77f7ea4de718cded291f5568-merged.mount: Deactivated successfully.
Nov 29 03:22:59 np0005539550 podman[337039]: 2025-11-29 08:22:59.306355612 +0000 UTC m=+0.162527012 container remove bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:22:59 np0005539550 systemd[1]: libpod-conmon-bff25d93d45309f3aa7f8e3f50ab359aba8130a9c6958e10d248d62fc1854e02.scope: Deactivated successfully.
Nov 29 03:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:22:59
Nov 29 03:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.control', 'images', 'backups', '.rgw.root', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 03:22:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:22:59 np0005539550 podman[337079]: 2025-11-29 08:22:59.480371318 +0000 UTC m=+0.055951113 container create 49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:22:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 189 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 140 op/s
Nov 29 03:22:59 np0005539550 systemd[1]: Started libpod-conmon-49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f.scope.
Nov 29 03:22:59 np0005539550 podman[337079]: 2025-11-29 08:22:59.461129337 +0000 UTC m=+0.036709172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:22:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:22:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8231f280ec1c66236bda1cf871fa31445e2433c441283256d6a989eedc83391/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8231f280ec1c66236bda1cf871fa31445e2433c441283256d6a989eedc83391/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8231f280ec1c66236bda1cf871fa31445e2433c441283256d6a989eedc83391/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8231f280ec1c66236bda1cf871fa31445e2433c441283256d6a989eedc83391/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:22:59 np0005539550 podman[337079]: 2025-11-29 08:22:59.581427474 +0000 UTC m=+0.157007319 container init 49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dewdney, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:22:59 np0005539550 podman[337079]: 2025-11-29 08:22:59.588366521 +0000 UTC m=+0.163946316 container start 49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:22:59 np0005539550 podman[337079]: 2025-11-29 08:22:59.591339972 +0000 UTC m=+0.166919787 container attach 49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:22:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:22:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:22:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:22:59.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:00 np0005539550 podman[337102]: 2025-11-29 08:23:00.339133451 +0000 UTC m=+0.067833090 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:23:00 np0005539550 podman[337100]: 2025-11-29 08:23:00.347814359 +0000 UTC m=+0.080637366 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]: {
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:    "0": [
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:        {
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "devices": [
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "/dev/loop3"
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            ],
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "lv_name": "ceph_lv0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "lv_size": "7511998464",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "name": "ceph_lv0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "tags": {
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.cluster_name": "ceph",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.crush_device_class": "",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.encrypted": "0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.osd_id": "0",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.type": "block",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:                "ceph.vdo": "0"
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            },
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "type": "block",
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:            "vg_name": "ceph_vg0"
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:        }
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]:    ]
Nov 29 03:23:00 np0005539550 quirky_dewdney[337095]: }
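
The JSON block printed by the short-lived ceph container is keyed by OSD id and carries LV records whose fields match `ceph-volume lvm list --format json` output (an inference from the field names; the log does not show the command line). A small sketch, assuming the blob has been saved to a file, that flattens the comma-separated lv_tags string per OSD:

    import json

    # 'lvm_list.json' is an illustrative filename for the blob above.
    with open('lvm_list.json') as fh:
        listing = json.load(fh)

    for osd_id, lvs in listing.items():
        for lv in lvs:
            # lv_tags is one "k=v,k=v,..." string; split it into a dict.
            tags = dict(t.split('=', 1) for t in lv['lv_tags'].split(','))
            print(osd_id, lv['lv_path'], tags.get('ceph.osd_fsid'))
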
Nov 29 03:23:00 np0005539550 systemd[1]: libpod-49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f.scope: Deactivated successfully.
Nov 29 03:23:00 np0005539550 podman[337079]: 2025-11-29 08:23:00.372601654 +0000 UTC m=+0.948181449 container died 49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:23:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d8231f280ec1c66236bda1cf871fa31445e2433c441283256d6a989eedc83391-merged.mount: Deactivated successfully.
Nov 29 03:23:00 np0005539550 podman[337079]: 2025-11-29 08:23:00.426803055 +0000 UTC m=+1.002382850 container remove 49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:23:00 np0005539550 systemd[1]: libpod-conmon-49d9deae8669c3e7b03737ba53132857deb025bf8e98209b3aee7496c653240f.scope: Deactivated successfully.
Nov 29 03:23:00 np0005539550 nova_compute[257631]: 2025-11-29 08:23:00.561 257641 DEBUG nova.network.neutron [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updated VIF entry in instance network info cache for port 573e5085-7652-4c15-b353-cd8eff879375. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:23:00 np0005539550 nova_compute[257631]: 2025-11-29 08:23:00.561 257641 DEBUG nova.network.neutron [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updating instance_info_cache with network_info: [{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:23:00 np0005539550 nova_compute[257631]: 2025-11-29 08:23:00.591 257641 DEBUG oslo_concurrency.lockutils [req-6f505f55-1fc8-4009-ada8-3aba620c4ba8 req-7f386d11-9d56-41f0-b527-bbd2692cd090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:23:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:00.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539550 podman[337288]: 2025-11-29 08:23:01.285525755 +0000 UTC m=+0.041690210 container create e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:23:01 np0005539550 systemd[1]: Started libpod-conmon-e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a.scope.
Nov 29 03:23:01 np0005539550 nova_compute[257631]: 2025-11-29 08:23:01.329 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:23:01 np0005539550 podman[337288]: 2025-11-29 08:23:01.269234794 +0000 UTC m=+0.025399269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:23:01 np0005539550 podman[337288]: 2025-11-29 08:23:01.377450682 +0000 UTC m=+0.133615137 container init e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_grothendieck, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:23:01 np0005539550 podman[337288]: 2025-11-29 08:23:01.384601813 +0000 UTC m=+0.140766268 container start e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:23:01 np0005539550 podman[337288]: 2025-11-29 08:23:01.38862368 +0000 UTC m=+0.144788425 container attach e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_grothendieck, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:23:01 np0005539550 systemd[1]: libpod-e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a.scope: Deactivated successfully.
Nov 29 03:23:01 np0005539550 xenodochial_grothendieck[337305]: 167 167
Nov 29 03:23:01 np0005539550 conmon[337305]: conmon e51b3684cf1882a33873 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a.scope/container/memory.events
Nov 29 03:23:01 np0005539550 podman[337288]: 2025-11-29 08:23:01.392911603 +0000 UTC m=+0.149076048 container died e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:23:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8ef339f1282bc47d25ad8a9f346304c45f71624dbd95f59d38ab3626f52b52ef-merged.mount: Deactivated successfully.
Nov 29 03:23:01 np0005539550 podman[337288]: 2025-11-29 08:23:01.434996323 +0000 UTC m=+0.191160768 container remove e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:23:01 np0005539550 systemd[1]: libpod-conmon-e51b3684cf1882a3387303987550bf9dfb370497e3c4f3850e0a67c8ee72dd6a.scope: Deactivated successfully.
Nov 29 03:23:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 214 MiB data, 1022 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 188 op/s
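
Each pgmap DBG line from ceph-mgr summarizes the whole cluster: pg count and states, stored data, raw used/available space, and client throughput. A sketch extracting the totals; sizes are kept as the human-readable strings the mgr prints, and the pattern is written only against the format visible in this log:

    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
        r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail'
    )

    sample = ('pgmap v2415: 305 pgs: 305 active+clean; 214 MiB data, '
              '1022 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, '
              '1.8 MiB/s wr, 188 op/s')
    m = PGMAP.search(sample)
    print(m.groupdict() if m else 'no match')
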
Nov 29 03:23:01 np0005539550 podman[337329]: 2025-11-29 08:23:01.61945603 +0000 UTC m=+0.059436007 container create a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:23:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:01.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:01 np0005539550 systemd[1]: Started libpod-conmon-a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75.scope.
Nov 29 03:23:01 np0005539550 podman[337329]: 2025-11-29 08:23:01.595929826 +0000 UTC m=+0.035909833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:23:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:23:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f685db4a5baa0d7a80fd1c4722634ad966be2a0352310a92df5ae9e52b1b964a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f685db4a5baa0d7a80fd1c4722634ad966be2a0352310a92df5ae9e52b1b964a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f685db4a5baa0d7a80fd1c4722634ad966be2a0352310a92df5ae9e52b1b964a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f685db4a5baa0d7a80fd1c4722634ad966be2a0352310a92df5ae9e52b1b964a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
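
The kernel prints this warning for each path remounted into the container because the backing xfs filesystem stores inode timestamps as 32-bit seconds (presumably a filesystem made without the bigtime feature; the log only states the limit). 0x7fffffff seconds converts to the familiar 2038 cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the maximum 32-bit signed epoch second from the message.
    limit = datetime.fromtimestamp(0x7fffffff, tz=timezone.utc)
    print(limit.isoformat())   # 2038-01-19T03:14:07+00:00
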
Nov 29 03:23:01 np0005539550 podman[337329]: 2025-11-29 08:23:01.738005516 +0000 UTC m=+0.177985513 container init a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:23:01 np0005539550 podman[337329]: 2025-11-29 08:23:01.744691296 +0000 UTC m=+0.184671253 container start a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:23:01 np0005539550 podman[337329]: 2025-11-29 08:23:01.74818342 +0000 UTC m=+0.188163397 container attach a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:23:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]: {
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]:        "osd_id": 0,
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]:        "type": "bluestore"
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]:    }
Nov 29 03:23:02 np0005539550 thirsty_jemison[337347]: }
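
This second JSON, keyed by OSD fsid with ceph_fsid, device, osd_id and type fields, looks like a raw/bluestore device listing from the same cephadm device scan (again an inference from the fields, not from a visible command line). A sketch cross-checking it against the earlier LVM listing by fsid; both filenames are illustrative:

    import json

    lvm = json.load(open('lvm_list.json'))   # osd_id -> [lv records], from above
    raw = json.load(open('raw_list.json'))   # osd_fsid -> osd record, from above

    for fsid, osd in raw.items():
        print(fsid, osd['device'], osd['type'])
        # The same fsid should appear in the LVM tags for that OSD id.
        for lv in lvm.get(str(osd['osd_id']), []):
            assert lv['tags']['ceph.osd_fsid'] == fsid
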
Nov 29 03:23:02 np0005539550 nova_compute[257631]: 2025-11-29 08:23:02.574 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:02 np0005539550 systemd[1]: libpod-a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75.scope: Deactivated successfully.
Nov 29 03:23:02 np0005539550 conmon[337347]: conmon a949888b9b676c4306a9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75.scope/container/memory.events
Nov 29 03:23:02 np0005539550 podman[337329]: 2025-11-29 08:23:02.588055679 +0000 UTC m=+1.028035636 container died a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:23:02 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f685db4a5baa0d7a80fd1c4722634ad966be2a0352310a92df5ae9e52b1b964a-merged.mount: Deactivated successfully.
Nov 29 03:23:02 np0005539550 podman[337329]: 2025-11-29 08:23:02.650160239 +0000 UTC m=+1.090140206 container remove a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jemison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:23:02 np0005539550 systemd[1]: libpod-conmon-a949888b9b676c4306a9a02a89bb02422d07132bdf91fdeb980b657eb94ade75.scope: Deactivated successfully.
Nov 29 03:23:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:02.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:23:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:23:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c031824a-7a22-4465-8e8b-dd4f278963b6 does not exist
Nov 29 03:23:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 47f8a8eb-6197-4813-a2af-4b3227063b46 does not exist
Nov 29 03:23:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 888ba3c4-cffa-4bb3-92b8-a38cb0a39429 does not exist
Nov 29 03:23:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 226 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.1 MiB/s wr, 217 op/s
Nov 29 03:23:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:03.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:23:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:04.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 250 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 254 op/s
Nov 29 03:23:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:05.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:06 np0005539550 nova_compute[257631]: 2025-11-29 08:23:06.374 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:07Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:4e:b0 10.100.0.6
Nov 29 03:23:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:07Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:4e:b0 10.100.0.6
Nov 29 03:23:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 299 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 211 op/s
Nov 29 03:23:07 np0005539550 nova_compute[257631]: 2025-11-29 08:23:07.576 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:07.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:23:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:08.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:23:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:23:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 319 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.6 MiB/s wr, 195 op/s
Nov 29 03:23:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:09.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:10.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:11 np0005539550 nova_compute[257631]: 2025-11-29 08:23:11.414 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 302 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 6.5 MiB/s wr, 231 op/s
Nov 29 03:23:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:11.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:12 np0005539550 nova_compute[257631]: 2025-11-29 08:23:12.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:12.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:13 np0005539550 nova_compute[257631]: 2025-11-29 08:23:13.191 257641 DEBUG oslo_concurrency.lockutils [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:13 np0005539550 nova_compute[257631]: 2025-11-29 08:23:13.192 257641 DEBUG oslo_concurrency.lockutils [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:13 np0005539550 nova_compute[257631]: 2025-11-29 08:23:13.192 257641 DEBUG nova.compute.manager [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:23:13 np0005539550 nova_compute[257631]: 2025-11-29 08:23:13.197 257641 DEBUG nova.compute.manager [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 03:23:13 np0005539550 nova_compute[257631]: 2025-11-29 08:23:13.198 257641 DEBUG nova.objects.instance [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'flavor' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:23:13 np0005539550 nova_compute[257631]: 2025-11-29 08:23:13.220 257641 DEBUG nova.virt.libvirt.driver [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
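
The six nova_compute lines above are one stop request: take the per-instance lock, read the current power state, then ask libvirt for a clean shutdown. Because every line carries the same request ID, the whole flow can be replayed from an exported journal; 'journal.txt' is again an illustrative filename:

    # Request ID copied from the stop_instance lines above.
    REQ = 'req-4c86235a-bb62-49b1-993d-f5ed89f8770e'

    with open('journal.txt') as fh:
        for line in fh:
            if REQ in line and 'nova_compute' in line:
                # Keep only the message after the oslo context block.
                print(line.rstrip().split(' default default] ', 1)[-1])
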
Nov 29 03:23:13 np0005539550 podman[337439]: 2025-11-29 08:23:13.359670647 +0000 UTC m=+0.090121354 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:23:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 194 op/s
Nov 29 03:23:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:13.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:14.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.4 MiB/s wr, 203 op/s
Nov 29 03:23:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:15.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.416 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:16 np0005539550 kernel: tap573e5085-76 (unregistering): left promiscuous mode
Nov 29 03:23:16 np0005539550 NetworkManager[49039]: <info>  [1764404596.4696] device (tap573e5085-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.478 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:16Z|00541|binding|INFO|Releasing lport 573e5085-7652-4c15-b353-cd8eff879375 from this chassis (sb_readonly=0)
Nov 29 03:23:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:16Z|00542|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 down in Southbound
Nov 29 03:23:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:16Z|00543|binding|INFO|Removing iface tap573e5085-76 ovn-installed in OVS
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.488 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:4e:b0 10.100.0.6'], port_security=['fa:16:3e:43:4e:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=573e5085-7652-4c15-b353-cd8eff879375) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.490 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 573e5085-7652-4c15-b353-cd8eff879375 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.492 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.493 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3982f501-3751-4f4c-86c1-5e19e92174a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.493 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.507 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.515 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.516 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.529 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:23:16 np0005539550 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 29 03:23:16 np0005539550 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000007d.scope: Consumed 14.491s CPU time.
Nov 29 03:23:16 np0005539550 systemd-machined[216673]: Machine qemu-64-instance-0000007d terminated.
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.600 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.601 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.608 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.608 257641 INFO nova.compute.claims [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:23:16 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [NOTICE]   (336482) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:16 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [NOTICE]   (336482) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:16 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [WARNING]  (336482) : Exiting Master process...
Nov 29 03:23:16 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [WARNING]  (336482) : Exiting Master process...
Nov 29 03:23:16 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [ALERT]    (336482) : Current worker (336484) exited with code 143 (Terminated)
Nov 29 03:23:16 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[336478]: [WARNING]  (336482) : All workers exited. Exiting... (0)
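
Worker exit code 143 here is the usual 128+signal encoding, i.e. SIGTERM: the metadata agent stopped the proxy deliberately rather than the worker crashing. Quick check:

    import signal

    # Shell-style status for a signal-killed process is 128+N,
    # so 143 decodes to signal 15.
    assert 128 + signal.SIGTERM == 143
    print(signal.Signals(143 - 128).name)   # -> SIGTERM
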
Nov 29 03:23:16 np0005539550 systemd[1]: libpod-9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c.scope: Deactivated successfully.
Nov 29 03:23:16 np0005539550 podman[337492]: 2025-11-29 08:23:16.661154098 +0000 UTC m=+0.068995117 container died 9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2a25ec6770a3b297ad5f05ad3c2f263662057e8fefa6d2b1fc5a7d67fa8eb17c-merged.mount: Deactivated successfully.
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:16.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.713 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:16 np0005539550 podman[337492]: 2025-11-29 08:23:16.716373393 +0000 UTC m=+0.124214402 container cleanup 9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:23:16 np0005539550 systemd[1]: libpod-conmon-9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c.scope: Deactivated successfully.
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.771 257641 DEBUG nova.compute.manager [req-6897f428-2913-4645-b719-e0ee0bf87843 req-759d3511-f2eb-4a7f-9354-832c32806d21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.772 257641 DEBUG oslo_concurrency.lockutils [req-6897f428-2913-4645-b719-e0ee0bf87843 req-759d3511-f2eb-4a7f-9354-832c32806d21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.772 257641 DEBUG oslo_concurrency.lockutils [req-6897f428-2913-4645-b719-e0ee0bf87843 req-759d3511-f2eb-4a7f-9354-832c32806d21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.773 257641 DEBUG oslo_concurrency.lockutils [req-6897f428-2913-4645-b719-e0ee0bf87843 req-759d3511-f2eb-4a7f-9354-832c32806d21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.773 257641 DEBUG nova.compute.manager [req-6897f428-2913-4645-b719-e0ee0bf87843 req-759d3511-f2eb-4a7f-9354-832c32806d21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.774 257641 WARNING nova.compute.manager [req-6897f428-2913-4645-b719-e0ee0bf87843 req-759d3511-f2eb-4a7f-9354-832c32806d21 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state active and task_state powering-off.#033[00m
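
The Acquiring/acquired/released trio above is oslo.concurrency's in-process lock logging around Nova's per-instance event queue: the handler pops any waiter registered for the network-vif event, finds none, and the WARNING follows because the unplug raced the power-off. A minimal sketch of that pattern, using the same lockutils API the log paths point at (the lock name is taken from the log; the empty body is illustrative only):

    from oslo_concurrency import lockutils

    # The decorator serializes access to the per-instance event dict and emits
    # the "Acquiring lock" / "acquired" / "released" DEBUG lines seen above.
    @lockutils.synchronized('f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events')
    def _pop_event():
        # Nova looks up a waiter for network-vif-unplugged-...; with none
        # registered it logs "No waiting events found" instead.
        return None

    _pop_event()
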
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.777 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:16 np0005539550 podman[337530]: 2025-11-29 08:23:16.785481112 +0000 UTC m=+0.044388286 container remove 9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.792 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bff9bb11-742a-4ae8-ac7f-ef0973998ab2]: (4, ('Sat Nov 29 08:23:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c)\n9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c\nSat Nov 29 08:23:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c)\n9602bf4d7bfb3a120df355fe8c75d9812d8ace88654d7b0dc035605b95ef9c3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.793 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06bca4c1-7f1d-4a1d-9357-134563792e71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.795 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:16 np0005539550 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539550 nova_compute[257631]: 2025-11-29 08:23:16.814 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.817 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[07d0b128-f883-49c7-b822-0c78e1be0d77]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.833 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[88c521cf-a22d-40fa-bcca-171b4ba66936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.834 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[318b914a-a485-4cb4-96e9-5380b90234ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.852 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[31987e83-9b2a-402b-af33-dd99b6c1a3c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762024, 'reachable_time': 27648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337549, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:16 np0005539550 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.854 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:16.854 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[319da2de-7e0b-4a98-90e7-bbc7a46ef6a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
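
The namespace teardown above runs inside the privsep daemon; neutron's privileged ip_lib delegates the actual work to pyroute2. A rough equivalent, as a sketch (assumes pyroute2 is installed, root privileges, and that the namespace still exists):

    from pyroute2 import netns

    ns = 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788'
    # Removing the namespace drops its /run/netns bind mount, which is why
    # systemd logs the run-netns-*.mount unit deactivating just above.
    if ns in netns.listnetns():
        netns.remove(ns)
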
Nov 29 03:23:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790085240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.241 257641 INFO nova.virt.libvirt.driver [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance shutdown successfully after 4 seconds.#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.243 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
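
The resource tracker sizes the RBD-backed disk pool with the exact command logged above; the ceph-mon audit line at 08:23:17 is the same request arriving on the monitor. A sketch of that probe and the fields Nova reads (assumes a reachable cluster and the 'openstack' client keyring):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # Cluster-wide totals; these feed the DISK_GB inventory reported below.
    print(stats['total_bytes'], stats['total_avail_bytes'])
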
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.250 257641 DEBUG nova.compute.provider_tree [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.255 257641 INFO nova.virt.libvirt.driver [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance destroyed successfully.#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.255 257641 DEBUG nova.objects.instance [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'numa_topology' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.277 257641 DEBUG nova.compute.manager [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.279 257641 DEBUG nova.scheduler.client.report [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
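
Effective schedulable capacity for each class in the inventory line above is (total - reserved) * allocation_ratio, so this host overcommits CPU 4x while slightly undercommitting disk. Worked out with the logged numbers:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
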
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.310 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.311 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.346 257641 DEBUG oslo_concurrency.lockutils [None req-4c86235a-bb62-49b1-993d-f5ed89f8770e 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 4.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.357 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.357 257641 DEBUG nova.network.neutron [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.380 257641 INFO nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.395 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.479 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.482 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.482 257641 INFO nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Creating image(s)#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.514 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.5 MiB/s wr, 244 op/s
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.541 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.571 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.575 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.602 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.643 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
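
Before touching the image cache entry, Nova inspects the base file with qemu-img wrapped in oslo's prlimit helper (1 GiB address space, 30 s CPU), exactly as logged. A sketch of the same call and the JSON fields of interest:

    import json
    import subprocess

    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
           '/var/lib/nova/instances/_base/'
           'f62ef5f82502d01c82174408aec7f3ac942e2488',
           '--force-share', '--output=json']
    info = json.loads(subprocess.check_output(cmd))
    # virtual-size (bytes) and format gate the rbd import/resize that follows.
    print(info['virtual-size'], info['format'])
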
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.644 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.644 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.645 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.668 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.672 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d17ba263-68c7-4428-9d64-9a809e93a457_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:17.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.953 257641 DEBUG nova.policy [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1552f15deb524705a9456cbe9b54c429', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0bace34c102e4d56b089fd695d324f10', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:23:17 np0005539550 nova_compute[257631]: 2025-11-29 08:23:17.957 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d17ba263-68c7-4428-9d64-9a809e93a457_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.025 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] resizing rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
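
The disk flow above is two steps: import the cached base image into the 'vms' pool with the rbd CLI, then grow it to the flavor's 1 GiB root disk. Nova performs the resize through its librbd-backed rbd_utils rather than the CLI, but the CLI equivalent is a close sketch:

    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            'f62ef5f82502d01c82174408aec7f3ac942e2488')
    image = 'd17ba263-68c7-4428-9d64-9a809e93a457_disk'
    conf = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, image,
                           '--image-format=2'] + conf)
    # rbd takes MiB by default: 1024 MiB == the 1073741824 bytes logged above.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', image,
                           '--size', '1024'] + conf)
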
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.127 257641 DEBUG nova.objects.instance [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'migration_context' on Instance uuid d17ba263-68c7-4428-9d64-9a809e93a457 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.178 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.179 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Ensure instance console log exists: /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.179 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.180 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.180 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.658 257641 DEBUG nova.network.neutron [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Successfully created port: 72d454e3-eb73-4ea1-94dd-f13b08f74cda _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:23:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:18.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:18 np0005539550 nova_compute[257631]: 2025-11-29 08:23:18.865 257641 DEBUG nova.objects.instance [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'flavor' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:18.955 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:18.956 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:18.956 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.038 257641 DEBUG nova.compute.manager [req-4d8d2e08-0a32-499d-80e7-29a09813a58c req-bdf401e2-0d33-4984-ad2d-005e7280d6bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.038 257641 DEBUG oslo_concurrency.lockutils [req-4d8d2e08-0a32-499d-80e7-29a09813a58c req-bdf401e2-0d33-4984-ad2d-005e7280d6bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.039 257641 DEBUG oslo_concurrency.lockutils [req-4d8d2e08-0a32-499d-80e7-29a09813a58c req-bdf401e2-0d33-4984-ad2d-005e7280d6bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.039 257641 DEBUG oslo_concurrency.lockutils [req-4d8d2e08-0a32-499d-80e7-29a09813a58c req-bdf401e2-0d33-4984-ad2d-005e7280d6bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.039 257641 DEBUG nova.compute.manager [req-4d8d2e08-0a32-499d-80e7-29a09813a58c req-bdf401e2-0d33-4984-ad2d-005e7280d6bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.039 257641 WARNING nova.compute.manager [req-4d8d2e08-0a32-499d-80e7-29a09813a58c req-bdf401e2-0d33-4984-ad2d-005e7280d6bc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.065 257641 DEBUG oslo_concurrency.lockutils [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.066 257641 DEBUG oslo_concurrency.lockutils [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.066 257641 DEBUG nova.network.neutron [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.066 257641 DEBUG nova.objects.instance [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'info_cache' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 285 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 242 op/s
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.575 257641 DEBUG nova.network.neutron [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Successfully updated port: 72d454e3-eb73-4ea1-94dd-f13b08f74cda _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.594 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.594 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquired lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.595 257641 DEBUG nova.network.neutron [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:23:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:19.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:19 np0005539550 nova_compute[257631]: 2025-11-29 08:23:19.798 257641 DEBUG nova.network.neutron [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006301581693188163 of space, bias 1.0, pg target 1.890474507956449 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:23:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
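
Each pg_autoscaler line above is the same computation: the pool's share of raw capacity, times its bias, times the cluster-wide PG budget, then quantized to a power of two with per-pool minimums applied. A back-of-envelope check (assumes 3 OSDs and the default mon_target_pg_per_osd = 100, which these numbers are consistent with):

    def pg_target(capacity_ratio, bias, osds=3, target_pg_per_osd=100):
        # Raw target before quantization and minimum-PG rules are applied.
        return capacity_ratio * bias * osds * target_pg_per_osd

    print(pg_target(0.006301581693188163, 1.0))    # ~1.89, as logged for 'vms'
    print(pg_target(1.4540294062907128e-06, 4.0))  # ~0.0017, cephfs.cephfs.meta
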
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.320 257641 DEBUG nova.compute.manager [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-changed-72d454e3-eb73-4ea1-94dd-f13b08f74cda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.320 257641 DEBUG nova.compute.manager [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Refreshing instance network info cache due to event network-changed-72d454e3-eb73-4ea1-94dd-f13b08f74cda. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.321 257641 DEBUG oslo_concurrency.lockutils [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.327 257641 DEBUG nova.network.neutron [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updating instance_info_cache with network_info: [{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.355 257641 DEBUG oslo_concurrency.lockutils [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.396 257641 INFO nova.virt.libvirt.driver [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance destroyed successfully.#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.396 257641 DEBUG nova.objects.instance [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'numa_topology' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.413 257641 DEBUG nova.objects.instance [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.433 257641 DEBUG nova.virt.libvirt.vif [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-161781417',display_name='tempest-ServerActionsTestJSON-server-161781417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-161781417',id=125,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-u6xh7rnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.433 257641 DEBUG nova.network.os_vif_util [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.434 257641 DEBUG nova.network.os_vif_util [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.434 257641 DEBUG os_vif [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.436 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.436 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap573e5085-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
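
DelPortCommand above is ovsdbapp committing a single OVSDB transaction that drops the instance's tap port from br-int; with if_exists=True a port that is already gone is not an error. The ovs-vsctl equivalent, as a sketch:

    import subprocess

    # --if-exists mirrors DelPortCommand(if_exists=True): succeed even if a
    # concurrent cleanup has already removed the port.
    subprocess.check_call(['ovs-vsctl', '--if-exists', 'del-port',
                           'br-int', 'tap573e5085-76'])
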
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.437 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.439 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.441 257641 INFO os_vif [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76')#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.448 257641 DEBUG nova.virt.libvirt.driver [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Start _get_guest_xml network_info=[{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.451 257641 WARNING nova.virt.libvirt.driver [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.455 257641 DEBUG nova.virt.libvirt.host [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.456 257641 DEBUG nova.virt.libvirt.host [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.458 257641 DEBUG nova.virt.libvirt.host [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.459 257641 DEBUG nova.virt.libvirt.host [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
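The two probes above are nova's runtime check for a usable CPU controller: it first looks for a cgroup-v1 'cpu' hierarchy, then falls back to the cgroup-v2 unified hierarchy, which is what succeeds on this host. The same test can be reproduced with a few lines of Python (a sketch against the standard kernel cgroup interface, not nova's exact code):

    # Sketch of a cgroup CPU-controller probe equivalent to the checks above.
    # cgroup v1 exposes a per-controller mount; cgroup v2 lists all active
    # controllers in a single file under the unified hierarchy.
    from pathlib import Path

    def has_cpu_controller() -> bool:
        if Path("/sys/fs/cgroup/cpu").is_dir():          # cgroup v1
            return True
        v2 = Path("/sys/fs/cgroup/cgroup.controllers")   # cgroup v2
        return v2.is_file() and "cpu" in v2.read_text().split()

    print(has_cpu_controller())  # True on this host, matching the log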
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.460 257641 DEBUG nova.virt.libvirt.driver [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
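The 'custom'/'Nehalem' pairing logged here is driven by nova's libvirt CPU-model settings; the corresponding nova.conf stanza would look like the following (standard [libvirt] option names, values inferred from this log line):

    [libvirt]
    cpu_mode = custom
    cpu_models = Nehalem
    # cpu_model_extra_flags is unset, matching the empty extra-flags string above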
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.460 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.460 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.461 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.461 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.461 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.461 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.462 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.462 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.462 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.462 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.463 257641 DEBUG nova.virt.hardware [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
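The topology search traced above is effectively a brute-force enumeration: every (sockets, cores, threads) triple whose product equals the vCPU count, subject to the per-dimension limits, is a candidate, and the preference ordering is applied afterwards. A compact illustration of that idea (illustrative only, not nova.virt.hardware itself):

    # Enumerate CPU topologies whose product equals the vCPU count, within
    # per-dimension limits (65536 each here, i.e. effectively unlimited).
    from itertools import product

    def possible_topologies(vcpus, limits=(65536, 65536, 65536)):
        for combo in product(range(1, vcpus + 1), repeat=3):
            s, c, t = combo
            if s * c * t == vcpus and all(v <= m for v, m in zip(combo, limits)):
                yield combo

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged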
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.463 257641 DEBUG nova.objects.instance [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.479 257641 DEBUG oslo_concurrency.processutils [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:20.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
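The anonymous "HEAD /" with a 200 response is a load-balancer health probe against radosgw's beast frontend. It can be reproduced with a one-line HTTP request; the endpoint below is a placeholder, since the access log does not record which port beast is serving:

    # Reproduce the health probe seen in the beast access log above.
    # RGW_HOST:RGW_PORT is a placeholder for the real radosgw endpoint.
    import requests

    resp = requests.head("http://RGW_HOST:RGW_PORT/", timeout=5)
    print(resp.status_code)  # expect 200, as logged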
Nov 29 03:23:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1688890395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:20 np0005539550 nova_compute[257631]: 2025-11-29 08:23:20.954 257641 DEBUG oslo_concurrency.processutils [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
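Before building the RBD disk definitions, nova shells out to the ceph CLI (as logged above) to discover the monitor addresses that later appear as <host> elements in the guest XML. The same lookup sketched in Python (same command line; monmap field names per the standard JSON dump, hedged with .get):

    # Re-run the monitor lookup nova performs before composing RBD disk XML.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        # e.g. 192.168.122.100:6789/0
        print(mon.get("name"), mon.get("addr"))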
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.002 257641 DEBUG oslo_concurrency.processutils [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534539133' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.427 257641 DEBUG oslo_concurrency.processutils [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.429 257641 DEBUG nova.virt.libvirt.vif [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-161781417',display_name='tempest-ServerActionsTestJSON-server-161781417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-161781417',id=125,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-u6xh7rnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.430 257641 DEBUG nova.network.os_vif_util [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.431 257641 DEBUG nova.network.os_vif_util [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
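nova_to_osvif_vif converts the Neutron-style VIF dict into an os-vif versioned object before the 'ovs' plugin is invoked. A hand-built equivalent of the converted object shown above (field names as in os_vif.objects.vif; a sketch, not nova's converter, with the network object omitted for brevity):

    # Construct the same VIFOpenVSwitch object os-vif reports above (sketch).
    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # loads the os-vif plugins, including 'ovs'
    vif = vif_obj.VIFOpenVSwitch(
        id="573e5085-7652-4c15-b353-cd8eff879375",
        address="fa:16:3e:43:4e:b0",
        bridge_name="br-int",
        vif_name="tap573e5085-76",
        has_traffic_filtering=True,
        preserve_on_delete=False,
    )
    print(vif)
    # os_vif.plug(vif, instance_info) is what performs the actual plug below.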
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.432 257641 DEBUG nova.objects.instance [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.498 257641 DEBUG nova.virt.libvirt.driver [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <uuid>f09f0e1a-bc69-4cd4-b504-3ba084ffa875</uuid>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <name>instance-0000007d</name>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestJSON-server-161781417</nova:name>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:23:20</nova:creationTime>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:user uuid="80ceb9112b3a4f119c05f21fd617af11">tempest-ServerActionsTestJSON-2111371935-project-member</nova:user>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:project uuid="26e3508b949a4dbf960d7befc8f27869">tempest-ServerActionsTestJSON-2111371935</nova:project>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <nova:port uuid="573e5085-7652-4c15-b353-cd8eff879375">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <entry name="serial">f09f0e1a-bc69-4cd4-b504-3ba084ffa875</entry>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <entry name="uuid">f09f0e1a-bc69-4cd4-b504-3ba084ffa875</entry>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f09f0e1a-bc69-4cd4-b504-3ba084ffa875_disk.config">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:43:4e:b0"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <target dev="tap573e5085-76"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875/console.log" append="off"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:23:21 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:23:21 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:23:21 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:23:21 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
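The XML above is what nova hands to libvirt to define the domain. Doing the equivalent by hand with the libvirt Python bindings would look like this (a sketch; the file name is hypothetical, and nova itself drives this through nova.virt.libvirt.guest):

    # Define and start a domain from an XML file, as nova's driver
    # ultimately does with the XML logged above.
    import libvirt

    with open("instance-0000007d.xml") as f:   # hypothetical copy of the XML
        xml = f.read()
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)   # persist the definition
    dom.create()                # power the guest on
    print(dom.name(), dom.isActive())
    conn.close()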
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.500 257641 DEBUG nova.virt.libvirt.driver [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.501 257641 DEBUG nova.virt.libvirt.driver [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.502 257641 DEBUG nova.virt.libvirt.vif [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-161781417',display_name='tempest-ServerActionsTestJSON-server-161781417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-161781417',id=125,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-u6xh7rnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.502 257641 DEBUG nova.network.os_vif_util [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.502 257641 DEBUG nova.network.os_vif_util [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.503 257641 DEBUG os_vif [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.504 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.504 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.504 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.508 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap573e5085-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.508 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap573e5085-76, col_values=(('external_ids', {'iface-id': '573e5085-7652-4c15-b353-cd8eff879375', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:4e:b0', 'vm-uuid': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.510 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 NetworkManager[49039]: <info>  [1764404601.5109] manager: (tap573e5085-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.515 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.516 257641 INFO os_vif [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76')#033[00m
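The plug just confirmed corresponds to the two ovsdbapp commands a few lines up (AddPortCommand plus the external_ids DbSetCommand); the same change can be expressed as a single ovs-vsctl call, shown here via subprocess with the exact bridge, port, and external_ids from the log:

    # CLI equivalent of the ovsdbapp transaction that plugged the tap device.
    import subprocess

    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap573e5085-76",
        "--", "set", "Interface", "tap573e5085-76",
        "external_ids:iface-id=573e5085-7652-4c15-b353-cd8eff879375",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:43:4e:b0",
        "external_ids:vm-uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875",
    ])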
Nov 29 03:23:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 274 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.7 MiB/s wr, 252 op/s
Nov 29 03:23:21 np0005539550 kernel: tap573e5085-76: entered promiscuous mode
Nov 29 03:23:21 np0005539550 NetworkManager[49039]: <info>  [1764404601.5909] manager: (tap573e5085-76): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.590 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:21Z|00544|binding|INFO|Claiming lport 573e5085-7652-4c15-b353-cd8eff879375 for this chassis.
Nov 29 03:23:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:21Z|00545|binding|INFO|573e5085-7652-4c15-b353-cd8eff879375: Claiming fa:16:3e:43:4e:b0 10.100.0.6
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.596 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:4e:b0 10.100.0.6'], port_security=['fa:16:3e:43:4e:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=573e5085-7652-4c15-b353-cd8eff879375) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.597 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 573e5085-7652-4c15-b353-cd8eff879375 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.598 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
Nov 29 03:23:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:21Z|00546|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 ovn-installed in OVS
Nov 29 03:23:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:21Z|00547|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 up in Southbound
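Once ovn-controller claims the logical port and marks it up, the binding is recorded in the OVN Southbound database; it can be inspected with ovn-sbctl (wrapped in subprocess here for consistency with the other sketches):

    # Confirm the chassis claim recorded in the OVN Southbound DB.
    import subprocess

    print(subprocess.check_output([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=573e5085-7652-4c15-b353-cd8eff879375",
    ]).decode())
    # The 'chassis' column should now reference compute-0.ctlplane.example.com.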
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.608 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.611 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.610 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef809c7-300a-4462-962f-531963ca036c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.612 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.613 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.613 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[47f97e32-743a-4dcc-8489-50f4926dac53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.614 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[374b56da-4251-4e0c-a9d3-c4f3bf20ba09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 systemd-udevd[337868]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:21 np0005539550 systemd-machined[216673]: New machine qemu-65-instance-0000007d.
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.625 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[97c21573-ad56-46d8-afe6-4e38dbeffd81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 NetworkManager[49039]: <info>  [1764404601.6311] device (tap573e5085-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:21 np0005539550 NetworkManager[49039]: <info>  [1764404601.6318] device (tap573e5085-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:21 np0005539550 systemd[1]: Started Virtual Machine qemu-65-instance-0000007d.
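systemd-machined registers every QEMU guest as a machine, which is what the two lines above record; the registration can be verified with machinectl (a small sketch):

    # Inspect the machine systemd-machined just registered for this guest.
    import subprocess

    print(subprocess.check_output(
        ["machinectl", "status", "qemu-65-instance-0000007d"]).decode())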
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.648 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2d6f71-aa03-40f1-b256-12850a54a1e0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.676 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1ec83cca-2589-419f-8523-e3f80b169bac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 NetworkManager[49039]: <info>  [1764404601.6821] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.681 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3b45a823-c115-44a7-b0a2-ff7eb6e9cdb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 systemd-udevd[337871]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000023s ======
Nov 29 03:23:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:21.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.712 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1b7d8c-fa4a-4ae4-962f-1806af511929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.715 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[49e5b4c9-5dd8-452d-9801-45999592b3e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 NetworkManager[49039]: <info>  [1764404601.7447] device (tap58fd104d-40): carrier: link connected
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.750 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[11cc1ea1-7311-49a0-a5b9-0c10d86e5b12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.768 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[45b570e3-45cd-4d9d-892e-ea12b20caf25]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764940, 'reachable_time': 16200, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337899, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.785 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bc38994a-cbcf-4381-9431-1db502766f0f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 764940, 'tstamp': 764940}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337900, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.805 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[77a6a146-960c-4cb7-9bdc-b2b84061e5a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 155], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764940, 'reachable_time': 16200, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 337901, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
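
The two privsep replies above are pyroute2 netlink messages fetched from inside the ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace: an RTM_NEWADDR for the link-local fe80:: address and an RTM_NEWLINK describing the veth end tap58fd104d-41. A minimal sketch of reproducing the same query by hand, assuming root and pyroute2 (the library the agent drives through oslo.privsep); namespace and ifname are copied from the replies:

    from pyroute2 import NetNS

    netns = 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788'
    with NetNS(netns) as ip:
        idx = ip.link_lookup(ifname='tap58fd104d-41')[0]
        link = ip.get_links(idx)[0]                 # one RTM_NEWLINK message
        print(link.get_attr('IFLA_OPERSTATE'),      # 'UP' in the reply above
              link.get_attr('IFLA_ADDRESS'))        # 'fa:16:3e:a8:26:1e'
        for addr in ip.get_addr(index=idx):         # RTM_NEWADDR messages
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
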
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.835 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4ef1ec-e4b9-4a6e-ab6a-dbba65a3e5c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.895 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e8e19649-0af9-4240-b078-6b00d158a5f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.897 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.897 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.897 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:21 np0005539550 NetworkManager[49039]: <info>  [1764404601.9000] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Nov 29 03:23:21 np0005539550 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.902 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
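
The three ovsdbapp commands logged here re-plug the metadata tap: a DelPortCommand against br-ex (a no-op, hence "Transaction caused no change"), an AddPortCommand into br-int, and a DbSetCommand stamping the Interface row with the OVN iface-id that ovn-controller then (re)binds. A sketch of the same sequence against a local ovsdb-server, assuming the standard unix socket path; the agent keeps a long-lived IDL connection and, as the "Running txn n=1" lines show, issued each command in its own transaction, whereas this sketch batches them:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed socket path
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap58fd104d-40', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap58fd104d-40', may_exist=True))
        txn.add(api.db_set('Interface', 'tap58fd104d-40',
                           ('external_ids',
                            {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'})))
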
Nov 29 03:23:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:21Z|00548|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 nova_compute[257631]: 2025-11-29 08:23:21.925 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.926 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
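
The ENOENT above is the expected first-run path: the pidfile probe returns None when no haproxy has been spawned yet for this network, which is what sends the agent down the "generate a config and start a fresh proxy" branch instead of reloading an existing one. The probe amounts to roughly this (a sketch; the real helper is neutron.agent.linux.utils.get_value_from_file, path copied from the log):

    import errno

    def get_value_from_file(path, converter=int):
        # A missing file yields None -- the "Unable to access" line above.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except OSError as e:
            if e.errno == errno.ENOENT:
                return None
            raise

    pid = get_value_from_file('/var/lib/neutron/external/pids/'
                              '58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy')
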
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.927 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e742250c-4e79-4487-ac4c-d0a5aca82921]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.927 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:23:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:21.928 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
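
The rendered config binds the well-known metadata address 169.254.169.254:80 inside the namespace, forwards requests to the UNIX socket backend /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with "/" as a UNIX socket), and tags each request with X-OVN-Network-ID so the metadata service can resolve which network the instance sits on. Stripped of the rootwrap indirection, the spawn logged above reduces to roughly the following (a sketch; every value is copied from the command line, and root is required for ip netns exec):

    import subprocess

    cmd = [
        'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788',
        'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf',
    ]
    # haproxy backgrounds itself ("daemon" in the config), so run() returns once
    # the parent has forked the worker seen in the NOTICE lines further down.
    subprocess.run(cmd, check=True)
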
Nov 29 03:23:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:22 np0005539550 podman[337933]: 2025-11-29 08:23:22.290233858 +0000 UTC m=+0.048791793 container create 1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:23:22 np0005539550 systemd[1]: Started libpod-conmon-1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb.scope.
Nov 29 03:23:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:23:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6994ccde9126d4b6501d0389fe87dcaf9a97dbc06bbe12045f200246ddd0995b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:22 np0005539550 podman[337933]: 2025-11-29 08:23:22.264749516 +0000 UTC m=+0.023307451 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:23:22 np0005539550 podman[337933]: 2025-11-29 08:23:22.365155706 +0000 UTC m=+0.123713661 container init 1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:22 np0005539550 podman[337933]: 2025-11-29 08:23:22.370431522 +0000 UTC m=+0.128989457 container start 1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:23:22 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[337948]: [NOTICE]   (337952) : New worker (337954) forked
Nov 29 03:23:22 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[337948]: [NOTICE]   (337952) : Loading success.
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.437 257641 DEBUG nova.network.neutron [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updating instance_info_cache with network_info: [{"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.474 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Releasing lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.474 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Instance network_info: |[{"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.475 257641 DEBUG oslo_concurrency.lockutils [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.475 257641 DEBUG nova.network.neutron [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Refreshing network info cache for port 72d454e3-eb73-4ea1-94dd-f13b08f74cda _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.477 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Start _get_guest_xml network_info=[{"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.482 257641 WARNING nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.486 257641 DEBUG nova.virt.libvirt.host [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.488 257641 DEBUG nova.virt.libvirt.host [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.490 257641 DEBUG nova.virt.libvirt.host [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.491 257641 DEBUG nova.virt.libvirt.host [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.492 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.492 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.493 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.493 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.493 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.493 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.494 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.494 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.494 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.494 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.494 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.495 257641 DEBUG nova.virt.hardware [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
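
The topology walk above is fully determined by the unset limits and preferences (0 means "no constraint"): with sockets/cores/threads each capped only at 65536, nova enumerates the factorizations of the vCPU count, and a 1-vCPU m1.nano admits only 1:1:1. A simplified model of that enumeration (the real logic is nova.virt.hardware._get_possible_cpu_topologies; this sketch only captures the product-equals-vcpus idea):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    # [(1, 1, 1)] -- matching "Got 1 possible topologies" in the log
    print(list(possible_topologies(1)))
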
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.498 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
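
Nova shells out to ceph mon dump here to learn the monitor addresses that later appear as the <host> entries inside the rbd <source> elements of the guest XML below. A sketch of the same lookup, assuming the classic JSON layout with a 'mons' list whose 'addr' fields look like 'ip:port/nonce' (the exact field set varies across Ceph releases; --id and --conf are copied from the log):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    dump = json.loads(out)
    # e.g. 'addr': '192.168.122.100:6789/0' -> ['192.168.122.100', '6789']
    mons = [m['addr'].split('/')[0].rsplit(':', 1) for m in dump['mons']]
    print(mons)
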
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.589 257641 DEBUG nova.compute.manager [req-bc295e8e-5a1b-4929-8ff6-5c6f7bab1f15 req-b24cc1cf-1fa4-4f97-ae9b-056321e21263 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.589 257641 DEBUG oslo_concurrency.lockutils [req-bc295e8e-5a1b-4929-8ff6-5c6f7bab1f15 req-b24cc1cf-1fa4-4f97-ae9b-056321e21263 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.590 257641 DEBUG oslo_concurrency.lockutils [req-bc295e8e-5a1b-4929-8ff6-5c6f7bab1f15 req-b24cc1cf-1fa4-4f97-ae9b-056321e21263 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.590 257641 DEBUG oslo_concurrency.lockutils [req-bc295e8e-5a1b-4929-8ff6-5c6f7bab1f15 req-b24cc1cf-1fa4-4f97-ae9b-056321e21263 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.590 257641 DEBUG nova.compute.manager [req-bc295e8e-5a1b-4929-8ff6-5c6f7bab1f15 req-b24cc1cf-1fa4-4f97-ae9b-056321e21263 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.590 257641 WARNING nova.compute.manager [req-bc295e8e-5a1b-4929-8ff6-5c6f7bab1f15 req-b24cc1cf-1fa4-4f97-ae9b-056321e21263 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state stopped and task_state powering-on.#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.634 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:22.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.922 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for f09f0e1a-bc69-4cd4-b504-3ba084ffa875 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.922 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404602.9216318, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.922 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.926 257641 DEBUG nova.compute.manager [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:23:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/643474474' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.933 257641 INFO nova.virt.libvirt.driver [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance rebooted successfully.#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.934 257641 DEBUG nova.compute.manager [None req-a94dd3fb-abcc-43e8-b1c0-87cbb6190287 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.943 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.946 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.953 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.979 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:22 np0005539550 nova_compute[257631]: 2025-11-29 08:23:22.984 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.008 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.009 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404602.92518, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.009 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Started (Lifecycle Event)#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.046 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.052 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
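
The two sync decisions above follow from a simple comparison of what libvirt reports against the database: a pending task_state (powering-on) suppresses the sync entirely, and matching power states need no update. The numeric states in the log are nova.compute.power_state values (1 = RUNNING, 4 = SHUTDOWN). Schematically:

    RUNNING, SHUTDOWN = 1, 4   # nova.compute.power_state values seen in the log

    def sync_decision(task_state, db_power, vm_power):
        if task_state is not None:
            return 'skip'        # "instance has a pending task (powering-on). Skip."
        if db_power != vm_power:
            return 'update-db'   # DB is brought in line with what libvirt reports
        return 'in-sync'

    print(sync_decision('powering-on', SHUTDOWN, RUNNING))  # skip (Resumed event)
    print(sync_decision(None, RUNNING, RUNNING))            # in-sync (Started event)
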
Nov 29 03:23:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/455776818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.408 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.410 257641 DEBUG nova.virt.libvirt.vif [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-207824385',display_name='tempest-ServerActionsTestOtherA-server-207824385',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-207824385',id=128,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-yo5nxlbx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:17Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=d17ba263-68c7-4428-9d64-9a809e93a457,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.410 257641 DEBUG nova.network.os_vif_util [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.411 257641 DEBUG nova.network.os_vif_util [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=72d454e3-eb73-4ea1-94dd-f13b08f74cda,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d454e3-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.412 257641 DEBUG nova.objects.instance [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'pci_devices' on Instance uuid d17ba263-68c7-4428-9d64-9a809e93a457 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.437 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <uuid>d17ba263-68c7-4428-9d64-9a809e93a457</uuid>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <name>instance-00000080</name>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerActionsTestOtherA-server-207824385</nova:name>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:23:22</nova:creationTime>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:user uuid="1552f15deb524705a9456cbe9b54c429">tempest-ServerActionsTestOtherA-1954650991-project-member</nova:user>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:project uuid="0bace34c102e4d56b089fd695d324f10">tempest-ServerActionsTestOtherA-1954650991</nova:project>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <nova:port uuid="72d454e3-eb73-4ea1-94dd-f13b08f74cda">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <entry name="serial">d17ba263-68c7-4428-9d64-9a809e93a457</entry>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <entry name="uuid">d17ba263-68c7-4428-9d64-9a809e93a457</entry>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/d17ba263-68c7-4428-9d64-9a809e93a457_disk">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/d17ba263-68c7-4428-9d64-9a809e93a457_disk.config">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:0a:a1:5d"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <target dev="tap72d454e3-eb"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/console.log" append="off"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:23:23 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:23:23 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:23:23 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:23:23 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
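
_get_guest_xml ends here; the step that follows (not shown verbatim in the log) is handing this document to libvirt. Defining and starting the same domain by hand would look roughly like this with the libvirt Python bindings, assuming the XML has been saved to a file and a read-write qemu:///system connection (the driver URI nova uses for kvm):

    import libvirt

    with open('instance-00000080.xml') as f:   # hypothetical dump of the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)                  # persistent definition, as nova makes
    dom.create()                               # boots the guest; nova then waits for
                                               # the network-vif-plugged event below
    conn.close()
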
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.438 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Preparing to wait for external event network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.438 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.438 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.439 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
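
The lock dance on "d17ba263-...-events" registers a waiter for network-vif-plugged before the VIF is actually plugged, so neutron's notification cannot race past it; the earlier "No waiting events found" / "Received unexpected event" lines for instance f09f0e1a show the pop side firing when nothing was registered. A stripped-down model of that registry (names simplified from nova.compute.manager.InstanceEvents; the real class keys events per instance and logs exactly the lock lines seen above):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}            # instance uuid -> {event name: Event}
            self._lock = threading.Lock()

        def prepare(self, uuid, name):
            # The "Acquiring/acquired/released ...-events" lock lines above.
            with self._lock:
                return self._events.setdefault(uuid, {}).setdefault(
                    name, threading.Event())

        def pop(self, uuid, name):
            with self._lock:
                event = self._events.get(uuid, {}).pop(name, None)
            if event is not None:
                event.set()              # wakes the thread blocked in wait()
            return event                 # None -> "No waiting events found ..."

    events = InstanceEvents()
    uuid = 'd17ba263-68c7-4428-9d64-9a809e93a457'
    waiter = events.prepare(uuid, 'network-vif-plugged')  # before plugging the VIF
    events.pop(uuid, 'network-vif-plugged')               # the neutron callback side
    waiter.wait(timeout=300)
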
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.439 257641 DEBUG nova.virt.libvirt.vif [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-207824385',display_name='tempest-ServerActionsTestOtherA-server-207824385',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-207824385',id=128,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-yo5nxlbx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:17Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=d17ba263-68c7-4428-9d64-9a809e93a457,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.439 257641 DEBUG nova.network.os_vif_util [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.440 257641 DEBUG nova.network.os_vif_util [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=72d454e3-eb73-4ea1-94dd-f13b08f74cda,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d454e3-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.440 257641 DEBUG os_vif [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=72d454e3-eb73-4ea1-94dd-f13b08f74cda,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d454e3-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.441 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.441 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.442 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.443 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.443 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72d454e3-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.444 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72d454e3-eb, col_values=(('external_ids', {'iface-id': '72d454e3-eb73-4ea1-94dd-f13b08f74cda', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:a1:5d', 'vm-uuid': 'd17ba263-68c7-4428-9d64-9a809e93a457'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.445 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:23 np0005539550 NetworkManager[49039]: <info>  [1764404603.4463] manager: (tap72d454e3-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.447 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.452 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.453 257641 INFO os_vif [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=72d454e3-eb73-4ea1-94dd-f13b08f74cda,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d454e3-eb')#033[00m
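The plug recorded above amounts to three OVSDB operations: an idempotent AddBridgeCommand for br-int (which caused no change), an AddPortCommand for the tap device, and a DbSetCommand writing the iface-id/attached-mac external_ids that ovn-controller matches when claiming the port. A standalone sketch of the same transaction with ovsdbapp, assuming a typical local OVSDB socket path:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed socket path for a typical host; os-vif wires this up internally.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# One transaction mirroring the AddPortCommand + DbSetCommand logged above.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap72d454e3-eb', may_exist=True))
    txn.add(api.db_set('Interface', 'tap72d454e3-eb',
                       ('external_ids', {
                           'iface-id': '72d454e3-eb73-4ea1-94dd-f13b08f74cda',
                           'iface-status': 'active',
                           'attached-mac': 'fa:16:3e:0a:a1:5d'})))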
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.524 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.524 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.525 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] No VIF found with MAC fa:16:3e:0a:a1:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.525 257641 INFO nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Using config drive#033[00m
Nov 29 03:23:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 298 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 210 op/s
Nov 29 03:23:23 np0005539550 nova_compute[257631]: 2025-11-29 08:23:23.553 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:23.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.606 257641 INFO nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Creating config drive at /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/disk.config#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.611 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvcwwrn9m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.685 257641 DEBUG nova.compute.manager [req-2d4af686-f308-4f60-922e-965e8be13a92 req-9c09c90f-9983-41ec-9733-af69f87bc707 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.686 257641 DEBUG oslo_concurrency.lockutils [req-2d4af686-f308-4f60-922e-965e8be13a92 req-9c09c90f-9983-41ec-9733-af69f87bc707 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.686 257641 DEBUG oslo_concurrency.lockutils [req-2d4af686-f308-4f60-922e-965e8be13a92 req-9c09c90f-9983-41ec-9733-af69f87bc707 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.686 257641 DEBUG oslo_concurrency.lockutils [req-2d4af686-f308-4f60-922e-965e8be13a92 req-9c09c90f-9983-41ec-9733-af69f87bc707 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.687 257641 DEBUG nova.compute.manager [req-2d4af686-f308-4f60-922e-965e8be13a92 req-9c09c90f-9983-41ec-9733-af69f87bc707 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.687 257641 WARNING nova.compute.manager [req-2d4af686-f308-4f60-922e-965e8be13a92 req-9c09c90f-9983-41ec-9733-af69f87bc707 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:23:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:24.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.744 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvcwwrn9m" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.773 257641 DEBUG nova.storage.rbd_utils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] rbd image d17ba263-68c7-4428-9d64-9a809e93a457_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.776 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/disk.config d17ba263-68c7-4428-9d64-9a809e93a457_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.936 257641 DEBUG oslo_concurrency.processutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/disk.config d17ba263-68c7-4428-9d64-9a809e93a457_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.937 257641 INFO nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Deleting local config drive /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457/disk.config because it was imported into RBD.#033[00m
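The config-drive sequence above is: build an ISO9660 image labelled config-2 with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config, then delete the local staging copy. A condensed sketch of those steps using oslo.concurrency; the metadata source directory is an assumption (nova uses a temporary directory):

import os
from oslo_concurrency import processutils

uuid = 'd17ba263-68c7-4428-9d64-9a809e93a457'
iso = f'/var/lib/nova/instances/{uuid}/disk.config'

# 1. Pack the metadata tree into an ISO9660 image labelled config-2.
processutils.execute('mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
                     '-allow-multidot', '-l', '-J', '-r', '-V', 'config-2',
                     '/tmp/metadata-dir')
# 2. Import the ISO into the Ceph 'vms' pool under the name libvirt will attach.
processutils.execute('rbd', 'import', '--pool', 'vms', iso,
                     f'{uuid}_disk.config', '--image-format=2',
                     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
# 3. Once imported into RBD, the local ISO is only a staging file.
os.unlink(iso)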
Nov 29 03:23:24 np0005539550 kernel: tap72d454e3-eb: entered promiscuous mode
Nov 29 03:23:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:24Z|00549|binding|INFO|Claiming lport 72d454e3-eb73-4ea1-94dd-f13b08f74cda for this chassis.
Nov 29 03:23:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:24Z|00550|binding|INFO|72d454e3-eb73-4ea1-94dd-f13b08f74cda: Claiming fa:16:3e:0a:a1:5d 10.100.0.6
Nov 29 03:23:24 np0005539550 nova_compute[257631]: 2025-11-29 08:23:24.979 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:24 np0005539550 NetworkManager[49039]: <info>  [1764404604.9838] manager: (tap72d454e3-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Nov 29 03:23:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:24.993 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:a1:5d 10.100.0.6'], port_security=['fa:16:3e:0a:a1:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd17ba263-68c7-4428-9d64-9a809e93a457', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2702f195-789d-4a37-affe-a8159dccabea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=72d454e3-eb73-4ea1-94dd-f13b08f74cda) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:24.994 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 72d454e3-eb73-4ea1-94dd-f13b08f74cda in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 bound to our chassis#033[00m
Nov 29 03:23:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:24.996 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7fc1dfc3-8d7f-4854-980d-37a93f366035#033[00m
Nov 29 03:23:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:25Z|00551|binding|INFO|Setting lport 72d454e3-eb73-4ea1-94dd-f13b08f74cda ovn-installed in OVS
Nov 29 03:23:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:25Z|00552|binding|INFO|Setting lport 72d454e3-eb73-4ea1-94dd-f13b08f74cda up in Southbound
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.005 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.005 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.008 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[439bfd12-06dd-445a-8053-5643dde95c58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.009 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7fc1dfc3-81 in ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.011 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.011 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7fc1dfc3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.011 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb5042b-4089-46ab-a49c-1a276ba8ff5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.016 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0affa0bb-312b-4ac0-b925-6b24885c42d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 systemd-machined[216673]: New machine qemu-66-instance-00000080.
Nov 29 03:23:25 np0005539550 systemd-udevd[338145]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.030 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[fc70f4a7-666b-4c0b-833f-de25f3221a4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 NetworkManager[49039]: <info>  [1764404605.0393] device (tap72d454e3-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:25 np0005539550 NetworkManager[49039]: <info>  [1764404605.0401] device (tap72d454e3-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:25 np0005539550 systemd[1]: Started Virtual Machine qemu-66-instance-00000080.
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.043 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e38a99-01d9-4ad0-ad5d-7e3cc3292cff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.072 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[11fc7a88-884d-4317-b176-c8561fdaa2b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.076 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc3abc3-dbfc-4844-965b-f61b45e4d520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 NetworkManager[49039]: <info>  [1764404605.0783] manager: (tap7fc1dfc3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Nov 29 03:23:25 np0005539550 systemd-udevd[338148]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.112 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd2fba2-c6c7-4d6b-ac7a-c3a406b52f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.116 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ceab63d4-9eb1-4e84-bfa8-5fcdd4058653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 NetworkManager[49039]: <info>  [1764404605.1374] device (tap7fc1dfc3-80): carrier: link connected
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.143 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3e458c-cd70-46ed-88a1-ef0403a4c35b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.164 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3b50e1-a7fd-4ce7-87b3-34a0366be68b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765280, 'reachable_time': 15050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338176, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.180 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eba1c5b8-755c-4657-8b5a-70b98d52bfb9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:273e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 765280, 'tstamp': 765280}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338177, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.194 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[75454bae-dc8a-4910-954e-d10c11f3d9ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7fc1dfc3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:27:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765280, 'reachable_time': 15050, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338178, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
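The two privsep replies above are raw pyroute2 RTM_NEWLINK netlink messages for the veth half tap7fc1dfc3-81, fetched from inside the ovnmeta namespace on the agent's behalf. The same state can be read directly with pyroute2 (which is what the privsep daemon wraps); a minimal sketch, with the namespace and interface names taken from the log lines above:

from pyroute2 import NetNS

# Enter the metadata namespace and dump the veth endpoint's link state;
# requires privileges sufficient to open the namespace.
with NetNS('ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035') as ns:
    for link in ns.get_links():
        if link.get_attr('IFLA_IFNAME') == 'tap7fc1dfc3-81':
            print(link['state'],
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_MTU'))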
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.224 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0b19ae0c-fa2e-49bd-bf3f-0bcd3f575587]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.280 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30c78752-2017-4ac7-bed2-5b68d71441e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.281 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.282 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.282 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fc1dfc3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:25 np0005539550 kernel: tap7fc1dfc3-80: entered promiscuous mode
Nov 29 03:23:25 np0005539550 NetworkManager[49039]: <info>  [1764404605.2849] manager: (tap7fc1dfc3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.285 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.287 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7fc1dfc3-80, col_values=(('external_ids', {'iface-id': '79109459-2a40-4b69-936e-ac2a2aa77985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:25Z|00553|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.289 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.290 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[29c0ec4f-47f8-4ebe-8dd2-de57a269d20e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.290 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/7fc1dfc3-8d7f-4854-980d-37a93f366035.pid.haproxy
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 7fc1dfc3-8d7f-4854-980d-37a93f366035
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
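The rendered config binds haproxy to 169.254.169.254:80 inside the ovnmeta namespace, forwards requests to the metadata agent's UNIX socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the agent can resolve the caller. A hypothetical smoke test from the host, not part of the agent's workflow: it needs root, and since the metadata service identifies instances by source IP, a meaningful body only comes back for traffic that maps to a port on this network.

import subprocess

ns = 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035'
# Hit the proxy the agent just spawned from inside its namespace and
# report only the HTTP status code.
result = subprocess.run(
    ['ip', 'netns', 'exec', ns, 'curl', '-s', '-o', '/dev/null',
     '-w', '%{http_code}',
     'http://169.254.169.254/openstack/latest/meta_data.json'],
    capture_output=True, text=True, check=True)
print('metadata proxy answered:', result.stdout)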
Nov 29 03:23:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:25.291 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'env', 'PROCESS_TAG=haproxy-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7fc1dfc3-8d7f-4854-980d-37a93f366035.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.305 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.361 257641 DEBUG nova.compute.manager [req-99526403-35db-4b60-94e4-6fe30d9fe7d6 req-21c99e8a-f54d-4970-8429-6efec4869b4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.362 257641 DEBUG oslo_concurrency.lockutils [req-99526403-35db-4b60-94e4-6fe30d9fe7d6 req-21c99e8a-f54d-4970-8429-6efec4869b4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.362 257641 DEBUG oslo_concurrency.lockutils [req-99526403-35db-4b60-94e4-6fe30d9fe7d6 req-21c99e8a-f54d-4970-8429-6efec4869b4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.362 257641 DEBUG oslo_concurrency.lockutils [req-99526403-35db-4b60-94e4-6fe30d9fe7d6 req-21c99e8a-f54d-4970-8429-6efec4869b4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.363 257641 DEBUG nova.compute.manager [req-99526403-35db-4b60-94e4-6fe30d9fe7d6 req-21c99e8a-f54d-4970-8429-6efec4869b4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Processing event network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:23:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 309 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 241 op/s
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.590 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.590 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404605.5892982, d17ba263-68c7-4428-9d64-9a809e93a457 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.591 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] VM Started (Lifecycle Event)#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.594 257641 DEBUG nova.network.neutron [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updated VIF entry in instance network info cache for port 72d454e3-eb73-4ea1-94dd-f13b08f74cda. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.595 257641 DEBUG nova.network.neutron [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updating instance_info_cache with network_info: [{"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.597 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.601 257641 INFO nova.virt.libvirt.driver [-] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Instance spawned successfully.#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.601 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:23:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:25Z|00554|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:23:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:25Z|00555|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.632 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.633 257641 DEBUG oslo_concurrency.lockutils [req-ff6fa4b1-a2f0-438c-a141-d9a461d5d927 req-7bd15ff0-b3e0-48fe-b363-1b51f23c8d6d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.637 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.638 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.639 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.639 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.640 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.640 257641 DEBUG nova.virt.libvirt.driver [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.645 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:25 np0005539550 podman[338252]: 2025-11-29 08:23:25.660942191 +0000 UTC m=+0.046419565 container create 54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.681 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.682 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404605.590435, d17ba263-68c7-4428-9d64-9a809e93a457 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.682 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] VM Paused (Lifecycle Event)
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.688 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.700 257641 INFO nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Took 8.22 seconds to spawn the instance on the hypervisor.
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.701 257641 DEBUG nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:23:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:25.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.703 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:23:25 np0005539550 systemd[1]: Started libpod-conmon-54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12.scope.
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.710 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404605.5973775, d17ba263-68c7-4428-9d64-9a809e93a457 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.710 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] VM Resumed (Lifecycle Event)
Nov 29 03:23:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:23:25 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cecfee3fad69887674f56fe024fe570f3aae4a43783c246245efe83e080363cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:25 np0005539550 podman[338252]: 2025-11-29 08:23:25.638876741 +0000 UTC m=+0.024354115 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:23:25 np0005539550 podman[338252]: 2025-11-29 08:23:25.736539865 +0000 UTC m=+0.122017259 container init 54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.739 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:23:25 np0005539550 podman[338252]: 2025-11-29 08:23:25.742172881 +0000 UTC m=+0.127650255 container start 54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.743 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:23:25 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [NOTICE]   (338268) : New worker (338270) forked
Nov 29 03:23:25 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [NOTICE]   (338268) : Loading success.
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.772 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.791 257641 INFO nova.compute.manager [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Took 9.22 seconds to build instance.
Nov 29 03:23:25 np0005539550 nova_compute[257631]: 2025-11-29 08:23:25.810 257641 DEBUG oslo_concurrency.lockutils [None req-6b3047af-4aa9-4206-bee3-8b2aaa45ce95 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:26.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:27 np0005539550 nova_compute[257631]: 2025-11-29 08:23:27.446 257641 DEBUG nova.compute.manager [req-5a2a01de-dc8c-4547-9a6e-a02dfe624e3d req-f0ed7eb4-5008-441e-b764-b265aab2339a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:27 np0005539550 nova_compute[257631]: 2025-11-29 08:23:27.446 257641 DEBUG oslo_concurrency.lockutils [req-5a2a01de-dc8c-4547-9a6e-a02dfe624e3d req-f0ed7eb4-5008-441e-b764-b265aab2339a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:27 np0005539550 nova_compute[257631]: 2025-11-29 08:23:27.446 257641 DEBUG oslo_concurrency.lockutils [req-5a2a01de-dc8c-4547-9a6e-a02dfe624e3d req-f0ed7eb4-5008-441e-b764-b265aab2339a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:27 np0005539550 nova_compute[257631]: 2025-11-29 08:23:27.447 257641 DEBUG oslo_concurrency.lockutils [req-5a2a01de-dc8c-4547-9a6e-a02dfe624e3d req-f0ed7eb4-5008-441e-b764-b265aab2339a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:27 np0005539550 nova_compute[257631]: 2025-11-29 08:23:27.447 257641 DEBUG nova.compute.manager [req-5a2a01de-dc8c-4547-9a6e-a02dfe624e3d req-f0ed7eb4-5008-441e-b764-b265aab2339a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] No waiting events found dispatching network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:23:27 np0005539550 nova_compute[257631]: 2025-11-29 08:23:27.447 257641 WARNING nova.compute.manager [req-5a2a01de-dc8c-4547-9a6e-a02dfe624e3d req-f0ed7eb4-5008-441e-b764-b265aab2339a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received unexpected event network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda for instance with vm_state active and task_state None.
Nov 29 03:23:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 318 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 244 op/s
Nov 29 03:23:27 np0005539550 nova_compute[257631]: 2025-11-29 08:23:27.636 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:27.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:28 np0005539550 nova_compute[257631]: 2025-11-29 08:23:28.446 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:28.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.704 257641 DEBUG nova.compute.manager [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-changed-72d454e3-eb73-4ea1-94dd-f13b08f74cda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:29.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.706 257641 DEBUG nova.compute.manager [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Refreshing instance network info cache due to event network-changed-72d454e3-eb73-4ea1-94dd-f13b08f74cda. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.708 257641 DEBUG oslo_concurrency.lockutils [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.708 257641 DEBUG oslo_concurrency.lockutils [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.709 257641 DEBUG nova.network.neutron [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Refreshing network info cache for port 72d454e3-eb73-4ea1-94dd-f13b08f74cda _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.914 257641 DEBUG nova.objects.instance [None req-ab6ed900-b5cf-4d37-a8a1-8130cdd9ece0 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.946 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404609.9458983, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.946 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Paused (Lifecycle Event)
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.963 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.975 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:23:29 np0005539550 nova_compute[257631]: 2025-11-29 08:23:29.994 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] During sync_power_state the instance has a pending task (suspending). Skip.
Nov 29 03:23:30 np0005539550 kernel: tap573e5085-76 (unregistering): left promiscuous mode
Nov 29 03:23:30 np0005539550 NetworkManager[49039]: <info>  [1764404610.6963] device (tap573e5085-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:30.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:30Z|00556|binding|INFO|Releasing lport 573e5085-7652-4c15-b353-cd8eff879375 from this chassis (sb_readonly=0)
Nov 29 03:23:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:30Z|00557|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 down in Southbound
Nov 29 03:23:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:30Z|00558|binding|INFO|Removing iface tap573e5085-76 ovn-installed in OVS
Nov 29 03:23:30 np0005539550 nova_compute[257631]: 2025-11-29 08:23:30.747 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:30 np0005539550 nova_compute[257631]: 2025-11-29 08:23:30.753 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:30.756 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:4e:b0 10.100.0.6'], port_security=['fa:16:3e:43:4e:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=573e5085-7652-4c15-b353-cd8eff879375) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:23:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:30.758 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 573e5085-7652-4c15-b353-cd8eff879375 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis
Nov 29 03:23:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:30.759 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:23:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:30.760 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[daaeb638-22a6-4c46-a3c5-acbbc2c39497]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:30.762 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore
Nov 29 03:23:30 np0005539550 nova_compute[257631]: 2025-11-29 08:23:30.773 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:30 np0005539550 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 29 03:23:30 np0005539550 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000007d.scope: Consumed 8.664s CPU time.
Nov 29 03:23:30 np0005539550 podman[338286]: 2025-11-29 08:23:30.784908715 +0000 UTC m=+0.109851127 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:23:30 np0005539550 systemd-machined[216673]: Machine qemu-65-instance-0000007d terminated.
Nov 29 03:23:30 np0005539550 podman[338285]: 2025-11-29 08:23:30.794855794 +0000 UTC m=+0.119784766 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:23:30 np0005539550 nova_compute[257631]: 2025-11-29 08:23:30.872 257641 DEBUG nova.compute.manager [None req-ab6ed900-b5cf-4d37-a8a1-8130cdd9ece0 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:23:30 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[337948]: [NOTICE]   (337952) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:30 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[337948]: [NOTICE]   (337952) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:30 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[337948]: [WARNING]  (337952) : Exiting Master process...
Nov 29 03:23:30 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[337948]: [ALERT]    (337952) : Current worker (337954) exited with code 143 (Terminated)
Nov 29 03:23:30 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[337948]: [WARNING]  (337952) : All workers exited. Exiting... (0)
Nov 29 03:23:30 np0005539550 systemd[1]: libpod-1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb.scope: Deactivated successfully.
Nov 29 03:23:30 np0005539550 podman[338344]: 2025-11-29 08:23:30.890196312 +0000 UTC m=+0.048718390 container died 1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:23:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:30 np0005539550 nova_compute[257631]: 2025-11-29 08:23:30.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6994ccde9126d4b6501d0389fe87dcaf9a97dbc06bbe12045f200246ddd0995b-merged.mount: Deactivated successfully.
Nov 29 03:23:30 np0005539550 podman[338344]: 2025-11-29 08:23:30.931833652 +0000 UTC m=+0.090355730 container cleanup 1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:23:30 np0005539550 systemd[1]: libpod-conmon-1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb.scope: Deactivated successfully.
Nov 29 03:23:30 np0005539550 podman[338385]: 2025-11-29 08:23:30.99759854 +0000 UTC m=+0.044519979 container remove 1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.002 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[651ecef7-f440-4daf-b451-fc7ade9e0def]: (4, ('Sat Nov 29 08:23:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb)\n1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb\nSat Nov 29 08:23:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb)\n1a291a294c39bbad23f933af38950b6bae754713ca3f523c89f325e248ba9bcb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.004 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[58fc1f48-fbb7-4255-8f67-fbcb88fed6b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.005 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:23:31 np0005539550 nova_compute[257631]: 2025-11-29 08:23:31.007 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:31 np0005539550 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:23:31 np0005539550 nova_compute[257631]: 2025-11-29 08:23:31.028 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.031 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d5dda9-83c6-49d1-841e-305c4263038a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.047 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6313d5ed-8c32-42f5-abe0-c03d07128afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.048 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[652d81f0-498d-4d3b-a4f7-5ed822b99bf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.063 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0984f28-6359-448b-8c79-40b15722b175]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 764933, 'reachable_time': 38674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338403, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:31 np0005539550 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.067 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.068 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[9a60caea-b0da-42e3-be40-d6869aafaada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:23:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 260 op/s
Nov 29 03:23:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:31.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:31 np0005539550 nova_compute[257631]: 2025-11-29 08:23:31.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:31 np0005539550 nova_compute[257631]: 2025-11-29 08:23:31.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:23:31 np0005539550 nova_compute[257631]: 2025-11-29 08:23:31.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:23:31 np0005539550 nova_compute[257631]: 2025-11-29 08:23:31.980 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.982 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:23:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:31.982 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:23:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.107 257641 DEBUG nova.compute.manager [req-174bf57b-6d49-4b2d-aef0-662149d9f1ad req-501c6102-3784-4985-8bc1-bdbe3b8b7716 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.108 257641 DEBUG oslo_concurrency.lockutils [req-174bf57b-6d49-4b2d-aef0-662149d9f1ad req-501c6102-3784-4985-8bc1-bdbe3b8b7716 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.108 257641 DEBUG oslo_concurrency.lockutils [req-174bf57b-6d49-4b2d-aef0-662149d9f1ad req-501c6102-3784-4985-8bc1-bdbe3b8b7716 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.108 257641 DEBUG oslo_concurrency.lockutils [req-174bf57b-6d49-4b2d-aef0-662149d9f1ad req-501c6102-3784-4985-8bc1-bdbe3b8b7716 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.108 257641 DEBUG nova.compute.manager [req-174bf57b-6d49-4b2d-aef0-662149d9f1ad req-501c6102-3784-4985-8bc1-bdbe3b8b7716 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.109 257641 WARNING nova.compute.manager [req-174bf57b-6d49-4b2d-aef0-662149d9f1ad req-501c6102-3784-4985-8bc1-bdbe3b8b7716 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state suspended and task_state None.
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.257 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.257 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.257 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.258 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.259 257641 DEBUG nova.network.neutron [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updated VIF entry in instance network info cache for port 72d454e3-eb73-4ea1-94dd-f13b08f74cda. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.260 257641 DEBUG nova.network.neutron [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updating instance_info_cache with network_info: [{"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.290 257641 DEBUG oslo_concurrency.lockutils [req-20859ca3-a23c-4b61-8d8d-8c44a992bd7c req-c24c37af-748b-4703-a260-f831bea55543 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.540 257641 INFO nova.compute.manager [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Resuming
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.541 257641 DEBUG nova.objects.instance [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'flavor' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.592 257641 DEBUG oslo_concurrency.lockutils [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:32 np0005539550 nova_compute[257631]: 2025-11-29 08:23:32.638 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:32.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:33 np0005539550 nova_compute[257631]: 2025-11-29 08:23:33.448 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 219 op/s
Nov 29 03:23:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.209 257641 DEBUG nova.compute.manager [req-96775d00-b379-4eab-84d5-1aa81e3acfcc req-a1f3bf87-18a5-44c1-8872-b51eadfecf46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.210 257641 DEBUG oslo_concurrency.lockutils [req-96775d00-b379-4eab-84d5-1aa81e3acfcc req-a1f3bf87-18a5-44c1-8872-b51eadfecf46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.210 257641 DEBUG oslo_concurrency.lockutils [req-96775d00-b379-4eab-84d5-1aa81e3acfcc req-a1f3bf87-18a5-44c1-8872-b51eadfecf46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.211 257641 DEBUG oslo_concurrency.lockutils [req-96775d00-b379-4eab-84d5-1aa81e3acfcc req-a1f3bf87-18a5-44c1-8872-b51eadfecf46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.211 257641 DEBUG nova.compute.manager [req-96775d00-b379-4eab-84d5-1aa81e3acfcc req-a1f3bf87-18a5-44c1-8872-b51eadfecf46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.211 257641 WARNING nova.compute.manager [req-96775d00-b379-4eab-84d5-1aa81e3acfcc req-a1f3bf87-18a5-44c1-8872-b51eadfecf46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state suspended and task_state resuming.
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.401 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updating instance_info_cache with network_info: [{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.450 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.451 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.452 257641 DEBUG oslo_concurrency.lockutils [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquired lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.452 257641 DEBUG nova.network.neutron [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.453 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.455 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.455 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.473 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.474 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.475 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.475 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.475 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:34.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3289614045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:34 np0005539550 nova_compute[257631]: 2025-11-29 08:23:34.923 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
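[annotation] The resource audit shells out to `ceph df` exactly as logged (a 0.447 s round trip here). A self-contained sketch of the same probe, assuming a reachable cluster and the client.openstack keyring referenced by /etc/ceph/ceph.conf:

```python
# Sketch: reproduce the "ceph df --format=json" probe the resource
# tracker runs (command arguments taken verbatim from the log above).
import json
import subprocess

cmd = ['ceph', 'df', '--format=json',
       '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
out = subprocess.run(cmd, capture_output=True, text=True, check=True)
stats = json.loads(out.stdout)

# 'ceph df' JSON carries cluster totals plus per-pool usage; which
# fields nova consumes is not shown in this log, so just print both.
print(stats['stats'])
print([p['name'] for p in stats['pools']])
```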
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.003 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.003 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.008 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.008 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.186 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.188 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4172MB free_disk=20.83062744140625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.188 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.188 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.255 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance f09f0e1a-bc69-4cd4-b504-3ba084ffa875 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.255 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance d17ba263-68c7-4428-9d64-9a809e93a457 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.255 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.255 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.312 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.7 MiB/s wr, 195 op/s
Nov 29 03:23:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:35.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659002781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.756 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.762 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.771 257641 DEBUG nova.network.neutron [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updating instance_info_cache with network_info: [{"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.783 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
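[annotation] As a quick cross-check of the inventory payload above, assuming placement's usual effective-capacity formula capacity = int((total - reserved) * allocation_ratio), the host advertises 32 VCPU, 7168 MB of RAM, and 17 GB of disk:

```python
# Derive effective capacity from the inventory dict in the log line
# above, assuming capacity = int((total - reserved) * allocation_ratio).
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(rc, cap)  # VCPU 32, MEMORY_MB 7168, DISK_GB 17
```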
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.788 257641 DEBUG oslo_concurrency.lockutils [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Releasing lock "refresh_cache-f09f0e1a-bc69-4cd4-b504-3ba084ffa875" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.793 257641 DEBUG nova.virt.libvirt.vif [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-161781417',display_name='tempest-ServerActionsTestJSON-server-161781417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-161781417',id=125,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-u6xh7rnj',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.794 257641 DEBUG nova.network.os_vif_util [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.795 257641 DEBUG nova.network.os_vif_util [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.795 257641 DEBUG os_vif [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.796 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.796 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.797 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.799 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap573e5085-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.799 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap573e5085-76, col_values=(('external_ids', {'iface-id': '573e5085-7652-4c15-b353-cd8eff879375', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:4e:b0', 'vm-uuid': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.800 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.800 257641 INFO os_vif [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76')#033[00m
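[annotation] The plug is two idempotent commands batched into one OVSDB transaction: AddPortCommand, then DbSetCommand writing the Neutron external_ids onto the Interface row. A rough standalone equivalent with ovsdbapp; the socket path and timeout are assumptions, while the bridge, port, and external_ids values come from the log:

```python
# Rough standalone equivalent of the two-command transaction logged
# above (AddPortCommand + DbSetCommand). The OVSDB endpoint is an
# assumption; names and external_ids are taken from the log lines.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

external_ids = {
    'iface-id': '573e5085-7652-4c15-b353-cd8eff879375',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:43:4e:b0',
    'vm-uuid': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875',
}
# Both commands are no-ops if the state already matches, which is why
# the log reports "Transaction caused no change" on re-runs.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap573e5085-76', may_exist=True))
    txn.add(api.db_set('Interface', 'tap573e5085-76',
                       ('external_ids', external_ids)))
```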
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.804 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.804 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.813 257641 DEBUG nova.objects.instance [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'numa_topology' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:35 np0005539550 kernel: tap573e5085-76: entered promiscuous mode
Nov 29 03:23:35 np0005539550 NetworkManager[49039]: <info>  [1764404615.8752] manager: (tap573e5085-76): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Nov 29 03:23:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:35Z|00559|binding|INFO|Claiming lport 573e5085-7652-4c15-b353-cd8eff879375 for this chassis.
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.876 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:35Z|00560|binding|INFO|573e5085-7652-4c15-b353-cd8eff879375: Claiming fa:16:3e:43:4e:b0 10.100.0.6
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.884 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:4e:b0 10.100.0.6'], port_security=['fa:16:3e:43:4e:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=573e5085-7652-4c15-b353-cd8eff879375) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.885 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 573e5085-7652-4c15-b353-cd8eff879375 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 bound to our chassis#033[00m
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.887 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788#033[00m
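[annotation] The metadata agent acted because the Port_Binding update matched a registered IDL row event (old chassis empty, new chassis ours). A simplified sketch of the general shape such a handler takes, inferred from the "Matched UPDATE: PortBindingUpdatedEvent" line; the matching logic here is illustrative, not neutron's exact code:

```python
# Simplified shape of an ovsdbapp row-event handler like the
# PortBindingUpdatedEvent matched in the log above.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self, chassis_name):
        self.chassis_name = chassis_name
        # (events, table, conditions), as echoed in the log line
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Fire when the binding just gained our chassis; 'old' only
        # carries the columns that changed in this update.
        old_chassis = getattr(old, 'chassis', None)
        return (not old_chassis and row.chassis
                and row.chassis[0].name == self.chassis_name)

    def run(self, event, row, old):
        print(f'Port {row.logical_port} bound to our chassis')

handler = PortBindingUpdatedEvent('compute-0.ctlplane.example.com')
```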
Nov 29 03:23:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:35Z|00561|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 ovn-installed in OVS
Nov 29 03:23:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:35Z|00562|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 up in Southbound
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.897 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.898 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9195e93a-e88c-4f6f-a457-2a10e62cb1fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.899 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap58fd104d-41 in ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:35 np0005539550 nova_compute[257631]: 2025-11-29 08:23:35.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.902 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap58fd104d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.902 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0a92195c-de85-4ee6-9cd0-98fd49ca8af5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.903 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[92fd600f-0dab-4a8d-bb0b-d2a0329e3075]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:35 np0005539550 systemd-udevd[338464]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.914 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[f1df6652-e25f-4778-b8bd-2f8dc90680f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:35 np0005539550 NetworkManager[49039]: <info>  [1764404615.9240] device (tap573e5085-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:35 np0005539550 NetworkManager[49039]: <info>  [1764404615.9255] device (tap573e5085-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:35 np0005539550 systemd-machined[216673]: New machine qemu-67-instance-0000007d.
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.947 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ef95f1-a9e0-4d0b-af01-778adcfabbe6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:35 np0005539550 systemd[1]: Started Virtual Machine qemu-67-instance-0000007d.
Nov 29 03:23:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.985 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4f04f1f3-42b5-4f56-8314-e5b5ba5c8324]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:35.999 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3db24d47-fc96-43e3-8352-0ddb96d42df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 NetworkManager[49039]: <info>  [1764404616.0010] manager: (tap58fd104d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.036 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3d9d75-6668-4069-bffe-2557fccf7711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.039 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[388d8ce1-2d89-4e2e-a151-ef952cc0349a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 NetworkManager[49039]: <info>  [1764404616.0631] device (tap58fd104d-40): carrier: link connected
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.067 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[36a224dd-b9ca-4fbb-8557-0d8ed1fd789d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.085 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e70bbf60-8dbf-461e-a469-5f6ded4562fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766372, 'reachable_time': 23392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338497, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.102 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[79a7dafa-987d-4830-b262-663e900f1048]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:261e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 766372, 'tstamp': 766372}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338498, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.119 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4d46aa-67be-400c-b74b-13915df6ccf0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap58fd104d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:26:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 160], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766372, 'reachable_time': 23392, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338499, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.152 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7e8c9b81-440c-4103-9b8e-83044835bf2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.215 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb994a6-22a9-4d18-93db-72d4bbc38514]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.216 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.217 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.217 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap58fd104d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.269 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539550 NetworkManager[49039]: <info>  [1764404616.2703] manager: (tap58fd104d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Nov 29 03:23:36 np0005539550 kernel: tap58fd104d-40: entered promiscuous mode
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.275 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap58fd104d-40, col_values=(('external_ids', {'iface-id': '49c2d2fc-d147-42b8-8b87-df4d04283e61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.276 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:36Z|00563|binding|INFO|Releasing lport 49c2d2fc-d147-42b8-8b87-df4d04283e61 from this chassis (sb_readonly=0)
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.279 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.280 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d717a966-3609-495c-a1c4-bb6a4640f117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.281 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/58fd104d-4342-482d-ae9e-dbb4b9fa6788.pid.haproxy
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 58fd104d-4342-482d-ae9e-dbb4b9fa6788
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:23:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:36.282 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'env', 'PROCESS_TAG=haproxy-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/58fd104d-4342-482d-ae9e-dbb4b9fa6788.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.294 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.544 257641 DEBUG nova.compute.manager [req-91923a5e-30fd-4f37-a1dc-1c3fc737b4a0 req-a7a8b071-df88-4cf0-90ec-c30f5a68b2f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.545 257641 DEBUG oslo_concurrency.lockutils [req-91923a5e-30fd-4f37-a1dc-1c3fc737b4a0 req-a7a8b071-df88-4cf0-90ec-c30f5a68b2f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.546 257641 DEBUG oslo_concurrency.lockutils [req-91923a5e-30fd-4f37-a1dc-1c3fc737b4a0 req-a7a8b071-df88-4cf0-90ec-c30f5a68b2f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.546 257641 DEBUG oslo_concurrency.lockutils [req-91923a5e-30fd-4f37-a1dc-1c3fc737b4a0 req-a7a8b071-df88-4cf0-90ec-c30f5a68b2f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.547 257641 DEBUG nova.compute.manager [req-91923a5e-30fd-4f37-a1dc-1c3fc737b4a0 req-a7a8b071-df88-4cf0-90ec-c30f5a68b2f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.548 257641 WARNING nova.compute.manager [req-91923a5e-30fd-4f37-a1dc-1c3fc737b4a0 req-a7a8b071-df88-4cf0-90ec-c30f5a68b2f2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.577 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for f09f0e1a-bc69-4cd4-b504-3ba084ffa875 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.578 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404616.5766017, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.578 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Started (Lifecycle Event)#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.596 257641 DEBUG nova.compute.manager [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.596 257641 DEBUG nova.objects.instance [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'pci_devices' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.600 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.604 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.611 257641 INFO nova.virt.libvirt.driver [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance running successfully.#033[00m
Nov 29 03:23:36 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.615 257641 DEBUG nova.virt.libvirt.guest [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
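Annotation: the virtqemud error and the sync_guest_time debug line come from the same call. After resume, nova asks the QEMU guest agent to reset the guest clock, and libvirt rejects it because no agent channel is configured. A sketch with libvirt-python (qemu:///system and the instance UUID from the log are assumptions about this host):

```python
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('f09f0e1a-bc69-4cd4-b504-3ba084ffa875')
now = time.time()
try:
    # virDomainSetTime needs the qemu-guest-agent channel in the domain XML.
    dom.setTime(time={'seconds': int(now), 'nseconds': int((now % 1) * 1e9)})
except libvirt.libvirtError as err:
    # Matches "argument unsupported: QEMU guest agent is not configured".
    print('Failed to set time: %s' % err)
```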
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.616 257641 DEBUG nova.compute.manager [None req-f5533382-30f5-4557-a650-9f7ad19e87da 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.637 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.638 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404616.584142, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.638 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.680 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:36 np0005539550 nova_compute[257631]: 2025-11-29 08:23:36.683 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
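Annotation: both lifecycle syncs above compare the DB record against the hypervisor. In nova's power_state constants, the logged DB power_state 4 is SHUTDOWN (left over from the suspend) and VM power_state 1 is RUNNING, yet the handler defers because task_state is still resuming, which is the "pending task ... Skip." line. A toy version of that decision:

```python
RUNNING, SHUTDOWN = 1, 4  # values from nova.compute.power_state, as logged

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:
        return 'skip'        # an in-flight task owns the state transition
    if db_power_state != vm_power_state:
        return 'update-db'   # hypervisor is authoritative; rewrite the DB
    return 'in-sync'

print(sync_power_state(SHUTDOWN, RUNNING, 'resuming'))  # -> skip
```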
Nov 29 03:23:36 np0005539550 podman[338574]: 2025-11-29 08:23:36.69500655 +0000 UTC m=+0.066605589 container create b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:23:36 np0005539550 systemd[1]: Started libpod-conmon-b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4.scope.
Nov 29 03:23:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:36.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
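Annotation: the radosgw starting/done/beast triplets that recur through this log are anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102, arriving roughly once per second per prober; they look like load-balancer health checks, not user traffic. A probe of the same shape (the radosgw endpoint host and port are assumptions, since the access line only shows the prober's address):

```python
import http.client

conn = http.client.HTTPConnection('localhost', 8080)  # endpoint assumed
conn.request('HEAD', '/')
print(conn.getresponse().status)  # 200, as in the beast access lines
```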
Nov 29 03:23:36 np0005539550 podman[338574]: 2025-11-29 08:23:36.655286197 +0000 UTC m=+0.026885256 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:23:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:23:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95df754118fa0772caa8f2c9d507028159bafaa58be204cacc114f5dbf6d3a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:36 np0005539550 podman[338574]: 2025-11-29 08:23:36.779111649 +0000 UTC m=+0.150710778 container init b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:23:36 np0005539550 podman[338574]: 2025-11-29 08:23:36.784685543 +0000 UTC m=+0.156284582 container start b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:23:36 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [NOTICE]   (338593) : New worker (338595) forked
Nov 29 03:23:36 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [NOTICE]   (338593) : Loading success.
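Annotation: the podman create/init/start sequence plus the haproxy NOTICE lines show the OVN metadata agent spawning one haproxy proxy container per tenant network namespace. Roughly equivalent, as a sketch only: the real agent renders a haproxy.cfg into the ovnmeta- namespace first, and the --network and command arguments below are assumptions:

```python
import subprocess

network = '58fd104d-4342-482d-ae9e-dbb4b9fa6788'
subprocess.run(
    ['podman', 'run', '--detach',
     '--name', f'neutron-haproxy-ovnmeta-{network}',
     '--network', f'ns:/var/run/netns/ovnmeta-{network}',   # assumption
     'quay.io/podified-antelope-centos9/'
     'openstack-neutron-metadata-agent-ovn:current-podified',
     'haproxy', '-f', '/etc/haproxy/haproxy.cfg'],          # assumption
    check=True)
```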
Nov 29 03:23:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 954 KiB/s wr, 142 op/s
Nov 29 03:23:37 np0005539550 nova_compute[257631]: 2025-11-29 08:23:37.641 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:37.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:37 np0005539550 nova_compute[257631]: 2025-11-29 08:23:37.861 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:37.985 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:38 np0005539550 nova_compute[257631]: 2025-11-29 08:23:38.450 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:38 np0005539550 nova_compute[257631]: 2025-11-29 08:23:38.627 257641 DEBUG nova.compute.manager [req-ad9096b5-bc16-4c7a-88d5-4764f683863c req-8f0d0a5e-539f-4e0c-8e6d-06b796477acf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:38 np0005539550 nova_compute[257631]: 2025-11-29 08:23:38.627 257641 DEBUG oslo_concurrency.lockutils [req-ad9096b5-bc16-4c7a-88d5-4764f683863c req-8f0d0a5e-539f-4e0c-8e6d-06b796477acf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:38 np0005539550 nova_compute[257631]: 2025-11-29 08:23:38.628 257641 DEBUG oslo_concurrency.lockutils [req-ad9096b5-bc16-4c7a-88d5-4764f683863c req-8f0d0a5e-539f-4e0c-8e6d-06b796477acf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:38 np0005539550 nova_compute[257631]: 2025-11-29 08:23:38.628 257641 DEBUG oslo_concurrency.lockutils [req-ad9096b5-bc16-4c7a-88d5-4764f683863c req-8f0d0a5e-539f-4e0c-8e6d-06b796477acf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:38 np0005539550 nova_compute[257631]: 2025-11-29 08:23:38.628 257641 DEBUG nova.compute.manager [req-ad9096b5-bc16-4c7a-88d5-4764f683863c req-8f0d0a5e-539f-4e0c-8e6d-06b796477acf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:38 np0005539550 nova_compute[257631]: 2025-11-29 08:23:38.628 257641 WARNING nova.compute.manager [req-ad9096b5-bc16-4c7a-88d5-4764f683863c req-8f0d0a5e-539f-4e0c-8e6d-06b796477acf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:23:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:38.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.213 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.215 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.215 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.215 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.216 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.217 257641 INFO nova.compute.manager [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Terminating instance#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.218 257641 DEBUG nova.compute.manager [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
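Annotation: terminate runs under the same per-instance locking discipline as the event pops; do_terminate_instance is wrapped so only one task mutates the instance at a time (the lock acquired here at 08:23:39.215 is released at 08:23:42.367, held 3.152s). A sketch of that shape with oslo.concurrency (the helpers are hypothetical stand-ins, not nova's real calls):

```python
from oslo_concurrency import lockutils

def shutdown_instance(uuid):     # hypothetical stand-in
    print(f'destroying {uuid} on the hypervisor')

def deallocate_network(uuid):    # hypothetical stand-in
    print(f'deallocating network for {uuid}')

def terminate_instance(instance_uuid):
    # Only one caller at a time may hold the instance lock, which is why
    # the log shows "waited 0.002s" on acquire and "held 3.152s" on release.
    @lockutils.synchronized(instance_uuid)
    def do_terminate_instance():
        shutdown_instance(instance_uuid)
        deallocate_network(instance_uuid)
    do_terminate_instance()

terminate_instance('f09f0e1a-bc69-4cd4-b504-3ba084ffa875')
```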
Nov 29 03:23:39 np0005539550 kernel: tap573e5085-76 (unregistering): left promiscuous mode
Nov 29 03:23:39 np0005539550 NetworkManager[49039]: <info>  [1764404619.2633] device (tap573e5085-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:39Z|00564|binding|INFO|Releasing lport 573e5085-7652-4c15-b353-cd8eff879375 from this chassis (sb_readonly=0)
Nov 29 03:23:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:39Z|00565|binding|INFO|Setting lport 573e5085-7652-4c15-b353-cd8eff879375 down in Southbound
Nov 29 03:23:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:39Z|00566|binding|INFO|Removing iface tap573e5085-76 ovn-installed in OVS
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.282 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:4e:b0 10.100.0.6'], port_security=['fa:16:3e:43:4e:b0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'f09f0e1a-bc69-4cd4-b504-3ba084ffa875', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '26e3508b949a4dbf960d7befc8f27869', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'f8b3ac18-c5ae-4ce5-b905-769d2e675d6d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.241', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37614949-afe4-4907-8dd7-b52152148378, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=573e5085-7652-4c15-b353-cd8eff879375) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.283 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 573e5085-7652-4c15-b353-cd8eff879375 in datapath 58fd104d-4342-482d-ae9e-dbb4b9fa6788 unbound from our chassis#033[00m
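Annotation: the metadata agent reacted to the Port_Binding row update by matching it against a registered row event; the match fires when the port's chassis column empties, which produces the "unbound from our chassis" line. An illustrative ovsdbapp-style row event (the match logic is simplified, not neutron's exact code):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None,
        # mirroring the repr printed in the log line above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old):
        # Old row had a chassis, new row does not: port left this host.
        return getattr(old, 'chassis', None) and not row.chassis

    def run(self, event, row, old):
        print(f'Port {row.logical_port} unbound from our chassis')
```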
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.285 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 58fd104d-4342-482d-ae9e-dbb4b9fa6788, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.287 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fa53ebca-8466-4f35-a047-514bf3325fef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.288 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 namespace which is not needed anymore#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.290 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Nov 29 03:23:39 np0005539550 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000007d.scope: Consumed 3.217s CPU time.
Nov 29 03:23:39 np0005539550 systemd-machined[216673]: Machine qemu-67-instance-0000007d terminated.
Nov 29 03:23:39 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [NOTICE]   (338593) : haproxy version is 2.8.14-c23fe91
Nov 29 03:23:39 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [NOTICE]   (338593) : path to executable is /usr/sbin/haproxy
Nov 29 03:23:39 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [WARNING]  (338593) : Exiting Master process...
Nov 29 03:23:39 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [WARNING]  (338593) : Exiting Master process...
Nov 29 03:23:39 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [ALERT]    (338593) : Current worker (338595) exited with code 143 (Terminated)
Nov 29 03:23:39 np0005539550 neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788[338589]: [WARNING]  (338593) : All workers exited. Exiting... (0)
Nov 29 03:23:39 np0005539550 systemd[1]: libpod-b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4.scope: Deactivated successfully.
Nov 29 03:23:39 np0005539550 podman[338678]: 2025-11-29 08:23:39.422795522 +0000 UTC m=+0.047257775 container died b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.452 257641 INFO nova.virt.libvirt.driver [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Instance destroyed successfully.#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.454 257641 DEBUG nova.objects.instance [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lazy-loading 'resources' on Instance uuid f09f0e1a-bc69-4cd4-b504-3ba084ffa875 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:23:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:23:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e95df754118fa0772caa8f2c9d507028159bafaa58be204cacc114f5dbf6d3a2-merged.mount: Deactivated successfully.
Nov 29 03:23:39 np0005539550 podman[338678]: 2025-11-29 08:23:39.470001975 +0000 UTC m=+0.094464228 container cleanup b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.471 257641 DEBUG nova.virt.libvirt.vif [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-161781417',display_name='tempest-ServerActionsTestJSON-server-161781417',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-161781417',id=125,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHzpWtKVBxR8y0ptyf26y7qDtzaZ8kbONkoZ9pomjaUJfrobt3UrzOwJRKUVsAcnHq9vyCWex553L84ouC5hX916iXo50xuUU5ZZ/mR8SlhwWlkwNt3Z2Xuyrzlm/13P0A==',key_name='tempest-keypair-2034735121',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:22:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='26e3508b949a4dbf960d7befc8f27869',ramdisk_id='',reservation_id='r-u6xh7rnj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-2111371935',owner_user_name='tempest-ServerActionsTestJSON-2111371935-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='80ceb9112b3a4f119c05f21fd617af11',uuid=f09f0e1a-bc69-4cd4-b504-3ba084ffa875,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.472 257641 DEBUG nova.network.os_vif_util [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converting VIF {"id": "573e5085-7652-4c15-b353-cd8eff879375", "address": "fa:16:3e:43:4e:b0", "network": {"id": "58fd104d-4342-482d-ae9e-dbb4b9fa6788", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1145729544-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "26e3508b949a4dbf960d7befc8f27869", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap573e5085-76", "ovs_interfaceid": "573e5085-7652-4c15-b353-cd8eff879375", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.473 257641 DEBUG nova.network.os_vif_util [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.474 257641 DEBUG os_vif [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.476 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.477 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap573e5085-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
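Annotation: os-vif's unplug is a single idempotent OVSDB transaction, the DelPortCommand above. The equivalent via ovsdbapp's OVS schema API, as a sketch (connection boilerplate follows ovsdbapp's README; the socket path is an assumption):

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')  # path assumed
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# if_exists=True keeps the unplug idempotent, as in the logged command.
api.del_port('tap573e5085-76', bridge='br-int', if_exists=True).execute(
    check_error=True)
```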
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.479 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 systemd[1]: libpod-conmon-b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4.scope: Deactivated successfully.
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.483 257641 INFO os_vif [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:4e:b0,bridge_name='br-int',has_traffic_filtering=True,id=573e5085-7652-4c15-b353-cd8eff879375,network=Network(58fd104d-4342-482d-ae9e-dbb4b9fa6788),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap573e5085-76')#033[00m
Nov 29 03:23:39 np0005539550 podman[338716]: 2025-11-29 08:23:39.539913293 +0000 UTC m=+0.044313884 container remove b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:23:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 163 KiB/s wr, 108 op/s
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.548 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[971c6915-2e65-4968-8cbe-9ed5fb7380a9]: (4, ('Sat Nov 29 08:23:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4)\nb5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4\nSat Nov 29 08:23:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 (b5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4)\nb5542a4fa6545d0da8e68a18768b8d0a1b5242aa0d7c862ff9cb1e8bf713abd4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.550 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[897dabc1-5b1c-403d-9138-2d61102806ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.552 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap58fd104d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.554 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 kernel: tap58fd104d-40: left promiscuous mode
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.569 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.570 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.571 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[723c95c6-a25a-4649-93ce-fbf5edd22dba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.589 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2eeee4be-05bf-4226-b451-b6ff04328312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.591 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[948f98dd-e113-4853-9274-0234ff8f9f85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.607 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a343ae69-bf0c-4807-8cdb-51306dca81be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 766364, 'reachable_time': 35487, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338747, 'error': None, 'target': 'ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:39 np0005539550 systemd[1]: run-netns-ovnmeta\x2d58fd104d\x2d4342\x2d482d\x2dae9e\x2ddbb4b9fa6788.mount: Deactivated successfully.
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.612 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:23:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:39.612 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b3070e75-fdef-48a9-9d49-1407f2b874d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
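Annotation: the namespace cleanup bottoms out in neutron's privileged remove_netns, which the agent reaches over privsep rather than in-process. What that call reduces to, sketched with pyroute2 (the library neutron's privileged ip_lib wraps); in production it runs inside the privsep daemon with elevated capabilities:

```python
import errno
from pyroute2 import netns

try:
    netns.remove('ovnmeta-58fd104d-4342-482d-ae9e-dbb4b9fa6788')
except OSError as err:
    if err.errno != errno.ENOENT:  # already gone counts as success here
        raise
```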
Nov 29 03:23:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:39.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.906 257641 INFO nova.virt.libvirt.driver [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Deleting instance files /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875_del#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.907 257641 INFO nova.virt.libvirt.driver [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Deletion of /var/lib/nova/instances/f09f0e1a-bc69-4cd4-b504-3ba084ffa875_del complete#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.956 257641 INFO nova.compute.manager [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.956 257641 DEBUG oslo.service.loopingcall [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.957 257641 DEBUG nova.compute.manager [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:23:39 np0005539550 nova_compute[257631]: 2025-11-29 08:23:39.957 257641 DEBUG nova.network.neutron [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.268 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.268 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.269 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.269 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.723 257641 DEBUG nova.compute.manager [req-284dc2a1-6997-4f8f-bc0a-487285dc31e2 req-7f86b584-74ab-4cf7-8859-354933f57b46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.724 257641 DEBUG oslo_concurrency.lockutils [req-284dc2a1-6997-4f8f-bc0a-487285dc31e2 req-7f86b584-74ab-4cf7-8859-354933f57b46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.724 257641 DEBUG oslo_concurrency.lockutils [req-284dc2a1-6997-4f8f-bc0a-487285dc31e2 req-7f86b584-74ab-4cf7-8859-354933f57b46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.724 257641 DEBUG oslo_concurrency.lockutils [req-284dc2a1-6997-4f8f-bc0a-487285dc31e2 req-7f86b584-74ab-4cf7-8859-354933f57b46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.724 257641 DEBUG nova.compute.manager [req-284dc2a1-6997-4f8f-bc0a-487285dc31e2 req-7f86b584-74ab-4cf7-8859-354933f57b46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:40 np0005539550 nova_compute[257631]: 2025-11-29 08:23:40.725 257641 DEBUG nova.compute.manager [req-284dc2a1-6997-4f8f-bc0a-487285dc31e2 req-7f86b584-74ab-4cf7-8859-354933f57b46 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-unplugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:23:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:40.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.255 257641 DEBUG nova.network.neutron [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.286 257641 INFO nova.compute.manager [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Took 1.33 seconds to deallocate network for instance.#033[00m
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.329 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.330 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.390 257641 DEBUG nova.compute.manager [req-1162c0de-c644-4424-99dd-574f2696c6b9 req-b41c8e36-67dc-4494-bb88-dc6b279bc959 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-deleted-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.427 257641 DEBUG oslo_concurrency.processutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.471 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 299 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.7 MiB/s wr, 129 op/s
Nov 29 03:23:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:41.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451493538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.871 257641 DEBUG oslo_concurrency.processutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
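Annotation: the resource tracker shells out to ceph for pool capacity, the 0.443s subprocess above (the matching mon-side dispatch is the audit line at 08:23:41). Reproducing the probe with the same flags and pulling the cluster totals from the "stats" section of ceph's JSON df output:

```python
import json
import subprocess

out = subprocess.check_output(
    ['ceph', 'df', '--format=json',
     '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
stats = json.loads(out)['stats']
print('%.0f GiB total, %.0f GiB avail' % (
    stats['total_bytes'] / 1024**3,
    stats['total_avail_bytes'] / 1024**3))
```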
Nov 29 03:23:41 np0005539550 nova_compute[257631]: 2025-11-29 08:23:41.877 257641 DEBUG nova.compute.provider_tree [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:23:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.195 257641 DEBUG nova.scheduler.client.report [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
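Annotation: the inventory dict above determines what the scheduler can place here. Placement-style schedulable capacity is (total - reserved) * allocation_ratio per resource class, worked through directly:

```python
# Values copied from the logged inventory; min_unit/max_unit/step_size
# omitted since they do not affect the capacity calculation.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f'{rc}: {cap:g} schedulable')
# -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
```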
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.230 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.288 257641 INFO nova.scheduler.client.report [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Deleted allocations for instance f09f0e1a-bc69-4cd4-b504-3ba084ffa875#033[00m
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.367 257641 DEBUG oslo_concurrency.lockutils [None req-0011054c-4234-4fab-9198-eeef61d7e258 80ceb9112b3a4f119c05f21fd617af11 26e3508b949a4dbf960d7befc8f27869 - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.643 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:42.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.818 257641 DEBUG nova.compute.manager [req-b4da4ff5-ce7f-47c8-92f1-81a46ddee95c req-a3043d5c-d12b-4cff-951e-43827076d053 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.818 257641 DEBUG oslo_concurrency.lockutils [req-b4da4ff5-ce7f-47c8-92f1-81a46ddee95c req-a3043d5c-d12b-4cff-951e-43827076d053 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.819 257641 DEBUG oslo_concurrency.lockutils [req-b4da4ff5-ce7f-47c8-92f1-81a46ddee95c req-a3043d5c-d12b-4cff-951e-43827076d053 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.819 257641 DEBUG oslo_concurrency.lockutils [req-b4da4ff5-ce7f-47c8-92f1-81a46ddee95c req-a3043d5c-d12b-4cff-951e-43827076d053 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f09f0e1a-bc69-4cd4-b504-3ba084ffa875-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.819 257641 DEBUG nova.compute.manager [req-b4da4ff5-ce7f-47c8-92f1-81a46ddee95c req-a3043d5c-d12b-4cff-951e-43827076d053 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] No waiting events found dispatching network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:23:42 np0005539550 nova_compute[257631]: 2025-11-29 08:23:42.820 257641 WARNING nova.compute.manager [req-b4da4ff5-ce7f-47c8-92f1-81a46ddee95c req-a3043d5c-d12b-4cff-951e-43827076d053 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Received unexpected event network-vif-plugged-573e5085-7652-4c15-b353-cd8eff879375 for instance with vm_state deleted and task_state None.
Nov 29 03:23:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 295 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 112 op/s
Nov 29 03:23:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:43.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:44 np0005539550 podman[338774]: 2025-11-29 08:23:44.34070244 +0000 UTC m=+0.084773825 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.478 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.840 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.840 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.857 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.920 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.921 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.928 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:23:44 np0005539550 nova_compute[257631]: 2025-11-29 08:23:44.929 257641 INFO nova.compute.claims [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.090 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 314 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Nov 29 03:23:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:23:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3013167372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.580 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.587 257641 DEBUG nova.compute.provider_tree [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.617 257641 DEBUG nova.scheduler.client.report [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.659 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.660 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.713 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.714 257641 DEBUG nova.network.neutron [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:23:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:45.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.734 257641 INFO nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.758 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.835 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.836 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.837 257641 INFO nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Creating image(s)
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.867 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.900 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.928 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.932 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.958 257641 DEBUG nova.policy [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '873186539acb4bf9b90513e0e1beb56f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.997 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.998 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.998 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:45 np0005539550 nova_compute[257631]: 2025-11-29 08:23:45.999 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.024 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.030 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 4702f4ee-458d-4146-b9b2-70ecf718176c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.336 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 4702f4ee-458d-4146-b9b2-70ecf718176c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.406 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] resizing rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.511 257641 DEBUG nova.objects.instance [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.528 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.529 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Ensure instance console log exists: /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.529 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.530 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.530 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:23:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:46.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:46 np0005539550 nova_compute[257631]: 2025-11-29 08:23:46.919 257641 DEBUG nova.network.neutron [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Successfully created port: 0d1cf0d1-b379-4a62-8413-831aa8ff906b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:23:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:47Z|00567|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:23:47 np0005539550 nova_compute[257631]: 2025-11-29 08:23:47.538 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 146 op/s
Nov 29 03:23:47 np0005539550 nova_compute[257631]: 2025-11-29 08:23:47.646 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:47.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.261 257641 DEBUG nova.network.neutron [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Successfully updated port: 0d1cf0d1-b379-4a62-8413-831aa8ff906b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.288 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.288 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.288 257641 DEBUG nova.network.neutron [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.365 257641 DEBUG nova.compute.manager [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-changed-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.366 257641 DEBUG nova.compute.manager [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Refreshing instance network info cache due to event network-changed-0d1cf0d1-b379-4a62-8413-831aa8ff906b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.366 257641 DEBUG oslo_concurrency.lockutils [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:23:48 np0005539550 nova_compute[257631]: 2025-11-29 08:23:48.531 257641 DEBUG nova.network.neutron [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:23:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:48.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.482 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:23:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 337 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 148 op/s
Nov 29 03:23:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:49.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.772 257641 DEBUG nova.network.neutron [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updating instance_info_cache with network_info: [{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.800 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.801 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance network_info: |[{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.801 257641 DEBUG oslo_concurrency.lockutils [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.801 257641 DEBUG nova.network.neutron [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Refreshing network info cache for port 0d1cf0d1-b379-4a62-8413-831aa8ff906b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.805 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Start _get_guest_xml network_info=[{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.812 257641 WARNING nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.818 257641 DEBUG nova.virt.libvirt.host [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.819 257641 DEBUG nova.virt.libvirt.host [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.826 257641 DEBUG nova.virt.libvirt.host [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.827 257641 DEBUG nova.virt.libvirt.host [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.828 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.828 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.829 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.829 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.829 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.830 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.830 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.830 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.830 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.831 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.831 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.831 257641 DEBUG nova.virt.hardware [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:23:49 np0005539550 nova_compute[257631]: 2025-11-29 08:23:49.835 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3044383214' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.263 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.298 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.305 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:23:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:50.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:23:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1205182744' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.790 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.791 257641 DEBUG nova.virt.libvirt.vif [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-2014288875',display_name='tempest-ServerStableDeviceRescueTest-server-2014288875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-2014288875',id=129,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-t1kduh0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:45Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=4702f4ee-458d-4146-b9b2-70ecf718176c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.791 257641 DEBUG nova.network.os_vif_util [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.792 257641 DEBUG nova.network.os_vif_util [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.793 257641 DEBUG nova.objects.instance [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.808 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <uuid>4702f4ee-458d-4146-b9b2-70ecf718176c</uuid>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <name>instance-00000081</name>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-2014288875</nova:name>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:23:49</nova:creationTime>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <nova:port uuid="0d1cf0d1-b379-4a62-8413-831aa8ff906b">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <entry name="serial">4702f4ee-458d-4146-b9b2-70ecf718176c</entry>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <entry name="uuid">4702f4ee-458d-4146-b9b2-70ecf718176c</entry>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/4702f4ee-458d-4146-b9b2-70ecf718176c_disk">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:8e:e3:35"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <target dev="tap0d1cf0d1-b3"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/console.log" append="off"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:23:50 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:23:50 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:23:50 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:23:50 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.809 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Preparing to wait for external event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.810 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.810 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.811 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.811 257641 DEBUG nova.virt.libvirt.vif [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-2014288875',display_name='tempest-ServerStableDeviceRescueTest-server-2014288875',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-2014288875',id=129,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-t1kduh0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:23:45Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=4702f4ee-458d-4146-b9b2-70ecf718176c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.812 257641 DEBUG nova.network.os_vif_util [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.812 257641 DEBUG nova.network.os_vif_util [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.813 257641 DEBUG os_vif [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.814 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.814 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.815 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.818 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.819 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d1cf0d1-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.819 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d1cf0d1-b3, col_values=(('external_ids', {'iface-id': '0d1cf0d1-b379-4a62-8413-831aa8ff906b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:e3:35', 'vm-uuid': '4702f4ee-458d-4146-b9b2-70ecf718176c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.821 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:50 np0005539550 NetworkManager[49039]: <info>  [1764404630.8219] manager: (tap0d1cf0d1-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.823 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.826 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.827 257641 INFO os_vif [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3')#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.926 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.927 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.927 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:8e:e3:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.928 257641 INFO nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Using config drive#033[00m
Nov 29 03:23:50 np0005539550 nova_compute[257631]: 2025-11-29 08:23:50.966 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 770 KiB/s rd, 5.6 MiB/s wr, 165 op/s
Nov 29 03:23:51 np0005539550 nova_compute[257631]: 2025-11-29 08:23:51.631 257641 INFO nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Creating config drive at /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config#033[00m
Nov 29 03:23:51 np0005539550 nova_compute[257631]: 2025-11-29 08:23:51.636 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb676bqnn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:51.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:51 np0005539550 nova_compute[257631]: 2025-11-29 08:23:51.767 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb676bqnn" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:51 np0005539550 nova_compute[257631]: 2025-11-29 08:23:51.800 257641 DEBUG nova.storage.rbd_utils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:23:51 np0005539550 nova_compute[257631]: 2025-11-29 08:23:51.804 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.302289) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632302354, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1437, "num_deletes": 260, "total_data_size": 2218124, "memory_usage": 2256416, "flush_reason": "Manual Compaction"}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632314804, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2179333, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46374, "largest_seqno": 47810, "table_properties": {"data_size": 2172780, "index_size": 3624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14730, "raw_average_key_size": 20, "raw_value_size": 2159196, "raw_average_value_size": 2949, "num_data_blocks": 160, "num_entries": 732, "num_filter_entries": 732, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404513, "oldest_key_time": 1764404513, "file_creation_time": 1764404632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 12610 microseconds, and 5140 cpu microseconds.
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.314897) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2179333 bytes OK
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.314914) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.316646) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.316659) EVENT_LOG_v1 {"time_micros": 1764404632316655, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.316676) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2211825, prev total WAL file size 2211825, number of live WAL files 2.
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.317390) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353036' seq:72057594037927935, type:22 .. '6C6F676D0031373630' seq:0, type:0; will stop at (end)
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2128KB)], [101(9941KB)]
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632317462, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 12359772, "oldest_snapshot_seqno": -1}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 8216 keys, 12216172 bytes, temperature: kUnknown
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632402913, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 12216172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12161869, "index_size": 32687, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20549, "raw_key_size": 213474, "raw_average_key_size": 25, "raw_value_size": 12015796, "raw_average_value_size": 1462, "num_data_blocks": 1281, "num_entries": 8216, "num_filter_entries": 8216, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.403189) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 12216172 bytes
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.404541) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.5 rd, 142.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.7 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(11.3) write-amplify(5.6) OK, records in: 8752, records dropped: 536 output_compression: NoCompression
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.404563) EVENT_LOG_v1 {"time_micros": 1764404632404552, "job": 60, "event": "compaction_finished", "compaction_time_micros": 85536, "compaction_time_cpu_micros": 33096, "output_level": 6, "num_output_files": 1, "total_output_size": 12216172, "num_input_records": 8752, "num_output_records": 8216, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632405046, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404632407018, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.317205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.407143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.407151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.407155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.407158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:23:52.407161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.557 257641 DEBUG oslo_concurrency.processutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.558 257641 INFO nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Deleting local config drive /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config because it was imported into RBD.#033[00m
Nov 29 03:23:52 np0005539550 kernel: tap0d1cf0d1-b3: entered promiscuous mode
Nov 29 03:23:52 np0005539550 NetworkManager[49039]: <info>  [1764404632.6096] manager: (tap0d1cf0d1-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Nov 29 03:23:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:52Z|00568|binding|INFO|Claiming lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b for this chassis.
Nov 29 03:23:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:52Z|00569|binding|INFO|0d1cf0d1-b379-4a62-8413-831aa8ff906b: Claiming fa:16:3e:8e:e3:35 10.100.0.5
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.620 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:e3:35 10.100.0.5'], port_security=['fa:16:3e:8e:e3:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4702f4ee-458d-4146-b9b2-70ecf718176c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0d1cf0d1-b379-4a62-8413-831aa8ff906b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.621 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf0d1-b379-4a62-8413-831aa8ff906b in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.623 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:23:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:52Z|00570|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b ovn-installed in OVS
Nov 29 03:23:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:52Z|00571|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b up in Southbound
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.628 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.631 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 systemd-udevd[339130]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.634 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0b074dce-46a9-4a12-b166-835ded304ede]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.634 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.636 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.637 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[21a64694-d66f-4e91-927a-92d98fc0442f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.639 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[617061c7-6760-45b7-aa23-f809034d20de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 systemd-machined[216673]: New machine qemu-68-instance-00000081.
Nov 29 03:23:52 np0005539550 NetworkManager[49039]: <info>  [1764404632.6484] device (tap0d1cf0d1-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.648 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 NetworkManager[49039]: <info>  [1764404632.6491] device (tap0d1cf0d1-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.652 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[da0479f0-f043-4eec-bb64-d6989cf8b20c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 systemd[1]: Started Virtual Machine qemu-68-instance-00000081.
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.663 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c18a799b-2179-49a9-a81a-065833ce2d6f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.692 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[286b6dd1-1302-458a-a06f-21cb2c104aa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 systemd-udevd[339134]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.697 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[59b61232-96b9-4a51-aff4-5bef0830d672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 NetworkManager[49039]: <info>  [1764404632.6987] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.727 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f2866848-d377-4865-88bc-c07e3502f5db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.730 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4b9b290f-0b4b-4bb7-bd2e-6021c14c0881]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 NetworkManager[49039]: <info>  [1764404632.7521] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:23:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:52.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.761 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6c4d35b5-91c8-486b-b7d5-74217fee5392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.776 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fd752d2e-da00-4334-8ab0-34b0a76a9cc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768041, 'reachable_time': 17330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339163, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.790 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[83c99118-4200-46c8-bc63-28df94417783]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 768041, 'tstamp': 768041}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339164, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.812 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd3dbd2-203f-4eff-989f-b07bf9009627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768041, 'reachable_time': 17330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339165, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
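
The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-5da19f7d-... namespace: an RTM_NEWADDR for the link-local address fe80::f816:3eff:fe18:8e20 and an RTM_NEWLINK for tap5da19f7d-31, which IFLA_LINKINFO identifies as one end of a veth pair (IFLA_LINK points at peer ifindex 163). A minimal sketch of the same query with pyroute2 directly, assuming the namespace from the log still exists and root privileges:

    # Sketch: dump link and address state for interfaces inside a network
    # namespace with pyroute2, roughly what the privsep daemon returned above.
    from pyroute2 import NetNS

    NS = 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445'  # from the log

    with NetNS(NS) as ns:
        for msg in ns.get_links():              # RTM_NEWLINK dumps
            print(msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_OPERSTATE'),
                  msg.get_attr('IFLA_ADDRESS'))
        for msg in ns.get_addr(family=10):      # RTM_NEWADDR, AF_INET6 (10)
            print(msg.get_attr('IFA_ADDRESS'), '/', msg['prefixlen'])
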
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.842 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[25feebf4-01a8-4945-9768-da852b317ad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.894 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0f9ac070-ed1a-473f-81c0-387db32eed6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.895 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.896 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.896 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.898 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:23:52 np0005539550 NetworkManager[49039]: <info>  [1764404632.8988] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.900 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.901 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
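
The three single-command ovsdbapp transactions above rehome the tap port: drop tap5da19f7d-30 from br-ex if present, add it to br-int, and stamp external_ids:iface-id so ovn-controller can bind the port. A sketch of the same calls through ovsdbapp's Open vSwitch API, batched into one transaction for brevity and assuming the default local ovsdb-server socket:

    # Sketch of the DelPortCommand / AddPortCommand / DbSetCommand sequence
    # logged above, via ovsdbapp's Open vSwitch schema API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed socket path
    PORT = 'tap5da19f7d-30'
    IFACE_ID = 'd4f0104e-3913-4399-9086-37cf4d16e7c7'

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port(PORT, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', PORT, may_exist=True))
        txn.add(api.db_set('Interface', PORT,
                           ('external_ids', {'iface-id': IFACE_ID})))
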
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.902 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:23:52Z|00572|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:23:52 np0005539550 nova_compute[257631]: 2025-11-29 08:23:52.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.918 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
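
The ENOENT on the haproxy pidfile is the expected first-run case: the helper treats a missing file as "no value" and only logs the error, so the agent falls through to spawning a fresh proxy instead of reloading an existing one. Roughly, as a sketch of that tolerant read:

    # Sketch of the tolerant pidfile read behind the log line above: a
    # missing file is logged at debug level and treated as "no value".
    import logging

    LOG = logging.getLogger(__name__)

    def get_value_from_file(path, converter=None):
        try:
            with open(path) as f:
                data = f.read()
            return converter(data) if converter else data
        except IOError as e:
            LOG.debug('Unable to access %s; Error: %s', path, e)
            return None
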
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.919 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d063b6b8-23d3-4e88-8127-d7d9ede22508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.920 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:23:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:23:52.920 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
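
The rendered configuration binds the metadata VIP 169.254.169.254:80 inside the namespace, forwards requests to the Unix socket backend /var/lib/neutron/metadata_proxy, and injects the X-OVN-Network-ID header that identifies the requesting network. The rootwrap command above then launches haproxy in the namespace; stripped of sudo/neutron-rootwrap, the equivalent direct invocation (run as root) is:

    # Sketch: start haproxy inside the ovnmeta namespace with the config
    # dumped above, mirroring the rootwrap command in the log.
    import subprocess

    NETWORK = '5da19f7d-3aa0-41e7-88b0-b9ef17fa4445'
    CONF = f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK}.conf'

    subprocess.check_call([
        'ip', 'netns', 'exec', f'ovnmeta-{NETWORK}',
        'env', f'PROCESS_TAG=haproxy-{NETWORK}',
        'haproxy', '-f', CONF,
    ])
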
Nov 29 03:23:53 np0005539550 nova_compute[257631]: 2025-11-29 08:23:53.241 257641 DEBUG nova.network.neutron [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updated VIF entry in instance network info cache for port 0d1cf0d1-b379-4a62-8413-831aa8ff906b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:23:53 np0005539550 nova_compute[257631]: 2025-11-29 08:23:53.242 257641 DEBUG nova.network.neutron [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updating instance_info_cache with network_info: [{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:23:53 np0005539550 nova_compute[257631]: 2025-11-29 08:23:53.266 257641 DEBUG oslo_concurrency.lockutils [req-21ff4210-f5fa-4e0c-adc9-ecee9cd7babd req-bc6fd54f-2d2a-4e9c-ac38-20e0ae82681d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
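
The network_info blob cached above is plain JSON, one entry per VIF. A self-contained sketch that extracts the fields consumers typically need, using a trimmed copy of the logged blob:

    # Sketch: pull port id, MAC, MTU and fixed IPs out of a Nova
    # network_info cache entry like the one logged above (trimmed here).
    import json

    NETWORK_INFO_JSON = '''[{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b",
      "address": "fa:16:3e:8e:e3:35",
      "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445",
        "subnets": [{"cidr": "10.100.0.0/28",
          "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4}]}],
        "meta": {"mtu": 1442}}}]'''

    for vif in json.loads(NETWORK_INFO_JSON):
        net = vif['network']
        ips = [ip['address']
               for subnet in net['subnets'] for ip in subnet['ips']]
        print(vif['id'], vif['address'], net['meta']['mtu'], ips)
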
Nov 29 03:23:53 np0005539550 podman[339197]: 2025-11-29 08:23:53.290169646 +0000 UTC m=+0.047997763 container create 87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:23:53 np0005539550 systemd[1]: Started libpod-conmon-87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a.scope.
Nov 29 03:23:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:23:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d597fb980a450622cc834076d85f018b6a993f54c77489a5b8da04780290de2c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:23:53 np0005539550 podman[339197]: 2025-11-29 08:23:53.35992563 +0000 UTC m=+0.117753767 container init 87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:53 np0005539550 podman[339197]: 2025-11-29 08:23:53.267053721 +0000 UTC m=+0.024881858 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:23:53 np0005539550 podman[339197]: 2025-11-29 08:23:53.364962751 +0000 UTC m=+0.122790858 container start 87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:23:53 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [NOTICE]   (339217) : New worker (339219) forked
Nov 29 03:23:53 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [NOTICE]   (339217) : Loading success.
Nov 29 03:23:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 156 KiB/s rd, 4.0 MiB/s wr, 92 op/s
Nov 29 03:23:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:53.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
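
The radosgw "beast" access-log lines repeating through this window share a fixed layout: request pointer, client address, user, timestamp, request line, status, byte count, then latency. A sketch of parsing one of them into fields:

    # Sketch: split a radosgw beast access-log line like the ones above
    # into named fields with a regular expression.
    import re

    LINE = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:23:53.738 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')

    PAT = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<verb>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    m = PAT.match(LINE)
    print(m.group('client'), m.group('verb'), m.group('status'),
          float(m.group('latency')))
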
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.003 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404634.0032296, 4702f4ee-458d-4146-b9b2-70ecf718176c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.004 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.450 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404619.4491863, f09f0e1a-bc69-4cd4-b504-3ba084ffa875 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.451 257641 INFO nova.compute.manager [-] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:23:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:54.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.938 257641 DEBUG nova.compute.manager [req-d5080bce-fc15-4351-8107-3845be03d508 req-2a584e74-c433-4128-93bf-f6362c511f38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.939 257641 DEBUG oslo_concurrency.lockutils [req-d5080bce-fc15-4351-8107-3845be03d508 req-2a584e74-c433-4128-93bf-f6362c511f38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.939 257641 DEBUG oslo_concurrency.lockutils [req-d5080bce-fc15-4351-8107-3845be03d508 req-2a584e74-c433-4128-93bf-f6362c511f38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.939 257641 DEBUG oslo_concurrency.lockutils [req-d5080bce-fc15-4351-8107-3845be03d508 req-2a584e74-c433-4128-93bf-f6362c511f38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.939 257641 DEBUG nova.compute.manager [req-d5080bce-fc15-4351-8107-3845be03d508 req-2a584e74-c433-4128-93bf-f6362c511f38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Processing event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.940 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
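
The lines above show both halves of Nova's external-event handshake: the spawning thread registers and waits for network-vif-plugged-<port>, while the thread handling Neutron's callback pops the matching waiter under the per-instance "-events" lock and wakes it. A stripped-down sketch of that rendezvous (the real implementation tracks events per instance and carries result objects):

    # Sketch of the event rendezvous logged above: one side waits on a
    # named event, the other pops it under a lock and fires it.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}                   # name -> threading.Event

        def prepare(self, name):
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def pop(self, name):
            with self._lock:                    # the "-events" lock above
                ev = self._events.pop(name, None)
            if ev is None:
                print('No waiting events found dispatching', name)
            else:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare('network-vif-plugged-0d1cf0d1')
    threading.Timer(0.1, events.pop, ['network-vif-plugged-0d1cf0d1']).start()
    waiter.wait(timeout=5)
    print('Instance event wait completed')
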
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.944 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.949 257641 INFO nova.virt.libvirt.driver [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance spawned successfully.#033[00m
Nov 29 03:23:54 np0005539550 nova_compute[257631]: 2025-11-29 08:23:54.949 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.210 257641 DEBUG nova.compute.manager [None req-de32265f-8410-4090-ae8f-5e01882849e1 - - - - - -] [instance: f09f0e1a-bc69-4cd4-b504-3ba084ffa875] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.216 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.220 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.220 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.221 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.221 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.222 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.222 257641 DEBUG nova.virt.libvirt.driver [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.226 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.294 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.295 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404634.003731, 4702f4ee-458d-4146-b9b2-70ecf718176c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.295 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.317 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.320 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404634.9436078, 4702f4ee-458d-4146-b9b2-70ecf718176c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.320 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.328 257641 INFO nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Took 9.49 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.329 257641 DEBUG nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.364 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.367 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.415 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
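
Both "pending task (spawning). Skip." lines come from the same guard: while a task_state is set, the DB power_state (0, NOSTATE) is allowed to lag the hypervisor view (1, RUNNING) and sync_power_state takes no corrective action. As a sketch of that decision:

    # Sketch of the guard behind the "pending task ... Skip" lines above.
    NOSTATE, RUNNING = 0, 1   # nova power-state codes seen in the log

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            print('During sync_power_state the instance has a pending '
                  f'task ({task_state}). Skip.')
            return db_power_state
        return vm_power_state       # otherwise adopt the hypervisor's view

    sync_power_state(NOSTATE, RUNNING, 'spawning')   # skips, keeps 0
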
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.435 257641 INFO nova.compute.manager [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Took 10.53 seconds to build instance.#033[00m
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.454 257641 DEBUG oslo_concurrency.lockutils [None req-cb51987f-6eff-407c-ab62-609544a9bf9c 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 3.2 MiB/s wr, 67 op/s
Nov 29 03:23:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:55.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:55 np0005539550 nova_compute[257631]: 2025-11-29 08:23:55.821 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:56.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:23:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 215 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.652 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:23:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.919 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.920 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.920 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.920 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.920 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:57 np0005539550 nova_compute[257631]: 2025-11-29 08:23:57.920 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.141 257641 DEBUG nova.compute.manager [req-60f787cc-c027-4d98-bdbe-fc59f7211d47 req-3612546d-87bd-43f7-b4b7-1f5eac470db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.142 257641 DEBUG oslo_concurrency.lockutils [req-60f787cc-c027-4d98-bdbe-fc59f7211d47 req-3612546d-87bd-43f7-b4b7-1f5eac470db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.142 257641 DEBUG oslo_concurrency.lockutils [req-60f787cc-c027-4d98-bdbe-fc59f7211d47 req-3612546d-87bd-43f7-b4b7-1f5eac470db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.142 257641 DEBUG oslo_concurrency.lockutils [req-60f787cc-c027-4d98-bdbe-fc59f7211d47 req-3612546d-87bd-43f7-b4b7-1f5eac470db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.143 257641 DEBUG nova.compute.manager [req-60f787cc-c027-4d98-bdbe-fc59f7211d47 req-3612546d-87bd-43f7-b4b7-1f5eac470db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.143 257641 WARNING nova.compute.manager [req-60f787cc-c027-4d98-bdbe-fc59f7211d47 req-3612546d-87bd-43f7-b4b7-1f5eac470db5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.219 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.220 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Image id 4873db8c-b414-4e95-acd9-77caabebe722 yields fingerprint f62ef5f82502d01c82174408aec7f3ac942e2488 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.220 257641 INFO nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] image 4873db8c-b414-4e95-acd9-77caabebe722 at (/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488): checking#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.220 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] image 4873db8c-b414-4e95-acd9-77caabebe722 at (/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.223 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.223 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] d17ba263-68c7-4428-9d64-9a809e93a457 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.223 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] 4702f4ee-458d-4146-b9b2-70ecf718176c is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.224 257641 WARNING nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.224 257641 INFO nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Active base files: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.224 257641 INFO nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Removable base files: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.225 257641 INFO nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.225 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.226 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Nov 29 03:23:58 np0005539550 nova_compute[257631]: 2025-11-29 08:23:58.226 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
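
The image-cache pass names base files by the SHA-1 hex digest of the image id, which is why the empty id above maps to da39a3ee5e6b4b0d3255bfef95601890afd80709, the digest of the empty string. A quick check:

    # The cache "fingerprints" above are SHA-1 hex digests of the image id;
    # the empty id yields the well-known digest of the empty string.
    import hashlib

    for image_id in ('4873db8c-b414-4e95-acd9-77caabebe722', ''):
        print(repr(image_id), hashlib.sha1(image_id.encode()).hexdigest())
    # '' -> da39a3ee5e6b4b0d3255bfef95601890afd80709, matching the log
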
Nov 29 03:23:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:23:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:23:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:23:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:23:58.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:23:59
Nov 29 03:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr']
Nov 29 03:23:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:23:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Nov 29 03:23:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:23:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:23:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:23:59.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:23:59 np0005539550 nova_compute[257631]: 2025-11-29 08:23:59.767 257641 DEBUG nova.compute.manager [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:23:59 np0005539550 nova_compute[257631]: 2025-11-29 08:23:59.847 257641 INFO nova.compute.manager [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] instance snapshotting#033[00m
Nov 29 03:24:00 np0005539550 nova_compute[257631]: 2025-11-29 08:24:00.100 257641 INFO nova.virt.libvirt.driver [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Beginning live snapshot process#033[00m
Nov 29 03:24:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:00.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:00 np0005539550 nova_compute[257631]: 2025-11-29 08:24:00.825 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:01 np0005539550 podman[339327]: 2025-11-29 08:24:01.326717599 +0000 UTC m=+0.060185716 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Nov 29 03:24:01 np0005539550 podman[339326]: 2025-11-29 08:24:01.360722115 +0000 UTC m=+0.093042674 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
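
The health_status=healthy events come from podman's periodic healthcheck runs of the mounted /openstack/healthcheck script; the same state can be read back out of podman inspect. A sketch, hedged because the JSON key differs between podman releases:

    # Sketch: read a container's health state from `podman inspect`,
    # the data behind the health_status=healthy events above.
    import json
    import subprocess

    def health(name):
        data = json.loads(subprocess.check_output(['podman', 'inspect', name]))
        state = data[0]['State']
        # key is 'Health' on current podman, 'Healthcheck' on older releases
        hc = state.get('Health') or state.get('Healthcheck') or {}
        return hc.get('Status', 'none')

    print(health('ovn_metadata_agent'))
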
Nov 29 03:24:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 318 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 122 op/s
Nov 29 03:24:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:01 np0005539550 nova_compute[257631]: 2025-11-29 08:24:01.936 257641 DEBUG nova.virt.libvirt.imagebackend [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:24:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:02 np0005539550 nova_compute[257631]: 2025-11-29 08:24:02.310 257641 DEBUG nova.storage.rbd_utils [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(94ac4f6d42f24cb5968620094de45b54) on rbd image(4702f4ee-458d-4146-b9b2-70ecf718176c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:24:02 np0005539550 nova_compute[257631]: 2025-11-29 08:24:02.654 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Nov 29 03:24:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Nov 29 03:24:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:02.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:02 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Nov 29 03:24:03 np0005539550 nova_compute[257631]: 2025-11-29 08:24:03.136 257641 DEBUG nova.storage.rbd_utils [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning vms/4702f4ee-458d-4146-b9b2-70ecf718176c_disk@94ac4f6d42f24cb5968620094de45b54 to images/c5df7ff6-8c6a-4656-a8c2-618a39624a42 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:24:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 23 KiB/s wr, 124 op/s
Nov 29 03:24:03 np0005539550 nova_compute[257631]: 2025-11-29 08:24:03.606 257641 DEBUG nova.storage.rbd_utils [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] flattening images/c5df7ff6-8c6a-4656-a8c2-618a39624a42 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:24:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:03.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c426b5b5-8a17-42e9-84c1-f73108225538 does not exist
Nov 29 03:24:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 76afdbd6-71d0-4ab1-be34-b18ae2deb502 does not exist
Nov 29 03:24:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1049f984-14b3-48fe-9383-51dbbced2bc0 does not exist
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:24:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
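
This burst of handle_command/audit lines shows the mgr (cephadm and balancer modules) driving the mon with JSON-framed commands such as "config generate-minimal-conf", "auth get" and "osd tree". The same interface is reachable from librados; a sketch, assuming a readable ceph.conf and client.admin keyring:

    # Sketch: issue one of the mon commands audited above through librados.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        cmd = json.dumps({'prefix': 'osd tree',
                          'states': ['destroyed'], 'format': 'json'})
        ret, out, err = cluster.mon_command(cmd, b'')
        print(ret, json.loads(out) if out else err)
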
Nov 29 03:24:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:04.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:04 np0005539550 nova_compute[257631]: 2025-11-29 08:24:04.808 257641 DEBUG nova.storage.rbd_utils [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] removing snapshot(94ac4f6d42f24cb5968620094de45b54) on rbd image(4702f4ee-458d-4146-b9b2-70ecf718176c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
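
The lines from 08:24:02 through 08:24:04 trace the RBD side of the live snapshot: snapshot vms/4702f4ee..._disk, clone that snapshot into images/c5df7ff6..., flatten the clone so it no longer depends on the parent, then delete the snapshot. The same sequence through the python rbd binding, as a sketch (clone v1 needs the protect/unprotect pair around the parent snapshot):

    # Sketch of the snapshot -> clone -> flatten -> cleanup sequence
    # logged above; pool, image and snapshot names come from the log.
    import rados
    import rbd

    DISK = '4702f4ee-458d-4146-b9b2-70ecf718176c_disk'
    SNAP = '94ac4f6d42f24cb5968620094de45b54'
    CLONE = 'c5df7ff6-8c6a-4656-a8c2-618a39624a42'

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        vms = cluster.open_ioctx('vms')
        images = cluster.open_ioctx('images')
        try:
            with rbd.Image(vms, DISK) as disk:
                disk.create_snap(SNAP)
                disk.protect_snap(SNAP)
            rbd.RBD().clone(vms, DISK, SNAP, images, CLONE)
            with rbd.Image(images, CLONE) as clone:
                clone.flatten()             # drop dependency on the parent
            with rbd.Image(vms, DISK) as disk:
                disk.unprotect_snap(SNAP)
                disk.remove_snap(SNAP)
        finally:
            vms.close()
            images.close()
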
Nov 29 03:24:04 np0005539550 podman[339758]: 2025-11-29 08:24:04.959012591 +0000 UTC m=+0.056946288 container create a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:24:04 np0005539550 systemd[1]: Started libpod-conmon-a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7.scope.
Nov 29 03:24:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:05 np0005539550 podman[339758]: 2025-11-29 08:24:04.932406132 +0000 UTC m=+0.030339919 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:05 np0005539550 podman[339758]: 2025-11-29 08:24:05.041631584 +0000 UTC m=+0.139565371 container init a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:24:05 np0005539550 podman[339758]: 2025-11-29 08:24:05.04897843 +0000 UTC m=+0.146912127 container start a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:24:05 np0005539550 podman[339758]: 2025-11-29 08:24:05.053060818 +0000 UTC m=+0.150994555 container attach a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:24:05 np0005539550 affectionate_chandrasekhar[339774]: 167 167
Nov 29 03:24:05 np0005539550 systemd[1]: libpod-a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7.scope: Deactivated successfully.
Nov 29 03:24:05 np0005539550 podman[339758]: 2025-11-29 08:24:05.060419955 +0000 UTC m=+0.158353682 container died a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cf1e158e7195422b51529d8f4d6e1ce90efabdb7dc8bf4b01c2a6647c22145f4-merged.mount: Deactivated successfully.
Nov 29 03:24:05 np0005539550 podman[339758]: 2025-11-29 08:24:05.099329789 +0000 UTC m=+0.197263476 container remove a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chandrasekhar, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:24:05 np0005539550 systemd[1]: libpod-conmon-a8fab3e8b7d5801cf4921f2fc397d4ed8ba4fbe7f79ae15469081e15373e7be7.scope: Deactivated successfully.
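[Annotation] The create → init → start → attach → died → remove sequence above is a one-shot cephadm helper container that lives roughly 150 ms and prints "167 167" (the ceph uid/gid pair) before exiting. It is roughly a podman run --rm of the same image; the stat probe below is an assumption about what the helper executes, shown only to illustrate the pattern:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Assumed probe: uid/gid discovery via stat inside the image.
    out = subprocess.run(["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
                          "-c", "%u %g", "/var/lib/ceph"],
                         capture_output=True, text=True, check=True).stdout
    print(out.strip())  # e.g. "167 167"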
Nov 29 03:24:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:24:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:24:05 np0005539550 podman[339798]: 2025-11-29 08:24:05.306173334 +0000 UTC m=+0.062624585 container create b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:24:05 np0005539550 systemd[1]: Started libpod-conmon-b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce.scope.
Nov 29 03:24:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:05 np0005539550 podman[339798]: 2025-11-29 08:24:05.277189238 +0000 UTC m=+0.033640539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1088071addd8243057ea2bb6fc219f28527ffa2fd57f5f6cce8e4a6531e44f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1088071addd8243057ea2bb6fc219f28527ffa2fd57f5f6cce8e4a6531e44f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1088071addd8243057ea2bb6fc219f28527ffa2fd57f5f6cce8e4a6531e44f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1088071addd8243057ea2bb6fc219f28527ffa2fd57f5f6cce8e4a6531e44f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1088071addd8243057ea2bb6fc219f28527ffa2fd57f5f6cce8e4a6531e44f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
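[Annotation] The xfs "timestamps until 2038" notices fire because these overlay remounts sit on an xfs filesystem using 32-bit inode timestamps capped at 0x7fffffff seconds; filesystems made with the bigtime feature do not carry the limit. The cap in the message is easy to verify:

    from datetime import datetime, timezone

    # 0x7fffffff seconds past the epoch is the y2038 limit quoted above.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00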
Nov 29 03:24:05 np0005539550 podman[339798]: 2025-11-29 08:24:05.403078469 +0000 UTC m=+0.159529750 container init b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:24:05 np0005539550 podman[339798]: 2025-11-29 08:24:05.413155451 +0000 UTC m=+0.169606702 container start b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elion, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:24:05 np0005539550 podman[339798]: 2025-11-29 08:24:05.4164227 +0000 UTC m=+0.172873981 container attach b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:24:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 301 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 130 KiB/s wr, 161 op/s
Nov 29 03:24:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:05 np0005539550 nova_compute[257631]: 2025-11-29 08:24:05.827 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Nov 29 03:24:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Nov 29 03:24:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Nov 29 03:24:06 np0005539550 recursing_elion[339814]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:24:06 np0005539550 recursing_elion[339814]: --> relative data size: 1.0
Nov 29 03:24:06 np0005539550 recursing_elion[339814]: --> All data devices are unavailable
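[Annotation] recursing_elion is a ceph-volume batch report: one LVM data device was passed in, and it is already consumed (the lvm list output further below shows it backing OSD 0), so "All data devices are unavailable" means no new OSDs will be created. A toy check over those "-->" lines; the parsing approach is illustrative only:

    # Illustrative only: scan the report lines above for availability.
    report = [
        "--> passed data devices: 0 physical, 1 LVM",
        "--> relative data size: 1.0",
        "--> All data devices are unavailable",
    ]
    will_create_osds = not any("unavailable" in line for line in report)
    print(will_create_osds)  # False: the lone LV is already in use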
Nov 29 03:24:06 np0005539550 systemd[1]: libpod-b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce.scope: Deactivated successfully.
Nov 29 03:24:06 np0005539550 podman[339798]: 2025-11-29 08:24:06.237359923 +0000 UTC m=+0.993811174 container died b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:24:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1e1088071addd8243057ea2bb6fc219f28527ffa2fd57f5f6cce8e4a6531e44f-merged.mount: Deactivated successfully.
Nov 29 03:24:06 np0005539550 nova_compute[257631]: 2025-11-29 08:24:06.270 257641 DEBUG nova.storage.rbd_utils [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(snap) on rbd image(c5df7ff6-8c6a-4656-a8c2-618a39624a42) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 29 03:24:06 np0005539550 podman[339798]: 2025-11-29 08:24:06.289652878 +0000 UTC m=+1.046104129 container remove b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_elion, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:24:06 np0005539550 systemd[1]: libpod-conmon-b82548dea8808f80a8036cc6acdac7c8b8563b882e8f7aef4dce079c2cb869ce.scope: Deactivated successfully.
Nov 29 03:24:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:06.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:06 np0005539550 podman[340000]: 2025-11-29 08:24:06.857656361 +0000 UTC m=+0.038713250 container create 9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_spence, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:24:06 np0005539550 systemd[1]: Started libpod-conmon-9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4.scope.
Nov 29 03:24:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:06 np0005539550 podman[340000]: 2025-11-29 08:24:06.931104244 +0000 UTC m=+0.112161183 container init 9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_spence, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:06 np0005539550 podman[340000]: 2025-11-29 08:24:06.938206835 +0000 UTC m=+0.119263724 container start 9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:24:06 np0005539550 podman[340000]: 2025-11-29 08:24:06.842566639 +0000 UTC m=+0.023623538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:06 np0005539550 nostalgic_spence[340017]: 167 167
Nov 29 03:24:06 np0005539550 systemd[1]: libpod-9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4.scope: Deactivated successfully.
Nov 29 03:24:06 np0005539550 podman[340000]: 2025-11-29 08:24:06.942785115 +0000 UTC m=+0.123842004 container attach 9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_spence, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:06 np0005539550 podman[340000]: 2025-11-29 08:24:06.943234885 +0000 UTC m=+0.124291774 container died 9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:24:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3e3c62d077b9a7cbf0ea2e49221e7a14411a5d0cc50f7644d4952f92bf8fe11b-merged.mount: Deactivated successfully.
Nov 29 03:24:06 np0005539550 podman[340000]: 2025-11-29 08:24:06.980737966 +0000 UTC m=+0.161794865 container remove 9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:07 np0005539550 systemd[1]: libpod-conmon-9a399157f9fc674819baeb23302f886e95f4d6de416fb42f3821c6fdf88779b4.scope: Deactivated successfully.
Nov 29 03:24:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Nov 29 03:24:07 np0005539550 podman[340040]: 2025-11-29 08:24:07.184086517 +0000 UTC m=+0.037326367 container create b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:24:07 np0005539550 systemd[1]: Started libpod-conmon-b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1.scope.
Nov 29 03:24:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d86d1a67387161fa4e8041ca1cb41acb74db2fc4d05cdbb6731ff2b89d2cf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d86d1a67387161fa4e8041ca1cb41acb74db2fc4d05cdbb6731ff2b89d2cf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d86d1a67387161fa4e8041ca1cb41acb74db2fc4d05cdbb6731ff2b89d2cf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d86d1a67387161fa4e8041ca1cb41acb74db2fc4d05cdbb6731ff2b89d2cf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:07 np0005539550 podman[340040]: 2025-11-29 08:24:07.169384804 +0000 UTC m=+0.022624674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:07 np0005539550 podman[340040]: 2025-11-29 08:24:07.267448287 +0000 UTC m=+0.120688187 container init b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:24:07 np0005539550 podman[340040]: 2025-11-29 08:24:07.279137998 +0000 UTC m=+0.132377868 container start b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shamir, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:24:07 np0005539550 podman[340040]: 2025-11-29 08:24:07.282840977 +0000 UTC m=+0.136080857 container attach b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:24:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 329 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.3 MiB/s wr, 220 op/s
Nov 29 03:24:07 np0005539550 nova_compute[257631]: 2025-11-29 08:24:07.684 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:07.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:08 np0005539550 boring_shamir[340057]: {
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:    "0": [
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:        {
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "devices": [
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "/dev/loop3"
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            ],
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "lv_name": "ceph_lv0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "lv_size": "7511998464",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "name": "ceph_lv0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "tags": {
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.cluster_name": "ceph",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.crush_device_class": "",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.encrypted": "0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.osd_id": "0",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.type": "block",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:                "ceph.vdo": "0"
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            },
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "type": "block",
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:            "vg_name": "ceph_vg0"
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:        }
Nov 29 03:24:08 np0005539550 boring_shamir[340057]:    ]
Nov 29 03:24:08 np0005539550 boring_shamir[340057]: }
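[Annotation] The JSON printed by boring_shamir is ceph-volume lvm list output: OSD id 0 is backed by LV ceph_vg0/ceph_lv0 on /dev/loop3, with the OSD metadata carried as LV tags. A sketch walking that structure; the file name is a hypothetical capture of the block above:

    import json

    with open("lvm_list.json") as fh:  # hypothetical capture of the stdout above
        report = json.load(fh)
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])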
Nov 29 03:24:08 np0005539550 systemd[1]: libpod-b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1.scope: Deactivated successfully.
Nov 29 03:24:08 np0005539550 podman[340040]: 2025-11-29 08:24:08.048184667 +0000 UTC m=+0.901424517 container died b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shamir, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:24:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d0d86d1a67387161fa4e8041ca1cb41acb74db2fc4d05cdbb6731ff2b89d2cf1-merged.mount: Deactivated successfully.
Nov 29 03:24:08 np0005539550 podman[340040]: 2025-11-29 08:24:08.099821986 +0000 UTC m=+0.953061836 container remove b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:24:08 np0005539550 systemd[1]: libpod-conmon-b08a10f4846653b071fae26cf7d68ed5aa938f7e928706d93ab638360b9aa6c1.scope: Deactivated successfully.
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:24:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Nov 29 03:24:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Nov 29 03:24:08 np0005539550 podman[340219]: 2025-11-29 08:24:08.770951725 +0000 UTC m=+0.048263400 container create 06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:24:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:08.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:08 np0005539550 systemd[1]: Started libpod-conmon-06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6.scope.
Nov 29 03:24:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:08 np0005539550 podman[340219]: 2025-11-29 08:24:08.843885835 +0000 UTC m=+0.121197510 container init 06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:24:08 np0005539550 podman[340219]: 2025-11-29 08:24:08.849621703 +0000 UTC m=+0.126933368 container start 06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:08 np0005539550 podman[340219]: 2025-11-29 08:24:08.755157836 +0000 UTC m=+0.032469531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:08 np0005539550 frosty_engelbart[340235]: 167 167
Nov 29 03:24:08 np0005539550 systemd[1]: libpod-06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6.scope: Deactivated successfully.
Nov 29 03:24:08 np0005539550 podman[340219]: 2025-11-29 08:24:08.854622523 +0000 UTC m=+0.131934228 container attach 06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:24:08 np0005539550 podman[340219]: 2025-11-29 08:24:08.855198627 +0000 UTC m=+0.132510292 container died 06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-529f3b7f7c0100fc78d7805966013d32287483f31f971f1eef1f6bd3c9cc9991-merged.mount: Deactivated successfully.
Nov 29 03:24:08 np0005539550 podman[340219]: 2025-11-29 08:24:08.891299243 +0000 UTC m=+0.168610908 container remove 06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_engelbart, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:08 np0005539550 systemd[1]: libpod-conmon-06ac02371a402c111ff58b34c75538b1285eee0aa00626b2b782cf0d223a96d6.scope: Deactivated successfully.
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:24:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:24:09 np0005539550 podman[340260]: 2025-11-29 08:24:09.10989013 +0000 UTC m=+0.062746287 container create 1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:24:09 np0005539550 systemd[1]: Started libpod-conmon-1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767.scope.
Nov 29 03:24:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bab9752910fd09b0c4c49d83d75d7b72fd577a423f31be03654b3256c24489/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539550 podman[340260]: 2025-11-29 08:24:09.085950065 +0000 UTC m=+0.038806292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:24:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bab9752910fd09b0c4c49d83d75d7b72fd577a423f31be03654b3256c24489/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bab9752910fd09b0c4c49d83d75d7b72fd577a423f31be03654b3256c24489/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6bab9752910fd09b0c4c49d83d75d7b72fd577a423f31be03654b3256c24489/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:09 np0005539550 podman[340260]: 2025-11-29 08:24:09.193141048 +0000 UTC m=+0.145997215 container init 1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:24:09 np0005539550 podman[340260]: 2025-11-29 08:24:09.201748335 +0000 UTC m=+0.154604482 container start 1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:24:09 np0005539550 podman[340260]: 2025-11-29 08:24:09.204779647 +0000 UTC m=+0.157635794 container attach 1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:24:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.2 MiB/s wr, 190 op/s
Nov 29 03:24:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:09.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]: {
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]:        "osd_id": 0,
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]:        "type": "bluestore"
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]:    }
Nov 29 03:24:10 np0005539550 upbeat_ramanujan[340276]: }
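[Annotation] upbeat_ramanujan prints the matching bluestore inventory keyed by OSD fsid (the shape of ceph-volume raw list): OSD 0's data device is the activated mapper path for the same LV. A sketch mapping osd_id to device; the file name is again a hypothetical capture:

    import json

    with open("raw_list.json") as fh:  # hypothetical capture of the JSON above
        inventory = json.load(fh)
    devices = {info["osd_id"]: info["device"]
               for info in inventory.values()
               if info["type"] == "bluestore"}
    print(devices)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}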
Nov 29 03:24:10 np0005539550 systemd[1]: libpod-1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767.scope: Deactivated successfully.
Nov 29 03:24:10 np0005539550 podman[340260]: 2025-11-29 08:24:10.110995588 +0000 UTC m=+1.063851735 container died 1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Nov 29 03:24:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b6bab9752910fd09b0c4c49d83d75d7b72fd577a423f31be03654b3256c24489-merged.mount: Deactivated successfully.
Nov 29 03:24:10 np0005539550 podman[340260]: 2025-11-29 08:24:10.172370511 +0000 UTC m=+1.125226658 container remove 1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_ramanujan, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:24:10 np0005539550 systemd[1]: libpod-conmon-1c67a26fc76cfe1f7dee1bcc1aa9112ac202c00f7966cf94b3423a62e41ac767.scope: Deactivated successfully.
Nov 29 03:24:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:24:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:24:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 51b2e572-eb89-4eba-b6cd-75a3193507d4 does not exist
Nov 29 03:24:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a0f7a36e-547f-4064-b3c0-84e16203fc2c does not exist
Nov 29 03:24:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 991172e9-7e52-4e5a-b513-70da1e6583d0 does not exist
Nov 29 03:24:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:10Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8e:e3:35 10.100.0.5
Nov 29 03:24:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:10Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:e3:35 10.100.0.5
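[Annotation] ovn-controller's pinctrl thread answers DHCP natively (no dnsmasq involved), completing the OFFER/ACK handshake for 10.100.0.5 here. The fa:16:3e prefix marks a Neutron-assigned instance MAC:

    # fa:16:3e is Neutron's default base MAC for instance ports.
    mac = "fa:16:3e:8e:e3:35"
    print(mac.lower().startswith("fa:16:3e"))  # True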
Nov 29 03:24:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:10 np0005539550 nova_compute[257631]: 2025-11-29 08:24:10.831 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:24:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 364 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.3 MiB/s wr, 253 op/s
Nov 29 03:24:11 np0005539550 nova_compute[257631]: 2025-11-29 08:24:11.660 257641 INFO nova.virt.libvirt.driver [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Snapshot image upload complete
Nov 29 03:24:11 np0005539550 nova_compute[257631]: 2025-11-29 08:24:11.662 257641 INFO nova.compute.manager [None req-ea8e3597-eaaa-42c0-bad5-b3b174d9e4f0 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Took 11.81 seconds to snapshot the instance on the hypervisor.
Nov 29 03:24:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:11.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:12 np0005539550 nova_compute[257631]: 2025-11-29 08:24:12.685 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:13 np0005539550 nova_compute[257631]: 2025-11-29 08:24:13.298 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 369 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 5.7 MiB/s wr, 238 op/s
Nov 29 03:24:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:13.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:14 np0005539550 nova_compute[257631]: 2025-11-29 08:24:14.805 257641 INFO nova.compute.manager [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Rescuing#033[00m
Nov 29 03:24:14 np0005539550 nova_compute[257631]: 2025-11-29 08:24:14.805 257641 DEBUG oslo_concurrency.lockutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:14 np0005539550 nova_compute[257631]: 2025-11-29 08:24:14.805 257641 DEBUG oslo_concurrency.lockutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:14 np0005539550 nova_compute[257631]: 2025-11-29 08:24:14.805 257641 DEBUG nova.network.neutron [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:15 np0005539550 podman[340364]: 2025-11-29 08:24:15.358500828 +0000 UTC m=+0.092088971 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
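[Editor's note] The podman line above is a periodic health-check event for the ovn_controller container: the configured test (/openstack/healthcheck, per the config_data in the event) ran inside the container and reported healthy with a failing streak of 0. The same check can be run on demand; a sketch (container name copied from the log):

    import subprocess

    # Exit status 0 means the container's configured healthcheck
    # (here: /openstack/healthcheck) reported healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")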
Nov 29 03:24:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 160 op/s
Nov 29 03:24:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:15.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:15 np0005539550 nova_compute[257631]: 2025-11-29 08:24:15.833 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:16.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Nov 29 03:24:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Nov 29 03:24:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Nov 29 03:24:17 np0005539550 nova_compute[257631]: 2025-11-29 08:24:17.268 257641 DEBUG nova.network.neutron [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updating instance_info_cache with network_info: [{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:17 np0005539550 nova_compute[257631]: 2025-11-29 08:24:17.341 257641 DEBUG oslo_concurrency.lockutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:17Z|00573|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:24:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:17Z|00574|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:24:17 np0005539550 nova_compute[257631]: 2025-11-29 08:24:17.448 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.8 MiB/s wr, 128 op/s
Nov 29 03:24:17 np0005539550 nova_compute[257631]: 2025-11-29 08:24:17.687 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:17.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:17 np0005539550 nova_compute[257631]: 2025-11-29 08:24:17.781 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:24:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:18.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:18.956 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:18.957 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:18.958 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
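[Editor's note] The acquiring/acquired/released triplets logged here by ovn_metadata_agent (and by nova_compute around "refresh_cache-..." above) are the standard oslo.concurrency pattern: a named lock wrapped around a critical section, with wait and hold times reported at DEBUG. A minimal sketch of both spellings (lock names copied from the log; the bodies are placeholders):

    from oslo_concurrency import lockutils

    # Context-manager form, as around "refresh_cache-<instance uuid>":
    with lockutils.lock("refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c"):
        pass  # rebuild the instance network info cache here

    # Decorator form, as used for _check_child_processes:
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # poll child process liveness here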
Nov 29 03:24:19 np0005539550 nova_compute[257631]: 2025-11-29 08:24:19.170 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 380 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 140 op/s
Nov 29 03:24:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:19.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006508962637260376 of space, bias 1.0, pg target 1.9526887911781128 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001541998185371301 of space, bias 1.0, pg target 0.461057457426019 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:24:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
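[Editor's note] The pg_autoscaler pass above is plain arithmetic per pool: raw pg target = capacity ratio x bias x (OSD count x target PGs per OSD). With the 3 OSDs in this cluster and the default mon_target_pg_per_osd of 100, that factor is 300, which reproduces the 'vms' and '.mgr' numbers exactly; the other pools land within a fraction of a percent of it, presumably from effective-capacity accounting. The raw target is then quantized to a power of two and clamped by per-pool minimums, which is why tiny targets still read "quantized to 32 (current 32)". A sketch of the first step (pool figures copied from the log; the factor of 300 is inferred from them):

    # Reproduce the pg_autoscaler's raw pg target from the logged inputs.
    OSDS, TARGET_PG_PER_OSD = 3, 100   # default mon_target_pg_per_osd

    def raw_pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * OSDS * TARGET_PG_PER_OSD

    print(raw_pg_target(0.006508962637260376, 1.0))    # vms  -> 1.9526887911781128
    print(raw_pg_target(2.0538165363856318e-05, 1.0))  # .mgr -> 0.0061614496091568954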
Nov 29 03:24:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:20.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:20 np0005539550 nova_compute[257631]: 2025-11-29 08:24:20.801 257641 INFO nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:24:20 np0005539550 nova_compute[257631]: 2025-11-29 08:24:20.835 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:21 np0005539550 kernel: tap0d1cf0d1-b3 (unregistering): left promiscuous mode
Nov 29 03:24:21 np0005539550 NetworkManager[49039]: <info>  [1764404661.2460] device (tap0d1cf0d1-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:21Z|00575|binding|INFO|Releasing lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b from this chassis (sb_readonly=0)
Nov 29 03:24:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:21Z|00576|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b down in Southbound
Nov 29 03:24:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:21Z|00577|binding|INFO|Removing iface tap0d1cf0d1-b3 ovn-installed in OVS
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.303 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:21.310 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:e3:35 10.100.0.5'], port_security=['fa:16:3e:8e:e3:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4702f4ee-458d-4146-b9b2-70ecf718176c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0d1cf0d1-b379-4a62-8413-831aa8ff906b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:21.311 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf0d1-b379-4a62-8413-831aa8ff906b in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:24:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:21.312 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:24:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:21.314 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c52b8b31-bd09-4219-8619-81e8c7b3c060]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:21.315 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.317 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:21 np0005539550 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 29 03:24:21 np0005539550 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000081.scope: Consumed 15.030s CPU time.
Nov 29 03:24:21 np0005539550 systemd-machined[216673]: Machine qemu-68-instance-00000081 terminated.
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.423 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.429 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.440 257641 INFO nova.virt.libvirt.driver [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance destroyed successfully.#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.441 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.472 257641 INFO nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Attempting a stable device rescue#033[00m
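[Editor's note] From here the driver rebuilds the guest for a stable device rescue: the original disks keep their device names and the rescue image is attached as an extra disk (the 'disk.rescue' on bus 'scsi' in the disk_info below), rather than swapping the boot disk as in legacy rescue. The operation was requested through Nova's rescue API; a sketch of the equivalent client call (IDs copied from the log; --image availability depends on the python-openstackclient version):

    import subprocess

    # Ask Nova to rescue the server with the given rescue image. Stable
    # device rescue is selected by the image's hw_rescue_bus/hw_rescue_device
    # properties, not by a dedicated flag.
    subprocess.run([
        "openstack", "server", "rescue",
        "--image", "c5df7ff6-8c6a-4656-a8c2-618a39624a42",
        "4702f4ee-458d-4146-b9b2-70ecf718176c",
    ], check=True)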
Nov 29 03:24:21 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [NOTICE]   (339217) : haproxy version is 2.8.14-c23fe91
Nov 29 03:24:21 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [NOTICE]   (339217) : path to executable is /usr/sbin/haproxy
Nov 29 03:24:21 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [WARNING]  (339217) : Exiting Master process...
Nov 29 03:24:21 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [WARNING]  (339217) : Exiting Master process...
Nov 29 03:24:21 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [ALERT]    (339217) : Current worker (339219) exited with code 143 (Terminated)
Nov 29 03:24:21 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[339213]: [WARNING]  (339217) : All workers exited. Exiting... (0)
Nov 29 03:24:21 np0005539550 systemd[1]: libpod-87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a.scope: Deactivated successfully.
Nov 29 03:24:21 np0005539550 podman[340465]: 2025-11-29 08:24:21.528886941 +0000 UTC m=+0.111640690 container died 87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:24:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 394 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 3.0 MiB/s wr, 101 op/s
Nov 29 03:24:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:24:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d597fb980a450622cc834076d85f018b6a993f54c77489a5b8da04780290de2c-merged.mount: Deactivated successfully.
Nov 29 03:24:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:21.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:21 np0005539550 podman[340465]: 2025-11-29 08:24:21.809393024 +0000 UTC m=+0.392146813 container cleanup 87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:24:21 np0005539550 systemd[1]: libpod-conmon-87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a.scope: Deactivated successfully.
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.911 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.916 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.917 257641 INFO nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Creating image(s)#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.947 257641 DEBUG nova.storage.rbd_utils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:21 np0005539550 nova_compute[257631]: 2025-11-29 08:24:21.950 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.000 257641 DEBUG nova.storage.rbd_utils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:22 np0005539550 podman[340504]: 2025-11-29 08:24:22.008944354 +0000 UTC m=+0.171317683 container remove 87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.017 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[21d64ab8-ab38-4255-8736-0ca7fe11b319]: (4, ('Sat Nov 29 08:24:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a)\n87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a\nSat Nov 29 08:24:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a)\n87c419b6ec0a391c085ab47a37615d4b8d36cd7b85ea3703e5f29993dfa3d77a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.019 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63ae3dce-2e6b-4939-8b84-9a1232447edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.020 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:22 np0005539550 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.031 257641 DEBUG nova.storage.rbd_utils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.035 257641 DEBUG oslo_concurrency.lockutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "6452e5b89715aff3587ef11d4439e8e4e5e6470d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.036 257641 DEBUG oslo_concurrency.lockutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "6452e5b89715aff3587ef11d4439e8e4e5e6470d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.038 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.045 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[98fd7ca0-405c-4f2e-bd45-d8fdd755307e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.059 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[67541560-720c-4510-9168-31eff1b49589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.060 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[42a85c20-4136-4cd2-8b6d-0a1394bfab49]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.076 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[deaff335-4020-4ade-aed2-5ea0d66bfe09]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 768035, 'reachable_time': 28158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340577, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:22 np0005539550 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.081 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:24:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:22.081 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[dac1c6d0-ea81-4912-84dd-894bde6cc985]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
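[Editor's note] With the last VIF on the network gone, the metadata agent's privsep helper removes the per-network namespace (the remove_netns call in neutron/privileged/agent/linux/ip_lib.py above). Neutron does this through pyroute2; a minimal standalone sketch of the same cleanup (namespace name copied from the log; requires root privileges):

    from pyroute2 import netns

    ns = "ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445"
    if ns in netns.listnetns():
        netns.remove(ns)   # equivalent to: ip netns delete <ns>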
Nov 29 03:24:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.351 257641 DEBUG nova.virt.libvirt.imagebackend [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c5df7ff6-8c6a-4656-a8c2-618a39624a42/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c5df7ff6-8c6a-4656-a8c2-618a39624a42/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.407 257641 DEBUG nova.virt.libvirt.imagebackend [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c5df7ff6-8c6a-4656-a8c2-618a39624a42/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.407 257641 DEBUG nova.storage.rbd_utils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning images/c5df7ff6-8c6a-4656-a8c2-618a39624a42@snap to None/4702f4ee-458d-4146-b9b2-70ecf718176c_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
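[Editor's note] The rescue disk is not copied byte-for-byte: nova clones it copy-on-write from the Glance image's protected RBD snapshot, which is why the three "does not exist" probes above are expected (they confirm the clone target is absent before creating it), and why the whole cache step held its lock for only 0.702 s. The "None/..." in the clone line is just the destination pool rendered before resolution. A CLI sketch of the same clone (the 'vms' destination pool is an assumption based on this host's pool names; --id matches the ceph client id in the log):

    import subprocess

    # Copy-on-write clone of the protected Glance snapshot into the nova pool.
    subprocess.run([
        "rbd", "clone", "--id", "openstack",
        "images/c5df7ff6-8c6a-4656-a8c2-618a39624a42@snap",
        "vms/4702f4ee-458d-4146-b9b2-70ecf718176c_disk.rescue",
    ], check=True)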
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.729 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.738 257641 DEBUG oslo_concurrency.lockutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "6452e5b89715aff3587ef11d4439e8e4e5e6470d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.790 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:22.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.823 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.825 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Start _get_guest_xml network_info=[{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:8e:e3:35"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'c5df7ff6-8c6a-4656-a8c2-618a39624a42', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.825 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.861 257641 WARNING nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.867 257641 DEBUG nova.virt.libvirt.host [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.867 257641 DEBUG nova.virt.libvirt.host [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.872 257641 DEBUG nova.virt.libvirt.host [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.872 257641 DEBUG nova.virt.libvirt.host [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.873 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.873 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.874 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.874 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.875 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.875 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.875 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.875 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.876 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.876 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.876 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.876 257641 DEBUG nova.virt.hardware [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
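[Editor's note] The topology search above is exhaustive but tiny for one vCPU: with no flavor or image constraints (all "0:0:0"), hardware.py enumerates every sockets x cores x threads factorization of the vCPU count under the 65536 limits and sorts by preference, leaving only 1:1:1 here. A sketch of that enumeration (logic paraphrased from the logged steps, not copied from nova):

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -> the logged result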
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.876 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:22 np0005539550 nova_compute[257631]: 2025-11-29 08:24:22.897 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583548615' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.401 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
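[Editor's note] nova_compute shells out to the ceph CLI here to discover the monitor addresses before attaching RBD disks; oslo processutils logs each command and its ~0.5 s runtime, and the matching handle_command/audit dispatch lines appear in ceph-mon. A sketch of the same lookup (client id and conf path copied from the command in the log):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    monmap = json.loads(out)
    print([m["name"] for m in monmap["mons"]])  # monitor names from the monmap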
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.439 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 400 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 450 KiB/s rd, 2.6 MiB/s wr, 86 op/s
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.726 257641 DEBUG nova.compute.manager [req-f24b458c-b027-4d29-a016-8c96785b10b0 req-3bc533d7-270e-47ff-9ba8-050217dff613 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.728 257641 DEBUG oslo_concurrency.lockutils [req-f24b458c-b027-4d29-a016-8c96785b10b0 req-3bc533d7-270e-47ff-9ba8-050217dff613 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.728 257641 DEBUG oslo_concurrency.lockutils [req-f24b458c-b027-4d29-a016-8c96785b10b0 req-3bc533d7-270e-47ff-9ba8-050217dff613 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.729 257641 DEBUG oslo_concurrency.lockutils [req-f24b458c-b027-4d29-a016-8c96785b10b0 req-3bc533d7-270e-47ff-9ba8-050217dff613 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.729 257641 DEBUG nova.compute.manager [req-f24b458c-b027-4d29-a016-8c96785b10b0 req-3bc533d7-270e-47ff-9ba8-050217dff613 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.729 257641 WARNING nova.compute.manager [req-f24b458c-b027-4d29-a016-8c96785b10b0 req-3bc533d7-270e-47ff-9ba8-050217dff613 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state active and task_state rescuing.#033[00m
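The acquire/pop/release triplet above is Nova's per-instance event registry at work: an external event arrives from Neutron, pop_instance_event takes the "<uuid>-events" lock, finds no registered waiter (nothing in the rescue path waits on a VIF unplug), and the manager logs the event as unexpected. A minimal sketch of that pattern, with illustrative names rather than Nova's actual classes:

import threading

class EventRegistry:
    """Waiters keyed by instance UUID, popped under a shared lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._waiters = {}   # instance uuid -> {event name: threading.Event}

    def prepare(self, instance_uuid, event_name):
        # Register a waiter *before* kicking off the operation that fires it.
        with self._lock:
            ev = threading.Event()
            self._waiters.setdefault(instance_uuid, {})[event_name] = ev
        return ev

    def pop(self, instance_uuid, event_name):
        # Mirrors the lock acquire/release pair traced in the log entries above.
        with self._lock:
            return self._waiters.get(instance_uuid, {}).pop(event_name, None)

registry = EventRegistry()
if registry.pop('4702f4ee-458d-4146-b9b2-70ecf718176c',
                'network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b') is None:
    print('No waiting events found; log the "unexpected event" warning')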
Nov 29 03:24:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:23.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
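The anonymous "HEAD / HTTP/1.0" requests that beast logs here are load-balancer-style health probes, not client traffic: they come from the controller addresses, carry no credentials, and expect a bare 200. An equivalent probe (the RGW port is an assumption; it is not shown in this log):

import http.client

conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)  # port assumed
conn.request('HEAD', '/')
print(conn.getresponse().status)   # 200 while radosgw is serving
conn.close()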
Nov 29 03:24:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1457053158' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.931 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:23 np0005539550 nova_compute[257631]: 2025-11-29 08:24:23.933 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898385130' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.460 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
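Each of these "ceph mon dump --format=json" round-trips resolves the current monitor addresses; they resurface below as the <host> entries in the RBD disk sources of the guest XML. A simplified version of that lookup (field names follow recent Ceph releases; older monmaps expose a flat "addr" instead of "public_addrs"):

import json
import subprocess

out = subprocess.check_output([
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
monmap = json.loads(out)
hosts = []
for mon in monmap['mons']:
    # Prefer the v1 endpoint (port 6789), which is what the disk XML uses here.
    for a in mon.get('public_addrs', {}).get('addrvec', []):
        if a['type'] == 'v1':
            hosts.append(a['addr'].split('/')[0])
print(hosts)   # e.g. ['192.168.122.100:6789', '192.168.122.101:6789', ...]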
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.462 257641 DEBUG nova.virt.libvirt.vif [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-2014288875',display_name='tempest-ServerStableDeviceRescueTest-server-2014288875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-2014288875',id=129,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-t1kduh0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:11Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=4702f4ee-458d-4146-b9b2-70ecf718176c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:8e:e3:35"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.462 257641 DEBUG nova.network.os_vif_util [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:8e:e3:35"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.463 257641 DEBUG nova.network.os_vif_util [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
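The conversion logged by nova_to_osvif_vif reduces the Neutron port dictionary to a typed os-vif object: only the fields the plug/unplug code needs survive. A toy reduction of that step (the dataclass stands in for os-vif's VIFOpenVSwitch; it is not the real class):

from dataclasses import dataclass

@dataclass
class VIFOpenVSwitch:
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    active: bool

def nova_to_osvif_vif(vif):
    details = vif.get('details', {})
    return VIFOpenVSwitch(
        id=vif['id'],
        address=vif['address'],
        bridge_name=details.get('bridge_name', 'br-int'),
        vif_name=vif['devname'],
        has_traffic_filtering=details.get('port_filter', False),
        active=vif.get('active', False))

print(nova_to_osvif_vif({
    'id': '0d1cf0d1-b379-4a62-8413-831aa8ff906b',
    'address': 'fa:16:3e:8e:e3:35',
    'devname': 'tap0d1cf0d1-b3',
    'details': {'port_filter': True, 'bridge_name': 'br-int'},
    'active': True}))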
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.465 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.485 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <uuid>4702f4ee-458d-4146-b9b2-70ecf718176c</uuid>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <name>instance-00000081</name>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-2014288875</nova:name>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:24:22</nova:creationTime>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <nova:port uuid="0d1cf0d1-b379-4a62-8413-831aa8ff906b">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <entry name="serial">4702f4ee-458d-4146-b9b2-70ecf718176c</entry>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <entry name="uuid">4702f4ee-458d-4146-b9b2-70ecf718176c</entry>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/4702f4ee-458d-4146-b9b2-70ecf718176c_disk">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/4702f4ee-458d-4146-b9b2-70ecf718176c_disk.rescue">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <target dev="sdb" bus="scsi"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <boot order="1"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:8e:e3:35"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <target dev="tap0d1cf0d1-b3"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/console.log" append="off"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:24:24 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:24:24 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:24:24 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:24:24 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
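The XML above is the rescue-mode domain: the original root disk keeps dev="vda", but the ".rescue" image carries boot order 1, so the guest boots the rescue system with its original disk still attached. That is quick to verify offline against a saved copy of the dump (the file name below is assumed):

import xml.etree.ElementTree as ET

tree = ET.parse('instance-00000081.xml')   # saved copy of the XML above
for disk in tree.findall('./devices/disk'):
    src, tgt, boot = disk.find('source'), disk.find('target'), disk.find('boot')
    print(tgt.get('dev'), tgt.get('bus'), src.get('name'),
          'boot-order=' + boot.get('order') if boot is not None else '-')
# vda virtio vms/..._disk          -
# sda sata   vms/..._disk.config   -
# sdb scsi   vms/..._disk.rescue   boot-order=1  <- rescue disk boots first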
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.494 257641 INFO nova.virt.libvirt.driver [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance destroyed successfully.#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.569 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.570 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.570 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.570 257641 DEBUG nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:8e:e3:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.570 257641 INFO nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Using config drive#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.598 257641 DEBUG nova.storage.rbd_utils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.632 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:24 np0005539550 nova_compute[257631]: 2025-11-29 08:24:24.759 257641 DEBUG nova.objects.instance [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'keypairs' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:24.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 403 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 442 KiB/s rd, 2.6 MiB/s wr, 95 op/s
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.627 257641 INFO nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Creating config drive at /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.632 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_nz49ocr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.768 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_nz49ocr" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
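processutils logs the argv joined with spaces, which is why "-publisher OpenStack Compute 27.5.2-..." looks unquoted above; in the actual argv the publisher string is a single element. The same invocation from Python, with paths taken from the log (the staging directory is the temp dir nova populated with the metadata tree):

import subprocess

subprocess.check_call([
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',  # one argv entry
    '-quiet', '-J', '-r',
    '-V', 'config-2',                # the volume label cloud-init probes for
    '/tmp/tmp_nz49ocr'])             # staging dir from the log line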
Nov 29 03:24:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:25.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.797 257641 DEBUG nova.storage.rbd_utils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.800 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.837 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.860 257641 DEBUG nova.compute.manager [req-1bdde422-3db3-4fbb-b077-7394e51a1d96 req-f770e49e-8b5d-4077-bc2d-836f23444d60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.860 257641 DEBUG oslo_concurrency.lockutils [req-1bdde422-3db3-4fbb-b077-7394e51a1d96 req-f770e49e-8b5d-4077-bc2d-836f23444d60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.861 257641 DEBUG oslo_concurrency.lockutils [req-1bdde422-3db3-4fbb-b077-7394e51a1d96 req-f770e49e-8b5d-4077-bc2d-836f23444d60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.861 257641 DEBUG oslo_concurrency.lockutils [req-1bdde422-3db3-4fbb-b077-7394e51a1d96 req-f770e49e-8b5d-4077-bc2d-836f23444d60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.861 257641 DEBUG nova.compute.manager [req-1bdde422-3db3-4fbb-b077-7394e51a1d96 req-f770e49e-8b5d-4077-bc2d-836f23444d60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.862 257641 WARNING nova.compute.manager [req-1bdde422-3db3-4fbb-b077-7394e51a1d96 req-f770e49e-8b5d-4077-bc2d-836f23444d60 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.963 257641 DEBUG oslo_concurrency.processutils [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue 4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:25 np0005539550 nova_compute[257631]: 2025-11-29 08:24:25.964 257641 INFO nova.virt.libvirt.driver [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Deleting local config drive /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue because it was imported into RBD.#033[00m
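The ISO is then pushed into the vms pool and the local copy dropped, so the config drive is served from RBD like the other disks. A sketch of the two steps as nova effectively runs them:

import os
import subprocess

local = '/var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c/disk.config.rescue'
subprocess.check_call([
    'rbd', 'import', '--pool', 'vms', local,
    '4702f4ee-458d-4146-b9b2-70ecf718176c_disk.config.rescue',
    '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
os.unlink(local)   # "Deleting local config drive ... because it was imported into RBD"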
Nov 29 03:24:26 np0005539550 kernel: tap0d1cf0d1-b3: entered promiscuous mode
Nov 29 03:24:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:26Z|00578|binding|INFO|Claiming lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b for this chassis.
Nov 29 03:24:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:26Z|00579|binding|INFO|0d1cf0d1-b379-4a62-8413-831aa8ff906b: Claiming fa:16:3e:8e:e3:35 10.100.0.5
Nov 29 03:24:26 np0005539550 nova_compute[257631]: 2025-11-29 08:24:26.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539550 NetworkManager[49039]: <info>  [1764404666.0424] manager: (tap0d1cf0d1-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.049 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:e3:35 10.100.0.5'], port_security=['fa:16:3e:8e:e3:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4702f4ee-458d-4146-b9b2-70ecf718176c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0d1cf0d1-b379-4a62-8413-831aa8ff906b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.052 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf0d1-b379-4a62-8413-831aa8ff906b in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.054 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:24:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:26Z|00580|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b ovn-installed in OVS
Nov 29 03:24:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:26Z|00581|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b up in Southbound
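ovn-controller has now claimed the logical port for this chassis and marked it up in the southbound database. One way to confirm the binding from the SB side (assumes ovn-sbctl can reach the SB DB from wherever this runs):

import subprocess

port = '0d1cf0d1-b379-4a62-8413-831aa8ff906b'
out = subprocess.check_output([
    'ovn-sbctl', '--bare', '--columns', 'chassis,up',
    'find', 'Port_Binding', 'logical_port=%s' % port])
print(out.decode())   # chassis row UUID for compute-0, then "true" once claimed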
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.067 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fd07f9c5-9486-4a5b-a8db-e6b921a4f06b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 nova_compute[257631]: 2025-11-29 08:24:26.067 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.068 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:24:26 np0005539550 systemd-udevd[340822]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:26 np0005539550 nova_compute[257631]: 2025-11-29 08:24:26.071 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.071 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.072 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ba504449-0764-47a5-9c47-1c861f423d80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.074 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8152c0-fbcc-4641-93ec-534dfdc42541]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 NetworkManager[49039]: <info>  [1764404666.0850] device (tap0d1cf0d1-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:26 np0005539550 NetworkManager[49039]: <info>  [1764404666.0860] device (tap0d1cf0d1-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.085 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e8199ce5-374e-49b4-a4c3-ea49776e7901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 systemd-machined[216673]: New machine qemu-69-instance-00000081.
Nov 29 03:24:26 np0005539550 systemd[1]: Started Virtual Machine qemu-69-instance-00000081.
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.110 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[45da23ad-8dd1-4b8b-b5c5-9383dbc40bab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.142 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[873b0291-cdf6-4680-95c8-8a4508e8f4f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 NetworkManager[49039]: <info>  [1764404666.1498] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/265)
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.149 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c87f571-0a23-436a-a3c9-3d879a0c287f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.184 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a9f03e89-6c25-4977-b0a3-242fa88676a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.188 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1406cc9d-dbce-4348-9696-f7eaae776ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 NetworkManager[49039]: <info>  [1764404666.2106] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.216 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ffeceffa-e5c9-471b-a703-78763ebab599]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.237 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e8a2aa-9725-41b8-87a6-ef05bf356977]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771387, 'reachable_time': 26721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340856, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.255 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5efece4b-16de-4a1d-bb1e-62699ff21065]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 771387, 'tstamp': 771387}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340857, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.275 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[07199c34-cf87-4a9f-aca4-959dde385738]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771387, 'reachable_time': 26721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340858, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.310 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa53233-43ea-4064-ad88-3fa6a769f938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.367 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c1437bd5-7028-407d-abfe-dd7a9b144e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.369 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.369 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.370 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:26 np0005539550 nova_compute[257631]: 2025-11-29 08:24:26.373 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539550 NetworkManager[49039]: <info>  [1764404666.3745] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Nov 29 03:24:26 np0005539550 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:24:26 np0005539550 nova_compute[257631]: 2025-11-29 08:24:26.376 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.377 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
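Taken together, the three ovsdbapp transactions above are the ovs-vsctl sequence for moving the metadata VETH onto the integration bridge and tagging it with its Neutron port ID; an equivalent driven from Python:

import subprocess

dev = 'tap5da19f7d-30'
port_id = 'd4f0104e-3913-4399-9086-37cf4d16e7c7'
subprocess.check_call(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', dev])
subprocess.check_call(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', dev])
subprocess.check_call(['ovs-vsctl', 'set', 'Interface', dev,
                       'external_ids:iface-id=' + port_id])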
Nov 29 03:24:26 np0005539550 nova_compute[257631]: 2025-11-29 08:24:26.378 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:26Z|00582|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:24:26 np0005539550 nova_compute[257631]: 2025-11-29 08:24:26.394 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.396 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.396 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[297c48f9-7113-41d7-b3ce-9365173de0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.397 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:24:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:26.398 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
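After rendering the config above to /var/lib/neutron/ovn-metadata-proxy/<network>.conf, the agent launches haproxy inside the ovnmeta- namespace through rootwrap, exactly as the "Running command" line shows. The same spawn with plain subprocess, argv copied from that line:

    # Spawn the metadata haproxy the way the log line above shows, but via
    # plain subprocess instead of neutron's create_process helper.
    import subprocess

    NET = '5da19f7d-3aa0-41e7-88b0-b9ef17fa4445'
    cmd = [
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', f'ovnmeta-{NET}',
        'env', f'PROCESS_TAG=haproxy-{NET}',
        'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{NET}.conf',
    ]
    # haproxy daemonizes itself ('daemon' appears in the config above),
    # so run() returns once the parent process exits.
    subprocess.run(cmd, check=True)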
Nov 29 03:24:26 np0005539550 podman[340909]: 2025-11-29 08:24:26.761055394 +0000 UTC m=+0.058206938 container create 621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:26 np0005539550 systemd[1]: Started libpod-conmon-621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29.scope.
Nov 29 03:24:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:26.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:26 np0005539550 podman[340909]: 2025-11-29 08:24:26.734716412 +0000 UTC m=+0.031868046 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/911ea439af4baae0529e2aa3b581623adc288ee05a36dc70db66fd8fcca943d5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:26 np0005539550 podman[340909]: 2025-11-29 08:24:26.838682608 +0000 UTC m=+0.135834152 container init 621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:24:26 np0005539550 podman[340909]: 2025-11-29 08:24:26.844101338 +0000 UTC m=+0.141252882 container start 621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:24:26 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [NOTICE]   (340928) : New worker (340930) forked
Nov 29 03:24:26 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [NOTICE]   (340928) : Loading success.
Nov 29 03:24:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.5 MiB/s wr, 100 op/s
Nov 29 03:24:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:27.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.784 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.967 257641 DEBUG nova.compute.manager [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.967 257641 DEBUG oslo_concurrency.lockutils [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.968 257641 DEBUG oslo_concurrency.lockutils [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.968 257641 DEBUG oslo_concurrency.lockutils [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.968 257641 DEBUG nova.compute.manager [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.969 257641 WARNING nova.compute.manager [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.969 257641 DEBUG nova.compute.manager [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.969 257641 DEBUG oslo_concurrency.lockutils [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.969 257641 DEBUG oslo_concurrency.lockutils [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.970 257641 DEBUG oslo_concurrency.lockutils [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.970 257641 DEBUG nova.compute.manager [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:27 np0005539550 nova_compute[257631]: 2025-11-29 08:24:27.970 257641 WARNING nova.compute.manager [req-a26e4280-c8ce-4536-b15d-305839b07ea9 req-715ee1b4-aa5c-4118-baf7-091526bc0220 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state active and task_state rescuing.#033[00m
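The Acquiring/acquired/released triplets show nova serializing external events behind a per-instance "<uuid>-events" lock before popping any registered waiter; here nothing is waiting (the instance is mid-rescue), so each network-vif-plugged event is logged as unexpected and dropped. A sketch of the pattern with oslo.concurrency, where the waiter bookkeeping is an assumption rather than nova's actual structures:

    # Per-instance event serialization as suggested by the lock names in
    # the log; the _waiters layout is a hypothetical stand-in.
    from oslo_concurrency import lockutils

    _waiters = {}  # hypothetical: instance_uuid -> {event_name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(f'{instance_uuid}-events'):
            waiter = _waiters.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            # Matches "No waiting events found dispatching ..." followed
            # by the WARNING about an unexpected event.
            return None
        return waiter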
Nov 29 03:24:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:28.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:28 np0005539550 nova_compute[257631]: 2025-11-29 08:24:28.868 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 393 KiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 03:24:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:29.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:30.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:30 np0005539550 nova_compute[257631]: 2025-11-29 08:24:30.840 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:31 np0005539550 nova_compute[257631]: 2025-11-29 08:24:31.227 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:31 np0005539550 nova_compute[257631]: 2025-11-29 08:24:31.227 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 295 KiB/s rd, 1.2 MiB/s wr, 76 op/s
Nov 29 03:24:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:31.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:32 np0005539550 podman[340941]: 2025-11-29 08:24:32.335058142 +0000 UTC m=+0.069018748 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 03:24:32 np0005539550 podman[340942]: 2025-11-29 08:24:32.335582534 +0000 UTC m=+0.064799516 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 03:24:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:32.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:32 np0005539550 nova_compute[257631]: 2025-11-29 08:24:32.835 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:32 np0005539550 nova_compute[257631]: 2025-11-29 08:24:32.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:32 np0005539550 nova_compute[257631]: 2025-11-29 08:24:32.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:32 np0005539550 nova_compute[257631]: 2025-11-29 08:24:32.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:32 np0005539550 nova_compute[257631]: 2025-11-29 08:24:32.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:32 np0005539550 nova_compute[257631]: 2025-11-29 08:24:32.944 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:24:32 np0005539550 nova_compute[257631]: 2025-11-29 08:24:32.945 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077477618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.373 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
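The resource audit shells out to `ceph df --format=json` (a 0.428 s round-trip here) to size the RBD-backed disk pool. Issuing the same command and reading the cluster totals might look like the following; the JSON key names match current `ceph df` output but are worth re-checking against the Ceph release in use:

    # Run the same command the resource tracker logs above and pull out
    # cluster capacity; "stats" key names follow current `ceph df
    # --format=json` output and may differ between Ceph releases.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    free_gb = stats['total_avail_bytes'] / (1 << 30)
    used_gb = stats['total_used_bytes'] / (1 << 30)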
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.462 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.463 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.466 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.466 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.466 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:24:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 98 KiB/s wr, 42 op/s
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: ** DB Stats **
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: Uptime(secs): 4201.4 total, 600.0 interval
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: Cumulative writes: 37K writes, 143K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: Cumulative WAL: 37K writes, 13K syncs, 2.83 writes per sync, written: 0.13 GB, 0.03 MB/s
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: Interval writes: 7756 writes, 29K keys, 7756 commit groups, 1.0 writes per commit group, ingest: 30.54 MB, 0.05 MB/s
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: Interval WAL: 7756 writes, 3116 syncs, 2.49 writes per sync, written: 0.03 GB, 0.05 MB/s
Nov 29 03:24:33 np0005539550 ceph-osd[84753]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
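(Reformatted stats block continues below.)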
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.650 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.652 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4119MB free_disk=20.85146713256836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
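The hypervisor resource view embeds pci_devices as a JSON list, so it can be post-processed directly; for example, filtering the logged list down to virtio functions (vendor_id 1af4), shown here over a two-entry excerpt of it:

    # Slice the pci_devices JSON from the resource-view line above; the
    # two entries are an excerpt of the logged list.
    import json

    pci_json = '''[
     {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0",
      "product_id": "1002", "vendor_id": "1af4", "numa_node": null,
      "label": "label_1af4_1002", "dev_type": "type-PCI"},
     {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1",
      "product_id": "7010", "vendor_id": "8086", "numa_node": null,
      "label": "label_8086_7010", "dev_type": "type-PCI"}
    ]'''
    virtio = [d['address'] for d in json.loads(pci_json)
              if d['vendor_id'] == '1af4']
    print(virtio)  # ['0000:00:05.0']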
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.652 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.653 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:33.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.904 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance d17ba263-68c7-4428-9d64-9a809e93a457 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.904 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 4702f4ee-458d-4146-b9b2-70ecf718176c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.905 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:24:33 np0005539550 nova_compute[257631]: 2025-11-29 08:24:33.905 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.012 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.171 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.287 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 4702f4ee-458d-4146-b9b2-70ecf718176c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.288 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404674.2870297, 4702f4ee-458d-4146-b9b2-70ecf718176c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.289 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.295 257641 DEBUG nova.compute.manager [None req-f6e7ae0b-b3b8-4ab2-9242-c3bba20a0249 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.344 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.348 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.380 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.380 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404674.2886102, 4702f4ee-458d-4146-b9b2-70ecf718176c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.381 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.402 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.404 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1031151806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.649 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.653 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.680 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
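The inventory line fixes the capacity arithmetic placement applies per resource class: effective capacity = (total - reserved) * allocation_ratio. Worked out for the values logged above:

    # Placement capacity implied by the inventory in the log line above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1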
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.707 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:24:34 np0005539550 nova_compute[257631]: 2025-11-29 08:24:34.708 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:34.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 427 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 639 KiB/s wr, 44 op/s
Nov 29 03:24:35 np0005539550 nova_compute[257631]: 2025-11-29 08:24:35.709 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:35 np0005539550 nova_compute[257631]: 2025-11-29 08:24:35.709 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:24:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:35.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:35 np0005539550 nova_compute[257631]: 2025-11-29 08:24:35.842 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:35 np0005539550 nova_compute[257631]: 2025-11-29 08:24:35.971 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:35 np0005539550 nova_compute[257631]: 2025-11-29 08:24:35.971 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:35 np0005539550 nova_compute[257631]: 2025-11-29 08:24:35.972 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:24:36 np0005539550 nova_compute[257631]: 2025-11-29 08:24:36.588 257641 INFO nova.compute.manager [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Unrescuing#033[00m
Nov 29 03:24:36 np0005539550 nova_compute[257631]: 2025-11-29 08:24:36.588 257641 DEBUG oslo_concurrency.lockutils [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:24:36 np0005539550 nova_compute[257631]: 2025-11-29 08:24:36.589 257641 DEBUG oslo_concurrency.lockutils [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:24:36 np0005539550 nova_compute[257631]: 2025-11-29 08:24:36.589 257641 DEBUG nova.network.neutron [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:24:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:36.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:36.954 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:36.955 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
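On each SB_Global nb_cfg bump the agent defers its chassis-table update by a randomized interval (a 9-second draw here) so that many agents do not stampede the southbound DB at once. A hedged sketch of that jittered deferral; the bound and callback wiring are assumptions, not the agent's actual code:

    # Jittered deferral as hinted by "Delaying updating chassis table for
    # 9 seconds"; the interval bound and names are assumptions.
    import random
    import threading

    def on_sb_global_update(update_chassis_cb, max_delay=10):
        delay = random.randint(0, max_delay)   # the log shows a 9 s draw
        print(f'Delaying updating chassis table for {delay} seconds')
        threading.Timer(delay, update_chassis_cb).start()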
Nov 29 03:24:36 np0005539550 nova_compute[257631]: 2025-11-29 08:24:36.956 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 433 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1001 KiB/s wr, 84 op/s
Nov 29 03:24:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:37.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:37 np0005539550 nova_compute[257631]: 2025-11-29 08:24:37.878 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.505 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updating instance_info_cache with network_info: [{"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.537 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-d17ba263-68c7-4428-9d64-9a809e93a457" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.538 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
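The instance_info_cache payload logged at 08:24:38.505 is ordinary JSON, so the fixed IPs can be pulled out mechanically; network_info below is assumed to be that list already parsed:

    # Extract fixed IPs from a network_info structure shaped like the one
    # logged above; `network_info` is the parsed JSON list from that line.
    def fixed_ips(network_info):
        ips = []
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                ips.extend(ip['address'] for ip in subnet['ips']
                           if ip['type'] == 'fixed')
        return ips

    # For the d17ba263... cache above, this returns ['10.100.0.6'].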
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.539 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.540 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.542 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.542 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:24:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:38.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.949 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.950 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:38 np0005539550 nova_compute[257631]: 2025-11-29 08:24:38.950 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.109 257641 DEBUG nova.network.neutron [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updating instance_info_cache with network_info: [{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.141 257641 DEBUG oslo_concurrency.lockutils [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.142 257641 DEBUG nova.objects.instance [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'flavor' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:39 np0005539550 kernel: tap0d1cf0d1-b3 (unregistering): left promiscuous mode
Nov 29 03:24:39 np0005539550 NetworkManager[49039]: <info>  [1764404679.3305] device (tap0d1cf0d1-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:24:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:39Z|00583|binding|INFO|Releasing lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b from this chassis (sb_readonly=0)
Nov 29 03:24:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:39Z|00584|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b down in Southbound
Nov 29 03:24:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:39Z|00585|binding|INFO|Removing iface tap0d1cf0d1-b3 ovn-installed in OVS
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.342 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:39.348 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:e3:35 10.100.0.5'], port_security=['fa:16:3e:8e:e3:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4702f4ee-458d-4146-b9b2-70ecf718176c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0d1cf0d1-b379-4a62-8413-831aa8ff906b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:39.349 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf0d1-b379-4a62-8413-831aa8ff906b in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
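The metadata agent learns about bindings by subscribing to Southbound Port_Binding updates through ovsdbapp row events; the matched event above carries only the changed columns in old (up=[True], chassis set) against the new row (up=[False], chassis cleared). A rough sketch of that subscription pattern, assuming ovsdbapp's RowEvent base class (the class body here is illustrative, not Neutron's actual handler):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingWatcher(row_event.RowEvent):
        """React to Port_Binding updates (sketch)."""

        def __init__(self):
            # Same shape as the matched event above:
            # events=('update',), table='Port_Binding', no conditions.
            super().__init__(('update',), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' holds only the columns that changed in this update.
            if hasattr(old, 'chassis') and not row.chassis:
                print('lport %s unbound from this chassis' % row.logical_port)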
Nov 29 03:24:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:39.351 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:24:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:39.352 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7e213d-aaa2-491b-aac4-5812b3ea0f58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:39.352 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
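The namespace cleanup funnels through neutron.privileged.agent.linux.ip_lib.remove_netns (logged further below), which sits on top of pyroute2. A minimal sketch of the same check-then-delete, assuming pyroute2 is available:

    from pyroute2 import netns

    ns_name = 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445'
    if ns_name in netns.listnetns():   # namespaces registered under /var/run/netns
        netns.remove(ns_name)          # unmounts and unlinks the namespace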
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.357 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:39 np0005539550 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 29 03:24:39 np0005539550 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d00000081.scope: Consumed 5.690s CPU time.
Nov 29 03:24:39 np0005539550 systemd-machined[216673]: Machine qemu-69-instance-00000081 terminated.
Nov 29 03:24:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.595 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.600 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.609 257641 INFO nova.virt.libvirt.driver [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance destroyed successfully.#033[00m
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.609 257641 DEBUG nova.objects.instance [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:39.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
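The anonymous "HEAD / HTTP/1.0" 200 pairs from radosgw recur throughout this log from 192.168.122.100 and .102, which looks like load-balancer health probing rather than real S3 traffic. Reproducing one by hand (host and port are placeholders; the log only records the client side of the probe):

    import http.client

    # Placeholder endpoint: the radosgw beast frontend on this node.
    conn = http.client.HTTPConnection('np0005539550', 8080, timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # a healthy gateway answers 200, empty body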
Nov 29 03:24:39 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [NOTICE]   (340928) : haproxy version is 2.8.14-c23fe91
Nov 29 03:24:39 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [NOTICE]   (340928) : path to executable is /usr/sbin/haproxy
Nov 29 03:24:39 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [WARNING]  (340928) : Exiting Master process...
Nov 29 03:24:39 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [WARNING]  (340928) : Exiting Master process...
Nov 29 03:24:39 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [ALERT]    (340928) : Current worker (340930) exited with code 143 (Terminated)
Nov 29 03:24:39 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[340924]: [WARNING]  (340928) : All workers exited. Exiting... (0)
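Exit code 143 in the ALERT above is the usual 128 + signal-number encoding, i.e. SIGTERM (15): the worker was deliberately stopped, not crashed. A quick check:

    import signal
    assert 128 + signal.SIGTERM == 143   # 'Terminated' == SIGTERM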
Nov 29 03:24:39 np0005539550 systemd[1]: libpod-621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29.scope: Deactivated successfully.
Nov 29 03:24:39 np0005539550 podman[341093]: 2025-11-29 08:24:39.883783216 +0000 UTC m=+0.443717491 container died 621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29-userdata-shm.mount: Deactivated successfully.
Nov 29 03:24:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-911ea439af4baae0529e2aa3b581623adc288ee05a36dc70db66fd8fcca943d5-merged.mount: Deactivated successfully.
Nov 29 03:24:39 np0005539550 podman[341093]: 2025-11-29 08:24:39.945928328 +0000 UTC m=+0.505862593 container cleanup 621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:24:39 np0005539550 kernel: tap0d1cf0d1-b3: entered promiscuous mode
Nov 29 03:24:39 np0005539550 systemd-udevd[341076]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.952 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:39 np0005539550 NetworkManager[49039]: <info>  [1764404679.9556] manager: (tap0d1cf0d1-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Nov 29 03:24:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:39Z|00586|binding|INFO|Claiming lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b for this chassis.
Nov 29 03:24:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:39Z|00587|binding|INFO|0d1cf0d1-b379-4a62-8413-831aa8ff906b: Claiming fa:16:3e:8e:e3:35 10.100.0.5
Nov 29 03:24:39 np0005539550 systemd[1]: libpod-conmon-621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29.scope: Deactivated successfully.
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.957 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:39.963 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:e3:35 10.100.0.5'], port_security=['fa:16:3e:8e:e3:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4702f4ee-458d-4146-b9b2-70ecf718176c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '7', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0d1cf0d1-b379-4a62-8413-831aa8ff906b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:39 np0005539550 NetworkManager[49039]: <info>  [1764404679.9663] device (tap0d1cf0d1-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:39 np0005539550 NetworkManager[49039]: <info>  [1764404679.9683] device (tap0d1cf0d1-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:39Z|00588|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b ovn-installed in OVS
Nov 29 03:24:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:39Z|00589|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b up in Southbound
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.991 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:39 np0005539550 nova_compute[257631]: 2025-11-29 08:24:39.997 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 systemd-machined[216673]: New machine qemu-70-instance-00000081.
Nov 29 03:24:40 np0005539550 systemd[1]: Started Virtual Machine qemu-70-instance-00000081.
Nov 29 03:24:40 np0005539550 podman[341193]: 2025-11-29 08:24:40.045303733 +0000 UTC m=+0.061757523 container remove 621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.050 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a4e82a-448c-4c66-b8da-571a49c0f576]: (4, ('Sat Nov 29 08:24:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29)\n621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29\nSat Nov 29 08:24:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29)\n621d7c435962b012f4cfcd9c34412c58cc1b5d7aa5943077d83ba231781d9f29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.053 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4804d2-3f77-431e-9f60-06c657c7f7dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.054 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.056 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.075 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.077 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7690bb84-4cb9-4268-af54-1c304f8ec377]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.097 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[16b0adb7-77d4-45f8-8445-56a9616a1571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.098 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a64c42ac-31bf-417a-b0c3-3ddbe3816b71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.113 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd3b25a-f253-4372-890e-a08e5e9f0aa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 771380, 'reachable_time': 30782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341215, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.115 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.116 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[43dbf082-be61-4cb7-987f-7b852499ce01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.117 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf0d1-b379-4a62-8413-831aa8ff906b in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.118 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.129 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a057afb6-1a75-40db-af90-db4a03a70c09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.130 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5da19f7d-31 in ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
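The "VETH" being created here is a veth pair: one end stays in the root namespace to be plugged into OVS (tap5da19f7d-30, a few lines below), while the peer (tap5da19f7d-31) is moved into the ovnmeta namespace to carry metadata traffic. A rough pyroute2 sketch of that shape, with the names taken from the log and error handling omitted:

    from pyroute2 import IPRoute

    ipr = IPRoute()
    # Create the pair in the root namespace...
    ipr.link('add', ifname='tap5da19f7d-30', kind='veth', peer='tap5da19f7d-31')
    # ...then push the inner end into the metadata namespace.
    idx = ipr.link_lookup(ifname='tap5da19f7d-31')[0]
    ipr.link('set', index=idx,
             net_ns_fd='ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445')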
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.132 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5da19f7d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.133 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dee7c2c2-37b9-4dda-bb53-74c7c1dd4740]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.133 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[48fcf911-1896-4c15-9805-1310209b914c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.146 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab4796d-83b7-48f9-8b84-e8c64d1d2a65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.161 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c19606-0bb0-4e8f-ba22-bf5027d262e6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.190 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[228980dd-b659-4640-9905-8d6de90fbc3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.195 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[91451d9a-ff39-4f3e-ba6c-48870162e973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 NetworkManager[49039]: <info>  [1764404680.2006] manager: (tap5da19f7d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/268)
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.239 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bc163c57-7921-4723-b35a-62fa3d9cc95a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.242 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[753f303a-0ad3-4b3e-a5ac-c314a346eaea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 NetworkManager[49039]: <info>  [1764404680.2639] device (tap5da19f7d-30): carrier: link connected
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.270 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[79462c2c-d672-451a-b112-254fe2b90857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.290 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4d319c9a-91f5-447b-a79b-f7857991fbab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 16999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341240, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.308 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8eef06f6-dfac-4d13-b59f-1e0d4a95ee5b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe18:8e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772792, 'tstamp': 772792}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341241, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.329 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[609ffa63-e956-449b-b7bf-ab6ae4307a45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 16999, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341242, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
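The two RTM_NEWLINK dumps above are pyroute2 netlink replies relayed through the privsep daemon; the 'target' field in each header shows they were taken inside the ovnmeta namespace. Fetching the same view directly, assuming pyroute2:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link['state'])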
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.364 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[247fc08f-50bd-4b0d-a759-7f19dfa8e9e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.421 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c750fa-d4c7-4260-af54-ed8eda2b9b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.423 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.423 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.423 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.429 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 NetworkManager[49039]: <info>  [1764404680.4299] manager: (tap5da19f7d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/269)
Nov 29 03:24:40 np0005539550 kernel: tap5da19f7d-30: entered promiscuous mode
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.434 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
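The DelPortCommand/AddPortCommand/DbSetCommand sequence above moves tap5da19f7d-30 onto br-int and stamps the interface with the iface-id that OVN uses to match it to a logical port. Condensed against ovsdbapp's Open_vSwitch API (connection setup elided; `api` is assumed to be an ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl instance):

    # api: ovsdbapp Open_vSwitch API object; building it is elided here.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap5da19f7d-30', if_exists=True))
        txn.add(api.add_port('br-int', 'tap5da19f7d-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5da19f7d-30',
            ('external_ids',
             {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'})))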
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.435 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:40Z|00590|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.436 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.437 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.438 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[47e09d73-735e-4c01-8497-9a454114aa98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.439 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.pid.haproxy
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
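create_config_file renders the block above from per-network parameters; note the backend line, where an address beginning with '/' makes haproxy proxy to a UNIX socket, here the metadata agent's /var/lib/neutron/metadata_proxy socket. A toy rendering of the network-dependent parts (sketch only; the static defaults section is omitted):

    def render_haproxy_cfg(network_id):
        # Only these pieces vary per network; compare with the block above.
        return '\n'.join([
            'global',
            '    log         /dev/log local0 debug',
            '    log-tag     haproxy-metadata-proxy-%s' % network_id,
            '    pidfile     /var/lib/neutron/external/pids/%s.pid.haproxy' % network_id,
            '    daemon',
            '',
            'listen listener',
            '    bind 169.254.169.254:80',
            '    server metadata /var/lib/neutron/metadata_proxy',
            '    http-request add-header X-OVN-Network-ID %s' % network_id,
        ])

    print(render_haproxy_cfg('5da19f7d-3aa0-41e7-88b0-b9ef17fa4445'))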
Nov 29 03:24:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:40.440 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
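Stripped of the rootwrap indirection, the command above boils down to running haproxy inside the ovnmeta namespace; PROCESS_TAG appears to be set only so the agent can identify the process again later. A direct equivalent (requires root):

    import subprocess

    ns = 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '5da19f7d-3aa0-41e7-88b0-b9ef17fa4445.conf')
    subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'env', 'PROCESS_TAG=haproxy-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445',
         'haproxy', '-f', cfg],
        check=True)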
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.450 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.554 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 4702f4ee-458d-4146-b9b2-70ecf718176c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.555 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404680.5546014, 4702f4ee-458d-4146-b9b2-70ecf718176c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.555 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.597 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.619 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.659 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.659 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404680.5699415, 4702f4ee-458d-4146-b9b2-70ecf718176c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.660 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.681 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.685 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.707 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:24:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:40.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:40 np0005539550 nova_compute[257631]: 2025-11-29 08:24:40.844 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:40 np0005539550 podman[341333]: 2025-11-29 08:24:40.772415965 +0000 UTC m=+0.023139826 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:24:41 np0005539550 podman[341333]: 2025-11-29 08:24:41.023159114 +0000 UTC m=+0.273882955 container create 81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:24:41 np0005539550 systemd[1]: Started libpod-conmon-81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef.scope.
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.092 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.094 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.095 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.095 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.096 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.096 257641 WARNING nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.097 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.098 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.098 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.099 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.100 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.101 257641 WARNING nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.102 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.102 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.103 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.104 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.104 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa2cd0b888db859ab32c2082a68ded59b92cd3c001d1af44de99bb318f45792/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.105 257641 WARNING nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state rescued and task_state unrescuing.
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.106 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.106 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.107 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.107 257641 DEBUG oslo_concurrency.lockutils [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.107 257641 DEBUG nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:24:41 np0005539550 nova_compute[257631]: 2025-11-29 08:24:41.108 257641 WARNING nova.compute.manager [req-721ac8e6-dc9a-403e-a802-628ec599446d req-33e974eb-4d9a-4e8a-96bb-35fb5c5dfd61 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state rescued and task_state unrescuing.
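[editor's note] The repeated Received event / Acquiring lock / No waiting events / WARNING sequence above shows the shape of nova's external-event handling: the manager takes a per-instance "<uuid>-events" lock, tries to pop a registered waiter for the vif-plugged event, finds none (the unrescue path never registered one), and downgrades the event to a warning. A minimal, hypothetical sketch of that lock-guarded pop, simplified from nova.compute.manager.InstanceEvents (names and structure here are assumptions, not nova's actual code):

    # Simplified sketch of the event pop logged above; the real code is
    # nova.compute.manager.InstanceEvents.pop_instance_event.
    from oslo_concurrency import lockutils

    _events = {}  # instance_uuid -> {event_name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        # Same per-instance "<uuid>-events" lock the log shows being
        # acquired and released around the pop.
        with lockutils.lock('%s-events' % instance_uuid):
            return _events.get(instance_uuid, {}).pop(event_name, None)

    if pop_instance_event('4702f4ee-458d-4146-b9b2-70ecf718176c',
                          'network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b') is None:
        # "No waiting events found" -> the manager then emits the WARNING,
        # because nothing in the unrescue path was waiting for this event.
        print('unexpected event: no waiter registered')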
Nov 29 03:24:41 np0005539550 podman[341333]: 2025-11-29 08:24:41.135967061 +0000 UTC m=+0.386690952 container init 81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:24:41 np0005539550 podman[341333]: 2025-11-29 08:24:41.144398724 +0000 UTC m=+0.395122575 container start 81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:24:41 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[341349]: [NOTICE]   (341353) : New worker (341355) forked
Nov 29 03:24:41 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[341349]: [NOTICE]   (341353) : Loading success.
Nov 29 03:24:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.2 MiB/s wr, 174 op/s
Nov 29 03:24:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:41.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:42 np0005539550 nova_compute[257631]: 2025-11-29 08:24:42.060 257641 DEBUG nova.compute.manager [None req-1c296791-c89b-4c93-aae5-2dc7f42f8ce1 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:24:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:42.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:42 np0005539550 nova_compute[257631]: 2025-11-29 08:24:42.913 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.6 MiB/s wr, 215 op/s
Nov 29 03:24:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:43.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.6 MiB/s wr, 285 op/s
Nov 29 03:24:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:45.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:45 np0005539550 nova_compute[257631]: 2025-11-29 08:24:45.846 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:45.956 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
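[editor's note] The ovn_metadata_agent line above is a single-command ovsdbapp transaction stamping the processed southbound config sequence number into Chassis_Private.external_ids. A rough sketch of issuing the equivalent db_set through ovsdbapp's IDL API; the endpoint, timeout, and use of OvnSbApiIdlImpl are assumptions for illustration (the agent builds its connection from its own config):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642', 'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # One-command transaction: set external_ids on the Chassis_Private row.
    # (The logged command also carries if_exists=True, so a vanished record
    # is tolerated rather than raising.)
    api.db_set('Chassis_Private',
               'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
               ('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'})
               ).execute(check_error=True)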
Nov 29 03:24:46 np0005539550 podman[341366]: 2025-11-29 08:24:46.377361334 +0000 UTC m=+0.116397214 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.467 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.468 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.492 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.583 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.583 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.591 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.592 257641 INFO nova.compute.claims [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:24:46 np0005539550 nova_compute[257631]: 2025-11-29 08:24:46.848 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
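[editor's note] The `ceph df` call above is how the RBD image backend measures pool capacity while the resource tracker holds the claim. A minimal sketch of the same probe, assuming the standard `ceph df --format=json` output layout; the pool name 'vms' is taken from the rbd import that appears further down:

    import json
    import subprocess

    # Same command line the driver shells out to above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for pool in json.loads(out)['pools']:
        if pool['name'] == 'vms':
            stats = pool['stats']
            print(stats['bytes_used'], stats['max_avail'])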
Nov 29 03:24:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:46.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:24:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1983929129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.303 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.309 257641 DEBUG nova.compute.provider_tree [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.354 257641 DEBUG nova.scheduler.client.report [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.399 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.400 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.462 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.462 257641 DEBUG nova.network.neutron [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.487 257641 INFO nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.515 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:24:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 492 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 3.0 MiB/s wr, 313 op/s
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.629 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.630 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.631 257641 INFO nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Creating image(s)
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.666 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.697 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.731 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.736 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
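[editor's note] The image-cache check above runs `qemu-img info` re-execed through `python -m oslo_concurrency.prlimit`, capping the child's address space at 1 GiB (--as) and CPU time at 30 s (--cpu) so a malformed image cannot run away while being parsed. A sketch of driving the same probe through oslo_concurrency.processutils directly (field names per qemu-img's JSON output):

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        # Mirrors `env LC_ALL=C LANG=C` in the logged command line.
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        # Becomes the --as/--cpu flags of oslo_concurrency.prlimit.
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    info = json.loads(out)
    print(info['format'], info['virtual-size'])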
Nov 29 03:24:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.839 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.840 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.841 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.842 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.872 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.876 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 346e849d-fa61-4451-b34c-d6165fea3aa4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:47 np0005539550 nova_compute[257631]: 2025-11-29 08:24:47.957 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.164 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 346e849d-fa61-4451-b34c-d6165fea3aa4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.288s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.232 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] resizing rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
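[editor's note] With no existing `*_disk` image in the vms pool, the driver imports the cached base file into RBD and then grows it to the flavor's 1 GiB (1073741824-byte) root disk, as the two lines above show. A sketch reproducing that import-then-resize sequence with the rbd CLI; nova itself goes through nova.storage.rbd_utils, so this is only the shape of the operation:

    import subprocess

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    image = '346e849d-fa61-4451-b34c-d6165fea3aa4_disk'

    # Matches the logged `rbd import` command line.
    subprocess.check_call(
        ['rbd', 'import', '--pool', 'vms', base, image,
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    # Grow to 1073741824 bytes; `rbd resize --size` takes MiB by default,
    # so 1024 MiB == 1 GiB.
    subprocess.check_call(
        ['rbd', 'resize', '--pool', 'vms', image, '--size', '1024',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])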
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.322 257641 DEBUG nova.objects.instance [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.355 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.355 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Ensure instance console log exists: /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.356 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.356 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.357 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:24:48 np0005539550 nova_compute[257631]: 2025-11-29 08:24:48.439 257641 DEBUG nova.policy [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '873186539acb4bf9b90513e0e1beb56f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:24:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:48.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 473 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 2.6 MiB/s wr, 281 op/s
Nov 29 03:24:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:49.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:49 np0005539550 nova_compute[257631]: 2025-11-29 08:24:49.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:49 np0005539550 nova_compute[257631]: 2025-11-29 08:24:49.940 257641 DEBUG nova.network.neutron [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Successfully created port: c5532a42-8b51-4dea-bf3e-e272409f89f4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:24:50 np0005539550 nova_compute[257631]: 2025-11-29 08:24:50.848 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:50.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.431 257641 DEBUG nova.network.neutron [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Successfully updated port: c5532a42-8b51-4dea-bf3e-e272409f89f4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.458 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.458 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.459 257641 DEBUG nova.network.neutron [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:24:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 475 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 2.7 MiB/s wr, 378 op/s
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.573 257641 DEBUG nova.compute.manager [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-changed-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.574 257641 DEBUG nova.compute.manager [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Refreshing instance network info cache due to event network-changed-c5532a42-8b51-4dea-bf3e-e272409f89f4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.575 257641 DEBUG oslo_concurrency.lockutils [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:24:51 np0005539550 nova_compute[257631]: 2025-11-29 08:24:51.628 257641 DEBUG nova.network.neutron [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:24:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:51.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:52.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:52 np0005539550 nova_compute[257631]: 2025-11-29 08:24:52.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:24:52 np0005539550 nova_compute[257631]: 2025-11-29 08:24:52.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:24:52 np0005539550 nova_compute[257631]: 2025-11-29 08:24:52.937 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.011 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
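[editor's note] The "Running periodic task ComputeManager._run_pending_deletes" and "_sync_scheduler_instance_info" lines come from oslo.service's periodic task machinery: each decorated manager method is invoked on its own spacing by run_periodic_tasks. A minimal sketch of the pattern; the spacing value and print body are illustrative, not nova's configuration:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # Registered by the decorator; run_periodic_tasks calls it when
        # the spacing interval (illustrative here) has elapsed.
        @periodic_task.periodic_task(spacing=600)
        def _run_pending_deletes(self, context):
            print('Cleaning up deleted instances')

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)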
Nov 29 03:24:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 2.2 MiB/s wr, 340 op/s
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.618 257641 DEBUG nova.network.neutron [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updating instance_info_cache with network_info: [{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.656 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.656 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance network_info: |[{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.657 257641 DEBUG oslo_concurrency.lockutils [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.657 257641 DEBUG nova.network.neutron [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Refreshing network info cache for port c5532a42-8b51-4dea-bf3e-e272409f89f4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.659 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Start _get_guest_xml network_info=[{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.663 257641 WARNING nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.667 257641 DEBUG nova.virt.libvirt.host [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.668 257641 DEBUG nova.virt.libvirt.host [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.672 257641 DEBUG nova.virt.libvirt.host [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.673 257641 DEBUG nova.virt.libvirt.host [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.674 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.674 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.674 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.675 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.675 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.675 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.675 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.675 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.676 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.676 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.676 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.676 257641 DEBUG nova.virt.hardware [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
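[editor's note] The topology walk above (flavor/image limits and preferences all 0:0:0, maxima 65536 each) reduces to enumerating sockets*cores*threads factorizations of the 1 vCPU in m1.nano, which can only yield 1:1:1. A tiny, simplified sketch of that enumeration under the maxima; the real logic is nova.virt.hardware._get_possible_cpu_topologies and orders candidates differently:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate every (sockets, cores, threads) whose product is vcpus
        # and that fits under the per-dimension maxima.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -> "Got 1 possible topologies"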
Nov 29 03:24:53 np0005539550 nova_compute[257631]: 2025-11-29 08:24:53.679 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:53.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:53Z|00591|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:24:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:53Z|00592|binding|INFO|Releasing lport 79109459-2a40-4b69-936e-ac2a2aa77985 from this chassis (sb_readonly=0)
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.064 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087248948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.085 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:24:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:54Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8e:e3:35 10.100.0.5
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.109 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.114 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.365740) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694365935, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 803, "num_deletes": 252, "total_data_size": 1071569, "memory_usage": 1099392, "flush_reason": "Manual Compaction"}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694374232, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 1058759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47811, "largest_seqno": 48613, "table_properties": {"data_size": 1054685, "index_size": 1790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9687, "raw_average_key_size": 20, "raw_value_size": 1046270, "raw_average_value_size": 2170, "num_data_blocks": 78, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404633, "oldest_key_time": 1764404633, "file_creation_time": 1764404694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 8483 microseconds, and 4175 cpu microseconds.
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.374281) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 1058759 bytes OK
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.374298) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.375707) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.375728) EVENT_LOG_v1 {"time_micros": 1764404694375724, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.375744) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1067568, prev total WAL file size 1067568, number of live WAL files 2.
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.376371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(1033KB)], [104(11MB)]
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694376461, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 13274931, "oldest_snapshot_seqno": -1}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 8178 keys, 11322474 bytes, temperature: kUnknown
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694492415, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 11322474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11269107, "index_size": 31815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 213521, "raw_average_key_size": 26, "raw_value_size": 11124346, "raw_average_value_size": 1360, "num_data_blocks": 1239, "num_entries": 8178, "num_filter_entries": 8178, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.492984) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 11322474 bytes
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.495744) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.4 rd, 97.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.7 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 8698, records dropped: 520 output_compression: NoCompression
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.495769) EVENT_LOG_v1 {"time_micros": 1764404694495758, "job": 62, "event": "compaction_finished", "compaction_time_micros": 116020, "compaction_time_cpu_micros": 25857, "output_level": 6, "num_output_files": 1, "total_output_size": 11322474, "num_input_records": 8698, "num_output_records": 8178, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694496095, "job": 62, "event": "table_file_deletion", "file_number": 106}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404694498105, "job": 62, "event": "table_file_deletion", "file_number": 104}
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.376212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.498139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.498143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.498145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.498147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:24:54.498149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
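
The ceph-mon rocksdb block above is one complete manual-compaction cycle: JOB 61 flushes the active memtable to L0 table #106, JOB 62 merges that L0 file with the existing L6 file #104 into table #107, and the now-obsolete WAL (000102.log) and SST files (000106.sst, 000104.sst) are deleted. The amplification figures in the JOB 62 summary follow directly from the logged byte counts; a minimal Python sketch (all numbers copied from the EVENT_LOG_v1 entries above) reproduces them:

    # Byte counts from JOB 61/62 above.
    l0_input    = 1_058_759     # table #106, the freshly flushed L0 file
    total_input = 13_274_931    # "input_data_size": #106 plus L6 file #104
    output      = 11_322_474    # table #107 written back to L6

    # Definitions used by RocksDB's compaction summary line.
    write_amplify      = output / l0_input
    read_write_amplify = (total_input + output) / l0_input

    print(f"write-amplify      {write_amplify:.1f}")       # ~10.7, as logged
    print(f"read-write-amplify {read_write_amplify:.1f}")  # ~23.2, as logged
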
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:24:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3495354090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.550 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
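
The back-to-back "ceph mon dump --format=json" invocations are how nova-compute resolves the cluster's monitor addresses; the port-6789 endpoints it extracts are what appear as <host> elements in the RBD disk sources of the guest XML below. A standalone sketch of that lookup, assuming the same --id and ceph.conf as logged (Nova itself routes this through oslo_concurrency.processutils and nova.storage.rbd_utils rather than calling subprocess directly):

    import json
    import subprocess

    # Ask the cluster for its monmap, exactly as in the logged CMD.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)

    # Each mon entry carries its advertised address; these become the
    # <host name=... port="6789"/> entries in the libvirt disk XML.
    for mon in monmap["mons"]:
        print(mon["name"], mon["public_addr"])
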
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.551 257641 DEBUG nova.virt.libvirt.vif [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1614242741',display_name='tempest-ServerStableDeviceRescueTest-server-1614242741',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1614242741',id=133,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-v7xvcml6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:47Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=346e849d-fa61-4451-b34c-d6165fea3aa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.552 257641 DEBUG nova.network.os_vif_util [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.553 257641 DEBUG nova.network.os_vif_util [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.554 257641 DEBUG nova.objects.instance [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.572 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <uuid>346e849d-fa61-4451-b34c-d6165fea3aa4</uuid>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <name>instance-00000085</name>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1614242741</nova:name>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:24:53</nova:creationTime>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:24:54 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <nova:port uuid="c5532a42-8b51-4dea-bf3e-e272409f89f4">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <entry name="serial">346e849d-fa61-4451-b34c-d6165fea3aa4</entry>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <entry name="uuid">346e849d-fa61-4451-b34c-d6165fea3aa4</entry>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:ae:bd:45"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <target dev="tapc5532a42-8b"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/console.log" append="off"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:24:54 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:24:54 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:24:54 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:24:54 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
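
The domain XML just dumped is the full definition of the instance: two RBD-backed network disks pointing at the three monitors resolved earlier (virtio vda for the root disk, plus a SATA cdrom for the 346e849d-..._disk.config config drive that does not exist yet), a virtio interface whose MTU of 1442 leaves room for Geneve encapsulation on a 1500-byte underlay, and a q35 machine with a pcie-root plus a stack of pcie-root-port controllers for hotplug headroom. One quick way to pull the storage endpoints back out of such a dump, as a sketch (the ElementTree query is an illustration, not part of Nova; xml_text is trimmed to one disk here):

    import xml.etree.ElementTree as ET

    # Trimmed copy of the <domain> dump logged above.
    xml_text = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
        </disk>
      </devices>
    </domain>"""

    root = ET.fromstring(xml_text)
    for disk in root.iter("disk"):
        src = disk.find("source")
        hosts = [h.get("name") for h in src.iter("host")]
        print(src.get("name"), hosts)
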
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.574 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Preparing to wait for external event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.574 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.574 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.574 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.575 257641 DEBUG nova.virt.libvirt.vif [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:24:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1614242741',display_name='tempest-ServerStableDeviceRescueTest-server-1614242741',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1614242741',id=133,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-v7xvcml6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:24:47Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=346e849d-fa61-4451-b34c-d6165fea3aa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.576 257641 DEBUG nova.network.os_vif_util [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.576 257641 DEBUG nova.network.os_vif_util [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.577 257641 DEBUG os_vif [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.577 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.578 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.578 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.583 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.584 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5532a42-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.584 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5532a42-8b, col_values=(('external_ids', {'iface-id': 'c5532a42-8b51-4dea-bf3e-e272409f89f4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:bd:45', 'vm-uuid': '346e849d-fa61-4451-b34c-d6165fea3aa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.585 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539550 NetworkManager[49039]: <info>  [1764404694.5868] manager: (tapc5532a42-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.589 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.595 257641 INFO os_vif [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b')#033[00m
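
The plug itself is two OVSDB transactions: an AddBridgeCommand that turns out to be a no-op ("Transaction caused no change") because br-int already exists, then an AddPortCommand plus a DbSetCommand stamping the Interface row with external_ids. Those external_ids are the contract with OVN: ovn-controller matches iface-id against Southbound logical ports and claims the binding about a second later. The equivalent manual operation, sketched as a single ovs-vsctl transaction with the values from the logged DbSetCommand (Nova actually drives this through ovsdbapp, not the CLI):

    import subprocess

    # One ovs-vsctl transaction mirroring AddPortCommand + DbSetCommand.
    subprocess.check_call([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", "br-int", "tapc5532a42-8b",
        "--", "set", "Interface", "tapc5532a42-8b",
        "external_ids:iface-id=c5532a42-8b51-4dea-bf3e-e272409f89f4",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:ae:bd:45",
        "external_ids:vm-uuid=346e849d-fa61-4451-b34c-d6165fea3aa4",
    ])
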
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.654 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.655 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.655 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:ae:bd:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.655 257641 INFO nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Using config drive#033[00m
Nov 29 03:24:54 np0005539550 nova_compute[257631]: 2025-11-29 08:24:54.682 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:54.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
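
The recurring anonymous "HEAD / HTTP/1.0" requests radosgw logs from 192.168.122.100-102 look like load-balancer health probes against the RGW frontend: they name no bucket, return 200 with a zero-byte body, and complete at effectively zero latency. Reproducing one is trivial, as a sketch (the endpoint port is an assumption, since the beast listener port is not shown in these lines):

    import http.client

    # Send the same anonymous probe the balancer sends (port assumed).
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # expect 200, as in the beast lines
    conn.close()
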
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.214 257641 DEBUG nova.network.neutron [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updated VIF entry in instance network info cache for port c5532a42-8b51-4dea-bf3e-e272409f89f4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.215 257641 DEBUG nova.network.neutron [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updating instance_info_cache with network_info: [{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.238 257641 DEBUG oslo_concurrency.lockutils [req-249e2d68-f46a-45a0-92ef-18cd81f421a0 req-7dcd9efa-0317-438b-a36e-c484a3efd4d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.496 257641 INFO nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Creating config drive at /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config#033[00m
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.501 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplveejdu1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 310 op/s
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.679 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplveejdu1" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.709 257641 DEBUG nova.storage.rbd_utils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.713 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:24:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:55.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.871 257641 DEBUG oslo_concurrency.processutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.872 257641 INFO nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Deleting local config drive /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config because it was imported into RBD.#033[00m
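
That completes the config-drive round trip: pack the staged metadata into an ISO9660 image with volume label config-2 (the label cloud-init and similar guests search for), import it into the vms pool under the exact image name the cdrom <source> in the guest XML already references, then remove the local file. A condensed sketch of the same two commands (flags copied from the logged CMDs; the /tmp staging directory is the ephemeral one Nova created):

    import subprocess

    iso = ("/var/lib/nova/instances/"
           "346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config")

    # 1. Build the ISO labelled "config-2" from the staged metadata dir.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmplveejdu1",
    ])

    # 2. Import it into RBD under the name the domain XML expects; the
    #    local copy can then be deleted, as the log confirms.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso,
        "346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ])
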
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:24:55 np0005539550 kernel: tapc5532a42-8b: entered promiscuous mode
Nov 29 03:24:55 np0005539550 NetworkManager[49039]: <info>  [1764404695.9294] manager: (tapc5532a42-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Nov 29 03:24:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:55Z|00593|binding|INFO|Claiming lport c5532a42-8b51-4dea-bf3e-e272409f89f4 for this chassis.
Nov 29 03:24:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:55Z|00594|binding|INFO|c5532a42-8b51-4dea-bf3e-e272409f89f4: Claiming fa:16:3e:ae:bd:45 10.100.0.10
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.931 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:55.939 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bd:45 10.100.0.10'], port_security=['fa:16:3e:ae:bd:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '346e849d-fa61-4451-b34c-d6165fea3aa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532a42-8b51-4dea-bf3e-e272409f89f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:24:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:55.940 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532a42-8b51-4dea-bf3e-e272409f89f4 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:24:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:55.941 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:24:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:55.955 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[807ca2d8-b692-4ea8-9b44-191aef8af60e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:55Z|00595|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 ovn-installed in OVS
Nov 29 03:24:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:24:55Z|00596|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 up in Southbound
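
This is the OVN half of the handshake: ovn-controller spots the new tap device whose external_ids:iface-id matches logical port c5532a42-8b51-4dea-bf3e-e272409f89f4, claims the Port_Binding for this chassis, marks the interface ovn-installed in OVS, and sets the port up in the Southbound database. That up transition is what ultimately lets Neutron fire the network-vif-plugged event nova-compute registered for at 08:24:54.574. One way to watch for the transition from the chassis, as a polling sketch (Neutron itself reacts to OVSDB update events rather than polling):

    import subprocess
    import time

    port = "c5532a42-8b51-4dea-bf3e-e272409f89f4"

    # Poll the Southbound Port_Binding until ovn-controller marks it up.
    while True:
        out = subprocess.check_output(
            ["ovn-sbctl", "--bare", "--columns=up",
             "find", "Port_Binding", f"logical_port={port}"])
        if out.strip() == b"true":
            break
        time.sleep(0.5)
    print(f"{port} is up in Southbound")
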
Nov 29 03:24:55 np0005539550 systemd-udevd[341725]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.966 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:55 np0005539550 systemd-machined[216673]: New machine qemu-71-instance-00000085.
Nov 29 03:24:55 np0005539550 nova_compute[257631]: 2025-11-29 08:24:55.974 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:55 np0005539550 systemd[1]: Started Virtual Machine qemu-71-instance-00000085.
Nov 29 03:24:55 np0005539550 NetworkManager[49039]: <info>  [1764404695.9862] device (tapc5532a42-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:24:55 np0005539550 NetworkManager[49039]: <info>  [1764404695.9913] device (tapc5532a42-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:24:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:55.997 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9db6b8ba-cf03-4628-a58b-6d57fe30c9ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.000 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9d4a5d-d87a-4b88-915b-8f07bbe88942]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.028 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[53c2d445-aeb5-4d4c-8b53-0648cace885c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.048 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd8e85d-9fc0-44dc-8b65-6e1efecec5b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 17116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341737, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.065 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[74ce9353-2838-4571-b79c-4074de42e2f9]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772805, 'tstamp': 772805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341738, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772807, 'tstamp': 772807}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341738, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
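The two privsep replies above are netlink dumps (an RTM_NEWLINK record, then two RTM_NEWADDR records) taken inside the ovnmeta- namespace: the metadata veth tap5da19f7d-31 is up and carries 169.254.169.254/32 plus 10.100.0.2/28. The dict layout ('attrs', IFLA_*/IFA_* pairs) is pyroute2's message representation, which is what neutron's privileged helpers use. A minimal sketch of reproducing the address dump with pyroute2; the namespace name and interface index are copied from the log, and flags=0 assumes the namespace already exists:

    # a sketch, assuming pyroute2 is installed and the ovnmeta- netns still exists
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', flags=0)  # flags=0: do not create
    try:
        for msg in ns.get_addr(index=2):   # RTM_NEWADDR dump; index 2 = tap5da19f7d-31 per the log
            print(msg.get_attr('IFA_ADDRESS'), msg['prefixlen'])  # 169.254.169.254 32, 10.100.0.2 28
    finally:
        ns.close()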
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.068 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.069 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.071 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.071 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.072 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.072 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:24:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:24:56.073 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
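The four transaction lines above are the agent re-asserting its metadata port wiring: delete tap5da19f7d-30 from br-ex if it exists, add it to br-int, and point the interface's external_ids:iface-id at the Neutron port; both writes are no-ops here, hence the two "Transaction caused no change" lines. A sketch of issuing the same three commands through ovsdbapp's Open_vSwitch schema API; the OVSDB endpoint is an assumption:

    # a sketch using ovsdbapp's Open_vSwitch API; the db.sock endpoint is assumed
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap5da19f7d-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap5da19f7d-30', may_exist=True))
        txn.add(api.db_set('Interface', 'tap5da19f7d-30',
                           ('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'})))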
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.520 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404696.519447, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.521 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.545 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.549 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404696.5205822, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.549 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.568 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.574 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.594 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
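In the "Synchronizing instance power state" line above, the numeric states come from nova.compute.power_state: the database still says 0 (NOSTATE, nothing recorded for the still-building instance) while the hypervisor reports 3 (PAUSED), which matches the Paused -> Resumed lifecycle events in this spawn. For reference, the constants as defined in nova's power_state module:

    # nova.compute.power_state constants (values as defined in nova)
    STATE_MAP = {
        0: 'NOSTATE',    # no state recorded yet (instance still building)
        1: 'RUNNING',    # what libvirt reports after the Resumed event below
        3: 'PAUSED',     # what libvirt reports here, before the guest is resumed
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }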
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.848 257641 DEBUG nova.compute.manager [req-e8c0d37c-956e-48f0-8c18-84f5a6dd13c2 req-97c9179c-4bf0-40bb-8e31-f45970c81cc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.849 257641 DEBUG oslo_concurrency.lockutils [req-e8c0d37c-956e-48f0-8c18-84f5a6dd13c2 req-97c9179c-4bf0-40bb-8e31-f45970c81cc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.850 257641 DEBUG oslo_concurrency.lockutils [req-e8c0d37c-956e-48f0-8c18-84f5a6dd13c2 req-97c9179c-4bf0-40bb-8e31-f45970c81cc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.850 257641 DEBUG oslo_concurrency.lockutils [req-e8c0d37c-956e-48f0-8c18-84f5a6dd13c2 req-97c9179c-4bf0-40bb-8e31-f45970c81cc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
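The acquire/acquired/released trio above is the standard oslo.concurrency lock pattern, keyed on "<instance-uuid>-events" so per-instance event handling is serialized. A sketch of the pattern that produces exactly those three DEBUG lines (lockutils.py:404/409/423); the function body is illustrative, not nova's actual _pop_event:

    # a sketch of the oslo.concurrency lock pattern seen above
    from oslo_concurrency import lockutils

    @lockutils.synchronized('346e849d-fa61-4451-b34c-d6165fea3aa4-events')
    def _pop_event():
        # critical section: pop the pending event for this instance
        return 'network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4'

    _pop_event()   # logs Acquiring / acquired / released at DEBUG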
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.851 257641 DEBUG nova.compute.manager [req-e8c0d37c-956e-48f0-8c18-84f5a6dd13c2 req-97c9179c-4bf0-40bb-8e31-f45970c81cc9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Processing event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.852 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
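"Instance event wait completed in 0 seconds" above means the spawn thread was parked in wait_for_instance_event until the external_instance_event handler (two lines up) popped the matching network-vif-plugged event; here the notification had already arrived, so the wait returned immediately. A toy sketch of that rendezvous using threading.Event; nova's real implementation differs (eventlet, per-instance event registries), so treat this purely as the shape of the pattern:

    # toy sketch of the wait/pop rendezvous; not nova's actual code
    import threading

    _pending = {'network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4': threading.Event()}

    def external_instance_event(name):
        _pending[name].set()            # neutron's notification arrives

    def wait_for_instance_event(name, timeout=300):
        if not _pending[name].wait(timeout):
            raise TimeoutError(name)    # vif plug never confirmed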
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.857 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404696.8574493, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.858 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.861 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:24:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000048s ======
Nov 29 03:24:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:56.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
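The radosgw triplets that recur every couple of seconds from 192.168.122.100 and .102 are anonymous "HEAD / HTTP/1.0" probes answered 200 with an empty body, consistent with the haproxy/keepalived RGW health checks run by the ceph containers that appear later in this log (that attribution is an inference, not something the log states). A sketch of an equivalent probe; the RGW beast port is an assumption:

    # a sketch of the health probe; 8080 is an assumed RGW beast port
    import http.client

    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)    # 200, empty body, as in the beast lines
    conn.close()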
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.865 257641 INFO nova.virt.libvirt.driver [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance spawned successfully.#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.866 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.876 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.885 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.891 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.891 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.892 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.893 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.894 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.894 257641 DEBUG nova.virt.libvirt.driver [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.917 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.944 257641 INFO nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Took 9.31 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:24:56 np0005539550 nova_compute[257631]: 2025-11-29 08:24:56.944 257641 DEBUG nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:57 np0005539550 nova_compute[257631]: 2025-11-29 08:24:57.001 257641 INFO nova.compute.manager [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Took 10.46 seconds to build instance.#033[00m
Nov 29 03:24:57 np0005539550 nova_compute[257631]: 2025-11-29 08:24:57.042 257641 DEBUG oslo_concurrency.lockutils [None req-e1ac16f6-da60-4a39-8142-314effe1585b 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:24:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.8 MiB/s wr, 254 op/s
Nov 29 03:24:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:57.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:58 np0005539550 nova_compute[257631]: 2025-11-29 08:24:58.013 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:24:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:24:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:24:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:24:58.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.011 257641 DEBUG nova.compute.manager [req-a96b41b0-fd61-46d0-b990-877df5a085e6 req-648b15b1-01ab-47f4-a027-223f1acbe637 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.012 257641 DEBUG oslo_concurrency.lockutils [req-a96b41b0-fd61-46d0-b990-877df5a085e6 req-648b15b1-01ab-47f4-a027-223f1acbe637 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.012 257641 DEBUG oslo_concurrency.lockutils [req-a96b41b0-fd61-46d0-b990-877df5a085e6 req-648b15b1-01ab-47f4-a027-223f1acbe637 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.012 257641 DEBUG oslo_concurrency.lockutils [req-a96b41b0-fd61-46d0-b990-877df5a085e6 req-648b15b1-01ab-47f4-a027-223f1acbe637 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.013 257641 DEBUG nova.compute.manager [req-a96b41b0-fd61-46d0-b990-877df5a085e6 req-648b15b1-01ab-47f4-a027-223f1acbe637 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.013 257641 WARNING nova.compute.manager [req-a96b41b0-fd61-46d0-b990-877df5a085e6 req-648b15b1-01ab-47f4-a027-223f1acbe637 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.100 257641 DEBUG nova.compute.manager [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.144 257641 INFO nova.compute.manager [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] instance snapshotting#033[00m
Nov 29 03:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:24:59
Nov 29 03:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'vms', '.rgw.root', 'backups']
Nov 29 03:24:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.461 257641 INFO nova.virt.libvirt.driver [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Beginning live snapshot process#033[00m
Nov 29 03:24:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.8 MiB/s wr, 242 op/s
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.609 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.616 257641 DEBUG nova.virt.libvirt.imagebackend [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:24:59 np0005539550 nova_compute[257631]: 2025-11-29 08:24:59.815 257641 DEBUG nova.storage.rbd_utils [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(9149e6d7e5e74147b6e6a19cdc8cc73f) on rbd image(346e849d-fa61-4451-b34c-d6165fea3aa4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:24:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:24:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:24:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:24:59.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Nov 29 03:25:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Nov 29 03:25:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Nov 29 03:25:00 np0005539550 nova_compute[257631]: 2025-11-29 08:25:00.747 257641 DEBUG nova.storage.rbd_utils [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk@9149e6d7e5e74147b6e6a19cdc8cc73f to images/7462d96e-a79d-4947-a963-039652361944 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:25:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 166 op/s
Nov 29 03:25:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000048s ======
Nov 29 03:25:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:01.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Nov 29 03:25:02 np0005539550 nova_compute[257631]: 2025-11-29 08:25:02.002 257641 DEBUG nova.storage.rbd_utils [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] flattening images/7462d96e-a79d-4947-a963-039652361944 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:25:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:02 np0005539550 nova_compute[257631]: 2025-11-29 08:25:02.270 257641 DEBUG nova.storage.rbd_utils [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] removing snapshot(9149e6d7e5e74147b6e6a19cdc8cc73f) on rbd image(346e849d-fa61-4451-b34c-d6165fea3aa4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:25:02 np0005539550 nova_compute[257631]: 2025-11-29 08:25:02.828 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:02.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:03 np0005539550 nova_compute[257631]: 2025-11-29 08:25:03.018 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Nov 29 03:25:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Nov 29 03:25:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Nov 29 03:25:03 np0005539550 nova_compute[257631]: 2025-11-29 08:25:03.111 257641 DEBUG nova.storage.rbd_utils [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] creating snapshot(snap) on rbd image(7462d96e-a79d-4947-a963-039652361944) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
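The rbd_utils lines from 08:24:59 to 08:25:03 trace nova's live-snapshot path on Ceph: snapshot the vms/..._disk image, clone that snapshot into the images pool, flatten the clone so it no longer depends on its parent, drop the temporary snapshot, then create the final 'snap' on the finished image. A sketch of the same sequence with the python rbd bindings; pool, image, and snapshot names are copied from the log, and the protect/unprotect calls assume v1 clone semantics (clone v2 does not need them):

    # a sketch of the snapshot -> clone -> flatten -> cleanup flow above
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
        with cluster.open_ioctx('vms') as vms, cluster.open_ioctx('images') as images:
            src = '346e849d-fa61-4451-b34c-d6165fea3aa4_disk'
            tmp = '9149e6d7e5e74147b6e6a19cdc8cc73f'
            with rbd.Image(vms, src) as img:
                img.create_snap(tmp)
                img.protect_snap(tmp)                  # required for v1 clones
            rbd.RBD().clone(vms, src, tmp, images, '7462d96e-a79d-4947-a963-039652361944')
            with rbd.Image(images, '7462d96e-a79d-4947-a963-039652361944') as clone:
                clone.flatten()                        # copy parent data, detach the clone
                clone.create_snap('snap')              # final 'snap', as in the last line above
            with rbd.Image(vms, src) as img:
                img.unprotect_snap(tmp)
                img.remove_snap(tmp)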
Nov 29 03:25:03 np0005539550 podman[341980]: 2025-11-29 08:25:03.324529649 +0000 UTC m=+0.064948790 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 03:25:03 np0005539550 podman[341981]: 2025-11-29 08:25:03.324545519 +0000 UTC m=+0.060253737 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:25:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 518 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 288 op/s
Nov 29 03:25:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:03.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Nov 29 03:25:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Nov 29 03:25:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Nov 29 03:25:04 np0005539550 nova_compute[257631]: 2025-11-29 08:25:04.613 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:05 np0005539550 nova_compute[257631]: 2025-11-29 08:25:05.468 257641 INFO nova.virt.libvirt.driver [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Snapshot image upload complete#033[00m
Nov 29 03:25:05 np0005539550 nova_compute[257631]: 2025-11-29 08:25:05.468 257641 INFO nova.compute.manager [None req-0cc7216a-a348-4b10-b236-a2e1759b2e2e 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Took 6.32 seconds to snapshot the instance on the hypervisor.#033[00m
Nov 29 03:25:05 np0005539550 nova_compute[257631]: 2025-11-29 08:25:05.492 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 559 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 6.7 MiB/s wr, 365 op/s
Nov 29 03:25:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:06.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:07 np0005539550 nova_compute[257631]: 2025-11-29 08:25:07.851 257641 INFO nova.compute.manager [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Rescuing#033[00m
Nov 29 03:25:07 np0005539550 nova_compute[257631]: 2025-11-29 08:25:07.852 257641 DEBUG oslo_concurrency.lockutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:07 np0005539550 nova_compute[257631]: 2025-11-29 08:25:07.853 257641 DEBUG oslo_concurrency.lockutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:07 np0005539550 nova_compute[257631]: 2025-11-29 08:25:07.854 257641 DEBUG nova.network.neutron [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 581 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.5 MiB/s wr, 298 op/s
Nov 29 03:25:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:07.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:08 np0005539550 nova_compute[257631]: 2025-11-29 08:25:08.022 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:25:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:08.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:25:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:25:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:25:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:25:09 np0005539550 nova_compute[257631]: 2025-11-29 08:25:09.615 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 581 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 5.9 MiB/s wr, 271 op/s
Nov 29 03:25:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:09.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:10 np0005539550 nova_compute[257631]: 2025-11-29 08:25:10.691 257641 DEBUG nova.network.neutron [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updating instance_info_cache with network_info: [{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
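The network_info blob in the cache update above is plain JSON (note the lowercase true/false/null), one object per VIF. A sketch of pulling the port/fixed-IP pairs out of it, assuming the blob has been captured into a string named raw:

    # a sketch; 'raw' is assumed to hold the JSON network_info blob above
    import json

    for vif in json.loads(raw):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'])   # c5532a42-... 10.100.0.10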
Nov 29 03:25:10 np0005539550 nova_compute[257631]: 2025-11-29 08:25:10.739 257641 DEBUG oslo_concurrency.lockutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:10.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:11 np0005539550 nova_compute[257631]: 2025-11-29 08:25:11.031 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
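"Shutting down instance from state 1" starts the rescue path with a clean (ACPI) shutdown; the machine-qemu...instance-00000085 scope teardown two seconds later shows the guest actually stopped. A toy sketch of such a clean-shutdown loop with libvirt-python; the domain name comes from the systemd scope name below, the timings are assumptions, and nova's real _clean_shutdown handles more states and retries:

    # toy sketch of a clean shutdown with a hard-stop fallback
    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000085')
    dom.shutdown()                       # ask the guest (ACPI) to power off
    for _ in range(60):
        try:
            if not dom.isActive():
                break                    # guest powered off cleanly
        except libvirt.libvirtError:
            break                        # transient domain already gone
        time.sleep(1)
    else:
        dom.destroy()                    # guest ignored ACPI: hard stop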
Nov 29 03:25:11 np0005539550 podman[342193]: 2025-11-29 08:25:11.592115386 +0000 UTC m=+0.084710984 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:25:11 np0005539550 podman[342193]: 2025-11-29 08:25:11.698339596 +0000 UTC m=+0.190935184 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 599 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.3 MiB/s wr, 135 op/s
Nov 29 03:25:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:11.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:12 np0005539550 podman[342346]: 2025-11-29 08:25:12.338890271 +0000 UTC m=+0.064304635 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:25:12 np0005539550 podman[342346]: 2025-11-29 08:25:12.349374332 +0000 UTC m=+0.074788726 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:25:12 np0005539550 podman[342409]: 2025-11-29 08:25:12.597045227 +0000 UTC m=+0.069749845 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git)
Nov 29 03:25:12 np0005539550 podman[342409]: 2025-11-29 08:25:12.608570574 +0000 UTC m=+0.081275182 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2)
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:25:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:12.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:13 np0005539550 nova_compute[257631]: 2025-11-29 08:25:13.022 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539550 kernel: tapc5532a42-8b (unregistering): left promiscuous mode
Nov 29 03:25:13 np0005539550 NetworkManager[49039]: <info>  [1764404713.3434] device (tapc5532a42-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:13 np0005539550 nova_compute[257631]: 2025-11-29 08:25:13.352 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:13Z|00597|binding|INFO|Releasing lport c5532a42-8b51-4dea-bf3e-e272409f89f4 from this chassis (sb_readonly=0)
Nov 29 03:25:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:13Z|00598|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 down in Southbound
Nov 29 03:25:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:13Z|00599|binding|INFO|Removing iface tapc5532a42-8b ovn-installed in OVS
Nov 29 03:25:13 np0005539550 nova_compute[257631]: 2025-11-29 08:25:13.355 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.363 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bd:45 10.100.0.10'], port_security=['fa:16:3e:ae:bd:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '346e849d-fa61-4451-b34c-d6165fea3aa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532a42-8b51-4dea-bf3e-e272409f89f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.364 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532a42-8b51-4dea-bf3e-e272409f89f4 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.366 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:25:13 np0005539550 nova_compute[257631]: 2025-11-29 08:25:13.371 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.384 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[66d82f83-5400-4c4a-8ab8-c1e7db12f064]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:13 np0005539550 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 29 03:25:13 np0005539550 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000085.scope: Consumed 13.889s CPU time.
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.412 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[149219a9-59c4-4694-bb80-35c5a9fe22a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:13 np0005539550 systemd-machined[216673]: Machine qemu-71-instance-00000085 terminated.
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.415 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[400d8f07-4362-4fce-a543-7fcf13739cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.442 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[53721724-5ca4-4c06-90fe-868e356bc852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.458 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[13287d52-24b6-4002-ae12-78bd9fe256b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 17116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342584, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.472 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[96e14769-019b-4446-b1d0-918a25fc1927]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772805, 'tstamp': 772805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342585, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772807, 'tstamp': 772807}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342585, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.474 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:25:13 np0005539550 nova_compute[257631]: 2025-11-29 08:25:13.519 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:25:13 np0005539550 nova_compute[257631]: 2025-11-29 08:25:13.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.524 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.524 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.524 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:13.525 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:13 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6cd6a2b7-733e-4ad7-8e0a-48372e8c983c does not exist
Nov 29 03:25:13 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 78620d64-8716-4e93-b00c-a5f3a43fbf35 does not exist
Nov 29 03:25:13 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5c9fc33c-52f3-4eff-b1ea-acd24f9973c8 does not exist
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:25:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:25:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 620 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 6.2 MiB/s wr, 157 op/s
Nov 29 03:25:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:13.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.046 257641 INFO nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.053 257641 INFO nova.virt.libvirt.driver [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance destroyed successfully.#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.053 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.070 257641 INFO nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Attempting a stable device rescue#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.133 257641 DEBUG nova.compute.manager [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.133 257641 DEBUG oslo_concurrency.lockutils [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.134 257641 DEBUG oslo_concurrency.lockutils [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.134 257641 DEBUG oslo_concurrency.lockutils [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.134 257641 DEBUG nova.compute.manager [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.134 257641 WARNING nova.compute.manager [req-5efa89bf-05e6-419e-814d-805daebf9145 req-109facd2-a7e0-4200-a5b6-a77172413aa4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:25:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:25:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:14 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:25:14 np0005539550 podman[342737]: 2025-11-29 08:25:14.163986036 +0000 UTC m=+0.059672813 container create 1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:25:14 np0005539550 systemd[1]: Started libpod-conmon-1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37.scope.
Nov 29 03:25:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:25:14 np0005539550 podman[342737]: 2025-11-29 08:25:14.14583397 +0000 UTC m=+0.041520767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:14 np0005539550 podman[342737]: 2025-11-29 08:25:14.24747181 +0000 UTC m=+0.143158617 container init 1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:25:14 np0005539550 podman[342737]: 2025-11-29 08:25:14.2545566 +0000 UTC m=+0.150243377 container start 1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:25:14 np0005539550 podman[342737]: 2025-11-29 08:25:14.257712196 +0000 UTC m=+0.153398973 container attach 1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:25:14 np0005539550 charming_fermat[342753]: 167 167
Nov 29 03:25:14 np0005539550 systemd[1]: libpod-1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37.scope: Deactivated successfully.
Nov 29 03:25:14 np0005539550 podman[342737]: 2025-11-29 08:25:14.263885114 +0000 UTC m=+0.159571901 container died 1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-10a98abd5bb1c793eac7f629dcd3bee18aada6970ec21f159c7fff11f51fcdd1-merged.mount: Deactivated successfully.
Nov 29 03:25:14 np0005539550 podman[342737]: 2025-11-29 08:25:14.306676371 +0000 UTC m=+0.202363148 container remove 1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:25:14 np0005539550 systemd[1]: libpod-conmon-1bff6da774b0653e70811409a611970b9eea112e5267f899a47d76310fc75d37.scope: Deactivated successfully.
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.329 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.333 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.333 257641 INFO nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Creating image(s)#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.356 257641 DEBUG nova.storage.rbd_utils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.359 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.394 257641 DEBUG nova.storage.rbd_utils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.419 257641 DEBUG nova.storage.rbd_utils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.422 257641 DEBUG oslo_concurrency.lockutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "f48338e0dda04f27551f033b71353a43c7d326ff" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.423 257641 DEBUG oslo_concurrency.lockutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f48338e0dda04f27551f033b71353a43c7d326ff" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:14 np0005539550 podman[342831]: 2025-11-29 08:25:14.523165717 +0000 UTC m=+0.053896454 container create 9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:25:14 np0005539550 systemd[1]: Started libpod-conmon-9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4.scope.
Nov 29 03:25:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:25:14 np0005539550 podman[342831]: 2025-11-29 08:25:14.497940582 +0000 UTC m=+0.028671359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573ffe1e9c60b634eef70f39a90d5a99a879bfdabadfef72c9948fe0f409bd78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573ffe1e9c60b634eef70f39a90d5a99a879bfdabadfef72c9948fe0f409bd78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573ffe1e9c60b634eef70f39a90d5a99a879bfdabadfef72c9948fe0f409bd78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573ffe1e9c60b634eef70f39a90d5a99a879bfdabadfef72c9948fe0f409bd78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573ffe1e9c60b634eef70f39a90d5a99a879bfdabadfef72c9948fe0f409bd78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:14 np0005539550 podman[342831]: 2025-11-29 08:25:14.612794439 +0000 UTC m=+0.143525206 container init 9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haibt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:25:14 np0005539550 podman[342831]: 2025-11-29 08:25:14.619391337 +0000 UTC m=+0.150122084 container start 9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:25:14 np0005539550 podman[342831]: 2025-11-29 08:25:14.623059625 +0000 UTC m=+0.153790382 container attach 9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haibt, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.636 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.762 257641 DEBUG nova.virt.libvirt.imagebackend [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/7462d96e-a79d-4947-a963-039652361944/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/7462d96e-a79d-4947-a963-039652361944/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.815 257641 DEBUG nova.virt.libvirt.imagebackend [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/7462d96e-a79d-4947-a963-039652361944/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.816 257641 DEBUG nova.storage.rbd_utils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] cloning images/7462d96e-a79d-4947-a963-039652361944@snap to None/346e849d-fa61-4451-b34c-d6165fea3aa4_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:25:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:14.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.922 257641 DEBUG oslo_concurrency.lockutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "f48338e0dda04f27551f033b71353a43c7d326ff" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.973 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'migration_context' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.985 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.987 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Start _get_guest_xml network_info=[{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:ae:bd:45"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'usb', 'dev': 'sdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '7462d96e-a79d-4947-a963-039652361944', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:14 np0005539550 nova_compute[257631]: 2025-11-29 08:25:14.987 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.005 257641 WARNING nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.010 257641 DEBUG nova.virt.libvirt.host [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.011 257641 DEBUG nova.virt.libvirt.host [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.014 257641 DEBUG nova.virt.libvirt.host [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.014 257641 DEBUG nova.virt.libvirt.host [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.015 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.016 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.016 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.016 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.016 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.016 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.016 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.017 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.017 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.017 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.017 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.017 257641 DEBUG nova.virt.hardware [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.018 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.036 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:15 np0005539550 vigilant_haibt[342847]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:25:15 np0005539550 vigilant_haibt[342847]: --> relative data size: 1.0
Nov 29 03:25:15 np0005539550 vigilant_haibt[342847]: --> All data devices are unavailable
Nov 29 03:25:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755841708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:15 np0005539550 systemd[1]: libpod-9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4.scope: Deactivated successfully.
Nov 29 03:25:15 np0005539550 podman[342831]: 2025-11-29 08:25:15.486587801 +0000 UTC m=+1.017318538 container died 9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haibt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.489 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-573ffe1e9c60b634eef70f39a90d5a99a879bfdabadfef72c9948fe0f409bd78-merged.mount: Deactivated successfully.
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.531 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:15 np0005539550 podman[342831]: 2025-11-29 08:25:15.692142025 +0000 UTC m=+1.222872772 container remove 9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haibt, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:25:15 np0005539550 systemd[1]: libpod-conmon-9864b29f7e331b0348245a1014c42aca058294edcbb90cdf48c903aecd5bfaa4.scope: Deactivated successfully.
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.753 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.755 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.755 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.755 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.756 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.758 257641 INFO nova.compute.manager [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Terminating instance#033[00m
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.759 257641 DEBUG nova.compute.manager [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:25:15 np0005539550 kernel: tap72d454e3-eb (unregistering): left promiscuous mode
Nov 29 03:25:15 np0005539550 NetworkManager[49039]: <info>  [1764404715.8244] device (tap72d454e3-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.846 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:15Z|00600|binding|INFO|Releasing lport 72d454e3-eb73-4ea1-94dd-f13b08f74cda from this chassis (sb_readonly=0)
Nov 29 03:25:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:15Z|00601|binding|INFO|Setting lport 72d454e3-eb73-4ea1-94dd-f13b08f74cda down in Southbound
Nov 29 03:25:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:15Z|00602|binding|INFO|Removing iface tap72d454e3-eb ovn-installed in OVS
Nov 29 03:25:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:15.861 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:a1:5d 10.100.0.6'], port_security=['fa:16:3e:0a:a1:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd17ba263-68c7-4428-9d64-9a809e93a457', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0bace34c102e4d56b089fd695d324f10', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a26ea06d-6837-4c64-a5e9-9d9016316b21, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=72d454e3-eb73-4ea1-94dd-f13b08f74cda) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:15.863 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 72d454e3-eb73-4ea1-94dd-f13b08f74cda in datapath 7fc1dfc3-8d7f-4854-980d-37a93f366035 unbound from our chassis#033[00m
Nov 29 03:25:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:15.864 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7fc1dfc3-8d7f-4854-980d-37a93f366035, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:25:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:15.865 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5294131c-7772-4967-926f-3b062b4021d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:15.866 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 namespace which is not needed anymore#033[00m
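The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's row-event machinery firing as the Port_Binding row goes down. A rough sketch of how such an event is typically declared with ovsdbapp (the class body and the unbind check are assumptions for illustration, not the agent's exact code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches "events=('update',), table='Port_Binding'" as logged.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # old.chassis held this chassis and row.chassis is now empty,
            # so the port was unbound here; tear the metadata namespace
            # down if this was the datapath's last VIF.
            print('Port %s unbound from our chassis' % row.logical_port)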
Nov 29 03:25:15 np0005539550 nova_compute[257631]: 2025-11-29 08:25:15.868 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:15 np0005539550 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000080.scope: Deactivated successfully.
Nov 29 03:25:15 np0005539550 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000080.scope: Consumed 17.724s CPU time.
Nov 29 03:25:15 np0005539550 systemd-machined[216673]: Machine qemu-66-instance-00000080 terminated.
Nov 29 03:25:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 612 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 167 op/s
Nov 29 03:25:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:15.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
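The anonymous "HEAD / HTTP/1.0" returning 200 with a zero-byte body is the classic signature of a load-balancer health probe against radosgw's beast frontend. The equivalent check from Python, with an assumed endpoint URL and port:

    import requests

    # Probe the RGW frontend; 200 and an empty body mean the gateway is up.
    resp = requests.head('http://192.168.122.100:8080/', timeout=2)
    assert resp.status_code == 200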
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.015 257641 INFO nova.virt.libvirt.driver [-] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Instance destroyed successfully.#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.016 257641 DEBUG nova.objects.instance [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lazy-loading 'resources' on Instance uuid d17ba263-68c7-4428-9d64-9a809e93a457 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:16 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [NOTICE]   (338268) : haproxy version is 2.8.14-c23fe91
Nov 29 03:25:16 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [NOTICE]   (338268) : path to executable is /usr/sbin/haproxy
Nov 29 03:25:16 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [WARNING]  (338268) : Exiting Master process...
Nov 29 03:25:16 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [WARNING]  (338268) : Exiting Master process...
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.034 257641 DEBUG nova.virt.libvirt.vif [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-207824385',display_name='tempest-ServerActionsTestOtherA-server-207824385',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-207824385',id=128,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:23:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0bace34c102e4d56b089fd695d324f10',ramdisk_id='',reservation_id='r-yo5nxlbx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-1954650991',owner_user_name='tempest-ServerActionsTestOtherA-1954650991-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:23:25Z,user_data=None,user_id='1552f15deb524705a9456cbe9b54c429',uuid=d17ba263-68c7-4428-9d64-9a809e93a457,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.034 257641 DEBUG nova.network.os_vif_util [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converting VIF {"id": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "address": "fa:16:3e:0a:a1:5d", "network": {"id": "7fc1dfc3-8d7f-4854-980d-37a93f366035", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-644729119-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0bace34c102e4d56b089fd695d324f10", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72d454e3-eb", "ovs_interfaceid": "72d454e3-eb73-4ea1-94dd-f13b08f74cda", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:16 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [ALERT]    (338268) : Current worker (338270) exited with code 143 (Terminated)
Nov 29 03:25:16 np0005539550 neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035[338264]: [WARNING]  (338268) : All workers exited. Exiting... (0)
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.036 257641 DEBUG nova.network.os_vif_util [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=72d454e3-eb73-4ea1-94dd-f13b08f74cda,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d454e3-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.036 257641 DEBUG os_vif [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=72d454e3-eb73-4ea1-94dd-f13b08f74cda,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d454e3-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:25:16 np0005539550 systemd[1]: libpod-54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12.scope: Deactivated successfully.
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.042 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72d454e3-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.045 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539550 podman[343116]: 2025-11-29 08:25:16.047156406 +0000 UTC m=+0.058540606 container died 54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/501715006' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.052 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.056 257641 INFO os_vif [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:a1:5d,bridge_name='br-int',has_traffic_filtering=True,id=72d454e3-eb73-4ea1-94dd-f13b08f74cda,network=Network(7fc1dfc3-8d7f-4854-980d-37a93f366035),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72d454e3-eb')#033[00m
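The "Unplugging vif ..." / "Successfully unplugged vif ..." pair reflects os-vif's small public API: initialize the plugin registry, then hand a VIF object plus instance info to unplug(). A sketch reconstructing the objects from the values logged above (the field set is trimmed; running it for real requires the OVS plugin and a local ovsdb):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # loads the 'ovs' plugin via stevedore

    vif = vif_obj.VIFOpenVSwitch(
        id='72d454e3-eb73-4ea1-94dd-f13b08f74cda',
        address='fa:16:3e:0a:a1:5d',
        bridge_name='br-int',
        vif_name='tap72d454e3-eb')
    info = instance_info.InstanceInfo(
        uuid='d17ba263-68c7-4428-9d64-9a809e93a457',
        name='instance-00000080')

    # Removes tap72d454e3-eb from br-int, i.e. the DelPortCommand above.
    os_vif.unplug(vif, info)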
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.078 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.079 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
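The "Running cmd (subprocess): ceph mon dump ..." / "CMD ... returned: 0" pairs are oslo.concurrency's processutils at work; nova's RBD driver shells out this way to discover monitor addresses. A minimal sketch, assuming a readable /etc/ceph/ceph.conf and a client.openstack keyring:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mons = json.loads(out)['mons']
    print([m['name'] for m in mons])  # e.g. the three 192.168.122.x monitors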
Nov 29 03:25:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cecfee3fad69887674f56fe024fe570f3aae4a43783c246245efe83e080363cd-merged.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12-userdata-shm.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539550 podman[343116]: 2025-11-29 08:25:16.091320196 +0000 UTC m=+0.102704396 container cleanup 54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:25:16 np0005539550 systemd[1]: libpod-conmon-54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12.scope: Deactivated successfully.
Nov 29 03:25:16 np0005539550 podman[343201]: 2025-11-29 08:25:16.151698586 +0000 UTC m=+0.039964121 container remove 54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.158 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f34aac5f-ba3c-42e5-8aa3-ee494b63b179]: (4, ('Sat Nov 29 08:25:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12)\n54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12\nSat Nov 29 08:25:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 (54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12)\n54efda0331a879a3d1d7aa8eb3087e0b4033addd59c0b8a7a293bbcf82210d12\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.160 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b1af22f6-44fe-4448-83ec-5ac05c42bec2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.161 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fc1dfc3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:16 np0005539550 kernel: tap7fc1dfc3-80: left promiscuous mode
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.163 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.187 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[34fd51ec-ec71-4ebd-a7c3-54e5fe987296]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.188 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.200 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[07702252-e4ec-483f-abb4-9a674cd9ca68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.201 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[153d3a2e-680e-4d1b-a81a-384584f96052]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.216 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[262ac8e8-933d-45e3-9a43-24ec067a516b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 765273, 'reachable_time': 32897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343233, 'error': None, 'target': 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:16 np0005539550 systemd[1]: run-netns-ovnmeta\x2d7fc1dfc3\x2d8d7f\x2d4854\x2d980d\x2d37a93f366035.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.223 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:25:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:16.223 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[babdf85d-90cd-47ea-8b55-1304eeb2d74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
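The "Namespace ovnmeta-... deleted" message is neutron's privsep-wrapped ip_lib removing the now-empty metadata namespace. Neutron's ip_lib is built on pyroute2, so a standalone equivalent (run as root) looks roughly like:

    from pyroute2 import netns

    NS = 'ovnmeta-7fc1dfc3-8d7f-4854-980d-37a93f366035'
    if NS in netns.listnetns():
        netns.remove(NS)  # unmounts and deletes /var/run/netns/<NS>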
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.341 257641 DEBUG nova.compute.manager [req-ae5ddf27-f0ab-4836-bcfa-e38a33328c31 req-42909276-e8a3-40a4-b007-29f476fd6a5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-vif-unplugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.341 257641 DEBUG oslo_concurrency.lockutils [req-ae5ddf27-f0ab-4836-bcfa-e38a33328c31 req-42909276-e8a3-40a4-b007-29f476fd6a5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.341 257641 DEBUG oslo_concurrency.lockutils [req-ae5ddf27-f0ab-4836-bcfa-e38a33328c31 req-42909276-e8a3-40a4-b007-29f476fd6a5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.342 257641 DEBUG oslo_concurrency.lockutils [req-ae5ddf27-f0ab-4836-bcfa-e38a33328c31 req-42909276-e8a3-40a4-b007-29f476fd6a5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.342 257641 DEBUG nova.compute.manager [req-ae5ddf27-f0ab-4836-bcfa-e38a33328c31 req-42909276-e8a3-40a4-b007-29f476fd6a5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] No waiting events found dispatching network-vif-unplugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.342 257641 DEBUG nova.compute.manager [req-ae5ddf27-f0ab-4836-bcfa-e38a33328c31 req-42909276-e8a3-40a4-b007-29f476fd6a5c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-vif-unplugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
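The network-vif-unplugged event arriving here was posted by neutron to nova's os-server-external-events API. A hedged sketch of the same notification sent directly to the compute API (the endpoint URL and token handling are placeholders):

    import requests

    body = {'events': [{
        'name': 'network-vif-unplugged',
        'server_uuid': 'd17ba263-68c7-4428-9d64-9a809e93a457',
        'tag': '72d454e3-eb73-4ea1-94dd-f13b08f74cda',
    }]}
    requests.post(
        'http://nova-api.example.com:8774/v2.1/os-server-external-events',
        json=body,
        headers={'X-Auth-Token': '<keystone-token>'})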
Nov 29 03:25:16 np0005539550 podman[343281]: 2025-11-29 08:25:16.409612286 +0000 UTC m=+0.044803806 container create 3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.414 257641 DEBUG nova.compute.manager [req-111d469e-3827-4ace-94d2-af308af89c50 req-61e21bf3-f3aa-4f49-838e-f6e8f296ea14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.414 257641 DEBUG oslo_concurrency.lockutils [req-111d469e-3827-4ace-94d2-af308af89c50 req-61e21bf3-f3aa-4f49-838e-f6e8f296ea14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.414 257641 DEBUG oslo_concurrency.lockutils [req-111d469e-3827-4ace-94d2-af308af89c50 req-61e21bf3-f3aa-4f49-838e-f6e8f296ea14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.414 257641 DEBUG oslo_concurrency.lockutils [req-111d469e-3827-4ace-94d2-af308af89c50 req-61e21bf3-f3aa-4f49-838e-f6e8f296ea14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.415 257641 DEBUG nova.compute.manager [req-111d469e-3827-4ace-94d2-af308af89c50 req-61e21bf3-f3aa-4f49-838e-f6e8f296ea14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.415 257641 WARNING nova.compute.manager [req-111d469e-3827-4ace-94d2-af308af89c50 req-61e21bf3-f3aa-4f49-838e-f6e8f296ea14 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:25:16 np0005539550 systemd[1]: Started libpod-conmon-3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee.scope.
Nov 29 03:25:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:25:16 np0005539550 podman[343281]: 2025-11-29 08:25:16.485635111 +0000 UTC m=+0.120826671 container init 3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:25:16 np0005539550 podman[343281]: 2025-11-29 08:25:16.392648339 +0000 UTC m=+0.027839879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:16 np0005539550 podman[343281]: 2025-11-29 08:25:16.493286064 +0000 UTC m=+0.128477584 container start 3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:25:16 np0005539550 podman[343281]: 2025-11-29 08:25:16.497387033 +0000 UTC m=+0.132578583 container attach 3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:16 np0005539550 musing_blackburn[343299]: 167 167
Nov 29 03:25:16 np0005539550 systemd[1]: libpod-3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee.scope: Deactivated successfully.
Nov 29 03:25:16 np0005539550 podman[343281]: 2025-11-29 08:25:16.503185502 +0000 UTC m=+0.138377032 container died 3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:25:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d15d8e5b26c725753c7e9738de1bd0e99a28898a2fd5c3b1ea5eaf795448c92d-merged.mount: Deactivated successfully.
Nov 29 03:25:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009719159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:16 np0005539550 podman[343295]: 2025-11-29 08:25:16.54225153 +0000 UTC m=+0.093892085 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.556 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.558 257641 DEBUG nova.virt.libvirt.vif [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1614242741',display_name='tempest-ServerStableDeviceRescueTest-server-1614242741',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1614242741',id=133,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-v7xvcml6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:05Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=346e849d-fa61-4451-b34c-d6165fea3aa4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:ae:bd:45"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.558 257641 DEBUG nova.network.os_vif_util [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "vif_mac": "fa:16:3e:ae:bd:45"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.559 257641 DEBUG nova.network.os_vif_util [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.560 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'pci_devices' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:16 np0005539550 podman[343281]: 2025-11-29 08:25:16.562200999 +0000 UTC m=+0.197392519 container remove 3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:25:16 np0005539550 systemd[1]: libpod-conmon-3e587106b22ce131d8476bdb2099159ddaa9c1e329ead0cab20c238f50ad04ee.scope: Deactivated successfully.
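The entry that follows dumps the complete guest XML nova generated for the rescue build of instance-00000085 (note the extra vms/..._disk.rescue RBD disk carrying boot order 1). Once the domain is defined, the same document can be read back with libvirt-python; a minimal sketch assuming access to the local qemu:///system socket:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('346e849d-fa61-4451-b34c-d6165fea3aa4')
    print(dom.XMLDesc(0))  # the same <domain type="kvm"> document as below
    conn.close()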
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.574 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <uuid>346e849d-fa61-4451-b34c-d6165fea3aa4</uuid>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <name>instance-00000085</name>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-1614242741</nova:name>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:25:15</nova:creationTime>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:user uuid="873186539acb4bf9b90513e0e1beb56f">tempest-ServerStableDeviceRescueTest-507673154-project-member</nova:user>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:project uuid="a9a83f8d8d7f4d08890407f978c05166">tempest-ServerStableDeviceRescueTest-507673154</nova:project>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <nova:port uuid="c5532a42-8b51-4dea-bf3e-e272409f89f4">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <entry name="serial">346e849d-fa61-4451-b34c-d6165fea3aa4</entry>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <entry name="uuid">346e849d-fa61-4451-b34c-d6165fea3aa4</entry>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk.rescue">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <target dev="sdb" bus="usb"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <boot order="1"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:ae:bd:45"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <target dev="tapc5532a42-8b"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/console.log" append="off"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:25:16 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:25:16 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:25:16 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:25:16 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
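
[Editor's note] The domain XML dumped above describes the rescue layout: the original root disk stays at vda, the config drive is attached as a SATA cdrom at sda, and the rescue image at sdb carries <boot order="1"/> so the guest boots from it. A minimal standard-library sketch for pulling the disk targets and boot order out of such a dump (the XML literal is abridged from the log, not fetched from libvirt):

import xml.etree.ElementTree as ET

DOMAIN_XML = """
<domain>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/346e849d-fa61-4451-b34c-d6165fea3aa4_disk.rescue"/>
      <target dev="sdb" bus="usb"/>
      <boot order="1"/>
    </disk>
  </devices>
</domain>
"""

for disk in ET.fromstring(DOMAIN_XML).iter("disk"):
    target = disk.find("target").get("dev")
    source = disk.find("source").get("name")
    boot = disk.find("boot")
    order = boot.get("order") if boot is not None else "-"
    print(f"{target:3} boot_order={order:2} rbd={source}")

Run against the full dump, this confirms the rescue disk (sdb) is the only device with an explicit boot order, which is how libvirt is told to boot the rescue image ahead of vda.
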
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.587 257641 INFO nova.virt.libvirt.driver [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Deleting instance files /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457_del#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.588 257641 INFO nova.virt.libvirt.driver [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Deletion of /var/lib/nova/instances/d17ba263-68c7-4428-9d64-9a809e93a457_del complete#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.592 257641 INFO nova.virt.libvirt.driver [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance destroyed successfully.#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.698 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.698 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.698 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.699 257641 DEBUG nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] No VIF found with MAC fa:16:3e:ae:bd:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.699 257641 INFO nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Using config drive#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.726 257641 DEBUG nova.storage.rbd_utils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.741 257641 INFO nova.compute.manager [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.742 257641 DEBUG oslo.service.loopingcall [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.742 257641 DEBUG nova.compute.manager [-] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.742 257641 DEBUG nova.network.neutron [-] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.755 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:16 np0005539550 podman[343352]: 2025-11-29 08:25:16.766523053 +0000 UTC m=+0.053502945 container create cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:25:16 np0005539550 systemd[1]: Started libpod-conmon-cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc.scope.
Nov 29 03:25:16 np0005539550 nova_compute[257631]: 2025-11-29 08:25:16.814 257641 DEBUG nova.objects.instance [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'keypairs' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:16 np0005539550 podman[343352]: 2025-11-29 08:25:16.746982974 +0000 UTC m=+0.033962886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:25:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832b855f558f06fe1b17b8ba2a05257821e0d2b031e93ffadbcd50e02807d2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832b855f558f06fe1b17b8ba2a05257821e0d2b031e93ffadbcd50e02807d2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832b855f558f06fe1b17b8ba2a05257821e0d2b031e93ffadbcd50e02807d2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5832b855f558f06fe1b17b8ba2a05257821e0d2b031e93ffadbcd50e02807d2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:16 np0005539550 podman[343352]: 2025-11-29 08:25:16.891347339 +0000 UTC m=+0.178327321 container init cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:25:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:16.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:16 np0005539550 podman[343352]: 2025-11-29 08:25:16.903268225 +0000 UTC m=+0.190248157 container start cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:25:16 np0005539550 podman[343352]: 2025-11-29 08:25:16.90682663 +0000 UTC m=+0.193806572 container attach cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.126 257641 INFO nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Creating config drive at /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config.rescue#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.132 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwhhtjjag execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.268 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwhhtjjag" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.295 257641 DEBUG nova.storage.rbd_utils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] rbd image 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.299 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config.rescue 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.471 257641 DEBUG oslo_concurrency.processutils [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config.rescue 346e849d-fa61-4451-b34c-d6165fea3aa4_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.471 257641 INFO nova.virt.libvirt.driver [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Deleting local config drive /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4/disk.config.rescue because it was imported into RBD.#033[00m
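
[Editor's note] The three log lines above show the config-drive rescue path end to end: Nova builds an ISO with mkisofs, imports it into the vms pool as <uuid>_disk.config.rescue, then removes the local copy. A rough standalone re-creation of that sequence (pool, flags and --id are taken from the log; the -publisher argument is omitted here; this is an illustration, not Nova's code):

import os
import subprocess
import tempfile

def make_config_drive_rbd(instance_uuid, metadata_dir, pool="vms",
                          ceph_id="openstack", conf="/etc/ceph/ceph.conf"):
    """Build a config-drive ISO and import it into RBD, as logged above."""
    iso = os.path.join(tempfile.gettempdir(), f"{instance_uuid}.config.rescue")
    # Same mkisofs flags nova_compute ran (requires mkisofs installed).
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-quiet", "-J", "-r",
                    "-V", "config-2", metadata_dir], check=True)
    # Import the ISO as an RBD image, then drop the local copy, mirroring
    # the "Deleting local config drive ... imported into RBD" message.
    subprocess.run(["rbd", "import", "--pool", pool, iso,
                    f"{instance_uuid}_disk.config.rescue", "--image-format=2",
                    "--id", ceph_id, "--conf", conf], check=True)
    os.remove(iso)
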
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.482 257641 DEBUG nova.network.neutron [-] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.501 257641 INFO nova.compute.manager [-] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Took 0.76 seconds to deallocate network for instance.#033[00m
Nov 29 03:25:17 np0005539550 kernel: tapc5532a42-8b: entered promiscuous mode
Nov 29 03:25:17 np0005539550 NetworkManager[49039]: <info>  [1764404717.5403] manager: (tapc5532a42-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.540 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:17Z|00603|binding|INFO|Claiming lport c5532a42-8b51-4dea-bf3e-e272409f89f4 for this chassis.
Nov 29 03:25:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:17Z|00604|binding|INFO|c5532a42-8b51-4dea-bf3e-e272409f89f4: Claiming fa:16:3e:ae:bd:45 10.100.0.10
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.546 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bd:45 10.100.0.10'], port_security=['fa:16:3e:ae:bd:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '346e849d-fa61-4451-b34c-d6165fea3aa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '5', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532a42-8b51-4dea-bf3e-e272409f89f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.546 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.547 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.548 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532a42-8b51-4dea-bf3e-e272409f89f4 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.550 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:25:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:17Z|00605|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 ovn-installed in OVS
Nov 29 03:25:17 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:17Z|00606|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 up in Southbound
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.569 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[91f83719-a5f5-49a6-b92c-2a858f2e3383]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.570 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:17 np0005539550 systemd-udevd[343450]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:17 np0005539550 systemd-machined[216673]: New machine qemu-72-instance-00000085.
Nov 29 03:25:17 np0005539550 NetworkManager[49039]: <info>  [1764404717.6070] device (tapc5532a42-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:17 np0005539550 NetworkManager[49039]: <info>  [1764404717.6076] device (tapc5532a42-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:17 np0005539550 systemd[1]: Started Virtual Machine qemu-72-instance-00000085.
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.615 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[59406216-150e-403e-8c4c-2fcabb33af17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.619 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5c485486-957a-4f64-9c3d-fcf920d1f17a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.652 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7d1cf58e-16fe-4498-9a96-e54971c983de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.679 257641 DEBUG oslo_concurrency.processutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.682 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b5310317-233e-48c3-9fa6-a7518e88a480]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 17116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343458, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
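
[Editor's note] The oversized privsep reply above (rejoined here; the capture had split the record across two lines) is an RTM_NEWLINK netlink message serialized into the log: a dict whose 'attrs' member is a list of [ATTR_NAME, value] pairs, with nested messages following the same shape. A tiny sketch of digging the useful fields out of such a structure, assuming the reply is already loaded as Python data (the literal below is abridged from the logged message):

# Netlink attrs arrive as [name, value] pairs; flatten them into a dict.
link = {
    "flags": 69699,
    "attrs": [
        ["IFLA_IFNAME", "tap5da19f7d-31"],
        ["IFLA_MTU", 1500],
        ["IFLA_OPERSTATE", "UP"],
        ["IFLA_ADDRESS", "fa:16:3e:18:8e:20"],
    ],
}

attrs = {name: value for name, value in link["attrs"]}
print(attrs["IFLA_IFNAME"], attrs["IFLA_MTU"], attrs["IFLA_OPERSTATE"])
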
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]: {
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:    "0": [
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:        {
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "devices": [
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "/dev/loop3"
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            ],
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "lv_name": "ceph_lv0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "lv_size": "7511998464",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "name": "ceph_lv0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "tags": {
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.cluster_name": "ceph",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.crush_device_class": "",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.encrypted": "0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.osd_id": "0",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.type": "block",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:                "ceph.vdo": "0"
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            },
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "type": "block",
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:            "vg_name": "ceph_vg0"
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:        }
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]:    ]
Nov 29 03:25:17 np0005539550 elastic_volhard[343387]: }
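
[Editor's note] The JSON printed by the one-shot elastic_volhard container is ceph-volume lvm list output: a map of OSD id to its logical volumes, with the OSD/cluster bindings duplicated in the lv_tags string and in the parsed "tags" object. A short sketch that reduces it to an osd -> device summary (the JSON literal below is abridged from the log):

import json

CEPH_VOLUME_JSON = """
{
  "0": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_size": "7511998464",
      "tags": {
        "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "ceph.osd_id": "0",
        "ceph.type": "block"
      },
      "type": "block"
    }
  ]
}
"""

for osd_id, lvs in json.loads(CEPH_VOLUME_JSON).items():
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['type']} on {lv['lv_path']} "
              f"({size_gib:.1f} GiB, devices={','.join(lv['devices'])})")
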
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.707 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd3170c-e888-4087-a8a7-a370fd208511]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772805, 'tstamp': 772805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343463, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772807, 'tstamp': 772807}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343463, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.709 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:17 np0005539550 nova_compute[257631]: 2025-11-29 08:25:17.711 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.711 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.711 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.712 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:17.712 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
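
[Editor's note] The three idempotent transactions above (DelPortCommand with if_exists, AddPortCommand with may_exist, DbSetCommand on external_ids) wire tap5da19f7d-30 into br-int and pin its iface-id; the "Transaction caused no change" replies mean the port was already in place. The agent drives this through ovsdbapp, but the equivalent steps with the ovs-vsctl CLI look roughly like this sketch:

import subprocess

PORT = "tap5da19f7d-30"
IFACE_ID = "d4f0104e-3913-4399-9086-37cf4d16e7c7"

# --if-exists / --may-exist mirror the if_exists/may_exist flags in the
# logged commands, so re-running the sequence is a no-op, just like the
# "caused no change" transactions above.
for cmd in (
    ["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT],
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT],
    ["ovs-vsctl", "set", "Interface", PORT,
     f"external_ids:iface-id={IFACE_ID}"],
):
    subprocess.run(cmd, check=True)
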
Nov 29 03:25:17 np0005539550 systemd[1]: libpod-cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc.scope: Deactivated successfully.
Nov 29 03:25:17 np0005539550 podman[343352]: 2025-11-29 08:25:17.722062307 +0000 UTC m=+1.009042199 container died cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:25:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5832b855f558f06fe1b17b8ba2a05257821e0d2b031e93ffadbcd50e02807d2b-merged.mount: Deactivated successfully.
Nov 29 03:25:17 np0005539550 podman[343352]: 2025-11-29 08:25:17.782434026 +0000 UTC m=+1.069413918 container remove cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_volhard, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:25:17 np0005539550 systemd[1]: libpod-conmon-cfab62037b5ea87697197369d4bd587f539a4239206311e937f58a66271590fc.scope: Deactivated successfully.
Nov 29 03:25:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 467 KiB/s rd, 4.7 MiB/s wr, 188 op/s
Nov 29 03:25:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:17.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.073 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3026784456' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.166 257641 DEBUG oslo_concurrency.processutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.173 257641 DEBUG nova.compute.provider_tree [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.187 257641 DEBUG nova.scheduler.client.report [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
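
[Editor's note] The inventory dict in the line above is what the resource tracker reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7168 MB of RAM and about 17 GB of disk. A quick check with the numbers copied from the log:

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
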
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.208 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.236 257641 INFO nova.scheduler.client.report [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Deleted allocations for instance d17ba263-68c7-4428-9d64-9a809e93a457#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.316 257641 DEBUG oslo_concurrency.lockutils [None req-4840d8a0-574f-4ef0-97ac-5c4af8ededfc 1552f15deb524705a9456cbe9b54c429 0bace34c102e4d56b089fd695d324f10 - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.426 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 346e849d-fa61-4451-b34c-d6165fea3aa4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.427 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404718.4263167, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.427 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.430 257641 DEBUG nova.compute.manager [None req-2c1c2e6d-d372-4874-bb9e-8f9479684e49 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:18 np0005539550 podman[343697]: 2025-11-29 08:25:18.437061868 +0000 UTC m=+0.042423129 container create a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.440 257641 DEBUG nova.compute.manager [req-3bcb5e71-8ada-462e-903d-5d840bf25c03 req-3ca4f69d-059a-4a7a-be1b-afd8884d980b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.440 257641 DEBUG oslo_concurrency.lockutils [req-3bcb5e71-8ada-462e-903d-5d840bf25c03 req-3ca4f69d-059a-4a7a-be1b-afd8884d980b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.441 257641 DEBUG oslo_concurrency.lockutils [req-3bcb5e71-8ada-462e-903d-5d840bf25c03 req-3ca4f69d-059a-4a7a-be1b-afd8884d980b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.441 257641 DEBUG oslo_concurrency.lockutils [req-3bcb5e71-8ada-462e-903d-5d840bf25c03 req-3ca4f69d-059a-4a7a-be1b-afd8884d980b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d17ba263-68c7-4428-9d64-9a809e93a457-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.441 257641 DEBUG nova.compute.manager [req-3bcb5e71-8ada-462e-903d-5d840bf25c03 req-3ca4f69d-059a-4a7a-be1b-afd8884d980b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] No waiting events found dispatching network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.441 257641 WARNING nova.compute.manager [req-3bcb5e71-8ada-462e-903d-5d840bf25c03 req-3ca4f69d-059a-4a7a-be1b-afd8884d980b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received unexpected event network-vif-plugged-72d454e3-eb73-4ea1-94dd-f13b08f74cda for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.441 257641 DEBUG nova.compute.manager [req-3bcb5e71-8ada-462e-903d-5d840bf25c03 req-3ca4f69d-059a-4a7a-be1b-afd8884d980b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Received event network-vif-deleted-72d454e3-eb73-4ea1-94dd-f13b08f74cda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.469 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.471 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:18 np0005539550 systemd[1]: Started libpod-conmon-a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de.scope.
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.510 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
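
[Editor's note] The "Skip" above is the guard in the power-state sync path: while a task (here rescuing) is in flight, lifecycle events such as Resumed are ignored rather than letting a transient event fight the ongoing operation. Sketched in miniature (a simplification of the decision, not Nova's implementation):

def sync_power_state(db_power_state, vm_power_state, task_state):
    """Decide whether to reconcile power state after a lifecycle event."""
    if task_state is not None:
        # An operation such as 'rescuing' owns the instance right now;
        # reacting to the event could race with it, so do nothing.
        return f"skip: pending task {task_state}"
    if db_power_state != vm_power_state:
        return f"update DB to {vm_power_state}"
    return "in sync"

print(sync_power_state(1, 1, "rescuing"))  # skip: pending task rescuing
print(sync_power_state(1, 4, None))        # update DB to 4
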
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.511 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404718.4287915, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.511 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:25:18 np0005539550 podman[343697]: 2025-11-29 08:25:18.417364496 +0000 UTC m=+0.022725777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:18 np0005539550 podman[343697]: 2025-11-29 08:25:18.531531036 +0000 UTC m=+0.136892307 container init a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Nov 29 03:25:18 np0005539550 podman[343697]: 2025-11-29 08:25:18.539961438 +0000 UTC m=+0.145322699 container start a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mestorf, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:25:18 np0005539550 podman[343697]: 2025-11-29 08:25:18.544992579 +0000 UTC m=+0.150353860 container attach a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:25:18 np0005539550 systemd[1]: libpod-a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de.scope: Deactivated successfully.
Nov 29 03:25:18 np0005539550 beautiful_mestorf[343713]: 167 167
Nov 29 03:25:18 np0005539550 conmon[343713]: conmon a0abe47d4ae60c31eead <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de.scope/container/memory.events
Nov 29 03:25:18 np0005539550 podman[343697]: 2025-11-29 08:25:18.548638976 +0000 UTC m=+0.154000247 container died a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mestorf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.561 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.571 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:18 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4cbb3a95fa9d32a61611f9a6817ad6d67955d365fcd24cd155464e1a6ebb70d1-merged.mount: Deactivated successfully.
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.575 257641 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.575 257641 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.575 257641 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.576 257641 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.576 257641 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.576 257641 WARNING nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.577 257641 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.577 257641 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.577 257641 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.577 257641 DEBUG oslo_concurrency.lockutils [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.578 257641 DEBUG nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:18 np0005539550 nova_compute[257631]: 2025-11-29 08:25:18.578 257641 WARNING nova.compute.manager [req-e0159feb-2e24-4884-a160-97c28837849f req-0b963106-1503-47bd-a8d7-b6637a7ed0db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:25:18 np0005539550 podman[343697]: 2025-11-29 08:25:18.58959487 +0000 UTC m=+0.194956131 container remove a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_mestorf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:25:18 np0005539550 systemd[1]: libpod-conmon-a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de.scope: Deactivated successfully.
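The podman lines above trace one short-lived cephadm helper container (beautiful_mestorf) through the full lifecycle podman reports: create, init, start, attach, died, remove, each bracketed by its own libpod/conmon systemd scope. The same sequence can be replayed from podman's event log; a sketch, assuming podman is on PATH and using the container ID from the log:

    import subprocess

    CID = "a0abe47d4ae60c31eead44244cd7407f614ebb8ddbcefe27d98a9491e776c7de"

    # Replay the create/init/start/attach/died/remove events journald showed
    # for this container; --stream=false returns buffered events and exits.
    subprocess.run(
        ["podman", "events", "--stream=false", "--since", "10m",
         "--filter", f"container={CID}", "--format", "json"],
        check=True,
    )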
Nov 29 03:25:18 np0005539550 podman[343738]: 2025-11-29 08:25:18.784808045 +0000 UTC m=+0.061206800 container create cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:25:18 np0005539550 systemd[1]: Started libpod-conmon-cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9.scope.
Nov 29 03:25:18 np0005539550 podman[343738]: 2025-11-29 08:25:18.764656711 +0000 UTC m=+0.041055476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:25:18 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:25:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34556ee1e2bf7f9ae41f8e72b3cd6318aeef1811dca3e3bddf5269a3e27e9228/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34556ee1e2bf7f9ae41f8e72b3cd6318aeef1811dca3e3bddf5269a3e27e9228/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34556ee1e2bf7f9ae41f8e72b3cd6318aeef1811dca3e3bddf5269a3e27e9228/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:18 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34556ee1e2bf7f9ae41f8e72b3cd6318aeef1811dca3e3bddf5269a3e27e9228/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
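The four xfs notices above are the kernel's y2038 reminder for these overlay mounts: without the XFS bigtime feature, inode timestamps top out at 0x7fffffff seconds after the epoch. A one-line check of what that limit means:

    from datetime import datetime, timezone

    # 0x7fffffff is the signed 32-bit epoch limit the kernel message cites.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00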
Nov 29 03:25:18 np0005539550 podman[343738]: 2025-11-29 08:25:18.881465445 +0000 UTC m=+0.157864220 container init cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:25:18 np0005539550 podman[343738]: 2025-11-29 08:25:18.888834032 +0000 UTC m=+0.165232777 container start cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:25:18 np0005539550 podman[343738]: 2025-11-29 08:25:18.892875669 +0000 UTC m=+0.169274444 container attach cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:25:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
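radosgw's beast frontend prints one access line per request in the fixed shape seen above. A hedged parsing sketch; the group names are labels chosen here for illustration, not official rgw field names:

    import re

    BEAST_RE = re.compile(
        r'beast: (?P<req>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:25:18.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # 192.168.122.102 200 0.000000000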
Nov 29 03:25:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:18.957 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:18.958 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:18.959 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:19 np0005539550 practical_gates[343754]: {
Nov 29 03:25:19 np0005539550 practical_gates[343754]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:25:19 np0005539550 practical_gates[343754]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:25:19 np0005539550 practical_gates[343754]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:25:19 np0005539550 practical_gates[343754]:        "osd_id": 0,
Nov 29 03:25:19 np0005539550 practical_gates[343754]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:25:19 np0005539550 practical_gates[343754]:        "type": "bluestore"
Nov 29 03:25:19 np0005539550 practical_gates[343754]:    }
Nov 29 03:25:19 np0005539550 practical_gates[343754]: }
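The practical_gates output above is one JSON document emitted line by line: a map from OSD fsid to the volume backing it (the shape matches ceph-volume's JSON listing output, though that provenance is an inference). Reassembled, it parses directly:

    import json

    payload = json.loads("""
    {
       "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
           "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
           "device": "/dev/mapper/ceph_vg0-ceph_lv0",
           "osd_id": 0,
           "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
           "type": "bluestore"
       }
    }
    """)

    for meta in payload.values():
        print(f"osd.{meta['osd_id']} ({meta['type']}) on {meta['device']}")
    # osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0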
Nov 29 03:25:19 np0005539550 systemd[1]: libpod-cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9.scope: Deactivated successfully.
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.749 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:19 np0005539550 podman[343738]: 2025-11-29 08:25:19.750807611 +0000 UTC m=+1.027206356 container died cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:25:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-34556ee1e2bf7f9ae41f8e72b3cd6318aeef1811dca3e3bddf5269a3e27e9228-merged.mount: Deactivated successfully.
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.787 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid 4702f4ee-458d-4146-b9b2-70ecf718176c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.787 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.790 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.790 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.791 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.792 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:19 np0005539550 podman[343738]: 2025-11-29 08:25:19.802486322 +0000 UTC m=+1.078885087 container remove cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_gates, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:25:19 np0005539550 systemd[1]: libpod-conmon-cb759296fb30e92ed40c8afcec8dd0f7c2116c6436d6870bc3487e39ab3068f9.scope: Deactivated successfully.
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.821 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:19 np0005539550 nova_compute[257631]: 2025-11-29 08:25:19.823 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
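The Acquiring/acquired/released triples throughout this log come from oslo.concurrency's lockutils wrapper around short critical sections; nova serializes per-instance event handling on a "<uuid>-events" lock. A minimal sketch of the same pattern (the names mirror the log, not nova's actual code):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "346e849d-fa61-4451-b34c-d6165fea3aa4"

    # Decorated functions emit the "Acquiring"/"acquired"/"released" DEBUG
    # lines seen above as they enter and leave the named in-process lock.
    @lockutils.synchronized(f"{INSTANCE_UUID}-events")
    def pop_event():
        pass  # pop or dispatch a pending instance event here

    pop_event()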
Nov 29 03:25:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:25:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:25:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6cf00ae1-ce55-4fc7-90a4-40dcbbc23aca does not exist
Nov 29 03:25:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e2a065b8-6643-474e-871b-913a763f4fb8 does not exist
Nov 29 03:25:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1b58d3ae-242e-440c-a864-6ed2813cbc51 does not exist
Nov 29 03:25:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 533 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 467 KiB/s rd, 4.7 MiB/s wr, 188 op/s
Nov 29 03:25:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:19.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:20 np0005539550 nova_compute[257631]: 2025-11-29 08:25:20.074 257641 INFO nova.compute.manager [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Unrescuing#033[00m
Nov 29 03:25:20 np0005539550 nova_compute[257631]: 2025-11-29 08:25:20.075 257641 DEBUG oslo_concurrency.lockutils [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:20 np0005539550 nova_compute[257631]: 2025-11-29 08:25:20.075 257641 DEBUG oslo_concurrency.lockutils [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquired lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:20 np0005539550 nova_compute[257631]: 2025-11-29 08:25:20.075 257641 DEBUG nova.network.neutron [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010774539654289969 of space, bias 1.0, pg target 3.232361896286991 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016194252512562814 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1526529130025125 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:25:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
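Each pg_autoscaler line above reports a pool's share of raw capacity, its bias, the resulting fractional pg target, and the quantized value. A hedged reconstruction of that last step, assuming the mgr module's behavior of rounding to the nearest power of two and flooring at the pool's pg_num_min (32 by default, which is why a target of 3.23 still lands on 32; '.mgr' and 'cephfs.cephfs.meta' evidently carry smaller explicit minimums):

    def nearest_power_of_two(n: float) -> int:
        v = max(1, round(n))
        low = 1 << (v.bit_length() - 1)   # power of two at or below v
        high = low << 1                   # power of two above v
        return low if v - low < high - v else high

    def quantize(pg_target: float, pg_num_min: int = 32) -> int:
        # Floor at the pool's pg_num_min; 32 unless the pool overrides it.
        return max(pg_num_min, nearest_power_of_two(pg_target))

    print(quantize(3.232361896286991))                   # 32  ('vms')
    print(quantize(0.006161449609156895, pg_num_min=1))  # 1   ('.mgr')
    print(quantize(0.001721570817048204, pg_num_min=16)) # 16  ('cephfs.cephfs.meta')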
Nov 29 03:25:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:25:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:20.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.046 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.636 257641 DEBUG nova.network.neutron [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updating instance_info_cache with network_info: [{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
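The cache-update line above embeds the whole network_info document for port c5532a42 as JSON-like Python. A small sketch pulling the fixed addresses out of that structure, trimmed to the fields it touches (the shape is copied from the log line):

    vif = {
        "id": "c5532a42-8b51-4dea-bf3e-e272409f89f4",
        "address": "fa:16:3e:ae:bd:45",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.10",
                                          "type": "fixed",
                                          "floating_ips": []}]}]},
    }

    fixed = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(fixed)  # ['10.100.0.10']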
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.650 257641 DEBUG oslo_concurrency.lockutils [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Releasing lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.651 257641 DEBUG nova.objects.instance [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'flavor' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:21 np0005539550 kernel: tapc5532a42-8b (unregistering): left promiscuous mode
Nov 29 03:25:21 np0005539550 NetworkManager[49039]: <info>  [1764404721.7605] device (tapc5532a42-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:25:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:21Z|00607|binding|INFO|Releasing lport c5532a42-8b51-4dea-bf3e-e272409f89f4 from this chassis (sb_readonly=0)
Nov 29 03:25:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:21Z|00608|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 down in Southbound
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.772 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:21Z|00609|binding|INFO|Removing iface tapc5532a42-8b ovn-installed in OVS
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.792 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bd:45 10.100.0.10'], port_security=['fa:16:3e:ae:bd:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '346e849d-fa61-4451-b34c-d6165fea3aa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532a42-8b51-4dea-bf3e-e272409f89f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.793 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532a42-8b51-4dea-bf3e-e272409f89f4 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.794 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:25:21 np0005539550 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.809 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a353a09d-0354-44fb-9dea-be0945f4babe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:21 np0005539550 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000085.scope: Consumed 3.978s CPU time.
Nov 29 03:25:21 np0005539550 systemd-machined[216673]: Machine qemu-72-instance-00000085 terminated.
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.843 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[50e027b6-1c47-41ae-a795-d798bbd351d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.846 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a118aebf-cbb0-4d3d-ac20-405b41294ff9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.870 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c87f5b06-2ca4-4e6e-8ce2-3bbeadad8496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.889 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0c3f1fe-1b80-4691-ab3e-f56363d71071]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 17116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343901, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.905 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e5ddc089-0d89-49ce-be15-38601c880600]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772805, 'tstamp': 772805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343902, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772807, 'tstamp': 772807}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343902, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
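The two privsep replies above carry pyroute2-style netlink messages whose attributes arrive as [name, value] pairs under 'attrs'. A small helper in the spirit of pyroute2's get_attr(), applied to the RTM_NEWADDR payload that carries the metadata address on tap5da19f7d-31 (the sample dict is trimmed from the log):

    def get_attr(msg: dict, name: str):
        # Attributes are [name, value] pairs; return the first match.
        for attr_name, value in msg.get("attrs", []):
            if attr_name == name:
                return value
        return None

    newaddr = {"event": "RTM_NEWADDR", "prefixlen": 32,
               "attrs": [["IFA_ADDRESS", "169.254.169.254"],
                         ["IFA_LABEL", "tap5da19f7d-31"]]}
    print(get_attr(newaddr, "IFA_ADDRESS"))  # 169.254.169.254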
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.906 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.908 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.913 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.913 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.913 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.914 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:21.914 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.924 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.936 257641 INFO nova.virt.libvirt.driver [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance destroyed successfully.#033[00m
Nov 29 03:25:21 np0005539550 nova_compute[257631]: 2025-11-29 08:25:21.936 257641 DEBUG nova.objects.instance [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'numa_topology' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 397 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.0 MiB/s wr, 288 op/s
Nov 29 03:25:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:21.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:22 np0005539550 kernel: tapc5532a42-8b: entered promiscuous mode
Nov 29 03:25:22 np0005539550 NetworkManager[49039]: <info>  [1764404722.0353] manager: (tapc5532a42-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/273)
Nov 29 03:25:22 np0005539550 systemd-udevd[343893]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:22Z|00610|binding|INFO|Claiming lport c5532a42-8b51-4dea-bf3e-e272409f89f4 for this chassis.
Nov 29 03:25:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:22Z|00611|binding|INFO|c5532a42-8b51-4dea-bf3e-e272409f89f4: Claiming fa:16:3e:ae:bd:45 10.100.0.10
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.036 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539550 NetworkManager[49039]: <info>  [1764404722.0475] device (tapc5532a42-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:22 np0005539550 NetworkManager[49039]: <info>  [1764404722.0481] device (tapc5532a42-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.047 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bd:45 10.100.0.10'], port_security=['fa:16:3e:ae:bd:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '346e849d-fa61-4451-b34c-d6165fea3aa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '6', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532a42-8b51-4dea-bf3e-e272409f89f4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:22Z|00612|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.051 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532a42-8b51-4dea-bf3e-e272409f89f4 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 bound to our chassis#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.057 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.057 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.065 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539550 systemd-machined[216673]: New machine qemu-73-instance-00000085.
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.077 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b5996883-6263-43a9-bcad-356c58ae8cae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:22 np0005539550 systemd[1]: Started Virtual Machine qemu-73-instance-00000085.
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.110 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6e3e2ca0-5cb9-4867-b657-d0f10b0c4e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.113 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0083c67e-6620-4e54-873c-07c31b6cf10f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.147 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7fb0204d-882f-40d7-923f-293606ab7480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.163 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8d88304b-f4be-43bb-aee6-21ae2fb4a067]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 17116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343938, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.180 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c893477c-adba-45ba-8c9e-35c2e287b2f8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772805, 'tstamp': 772805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343939, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772807, 'tstamp': 772807}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343939, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
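
The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace: an RTM_NEWLINK for the veth leg tap5da19f7d-31 (carrier up, MTU 1500), then RTM_NEWADDR entries for the metadata address 169.254.169.254/32 and the subnet address 10.100.0.2/28. A minimal sketch of taking the same dump directly with pyroute2 (namespace name copied from the log; this needs root, which is exactly why the agent routes it through the privsep daemon):

    # Sketch: dump link and address state inside the OVN metadata namespace.
    # Assumes pyroute2 is installed and the namespace from the log exists.
    from pyroute2 import NetNS

    NS = 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445'  # from the log

    with NetNS(NS) as ns:
        for link in ns.get_links():          # RTM_NEWLINK, as in the first reply
            print(link['index'],
                  link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
            # RTM_NEWADDR entries for this interface, as in the second reply.
            for addr in ns.get_addr(index=link['index']):
                print('   ', addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
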
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.182 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.183 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.279 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.280 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.281 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.282 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:22.282 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
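
The three ovsdbapp transactions above are the metadata agent re-plugging tap5da19f7d-30 from br-ex into br-int and stamping its Interface row with the Neutron port id; "Transaction caused no change" means the IDL diffed the requested state against the database and found a no-op. A rough equivalent issued directly through ovsdbapp (socket path assumed; port, bridge, and iface-id values are the ones in the log):

    # Sketch: replay the agent's OVS transactions with ovsdbapp.
    # Assumes a local ovsdb-server at the usual unix socket.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap5da19f7d-30', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap5da19f7d-30', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap5da19f7d-30',
            ('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'})))
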
Nov 29 03:25:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:22Z|00613|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:25:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:22Z|00614|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 ovn-installed in OVS
Nov 29 03:25:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:22Z|00615|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 up in Southbound
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.291 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.813 257641 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.814 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.814 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.814 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.815 257641 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.815 257641 WARNING nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.815 257641 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.815 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.816 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.816 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.816 257641 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.817 257641 WARNING nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state rescued and task_state unrescuing.#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.817 257641 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.817 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.817 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.818 257641 DEBUG oslo_concurrency.lockutils [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.818 257641 DEBUG nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.818 257641 WARNING nova.compute.manager [req-54a48350-143b-4239-91c2-14a56cb31e9b req-f27e62a8-71a8-4fe1-b072-2814ffe3ba09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state rescued and task_state unrescuing.#033[00m
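
The network-vif-unplugged/plugged traffic above arrives from Neutron keyed by an event name carrying the port UUID as a tag (network-vif-plugged-c5532a42-…). "No waiting events found" followed by the WARNING means nothing had registered a waiter for that key while the instance sat in vm_state rescued / task_state unrescuing, so each event is popped, found unclaimed, and dropped. A toy sketch of that pop-or-warn pattern (illustrative only, not Nova's actual code):

    # Toy sketch of event dispatch keyed on "<name>-<tag>", showing why the
    # log prints "No waiting events found" plus a WARNING.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}   # {instance_uuid: {event_key: threading.Event}}

        def prepare(self, instance, key):
            with self._lock:
                self._waiters.setdefault(instance, {})[key] = threading.Event()

        def pop(self, instance, key):
            with self._lock:     # the per-instance "...-events" lock in the log
                return self._waiters.get(instance, {}).pop(key, None)

    events = InstanceEvents()
    key = 'network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4'
    waiter = events.pop('346e849d-fa61-4451-b34c-d6165fea3aa4', key)
    if waiter is None:
        print(f'Received unexpected event {key}')  # nobody was waiting
    else:
        waiter.set()
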
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.857 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 346e849d-fa61-4451-b34c-d6165fea3aa4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.857 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404722.85708, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.858 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.884 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.887 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:22.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
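
The radosgw "beast" lines are access-log entries: request pointer, client IP, user (anonymous), timestamp, request line, HTTP status, body bytes, and latency. The anonymous "HEAD / HTTP/1.0" probes alternating every couple of seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks. A hedged parser for exactly the line shape observed here (other radosgw versions may add fields):

    # Sketch: parse the beast access-log lines seen above.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:25:22.902 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000024s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('request'), m.group('status'), m.group('latency'))
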
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.908 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.908 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404722.8577816, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.908 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Started (Lifecycle Event)#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.924 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.927 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:25:22 np0005539550 nova_compute[257631]: 2025-11-29 08:25:22.951 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:25:23 np0005539550 nova_compute[257631]: 2025-11-29 08:25:23.075 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:23 np0005539550 nova_compute[257631]: 2025-11-29 08:25:23.243 257641 DEBUG nova.compute.manager [None req-6986097e-3307-443e-9b16-0a010043ff17 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 289 op/s
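
The ceph-mgr pgmap digests summarize cluster health each tick: all 305 placement groups active+clean, ~375 MiB of logical data against 1.1 GiB raw used (consistent with 3x replication), 20 GiB of 21 GiB free, plus client read/write rates. A small sketch pulling the headline numbers out of one of these lines:

    # Sketch: extract the headline figures from a pgmap digest line.
    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
        r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail')

    line = ('pgmap v2494: 305 pgs: 305 active+clean; 375 MiB data, '
            '1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, '
            '2.5 MiB/s wr, 289 op/s')
    print(PGMAP.search(line).groupdict())
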
Nov 29 03:25:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:23.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:24 np0005539550 nova_compute[257631]: 2025-11-29 08:25:24.891 257641 DEBUG nova.compute.manager [req-2f7470c0-0a27-4419-af8f-2f9cbebafc7c req-fe41401d-17f5-44e7-a684-a2bae6be104d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:24 np0005539550 nova_compute[257631]: 2025-11-29 08:25:24.892 257641 DEBUG oslo_concurrency.lockutils [req-2f7470c0-0a27-4419-af8f-2f9cbebafc7c req-fe41401d-17f5-44e7-a684-a2bae6be104d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:24 np0005539550 nova_compute[257631]: 2025-11-29 08:25:24.892 257641 DEBUG oslo_concurrency.lockutils [req-2f7470c0-0a27-4419-af8f-2f9cbebafc7c req-fe41401d-17f5-44e7-a684-a2bae6be104d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:24 np0005539550 nova_compute[257631]: 2025-11-29 08:25:24.892 257641 DEBUG oslo_concurrency.lockutils [req-2f7470c0-0a27-4419-af8f-2f9cbebafc7c req-fe41401d-17f5-44e7-a684-a2bae6be104d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:24 np0005539550 nova_compute[257631]: 2025-11-29 08:25:24.892 257641 DEBUG nova.compute.manager [req-2f7470c0-0a27-4419-af8f-2f9cbebafc7c req-fe41401d-17f5-44e7-a684-a2bae6be104d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:25:24 np0005539550 nova_compute[257631]: 2025-11-29 08:25:24.893 257641 WARNING nova.compute.manager [req-2f7470c0-0a27-4419-af8f-2f9cbebafc7c req-fe41401d-17f5-44e7-a684-a2bae6be104d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:25:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:24.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 341 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 288 op/s
Nov 29 03:25:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:25.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:26 np0005539550 nova_compute[257631]: 2025-11-29 08:25:26.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 341 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 423 KiB/s wr, 308 op/s
Nov 29 03:25:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:27.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:28 np0005539550 nova_compute[257631]: 2025-11-29 08:25:28.078 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 341 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 268 op/s
Nov 29 03:25:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:29.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:31 np0005539550 nova_compute[257631]: 2025-11-29 08:25:31.011 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404716.0102992, d17ba263-68c7-4428-9d64-9a809e93a457 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:25:31 np0005539550 nova_compute[257631]: 2025-11-29 08:25:31.012 257641 INFO nova.compute.manager [-] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:25:31 np0005539550 nova_compute[257631]: 2025-11-29 08:25:31.043 257641 DEBUG nova.compute.manager [None req-14130e08-8e36-4a50-b9b3-e7c6c224f0c4 - - - - - -] [instance: d17ba263-68c7-4428-9d64-9a809e93a457] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:25:31 np0005539550 nova_compute[257631]: 2025-11-29 08:25:31.050 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:31 np0005539550 nova_compute[257631]: 2025-11-29 08:25:31.963 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:31 np0005539550 nova_compute[257631]: 2025-11-29 08:25:31.964 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
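
The run_periodic_tasks lines show oslo.service walking ComputeManager's decorated periodic methods on its timer: _poll_rescued_instances and _poll_unconfirmed_resizes here, then _heal_instance_info_cache and update_available_resource below. The decorator pattern in miniature (illustrative manager class, not Nova's):

    # Minimal oslo.service periodic-task sketch.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=10)
        def _poll_rescued_instances(self, context):
            print('polling rescued instances')

    mgr = Manager(CONF)
    # A service loop calls this on an interval; each decorated task fires
    # only when its own spacing has elapsed, producing the lines above.
    mgr.run_periodic_tasks(context=None)
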
Nov 29 03:25:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 346 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 857 KiB/s wr, 276 op/s
Nov 29 03:25:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:31.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:32.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:33 np0005539550 nova_compute[257631]: 2025-11-29 08:25:33.100 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:33 np0005539550 nova_compute[257631]: 2025-11-29 08:25:33.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:33 np0005539550 nova_compute[257631]: 2025-11-29 08:25:33.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:25:33 np0005539550 nova_compute[257631]: 2025-11-29 08:25:33.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:25:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.0 MiB/s wr, 189 op/s
Nov 29 03:25:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:33.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:34 np0005539550 nova_compute[257631]: 2025-11-29 08:25:34.181 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:34 np0005539550 nova_compute[257631]: 2025-11-29 08:25:34.182 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:34 np0005539550 nova_compute[257631]: 2025-11-29 08:25:34.182 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:25:34 np0005539550 nova_compute[257631]: 2025-11-29 08:25:34.182 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:34 np0005539550 podman[344011]: 2025-11-29 08:25:34.333815191 +0000 UTC m=+0.069929399 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Nov 29 03:25:34 np0005539550 podman[344010]: 2025-11-29 08:25:34.338535214 +0000 UTC m=+0.075155785 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:25:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.731 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updating instance_info_cache with network_info: [{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
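
The info-cache heal writes back a network_info list like the one above: per-VIF dicts carrying the port id, MAC, the network with its subnets and fixed IPs, and the binding details (type ovs, bridge br-int, bound by the ovn driver). A small sketch of walking that structure for addresses (the dict literal is abbreviated from the logged entry):

    # Sketch: extract devname, MAC, and fixed IPs from a network_info entry
    # shaped like the one logged above (abbreviated).
    network_info = [{
        'id': '0d1cf0d1-b379-4a62-8413-831aa8ff906b',
        'address': 'fa:16:3e:8e:e3:35',
        'network': {'subnets': [{
            'cidr': '10.100.0.0/28',
            'ips': [{'address': '10.100.0.5', 'type': 'fixed'}]}]},
        'devname': 'tap0d1cf0d1-b3',
    }]

    for vif in network_info:
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'], ips)
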
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.750 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.750 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.751 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.751 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.776 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.777 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.777 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
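
The Acquiring/acquired/"released" triples come from oslo.concurrency's lockutils, which logs wait and hold times around every named internal lock: "compute_resources" here, the per-instance "…-events" and "refresh_cache-…" locks earlier. Equivalent usage in application code:

    # Sketch: named internal locks as used throughout these logs.
    from oslo_concurrency import lockutils

    # Context-manager form: produces the same acquire/release debug lines.
    with lockutils.lock('compute_resources'):
        pass  # critical section

    # Decorator form, common in Nova and Neutron code paths.
    @lockutils.synchronized('compute_resources')
    def update_resources():
        pass
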
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.777 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:25:35 np0005539550 nova_compute[257631]: 2025-11-29 08:25:35.778 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:35Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:bd:45 10.100.0.10
Nov 29 03:25:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:35Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:bd:45 10.100.0.10
Nov 29 03:25:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 409 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.7 MiB/s wr, 194 op/s
Nov 29 03:25:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.052 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3625592929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.257 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
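
update_available_resource shells out to ceph df (audited by the mon just above as client.openstack) to size the RBD-backed disk pool, with oslo.concurrency's processutils wrapping the subprocess and logging the command and its ~0.48 s runtime. The same probe, directly (JSON key names as emitted by recent Ceph releases; treat them as an assumption):

    # Sketch: the storage-stats probe the resource tracker runs above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
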
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.372 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.373 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.378 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.378 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.587 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.588 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3950MB free_disk=20.852951049804688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.588 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.589 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.720 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 4702f4ee-458d-4146-b9b2-70ecf718176c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.720 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 346e849d-fa61-4451-b34c-d6165fea3aa4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.720 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.720 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.791 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.833 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.833 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
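
The inventory pushed to placement determines schedulable capacity per resource class as (total - reserved) * allocation_ratio. For the values logged above that yields 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk; a one-liner to check the arithmetic:

    # Sketch: effective capacity implied by the inventory logged above.
    inv = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
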
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.858 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.879 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:25:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:36.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:36 np0005539550 nova_compute[257631]: 2025-11-29 08:25:36.953 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1396404046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:37 np0005539550 nova_compute[257631]: 2025-11-29 08:25:37.430 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:37 np0005539550 nova_compute[257631]: 2025-11-29 08:25:37.436 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:37 np0005539550 nova_compute[257631]: 2025-11-29 08:25:37.454 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:37 np0005539550 nova_compute[257631]: 2025-11-29 08:25:37.489 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:25:37 np0005539550 nova_compute[257631]: 2025-11-29 08:25:37.489 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:37 np0005539550 nova_compute[257631]: 2025-11-29 08:25:37.659 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:37 np0005539550 nova_compute[257631]: 2025-11-29 08:25:37.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.5 MiB/s wr, 221 op/s
Nov 29 03:25:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:37.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:38 np0005539550 nova_compute[257631]: 2025-11-29 08:25:38.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:38.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:39.639 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:39.640 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:25:39 np0005539550 nova_compute[257631]: 2025-11-29 08:25:39.640 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:39.641 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:39 np0005539550 nova_compute[257631]: 2025-11-29 08:25:39.915 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:39 np0005539550 nova_compute[257631]: 2025-11-29 08:25:39.915 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:39 np0005539550 nova_compute[257631]: 2025-11-29 08:25:39.932 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:25:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 436 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 839 KiB/s rd, 4.5 MiB/s wr, 151 op/s
Nov 29 03:25:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.017 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.017 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.022 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.023 257641 INFO nova.compute.claims [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.137 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:25:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589429380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.573 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.581 257641 DEBUG nova.compute.provider_tree [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.599 257641 DEBUG nova.scheduler.client.report [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.624 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.626 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.673 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.674 257641 DEBUG nova.network.neutron [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.702 257641 INFO nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.725 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.844 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.847 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.848 257641 INFO nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Creating image(s)#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.887 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.927 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:40.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.954 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.959 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.985 257641 DEBUG nova.policy [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.989 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:40 np0005539550 nova_compute[257631]: 2025-11-29 08:25:40.990 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.022 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
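
The two processutils lines above show nova inspecting the cached base image with qemu-img info under oslo.concurrency's prlimit wrapper, so a malformed image cannot exhaust the host (address space capped at 1 GiB, CPU time at 30 s, matching --as=1073741824 --cpu=30). A sketch of the same call through oslo.concurrency's Python API; the limits and image path are copied from the log, the surrounding script is illustrative:

    import json

    from oslo_concurrency import processutils

    # Cap the child at 1 GiB of address space and 30 s of CPU time,
    # mirroring the prlimit flags seen in the logged command.
    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                        cpu_time=30)

    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=limits)

    info = json.loads(out)
    print(info.get('format'), info.get('virtual-size'))
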
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.022 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.023 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.024 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.052 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.056 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.081 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.350 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.453 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
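
Because the image backend is rbd, the base image is pushed into the vms pool with rbd import and then grown to the flavor's 1 GiB root disk (the 1073741824 in the resize line; m1.nano has root_gb=1). A rough CLI-level equivalent of those two steps, with pool, image names, and cephx user copied from the log; note nova performs the resize through the librbd Python binding rather than the rbd CLI, so this is only an illustration of the effect:

    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    disk = '870400f0-dfee-46aa-85e0-ee30dae2ee74_disk'

    # Step 1: import the flat base image as a format-2 RBD image in the
    # 'vms' pool, authenticating as the 'openstack' cephx user.
    processutils.execute('rbd', 'import', '--pool', 'vms', base, disk,
                         '--image-format=2', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')

    # Step 2: grow the image to the 1 GiB root disk size.
    processutils.execute('rbd', 'resize', '--pool', 'vms', disk,
                         '--size', '1G', '--id', 'openstack',
                         '--conf', '/etc/ceph/ceph.conf')
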
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.595 257641 DEBUG nova.objects.instance [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid 870400f0-dfee-46aa-85e0-ee30dae2ee74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.610 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.611 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Ensure instance console log exists: /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.611 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.612 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.612 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:25:41 np0005539550 nova_compute[257631]: 2025-11-29 08:25:41.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:25:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 276 op/s
Nov 29 03:25:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:42.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:42 np0005539550 nova_compute[257631]: 2025-11-29 08:25:42.253 257641 DEBUG nova.network.neutron [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Successfully created port: 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:25:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:42.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.150 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.258 257641 DEBUG nova.network.neutron [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Successfully updated port: 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:25:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Nov 29 03:25:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.276 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.277 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.277 257641 DEBUG nova.network.neutron [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.388 257641 DEBUG nova.compute.manager [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-changed-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.389 257641 DEBUG nova.compute.manager [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Refreshing instance network info cache due to event network-changed-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.389 257641 DEBUG oslo_concurrency.lockutils [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:25:43 np0005539550 nova_compute[257631]: 2025-11-29 08:25:43.516 257641 DEBUG nova.network.neutron [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:25:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 481 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.9 MiB/s wr, 310 op/s
Nov 29 03:25:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:25:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:44.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:25:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Nov 29 03:25:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Nov 29 03:25:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.851 257641 DEBUG nova.network.neutron [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updating instance_info_cache with network_info: [{"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.867 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.868 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance network_info: |[{"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.868 257641 DEBUG oslo_concurrency.lockutils [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.869 257641 DEBUG nova.network.neutron [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Refreshing network info cache for port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.873 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Start _get_guest_xml network_info=[{"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.879 257641 WARNING nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.883 257641 DEBUG nova.virt.libvirt.host [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.884 257641 DEBUG nova.virt.libvirt.host [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.891 257641 DEBUG nova.virt.libvirt.host [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.892 257641 DEBUG nova.virt.libvirt.host [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.893 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.893 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.893 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.893 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.894 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.894 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.894 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.894 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.895 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.895 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.895 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.896 257641 DEBUG nova.virt.hardware [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:25:44 np0005539550 nova_compute[257631]: 2025-11-29 08:25:44.899 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:25:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:44.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:25:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Nov 29 03:25:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Nov 29 03:25:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Nov 29 03:25:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683994367' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.390 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.417 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.421 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:25:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2785467917' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.839 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
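
The two ceph mon dump calls above are how nova resolves the monitor quorum addresses that end up as the <host> elements of the rbd <disk> in the guest XML further down (host name="192.168.122.100" port="6789"). A sketch of parsing that output; the 'mons'/'public_addr' field names follow the mon dump JSON of recent Ceph releases and should be treated as an assumption:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute('ceph', 'mon', 'dump', '--format=json',
                                     '--id', 'openstack',
                                     '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    # public_addr looks like '192.168.122.100:6789/0'; drop the nonce.
    hosts = [m['public_addr'].split('/')[0] for m in mon_map['mons']]
    print(hosts)  # e.g. ['192.168.122.100:6789', ...]
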
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.841 257641 DEBUG nova.virt.libvirt.vif [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1030237697',display_name='tempest-TestNetworkAdvancedServerOps-server-1030237697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1030237697',id=136,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8hSICEdc8yh+RVv5/YT7HqxmkgTR1xm7P0ua/Xm6wYuxoPU/VIwfyVEtyXd/Lmc9z/b0hFgS+TiHem2uVcK1wX7nmwhbeKiJ77cXnXXanqoxFn/BxMiwo6s/prT9Qlfw==',key_name='tempest-TestNetworkAdvancedServerOps-802837668',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-pkfa0e3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:40Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=870400f0-dfee-46aa-85e0-ee30dae2ee74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.842 257641 DEBUG nova.network.os_vif_util [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.843 257641 DEBUG nova.network.os_vif_util [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:4a:3a,bridge_name='br-int',has_traffic_filtering=True,id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7,network=Network(38642321-47cd-438d-bcd2-dc522b9bd850),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e96d4f5-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.844 257641 DEBUG nova.objects.instance [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid 870400f0-dfee-46aa-85e0-ee30dae2ee74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.862 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <uuid>870400f0-dfee-46aa-85e0-ee30dae2ee74</uuid>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <name>instance-00000088</name>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1030237697</nova:name>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:25:44</nova:creationTime>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <nova:port uuid="8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <entry name="serial">870400f0-dfee-46aa-85e0-ee30dae2ee74</entry>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <entry name="uuid">870400f0-dfee-46aa-85e0-ee30dae2ee74</entry>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/870400f0-dfee-46aa-85e0-ee30dae2ee74_disk">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/870400f0-dfee-46aa-85e0-ee30dae2ee74_disk.config">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:4d:4a:3a"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <target dev="tap8e96d4f5-b3"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/console.log" append="off"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:25:45 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:25:45 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:25:45 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:25:45 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
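The domain XML logged above carries everything libvirt needs for this guest: an RBD-backed vda, the config-drive cdrom on sda, and one virtio NIC on tap8e96d4f5-b3. A short standard-library sketch for pulling those facts back out of a saved copy of the XML ("domain.xml" is a hypothetical path holding the <domain> text above):

```python
# Standard-library sketch: recover the storage/network facts from a
# saved copy of the domain XML above ("domain.xml" is a hypothetical path).
import xml.etree.ElementTree as ET

root = ET.fromstring(open("domain.xml").read())
for disk in root.findall("./devices/disk"):
    src, tgt = disk.find("source"), disk.find("target")
    hosts = [h.get("name") for h in disk.findall("./source/host")]
    print(tgt.get("dev"), tgt.get("bus"), src.get("protocol"), src.get("name"), hosts)
for iface in root.findall("./devices/interface"):
    print(iface.find("mac").get("address"), iface.find("target").get("dev"))
# expected here: vda/virtio and sda/sata both over rbd, plus tap8e96d4f5-b3
```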
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.863 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Preparing to wait for external event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.864 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.864 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.864 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
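The acquire/release pair above is oslo.concurrency's per-instance events lock being taken around registration of the network-vif-plugged waiter. The same pattern, as a sketch (lock name copied from the log; assumes oslo.concurrency is installed):

```python
# Sketch of the same lock pattern via oslo.concurrency (assumed
# installed); the lock name is copied from the entries above.
from oslo_concurrency import lockutils

with lockutils.lock("870400f0-dfee-46aa-85e0-ee30dae2ee74-events"):
    # critical section: look up or create the event to wait on
    pass
```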
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.865 257641 DEBUG nova.virt.libvirt.vif [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:25:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1030237697',display_name='tempest-TestNetworkAdvancedServerOps-server-1030237697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1030237697',id=136,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8hSICEdc8yh+RVv5/YT7HqxmkgTR1xm7P0ua/Xm6wYuxoPU/VIwfyVEtyXd/Lmc9z/b0hFgS+TiHem2uVcK1wX7nmwhbeKiJ77cXnXXanqoxFn/BxMiwo6s/prT9Qlfw==',key_name='tempest-TestNetworkAdvancedServerOps-802837668',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-pkfa0e3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:25:40Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=870400f0-dfee-46aa-85e0-ee30dae2ee74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.865 257641 DEBUG nova.network.os_vif_util [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.866 257641 DEBUG nova.network.os_vif_util [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:4a:3a,bridge_name='br-int',has_traffic_filtering=True,id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7,network=Network(38642321-47cd-438d-bcd2-dc522b9bd850),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e96d4f5-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.866 257641 DEBUG os_vif [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:4a:3a,bridge_name='br-int',has_traffic_filtering=True,id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7,network=Network(38642321-47cd-438d-bcd2-dc522b9bd850),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e96d4f5-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.867 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.867 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.868 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.870 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.870 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e96d4f5-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.871 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e96d4f5-b3, col_values=(('external_ids', {'iface-id': '8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:4a:3a', 'vm-uuid': '870400f0-dfee-46aa-85e0-ee30dae2ee74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.872 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539550 NetworkManager[49039]: <info>  [1764404745.8735] manager: (tap8e96d4f5-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.879 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.880 257641 INFO os_vif [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:4a:3a,bridge_name='br-int',has_traffic_filtering=True,id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7,network=Network(38642321-47cd-438d-bcd2-dc522b9bd850),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e96d4f5-b3')#033[00m
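The plug above was driven through ovsdbapp's native OVSDB IDL (AddBridgeCommand, then AddPortCommand plus DbSetCommand in one transaction). For manual reproduction, the equivalent ovs-vsctl calls look roughly like this sketch; values are copied from the logged transactions, and this is not how os-vif itself talks to OVSDB:

```python
# Equivalent ovs-vsctl calls for the two transactions above (sketch for
# manual reproduction; os-vif itself used the OVSDB IDL, as logged).
import subprocess

bridge, port = "br-int", "tap8e96d4f5-b3"
subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge,
                "--", "set", "Bridge", bridge, "datapath_type=system"],
               check=True)
subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port,
                "--", "set", "Interface", port,
                "external_ids:iface-id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7",
                "external_ids:iface-status=active",
                "external_ids:attached-mac=fa:16:3e:4d:4a:3a",
                "external_ids:vm-uuid=870400f0-dfee-46aa-85e0-ee30dae2ee74"],
               check=True)
```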
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.946 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.948 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.948 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:4d:4a:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.948 257641 INFO nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Using config drive#033[00m
Nov 29 03:25:45 np0005539550 nova_compute[257631]: 2025-11-29 08:25:45.969 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 5.4 MiB/s wr, 439 op/s
Nov 29 03:25:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:46.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.456 257641 DEBUG nova.network.neutron [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updated VIF entry in instance network info cache for port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.457 257641 DEBUG nova.network.neutron [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updating instance_info_cache with network_info: [{"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.469 257641 INFO nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Creating config drive at /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/disk.config#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.478 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsjo87y_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.531 257641 DEBUG oslo_concurrency.lockutils [req-c58aef5a-51f1-4c79-8b50-23c0b60c79ea req-5f375974-ad4e-408d-b16f-67250c4ccb50 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.644 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsjo87y_o" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.691 257641 DEBUG nova.storage.rbd_utils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.695 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/disk.config 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.863 257641 DEBUG oslo_concurrency.processutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/disk.config 870400f0-dfee-46aa-85e0-ee30dae2ee74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.864 257641 INFO nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Deleting local config drive /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/disk.config because it was imported into RBD.#033[00m
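The config-drive sequence above reduces to two shell steps: build an ISO9660 image with mkisofs, then import it into the vms pool so the sata cdrom defined in the domain XML can reach it over RBD. A sketch of the same two commands with argv copied from the logged entries (the /tmp source directory was a per-build tempdir):

```python
# The two shell steps behind the config-drive entries above, argv copied
# from the logged commands (the /tmp path was a per-build tempdir).
import subprocess

iso = "/var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74/disk.config"
subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                "-allow-multidot", "-l", "-publisher",
                "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpsjo87y_o"],
               check=True)
subprocess.run(["rbd", "import", "--pool", "vms", iso,
                "870400f0-dfee-46aa-85e0-ee30dae2ee74_disk.config",
                "--image-format=2", "--id", "openstack",
                "--conf", "/etc/ceph/ceph.conf"],
               check=True)
```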
Nov 29 03:25:46 np0005539550 kernel: tap8e96d4f5-b3: entered promiscuous mode
Nov 29 03:25:46 np0005539550 NetworkManager[49039]: <info>  [1764404746.9406] manager: (tap8e96d4f5-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Nov 29 03:25:46 np0005539550 nova_compute[257631]: 2025-11-29 08:25:46.941 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:46Z|00616|binding|INFO|Claiming lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for this chassis.
Nov 29 03:25:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:46Z|00617|binding|INFO|8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7: Claiming fa:16:3e:4d:4a:3a 10.100.0.14
Nov 29 03:25:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:46.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.955 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:4a:3a 10.100.0.14'], port_security=['fa:16:3e:4d:4a:3a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '870400f0-dfee-46aa-85e0-ee30dae2ee74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38642321-47cd-438d-bcd2-dc522b9bd850', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd47b7209-ced1-414f-bc97-b5859dfea11d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=743557d6-422c-40e3-9124-4df909a58562, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.956 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 in datapath 38642321-47cd-438d-bcd2-dc522b9bd850 bound to our chassis#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.958 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38642321-47cd-438d-bcd2-dc522b9bd850#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.968 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[490933aa-6492-4d47-bd84-2b284670da69]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.969 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38642321-41 in ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.971 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38642321-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.971 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[93c776f7-86f3-434d-8c4f-dd7bae1fce00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.972 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c25e33f-860b-4deb-bce2-a6070631d3da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:46.986 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[0020d18c-cb4a-4af1-9573-5e193e69634e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:46 np0005539550 systemd-machined[216673]: New machine qemu-74-instance-00000088.
Nov 29 03:25:47 np0005539550 systemd[1]: Started Virtual Machine qemu-74-instance-00000088.
Nov 29 03:25:47 np0005539550 systemd-udevd[344481]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.013 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63a928db-e69e-410f-b969-fc4cd6c59743]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 NetworkManager[49039]: <info>  [1764404747.0281] device (tap8e96d4f5-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:25:47 np0005539550 NetworkManager[49039]: <info>  [1764404747.0296] device (tap8e96d4f5-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.030 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:47Z|00618|binding|INFO|Setting lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 ovn-installed in OVS
Nov 29 03:25:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:47Z|00619|binding|INFO|Setting lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 up in Southbound
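Claim, ovn-installed, and up together complete the OVN side of the plug. A read-only way to verify the binding from the Southbound DB (sketch; assumes ovn-sbctl is available on the chassis):

```python
# Read-only check of the binding from the Southbound DB (sketch; assumes
# ovn-sbctl is present on the chassis).
import subprocess

print(subprocess.run(
    ["ovn-sbctl", "find", "Port_Binding",
     "logical_port=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7"],
    capture_output=True, text=True, check=True).stdout)
# expect: chassis set to this host's chassis record, up : [true]
```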
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.037 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.047 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ecded0c2-1042-4d79-8703-4bfbdb9f2c6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.054 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c6fcce-2ed7-4c79-9375-c172e9d08a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 systemd-udevd[344489]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:25:47 np0005539550 NetworkManager[49039]: <info>  [1764404747.0565] manager: (tap38642321-40): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.089 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[585e94f3-5bcc-4480-bf10-df51c9abaf84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.093 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8af09c7d-0981-49e8-8969-648d5348f819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 podman[344469]: 2025-11-29 08:25:47.11222136 +0000 UTC m=+0.113686310 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:47 np0005539550 NetworkManager[49039]: <info>  [1764404747.1155] device (tap38642321-40): carrier: link connected
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.123 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1a439834-7bad-4305-a357-fbfa96e97b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.139 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[881833f9-a441-41f4-beb4-cfc9326817a0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38642321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:f9:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779478, 'reachable_time': 34693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344530, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.154 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccc78d7-6969-4039-b528-91d039884cba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:f98b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 779478, 'tstamp': 779478}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344531, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.169 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c2934c3e-5253-467c-ba87-d664e0578f30]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38642321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:f9:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 177], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779478, 'reachable_time': 34693, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344532, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
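The two large privsep replies above are netlink RTM_NEWLINK dumps for the metadata veth, produced by pyroute2 inside the agent's privileged daemon. The same attributes can be read directly with pyroute2 (sketch; needs root, and the namespace and interface names are copied from the log):

```python
# Direct pyroute2 read of the same link attributes (sketch; needs root,
# names copied from the log above).
from pyroute2 import NetNS

with NetNS("ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850") as ns:
    idx = ns.link_lookup(ifname="tap38642321-41")[0]
    link = ns.get_links(idx)[0]
    print(link.get_attr("IFLA_ADDRESS"),    # fa:16:3e:12:f9:8b
          link.get_attr("IFLA_OPERSTATE"))  # UP
```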
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.198 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bf003e3e-ddfd-4751-9b15-78330cab6525]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.258 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[80c4988d-1240-48fa-8da3-2f677cecb80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.260 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38642321-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.260 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.260 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38642321-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.262 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:47 np0005539550 kernel: tap38642321-40: entered promiscuous mode
Nov 29 03:25:47 np0005539550 NetworkManager[49039]: <info>  [1764404747.2636] manager: (tap38642321-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.266 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.268 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38642321-40, col_values=(('external_ids', {'iface-id': '1f39aec2-e2f9-4abf-8561-ea0dffb6b44d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.268 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:47Z|00620|binding|INFO|Releasing lport 1f39aec2-e2f9-4abf-8561-ea0dffb6b44d from this chassis (sb_readonly=0)
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.290 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.291 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38642321-47cd-438d-bcd2-dc522b9bd850.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38642321-47cd-438d-bcd2-dc522b9bd850.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.293 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4531753a-6c42-4ff5-8832-23903a0a9c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.294 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-38642321-47cd-438d-bcd2-dc522b9bd850
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/38642321-47cd-438d-bcd2-dc522b9bd850.pid.haproxy
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 38642321-47cd-438d-bcd2-dc522b9bd850
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:25:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:25:47.294 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'env', 'PROCESS_TAG=haproxy-38642321-47cd-438d-bcd2-dc522b9bd850', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38642321-47cd-438d-bcd2-dc522b9bd850.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
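Two details tie this sequence together: the ENOENT on the .pid.haproxy file above is expected on first start (no proxy is running yet for this network), so the agent renders the haproxy config shown and then execs haproxy inside the ovnmeta- namespace. A rough Python sketch of the tolerant pidfile read, assuming a helper shaped like neutron's get_value_from_file (not its exact code):

    def get_value_from_file(path, converter=int):
        """Return the converted file contents, or None if the file is absent."""
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except OSError:      # e.g. [Errno 2] No such file or directory
            return None

    pid = get_value_from_file(
        '/var/lib/neutron/external/pids/'
        '38642321-47cd-438d-bcd2-dc522b9bd850.pid.haproxy')
    # pid is None here, so a fresh haproxy instance is spawned.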
Nov 29 03:25:47 np0005539550 podman[344564]: 2025-11-29 08:25:47.735328915 +0000 UTC m=+0.063594997 container create 0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:25:47 np0005539550 systemd[1]: Started libpod-conmon-0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a.scope.
Nov 29 03:25:47 np0005539550 podman[344564]: 2025-11-29 08:25:47.70678994 +0000 UTC m=+0.035056052 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:25:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:25:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29db0efb1e2c05f82f258b28d45895e6ad3f401333b64b19f631d839792de381/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:25:47 np0005539550 podman[344564]: 2025-11-29 08:25:47.842717923 +0000 UTC m=+0.170984105 container init 0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:25:47 np0005539550 podman[344564]: 2025-11-29 08:25:47.849839614 +0000 UTC m=+0.178105746 container start 0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:25:47 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [NOTICE]   (344584) : New worker (344586) forked
Nov 29 03:25:47 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [NOTICE]   (344584) : Loading success.
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.911 257641 DEBUG nova.compute.manager [req-30851162-d15a-4761-9298-fe6651430c27 req-fb80c638-20b0-4dc3-8691-5f09ba92c8bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.912 257641 DEBUG oslo_concurrency.lockutils [req-30851162-d15a-4761-9298-fe6651430c27 req-fb80c638-20b0-4dc3-8691-5f09ba92c8bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.912 257641 DEBUG oslo_concurrency.lockutils [req-30851162-d15a-4761-9298-fe6651430c27 req-fb80c638-20b0-4dc3-8691-5f09ba92c8bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.913 257641 DEBUG oslo_concurrency.lockutils [req-30851162-d15a-4761-9298-fe6651430c27 req-fb80c638-20b0-4dc3-8691-5f09ba92c8bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:47 np0005539550 nova_compute[257631]: 2025-11-29 08:25:47.913 257641 DEBUG nova.compute.manager [req-30851162-d15a-4761-9298-fe6651430c27 req-fb80c638-20b0-4dc3-8691-5f09ba92c8bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Processing event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
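The Acquiring/acquired/released triplets are oslo.concurrency's lockutils serializing access to the per-instance event list while the vif-plugged event is popped. A minimal sketch of the same pattern (lock name copied from the log, body illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('870400f0-dfee-46aa-85e0-ee30dae2ee74-events')
    def _pop_event():
        # critical section: look up and remove the waiter registered for
        # network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7
        pass

    _pop_event()   # lockutils emits the acquire/release DEBUG lines itself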
Nov 29 03:25:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 561 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 7.1 MiB/s wr, 221 op/s
Nov 29 03:25:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:48.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
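The recurring radosgw "beast:" access lines (these are haproxy-style health probes from the load balancers at .100/.102) follow a fixed shape: request pointer, client, user, timestamp, request line, status, bytes, latency. A small parser sketched against the lines shown here, not against a documented grammar:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<size>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:25:48.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # -> 192.168.122.100 200 0.000000000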
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.153 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.565 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404748.564918, 870400f0-dfee-46aa-85e0-ee30dae2ee74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.566 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] VM Started (Lifecycle Event)
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.569 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.572 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.576 257641 INFO nova.virt.libvirt.driver [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance spawned successfully.
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.576 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.605 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.609 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.609 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.610 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.610 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.611 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.611 257641 DEBUG nova.virt.libvirt.driver [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.615 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.645 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.645 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404748.5661347, 870400f0-dfee-46aa-85e0-ee30dae2ee74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.646 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] VM Paused (Lifecycle Event)
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.678 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.683 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404748.5713992, 870400f0-dfee-46aa-85e0-ee30dae2ee74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.684 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] VM Resumed (Lifecycle Event)
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.729 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.734 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
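In both sync messages the database still records power_state 0 while the hypervisor reports 1; the integers map to nova.compute.power_state constants (assuming the standard 0 = NOSTATE, 1 = RUNNING mapping), and the pending spawning task is why the sync is skipped rather than acted on:

    from nova.compute import power_state

    assert power_state.NOSTATE == 0   # what the DB row still holds
    assert power_state.RUNNING == 1   # what libvirt reports for the new guest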
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.779 257641 INFO nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Took 7.93 seconds to spawn the instance on the hypervisor.
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.780 257641 DEBUG nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.792 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.871 257641 INFO nova.compute.manager [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Took 8.88 seconds to build instance.
Nov 29 03:25:48 np0005539550 nova_compute[257631]: 2025-11-29 08:25:48.896 257641 DEBUG oslo_concurrency.lockutils [None req-ec54bd7f-92e2-49a5-8101-6031e4fdd420 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:48.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 561 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.6 MiB/s wr, 152 op/s
Nov 29 03:25:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:50.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:50 np0005539550 nova_compute[257631]: 2025-11-29 08:25:50.256 257641 DEBUG nova.compute.manager [req-5a4e01a2-1dd6-48b5-ac8b-b66e260f6c70 req-19a33e87-08f7-4f28-bd66-7044f3527c8e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:50 np0005539550 nova_compute[257631]: 2025-11-29 08:25:50.257 257641 DEBUG oslo_concurrency.lockutils [req-5a4e01a2-1dd6-48b5-ac8b-b66e260f6c70 req-19a33e87-08f7-4f28-bd66-7044f3527c8e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:25:50 np0005539550 nova_compute[257631]: 2025-11-29 08:25:50.257 257641 DEBUG oslo_concurrency.lockutils [req-5a4e01a2-1dd6-48b5-ac8b-b66e260f6c70 req-19a33e87-08f7-4f28-bd66-7044f3527c8e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:25:50 np0005539550 nova_compute[257631]: 2025-11-29 08:25:50.257 257641 DEBUG oslo_concurrency.lockutils [req-5a4e01a2-1dd6-48b5-ac8b-b66e260f6c70 req-19a33e87-08f7-4f28-bd66-7044f3527c8e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:25:50 np0005539550 nova_compute[257631]: 2025-11-29 08:25:50.257 257641 DEBUG nova.compute.manager [req-5a4e01a2-1dd6-48b5-ac8b-b66e260f6c70 req-19a33e87-08f7-4f28-bd66-7044f3527c8e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] No waiting events found dispatching network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:25:50 np0005539550 nova_compute[257631]: 2025-11-29 08:25:50.257 257641 WARNING nova.compute.manager [req-5a4e01a2-1dd6-48b5-ac8b-b66e260f6c70 req-19a33e87-08f7-4f28-bd66-7044f3527c8e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received unexpected event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for instance with vm_state active and task_state None.
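The WARNING is benign here: Neutron re-sent network-vif-plugged after the instance had already gone active, so no waiter was registered and the pop found nothing. An illustrative sketch of the registry semantics (not nova's code):

    import threading

    _waiters = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance, name):
        # called before the operation that will trigger the event
        _waiters[(instance, name)] = threading.Event()

    def pop_instance_event(instance, name):
        ev = _waiters.pop((instance, name), None)
        if ev is None:
            print(f'Received unexpected event {name} for instance {instance}')
        else:
            ev.set()   # wakes the thread blocked waiting for the event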
Nov 29 03:25:50 np0005539550 nova_compute[257631]: 2025-11-29 08:25:50.873 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:25:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:50.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:25:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 606 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 8.0 MiB/s wr, 277 op/s
Nov 29 03:25:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:52.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Nov 29 03:25:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Nov 29 03:25:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Nov 29 03:25:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:52.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:53 np0005539550 nova_compute[257631]: 2025-11-29 08:25:53.156 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 614 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.4 MiB/s wr, 270 op/s
Nov 29 03:25:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:54.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:54.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.440 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:55 np0005539550 NetworkManager[49039]: <info>  [1764404755.4432] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Nov 29 03:25:55 np0005539550 NetworkManager[49039]: <info>  [1764404755.4444] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.542 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:55Z|00621|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:25:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:25:55Z|00622|binding|INFO|Releasing lport 1f39aec2-e2f9-4abf-8561-ea0dffb6b44d from this chassis (sb_readonly=0)
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.564 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.804 257641 DEBUG nova.compute.manager [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-changed-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.804 257641 DEBUG nova.compute.manager [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Refreshing instance network info cache due to event network-changed-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.805 257641 DEBUG oslo_concurrency.lockutils [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.805 257641 DEBUG oslo_concurrency.lockutils [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.806 257641 DEBUG nova.network.neutron [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Refreshing network info cache for port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:25:55 np0005539550 nova_compute[257631]: 2025-11-29 08:25:55.875 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 637 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.2 MiB/s wr, 293 op/s
Nov 29 03:25:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:56.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 616 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 294 op/s
Nov 29 03:25:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:25:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:25:58.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:25:58 np0005539550 nova_compute[257631]: 2025-11-29 08:25:58.159 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:25:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:25:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:25:58 np0005539550 nova_compute[257631]: 2025-11-29 08:25:58.551 257641 DEBUG nova.network.neutron [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updated VIF entry in instance network info cache for port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:25:58 np0005539550 nova_compute[257631]: 2025-11-29 08:25:58.551 257641 DEBUG nova.network.neutron [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updating instance_info_cache with network_info: [{"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:25:58 np0005539550 nova_compute[257631]: 2025-11-29 08:25:58.592 257641 DEBUG oslo_concurrency.lockutils [req-9e6a2c24-5542-4e63-a6c8-0e4fd8488540 req-71fb7d75-beb0-4e10-bdc8-dbbfbe277f5d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
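The refreshed cache entry is a list of VIF dicts; the useful addressing facts sit several levels deep. A sketch of walking one entry (dict trimmed from the line above to just the fields used here):

    vif = {
        "id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7",
        "address": "fa:16:3e:4d:4a:3a",
        "network": {"subnets": [{"cidr": "10.100.0.0/28", "ips": [{
            "address": "10.100.0.14",
            "floating_ips": [{"address": "192.168.122.221"}]}]}]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fips = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], fips)
    # -> 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 10.100.0.14 ['192.168.122.221']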
Nov 29 03:25:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:25:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:25:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:25:58.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:25:59
Nov 29 03:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'volumes', 'backups', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 29 03:25:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:25:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 616 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 294 op/s
Nov 29 03:26:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:26:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:00.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:26:00 np0005539550 nova_compute[257631]: 2025-11-29 08:26:00.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:26:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:00.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:26:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 502 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.5 MiB/s wr, 339 op/s
Nov 29 03:26:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:02.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:02Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:4a:3a 10.100.0.14
Nov 29 03:26:02 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:02Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:4a:3a 10.100.0.14
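ovn-controller answers DHCP itself from the pinctrl thread, so the OFFER/ACK pair above is the guest fa:16:3e:4d:4a:3a leasing 10.100.0.14 without any dnsmasq on the node. A sketch of matching these pinctrl lines (regex written against the lines shown):

    import re

    PINCTRL_DHCP = re.compile(
        r'\|pinctrl\(ovn_pinctrl\d+\)\|INFO\|(?P<op>DHCPOFFER|DHCPACK) '
        r'(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2}) (?P<ip>\d+(?:\.\d+){3})')

    line = ('2025-11-29T08:26:02Z|00059|pinctrl(ovn_pinctrl0)|INFO|'
            'DHCPACK fa:16:3e:4d:4a:3a 10.100.0.14')
    m = PINCTRL_DHCP.search(line)
    print(m.group('op'), m.group('mac'), m.group('ip'))
    # -> DHCPACK fa:16:3e:4d:4a:3a 10.100.0.14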
Nov 29 03:26:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:02.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:03 np0005539550 nova_compute[257631]: 2025-11-29 08:26:03.160 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 500 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.8 MiB/s wr, 333 op/s
Nov 29 03:26:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:04.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:04 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:04Z|00623|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:04 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:04Z|00624|binding|INFO|Releasing lport 1f39aec2-e2f9-4abf-8561-ea0dffb6b44d from this chassis (sb_readonly=0)
Nov 29 03:26:04 np0005539550 nova_compute[257631]: 2025-11-29 08:26:04.224 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:04.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:05 np0005539550 podman[344700]: 2025-11-29 08:26:05.405012929 +0000 UTC m=+0.110034976 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 29 03:26:05 np0005539550 podman[344701]: 2025-11-29 08:26:05.415143786 +0000 UTC m=+0.120167383 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
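The config_data=... blob inside these health_status records is a Python-literal dict (single quotes, bare True), not JSON, so once sliced out of the record it can be recovered with ast.literal_eval. A sketch on a fragment trimmed from the multipathd record above; slicing the blob out of a full line is left aside and may need care with nested braces:

    import ast

    blob = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
            "'image': 'quay.io/podified-antelope-centos9/"
            "openstack-multipathd:current-podified', "
            "'privileged': True, "
            "'volumes': ['/etc/hosts:/etc/hosts:ro', '/dev:/dev']}")
    cfg = ast.literal_eval(blob)   # safe: literals only, no code execution
    print(cfg['image'], cfg['privileged'], len(cfg['volumes']))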
Nov 29 03:26:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:05Z|00625|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:05Z|00626|binding|INFO|Releasing lport 1f39aec2-e2f9-4abf-8561-ea0dffb6b44d from this chassis (sb_readonly=0)
Nov 29 03:26:05 np0005539550 nova_compute[257631]: 2025-11-29 08:26:05.748 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:05 np0005539550 nova_compute[257631]: 2025-11-29 08:26:05.879 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 496 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 331 op/s
Nov 29 03:26:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:06.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:06.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 501 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 353 op/s
Nov 29 03:26:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:08.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:08 np0005539550 nova_compute[257631]: 2025-11-29 08:26:08.207 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:26:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:26:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:08.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:26:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:26:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:26:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:26:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.406 257641 INFO nova.compute.manager [None req-ad09f2ee-00ac-479b-843a-d6972a48d8d7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Get console output
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.414 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
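The ignored error has the classic shape of appending a non-blocking read result to a bytes buffer: the pty read returned None instead of b'', and bytes + None raises exactly that TypeError. Minimal reproduction and the guard that avoids it:

    buf = b''
    chunk = None                 # what an empty non-blocking pty read produced
    try:
        buf += chunk             # TypeError: can't concat NoneType to bytes
    except TypeError:
        pass                     # logged and ignored, as in the line above
    if chunk:                    # the safe variant
        buf += chunk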
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.829 257641 DEBUG oslo_concurrency.lockutils [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.830 257641 DEBUG oslo_concurrency.lockutils [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.830 257641 INFO nova.compute.manager [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Rebooting instance
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.857 257641 DEBUG oslo_concurrency.lockutils [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.857 257641 DEBUG oslo_concurrency.lockutils [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:26:09 np0005539550 nova_compute[257631]: 2025-11-29 08:26:09.857 257641 DEBUG nova.network.neutron [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:26:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 501 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.1 MiB/s wr, 336 op/s
Nov 29 03:26:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:10.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:10 np0005539550 nova_compute[257631]: 2025-11-29 08:26:10.880 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:10 np0005539550 nova_compute[257631]: 2025-11-29 08:26:10.888 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:10.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:11 np0005539550 nova_compute[257631]: 2025-11-29 08:26:11.486 257641 DEBUG nova.network.neutron [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updating instance_info_cache with network_info: [{"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:26:11 np0005539550 nova_compute[257631]: 2025-11-29 08:26:11.507 257641 DEBUG oslo_concurrency.lockutils [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:26:11 np0005539550 nova_compute[257631]: 2025-11-29 08:26:11.508 257641 DEBUG nova.compute.manager [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:26:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 501 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 403 op/s
Nov 29 03:26:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:12.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:12.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:13 np0005539550 nova_compute[257631]: 2025-11-29 08:26:13.209 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539550 nova_compute[257631]: 2025-11-29 08:26:13.859 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539550 kernel: tap8e96d4f5-b3 (unregistering): left promiscuous mode
Nov 29 03:26:13 np0005539550 NetworkManager[49039]: <info>  [1764404773.9626] device (tap8e96d4f5-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:26:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:13Z|00627|binding|INFO|Releasing lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 from this chassis (sb_readonly=0)
Nov 29 03:26:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:13Z|00628|binding|INFO|Setting lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 down in Southbound
Nov 29 03:26:13 np0005539550 nova_compute[257631]: 2025-11-29 08:26:13.974 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:13Z|00629|binding|INFO|Removing iface tap8e96d4f5-b3 ovn-installed in OVS
Nov 29 03:26:13 np0005539550 nova_compute[257631]: 2025-11-29 08:26:13.976 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:13.981 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:4a:3a 10.100.0.14'], port_security=['fa:16:3e:4d:4a:3a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '870400f0-dfee-46aa-85e0-ee30dae2ee74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38642321-47cd-438d-bcd2-dc522b9bd850', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd47b7209-ced1-414f-bc97-b5859dfea11d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=743557d6-422c-40e3-9124-4df909a58562, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:13.983 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 in datapath 38642321-47cd-438d-bcd2-dc522b9bd850 unbound from our chassis#033[00m
Nov 29 03:26:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:13.984 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38642321-47cd-438d-bcd2-dc522b9bd850, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:13.986 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[70e6d2f0-2b6b-415e-aa8e-cd4b02e57188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:13.986 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 namespace which is not needed anymore#033[00m
Nov 29 03:26:13 np0005539550 nova_compute[257631]: 2025-11-29 08:26:13.990 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 501 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.9 MiB/s wr, 269 op/s
Nov 29 03:26:14 np0005539550 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 29 03:26:14 np0005539550 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000088.scope: Consumed 15.539s CPU time.
Nov 29 03:26:14 np0005539550 systemd-machined[216673]: Machine qemu-74-instance-00000088 terminated.
Nov 29 03:26:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:14.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:14 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [NOTICE]   (344584) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:14 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [NOTICE]   (344584) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:14 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [WARNING]  (344584) : Exiting Master process...
Nov 29 03:26:14 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [WARNING]  (344584) : Exiting Master process...
Nov 29 03:26:14 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [ALERT]    (344584) : Current worker (344586) exited with code 143 (Terminated)
Nov 29 03:26:14 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344580]: [WARNING]  (344584) : All workers exited. Exiting... (0)
Nov 29 03:26:14 np0005539550 systemd[1]: libpod-0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a.scope: Deactivated successfully.
Nov 29 03:26:14 np0005539550 podman[344761]: 2025-11-29 08:26:14.121900346 +0000 UTC m=+0.048223746 container died 0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.130 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-29db0efb1e2c05f82f258b28d45895e6ad3f401333b64b19f631d839792de381-merged.mount: Deactivated successfully.
Nov 29 03:26:14 np0005539550 podman[344761]: 2025-11-29 08:26:14.160826504 +0000 UTC m=+0.087149894 container cleanup 0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:14 np0005539550 systemd[1]: libpod-conmon-0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a.scope: Deactivated successfully.
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.188 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.196 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 podman[344790]: 2025-11-29 08:26:14.242209921 +0000 UTC m=+0.056832864 container remove 0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.247 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d00de062-5044-432e-b15a-fdf037f1bf99]: (4, ('Sat Nov 29 08:26:14 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 (0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a)\n0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a\nSat Nov 29 08:26:14 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 (0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a)\n0f1d7972aae837e7ac65bf70fbd83b1281d92ba24a5627dfffa05f8f6efec46a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.250 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eb1e5f8a-5f66-4a23-a5c1-99f569715ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.251 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38642321-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.254 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 kernel: tap38642321-40: left promiscuous mode
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.279 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c08a91-ad6c-491a-abed-029989c057d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.295 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[94e9b15e-249f-4aca-a02f-59ec6c8f6698]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.297 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a400ce3d-05c6-4d68-9939-beb168788569]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.311 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0a56c1d6-2fc1-42db-b9ac-921d648c8b2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 779470, 'reachable_time': 28898, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344817, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 systemd[1]: run-netns-ovnmeta\x2d38642321\x2d47cd\x2d438d\x2dbcd2\x2ddc522b9bd850.mount: Deactivated successfully.
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.315 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.316 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[bcfbc00c-d54c-4ea1-b7ba-7401134aa2a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.681 257641 INFO nova.virt.libvirt.driver [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance shutdown successfully.#033[00m
Nov 29 03:26:14 np0005539550 kernel: tap8e96d4f5-b3: entered promiscuous mode
Nov 29 03:26:14 np0005539550 NetworkManager[49039]: <info>  [1764404774.7522] manager: (tap8e96d4f5-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Nov 29 03:26:14 np0005539550 systemd-udevd[344742]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:26:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:14Z|00630|binding|INFO|Claiming lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for this chassis.
Nov 29 03:26:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:14Z|00631|binding|INFO|8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7: Claiming fa:16:3e:4d:4a:3a 10.100.0.14
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.753 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.761 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:4a:3a 10.100.0.14'], port_security=['fa:16:3e:4d:4a:3a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '870400f0-dfee-46aa-85e0-ee30dae2ee74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38642321-47cd-438d-bcd2-dc522b9bd850', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd47b7209-ced1-414f-bc97-b5859dfea11d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=743557d6-422c-40e3-9124-4df909a58562, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.762 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 in datapath 38642321-47cd-438d-bcd2-dc522b9bd850 bound to our chassis#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.763 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 38642321-47cd-438d-bcd2-dc522b9bd850#033[00m
Nov 29 03:26:14 np0005539550 NetworkManager[49039]: <info>  [1764404774.7705] device (tap8e96d4f5-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:26:14 np0005539550 NetworkManager[49039]: <info>  [1764404774.7716] device (tap8e96d4f5-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:26:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:14Z|00632|binding|INFO|Setting lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 ovn-installed in OVS
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.770 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:14Z|00633|binding|INFO|Setting lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 up in Southbound
Nov 29 03:26:14 np0005539550 nova_compute[257631]: 2025-11-29 08:26:14.776 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.780 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b522d9a3-bca4-4243-aea8-3e483542be61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.781 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap38642321-41 in ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.784 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap38642321-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.784 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49822341-8fe5-40c0-a78f-cdc398840671]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.785 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9a82ae-467a-4710-9faa-1f4ae2c08ea5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.795 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[41608302-64ce-4ac9-9fe0-045b733dbdd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 systemd-machined[216673]: New machine qemu-75-instance-00000088.
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.819 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[83316f28-1381-4645-9dc6-e707637f45ec]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 systemd[1]: Started Virtual Machine qemu-75-instance-00000088.
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.849 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9df081c6-ebf7-4c3b-b728-7e0636232c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.854 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3a000ad2-419e-4a9e-b0c8-b6e28690f154]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 NetworkManager[49039]: <info>  [1764404774.8563] manager: (tap38642321-40): new Veth device (/org/freedesktop/NetworkManager/Devices/281)
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.891 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccbc87f-da6c-4cc2-9259-c538292ba880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.894 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6e308baf-73ca-4d9f-99af-7fc88d27310c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 NetworkManager[49039]: <info>  [1764404774.9205] device (tap38642321-40): carrier: link connected
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.926 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[838519a7-67f8-4708-b89a-54321f8104e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.947 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[628c560a-fc33-4dcd-afd9-0d181431abef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38642321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:f9:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782258, 'reachable_time': 35713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344863, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.963 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[65fdfe40-9075-4a26-8ee2-c107044aae47]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:f98b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 782258, 'tstamp': 782258}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344864, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:14.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:14.982 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b6b21389-d8a3-430c-9dff-01a272c9c175]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap38642321-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:f9:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782258, 'reachable_time': 35713, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344865, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.016 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0f073575-9b5d-4d9a-bf9b-d5b234d84c18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.069 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7fc75d-b532-4bcb-8c2c-74875fe11c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.071 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38642321-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.071 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.072 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap38642321-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:15 np0005539550 kernel: tap38642321-40: entered promiscuous mode
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.074 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:15 np0005539550 NetworkManager[49039]: <info>  [1764404775.0749] manager: (tap38642321-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.078 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap38642321-40, col_values=(('external_ids', {'iface-id': '1f39aec2-e2f9-4abf-8561-ea0dffb6b44d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.079 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:15Z|00634|binding|INFO|Releasing lport 1f39aec2-e2f9-4abf-8561-ea0dffb6b44d from this chassis (sb_readonly=0)
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.098 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.098 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/38642321-47cd-438d-bcd2-dc522b9bd850.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/38642321-47cd-438d-bcd2-dc522b9bd850.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.100 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[99396eb1-f1b4-400c-b1c2-306109187cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.101 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-38642321-47cd-438d-bcd2-dc522b9bd850
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/38642321-47cd-438d-bcd2-dc522b9bd850.pid.haproxy
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 38642321-47cd-438d-bcd2-dc522b9bd850
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:26:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:15.104 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'env', 'PROCESS_TAG=haproxy-38642321-47cd-438d-bcd2-dc522b9bd850', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/38642321-47cd-438d-bcd2-dc522b9bd850.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.412 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 870400f0-dfee-46aa-85e0-ee30dae2ee74 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.413 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404775.4121423, 870400f0-dfee-46aa-85e0-ee30dae2ee74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.414 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.417 257641 INFO nova.virt.libvirt.driver [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance running successfully.#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.417 257641 INFO nova.virt.libvirt.driver [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance soft rebooted successfully.#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.418 257641 DEBUG nova.compute.manager [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.441 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.444 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:15 np0005539550 podman[344939]: 2025-11-29 08:26:15.469588538 +0000 UTC m=+0.051964430 container create 1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.489 257641 DEBUG oslo_concurrency.lockutils [None req-172559b1-1d3f-404c-bec5-b84fded0a954 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.491 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] During sync_power_state the instance has a pending task (reboot_started). Skip.#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.492 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404775.413396, 870400f0-dfee-46aa-85e0-ee30dae2ee74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.492 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] VM Started (Lifecycle Event)#033[00m
Nov 29 03:26:15 np0005539550 systemd[1]: Started libpod-conmon-1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e.scope.
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.519 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.523 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:26:15 np0005539550 podman[344939]: 2025-11-29 08:26:15.445179288 +0000 UTC m=+0.027555200 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:26:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:26:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac24c5f78605b5361766974c48281666d549a7858b70e3d56aab8899efe6e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:15 np0005539550 podman[344939]: 2025-11-29 08:26:15.559471161 +0000 UTC m=+0.141847073 container init 1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:15 np0005539550 podman[344939]: 2025-11-29 08:26:15.564692373 +0000 UTC m=+0.147068265 container start 1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:15 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344954]: [NOTICE]   (344958) : New worker (344960) forked
Nov 29 03:26:15 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344954]: [NOTICE]   (344958) : Loading success.
Nov 29 03:26:15 np0005539550 nova_compute[257631]: 2025-11-29 08:26:15.882 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 529 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.5 MiB/s wr, 227 op/s
Nov 29 03:26:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:16.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:16 np0005539550 nova_compute[257631]: 2025-11-29 08:26:16.384 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:16.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:17 np0005539550 podman[344970]: 2025-11-29 08:26:17.377703403 +0000 UTC m=+0.117170466 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:26:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 196 op/s
Nov 29 03:26:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:18 np0005539550 nova_compute[257631]: 2025-11-29 08:26:18.212 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:18.958 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:18.959 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:18.960 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:18.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 547 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 29 03:26:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:20.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:20Z|00635|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:20Z|00636|binding|INFO|Releasing lport 1f39aec2-e2f9-4abf-8561-ea0dffb6b44d from this chassis (sb_readonly=0)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009672021856970036 of space, bias 1.0, pg target 2.901606557091011 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4512325807277127 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:26:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
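[annotation] Each pg_autoscaler pass above computes, per pool, a raw PG target of roughly capacity_ratio x bias x (target PGs per OSD x number of OSDs), then quantizes it and only touches pg_num when the result is far from the current value. A simplified sketch of that arithmetic, assuming mon_target_pg_per_osd=100 and 3 OSDs (a 300-PG budget, consistent with the 21 GiB cluster seen in the pgmap lines):

    # Rough reconstruction of the raw PG target printed by the autoscaler.
    # Assumptions (not read from this cluster): mon_target_pg_per_osd=100 and
    # 3 OSDs, i.e. a 300-PG budget. The final "quantized" value additionally
    # involves pg_num_min, the bulk flag and a change threshold, none of
    # which this sketch models.
    def raw_pg_target(capacity_ratio, bias, target_pg_per_osd=100, num_osds=3):
        return capacity_ratio * bias * target_pg_per_osd * num_osds

    print(raw_pg_target(0.009672021856970036, 1.0))    # 2.9016065570910108 ('vms')
    print(raw_pg_target(2.0538165363856318e-05, 1.0))  # 0.0061614496091569  ('.mgr')

Those two reproduce the logged targets to full precision; pools such as cephfs.cephfs.meta come out slightly different, so the module's effective per-pool capacity is evidently not quite the flat 300 assumed here.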
Nov 29 03:26:20 np0005539550 nova_compute[257631]: 2025-11-29 08:26:20.110 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:20 np0005539550 nova_compute[257631]: 2025-11-29 08:26:20.884 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:20.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:21 np0005539550 nova_compute[257631]: 2025-11-29 08:26:21.085 257641 DEBUG nova.compute.manager [req-a8a49211-4721-4777-b3b4-48db7da25c0b req-da765e8e-ab03-4047-9cfc-26a53e417b01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-unplugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:26:21 np0005539550 nova_compute[257631]: 2025-11-29 08:26:21.086 257641 DEBUG oslo_concurrency.lockutils [req-a8a49211-4721-4777-b3b4-48db7da25c0b req-da765e8e-ab03-4047-9cfc-26a53e417b01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:21 np0005539550 nova_compute[257631]: 2025-11-29 08:26:21.086 257641 DEBUG oslo_concurrency.lockutils [req-a8a49211-4721-4777-b3b4-48db7da25c0b req-da765e8e-ab03-4047-9cfc-26a53e417b01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:21 np0005539550 nova_compute[257631]: 2025-11-29 08:26:21.086 257641 DEBUG oslo_concurrency.lockutils [req-a8a49211-4721-4777-b3b4-48db7da25c0b req-da765e8e-ab03-4047-9cfc-26a53e417b01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:21 np0005539550 nova_compute[257631]: 2025-11-29 08:26:21.087 257641 DEBUG nova.compute.manager [req-a8a49211-4721-4777-b3b4-48db7da25c0b req-da765e8e-ab03-4047-9cfc-26a53e417b01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] No waiting events found dispatching network-vif-unplugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:26:21 np0005539550 nova_compute[257631]: 2025-11-29 08:26:21.087 257641 WARNING nova.compute.manager [req-a8a49211-4721-4777-b3b4-48db7da25c0b req-da765e8e-ab03-4047-9cfc-26a53e417b01 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received unexpected event network-vif-unplugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for instance with vm_state active and task_state None.
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6c2b1882-6609-4833-aaa9-54df86ce2e85 does not exist
Nov 29 03:26:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 613b224a-e9c7-4df0-8451-e4221430663f does not exist
Nov 29 03:26:21 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8a19bb88-6559-4b95-beb1-d0d9ec6b664e does not exist
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
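[annotation] The handle_command/audit pairs above are the mgr's cephadm module gathering what it needs before deploying an OSD: a minimal ceph.conf, the client.admin and client.bootstrap-osd keys, and the tree of destroyed OSDs. The minimal conf can be requested by hand with the same stock mon command; a sketch, assuming a reachable cluster and admin keyring (e.g. inside `cephadm shell`):

    import subprocess

    # Same request the mgr issues above; prints a [global] section with
    # fsid and mon_host, suitable for bootstrapping a new daemon.
    print(subprocess.check_output(
        ['ceph', 'config', 'generate-minimal-conf'], text=True))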
Nov 29 03:26:21 np0005539550 podman[345324]: 2025-11-29 08:26:21.997663622 +0000 UTC m=+0.049399975 container create bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 29 03:26:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 603 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.8 MiB/s wr, 243 op/s
Nov 29 03:26:22 np0005539550 systemd[1]: Started libpod-conmon-bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d.scope.
Nov 29 03:26:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:22.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:26:22 np0005539550 podman[345324]: 2025-11-29 08:26:21.977549172 +0000 UTC m=+0.029285555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:22 np0005539550 podman[345324]: 2025-11-29 08:26:22.078458574 +0000 UTC m=+0.130194947 container init bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:26:22 np0005539550 podman[345324]: 2025-11-29 08:26:22.088099669 +0000 UTC m=+0.139836022 container start bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:26:22 np0005539550 podman[345324]: 2025-11-29 08:26:22.091741471 +0000 UTC m=+0.143477854 container attach bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:26:22 np0005539550 youthful_wu[345341]: 167 167
Nov 29 03:26:22 np0005539550 systemd[1]: libpod-bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d.scope: Deactivated successfully.
Nov 29 03:26:22 np0005539550 podman[345324]: 2025-11-29 08:26:22.095794884 +0000 UTC m=+0.147531247 container died bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:26:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9f920059a93a8b6a9118d60a1b643c72fdc101bb3122f1701684ecb381d1a5af-merged.mount: Deactivated successfully.
Nov 29 03:26:22 np0005539550 podman[345324]: 2025-11-29 08:26:22.140390677 +0000 UTC m=+0.192127050 container remove bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:26:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:22 np0005539550 systemd[1]: libpod-conmon-bb21ec249f98b00d7d4e371a08ef35b9719f3afb16a445c6ff99e901db81fb6d.scope: Deactivated successfully.
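[annotation] Container bb21ec249f98 above is a short-lived cephadm helper that runs through the full podman lifecycle (create, init, start, attach, died, remove) in under a second; several more follow below. The same transitions can be watched live with `podman events`; a minimal sketch, assuming podman is on PATH:

    import json
    import subprocess

    # Stream lifecycle events as JSON lines and print transitions for
    # containers built from the Ceph image seen in this log. JSON key
    # casing varies a little between podman versions, hence the .get()s.
    proc = subprocess.Popen(['podman', 'events', '--format', 'json'],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get('Type') == 'container' and 'ceph' in (ev.get('Image') or ''):
            print(ev.get('Time') or ev.get('time'),
                  ev.get('Status'), ev.get('Name'))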
Nov 29 03:26:22 np0005539550 podman[345367]: 2025-11-29 08:26:22.346914821 +0000 UTC m=+0.045846445 container create eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:26:22 np0005539550 systemd[1]: Started libpod-conmon-eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a.scope.
Nov 29 03:26:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:26:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017b5e670d905f029702e749008c40b0b3a4a92ede8cdd558755d3208438a036/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017b5e670d905f029702e749008c40b0b3a4a92ede8cdd558755d3208438a036/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017b5e670d905f029702e749008c40b0b3a4a92ede8cdd558755d3208438a036/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017b5e670d905f029702e749008c40b0b3a4a92ede8cdd558755d3208438a036/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:22 np0005539550 podman[345367]: 2025-11-29 08:26:22.326179425 +0000 UTC m=+0.025111069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017b5e670d905f029702e749008c40b0b3a4a92ede8cdd558755d3208438a036/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:22 np0005539550 podman[345367]: 2025-11-29 08:26:22.429393496 +0000 UTC m=+0.128325150 container init eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:26:22 np0005539550 podman[345367]: 2025-11-29 08:26:22.440623981 +0000 UTC m=+0.139555605 container start eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:26:22 np0005539550 podman[345367]: 2025-11-29 08:26:22.444309805 +0000 UTC m=+0.143241429 container attach eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:26:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:22.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.215 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:23 np0005539550 festive_vaughan[345382]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:26:23 np0005539550 festive_vaughan[345382]: --> relative data size: 1.0
Nov 29 03:26:23 np0005539550 festive_vaughan[345382]: --> All data devices are unavailable
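[annotation] The festive_vaughan output above is consistent with a ceph-volume batch dry run: it was handed one LVM data device, found it already consumed by an existing OSD, and reported "All data devices are unavailable", so nothing new gets created. A sketch of reproducing such a report by hand, assuming ceph-volume can see the LV (device argument taken from this host; substitute your own):

    import subprocess

    # Dry run only: --report prints the plan without touching the device.
    print(subprocess.check_output(
        ['ceph-volume', 'lvm', 'batch', '--report', '--format', 'json',
         '/dev/ceph_vg0/ceph_lv0'], text=True))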
Nov 29 03:26:23 np0005539550 systemd[1]: libpod-eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a.scope: Deactivated successfully.
Nov 29 03:26:23 np0005539550 podman[345367]: 2025-11-29 08:26:23.264602095 +0000 UTC m=+0.963533719 container died eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:26:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-017b5e670d905f029702e749008c40b0b3a4a92ede8cdd558755d3208438a036-merged.mount: Deactivated successfully.
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.313 257641 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.316 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.317 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.317 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.318 257641 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] No waiting events found dispatching network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.318 257641 WARNING nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received unexpected event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for instance with vm_state active and task_state None.
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.318 257641 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.319 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.319 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.319 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.319 257641 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] No waiting events found dispatching network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.320 257641 WARNING nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received unexpected event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for instance with vm_state active and task_state None.
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.320 257641 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.320 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.320 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.321 257641 DEBUG oslo_concurrency.lockutils [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.321 257641 DEBUG nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] No waiting events found dispatching network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:26:23 np0005539550 nova_compute[257631]: 2025-11-29 08:26:23.321 257641 WARNING nova.compute.manager [req-a9cc4bbe-edf6-4629-b58a-1cf0e8d9a578 req-0432b6e7-136d-4fbe-b2be-d87ce95f5757 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received unexpected event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for instance with vm_state active and task_state None.
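[annotation] The repeated six-line pattern above is nova's InstanceEvents.pop_instance_event: take the per-instance "-events" lock, pop any registered waiter for the event, release, and warn when nobody was waiting (the instance is active, so these vif-plugged/unplugged events arrive outside any tracked operation and are benign). The locking primitive is oslo.concurrency's lockutils; a toy sketch with the event bookkeeping reduced to a dict (the waiters dict and names are illustrative, not nova's real state):

    from oslo_concurrency import lockutils

    waiters = {}   # instance uuid -> {event name: waiter}; illustrative only

    def pop_instance_event(instance_uuid, event_name):
        # Same pattern as the log: acquire the per-instance lock, pop, release.
        with lockutils.lock(instance_uuid + '-events'):
            waiter = waiters.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            print('WARNING: received unexpected event %s for instance %s'
                  % (event_name, instance_uuid))
        return waiter

    pop_instance_event('870400f0-dfee-46aa-85e0-ee30dae2ee74',
                       'network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7')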
Nov 29 03:26:23 np0005539550 podman[345367]: 2025-11-29 08:26:23.328742234 +0000 UTC m=+1.027673858 container remove eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_vaughan, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:26:23 np0005539550 systemd[1]: libpod-conmon-eed50a7dc236a2c652daee8e3a81010b04f6af6ff62da601d749a83d2b31ed9a.scope: Deactivated successfully.
Nov 29 03:26:23 np0005539550 podman[345552]: 2025-11-29 08:26:23.918128181 +0000 UTC m=+0.036711713 container create 3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:23 np0005539550 systemd[1]: Started libpod-conmon-3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd.scope.
Nov 29 03:26:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:26:23 np0005539550 podman[345552]: 2025-11-29 08:26:23.993873185 +0000 UTC m=+0.112456747 container init 3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:26:24 np0005539550 podman[345552]: 2025-11-29 08:26:23.903155061 +0000 UTC m=+0.021738613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:24 np0005539550 podman[345552]: 2025-11-29 08:26:24.000196575 +0000 UTC m=+0.118780127 container start 3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:26:24 np0005539550 podman[345552]: 2025-11-29 08:26:24.003002046 +0000 UTC m=+0.121585578 container attach 3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:26:24 np0005539550 hopeful_zhukovsky[345569]: 167 167
Nov 29 03:26:24 np0005539550 systemd[1]: libpod-3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd.scope: Deactivated successfully.
Nov 29 03:26:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 624 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.8 MiB/s wr, 216 op/s
Nov 29 03:26:24 np0005539550 podman[345552]: 2025-11-29 08:26:24.005845489 +0000 UTC m=+0.124429021 container died 3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_zhukovsky, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:26:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1e083f332f4e69621ad023701f18f5d0d2d50b332dc20193374ce1de4d40441f-merged.mount: Deactivated successfully.
Nov 29 03:26:24 np0005539550 podman[345552]: 2025-11-29 08:26:24.044355427 +0000 UTC m=+0.162938959 container remove 3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:26:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:24.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:24 np0005539550 systemd[1]: libpod-conmon-3894dc867f7ad213bef9a6f2e955b7b4d43a5fd6e2a22768a8f2156d673f4ecd.scope: Deactivated successfully.
Nov 29 03:26:24 np0005539550 podman[345593]: 2025-11-29 08:26:24.241983455 +0000 UTC m=+0.047058126 container create a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:26:24 np0005539550 systemd[1]: Started libpod-conmon-a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514.scope.
Nov 29 03:26:24 np0005539550 podman[345593]: 2025-11-29 08:26:24.224185613 +0000 UTC m=+0.029260314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:26:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db517252565d7f40f55f52a0a139b27452be01160a5df291fb905245d2834457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db517252565d7f40f55f52a0a139b27452be01160a5df291fb905245d2834457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db517252565d7f40f55f52a0a139b27452be01160a5df291fb905245d2834457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db517252565d7f40f55f52a0a139b27452be01160a5df291fb905245d2834457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:24 np0005539550 podman[345593]: 2025-11-29 08:26:24.339390549 +0000 UTC m=+0.144465250 container init a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:26:24 np0005539550 podman[345593]: 2025-11-29 08:26:24.348076989 +0000 UTC m=+0.153151660 container start a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:26:24 np0005539550 podman[345593]: 2025-11-29 08:26:24.352901802 +0000 UTC m=+0.157976473 container attach a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:26:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:24.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:25 np0005539550 friendly_williams[345610]: {
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:    "0": [
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:        {
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "devices": [
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "/dev/loop3"
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            ],
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "lv_name": "ceph_lv0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "lv_size": "7511998464",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "name": "ceph_lv0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "tags": {
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.cluster_name": "ceph",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.crush_device_class": "",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.encrypted": "0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.osd_id": "0",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.type": "block",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:                "ceph.vdo": "0"
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            },
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "type": "block",
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:            "vg_name": "ceph_vg0"
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:        }
Nov 29 03:26:25 np0005539550 friendly_williams[345610]:    ]
Nov 29 03:26:25 np0005539550 friendly_williams[345610]: }
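[annotation] The friendly_williams JSON above is the output shape of `ceph-volume lvm list`: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags expanded under "tags". A minimal sketch that extracts the backing device and osd_fsid per OSD, assuming the same JSON is obtained from `ceph-volume lvm list --format json`:

    import json
    import subprocess

    # Must run where ceph-volume can see the LVs (e.g. inside a cephadm shell).
    out = subprocess.check_output(
        ['ceph-volume', 'lvm', 'list', '--format', 'json'], text=True)
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv['lv_path'], lv['tags']['ceph.osd_fsid'])
    # With the data above:
    # 0 /dev/ceph_vg0/ceph_lv0 5dd67027-4f06-4800-93bd-47ed1a74c5e6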
Nov 29 03:26:25 np0005539550 systemd[1]: libpod-a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514.scope: Deactivated successfully.
Nov 29 03:26:25 np0005539550 podman[345593]: 2025-11-29 08:26:25.112174473 +0000 UTC m=+0.917249164 container died a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:26:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-db517252565d7f40f55f52a0a139b27452be01160a5df291fb905245d2834457-merged.mount: Deactivated successfully.
Nov 29 03:26:25 np0005539550 podman[345593]: 2025-11-29 08:26:25.177655696 +0000 UTC m=+0.982730367 container remove a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:26:25 np0005539550 systemd[1]: libpod-conmon-a23695fbbca5d8e8a81a1f38438d00ce5baef10e6735e85e46e289405db63514.scope: Deactivated successfully.
Nov 29 03:26:25 np0005539550 podman[345777]: 2025-11-29 08:26:25.883781307 +0000 UTC m=+0.047280162 container create 204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williamson, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:26:25 np0005539550 nova_compute[257631]: 2025-11-29 08:26:25.886 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:25 np0005539550 systemd[1]: Started libpod-conmon-204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad.scope.
Nov 29 03:26:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:26:25 np0005539550 podman[345777]: 2025-11-29 08:26:25.864053106 +0000 UTC m=+0.027551991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:25 np0005539550 podman[345777]: 2025-11-29 08:26:25.970045217 +0000 UTC m=+0.133544092 container init 204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:26:25 np0005539550 podman[345777]: 2025-11-29 08:26:25.978264246 +0000 UTC m=+0.141763101 container start 204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:26:25 np0005539550 podman[345777]: 2025-11-29 08:26:25.981892798 +0000 UTC m=+0.145391673 container attach 204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:26:25 np0005539550 gallant_williamson[345794]: 167 167
Nov 29 03:26:25 np0005539550 systemd[1]: libpod-204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad.scope: Deactivated successfully.
Nov 29 03:26:25 np0005539550 podman[345777]: 2025-11-29 08:26:25.984876514 +0000 UTC m=+0.148375369 container died 204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:26:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 640 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 5.4 MiB/s wr, 299 op/s
Nov 29 03:26:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1ad5c6cb3aabb992da757996a00b51cfaa6e9302215dddafc0297e6ea2125fd9-merged.mount: Deactivated successfully.
Nov 29 03:26:26 np0005539550 podman[345777]: 2025-11-29 08:26:26.02174711 +0000 UTC m=+0.185245965 container remove 204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:26:26 np0005539550 systemd[1]: libpod-conmon-204b839d7673a6a880c4553d05617e55a84a47c58a0bb535eeaa8292b7125fad.scope: Deactivated successfully.
Nov 29 03:26:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:26.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:26 np0005539550 podman[345817]: 2025-11-29 08:26:26.199134545 +0000 UTC m=+0.041949426 container create 227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:26:26 np0005539550 systemd[1]: Started libpod-conmon-227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784.scope.
Nov 29 03:26:26 np0005539550 podman[345817]: 2025-11-29 08:26:26.181660881 +0000 UTC m=+0.024475782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:26:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc49b4eb02d5455b055843fa64b955b917b46e6c7e6f4daa1b97795cff2d7679/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc49b4eb02d5455b055843fa64b955b917b46e6c7e6f4daa1b97795cff2d7679/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc49b4eb02d5455b055843fa64b955b917b46e6c7e6f4daa1b97795cff2d7679/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc49b4eb02d5455b055843fa64b955b917b46e6c7e6f4daa1b97795cff2d7679/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:26:26 np0005539550 podman[345817]: 2025-11-29 08:26:26.298549289 +0000 UTC m=+0.141364200 container init 227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:26:26 np0005539550 podman[345817]: 2025-11-29 08:26:26.305113316 +0000 UTC m=+0.147928197 container start 227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:26:26 np0005539550 podman[345817]: 2025-11-29 08:26:26.309538749 +0000 UTC m=+0.152353630 container attach 227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:26:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:26.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]: {
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]:        "osd_id": 0,
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]:        "type": "bluestore"
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]:    }
Nov 29 03:26:27 np0005539550 xenodochial_curie[345834]: }
Nov 29 03:26:27 np0005539550 systemd[1]: libpod-227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784.scope: Deactivated successfully.
Nov 29 03:26:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:27 np0005539550 podman[345856]: 2025-11-29 08:26:27.193689031 +0000 UTC m=+0.027619453 container died 227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:26:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cc49b4eb02d5455b055843fa64b955b917b46e6c7e6f4daa1b97795cff2d7679-merged.mount: Deactivated successfully.
Nov 29 03:26:27 np0005539550 podman[345856]: 2025-11-29 08:26:27.245836915 +0000 UTC m=+0.079767317 container remove 227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_curie, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:26:27 np0005539550 systemd[1]: libpod-conmon-227080b3928d7167301025d3ee568ba505b54dc19df7dc4e48f37e325ccdd784.scope: Deactivated successfully.
Nov 29 03:26:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:26:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:26:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c1174dc1-16ca-481d-bf8a-86c54eff2b63 does not exist
Nov 29 03:26:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 34285ceb-35dc-43ca-a7c4-03dbb5feae89 does not exist
Nov 29 03:26:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3204711b-925c-4c30-a35e-801f01b07882 does not exist
Nov 29 03:26:27 np0005539550 nova_compute[257631]: 2025-11-29 08:26:27.544 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:27 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 642 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.2 MiB/s wr, 325 op/s
Nov 29 03:26:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:28.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:28 np0005539550 nova_compute[257631]: 2025-11-29 08:26:28.216 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:28Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:4a:3a 10.100.0.14
Nov 29 03:26:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:28.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 642 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 3.6 MiB/s wr, 274 op/s
Nov 29 03:26:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:30 np0005539550 nova_compute[257631]: 2025-11-29 08:26:30.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:30.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 642 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.6 MiB/s wr, 388 op/s
Nov 29 03:26:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:32.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:32 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Nov 29 03:26:32 np0005539550 nova_compute[257631]: 2025-11-29 08:26:32.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:33.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:33 np0005539550 nova_compute[257631]: 2025-11-29 08:26:33.218 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:33 np0005539550 nova_compute[257631]: 2025-11-29 08:26:33.350 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:33 np0005539550 nova_compute[257631]: 2025-11-29 08:26:33.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:33 np0005539550 nova_compute[257631]: 2025-11-29 08:26:33.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:26:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 647 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.2 MiB/s wr, 300 op/s
Nov 29 03:26:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:34 np0005539550 nova_compute[257631]: 2025-11-29 08:26:34.469 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:34 np0005539550 nova_compute[257631]: 2025-11-29 08:26:34.470 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:34 np0005539550 nova_compute[257631]: 2025-11-29 08:26:34.470 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:26:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:35.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:35 np0005539550 nova_compute[257631]: 2025-11-29 08:26:35.646 257641 INFO nova.compute.manager [None req-bbeface6-35d4-4e3f-b99f-73862a1f9e01 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Get console output#033[00m
Nov 29 03:26:35 np0005539550 nova_compute[257631]: 2025-11-29 08:26:35.653 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:26:35 np0005539550 nova_compute[257631]: 2025-11-29 08:26:35.931 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.7 MiB/s wr, 311 op/s
Nov 29 03:26:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:36.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:36 np0005539550 podman[345925]: 2025-11-29 08:26:36.33830495 +0000 UTC m=+0.065854524 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:36 np0005539550 podman[345926]: 2025-11-29 08:26:36.342760633 +0000 UTC m=+0.065197687 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.539 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updating instance_info_cache with network_info: [{"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.556 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-346e849d-fa61-4451-b34c-d6165fea3aa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.556 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.557 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.557 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.557 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.584 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.584 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.585 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.585 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:26:36 np0005539550 nova_compute[257631]: 2025-11-29 08:26:36.586 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:37.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247107837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.047 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.071 257641 DEBUG nova.compute.manager [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-changed-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.072 257641 DEBUG nova.compute.manager [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Refreshing instance network info cache due to event network-changed-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.073 257641 DEBUG oslo_concurrency.lockutils [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.073 257641 DEBUG oslo_concurrency.lockutils [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.073 257641 DEBUG nova.network.neutron [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Refreshing network info cache for port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.152 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.153 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.158 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.158 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.163 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.163 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:26:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.165 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.165 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.166 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.166 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.166 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.168 257641 INFO nova.compute.manager [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Terminating instance#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.169 257641 DEBUG nova.compute.manager [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:26:37 np0005539550 kernel: tap8e96d4f5-b3 (unregistering): left promiscuous mode
Nov 29 03:26:37 np0005539550 NetworkManager[49039]: <info>  [1764404797.2197] device (tap8e96d4f5-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:26:37 np0005539550 virtqemud[256287]: An error occurred, but the cause is unknown
Nov 29 03:26:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:37Z|00637|binding|INFO|Releasing lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 from this chassis (sb_readonly=0)
Nov 29 03:26:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:37Z|00638|binding|INFO|Setting lport 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 down in Southbound
Nov 29 03:26:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:37Z|00639|binding|INFO|Removing iface tap8e96d4f5-b3 ovn-installed in OVS
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.229 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.235 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:4a:3a 10.100.0.14'], port_security=['fa:16:3e:4d:4a:3a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '870400f0-dfee-46aa-85e0-ee30dae2ee74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-38642321-47cd-438d-bcd2-dc522b9bd850', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd47b7209-ced1-414f-bc97-b5859dfea11d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=743557d6-422c-40e3-9124-4df909a58562, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.236 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 in datapath 38642321-47cd-438d-bcd2-dc522b9bd850 unbound from our chassis#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.238 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 38642321-47cd-438d-bcd2-dc522b9bd850, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.239 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1d2490d2-c6b7-4952-92cf-8b3233fd00d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.240 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 namespace which is not needed anymore#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.258 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:37 np0005539550 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000088.scope: Deactivated successfully.
Nov 29 03:26:37 np0005539550 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000088.scope: Consumed 14.096s CPU time.
Nov 29 03:26:37 np0005539550 systemd-machined[216673]: Machine qemu-75-instance-00000088 terminated.
Nov 29 03:26:37 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344954]: [NOTICE]   (344958) : haproxy version is 2.8.14-c23fe91
Nov 29 03:26:37 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344954]: [NOTICE]   (344958) : path to executable is /usr/sbin/haproxy
Nov 29 03:26:37 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344954]: [WARNING]  (344958) : Exiting Master process...
Nov 29 03:26:37 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344954]: [ALERT]    (344958) : Current worker (344960) exited with code 143 (Terminated)
Nov 29 03:26:37 np0005539550 neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850[344954]: [WARNING]  (344958) : All workers exited. Exiting... (0)
Nov 29 03:26:37 np0005539550 systemd[1]: libpod-1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e.scope: Deactivated successfully.
Nov 29 03:26:37 np0005539550 conmon[344954]: conmon 1bdedee47a3ef96108ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e.scope/container/memory.events
Nov 29 03:26:37 np0005539550 podman[346008]: 2025-11-29 08:26:37.395397234 +0000 UTC m=+0.053601833 container died 1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.406 257641 INFO nova.virt.libvirt.driver [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Instance destroyed successfully.#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.406 257641 DEBUG nova.objects.instance [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid 870400f0-dfee-46aa-85e0-ee30dae2ee74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.422 257641 DEBUG nova.virt.libvirt.vif [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:25:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1030237697',display_name='tempest-TestNetworkAdvancedServerOps-server-1030237697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1030237697',id=136,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL8hSICEdc8yh+RVv5/YT7HqxmkgTR1xm7P0ua/Xm6wYuxoPU/VIwfyVEtyXd/Lmc9z/b0hFgS+TiHem2uVcK1wX7nmwhbeKiJ77cXnXXanqoxFn/BxMiwo6s/prT9Qlfw==',key_name='tempest-TestNetworkAdvancedServerOps-802837668',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-pkfa0e3j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:26:15Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=870400f0-dfee-46aa-85e0-ee30dae2ee74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.423 257641 DEBUG nova.network.os_vif_util [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.424 257641 DEBUG nova.network.os_vif_util [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:4a:3a,bridge_name='br-int',has_traffic_filtering=True,id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7,network=Network(38642321-47cd-438d-bcd2-dc522b9bd850),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e96d4f5-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.425 257641 DEBUG os_vif [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:4a:3a,bridge_name='br-int',has_traffic_filtering=True,id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7,network=Network(38642321-47cd-438d-bcd2-dc522b9bd850),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e96d4f5-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.427 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e96d4f5-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
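The DelPortCommand above is ovsdbapp's Open_vSwitch-schema port removal running inside an IDL transaction. A minimal sketch of issuing the same command standalone — the ovsdb socket path is an assumption (adjust per deployment); everything else is copied from the logged transaction:

```python
# Standalone sketch of the logged transaction:
#   DelPortCommand(port=tap8e96d4f5-b3, bridge=br-int, if_exists=True)
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Commits one transaction that drops the tap port from br-int; if_exists
# makes it a no-op when the port is already gone (as on retries).
api.del_port('tap8e96d4f5-b3', bridge='br-int',
             if_exists=True).execute(check_error=True)
```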
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.436 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.443 257641 INFO os_vif [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:4a:3a,bridge_name='br-int',has_traffic_filtering=True,id=8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7,network=Network(38642321-47cd-438d-bcd2-dc522b9bd850),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e96d4f5-b3')#033[00m
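The unplug path traced from 08:26:37.422 to .443 is nova handing the converted VIFOpenVSwitch to os-vif. A sketch of the consumer-side calls — VIF fields copied from the log, InstanceInfo values reconstructed from the instance dump above; treat this as an illustration of the os-vif entry points, not nova's actual wiring:

```python
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the registered plugins, including 'ovs'

net = network.Network(id='38642321-47cd-438d-bcd2-dc522b9bd850',
                      bridge='br-int')
ovs_vif = vif.VIFOpenVSwitch(
    id='8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7',
    address='fa:16:3e:4d:4a:3a',
    vif_name='tap8e96d4f5-b3',
    bridge_name='br-int',
    plugin='ovs',          # selects the plugin unplug() dispatches to
    network=net)
inst = instance_info.InstanceInfo(
    uuid='870400f0-dfee-46aa-85e0-ee30dae2ee74',
    name='tempest-TestNetworkAdvancedServerOps-server-1030237697')

# Produces the "Unplugging vif ..." / "Successfully unplugged ..." pair.
os_vif.unplug(ovs_vif, inst)
```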
Nov 29 03:26:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e-userdata-shm.mount: Deactivated successfully.
Nov 29 03:26:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9ac24c5f78605b5361766974c48281666d549a7858b70e3d56aab8899efe6e8a-merged.mount: Deactivated successfully.
Nov 29 03:26:37 np0005539550 podman[346008]: 2025-11-29 08:26:37.457259205 +0000 UTC m=+0.115463814 container cleanup 1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:26:37 np0005539550 systemd[1]: libpod-conmon-1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e.scope: Deactivated successfully.
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.479 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.485 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3735MB free_disk=20.71929931640625GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.486 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.486 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:37 np0005539550 podman[346065]: 2025-11-29 08:26:37.52520392 +0000 UTC m=+0.042449119 container remove 1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.531 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5d816d-b011-45da-a494-fd95daab1bea]: (4, ('Sat Nov 29 08:26:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 (1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e)\n1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e\nSat Nov 29 08:26:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 (1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e)\n1bdedee47a3ef96108ca907dba7d131a0d95a13a5737e1d1be5d7c4b6011e07e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.533 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[116a31ce-4c66-4b99-8027-f71fc09840d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.534 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap38642321-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.535 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:37 np0005539550 kernel: tap38642321-40: left promiscuous mode
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.552 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.553 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ecf367-9fea-49ed-8285-c4c936d32fcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.570 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 4702f4ee-458d-4146-b9b2-70ecf718176c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.571 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 346e849d-fa61-4451-b34c-d6165fea3aa4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.571 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 870400f0-dfee-46aa-85e0-ee30dae2ee74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.571 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.571 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9b439ba0-aa60-4e34-9241-864d6e9d152d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.571 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.572 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c2bb171e-4f02-4486-a018-8e8c13bf3154]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.589 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2e173810-f0bb-4a4d-b229-c813f8e8f85e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 782250, 'reachable_time': 36203, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346083, 'error': None, 'target': 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:26:37 np0005539550 systemd[1]: run-netns-ovnmeta\x2d38642321\x2d47cd\x2d438d\x2dbcd2\x2ddc522b9bd850.mount: Deactivated successfully.
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.594 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:26:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:37.594 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[9bac247a-3939-4061-8882-f4b9847877cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
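The remove_netns call above runs inside the privsep daemon; stripped of that plumbing, the namespace deletion reduces to one pyroute2 call. A minimal sketch (requires root / CAP_SYS_ADMIN; the name is taken from the log):

```python
from pyroute2 import netns

NS = 'ovnmeta-38642321-47cd-438d-bcd2-dc522b9bd850'

if NS in netns.listnetns():
    netns.remove(NS)  # unlinks /var/run/netns/<NS>
```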
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.678 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.905 257641 INFO nova.virt.libvirt.driver [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Deleting instance files /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74_del#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.906 257641 INFO nova.virt.libvirt.driver [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Deletion of /var/lib/nova/instances/870400f0-dfee-46aa-85e0-ee30dae2ee74_del complete#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.972 257641 INFO nova.compute.manager [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.973 257641 DEBUG oslo.service.loopingcall [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.973 257641 DEBUG nova.compute.manager [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:26:37 np0005539550 nova_compute[257631]: 2025-11-29 08:26:37.973 257641 DEBUG nova.network.neutron [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:26:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 662 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 317 op/s
Nov 29 03:26:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2415913008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.105 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
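The resource audit shells out to exactly the command shown above. A sketch that reruns it and reads the cluster totals from the JSON — the command-line flags are verbatim from the log, while the 'stats' field names assume a recent Ceph release, so verify against your version:

```python
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    check=True, capture_output=True, text=True).stdout

stats = json.loads(out)['stats']
print('avail %.1f GiB of %.1f GiB' % (
    stats['total_avail_bytes'] / 2**30,
    stats['total_bytes'] / 2**30))
```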
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.110 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.129 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
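Placement derives schedulable capacity from this inventory as (total - reserved) × allocation_ratio per resource class, which is why 3 allocated VCPUs against a nominal total of 8 still leaves ample headroom. A worked check of the logged numbers:

```python
# capacity = (total - reserved) * allocation_ratio, per resource class,
# using the inventory values from the log line above.
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
# VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1 -- so the 3 allocated
# VCPUs consume under a tenth of the schedulable CPU capacity.
```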
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.151 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.151 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.220 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.916 257641 DEBUG nova.network.neutron [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updated VIF entry in instance network info cache for port 8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.917 257641 DEBUG nova.network.neutron [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updating instance_info_cache with network_info: [{"id": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "address": "fa:16:3e:4d:4a:3a", "network": {"id": "38642321-47cd-438d-bcd2-dc522b9bd850", "bridge": "br-int", "label": "tempest-network-smoke--115313789", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e96d4f5-b3", "ovs_interfaceid": "8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:38 np0005539550 nova_compute[257631]: 2025-11-29 08:26:38.936 257641 DEBUG oslo_concurrency.lockutils [req-d530d103-1c65-4e14-9a43-4f4d85f4814f req-bed1a763-2117-4653-a101-017a3af0d013 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-870400f0-dfee-46aa-85e0-ee30dae2ee74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:26:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:39.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.136 257641 DEBUG nova.network.neutron [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.152 257641 INFO nova.compute.manager [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Took 1.18 seconds to deallocate network for instance.#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.174 257641 DEBUG nova.compute.manager [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-unplugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.174 257641 DEBUG oslo_concurrency.lockutils [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.175 257641 DEBUG oslo_concurrency.lockutils [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.175 257641 DEBUG oslo_concurrency.lockutils [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.175 257641 DEBUG nova.compute.manager [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] No waiting events found dispatching network-vif-unplugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.175 257641 DEBUG nova.compute.manager [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-unplugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.176 257641 DEBUG nova.compute.manager [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.176 257641 DEBUG oslo_concurrency.lockutils [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.176 257641 DEBUG oslo_concurrency.lockutils [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.176 257641 DEBUG oslo_concurrency.lockutils [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.176 257641 DEBUG nova.compute.manager [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] No waiting events found dispatching network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.177 257641 WARNING nova.compute.manager [req-a083c9f6-ea68-4c0e-a826-bcd22c0d4a60 req-80222a16-72c1-40a6-b714-8525c04557de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received unexpected event network-vif-plugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 for instance with vm_state active and task_state deleting.#033[00m
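"No waiting events found" means no thread had registered a waiter for these network-vif events before they arrived from neutron. A generic sketch of the pop-event-under-lock pattern the lines above trace (names are illustrative, not nova's actual classes):

```python
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # {instance_uuid: {event_name: threading.Event}}

    def prepare(self, uuid, name):
        """Register a waiter before triggering the external operation."""
        with self._lock:
            return self._events.setdefault(uuid, {}).setdefault(
                name, threading.Event())

    def pop(self, uuid, name):
        """Return the waiter for (uuid, name), or None if nobody waits."""
        with self._lock:
            return self._events.get(uuid, {}).pop(name, None)

events = InstanceEvents()
ev = events.pop('870400f0-dfee-46aa-85e0-ee30dae2ee74',
                'network-vif-unplugged-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7')
if ev is None:
    print('No waiting events found')  # matches the DEBUG line above
else:
    ev.set()  # wake the thread blocked on prepare(...).wait()
```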
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.190 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.190 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.251 257641 DEBUG oslo_concurrency.processutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.513 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:26:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4170868954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.701 257641 DEBUG oslo_concurrency.processutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.707 257641 DEBUG nova.compute.provider_tree [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.727 257641 DEBUG nova.scheduler.client.report [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.756 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.788 257641 INFO nova.scheduler.client.report [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance 870400f0-dfee-46aa-85e0-ee30dae2ee74#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.850 257641 DEBUG oslo_concurrency.lockutils [None req-7c668cba-489c-4ce2-88c7-75cc1f8957db fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "870400f0-dfee-46aa-85e0-ee30dae2ee74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:26:39 np0005539550 nova_compute[257631]: 2025-11-29 08:26:39.916 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 666 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.7 MiB/s wr, 388 op/s
Nov 29 03:26:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:40.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:40.469 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:26:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:40.469 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
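The matched SbGlobalUpdateEvent is an ovsdbapp row event keyed on SB_Global updates (nb_cfg moved 41 → 42 here), and the agent deliberately defers its chassis write. A sketch of that event shape with an illustrative handler body (the real agent schedules the delayed Chassis_Private update instead of printing):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        # events=('update',), table='SB_Global', conditions=None --
        # exactly as printed in the matched-event line above.
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        # Deferring the write smooths out herds of agents all reacting
        # to the same nb_cfg bump at once.
        print('nb_cfg now %s; deferring chassis update' % row.nb_cfg)
```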
Nov 29 03:26:40 np0005539550 nova_compute[257631]: 2025-11-29 08:26:40.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:40 np0005539550 nova_compute[257631]: 2025-11-29 08:26:40.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:40 np0005539550 nova_compute[257631]: 2025-11-29 08:26:40.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:26:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:41.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:41 np0005539550 nova_compute[257631]: 2025-11-29 08:26:41.259 257641 DEBUG nova.compute.manager [req-f066404f-75e5-4c41-9fa8-02bc05eba570 req-e843a82a-4216-431b-a6a3-1f97053d9ca0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Received event network-vif-deleted-8e96d4f5-b34a-4a2c-9282-b85aaf64b0b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:26:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.4 MiB/s wr, 490 op/s
Nov 29 03:26:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:42.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Nov 29 03:26:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Nov 29 03:26:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Nov 29 03:26:42 np0005539550 nova_compute[257631]: 2025-11-29 08:26:42.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:42 np0005539550 nova_compute[257631]: 2025-11-29 08:26:42.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:43.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:43 np0005539550 nova_compute[257631]: 2025-11-29 08:26:43.222 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:43Z|00640|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:26:43.472 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
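The DbSetCommand above stamps the processed nb_cfg into Chassis_Private.external_ids (closing the loop on the delayed update from 08:26:40). A sketch of the same write through ovsdbapp's OVN southbound API — the SB endpoint is an assumption (deployments differ), the record UUID and values are from the logged command, and if_exists handling is left to the deployed ovsdbapp version:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_southbound import impl_idl

idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642',  # assumed
                                      'OVN_Southbound')
sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

sb.db_set('Chassis_Private', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
          ('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'})
          ).execute(check_error=True)
```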
Nov 29 03:26:43 np0005539550 nova_compute[257631]: 2025-11-29 08:26:43.490 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 702 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 8.8 MiB/s wr, 476 op/s
Nov 29 03:26:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:44.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Nov 29 03:26:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Nov 29 03:26:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Nov 29 03:26:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:45.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Nov 29 03:26:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Nov 29 03:26:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Nov 29 03:26:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 775 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 14 MiB/s wr, 487 op/s
Nov 29 03:26:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:46.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:47.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:47 np0005539550 nova_compute[257631]: 2025-11-29 08:26:47.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 12 MiB/s wr, 361 op/s
Nov 29 03:26:48 np0005539550 podman[346186]: 2025-11-29 08:26:48.021451941 +0000 UTC m=+0.116205312 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
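The health_status=healthy field above comes from podman running the container's configured test ('/openstack/healthcheck' per the config_data). A sketch of driving and reading the same check by hand — the inspect format string is an assumption that varies across podman versions (.State.Health vs the older .State.Healthcheck):

```python
import subprocess

# Executes the configured healthcheck test inside the container;
# exit code 0 means healthy.
subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'],
               check=True)

# Read back the stored status podman logged above.
status = subprocess.run(
    ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
     'ovn_controller'],
    check=True, capture_output=True, text=True).stdout.strip()
print(status)  # e.g. 'healthy'
```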
Nov 29 03:26:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:48 np0005539550 nova_compute[257631]: 2025-11-29 08:26:48.227 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:49.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:49 np0005539550 nova_compute[257631]: 2025-11-29 08:26:49.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:26:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 770 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 9.9 MiB/s wr, 367 op/s
Nov 29 03:26:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:50.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:50 np0005539550 nova_compute[257631]: 2025-11-29 08:26:50.598 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:51.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 788 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 8.3 MiB/s wr, 324 op/s
Nov 29 03:26:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:52.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Nov 29 03:26:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Nov 29 03:26:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Nov 29 03:26:52 np0005539550 nova_compute[257631]: 2025-11-29 08:26:52.403 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404797.400729, 870400f0-dfee-46aa-85e0-ee30dae2ee74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:26:52 np0005539550 nova_compute[257631]: 2025-11-29 08:26:52.404 257641 INFO nova.compute.manager [-] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:26:52 np0005539550 nova_compute[257631]: 2025-11-29 08:26:52.428 257641 DEBUG nova.compute.manager [None req-28dfb948-bcdc-4b63-b624-cd82fa304be2 - - - - - -] [instance: 870400f0-dfee-46aa-85e0-ee30dae2ee74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
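The Stopped lifecycle event at 08:26:52.403 originates from a libvirt domain-event callback registered at driver startup; nova translates VIR_DOMAIN_EVENT_STOPPED into the "VM Stopped (Lifecycle Event)" INFO line and re-checks power state. A bare-bones sketch of that registration with libvirt-python (illustrative; nova's own loop adds threading and event debouncing):

```python
import libvirt

def lifecycle_cb(conn, dom, event, detail, _opaque):
    if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
        print('[instance: %s] VM Stopped (Lifecycle Event)'
              % dom.UUIDString())

libvirt.virEventRegisterDefaultImpl()   # must precede opening the conn
conn = libvirt.open('qemu:///system')
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                            lifecycle_cb, None)
while True:
    libvirt.virEventRunDefaultImpl()    # blocks and dispatches callbacks
```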
Nov 29 03:26:52 np0005539550 nova_compute[257631]: 2025-11-29 08:26:52.524 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:53.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:53 np0005539550 nova_compute[257631]: 2025-11-29 08:26:53.228 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:26:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 5.3 MiB/s wr, 264 op/s
Nov 29 03:26:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:26:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:26:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:55.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:26:55Z|00641|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:26:55 np0005539550 nova_compute[257631]: 2025-11-29 08:26:55.477 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.4 MiB/s wr, 239 op/s
Nov 29 03:26:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:56.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:26:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:57.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:26:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:26:57 np0005539550 nova_compute[257631]: 2025-11-29 08:26:57.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:57 np0005539550 nova_compute[257631]: 2025-11-29 08:26:57.526 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Nov 29 03:26:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:26:58.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:26:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:26:58 np0005539550 nova_compute[257631]: 2025-11-29 08:26:58.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:26:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:26:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:26:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:26:59.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:26:59
Nov 29 03:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Nov 29 03:26:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:27:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 142 op/s
Nov 29 03:27:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:00 np0005539550 nova_compute[257631]: 2025-11-29 08:27:00.756 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:00 np0005539550 nova_compute[257631]: 2025-11-29 08:27:00.756 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:00 np0005539550 nova_compute[257631]: 2025-11-29 08:27:00.937 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:27:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:01.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:01 np0005539550 nova_compute[257631]: 2025-11-29 08:27:01.063 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:01 np0005539550 nova_compute[257631]: 2025-11-29 08:27:01.063 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:01 np0005539550 nova_compute[257631]: 2025-11-29 08:27:01.070 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:27:01 np0005539550 nova_compute[257631]: 2025-11-29 08:27:01.070 257641 INFO nova.compute.claims [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:27:01 np0005539550 nova_compute[257631]: 2025-11-29 08:27:01.551 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167667500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:01 np0005539550 nova_compute[257631]: 2025-11-29 08:27:01.985 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:01 np0005539550 nova_compute[257631]: 2025-11-29 08:27:01.991 257641 DEBUG nova.compute.provider_tree [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:27:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 600 KiB/s wr, 111 op/s
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.043 257641 DEBUG nova.scheduler.client.report [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:27:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:02.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.140 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.141 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:27:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.225 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.226 257641 DEBUG nova.network.neutron [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.256 257641 INFO nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.309 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.444 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.446 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.447 257641 INFO nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Creating image(s)
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.486 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.521 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.554 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.559 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.583 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.587 257641 DEBUG nova.policy [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fed6803a835e471f9bd60e3236e78e5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4145ed6cde61439ebcc12fae2609b724', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.622 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.623 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.624 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.624 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.647 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:02 np0005539550 nova_compute[257631]: 2025-11-29 08:27:02.650 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:03.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.273 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.307 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.377 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.485 257641 DEBUG nova.objects.instance [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.547 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.547 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Ensure instance console log exists: /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.548 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.548 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.548 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:03 np0005539550 nova_compute[257631]: 2025-11-29 08:27:03.954 257641 DEBUG nova.network.neutron [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Successfully created port: a85ee38f-1a42-4405-9db6-3b9430481c2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:27:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 787 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 508 KiB/s wr, 106 op/s
Nov 29 03:27:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.074 257641 DEBUG nova.network.neutron [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Successfully updated port: a85ee38f-1a42-4405-9db6-3b9430481c2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:27:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.177 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.177 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.178 257641 DEBUG nova.network.neutron [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.236 257641 DEBUG nova.compute.manager [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-changed-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.237 257641 DEBUG nova.compute.manager [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Refreshing instance network info cache due to event network-changed-a85ee38f-1a42-4405-9db6-3b9430481c2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.237 257641 DEBUG oslo_concurrency.lockutils [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.475 257641 DEBUG nova.network.neutron [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:27:05 np0005539550 nova_compute[257631]: 2025-11-29 08:27:05.939 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 811 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.0 MiB/s wr, 176 op/s
Nov 29 03:27:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:06.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.529 257641 DEBUG nova.network.neutron [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updating instance_info_cache with network_info: [{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.577 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.577 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance network_info: |[{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.578 257641 DEBUG oslo_concurrency.lockutils [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.578 257641 DEBUG nova.network.neutron [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Refreshing network info cache for port a85ee38f-1a42-4405-9db6-3b9430481c2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.581 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Start _get_guest_xml network_info=[{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.585 257641 WARNING nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.589 257641 DEBUG nova.virt.libvirt.host [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.590 257641 DEBUG nova.virt.libvirt.host [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.601 257641 DEBUG nova.virt.libvirt.host [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.602 257641 DEBUG nova.virt.libvirt.host [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.604 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.605 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.606 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.606 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.607 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.607 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.607 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.608 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.608 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.609 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.611 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.611 257641 DEBUG nova.virt.hardware [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:27:06 np0005539550 nova_compute[257631]: 2025-11-29 08:27:06.617 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:07.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607126926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.127 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.155 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.159 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:07 np0005539550 podman[346504]: 2025-11-29 08:27:07.335969695 +0000 UTC m=+0.072903052 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:27:07 np0005539550 podman[346505]: 2025-11-29 08:27:07.367773533 +0000 UTC m=+0.090306644 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.586 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740300371' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.621 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.622 257641 DEBUG nova.virt.libvirt.vif [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1118669844',display_name='tempest-TestNetworkAdvancedServerOps-server-1118669844',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1118669844',id=142,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2eAiOSV+krh4MXr8iP8j+dBCGT+k//J/ZPyuE+tL6P1nNcuxcGptaUVBmmJaKLf8QKVBIz4As7JZx61PNWY6qLI1vbGYGzdph76djN/9WJ/AVdkR50U+0UbdGR7HVTA==',key_name='tempest-TestNetworkAdvancedServerOps-1278954502',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-k8t1s0vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:02Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=a97c6d24-5d3c-40d0-a2d9-03e786a12cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.623 257641 DEBUG nova.network.os_vif_util [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.624 257641 DEBUG nova.network.os_vif_util [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
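[annotation] The two entries above show nova converting its legacy VIF dict into an os-vif VIFOpenVSwitch object before handing it to the plugin. A minimal sketch of building the same object directly with the os-vif library (field values copied from the log; the object classes are os-vif's public versioned objects, so treat the exact constructor usage as an assumption):

    # Sketch: the object nova_to_osvif_vif() produced above, built by hand.
    from os_vif.objects import vif as vif_obj

    profile = vif_obj.VIFPortProfileOpenVSwitch(
        interface_id='a85ee38f-1a42-4405-9db6-3b9430481c2c')

    vif = vif_obj.VIFOpenVSwitch(
        id='a85ee38f-1a42-4405-9db6-3b9430481c2c',
        address='fa:16:3e:d1:7f:8f',
        bridge_name='br-int',
        vif_name='tapa85ee38f-1a',        # "devname" in the neutron VIF dict
        has_traffic_filtering=True,       # "port_filter": true in the details
        preserve_on_delete=False,
        active=False,                     # port not yet bound/up at this point
        port_profile=profile,
    )
    print(vif)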
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.625 257641 DEBUG nova.objects.instance [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
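[annotation] "Lazy-loading 'pci_devices'" means the Instance object was fetched without that field and obj_load_attr performs a database round trip on first access (which is why pci_devices shows as <?> in the dump above and as PciDeviceList later). An illustrative sketch of the pattern, not nova's actual implementation:

    # Generic lazy-load: the field is fetched only when first touched, then cached.
    class Instance:
        def __init__(self, uuid):
            self.uuid = uuid
            self._pci_devices = None          # not loaded yet

        @property
        def pci_devices(self):
            if self._pci_devices is None:     # first access triggers the load
                self._pci_devices = self._load_from_db('pci_devices')
            return self._pci_devices

        def _load_from_db(self, field):
            return []                          # stand-in for the real query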
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.651 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <uuid>a97c6d24-5d3c-40d0-a2d9-03e786a12cd8</uuid>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <name>instance-0000008e</name>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1118669844</nova:name>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:27:06</nova:creationTime>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <nova:port uuid="a85ee38f-1a42-4405-9db6-3b9430481c2c">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <entry name="serial">a97c6d24-5d3c-40d0-a2d9-03e786a12cd8</entry>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <entry name="uuid">a97c6d24-5d3c-40d0-a2d9-03e786a12cd8</entry>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:d1:7f:8f"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <target dev="tapa85ee38f-1a"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/console.log" append="off"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:27:07 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:27:07 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:27:07 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:27:07 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
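[annotation] The block above is the complete libvirt domain XML nova generated for this guest: KVM on a q35 machine type, an RBD-backed virtio root disk with three Ceph monitors, a SATA CD-ROM for the config drive, and an ethernet-type tap interface with MTU 1442. A sketch of defining and reading back such a domain with the standard libvirt-python bindings (the file name is hypothetical; nova defines the XML in-process rather than from a file):

    import libvirt

    xml = open('instance-0000008e.xml').read()   # hypothetical copy of the XML above

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)                    # persist the domain definition
    dom.create()                                 # boot it, roughly what spawn does
    print(dom.XMLDesc(0))                        # libvirt echoes back the live XML
    conn.close()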
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.653 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Preparing to wait for external event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.653 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.654 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.654 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
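[annotation] Note the deliberate ordering here: nova registers a waiter for network-vif-plugged-a85ee38f... before it plugs the VIF, so the Neutron event cannot slip through between plug and wait; the brief *-events lock just guards the waiter table. A minimal sketch of that prepare-then-wait pattern using threading.Event (nova's real implementation is eventlet-based):

    import threading

    _events = {}
    _lock = threading.Lock()

    def prepare_for_event(name):
        # Register the waiter first, under a lock, mirroring
        # prepare_for_instance_event above.
        with _lock:
            return _events.setdefault(name, threading.Event())

    def deliver_event(name):
        with _lock:
            ev = _events.get(name)
        if ev:
            ev.set()

    waiter = prepare_for_event('network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c')
    # ... plug the VIF here ...
    waiter.wait(timeout=300)   # nova's vif_plugging_timeout defaults to 300s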
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.655 257641 DEBUG nova.virt.libvirt.vif [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1118669844',display_name='tempest-TestNetworkAdvancedServerOps-server-1118669844',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1118669844',id=142,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2eAiOSV+krh4MXr8iP8j+dBCGT+k//J/ZPyuE+tL6P1nNcuxcGptaUVBmmJaKLf8QKVBIz4As7JZx61PNWY6qLI1vbGYGzdph76djN/9WJ/AVdkR50U+0UbdGR7HVTA==',key_name='tempest-TestNetworkAdvancedServerOps-1278954502',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-k8t1s0vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:02Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=a97c6d24-5d3c-40d0-a2d9-03e786a12cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.656 257641 DEBUG nova.network.os_vif_util [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.657 257641 DEBUG nova.network.os_vif_util [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.657 257641 DEBUG os_vif [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.658 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.658 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.659 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.662 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.662 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa85ee38f-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.663 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa85ee38f-1a, col_values=(('external_ids', {'iface-id': 'a85ee38f-1a42-4405-9db6-3b9430481c2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:7f:8f', 'vm-uuid': 'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.664 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:07 np0005539550 NetworkManager[49039]: <info>  [1764404827.6657] manager: (tapa85ee38f-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.668 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.672 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.673 257641 INFO os_vif [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a')#033[00m
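[annotation] The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the programmatic form of the ovs-vsctl calls sketched below; the first transaction "caused no change" because br-int already existed and may_exist=True makes the command idempotent. Values are taken from the log; run as root on the compute node:

    import subprocess

    def vsctl(*args):
        subprocess.run(['ovs-vsctl', *args], check=True)

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    vsctl('--may-exist', 'add-br', 'br-int',
          '--', 'set', 'Bridge', 'br-int', 'datapath_type=system')
    # AddPortCommand(bridge=br-int, port=tapa85ee38f-1a, may_exist=True)
    vsctl('--may-exist', 'add-port', 'br-int', 'tapa85ee38f-1a')
    # DbSetCommand(table=Interface, record=tapa85ee38f-1a, external_ids=...)
    vsctl('set', 'Interface', 'tapa85ee38f-1a',
          'external_ids:iface-id=a85ee38f-1a42-4405-9db6-3b9430481c2c',
          'external_ids:iface-status=active',
          'external_ids:attached-mac=fa:16:3e:d1:7f:8f',
          'external_ids:vm-uuid=a97c6d24-5d3c-40d0-a2d9-03e786a12cd8')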
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.747 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.747 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.747 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:d1:7f:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.748 257641 INFO nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Using config drive#033[00m
Nov 29 03:27:07 np0005539550 nova_compute[257631]: 2025-11-29 08:27:07.781 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 811 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 819 KiB/s rd, 1.0 MiB/s wr, 118 op/s
Nov 29 03:27:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:08.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.107 257641 INFO nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Creating config drive at /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.117 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5y_u7tg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.145 257641 DEBUG nova.network.neutron [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updated VIF entry in instance network info cache for port a85ee38f-1a42-4405-9db6-3b9430481c2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.146 257641 DEBUG nova.network.neutron [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updating instance_info_cache with network_info: [{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.190 257641 DEBUG oslo_concurrency.lockutils [req-81338c9d-384e-4fa9-bda5-8a996ea5346d req-29c555f8-a950-4d12-ad55-41461b8409f6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.253 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5y_u7tg" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
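[annotation] The config drive is built as an ISO 9660 image labelled config-2, the volume label cloud-init and similar guest agents probe for. The exact command from the log, reproduced as a subprocess sketch (/tmp/tmpe5y_u7tg held the generated metadata tree; oslo passes the multi-word publisher as a single argument even though the log renders it unquoted):

    import subprocess

    subprocess.run([
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',          # label guests search for
        '/tmp/tmpe5y_u7tg',
    ], check=True)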
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.281 257641 DEBUG nova.storage.rbd_utils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.285 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:27:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.313 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.474 257641 DEBUG oslo_concurrency.processutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.475 257641 INFO nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Deleting local config drive /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config because it was imported into RBD.#033[00m
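[annotation] Because this deployment keeps instance disks in Ceph (the domain XML above uses RBD sources), the freshly built ISO is imported into the vms pool and the local copy removed. The same sequence as the two log entries above, sketched with the standard rbd CLI:

    import os
    import subprocess

    local = '/var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config'

    subprocess.run([
        'rbd', 'import', '--pool', 'vms', local,
        'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config',
        '--image-format=2',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ], check=True)
    os.remove(local)   # "Deleting local config drive ... imported into RBD"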
Nov 29 03:27:08 np0005539550 kernel: tapa85ee38f-1a: entered promiscuous mode
Nov 29 03:27:08 np0005539550 NetworkManager[49039]: <info>  [1764404828.5319] manager: (tapa85ee38f-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Nov 29 03:27:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:08Z|00642|binding|INFO|Claiming lport a85ee38f-1a42-4405-9db6-3b9430481c2c for this chassis.
Nov 29 03:27:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:08Z|00643|binding|INFO|a85ee38f-1a42-4405-9db6-3b9430481c2c: Claiming fa:16:3e:d1:7f:8f 10.100.0.11
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.534 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.541 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:7f:8f 10.100.0.11'], port_security=['fa:16:3e:d1:7f:8f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24a46b48-0082-4428-9f80-5c459b7611cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0839f4fe-1173-477f-af5d-a589b6c94383', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ffd2286-10f4-4db8-b6f5-d8ec11414b0c, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a85ee38f-1a42-4405-9db6-3b9430481c2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.542 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a85ee38f-1a42-4405-9db6-3b9430481c2c in datapath 24a46b48-0082-4428-9f80-5c459b7611cb bound to our chassis#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.544 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24a46b48-0082-4428-9f80-5c459b7611cb#033[00m
Nov 29 03:27:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:08Z|00644|binding|INFO|Setting lport a85ee38f-1a42-4405-9db6-3b9430481c2c ovn-installed in OVS
Nov 29 03:27:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:08Z|00645|binding|INFO|Setting lport a85ee38f-1a42-4405-9db6-3b9430481c2c up in Southbound
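[annotation] ovn-controller has now claimed the logical port for this chassis, marked the OVS interface ovn-installed, and flipped the port up in the Southbound database; that up transition is what ultimately flows back to nova as the network-vif-plugged event registered earlier. A hedged sketch for inspecting the binding with ovn-sbctl (assumes the CLI can reach the southbound DB):

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=a85ee38f-1a42-4405-9db6-3b9430481c2c'],
        capture_output=True, text=True, check=True)
    print(out.stdout)   # chassis should now name this host, up should be true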
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.556 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ebcae67e-0e36-4235-89f7-8fa1a50a9db2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.557 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24a46b48-01 in ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.557 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.559 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24a46b48-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.559 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[02c5d9ad-9817-40de-a98f-350eefbcd187]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.560 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[697ee306-02a3-4cf9-81b3-ce9476cd5efc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.561 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539550 systemd-udevd[346636]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.572 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[793e250c-46e7-43be-8395-c149dec49bc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 systemd-machined[216673]: New machine qemu-76-instance-0000008e.
Nov 29 03:27:08 np0005539550 NetworkManager[49039]: <info>  [1764404828.5797] device (tapa85ee38f-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:08 np0005539550 NetworkManager[49039]: <info>  [1764404828.5804] device (tapa85ee38f-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:08 np0005539550 systemd[1]: Started Virtual Machine qemu-76-instance-0000008e.
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.585 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[42ad8ab5-cf33-484e-8543-db2aa6282bf1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.612 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0781112f-7786-4574-8e85-1d411e6abeb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 NetworkManager[49039]: <info>  [1764404828.6195] manager: (tap24a46b48-00): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.618 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0769d107-535b-46df-a804-ab21ef31f2fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.649 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5a45b581-0361-43bd-a57c-f4ae1e2ffd2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.653 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[95341c7f-83e8-4104-a72e-39023513b5e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 NetworkManager[49039]: <info>  [1764404828.6745] device (tap24a46b48-00): carrier: link connected
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.680 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[adeb7e7b-18f3-4b39-8411-12d78a38c5d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.697 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[47c5bcf9-41c5-45fb-b4a3-b4d230aa6674]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24a46b48-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:e6:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787633, 'reachable_time': 24108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346670, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.715 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[91297831-ed89-4c3b-acdf-8164c1ea697f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:e67c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787633, 'tstamp': 787633}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346671, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.736 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[79b77421-ec9a-46f9-b964-b6647abffd95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24a46b48-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:e6:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787633, 'reachable_time': 24108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346672, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
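[annotation] The privsep traffic and the two large RTM_NEWLINK dumps above are the metadata agent building a per-network namespace, ovnmeta-24a46b48-..., with a veth pair tap24a46b48-00/-01 whose -01 end lives inside the namespace (the dumps are it re-reading that inner interface). The agent does this through pyroute2 under oslo.privsep; roughly equivalent ip(8) plumbing, sketched via subprocess with names taken from the log:

    import subprocess

    ns = 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb'

    def ip(*args):
        subprocess.run(['ip', *args], check=True)

    ip('netns', 'add', ns)
    ip('link', 'add', 'tap24a46b48-00', 'type', 'veth',
       'peer', 'name', 'tap24a46b48-01')
    ip('link', 'set', 'tap24a46b48-01', 'netns', ns)   # inner end into the namespace
    ip('link', 'set', 'tap24a46b48-00', 'up')
    ip('-n', ns, 'link', 'set', 'tap24a46b48-01', 'up')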
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.764 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c032e37f-0dc6-4ff7-818e-3875e1a1fa1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.816 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a2a6fe-8e97-4054-9f4c-662b12ca7c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.817 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24a46b48-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.817 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.818 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24a46b48-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.819 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539550 NetworkManager[49039]: <info>  [1764404828.8202] manager: (tap24a46b48-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Nov 29 03:27:08 np0005539550 kernel: tap24a46b48-00: entered promiscuous mode
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.825 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24a46b48-00, col_values=(('external_ids', {'iface-id': '6c99cbbb-1c2a-4d06-b883-6a56e3365cd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.826 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:08Z|00646|binding|INFO|Releasing lport 6c99cbbb-1c2a-4d06-b883-6a56e3365cd3 from this chassis (sb_readonly=0)
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.827 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.830 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24a46b48-0082-4428-9f80-5c459b7611cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24a46b48-0082-4428-9f80-5c459b7611cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.835 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0e33e1b9-007d-468d-a497-720482b60a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.836 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-24a46b48-0082-4428-9f80-5c459b7611cb
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/24a46b48-0082-4428-9f80-5c459b7611cb.pid.haproxy
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 24a46b48-0082-4428-9f80-5c459b7611cb
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:08 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:08.836 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'env', 'PROCESS_TAG=haproxy-24a46b48-0082-4428-9f80-5c459b7611cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24a46b48-0082-4428-9f80-5c459b7611cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
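
The rendered haproxy_cfg above is the per-network metadata proxy: haproxy is launched through neutron-rootwrap inside the ovnmeta-24a46b48-... namespace, binds the link-local 169.254.169.254:80, forwards to the unix-socket backend /var/lib/neutron/metadata_proxy (haproxy treats a server address beginning with '/' as a UNIX socket), and stamps each request with X-OVN-Network-ID so the agent can resolve which network the instance sits on. An illustrative smoke test, to be run under 'ip netns exec ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb':

    import urllib.request

    # 169.254.169.254 is the address the listener above binds;
    # /openstack/latest/meta_data.json is a standard metadata endpoint.
    url = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.status, resp.read()[:200])
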
Nov 29 03:27:08 np0005539550 nova_compute[257631]: 2025-11-29 08:27:08.843 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:27:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:27:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:27:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:27:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:27:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
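
These radosgw triplets repeat roughly every second, alternating between 192.168.122.100 and .102, always an anonymous "HEAD / HTTP/1.0" answered 200 with zero bytes: the signature of load-balancer health probes rather than user traffic. A throwaway parser for the beast access-log line, with the field layout inferred from the lines themselves rather than from radosgw documentation:

    import re

    # Groups: client IP, user, timestamp, request line, status, bytes, latency.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:27:09.104 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('req'), m.group('status'), m.group('latency'))
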
Nov 29 03:27:09 np0005539550 podman[346740]: 2025-11-29 08:27:09.220249834 +0000 UTC m=+0.047116907 container create 22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:27:09 np0005539550 systemd[1]: Started libpod-conmon-22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6.scope.
Nov 29 03:27:09 np0005539550 podman[346740]: 2025-11-29 08:27:09.196264615 +0000 UTC m=+0.023131708 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:09 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec3f0cfe8558de0b6deddf7bcf9938f9213d1c1635b7d60030fafe5b14a56cb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:09 np0005539550 podman[346740]: 2025-11-29 08:27:09.32205638 +0000 UTC m=+0.148923453 container init 22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:09 np0005539550 podman[346740]: 2025-11-29 08:27:09.328111503 +0000 UTC m=+0.154978576 container start 22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:27:09 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[346756]: [NOTICE]   (346765) : New worker (346767) forked
Nov 29 03:27:09 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[346756]: [NOTICE]   (346765) : Loading success.
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.388 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.429 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404829.4292226, a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.430 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.510 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.515 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404829.432594, a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.515 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.535 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.539 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:09 np0005539550 nova_compute[257631]: 2025-11-29 08:27:09.941 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 869 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 190 op/s
Nov 29 03:27:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.178 257641 DEBUG nova.compute.manager [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.178 257641 DEBUG oslo_concurrency.lockutils [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.179 257641 DEBUG oslo_concurrency.lockutils [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.179 257641 DEBUG oslo_concurrency.lockutils [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.179 257641 DEBUG nova.compute.manager [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Processing event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.180 257641 DEBUG nova.compute.manager [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.180 257641 DEBUG oslo_concurrency.lockutils [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.180 257641 DEBUG oslo_concurrency.lockutils [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.180 257641 DEBUG oslo_concurrency.lockutils [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.180 257641 DEBUG nova.compute.manager [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] No waiting events found dispatching network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.181 257641 WARNING nova.compute.manager [req-ed642f47-81a3-478c-ba13-050c1f0107f0 req-882e49d5-fb76-4fdb-a993-8c688d7d33f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received unexpected event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.181 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.185 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404830.184814, a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.185 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.188 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.192 257641 INFO nova.virt.libvirt.driver [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance spawned successfully.#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.193 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.221 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.230 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.235 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.235 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.236 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.236 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.237 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.237 257641 DEBUG nova.virt.libvirt.driver [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.262 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.294 257641 INFO nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Took 7.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.295 257641 DEBUG nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.360 257641 INFO nova.compute.manager [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Took 9.32 seconds to build instance.#033[00m
Nov 29 03:27:10 np0005539550 nova_compute[257631]: 2025-11-29 08:27:10.376 257641 DEBUG oslo_concurrency.lockutils [None req-bf72680e-f2bb-4f16-ace6-2a6e8733bdb7 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
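
The Paused-then-Resumed lifecycle pair during the spawn above is the normal libvirt flow: the guest is created paused, its VIF is plugged, and it is then resumed, while sync_power_state declines to act because task_state is still 'spawning'. The numeric pairs in the sync lines (DB power_state 0 against VM power_state 3, then 1) follow the codes in nova.compute.power_state; a small decoder for reading such lines:

    # Codes from nova.compute.power_state: 0=NOSTATE, 1=RUNNING, 3=PAUSED,
    # 4=SHUTDOWN, 6=CRASHED, 7=SUSPENDED.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    def describe_sync(db_state: int, vm_state: int) -> str:
        return f'DB={POWER_STATE[db_state]} VM={POWER_STATE[vm_state]}'

    print(describe_sync(0, 3))  # the 08:27:09.539 "Paused" sync line
    print(describe_sync(0, 1))  # the 08:27:10.230 "Resumed" sync line
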
Nov 29 03:27:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:11.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 882 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 251 op/s
Nov 29 03:27:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:12 np0005539550 nova_compute[257631]: 2025-11-29 08:27:12.667 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:12 np0005539550 nova_compute[257631]: 2025-11-29 08:27:12.961 257641 DEBUG nova.compute.manager [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-changed-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:12 np0005539550 nova_compute[257631]: 2025-11-29 08:27:12.961 257641 DEBUG nova.compute.manager [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Refreshing instance network info cache due to event network-changed-a85ee38f-1a42-4405-9db6-3b9430481c2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:12 np0005539550 nova_compute[257631]: 2025-11-29 08:27:12.961 257641 DEBUG oslo_concurrency.lockutils [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:12 np0005539550 nova_compute[257631]: 2025-11-29 08:27:12.961 257641 DEBUG oslo_concurrency.lockutils [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:12 np0005539550 nova_compute[257631]: 2025-11-29 08:27:12.962 257641 DEBUG nova.network.neutron [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Refreshing network info cache for port a85ee38f-1a42-4405-9db6-3b9430481c2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:13.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:13 np0005539550 nova_compute[257631]: 2025-11-29 08:27:13.276 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 882 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.6 MiB/s wr, 305 op/s
Nov 29 03:27:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:15.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 916 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.3 MiB/s rd, 5.1 MiB/s wr, 456 op/s
Nov 29 03:27:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:16.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:16 np0005539550 nova_compute[257631]: 2025-11-29 08:27:16.998 257641 DEBUG nova.network.neutron [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updated VIF entry in instance network info cache for port a85ee38f-1a42-4405-9db6-3b9430481c2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:17 np0005539550 nova_compute[257631]: 2025-11-29 08:27:17.000 257641 DEBUG nova.network.neutron [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updating instance_info_cache with network_info: [{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
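
The instance_info_cache payload above carries the whole VIF picture: port a85ee38f-... bound by the ovn driver on br-int, fixed IP 10.100.0.11 in 10.100.0.0/28 with floating IP 192.168.122.189, MTU 1442 on a tunneled network. A sketch of pulling the address mapping out of such a payload, trimmed to just the fields used here:

    import json

    # 'payload' stands in for the JSON list logged by
    # update_instance_cache_with_nw_info, reduced to the relevant keys.
    payload = '''[{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11",
          "floating_ips": [{"address": "192.168.122.189"}]}]}]},
      "devname": "tapa85ee38f-1a"}]'''

    for vif in json.loads(payload):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                fips = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['devname'], ip['address'], '->', fips)
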
Nov 29 03:27:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:17.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:17 np0005539550 nova_compute[257631]: 2025-11-29 08:27:17.320 257641 DEBUG oslo_concurrency.lockutils [req-00e267de-834b-468c-bf71-69c7caf09ef4 req-791a748f-5bc0-46cc-9b61-7df72cb5268c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:17 np0005539550 nova_compute[257631]: 2025-11-29 08:27:17.671 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 916 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.1 MiB/s wr, 368 op/s
Nov 29 03:27:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:18.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:18 np0005539550 nova_compute[257631]: 2025-11-29 08:27:18.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:18 np0005539550 podman[346783]: 2025-11-29 08:27:18.369754564 +0000 UTC m=+0.098708628 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 03:27:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:18.959 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:18.960 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:18.961 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:19.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 928 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.5 MiB/s rd, 4.4 MiB/s wr, 413 op/s
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.017003965138190955 of space, bias 1.0, pg target 5.101189541457287 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002680866717848502 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007032958484552392 of space, bias 1.0, pg target 2.0747227529429555 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017041224641727154 quantized to 16 (current 16)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021301530802158943 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018106301181835102 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:27:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042603061604317886 quantized to 32 (current 32)
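
Each pg_autoscaler pool line above is the same calculation: the pool's share of the root's capacity, times its bias, times the root's PG budget (about 3 OSDs x the default mon_target_pg_per_osd of 100, i.e. roughly 300; the logged values imply small per-pool adjustments to that budget), then rounded to a power of two but never below the pool's pg_num_min. An approximate reconstruction under those assumptions, which reproduces the quantized values shown:

    import math

    def pg_target(capacity_ratio, bias, pg_budget=300, pg_num_min=32):
        raw = capacity_ratio * bias * pg_budget        # the logged "pg target"
        pow2 = 1 if raw <= 1 else 2 ** math.ceil(math.log2(raw))
        return max(pow2, pg_num_min), raw

    # 'vms': raw ~5.101 as logged, quantized up to the default floor of 32.
    print(pg_target(0.017003965138190955, 1.0))
    # '.mgr' carries pg_num_min=1: raw ~0.00616 as logged, quantized to 1.
    print(pg_target(2.0538165363856318e-05, 1.0, pg_num_min=1))
    # cephfs metadata pools default to pg_num_min=16: quantized to 16.
    print(pg_target(1.4540294062907128e-06, 4.0, pg_num_min=16))
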
Nov 29 03:27:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:20.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:21.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 2.1 MiB/s wr, 340 op/s
Nov 29 03:27:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:22 np0005539550 nova_compute[257631]: 2025-11-29 08:27:22.675 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:22Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:7f:8f 10.100.0.11
Nov 29 03:27:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:22Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:7f:8f 10.100.0.11
Nov 29 03:27:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:23.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:23 np0005539550 nova_compute[257631]: 2025-11-29 08:27:23.280 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 909 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 2.3 MiB/s wr, 322 op/s
Nov 29 03:27:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.9 MiB/s wr, 384 op/s
Nov 29 03:27:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:26.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:27 np0005539550 nova_compute[257631]: 2025-11-29 08:27:27.678 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 222 op/s
Nov 29 03:27:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:28.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:27:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:28 np0005539550 nova_compute[257631]: 2025-11-29 08:27:28.283 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:27:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:29.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2c75fa4f-cfc3-4c6c-97e9-dd86bb73cc6b does not exist
Nov 29 03:27:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f312445-7dfa-4a93-b535-76b09dc5e715 does not exist
Nov 29 03:27:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ebef7dfb-3823-4f52-ac8e-b146251da709 does not exist
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Nov 29 03:27:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
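
The audit lines above show how the cephadm mgr module drives the cluster: every operation is a structured mon command, a JSON object with a "prefix" plus arguments, dispatched to the leader mon. As a rough illustration, the same "osd tree" query can be issued from any host holding a ceph.conf and admin keyring through the librados Python binding (the payload below mirrors the logged command; the paths are assumptions):

    # Minimal sketch, assuming /etc/ceph/ceph.conf and a client.admin keyring.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Same payload as the audited command above.
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"],
                          "format": "json"})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(f"mon_command failed: {ret} {errs}")
        nodes = json.loads(outbuf).get("nodes", [])
        destroyed = [n["id"] for n in nodes if n.get("status") == "destroyed"]
        print("destroyed OSDs:", destroyed)
    finally:
        cluster.shutdown()
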
Nov 29 03:27:29 np0005539550 podman[347261]: 2025-11-29 08:27:29.765673602 +0000 UTC m=+0.022051501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:29 np0005539550 podman[347261]: 2025-11-29 08:27:29.957609556 +0000 UTC m=+0.213987445 container create 1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:27:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 915 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.6 MiB/s wr, 282 op/s
Nov 29 03:27:30 np0005539550 nova_compute[257631]: 2025-11-29 08:27:30.088 257641 INFO nova.compute.manager [None req-f6d83d9d-0e39-4111-8dcd-e93d7030a225 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Get console output#033[00m
Nov 29 03:27:30 np0005539550 nova_compute[257631]: 2025-11-29 08:27:30.096 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:27:30 np0005539550 systemd[1]: Started libpod-conmon-1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9.scope.
Nov 29 03:27:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:30.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
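
Each starting/done/beast triple from radosgw is one request; the steady stream of anonymous "HEAD / HTTP/1.0" 200 responses from 192.168.122.100 and .102 looks like a load-balancer health probe rather than client traffic. A probe like it can be reproduced with the standard library (host and port below are assumptions; substitute the real beast frontend address):

    # Hypothetical health probe against the RGW beast frontend.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)  # port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # a healthy radosgw answers 200
    conn.close()
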
Nov 29 03:27:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:30 np0005539550 podman[347261]: 2025-11-29 08:27:30.155717807 +0000 UTC m=+0.412095686 container init 1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:27:30 np0005539550 podman[347261]: 2025-11-29 08:27:30.166050439 +0000 UTC m=+0.422428338 container start 1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:30 np0005539550 podman[347261]: 2025-11-29 08:27:30.17003003 +0000 UTC m=+0.426407989 container attach 1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:27:30 np0005539550 vigorous_mirzakhani[347277]: 167 167
Nov 29 03:27:30 np0005539550 systemd[1]: libpod-1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9.scope: Deactivated successfully.
Nov 29 03:27:30 np0005539550 conmon[347277]: conmon 1e9b683a2d9067125a46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9.scope/container/memory.events
Nov 29 03:27:30 np0005539550 podman[347261]: 2025-11-29 08:27:30.176135386 +0000 UTC m=+0.432513265 container died 1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:27:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-703e041674841af914ed8108535deecec2766b29bb45b93020bb581b899cfb6b-merged.mount: Deactivated successfully.
Nov 29 03:27:30 np0005539550 podman[347261]: 2025-11-29 08:27:30.650348988 +0000 UTC m=+0.906726867 container remove 1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:30 np0005539550 systemd[1]: libpod-conmon-1e9b683a2d9067125a462a2f49c2575896af1e85929fbdac97743f70b92b3eb9.scope: Deactivated successfully.
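
The create → init → start → attach → died → remove sequence above is the full life of one short-lived helper container (podman assigned it the random name vigorous_mirzakhani); the "167 167" it printed is the ceph uid/gid pair, which cephadm probes before writing files into daemon directories. Something close to that round trip, assuming it corresponds to a plain `podman run --rm` (the probed path and stat invocation are assumptions):

    # Sketch of a one-shot probe container, assuming `podman run --rm`.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # e.g. "167 167", matching the logged container output
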
Nov 29 03:27:30 np0005539550 podman[347301]: 2025-11-29 08:27:30.891117943 +0000 UTC m=+0.105769757 container create 26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:27:30 np0005539550 podman[347301]: 2025-11-29 08:27:30.808128085 +0000 UTC m=+0.022779899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:30 np0005539550 systemd[1]: Started libpod-conmon-26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9.scope.
Nov 29 03:27:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c156a3e971bd43a5ba87ffccfb58eaf5e9070d6bea5807c31a42e6a82b2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c156a3e971bd43a5ba87ffccfb58eaf5e9070d6bea5807c31a42e6a82b2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c156a3e971bd43a5ba87ffccfb58eaf5e9070d6bea5807c31a42e6a82b2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c156a3e971bd43a5ba87ffccfb58eaf5e9070d6bea5807c31a42e6a82b2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f82c156a3e971bd43a5ba87ffccfb58eaf5e9070d6bea5807c31a42e6a82b2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:31 np0005539550 podman[347301]: 2025-11-29 08:27:31.038507426 +0000 UTC m=+0.253159240 container init 26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:27:31 np0005539550 podman[347301]: 2025-11-29 08:27:31.046887548 +0000 UTC m=+0.261539342 container start 26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:27:31 np0005539550 podman[347301]: 2025-11-29 08:27:31.078607594 +0000 UTC m=+0.293259468 container attach 26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.083 257641 INFO nova.compute.manager [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Rebuilding instance#033[00m
Nov 29 03:27:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.328 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.360 257641 DEBUG nova.compute.manager [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.409 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_requests' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.424 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.437 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.452 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'migration_context' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.464 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:27:31 np0005539550 nova_compute[257631]: 2025-11-29 08:27:31.470 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:27:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Nov 29 03:27:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Nov 29 03:27:31 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Nov 29 03:27:31 np0005539550 hopeful_proskuriakova[347318]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:27:31 np0005539550 hopeful_proskuriakova[347318]: --> relative data size: 1.0
Nov 29 03:27:31 np0005539550 hopeful_proskuriakova[347318]: --> All data devices are unavailable
Nov 29 03:27:31 np0005539550 systemd[1]: libpod-26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9.scope: Deactivated successfully.
Nov 29 03:27:31 np0005539550 podman[347301]: 2025-11-29 08:27:31.88255661 +0000 UTC m=+1.097208424 container died 26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:27:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4f82c156a3e971bd43a5ba87ffccfb58eaf5e9070d6bea5807c31a42e6a82b2c-merged.mount: Deactivated successfully.
Nov 29 03:27:31 np0005539550 podman[347301]: 2025-11-29 08:27:31.950188418 +0000 UTC m=+1.164840232 container remove 26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:27:31 np0005539550 systemd[1]: libpod-conmon-26f6a014c7e3241b784111f454e578dd30eb5033472a7472db1f9fe7901425f9.scope: Deactivated successfully.
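
hopeful_proskuriakova is ceph-volume evaluating the OSD drive group: it was passed one LVM data device, found it already consumed, and reported "All data devices are unavailable", so no new OSD gets created. Device availability can be checked the same way from the host, assuming the ceph-volume CLI is installed there (inventory is a related command, not necessarily the exact one run in the container):

    # Sketch: list devices and why ceph-volume rejects them.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True).stdout
    for dev in json.loads(out):
        reasons = "; ".join(dev.get("rejected_reasons", []))
        print(dev["path"], "available" if dev["available"] else f"rejected: {reasons}")
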
Nov 29 03:27:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 936 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.7 MiB/s wr, 314 op/s
Nov 29 03:27:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:32.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:32 np0005539550 podman[347486]: 2025-11-29 08:27:32.576330489 +0000 UTC m=+0.041080504 container create 96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:27:32 np0005539550 systemd[1]: Started libpod-conmon-96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b.scope.
Nov 29 03:27:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:32 np0005539550 podman[347486]: 2025-11-29 08:27:32.55666221 +0000 UTC m=+0.021412255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:32 np0005539550 nova_compute[257631]: 2025-11-29 08:27:32.681 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Nov 29 03:27:32 np0005539550 podman[347486]: 2025-11-29 08:27:32.766461277 +0000 UTC m=+0.231211322 container init 96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lumiere, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:32 np0005539550 podman[347486]: 2025-11-29 08:27:32.779077448 +0000 UTC m=+0.243827473 container start 96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:27:32 np0005539550 clever_lumiere[347504]: 167 167
Nov 29 03:27:32 np0005539550 systemd[1]: libpod-96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b.scope: Deactivated successfully.
Nov 29 03:27:32 np0005539550 conmon[347504]: conmon 96cbd7de72e2f3219772 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b.scope/container/memory.events
Nov 29 03:27:32 np0005539550 podman[347486]: 2025-11-29 08:27:32.79768694 +0000 UTC m=+0.262436995 container attach 96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:27:32 np0005539550 podman[347486]: 2025-11-29 08:27:32.798484031 +0000 UTC m=+0.263234066 container died 96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lumiere, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:27:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Nov 29 03:27:32 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Nov 29 03:27:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e9d269b7591599f2053077168e24b1f22dfceaf6a2cd8bdec540c4efdf61adbb-merged.mount: Deactivated successfully.
Nov 29 03:27:32 np0005539550 podman[347486]: 2025-11-29 08:27:32.841620646 +0000 UTC m=+0.306370671 container remove 96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lumiere, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:27:32 np0005539550 systemd[1]: libpod-conmon-96cbd7de72e2f3219772f25e84d5a0574bc083a16bd7c5552558882829bf154b.scope: Deactivated successfully.
Nov 29 03:27:33 np0005539550 podman[347526]: 2025-11-29 08:27:33.020660693 +0000 UTC m=+0.042619953 container create 80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:27:33 np0005539550 systemd[1]: Started libpod-conmon-80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3.scope.
Nov 29 03:27:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca95636ad43e2f0300a631da4bffcf40a7ebd3b331533af2bd1227a6040c89dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca95636ad43e2f0300a631da4bffcf40a7ebd3b331533af2bd1227a6040c89dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca95636ad43e2f0300a631da4bffcf40a7ebd3b331533af2bd1227a6040c89dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca95636ad43e2f0300a631da4bffcf40a7ebd3b331533af2bd1227a6040c89dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:33 np0005539550 podman[347526]: 2025-11-29 08:27:33.004129103 +0000 UTC m=+0.026088383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:33 np0005539550 podman[347526]: 2025-11-29 08:27:33.10796988 +0000 UTC m=+0.129929140 container init 80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:27:33 np0005539550 podman[347526]: 2025-11-29 08:27:33.115512412 +0000 UTC m=+0.137471672 container start 80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:27:33 np0005539550 podman[347526]: 2025-11-29 08:27:33.12016007 +0000 UTC m=+0.142119340 container attach 80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:27:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:33.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:33 np0005539550 nova_compute[257631]: 2025-11-29 08:27:33.289 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539550 kernel: tapa85ee38f-1a (unregistering): left promiscuous mode
Nov 29 03:27:33 np0005539550 NetworkManager[49039]: <info>  [1764404853.7789] device (tapa85ee38f-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:27:33 np0005539550 nova_compute[257631]: 2025-11-29 08:27:33.842 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:33Z|00647|binding|INFO|Releasing lport a85ee38f-1a42-4405-9db6-3b9430481c2c from this chassis (sb_readonly=0)
Nov 29 03:27:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:33Z|00648|binding|INFO|Setting lport a85ee38f-1a42-4405-9db6-3b9430481c2c down in Southbound
Nov 29 03:27:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:33Z|00649|binding|INFO|Removing iface tapa85ee38f-1a ovn-installed in OVS
Nov 29 03:27:33 np0005539550 nova_compute[257631]: 2025-11-29 08:27:33.845 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:33.850 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:7f:8f 10.100.0.11'], port_security=['fa:16:3e:d1:7f:8f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24a46b48-0082-4428-9f80-5c459b7611cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0839f4fe-1173-477f-af5d-a589b6c94383', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ffd2286-10f4-4db8-b6f5-d8ec11414b0c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a85ee38f-1a42-4405-9db6-3b9430481c2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:33.851 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a85ee38f-1a42-4405-9db6-3b9430481c2c in datapath 24a46b48-0082-4428-9f80-5c459b7611cb unbound from our chassis#033[00m
Nov 29 03:27:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:33.853 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24a46b48-0082-4428-9f80-5c459b7611cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:27:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:33.854 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5f013637-eb4e-4f0b-9e7b-961d63947926]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:33 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:33.855 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb namespace which is not needed anymore#033[00m
Nov 29 03:27:33 np0005539550 nova_compute[257631]: 2025-11-29 08:27:33.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:33 np0005539550 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 29 03:27:33 np0005539550 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d0000008e.scope: Consumed 14.330s CPU time.
Nov 29 03:27:33 np0005539550 systemd-machined[216673]: Machine qemu-76-instance-0000008e terminated.
Nov 29 03:27:33 np0005539550 nova_compute[257631]: 2025-11-29 08:27:33.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:33 np0005539550 nova_compute[257631]: 2025-11-29 08:27:33.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:27:33 np0005539550 nova_compute[257631]: 2025-11-29 08:27:33.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]: {
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:    "0": [
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:        {
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "devices": [
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "/dev/loop3"
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            ],
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "lv_name": "ceph_lv0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "lv_size": "7511998464",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "name": "ceph_lv0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "tags": {
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.cluster_name": "ceph",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.crush_device_class": "",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.encrypted": "0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.osd_id": "0",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.type": "block",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:                "ceph.vdo": "0"
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            },
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "type": "block",
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:            "vg_name": "ceph_vg0"
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:        }
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]:    ]
Nov 29 03:27:33 np0005539550 gallant_hellman[347542]: }
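
The JSON block emitted by gallant_hellman has the shape of `ceph-volume lvm list --format json`: a map from OSD id to the logical volumes backing it, carrying the LVM tags cephadm reads when re-activating OSDs. A small parser over a captured copy of that output (the filename is hypothetical):

    # Sketch: map OSD ids to their backing devices from ceph-volume JSON.
    import json

    with open("ceph_volume_lvm_list.json") as f:  # hypothetical capture of the block above
        listing = json.load(f)
    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"devices={','.join(lv['devices'])}")
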
Nov 29 03:27:33 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[346756]: [NOTICE]   (346765) : haproxy version is 2.8.14-c23fe91
Nov 29 03:27:33 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[346756]: [NOTICE]   (346765) : path to executable is /usr/sbin/haproxy
Nov 29 03:27:33 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[346756]: [WARNING]  (346765) : Exiting Master process...
Nov 29 03:27:33 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[346756]: [ALERT]    (346765) : Current worker (346767) exited with code 143 (Terminated)
Nov 29 03:27:33 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[346756]: [WARNING]  (346765) : All workers exited. Exiting... (0)
Nov 29 03:27:33 np0005539550 systemd[1]: libpod-22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6.scope: Deactivated successfully.
Nov 29 03:27:33 np0005539550 podman[347574]: 2025-11-29 08:27:33.986351226 +0000 UTC m=+0.047244151 container died 22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:27:34 np0005539550 systemd[1]: libpod-80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3.scope: Deactivated successfully.
Nov 29 03:27:34 np0005539550 conmon[347542]: conmon 80accbac21654ed61686 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3.scope/container/memory.events
Nov 29 03:27:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:27:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ec3f0cfe8558de0b6deddf7bcf9938f9213d1c1635b7d60030fafe5b14a56cb7-merged.mount: Deactivated successfully.
Nov 29 03:27:34 np0005539550 podman[347526]: 2025-11-29 08:27:34.01565789 +0000 UTC m=+1.037617180 container died 80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.016 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 958 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.3 MiB/s wr, 276 op/s
Nov 29 03:27:34 np0005539550 podman[347574]: 2025-11-29 08:27:34.048870714 +0000 UTC m=+0.109763619 container cleanup 22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:27:34 np0005539550 systemd[1]: libpod-conmon-22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6.scope: Deactivated successfully.
Nov 29 03:27:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ca95636ad43e2f0300a631da4bffcf40a7ebd3b331533af2bd1227a6040c89dd-merged.mount: Deactivated successfully.
Nov 29 03:27:34 np0005539550 podman[347526]: 2025-11-29 08:27:34.096271867 +0000 UTC m=+1.118231137 container remove 80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:27:34 np0005539550 systemd[1]: libpod-conmon-80accbac21654ed6168614a5acc12eaaf3eca17f11b31f8c8c7bda6d10e846c3.scope: Deactivated successfully.
Nov 29 03:27:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:34.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:34 np0005539550 podman[347623]: 2025-11-29 08:27:34.124390771 +0000 UTC m=+0.043555007 container remove 22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.131 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcba15a-e5a3-40c2-bbfe-111686738cf0]: (4, ('Sat Nov 29 08:27:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb (22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6)\n22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6\nSat Nov 29 08:27:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb (22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6)\n22a88d2174e0d223a55af57a68ffe474838eb8a7b0c5865b6bb47b2f915c36b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.133 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8639f433-69e1-460a-beda-cfe2cd505d1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.134 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24a46b48-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:34 np0005539550 kernel: tap24a46b48-00: left promiscuous mode
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.156 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.160 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f56b942-4570-484b-89c8-6335f44d282b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.181 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[11636cb7-7e07-448b-af62-910453491126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.182 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e8820b-dae1-4b48-bd61-ab3f02475e43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.197 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.198 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.198 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.198 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.201 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc5bc1c-a8f9-477d-803c-eba58f947fe5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787627, 'reachable_time': 41943, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347663, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
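The privsep reply above is a raw pyroute2 netlink dump: the metadata agent asks its privileged helper for link state inside the ovnmeta- namespace and gets back the IFLA_* attribute list verbatim. A minimal sketch of the same query using pyroute2 directly (the library neutron wraps behind privsep); running it via pyroute2's NetNS('ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb') instead of IPRoute() scopes it to the namespace the way the agent does:

    from pyroute2 import IPRoute

    # Dump every link the kernel reports and print the same IFLA_*
    # attributes that appear in the logged reply.
    with IPRoute() as ipr:
        for link in ipr.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link.get_attr('IFLA_OPERSTATE'))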
Nov 29 03:27:34 np0005539550 systemd[1]: run-netns-ovnmeta\x2d24a46b48\x2d0082\x2d4428\x2d9f80\x2d5c459b7611cb.mount: Deactivated successfully.
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.205 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:27:34 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:34.206 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[316597f7-d9a1-4d2c-82a3-2a05af404740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
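Once the namespace is empty, the deletion logged by ip_lib's remove_netns amounts to a single pyroute2 call made through the same privsep channel (a sketch, assuming pyroute2 is available on the host):

    from pyroute2 import netns

    # Unlink the named namespace; the kernel frees it when the last
    # reference drops, which is why the run-netns .mount unit above
    # deactivates at the same moment.
    netns.remove('ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb')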
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.494 257641 INFO nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance shutdown successfully after 3 seconds.#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.499 257641 INFO nova.virt.libvirt.driver [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance destroyed successfully.#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.503 257641 INFO nova.virt.libvirt.driver [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance destroyed successfully.#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.506 257641 DEBUG nova.virt.libvirt.vif [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1118669844',display_name='tempest-TestNetworkAdvancedServerOps-server-1118669844',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1118669844',id=142,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2eAiOSV+krh4MXr8iP8j+dBCGT+k//J/ZPyuE+tL6P1nNcuxcGptaUVBmmJaKLf8QKVBIz4As7JZx61PNWY6qLI1vbGYGzdph76djN/9WJ/AVdkR50U+0UbdGR7HVTA==',key_name='tempest-TestNetworkAdvancedServerOps-1278954502',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-k8t1s0vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:30Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=a97c6d24-5d3c-40d0-a2d9-03e786a12cd8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.507 257641 DEBUG nova.network.os_vif_util [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.507 257641 DEBUG nova.network.os_vif_util [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.508 257641 DEBUG os_vif [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.511 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.511 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa85ee38f-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.515 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.518 257641 INFO os_vif [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a')#033[00m
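The whole unplug path reduces to the OVSDB transaction logged at 08:27:34.511: one DelPortCommand against br-int. A self-contained sketch of the same call through ovsdbapp; the unix socket path is an assumption and should match the local ovsdb-server:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database and drop the tap port,
    # tolerating its absence exactly like if_exists=True in the log.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tapa85ee38f-1a', bridge='br-int',
                 if_exists=True).execute(check_error=True)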
Nov 29 03:27:34 np0005539550 podman[347800]: 2025-11-29 08:27:34.703384185 +0000 UTC m=+0.041366211 container create f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:27:34 np0005539550 systemd[1]: Started libpod-conmon-f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358.scope.
Nov 29 03:27:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:34 np0005539550 podman[347800]: 2025-11-29 08:27:34.687005869 +0000 UTC m=+0.024987915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:34 np0005539550 podman[347800]: 2025-11-29 08:27:34.787974523 +0000 UTC m=+0.125956599 container init f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:27:34 np0005539550 podman[347800]: 2025-11-29 08:27:34.795121005 +0000 UTC m=+0.133103031 container start f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:27:34 np0005539550 podman[347800]: 2025-11-29 08:27:34.798677535 +0000 UTC m=+0.136659561 container attach f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:27:34 np0005539550 systemd[1]: libpod-f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358.scope: Deactivated successfully.
Nov 29 03:27:34 np0005539550 sad_wilson[347818]: 167 167
Nov 29 03:27:34 np0005539550 podman[347800]: 2025-11-29 08:27:34.801838055 +0000 UTC m=+0.139820081 container died f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:27:34 np0005539550 conmon[347818]: conmon f05f0b0fc6613c51575b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358.scope/container/memory.events
Nov 29 03:27:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-38c64002e28d09f90a324332f8e29052ad19769fe3aa6b0812a157a27c1f31f1-merged.mount: Deactivated successfully.
Nov 29 03:27:34 np0005539550 podman[347800]: 2025-11-29 08:27:34.840911608 +0000 UTC m=+0.178893634 container remove f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:27:34 np0005539550 systemd[1]: libpod-conmon-f05f0b0fc6613c51575b6f189327f97df9e7dd798ad5b442182ad4d2cd200358.scope: Deactivated successfully.
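The sad_wilson container lives for roughly 100 ms (create, start, attach, died, remove): cephadm probes the ceph image with one-shot runs, and the '167 167' it printed is, in all likelihood, the uid/gid of the ceph user baked into the image. The probe amounts to something like the following; cephadm drives podman itself, so the exact CLI here is an assumption:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # One-shot run that stats /var/lib/ceph inside the image to learn
    # which uid/gid the daemons should own their files as.
    out = subprocess.run(
        ['podman', 'run', '--rm', IMAGE,
         'stat', '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: '167 167'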
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.908 257641 INFO nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Deleting instance files /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_del#033[00m
Nov 29 03:27:34 np0005539550 nova_compute[257631]: 2025-11-29 08:27:34.909 257641 INFO nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Deletion of /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_del complete#033[00m
Nov 29 03:27:35 np0005539550 podman[347840]: 2025-11-29 08:27:35.007827506 +0000 UTC m=+0.045463825 container create 8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 03:27:35 np0005539550 systemd[1]: Started libpod-conmon-8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac.scope.
Nov 29 03:27:35 np0005539550 podman[347840]: 2025-11-29 08:27:34.987838199 +0000 UTC m=+0.025474558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:27:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700d3b2766b2d566ef87ba10e54e8c5b05cc2ceba2507ca0a20ad3d2fb54c690/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700d3b2766b2d566ef87ba10e54e8c5b05cc2ceba2507ca0a20ad3d2fb54c690/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700d3b2766b2d566ef87ba10e54e8c5b05cc2ceba2507ca0a20ad3d2fb54c690/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700d3b2766b2d566ef87ba10e54e8c5b05cc2ceba2507ca0a20ad3d2fb54c690/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:35 np0005539550 podman[347840]: 2025-11-29 08:27:35.101621968 +0000 UTC m=+0.139258307 container init 8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 03:27:35 np0005539550 podman[347840]: 2025-11-29 08:27:35.109427187 +0000 UTC m=+0.147063496 container start 8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:27:35 np0005539550 podman[347840]: 2025-11-29 08:27:35.11429229 +0000 UTC m=+0.151928599 container attach 8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 29 03:27:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
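The recurring anonymous 'HEAD / HTTP/1.0' requests from 192.168.122.100/102, answered in well under a millisecond, are load-balancer health probes against the beast frontend rather than user traffic. One can be reproduced by hand; the target host and port below are assumptions for this deployment:

    import http.client

    # A healthy RGW answers 200 with an empty body, matching the
    # 'op status=0 http_status=200' lines above.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)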
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.462 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.464 257641 INFO nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Creating image(s)#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.492 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.519 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.546 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.549 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.617 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
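nova runs qemu-img under oslo.concurrency's prlimit wrapper so that a malformed or hostile image can consume at most 1 GiB of address space and 30 s of CPU during inspection. The same invocation expressed through processutils, as a sketch of the pattern (reusing the base-image path from the log):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de',
        '--force-share', '--output=json',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
        prlimit=processutils.ProcessLimits(
            address_space=1024 ** 3,  # --as=1073741824
            cpu_time=30))             # --cpu=30
    print(out)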
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.618 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.619 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.619 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
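The acquire/release pairs around the image-cache fetch come from oslo.concurrency's lockutils: the fetch is serialized on the base image's hash so concurrent spawns download and convert it only once. The decorator form of the same lock, as a sketch (the lock_path is an assumption):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('6e1589dfec5abd76868fdc022175780e085b08de',
                            external=True, lock_path='/var/lock/nova')
    def fetch_base_image():
        # Runs at most once at a time per base-image hash, across all
        # workers sharing the lock directory.
        pass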
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.647 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.650 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.937 257641 DEBUG nova.compute.manager [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-unplugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.938 257641 DEBUG oslo_concurrency.lockutils [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.938 257641 DEBUG oslo_concurrency.lockutils [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.938 257641 DEBUG oslo_concurrency.lockutils [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.938 257641 DEBUG nova.compute.manager [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] No waiting events found dispatching network-vif-unplugged-a85ee38f-1a42-4405-9db6-3b9430481c2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.939 257641 WARNING nova.compute.manager [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received unexpected event network-vif-unplugged-a85ee38f-1a42-4405-9db6-3b9430481c2c for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.939 257641 DEBUG nova.compute.manager [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.939 257641 DEBUG oslo_concurrency.lockutils [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.939 257641 DEBUG oslo_concurrency.lockutils [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.939 257641 DEBUG oslo_concurrency.lockutils [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.940 257641 DEBUG nova.compute.manager [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] No waiting events found dispatching network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:35 np0005539550 nova_compute[257631]: 2025-11-29 08:27:35.940 257641 WARNING nova.compute.manager [req-74216142-ca49-4762-81d0-2ca114f320b8 req-57f53d39-defd-40d2-80b4-7de0f1d9b2dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received unexpected event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c for instance with vm_state active and task_state rebuild_spawning.#033[00m
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]: {
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]:        "osd_id": 0,
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]:        "type": "bluestore"
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]:    }
Nov 29 03:27:35 np0005539550 musing_chaplygin[347857]: }
Nov 29 03:27:35 np0005539550 systemd[1]: libpod-8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac.scope: Deactivated successfully.
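musing_chaplygin is another one-shot cephadm container; the JSON it printed (one entry keyed by OSD uuid, carrying ceph_fsid, device, osd_id and type) matches the shape of ceph-volume raw list --format json, which cephadm uses to inventory OSDs on the host. A sketch of running and parsing it; the bind mounts here are a minimal assumption (cephadm mounts more, as the xfs remount lines above show):

    import json
    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    raw = subprocess.run(
        ['podman', 'run', '--rm', '--privileged', '-v', '/dev:/dev', IMAGE,
         'ceph-volume', 'raw', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout
    for osd_uuid, osd in json.loads(raw).items():
        print(osd['osd_id'], osd['device'], osd['type'])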
Nov 29 03:27:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 955 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 6.4 MiB/s wr, 387 op/s
Nov 29 03:27:36 np0005539550 podman[347972]: 2025-11-29 08:27:36.037913686 +0000 UTC m=+0.025091168 container died 8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:27:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-700d3b2766b2d566ef87ba10e54e8c5b05cc2ceba2507ca0a20ad3d2fb54c690-merged.mount: Deactivated successfully.
Nov 29 03:27:36 np0005539550 podman[347972]: 2025-11-29 08:27:36.095509588 +0000 UTC m=+0.082687080 container remove 8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_chaplygin, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:27:36 np0005539550 systemd[1]: libpod-conmon-8eaea42f5271dfa9f16c84d6fb403fe51de0eb8445ba746f71135ed22260d5ac.scope: Deactivated successfully.
Nov 29 03:27:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:27:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:37.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.363 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updating instance_info_cache with network_info: [{"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.382 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-4702f4ee-458d-4146-b9b2-70ecf718176c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.383 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.383 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.383 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.384 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.384 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.405 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.406 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.406 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.406 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.406 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
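The update_available_resource periodic task shells out to ceph df to size the RBD-backed pools for the resource tracker. A sketch of consuming the same output (key names as in current Ceph releases):

    import json
    import subprocess

    df = json.loads(subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout)
    # Cluster-wide totals feed the tracker's disk capacity figures.
    stats = df['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])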
Nov 29 03:27:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.466 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.816s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6d2105ed-8bb3-4329-94fd-4e05a6c6a3ae does not exist
Nov 29 03:27:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 442fb016-fa51-4233-a5dc-6ca992b7b4dd does not exist
Nov 29 03:27:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e9ad6368-2730-472f-8ffd-2f126833fac4 does not exist
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.579 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] resizing rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
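The rbd import returns 0 after 1.8 s (logged at 08:27:37.466), and the image is then grown to the flavor's 1 GiB root disk (1073741824 bytes). The equivalent of nova's rbd_utils calls through the python rados/rbd bindings, as a sketch (assumes ceph-common's bindings and the same openstack keyring):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx,
                           'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk') as image:
                image.resize(1073741824)  # the resize logged above
                print(image.size())
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()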
Nov 29 03:27:37 np0005539550 podman[348085]: 2025-11-29 08:27:37.714044081 +0000 UTC m=+0.071933038 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:27:37 np0005539550 podman[348082]: 2025-11-29 08:27:37.725527692 +0000 UTC m=+0.082477785 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3)
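The two health_status=healthy records are podman's periodic healthchecks for the edpm-managed containers, each running the /openstack/healthcheck script mounted read-only into the container. The same check can be forced out of cycle (a sketch):

    import subprocess

    # Exit status 0 means healthy, non-zero unhealthy, mirroring the
    # health_status field podman logs.
    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if r.returncode == 0 else 'unhealthy')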
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.813 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.814 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Ensure instance console log exists: /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.815 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.816 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.816 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.819 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Start _get_guest_xml network_info=[{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.826 257641 WARNING nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.839 257641 DEBUG nova.virt.libvirt.host [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.841 257641 DEBUG nova.virt.libvirt.host [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.858 257641 DEBUG nova.virt.libvirt.host [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.860 257641 DEBUG nova.virt.libvirt.host [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.861 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.862 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.862 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.862 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.863 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.863 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.863 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.863 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.863 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.863 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.864 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.864 257641 DEBUG nova.virt.hardware [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.864 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.925 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:37 np0005539550 nova_compute[257631]: 2025-11-29 08:27:37.968 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 955 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 257 op/s
Nov 29 03:27:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:38.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.239 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.240 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.244 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.245 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000081 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381728125' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.423 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.447 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.452 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.498 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.500 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3837MB free_disk=20.63434600830078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.500 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.500 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.591 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 4702f4ee-458d-4146-b9b2-70ecf718176c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.591 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 346e849d-fa61-4451-b34c-d6165fea3aa4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.591 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.591 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.591 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.681 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112246227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.910 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.912 257641 DEBUG nova.virt.libvirt.vif [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1118669844',display_name='tempest-TestNetworkAdvancedServerOps-server-1118669844',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1118669844',id=142,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2eAiOSV+krh4MXr8iP8j+dBCGT+k//J/ZPyuE+tL6P1nNcuxcGptaUVBmmJaKLf8QKVBIz4As7JZx61PNWY6qLI1vbGYGzdph76djN/9WJ/AVdkR50U+0UbdGR7HVTA==',key_name='tempest-TestNetworkAdvancedServerOps-1278954502',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-k8t1s0vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:35Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=a97c6d24-5d3c-40d0-a2d9-03e786a12cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.912 257641 DEBUG nova.network.os_vif_util [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.913 257641 DEBUG nova.network.os_vif_util [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.916 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <uuid>a97c6d24-5d3c-40d0-a2d9-03e786a12cd8</uuid>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <name>instance-0000008e</name>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1118669844</nova:name>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:27:37</nova:creationTime>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <nova:port uuid="a85ee38f-1a42-4405-9db6-3b9430481c2c">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <entry name="serial">a97c6d24-5d3c-40d0-a2d9-03e786a12cd8</entry>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <entry name="uuid">a97c6d24-5d3c-40d0-a2d9-03e786a12cd8</entry>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:d1:7f:8f"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <target dev="tapa85ee38f-1a"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/console.log" append="off"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:27:38 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:27:38 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:27:38 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:27:38 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.917 257641 DEBUG nova.virt.libvirt.vif [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1118669844',display_name='tempest-TestNetworkAdvancedServerOps-server-1118669844',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1118669844',id=142,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2eAiOSV+krh4MXr8iP8j+dBCGT+k//J/ZPyuE+tL6P1nNcuxcGptaUVBmmJaKLf8QKVBIz4As7JZx61PNWY6qLI1vbGYGzdph76djN/9WJ/AVdkR50U+0UbdGR7HVTA==',key_name='tempest-TestNetworkAdvancedServerOps-1278954502',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-k8t1s0vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:35Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=a97c6d24-5d3c-40d0-a2d9-03e786a12cd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.917 257641 DEBUG nova.network.os_vif_util [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.918 257641 DEBUG nova.network.os_vif_util [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.918 257641 DEBUG os_vif [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.920 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.920 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.922 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.923 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa85ee38f-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.923 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa85ee38f-1a, col_values=(('external_ids', {'iface-id': 'a85ee38f-1a42-4405-9db6-3b9430481c2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:7f:8f', 'vm-uuid': 'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:27:38 np0005539550 NetworkManager[49039]: <info>  [1764404858.9262] manager: (tapa85ee38f-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.928 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.932 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:27:38 np0005539550 nova_compute[257631]: 2025-11-29 08:27:38.932 257641 INFO os_vif [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a')
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188452853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:27:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188452853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.007 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.007 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.007 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:d1:7f:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.008 257641 INFO nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Using config drive
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.033 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.050 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530557978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.145 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:27:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.153 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.171 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'keypairs' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.279 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.383 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.383 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.573 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.573 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.591 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.654 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.655 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.660 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.661 257641 INFO nova.compute.claims [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:27:39 np0005539550 nova_compute[257631]: 2025-11-29 08:27:39.900 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:27:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 985 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 8.0 MiB/s wr, 343 op/s
Nov 29 03:27:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:40.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:27:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542188798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.329 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
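
The RBD image backend sizes its storage by shelling out to ceph df with the client.openstack credentials; the Running cmd / returned pair above is oslo.concurrency's processutils trace of exactly that call. A sketch of the same probe, assuming a reachable cluster and the same keyring:

# "ceph df" probe as in the log (needs a reachable cluster and a valid
# client.openstack keyring); processutils.execute returns (stdout, stderr).
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print(stats['stats']['total_avail_bytes'])  # cluster-wide free bytes
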
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.334 257641 DEBUG nova.compute.provider_tree [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.351 257641 DEBUG nova.scheduler.client.report [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
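
Placement capacity per resource class is (total - reserved) * allocation_ratio, so the inventory above advertises 32 schedulable VCPUs, 7168 MB of RAM, and 17.1 GB of disk. A quick check of those numbers:

# Effective capacity implied by the inventory reported above.
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
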
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.780 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.781 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.836 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.837 257641 DEBUG nova.network.neutron [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.869 257641 INFO nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.902 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:27:40 np0005539550 nova_compute[257631]: 2025-11-29 08:27:40.969 257641 INFO nova.virt.block_device [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Booting with blank volume at /dev/vda#033[00m
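
The instance is volume-backed: the BDM step above boots from a blank Cinder volume at /dev/vda, and the 08:27:40.869 line shows libvirt discarding the user-supplied device name (virtio ordering is the hypervisor's call). For illustration, the compute API request shape that produces such a mapping might look like this (the volume size is invented):

# Illustrative block_device_mapping_v2 payload for boot-from-blank-volume;
# the field names are the compute API's, the size is made up.
bdm_v2 = [{
    'boot_index': 0,
    'source_type': 'blank',          # no image/snapshot source
    'destination_type': 'volume',    # Cinder-backed, as in the log
    'volume_size': 1,                # GiB, illustrative
    'device_name': '/dev/vda',       # accepted but ignored by libvirt
    'delete_on_termination': True,
}]
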
Nov 29 03:27:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:41.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.332 257641 INFO nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Creating config drive at /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.340 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppf6ulxlp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.375 257641 DEBUG nova.policy [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '64b11a4dc36b4f55b85dbe846183be55', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae71059d02774857be85797a3be0e4e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.479 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppf6ulxlp" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.510 257641 DEBUG nova.storage.rbd_utils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] rbd image a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.515 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.711 257641 DEBUG oslo_concurrency.processutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.712 257641 INFO nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Deleting local config drive /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8/disk.config because it was imported into RBD.#033[00m
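
Lines 08:27:41.340 through 08:27:41.712 are the whole config-drive round trip for the rebuilding instance: mkisofs packs the staged metadata into an ISO9660/Joliet image labelled config-2, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is then deleted. Roughly the same sequence with plain subprocess calls (the staging directory here stands in for nova's temp dir):

# Config-drive flow from the log: mkisofs -> rbd import -> local cleanup.
import os
import subprocess

uuid = 'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8'
iso = f'/var/lib/nova/instances/{uuid}/disk.config'

subprocess.run(
    ['mkisofs', '-o', iso, '-ldots', '-allow-lowercase', '-allow-multidot',
     '-l', '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
     '-V', 'config-2', '/tmp/metadata_staging'],  # placeholder temp dir
    check=True)
subprocess.run(
    ['rbd', 'import', '--pool', 'vms', iso, f'{uuid}_disk.config',
     '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
    check=True)
os.remove(iso)  # redundant once the image lives in RBD
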
Nov 29 03:27:41 np0005539550 kernel: tapa85ee38f-1a: entered promiscuous mode
Nov 29 03:27:41 np0005539550 NetworkManager[49039]: <info>  [1764404861.7733] manager: (tapa85ee38f-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:41Z|00650|binding|INFO|Claiming lport a85ee38f-1a42-4405-9db6-3b9430481c2c for this chassis.
Nov 29 03:27:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:41Z|00651|binding|INFO|a85ee38f-1a42-4405-9db6-3b9430481c2c: Claiming fa:16:3e:d1:7f:8f 10.100.0.11
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.785 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:7f:8f 10.100.0.11'], port_security=['fa:16:3e:d1:7f:8f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24a46b48-0082-4428-9f80-5c459b7611cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0839f4fe-1173-477f-af5d-a589b6c94383', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.189'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ffd2286-10f4-4db8-b6f5-d8ec11414b0c, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a85ee38f-1a42-4405-9db6-3b9430481c2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.787 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a85ee38f-1a42-4405-9db6-3b9430481c2c in datapath 24a46b48-0082-4428-9f80-5c459b7611cb bound to our chassis#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.789 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 24a46b48-0082-4428-9f80-5c459b7611cb#033[00m
Nov 29 03:27:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:41Z|00652|binding|INFO|Setting lport a85ee38f-1a42-4405-9db6-3b9430481c2c ovn-installed in OVS
Nov 29 03:27:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:41Z|00653|binding|INFO|Setting lport a85ee38f-1a42-4405-9db6-3b9430481c2c up in Southbound
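
ovn-controller has now claimed the logical port for this chassis, marked the OVS interface ovn-installed, and set the port up in the Southbound DB. One way to verify the binding afterwards, assuming ovn-sbctl is available on the host:

# Read the Port_Binding row back from the OVN Southbound DB.
import subprocess

port = 'a85ee38f-1a42-4405-9db6-3b9430481c2c'
out = subprocess.run(
    ['ovn-sbctl', '--columns=chassis,up', 'list', 'Port_Binding', port],
    capture_output=True, text=True, check=True).stdout
print(out)  # chassis should point at this host; up should be true
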
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.797 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.804 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[80cc96d3-20d4-4fc1-a56a-1bc49a0cb979]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.805 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap24a46b48-01 in ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:27:41 np0005539550 systemd-udevd[348398]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.808 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap24a46b48-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.808 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fb922b9f-ceef-4028-b54e-7b9fafbabce8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.809 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4add74a5-06c6-4531-bc50-26445c554f29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 systemd-machined[216673]: New machine qemu-77-instance-0000008e.
Nov 29 03:27:41 np0005539550 NetworkManager[49039]: <info>  [1764404861.8200] device (tapa85ee38f-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:41 np0005539550 NetworkManager[49039]: <info>  [1764404861.8209] device (tapa85ee38f-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:41 np0005539550 systemd[1]: Started Virtual Machine qemu-77-instance-0000008e.
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.825 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccfa055-2baa-4142-87a0-4019a8e33975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.843 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[358457a2-7817-4cec-bcea-8285f21699e7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.882 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2fe810-bb6d-43b6-9b8b-96bcc7fe519b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.888 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eea849a3-1524-42f6-acbb-a2804c5f79a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 NetworkManager[49039]: <info>  [1764404861.8900] manager: (tap24a46b48-00): new Veth device (/org/freedesktop/NetworkManager/Devices/289)
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:41 np0005539550 nova_compute[257631]: 2025-11-29 08:27:41.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.933 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1a4451-90dd-4b58-aad6-e17495b952db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.936 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[73ddddf0-8565-4b70-9e64-3dc8d9096ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 NetworkManager[49039]: <info>  [1764404861.9615] device (tap24a46b48-00): carrier: link connected
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.967 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[92a9bb8d-f7f4-4047-9134-ce4be2e09e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:41.983 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f884e-71be-4f32-a899-c551d82d97ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24a46b48-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:e6:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790962, 'reachable_time': 34564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348431, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.001 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e9abe735-28a7-4241-84e1-bf7e0acdf6bb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:e67c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 790962, 'tstamp': 790962}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348432, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.025 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1f93e260-b02c-4e11-bc92-4855d80680c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap24a46b48-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:e6:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 186], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790962, 'reachable_time': 34564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348433, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
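
The two oversized privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK, plus the RTM_NEWADDR at 08:27:42.001) for the veth leg inside the ovnmeta namespace: link up, MAC fa:16:3e:3c:e6:7c, and a link-local fe80:: address. The same data can be fetched directly with pyroute2, assuming the namespace still exists:

# Fetch the same RTM_NEWLINK attributes the agent's privsep dump shows.
from pyroute2 import NetNS

with NetNS('ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb') as ns:
    idx = ns.link_lookup(ifname='tap24a46b48-01')[0]
    for msg in ns.get_links(idx):
        print(msg.get_attr('IFLA_OPERSTATE'), msg.get_attr('IFLA_ADDRESS'))
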
Nov 29 03:27:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 1007 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 8.0 MiB/s wr, 357 op/s
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.070 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[05bff1c3-6e15-4fe7-9f80-f349e2c31fc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:42.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.146 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8243a9b7-0ced-4349-8a65-3979f6f24f8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.148 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24a46b48-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.148 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.149 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24a46b48-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.150 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:42 np0005539550 NetworkManager[49039]: <info>  [1764404862.1514] manager: (tap24a46b48-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/290)
Nov 29 03:27:42 np0005539550 kernel: tap24a46b48-00: entered promiscuous mode
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.155 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap24a46b48-00, col_values=(('external_ids', {'iface-id': '6c99cbbb-1c2a-4d06-b883-6a56e3365cd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
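
The three ovsdbapp transactions above move the agent's VETH leg onto br-int and tag it with the iface-id that lets OVN match the OVS interface to its logical port. The same effect expressed with the ovs-vsctl CLI (illustrative only; the agent drives the OVSDB IDL directly rather than shelling out):

# CLI equivalent of the DelPort/AddPort/DbSet transactions above.
import subprocess

port = 'tap24a46b48-00'
iface_id = '6c99cbbb-1c2a-4d06-b883-6a56e3365cd3'
subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', port],
               check=True)
subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
                '--', 'set', 'Interface', port,
                f'external_ids:iface-id={iface_id}'], check=True)
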
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.156 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:42Z|00654|binding|INFO|Releasing lport 6c99cbbb-1c2a-4d06-b883-6a56e3365cd3 from this chassis (sb_readonly=0)
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.174 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.177 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/24a46b48-0082-4428-9f80-5c459b7611cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/24a46b48-0082-4428-9f80-5c459b7611cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.178 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1c14f4-2a3a-4d3d-81c2-91f8f20a2f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.178 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-24a46b48-0082-4428-9f80-5c459b7611cb
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/24a46b48-0082-4428-9f80-5c459b7611cb.pid.haproxy
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 24a46b48-0082-4428-9f80-5c459b7611cb
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:27:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:42.179 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'env', 'PROCESS_TAG=haproxy-24a46b48-0082-4428-9f80-5c459b7611cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/24a46b48-0082-4428-9f80-5c459b7611cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
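
The rendered config dumped above is neutron's per-network metadata proxy: haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, stamps each request with X-OVN-Network-ID, and relays to the metadata agent over the /var/lib/neutron/metadata_proxy UNIX socket. Stripped of the rootwrap and PROCESS_TAG plumbing, the launch command reduces to roughly:

# Namespace-wrapped haproxy launch, minus the rootwrap indirection.
import subprocess

net = '24a46b48-0082-4428-9f80-5c459b7611cb'
subprocess.run(
    ['ip', 'netns', 'exec', f'ovnmeta-{net}', 'haproxy', '-f',
     f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf'],
    check=True)  # haproxy backgrounds itself ("daemon" in the config)
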
Nov 29 03:27:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Nov 29 03:27:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Nov 29 03:27:42 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.279 257641 DEBUG nova.compute.manager [req-4639a944-d65f-48fc-ae49-18becbea3e00 req-60148aa2-61eb-42e1-986d-2d6248eb0836 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.280 257641 DEBUG oslo_concurrency.lockutils [req-4639a944-d65f-48fc-ae49-18becbea3e00 req-60148aa2-61eb-42e1-986d-2d6248eb0836 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.281 257641 DEBUG oslo_concurrency.lockutils [req-4639a944-d65f-48fc-ae49-18becbea3e00 req-60148aa2-61eb-42e1-986d-2d6248eb0836 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.281 257641 DEBUG oslo_concurrency.lockutils [req-4639a944-d65f-48fc-ae49-18becbea3e00 req-60148aa2-61eb-42e1-986d-2d6248eb0836 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.282 257641 DEBUG nova.compute.manager [req-4639a944-d65f-48fc-ae49-18becbea3e00 req-60148aa2-61eb-42e1-986d-2d6248eb0836 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] No waiting events found dispatching network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.282 257641 WARNING nova.compute.manager [req-4639a944-d65f-48fc-ae49-18becbea3e00 req-60148aa2-61eb-42e1-986d-2d6248eb0836 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received unexpected event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c for instance with vm_state active and task_state rebuild_spawning.#033[00m
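
The WARNING is benign here: neutron's network-vif-plugged notification for the rebuilding instance arrived before any task had registered a waiter for it, so pop_instance_event found nothing to wake. The handshake reduces to a per-(instance, event) waiter map, sketched here with plain threading primitives (an illustration of the shape, not nova's code):

# Reduced model of nova's instance-event handshake: events with no
# registered waiter are logged as unexpected and dropped.
import threading

waiters: dict[tuple[str, str], threading.Event] = {}

def deliver(instance_uuid: str, event_name: str) -> None:
    ev = waiters.pop((instance_uuid, event_name), None)
    if ev is None:
        print(f'unexpected event {event_name} for {instance_uuid}')
    else:
        ev.set()  # wakes the thread blocked in the wait helper
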
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.335 257641 DEBUG nova.network.neutron [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Successfully created port: e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.365 257641 DEBUG os_brick.utils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.367 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.396 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.396 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac6c8a0-d863-4483-83c0-7e5f3f15d8fd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.398 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.405 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.405 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[85f1747d-3f51-4498-a93c-0b8f5d7c81dd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.407 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.410 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.411 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404862.4105005, a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.411 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.415 257641 DEBUG nova.compute.manager [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.415 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.416 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.416 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[b363f9a0-a887-4d63-aa1f-5066c41dc885]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.418 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb4386a-1f01-4c55-9833-6e4d70b3e07f]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.419 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.449 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.452 257641 INFO nova.virt.libvirt.driver [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance spawned successfully.#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.453 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.455 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.458 257641 DEBUG os_brick.initiator.connectors.lightos [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.459 257641 DEBUG os_brick.initiator.connectors.lightos [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.459 257641 DEBUG os_brick.initiator.connectors.lightos [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.460 257641 DEBUG os_brick.utils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
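
The ==>/<== pair brackets os-brick's connector-property scan: the iSCSI IQN from /etc/iscsi/initiatorname.iscsi, the NVMe host NQN and ID, the root filesystem source, the system UUID, and the multipath flags (the LIGHTOS ECONNREFUSED lines are just that connector probing for a discovery client that is not running, and are harmless). The call that produces this trace, runnable on a compute host with os-brick installed:

# The call behind the get_connector_properties trace above; the returned
# dict reflects whatever the local host actually exposes.
from os_brick.initiator import connector

props = connector.get_connector_properties(
    root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
    my_ip='192.168.122.100',
    multipath=True,
    enforce_multipath=True,
    host='compute-0.ctlplane.example.com')
print(props['initiator'], props['nqn'])
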
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.460 257641 DEBUG nova.virt.block_device [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updating existing volume attachment record: d17d63cc-1ab4-47ee-898a-6ce311191c86 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.465 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.491 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.491 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.492 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.493 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.493 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.494 257641 DEBUG nova.virt.libvirt.driver [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.545 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.545 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404862.414096, a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.546 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.582 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.585 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.610 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.621 257641 DEBUG nova.compute.manager [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:42 np0005539550 podman[348514]: 2025-11-29 08:27:42.557601113 +0000 UTC m=+0.027344635 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.779 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.780 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.781 257641 DEBUG nova.objects.instance [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:27:42 np0005539550 nova_compute[257631]: 2025-11-29 08:27:42.849 257641 DEBUG oslo_concurrency.lockutils [None req-c05345ba-c14a-41de-b4ae-611dea987c45 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
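The paired "Acquiring lock" / "acquired" / "released" DEBUG lines above are emitted by oslo.concurrency's lock helpers around the critical section, including the waited/held durations. A minimal sketch of how such lines are produced (the function body here is a stand-in, not nova's actual resource-tracker code):

    # Minimal sketch: the acquire/release DEBUG lines above come from
    # oslo.concurrency's synchronized decorator wrapping the call.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def finish_evacuation():
        # Everything here runs with the "compute_resources" lock held;
        # lockutils logs the waited/held times seen in the journal.
        pass

    finish_evacuation()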
Nov 29 03:27:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:43.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
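These anonymous "HEAD / HTTP/1.0" requests recurring every second or so are load-balancer health probes against radosgw's beast frontend. A hedged reproduction of such a probe; the port (8080) is an assumption, not taken from the log:

    # Hedged sketch of the anonymous health probe logged by beast above;
    # the gateway port is assumed, not shown in the log.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # a healthy gateway answers 200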
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.238 257641 DEBUG nova.network.neutron [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Successfully updated port: e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:27:43 np0005539550 podman[348514]: 2025-11-29 08:27:43.250264854 +0000 UTC m=+0.720008346 container create 8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.253 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.253 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.254 257641 DEBUG nova.network.neutron [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.290 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:43 np0005539550 systemd[1]: Started libpod-conmon-8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516.scope.
Nov 29 03:27:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f64d2a93e9b73e973e4d4cfc3f7a474ce10d86534fac193b4e5ff5df99bdecb8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:43.703 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:43 np0005539550 podman[348514]: 2025-11-29 08:27:43.764627316 +0000 UTC m=+1.234370848 container init 8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:27:43 np0005539550 podman[348514]: 2025-11-29 08:27:43.770844814 +0000 UTC m=+1.240588346 container start 8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:27:43 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [NOTICE]   (348535) : New worker (348537) forked
Nov 29 03:27:43 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [NOTICE]   (348535) : Loading success.
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.901 257641 DEBUG nova.network.neutron [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:27:43 np0005539550 nova_compute[257631]: 2025-11-29 08:27:43.926 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:43.932 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:27:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 1011 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.1 MiB/s wr, 315 op/s
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.132 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:27:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:44.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.136 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.136 257641 INFO nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Creating image(s)#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.137 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.138 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Ensure instance console log exists: /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.139 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.140 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.140 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.407 257641 DEBUG nova.compute.manager [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.407 257641 DEBUG oslo_concurrency.lockutils [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.407 257641 DEBUG oslo_concurrency.lockutils [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.408 257641 DEBUG oslo_concurrency.lockutils [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.408 257641 DEBUG nova.compute.manager [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] No waiting events found dispatching network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.408 257641 WARNING nova.compute.manager [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received unexpected event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.409 257641 DEBUG nova.compute.manager [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-changed-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.409 257641 DEBUG nova.compute.manager [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Refreshing instance network info cache due to event network-changed-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.409 257641 DEBUG oslo_concurrency.lockutils [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Nov 29 03:27:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Nov 29 03:27:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.660 257641 DEBUG nova.network.neutron [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updating instance_info_cache with network_info: [{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.827 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.827 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance network_info: |[{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
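The network_info blob logged above is plain JSON, so the port IDs and fixed IPs (here port e1a8ada6-... with 10.100.0.14) can be pulled out directly. A short sketch; the variable name "blob" is a stand-in for that JSON text:

    # Sketch: extracting port IDs and fixed IPs from a network_info blob
    # like the one logged above ("blob" would hold that JSON list).
    import json

    def fixed_ips(blob: str):
        for vif in json.loads(blob):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield vif["id"], ip["address"]

    # for port_id, addr in fixed_ips(blob):
    #     print(port_id, addr)   # e1a8ada6-... 10.100.0.14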
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.828 257641 DEBUG oslo_concurrency.lockutils [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.828 257641 DEBUG nova.network.neutron [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Refreshing network info cache for port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.831 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Start _get_guest_xml network_info=[{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'd17d63cc-1ab4-47ee-898a-6ce311191c86', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a54a148c-90d3-4f80-83be-19de09d30ebc', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a54a148c-90d3-4f80-83be-19de09d30ebc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a0310268-d298-469e-9f04-6315f83c3f89', 'attached_at': '', 'detached_at': '', 'volume_id': 'a54a148c-90d3-4f80-83be-19de09d30ebc', 'serial': 'a54a148c-90d3-4f80-83be-19de09d30ebc'}, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.834 257641 WARNING nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.838 257641 DEBUG nova.virt.libvirt.host [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.839 257641 DEBUG nova.virt.libvirt.host [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.853 257641 DEBUG nova.virt.libvirt.host [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.854 257641 DEBUG nova.virt.libvirt.host [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.855 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.855 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.856 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.856 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.857 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.857 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.857 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.858 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.858 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.858 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.859 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.859 257641 DEBUG nova.virt.hardware [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
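The topology walk above (limits 0:0:0, maximum 65536 each, one candidate for one vCPU) reflects nova enumerating (sockets, cores, threads) factorizations of the vCPU count when neither flavor nor image constrains the shape. A toy enumeration illustrating the same outcome, not nova's actual _get_possible_cpu_topologies:

    # Worked sketch: with no flavor/image limits, every factorization of
    # the vCPU count is a candidate topology; for 1 vCPU only 1:1:1 fits.
    def possible_topologies(vcpus: int):
        for sockets in range(1, vcpus + 1):
            for cores in range(1, vcpus + 1):
                for threads in range(1, vcpus + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log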
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.884 257641 DEBUG nova.storage.rbd_utils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.889 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:44 np0005539550 nova_compute[257631]: 2025-11-29 08:27:44.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:27:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:45.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:27:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71433708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.302 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
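The 0.414s command run above is nova's rbd_utils discovering the Ceph monitor addresses by shelling out through oslo.concurrency. A sketch using the same helper, with the argument list mirroring the command string in the log verbatim:

    # Sketch of the monitor lookup above via oslo.concurrency's process
    # helper; execute() returns (stdout, stderr) and raises on rc != 0.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(out)  # JSON monmap listing 192.168.122.100-102:6789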
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.333 257641 DEBUG nova.virt.libvirt.vif [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1793211077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1793211077',id=145,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-mu4bcpio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:40Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0310268-d298-469e-9f04-6315f83c3f89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.334 257641 DEBUG nova.network.os_vif_util [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.334 257641 DEBUG nova.network.os_vif_util [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.335 257641 DEBUG nova.objects.instance [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.355 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <uuid>a0310268-d298-469e-9f04-6315f83c3f89</uuid>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <name>instance-00000091</name>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1793211077</nova:name>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:27:44</nova:creationTime>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:user uuid="64b11a4dc36b4f55b85dbe846183be55">tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member</nova:user>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:project uuid="ae71059d02774857be85797a3be0e4e6">tempest-ServerBootFromVolumeStableRescueTest-1715153470</nova:project>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <nova:port uuid="e1a8ada6-6584-4ef5-8f52-75c5e5de9d86">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <entry name="serial">a0310268-d298-469e-9f04-6315f83c3f89</entry>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <entry name="uuid">a0310268-d298-469e-9f04-6315f83c3f89</entry>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a0310268-d298-469e-9f04-6315f83c3f89_disk.config">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-a54a148c-90d3-4f80-83be-19de09d30ebc">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <serial>a54a148c-90d3-4f80-83be-19de09d30ebc</serial>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:8d:56:a6"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <target dev="tape1a8ada6-65"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/console.log" append="off"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:27:45 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:27:45 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:27:45 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:27:45 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
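Once _get_guest_xml returns the domain document dumped above, nova hands it to libvirt to define and start the guest. A hedged sketch of that handoff with libvirt-python; the connection URI and file name are assumptions, and nova's own spawn path wraps this in its Host/Guest helpers:

    # Hedged sketch: feeding a domain document like the one above to
    # libvirt. URI and file name are illustrative assumptions.
    import libvirt

    conn = libvirt.open('qemu:///system')
    with open('instance-00000091.xml') as f:
        dom = conn.defineXML(f.read())  # persist the definition
    dom.createWithFlags(0)              # boot the guest
    print(dom.name(), dom.isActive())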
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.360 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Preparing to wait for external event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.361 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.361 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.361 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.362 257641 DEBUG nova.virt.libvirt.vif [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:27:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1793211077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1793211077',id=145,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-mu4bcpio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:40Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0310268-d298-469e-9f04-6315f83c3f89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.362 257641 DEBUG nova.network.os_vif_util [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.363 257641 DEBUG nova.network.os_vif_util [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.364 257641 DEBUG os_vif [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
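
The plug call above enters os-vif's public API, which dispatches to its 'ovs' plugin and produces the OVSDB transactions logged just below. A minimal sketch of that entry point, assuming the os_vif package and its object model are importable on this host; the field values mirror the log, but the snippet is illustrative rather than Nova's actual call site:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # one-time plugin discovery, done at service startup

    # VIF object mirroring the logged VIFOpenVSwitch fields
    vif = VIFOpenVSwitch(
        id='e1a8ada6-6584-4ef5-8f52-75c5e5de9d86',
        address='fa:16:3e:8d:56:a6',
        bridge_name='br-int',
        vif_name='tape1a8ada6-65',
    )
    # InstanceInfo ties the port to the VM that owns it
    info = InstanceInfo(uuid='a0310268-d298-469e-9f04-6315f83c3f89',
                        name='instance-00000091')
    os_vif.plug(vif, info)  # runs the AddBridge/AddPort transactions seen below
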
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.365 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.365 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.366 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.368 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.368 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1a8ada6-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.368 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape1a8ada6-65, col_values=(('external_ids', {'iface-id': 'e1a8ada6-6584-4ef5-8f52-75c5e5de9d86', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8d:56:a6', 'vm-uuid': 'a0310268-d298-469e-9f04-6315f83c3f89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
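
The AddPortCommand/DbSetCommand pair above is ovsdbapp's programmatic form of "add the tap device to br-int, then tag its Interface row so OVN can bind it". A condensed sketch of issuing the same transaction with ovsdbapp directly, assuming a local OVSDB socket at unix:/run/openvswitch/db.sock (the socket path is an assumption; Nova's plugin manages its own connection):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Attach the OVSDB IDL to the local switch database
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction: create the port, then set external_ids for OVN
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tape1a8ada6-65', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tape1a8ada6-65',
            ('external_ids', {
                'iface-id': 'e1a8ada6-6584-4ef5-8f52-75c5e5de9d86',
                'attached-mac': 'fa:16:3e:8d:56:a6'})))
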
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.370 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:45 np0005539550 NetworkManager[49039]: <info>  [1764404865.3713] manager: (tape1a8ada6-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.373 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.378 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.379 257641 INFO os_vif [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65')#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.582 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.583 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.584 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No VIF found with MAC fa:16:3e:8d:56:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.586 257641 INFO nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Using config drive#033[00m
Nov 29 03:27:45 np0005539550 nova_compute[257631]: 2025-11-29 08:27:45.670 257641 DEBUG nova.storage.rbd_utils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 1011 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.9 MiB/s wr, 317 op/s
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.053 257641 INFO nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Creating config drive at /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config#033[00m
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.060 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqywgps56 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:46.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.195 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqywgps56" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
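
The config drive is built by shelling out to mkisofs with the flag set logged above. A reduced reproduction of that step, assuming mkisofs is installed and /tmp/metadata stands in for the staged metadata tree (both the output name and the input directory here are placeholders, not the logged paths):

    import subprocess

    # Same flags as the logged command, with placeholder paths
    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', 'disk.config',            # ISO attached to the guest as config drive
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-quiet', '-J', '-r',
         '-V', 'config-2',               # volume label cloud-init searches for
         '/tmp/metadata'],               # staged openstack/ and ec2/ trees
        check=True)
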
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.248 257641 DEBUG nova.storage.rbd_utils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.254 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config a0310268-d298-469e-9f04-6315f83c3f89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.873 257641 DEBUG nova.network.neutron [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updated VIF entry in instance network info cache for port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.874 257641 DEBUG nova.network.neutron [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updating instance_info_cache with network_info: [{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.898 257641 DEBUG oslo_concurrency.lockutils [req-63631372-962c-45d2-8be6-b9a2378a39d2 req-9ca77513-ca4e-4c39-b78e-62ca59da660d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.965 257641 DEBUG oslo_concurrency.processutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config a0310268-d298-469e-9f04-6315f83c3f89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
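
The import above copies the freshly built ISO into the Ceph 'vms' pool as an RBD image, after which the local file is deleted (next line). A short sketch for confirming the image landed, assuming the python-rbd/python-rados bindings are installed and the 'openstack' cephx user used by the logged command can read the pool:

    import rados
    import rbd

    # Same credentials as the logged 'rbd import' invocation
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            names = rbd.RBD().list(ioctx)
            assert 'a0310268-d298-469e-9f04-6315f83c3f89_disk.config' in names
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
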
Nov 29 03:27:46 np0005539550 nova_compute[257631]: 2025-11-29 08:27:46.966 257641 INFO nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Deleting local config drive /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config because it was imported into RBD.#033[00m
Nov 29 03:27:47 np0005539550 kernel: tape1a8ada6-65: entered promiscuous mode
Nov 29 03:27:47 np0005539550 NetworkManager[49039]: <info>  [1764404867.0115] manager: (tape1a8ada6-65): new Tun device (/org/freedesktop/NetworkManager/Devices/292)
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.014 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:47Z|00655|binding|INFO|Claiming lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for this chassis.
Nov 29 03:27:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:47Z|00656|binding|INFO|e1a8ada6-6584-4ef5-8f52-75c5e5de9d86: Claiming fa:16:3e:8d:56:a6 10.100.0.14
Nov 29 03:27:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:47Z|00657|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 ovn-installed in OVS
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.042 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:47Z|00658|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 up in Southbound
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.045 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:56:a6 10.100.0.14'], port_security=['fa:16:3e:8d:56:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0310268-d298-469e-9f04-6315f83c3f89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:27:47 np0005539550 systemd-udevd[348662]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:47 np0005539550 systemd-machined[216673]: New machine qemu-78-instance-00000091.
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.051 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.054 158978 INFO neutron.agent.ovn.metadata.agent [-] Port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 bound to our chassis#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.058 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
Nov 29 03:27:47 np0005539550 NetworkManager[49039]: <info>  [1764404867.0600] device (tape1a8ada6-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:27:47 np0005539550 NetworkManager[49039]: <info>  [1764404867.0616] device (tape1a8ada6-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:27:47 np0005539550 systemd[1]: Started Virtual Machine qemu-78-instance-00000091.
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.077 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f304ea-0c60-43f2-ae1a-4a5647d6956a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.078 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9d41f0a-11 in ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
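
provision_datapath wires the metadata namespace to the integration bridge with a veth pair: the -11 end lives inside ovnmeta-<network-id> and the -10 end gets plugged into br-int (the DelPort/AddPort transactions a few lines below). A rough equivalent of the veth half using pyroute2, the library behind these privsep calls; the namespace and interface names are copied from the log:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6'
    netns.create(ns)  # the agent reuses the namespace if it already exists

    ipr = IPRoute()
    # veth pair: tapd9d41f0a-10 stays in the root namespace, -11 moves into ns
    ipr.link('add', ifname='tapd9d41f0a-10', kind='veth',
             peer={'ifname': 'tapd9d41f0a-11'})
    idx = ipr.link_lookup(ifname='tapd9d41f0a-11')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    ipr.close()
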
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.081 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9d41f0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.081 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a430946c-4012-43b8-baca-d5aeba20be29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.082 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c8734c-c9c6-4c65-8cc1-7725cce041c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.096 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[31219425-c7b9-4de9-b038-6894b36c8550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.110 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0864252f-eeab-47c0-826c-fa417a81e87d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.141 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[924c0953-b9da-4177-b41b-7d5b3474ee7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 NetworkManager[49039]: <info>  [1764404867.1496] manager: (tapd9d41f0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/293)
Nov 29 03:27:47 np0005539550 systemd-udevd[348664]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.148 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c41c47e8-036e-4f72-9224-176197246492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:47.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.186 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[fba3306d-3cbd-453c-bd05-4bc2e8d5d7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.189 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2005d4-0fc0-4c28-8aea-b6d69f488ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 NetworkManager[49039]: <info>  [1764404867.2130] device (tapd9d41f0a-10): carrier: link connected
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.217 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6d472a1d-df64-400c-8134-6bf922bc0ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.233 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f05221-39f5-43cb-81a9-9d371db3a6c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791487, 'reachable_time': 30047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348695, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.247 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2a6a8c53-669c-4a92-abf5-6bbd3489ff74]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:2887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 791487, 'tstamp': 791487}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348696, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.264 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4f3fc6-f396-495d-a24b-2bff87a0e745]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791487, 'reachable_time': 30047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348697, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.265 257641 DEBUG nova.compute.manager [req-b8cc3cc3-7cfb-4768-86fd-41da4098d54e req-d02ca40a-62a6-491e-9ecc-4ca2f4fc3931 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.265 257641 DEBUG oslo_concurrency.lockutils [req-b8cc3cc3-7cfb-4768-86fd-41da4098d54e req-d02ca40a-62a6-491e-9ecc-4ca2f4fc3931 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.266 257641 DEBUG oslo_concurrency.lockutils [req-b8cc3cc3-7cfb-4768-86fd-41da4098d54e req-d02ca40a-62a6-491e-9ecc-4ca2f4fc3931 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.266 257641 DEBUG oslo_concurrency.lockutils [req-b8cc3cc3-7cfb-4768-86fd-41da4098d54e req-d02ca40a-62a6-491e-9ecc-4ca2f4fc3931 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.266 257641 DEBUG nova.compute.manager [req-b8cc3cc3-7cfb-4768-86fd-41da4098d54e req-d02ca40a-62a6-491e-9ecc-4ca2f4fc3931 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Processing event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.296 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0e917bbe-ccbc-4444-8424-6af6b0cf9554]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.353 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2b808085-8455-434f-b9e1-8e40b18b3ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.354 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.355 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.355 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:47 np0005539550 NetworkManager[49039]: <info>  [1764404867.3653] manager: (tapd9d41f0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Nov 29 03:27:47 np0005539550 kernel: tapd9d41f0a-10: entered promiscuous mode
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.369 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:47Z|00659|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.372 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.393 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.395 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.396 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4b13172c-31d5-4539-9e65-63cd8e31e30e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.397 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
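
The rendered config above binds the well-known metadata address inside the namespace and relays each request to the agent's unix socket, adding X-OVN-Network-ID so the agent can tell which network the caller sits on. Once the proxy is up, it can be exercised from the hypervisor side like this, assuming root privileges and the iproute2 and curl binaries (the namespace name is taken from the log):

    import subprocess

    ns = 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6'
    # Query the proxy the same way a guest reaches 169.254.169.254
    subprocess.run(
        ['ip', 'netns', 'exec', ns,
         'curl', '-s', 'http://169.254.169.254/openstack'],
        check=True)
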
Nov 29 03:27:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:47.397 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'env', 'PROCESS_TAG=haproxy-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9d41f0a-17f9-4df4-a453-04da996d63b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.687 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.688 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404867.6870267, a0310268-d298-469e-9f04-6315f83c3f89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.689 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Started (Lifecycle Event)#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.692 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.695 257641 INFO nova.virt.libvirt.driver [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance spawned successfully.#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.695 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.720 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.724 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.725 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.725 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.725 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.726 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.726 257641 DEBUG nova.virt.libvirt.driver [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.730 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.773 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.773 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404867.6881664, a0310268-d298-469e-9f04-6315f83c3f89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.773 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.811 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.813 257641 INFO nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Took 3.68 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.813 257641 DEBUG nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.818 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404867.6919498, a0310268-d298-469e-9f04-6315f83c3f89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.818 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.842 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.845 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:27:47 np0005539550 podman[348769]: 2025-11-29 08:27:47.756107279 +0000 UTC m=+0.042498591 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.869 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.884 257641 INFO nova.compute.manager [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Took 8.25 seconds to build instance.#033[00m
Nov 29 03:27:47 np0005539550 nova_compute[257631]: 2025-11-29 08:27:47.899 257641 DEBUG oslo_concurrency.lockutils [None req-5c28a423-7f75-4368-8fea-756c8119bfde 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 1011 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.7 MiB/s wr, 166 op/s
Nov 29 03:27:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:48.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:48 np0005539550 nova_compute[257631]: 2025-11-29 08:27:48.293 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:48 np0005539550 podman[348769]: 2025-11-29 08:27:48.639299408 +0000 UTC m=+0.925690690 container create 5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:27:48 np0005539550 systemd[1]: Started libpod-conmon-5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889.scope.
Nov 29 03:27:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:27:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a063f24442f4469311b645017f5cc7c496bf5f2b829eb388e8054788932c53a0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:27:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:49.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:49 np0005539550 podman[348769]: 2025-11-29 08:27:49.195303947 +0000 UTC m=+1.481695249 container init 5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:27:49 np0005539550 podman[348769]: 2025-11-29 08:27:49.202941401 +0000 UTC m=+1.489332723 container start 5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:27:49 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [NOTICE]   (348817) : New worker (348819) forked
Nov 29 03:27:49 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [NOTICE]   (348817) : Loading success.
Nov 29 03:27:49 np0005539550 nova_compute[257631]: 2025-11-29 08:27:49.334 257641 DEBUG nova.compute.manager [req-aaaa84db-3c2a-4788-ba90-1d17b7193eed req-1b186441-2f5f-4b93-8cef-3725ef216c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:27:49 np0005539550 nova_compute[257631]: 2025-11-29 08:27:49.334 257641 DEBUG oslo_concurrency.lockutils [req-aaaa84db-3c2a-4788-ba90-1d17b7193eed req-1b186441-2f5f-4b93-8cef-3725ef216c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:27:49 np0005539550 nova_compute[257631]: 2025-11-29 08:27:49.335 257641 DEBUG oslo_concurrency.lockutils [req-aaaa84db-3c2a-4788-ba90-1d17b7193eed req-1b186441-2f5f-4b93-8cef-3725ef216c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:27:49 np0005539550 nova_compute[257631]: 2025-11-29 08:27:49.335 257641 DEBUG oslo_concurrency.lockutils [req-aaaa84db-3c2a-4788-ba90-1d17b7193eed req-1b186441-2f5f-4b93-8cef-3725ef216c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:27:49 np0005539550 nova_compute[257631]: 2025-11-29 08:27:49.336 257641 DEBUG nova.compute.manager [req-aaaa84db-3c2a-4788-ba90-1d17b7193eed req-1b186441-2f5f-4b93-8cef-3725ef216c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:27:49 np0005539550 nova_compute[257631]: 2025-11-29 08:27:49.336 257641 WARNING nova.compute.manager [req-aaaa84db-3c2a-4788-ba90-1d17b7193eed req-1b186441-2f5f-4b93-8cef-3725ef216c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:27:49 np0005539550 podman[348784]: 2025-11-29 08:27:49.622290221 +0000 UTC m=+0.932124173 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:27:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 925 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 73 KiB/s wr, 193 op/s
Nov 29 03:27:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:50.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:50 np0005539550 nova_compute[257631]: 2025-11-29 08:27:50.372 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:27:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:27:51 np0005539550 nova_compute[257631]: 2025-11-29 08:27:51.852 257641 INFO nova.compute.manager [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Rescuing#033[00m
Nov 29 03:27:51 np0005539550 nova_compute[257631]: 2025-11-29 08:27:51.855 257641 DEBUG oslo_concurrency.lockutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:27:51 np0005539550 nova_compute[257631]: 2025-11-29 08:27:51.856 257641 DEBUG oslo_concurrency.lockutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:27:51 np0005539550 nova_compute[257631]: 2025-11-29 08:27:51.856 257641 DEBUG nova.network.neutron [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:27:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 873 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 60 KiB/s wr, 182 op/s
Nov 29 03:27:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:27:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:52.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:27:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Nov 29 03:27:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Nov 29 03:27:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Nov 29 03:27:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:53.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:53 np0005539550 nova_compute[257631]: 2025-11-29 08:27:53.295 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:53 np0005539550 nova_compute[257631]: 2025-11-29 08:27:53.923 257641 DEBUG nova.network.neutron [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updating instance_info_cache with network_info: [{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:27:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:27:53.934 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:27:53 np0005539550 nova_compute[257631]: 2025-11-29 08:27:53.994 257641 DEBUG oslo_concurrency.lockutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:27:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 856 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 765 KiB/s wr, 201 op/s
Nov 29 03:27:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:54.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:54 np0005539550 nova_compute[257631]: 2025-11-29 08:27:54.215 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:27:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:55.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:55 np0005539550 nova_compute[257631]: 2025-11-29 08:27:55.375 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 892 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 221 op/s
Nov 29 03:27:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:56.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:56Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:7f:8f 10.100.0.11
Nov 29 03:27:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:27:56Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:7f:8f 10.100.0.11
Nov 29 03:27:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:57.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 892 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 221 op/s
Nov 29 03:27:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:27:58.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:27:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:27:58 np0005539550 nova_compute[257631]: 2025-11-29 08:27:58.297 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:27:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:27:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:27:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:27:59.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:27:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:27:59
Nov 29 03:27:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:27:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:27:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms', 'images', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 29 03:27:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:28:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 912 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 903 KiB/s rd, 5.1 MiB/s wr, 190 op/s
Nov 29 03:28:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:28:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:00.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:28:00 np0005539550 nova_compute[257631]: 2025-11-29 08:28:00.379 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.461711) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880461839, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 2253, "num_deletes": 255, "total_data_size": 3788207, "memory_usage": 3855936, "flush_reason": "Manual Compaction"}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880481133, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 3687227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48614, "largest_seqno": 50866, "table_properties": {"data_size": 3676999, "index_size": 6466, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22074, "raw_average_key_size": 20, "raw_value_size": 3656249, "raw_average_value_size": 3475, "num_data_blocks": 279, "num_entries": 1052, "num_filter_entries": 1052, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404695, "oldest_key_time": 1764404695, "file_creation_time": 1764404880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 19443 microseconds, and 8394 cpu microseconds.
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.481168) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 3687227 bytes OK
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.481186) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.483128) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.483142) EVENT_LOG_v1 {"time_micros": 1764404880483137, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.483158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3778865, prev total WAL file size 3778865, number of live WAL files 2.
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.484300) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(3600KB)], [107(10MB)]
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880484366, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 15009701, "oldest_snapshot_seqno": -1}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 8702 keys, 13126961 bytes, temperature: kUnknown
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880563525, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 13126961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13068447, "index_size": 35682, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21765, "raw_key_size": 225574, "raw_average_key_size": 25, "raw_value_size": 12912936, "raw_average_value_size": 1483, "num_data_blocks": 1397, "num_entries": 8702, "num_filter_entries": 8702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764404880, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.563815) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13126961 bytes
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.565154) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.4 rd, 165.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(7.6) write-amplify(3.6) OK, records in: 9230, records dropped: 528 output_compression: NoCompression
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.565173) EVENT_LOG_v1 {"time_micros": 1764404880565164, "job": 64, "event": "compaction_finished", "compaction_time_micros": 79243, "compaction_time_cpu_micros": 37436, "output_level": 6, "num_output_files": 1, "total_output_size": 13126961, "num_input_records": 9230, "num_output_records": 8702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880565903, "job": 64, "event": "table_file_deletion", "file_number": 109}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764404880568184, "job": 64, "event": "table_file_deletion", "file_number": 107}
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.484189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.568273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.568280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.568282) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.568284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:28:00.568286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:28:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 917 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 889 KiB/s rd, 5.1 MiB/s wr, 179 op/s
Nov 29 03:28:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:02.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Nov 29 03:28:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Nov 29 03:28:02 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Nov 29 03:28:02 np0005539550 nova_compute[257631]: 2025-11-29 08:28:02.823 257641 INFO nova.compute.manager [None req-1be76c5c-a022-4a97-ac99-39f56ad79a48 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Get console output#033[00m
Nov 29 03:28:02 np0005539550 nova_compute[257631]: 2025-11-29 08:28:02.830 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:28:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:03.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.663 257641 DEBUG nova.compute.manager [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-changed-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.664 257641 DEBUG nova.compute.manager [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Refreshing instance network info cache due to event network-changed-a85ee38f-1a42-4405-9db6-3b9430481c2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.664 257641 DEBUG oslo_concurrency.lockutils [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.664 257641 DEBUG oslo_concurrency.lockutils [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.665 257641 DEBUG nova.network.neutron [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Refreshing network info cache for port a85ee38f-1a42-4405-9db6-3b9430481c2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.720 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.721 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.721 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.722 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.722 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.724 257641 INFO nova.compute.manager [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Terminating instance#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.725 257641 DEBUG nova.compute.manager [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:28:03 np0005539550 kernel: tapa85ee38f-1a (unregistering): left promiscuous mode
Nov 29 03:28:03 np0005539550 NetworkManager[49039]: <info>  [1764404883.7700] device (tapa85ee38f-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:03Z|00660|binding|INFO|Releasing lport a85ee38f-1a42-4405-9db6-3b9430481c2c from this chassis (sb_readonly=0)
Nov 29 03:28:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:03Z|00661|binding|INFO|Setting lport a85ee38f-1a42-4405-9db6-3b9430481c2c down in Southbound
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.827 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:03Z|00662|binding|INFO|Removing iface tapa85ee38f-1a ovn-installed in OVS
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.830 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:03.834 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:7f:8f 10.100.0.11'], port_security=['fa:16:3e:d1:7f:8f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a97c6d24-5d3c-40d0-a2d9-03e786a12cd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-24a46b48-0082-4428-9f80-5c459b7611cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0839f4fe-1173-477f-af5d-a589b6c94383', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ffd2286-10f4-4db8-b6f5-d8ec11414b0c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a85ee38f-1a42-4405-9db6-3b9430481c2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:03.835 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a85ee38f-1a42-4405-9db6-3b9430481c2c in datapath 24a46b48-0082-4428-9f80-5c459b7611cb unbound from our chassis#033[00m
Nov 29 03:28:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:03.837 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 24a46b48-0082-4428-9f80-5c459b7611cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:28:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:03.838 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fdcd3970-a4a8-4cbc-a123-4fd79936a546]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:03.839 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb namespace which is not needed anymore#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.853 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539550 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Nov 29 03:28:03 np0005539550 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d0000008e.scope: Consumed 14.098s CPU time.
Nov 29 03:28:03 np0005539550 systemd-machined[216673]: Machine qemu-77-instance-0000008e terminated.
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.952 257641 INFO nova.virt.libvirt.driver [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Instance destroyed successfully.#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.952 257641 DEBUG nova.objects.instance [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.966 257641 DEBUG nova.virt.libvirt.vif [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:26:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1118669844',display_name='tempest-TestNetworkAdvancedServerOps-server-1118669844',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1118669844',id=142,image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPG2eAiOSV+krh4MXr8iP8j+dBCGT+k//J/ZPyuE+tL6P1nNcuxcGptaUVBmmJaKLf8QKVBIz4As7JZx61PNWY6qLI1vbGYGzdph76djN/9WJ/AVdkR50U+0UbdGR7HVTA==',key_name='tempest-TestNetworkAdvancedServerOps-1278954502',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-k8t1s0vz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='93eccffb-bacd-407f-af6f-64451dee7b21',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:27:42Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=a97c6d24-5d3c-40d0-a2d9-03e786a12cd8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.966 257641 DEBUG nova.network.os_vif_util [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.967 257641 DEBUG nova.network.os_vif_util [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.967 257641 DEBUG os_vif [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.969 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.970 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa85ee38f-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.972 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.973 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:03 np0005539550 nova_compute[257631]: 2025-11-29 08:28:03.975 257641 INFO os_vif [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:7f:8f,bridge_name='br-int',has_traffic_filtering=True,id=a85ee38f-1a42-4405-9db6-3b9430481c2c,network=Network(24a46b48-0082-4428-9f80-5c459b7611cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa85ee38f-1a')#033[00m
Nov 29 03:28:03 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [NOTICE]   (348535) : haproxy version is 2.8.14-c23fe91
Nov 29 03:28:03 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [NOTICE]   (348535) : path to executable is /usr/sbin/haproxy
Nov 29 03:28:03 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [WARNING]  (348535) : Exiting Master process...
Nov 29 03:28:03 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [WARNING]  (348535) : Exiting Master process...
Nov 29 03:28:03 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [ALERT]    (348535) : Current worker (348537) exited with code 143 (Terminated)
Nov 29 03:28:03 np0005539550 neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb[348531]: [WARNING]  (348535) : All workers exited. Exiting... (0)
Nov 29 03:28:03 np0005539550 systemd[1]: libpod-8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516.scope: Deactivated successfully.
Nov 29 03:28:03 np0005539550 podman[348906]: 2025-11-29 08:28:03.993122638 +0000 UTC m=+0.047825765 container died 8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:28:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516-userdata-shm.mount: Deactivated successfully.
Nov 29 03:28:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f64d2a93e9b73e973e4d4cfc3f7a474ce10d86534fac193b4e5ff5df99bdecb8-merged.mount: Deactivated successfully.
Nov 29 03:28:04 np0005539550 podman[348906]: 2025-11-29 08:28:04.029757609 +0000 UTC m=+0.084460736 container cleanup 8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:28:04 np0005539550 systemd[1]: libpod-conmon-8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516.scope: Deactivated successfully.
Nov 29 03:28:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 906 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 773 KiB/s rd, 4.5 MiB/s wr, 164 op/s
Nov 29 03:28:04 np0005539550 podman[348961]: 2025-11-29 08:28:04.097768126 +0000 UTC m=+0.046169124 container remove 8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.104 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ac96835c-f4a1-4259-a6e6-599f8766060f]: (4, ('Sat Nov 29 08:28:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb (8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516)\n8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516\nSat Nov 29 08:28:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb (8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516)\n8bb0c0ce08cb9ca78c766905bf815ae26cbe21439ff7d85fa5b56584f6ccb516\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.107 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7796b399-0a80-43ed-9ba9-3843dea9a7df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.108 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24a46b48-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:04 np0005539550 kernel: tap24a46b48-00: left promiscuous mode
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.110 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.113 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.116 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e440fb5a-507a-43d9-9e1f-921a11cbcd5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.127 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.132 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a9ce70-ba43-4044-8b7e-bfd8c8338eef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.133 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a9330399-f8c7-45ab-b4b5-5630483fee88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.153 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[25750e6c-378b-4845-b489-7a8afacd07f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 790954, 'reachable_time': 24307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348976, 'error': None, 'target': 'ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.156 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:28:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:04.156 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[79bf7f69-3083-4740-89b5-9b1176f12cae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:04 np0005539550 systemd[1]: run-netns-ovnmeta\x2d24a46b48\x2d0082\x2d4428\x2d9f80\x2d5c459b7611cb.mount: Deactivated successfully.
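[Annotation] The `\x2d` runs in that mount unit name are systemd unit-name escaping (a literal `-` inside the netns name), not corruption. A small self-contained sketch of undoing the escaping, based only on the rules documented in systemd.unit(5):

```python
def systemd_unescape(unit: str) -> str:
    """Reverse systemd unit-name escaping: '-' encodes '/', and '\\xNN'
    encodes the byte 0xNN (so '\\x2d' is a literal '-')."""
    out, i = [], 0
    while i < len(unit):
        if unit.startswith("\\x", i) and i + 4 <= len(unit):
            out.append(chr(int(unit[i + 2:i + 4], 16)))
            i += 4
        elif unit[i] == "-":
            out.append("/")
            i += 1
        else:
            out.append(unit[i])
            i += 1
    return "".join(out)

# -> run/netns/ovnmeta-24a46b48-0082-4428-9f80-5c459b7611cb
# (mount units drop the leading '/' of the path they guard)
print(systemd_unescape(
    r"run-netns-ovnmeta\x2d24a46b48\x2d0082\x2d4428\x2d9f80\x2d5c459b7611cb"))
```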
Nov 29 03:28:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:04.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
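[Annotation] The three radosgw lines are one request: a beast start marker, a completion marker, and an access-log line carrying client address, user, timestamp, request line, HTTP status, byte count, and latency. A sketch of pulling those fields apart, with a regex written against exactly the format shown here (illustrative, not an official parser):

```python
import re

BEAST = re.compile(
    r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<nbytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:28:04.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group("addr"), m.group("req"), m.group("status"), m.group("latency"))
```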
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.261 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.333 257641 DEBUG nova.compute.manager [req-2615bd87-af0d-4835-9739-9b5c8f3e01a3 req-40ac1efd-817a-4f1d-bd29-df820d253579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-unplugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.333 257641 DEBUG oslo_concurrency.lockutils [req-2615bd87-af0d-4835-9739-9b5c8f3e01a3 req-40ac1efd-817a-4f1d-bd29-df820d253579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.333 257641 DEBUG oslo_concurrency.lockutils [req-2615bd87-af0d-4835-9739-9b5c8f3e01a3 req-40ac1efd-817a-4f1d-bd29-df820d253579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.334 257641 DEBUG oslo_concurrency.lockutils [req-2615bd87-af0d-4835-9739-9b5c8f3e01a3 req-40ac1efd-817a-4f1d-bd29-df820d253579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.334 257641 DEBUG nova.compute.manager [req-2615bd87-af0d-4835-9739-9b5c8f3e01a3 req-40ac1efd-817a-4f1d-bd29-df820d253579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] No waiting events found dispatching network-vif-unplugged-a85ee38f-1a42-4405-9db6-3b9430481c2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.334 257641 DEBUG nova.compute.manager [req-2615bd87-af0d-4835-9739-9b5c8f3e01a3 req-40ac1efd-817a-4f1d-bd29-df820d253579 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-unplugged-a85ee38f-1a42-4405-9db6-3b9430481c2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
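[Annotation] The acquire/acquired/released triplets around each external event are oslo.concurrency's named-lock pattern, which nova uses to serialize access to its per-instance event queue. The primitive itself, in a minimal sketch (lock name copied from the log, body illustrative):

```python
from oslo_concurrency import lockutils

LOCK = "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events"

@lockutils.synchronized(LOCK)
def pop_event():
    # Critical section: pop the waiting event for this instance, if any.
    return None

# Equivalent inline form:
with lockutils.lock(LOCK):
    pass
```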
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.392 257641 INFO nova.virt.libvirt.driver [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Deleting instance files /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_del#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.393 257641 INFO nova.virt.libvirt.driver [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Deletion of /var/lib/nova/instances/a97c6d24-5d3c-40d0-a2d9-03e786a12cd8_del complete#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.447 257641 INFO nova.compute.manager [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.448 257641 DEBUG oslo.service.loopingcall [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.448 257641 DEBUG nova.compute.manager [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.448 257641 DEBUG nova.network.neutron [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.942 257641 DEBUG nova.network.neutron [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updated VIF entry in instance network info cache for port a85ee38f-1a42-4405-9db6-3b9430481c2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.943 257641 DEBUG nova.network.neutron [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updating instance_info_cache with network_info: [{"id": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "address": "fa:16:3e:d1:7f:8f", "network": {"id": "24a46b48-0082-4428-9f80-5c459b7611cb", "bridge": "br-int", "label": "tempest-network-smoke--218330487", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa85ee38f-1a", "ovs_interfaceid": "a85ee38f-1a42-4405-9db6-3b9430481c2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:04 np0005539550 nova_compute[257631]: 2025-11-29 08:28:04.959 257641 DEBUG oslo_concurrency.lockutils [req-28e6b832-8925-42a8-b946-332f2ad2961e req-7c472e66-fd76-412d-8f23-e7d7463dd8cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.056 257641 DEBUG nova.network.neutron [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.090 257641 INFO nova.compute.manager [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Took 0.64 seconds to deallocate network for instance.#033[00m
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.142 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.142 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:28:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:05.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.233 257641 DEBUG oslo_concurrency.processutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513449410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.678 257641 DEBUG oslo_concurrency.processutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
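[Annotation] The `ceph df --format=json` round trip above (445 ms here) is how the resource tracker sizes RBD-backed disk inventory. Reproduced standalone; a sketch assuming the same `openstack` client id and conf path, with field names as emitted by recent Ceph releases:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout

df = json.loads(out)
stats = df["stats"]  # cluster-wide byte counters
gib = 1024 ** 3
print("total %.1f GiB, used %.1f GiB, avail %.1f GiB" % (
    stats["total_bytes"] / gib,
    stats["total_used_bytes"] / gib,
    stats["total_avail_bytes"] / gib,
))
```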
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.685 257641 DEBUG nova.compute.provider_tree [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.708 257641 DEBUG nova.scheduler.client.report [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
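[Annotation] Placement derives schedulable capacity from such an inventory as (total - reserved) * allocation_ratio. Plugging in the values logged above:

```python
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```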
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.733 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:05 np0005539550 nova_compute[257631]: 2025-11-29 08:28:05.757 257641 INFO nova.scheduler.client.report [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance a97c6d24-5d3c-40d0-a2d9-03e786a12cd8#033[00m
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.034 257641 DEBUG oslo_concurrency.lockutils [None req-a166f52e-477a-40b3-b23f-bed922de39eb fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 760 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 557 KiB/s rd, 905 KiB/s wr, 150 op/s
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.082 257641 DEBUG nova.compute.manager [req-f4aca9dc-a398-4499-b137-934d00168990 req-47556e7b-db9d-4df6-9ad8-ecd2a41bed32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-deleted-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:06.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.428 257641 DEBUG nova.compute.manager [req-9202c0df-a3dd-4613-81e3-c54e96bfd62d req-30a323ac-f393-4b31-94f2-6096c220d027 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.428 257641 DEBUG oslo_concurrency.lockutils [req-9202c0df-a3dd-4613-81e3-c54e96bfd62d req-30a323ac-f393-4b31-94f2-6096c220d027 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.429 257641 DEBUG oslo_concurrency.lockutils [req-9202c0df-a3dd-4613-81e3-c54e96bfd62d req-30a323ac-f393-4b31-94f2-6096c220d027 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.429 257641 DEBUG oslo_concurrency.lockutils [req-9202c0df-a3dd-4613-81e3-c54e96bfd62d req-30a323ac-f393-4b31-94f2-6096c220d027 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a97c6d24-5d3c-40d0-a2d9-03e786a12cd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.429 257641 DEBUG nova.compute.manager [req-9202c0df-a3dd-4613-81e3-c54e96bfd62d req-30a323ac-f393-4b31-94f2-6096c220d027 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] No waiting events found dispatching network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:06 np0005539550 nova_compute[257631]: 2025-11-29 08:28:06.429 257641 WARNING nova.compute.manager [req-9202c0df-a3dd-4613-81e3-c54e96bfd62d req-30a323ac-f393-4b31-94f2-6096c220d027 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Received unexpected event network-vif-plugged-a85ee38f-1a42-4405-9db6-3b9430481c2c for instance with vm_state deleted and task_state None.#033[00m
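[Annotation] This WARNING appears to be the usual teardown race: the vif-plugged notification arrived after the instance had already reached vm_state deleted, so nova drops it. To pull only such warnings out of a journal this dense, the python-systemd bindings can be used; a sketch assuming they are installed:

```python
from systemd import journal

j = journal.Reader()
j.add_match(SYSLOG_IDENTIFIER="nova_compute")
for entry in j:
    msg = entry.get("MESSAGE", "")
    if "WARNING" in msg and "Received unexpected event" in msg:
        print(entry["__REALTIME_TIMESTAMP"], msg)
```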
Nov 29 03:28:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Nov 29 03:28:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Nov 29 03:28:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Nov 29 03:28:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 760 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 143 KiB/s wr, 112 op/s
Nov 29 03:28:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:28:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:08.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:28:08 np0005539550 nova_compute[257631]: 2025-11-29 08:28:08.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:28:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:28:08 np0005539550 podman[349003]: 2025-11-29 08:28:08.335631886 +0000 UTC m=+0.057828730 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:08 np0005539550 podman[349002]: 2025-11-29 08:28:08.33735176 +0000 UTC m=+0.064792807 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
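[Annotation] Both health_status records come from podman's built-in healthchecks; the `healthcheck` block inside `config_data` (a mount plus `test: /openstack/healthcheck`) is what gets turned into podman's `--health-cmd` machinery. An illustrative launch and a one-shot probe; the interval value is an assumption, and the real containers also need the network mode, privileges, and volume list recorded in the log entry above:

```python
import subprocess

image = ("quay.io/podified-antelope-centos9/"
         "openstack-neutron-metadata-agent-ovn:current-podified")

# Illustrative only: not a working ovn_metadata_agent deployment.
subprocess.run([
    "podman", "run", "-d", "--name", "ovn_metadata_agent_demo",
    "--health-cmd", "/openstack/healthcheck",
    "--health-interval", "30s",   # assumed; not shown in the log
    image,
], check=True)

# One-shot probe; exit status 0 corresponds to health_status=healthy.
subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent_demo"],
               check=True)
```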
Nov 29 03:28:08 np0005539550 nova_compute[257631]: 2025-11-29 08:28:08.972 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:28:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:28:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:28:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:28:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:28:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:09.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.028 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.029 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.029 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.029 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.029 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.030 257641 INFO nova.compute.manager [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Terminating instance#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.031 257641 DEBUG nova.compute.manager [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:28:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 709 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 1.5 MiB/s wr, 158 op/s
Nov 29 03:28:10 np0005539550 kernel: tapc5532a42-8b (unregistering): left promiscuous mode
Nov 29 03:28:10 np0005539550 NetworkManager[49039]: <info>  [1764404890.0855] device (tapc5532a42-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:10Z|00663|binding|INFO|Releasing lport c5532a42-8b51-4dea-bf3e-e272409f89f4 from this chassis (sb_readonly=0)
Nov 29 03:28:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:10Z|00664|binding|INFO|Setting lport c5532a42-8b51-4dea-bf3e-e272409f89f4 down in Southbound
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.095 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:10Z|00665|binding|INFO|Removing iface tapc5532a42-8b ovn-installed in OVS
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.097 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.102 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:bd:45 10.100.0.10'], port_security=['fa:16:3e:ae:bd:45 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '346e849d-fa61-4451-b34c-d6165fea3aa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532a42-8b51-4dea-bf3e-e272409f89f4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.103 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532a42-8b51-4dea-bf3e-e272409f89f4 in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.105 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445#033[00m
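[Annotation] The "Releasing lport / Setting lport down" pair from ovn_controller and the agent's "unbound from our chassis" reaction are all views of the same Southbound Port_Binding row. Its state can be checked by hand; a sketch assuming `ovn-sbctl` on this host can reach the southbound database:

```python
import subprocess

lport = "c5532a42-8b51-4dea-bf3e-e272409f89f4"
out = subprocess.run(
    ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
    capture_output=True, text=True, check=True,
).stdout
# An empty "chassis" column and up=[false] match the release logged above.
print(out)
```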
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.117 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.119 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[72d5438c-9f62-4cc4-828f-96c5bf75e5a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:10 np0005539550 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000085.scope: Deactivated successfully.
Nov 29 03:28:10 np0005539550 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000085.scope: Consumed 20.483s CPU time.
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.150 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[67f755fc-abca-4c8d-9b70-736f0aca6759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:10 np0005539550 systemd-machined[216673]: Machine qemu-73-instance-00000085 terminated.
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.153 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[716eeb7e-6b4f-4bb4-9b1e-445709f98eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:10.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.179 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[64dbabec-a0cb-4604-ab67-b016a5f9c4a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.194 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a05fa65-01e5-46fd-ac85-5b49b1bd6f4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5da19f7d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:18:8e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772792, 'reachable_time': 17116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349050, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.208 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c50bc566-f0c3-4b54-a1dd-69084e5ecda5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772805, 'tstamp': 772805}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349051, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5da19f7d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 772807, 'tstamp': 772807}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349051, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
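[Annotation] The two RTM_NEWADDR payloads above are the agent confirming, over pyroute2 inside the ovnmeta namespace, that the metadata tap carries both 169.254.169.254/32 and the subnet address 10.100.0.2/28. The same query can be made standalone; a sketch assuming pyroute2 is installed and the namespace still exists:

```python
from pyroute2 import NetNS

ns_name = "ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445"
with NetNS(ns_name) as ns:
    # Resolve the interface index, then list its addresses; this mirrors
    # the RTM_NEWADDR messages in the log.
    idx = ns.link_lookup(ifname="tap5da19f7d-31")[0]
    for addr in ns.get_addr(index=idx):
        attrs = dict(addr["attrs"])
        print(attrs["IFA_ADDRESS"], addr["prefixlen"])
```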
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.210 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.246 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.252 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.253 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5da19f7d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.253 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.253 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5da19f7d-30, col_values=(('external_ids', {'iface-id': 'd4f0104e-3913-4399-9086-37cf4d16e7c7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:10.254 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
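[Annotation] The DbSetCommand above writes `external_ids:iface-id` on the tap interface; that key is what ovn-controller matches against Southbound logical ports when claiming an interface. The equivalent write with the stock CLI, for reference:

```python
import subprocess

# Same write as the logged DbSetCommand, via ovs-vsctl.
subprocess.run([
    "ovs-vsctl", "set", "Interface", "tap5da19f7d-30",
    "external_ids:iface-id=d4f0104e-3913-4399-9086-37cf4d16e7c7",
], check=True)

# Read it back to verify.
subprocess.run([
    "ovs-vsctl", "get", "Interface", "tap5da19f7d-30",
    "external_ids:iface-id",
], check=True)
```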
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.261 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.265 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.280 257641 INFO nova.virt.libvirt.driver [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Instance destroyed successfully.#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.280 257641 DEBUG nova.objects.instance [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid 346e849d-fa61-4451-b34c-d6165fea3aa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.300 257641 DEBUG nova.virt.libvirt.vif [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:24:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-1614242741',display_name='tempest-ServerStableDeviceRescueTest-server-1614242741',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-1614242741',id=133,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:25:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-v7xvcml6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:25:23Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=346e849d-fa61-4451-b34c-d6165fea3aa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.300 257641 DEBUG nova.network.os_vif_util [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "address": "fa:16:3e:ae:bd:45", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532a42-8b", "ovs_interfaceid": "c5532a42-8b51-4dea-bf3e-e272409f89f4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.301 257641 DEBUG nova.network.os_vif_util [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.301 257641 DEBUG os_vif [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.303 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.303 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5532a42-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.304 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.306 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.308 257641 INFO os_vif [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:bd:45,bridge_name='br-int',has_traffic_filtering=True,id=c5532a42-8b51-4dea-bf3e-e272409f89f4,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532a42-8b')#033[00m
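[Annotation] The unplug sequence above is the os-vif entry point at work: nova converts its own VIF dict to an os-vif object (nova_to_osvif_vif), then the ovs plugin removes the tap port from br-int via an ovsdbapp DelPortCommand, equivalent to `ovs-vsctl --if-exists del-port br-int tapc5532a42-8b`. A minimal sketch of the same call, with object fields trimmed to those shown in the log (the InstanceInfo name is assumed from nova's default instance-%08x template for id=133):

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the os-vif plugins (ovs, linux_bridge, ...)
    vif = VIFOpenVSwitch(id='c5532a42-8b51-4dea-bf3e-e272409f89f4',
                         address='fa:16:3e:ae:bd:45',
                         bridge_name='br-int',
                         vif_name='tapc5532a42-8b')
    info = InstanceInfo(uuid='346e849d-fa61-4451-b34c-d6165fea3aa4',
                        name='instance-00000085')  # assumed: instance-%08x for id=133
    # ovs plugin issues DelPortCommand(port=tapc5532a42-8b, bridge=br-int, if_exists=True)
    os_vif.unplug(vif, info)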
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.338 257641 DEBUG nova.compute.manager [req-2a70164f-e85f-4a55-be71-6c94527503bc req-5d05a6e5-f69b-400f-a4a9-20cf556557a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.339 257641 DEBUG oslo_concurrency.lockutils [req-2a70164f-e85f-4a55-be71-6c94527503bc req-5d05a6e5-f69b-400f-a4a9-20cf556557a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.339 257641 DEBUG oslo_concurrency.lockutils [req-2a70164f-e85f-4a55-be71-6c94527503bc req-5d05a6e5-f69b-400f-a4a9-20cf556557a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.339 257641 DEBUG oslo_concurrency.lockutils [req-2a70164f-e85f-4a55-be71-6c94527503bc req-5d05a6e5-f69b-400f-a4a9-20cf556557a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.339 257641 DEBUG nova.compute.manager [req-2a70164f-e85f-4a55-be71-6c94527503bc req-5d05a6e5-f69b-400f-a4a9-20cf556557a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:10 np0005539550 nova_compute[257631]: 2025-11-29 08:28:10.340 257641 DEBUG nova.compute.manager [req-2a70164f-e85f-4a55-be71-6c94527503bc req-5d05a6e5-f69b-400f-a4a9-20cf556557a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-unplugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
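[Annotation] The Acquiring/acquired/"released" triple above is oslo.concurrency's in-process lock logging around the event pop. Schematically the guard looks like this (a sketch of the pattern, not nova's exact code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('346e849d-fa61-4451-b34c-d6165fea3aa4-events')
    def _pop_event():
        # Look up and remove a waiter registered for this (instance, event) pair.
        # Here no waiter exists, so the manager logs "No waiting events found"
        # and routes the event straight to _process_instance_event instead.
        return None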
Nov 29 03:28:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:11.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.528 257641 INFO nova.virt.libvirt.driver [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Deleting instance files /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4_del#033[00m
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.529 257641 INFO nova.virt.libvirt.driver [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Deletion of /var/lib/nova/instances/346e849d-fa61-4451-b34c-d6165fea3aa4_del complete#033[00m
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.597 257641 INFO nova.compute.manager [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.597 257641 DEBUG oslo.service.loopingcall [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.598 257641 DEBUG nova.compute.manager [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.598 257641 DEBUG nova.network.neutron [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:28:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:11Z|00666|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:28:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:11Z|00667|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.750 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:11Z|00668|binding|INFO|Releasing lport d4f0104e-3913-4399-9086-37cf4d16e7c7 from this chassis (sb_readonly=0)
Nov 29 03:28:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:11Z|00669|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:28:11 np0005539550 nova_compute[257631]: 2025-11-29 08:28:11.924 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 703 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 1.7 MiB/s wr, 139 op/s
Nov 29 03:28:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:12.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:12 np0005539550 nova_compute[257631]: 2025-11-29 08:28:12.467 257641 DEBUG nova.compute.manager [req-6032536e-6c9e-4a39-8bfd-8e13b4dc074a req-2615f3f0-7201-4dc3-b83f-827c520a5074 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:12 np0005539550 nova_compute[257631]: 2025-11-29 08:28:12.468 257641 DEBUG oslo_concurrency.lockutils [req-6032536e-6c9e-4a39-8bfd-8e13b4dc074a req-2615f3f0-7201-4dc3-b83f-827c520a5074 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:12 np0005539550 nova_compute[257631]: 2025-11-29 08:28:12.468 257641 DEBUG oslo_concurrency.lockutils [req-6032536e-6c9e-4a39-8bfd-8e13b4dc074a req-2615f3f0-7201-4dc3-b83f-827c520a5074 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:12 np0005539550 nova_compute[257631]: 2025-11-29 08:28:12.469 257641 DEBUG oslo_concurrency.lockutils [req-6032536e-6c9e-4a39-8bfd-8e13b4dc074a req-2615f3f0-7201-4dc3-b83f-827c520a5074 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:12 np0005539550 nova_compute[257631]: 2025-11-29 08:28:12.469 257641 DEBUG nova.compute.manager [req-6032536e-6c9e-4a39-8bfd-8e13b4dc074a req-2615f3f0-7201-4dc3-b83f-827c520a5074 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] No waiting events found dispatching network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:12 np0005539550 nova_compute[257631]: 2025-11-29 08:28:12.469 257641 WARNING nova.compute.manager [req-6032536e-6c9e-4a39-8bfd-8e13b4dc074a req-2615f3f0-7201-4dc3-b83f-827c520a5074 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received unexpected event network-vif-plugged-c5532a42-8b51-4dea-bf3e-e272409f89f4 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:28:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:28:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:28:13 np0005539550 nova_compute[257631]: 2025-11-29 08:28:13.302 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:13 np0005539550 nova_compute[257631]: 2025-11-29 08:28:13.604 257641 DEBUG nova.network.neutron [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:13 np0005539550 nova_compute[257631]: 2025-11-29 08:28:13.633 257641 INFO nova.compute.manager [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Took 2.03 seconds to deallocate network for instance.#033[00m
Nov 29 03:28:13 np0005539550 nova_compute[257631]: 2025-11-29 08:28:13.678 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:13 np0005539550 nova_compute[257631]: 2025-11-29 08:28:13.678 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:13 np0005539550 nova_compute[257631]: 2025-11-29 08:28:13.696 257641 DEBUG nova.compute.manager [req-f3f0933b-c0b7-454f-a621-98e607e50457 req-b7d88d8f-24ea-4a22-80ea-41ad6b177fc4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Received event network-vif-deleted-c5532a42-8b51-4dea-bf3e-e272409f89f4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:13 np0005539550 nova_compute[257631]: 2025-11-29 08:28:13.851 257641 DEBUG oslo_concurrency.processutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 676 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 103 KiB/s rd, 2.2 MiB/s wr, 157 op/s
Nov 29 03:28:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:14.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4272636472' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:14 np0005539550 nova_compute[257631]: 2025-11-29 08:28:14.280 257641 DEBUG oslo_concurrency.processutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
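[Annotation] The resource tracker sizes RBD-backed storage by shelling out to the ceph CLI, as logged above (and mirrored by the ceph-mon audit line). A sketch of the same probe via oslo.concurrency, assuming the client id and conf path from the log and a reachable cluster:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute('ceph', 'df', '--format=json',
                                     '--id', 'openstack',
                                     '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals; per-pool figures live under stats['pools'].
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])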
Nov 29 03:28:14 np0005539550 nova_compute[257631]: 2025-11-29 08:28:14.286 257641 DEBUG nova.compute.provider_tree [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:14 np0005539550 nova_compute[257631]: 2025-11-29 08:28:14.319 257641 DEBUG nova.scheduler.client.report [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
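[Annotation] Placement derives usable capacity from each inventory record as (total - reserved) * allocation_ratio, so the unchanged inventory above still advertises 32 VCPUs, 7168 MB of RAM and about 17 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1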
Nov 29 03:28:14 np0005539550 nova_compute[257631]: 2025-11-29 08:28:14.346 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:14 np0005539550 nova_compute[257631]: 2025-11-29 08:28:14.376 257641 INFO nova.scheduler.client.report [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Deleted allocations for instance 346e849d-fa61-4451-b34c-d6165fea3aa4#033[00m
Nov 29 03:28:14 np0005539550 nova_compute[257631]: 2025-11-29 08:28:14.458 257641 DEBUG oslo_concurrency.lockutils [None req-353dc7de-50ec-46f9-b249-f129ad3571f6 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "346e849d-fa61-4451-b34c-d6165fea3aa4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.429s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:15.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:15 np0005539550 nova_compute[257631]: 2025-11-29 08:28:15.305 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:15 np0005539550 nova_compute[257631]: 2025-11-29 08:28:15.308 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
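[Annotation] "Instance in state 1 after 21 seconds" means the guest (power state 1 = RUNNING) ignored the first ACPI shutdown, so _clean_shutdown re-issues it and keeps polling until the timeout lapses. Schematically, against the libvirt bindings (a sketch of the retry loop, not nova's exact code; timeout and retry_interval are illustrative):

    import time

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        dom.shutdown()                   # first ACPI shutdown request
        for elapsed in range(1, timeout + 1):
            time.sleep(1)
            if not dom.isActive():       # guest powered off cleanly
                return True
            if elapsed % retry_interval == 0:
                dom.shutdown()           # "resending shutdown", as logged above
        return False                     # caller falls back to a hard destroy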
Nov 29 03:28:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 629 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 2.2 MiB/s wr, 122 op/s
Nov 29 03:28:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:16.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:17.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Nov 29 03:28:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 629 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 2.0 MiB/s wr, 115 op/s
Nov 29 03:28:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:18.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Nov 29 03:28:18 np0005539550 nova_compute[257631]: 2025-11-29 08:28:18.305 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Nov 29 03:28:18 np0005539550 nova_compute[257631]: 2025-11-29 08:28:18.952 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404883.9502985, a97c6d24-5d3c-40d0-a2d9-03e786a12cd8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:18 np0005539550 nova_compute[257631]: 2025-11-29 08:28:18.953 257641 INFO nova.compute.manager [-] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:28:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:18.960 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:18.960 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:18.961 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:18 np0005539550 nova_compute[257631]: 2025-11-29 08:28:18.980 257641 DEBUG nova.compute.manager [None req-a04d5e1a-78b8-4e53-8f02-7da76fc70426 - - - - - -] [instance: a97c6d24-5d3c-40d0-a2d9-03e786a12cd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:19.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Nov 29 03:28:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Nov 29 03:28:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 621 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 801 KiB/s wr, 152 op/s
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01285670976409827 of space, bias 1.0, pg target 3.857012929229481 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.000377865891959799 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0038337302833612507 of space, bias 1.0, pg target 1.1386178941582914 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:28:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
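[Annotation] Each pg_autoscaler line above is the pool's share of the root's raw capacity (the 22535995392 in "effective_target_ratio" is the root's byte total, ~21 GiB, matching the pgmap lines) times its bias, times the root's PG budget; with 3 OSDs and the default mon_target_pg_per_osd of 100, the budget is about 300. The result is then quantized to a power of two and floored at each pool's effective minimum (1 for .mgr, 16 for the cephfs metadata pool, 32 elsewhere here), which is why every "quantized" value matches "current". Reproducing the 'vms' line under those assumptions:

    capacity_ratio = 0.01285670976409827   # "using ... of space"
    bias = 1.0
    pg_budget = 3 * 100                    # 3 OSDs x mon_target_pg_per_osd (default 100)
    print(capacity_ratio * bias * pg_budget)   # 3.857012929229481, as logged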
Nov 29 03:28:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:20.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.273 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.274 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.274 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.274 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.275 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.275 257641 INFO nova.compute.manager [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Terminating instance#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.276 257641 DEBUG nova.compute.manager [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.308 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539550 kernel: tap0d1cf0d1-b3 (unregistering): left promiscuous mode
Nov 29 03:28:20 np0005539550 NetworkManager[49039]: <info>  [1764404900.3339] device (tap0d1cf0d1-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:20Z|00670|binding|INFO|Releasing lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b from this chassis (sb_readonly=0)
Nov 29 03:28:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:20Z|00671|binding|INFO|Setting lport 0d1cf0d1-b379-4a62-8413-831aa8ff906b down in Southbound
Nov 29 03:28:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:20Z|00672|binding|INFO|Removing iface tap0d1cf0d1-b3 ovn-installed in OVS
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.350 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539550 podman[349112]: 2025-11-29 08:28:20.352254577 +0000 UTC m=+0.092614453 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.356 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:e3:35 10.100.0.5'], port_security=['fa:16:3e:8e:e3:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4702f4ee-458d-4146-b9b2-70ecf718176c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a9a83f8d8d7f4d08890407f978c05166', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1d1bf0bb-aa3c-4461-8a1e-ba1daa172e77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5d0d36bf-5f41-4d6e-9e1b-1a2b5a9220ce, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=0d1cf0d1-b379-4a62-8413-831aa8ff906b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.358 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 0d1cf0d1-b379-4a62-8413-831aa8ff906b in datapath 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 unbound from our chassis#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.359 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.360 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fed6f999-3bf7-4511-8e6c-b04d608ad8e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.361 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 namespace which is not needed anymore#033[00m
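[Annotation] The metadata agent reacted to that Port_Binding change through an ovsdbapp row event: the row's chassis column emptied out relative to old=Port_Binding(up=[True], chassis=[...]), so the port is "unbound from our chassis" and the network's ovnmeta namespace becomes garbage. A minimal sketch of such a watcher (the handler body is illustrative, not neutron's exact code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches the logged repr: events=('update',), table='Port_Binding'
            super().__init__(('update',), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the prior column values; an emptied chassis
            # column means the port left this chassis.
            if getattr(old, 'chassis', None) and not row.chassis:
                print('Port %s unbound from our chassis' % row.logical_port)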
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.363 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539550 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000081.scope: Deactivated successfully.
Nov 29 03:28:20 np0005539550 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d00000081.scope: Consumed 22.779s CPU time.
Nov 29 03:28:20 np0005539550 systemd-machined[216673]: Machine qemu-70-instance-00000081 terminated.
Nov 29 03:28:20 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[341349]: [NOTICE]   (341353) : haproxy version is 2.8.14-c23fe91
Nov 29 03:28:20 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[341349]: [NOTICE]   (341353) : path to executable is /usr/sbin/haproxy
Nov 29 03:28:20 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[341349]: [WARNING]  (341353) : Exiting Master process...
Nov 29 03:28:20 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[341349]: [ALERT]    (341353) : Current worker (341355) exited with code 143 (Terminated)
Nov 29 03:28:20 np0005539550 neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445[341349]: [WARNING]  (341353) : All workers exited. Exiting... (0)
Nov 29 03:28:20 np0005539550 systemd[1]: libpod-81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef.scope: Deactivated successfully.
Nov 29 03:28:20 np0005539550 podman[349161]: 2025-11-29 08:28:20.493620128 +0000 UTC m=+0.041657639 container died 81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:28:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef-userdata-shm.mount: Deactivated successfully.
Nov 29 03:28:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5fa2cd0b888db859ab32c2082a68ded59b92cd3c001d1af44de99bb318f45792-merged.mount: Deactivated successfully.
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.524 257641 INFO nova.virt.libvirt.driver [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Instance destroyed successfully.#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.525 257641 DEBUG nova.objects.instance [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lazy-loading 'resources' on Instance uuid 4702f4ee-458d-4146-b9b2-70ecf718176c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:20 np0005539550 podman[349161]: 2025-11-29 08:28:20.530828722 +0000 UTC m=+0.078866233 container cleanup 81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.540 257641 DEBUG nova.virt.libvirt.vif [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:23:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-2014288875',display_name='tempest-ServerStableDeviceRescueTest-server-2014288875',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-2014288875',id=129,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:24:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a9a83f8d8d7f4d08890407f978c05166',ramdisk_id='',reservation_id='r-t1kduh0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-507673154',owner_user_name='tempest-ServerStableDeviceRescueTest-507673154-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:24:42Z,user_data=None,user_id='873186539acb4bf9b90513e0e1beb56f',uuid=4702f4ee-458d-4146-b9b2-70ecf718176c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.540 257641 DEBUG nova.network.os_vif_util [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converting VIF {"id": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "address": "fa:16:3e:8e:e3:35", "network": {"id": "5da19f7d-3aa0-41e7-88b0-b9ef17fa4445", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-18499305-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a9a83f8d8d7f4d08890407f978c05166", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d1cf0d1-b3", "ovs_interfaceid": "0d1cf0d1-b379-4a62-8413-831aa8ff906b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.541 257641 DEBUG nova.network.os_vif_util [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.542 257641 DEBUG os_vif [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.544 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.544 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d1cf0d1-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.547 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.548 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:28:20 np0005539550 systemd[1]: libpod-conmon-81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef.scope: Deactivated successfully.
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.550 257641 INFO os_vif [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8e:e3:35,bridge_name='br-int',has_traffic_filtering=True,id=0d1cf0d1-b379-4a62-8413-831aa8ff906b,network=Network(5da19f7d-3aa0-41e7-88b0-b9ef17fa4445),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d1cf0d1-b3')#033[00m
Nov 29 03:28:20 np0005539550 podman[349199]: 2025-11-29 08:28:20.59175731 +0000 UTC m=+0.038894469 container remove 81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.599 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e7614e00-6bb5-40d2-a396-6f9fb9b56f76]: (4, ('Sat Nov 29 08:28:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef)\n81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef\nSat Nov 29 08:28:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 (81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef)\n81fe43b93b6820050a7ce2a0e1ca5f9ebe65d490297582daccea03e2705ccfef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.600 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbfbc73-53bf-496d-894a-b29b52312c13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.601 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5da19f7d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:20 np0005539550 kernel: tap5da19f7d-30: left promiscuous mode
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.618 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.621 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[79e481e0-d1b8-4868-a0e4-851d6e63bf36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.635 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[215e5c4d-f0e3-4abb-ab3c-0c8f08183d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.636 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e65aafce-f8e7-4e00-a863-7d1427bf5db6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.643 257641 DEBUG nova.compute.manager [req-37d02c40-a93a-4d29-8631-fa7f91332571 req-fa18f07c-fad3-4efb-a7e3-9ff230570132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.643 257641 DEBUG oslo_concurrency.lockutils [req-37d02c40-a93a-4d29-8631-fa7f91332571 req-fa18f07c-fad3-4efb-a7e3-9ff230570132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.643 257641 DEBUG oslo_concurrency.lockutils [req-37d02c40-a93a-4d29-8631-fa7f91332571 req-fa18f07c-fad3-4efb-a7e3-9ff230570132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.644 257641 DEBUG oslo_concurrency.lockutils [req-37d02c40-a93a-4d29-8631-fa7f91332571 req-fa18f07c-fad3-4efb-a7e3-9ff230570132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.644 257641 DEBUG nova.compute.manager [req-37d02c40-a93a-4d29-8631-fa7f91332571 req-fa18f07c-fad3-4efb-a7e3-9ff230570132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:20 np0005539550 nova_compute[257631]: 2025-11-29 08:28:20.644 257641 DEBUG nova.compute.manager [req-37d02c40-a93a-4d29-8631-fa7f91332571 req-fa18f07c-fad3-4efb-a7e3-9ff230570132 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-unplugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.654 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfe709a-a33b-4b3c-ba25-a062235d7e1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 772785, 'reachable_time': 19270, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349233, 'error': None, 'target': 'ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.657 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5da19f7d-3aa0-41e7-88b0-b9ef17fa4445 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:28:20 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:20.657 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[8445156d-27dc-4abe-9eaa-e7b95fe51665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:20 np0005539550 systemd[1]: run-netns-ovnmeta\x2d5da19f7d\x2d3aa0\x2d41e7\x2d88b0\x2db9ef17fa4445.mount: Deactivated successfully.
Nov 29 03:28:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:21.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
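Annotation: the anonymous "HEAD / HTTP/1.0" requests recur roughly once per second, alternating between 192.168.122.100 and 192.168.122.102, which is the usual signature of load-balancer health checks against radosgw. A probe of this kind takes a few lines to reproduce; the rgw frontend host and port are not recorded in this excerpt, so both values below are placeholders:

    # Health-probe sketch; host and port are placeholders, not from the log.
    # (http.client speaks HTTP/1.1 while the balancer here sends HTTP/1.0;
    # the probe semantics are the same.)
    import http.client

    conn = http.client.HTTPConnection('localhost', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 matches "req done ... http_status=200"
    conn.close()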
Nov 29 03:28:21 np0005539550 nova_compute[257631]: 2025-11-29 08:28:21.498 257641 INFO nova.virt.libvirt.driver [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Deleting instance files /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c_del#033[00m
Nov 29 03:28:21 np0005539550 nova_compute[257631]: 2025-11-29 08:28:21.499 257641 INFO nova.virt.libvirt.driver [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Deletion of /var/lib/nova/instances/4702f4ee-458d-4146-b9b2-70ecf718176c_del complete#033[00m
Nov 29 03:28:21 np0005539550 nova_compute[257631]: 2025-11-29 08:28:21.576 257641 INFO nova.compute.manager [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:28:21 np0005539550 nova_compute[257631]: 2025-11-29 08:28:21.577 257641 DEBUG oslo.service.loopingcall [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:28:21 np0005539550 nova_compute[257631]: 2025-11-29 08:28:21.577 257641 DEBUG nova.compute.manager [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:28:21 np0005539550 nova_compute[257631]: 2025-11-29 08:28:21.577 257641 DEBUG nova.network.neutron [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
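Annotation: the "Waiting for function ... _deallocate_network_with_retries to return" line is oslo.service's looping-call machinery driving the network teardown with retries. A stripped-down sketch of the pattern, using FixedIntervalLoopingCall for brevity (nova's actual helper retries with backoff); _deallocate() here is a hypothetical stand-in:

    # Sketch of the oslo.service looping-call pattern behind the log line.
    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _deallocate():
        attempts['n'] += 1
        if attempts['n'] >= 3:  # pretend the third try succeeds
            raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate)
    result = timer.start(interval=1.0).wait()  # blocks, like "Waiting for function"
    print('deallocated:', result)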
Nov 29 03:28:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 576 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 27 KiB/s wr, 142 op/s
Nov 29 03:28:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:22.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.270 257641 DEBUG nova.network.neutron [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.291 257641 INFO nova.compute.manager [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Took 0.71 seconds to deallocate network for instance.#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.339 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.340 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.350 257641 DEBUG nova.compute.manager [req-6e78b56c-5e02-47d4-97dc-f6492d78193b req-39b29e36-755c-459a-a213-b5621b709161 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-deleted-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.458 257641 DEBUG oslo_concurrency.processutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69910893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.922 257641 DEBUG oslo_concurrency.processutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
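Annotation: the `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` subprocess is how the RBD-backed driver samples cluster capacity while updating resource usage (the matching mon_command dispatch shows up in the ceph-mon audit lines). A standalone sketch of the same call; it assumes the ceph CLI and the client.openstack keyring are present, as they evidently are on this host:

    # Re-run the command from the log and pull the cluster totals out of it.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free of "
          f"{stats['total_bytes'] / 2**30:.1f} GiB")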
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.928 257641 DEBUG nova.compute.provider_tree [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.962 257641 DEBUG nova.compute.manager [req-ce8d9a27-d843-4228-8150-3d7acd0c4448 req-2403f31c-fce2-4580-9740-44ff347d3822 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.962 257641 DEBUG oslo_concurrency.lockutils [req-ce8d9a27-d843-4228-8150-3d7acd0c4448 req-2403f31c-fce2-4580-9740-44ff347d3822 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.963 257641 DEBUG oslo_concurrency.lockutils [req-ce8d9a27-d843-4228-8150-3d7acd0c4448 req-2403f31c-fce2-4580-9740-44ff347d3822 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.963 257641 DEBUG oslo_concurrency.lockutils [req-ce8d9a27-d843-4228-8150-3d7acd0c4448 req-2403f31c-fce2-4580-9740-44ff347d3822 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.963 257641 DEBUG nova.compute.manager [req-ce8d9a27-d843-4228-8150-3d7acd0c4448 req-2403f31c-fce2-4580-9740-44ff347d3822 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] No waiting events found dispatching network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.963 257641 WARNING nova.compute.manager [req-ce8d9a27-d843-4228-8150-3d7acd0c4448 req-2403f31c-fce2-4580-9740-44ff347d3822 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Received unexpected event network-vif-plugged-0d1cf0d1-b379-4a62-8413-831aa8ff906b for instance with vm_state deleted and task_state None.#033[00m
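Annotation: the Acquiring/acquired/released triples around "compute_resources" and the per-instance "-events" lock are oslo.concurrency's standard trace output; nova wraps each critical section with lockutils.synchronized. A sketch of the pattern (lock name taken from the log, function body illustrative):

    # The decorator produces exactly the acquire/release DEBUG lines seen above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # the real method adjusts the resource tracker's accounting
        print('inside the compute_resources critical section')

    update_usage()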
Nov 29 03:28:22 np0005539550 nova_compute[257631]: 2025-11-29 08:28:22.981 257641 DEBUG nova.scheduler.client.report [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:23 np0005539550 nova_compute[257631]: 2025-11-29 08:28:23.003 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:23 np0005539550 nova_compute[257631]: 2025-11-29 08:28:23.049 257641 INFO nova.scheduler.client.report [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Deleted allocations for instance 4702f4ee-458d-4146-b9b2-70ecf718176c#033[00m
Nov 29 03:28:23 np0005539550 nova_compute[257631]: 2025-11-29 08:28:23.133 257641 DEBUG oslo_concurrency.lockutils [None req-4ad67eb1-d101-43a0-a8f8-30244e2d3023 873186539acb4bf9b90513e0e1beb56f a9a83f8d8d7f4d08890407f978c05166 - - default default] Lock "4702f4ee-458d-4146-b9b2-70ecf718176c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
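Annotation: the inventory payload logged at 08:28:22.981 fixes the numbers the scheduler works with; placement treats capacity per resource class as (total - reserved) * allocation_ratio. Recomputing from the dict in the log:

    # Schedulable capacity from the inventory dict quoted above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(cap, 2))  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1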
Nov 29 03:28:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:23.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:23 np0005539550 nova_compute[257631]: 2025-11-29 08:28:23.308 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 549 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 KiB/s wr, 164 op/s
Nov 29 03:28:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:24.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:25 np0005539550 nova_compute[257631]: 2025-11-29 08:28:25.279 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404890.278619, 346e849d-fa61-4451-b34c-d6165fea3aa4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:25 np0005539550 nova_compute[257631]: 2025-11-29 08:28:25.280 257641 INFO nova.compute.manager [-] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:28:25 np0005539550 nova_compute[257631]: 2025-11-29 08:28:25.308 257641 DEBUG nova.compute.manager [None req-eebd3484-1325-4f71-a2f0-05da9c283fd3 - - - - - -] [instance: 346e849d-fa61-4451-b34c-d6165fea3aa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:25 np0005539550 nova_compute[257631]: 2025-11-29 08:28:25.549 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.7 KiB/s wr, 308 op/s
Nov 29 03:28:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:26.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:26 np0005539550 nova_compute[257631]: 2025-11-29 08:28:26.365 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
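Annotation: "Instance in state 1 after 32 seconds - resending shutdown" is nova's clean-shutdown loop: state 1 is libvirt's VIR_DOMAIN_RUNNING, and the guest is sent another ACPI shutdown request until it stops or the retry budget runs out (the same instance reappears below at 43 seconds). A condensed sketch of such a loop with the libvirt bindings; the domain name is a placeholder:

    # Condensed clean-shutdown retry loop; nova's version also honors
    # per-instance timeouts.  'instance-000000xx' is a placeholder name.
    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-000000xx')

    for _ in range(12):  # retry budget
        state, _reason = dom.state()
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            print('guest stopped cleanly')
            break
        dom.shutdown()  # resend the ACPI request, as the log shows
        time.sleep(5)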
Nov 29 03:28:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:26Z|00673|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:28:26 np0005539550 nova_compute[257631]: 2025-11-29 08:28:26.569 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:27.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Nov 29 03:28:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Nov 29 03:28:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Nov 29 03:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 473 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.3 KiB/s wr, 279 op/s
Nov 29 03:28:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:28.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:28 np0005539550 nova_compute[257631]: 2025-11-29 08:28:28.309 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:29.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 13 KiB/s wr, 205 op/s
Nov 29 03:28:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:30.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:30 np0005539550 nova_compute[257631]: 2025-11-29 08:28:30.553 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:28:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:31.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:28:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 455 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 13 KiB/s wr, 177 op/s
Nov 29 03:28:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:32.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:33.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:33 np0005539550 nova_compute[257631]: 2025-11-29 08:28:33.311 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:33 np0005539550 nova_compute[257631]: 2025-11-29 08:28:33.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 879 KiB/s wr, 130 op/s
Nov 29 03:28:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:34.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:35 np0005539550 nova_compute[257631]: 2025-11-29 08:28:35.518 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404900.5162246, 4702f4ee-458d-4146-b9b2-70ecf718176c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:28:35 np0005539550 nova_compute[257631]: 2025-11-29 08:28:35.518 257641 INFO nova.compute.manager [-] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:28:35 np0005539550 nova_compute[257631]: 2025-11-29 08:28:35.542 257641 DEBUG nova.compute.manager [None req-414889e6-59c4-48ab-8c56-bf2c1913e663 - - - - - -] [instance: 4702f4ee-458d-4146-b9b2-70ecf718176c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:28:35 np0005539550 nova_compute[257631]: 2025-11-29 08:28:35.554 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:35 np0005539550 nova_compute[257631]: 2025-11-29 08:28:35.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:35 np0005539550 nova_compute[257631]: 2025-11-29 08:28:35.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:28:35 np0005539550 nova_compute[257631]: 2025-11-29 08:28:35.966 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:28:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 2.2 MiB/s wr, 94 op/s
Nov 29 03:28:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:36.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:36 np0005539550 nova_compute[257631]: 2025-11-29 08:28:36.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:36 np0005539550 nova_compute[257631]: 2025-11-29 08:28:36.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:36 np0005539550 nova_compute[257631]: 2025-11-29 08:28:36.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:36 np0005539550 nova_compute[257631]: 2025-11-29 08:28:36.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:36 np0005539550 nova_compute[257631]: 2025-11-29 08:28:36.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:36 np0005539550 nova_compute[257631]: 2025-11-29 08:28:36.952 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:28:36 np0005539550 nova_compute[257631]: 2025-11-29 08:28:36.953 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:37.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478089204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.408 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.412 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.467 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.468 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.627 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.630 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4237MB free_disk=20.77667999267578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.631 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.631 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.761 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance a0310268-d298-469e-9f04-6315f83c3f89 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.761 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.761 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
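Annotation: the arithmetic in this view is consistent with the surrounding lines: used_ram=640MB is the 512 MB host reservation from the inventory plus the 128 MB still allocated to instance a0310268 (logged at 08:28:37.761), and used_vcpus=1 is that instance's single VCPU. used_disk stays at 0GB, matching the "skipping disk ... does not have a path" lines above: the instance disks live in RBD, not on local storage.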
Nov 29 03:28:37 np0005539550 nova_compute[257631]: 2025-11-29 08:28:37.802 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 501 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 506 KiB/s rd, 2.0 MiB/s wr, 88 op/s
Nov 29 03:28:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:38.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/980233985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:28:38 np0005539550 nova_compute[257631]: 2025-11-29 08:28:38.236 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:38 np0005539550 nova_compute[257631]: 2025-11-29 08:28:38.241 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:28:38 np0005539550 nova_compute[257631]: 2025-11-29 08:28:38.256 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:28:38 np0005539550 nova_compute[257631]: 2025-11-29 08:28:38.280 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:28:38 np0005539550 nova_compute[257631]: 2025-11-29 08:28:38.280 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:38 np0005539550 nova_compute[257631]: 2025-11-29 08:28:38.314 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5967b320-7e11-414d-868b-3ebc5baf45dc does not exist
Nov 29 03:28:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 07de5f3f-d20b-4329-be57-4c89a387f01e does not exist
Nov 29 03:28:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 34ec9cf8-9070-4a0b-a36a-ed9fa653d27e does not exist
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:28:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:28:39 np0005539550 podman[349515]: 2025-11-29 08:28:39.115624413 +0000 UTC m=+0.058404714 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:28:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:28:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:28:39 np0005539550 podman[349516]: 2025-11-29 08:28:39.135684333 +0000 UTC m=+0.078103535 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
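Annotation: the two podman health_status=healthy events are the periodic container health checks for multipathd and ovn_metadata_agent; per the config_data shown, each runs the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/. The same check can be triggered on demand; a sketch (container name from the log; exit status 0 means healthy):

    # On-demand version of the periodic check recorded in the log.
    import subprocess

    rc = subprocess.call(['podman', 'healthcheck', 'run', 'multipathd'])
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')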
Nov 29 03:28:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:39.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:39 np0005539550 nova_compute[257631]: 2025-11-29 08:28:39.280 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:39 np0005539550 podman[349666]: 2025-11-29 08:28:39.516074263 +0000 UTC m=+0.043895636 container create 9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:28:39 np0005539550 systemd[1]: Started libpod-conmon-9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8.scope.
Nov 29 03:28:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:28:39 np0005539550 podman[349666]: 2025-11-29 08:28:39.587166028 +0000 UTC m=+0.114987391 container init 9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:28:39 np0005539550 podman[349666]: 2025-11-29 08:28:39.594149216 +0000 UTC m=+0.121970589 container start 9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:39 np0005539550 podman[349666]: 2025-11-29 08:28:39.500165889 +0000 UTC m=+0.027987282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:39 np0005539550 podman[349666]: 2025-11-29 08:28:39.597171683 +0000 UTC m=+0.124993076 container attach 9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:28:39 np0005539550 jovial_bhabha[349682]: 167 167
Nov 29 03:28:39 np0005539550 systemd[1]: libpod-9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8.scope: Deactivated successfully.
Nov 29 03:28:39 np0005539550 podman[349666]: 2025-11-29 08:28:39.599769599 +0000 UTC m=+0.127590972 container died 9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6446181397c0317a5de47681400f9958e679c401ab1b199036dbebb2e23b03fd-merged.mount: Deactivated successfully.
Nov 29 03:28:39 np0005539550 podman[349666]: 2025-11-29 08:28:39.635494556 +0000 UTC m=+0.163315929 container remove 9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:39 np0005539550 systemd[1]: libpod-conmon-9624484e1d8e1157c14d93b096aa1a274a9a9a79b99316a678335b1a818ed8e8.scope: Deactivated successfully.
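Annotation: the create/init/start/attach/died/remove burst around the auto-named container jovial_bhabha is a short-lived helper that cephadm launches from the pinned ceph image; its only output, "167 167", matches the uid/gid of the ceph user inside that image, which cephadm evidently probes before writing under /var/lib/ceph. A second helper (zealous_khorana) starts right after it with ceph.conf and the bootstrap-osd keyring bind-mounted, as the xfs remount lines below show.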
Nov 29 03:28:39 np0005539550 podman[349707]: 2025-11-29 08:28:39.804637941 +0000 UTC m=+0.053319085 container create aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:28:39 np0005539550 systemd[1]: Started libpod-conmon-aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997.scope.
Nov 29 03:28:39 np0005539550 podman[349707]: 2025-11-29 08:28:39.776483336 +0000 UTC m=+0.025164500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:28:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a736835d85b003a93a04e940a180f2a093f881caf85d508eaeee920a9771d30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a736835d85b003a93a04e940a180f2a093f881caf85d508eaeee920a9771d30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a736835d85b003a93a04e940a180f2a093f881caf85d508eaeee920a9771d30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a736835d85b003a93a04e940a180f2a093f881caf85d508eaeee920a9771d30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a736835d85b003a93a04e940a180f2a093f881caf85d508eaeee920a9771d30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
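
The xfs notices fire because these overlay mounts lack the XFS bigtime feature, so their inode timestamps stop at the 32-bit signed time_t limit quoted in the message. The limit decodes as follows:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t value.
    limit = 0x7FFFFFFF                     # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, the "until 2038" in the kernel message
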
Nov 29 03:28:39 np0005539550 podman[349707]: 2025-11-29 08:28:39.923239343 +0000 UTC m=+0.171920507 container init aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:28:39 np0005539550 podman[349707]: 2025-11-29 08:28:39.930535318 +0000 UTC m=+0.179216462 container start aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:28:39 np0005539550 podman[349707]: 2025-11-29 08:28:39.945361985 +0000 UTC m=+0.194043149 container attach aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:28:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 503 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 582 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Nov 29 03:28:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:40.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
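
The radosgw triplets (starting new request / req done / beast access line) repeat roughly once per second, alternating between 192.168.122.100 and 192.168.122.102: an anonymous HEAD / answered 200 with an empty body is the classic external health probe, presumably from a load balancer (the log records only the client addresses). A probe of that shape, against a hypothetical endpoint:

    import http.client

    # Hypothetical host/port; the log does not show where RGW listens,
    # only which clients are probing it.
    conn = http.client.HTTPConnection("rgw.example.local", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)       # 200, zero-length body, as logged
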
Nov 29 03:28:40 np0005539550 nova_compute[257631]: 2025-11-29 08:28:40.556 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:40 np0005539550 zealous_khorana[349724]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:28:40 np0005539550 zealous_khorana[349724]: --> relative data size: 1.0
Nov 29 03:28:40 np0005539550 zealous_khorana[349724]: --> All data devices are unavailable
Nov 29 03:28:40 np0005539550 systemd[1]: libpod-aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997.scope: Deactivated successfully.
Nov 29 03:28:40 np0005539550 podman[349707]: 2025-11-29 08:28:40.792548739 +0000 UTC m=+1.041229883 container died aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:28:40 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6a736835d85b003a93a04e940a180f2a093f881caf85d508eaeee920a9771d30-merged.mount: Deactivated successfully.
Nov 29 03:28:40 np0005539550 podman[349707]: 2025-11-29 08:28:40.841698378 +0000 UTC m=+1.090379522 container remove aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_khorana, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Nov 29 03:28:40 np0005539550 systemd[1]: libpod-conmon-aaec9f0b6c65037fa0354b7f710363d71b084b9ec92b7575e490659665434997.scope: Deactivated successfully.
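
zealous_khorana's three stdout lines are a ceph-volume device report: one LVM data device was offered, it is already consumed, so "All data devices are unavailable" and the container exits without creating anything. The wording matches a ceph-volume lvm batch --report run (an assumption; cephadm does not log the exact argv). Reproducing the probe by hand might look like:

    import subprocess

    # Hedged sketch: the exact flags cephadm passes are version-dependent.
    subprocess.run(
        ["cephadm", "ceph-volume", "--", "lvm", "batch", "--report",
         "--format", "json", "/dev/ceph_vg0/ceph_lv0"],
        check=False,
    )
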
Nov 29 03:28:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:41.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:41 np0005539550 podman[349895]: 2025-11-29 08:28:41.489520068 +0000 UTC m=+0.040719165 container create 346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ellis, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:28:41 np0005539550 systemd[1]: Started libpod-conmon-346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643.scope.
Nov 29 03:28:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:28:41 np0005539550 podman[349895]: 2025-11-29 08:28:41.472762633 +0000 UTC m=+0.023961750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:41 np0005539550 podman[349895]: 2025-11-29 08:28:41.571467179 +0000 UTC m=+0.122666286 container init 346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:41 np0005539550 podman[349895]: 2025-11-29 08:28:41.578912758 +0000 UTC m=+0.130111885 container start 346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:28:41 np0005539550 podman[349895]: 2025-11-29 08:28:41.583129955 +0000 UTC m=+0.134329062 container attach 346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ellis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:41 np0005539550 zealous_ellis[349947]: 167 167
Nov 29 03:28:41 np0005539550 systemd[1]: libpod-346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643.scope: Deactivated successfully.
Nov 29 03:28:41 np0005539550 conmon[349947]: conmon 346edb57f7de319d169c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643.scope/container/memory.events
Nov 29 03:28:41 np0005539550 podman[349895]: 2025-11-29 08:28:41.58565237 +0000 UTC m=+0.136851467 container died 346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 29 03:28:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-86216178a8e0b9cf44b25b4ca5555f17389cfd719db6f7ec9f3879d09b4b7b3a-merged.mount: Deactivated successfully.
Nov 29 03:28:41 np0005539550 podman[349895]: 2025-11-29 08:28:41.631055332 +0000 UTC m=+0.182254429 container remove 346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:41 np0005539550 systemd[1]: libpod-conmon-346edb57f7de319d169c07cf8d67650f598ac98ab8b871f1e94e3d77c6315643.scope: Deactivated successfully.
Nov 29 03:28:41 np0005539550 podman[349982]: 2025-11-29 08:28:41.822487734 +0000 UTC m=+0.045950188 container create 99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_borg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:28:41 np0005539550 systemd[1]: Started libpod-conmon-99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf.scope.
Nov 29 03:28:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:28:41 np0005539550 podman[349982]: 2025-11-29 08:28:41.803585344 +0000 UTC m=+0.027047808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef615edaf36a3230d59fd2e991a7e0a191f813e902414a3b3dc9299b31199af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef615edaf36a3230d59fd2e991a7e0a191f813e902414a3b3dc9299b31199af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef615edaf36a3230d59fd2e991a7e0a191f813e902414a3b3dc9299b31199af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef615edaf36a3230d59fd2e991a7e0a191f813e902414a3b3dc9299b31199af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:41 np0005539550 podman[349982]: 2025-11-29 08:28:41.91723752 +0000 UTC m=+0.140699954 container init 99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:28:41 np0005539550 nova_compute[257631]: 2025-11-29 08:28:41.917 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:41 np0005539550 nova_compute[257631]: 2025-11-29 08:28:41.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:41 np0005539550 nova_compute[257631]: 2025-11-29 08:28:41.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:41 np0005539550 nova_compute[257631]: 2025-11-29 08:28:41.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:28:41 np0005539550 podman[349982]: 2025-11-29 08:28:41.930564579 +0000 UTC m=+0.154027003 container start 99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:28:41 np0005539550 podman[349982]: 2025-11-29 08:28:41.933938104 +0000 UTC m=+0.157400548 container attach 99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_borg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:28:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 503 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 549 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Nov 29 03:28:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:42.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
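
_set_new_cache_sizes is the monitor's periodic cache autotuning: a target of about 0.95 GiB split between incremental osdmaps, full osdmaps, and the RocksDB (kv) cache. The three allocations sum to just under the target:

    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

    for name, val in [("target", cache_size), ("inc", inc_alloc),
                      ("full", full_alloc), ("kv", kv_alloc)]:
        print(f"{name:>6}: {val / 2**30:5.2f} GiB")
    # inc + full + kv = 0.94 GiB, slightly under the 0.95 GiB target
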
Nov 29 03:28:42 np0005539550 angry_borg[349999]: {
Nov 29 03:28:42 np0005539550 angry_borg[349999]:    "0": [
Nov 29 03:28:42 np0005539550 angry_borg[349999]:        {
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "devices": [
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "/dev/loop3"
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            ],
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "lv_name": "ceph_lv0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "lv_size": "7511998464",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "name": "ceph_lv0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "tags": {
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.cluster_name": "ceph",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.crush_device_class": "",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.encrypted": "0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.osd_id": "0",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.type": "block",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:                "ceph.vdo": "0"
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            },
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "type": "block",
Nov 29 03:28:42 np0005539550 angry_borg[349999]:            "vg_name": "ceph_vg0"
Nov 29 03:28:42 np0005539550 angry_borg[349999]:        }
Nov 29 03:28:42 np0005539550 angry_borg[349999]:    ]
Nov 29 03:28:42 np0005539550 angry_borg[349999]: }
Nov 29 03:28:42 np0005539550 systemd[1]: libpod-99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf.scope: Deactivated successfully.
Nov 29 03:28:42 np0005539550 podman[349982]: 2025-11-29 08:28:42.771961696 +0000 UTC m=+0.995424100 container died 99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_borg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2ef615edaf36a3230d59fd2e991a7e0a191f813e902414a3b3dc9299b31199af-merged.mount: Deactivated successfully.
Nov 29 03:28:42 np0005539550 podman[349982]: 2025-11-29 08:28:42.8284062 +0000 UTC m=+1.051868614 container remove 99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_borg, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:28:42 np0005539550 systemd[1]: libpod-conmon-99e7971a3469f149c47e87186ed5288d7e391e20d7f6ed0ece75bb252b5139cf.scope: Deactivated successfully.
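
angry_borg's JSON maps an OSD id ("0") to its backing logical volume; lv_tags flattens the same data as the "tags" object into one comma-separated string. The shape matches ceph-volume lvm list --format json output (an assumption, since cephadm hides the argv). Walking it, with the block fed in on stdin:

    import json
    import sys

    report = json.load(sys.stdin)        # paste in the JSON block logged above
    for osd_id, lvs in report.items():
        for lv in lvs:
            # lv_tags is "k=v,k=v,..." and mirrors the "tags" object.
            tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
                  f"(osd_fsid {tags['ceph.osd_fsid']})")
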
Nov 29 03:28:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:43.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:43 np0005539550 nova_compute[257631]: 2025-11-29 08:28:43.316 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:43 np0005539550 podman[350167]: 2025-11-29 08:28:43.468658369 +0000 UTC m=+0.035521363 container create c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:28:43 np0005539550 systemd[1]: Started libpod-conmon-c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0.scope.
Nov 29 03:28:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:28:43 np0005539550 podman[350167]: 2025-11-29 08:28:43.453474443 +0000 UTC m=+0.020337457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:43 np0005539550 podman[350167]: 2025-11-29 08:28:43.555376331 +0000 UTC m=+0.122239345 container init c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:28:43 np0005539550 podman[350167]: 2025-11-29 08:28:43.565222481 +0000 UTC m=+0.132085495 container start c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:43 np0005539550 podman[350167]: 2025-11-29 08:28:43.568881194 +0000 UTC m=+0.135744298 container attach c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:28:43 np0005539550 quizzical_visvesvaraya[350184]: 167 167
Nov 29 03:28:43 np0005539550 systemd[1]: libpod-c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0.scope: Deactivated successfully.
Nov 29 03:28:43 np0005539550 podman[350167]: 2025-11-29 08:28:43.570895415 +0000 UTC m=+0.137758409 container died c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:28:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-82055d17be4e2d4afdd1ad76a39a9aa9d5e8f7f2ff430d130326b43c8e9d82da-merged.mount: Deactivated successfully.
Nov 29 03:28:43 np0005539550 podman[350167]: 2025-11-29 08:28:43.606052748 +0000 UTC m=+0.172915762 container remove c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:28:43 np0005539550 systemd[1]: libpod-conmon-c77ad603ce9e73ce14db70fddd5e746011c62f68fd6744e969c57c54e38033d0.scope: Deactivated successfully.
Nov 29 03:28:43 np0005539550 podman[350207]: 2025-11-29 08:28:43.768658428 +0000 UTC m=+0.038395547 container create 84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:28:43 np0005539550 systemd[1]: Started libpod-conmon-84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60.scope.
Nov 29 03:28:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:28:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e96baacde5b055b49ad0d6df46feaf3465bdc91234b2f06a32053fb32d2dc46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e96baacde5b055b49ad0d6df46feaf3465bdc91234b2f06a32053fb32d2dc46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e96baacde5b055b49ad0d6df46feaf3465bdc91234b2f06a32053fb32d2dc46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e96baacde5b055b49ad0d6df46feaf3465bdc91234b2f06a32053fb32d2dc46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:28:43 np0005539550 podman[350207]: 2025-11-29 08:28:43.84513688 +0000 UTC m=+0.114874029 container init 84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:28:43 np0005539550 podman[350207]: 2025-11-29 08:28:43.752556549 +0000 UTC m=+0.022293688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:28:43 np0005539550 podman[350207]: 2025-11-29 08:28:43.854656851 +0000 UTC m=+0.124393970 container start 84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:28:43 np0005539550 podman[350207]: 2025-11-29 08:28:43.858428837 +0000 UTC m=+0.128165976 container attach 84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:28:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 503 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 985 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Nov 29 03:28:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:44.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:44.273 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:44 np0005539550 nova_compute[257631]: 2025-11-29 08:28:44.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:44.275 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
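
The metadata agent saw SB_Global.nb_cfg move from 43 to 44 and deliberately holds off acknowledging it ("Delaying updating chassis table for 6 seconds"); the matching Chassis_Private write lands at 08:28:50 below. The hold-off spreads the acks from many chassis so they do not hammer the Southbound DB at once. The pattern in miniature (the jitter bound is an assumption; the log records only the delay chosen for this run):

    import random
    import threading

    def ack_nb_cfg(nb_cfg: int) -> None:
        # Stand-in for the DbSetCommand on Chassis_Private at 08:28:50.
        print(f"set external_ids neutron:ovn-metadata-sb-cfg={nb_cfg}")

    threading.Timer(random.uniform(0, 10), ack_nb_cfg, args=(44,)).start()
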
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]: {
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]:        "osd_id": 0,
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]:        "type": "bluestore"
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]:    }
Nov 29 03:28:44 np0005539550 vibrant_pasteur[350223]: }
Nov 29 03:28:44 np0005539550 systemd[1]: libpod-84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60.scope: Deactivated successfully.
Nov 29 03:28:44 np0005539550 podman[350207]: 2025-11-29 08:28:44.762215179 +0000 UTC m=+1.031952308 container died 84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:28:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5e96baacde5b055b49ad0d6df46feaf3465bdc91234b2f06a32053fb32d2dc46-merged.mount: Deactivated successfully.
Nov 29 03:28:44 np0005539550 podman[350207]: 2025-11-29 08:28:44.833553181 +0000 UTC m=+1.103290320 container remove 84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:28:44 np0005539550 systemd[1]: libpod-conmon-84611b6d78756ea87ca1a909aa15a457b00c5149925c187cde012209e30e9f60.scope: Deactivated successfully.
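
vibrant_pasteur prints a second, flatter inventory: one object per OSD uuid with its dm device and objectstore type, resembling ceph-volume raw list --format json output (again an assumption). Together with angry_borg's report this is what the mgr stores via the config-key writes immediately below. Reading it:

    import json
    import sys

    osds = json.load(sys.stdin)          # paste in the JSON block logged above
    for osd_uuid, info in osds.items():
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}, "
              f"cluster {info['ceph_fsid']}")
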
Nov 29 03:28:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:28:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:28:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c9a2ee38-79b6-47ee-ad15-9dc3fbd0a1f1 does not exist
Nov 29 03:28:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 43a4327f-0e17-4845-ba15-639998bd289f does not exist
Nov 29 03:28:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9e406a2f-dfb0-48d8-87e8-ace412444c02 does not exist
Nov 29 03:28:44 np0005539550 nova_compute[257631]: 2025-11-29 08:28:44.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:45.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:45 np0005539550 nova_compute[257631]: 2025-11-29 08:28:45.560 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:28:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 443 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.1 MiB/s wr, 156 op/s
Nov 29 03:28:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:46.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:47.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 443 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 26 KiB/s wr, 90 op/s
Nov 29 03:28:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:48.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:48 np0005539550 nova_compute[257631]: 2025-11-29 08:28:48.317 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:48 np0005539550 nova_compute[257631]: 2025-11-29 08:28:48.458 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:28:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:49.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:49 np0005539550 nova_compute[257631]: 2025-11-29 08:28:49.913 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:28:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 26 KiB/s wr, 109 op/s
Nov 29 03:28:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:28:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:50.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:28:50 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:50.277 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:50 np0005539550 nova_compute[257631]: 2025-11-29 08:28:50.564 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:51 np0005539550 podman[350312]: 2025-11-29 08:28:51.378760746 +0000 UTC m=+0.113516934 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
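
The ovn_controller health event carries the whole managed container config in its labels; the relevant bit is the healthcheck test /openstack/healthcheck, run inside the container. health_status=healthy with health_failing_streak=0 means that command exited 0. The equivalent manual check:

    import subprocess

    # Runs the same test podman's healthcheck timer runs, per config_data.
    r = subprocess.run(["podman", "exec", "ovn_controller",
                        "/openstack/healthcheck"])
    print("healthy" if r.returncode == 0 else "unhealthy")
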
Nov 29 03:28:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 102 op/s
Nov 29 03:28:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:52.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:53 np0005539550 nova_compute[257631]: 2025-11-29 08:28:53.319 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 422 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 108 op/s
Nov 29 03:28:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:54.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.483 257641 INFO nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance failed to shutdown in 60 seconds.#033[00m
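
Nova's _clean_shutdown has been coaxing instance a0310268-d298 for a minute: at 08:28:48 it resent the shutdown request ("state 1 after 54 seconds"), and here it gives up after the 60-second window; the tap teardown that follows suggests the driver then stopped the domain hard. The loop it implements is roughly this (dom stands for a libvirt domain; shutdown() and isActive() are libvirt-python methods, the rest is a sketch):

    import time

    def clean_shutdown(dom, timeout: int = 60, retry_interval: int = 6) -> bool:
        """Ask the guest to power off; return False if it never does."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if not dom.isActive():       # guest shut itself down
                return True
            dom.shutdown()               # re-send the ACPI shutdown request
            time.sleep(retry_interval)
        return False                     # caller falls back to a hard stop
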
Nov 29 03:28:54 np0005539550 kernel: tape1a8ada6-65 (unregistering): left promiscuous mode
Nov 29 03:28:54 np0005539550 NetworkManager[49039]: <info>  [1764404934.5596] device (tape1a8ada6-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:28:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:54Z|00674|binding|INFO|Releasing lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 from this chassis (sb_readonly=0)
Nov 29 03:28:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:54Z|00675|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 down in Southbound
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.569 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:54Z|00676|binding|INFO|Removing iface tape1a8ada6-65 ovn-installed in OVS
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.571 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.581 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:56:a6 10.100.0.14'], port_security=['fa:16:3e:8d:56:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0310268-d298-469e-9f04-6315f83c3f89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.582 158978 INFO neutron.agent.ovn.metadata.agent [-] Port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.583 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d41f0a-17f9-4df4-a453-04da996d63b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.585 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[99cc2d2a-ad9e-4844-b819-9d4b2ab58e7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.585 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace which is not needed anymore#033[00m
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 29 03:28:54 np0005539550 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000091.scope: Consumed 1.433s CPU time.
Nov 29 03:28:54 np0005539550 systemd-machined[216673]: Machine qemu-78-instance-00000091 terminated.
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.700 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.713 257641 INFO nova.virt.libvirt.driver [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance destroyed successfully.#033[00m
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.713 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:54 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [NOTICE]   (348817) : haproxy version is 2.8.14-c23fe91
Nov 29 03:28:54 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [NOTICE]   (348817) : path to executable is /usr/sbin/haproxy
Nov 29 03:28:54 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [WARNING]  (348817) : Exiting Master process...
Nov 29 03:28:54 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [WARNING]  (348817) : Exiting Master process...
Nov 29 03:28:54 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [ALERT]    (348817) : Current worker (348819) exited with code 143 (Terminated)
Nov 29 03:28:54 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[348804]: [WARNING]  (348817) : All workers exited. Exiting... (0)
Nov 29 03:28:54 np0005539550 systemd[1]: libpod-5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889.scope: Deactivated successfully.
Nov 29 03:28:54 np0005539550 conmon[348804]: conmon 5a8f2cfc23586fa5ffc3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889.scope/container/memory.events
Nov 29 03:28:54 np0005539550 podman[350365]: 2025-11-29 08:28:54.730243357 +0000 UTC m=+0.046893142 container died 5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.754 257641 INFO nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Attempting a stable device rescue#033[00m
Nov 29 03:28:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889-userdata-shm.mount: Deactivated successfully.
Nov 29 03:28:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a063f24442f4469311b645017f5cc7c496bf5f2b829eb388e8054788932c53a0-merged.mount: Deactivated successfully.
Nov 29 03:28:54 np0005539550 podman[350365]: 2025-11-29 08:28:54.775416504 +0000 UTC m=+0.092066249 container cleanup 5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 03:28:54 np0005539550 systemd[1]: libpod-conmon-5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889.scope: Deactivated successfully.
Nov 29 03:28:54 np0005539550 podman[350408]: 2025-11-29 08:28:54.839111541 +0000 UTC m=+0.041026563 container remove 5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.845 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[630cabc1-509a-4356-8645-8cfa9ba575a0]: (4, ('Sat Nov 29 08:28:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889)\n5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889\nSat Nov 29 08:28:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889)\n5a8f2cfc23586fa5ffc3ed5579043d5f81465029cea1625a86e32aa97f108889\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.848 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2dcbf3-c7f4-4f80-ad2e-14625a2a6307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.849 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.851 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 kernel: tapd9d41f0a-10: left promiscuous mode
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.869 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 nova_compute[257631]: 2025-11-29 08:28:54.870 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.874 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[612d0513-5b3a-4924-a4c0-37e17f3d7730]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.892 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5b19abef-5138-4804-b679-89406d597649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.893 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbf72ef-4e7c-4792-917c-715e48237860]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.910 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3639001f-857b-408f-8c9e-347f8b2eaf6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 791480, 'reachable_time': 31010, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350427, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.912 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:28:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:54.912 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[4518d094-a304-49ec-baa1-0af38b335908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:54 np0005539550 systemd[1]: run-netns-ovnmeta\x2dd9d41f0a\x2d17f9\x2d4df4\x2da453\x2d04da996d63b6.mount: Deactivated successfully.
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.100 257641 DEBUG nova.compute.manager [req-9e4381ab-c0dd-45f6-a34f-63e424b08c5d req-0b3e29e1-c461-450c-9ab2-9c438c4f0249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.100 257641 DEBUG oslo_concurrency.lockutils [req-9e4381ab-c0dd-45f6-a34f-63e424b08c5d req-0b3e29e1-c461-450c-9ab2-9c438c4f0249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.101 257641 DEBUG oslo_concurrency.lockutils [req-9e4381ab-c0dd-45f6-a34f-63e424b08c5d req-0b3e29e1-c461-450c-9ab2-9c438c4f0249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.101 257641 DEBUG oslo_concurrency.lockutils [req-9e4381ab-c0dd-45f6-a34f-63e424b08c5d req-0b3e29e1-c461-450c-9ab2-9c438c4f0249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.101 257641 DEBUG nova.compute.manager [req-9e4381ab-c0dd-45f6-a34f-63e424b08c5d req-0b3e29e1-c461-450c-9ab2-9c438c4f0249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.102 257641 WARNING nova.compute.manager [req-9e4381ab-c0dd-45f6-a34f-63e424b08c5d req-0b3e29e1-c461-450c-9ab2-9c438c4f0249 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:28:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:55.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.417 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.422 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.423 257641 INFO nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Creating image(s)#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.449 257641 DEBUG nova.storage.rbd_utils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.453 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.515 257641 DEBUG nova.storage.rbd_utils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.623 257641 DEBUG nova.storage.rbd_utils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.628 257641 DEBUG oslo_concurrency.lockutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "6508c86e45fad21b80988254f99b9fb2c5c46075" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.629 257641 DEBUG oslo_concurrency.lockutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "6508c86e45fad21b80988254f99b9fb2c5c46075" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:55 np0005539550 nova_compute[257631]: 2025-11-29 08:28:55.632 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 430 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 153 op/s
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.185 257641 DEBUG nova.virt.libvirt.imagebackend [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/ff39bd0f-b544-46e3-a2c3-0aed51f8ff44/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/ff39bd0f-b544-46e3-a2c3-0aed51f8ff44/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:28:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:56.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.259 257641 DEBUG nova.virt.libvirt.imagebackend [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/ff39bd0f-b544-46e3-a2c3-0aed51f8ff44/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.259 257641 DEBUG nova.storage.rbd_utils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] cloning images/ff39bd0f-b544-46e3-a2c3-0aed51f8ff44@snap to None/a0310268-d298-469e-9f04-6315f83c3f89_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.386 257641 DEBUG oslo_concurrency.lockutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "6508c86e45fad21b80988254f99b9fb2c5c46075" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.435 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'migration_context' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.681 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.685 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Start _get_guest_xml network_info=[{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "vif_mac": "fa:16:3e:8d:56:a6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'ff39bd0f-b544-46e3-a2c3-0aed51f8ff44', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'd17d63cc-1ab4-47ee-898a-6ce311191c86', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a54a148c-90d3-4f80-83be-19de09d30ebc', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a54a148c-90d3-4f80-83be-19de09d30ebc', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a0310268-d298-469e-9f04-6315f83c3f89', 'attached_at': '', 'detached_at': '', 'volume_id': 'a54a148c-90d3-4f80-83be-19de09d30ebc', 'serial': 'a54a148c-90d3-4f80-83be-19de09d30ebc'}, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.685 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'resources' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.726 257641 WARNING nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.734 257641 DEBUG nova.virt.libvirt.host [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.735 257641 DEBUG nova.virt.libvirt.host [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.739 257641 DEBUG nova.virt.libvirt.host [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.740 257641 DEBUG nova.virt.libvirt.host [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.741 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.741 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.742 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.742 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.743 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.743 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.743 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.744 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.744 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.744 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.745 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.745 257641 DEBUG nova.virt.hardware [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.745 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:56 np0005539550 nova_compute[257631]: 2025-11-29 08:28:56.857 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:28:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:57.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:28:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4208714410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.307 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.341 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:28:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:28:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719484194' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.770 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.772 257641 DEBUG nova.virt.libvirt.vif [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:27:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1793211077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1793211077',id=145,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:27:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-mu4bcpio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:27:47Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0310268-d298-469e-9f04-6315f83c3f89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "vif_mac": "fa:16:3e:8d:56:a6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.773 257641 DEBUG nova.network.os_vif_util [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "vif_mac": "fa:16:3e:8d:56:a6"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.774 257641 DEBUG nova.network.os_vif_util [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.776 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.800 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <uuid>a0310268-d298-469e-9f04-6315f83c3f89</uuid>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <name>instance-00000091</name>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-1793211077</nova:name>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:28:56</nova:creationTime>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:user uuid="64b11a4dc36b4f55b85dbe846183be55">tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member</nova:user>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:project uuid="ae71059d02774857be85797a3be0e4e6">tempest-ServerBootFromVolumeStableRescueTest-1715153470</nova:project>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <nova:port uuid="e1a8ada6-6584-4ef5-8f52-75c5e5de9d86">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <entry name="serial">a0310268-d298-469e-9f04-6315f83c3f89</entry>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <entry name="uuid">a0310268-d298-469e-9f04-6315f83c3f89</entry>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a0310268-d298-469e-9f04-6315f83c3f89_disk.config">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-a54a148c-90d3-4f80-83be-19de09d30ebc">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <serial>a54a148c-90d3-4f80-83be-19de09d30ebc</serial>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/a0310268-d298-469e-9f04-6315f83c3f89_disk.rescue">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <boot order="1"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:8d:56:a6"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <target dev="tape1a8ada6-65"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/console.log" append="off"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:28:57 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:28:57 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:28:57 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:28:57 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
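[Annotation] The XML above is the rescue-time domain definition Nova generated: note the <boot order="1"/> on the vdb rescue disk, which boots the instance from the rescue image while the original volume stays attached at vda. A minimal sketch of how to compare it against what libvirt actually runs, assuming virsh is reachable on this host (in a podified deployment it may need to be run inside the nova libvirt container), and using the guest name that appears in the systemd-machined line later in this log:

    virsh dumpxml instance-00000091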
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.807 257641 INFO nova.virt.libvirt.driver [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance destroyed successfully.#033[00m
Nov 29 03:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.902 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.902 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.903 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.903 257641 DEBUG nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] No VIF found with MAC fa:16:3e:8d:56:a6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.903 257641 INFO nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Using config drive#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.928 257641 DEBUG nova.storage.rbd_utils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:57 np0005539550 nova_compute[257631]: 2025-11-29 08:28:57.967 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.000 257641 DEBUG nova.objects.instance [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'keypairs' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:28:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 430 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 2.6 MiB/s wr, 87 op/s
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.175 257641 DEBUG nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.175 257641 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.176 257641 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.176 257641 DEBUG oslo_concurrency.lockutils [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.176 257641 DEBUG nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.176 257641 WARNING nova.compute.manager [req-b214920d-56e5-44f6-b8af-66a9a01c841f req-f6da463e-b309-42f1-81d8-e558e80bfa9b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state active and task_state rescuing.#033[00m
Nov 29 03:28:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:28:58.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:28:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.356 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.678 257641 INFO nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Creating config drive at /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config.rescue#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.685 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg8ukdnf8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.824 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg8ukdnf8" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.859 257641 DEBUG nova.storage.rbd_utils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] rbd image a0310268-d298-469e-9f04-6315f83c3f89_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:28:58 np0005539550 nova_compute[257631]: 2025-11-29 08:28:58.864 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config.rescue a0310268-d298-469e-9f04-6315f83c3f89_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.073 257641 DEBUG oslo_concurrency.processutils [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config.rescue a0310268-d298-469e-9f04-6315f83c3f89_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.075 257641 INFO nova.virt.libvirt.driver [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Deleting local config drive /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89/disk.config.rescue because it was imported into RBD.#033[00m
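[Annotation] Since the local copy of the config drive is deleted after import, the RBD image is now the only copy. A hedged verification sketch (not part of the Nova workflow), reusing the same --pool/--id/--conf flags the import command above used:

    rbd info --pool vms --id openstack --conf /etc/ceph/ceph.conf a0310268-d298-469e-9f04-6315f83c3f89_disk.config.rescue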
Nov 29 03:28:59 np0005539550 kernel: tape1a8ada6-65: entered promiscuous mode
Nov 29 03:28:59 np0005539550 NetworkManager[49039]: <info>  [1764404939.1388] manager: (tape1a8ada6-65): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Nov 29 03:28:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:59Z|00677|binding|INFO|Claiming lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for this chassis.
Nov 29 03:28:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:59Z|00678|binding|INFO|e1a8ada6-6584-4ef5-8f52-75c5e5de9d86: Claiming fa:16:3e:8d:56:a6 10.100.0.14
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.140 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:59Z|00679|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 ovn-installed in OVS
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.159 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.161 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.163 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:56:a6 10.100.0.14'], port_security=['fa:16:3e:8d:56:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0310268-d298-469e-9f04-6315f83c3f89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:28:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:59Z|00680|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 up in Southbound
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.164 158978 INFO neutron.agent.ovn.metadata.agent [-] Port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 bound to our chassis#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.165 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
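[Annotation] The claim/up sequence logged by ovn_controller can be cross-checked against the OVN southbound database; a minimal sketch, assuming ovn-sbctl can reach the SB DB from this host (for example inside the ovn_controller container):

    ovn-sbctl find Port_Binding logical_port=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86

Once the "Setting lport ... up in Southbound" message above has taken effect, the chassis column should point at this chassis and up should read true.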
Nov 29 03:28:59 np0005539550 systemd-udevd[350702]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:28:59 np0005539550 NetworkManager[49039]: <info>  [1764404939.1781] device (tape1a8ada6-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:28:59 np0005539550 NetworkManager[49039]: <info>  [1764404939.1792] device (tape1a8ada6-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.180 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6997fa39-b9dd-48f5-b182-4208bde4a653]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.181 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9d41f0a-11 in ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.183 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9d41f0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.184 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ab824b-b037-47c6-8c50-7fa70f64eb03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.184 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a76d7fe1-a5f0-4c2d-8527-b97cd83bb86d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.196 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[6e541e5e-95fd-4dbb-bbaa-9a6a657f6de5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.206 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[674a68b5-e11d-4ec7-961b-95c078b4f893]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 systemd-machined[216673]: New machine qemu-79-instance-00000091.
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.237 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e38ea1c0-5ca9-4137-813b-513a8afccb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.242 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[01f037bd-44f2-4aa6-8080-e9b0669af327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 NetworkManager[49039]: <info>  [1764404939.2435] manager: (tapd9d41f0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Nov 29 03:28:59 np0005539550 systemd[1]: Started Virtual Machine qemu-79-instance-00000091.
Nov 29 03:28:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:28:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:28:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:28:59.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.272 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ac652bcf-fe5b-419d-ba8a-71de482c4bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.274 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3a003c-6591-4fa6-8dad-93a5b8d37e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 NetworkManager[49039]: <info>  [1764404939.2929] device (tapd9d41f0a-10): carrier: link connected
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.299 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3f2dd8-ad99-4284-a282-3180d91331ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.315 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba6aa0c-8956-43b3-b21d-afa5b7a05ebd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 798695, 'reachable_time': 34827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350737, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.331 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2307a629-10a0-4a4d-b477-b2aee018e9f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:2887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 798695, 'tstamp': 798695}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350739, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.348 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8a9c2e-0358-4107-9a82-83d8b5c19cd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 194], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 798695, 'reachable_time': 34827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 350740, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.375 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[98b58b5a-ef59-43e1-a465-b2c4cc83d3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:28:59
Nov 29 03:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'images', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', '.mgr']
Nov 29 03:28:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.428 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5f793881-47c8-41e4-9154-703fa7863d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.429 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.429 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.429 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:59 np0005539550 NetworkManager[49039]: <info>  [1764404939.4402] manager: (tapd9d41f0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Nov 29 03:28:59 np0005539550 kernel: tapd9d41f0a-10: entered promiscuous mode
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.439 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.442 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.444 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.445 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:28:59Z|00681|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.447 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.448 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec03494-225a-4148-9856-160989ab915b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.448 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
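[Annotation] Per the generated configuration above, haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards requests to the metadata agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with the X-OVN-Network-ID header so the agent can resolve the caller. A rough liveness sketch from the namespace (hedged: a request whose source address does not map to a known port may get an error body, but any HTTP status at all proves the listener is up):

    ip netns exec ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 curl -s -o /dev/null -w '%{http_code}\n' http://169.254.169.254/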
Nov 29 03:28:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:28:59.449 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'env', 'PROCESS_TAG=haproxy-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9d41f0a-17f9-4df4-a453-04da996d63b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:28:59 np0005539550 nova_compute[257631]: 2025-11-29 08:28:59.466 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:28:59 np0005539550 podman[350772]: 2025-11-29 08:28:59.79401317 +0000 UTC m=+0.032209299 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 700 KiB/s rd, 3.9 MiB/s wr, 196 op/s
Nov 29 03:29:00 np0005539550 podman[350772]: 2025-11-29 08:29:00.151908619 +0000 UTC m=+0.390104718 container create 8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:29:00 np0005539550 systemd[1]: Started libpod-conmon-8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811.scope.
Nov 29 03:29:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e114c59b8711d54b77ee7ed707c14ee5f09b71aa251c70e47dda6ad279f2fbf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:00.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:00 np0005539550 podman[350772]: 2025-11-29 08:29:00.271425024 +0000 UTC m=+0.509621153 container init 8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:29:00 np0005539550 podman[350772]: 2025-11-29 08:29:00.277919289 +0000 UTC m=+0.516115388 container start 8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:29:00 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[350843]: [NOTICE]   (350852) : New worker (350854) forked
Nov 29 03:29:00 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[350843]: [NOTICE]   (350852) : Loading success.
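[Annotation] At this point the metadata proxy is running as a podman container named after the network UUID (see the podman create/init/start records above). A quick sketch to confirm it stays up, assuming podman on this host:

    podman ps --filter name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6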
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.332 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for a0310268-d298-469e-9f04-6315f83c3f89 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.333 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404940.3317409, a0310268-d298-469e-9f04-6315f83c3f89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.333 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.339 257641 DEBUG nova.compute.manager [None req-5be3ce65-7116-4be0-89bf-74ba5c4bb58d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.383 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.386 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.417 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.418 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404940.332163, a0310268-d298-469e-9f04-6315f83c3f89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.418 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.442 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.445 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
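[Annotation] The lifecycle sync above shows the instance settling into vm_state rescued with task_state cleared. From a client with credentials for this cloud (a hedged sketch, not taken from this log), the same end state would surface as status RESCUE:

    openstack server show a0310268-d298-469e-9f04-6315f83c3f89 -f value -c status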
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.450 257641 DEBUG nova.compute.manager [req-51fbf194-abcc-42ae-869f-634f33f58445 req-817564d5-06d4-43f7-a89c-8eb40127f8de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.450 257641 DEBUG oslo_concurrency.lockutils [req-51fbf194-abcc-42ae-869f-634f33f58445 req-817564d5-06d4-43f7-a89c-8eb40127f8de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.451 257641 DEBUG oslo_concurrency.lockutils [req-51fbf194-abcc-42ae-869f-634f33f58445 req-817564d5-06d4-43f7-a89c-8eb40127f8de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.451 257641 DEBUG oslo_concurrency.lockutils [req-51fbf194-abcc-42ae-869f-634f33f58445 req-817564d5-06d4-43f7-a89c-8eb40127f8de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.451 257641 DEBUG nova.compute.manager [req-51fbf194-abcc-42ae-869f-634f33f58445 req-817564d5-06d4-43f7-a89c-8eb40127f8de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.452 257641 WARNING nova.compute.manager [req-51fbf194-abcc-42ae-869f-634f33f58445 req-817564d5-06d4-43f7-a89c-8eb40127f8de 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:29:00 np0005539550 nova_compute[257631]: 2025-11-29 08:29:00.634 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:01.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 374 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 484 KiB/s rd, 3.9 MiB/s wr, 199 op/s
Nov 29 03:29:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:02.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.899 257641 DEBUG nova.compute.manager [req-905ecb7e-a136-4b74-87ac-580bea8f098a req-880accf1-0809-48da-a2bb-41f8998c8142 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.900 257641 DEBUG oslo_concurrency.lockutils [req-905ecb7e-a136-4b74-87ac-580bea8f098a req-880accf1-0809-48da-a2bb-41f8998c8142 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.900 257641 DEBUG oslo_concurrency.lockutils [req-905ecb7e-a136-4b74-87ac-580bea8f098a req-880accf1-0809-48da-a2bb-41f8998c8142 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.901 257641 DEBUG oslo_concurrency.lockutils [req-905ecb7e-a136-4b74-87ac-580bea8f098a req-880accf1-0809-48da-a2bb-41f8998c8142 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.901 257641 DEBUG nova.compute.manager [req-905ecb7e-a136-4b74-87ac-580bea8f098a req-880accf1-0809-48da-a2bb-41f8998c8142 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.901 257641 WARNING nova.compute.manager [req-905ecb7e-a136-4b74-87ac-580bea8f098a req-880accf1-0809-48da-a2bb-41f8998c8142 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state rescued and task_state None.#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.964 257641 INFO nova.compute.manager [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Unrescuing#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.965 257641 DEBUG oslo_concurrency.lockutils [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.965 257641 DEBUG oslo_concurrency.lockutils [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquired lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:02 np0005539550 nova_compute[257631]: 2025-11-29 08:29:02.966 257641 DEBUG nova.network.neutron [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:03 np0005539550 nova_compute[257631]: 2025-11-29 08:29:03.362 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 359 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 232 op/s
Nov 29 03:29:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:04.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.258 257641 DEBUG nova.network.neutron [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updating instance_info_cache with network_info: [{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.278 257641 DEBUG oslo_concurrency.lockutils [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Releasing lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.279 257641 DEBUG nova.objects.instance [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'flavor' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:05 np0005539550 kernel: tape1a8ada6-65 (unregistering): left promiscuous mode
Nov 29 03:29:05 np0005539550 NetworkManager[49039]: <info>  [1764404945.4026] device (tape1a8ada6-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:05Z|00682|binding|INFO|Releasing lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 from this chassis (sb_readonly=0)
Nov 29 03:29:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:05Z|00683|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 down in Southbound
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.411 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:05Z|00684|binding|INFO|Removing iface tape1a8ada6-65 ovn-installed in OVS
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.413 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:05.422 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:56:a6 10.100.0.14'], port_security=['fa:16:3e:8d:56:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0310268-d298-469e-9f04-6315f83c3f89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:05.423 158978 INFO neutron.agent.ovn.metadata.agent [-] Port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:29:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:05.425 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d41f0a-17f9-4df4-a453-04da996d63b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:05.426 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dcdb048b-ea1a-4850-81da-ca0b14302480]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:05.427 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace which is not needed anymore#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.435 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 29 03:29:05 np0005539550 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000091.scope: Consumed 5.804s CPU time.
Nov 29 03:29:05 np0005539550 systemd-machined[216673]: Machine qemu-79-instance-00000091 terminated.
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.541 257641 INFO nova.virt.libvirt.driver [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance destroyed successfully.#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.542 257641 DEBUG nova.objects.instance [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.607 257641 DEBUG nova.compute.manager [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:29:05 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[350843]: [NOTICE]   (350852) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:05 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[350843]: [NOTICE]   (350852) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:05 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[350843]: [WARNING]  (350852) : Exiting Master process...
Nov 29 03:29:05 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[350843]: [ALERT]    (350852) : Current worker (350854) exited with code 143 (Terminated)
Nov 29 03:29:05 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[350843]: [WARNING]  (350852) : All workers exited. Exiting... (0)
Nov 29 03:29:05 np0005539550 systemd[1]: libpod-8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811.scope: Deactivated successfully.
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.636 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 podman[350942]: 2025-11-29 08:29:05.641059887 +0000 UTC m=+0.118615174 container died 8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:29:05 np0005539550 kernel: tape1a8ada6-65: entered promiscuous mode
Nov 29 03:29:05 np0005539550 NetworkManager[49039]: <info>  [1764404945.6761] manager: (tape1a8ada6-65): new Tun device (/org/freedesktop/NetworkManager/Devices/298)
Nov 29 03:29:05 np0005539550 systemd-udevd[350921]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.676 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:05Z|00685|binding|INFO|Claiming lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for this chassis.
Nov 29 03:29:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:05Z|00686|binding|INFO|e1a8ada6-6584-4ef5-8f52-75c5e5de9d86: Claiming fa:16:3e:8d:56:a6 10.100.0.14
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.686 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.687 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:05 np0005539550 NetworkManager[49039]: <info>  [1764404945.6890] device (tape1a8ada6-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:05 np0005539550 NetworkManager[49039]: <info>  [1764404945.6901] device (tape1a8ada6-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:05Z|00687|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 ovn-installed in OVS
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.699 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:05 np0005539550 systemd-machined[216673]: New machine qemu-80-instance-00000091.
Nov 29 03:29:05 np0005539550 systemd[1]: Started Virtual Machine qemu-80-instance-00000091.
Nov 29 03:29:05 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:05Z|00688|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 up in Southbound
Nov 29 03:29:05 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:05.800 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:56:a6 10.100.0.14'], port_security=['fa:16:3e:8d:56:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0310268-d298-469e-9f04-6315f83c3f89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.824 257641 DEBUG nova.objects.instance [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_requests' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.852 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.853 257641 INFO nova.compute.claims [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.853 257641 DEBUG nova.objects.instance [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.865 257641 DEBUG nova.objects.instance [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'pci_devices' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.919 257641 INFO nova.compute.resource_tracker [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating resource usage from migration 54acf1b9-5683-4fed-b810-5ba387fc478b#033[00m
Nov 29 03:29:05 np0005539550 nova_compute[257631]: 2025-11-29 08:29:05.919 257641 DEBUG nova.compute.resource_tracker [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Starting to track incoming migration 54acf1b9-5683-4fed-b810-5ba387fc478b with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:29:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7e114c59b8711d54b77ee7ed707c14ee5f09b71aa251c70e47dda6ad279f2fbf-merged.mount: Deactivated successfully.
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.013 257641 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 295 op/s
Nov 29 03:29:06 np0005539550 podman[350942]: 2025-11-29 08:29:06.138884259 +0000 UTC m=+0.616439556 container cleanup 8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:29:06 np0005539550 systemd[1]: libpod-conmon-8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811.scope: Deactivated successfully.
Nov 29 03:29:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:06.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.265 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for a0310268-d298-469e-9f04-6315f83c3f89 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.266 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404946.2646077, a0310268-d298-469e-9f04-6315f83c3f89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.267 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.313 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.332 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:06 np0005539550 podman[351039]: 2025-11-29 08:29:06.358249509 +0000 UTC m=+0.188406255 container remove 8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.363 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cea1cdc3-b092-443e-bc1f-6d36f8ce7440]: (4, ('Sat Nov 29 08:29:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811)\n8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811\nSat Nov 29 08:29:06 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811)\n8feb0680706bd143d607b6091c49cefdea8366237a3c5d9a9cdc60f264de0811\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.365 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f71e31dc-957f-4783-bb42-eab6db0e8bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.366 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.367 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.367 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404946.2869492, a0310268-d298-469e-9f04-6315f83c3f89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.367 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.397 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:06 np0005539550 kernel: tapd9d41f0a-10: left promiscuous mode
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.433 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.446 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.448 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cca09464-740c-41cf-829b-dd61e4b98dee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.454 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.467 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6bb527-33d8-4ac6-b89a-5473191b1ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.468 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[701c2db9-5ee9-494c-9790-2170dc039c86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.483 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d670fe-9c34-4866-b286-818ccd58936c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 798689, 'reachable_time': 26079, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351097, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 systemd[1]: run-netns-ovnmeta\x2dd9d41f0a\x2d17f9\x2d4df4\x2da453\x2d04da996d63b6.mount: Deactivated successfully.
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.486 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.487 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[348fe4b6-5a82-4664-90e7-bb73cdd33a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.490 158978 INFO neutron.agent.ovn.metadata.agent [-] Port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.492 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d9d41f0a-17f9-4df4-a453-04da996d63b6#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.504 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f2568da4-cc31-42b3-a46a-2cf36d50d176]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.506 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd9d41f0a-11 in ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.508 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd9d41f0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.508 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5a91cd83-4c18-49d0-ac50-d5d027a2674d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.509 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cefb9e4e-00b7-478b-9c25-f5bca0ce5434]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.519 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[0fb9e728-3fe0-4241-9996-baf5cf247a24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.533 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[58190ca1-1751-413b-8d5a-786903914477]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356311441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.566 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[31f71152-9a08-4b67-8f1f-0c73d725d7b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 NetworkManager[49039]: <info>  [1764404946.5746] manager: (tapd9d41f0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/299)
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.572 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e633ec3c-6376-4e2d-b567-f23557a98879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.601 257641 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.604 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[78292d1b-f478-48fe-8a13-928fc72d83d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.607 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[cc545a30-5872-432f-9ada-1d15a3482fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.608 257641 DEBUG nova.compute.provider_tree [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.627 257641 DEBUG nova.scheduler.client.report [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:06 np0005539550 NetworkManager[49039]: <info>  [1764404946.6336] device (tapd9d41f0a-10): carrier: link connected
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.641 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0deaaa-0462-4763-830c-59917098dd43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.659 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ae67b3bc-f89d-42f5-90e7-648251871d83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799429, 'reachable_time': 21584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351126, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.665 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.665 257641 INFO nova.compute.manager [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Migrating#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.673 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[98a8531c-75b4-4d26-b684-6febbf402675]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:2887'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 799429, 'tstamp': 799429}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351127, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.689 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e73e9d53-b71e-478a-83f2-583d86012783]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd9d41f0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:28:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 197], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799429, 'reachable_time': 21584, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351128, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.721 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e25b5bd5-5cb5-4630-9d5f-37e3dbd741e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.774 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[47a3ae9a-61ec-4649-b796-d68127442c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.776 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.776 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.777 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd9d41f0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.779 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:06 np0005539550 NetworkManager[49039]: <info>  [1764404946.7805] manager: (tapd9d41f0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/300)
Nov 29 03:29:06 np0005539550 kernel: tapd9d41f0a-10: entered promiscuous mode
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.784 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd9d41f0a-10, col_values=(('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
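[Annotation] Taken together, the three single-command transactions above (DelPortCommand, AddPortCommand, DbSetCommand) replug tapd9d41f0a-10 from br-ex into br-int and stamp its Interface row with the Neutron port UUID that ovn-controller matches against Port_Binding. A sketch of the same sequence batched into one transaction with ovsdbapp's Open_vSwitch API; the ovsdb socket path is an assumption, while the port, bridges, and iface-id come from the log lines:

    # Replug the tap port and set external_ids in a single ovsdb transaction.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd9d41f0a-10', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd9d41f0a-10', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd9d41f0a-10',
            ('external_ids', {'iface-id': 'f2118d1b-0f35-4211-8508-64237a2d816e'})))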
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.785 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:06Z|00689|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.788 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.789 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[45c34db9-b51b-4359-938a-8712209bb889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.790 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/d9d41f0a-17f9-4df4-a453-04da996d63b6.pid.haproxy
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID d9d41f0a-17f9-4df4-a453-04da996d63b6
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:06.791 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'env', 'PROCESS_TAG=haproxy-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d9d41f0a-17f9-4df4-a453-04da996d63b6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
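[Annotation] The command line above is how the agent wraps haproxy: neutron-rootwrap escalates privileges, ip netns exec enters the ovnmeta- namespace so the 169.254.169.254:80 bind from the config lands on the namespace-side veth, and PROCESS_TAG lets the process monitor identify the child later. A sketch reproducing the same spawn directly (every value is copied from the log line; running it requires the same rootwrap config and namespace to exist):

    # Spawn the metadata haproxy inside the ovnmeta- namespace, as the agent does.
    import subprocess

    ns = 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/d9d41f0a-17f9-4df4-a453-04da996d63b6.conf'
    subprocess.check_call([
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', ns,
        'env', 'PROCESS_TAG=haproxy-d9d41f0a-17f9-4df4-a453-04da996d63b6',
        'haproxy', '-f', cfg,
    ])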
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:06 np0005539550 nova_compute[257631]: 2025-11-29 08:29:06.884 257641 DEBUG nova.compute.manager [None req-96fd9caf-6614-4e2a-a75b-e1b88ceee1c5 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:07 np0005539550 podman[351160]: 2025-11-29 08:29:07.136198185 +0000 UTC m=+0.054994718 container create cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:29:07 np0005539550 systemd[1]: Started libpod-conmon-cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400.scope.
Nov 29 03:29:07 np0005539550 podman[351160]: 2025-11-29 08:29:07.102958261 +0000 UTC m=+0.021754814 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a8f979fcd574cf5b384194a10058ade2ed5b8dc8cdb7e26f8593dfe78b8bd8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:07.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.305 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.306 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.306 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.306 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.306 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.306 257641 WARNING nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.307 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.307 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.307 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.307 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.307 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.308 257641 WARNING nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.308 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.308 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.308 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.308 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.309 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.309 257641 WARNING nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.309 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.309 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.309 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.309 257641 DEBUG oslo_concurrency.lockutils [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.310 257641 DEBUG nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:07 np0005539550 nova_compute[257631]: 2025-11-29 08:29:07.310 257641 WARNING nova.compute.manager [req-160b9a51-36f8-4810-8bc9-87efa4c16e69 req-8216f81b-3034-421b-85e0-39de6088fe09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state active and task_state None.#033[00m
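[Annotation] The repeated "No waiting events found" / "Received unexpected event" pairs above are benign during an OVN port rebind: Neutron notifies Nova of each vif unplug/plug through the os-server-external-events API, and since no Nova task is waiting on instance a0310268-... (vm_state active, task_state None), each event is popped, found unawaited, and logged as a warning rather than dropped silently. A sketch of the underlying API call; the endpoint URL and token are placeholders, not values from the log:

    # Emulate the external-event notification Neutron sends to Nova.
    import requests

    NOVA_API = 'http://nova-api.example.com:8774/v2.1'  # placeholder endpoint
    TOKEN = 'REDACTED'                                  # a valid Keystone token

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': 'a0310268-d298-469e-9f04-6315f83c3f89',  # instance, from the log
        'tag': 'e1a8ada6-6584-4ef5-8f52-75c5e5de9d86',          # port UUID, from the log
        'status': 'completed',
    }]}
    resp = requests.post(NOVA_API + '/os-server-external-events',
                         json=body, headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()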
Nov 29 03:29:07 np0005539550 podman[351160]: 2025-11-29 08:29:07.430187491 +0000 UTC m=+0.348984054 container init cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:07 np0005539550 podman[351160]: 2025-11-29 08:29:07.435474425 +0000 UTC m=+0.354270958 container start cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:29:07 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[351176]: [NOTICE]   (351180) : New worker (351182) forked
Nov 29 03:29:07 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[351176]: [NOTICE]   (351180) : Loading success.
Nov 29 03:29:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 234 op/s
Nov 29 03:29:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:08.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:29:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:29:08 np0005539550 nova_compute[257631]: 2025-11-29 08:29:08.362 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:29:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:29:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:29:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:29:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:29:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:09.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:09 np0005539550 podman[351193]: 2025-11-29 08:29:09.311127858 +0000 UTC m=+0.044054460 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:29:09 np0005539550 podman[351192]: 2025-11-29 08:29:09.317941001 +0000 UTC m=+0.054677050 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
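[Annotation] Both health_status events above come from podman's healthcheck timer running the mounted /openstack/healthcheck script inside each container (see the 'healthcheck' key in the logged config_data). The same status can be read back on demand; a sketch, with the container name taken from the log and podman assumed on PATH:

    # Query the health state podman reports in the health_status events above.
    import json
    import subprocess

    out = subprocess.check_output(['podman', 'inspect', 'ovn_metadata_agent'])
    state = json.loads(out)[0]['State']
    print(state.get('Health', {}).get('Status'))  # e.g. 'healthy'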
Nov 29 03:29:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 283 op/s
Nov 29 03:29:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:10.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:10 np0005539550 nova_compute[257631]: 2025-11-29 08:29:10.639 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:11.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:11 np0005539550 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:29:11 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:29:11 np0005539550 systemd-logind[788]: New session 66 of user nova.
Nov 29 03:29:11 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:29:11 np0005539550 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:29:11 np0005539550 systemd[351240]: Queued start job for default target Main User Target.
Nov 29 03:29:11 np0005539550 systemd[351240]: Created slice User Application Slice.
Nov 29 03:29:11 np0005539550 systemd[351240]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:29:11 np0005539550 systemd[351240]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:29:11 np0005539550 systemd[351240]: Reached target Paths.
Nov 29 03:29:11 np0005539550 systemd[351240]: Reached target Timers.
Nov 29 03:29:11 np0005539550 systemd[351240]: Starting D-Bus User Message Bus Socket...
Nov 29 03:29:11 np0005539550 systemd[351240]: Starting Create User's Volatile Files and Directories...
Nov 29 03:29:11 np0005539550 systemd[351240]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:29:11 np0005539550 systemd[351240]: Reached target Sockets.
Nov 29 03:29:11 np0005539550 systemd[351240]: Finished Create User's Volatile Files and Directories.
Nov 29 03:29:11 np0005539550 systemd[351240]: Reached target Basic System.
Nov 29 03:29:11 np0005539550 systemd[351240]: Reached target Main User Target.
Nov 29 03:29:11 np0005539550 systemd[351240]: Startup finished in 158ms.
Nov 29 03:29:11 np0005539550 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:29:11 np0005539550 systemd[1]: Started Session 66 of User nova.
Nov 29 03:29:11 np0005539550 systemd[1]: session-66.scope: Deactivated successfully.
Nov 29 03:29:11 np0005539550 systemd-logind[788]: Session 66 logged out. Waiting for processes to exit.
Nov 29 03:29:11 np0005539550 systemd-logind[788]: Removed session 66.
Nov 29 03:29:11 np0005539550 systemd-logind[788]: New session 68 of user nova.
Nov 29 03:29:11 np0005539550 systemd[1]: Started Session 68 of User nova.
Nov 29 03:29:11 np0005539550 systemd[1]: session-68.scope: Deactivated successfully.
Nov 29 03:29:11 np0005539550 systemd-logind[788]: Session 68 logged out. Waiting for processes to exit.
Nov 29 03:29:11 np0005539550 systemd-logind[788]: Removed session 68.
Nov 29 03:29:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 54 KiB/s wr, 226 op/s
Nov 29 03:29:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:12.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:13.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:13 np0005539550 nova_compute[257631]: 2025-11-29 08:29:13.364 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 293 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 204 op/s
Nov 29 03:29:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:14.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:14 np0005539550 nova_compute[257631]: 2025-11-29 08:29:14.998 257641 DEBUG nova.compute.manager [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:14 np0005539550 nova_compute[257631]: 2025-11-29 08:29:14.998 257641 DEBUG oslo_concurrency.lockutils [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:14 np0005539550 nova_compute[257631]: 2025-11-29 08:29:14.998 257641 DEBUG oslo_concurrency.lockutils [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:14 np0005539550 nova_compute[257631]: 2025-11-29 08:29:14.998 257641 DEBUG oslo_concurrency.lockutils [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:14 np0005539550 nova_compute[257631]: 2025-11-29 08:29:14.998 257641 DEBUG nova.compute.manager [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:14 np0005539550 nova_compute[257631]: 2025-11-29 08:29:14.999 257641 WARNING nova.compute.manager [req-90933001-0cf5-4193-9131-01ae2f4e22f1 req-82a1d742-2392-43ef-a960-bf567285e1b0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:29:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:15.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:15 np0005539550 nova_compute[257631]: 2025-11-29 08:29:15.665 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:15 np0005539550 nova_compute[257631]: 2025-11-29 08:29:15.674 257641 INFO nova.network.neutron [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating port f455cc42-f497-49e9-84f6-0713ec25f786 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:29:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 323 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 916 KiB/s wr, 200 op/s
Nov 29 03:29:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:16.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.422 257641 DEBUG nova.compute.manager [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.423 257641 DEBUG oslo_concurrency.lockutils [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.423 257641 DEBUG oslo_concurrency.lockutils [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.424 257641 DEBUG oslo_concurrency.lockutils [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.424 257641 DEBUG nova.compute.manager [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.425 257641 WARNING nova.compute.manager [req-e533a10d-b498-4e33-b729-4c0d89c97b79 req-f0b343ef-863e-49a9-8da2-ee2c5fd1172f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.589 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.590 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:16 np0005539550 nova_compute[257631]: 2025-11-29 08:29:16.590 257641 DEBUG nova.network.neutron [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.181 257641 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.181 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.182 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.182 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.182 257641 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.182 257641 WARNING nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.183 257641 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.183 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.183 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.183 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.184 257641 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.184 257641 WARNING nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.184 257641 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.184 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.185 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.185 257641 DEBUG oslo_concurrency.lockutils [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.185 257641 DEBUG nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:17 np0005539550 nova_compute[257631]: 2025-11-29 08:29:17.185 257641 WARNING nova.compute.manager [req-ecc30fe8-fb28-4245-be88-194035240e32 req-0d2fd405-8785-4621-bc76-24fa596d09bd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:29:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:17.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 323 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 902 KiB/s wr, 131 op/s
Nov 29 03:29:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:18.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.365 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.459 257641 DEBUG nova.network.neutron [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.512 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
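[Annotation] The network_info blob cached above carries everything the virt driver needs to wire the VIF: fixed IP 10.100.0.6 with floating IP 192.168.122.222, a tunneled network with MTU 1442, and OVN as the bound driver. A sketch of walking that structure once the logged payload has been saved as JSON; the file name is an assumption, the keys match the payload above:

    # Extract fixed/floating addresses from a saved network_info payload.
    import json

    with open('network_info.json') as f:  # assumed: payload saved from the log
        network_info = json.load(f)

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floating = [fip['address'] for fip in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floating)
    # -> f455cc42-... 10.100.0.6 ['192.168.122.222']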
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.649 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.652 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.653 257641 INFO nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Creating image(s)#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.701 257641 DEBUG nova.compute.manager [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.702 257641 DEBUG nova.compute.manager [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing instance network info cache due to event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.703 257641 DEBUG oslo_concurrency.lockutils [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.703 257641 DEBUG oslo_concurrency.lockutils [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.704 257641 DEBUG nova.network.neutron [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:18 np0005539550 nova_compute[257631]: 2025-11-29 08:29:18.713 257641 DEBUG nova.storage.rbd_utils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] creating snapshot(nova-resize) on rbd image(d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
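
The create_snap call above checkpoints the root disk before the resize proceeds by snapshotting the RBD image in the vms pool. A minimal sketch of the same operation with the upstream rados/rbd Python bindings; the client name, conffile, pool, image and snapshot names mirror the surrounding log lines, everything else is an assumption:

    import rados
    import rbd

    # Same client identity the surrounding lines use: --id openstack, ceph.conf.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # pool from the rbd image spec above
        try:
            with rbd.Image(ioctx, 'd4258b43-e73e-47f3-b1d1-f169bcaf4534_disk') as img:
                img.create_snap('nova-resize')  # snapshot name from the log line
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
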
Nov 29 03:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:18.961 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:18.962 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:18.963 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
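
The acquire/acquired/released triplets logged here (and again below for "vgpu_resources" and the per-instance event lock) are oslo.concurrency's in-process locks, which log wait and hold times at DEBUG. A tiny sketch of the two usual spellings, with made-up lock names:

    from oslo_concurrency import lockutils

    # Decorator form: serializes every caller of the function on one lock.
    @lockutils.synchronized('_check_child_processes')
    def check_children():
        pass  # protected critical section

    # Context-manager form, the pattern nova uses for its event locks below.
    with lockutils.lock('demo-instance-events'):
        pass  # protected critical section
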
Nov 29 03:29:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:19.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.338 257641 DEBUG nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.338 257641 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.339 257641 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.339 257641 DEBUG oslo_concurrency.lockutils [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.339 257641 DEBUG nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.339 257641 WARNING nova.compute.manager [req-ed3e4c58-9127-43a0-b7c4-4a332d276b61 req-6d99b662-d713-457e-8ced-525f217b7410 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state resize_finish.#033[00m
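
The WARNING above is benign during a resize: Neutron's network-vif-plugged event arrived while no waiter was registered for it (the instance is already active, task_state resize_finish), so the pop found nothing to dispatch to and nova logs the event as unexpected. A stripped-down sketch of that pop-or-warn pattern; the class and method names here are illustrative, not nova's actual code:

    import threading

    class InstanceEvents:
        """Waiters keyed by (instance, event); pop dispatches or returns None."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> callback

        def prepare(self, instance_uuid, event_name, callback):
            with self._lock:
                self._waiters[(instance_uuid, event_name)] = callback

        def pop(self, instance_uuid, event_name):
            with self._lock:
                return self._waiters.pop((instance_uuid, event_name), None)

    events = InstanceEvents()
    cb = events.pop('d4258b43-e73e-47f3-b1d1-f169bcaf4534', 'network-vif-plugged')
    if cb is None:
        print('Received unexpected event: no waiter registered')
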
Nov 29 03:29:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Nov 29 03:29:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Nov 29 03:29:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.752 257641 DEBUG nova.objects.instance [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.871 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.871 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Ensure instance console log exists: /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.872 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.872 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.872 257641 DEBUG oslo_concurrency.lockutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.874 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Start _get_guest_xml network_info=[{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1722283643", "vif_mac": "fa:16:3e:a5:a7:0b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.878 257641 WARNING nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.884 257641 DEBUG nova.virt.libvirt.host [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.884 257641 DEBUG nova.virt.libvirt.host [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.892 257641 DEBUG nova.virt.libvirt.host [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.893 257641 DEBUG nova.virt.libvirt.host [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
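
The two probes above first look for a cgroups-v1 cpu controller (absent on this EL9 host) and then find one under cgroups v2. On a unified-hierarchy host the v2 check amounts to reading a single file; a sketch, assuming the standard /sys/fs/cgroup mount point:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        # cgroup v2 lists enabled controllers in one space-separated file.
        controllers = Path(root, 'cgroup.controllers')
        try:
            return 'cpu' in controllers.read_text().split()
        except FileNotFoundError:
            return False  # no unified-hierarchy (v2) mount at this root

    print(has_cgroupsv2_cpu_controller())
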
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.894 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.894 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.894 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.894 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.895 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.895 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.895 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.895 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.896 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.896 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.896 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.896 257641 DEBUG nova.virt.hardware [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
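
With no flavor or image constraints (limits 65536:65536:65536, preference 0:0:0), the topology search above reduces to enumerating every sockets x cores x threads factorization of the vCPU count; for 1 vCPU that is just 1:1:1, which is what gets chosen. A sketch of that enumeration under those assumptions:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product is the vCPU count.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] as chosen above
    print(list(possible_topologies(4)))  # (1, 1, 4), (1, 2, 2), (1, 4, 1), ...
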
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.897 257641 DEBUG nova.objects.instance [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:19 np0005539550 nova_compute[257631]: 2025-11-29 08:29:19.924 257641 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.7 MiB/s wr, 130 op/s
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0070416826609901355 of space, bias 1.0, pg target 2.112504798297041 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8618352177554439 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:29:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
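
The pg targets above are consistent with usage_ratio x bias x (target PGs per OSD x OSD count), i.e. roughly ratio x 300 on this 3-OSD cluster with the default of 100, then quantized to a power of two and clamped at the pool's pg_num minimum (1 for .mgr, 16 for the CephFS metadata pool, 32 elsewhere). A back-of-the-envelope sketch under those assumptions; the real autoscaler also accounts for capacity, replication and overlapping CRUSH roots:

    def pg_target(usage_ratio, bias, osds=3, target_per_osd=100, pg_min=32):
        raw = usage_ratio * bias * osds * target_per_osd
        if raw <= 0:
            return pg_min
        # Quantize up to the next power of two, then apply the pool minimum.
        pgs = 1
        while pgs < raw:
            pgs *= 2
        return max(pgs, pg_min)

    # Pool 'vms' from the log: ratio 0.00704..., bias 1.0 -> raw ~2.11 -> 32.
    print(pg_target(0.0070416826609901355, 1.0))
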
Nov 29 03:29:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:20.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/759129725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.391 257641 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.440 257641 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.668 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707635682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.991 257641 DEBUG oslo_concurrency.processutils [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
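
Before defining the RBD-backed disks, nova shells out twice to ceph mon dump (each call also surfaces in the mon's audit channel above) to learn the monitor addresses that become the <host> elements in the guest XML below. A sketch of the same call and a parse of its JSON, assuming the layout of current Ceph releases (mons[].public_addrs, with the legacy addr field as fallback):

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    dump = json.loads(out)
    for mon in dump['mons']:
        # Newer releases report public_addrs; fall back to the legacy field.
        print(mon['name'], mon.get('public_addrs', mon.get('addr')))
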
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.993 257641 DEBUG nova.virt.libvirt.vif [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1524442594',display_name='tempest-TestNetworkAdvancedServerOps-server-1524442594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1524442594',id=146,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhjCe5Dwqr90TZZDpDYSff23Y0Y+CPvWtwZC2gBvEddB0vd/KfzjYxDz13s1SQbnkCRhnyyKQ9iihlf4eHmWUZGjirLU1hZLSoBtjbwWoOTCzNCv4qInFtsgJKPWKzRFw==',key_name='tempest-TestNetworkAdvancedServerOps-964699803',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-b8gw4bee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:15Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=d4258b43-e73e-47f3-b1d1-f169bcaf4534,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1722283643", "vif_mac": "fa:16:3e:a5:a7:0b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.993 257641 DEBUG nova.network.os_vif_util [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1722283643", "vif_mac": "fa:16:3e:a5:a7:0b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.994 257641 DEBUG nova.network.os_vif_util [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:20 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.997 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <uuid>d4258b43-e73e-47f3-b1d1-f169bcaf4534</uuid>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <name>instance-00000092</name>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <memory>196608</memory>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1524442594</nova:name>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:29:19</nova:creationTime>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.micro">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:memory>192</nova:memory>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:user uuid="fed6803a835e471f9bd60e3236e78e5d">tempest-TestNetworkAdvancedServerOps-274367929-project-member</nova:user>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:project uuid="4145ed6cde61439ebcc12fae2609b724">tempest-TestNetworkAdvancedServerOps-274367929</nova:project>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <nova:port uuid="f455cc42-f497-49e9-84f6-0713ec25f786">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <entry name="serial">d4258b43-e73e-47f3-b1d1-f169bcaf4534</entry>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <entry name="uuid">d4258b43-e73e-47f3-b1d1-f169bcaf4534</entry>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk">
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:29:20 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk.config">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:a5:a7:0b"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <target dev="tapf455cc42-f4"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534/console.log" append="off"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:29:21 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:29:21 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:29:21 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:29:21 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
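
The domain XML dumped above is what _get_guest_xml hands to libvirt: a q35/KVM guest whose vda root disk and sda config drive are both RBD network disks pointing at the three monitors fetched a moment earlier. A short sketch of pulling those pieces back out with the standard library; the xml string below is a trimmed stand-in for the full document printed above:

    import xml.etree.ElementTree as ET

    # Trimmed stand-in for the domain XML logged above.
    xml = """
    <domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/d4258b43-e73e-47f3-b1d1-f169bcaf4534_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>
    """

    root = ET.fromstring(xml)
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        hosts = [h.get('name') for h in src.findall('host')]
        print(disk.find('target').get('dev'), src.get('name'), hosts)
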
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.999 257641 DEBUG nova.virt.libvirt.vif [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1524442594',display_name='tempest-TestNetworkAdvancedServerOps-server-1524442594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1524442594',id=146,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhjCe5Dwqr90TZZDpDYSff23Y0Y+CPvWtwZC2gBvEddB0vd/KfzjYxDz13s1SQbnkCRhnyyKQ9iihlf4eHmWUZGjirLU1hZLSoBtjbwWoOTCzNCv4qInFtsgJKPWKzRFw==',key_name='tempest-TestNetworkAdvancedServerOps-964699803',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:28:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-b8gw4bee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:15Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=d4258b43-e73e-47f3-b1d1-f169bcaf4534,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1722283643", "vif_mac": "fa:16:3e:a5:a7:0b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.999 257641 DEBUG nova.network.os_vif_util [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1722283643", "vif_mac": "fa:16:3e:a5:a7:0b"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:20.999 257641 DEBUG nova.network.os_vif_util [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.000 257641 DEBUG os_vif [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.000 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.001 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.001 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.004 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.005 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf455cc42-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.005 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf455cc42-f4, col_values=(('external_ids', {'iface-id': 'f455cc42-f497-49e9-84f6-0713ec25f786', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:a7:0b', 'vm-uuid': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
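
Plugging the VIF is two ovsdbapp commands in one transaction, AddPortCommand followed by DbSetCommand, exactly as logged above. A sketch of issuing the same pair through ovsdbapp's Open_vSwitch API; the socket path and timeout are assumptions, while the port name and external_ids are copied from the logged commands:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, two commands, mirroring txn n=1 idx=0 and idx=1 above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapf455cc42-f4', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapf455cc42-f4',
            ('external_ids',
             {'iface-id': 'f455cc42-f497-49e9-84f6-0713ec25f786',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:a5:a7:0b',
              'vm-uuid': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534'})))
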
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.006 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.0075] manager: (tapf455cc42-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.009 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.013 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.014 257641 INFO os_vif [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4')#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.089 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.089 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.090 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] No VIF found with MAC fa:16:3e:a5:a7:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.090 257641 INFO nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Using config drive#033[00m
Nov 29 03:29:21 np0005539550 kernel: tapf455cc42-f4: entered promiscuous mode
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.1841] manager: (tapf455cc42-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.185 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:21Z|00690|binding|INFO|Claiming lport f455cc42-f497-49e9-84f6-0713ec25f786 for this chassis.
Nov 29 03:29:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:21Z|00691|binding|INFO|f455cc42-f497-49e9-84f6-0713ec25f786: Claiming fa:16:3e:a5:a7:0b 10.100.0.6
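"Claiming" means ovn-controller writes itself into the chassis column of the port's Port_Binding row in the OVN southbound database. One way to confirm which chassis holds a binding (a sketch shelling out to ovn-sbctl; the logical port UUID is taken from the line above, and the command assumes it runs where ovn-sbctl can reach the southbound DB):

    import subprocess

    # Ask the southbound DB which chassis claimed the logical port.
    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=chassis', 'find', 'Port_Binding',
         'logical_port=f455cc42-f497-49e9-84f6-0713ec25f786'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip() or 'not bound to any chassis')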
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.1983] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.197 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.1989] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.205 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a7:0b 10.100.0.6'], port_security=['fa:16:3e:a5:a7:0b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e716463c-b057-4e72-86b1-45cce498de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.222'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f97b85de-e617-4358-b258-b884e5e43079, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f455cc42-f497-49e9-84f6-0713ec25f786) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.207 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f455cc42-f497-49e9-84f6-0713ec25f786 in datapath 8094e12d-22b9-4e7c-bcb5-2de20ab6e675 bound to our chassis#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.210 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8094e12d-22b9-4e7c-bcb5-2de20ab6e675#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.227 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0effd1d-55c5-4f0e-9e8b-7f7079e8a154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.229 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8094e12d-21 in ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.232 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8094e12d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.232 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cb941383-712d-4d97-a430-8bfe0d990ada]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.234 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9148ac81-4dd2-42b6-9fa1-12347d7745e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 systemd-machined[216673]: New machine qemu-81-instance-00000092.
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.242 257641 DEBUG nova.network.neutron [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updated VIF entry in instance network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.243 257641 DEBUG nova.network.neutron [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
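The network_info blob above is ordinary JSON, so the addressing it carries can be read back directly. A small sketch that walks such a blob and pairs each fixed IP with its floating IPs (the function and its name are illustrative; fed the list logged above it yields 10.100.0.6 -> ['192.168.122.222']):

    import json

    def fixed_and_floating(network_info_json):
        """Yield (port_id, fixed_ip, floating_ips) from a network_info blob."""
        for vif in json.loads(network_info_json):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield (vif['id'], ip['address'],
                           [f['address'] for f in ip.get('floating_ips', [])])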
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.249 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe22fa4-4c17-44ce-8a7a-997ec784ff13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 systemd[1]: Started Virtual Machine qemu-81-instance-00000092.
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.265 257641 DEBUG oslo_concurrency.lockutils [req-60c6da1c-3225-4365-b31e-ffb2fc86c0c8 req-28f38664-4c16-428b-b111-92456e1efb82 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:21 np0005539550 systemd-udevd[351439]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.276 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a53112-9cb3-4f74-bc09-d609151ac806]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.2858] device (tapf455cc42-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.2865] device (tapf455cc42-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
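radosgw's beast frontend writes one access line per request in the fixed shape seen above. A sketch of a parser for these lines (the regex is fitted to the lines in this log, not a documented stable format):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:29:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # -> 192.168.122.102 200 0.000000000

The anonymous HEAD / requests from 192.168.122.100/.102 that recur every second or two through this section look like load-balancer health probes.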
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.312 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[248f707f-f8bc-4b77-ab8f-d2a154e700d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.3227] manager: (tap8094e12d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/305)
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.322 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bcf7ee-dd8b-4abd-be76-6837a99cd975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.361 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5394bf2b-5c30-4ee3-afaf-c08c8e92f5a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.364 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3de7bf56-e000-426a-8fce-024c2eb81363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.378 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:21Z|00692|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.3883] device (tap8094e12d-20): carrier: link connected
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.395 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.397 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9271fdb2-b65e-4c7d-980f-7b3c3ae1a71e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:21Z|00693|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 ovn-installed in OVS
Nov 29 03:29:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:21Z|00694|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 up in Southbound
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.408 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.418 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[46e506c0-e102-4404-9166-ef5dff027be2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8094e12d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800905, 'reachable_time': 32905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351469, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.436 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e621af-da78-4c6a-8be0-f14c448e736b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe80:7afe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800905, 'tstamp': 800905}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351470, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.454 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbabdcf-6fd7-4c1b-b351-bbe5e3a518ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8094e12d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:80:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800905, 'reachable_time': 32905, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351471, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
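The two privsep replies above are raw netlink answers (RTM_NEWLINK, then RTM_NEWADDR) for the veth leg just moved into the ovnmeta namespace; neutron fetches them through its privsep daemon. The same query can be made directly with pyroute2 (a sketch; requires root, with the namespace and interface names copied from the log):

    from pyroute2 import NetNS

    # Inspect the veth leg inside the metadata namespace.
    with NetNS('ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675') as ns:
        idx = ns.link_lookup(ifname='tap8094e12d-21')[0]
        link = ns.get_links(idx)[0]
        print(link.get_attr('IFLA_OPERSTATE'),
              link.get_attr('IFLA_ADDRESS'))  # e.g. UP fa:16:3e:80:7a:fe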
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.485 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63852eb8-cfc2-46d2-a7aa-eae90860682e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.546 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4f064cab-4d31-46ae-94b0-2845599e35eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.548 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8094e12d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.548 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.548 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8094e12d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:21 np0005539550 NetworkManager[49039]: <info>  [1764404961.5881] manager: (tap8094e12d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/306)
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.588 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 kernel: tap8094e12d-20: entered promiscuous mode
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.590 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8094e12d-20, col_values=(('external_ids', {'iface-id': '5843a205-5cee-4ff2-89f1-ae1b369cf659'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.592 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:21Z|00695|binding|INFO|Releasing lport 5843a205-5cee-4ff2-89f1-ae1b369cf659 from this chassis (sb_readonly=0)
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.606 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.607 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.608 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[446d1ee7-063e-449e-a426-534e846439d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.609 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-8094e12d-22b9-4e7c-bcb5-2de20ab6e675
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.pid.haproxy
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 8094e12d-22b9-4e7c-bcb5-2de20ab6e675
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:21.610 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'env', 'PROCESS_TAG=haproxy-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8094e12d-22b9-4e7c-bcb5-2de20ab6e675.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
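The rendered config is handed to haproxy inside the network namespace via rootwrap, exactly as the command on the line above shows. A sketch that assembles the same argument list from the network UUID (illustrative; neutron composes this internally via create_process in agent/linux/utils.py):

    import subprocess

    network_id = '8094e12d-22b9-4e7c-bcb5-2de20ab6e675'
    cmd = [
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
        'env', 'PROCESS_TAG=haproxy-%s' % network_id,
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id,
    ]
    subprocess.run(cmd, check=True)  # daemonizes; the worker fork shows up below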
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.848 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404961.8481393, d4258b43-e73e-47f3-b1d1-f169bcaf4534 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.848 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.850 257641 DEBUG nova.compute.manager [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.853 257641 INFO nova.virt.libvirt.driver [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance running successfully.#033[00m
Nov 29 03:29:21 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.856 257641 DEBUG nova.virt.libvirt.guest [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.856 257641 DEBUG nova.virt.libvirt.driver [None req-df7f0ccd-560e-4161-a566-ba04493cf548 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.873 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.877 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.915 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.916 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404961.8502376, d4258b43-e73e-47f3-b1d1-f169bcaf4534 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.916 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.945 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:21 np0005539550 nova_compute[257631]: 2025-11-29 08:29:21.949 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
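Both lifecycle blocks above follow the same decision: translate the libvirt event into a power state, compare it with the database, and refuse to sync while a task (resize_finish here) still owns the instance. A schematic reduction of that check (names and return values are illustrative, not nova's actual code):

    RUNNING = 1  # nova's power_state code for a running domain

    def handle_power_sync(db_power_state, vm_power_state, task_state):
        # A pending task owns the instance; syncing now could race it.
        if task_state is not None:
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'update DB to %s' % vm_power_state
        return 'in sync'

    print(handle_power_sync(RUNNING, RUNNING, 'resize_finish'))
    # -> skip: pending task resize_finish   (the "Skip." logged above)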
Nov 29 03:29:22 np0005539550 podman[351557]: 2025-11-29 08:29:22.007682026 +0000 UTC m=+0.095744992 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
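The podman record above is a scheduled health check against the ovn_controller container: it runs the mounted /openstack/healthcheck script and reports health_status=healthy on exit 0. The same check can be triggered on demand (a sketch; container name taken from the log):

    import subprocess

    # Exit status 0 means the container's configured healthcheck passed.
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_controller']).returncode
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)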
Nov 29 03:29:22 np0005539550 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:29:22 np0005539550 systemd[351240]: Activating special unit Exit the Session...
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped target Main User Target.
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped target Basic System.
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped target Paths.
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped target Sockets.
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped target Timers.
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:29:22 np0005539550 systemd[351240]: Closed D-Bus User Message Bus Socket.
Nov 29 03:29:22 np0005539550 systemd[351240]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:29:22 np0005539550 systemd[351240]: Removed slice User Application Slice.
Nov 29 03:29:22 np0005539550 systemd[351240]: Reached target Shutdown.
Nov 29 03:29:22 np0005539550 systemd[351240]: Finished Exit the Session.
Nov 29 03:29:22 np0005539550 systemd[351240]: Reached target Exit the Session.
Nov 29 03:29:22 np0005539550 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:29:22 np0005539550 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:29:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 393 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, 5.2 MiB/s wr, 123 op/s
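The pgmap lines recurring through this stretch carry the cluster's live capacity and throughput. A sketch that extracts the figures from one (regex fitted to the lines in this log, not a stable format):

    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, '
        r'(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail')

    line = ('pgmap v2633: 305 pgs: 305 active+clean; 393 MiB data, '
            '1.2 GiB used, 20 GiB / 21 GiB avail; 377 KiB/s rd, '
            '5.2 MiB/s wr, 123 op/s')
    m = PGMAP.search(line)
    print(m.group('ver'), m.group('avail'), 'free of', m.group('total'))
    # -> 2633 20 GiB free of 21 GiB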
Nov 29 03:29:22 np0005539550 podman[351599]: 2025-11-29 08:29:21.990584092 +0000 UTC m=+0.030576807 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:22 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:29:22 np0005539550 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:29:22 np0005539550 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:29:22 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:29:22 np0005539550 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:29:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:22.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:22 np0005539550 podman[351599]: 2025-11-29 08:29:22.315026161 +0000 UTC m=+0.355018826 container create d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:22 np0005539550 systemd[1]: Started libpod-conmon-d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723.scope.
Nov 29 03:29:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f07dbe057ab7eda0f7614d77ec4913e21b01a07fff250820030a123b3e2fedf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:22 np0005539550 podman[351599]: 2025-11-29 08:29:22.963042038 +0000 UTC m=+1.003034703 container init d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:29:22 np0005539550 podman[351599]: 2025-11-29 08:29:22.96902439 +0000 UTC m=+1.009017055 container start d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:22 np0005539550 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[351638]: [NOTICE]   (351642) : New worker (351644) forked
Nov 29 03:29:22 np0005539550 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[351638]: [NOTICE]   (351642) : Loading success.
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.281 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:23.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.367 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.721 257641 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.722 257641 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.722 257641 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.722 257641 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.723 257641 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.723 257641 WARNING nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.724 257641 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.724 257641 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.724 257641 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.725 257641 DEBUG oslo_concurrency.lockutils [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.725 257641 DEBUG nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:23 np0005539550 nova_compute[257631]: 2025-11-29 08:29:23.725 257641 WARNING nova.compute.manager [req-3d4bb323-258d-4a43-bd66-73c8ff075750 req-f8e8df40-89f0-4d03-886c-1d4d8027c29e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state resized and task_state None.#033[00m
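The WARNING pair above is benign: finish_migration had already stopped waiting (the "Instance event wait completed" line earlier), so when network-vif-plugged arrives there is no registered waiter left and pop_instance_event comes back empty. A schematic of that pop-under-lock (illustrative, not nova's code):

    import threading

    _events = {}                      # (instance_uuid, event_name) -> waiter
    _events_lock = threading.Lock()

    def pop_instance_event(instance_uuid, event_name):
        with _events_lock:
            waiter = _events.pop((instance_uuid, event_name), None)
        if waiter is None:
            # No one is waiting any more -> "Received unexpected event ..."
            return None
        return waiter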
Nov 29 03:29:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 416 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 6.8 MiB/s wr, 179 op/s
Nov 29 03:29:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:24.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:26 np0005539550 nova_compute[257631]: 2025-11-29 08:29:26.009 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.8 MiB/s wr, 246 op/s
Nov 29 03:29:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:26.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Nov 29 03:29:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Nov 29 03:29:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Nov 29 03:29:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:27 np0005539550 nova_compute[257631]: 2025-11-29 08:29:27.836 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 293 op/s
Nov 29 03:29:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Nov 29 03:29:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Nov 29 03:29:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Nov 29 03:29:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:28.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:28 np0005539550 nova_compute[257631]: 2025-11-29 08:29:28.369 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Nov 29 03:29:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:29.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Nov 29 03:29:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Nov 29 03:29:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 433 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 1.5 MiB/s wr, 349 op/s
Nov 29 03:29:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:30.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Nov 29 03:29:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Nov 29 03:29:30 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.330 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.331 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.365 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.464 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.464 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
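[annotation] The paired "Acquiring lock"/"Lock ... acquired :: waited" lines above (and the "released :: held" line further down) are emitted by oslo.concurrency's lockutils wrappers, which Nova uses to serialize the build per instance UUID and to guard the shared resource tracker. A minimal sketch of that pattern, assuming plain lockutils rather than Nova's own wrapper (function name hypothetical):

```python
# Illustrative only -- not nova's actual code. lockutils itself logs the
# "Acquiring"/"acquired :: waited"/"released :: held" lines seen above.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def instance_claim():
    # resource accounting happens while "compute_resources" is held;
    # the held-time (0.679s in this log) is reported on release
    pass

# Equivalent context-manager form, keyed by the instance UUID from the log:
with lockutils.lock('95241eca-3cb6-49da-89d4-b172e76cb35c'):
    instance_claim()
```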
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.472 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.473 257641 INFO nova.compute.claims [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:29:31 np0005539550 nova_compute[257631]: 2025-11-29 08:29:31.625 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:31 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Nov 29 03:29:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647261702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.082 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
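[annotation] The `ceph df --format=json` subprocess above (dispatched to the mon as the audited `client.openstack` command) is how Nova's RBD backend samples cluster capacity during the claim. A hedged sketch of reproducing that call with oslo's processutils; the credentials and paths are taken from the log, and the JSON field names assume a recent Ceph release:

```python
import json
from oslo_concurrency import processutils

# execute() returns (stdout, stderr) and raises ProcessExecutionError
# on a non-zero exit code -- same helper the log line references.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
```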
Nov 29 03:29:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 2.7 MiB/s wr, 383 op/s
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.089 257641 DEBUG nova.compute.provider_tree [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.109 257641 DEBUG nova.scheduler.client.report [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
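[annotation] The inventory record above fixes what the scheduler can place here: Placement treats `(total - reserved) * allocation_ratio` as the schedulable capacity per resource class, which for these values is 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk. A one-liner to verify, using the dict exactly as logged:

```python
# Effective Placement capacity = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
# VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1
```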
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.144 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.145 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.207 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.208 257641 DEBUG nova.network.neutron [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.232 257641 INFO nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.257 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:29:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:32.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.382 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.385 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.385 257641 INFO nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Creating image(s)#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.421 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.449 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.482 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.486 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.518 257641 DEBUG nova.policy [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '795ef5ce2edb4c1987378419f19947a2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '39da21e61faa40aa979d498a98e394c9', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.556 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
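[annotation] The `oslo_concurrency.prlimit` wrapper in the command above caps `qemu-img info` at 1 GiB of address space and 30 s of CPU time so a hostile image cannot wedge the compute host while being probed. processutils exposes the same guard directly via its `prlimit` argument; a sketch with the limits and base-image path from the log (note `env_variables` replaces the child's whole environment, a slight simplification of the logged `env LC_ALL=C LANG=C` prefix):

```python
import json
from oslo_concurrency import processutils

# Reproduces --as=1073741824 --cpu=30 from the logged command line.
limits = processutils.ProcessLimits(address_space=1 * 1024 ** 3, cpu_time=30)
out, _err = processutils.execute(
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
    '--force-share', '--output=json',
    prlimit=limits,
    env_variables={'LC_ALL': 'C', 'LANG': 'C'})
info = json.loads(out)
print(info['format'], info['virtual-size'])  # e.g. "qcow2", size in bytes
```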
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.556 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.557 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:32 np0005539550 nova_compute[257631]: 2025-11-29 08:29:32.557 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:33.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:33 np0005539550 nova_compute[257631]: 2025-11-29 08:29:33.745 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:33 np0005539550 nova_compute[257631]: 2025-11-29 08:29:33.751 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 95241eca-3cb6-49da-89d4-b172e76cb35c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:33 np0005539550 nova_compute[257631]: 2025-11-29 08:29:33.781 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:33 np0005539550 nova_compute[257631]: 2025-11-29 08:29:33.785 257641 DEBUG nova.network.neutron [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Successfully created port: 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:29:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.8 MiB/s rd, 3.6 MiB/s wr, 381 op/s
Nov 29 03:29:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:34.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.751 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 95241eca-3cb6-49da-89d4-b172e76cb35c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.000s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.831 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] resizing rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
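[annotation] The import/resize pair above is Nova pushing the cached base image into the `vms` pool as `<uuid>_disk` and then growing it to the flavor's 1 GiB root disk. The import step runs the `rbd import` CLI as logged; the resize goes through the librbd Python bindings, roughly like this sketch (pool, image name, and size taken from the log):

```python
import rados
import rbd

# Connect as the same client identity the log shows, then resize in place.
with rados.Rados(conffile='/etc/ceph/ceph.conf',
                 name='client.openstack') as cluster:
    with cluster.open_ioctx('vms') as ioctx:
        with rbd.Image(ioctx, '95241eca-3cb6-49da-89d4-b172e76cb35c_disk') as image:
            image.resize(1073741824)  # bytes, matching the logged target size
```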
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.957 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.963 257641 DEBUG nova.objects.instance [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lazy-loading 'migration_context' on Instance uuid 95241eca-3cb6-49da-89d4-b172e76cb35c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.985 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.985 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Ensure instance console log exists: /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.986 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.986 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:34 np0005539550 nova_compute[257631]: 2025-11-29 08:29:34.987 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:35.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:35 np0005539550 nova_compute[257631]: 2025-11-29 08:29:35.595 257641 DEBUG nova.network.neutron [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Successfully updated port: 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:29:35 np0005539550 nova_compute[257631]: 2025-11-29 08:29:35.715 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "refresh_cache-95241eca-3cb6-49da-89d4-b172e76cb35c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:35 np0005539550 nova_compute[257631]: 2025-11-29 08:29:35.715 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquired lock "refresh_cache-95241eca-3cb6-49da-89d4-b172e76cb35c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:35 np0005539550 nova_compute[257631]: 2025-11-29 08:29:35.716 257641 DEBUG nova.network.neutron [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:29:35 np0005539550 nova_compute[257631]: 2025-11-29 08:29:35.935 257641 DEBUG nova.compute.manager [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received event network-changed-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:35 np0005539550 nova_compute[257631]: 2025-11-29 08:29:35.936 257641 DEBUG nova.compute.manager [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Refreshing instance network info cache due to event network-changed-63c2e7d2-fc25-43ac-bc72-e8d14b736a69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:35 np0005539550 nova_compute[257631]: 2025-11-29 08:29:35.937 257641 DEBUG oslo_concurrency.lockutils [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-95241eca-3cb6-49da-89d4-b172e76cb35c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:36 np0005539550 nova_compute[257631]: 2025-11-29 08:29:36.017 257641 DEBUG nova.network.neutron [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:29:36 np0005539550 nova_compute[257631]: 2025-11-29 08:29:36.044 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 459 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 5.0 MiB/s wr, 333 op/s
Nov 29 03:29:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:36Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:a7:0b 10.100.0.6
Nov 29 03:29:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:36.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:36 np0005539550 nova_compute[257631]: 2025-11-29 08:29:36.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:36 np0005539550 nova_compute[257631]: 2025-11-29 08:29:36.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:29:36 np0005539550 nova_compute[257631]: 2025-11-29 08:29:36.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:29:36 np0005539550 nova_compute[257631]: 2025-11-29 08:29:36.938 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:29:37 np0005539550 nova_compute[257631]: 2025-11-29 08:29:37.140 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:37 np0005539550 nova_compute[257631]: 2025-11-29 08:29:37.141 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:37 np0005539550 nova_compute[257631]: 2025-11-29 08:29:37.141 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:29:37 np0005539550 nova_compute[257631]: 2025-11-29 08:29:37.141 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 459 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 4.5 MiB/s wr, 299 op/s
Nov 29 03:29:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:38.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:38 np0005539550 nova_compute[257631]: 2025-11-29 08:29:38.374 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Nov 29 03:29:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Nov 29 03:29:39 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Nov 29 03:29:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:29:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430918170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:29:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:29:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430918170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.098 257641 DEBUG nova.network.neutron [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Updating instance_info_cache with network_info: [{"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.163 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Releasing lock "refresh_cache-95241eca-3cb6-49da-89d4-b172e76cb35c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.163 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Instance network_info: |[{"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.164 257641 DEBUG oslo_concurrency.lockutils [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-95241eca-3cb6-49da-89d4-b172e76cb35c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.164 257641 DEBUG nova.network.neutron [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Refreshing network info cache for port 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.167 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Start _get_guest_xml network_info=[{"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.172 257641 WARNING nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.181 257641 DEBUG nova.virt.libvirt.host [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.182 257641 DEBUG nova.virt.libvirt.host [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.190 257641 DEBUG nova.virt.libvirt.host [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.191 257641 DEBUG nova.virt.libvirt.host [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.192 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.192 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.193 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.193 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.193 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.194 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.194 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.194 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.194 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.195 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.195 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.195 257641 DEBUG nova.virt.hardware [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
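[annotation] The topology walk above (limits 65536:65536:65536, exactly one candidate 1:1:1 for a single vCPU) comes down to enumerating sockets/cores/threads factorizations of the vCPU count within those limits. A simplified illustration of that selection step, assuming no preference ordering (Nova's real implementation also ranks the candidates):

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product is vcpus."""
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        rem = vcpus // s
        for c in range(1, min(rem, max_cores) + 1):
            if rem % c:
                continue
            t = rem // c
            if t <= max_threads:
                yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)] -- as in the log
print(list(possible_topologies(4)))  # six ordered factorizations of 4
```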
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.198 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:39.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220587557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.661 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
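[annotation] The `ceph mon dump --format=json` call above is how Nova learns the monitor addresses it will embed as `<host>` entries in the libvirt RBD disk `<source>` element of the guest XML being built. A sketch of extracting them, assuming the `addr` field layout of recent Ceph releases (`ip:port/nonce`):

```python
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
monmap = json.loads(out)
# Each mon reports e.g. "addr": "192.168.122.100:6789/0"; drop the /nonce.
hosts = [m['addr'].split('/')[0] for m in monmap['mons']]
print(hosts)
```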
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.684 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.688 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.799 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updating instance_info_cache with network_info: [{"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.822 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-a0310268-d298-469e-9f04-6315f83c3f89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.822 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.823 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.823 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.823 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.877 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.877 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.878 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.878 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:29:39 np0005539550 nova_compute[257631]: 2025-11-29 08:29:39.878 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 486 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.1 MiB/s wr, 293 op/s
Nov 29 03:29:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:29:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1597882530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:29:40 np0005539550 nova_compute[257631]: 2025-11-29 08:29:40.156 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:40 np0005539550 nova_compute[257631]: 2025-11-29 08:29:40.158 257641 DEBUG nova.virt.libvirt.vif [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-589487472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-589487472',id=150,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39da21e61faa40aa979d498a98e394c9',ramdisk_id='',reservation_id='r-jley4o2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1666930967',owner_user_name='tempest-ServerTagsTestJSON-1666930967-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:32Z,user_data=None,user_id='795ef5ce2edb4c1987378419f19947a2',uuid=95241eca-3cb6-49da-89d4-b172e76cb35c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:29:40 np0005539550 nova_compute[257631]: 2025-11-29 08:29:40.158 257641 DEBUG nova.network.os_vif_util [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Converting VIF {"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:29:40 np0005539550 nova_compute[257631]: 2025-11-29 08:29:40.159 257641 DEBUG nova.network.os_vif_util [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:82:69,bridge_name='br-int',has_traffic_filtering=True,id=63c2e7d2-fc25-43ac-bc72-e8d14b736a69,network=Network(e0f1a1c4-7b51-4506-bd45-263dbe20ab0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c2e7d2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:29:40 np0005539550 nova_compute[257631]: 2025-11-29 08:29:40.161 257641 DEBUG nova.objects.instance [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 95241eca-3cb6-49da-89d4-b172e76cb35c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:29:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:40.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:40 np0005539550 podman[351935]: 2025-11-29 08:29:40.325769164 +0000 UTC m=+0.050001210 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 03:29:40 np0005539550 podman[351936]: 2025-11-29 08:29:40.327499848 +0000 UTC m=+0.048089252 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible)
Nov 29 03:29:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955955014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:40 np0005539550 nova_compute[257631]: 2025-11-29 08:29:40.376 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.047 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.160 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <uuid>95241eca-3cb6-49da-89d4-b172e76cb35c</uuid>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <name>instance-00000096</name>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerTagsTestJSON-server-589487472</nova:name>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:29:39</nova:creationTime>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:user uuid="795ef5ce2edb4c1987378419f19947a2">tempest-ServerTagsTestJSON-1666930967-project-member</nova:user>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:project uuid="39da21e61faa40aa979d498a98e394c9">tempest-ServerTagsTestJSON-1666930967</nova:project>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <nova:port uuid="63c2e7d2-fc25-43ac-bc72-e8d14b736a69">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <entry name="serial">95241eca-3cb6-49da-89d4-b172e76cb35c</entry>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <entry name="uuid">95241eca-3cb6-49da-89d4-b172e76cb35c</entry>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/95241eca-3cb6-49da-89d4-b172e76cb35c_disk">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/95241eca-3cb6-49da-89d4-b172e76cb35c_disk.config">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:d0:82:69"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <target dev="tap63c2e7d2-fc"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/console.log" append="off"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:29:41 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:29:41 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:29:41 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:29:41 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.160 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Preparing to wait for external event network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.161 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.161 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.161 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.162 257641 DEBUG nova.virt.libvirt.vif [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-589487472',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-589487472',id=150,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='39da21e61faa40aa979d498a98e394c9',ramdisk_id='',reservation_id='r-jley4o2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-1666930967',owner_user_name='tempest-ServerTagsTestJSON-1666930967-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:29:32Z,user_data=None,user_id='795ef5ce2edb4c1987378419f19947a2',uuid=95241eca-3cb6-49da-89d4-b172e76cb35c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.162 257641 DEBUG nova.network.os_vif_util [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Converting VIF {"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.163 257641 DEBUG nova.network.os_vif_util [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:82:69,bridge_name='br-int',has_traffic_filtering=True,id=63c2e7d2-fc25-43ac-bc72-e8d14b736a69,network=Network(e0f1a1c4-7b51-4506-bd45-263dbe20ab0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c2e7d2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.163 257641 DEBUG os_vif [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:82:69,bridge_name='br-int',has_traffic_filtering=True,id=63c2e7d2-fc25-43ac-bc72-e8d14b736a69,network=Network(e0f1a1c4-7b51-4506-bd45-263dbe20ab0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c2e7d2-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.164 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.164 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.165 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.169 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.169 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63c2e7d2-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.169 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63c2e7d2-fc, col_values=(('external_ids', {'iface-id': '63c2e7d2-fc25-43ac-bc72-e8d14b736a69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d0:82:69', 'vm-uuid': '95241eca-3cb6-49da-89d4-b172e76cb35c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.171 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:41 np0005539550 NetworkManager[49039]: <info>  [1764404981.1719] manager: (tap63c2e7d2-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.173 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.179 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.181 257641 INFO os_vif [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:82:69,bridge_name='br-int',has_traffic_filtering=True,id=63c2e7d2-fc25-43ac-bc72-e8d14b736a69,network=Network(e0f1a1c4-7b51-4506-bd45-263dbe20ab0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c2e7d2-fc')
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.272 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.272 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.278 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.278 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:29:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:41.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.358 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.359 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.359 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] No VIF found with MAC fa:16:3e:d0:82:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.360 257641 INFO nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Using config drive
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.387 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.510 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.511 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4034MB free_disk=20.79550552368164GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.512 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:41 np0005539550 nova_compute[257631]: 2025-11-29 08:29:41.512 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 495 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.5 MiB/s wr, 249 op/s
Nov 29 03:29:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.289 257641 INFO nova.compute.manager [None req-e78f8a77-aaac-4531-9233-1194dc2b15a2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Get console output
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.295 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 29 03:29:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:43.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.377 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.391 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance a0310268-d298-469e-9f04-6315f83c3f89 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.391 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance d4258b43-e73e-47f3-b1d1-f169bcaf4534 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.392 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 95241eca-3cb6-49da-89d4-b172e76cb35c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.392 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.392 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.397 257641 DEBUG nova.network.neutron [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Updated VIF entry in instance network info cache for port 63c2e7d2-fc25-43ac-bc72-e8d14b736a69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.397 257641 DEBUG nova.network.neutron [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Updating instance_info_cache with network_info: [{"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:29:43 np0005539550 nova_compute[257631]: 2025-11-29 08:29:43.474 257641 DEBUG oslo_concurrency.lockutils [req-fd2375e9-105a-454e-9158-378c30bee903 req-1d8a5e5b-3f7e-4f8c-af9b-14b49583dafb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-95241eca-3cb6-49da-89d4-b172e76cb35c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:29:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 202 op/s
Nov 29 03:29:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:44.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:44 np0005539550 nova_compute[257631]: 2025-11-29 08:29:44.691 257641 INFO nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Creating config drive at /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/disk.config
Nov 29 03:29:44 np0005539550 nova_compute[257631]: 2025-11-29 08:29:44.696 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuupvn3eb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:29:44 np0005539550 nova_compute[257631]: 2025-11-29 08:29:44.778 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:29:44 np0005539550 nova_compute[257631]: 2025-11-29 08:29:44.836 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuupvn3eb" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:44 np0005539550 nova_compute[257631]: 2025-11-29 08:29:44.866 257641 DEBUG nova.storage.rbd_utils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] rbd image 95241eca-3cb6-49da-89d4-b172e76cb35c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:29:44 np0005539550 nova_compute[257631]: 2025-11-29 08:29:44.870 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/disk.config 95241eca-3cb6-49da-89d4-b172e76cb35c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:29:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:29:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521619498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.212 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.219 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:29:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.679 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.742 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.744 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.744 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.745 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.747 257641 DEBUG oslo_concurrency.processutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/disk.config 95241eca-3cb6-49da-89d4-b172e76cb35c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.878s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.748 257641 INFO nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Deleting local config drive /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c/disk.config because it was imported into RBD.
Nov 29 03:29:45 np0005539550 NetworkManager[49039]: <info>  [1764404985.8037] manager: (tap63c2e7d2-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Nov 29 03:29:45 np0005539550 kernel: tap63c2e7d2-fc: entered promiscuous mode
Nov 29 03:29:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:45Z|00696|binding|INFO|Claiming lport 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 for this chassis.
Nov 29 03:29:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:45Z|00697|binding|INFO|63c2e7d2-fc25-43ac-bc72-e8d14b736a69: Claiming fa:16:3e:d0:82:69 10.100.0.13
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.805 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:45Z|00698|binding|INFO|Setting lport 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 ovn-installed in OVS
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.825 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:45 np0005539550 nova_compute[257631]: 2025-11-29 08:29:45.829 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:45 np0005539550 systemd-udevd[352239]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:45 np0005539550 systemd-machined[216673]: New machine qemu-82-instance-00000096.
Nov 29 03:29:45 np0005539550 NetworkManager[49039]: <info>  [1764404985.8511] device (tap63c2e7d2-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:29:45 np0005539550 NetworkManager[49039]: <info>  [1764404985.8520] device (tap63c2e7d2-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:29:45 np0005539550 systemd[1]: Started Virtual Machine qemu-82-instance-00000096.
Nov 29 03:29:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:45Z|00699|binding|INFO|Setting lport 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 up in Southbound
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.863 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:82:69 10.100.0.13'], port_security=['fa:16:3e:d0:82:69 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '95241eca-3cb6-49da-89d4-b172e76cb35c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39da21e61faa40aa979d498a98e394c9', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd874a9ce-9675-4251-a517-4ee6c9b8bc77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d38eaf6-b450-4e6c-a289-9250cb0b8ce0, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=63c2e7d2-fc25-43ac-bc72-e8d14b736a69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.865 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 in datapath e0f1a1c4-7b51-4506-bd45-263dbe20ab0f bound to our chassis#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.866 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0f1a1c4-7b51-4506-bd45-263dbe20ab0f#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.880 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[08868907-c05d-4199-98cc-1b7fdb799b90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.880 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape0f1a1c4-71 in ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.884 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape0f1a1c4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.884 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff16280-1fb5-445b-ab63-3c87e2328378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.885 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6e55b9a5-272c-4bb5-8edc-ae529b3d1b70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
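The names in the provisioning lines above follow a visible convention: the namespace is ovnmeta-<network_uuid>, and the veth pair is tap<truncated-id>0 on the OVS side (tape0f1a1c4-70) with a trailing 1 for the namespace side (tape0f1a1c4-71). A hedged reconstruction of that naming, with the truncation length inferred from the log rather than taken from neutron source:

    # Hedged sketch: reconstruct the metadata-datapath names seen above.
    # The [:10] truncation is inferred from the log, not from neutron code.
    NET_ID = "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f"

    def metadata_names(network_id):
        base = "tap" + network_id[:10]      # 14 chars, fits IFNAMSIZ (15 + NUL)
        return {
            "namespace": "ovnmeta-" + network_id,
            "ovs_side": base + "0",         # plugged into br-int
            "ns_side": base + "1",          # moved into the namespace
        }

    print(metadata_names(NET_ID))
    # {'namespace': 'ovnmeta-e0f1a1c4-...', 'ovs_side': 'tape0f1a1c4-70',
    #  'ns_side': 'tape0f1a1c4-71'}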
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.894 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1bc192-d496-4009-a257-5309fad891f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.919 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aee9475b-e2df-4e86-96c3-6d29a8d217c1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.945 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ddf44b93-9f62-4262-93d9-03738faa4e7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539550 systemd-udevd[352242]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:29:45 np0005539550 NetworkManager[49039]: <info>  [1764404985.9560] manager: (tape0f1a1c4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/309)
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.954 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[88996b6c-69d0-4319-bd94-de0ca32ee989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.987 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bd572861-7b00-4e75-84f9-00274216e999]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:45.990 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[264255a2-00b2-4e9b-9775-860241ba0e36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:46 np0005539550 NetworkManager[49039]: <info>  [1764404986.0153] device (tape0f1a1c4-70): carrier: link connected
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.022 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[79cc71c4-c272-4022-b72f-1ef15c69067b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.041 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fb101141-5f69-4b21-9b40-14040d3caf43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0f1a1c4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:8e:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803367, 'reachable_time': 23037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352278, 'error': None, 'target': 'ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.055 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2c0966-a060-4a37-9d9d-e93dc429def7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:8e98'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 803367, 'tstamp': 803367}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352279, 'error': None, 'target': 'ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.071 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[32dc6a5a-a350-4691-8b6c-f539c8935884]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0f1a1c4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:8e:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 201], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803367, 'reachable_time': 23037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352280, 'error': None, 'target': 'ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
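The two large replies above are pyroute2-style netlink messages (RTM_NEWLINK, RTM_NEWADDR) serialized back through the privsep channel; the useful fields hide in nested 'attrs' lists of [NAME, value] pairs. A small, dependency-free helper for walking such a dict, with the sample trimmed from the reply above:

    # Hedged sketch: pull attributes out of a pyroute2-style netlink message
    # dict, i.e. one whose 'attrs' key is a list of [NAME, value] pairs,
    # possibly nested. Sample values are trimmed from the RTM_NEWLINK above.
    def get_attr(msg, name, default=None):
        for key, value in msg.get("attrs", []):
            if key == name:
                return value
        return default

    sample = {
        "index": 2,
        "state": "up",
        "attrs": [
            ["IFLA_IFNAME", "tape0f1a1c4-71"],
            ["IFLA_MTU", 1500],
            ["IFLA_ADDRESS", "fa:16:3e:fa:8e:98"],
            ["IFLA_LINKINFO", {"attrs": [["IFLA_INFO_KIND", "veth"]]}],
        ],
    }

    print(get_attr(sample, "IFLA_IFNAME"))          # tape0f1a1c4-71
    linkinfo = get_attr(sample, "IFLA_LINKINFO", {})
    print(get_attr(linkinfo, "IFLA_INFO_KIND"))     # veth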
Nov 29 03:29:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 742 KiB/s rd, 2.9 MiB/s wr, 121 op/s
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.110 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[75d23002-39f5-46c2-91c4-66343e80814f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.172 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.175 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[68c60418-466c-4ec8-b833-c89823d94a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.177 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0f1a1c4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.177 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.178 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0f1a1c4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.179 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539550 NetworkManager[49039]: <info>  [1764404986.1797] manager: (tape0f1a1c4-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Nov 29 03:29:46 np0005539550 kernel: tape0f1a1c4-70: entered promiscuous mode
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.183 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.183 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0f1a1c4-70, col_values=(('external_ids', {'iface-id': 'cb485465-da7f-4835-ae78-f94c40be4ccc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
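The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) move the metadata veth onto the integration bridge and stamp it with the iface-id that ovn-controller matches against. A hedged equivalent of the same sequence expressed with plain ovs-vsctl, which is what those IDL commands amount to:

    import subprocess

    # Hedged ovs-vsctl equivalent of the three IDL transactions above.
    # Port name and iface-id are taken from the log lines.
    PORT = "tape0f1a1c4-70"
    IFACE_ID = "cb485465-da7f-4835-ae78-f94c40be4ccc"

    def run(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    run("--if-exists", "del-port", "br-ex", PORT)   # DelPortCommand(if_exists=True)
    run("--may-exist", "add-port", "br-int", PORT)  # AddPortCommand(may_exist=True)
    run("set", "Interface", PORT,                   # DbSetCommand on external_ids
        f"external_ids:iface-id={IFACE_ID}")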
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.184 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:46Z|00700|binding|INFO|Releasing lport cb485465-da7f-4835-ae78-f94c40be4ccc from this chassis (sb_readonly=0)
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.199 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.200 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.201 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e0f1a1c4-7b51-4506-bd45-263dbe20ab0f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e0f1a1c4-7b51-4506-bd45-263dbe20ab0f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.202 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e82d9ba1-0a9f-467e-a524-01621a569819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
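The "Unable to access ... .pid.haproxy" debug just above is the expected first-run path: before spawning haproxy the agent probes for an existing pidfile, and ENOENT simply means no proxy is running yet. A hedged sketch of that probe:

    # Hedged sketch of the pidfile probe logged above: read a pid if the
    # file exists, treat ENOENT as "no proxy running yet", not as an error.
    PIDFILE = ("/var/lib/neutron/external/pids/"
               "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f.pid.haproxy")

    def get_pid(path):
        try:
            with open(path) as fh:
                return int(fh.read().strip())
        except FileNotFoundError:
            return None  # matches the "[Errno 2]" debug message in the log

    pid = get_pid(PIDFILE)
    print("haproxy already running" if pid else "no pidfile; will spawn haproxy")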
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.203 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/e0f1a1c4-7b51-4506-bd45-263dbe20ab0f.pid.haproxy
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID e0f1a1c4-7b51-4506-bd45-263dbe20ab0f
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:29:46 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:46.204 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'env', 'PROCESS_TAG=haproxy-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e0f1a1c4-7b51-4506-bd45-263dbe20ab0f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
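create_config_file writes the configuration dumped above, then the agent launches haproxy through neutron-rootwrap inside the ovnmeta namespace, with PROCESS_TAG set so the process can be identified later. A hedged sketch of rendering a config of the same shape from a template; values are from the log, the Template itself is an illustration (defaults section omitted for brevity), not neutron's driver code:

    from string import Template

    # Illustrative template mirroring the haproxy config dumped above.
    CFG = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket
        http-request add-header X-OVN-Network-ID $network_id
    """)

    print(CFG.substitute(
        network_id="e0f1a1c4-7b51-4506-bd45-263dbe20ab0f",
        pidfile="/var/lib/neutron/external/pids/"
                "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f.pid.haproxy",
        socket="/var/lib/neutron/metadata_proxy",
    ))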
Nov 29 03:29:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:46.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.509 257641 DEBUG nova.compute.manager [req-49c38907-e581-4999-9463-17cc5ab0e933 req-8d8241b8-b4d3-4865-819f-9dbca66ffc73 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received event network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.510 257641 DEBUG oslo_concurrency.lockutils [req-49c38907-e581-4999-9463-17cc5ab0e933 req-8d8241b8-b4d3-4865-819f-9dbca66ffc73 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.510 257641 DEBUG oslo_concurrency.lockutils [req-49c38907-e581-4999-9463-17cc5ab0e933 req-8d8241b8-b4d3-4865-819f-9dbca66ffc73 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.511 257641 DEBUG oslo_concurrency.lockutils [req-49c38907-e581-4999-9463-17cc5ab0e933 req-8d8241b8-b4d3-4865-819f-9dbca66ffc73 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.511 257641 DEBUG nova.compute.manager [req-49c38907-e581-4999-9463-17cc5ab0e933 req-8d8241b8-b4d3-4865-819f-9dbca66ffc73 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Processing event network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:29:46 np0005539550 podman[352324]: 2025-11-29 08:29:46.587543952 +0000 UTC m=+0.028817053 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.688 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.689 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.690 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.690 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.690 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
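The acquire/waited/released triplets above come from oslo_concurrency.lockutils: terminate_instance serializes on the instance UUID, then clears the instance's pending events under a separate "<uuid>-events" lock. A minimal sketch of the same pattern, with the lock names taken from the log:

    from oslo_concurrency import lockutils

    # Minimal sketch of the locking pattern in the log: serialize work on an
    # instance by UUID, then touch event bookkeeping under "<uuid>-events".
    UUID = "d4258b43-e73e-47f3-b1d1-f169bcaf4534"

    def do_terminate_instance():
        with lockutils.lock(UUID):             # "Acquiring lock <uuid> ..."
            with lockutils.lock(UUID + "-events"):
                pass                           # clear_events_for_instance()
            # ... "Terminating instance", _shutdown_instance(), etc.

    do_terminate_instance()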
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.692 257641 INFO nova.compute.manager [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Terminating instance#033[00m
Nov 29 03:29:46 np0005539550 nova_compute[257631]: 2025-11-29 08:29:46.693 257641 DEBUG nova.compute.manager [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:29:47 np0005539550 kernel: tapf455cc42-f4 (unregistering): left promiscuous mode
Nov 29 03:29:47 np0005539550 NetworkManager[49039]: <info>  [1764404987.0695] device (tapf455cc42-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:47Z|00701|binding|INFO|Releasing lport f455cc42-f497-49e9-84f6-0713ec25f786 from this chassis (sb_readonly=0)
Nov 29 03:29:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:47Z|00702|binding|INFO|Setting lport f455cc42-f497-49e9-84f6-0713ec25f786 down in Southbound
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.080 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:47Z|00703|binding|INFO|Removing iface tapf455cc42-f4 ovn-installed in OVS
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.083 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:47 np0005539550 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000092.scope: Deactivated successfully.
Nov 29 03:29:47 np0005539550 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000092.scope: Consumed 13.863s CPU time.
Nov 29 03:29:47 np0005539550 systemd-machined[216673]: Machine qemu-81-instance-00000092 terminated.
Nov 29 03:29:47 np0005539550 podman[352324]: 2025-11-29 08:29:47.23205116 +0000 UTC m=+0.673324241 container create 08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:47.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
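The recurring radosgw lines are anonymous HEAD / probes from the 192.168.122.x addresses answered in about a millisecond, which is the signature of a load-balancer health check. A hedged probe of the same shape with http.client; host and port here are pure assumptions for illustration, not taken from this deployment:

    import http.client

    # Hedged reproduction of the health probe seen in the radosgw log: an
    # anonymous HEAD / expected to return 200. Host/port are assumptions;
    # point them at the actual RGW endpoint.
    conn = http.client.HTTPConnection("rgw.example.com", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # expect 200, matching 'http_status=200' above
    conn.close()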
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.331 257641 INFO nova.virt.libvirt.driver [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Instance destroyed successfully.#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.332 257641 DEBUG nova.objects.instance [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lazy-loading 'resources' on Instance uuid d4258b43-e73e-47f3-b1d1-f169bcaf4534 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:47.352 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a7:0b 10.100.0.6'], port_security=['fa:16:3e:a5:a7:0b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'd4258b43-e73e-47f3-b1d1-f169bcaf4534', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4145ed6cde61439ebcc12fae2609b724', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e716463c-b057-4e72-86b1-45cce498de54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f97b85de-e617-4358-b258-b884e5e43079, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f455cc42-f497-49e9-84f6-0713ec25f786) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.413 257641 DEBUG nova.virt.libvirt.vif [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:28:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1524442594',display_name='tempest-TestNetworkAdvancedServerOps-server-1524442594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1524442594',id=146,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAhjCe5Dwqr90TZZDpDYSff23Y0Y+CPvWtwZC2gBvEddB0vd/KfzjYxDz13s1SQbnkCRhnyyKQ9iihlf4eHmWUZGjirLU1hZLSoBtjbwWoOTCzNCv4qInFtsgJKPWKzRFw==',key_name='tempest-TestNetworkAdvancedServerOps-964699803',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:21Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4145ed6cde61439ebcc12fae2609b724',ramdisk_id='',reservation_id='r-b8gw4bee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-274367929',owner_user_name='tempest-TestNetworkAdvancedServerOps-274367929-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:28Z,user_data=None,user_id='fed6803a835e471f9bd60e3236e78e5d',uuid=d4258b43-e73e-47f3-b1d1-f169bcaf4534,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": 
{"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.413 257641 DEBUG nova.network.os_vif_util [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converting VIF {"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.222", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.414 257641 DEBUG nova.network.os_vif_util [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.414 257641 DEBUG os_vif [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
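Before calling the plugin's unplug, nova converts its JSON-ish VIF dict (logged in full above) into a typed os-vif object; the real classes live in os_vif.objects. A dependency-free illustration of that conversion step, using a dataclass and a field subset chosen for the example:

    from dataclasses import dataclass

    # Illustration of the nova_to_osvif_vif step logged above: map the VIF
    # dict onto a typed object. Field subset is an example; the real
    # implementation builds os_vif.objects.vif.VIFOpenVSwitch.
    @dataclass
    class VIFOpenVSwitch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool

    def nova_to_osvif_vif(vif):
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            active=vif["active"],
        )

    vif = {"id": "f455cc42-f497-49e9-84f6-0713ec25f786",
           "address": "fa:16:3e:a5:a7:0b",
           "details": {"bridge_name": "br-int"},
           "devname": "tapf455cc42-f4",
           "active": True}
    print(nova_to_osvif_vif(vif))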
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.417 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.417 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf455cc42-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.418 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404987.4165256, 95241eca-3cb6-49da-89d4-b172e76cb35c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.418 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.420 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.421 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.423 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.424 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.425 257641 INFO os_vif [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a7:0b,bridge_name='br-int',has_traffic_filtering=True,id=f455cc42-f497-49e9-84f6-0713ec25f786,network=Network(8094e12d-22b9-4e7c-bcb5-2de20ab6e675),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf455cc42-f4')#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.524 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.527 257641 INFO nova.virt.libvirt.driver [-] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Instance spawned successfully.#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.528 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.529 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:47 np0005539550 systemd[1]: Started libpod-conmon-08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6.scope.
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.572 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.573 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.573 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.573 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.574 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.575 257641 DEBUG nova.virt.libvirt.driver [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:29:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f31909aef509926deafa627518e6bc370ceb6d10570a2c4051f9a3f9e28bf0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.646 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
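handle_lifecycle_event compares the hypervisor-reported power state with the DB record, but defers while a task is in flight, hence "pending task (spawning). Skip." just above. A hedged sketch of that guard, with the numeric states following nova's convention as seen in the log (0 = NOSTATE, 1 = RUNNING):

    # Hedged sketch of the sync-power-state guard logged above.
    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # "During sync_power_state the instance has a pending task ... Skip."
            return db_power_state
        return vm_power_state  # adopt what the hypervisor reports

    print(sync_power_state(0, 1, "spawning"))  # 0 -> skipped, DB untouched
    print(sync_power_state(0, 1, None))        # 1 -> DB updated to RUNNING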
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.646 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404987.4183772, 95241eca-3cb6-49da-89d4-b172e76cb35c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.647 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:29:47 np0005539550 podman[352324]: 2025-11-29 08:29:47.694696739 +0000 UTC m=+1.135969840 container init 08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:29:47 np0005539550 podman[352324]: 2025-11-29 08:29:47.701018289 +0000 UTC m=+1.142291370 container start 08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:29:47 np0005539550 neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f[352416]: [NOTICE]   (352420) : New worker (352422) forked
Nov 29 03:29:47 np0005539550 neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f[352416]: [NOTICE]   (352420) : Loading success.
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.725 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.729 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764404987.4232981, 95241eca-3cb6-49da-89d4-b172e76cb35c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.729 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.769 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.774 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.794 257641 INFO nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Took 15.41 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.794 257641 DEBUG nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.812 257641 DEBUG nova.compute.manager [req-e59cc59c-7332-47bc-9ae4-020b5801aab2 req-d2fda489-770d-4d10-b919-9859130203d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.812 257641 DEBUG oslo_concurrency.lockutils [req-e59cc59c-7332-47bc-9ae4-020b5801aab2 req-d2fda489-770d-4d10-b919-9859130203d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.812 257641 DEBUG oslo_concurrency.lockutils [req-e59cc59c-7332-47bc-9ae4-020b5801aab2 req-d2fda489-770d-4d10-b919-9859130203d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.814 257641 DEBUG oslo_concurrency.lockutils [req-e59cc59c-7332-47bc-9ae4-020b5801aab2 req-d2fda489-770d-4d10-b919-9859130203d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.814 257641 DEBUG nova.compute.manager [req-e59cc59c-7332-47bc-9ae4-020b5801aab2 req-d2fda489-770d-4d10-b919-9859130203d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.814 257641 DEBUG nova.compute.manager [req-e59cc59c-7332-47bc-9ae4-020b5801aab2 req-d2fda489-770d-4d10-b919-9859130203d1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-unplugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:29:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:47.840 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f455cc42-f497-49e9-84f6-0713ec25f786 in datapath 8094e12d-22b9-4e7c-bcb5-2de20ab6e675 unbound from our chassis#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.843 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:29:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:47.844 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:47.846 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[34e354a9-bfda-4285-97ff-5ced9581bcba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:47.846 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 namespace which is not needed anymore#033[00m
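
Once port f455cc42 is unbound and no valid VIF remains on datapath 8094e12d, the agent stops serving metadata there and removes the ovnmeta-<datapath> network namespace; the privsep replies interleaved here are that privileged work. A hedged sketch of the final step using pyroute2, which neutron's privileged ip_lib relies on under the hood; the helper name is invented:

    from pyroute2 import netns

    def teardown_metadata_namespace(datapath_uuid):
        # Requires root, which is why neutron routes this via oslo.privsep.
        name = 'ovnmeta-%s' % datapath_uuid
        if name in netns.listnetns():
            netns.remove(name)

    teardown_metadata_namespace('8094e12d-22b9-4e7c-bcb5-2de20ab6e675')
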
Nov 29 03:29:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.911 257641 INFO nova.compute.manager [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Took 16.48 seconds to build instance.#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.955 257641 DEBUG oslo_concurrency.lockutils [None req-daae0ebd-7249-44c6-b6c0-c1568a590e55 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.969 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.969 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.969 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.970 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:47 np0005539550 nova_compute[257631]: 2025-11-29 08:29:47.970 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
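
The burst of periodic_task lines is oslo.service's decorator-driven loop: each method tagged as a periodic task fires on its own spacing, and _reclaim_queued_deletes returns immediately because reclaim_instance_interval is not set to a positive value. A schematic of the pattern; CONF here is a SimpleNamespace stand-in for the real oslo.config object:

    from types import SimpleNamespace
    from oslo_service import periodic_task

    CONF = SimpleNamespace(reclaim_instance_interval=0)  # stand-in config

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Same guard the log shows: a non-positive interval disables
            # reclamation of soft-deleted instances entirely.
            if CONF.reclaim_instance_interval <= 0:
                return
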
Nov 29 03:29:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 742 KiB/s rd, 2.9 MiB/s wr, 121 op/s
Nov 29 03:29:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:29:48 np0005539550 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[351638]: [NOTICE]   (351642) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:48 np0005539550 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[351638]: [NOTICE]   (351642) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:48 np0005539550 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[351638]: [WARNING]  (351642) : Exiting Master process...
Nov 29 03:29:48 np0005539550 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[351638]: [ALERT]    (351642) : Current worker (351644) exited with code 143 (Terminated)
Nov 29 03:29:48 np0005539550 neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675[351638]: [WARNING]  (351642) : All workers exited. Exiting... (0)
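
The haproxy ALERT is not a failure: this per-network metadata proxy is being stopped deliberately, the master asks its worker to exit, and 143 is the usual 128-plus-signal-number encoding for a process terminated by SIGTERM (15). A one-line check:

    import signal

    # 128 + SIGTERM(15) == 143: killed by signal during an orderly
    # shutdown, not an application error exit.
    assert 128 + signal.SIGTERM == 143
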
Nov 29 03:29:48 np0005539550 systemd[1]: libpod-d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723.scope: Deactivated successfully.
Nov 29 03:29:48 np0005539550 podman[352448]: 2025-11-29 08:29:48.155924892 +0000 UTC m=+0.220174993 container died d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:29:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9f07dbe057ab7eda0f7614d77ec4913e21b01a07fff250820030a123b3e2fedf-merged.mount: Deactivated successfully.
Nov 29 03:29:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:48.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
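
radosgw's beast frontend logs one access line per request; the anonymous HEAD / probes recur about once a second from 192.168.122.100 and .102, which looks like a load-balancer health check (an inference, the log does not say so). A small sketch that parses the line shape above; the regex is written for exactly this format:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+)')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:29:48.299 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('ip'), m.group('request'), m.group('status'))
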
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.379 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:48 np0005539550 podman[352448]: 2025-11-29 08:29:48.405577092 +0000 UTC m=+0.469827163 container cleanup d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:29:48 np0005539550 systemd[1]: libpod-conmon-d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723.scope: Deactivated successfully.
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.541 257641 DEBUG nova.compute.manager [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.542 257641 DEBUG nova.compute.manager [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing instance network info cache due to event network-changed-f455cc42-f497-49e9-84f6-0713ec25f786. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.542 257641 DEBUG oslo_concurrency.lockutils [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.543 257641 DEBUG oslo_concurrency.lockutils [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.543 257641 DEBUG nova.network.neutron [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Refreshing network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:29:48 np0005539550 podman[352476]: 2025-11-29 08:29:48.578336269 +0000 UTC m=+0.147148968 container remove d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:29:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.590 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[64da6219-629e-4964-b640-0e5488780b47]: (4, ('Sat Nov 29 08:29:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 (d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723)\nd8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723\nSat Nov 29 08:29:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 (d8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723)\nd8ac41b25843bcf09e7c967402df534b19b81a4f4cc977b02569493b8ff7a723\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.591 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63bbd238-8c60-4488-ade4-6b5fc2eebb7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.593 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8094e12d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
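
The DelPortCommand transaction is ovsdbapp removing the metadata tap tap8094e12d-20 from the integration bridge, with if_exists=True so a missing port is not an error. A hedged sketch of issuing the same command directly; the TCP endpoint is an assumption, since the agent actually talks to the local ovsdb-server socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # One transaction, one command, matching the log's do_commit entry.
    api.del_port('tap8094e12d-20', if_exists=True).execute(check_error=True)
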
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.595 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:48 np0005539550 kernel: tap8094e12d-20: left promiscuous mode
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.612 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.616 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9b506d13-b575-4949-b501-130020d9409c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.627 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8f26d8f7-8377-4ddd-884a-5013d7017a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.630 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[34369262-c195-4823-a5b1-3206dbb9296a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.647 257641 DEBUG nova.compute.manager [req-f0c1fc1f-d8aa-4aca-8a9a-3d27eef4c75d req-4eb191bc-1c06-4f70-ac31-38ec93e1646b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received event network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.648 257641 DEBUG oslo_concurrency.lockutils [req-f0c1fc1f-d8aa-4aca-8a9a-3d27eef4c75d req-4eb191bc-1c06-4f70-ac31-38ec93e1646b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.648 257641 DEBUG oslo_concurrency.lockutils [req-f0c1fc1f-d8aa-4aca-8a9a-3d27eef4c75d req-4eb191bc-1c06-4f70-ac31-38ec93e1646b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.648 257641 DEBUG oslo_concurrency.lockutils [req-f0c1fc1f-d8aa-4aca-8a9a-3d27eef4c75d req-4eb191bc-1c06-4f70-ac31-38ec93e1646b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.648 257641 DEBUG nova.compute.manager [req-f0c1fc1f-d8aa-4aca-8a9a-3d27eef4c75d req-4eb191bc-1c06-4f70-ac31-38ec93e1646b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] No waiting events found dispatching network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:48 np0005539550 nova_compute[257631]: 2025-11-29 08:29:48.648 257641 WARNING nova.compute.manager [req-f0c1fc1f-d8aa-4aca-8a9a-3d27eef4c75d req-4eb191bc-1c06-4f70-ac31-38ec93e1646b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received unexpected event network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.652 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2be913c0-46a9-4843-be71-9f0d3612074a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800897, 'reachable_time': 40378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352494, 'error': None, 'target': 'ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.654 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8094e12d-22b9-4e7c-bcb5-2de20ab6e675 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:48.654 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[0af37ef9-ea25-477b-8067-ddcab551a694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:48 np0005539550 systemd[1]: run-netns-ovnmeta\x2d8094e12d\x2d22b9\x2d4e7c\x2dbcb5\x2d2de20ab6e675.mount: Deactivated successfully.
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:49.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3300eb28-9bc2-4571-b12c-597ed45e4e7b does not exist
Nov 29 03:29:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 25c94a03-908b-46ad-8c56-c17a7e9f8947 does not exist
Nov 29 03:29:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5cb38979-516b-4ab9-bf26-98b2185ec581 does not exist
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:29:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
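
The ceph-mon lines above are the cephadm mgr module driving the monitor with JSON mon commands (config rm, auth get, config generate-minimal-conf) while it refreshes host state. The same calls can be issued through librados; a hedged sketch, with the conffile path and admin client as assumptions:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    # Same command shape the monitor logs via handle_command above.
    cmd = json.dumps({"prefix": "config generate-minimal-conf"})
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outbuf.decode())
    cluster.shutdown()
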
Nov 29 03:29:49 np0005539550 nova_compute[257631]: 2025-11-29 08:29:49.953 257641 DEBUG nova.compute.manager [req-529e6d22-4f22-4014-a312-ef6efe60cd57 req-66f2918d-481f-45f4-82ce-13b69cc61ac5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:49 np0005539550 nova_compute[257631]: 2025-11-29 08:29:49.955 257641 DEBUG oslo_concurrency.lockutils [req-529e6d22-4f22-4014-a312-ef6efe60cd57 req-66f2918d-481f-45f4-82ce-13b69cc61ac5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:49 np0005539550 nova_compute[257631]: 2025-11-29 08:29:49.955 257641 DEBUG oslo_concurrency.lockutils [req-529e6d22-4f22-4014-a312-ef6efe60cd57 req-66f2918d-481f-45f4-82ce-13b69cc61ac5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:49 np0005539550 nova_compute[257631]: 2025-11-29 08:29:49.956 257641 DEBUG oslo_concurrency.lockutils [req-529e6d22-4f22-4014-a312-ef6efe60cd57 req-66f2918d-481f-45f4-82ce-13b69cc61ac5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:49 np0005539550 nova_compute[257631]: 2025-11-29 08:29:49.956 257641 DEBUG nova.compute.manager [req-529e6d22-4f22-4014-a312-ef6efe60cd57 req-66f2918d-481f-45f4-82ce-13b69cc61ac5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] No waiting events found dispatching network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:49 np0005539550 nova_compute[257631]: 2025-11-29 08:29:49.956 257641 WARNING nova.compute.manager [req-529e6d22-4f22-4014-a312-ef6efe60cd57 req-66f2918d-481f-45f4-82ce-13b69cc61ac5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received unexpected event network-vif-plugged-f455cc42-f497-49e9-84f6-0713ec25f786 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:29:50 np0005539550 podman[352635]: 2025-11-29 08:29:49.954968208 +0000 UTC m=+0.023636541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 495 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.2 MiB/s wr, 152 op/s
Nov 29 03:29:50 np0005539550 podman[352635]: 2025-11-29 08:29:50.10153193 +0000 UTC m=+0.170200243 container create 826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:29:50 np0005539550 systemd[1]: Started libpod-conmon-826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884.scope.
Nov 29 03:29:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:29:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:50.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:29:50 np0005539550 podman[352635]: 2025-11-29 08:29:50.597973707 +0000 UTC m=+0.666642040 container init 826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:29:50 np0005539550 podman[352635]: 2025-11-29 08:29:50.611664315 +0000 UTC m=+0.680332628 container start 826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:29:50 np0005539550 pensive_khayyam[352653]: 167 167
Nov 29 03:29:50 np0005539550 systemd[1]: libpod-826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884.scope: Deactivated successfully.
Nov 29 03:29:50 np0005539550 conmon[352653]: conmon 826249bfcd793c45a0cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884.scope/container/memory.events
Nov 29 03:29:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:29:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:29:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:29:50 np0005539550 podman[352635]: 2025-11-29 08:29:50.738063665 +0000 UTC m=+0.806731998 container attach 826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:29:50 np0005539550 podman[352635]: 2025-11-29 08:29:50.739136682 +0000 UTC m=+0.807804995 container died 826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:29:50 np0005539550 nova_compute[257631]: 2025-11-29 08:29:50.740 257641 DEBUG nova.network.neutron [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updated VIF entry in instance network info cache for port f455cc42-f497-49e9-84f6-0713ec25f786. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:29:50 np0005539550 nova_compute[257631]: 2025-11-29 08:29:50.740 257641 DEBUG nova.network.neutron [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786", "address": "fa:16:3e:a5:a7:0b", "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675", "bridge": "br-int", "label": "tempest-network-smoke--1722283643", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4145ed6cde61439ebcc12fae2609b724", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf455cc42-f4", "ovs_interfaceid": "f455cc42-f497-49e9-84f6-0713ec25f786", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:50 np0005539550 nova_compute[257631]: 2025-11-29 08:29:50.768 257641 DEBUG oslo_concurrency.lockutils [req-fd6ad5ca-b2b2-4895-8b34-7b5d0233cf73 req-e63755b9-d5f9-43b3-affa-9f96109cfdf0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d4258b43-e73e-47f3-b1d1-f169bcaf4534" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
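
The network_info blob nova just cached is plain JSON once lifted out of the log line; extracting the fixed IPs takes a few lines. The snippet below inlines a trimmed copy of the structure above:

    import json

    network_info = json.loads('''
    [{"id": "f455cc42-f497-49e9-84f6-0713ec25f786",
      "network": {"id": "8094e12d-22b9-4e7c-bcb5-2de20ab6e675",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.6",
                                        "type": "fixed"}]}]},
      "active": false}]''')
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], subnet["cidr"], ip["address"])
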
Nov 29 03:29:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-47fac362152d1ec91675c73dea26040d3280b34ec3af955d1f12844678ffcc0a-merged.mount: Deactivated successfully.
Nov 29 03:29:50 np0005539550 podman[352635]: 2025-11-29 08:29:50.930097481 +0000 UTC m=+0.998765804 container remove 826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_khayyam, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:29:50 np0005539550 systemd[1]: libpod-conmon-826249bfcd793c45a0cb1936439c8b324242135f5a584f4cef1e30586eb20884.scope: Deactivated successfully.
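
pensive_khayyam is a short-lived cephadm helper container: created, started, it prints "167 167" (apparently the ceph uid/gid inside the image) and exits within a second, after which conmon and its scopes are torn down. The effect is roughly a one-shot podman run; the ceph-volume entrypoint below is a plausible stand-in, since the log does not show the actual arguments:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f133'
             '6506074267a4b47c1bd914a00fec0')
    # One-shot helper container, removed on exit like the ones logged here.
    result = subprocess.run(
        ['podman', 'run', '--rm', image,
         'ceph-volume', 'inventory', '--format', 'json'],
        capture_output=True, text=True)
    print(result.returncode, result.stdout[:200])
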
Nov 29 03:29:51 np0005539550 podman[352677]: 2025-11-29 08:29:51.171273966 +0000 UTC m=+0.094820979 container create 25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:29:51 np0005539550 podman[352677]: 2025-11-29 08:29:51.101595277 +0000 UTC m=+0.025142310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:51 np0005539550 nova_compute[257631]: 2025-11-29 08:29:51.213 257641 INFO nova.virt.libvirt.driver [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Deleting instance files /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534_del#033[00m
Nov 29 03:29:51 np0005539550 nova_compute[257631]: 2025-11-29 08:29:51.214 257641 INFO nova.virt.libvirt.driver [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Deletion of /var/lib/nova/instances/d4258b43-e73e-47f3-b1d1-f169bcaf4534_del complete#033[00m
Nov 29 03:29:51 np0005539550 systemd[1]: Started libpod-conmon-25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd.scope.
Nov 29 03:29:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc669ce7fc94fa0e49206f0b3afa3714592ad13d3e9072e4eab4f0f9f1cc327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc669ce7fc94fa0e49206f0b3afa3714592ad13d3e9072e4eab4f0f9f1cc327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc669ce7fc94fa0e49206f0b3afa3714592ad13d3e9072e4eab4f0f9f1cc327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc669ce7fc94fa0e49206f0b3afa3714592ad13d3e9072e4eab4f0f9f1cc327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cc669ce7fc94fa0e49206f0b3afa3714592ad13d3e9072e4eab4f0f9f1cc327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
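
The xfs messages are informational: these overlay mounts use the older on-disk timestamp format, whose ceiling is 0x7fffffff seconds, the signed 32-bit time_t limit. Decoding the constant:

    from datetime import datetime, timezone

    # 0x7fffffff == 2147483647, the signed 32-bit time_t ceiling.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
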
Nov 29 03:29:51 np0005539550 podman[352677]: 2025-11-29 08:29:51.317148081 +0000 UTC m=+0.240695114 container init 25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_villani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:29:51 np0005539550 podman[352677]: 2025-11-29 08:29:51.324048136 +0000 UTC m=+0.247595149 container start 25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:29:51 np0005539550 nova_compute[257631]: 2025-11-29 08:29:51.332 257641 INFO nova.compute.manager [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Took 4.64 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:29:51 np0005539550 nova_compute[257631]: 2025-11-29 08:29:51.332 257641 DEBUG oslo.service.loopingcall [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:29:51 np0005539550 nova_compute[257631]: 2025-11-29 08:29:51.333 257641 DEBUG nova.compute.manager [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:29:51 np0005539550 nova_compute[257631]: 2025-11-29 08:29:51.333 257641 DEBUG nova.network.neutron [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:29:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:51.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:51 np0005539550 podman[352677]: 2025-11-29 08:29:51.353246547 +0000 UTC m=+0.276793560 container attach 25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:29:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.3 MiB/s wr, 133 op/s
Nov 29 03:29:52 np0005539550 determined_villani[352693]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:29:52 np0005539550 determined_villani[352693]: --> relative data size: 1.0
Nov 29 03:29:52 np0005539550 determined_villani[352693]: --> All data devices are unavailable
Nov 29 03:29:52 np0005539550 systemd[1]: libpod-25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd.scope: Deactivated successfully.
Nov 29 03:29:52 np0005539550 podman[352677]: 2025-11-29 08:29:52.119065476 +0000 UTC m=+1.042612499 container died 25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_villani, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:29:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2cc669ce7fc94fa0e49206f0b3afa3714592ad13d3e9072e4eab4f0f9f1cc327-merged.mount: Deactivated successfully.
Nov 29 03:29:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:52.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:52 np0005539550 podman[352677]: 2025-11-29 08:29:52.362071807 +0000 UTC m=+1.285618820 container remove 25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:29:52 np0005539550 systemd[1]: libpod-conmon-25f56918cfdee51095fba3f1eea2d15197cea2e753b86b0d93abed41829eb8cd.scope: Deactivated successfully.
Nov 29 03:29:52 np0005539550 podman[352709]: 2025-11-29 08:29:52.38465586 +0000 UTC m=+0.237492232 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:52 np0005539550 nova_compute[257631]: 2025-11-29 08:29:52.419 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:52 np0005539550 podman[352886]: 2025-11-29 08:29:52.987052047 +0000 UTC m=+0.068703665 container create e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:29:53 np0005539550 systemd[1]: Started libpod-conmon-e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe.scope.
Nov 29 03:29:53 np0005539550 podman[352886]: 2025-11-29 08:29:52.942201608 +0000 UTC m=+0.023853256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:53 np0005539550 podman[352886]: 2025-11-29 08:29:53.102330405 +0000 UTC m=+0.183982053 container init e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hugle, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:29:53 np0005539550 podman[352886]: 2025-11-29 08:29:53.110932173 +0000 UTC m=+0.192583791 container start e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:29:53 np0005539550 tender_hugle[352903]: 167 167
Nov 29 03:29:53 np0005539550 systemd[1]: libpod-e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe.scope: Deactivated successfully.
Nov 29 03:29:53 np0005539550 podman[352886]: 2025-11-29 08:29:53.122164478 +0000 UTC m=+0.203816116 container attach e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hugle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:29:53 np0005539550 podman[352886]: 2025-11-29 08:29:53.122592869 +0000 UTC m=+0.204244487 container died e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:29:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2fd7c349610c51de5648b51f054bd4868d30dd3969c5f5227d4a250bf32a288c-merged.mount: Deactivated successfully.
Nov 29 03:29:53 np0005539550 podman[352886]: 2025-11-29 08:29:53.327268917 +0000 UTC m=+0.408920545 container remove e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hugle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:29:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:29:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:53.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:29:53 np0005539550 systemd[1]: libpod-conmon-e66a936cb548454368b54378694fadc889b56dd93d7e922473554f5228f618fe.scope: Deactivated successfully.
Nov 29 03:29:53 np0005539550 nova_compute[257631]: 2025-11-29 08:29:53.370 257641 DEBUG nova.network.neutron [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:29:53 np0005539550 nova_compute[257631]: 2025-11-29 08:29:53.381 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:53 np0005539550 nova_compute[257631]: 2025-11-29 08:29:53.392 257641 INFO nova.compute.manager [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Took 2.06 seconds to deallocate network for instance.#033[00m
Nov 29 03:29:53 np0005539550 nova_compute[257631]: 2025-11-29 08:29:53.440 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:53 np0005539550 nova_compute[257631]: 2025-11-29 08:29:53.441 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:53 np0005539550 nova_compute[257631]: 2025-11-29 08:29:53.535 257641 DEBUG oslo_concurrency.processutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:29:53 np0005539550 podman[352929]: 2025-11-29 08:29:53.577009509 +0000 UTC m=+0.109178883 container create e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendel, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:29:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:53 np0005539550 podman[352929]: 2025-11-29 08:29:53.493016886 +0000 UTC m=+0.025186290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:53 np0005539550 systemd[1]: Started libpod-conmon-e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed.scope.
Nov 29 03:29:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d921275e03f6f4f50df603aa38ed51d89297524c85d2984210793bfa04afb33d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d921275e03f6f4f50df603aa38ed51d89297524c85d2984210793bfa04afb33d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d921275e03f6f4f50df603aa38ed51d89297524c85d2984210793bfa04afb33d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d921275e03f6f4f50df603aa38ed51d89297524c85d2984210793bfa04afb33d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:53 np0005539550 podman[352929]: 2025-11-29 08:29:53.749311405 +0000 UTC m=+0.281480789 container init e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:29:53 np0005539550 podman[352929]: 2025-11-29 08:29:53.7569889 +0000 UTC m=+0.289158274 container start e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:29:53 np0005539550 podman[352929]: 2025-11-29 08:29:53.818698517 +0000 UTC m=+0.350867921 container attach e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.056 257641 DEBUG oslo_concurrency.processutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.063 257641 DEBUG nova.compute.provider_tree [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.084 257641 DEBUG nova.scheduler.client.report [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:29:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 476 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 132 op/s
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.110 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.139 257641 INFO nova.scheduler.client.report [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Deleted allocations for instance d4258b43-e73e-47f3-b1d1-f169bcaf4534#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.221 257641 DEBUG oslo_concurrency.lockutils [None req-c1f0c6b1-b243-4cc1-8dd6-0ccd31fbaab2 fed6803a835e471f9bd60e3236e78e5d 4145ed6cde61439ebcc12fae2609b724 - - default default] Lock "d4258b43-e73e-47f3-b1d1-f169bcaf4534" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.301 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.301 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.302 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.302 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.302 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.303 257641 INFO nova.compute.manager [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Terminating instance#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.305 257641 DEBUG nova.compute.manager [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:29:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:54.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:54 np0005539550 kernel: tap63c2e7d2-fc (unregistering): left promiscuous mode
Nov 29 03:29:54 np0005539550 NetworkManager[49039]: <info>  [1764404994.5214] device (tap63c2e7d2-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.531 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:54Z|00704|binding|INFO|Releasing lport 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 from this chassis (sb_readonly=0)
Nov 29 03:29:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:54Z|00705|binding|INFO|Setting lport 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 down in Southbound
Nov 29 03:29:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:54Z|00706|binding|INFO|Removing iface tap63c2e7d2-fc ovn-installed in OVS
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.533 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:54.539 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d0:82:69 10.100.0.13'], port_security=['fa:16:3e:d0:82:69 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '95241eca-3cb6-49da-89d4-b172e76cb35c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39da21e61faa40aa979d498a98e394c9', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd874a9ce-9675-4251-a517-4ee6c9b8bc77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7d38eaf6-b450-4e6c-a289-9250cb0b8ce0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=63c2e7d2-fc25-43ac-bc72-e8d14b736a69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:29:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:54.541 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 63c2e7d2-fc25-43ac-bc72-e8d14b736a69 in datapath e0f1a1c4-7b51-4506-bd45-263dbe20ab0f unbound from our chassis#033[00m
Nov 29 03:29:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:54.543 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0f1a1c4-7b51-4506-bd45-263dbe20ab0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:29:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:54.544 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2e895d01-4f3a-43a0-a6d2-09ed2dbc3f08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:54 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:54.545 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f namespace which is not needed anymore#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.548 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]: {
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:    "0": [
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:        {
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "devices": [
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "/dev/loop3"
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            ],
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "lv_name": "ceph_lv0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "lv_size": "7511998464",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "name": "ceph_lv0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "tags": {
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.cluster_name": "ceph",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.crush_device_class": "",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.encrypted": "0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.osd_id": "0",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.type": "block",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:                "ceph.vdo": "0"
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            },
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "type": "block",
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:            "vg_name": "ceph_vg0"
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:        }
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]:    ]
Nov 29 03:29:54 np0005539550 zealous_mendel[352947]: }
Nov 29 03:29:54 np0005539550 systemd[1]: libpod-e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed.scope: Deactivated successfully.
Nov 29 03:29:54 np0005539550 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000096.scope: Deactivated successfully.
Nov 29 03:29:54 np0005539550 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000096.scope: Consumed 8.230s CPU time.
Nov 29 03:29:54 np0005539550 systemd-machined[216673]: Machine qemu-82-instance-00000096 terminated.
Nov 29 03:29:54 np0005539550 podman[352996]: 2025-11-29 08:29:54.637726657 +0000 UTC m=+0.027585732 container died e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.733 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.738 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.747 257641 INFO nova.virt.libvirt.driver [-] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Instance destroyed successfully.#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.748 257641 DEBUG nova.objects.instance [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lazy-loading 'resources' on Instance uuid 95241eca-3cb6-49da-89d4-b172e76cb35c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.752 257641 DEBUG nova.compute.manager [req-a63738da-6a71-4b3b-a9ad-8ecbe8c2f052 req-3c230383-6555-4679-9200-3d813ff45c6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received event network-vif-unplugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.752 257641 DEBUG oslo_concurrency.lockutils [req-a63738da-6a71-4b3b-a9ad-8ecbe8c2f052 req-3c230383-6555-4679-9200-3d813ff45c6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.753 257641 DEBUG oslo_concurrency.lockutils [req-a63738da-6a71-4b3b-a9ad-8ecbe8c2f052 req-3c230383-6555-4679-9200-3d813ff45c6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.753 257641 DEBUG oslo_concurrency.lockutils [req-a63738da-6a71-4b3b-a9ad-8ecbe8c2f052 req-3c230383-6555-4679-9200-3d813ff45c6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.753 257641 DEBUG nova.compute.manager [req-a63738da-6a71-4b3b-a9ad-8ecbe8c2f052 req-3c230383-6555-4679-9200-3d813ff45c6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] No waiting events found dispatching network-vif-unplugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.754 257641 DEBUG nova.compute.manager [req-a63738da-6a71-4b3b-a9ad-8ecbe8c2f052 req-3c230383-6555-4679-9200-3d813ff45c6f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received event network-vif-unplugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:29:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d921275e03f6f4f50df603aa38ed51d89297524c85d2984210793bfa04afb33d-merged.mount: Deactivated successfully.
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.771 257641 DEBUG nova.virt.libvirt.vif [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:29:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-589487472',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-589487472',id=150,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='39da21e61faa40aa979d498a98e394c9',ramdisk_id='',reservation_id='r-jley4o2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-1666930967',owner_user_name='tempest-ServerTagsTestJSON-1666930967-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:47Z,user_data=None,user_id='795ef5ce2edb4c1987378419f19947a2',uuid=95241eca-3cb6-49da-89d4-b172e76cb35c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.772 257641 DEBUG nova.network.os_vif_util [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Converting VIF {"id": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "address": "fa:16:3e:d0:82:69", "network": {"id": "e0f1a1c4-7b51-4506-bd45-263dbe20ab0f", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-1996921875-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "39da21e61faa40aa979d498a98e394c9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63c2e7d2-fc", "ovs_interfaceid": "63c2e7d2-fc25-43ac-bc72-e8d14b736a69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.772 257641 DEBUG nova.network.os_vif_util [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d0:82:69,bridge_name='br-int',has_traffic_filtering=True,id=63c2e7d2-fc25-43ac-bc72-e8d14b736a69,network=Network(e0f1a1c4-7b51-4506-bd45-263dbe20ab0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c2e7d2-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.773 257641 DEBUG os_vif [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:82:69,bridge_name='br-int',has_traffic_filtering=True,id=63c2e7d2-fc25-43ac-bc72-e8d14b736a69,network=Network(e0f1a1c4-7b51-4506-bd45-263dbe20ab0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c2e7d2-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.775 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63c2e7d2-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.778 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.779 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.781 257641 INFO os_vif [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d0:82:69,bridge_name='br-int',has_traffic_filtering=True,id=63c2e7d2-fc25-43ac-bc72-e8d14b736a69,network=Network(e0f1a1c4-7b51-4506-bd45-263dbe20ab0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63c2e7d2-fc')#033[00m
Nov 29 03:29:54 np0005539550 neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f[352416]: [NOTICE]   (352420) : haproxy version is 2.8.14-c23fe91
Nov 29 03:29:54 np0005539550 neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f[352416]: [NOTICE]   (352420) : path to executable is /usr/sbin/haproxy
Nov 29 03:29:54 np0005539550 neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f[352416]: [WARNING]  (352420) : Exiting Master process...
Nov 29 03:29:54 np0005539550 neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f[352416]: [ALERT]    (352420) : Current worker (352422) exited with code 143 (Terminated)
Nov 29 03:29:54 np0005539550 neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f[352416]: [WARNING]  (352420) : All workers exited. Exiting... (0)
Nov 29 03:29:54 np0005539550 systemd[1]: libpod-08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6.scope: Deactivated successfully.
Nov 29 03:29:54 np0005539550 podman[353013]: 2025-11-29 08:29:54.893095892 +0000 UTC m=+0.254675649 container died 08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:29:54 np0005539550 nova_compute[257631]: 2025-11-29 08:29:54.935 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:29:55 np0005539550 nova_compute[257631]: 2025-11-29 08:29:55.006 257641 DEBUG nova.compute.manager [req-a614fc76-a0ab-4dc1-a2f1-821c78bd840b req-1983e876-6fae-4660-a2c0-baa3e192fffe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Received event network-vif-deleted-f455cc42-f497-49e9-84f6-0713ec25f786 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:29:55 np0005539550 podman[352996]: 2025-11-29 08:29:55.023934705 +0000 UTC m=+0.413793760 container remove e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendel, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:29:55 np0005539550 systemd[1]: libpod-conmon-e82eba672f5a744ac8da152967a512cedfb03f7a3f33ad4286952b73517949ed.scope: Deactivated successfully.
Nov 29 03:29:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6-userdata-shm.mount: Deactivated successfully.
Nov 29 03:29:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-95f31909aef509926deafa627518e6bc370ceb6d10570a2c4051f9a3f9e28bf0-merged.mount: Deactivated successfully.
Nov 29 03:29:55 np0005539550 podman[353013]: 2025-11-29 08:29:55.153192737 +0000 UTC m=+0.514772494 container cleanup 08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:29:55 np0005539550 systemd[1]: libpod-conmon-08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6.scope: Deactivated successfully.
Nov 29 03:29:55 np0005539550 podman[353105]: 2025-11-29 08:29:55.295730867 +0000 UTC m=+0.120651065 container remove 08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.301 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b1611c78-d440-4610-b8b8-f67e95f4a6a0]: (4, ('Sat Nov 29 08:29:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f (08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6)\n08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6\nSat Nov 29 08:29:55 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f (08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6)\n08cdeaf0f7cc168bcbbafc0603c5b6eb735fde2f4c1110bcdb4c3ea7b9d1bed6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.303 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[68e0ad6d-9255-4ad4-8ed2-b9abecd2ff5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.304 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0f1a1c4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:29:55 np0005539550 kernel: tape0f1a1c4-70: left promiscuous mode
Nov 29 03:29:55 np0005539550 nova_compute[257631]: 2025-11-29 08:29:55.306 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:55 np0005539550 nova_compute[257631]: 2025-11-29 08:29:55.308 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.310 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0d33e38b-000d-425b-85e1-a23b2644f23f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:55 np0005539550 nova_compute[257631]: 2025-11-29 08:29:55.324 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.324 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7183a854-05e5-4d4d-bd01-0f561b28609a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.325 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[800baa03-3292-48d7-bc0b-ade1665710b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.340 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[adb53e2a-d242-4542-84b5-d45692964957]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 803360, 'reachable_time': 26768, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353187, 'error': None, 'target': 'ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:55.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:55 np0005539550 systemd[1]: run-netns-ovnmeta\x2de0f1a1c4\x2d7b51\x2d4506\x2dbd45\x2d263dbe20ab0f.mount: Deactivated successfully.
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.343 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e0f1a1c4-7b51-4506-bd45-263dbe20ab0f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:29:55 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:55.344 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[c638e89b-904d-40e5-883a-c63211bd2884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:29:55 np0005539550 podman[353231]: 2025-11-29 08:29:55.624388793 +0000 UTC m=+0.023894828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:55 np0005539550 podman[353231]: 2025-11-29 08:29:55.82233072 +0000 UTC m=+0.221836725 container create 56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:29:55 np0005539550 systemd[1]: Started libpod-conmon-56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542.scope.
Nov 29 03:29:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:56 np0005539550 podman[353231]: 2025-11-29 08:29:56.018056221 +0000 UTC m=+0.417562276 container init 56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:29:56 np0005539550 podman[353231]: 2025-11-29 08:29:56.029145452 +0000 UTC m=+0.428651467 container start 56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:29:56 np0005539550 clever_hellman[353248]: 167 167
Nov 29 03:29:56 np0005539550 systemd[1]: libpod-56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542.scope: Deactivated successfully.
Nov 29 03:29:56 np0005539550 podman[353231]: 2025-11-29 08:29:56.058081177 +0000 UTC m=+0.457587222 container attach 56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:29:56 np0005539550 podman[353231]: 2025-11-29 08:29:56.058619141 +0000 UTC m=+0.458125186 container died 56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:29:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 511 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Nov 29 03:29:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:56.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-47d053f4649d86bb12c1c667c60ed752a138dea5eba4f5b7cdb5143e5bb0fff2-merged.mount: Deactivated successfully.
Nov 29 03:29:56 np0005539550 podman[353231]: 2025-11-29 08:29:56.813154841 +0000 UTC m=+1.212660846 container remove 56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hellman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:29:56 np0005539550 systemd[1]: libpod-conmon-56c99a531bdd09cfab0ebc6f3db5c02cf3a89c58138e088123ef0608086b7542.scope: Deactivated successfully.
Nov 29 03:29:56 np0005539550 nova_compute[257631]: 2025-11-29 08:29:56.868 257641 DEBUG nova.compute.manager [req-a4b5ffd4-89ba-4bde-90c0-b3db35f35574 req-a96ef1ff-77cb-4a5b-a313-a32928df106d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received event network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:29:56 np0005539550 nova_compute[257631]: 2025-11-29 08:29:56.869 257641 DEBUG oslo_concurrency.lockutils [req-a4b5ffd4-89ba-4bde-90c0-b3db35f35574 req-a96ef1ff-77cb-4a5b-a313-a32928df106d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:56 np0005539550 nova_compute[257631]: 2025-11-29 08:29:56.869 257641 DEBUG oslo_concurrency.lockutils [req-a4b5ffd4-89ba-4bde-90c0-b3db35f35574 req-a96ef1ff-77cb-4a5b-a313-a32928df106d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:56 np0005539550 nova_compute[257631]: 2025-11-29 08:29:56.870 257641 DEBUG oslo_concurrency.lockutils [req-a4b5ffd4-89ba-4bde-90c0-b3db35f35574 req-a96ef1ff-77cb-4a5b-a313-a32928df106d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:29:56 np0005539550 nova_compute[257631]: 2025-11-29 08:29:56.870 257641 DEBUG nova.compute.manager [req-a4b5ffd4-89ba-4bde-90c0-b3db35f35574 req-a96ef1ff-77cb-4a5b-a313-a32928df106d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] No waiting events found dispatching network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:29:56 np0005539550 nova_compute[257631]: 2025-11-29 08:29:56.870 257641 WARNING nova.compute.manager [req-a4b5ffd4-89ba-4bde-90c0-b3db35f35574 req-a96ef1ff-77cb-4a5b-a313-a32928df106d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received unexpected event network-vif-plugged-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 for instance with vm_state active and task_state deleting.
Nov 29 03:29:57 np0005539550 podman[353272]: 2025-11-29 08:29:56.959072437 +0000 UTC m=+0.022463182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:29:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:57.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:57 np0005539550 podman[353272]: 2025-11-29 08:29:57.400275242 +0000 UTC m=+0.463665977 container create 4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:57 np0005539550 systemd[1]: Started libpod-conmon-4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602.scope.
Nov 29 03:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:29:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9c7a4ff33ecfdb9a8ede7848e5827d0384e3d83f379188fbf8b65154d2ea58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9c7a4ff33ecfdb9a8ede7848e5827d0384e3d83f379188fbf8b65154d2ea58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9c7a4ff33ecfdb9a8ede7848e5827d0384e3d83f379188fbf8b65154d2ea58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a9c7a4ff33ecfdb9a8ede7848e5827d0384e3d83f379188fbf8b65154d2ea58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:29:58 np0005539550 podman[353272]: 2025-11-29 08:29:58.05585992 +0000 UTC m=+1.119250675 container init 4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:29:58 np0005539550 podman[353272]: 2025-11-29 08:29:58.063068503 +0000 UTC m=+1.126459228 container start 4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:29:58 np0005539550 podman[353272]: 2025-11-29 08:29:58.091635549 +0000 UTC m=+1.155026284 container attach 4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:29:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 511 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 169 op/s
Nov 29 03:29:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:29:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:29:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:29:58.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:58Z|00707|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.368 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.410 257641 INFO nova.virt.libvirt.driver [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Deleting instance files /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c_del
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.411 257641 INFO nova.virt.libvirt.driver [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Deletion of /var/lib/nova/instances/95241eca-3cb6-49da-89d4-b172e76cb35c_del complete
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.496 257641 INFO nova.compute.manager [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Took 4.19 seconds to destroy the instance on the hypervisor.
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.496 257641 DEBUG oslo.service.loopingcall [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.497 257641 DEBUG nova.compute.manager [-] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.497 257641 DEBUG nova.network.neutron [-] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.572 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:29:58Z|00708|binding|INFO|Releasing lport f2118d1b-0f35-4211-8508-64237a2d816e from this chassis (sb_readonly=0)
Nov 29 03:29:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:29:58 np0005539550 nova_compute[257631]: 2025-11-29 08:29:58.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]: {
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]:        "osd_id": 0,
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]:        "type": "bluestore"
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]:    }
Nov 29 03:29:58 np0005539550 blissful_rhodes[353289]: }
Nov 29 03:29:58 np0005539550 systemd[1]: libpod-4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602.scope: Deactivated successfully.
Nov 29 03:29:58 np0005539550 podman[353272]: 2025-11-29 08:29:58.948290424 +0000 UTC m=+2.011681149 container died 4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:29:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9a9c7a4ff33ecfdb9a8ede7848e5827d0384e3d83f379188fbf8b65154d2ea58-merged.mount: Deactivated successfully.
Nov 29 03:29:59 np0005539550 podman[353272]: 2025-11-29 08:29:59.21789441 +0000 UTC m=+2.281285165 container remove 4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_rhodes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Nov 29 03:29:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:29:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:29:59 np0005539550 systemd[1]: libpod-conmon-4800207b3ab31b5ac8561db945b5f59b9a9335fbc4ea4a5941fa67c6d39fa602.scope: Deactivated successfully.
Nov 29 03:29:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b76b72a1-5864-4542-b6bc-1eb44f904962 does not exist
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 802d9a2d-d2ab-45d2-9349-47144bf3d5c9 does not exist
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f453faf-fceb-4634-87c9-1e988cf6dbc7 does not exist
Nov 29 03:29:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:29:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:29:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:29:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:29:59
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'default.rgw.log', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'vms']
Nov 29 03:29:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:29:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:59.497 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.498 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:29:59.499 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.777 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.796 257641 DEBUG nova.network.neutron [-] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.828 257641 INFO nova.compute.manager [-] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Took 1.33 seconds to deallocate network for instance.
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.883 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.884 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.890 257641 DEBUG nova.compute.manager [req-6f969c99-e0c2-4448-936b-52c2522d4840 req-76d5eb4b-4030-4e75-b99d-1df86713df0f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Received event network-vif-deleted-63c2e7d2-fc25-43ac-bc72-e8d14b736a69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:29:59 np0005539550 nova_compute[257631]: 2025-11-29 08:29:59.949 257641 DEBUG oslo_concurrency.processutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:30:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:30:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:30:00 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 03:30:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Nov 29 03:30:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:00.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081051349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:00 np0005539550 nova_compute[257631]: 2025-11-29 08:30:00.458 257641 DEBUG oslo_concurrency.processutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:00 np0005539550 nova_compute[257631]: 2025-11-29 08:30:00.465 257641 DEBUG nova.compute.provider_tree [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:00 np0005539550 nova_compute[257631]: 2025-11-29 08:30:00.490 257641 DEBUG nova.scheduler.client.report [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:00 np0005539550 nova_compute[257631]: 2025-11-29 08:30:00.521 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:00 np0005539550 nova_compute[257631]: 2025-11-29 08:30:00.546 257641 INFO nova.scheduler.client.report [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Deleted allocations for instance 95241eca-3cb6-49da-89d4-b172e76cb35c
Nov 29 03:30:00 np0005539550 nova_compute[257631]: 2025-11-29 08:30:00.641 257641 DEBUG oslo_concurrency.lockutils [None req-b45a7cd3-9569-4e2a-a61e-a91baf7fcb2a 795ef5ce2edb4c1987378419f19947a2 39da21e61faa40aa979d498a98e394c9 - - default default] Lock "95241eca-3cb6-49da-89d4-b172e76cb35c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:01.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 465 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.4 MiB/s wr, 183 op/s
Nov 29 03:30:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:02.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:02 np0005539550 nova_compute[257631]: 2025-11-29 08:30:02.329 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404987.32775, d4258b43-e73e-47f3-b1d1-f169bcaf4534 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:02 np0005539550 nova_compute[257631]: 2025-11-29 08:30:02.329 257641 INFO nova.compute.manager [-] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] VM Stopped (Lifecycle Event)
Nov 29 03:30:02 np0005539550 nova_compute[257631]: 2025-11-29 08:30:02.679 257641 DEBUG nova.compute.manager [None req-94ee239e-67ec-4179-bd88-9f4efe2ff60a - - - - - -] [instance: d4258b43-e73e-47f3-b1d1-f169bcaf4534] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:03.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:03 np0005539550 nova_compute[257631]: 2025-11-29 08:30:03.576 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 197 op/s
Nov 29 03:30:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:04.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:04 np0005539550 nova_compute[257631]: 2025-11-29 08:30:04.780 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.7 MiB/s wr, 242 op/s
Nov 29 03:30:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:06.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:07.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.447 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.447 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.470 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.543 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.543 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.555 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.555 257641 INFO nova.compute.claims [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.683 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:07 np0005539550 nova_compute[257631]: 2025-11-29 08:30:07.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 200 op/s
Nov 29 03:30:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/263760399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.135 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.142 257641 DEBUG nova.compute.provider_tree [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.160 257641 DEBUG nova.scheduler.client.report [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.185 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.187 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.241 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.242 257641 DEBUG nova.network.neutron [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.274 257641 INFO nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.293 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:30:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:30:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:08.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.388 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.390 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.390 257641 INFO nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Creating image(s)
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.420 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.445 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.469 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.472 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.540 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.542 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.542 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.543 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.569 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.572 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.597 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.854 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.914 257641 DEBUG nova.policy [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0741d46905e94415a372bd62751dff66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5970d12b2c42419e889cd48de28c4b86', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:30:08 np0005539550 nova_compute[257631]: 2025-11-29 08:30:08.920 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] resizing rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.009 257641 DEBUG nova.objects.instance [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'migration_context' on Instance uuid 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.022 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.023 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Ensure instance console log exists: /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.024 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.024 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.024 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:30:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:30:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:30:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:30:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:30:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:09.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:09 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:09.501 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.687 257641 DEBUG nova.network.neutron [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Successfully created port: 87b89ebb-67dd-420c-b59a-66707953da73 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.745 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764404994.7448313, 95241eca-3cb6-49da-89d4-b172e76cb35c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.746 257641 INFO nova.compute.manager [-] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] VM Stopped (Lifecycle Event)
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.783 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:09 np0005539550 nova_compute[257631]: 2025-11-29 08:30:09.788 257641 DEBUG nova.compute.manager [None req-4be067a6-0e06-4325-98e5-47e5221c801e - - - - - -] [instance: 95241eca-3cb6-49da-89d4-b172e76cb35c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 200 op/s
Nov 29 03:30:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:10.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:11 np0005539550 podman[353642]: 2025-11-29 08:30:11.33803627 +0000 UTC m=+0.065753710 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:11 np0005539550 podman[353641]: 2025-11-29 08:30:11.348078375 +0000 UTC m=+0.077370765 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
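These two podman lines are scheduled container healthchecks for ovn_metadata_agent and multipathd; both report health_status=healthy with a failing streak of 0, using the /openstack/healthcheck test bind-mounted from /var/lib/openstack/healthchecks. The same state can be read back with podman inspect; a sketch assuming podman on PATH and these container names:

    import subprocess

    def container_health(name):
        # podman inspect exposes the same health state the periodic
        # checks above record; returns e.g. "healthy".
        out = subprocess.run(
            ['podman', 'inspect', '--format',
             '{{.State.Health.Status}}', name],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    for name in ('ovn_metadata_agent', 'multipathd'):
        print(name, container_health(name))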
Nov 29 03:30:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:11.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:11 np0005539550 nova_compute[257631]: 2025-11-29 08:30:11.809 257641 DEBUG nova.network.neutron [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Successfully updated port: 87b89ebb-67dd-420c-b59a-66707953da73 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:30:11 np0005539550 nova_compute[257631]: 2025-11-29 08:30:11.840 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "refresh_cache-122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:11 np0005539550 nova_compute[257631]: 2025-11-29 08:30:11.840 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquired lock "refresh_cache-122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:11 np0005539550 nova_compute[257631]: 2025-11-29 08:30:11.841 257641 DEBUG nova.network.neutron [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:30:11 np0005539550 nova_compute[257631]: 2025-11-29 08:30:11.941 257641 DEBUG nova.compute.manager [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received event network-changed-87b89ebb-67dd-420c-b59a-66707953da73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:11 np0005539550 nova_compute[257631]: 2025-11-29 08:30:11.942 257641 DEBUG nova.compute.manager [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Refreshing instance network info cache due to event network-changed-87b89ebb-67dd-420c-b59a-66707953da73. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:11 np0005539550 nova_compute[257631]: 2025-11-29 08:30:11.942 257641 DEBUG oslo_concurrency.lockutils [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 759 KiB/s wr, 191 op/s
Nov 29 03:30:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:12.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:12 np0005539550 nova_compute[257631]: 2025-11-29 08:30:12.488 257641 DEBUG nova.network.neutron [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:30:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:13.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.580 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.750 257641 DEBUG nova.network.neutron [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Updating instance_info_cache with network_info: [{"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.770 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Releasing lock "refresh_cache-122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.771 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Instance network_info: |[{"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.771 257641 DEBUG oslo_concurrency.lockutils [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.771 257641 DEBUG nova.network.neutron [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Refreshing network info cache for port 87b89ebb-67dd-420c-b59a-66707953da73 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
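The cache update above stores the port's network_info as plain JSON, and the fields nova reuses when building the guest (device name, MAC, fixed IP, MTU) can be extracted directly. A parsing sketch over an abbreviated copy of that payload; the real entry carries more keys (routes, profile, bound_drivers, and so on):

    import json

    # Abbreviated copy of the cached entry logged above.
    raw = '''[{"id": "87b89ebb-67dd-420c-b59a-66707953da73",
               "address": "fa:16:3e:3e:13:86",
               "devname": "tap87b89ebb-67",
               "network": {"meta": {"mtu": 1442},
                           "subnets": [{"cidr": "10.100.0.0/28",
                                        "ips": [{"address": "10.100.0.7"}]}]}}]'''

    for vif in json.loads(raw):
        ips = [ip['address']
               for subnet in vif['network']['subnets']
               for ip in subnet['ips']]
        print(vif['devname'], vif['address'],
              'mtu', vif['network']['meta']['mtu'], ips)
        # -> tap87b89ebb-67 fa:16:3e:3e:13:86 mtu 1442 ['10.100.0.7']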
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.773 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Start _get_guest_xml network_info=[{"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.778 257641 WARNING nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.788 257641 DEBUG nova.virt.libvirt.host [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.788 257641 DEBUG nova.virt.libvirt.host [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.795 257641 DEBUG nova.virt.libvirt.host [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.796 257641 DEBUG nova.virt.libvirt.host [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
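The four host.py probes show the driver looking for a CPU controller first under cgroups v1 (missing) and then under the unified v2 hierarchy (found). On a v2-only host like this one, that check amounts to reading the root controllers file; a simplified sketch of the idea, not nova's exact probe:

    # On a cgroups-v2 host the available controllers sit in a single
    # root file; "cpu" being listed is what "CPU controller found on
    # host" reflects. A v1 host would instead expose a cpu subtree
    # under /sys/fs/cgroup.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    print('cpu' in controllers)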
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.797 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.798 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.798 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.799 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.799 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.799 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.800 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.800 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.800 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.801 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.801 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.803 257641 DEBUG nova.virt.hardware [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
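The topology walk above ends with a single candidate because, with no flavor or image constraints, the limits default to 65536 per dimension and the only factorization of 1 vCPU is 1 socket x 1 core x 1 thread. The enumeration is essentially a constrained factorization; a simplified sketch of that search, not nova's actual code:

    # Enumerate (sockets, cores, threads) triples whose product is
    # the vCPU count, within per-dimension limits -- the search the
    # "Build topologies ... 1:1:1" lines describe, simplified.
    def possible_topologies(vcpus, max_sockets=65536,
                            max_cores=65536, max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]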
Nov 29 03:30:13 np0005539550 nova_compute[257631]: 2025-11-29 08:30:13.806 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 466 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Nov 29 03:30:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512627555' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.246 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.280 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.284 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328248557' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.732 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
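Both "ceph mon dump" runs above (0.440s and 0.448s) are how the driver discovers the monitor addresses that end up in the <host> elements of the RBD disk sources in the domain XML below. The same query can be reproduced and parsed; a sketch assuming the client.openstack keyring and the /etc/ceph/ceph.conf from this log are present:

    import json
    import subprocess

    # Same command the driver runs above; requires the keyring
    # referenced by --id openstack to exist on this host.
    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True)

    for mon in json.loads(out.stdout)['mons']:
        print(mon['name'], mon.get('addr'))
    # The monitor addresses printed here are what appear as
    # <host name=.../> entries in the generated disk XML.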
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.736 257641 DEBUG nova.virt.libvirt.vif [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-901142645',display_name='tempest-ServersTestJSON-server-901142645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-901142645',id=153,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-2wpwp7ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:08Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=122c0a9b-bb66-40c6-ad51-ca11eb95e9a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.737 257641 DEBUG nova.network.os_vif_util [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.739 257641 DEBUG nova.network.os_vif_util [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:13:86,bridge_name='br-int',has_traffic_filtering=True,id=87b89ebb-67dd-420c-b59a-66707953da73,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b89ebb-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.741 257641 DEBUG nova.objects.instance [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.756 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <uuid>122c0a9b-bb66-40c6-ad51-ca11eb95e9a6</uuid>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <name>instance-00000099</name>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersTestJSON-server-901142645</nova:name>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:30:13</nova:creationTime>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:user uuid="0741d46905e94415a372bd62751dff66">tempest-ServersTestJSON-1509574488-project-member</nova:user>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:project uuid="5970d12b2c42419e889cd48de28c4b86">tempest-ServersTestJSON-1509574488</nova:project>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <nova:port uuid="87b89ebb-67dd-420c-b59a-66707953da73">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <entry name="serial">122c0a9b-bb66-40c6-ad51-ca11eb95e9a6</entry>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <entry name="uuid">122c0a9b-bb66-40c6-ad51-ca11eb95e9a6</entry>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk.config">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:3e:13:86"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <target dev="tap87b89ebb-67"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/console.log" append="off"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:30:14 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:30:14 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:30:14 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:30:14 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
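The domain XML emitted by _get_guest_xml is ordinary libvirt XML, so the pieces assembled earlier in the log (the two RBD sources, the Nehalem CPU model, the tap interface) can be checked mechanically. A sketch with xml.etree.ElementTree, assuming the XML above was saved to a hypothetical domain.xml; querying the nova: metadata block would additionally need its namespace mapping:

    import xml.etree.ElementTree as ET

    # Hypothetical path: assumes the XML logged above was saved out.
    root = ET.parse('domain.xml').getroot()

    print(root.findtext('name'))       # instance-00000099
    print(root.findtext('cpu/model'))  # Nehalem
    for disk in root.findall('devices/disk'):
        source = disk.find('source')
        print(disk.get('device'),
              source.get('protocol'), source.get('name'))
    # disk  rbd vms/122c0a9b-..._disk
    # cdrom rbd vms/122c0a9b-..._disk.config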
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.757 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Preparing to wait for external event network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.758 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.758 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.758 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.759 257641 DEBUG nova.virt.libvirt.vif [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-901142645',display_name='tempest-ServersTestJSON-server-901142645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-901142645',id=153,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-2wpwp7ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:08Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=122c0a9b-bb66-40c6-ad51-ca11eb95e9a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.759 257641 DEBUG nova.network.os_vif_util [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.760 257641 DEBUG nova.network.os_vif_util [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:13:86,bridge_name='br-int',has_traffic_filtering=True,id=87b89ebb-67dd-420c-b59a-66707953da73,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b89ebb-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.760 257641 DEBUG os_vif [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:13:86,bridge_name='br-int',has_traffic_filtering=True,id=87b89ebb-67dd-420c-b59a-66707953da73,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b89ebb-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.761 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.761 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.762 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.767 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87b89ebb-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.768 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87b89ebb-67, col_values=(('external_ids', {'iface-id': '87b89ebb-67dd-420c-b59a-66707953da73', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3e:13:86', 'vm-uuid': '122c0a9b-bb66-40c6-ad51-ca11eb95e9a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
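The AddPortCommand/DbSetCommand pair is the entire plug operation at the OVS level: attach the tap to br-int, then stamp the Neutron port UUID into Interface.external_ids:iface-id, which is the key ovn-controller later matches against Port_Binding.logical_port. A hedged equivalent issued directly with ovsdbapp; the ovsdb-server socket path is an assumption for this host:

    # Sketch only: replay the two logged commands with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')  # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap87b89ebb-67', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap87b89ebb-67',
            ('external_ids', {
                'iface-id': '87b89ebb-67dd-420c-b59a-66707953da73',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:3e:13:86',
                'vm-uuid': '122c0a9b-bb66-40c6-ad51-ca11eb95e9a6'})))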
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:14 np0005539550 NetworkManager[49039]: <info>  [1764405014.7943] manager: (tap87b89ebb-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.795 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.800 257641 INFO os_vif [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:13:86,bridge_name='br-int',has_traffic_filtering=True,id=87b89ebb-67dd-420c-b59a-66707953da73,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b89ebb-67')#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.862 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.863 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.863 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No VIF found with MAC fa:16:3e:3e:13:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.863 257641 INFO nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Using config drive#033[00m
Nov 29 03:30:14 np0005539550 nova_compute[257631]: 2025-11-29 08:30:14.894 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.365 257641 INFO nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Creating config drive at /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config#033[00m
Nov 29 03:30:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:15.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.371 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz53wcu5j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.399 257641 DEBUG nova.network.neutron [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Updated VIF entry in instance network info cache for port 87b89ebb-67dd-420c-b59a-66707953da73. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.400 257641 DEBUG nova.network.neutron [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Updating instance_info_cache with network_info: [{"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.429 257641 DEBUG oslo_concurrency.lockutils [req-6eac9a71-81ae-43ef-835e-242075657cd2 req-49486ac5-21a4-46dd-aeff-11d8279cedb3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.505 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz53wcu5j" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
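The config drive is a throwaway ISO 9660 image with volume label config-2 (the label cloud-init probes for), built by shelling out to mkisofs over a temp directory Nova has populated with the openstack/... metadata tree. The same call through oslo.concurrency, argv copied from the log (the /tmp path is whatever mkdtemp returned on this run and exists only during the spawn):

    # Sketch only: the logged mkisofs invocation via processutils.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpz53wcu5j')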
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.538 257641 DEBUG nova.storage.rbd_utils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.543 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.728 257641 DEBUG oslo_concurrency.processutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.729 257641 INFO nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Deleting local config drive /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config because it was imported into RBD.#033[00m
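With RBD-backed instances the ISO does not stay on local disk: it is imported into the vms pool as <uuid>_disk.config and the local file deleted, so libvirt attaches it over librbd like the root disk. A hedged equivalent of the logged 'rbd import' using the python-rbd bindings (client name and conffile taken from the log):

    # Sketch only: import the config drive into Ceph with python-rbd.
    import rados
    import rbd

    src = '/var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6/disk.config'
    name = '122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_disk.config'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            data = open(src, 'rb').read()              # config drives are tiny
            rbd.RBD().create(ioctx, name, len(data))   # format 2 is the default
            with rbd.Image(ioctx, name) as image:
                image.write(data, 0)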
Nov 29 03:30:15 np0005539550 kernel: tap87b89ebb-67: entered promiscuous mode
Nov 29 03:30:15 np0005539550 NetworkManager[49039]: <info>  [1764405015.7843] manager: (tap87b89ebb-67): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Nov 29 03:30:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:15Z|00709|binding|INFO|Claiming lport 87b89ebb-67dd-420c-b59a-66707953da73 for this chassis.
Nov 29 03:30:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:15Z|00710|binding|INFO|87b89ebb-67dd-420c-b59a-66707953da73: Claiming fa:16:3e:3e:13:86 10.100.0.7
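Here ovn-controller has noticed the new Interface row, matched its external_ids:iface-id against the southbound Port_Binding, and claimed the lport plus its MAC/IP for this chassis. A hedged spot check from the same node (assumes ovn-sbctl is installed and pointed at the local SB connection):

    # Sketch only: verify the claim in the OVN southbound DB.
    import subprocess

    out = subprocess.check_output(
        ['ovn-sbctl', '--bare', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=87b89ebb-67dd-420c-b59a-66707953da73'],
        text=True)
    print(out)  # chassis UUID plus 'up' once 'Setting lport ... up' lands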
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.786 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.796 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:13:86 10.100.0.7'], port_security=['fa:16:3e:3e:13:86 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '122c0a9b-bb66-40c6-ad51-ca11eb95e9a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=87b89ebb-67dd-420c-b59a-66707953da73) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.797 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 87b89ebb-67dd-420c-b59a-66707953da73 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc bound to our chassis#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.798 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.809 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[14b3b119-9f93-4ea9-95b3-96d2ed82a5c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.810 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14ea2b48-91 in ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.812 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14ea2b48-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.813 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6ffb78-cbe5-4463-b6f0-13e75ca3327e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 systemd-udevd[353813]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.813 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7cea44b3-6ba0-4b67-acf7-472cb1f0dc22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 systemd-machined[216673]: New machine qemu-83-instance-00000099.
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.827 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[dff7dbb9-b043-432e-8388-81c9f1c1b5af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 NetworkManager[49039]: <info>  [1764405015.8287] device (tap87b89ebb-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:15 np0005539550 NetworkManager[49039]: <info>  [1764405015.8299] device (tap87b89ebb-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.854 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c99b035d-6cc6-42bc-9607-8be409e6a3cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 systemd[1]: Started Virtual Machine qemu-83-instance-00000099.
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.861 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:15Z|00711|binding|INFO|Setting lport 87b89ebb-67dd-420c-b59a-66707953da73 ovn-installed in OVS
Nov 29 03:30:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:15Z|00712|binding|INFO|Setting lport 87b89ebb-67dd-420c-b59a-66707953da73 up in Southbound
Nov 29 03:30:15 np0005539550 nova_compute[257631]: 2025-11-29 08:30:15.871 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.887 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b33c8ac9-2eee-4253-8f96-c70d8d2cbed6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.893 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c24e459a-e788-4ba3-9715-870afe8c095d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 NetworkManager[49039]: <info>  [1764405015.8945] manager: (tap14ea2b48-90): new Veth device (/org/freedesktop/NetworkManager/Devices/313)
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.926 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e8047dc9-ad13-4983-90ec-5bf61870ccd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.929 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[de8fd121-76f6-4df8-a784-954827181e6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 NetworkManager[49039]: <info>  [1764405015.9543] device (tap14ea2b48-90): carrier: link connected
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.960 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[711bbc4d-01e4-47c1-90fd-025197f75dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.979 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8e2ebe-b631-4c6a-a7de-8d07b6e0f5a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806361, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353846, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:15.997 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dfac760b-9548-4e71-a097-ebb1c97a9957]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:168b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806361, 'tstamp': 806361}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353847, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.015 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a002d0e5-53ad-4b24-af4a-678f018c6bf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806361, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353848, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
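The privsep replies above are raw pyroute2 netlink dumps: they record the metadata agent creating the ovnmeta-<network> namespace and a veth pair, tap14ea2b48-90 on the host side and tap14ea2b48-91 (MAC fa:16:3e:f8:16:8b) inside the namespace. Roughly the same provisioning done directly with pyroute2, names copied from the log:

    # Sketch only: namespace plus veth pair, as the agent does via privsep.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc'
    netns.create(ns)

    ipr = IPRoute()
    ipr.link('add', ifname='tap14ea2b48-90', kind='veth', peer='tap14ea2b48-91')
    peer = ipr.link_lookup(ifname='tap14ea2b48-91')[0]
    ipr.link('set', index=peer, net_ns_fd=ns)    # inner end into the namespace
    host = ipr.link_lookup(ifname='tap14ea2b48-90')[0]
    ipr.link('set', index=host, state='up')
    ipr.close()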
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.048 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1619c5-f587-41c0-9627-9e830c2998ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.3 MiB/s wr, 169 op/s
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.102 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b7cf6a42-03e1-4ec0-9de8-120a9c321d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.103 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.104 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.104 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.106 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:16 np0005539550 NetworkManager[49039]: <info>  [1764405016.1072] manager: (tap14ea2b48-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/314)
Nov 29 03:30:16 np0005539550 kernel: tap14ea2b48-90: entered promiscuous mode
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.112 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.115 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.116 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:16Z|00713|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.120 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.126 257641 DEBUG nova.compute.manager [req-17869ab0-6226-46a5-ac7e-716cab3a9efe req-69e0ea3f-147a-446d-88f2-3ef9887c6657 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received event network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.126 257641 DEBUG oslo_concurrency.lockutils [req-17869ab0-6226-46a5-ac7e-716cab3a9efe req-69e0ea3f-147a-446d-88f2-3ef9887c6657 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.127 257641 DEBUG oslo_concurrency.lockutils [req-17869ab0-6226-46a5-ac7e-716cab3a9efe req-69e0ea3f-147a-446d-88f2-3ef9887c6657 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.127 257641 DEBUG oslo_concurrency.lockutils [req-17869ab0-6226-46a5-ac7e-716cab3a9efe req-69e0ea3f-147a-446d-88f2-3ef9887c6657 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.127 257641 DEBUG nova.compute.manager [req-17869ab0-6226-46a5-ac7e-716cab3a9efe req-69e0ea3f-147a-446d-88f2-3ef9887c6657 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Processing event network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.132 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.131 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9d833d86-57dd-4559-b400-4f61901bb98e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.133 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:30:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:16.135 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'env', 'PROCESS_TAG=haproxy-14ea2b48-9984-443b-82fc-568ae98723fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14ea2b48-9984-443b-82fc-568ae98723fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:30:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:16.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:16 np0005539550 podman[353880]: 2025-11-29 08:30:16.516549639 +0000 UTC m=+0.072441661 container create 5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:30:16 np0005539550 podman[353880]: 2025-11-29 08:30:16.47250169 +0000 UTC m=+0.028393732 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:30:16 np0005539550 systemd[1]: Started libpod-conmon-5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418.scope.
Nov 29 03:30:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:30:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a08a9c6887ab2076e60709315e2ab7263636b61d5a95336e711b9b70ba23bdd6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:30:16 np0005539550 podman[353880]: 2025-11-29 08:30:16.621693189 +0000 UTC m=+0.177585211 container init 5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:30:16 np0005539550 podman[353880]: 2025-11-29 08:30:16.627183018 +0000 UTC m=+0.183075040 container start 5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:30:16 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[353896]: [NOTICE]   (353901) : New worker (353918) forked
Nov 29 03:30:16 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[353896]: [NOTICE]   (353901) : Loading success.
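The generated config binds 169.254.169.254:80 inside the namespace, forwards to the metadata agent's UNIX socket at /var/lib/neutron/metadata_proxy, and tags each request with X-OVN-Network-ID so the agent can resolve the caller's network; the agent then launches haproxy inside a podman container, and the NOTICE lines above confirm the worker forked. A hedged smoke test from the host (root required; the agent resolves instances by source IP, so expect real metadata only for an address it has bound):

    # Sketch only: poke the proxy the way a guest would.
    import subprocess

    ns = 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc'
    print(subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-s',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True).stdout)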
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.836 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.838 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405016.8350582, 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.838 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.844 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.849 257641 INFO nova.virt.libvirt.driver [-] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Instance spawned successfully.#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.850 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.856 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.863 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.870 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.870 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.871 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.871 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.872 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.872 257641 DEBUG nova.virt.libvirt.driver [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.878 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.879 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405016.8368878, 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.879 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.901 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.906 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405016.8438249, 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.907 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.920 257641 INFO nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Took 8.53 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.921 257641 DEBUG nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.931 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.936 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.959 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:16 np0005539550 nova_compute[257631]: 2025-11-29 08:30:16.978 257641 INFO nova.compute.manager [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Took 9.46 seconds to build instance.#033[00m
Nov 29 03:30:17 np0005539550 nova_compute[257631]: 2025-11-29 08:30:17.002 257641 DEBUG oslo_concurrency.lockutils [None req-0497fdac-50ae-4c77-af9a-515f31323c3a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
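End to end, the build held the instance lock for 9.555 s, of which 8.53 s was the libvirt spawn; the roughly 0.9 s remainder is pre-spawn work (resource claim, network allocation, image prep). Those figures can be scraped straight from a saved excerpt like this one (the file name below is hypothetical):

    # Sketch only: pull the spawn/build timings out of the journal text.
    import re

    pat = re.compile(r'Took (\d+\.\d+) seconds to (spawn the instance|build instance)')
    with open('np0005539550-journal.log') as fh:  # hypothetical dump of this log
        for line in fh:
            m = pat.search(line)
            if m:
                print(f'{m.group(2)}: {m.group(1)}s')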
Nov 29 03:30:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:17.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 532 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 227 KiB/s rd, 5.3 MiB/s wr, 96 op/s
Nov 29 03:30:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:18.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:18 np0005539550 nova_compute[257631]: 2025-11-29 08:30:18.527 257641 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received event network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:18 np0005539550 nova_compute[257631]: 2025-11-29 08:30:18.527 257641 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:18 np0005539550 nova_compute[257631]: 2025-11-29 08:30:18.528 257641 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:18 np0005539550 nova_compute[257631]: 2025-11-29 08:30:18.528 257641 DEBUG oslo_concurrency.lockutils [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:18 np0005539550 nova_compute[257631]: 2025-11-29 08:30:18.528 257641 DEBUG nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] No waiting events found dispatching network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:18 np0005539550 nova_compute[257631]: 2025-11-29 08:30:18.529 257641 WARNING nova.compute.manager [req-ba0bee5c-5ef8-4d4a-b040-3fb8e56d2334 req-37706582-35c6-417f-9c9f-9385b3b12d86 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received unexpected event network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:30:18 np0005539550 nova_compute[257631]: 2025-11-29 08:30:18.583 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.599047) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018599113, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1656, "num_deletes": 262, "total_data_size": 2671329, "memory_usage": 2725184, "flush_reason": "Manual Compaction"}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018616196, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 2627036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50867, "largest_seqno": 52522, "table_properties": {"data_size": 2619315, "index_size": 4535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16959, "raw_average_key_size": 20, "raw_value_size": 2603552, "raw_average_value_size": 3167, "num_data_blocks": 197, "num_entries": 822, "num_filter_entries": 822, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764404880, "oldest_key_time": 1764404880, "file_creation_time": 1764405018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 17201 microseconds, and 6145 cpu microseconds.
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.616245) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 2627036 bytes OK
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.616266) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.618484) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.618540) EVENT_LOG_v1 {"time_micros": 1764405018618527, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.618570) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 2664131, prev total WAL file size 2664131, number of live WAL files 2.
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.619774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373539' seq:72057594037927935, type:22 .. '6C6F676D0032303131' seq:0, type:0; will stop at (end)
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(2565KB)], [110(12MB)]
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018619838, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 15753997, "oldest_snapshot_seqno": -1}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 8983 keys, 15596222 bytes, temperature: kUnknown
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018746363, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 15596222, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15533108, "index_size": 39578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22469, "raw_key_size": 232668, "raw_average_key_size": 25, "raw_value_size": 15370000, "raw_average_value_size": 1711, "num_data_blocks": 1559, "num_entries": 8983, "num_filter_entries": 8983, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405018, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.747415) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 15596222 bytes
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.750064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.7 rd, 122.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 12.5 +0.0 blob) out(14.9 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 9524, records dropped: 541 output_compression: NoCompression
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.750106) EVENT_LOG_v1 {"time_micros": 1764405018750090, "job": 66, "event": "compaction_finished", "compaction_time_micros": 127345, "compaction_time_cpu_micros": 37757, "output_level": 6, "num_output_files": 1, "total_output_size": 15596222, "num_input_records": 9524, "num_output_records": 8983, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018751252, "job": 66, "event": "table_file_deletion", "file_number": 112}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405018754535, "job": 66, "event": "table_file_deletion", "file_number": 110}
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.619656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.754768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.754775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.754776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.754778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:18 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:18.754780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
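Jobs 65 and 66 above form one mon-store maintenance cycle: the memtable is flushed to L0 table #112, then a manual compaction merges it with the single L6 file #110 into #113 and deletes both inputs (the same cycle repeats at 03:30:23 as jobs 67/68). The amplification figures in the "compacted to" line can be reproduced from the logged byte counts: writing ~14.9 MB to merge a ~2.5 MB L0 file gives write-amplify ≈ 5.9, and counting all bytes read and written against that same 2.5 MB gives read-write-amplify ≈ 11.9. A worked check using the numbers from the EVENT_LOG_v1 lines above:

```python
# Byte counts for JOB 66, taken from the log lines above.
l0_in = 2627036                 # table #112, the freshly flushed L0 file
l6_in = 15753997 - l0_in        # input_data_size minus the L0 file -> old L6 file #110
out   = 15596222                # table #113 written back to L6

write_amp = out / l0_in                    # ~5.94, logged as write-amplify(5.9)
rw_amp    = (l0_in + l6_in + out) / l0_in  # ~11.93, logged as read-write-amplify(11.9)
print(f"write-amplify={write_amp:.1f} read-write-amplify={rw_amp:.1f}")
```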
Nov 29 03:30:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:18.962 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:18.963 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:18.963 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
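The radosgw triplets repeating through this section are anonymous "HEAD /" probes arriving about once per second, alternating between 192.168.122.100 and .102 — a pattern consistent with load-balancer health checks rather than client traffic. The beast access line carries client IP, user, timestamp, request, status, byte count and latency. A small parser for it (the regex is fitted to this output, not an official format specification):

```python
import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:30:19.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST_RE.search(line)
print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))
# -> 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000
```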
Nov 29 03:30:19 np0005539550 nova_compute[257631]: 2025-11-29 08:30:19.794 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 161 op/s
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008495530313605063 of space, bias 1.0, pg target 2.548659094081519 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6441015843104411 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1565338992415783 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:30:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
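The pg_autoscaler pass above computes, per pool, an ideal PG count of roughly usage_fraction × bias × a PG budget for the root: dividing the logged "pg target" values by usage × bias gives ≈297-300 on this cluster, i.e. on the order of mon_target_pg_per_osd × OSD count. The ideal value is then rounded to a power of two and compared against the current pg_num (subject to per-pool minimums and a change threshold), and in this pass every pool keeps its current value. A simplified sketch of the visible part of that math — the budget constant and the rounding are assumptions fitted to this log, not the full upstream module:

```python
import math

def ideal_pg_target(usage_fraction, bias, pg_budget=300):
    # pg_budget is an assumption (~mon_target_pg_per_osd x OSD count);
    # dividing the logged targets by usage * bias gives ~297-300 here.
    return usage_fraction * bias * pg_budget

def round_pow2(n):
    # Power-of-two rounding applied before comparing against current pg_num.
    return 1 if n <= 1 else 2 ** math.ceil(math.log2(n))

t = ideal_pg_target(0.008495530313605063, 1.0)   # pool 'vms' from the log
print(round(t, 6), round_pow2(t))  # 2.548659 -> 4; the module then clamps
                                   # against pool minimums / current pg_num,
                                   # which is why the log keeps 32
```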
Nov 29 03:30:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:20.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.240 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.242 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.258 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.351 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.352 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.360 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.361 257641 INFO nova.compute.claims [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:30:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:21.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:21 np0005539550 nova_compute[257631]: 2025-11-29 08:30:21.512 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1364597513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.011 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
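Before building the disk, nova's RBD backend shells out to `ceph df --format=json` to learn pool capacity — the mon logs the corresponding `df` mon_command from client.openstack two lines up. The same probe can be reproduced directly; the key names below follow the common `ceph df -f json` schema and should be verified against the local Ceph release:

```python
import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
    """Run the same capacity probe nova logs above, returning parsed JSON."""
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf])
    return json.loads(out)

df = ceph_df()
for pool in df.get("pools", []):
    stats = pool.get("stats", {})
    print(pool.get("name"), "max_avail:", stats.get("max_avail"))
```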
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.018 257641 DEBUG nova.compute.provider_tree [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.042 257641 DEBUG nova.scheduler.client.report [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
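The inventory dict in the line above is what the resource tracker reports to Placement, and it encodes the node's effective capacity as (total − reserved) × allocation_ratio per resource class: 8 × 4.0 = 32 schedulable VCPUs, (7680 − 512) × 1.0 = 7168 MB of RAM, and (20 − 1) × 0.9 = 17.1 GB of disk. The same arithmetic, as a check:

```python
# Values copied from the set_inventory_for_provider line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    # Placement's usable capacity: (total - reserved) * allocation_ratio
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity}")   # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 17.1
```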
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.075 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.076 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:30:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 544 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 206 op/s
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.119 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.120 257641 DEBUG nova.network.neutron [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.142 257641 INFO nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.165 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.293 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.295 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.295 257641 INFO nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Creating image(s)#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.327 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 98874a34-13b7-4247-93ef-901257769274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:22.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.365 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 98874a34-13b7-4247-93ef-901257769274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.401 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 98874a34-13b7-4247-93ef-901257769274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.406 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.483 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
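The cached base image is probed with `qemu-img info`, wrapped in oslo's prlimit helper so a malformed image cannot wedge the host: the child process is capped at 1 GiB of address space and 30 s of CPU, and `--force-share` avoids taking a write lock on an image that may be in use. Reproducing the same guarded call (command and path copied from the log):

```python
import json
import subprocess

BASE = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"

# Same guarded invocation as logged: oslo_concurrency.prlimit execs qemu-img
# under an address-space cap of 1 GiB and a 30 s CPU limit.
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824", "--cpu=30", "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", BASE, "--force-share", "--output=json",
]
info = json.loads(subprocess.check_output(cmd))
print(info.get("format"), info.get("virtual-size"))
```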
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.485 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.485 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.486 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.512 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 98874a34-13b7-4247-93ef-901257769274_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.519 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 98874a34-13b7-4247-93ef-901257769274_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:22 np0005539550 podman[354084]: 2025-11-29 08:30:22.573192716 +0000 UTC m=+0.094094480 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.850 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 98874a34-13b7-4247-93ef-901257769274_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.890 257641 DEBUG nova.policy [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0741d46905e94415a372bd62751dff66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5970d12b2c42419e889cd48de28c4b86', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:30:22 np0005539550 nova_compute[257631]: 2025-11-29 08:30:22.940 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] resizing rbd image 98874a34-13b7-4247-93ef-901257769274_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
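Since no `<uuid>_disk` image exists yet in the vms pool, the backend imports the flat base file as a format-2 RBD image and then grows it to the flavor's 1 GiB root disk. The resize in the line above goes through the librbd Python binding; a sketch mirroring both steps, using the CLI equivalent for the resize:

```python
import subprocess

POOL, CONF, CLIENT = "vms", "/etc/ceph/ceph.conf", "openstack"
BASE = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
DISK = "98874a34-13b7-4247-93ef-901257769274_disk"

# 1. Import the flat base file as a format-2 RBD image (as logged above).
subprocess.check_call(["rbd", "import", "--pool", POOL, BASE, DISK,
                       "--image-format=2", "--id", CLIENT, "--conf", CONF])
# 2. Grow it to the flavor's root disk size (1073741824 bytes = 1 GiB here).
subprocess.check_call(["rbd", "resize", "--pool", POOL, "--image", DISK,
                       "--size", "1G", "--id", CLIENT, "--conf", CONF])
```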
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.052 257641 DEBUG nova.objects.instance [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'migration_context' on Instance uuid 98874a34-13b7-4247-93ef-901257769274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.067 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.067 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Ensure instance console log exists: /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.068 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.068 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.068 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:23.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.585 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.607669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023607720, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 302, "num_deletes": 251, "total_data_size": 85159, "memory_usage": 90544, "flush_reason": "Manual Compaction"}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023610821, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 83952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52523, "largest_seqno": 52824, "table_properties": {"data_size": 82022, "index_size": 158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5667, "raw_average_key_size": 20, "raw_value_size": 78098, "raw_average_value_size": 279, "num_data_blocks": 7, "num_entries": 279, "num_filter_entries": 279, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405019, "oldest_key_time": 1764405019, "file_creation_time": 1764405023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 3220 microseconds, and 1109 cpu microseconds.
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.610887) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 83952 bytes OK
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.610912) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.612242) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.612260) EVENT_LOG_v1 {"time_micros": 1764405023612255, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.612276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 82974, prev total WAL file size 82974, number of live WAL files 2.
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.612657) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373532' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(81KB)], [113(14MB)]
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023612719, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 15680174, "oldest_snapshot_seqno": -1}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 8752 keys, 11837725 bytes, temperature: kUnknown
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023721733, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 11837725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11781036, "index_size": 33729, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 228075, "raw_average_key_size": 26, "raw_value_size": 11626816, "raw_average_value_size": 1328, "num_data_blocks": 1314, "num_entries": 8752, "num_filter_entries": 8752, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.722218) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11837725 bytes
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.724091) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.5 rd, 108.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.9 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(327.8) write-amplify(141.0) OK, records in: 9262, records dropped: 510 output_compression: NoCompression
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.724114) EVENT_LOG_v1 {"time_micros": 1764405023724103, "job": 68, "event": "compaction_finished", "compaction_time_micros": 109299, "compaction_time_cpu_micros": 30719, "output_level": 6, "num_output_files": 1, "total_output_size": 11837725, "num_input_records": 9262, "num_output_records": 8752, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023724253, "job": 68, "event": "table_file_deletion", "file_number": 115}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405023727093, "job": 68, "event": "table_file_deletion", "file_number": 113}
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.612582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.727138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.727143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.727146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.727148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:30:23.727151) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:30:23 np0005539550 nova_compute[257631]: 2025-11-29 08:30:23.733 257641 DEBUG nova.network.neutron [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Successfully created port: f4011c98-b436-4719-bebd-fb90cd9e986f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:30:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 549 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.2 MiB/s wr, 224 op/s
Nov 29 03:30:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:24.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.797 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.819 257641 DEBUG nova.network.neutron [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Successfully updated port: f4011c98-b436-4719-bebd-fb90cd9e986f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.840 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "refresh_cache-98874a34-13b7-4247-93ef-901257769274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.841 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquired lock "refresh_cache-98874a34-13b7-4247-93ef-901257769274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.841 257641 DEBUG nova.network.neutron [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.977 257641 DEBUG nova.compute.manager [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received event network-changed-f4011c98-b436-4719-bebd-fb90cd9e986f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.978 257641 DEBUG nova.compute.manager [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Refreshing instance network info cache due to event network-changed-f4011c98-b436-4719-bebd-fb90cd9e986f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:30:24 np0005539550 nova_compute[257631]: 2025-11-29 08:30:24.978 257641 DEBUG oslo_concurrency.lockutils [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-98874a34-13b7-4247-93ef-901257769274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:30:25 np0005539550 nova_compute[257631]: 2025-11-29 08:30:25.016 257641 DEBUG nova.network.neutron [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:30:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:25.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 591 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.4 MiB/s wr, 334 op/s
Nov 29 03:30:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.806 257641 DEBUG nova.network.neutron [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Updating instance_info_cache with network_info: [{"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
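The blob written to instance_info_cache above is a JSON list of VIFs carrying everything the virt driver needs to plug the port and render the guest XML: MAC, fixed IP, MTU, OVS bridge and tap device name. Pulling the essentials out of a trimmed copy of that payload:

```python
import json

# Trimmed copy of the update_instance_cache_with_nw_info payload above.
cached = '''[{"id": "f4011c98-b436-4719-bebd-fb90cd9e986f",
  "address": "fa:16:3e:d8:91:24", "devname": "tapf4011c98-b4",
  "network": {"subnets": [{"ips": [{"address": "10.100.0.3"}]}],
              "meta": {"mtu": 1442}},
  "details": {"bridge_name": "br-int"}}]'''

for vif in json.loads(cached):
    ips = [ip["address"]
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"]]
    print(vif["id"], vif["address"], ips,
          vif["network"]["meta"]["mtu"], vif["details"]["bridge_name"])
# -> f4011c98-... fa:16:3e:d8:91:24 ['10.100.0.3'] 1442 br-int
```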
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.826 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Releasing lock "refresh_cache-98874a34-13b7-4247-93ef-901257769274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.827 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Instance network_info: |[{"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.827 257641 DEBUG oslo_concurrency.lockutils [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-98874a34-13b7-4247-93ef-901257769274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.828 257641 DEBUG nova.network.neutron [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Refreshing network info cache for port f4011c98-b436-4719-bebd-fb90cd9e986f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.831 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Start _get_guest_xml network_info=[{"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.836 257641 WARNING nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.841 257641 DEBUG nova.virt.libvirt.host [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.841 257641 DEBUG nova.virt.libvirt.host [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.844 257641 DEBUG nova.virt.libvirt.host [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.844 257641 DEBUG nova.virt.libvirt.host [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.845 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.846 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.846 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.846 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.847 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.847 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.847 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.848 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.848 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.848 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.848 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.849 257641 DEBUG nova.virt.hardware [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
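
The nova.virt.hardware lines above show the topology search for the m1.nano flavor: with no flavor or image constraints (limits 0:0:0, maxima 65536 per dimension), nova enumerates (sockets, cores, threads) combinations for 1 vCPU and ends up with the single candidate 1:1:1. A simplified sketch of that enumeration follows; it mirrors the idea behind nova.virt.hardware._get_possible_cpu_topologies but is not the actual implementation.

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, bounded by the logged 65536-per-dimension maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log for m1.nano
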
Nov 29 03:30:26 np0005539550 nova_compute[257631]: 2025-11-29 08:30:26.852 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/539343596' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.297 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
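
Each of these 0.4 s subprocess calls is nova's RBD driver fetching the monitor map before opening an RBD connection. The exact command is in the log and can be replayed by hand; a small sketch that runs it and lists the monitors follows. The 'mons'/'name'/'public_addr' keys match typical `ceph mon dump --format=json` output, but treat them as assumptions for your Ceph release.

    import json
    import subprocess

    # Replay the exact command logged above and print the monitor addresses.
    cmd = ['ceph', 'mon', 'dump', '--format=json',
           '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    monmap = json.loads(out)
    for mon in monmap.get('mons', []):
        print(mon.get('name'), mon.get('public_addr'))
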
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.322 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 98874a34-13b7-4247-93ef-901257769274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.326 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:27.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3658233461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.769 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.771 257641 DEBUG nova.virt.libvirt.vif [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-901142645',display_name='tempest-ServersTestJSON-server-901142645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-901142645',id=155,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-cru9kfrc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:22Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=98874a34-13b7-4247-93ef-901257769274,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.772 257641 DEBUG nova.network.os_vif_util [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.773 257641 DEBUG nova.network.os_vif_util [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:91:24,bridge_name='br-int',has_traffic_filtering=True,id=f4011c98-b436-4719-bebd-fb90cd9e986f,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4011c98-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.774 257641 DEBUG nova.objects.instance [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98874a34-13b7-4247-93ef-901257769274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.789 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <uuid>98874a34-13b7-4247-93ef-901257769274</uuid>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <name>instance-0000009b</name>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersTestJSON-server-901142645</nova:name>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:30:26</nova:creationTime>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:user uuid="0741d46905e94415a372bd62751dff66">tempest-ServersTestJSON-1509574488-project-member</nova:user>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:project uuid="5970d12b2c42419e889cd48de28c4b86">tempest-ServersTestJSON-1509574488</nova:project>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <nova:port uuid="f4011c98-b436-4719-bebd-fb90cd9e986f">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <entry name="serial">98874a34-13b7-4247-93ef-901257769274</entry>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <entry name="uuid">98874a34-13b7-4247-93ef-901257769274</entry>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/98874a34-13b7-4247-93ef-901257769274_disk">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/98874a34-13b7-4247-93ef-901257769274_disk.config">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:d8:91:24"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <target dev="tapf4011c98-b4"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/console.log" append="off"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:30:27 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:30:27 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:30:27 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:30:27 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
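
The domain XML dumped above is what nova hands to libvirt: both disks are RBD-backed network disks pointing at the three Ceph monitors, and the single interface targets the tapf4011c98-b4 OVS port. When debugging a definition like this, it can help to extract just the storage and network wiring; a minimal stdlib sketch follows. The 'domain.xml' path is hypothetical (e.g. capture it with `virsh dumpxml instance-0000009b > domain.xml` on the compute host).

    import xml.etree.ElementTree as ET

    root = ET.parse('domain.xml').getroot()

    # List RBD-backed disks: guest device, RBD image name, monitor hosts.
    for disk in root.findall('./devices/disk'):
        src = disk.find('source')
        tgt = disk.find('target')
        if src is not None and src.get('protocol') == 'rbd':
            hosts = [h.get('name') for h in src.findall('host')]
            print(tgt.get('dev'), '->', src.get('name'), 'mons:', hosts)

    # List interfaces: MAC address and host-side tap device.
    for iface in root.findall('./devices/interface'):
        mac = iface.find('mac')
        tgt = iface.find('target')
        if mac is not None and tgt is not None:
            print('vif:', mac.get('address'), 'dev:', tgt.get('dev'))
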
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.793 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Preparing to wait for external event network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.794 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.794 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.794 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.795 257641 DEBUG nova.virt.libvirt.vif [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-901142645',display_name='tempest-ServersTestJSON-server-901142645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-901142645',id=155,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-cru9kfrc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:22Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=98874a34-13b7-4247-93ef-901257769274,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.796 257641 DEBUG nova.network.os_vif_util [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.796 257641 DEBUG nova.network.os_vif_util [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:91:24,bridge_name='br-int',has_traffic_filtering=True,id=f4011c98-b436-4719-bebd-fb90cd9e986f,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4011c98-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.797 257641 DEBUG os_vif [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:91:24,bridge_name='br-int',has_traffic_filtering=True,id=f4011c98-b436-4719-bebd-fb90cd9e986f,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4011c98-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.798 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.798 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.799 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.802 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.802 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4011c98-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.803 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf4011c98-b4, col_values=(('external_ids', {'iface-id': 'f4011c98-b436-4719-bebd-fb90cd9e986f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:91:24', 'vm-uuid': '98874a34-13b7-4247-93ef-901257769274'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
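
The two ovsdbapp transactions above (AddBridgeCommand with may_exist=True, then AddPortCommand plus a DbSetCommand stamping external_ids on the Interface row) are how os-vif registers the port with OVS so that ovn-controller can later match iface-id to the Neutron port and claim it. A rough ovs-vsctl equivalent, with the values copied from the log, is sketched below; this illustrates the same OVSDB writes, it is not what os-vif actually executes.

    import subprocess

    port = 'tapf4011c98-b4'
    # Equivalent of AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system).
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int',
                    '--', 'set', 'Bridge', 'br-int', 'datapath_type=system'],
                   check=True)
    # Equivalent of AddPortCommand + DbSetCommand(external_ids on the Interface row).
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-port', 'br-int', port,
                    '--', 'set', 'Interface', port,
                    'external_ids:iface-id=f4011c98-b436-4719-bebd-fb90cd9e986f',
                    'external_ids:iface-status=active',
                    'external_ids:attached-mac=fa:16:3e:d8:91:24',
                    'external_ids:vm-uuid=98874a34-13b7-4247-93ef-901257769274'],
                   check=True)
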
Nov 29 03:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.866 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:27 np0005539550 NetworkManager[49039]: <info>  [1764405027.8677] manager: (tapf4011c98-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.870 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.875 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:27 np0005539550 nova_compute[257631]: 2025-11-29 08:30:27.877 257641 INFO os_vif [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:91:24,bridge_name='br-int',has_traffic_filtering=True,id=f4011c98-b436-4719-bebd-fb90cd9e986f,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4011c98-b4')#033[00m
Nov 29 03:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.070 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.071 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.071 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No VIF found with MAC fa:16:3e:d8:91:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.072 257641 INFO nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Using config drive#033[00m
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.102 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 98874a34-13b7-4247-93ef-901257769274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 591 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.2 MiB/s wr, 276 op/s
Nov 29 03:30:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.330 257641 DEBUG nova.network.neutron [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Updated VIF entry in instance network info cache for port f4011c98-b436-4719-bebd-fb90cd9e986f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.330 257641 DEBUG nova.network.neutron [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Updating instance_info_cache with network_info: [{"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.351 257641 DEBUG oslo_concurrency.lockutils [req-bb05e2fc-e4fc-4b22-b23a-327410f546a6 req-dc3396d3-2325-4679-88e1-0c912aa97d04 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-98874a34-13b7-4247-93ef-901257769274" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:28 np0005539550 nova_compute[257631]: 2025-11-29 08:30:28.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.237 257641 INFO nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Creating config drive at /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/disk.config#033[00m
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.244 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzljieumi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.384 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzljieumi" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
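
Note the publisher string looks unquoted in the log only because oslo joins the argv with spaces; in the real invocation "OpenStack Compute 27.5.2-..." is a single argument to -publisher. A sketch that rebuilds a config-2 ISO the same way follows, with the flags copied verbatim from the logged command; /tmp/tmpzljieumi was a transient metadata staging directory created by nova, so the staging path below is a placeholder for your own.

    import subprocess

    # Flags copied from the logged mkisofs invocation; paths are placeholders.
    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', 'disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         'metadata_staging_dir'],
        check=True)
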
Nov 29 03:30:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:29.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.414 257641 DEBUG nova.storage.rbd_utils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 98874a34-13b7-4247-93ef-901257769274_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.418 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/disk.config 98874a34-13b7-4247-93ef-901257769274_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.810 257641 DEBUG oslo_concurrency.processutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/disk.config 98874a34-13b7-4247-93ef-901257769274_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.811 257641 INFO nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Deleting local config drive /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/disk.config because it was imported into RBD.#033[00m
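
Because [libvirt]/images_type is rbd here, the config drive does not stay on local disk: nova imports the ISO into the vms pool as <uuid>_disk.config (the image whose absence the earlier rbd_utils checks confirmed) and deletes the local copy. A sketch of that import-then-cleanup step, with arguments copied from the two log entries above:

    import os
    import subprocess

    iso = '/var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274/disk.config'
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    '98874a34-13b7-4247-93ef-901257769274_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.unlink(iso)  # local copy is redundant once the RBD image exists
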
Nov 29 03:30:29 np0005539550 kernel: tapf4011c98-b4: entered promiscuous mode
Nov 29 03:30:29 np0005539550 NetworkManager[49039]: <info>  [1764405029.8610] manager: (tapf4011c98-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/316)
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:29Z|00714|binding|INFO|Claiming lport f4011c98-b436-4719-bebd-fb90cd9e986f for this chassis.
Nov 29 03:30:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:29Z|00715|binding|INFO|f4011c98-b436-4719-bebd-fb90cd9e986f: Claiming fa:16:3e:d8:91:24 10.100.0.3
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.870 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:91:24 10.100.0.3'], port_security=['fa:16:3e:d8:91:24 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '98874a34-13b7-4247-93ef-901257769274', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f4011c98-b436-4719-bebd-fb90cd9e986f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.872 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f4011c98-b436-4719-bebd-fb90cd9e986f in datapath 14ea2b48-9984-443b-82fc-568ae98723fc bound to our chassis#033[00m
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.874 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:30:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:29Z|00716|binding|INFO|Setting lport f4011c98-b436-4719-bebd-fb90cd9e986f ovn-installed in OVS
Nov 29 03:30:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:29Z|00717|binding|INFO|Setting lport f4011c98-b436-4719-bebd-fb90cd9e986f up in Southbound
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.884 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:29 np0005539550 nova_compute[257631]: 2025-11-29 08:30:29.887 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.891 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0315b96d-d2e8-4758-b446-b9cf470f38a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:29 np0005539550 systemd-udevd[354363]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:29 np0005539550 systemd-machined[216673]: New machine qemu-84-instance-0000009b.
Nov 29 03:30:29 np0005539550 NetworkManager[49039]: <info>  [1764405029.9068] device (tapf4011c98-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:29 np0005539550 NetworkManager[49039]: <info>  [1764405029.9080] device (tapf4011c98-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:29 np0005539550 systemd[1]: Started Virtual Machine qemu-84-instance-0000009b.
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.923 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[06daa55d-cdbd-4da0-8fc1-43b017bd8ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.926 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[04f8e6dc-0960-477d-b2aa-e306e7f8f614]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.957 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[33a6e7de-f569-43b8-aeea-64b7bb13824c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.980 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ad69a07d-6c0c-4090-a1f2-1af3462a75fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806361, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354375, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.996 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1d6918a7-9db9-45f6-8156-ae79c82e995d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806373, 'tstamp': 806373}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354377, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806376, 'tstamp': 806376}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354377, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
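The privsep replies above are pyroute2-style netlink dumps (RTM_NEWLINK/RTM_NEWADDR) taken inside the ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace, showing the veth leg tap14ea2b48-91 carrying 169.254.169.254/32 and 10.100.0.2/28. A minimal sketch that reproduces the same queries with pyroute2 (names taken from the log; run as root on this node):

    # Sketch: read link and address state inside the OVN metadata namespace,
    # as the privsep daemon does for the metadata agent above.
    from pyroute2 import NetNS

    NS = "ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc"

    with NetNS(NS) as ns:
        idx = ns.link_lookup(ifname="tap14ea2b48-91")[0]
        link = ns.get_links(idx)[0]        # RTM_NEWLINK with the IFLA_* attrs
        addrs = ns.get_addr(index=idx)     # RTM_NEWADDR: metadata IP + subnet IP
        print(link.get_attr("IFLA_ADDRESS"),
              [a.get_attr("IFA_ADDRESS") for a in addrs])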
Nov 29 03:30:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:29.998 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.000 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:30.001 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:30.001 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:30.002 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:30.002 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
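All three transactions are deliberately idempotent: DelPortCommand runs with if_exists=True and AddPortCommand with may_exist=True, so re-provisioning an already-wired network is a no-op, which is exactly what the "Transaction caused no change" lines record. A sketch of the CLI equivalent with ovs-vsctl's matching flags (the agent itself uses the ovsdbapp Python IDL shown in the log, not the CLI):

    # Sketch: idempotent re-wiring of the metadata tap, mirroring the
    # DelPortCommand / AddPortCommand / DbSetCommand transaction above.
    import subprocess

    PORT = "tap14ea2b48-90"
    IFACE_ID = "42f71355-5b3f-49f9-b3e9-d89b87086d5d"

    # --if-exists / --may-exist make re-runs no-ops instead of errors.
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT], check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT], check=True)
    subprocess.run(
        ["ovs-vsctl", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}"],
        check=True,
    )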
Nov 29 03:30:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 591 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 307 op/s
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.158 257641 DEBUG nova.compute.manager [req-ec7eb322-3901-460c-a2a6-83b24c3a56a3 req-97350424-c9c3-45bb-97d6-72566faa7ef1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received event network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.158 257641 DEBUG oslo_concurrency.lockutils [req-ec7eb322-3901-460c-a2a6-83b24c3a56a3 req-97350424-c9c3-45bb-97d6-72566faa7ef1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.159 257641 DEBUG oslo_concurrency.lockutils [req-ec7eb322-3901-460c-a2a6-83b24c3a56a3 req-97350424-c9c3-45bb-97d6-72566faa7ef1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.159 257641 DEBUG oslo_concurrency.lockutils [req-ec7eb322-3901-460c-a2a6-83b24c3a56a3 req-97350424-c9c3-45bb-97d6-72566faa7ef1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.159 257641 DEBUG nova.compute.manager [req-ec7eb322-3901-460c-a2a6-83b24c3a56a3 req-97350424-c9c3-45bb-97d6-72566faa7ef1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Processing event network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:30:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:30Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3e:13:86 10.100.0.7
Nov 29 03:30:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:30.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
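The anonymous "HEAD / HTTP/1.0" requests that radosgw's beast frontend logs every second or so are load-balancer health probes from 192.168.122.100 and .102. A one-line reproduction of the probe, assuming the gateway listens locally on port 8080 (the port is an assumption; the log records only the client address and the 200 result):

    # Sketch: replay the load balancer's health probe against radosgw.
    import requests

    resp = requests.head("http://localhost:8080/", timeout=2)  # port assumed
    print(resp.status_code)  # the probes in this log all return 200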
Nov 29 03:30:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:30Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3e:13:86 10.100.0.7
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.372 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405030.371508, 98874a34-13b7-4247-93ef-901257769274 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.373 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.375 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.378 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.382 257641 INFO nova.virt.libvirt.driver [-] [instance: 98874a34-13b7-4247-93ef-901257769274] Instance spawned successfully.#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.382 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.400 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.405 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
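The power-state sync message compares the numeric state stored in the Nova database with what libvirt reports; the codes come from nova/compute/power_state.py. A small lookup table for reading them in these logs ("DB power_state: 0, VM power_state: 1" is NOSTATE vs RUNNING):

    # Nova power-state codes as used in the sync messages above.
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATE[0], "->", POWER_STATE[1])  # NOSTATE -> RUNNING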
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.409 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.410 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.410 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.411 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.411 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.411 257641 DEBUG nova.virt.libvirt.driver [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.460 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.460 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405030.3730314, 98874a34-13b7-4247-93ef-901257769274 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.460 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.494 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.497 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405030.3776262, 98874a34-13b7-4247-93ef-901257769274 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.497 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.501 257641 INFO nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Took 8.21 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.501 257641 DEBUG nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.512 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.515 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.557 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.640 257641 INFO nova.compute.manager [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Took 9.31 seconds to build instance.#033[00m
Nov 29 03:30:30 np0005539550 nova_compute[257631]: 2025-11-29 08:30:30.665 257641 DEBUG oslo_concurrency.lockutils [None req-4c29f34c-6957-4f15-8a7f-fffba7104d0e 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
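With the build finished and the instance lock released, the spawn (8.21 s) and total build (9.31 s) timings above are the numbers worth trending across runs. A minimal sketch that scrapes them out of a journal dump like this one (the file path is an assumption):

    # Sketch: extract Nova's "Took N seconds to spawn/build" timings.
    import re

    PAT = re.compile(r"Took (\d+\.\d+) seconds to (spawn|build)")

    with open("compute.log") as fh:   # path is an assumption
        for line in fh:
            m = PAT.search(line)
            if m:
                print(m.group(2), float(m.group(1)))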
Nov 29 03:30:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:31.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 649 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.5 MiB/s wr, 372 op/s
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.275 257641 DEBUG nova.compute.manager [req-1c40099a-260b-4bc4-9e62-c4de3e4788e3 req-07708836-5a0a-4660-aea4-231f8720bc71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received event network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.276 257641 DEBUG oslo_concurrency.lockutils [req-1c40099a-260b-4bc4-9e62-c4de3e4788e3 req-07708836-5a0a-4660-aea4-231f8720bc71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.276 257641 DEBUG oslo_concurrency.lockutils [req-1c40099a-260b-4bc4-9e62-c4de3e4788e3 req-07708836-5a0a-4660-aea4-231f8720bc71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.277 257641 DEBUG oslo_concurrency.lockutils [req-1c40099a-260b-4bc4-9e62-c4de3e4788e3 req-07708836-5a0a-4660-aea4-231f8720bc71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.278 257641 DEBUG nova.compute.manager [req-1c40099a-260b-4bc4-9e62-c4de3e4788e3 req-07708836-5a0a-4660-aea4-231f8720bc71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] No waiting events found dispatching network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.278 257641 WARNING nova.compute.manager [req-1c40099a-260b-4bc4-9e62-c4de3e4788e3 req-07708836-5a0a-4660-aea4-231f8720bc71 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received unexpected event network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f for instance with vm_state active and task_state None.#033[00m
Nov 29 03:30:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.746 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.746 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.746 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.747 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.747 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.748 257641 INFO nova.compute.manager [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Terminating instance#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.750 257641 DEBUG nova.compute.manager [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:32 np0005539550 kernel: tapf4011c98-b4 (unregistering): left promiscuous mode
Nov 29 03:30:32 np0005539550 NetworkManager[49039]: <info>  [1764405032.7949] device (tapf4011c98-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:32Z|00718|binding|INFO|Releasing lport f4011c98-b436-4719-bebd-fb90cd9e986f from this chassis (sb_readonly=0)
Nov 29 03:30:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:32Z|00719|binding|INFO|Setting lport f4011c98-b436-4719-bebd-fb90cd9e986f down in Southbound
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.806 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:32Z|00720|binding|INFO|Removing iface tapf4011c98-b4 ovn-installed in OVS
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.813 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:91:24 10.100.0.3'], port_security=['fa:16:3e:d8:91:24 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '98874a34-13b7-4247-93ef-901257769274', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=f4011c98-b436-4719-bebd-fb90cd9e986f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.815 158978 INFO neutron.agent.ovn.metadata.agent [-] Port f4011c98-b436-4719-bebd-fb90cd9e986f in datapath 14ea2b48-9984-443b-82fc-568ae98723fc unbound from our chassis#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.817 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.823 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.836 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8483c9cc-29c3-4fa8-9856-1510cee1fe18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539550 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000009b.scope: Deactivated successfully.
Nov 29 03:30:32 np0005539550 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000009b.scope: Consumed 2.907s CPU time.
Nov 29 03:30:32 np0005539550 systemd-machined[216673]: Machine qemu-84-instance-0000009b terminated.
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.867 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.873 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3ae795d2-8324-452e-a19c-c98a23a5a5ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.877 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[fa7092d6-3875-4959-9a1b-5335253fa307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.909 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[93010e32-942f-4fca-a60e-ea3d3d28a752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.930 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[044f30f1-c6c5-4c6c-b535-557e569172a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806361, 'reachable_time': 35538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354431, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.946 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dacc5d78-5856-4dd1-a5f4-99ead5a7a9c0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806373, 'tstamp': 806373}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354432, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap14ea2b48-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 806376, 'tstamp': 806376}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354432, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.949 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.951 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.957 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.958 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.958 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.959 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:32.959 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.986 257641 INFO nova.virt.libvirt.driver [-] [instance: 98874a34-13b7-4247-93ef-901257769274] Instance destroyed successfully.#033[00m
Nov 29 03:30:32 np0005539550 nova_compute[257631]: 2025-11-29 08:30:32.986 257641 DEBUG nova.objects.instance [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'resources' on Instance uuid 98874a34-13b7-4247-93ef-901257769274 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.004 257641 DEBUG nova.virt.libvirt.vif [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-901142645',display_name='tempest-ServersTestJSON-server-901142645',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-901142645',id=155,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-cru9kfrc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:30Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=98874a34-13b7-4247-93ef-901257769274,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.005 257641 DEBUG nova.network.os_vif_util [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "f4011c98-b436-4719-bebd-fb90cd9e986f", "address": "fa:16:3e:d8:91:24", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4011c98-b4", "ovs_interfaceid": "f4011c98-b436-4719-bebd-fb90cd9e986f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.007 257641 DEBUG nova.network.os_vif_util [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:91:24,bridge_name='br-int',has_traffic_filtering=True,id=f4011c98-b436-4719-bebd-fb90cd9e986f,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4011c98-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.007 257641 DEBUG os_vif [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:91:24,bridge_name='br-int',has_traffic_filtering=True,id=f4011c98-b436-4719-bebd-fb90cd9e986f,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4011c98-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.011 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.012 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4011c98-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.016 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.019 257641 INFO os_vif [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:91:24,bridge_name='br-int',has_traffic_filtering=True,id=f4011c98-b436-4719-bebd-fb90cd9e986f,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4011c98-b4')#033[00m
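Nova delegates the actual port removal to the os-vif library, whose plug/unplug entry points take the VIF object logged above plus an InstanceInfo. A rough, hedged reconstruction of that call outside Nova (field list trimmed to what the VIFOpenVSwitch repr in the log shows; a real VIF also carries the network object, so treat this purely as an illustration of the os_vif API shape, not a drop-in for Nova's own call):

    # Sketch: unplug an OVS VIF via os-vif, as nova-compute does above.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the ovs/linux_bridge/noop plugins

    v = vif.VIFOpenVSwitch(
        id="f4011c98-b436-4719-bebd-fb90cd9e986f",
        address="fa:16:3e:d8:91:24",
        bridge_name="br-int",
        vif_name="tapf4011c98-b4",
    )
    inst = instance_info.InstanceInfo(
        uuid="98874a34-13b7-4247-93ef-901257769274",
        name="instance-0000009b",
    )
    os_vif.unplug(v, inst)  # removes the tap port from br-int, as logged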
Nov 29 03:30:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Nov 29 03:30:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Nov 29 03:30:33 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Nov 29 03:30:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:33.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.479 257641 INFO nova.virt.libvirt.driver [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Deleting instance files /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274_del#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.480 257641 INFO nova.virt.libvirt.driver [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Deletion of /var/lib/nova/instances/98874a34-13b7-4247-93ef-901257769274_del complete#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.537 257641 INFO nova.compute.manager [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.538 257641 DEBUG oslo.service.loopingcall [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.538 257641 DEBUG nova.compute.manager [-] [instance: 98874a34-13b7-4247-93ef-901257769274] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.538 257641 DEBUG nova.network.neutron [-] [instance: 98874a34-13b7-4247-93ef-901257769274] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:33 np0005539550 nova_compute[257631]: 2025-11-29 08:30:33.589 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 665 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.6 MiB/s wr, 451 op/s
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.203 257641 DEBUG nova.network.neutron [-] [instance: 98874a34-13b7-4247-93ef-901257769274] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.222 257641 INFO nova.compute.manager [-] [instance: 98874a34-13b7-4247-93ef-901257769274] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.292 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.292 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:34.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.406 257641 DEBUG oslo_concurrency.processutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.438 257641 DEBUG nova.compute.manager [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received event network-vif-unplugged-f4011c98-b436-4719-bebd-fb90cd9e986f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.439 257641 DEBUG oslo_concurrency.lockutils [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.439 257641 DEBUG oslo_concurrency.lockutils [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.439 257641 DEBUG oslo_concurrency.lockutils [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.440 257641 DEBUG nova.compute.manager [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] No waiting events found dispatching network-vif-unplugged-f4011c98-b436-4719-bebd-fb90cd9e986f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.440 257641 WARNING nova.compute.manager [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received unexpected event network-vif-unplugged-f4011c98-b436-4719-bebd-fb90cd9e986f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.440 257641 DEBUG nova.compute.manager [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received event network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.441 257641 DEBUG oslo_concurrency.lockutils [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "98874a34-13b7-4247-93ef-901257769274-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.441 257641 DEBUG oslo_concurrency.lockutils [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.441 257641 DEBUG oslo_concurrency.lockutils [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "98874a34-13b7-4247-93ef-901257769274-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.442 257641 DEBUG nova.compute.manager [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] No waiting events found dispatching network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.442 257641 WARNING nova.compute.manager [req-352d56de-4b60-4aff-b341-af44cd5f0645 req-4a50c3af-5bed-4b2d-bc90-6f2cfed8f3aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received unexpected event network-vif-plugged-f4011c98-b436-4719-bebd-fb90cd9e986f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.585 257641 DEBUG nova.compute.manager [req-642b3bce-879e-4bfd-bb79-7b2aa9580120 req-0e526e9b-8b47-42ec-b981-163b2fc996aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 98874a34-13b7-4247-93ef-901257769274] Received event network-vif-deleted-f4011c98-b436-4719-bebd-fb90cd9e986f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615877767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.902 257641 DEBUG oslo_concurrency.processutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.910 257641 DEBUG nova.compute.provider_tree [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.928 257641 DEBUG nova.scheduler.client.report [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.954 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:34 np0005539550 nova_compute[257631]: 2025-11-29 08:30:34.979 257641 INFO nova.scheduler.client.report [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Deleted allocations for instance 98874a34-13b7-4247-93ef-901257769274#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.060 257641 DEBUG oslo_concurrency.lockutils [None req-41ea6f14-5884-4bb1-9b80-d68dda4b4956 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "98874a34-13b7-4247-93ef-901257769274" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.314s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:35.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.847 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.847 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.848 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.848 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.848 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.849 257641 INFO nova.compute.manager [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Terminating instance#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.851 257641 DEBUG nova.compute.manager [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:35 np0005539550 kernel: tap87b89ebb-67 (unregistering): left promiscuous mode
Nov 29 03:30:35 np0005539550 NetworkManager[49039]: <info>  [1764405035.9054] device (tap87b89ebb-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:35Z|00721|binding|INFO|Releasing lport 87b89ebb-67dd-420c-b59a-66707953da73 from this chassis (sb_readonly=0)
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:35Z|00722|binding|INFO|Setting lport 87b89ebb-67dd-420c-b59a-66707953da73 down in Southbound
Nov 29 03:30:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:35Z|00723|binding|INFO|Removing iface tap87b89ebb-67 ovn-installed in OVS
Nov 29 03:30:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:35.916 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3e:13:86 10.100.0.7'], port_security=['fa:16:3e:3e:13:86 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '122c0a9b-bb66-40c6-ad51-ca11eb95e9a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=87b89ebb-67dd-420c-b59a-66707953da73) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:35.918 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 87b89ebb-67dd-420c-b59a-66707953da73 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc unbound from our chassis#033[00m
Nov 29 03:30:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:35.919 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14ea2b48-9984-443b-82fc-568ae98723fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:35.920 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d29aa7b0-6c61-4f9a-9ca2-c21029927dbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:35.921 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace which is not needed anymore#033[00m
Nov 29 03:30:35 np0005539550 nova_compute[257631]: 2025-11-29 08:30:35.932 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:35 np0005539550 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000099.scope: Deactivated successfully.
Nov 29 03:30:35 np0005539550 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d00000099.scope: Consumed 14.632s CPU time.
Nov 29 03:30:35 np0005539550 systemd-machined[216673]: Machine qemu-83-instance-00000099 terminated.
Nov 29 03:30:36 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[353896]: [NOTICE]   (353901) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:36 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[353896]: [NOTICE]   (353901) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:36 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[353896]: [WARNING]  (353901) : Exiting Master process...
Nov 29 03:30:36 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[353896]: [ALERT]    (353901) : Current worker (353918) exited with code 143 (Terminated)
Nov 29 03:30:36 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[353896]: [WARNING]  (353901) : All workers exited. Exiting... (0)
Nov 29 03:30:36 np0005539550 systemd[1]: libpod-5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418.scope: Deactivated successfully.
Nov 29 03:30:36 np0005539550 podman[354511]: 2025-11-29 08:30:36.065287127 +0000 UTC m=+0.045512526 container died 5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.083 257641 INFO nova.virt.libvirt.driver [-] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Instance destroyed successfully.#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.084 257641 DEBUG nova.objects.instance [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'resources' on Instance uuid 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a08a9c6887ab2076e60709315e2ab7263636b61d5a95336e711b9b70ba23bdd6-merged.mount: Deactivated successfully.
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.097 257641 DEBUG nova.virt.libvirt.vif [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-901142645',display_name='tempest-ServersTestJSON-server-901142645',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-901142645',id=153,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-2wpwp7ap',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:16Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=122c0a9b-bb66-40c6-ad51-ca11eb95e9a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.098 257641 DEBUG nova.network.os_vif_util [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "87b89ebb-67dd-420c-b59a-66707953da73", "address": "fa:16:3e:3e:13:86", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b89ebb-67", "ovs_interfaceid": "87b89ebb-67dd-420c-b59a-66707953da73", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.099 257641 DEBUG nova.network.os_vif_util [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3e:13:86,bridge_name='br-int',has_traffic_filtering=True,id=87b89ebb-67dd-420c-b59a-66707953da73,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b89ebb-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.100 257641 DEBUG os_vif [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:13:86,bridge_name='br-int',has_traffic_filtering=True,id=87b89ebb-67dd-420c-b59a-66707953da73,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b89ebb-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.103 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87b89ebb-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.104 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.106 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 563 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.7 MiB/s wr, 467 op/s
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.110 257641 INFO os_vif [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3e:13:86,bridge_name='br-int',has_traffic_filtering=True,id=87b89ebb-67dd-420c-b59a-66707953da73,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b89ebb-67')#033[00m
Nov 29 03:30:36 np0005539550 podman[354511]: 2025-11-29 08:30:36.110846745 +0000 UTC m=+0.091072144 container cleanup 5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:36 np0005539550 systemd[1]: libpod-conmon-5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418.scope: Deactivated successfully.
Nov 29 03:30:36 np0005539550 podman[354557]: 2025-11-29 08:30:36.185994873 +0000 UTC m=+0.049455527 container remove 5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.191 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[feeca0e1-14fc-4ba7-a44f-199fef80d791]: (4, ('Sat Nov 29 08:30:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418)\n5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418\nSat Nov 29 08:30:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418)\n5ee057e1d7fbacad248910f8602d8c01c9bef3e75126638f9650a47e3b704418\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.193 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d604f6cc-fedd-4e8c-b615-fcc785fb1ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.194 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.196 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:36 np0005539550 kernel: tap14ea2b48-90: left promiscuous mode
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.215 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.217 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0e77fd-f9bb-40b9-9904-44e851e0eef1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.233 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c063f50d-9ebb-4936-b695-16ab04ec764b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.234 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[088dcb2b-3986-487f-8a9a-75843f20363d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.249 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6ea9bc-b55c-4eea-95db-997cd75cb060]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 806354, 'reachable_time': 29800, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354585, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:36 np0005539550 systemd[1]: run-netns-ovnmeta\x2d14ea2b48\x2d9984\x2d443b\x2d82fc\x2d568ae98723fc.mount: Deactivated successfully.
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.254 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:36.254 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[645d148a-74ce-43a5-beda-f86e62d8700c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:36.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.507 257641 INFO nova.virt.libvirt.driver [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Deleting instance files /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_del#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.508 257641 INFO nova.virt.libvirt.driver [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Deletion of /var/lib/nova/instances/122c0a9b-bb66-40c6-ad51-ca11eb95e9a6_del complete#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.575 257641 INFO nova.compute.manager [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.576 257641 DEBUG oslo.service.loopingcall [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.576 257641 DEBUG nova.compute.manager [-] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.577 257641 DEBUG nova.network.neutron [-] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:36 np0005539550 nova_compute[257631]: 2025-11-29 08:30:36.934 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:37.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.909 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.909 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.909 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.910 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.910 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.911 257641 INFO nova.compute.manager [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Terminating instance#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.912 257641 DEBUG nova.compute.manager [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.941 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:30:37 np0005539550 nova_compute[257631]: 2025-11-29 08:30:37.942 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:37 np0005539550 kernel: tape1a8ada6-65 (unregistering): left promiscuous mode
Nov 29 03:30:37 np0005539550 NetworkManager[49039]: <info>  [1764405037.9619] device (tape1a8ada6-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:38Z|00724|binding|INFO|Releasing lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 from this chassis (sb_readonly=0)
Nov 29 03:30:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:38Z|00725|binding|INFO|Setting lport e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 down in Southbound
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.009 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:38Z|00726|binding|INFO|Removing iface tape1a8ada6-65 ovn-installed in OVS
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.019 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.023 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8d:56:a6 10.100.0.14'], port_security=['fa:16:3e:8d:56:a6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0310268-d298-469e-9f04-6315f83c3f89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae71059d02774857be85797a3be0e4e6', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9cdb0c1e-9792-4231-abe9-b49a2c7e81de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=43696b0d-f042-4e44-8852-c0333c8ffa4f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.025 158978 INFO neutron.agent.ovn.metadata.agent [-] Port e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 in datapath d9d41f0a-17f9-4df4-a453-04da996d63b6 unbound from our chassis#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.027 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d9d41f0a-17f9-4df4-a453-04da996d63b6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.028 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d74785e-2f1d-46f4-ae70-4b56a72bec63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.028 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 namespace which is not needed anymore#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.032 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000091.scope: Deactivated successfully.
Nov 29 03:30:38 np0005539550 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000091.scope: Consumed 1.440s CPU time.
Nov 29 03:30:38 np0005539550 systemd-machined[216673]: Machine qemu-80-instance-00000091 terminated.
Nov 29 03:30:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 563 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.7 MiB/s wr, 467 op/s
Nov 29 03:30:38 np0005539550 NetworkManager[49039]: <info>  [1764405038.1325] manager: (tape1a8ada6-65): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.134 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.140 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.153 257641 INFO nova.virt.libvirt.driver [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Instance destroyed successfully.#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.153 257641 DEBUG nova.objects.instance [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lazy-loading 'resources' on Instance uuid a0310268-d298-469e-9f04-6315f83c3f89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:38 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[351176]: [NOTICE]   (351180) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:38 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[351176]: [NOTICE]   (351180) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:38 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[351176]: [WARNING]  (351180) : Exiting Master process...
Nov 29 03:30:38 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[351176]: [ALERT]    (351180) : Current worker (351182) exited with code 143 (Terminated)
Nov 29 03:30:38 np0005539550 neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6[351176]: [WARNING]  (351180) : All workers exited. Exiting... (0)
Nov 29 03:30:38 np0005539550 systemd[1]: libpod-cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400.scope: Deactivated successfully.
Nov 29 03:30:38 np0005539550 podman[354631]: 2025-11-29 08:30:38.165675957 +0000 UTC m=+0.052245468 container died cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.166 257641 DEBUG nova.virt.libvirt.vif [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:27:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-1793211077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-1793211077',id=145,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:29:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae71059d02774857be85797a3be0e4e6',ramdisk_id='',reservation_id='r-mu4bcpio',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1715153470-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:29:06Z,user_data=None,user_id='64b11a4dc36b4f55b85dbe846183be55',uuid=a0310268-d298-469e-9f04-6315f83c3f89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.167 257641 DEBUG nova.network.os_vif_util [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converting VIF {"id": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "address": "fa:16:3e:8d:56:a6", "network": {"id": "d9d41f0a-17f9-4df4-a453-04da996d63b6", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-811003261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae71059d02774857be85797a3be0e4e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape1a8ada6-65", "ovs_interfaceid": "e1a8ada6-6584-4ef5-8f52-75c5e5de9d86", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.168 257641 DEBUG nova.network.os_vif_util [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.168 257641 DEBUG os_vif [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
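The unplug above goes through os-vif's public entry points rather than shelling out. A minimal sketch of the equivalent call, reconstructed from the VIFOpenVSwitch fields in this log (illustrative only, not nova's actual code path):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin used below

    net = network.Network(id='d9d41f0a-17f9-4df4-a453-04da996d63b6',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='e1a8ada6-6584-4ef5-8f52-75c5e5de9d86',
        address='fa:16:3e:8d:56:a6',
        vif_name='tape1a8ada6-65',
        bridge_name='br-int',
        network=net)
    info = instance_info.InstanceInfo(
        uuid='a0310268-d298-469e-9f04-6315f83c3f89',
        name='instance-00000091')

    # produces the "Unplugging vif ..." / "Successfully unplugged" pair
    os_vif.unplug(ovs_vif, info)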
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.170 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.171 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1a8ada6-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
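DelPortCommand is ovsdbapp's transactional wrapper around an Open_vSwitch schema change. Roughly the same transaction can be issued directly; the unix socket path below is an assumption for a stock OVS install, it is not shown in this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes this a no-op if the port already vanished,
    # which keeps the nova/neutron teardown race from raising
    api.del_port('tape1a8ada6-65', bridge='br-int',
                 if_exists=True).execute(check_error=True)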
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.172 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.173 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.176 257641 INFO os_vif [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8d:56:a6,bridge_name='br-int',has_traffic_filtering=True,id=e1a8ada6-6584-4ef5-8f52-75c5e5de9d86,network=Network(d9d41f0a-17f9-4df4-a453-04da996d63b6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape1a8ada6-65')#033[00m
Nov 29 03:30:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-06a8f979fcd574cf5b384194a10058ade2ed5b8dc8cdb7e26f8593dfe78b8bd8-merged.mount: Deactivated successfully.
Nov 29 03:30:38 np0005539550 podman[354631]: 2025-11-29 08:30:38.202937463 +0000 UTC m=+0.089506974 container cleanup cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:30:38 np0005539550 systemd[1]: libpod-conmon-cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400.scope: Deactivated successfully.
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.253 257641 DEBUG nova.network.neutron [-] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.281 257641 INFO nova.compute.manager [-] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Took 1.70 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:38 np0005539550 podman[354687]: 2025-11-29 08:30:38.289788669 +0000 UTC m=+0.058308182 container remove cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.296 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee123ec-aeaf-41b6-9e1d-e7bfd2b1a477]: (4, ('Sat Nov 29 08:30:38 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400)\ncf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400\nSat Nov 29 08:30:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 (cf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400)\ncf19f9137064cda4932f01e6167cd49d73e990cf40cad738f26e40f376361400\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
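Each "privsep: reply[...]" line is the unprivileged agent reading a (message type, result) tuple back from the privsep daemon; the container stop/delete above ran inside that daemon. The registration pattern, sketched with a hypothetical entrypoint (neutron's real contexts live in neutron.privileged):

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        'example',                      # hypothetical prefix
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN])

    @default.entrypoint
    def stop_container(name):
        # body executes in the privileged daemon; its return value is
        # the second element of the logged reply tuple
        ...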
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.298 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0cbe7a-44b4-4510-822f-bcb271388c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.298 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd9d41f0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:38 np0005539550 kernel: tapd9d41f0a-10: left promiscuous mode
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.314 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.318 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.320 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a92cdf-e468-44bf-9412-4d08ca44cb1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.321 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.326 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.327 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.336 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5f31a35a-086c-4c99-8d88-7a67c338afc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.338 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2b44f4ff-a225-499d-abb8-c1c3b96d8c05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.357 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[93f9ac83-6df6-471d-bcce-22d62b6398c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 799422, 'reachable_time': 28553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354705, 'error': None, 'target': 'ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
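That reply is a netlink RTM_NEWLINK dump of 'lo' taken inside the ovnmeta- namespace (note the 'target' field in the message header) just before teardown. A rough pyroute2 equivalent of the same query:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),   # 'lo'
                  link['state'],                   # 'up'
                  link.get_attr('IFLA_MTU'))       # 65536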
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.359 257641 DEBUG nova.scheduler.client.report [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:30:38 np0005539550 systemd[1]: run-netns-ovnmeta\x2dd9d41f0a\x2d17f9\x2d4df4\x2da453\x2d04da996d63b6.mount: Deactivated successfully.
Nov 29 03:30:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
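The beast lines are anonymous "HEAD /" probes against radosgw, the usual load-balancer health-check pattern, answered with 200 at sub-millisecond latency. A hedged reproduction of one probe; the port is an assumption, since the listener address is not shown in this log:

    import requests

    # np0005539550 hosts the radosgw; 8080 is a guess at its beast port
    resp = requests.head('http://np0005539550:8080/')
    assert resp.status_code == 200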
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.362 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:30:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:38.362 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b33775dc-1ab3-4199-9d1a-9cefddb40cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
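remove_netns is one of those privileged entrypoints; underneath it is a pyroute2 call, roughly:

    from pyroute2 import netns

    # unlinks /var/run/netns/<name>; the run-netns mount-unit
    # deactivation logged a few lines earlier is the same teardown as
    # seen by systemd
    netns.remove('ovnmeta-d9d41f0a-17f9-4df4-a453-04da996d63b6')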
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.377 257641 DEBUG nova.compute.manager [req-5888f40d-c0e0-4203-b734-a95a2736a6c0 req-c954e8bb-1626-4262-8ff5-095500e63a07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.378 257641 DEBUG oslo_concurrency.lockutils [req-5888f40d-c0e0-4203-b734-a95a2736a6c0 req-c954e8bb-1626-4262-8ff5-095500e63a07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.378 257641 DEBUG oslo_concurrency.lockutils [req-5888f40d-c0e0-4203-b734-a95a2736a6c0 req-c954e8bb-1626-4262-8ff5-095500e63a07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.379 257641 DEBUG oslo_concurrency.lockutils [req-5888f40d-c0e0-4203-b734-a95a2736a6c0 req-c954e8bb-1626-4262-8ff5-095500e63a07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.379 257641 DEBUG nova.compute.manager [req-5888f40d-c0e0-4203-b734-a95a2736a6c0 req-c954e8bb-1626-4262-8ff5-095500e63a07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.379 257641 DEBUG nova.compute.manager [req-5888f40d-c0e0-4203-b734-a95a2736a6c0 req-c954e8bb-1626-4262-8ff5-095500e63a07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-unplugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
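The Acquiring/acquired/released triplets around each event pop are oslo.concurrency in-process locks; nova serializes per-instance event handling on a "<uuid>-events" lock name (nova wraps these helpers, but the minimal pattern is):

    from oslo_concurrency import lockutils

    with lockutils.lock('a0310268-d298-469e-9f04-6315f83c3f89-events'):
        ...  # pop/dispatch the event with the per-instance lock held

    # decorator form, as used for coarser locks like "compute_resources":
    @lockutils.synchronized('compute_resources')
    def update_usage():
        ...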
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.382 257641 DEBUG nova.scheduler.client.report [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.383 257641 DEBUG nova.compute.provider_tree [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
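For scheduling purposes, placement turns each inventory row into an effective capacity of (total - reserved) * allocation_ratio, so the data above yields 32 VCPUs, 7168 MB of RAM, and 17.1 GB of disk:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(rc, round(cap, 1))
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1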
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968850713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.397 257641 DEBUG nova.scheduler.client.report [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.414 257641 INFO nova.virt.libvirt.driver [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Deleting instance files /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89_del#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.415 257641 INFO nova.virt.libvirt.driver [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Deletion of /var/lib/nova/instances/a0310268-d298-469e-9f04-6315f83c3f89_del complete#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.418 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.422 257641 DEBUG nova.scheduler.client.report [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.501 257641 INFO nova.compute.manager [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Took 0.59 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.503 257641 DEBUG oslo.service.loopingcall [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.506 257641 DEBUG oslo_concurrency.processutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
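The recurring "ceph df" is how the RBD image backend sizes its pool; it is a plain subprocess via oslo.concurrency, equivalent to:

    from oslo_concurrency import processutils

    # returns (stdout, stderr); stdout is the JSON whose dispatch shows
    # up in the ceph-mon audit lines for client.openstack
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')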
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.533 257641 DEBUG nova.compute.manager [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.533 257641 DEBUG nova.network.neutron [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.542 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Error from libvirt while getting description of instance-00000091: [Error Code 42] Domain not found: no domain with matching uuid 'a0310268-d298-469e-9f04-6315f83c3f89' (instance-00000091): libvirt.libvirtError: Domain not found: no domain with matching uuid 'a0310268-d298-469e-9f04-6315f83c3f89' (instance-00000091)#033[00m
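Error code 42 is libvirt's VIR_ERR_NO_DOMAIN: the domain is already undefined by the time the periodic poller asks for it, so the warning is benign during a delete. The check, in sketch form:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        conn.lookupByUUIDString('a0310268-d298-469e-9f04-6315f83c3f89')
    except libvirt.libvirtError as e:
        if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
            pass  # instance already gone -- expected mid-teardown
        else:
            raise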
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.591 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.710 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.711 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4287MB free_disk=20.77368927001953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.712 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.873 257641 DEBUG nova.compute.manager [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received event network-vif-unplugged-87b89ebb-67dd-420c-b59a-66707953da73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.874 257641 DEBUG oslo_concurrency.lockutils [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.874 257641 DEBUG oslo_concurrency.lockutils [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.874 257641 DEBUG oslo_concurrency.lockutils [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.874 257641 DEBUG nova.compute.manager [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] No waiting events found dispatching network-vif-unplugged-87b89ebb-67dd-420c-b59a-66707953da73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.875 257641 WARNING nova.compute.manager [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received unexpected event network-vif-unplugged-87b89ebb-67dd-420c-b59a-66707953da73 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.875 257641 DEBUG nova.compute.manager [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received event network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.875 257641 DEBUG oslo_concurrency.lockutils [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.875 257641 DEBUG oslo_concurrency.lockutils [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.875 257641 DEBUG oslo_concurrency.lockutils [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.875 257641 DEBUG nova.compute.manager [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] No waiting events found dispatching network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.876 257641 WARNING nova.compute.manager [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received unexpected event network-vif-plugged-87b89ebb-67dd-420c-b59a-66707953da73 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.876 257641 DEBUG nova.compute.manager [req-fd5af82b-7371-4496-aab1-12580742430b req-2c0afe30-535e-46a0-b512-f0d540a6b3d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Received event network-vif-deleted-87b89ebb-67dd-420c-b59a-66707953da73 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125218648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2364776924' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:30:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2364776924' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.992 257641 DEBUG oslo_concurrency.processutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:38 np0005539550 nova_compute[257631]: 2025-11-29 08:30:38.997 257641 DEBUG nova.compute.provider_tree [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.024 257641 DEBUG nova.scheduler.client.report [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.046 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.049 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.109 257641 INFO nova.scheduler.client.report [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Deleted allocations for instance 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.139 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance a0310268-d298-469e-9f04-6315f83c3f89 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.140 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.140 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
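The final view is self-consistent: used_ram is the 512 MB host reservation plus the one instance still holding a placement allocation (a0310268-..., flavor memory_mb=128), and used_vcpus is its single vCPU:

    used_ram = 512 + 1 * 128   # 640 MB, as logged
    used_vcpus = 1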
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.149 257641 DEBUG nova.network.neutron [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.173 257641 INFO nova.compute.manager [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Took 0.64 seconds to deallocate network for instance.#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.175 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.213 257641 DEBUG oslo_concurrency.lockutils [None req-8e221844-2eb2-4630-b40f-93584b5c6875 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "122c0a9b-bb66-40c6-ad51-ca11eb95e9a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.366s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:39.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.480 257641 INFO nova.compute.manager [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Took 0.28 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.525 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/707960659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.617 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.622 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.644 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.668 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.669 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.669 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:39 np0005539550 nova_compute[257631]: 2025-11-29 08:30:39.730 257641 DEBUG oslo_concurrency.processutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 482 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 7.2 MiB/s wr, 502 op/s
Nov 29 03:30:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/964867085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.207 257641 DEBUG oslo_concurrency.processutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.215 257641 DEBUG nova.compute.provider_tree [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.238 257641 DEBUG nova.scheduler.client.report [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.259 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.279 257641 INFO nova.scheduler.client.report [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Deleted allocations for instance a0310268-d298-469e-9f04-6315f83c3f89#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.357 257641 DEBUG oslo_concurrency.lockutils [None req-09bdbe67-29c7-4614-803c-7fc57d5e0b1d 64b11a4dc36b4f55b85dbe846183be55 ae71059d02774857be85797a3be0e4e6 - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.500 257641 DEBUG nova.compute.manager [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.501 257641 DEBUG oslo_concurrency.lockutils [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "a0310268-d298-469e-9f04-6315f83c3f89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.501 257641 DEBUG oslo_concurrency.lockutils [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.502 257641 DEBUG oslo_concurrency.lockutils [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "a0310268-d298-469e-9f04-6315f83c3f89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.502 257641 DEBUG nova.compute.manager [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] No waiting events found dispatching network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.502 257641 WARNING nova.compute.manager [req-8489eed8-a98d-4b74-a2cd-1d10fa681a30 req-ce55b896-4236-488c-a2fe-211b55872af1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received unexpected event network-vif-plugged-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.670 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.671 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.688 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
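Tasks like _heal_instance_info_cache above (and _poll_rebooting_instances just below) are oslo.service periodic tasks driven off the compute manager's task loop. Registration follows this pattern; the 60-second spacing here is illustrative, as the real intervals are config-driven:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            ...  # refresh one instance's network info cache per pass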
Nov 29 03:30:40 np0005539550 nova_compute[257631]: 2025-11-29 08:30:40.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:41 np0005539550 nova_compute[257631]: 2025-11-29 08:30:41.231 257641 DEBUG nova.compute.manager [req-789786b6-f71b-4a21-ad4d-c2a33d2f4aa6 req-938e1e9b-f047-44af-aa3b-1a842593376d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Received event network-vif-deleted-e1a8ada6-6584-4ef5-8f52-75c5e5de9d86 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:41.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
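These anonymous "HEAD / HTTP/1.0" entries recur roughly once a second, alternating between 192.168.122.100 and .102: they are load-balancer-style liveness probes against radosgw's beast frontend. A sketch of such a probe from the client side (the radosgw host and port here are assumptions; the log only records the probing clients):

    import http.client

    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')              # same request as the probes above
    assert conn.getresponse().status == 200  # radosgw answers 200 with no body
    conn.close()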
Nov 29 03:30:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Nov 29 03:30:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Nov 29 03:30:41 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Nov 29 03:30:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 3.0 MiB/s wr, 410 op/s
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.191 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.192 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.236 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.292 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.293 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.299 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.300 257641 INFO nova.compute.claims [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:30:42 np0005539550 podman[354779]: 2025-11-29 08:30:42.327145128 +0000 UTC m=+0.058790634 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:30:42 np0005539550 podman[354778]: 2025-11-29 08:30:42.331750805 +0000 UTC m=+0.067023913 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:42.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.388 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:30:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3416939960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.862 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.870 257641 DEBUG nova.compute.provider_tree [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.888 257641 DEBUG nova.scheduler.client.report [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
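The inventory dict above feeds placement's usual capacity model: what schedulers may consume per resource class is (total - reserved) * allocation_ratio. Worked out from exactly the values logged:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1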
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.913 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.914 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.962 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.963 257641 DEBUG nova.network.neutron [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:30:42 np0005539550 nova_compute[257631]: 2025-11-29 08:30:42.988 257641 INFO nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.011 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.131 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.133 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.133 257641 INFO nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Creating image(s)
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.220 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.251 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.279 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.283 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.317 257641 DEBUG nova.policy [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0741d46905e94415a372bd62751dff66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5970d12b2c42419e889cd48de28c4b86', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.320 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.359 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
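The qemu-img probe above is deliberately run under oslo_concurrency.prlimit so a malformed image cannot exhaust memory or spin the CPU; --as=1073741824 and --cpu=30 are the address-space and CPU-time caps. A sketch of issuing the same guarded call through the library API rather than the CLI wrapper (binary, path, and limits copied from the log):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        # re-execs through oslo_concurrency.prlimit, as in the logged cmd
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))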
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.359 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.360 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.360 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.383 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.386 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8e50b42f-203e-48d7-a154-42a6b14927c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:43.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:43 np0005539550 nova_compute[257631]: 2025-11-29 08:30:43.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 436 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.6 MiB/s wr, 392 op/s
Nov 29 03:30:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:44.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.466 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8e50b42f-203e-48d7-a154-42a6b14927c9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Nov 29 03:30:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.658 257641 DEBUG nova.network.neutron [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Successfully created port: 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.665 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] resizing rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
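Import then resize is the standard RBD-backed boot flow: the cached base image is imported into the vms pool, then grown to the flavor's 1 GiB root disk. A sketch of the resize step with the python rados/rbd bindings that nova.storage.rbd_utils wraps (pool, image name, size, and credentials mirror the --pool/--id/--conf arguments above):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')          # the "vms" pool from the log
        try:
            with rbd.Image(ioctx, '8e50b42f-203e-48d7-a154-42a6b14927c9_disk') as image:
                image.resize(1073741824)           # the resize logged above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()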
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.780 257641 DEBUG nova.objects.instance [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e50b42f-203e-48d7-a154-42a6b14927c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.798 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.798 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Ensure instance console log exists: /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.799 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.799 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:30:44 np0005539550 nova_compute[257631]: 2025-11-29 08:30:44.800 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:30:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 324 op/s
Nov 29 03:30:46 np0005539550 nova_compute[257631]: 2025-11-29 08:30:46.123 257641 DEBUG nova.network.neutron [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Successfully updated port: 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:30:46 np0005539550 nova_compute[257631]: 2025-11-29 08:30:46.143 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "refresh_cache-8e50b42f-203e-48d7-a154-42a6b14927c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:46 np0005539550 nova_compute[257631]: 2025-11-29 08:30:46.143 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquired lock "refresh_cache-8e50b42f-203e-48d7-a154-42a6b14927c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:46 np0005539550 nova_compute[257631]: 2025-11-29 08:30:46.144 257641 DEBUG nova.network.neutron [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:30:46 np0005539550 nova_compute[257631]: 2025-11-29 08:30:46.285 257641 DEBUG nova.network.neutron [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:30:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:46.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.249 257641 DEBUG nova.network.neutron [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Updating instance_info_cache with network_info: [{"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.269 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Releasing lock "refresh_cache-8e50b42f-203e-48d7-a154-42a6b14927c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.270 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Instance network_info: |[{"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.272 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Start _get_guest_xml network_info=[{"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.276 257641 WARNING nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.283 257641 DEBUG nova.virt.libvirt.host [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.284 257641 DEBUG nova.virt.libvirt.host [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.288 257641 DEBUG nova.virt.libvirt.host [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.288 257641 DEBUG nova.virt.libvirt.host [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.289 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.290 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.290 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.290 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.291 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.291 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.291 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.291 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.292 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.292 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.292 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.292 257641 DEBUG nova.virt.hardware [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
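The topology walk above searches every (sockets, cores, threads) triple whose product equals the vCPU count, bounded by the 65536 limits; for 1 vCPU that leaves only 1:1:1, which is the single topology logged. A sketch of that enumeration (nova's real implementation additionally weighs flavor/image preferences):

    def possible_topologies(vcpus, limit=65536):
        # yield every factorization vcpus == sockets * cores * threads
        for sockets in range(1, min(vcpus, limit) + 1):
            for cores in range(1, min(vcpus, limit) + 1):
                for threads in range(1, min(vcpus, limit) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]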
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.295 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:47.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4229942525' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.800 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.831 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.838 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.984 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405032.9835052, 98874a34-13b7-4247-93ef-901257769274 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:30:47 np0005539550 nova_compute[257631]: 2025-11-29 08:30:47.985 257641 INFO nova.compute.manager [-] [instance: 98874a34-13b7-4247-93ef-901257769274] VM Stopped (Lifecycle Event)
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.005 257641 DEBUG nova.compute.manager [None req-439eef9f-b97b-49a1-af14-784811bb725d - - - - - -] [instance: 98874a34-13b7-4247-93ef-901257769274] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:30:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 235 op/s
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.208 257641 DEBUG nova.compute.manager [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received event network-changed-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.208 257641 DEBUG nova.compute.manager [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Refreshing instance network info cache due to event network-changed-3d4f3867-eaac-4f2e-bc45-77212f42b7c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.209 257641 DEBUG oslo_concurrency.lockutils [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8e50b42f-203e-48d7-a154-42a6b14927c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.209 257641 DEBUG oslo_concurrency.lockutils [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8e50b42f-203e-48d7-a154-42a6b14927c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.209 257641 DEBUG nova.network.neutron [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Refreshing network info cache for port 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:30:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:48.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:30:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1975412455' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.569 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.570 257641 DEBUG nova.virt.libvirt.vif [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-598845279',display_name='tempest-ServersTestJSON-server-598845279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-598845279',id=157,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-gevcel89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:43Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=8e50b42f-203e-48d7-a154-42a6b14927c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.571 257641 DEBUG nova.network.os_vif_util [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.572 257641 DEBUG nova.network.os_vif_util [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:2b:bd,bridge_name='br-int',has_traffic_filtering=True,id=3d4f3867-eaac-4f2e-bc45-77212f42b7c2,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d4f3867-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
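The three nova.network.os_vif_util entries above show Nova translating its own VIF dict into an os-vif VIFOpenVSwitch object before delegating to the 'ovs' plugin. A minimal standalone sketch of that hand-off using the public os_vif API follows; the field values come from the log, and driving os_vif directly like this (outside Nova) is illustrative only:

    # Sketch of the converted object logged above, built by hand.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered os-vif plugins (ovs, ...)

    net = network.Network(id="14ea2b48-9984-443b-82fc-568ae98723fc",
                          bridge="br-int")
    port = vif.VIFOpenVSwitch(
        id="3d4f3867-eaac-4f2e-bc45-77212f42b7c2",
        address="fa:16:3e:9b:2b:bd",
        vif_name="tap3d4f3867-ea",
        bridge_name="br-int",
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="3d4f3867-eaac-4f2e-bc45-77212f42b7c2"))
    inst = instance_info.InstanceInfo(
        uuid="8e50b42f-203e-48d7-a154-42a6b14927c9",
        name="instance-0000009d")

    os_vif.plug(port, inst)  # produces the "Plugging vif ..." entry below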
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.573 257641 DEBUG nova.objects.instance [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e50b42f-203e-48d7-a154-42a6b14927c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.591 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <uuid>8e50b42f-203e-48d7-a154-42a6b14927c9</uuid>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <name>instance-0000009d</name>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersTestJSON-server-598845279</nova:name>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:30:47</nova:creationTime>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:user uuid="0741d46905e94415a372bd62751dff66">tempest-ServersTestJSON-1509574488-project-member</nova:user>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:project uuid="5970d12b2c42419e889cd48de28c4b86">tempest-ServersTestJSON-1509574488</nova:project>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <nova:port uuid="3d4f3867-eaac-4f2e-bc45-77212f42b7c2">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <entry name="serial">8e50b42f-203e-48d7-a154-42a6b14927c9</entry>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <entry name="uuid">8e50b42f-203e-48d7-a154-42a6b14927c9</entry>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8e50b42f-203e-48d7-a154-42a6b14927c9_disk">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8e50b42f-203e-48d7-a154-42a6b14927c9_disk.config">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:9b:2b:bd"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <target dev="tap3d4f3867-ea"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/console.log" append="off"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:30:48 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:30:48 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:30:48 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:30:48 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
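_get_guest_xml ends here; the libvirt driver next defines this domain and boots it, which is what produces the systemd-machined "New machine" and "Started Virtual Machine" entries further down. A rough equivalent with the libvirt-python bindings (Nova itself goes through its nova.virt.libvirt.guest.Guest wrapper, so this is a sketch, not the exact code path):

    import libvirt

    xml = "..."  # the <domain type="kvm"> document dumped above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot instance-0000009d
    finally:
        conn.close()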
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.593 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Preparing to wait for external event network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.593 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.593 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.594 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
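The acquire/release pair above is oslo.concurrency's named-lock idiom guarding the per-instance event table while the waiter for network-vif-plugged is registered. Reduced to its shape (the body comment stands in for Nova's _create_or_get_event):

    from oslo_concurrency import lockutils

    instance_uuid = "8e50b42f-203e-48d7-a154-42a6b14927c9"

    # Process-local named lock; the "-events" suffix in the log keys it
    # to this instance's pending-event dictionary.
    with lockutils.lock(f"{instance_uuid}-events"):
        # register an event keyed "network-vif-plugged-<port-id>"
        # that spawn() will later block on
        pass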
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.594 257641 DEBUG nova.virt.libvirt.vif [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:30:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-598845279',display_name='tempest-ServersTestJSON-server-598845279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-598845279',id=157,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-gevcel89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:30:43Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=8e50b42f-203e-48d7-a154-42a6b14927c9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.595 257641 DEBUG nova.network.os_vif_util [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.596 257641 DEBUG nova.network.os_vif_util [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:2b:bd,bridge_name='br-int',has_traffic_filtering=True,id=3d4f3867-eaac-4f2e-bc45-77212f42b7c2,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d4f3867-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.596 257641 DEBUG os_vif [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:2b:bd,bridge_name='br-int',has_traffic_filtering=True,id=3d4f3867-eaac-4f2e-bc45-77212f42b7c2,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d4f3867-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.597 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.598 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.599 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.602 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.602 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d4f3867-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.602 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3d4f3867-ea, col_values=(('external_ids', {'iface-id': '3d4f3867-eaac-4f2e-bc45-77212f42b7c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9b:2b:bd', 'vm-uuid': '8e50b42f-203e-48d7-a154-42a6b14927c9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
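The two OVSDB transactions above (AddBridgeCommand with may_exist=True, then AddPortCommand plus a DbSetCommand on the Interface row) are ovsdbapp commands issued by the os-vif ovs plugin. A standalone sketch of the same calls; the OVSDB socket path is an assumption, while the bridge, port, and external_ids values are the ones in the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path assumed).
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap3d4f3867-ea", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap3d4f3867-ea",
            ("external_ids", {
                "iface-id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:9b:2b:bd",
                "vm-uuid": "8e50b42f-203e-48d7-a154-42a6b14927c9"})))

Setting external_ids:iface-id is what lets ovn-controller match this OVS interface to the Neutron port and claim it, as the binding entries later in the log show.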
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.604 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539550 NetworkManager[49039]: <info>  [1764405048.6050] manager: (tap3d4f3867-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Nov 29 03:30:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.607 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.609 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.610 257641 INFO os_vif [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:2b:bd,bridge_name='br-int',has_traffic_filtering=True,id=3d4f3867-eaac-4f2e-bc45-77212f42b7c2,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d4f3867-ea')#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.892 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.892 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.892 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No VIF found with MAC fa:16:3e:9b:2b:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.893 257641 INFO nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Using config drive#033[00m
Nov 29 03:30:48 np0005539550 nova_compute[257631]: 2025-11-29 08:30:48.919 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Nov 29 03:30:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Nov 29 03:30:49 np0005539550 nova_compute[257631]: 2025-11-29 08:30:49.373 257641 INFO nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Creating config drive at /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/disk.config#033[00m
Nov 29 03:30:49 np0005539550 nova_compute[257631]: 2025-11-29 08:30:49.378 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5rg7nr1y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:49 np0005539550 nova_compute[257631]: 2025-11-29 08:30:49.512 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5rg7nr1y" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
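The CMD line above is the config-drive build: oslo.concurrency's processutils shells out to mkisofs to pack the rendered metadata files from the temporary directory into an ISO9660 image with volume label config-2. The same invocation expressed directly, with argument values copied from the log:

    from oslo_concurrency import processutils

    inst = "8e50b42f-203e-48d7-a154-42a6b14927c9"
    # Returns (stdout, stderr); raises ProcessExecutionError on rc != 0.
    out, err = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", f"/var/lib/nova/instances/{inst}/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher",
        "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp5rg7nr1y")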
Nov 29 03:30:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 2.7 MiB/s wr, 130 op/s
Nov 29 03:30:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:50.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:50 np0005539550 nova_compute[257631]: 2025-11-29 08:30:50.830 257641 DEBUG nova.storage.rbd_utils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 8e50b42f-203e-48d7-a154-42a6b14927c9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:30:50 np0005539550 nova_compute[257631]: 2025-11-29 08:30:50.834 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/disk.config 8e50b42f-203e-48d7-a154-42a6b14927c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.080 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405036.079353, 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.081 257641 INFO nova.compute.manager [-] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.154 257641 DEBUG nova.compute.manager [None req-1ca63a1b-37b6-4626-aab5-d17337f294f6 - - - - - -] [instance: 122c0a9b-bb66-40c6-ad51-ca11eb95e9a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.404 257641 DEBUG nova.network.neutron [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Updated VIF entry in instance network info cache for port 3d4f3867-eaac-4f2e-bc45-77212f42b7c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.405 257641 DEBUG nova.network.neutron [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Updating instance_info_cache with network_info: [{"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:30:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.425 257641 DEBUG oslo_concurrency.lockutils [req-5714c634-cf71-47b1-b9e5-26123392180d req-52b3d2e1-c082-4c98-8138-d4e645dacbf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8e50b42f-203e-48d7-a154-42a6b14927c9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:30:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:51.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.718 257641 DEBUG oslo_concurrency.processutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/disk.config 8e50b42f-203e-48d7-a154-42a6b14927c9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.884s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
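Because the instance disks are RBD-backed, the config drive is pushed into the vms pool as well: Nova first probes for the image (the "does not exist" entries above), shells out to rbd import, and then deletes the local file, as the next entry shows. A sketch of the existence probe with the python-rados/python-rbd bindings, using the pool, client id, and image name from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        try:
            img = rbd.Image(
                ioctx, "8e50b42f-203e-48d7-a154-42a6b14927c9_disk.config")
            img.close()  # image already present
        except rbd.ImageNotFound:
            pass  # absent: run `rbd import`, as the CMD line above does
    finally:
        ioctx.close()
        cluster.shutdown()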
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.718 257641 INFO nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Deleting local config drive /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9/disk.config because it was imported into RBD.#033[00m
Nov 29 03:30:51 np0005539550 kernel: tap3d4f3867-ea: entered promiscuous mode
Nov 29 03:30:51 np0005539550 NetworkManager[49039]: <info>  [1764405051.7786] manager: (tap3d4f3867-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/319)
Nov 29 03:30:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:51Z|00727|binding|INFO|Claiming lport 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 for this chassis.
Nov 29 03:30:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:51Z|00728|binding|INFO|3d4f3867-eaac-4f2e-bc45-77212f42b7c2: Claiming fa:16:3e:9b:2b:bd 10.100.0.10
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.823 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.830 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:2b:bd 10.100.0.10'], port_security=['fa:16:3e:9b:2b:bd 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8e50b42f-203e-48d7-a154-42a6b14927c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=3d4f3867-eaac-4f2e-bc45-77212f42b7c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.831 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc bound to our chassis#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.832 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.838 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:51Z|00729|binding|INFO|Setting lport 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 ovn-installed in OVS
Nov 29 03:30:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:51Z|00730|binding|INFO|Setting lport 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 up in Southbound
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.841 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.844 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.845 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8a98d9-7dd1-44a6-bd25-059e77405895]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.846 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14ea2b48-91 in ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
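Provisioning metadata for the datapath means building a dedicated namespace: the agent creates a veth pair, keeps tap14ea2b48-90 in the root namespace (it is added to br-int just below) and moves tap14ea2b48-91 into ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc. A pyroute2 sketch of that plumbing; neutron drives it through its privsep'd ip_lib wrappers, so this shows the effect, not the literal code, and requires root:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc"
    netns.create(ns)  # the namespace named in the log

    ip = IPRoute()
    try:
        # Create the veth pair in the root namespace...
        ip.link("add", ifname="tap14ea2b48-90", kind="veth",
                peer="tap14ea2b48-91")
        # ...then move the inner end into the metadata namespace.
        idx = ip.link_lookup(ifname="tap14ea2b48-91")[0]
        ip.link("set", index=idx, net_ns_fd=ns)
    finally:
        ip.close()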
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.848 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14ea2b48-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.848 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[10cbebcd-b8cf-4b87-8989-f07072787208]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.850 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e01000-1b6f-42a4-a8be-e501c468f83c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 systemd-udevd[355194]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.862 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[851090c5-9738-46d3-b8a7-5559f3fb382e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 systemd-machined[216673]: New machine qemu-85-instance-0000009d.
Nov 29 03:30:51 np0005539550 NetworkManager[49039]: <info>  [1764405051.8706] device (tap3d4f3867-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:30:51 np0005539550 NetworkManager[49039]: <info>  [1764405051.8719] device (tap3d4f3867-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:30:51 np0005539550 systemd[1]: Started Virtual Machine qemu-85-instance-0000009d.
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.887 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[af277d8e-fbd3-4b67-9d6f-ff4b17b02b9d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 nova_compute[257631]: 2025-11-29 08:30:51.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.918 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[90d93ebf-99ad-47e9-b9e3-8cdb4dfbd2bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.923 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4f74138a-8ff8-40cc-ad66-70a7493ba412]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 NetworkManager[49039]: <info>  [1764405051.9250] manager: (tap14ea2b48-90): new Veth device (/org/freedesktop/NetworkManager/Devices/320)
Nov 29 03:30:51 np0005539550 systemd-udevd[355200]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.955 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2687cb-19cc-411b-827c-f126a4e21c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.958 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[df1e2832-dbc2-4d65-97d6-18aead7d1051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:51 np0005539550 NetworkManager[49039]: <info>  [1764405051.9834] device (tap14ea2b48-90): carrier: link connected
Nov 29 03:30:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:51.990 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[827f1332-4f08-476d-bd5d-b028f5cde344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.009 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[22161c3e-24c8-4305-8cae-f6df7bb445c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809964, 'reachable_time': 36208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355228, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.029 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b1761b86-1723-4789-b7af-17bf31cc8823]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:168b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 809964, 'tstamp': 809964}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355229, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.047 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[82567ab6-e3bc-4fad-a91e-3d8ee6031fd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 211], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809964, 'reachable_time': 36208, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355230, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.054 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.065 257641 DEBUG nova.compute.manager [req-40fdeb11-fcaa-4416-8374-0f210e572412 req-266bfae3-27c6-4ee1-b9fd-e553db4c4571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received event network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.066 257641 DEBUG oslo_concurrency.lockutils [req-40fdeb11-fcaa-4416-8374-0f210e572412 req-266bfae3-27c6-4ee1-b9fd-e553db4c4571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.066 257641 DEBUG oslo_concurrency.lockutils [req-40fdeb11-fcaa-4416-8374-0f210e572412 req-266bfae3-27c6-4ee1-b9fd-e553db4c4571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.067 257641 DEBUG oslo_concurrency.lockutils [req-40fdeb11-fcaa-4416-8374-0f210e572412 req-266bfae3-27c6-4ee1-b9fd-e553db4c4571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.067 257641 DEBUG nova.compute.manager [req-40fdeb11-fcaa-4416-8374-0f210e572412 req-266bfae3-27c6-4ee1-b9fd-e553db4c4571 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Processing event network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
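[annotation] The three lockutils DEBUG lines above (Acquiring/acquired/released around _pop_event) are the standard oslo.concurrency pattern nova uses to serialize access to its per-instance event store. A minimal sketch of the same shape, using the real lockutils.synchronized decorator but a plain dict as a simplified stand-in for nova's InstanceEvents:

    from oslo_concurrency import lockutils

    _events = {}  # instance_uuid -> {event_name: payload}; stand-in store

    def pop_instance_event(instance_uuid, event_name):
        # The decorator is what emits the "Acquiring lock ... acquired ...
        # released" triplet logged above (lockutils.py:404/409/423).
        @lockutils.synchronized(f"{instance_uuid}-events")
        def _pop_event():
            return _events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()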
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.080 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ffaee130-1fe7-4b01-b9dd-1b96e019bca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 384 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 4.4 MiB/s wr, 112 op/s
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.143 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9f7a28-b436-49ca-a951-6674edac0b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.144 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.144 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.145 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.146 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539550 NetworkManager[49039]: <info>  [1764405052.1472] manager: (tap14ea2b48-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Nov 29 03:30:52 np0005539550 kernel: tap14ea2b48-90: entered promiscuous mode
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.148 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.149 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.150 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:52Z|00731|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:30:52 np0005539550 nova_compute[257631]: 2025-11-29 08:30:52.168 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.169 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
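[annotation] The ENOENT DEBUG above is the agent probing for an haproxy pidfile before the proxy has ever started, so the miss is expected and only logged at debug level. A stand-alone sketch of that kind of tolerant read (the converter argument is illustrative; this is not neutron's exact signature):

    import logging

    LOG = logging.getLogger(__name__)

    def get_value_from_file(filename, converter=None):
        # Return the (optionally converted) file contents, or None when
        # the file is missing/unreadable -- as in the DEBUG line above.
        try:
            with open(filename) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except (OSError, ValueError) as e:
            LOG.debug("Unable to access %s; Error: %s", filename, e)
            return None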
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.170 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3e9d86b1-d05b-4313-9744-ca39c869245b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.171 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
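[annotation] The haproxy_cfg dump above is the whole per-network metadata proxy: bind the link-local 169.254.169.254:80 inside the ovnmeta namespace, forward to the agent's socket path, and tag each request with X-OVN-Network-ID so the agent can identify the network. A sketch of rendering such a config from its network id (template text abridged from the dump; the Template helper is illustrative, not neutron's actual generator):

    from string import Template

    HAPROXY_TMPL = Template("""\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-$network_id
        user        root
        group       root
        maxconn     1024
        pidfile     $pidfile
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    def render_config(network_id):
        # Paths mirror the ones in the logged config.
        return HAPROXY_TMPL.substitute(
            network_id=network_id,
            pidfile=f"/var/lib/neutron/external/pids/{network_id}.pid.haproxy",
            socket_path="/var/lib/neutron/metadata_proxy",
        )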
Nov 29 03:30:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:52.172 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'env', 'PROCESS_TAG=haproxy-14ea2b48-9984-443b-82fc-568ae98723fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14ea2b48-9984-443b-82fc-568ae98723fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
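[annotation] That create_process command runs haproxy inside the network's ovnmeta- namespace via neutron-rootwrap. Stripped of rootwrap and privsep, the equivalent launch looks like this (requires root for the netns switch):

    import subprocess

    def spawn_metadata_haproxy(network_id, cfg_path):
        # Mirrors the command logged above: enter the ovnmeta namespace,
        # tag the process, and start haproxy with the rendered config.
        cmd = [
            "ip", "netns", "exec", f"ovnmeta-{network_id}",
            "env", f"PROCESS_TAG=haproxy-{network_id}",
            "haproxy", "-f", cfg_path,
        ]
        subprocess.run(cmd, check=True)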
Nov 29 03:30:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:52.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:52 np0005539550 podman[355261]: 2025-11-29 08:30:52.537373625 +0000 UTC m=+0.029024458 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:30:52 np0005539550 podman[355261]: 2025-11-29 08:30:52.947536992 +0000 UTC m=+0.439187815 container create c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 29 03:30:53 np0005539550 systemd[1]: Started libpod-conmon-c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195.scope.
Nov 29 03:30:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:30:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de242e01594ac284b82f5b4d030c4822ea48d77cba4d040ced0268337a45c296/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:30:53 np0005539550 podman[355261]: 2025-11-29 08:30:53.051525742 +0000 UTC m=+0.543176565 container init c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:30:53 np0005539550 podman[355261]: 2025-11-29 08:30:53.06837187 +0000 UTC m=+0.560022683 container start c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:30:53 np0005539550 podman[355275]: 2025-11-29 08:30:53.083051853 +0000 UTC m=+0.100362580 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:30:53 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [NOTICE]   (355306) : New worker (355310) forked
Nov 29 03:30:53 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [NOTICE]   (355306) : Loading success.
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.152 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405038.1506917, a0310268-d298-469e-9f04-6315f83c3f89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.152 257641 INFO nova.compute.manager [-] [instance: a0310268-d298-469e-9f04-6315f83c3f89] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.169 257641 DEBUG nova.compute.manager [None req-5ec182f3-b560-48c7-8666-13549f1f9300 - - - - - -] [instance: a0310268-d298-469e-9f04-6315f83c3f89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.341 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405053.340681, 8e50b42f-203e-48d7-a154-42a6b14927c9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.341 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] VM Started (Lifecycle Event)#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.343 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.347 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.351 257641 INFO nova.virt.libvirt.driver [-] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Instance spawned successfully.#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.351 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.367 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.370 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.379 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.379 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.380 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.380 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.381 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.381 257641 DEBUG nova.virt.libvirt.driver [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.415 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.416 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405053.3408875, 8e50b42f-203e-48d7-a154-42a6b14927c9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.416 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:30:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:30:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:53.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.464 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.467 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405053.346238, 8e50b42f-203e-48d7-a154-42a6b14927c9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.468 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.484 257641 INFO nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Took 10.35 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.485 257641 DEBUG nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.508 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.511 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.541 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
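[annotation] The Started/Paused/Resumed burst above is libvirt's normal spawn sequence, and each event reaches handle_lifecycle_event while the instance still has task_state spawning, so the power-state sync is skipped rather than fighting the in-flight build (DB power_state 0 vs VM power_state 1 is expected mid-spawn). The decision reduces to roughly this, with states simplified to plain values:

    def handle_lifecycle_event(instance, vm_power_state):
        # Sketch of the "pending task -> skip sync" branch logged above.
        if instance["task_state"] is not None:
            print("During sync_power_state the instance has a pending "
                  f"task ({instance['task_state']}). Skip.")
            return
        instance["power_state"] = vm_power_state

    handle_lifecycle_event({"task_state": "spawning", "power_state": 0}, 1)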
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.563 257641 INFO nova.compute.manager [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Took 11.28 seconds to build instance.#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.576 257641 DEBUG oslo_concurrency.lockutils [None req-3f693a13-e51a-4a65-9d9b-a8230b0b672d 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.597 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:53 np0005539550 nova_compute[257631]: 2025-11-29 08:30:53.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 391 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 4.6 MiB/s wr, 112 op/s
Nov 29 03:30:54 np0005539550 nova_compute[257631]: 2025-11-29 08:30:54.228 257641 DEBUG nova.compute.manager [req-ae625151-9480-4241-b792-876856f11621 req-bd5ac539-9b1c-4753-aafe-faca1b68a37f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received event network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:54 np0005539550 nova_compute[257631]: 2025-11-29 08:30:54.229 257641 DEBUG oslo_concurrency.lockutils [req-ae625151-9480-4241-b792-876856f11621 req-bd5ac539-9b1c-4753-aafe-faca1b68a37f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:54 np0005539550 nova_compute[257631]: 2025-11-29 08:30:54.229 257641 DEBUG oslo_concurrency.lockutils [req-ae625151-9480-4241-b792-876856f11621 req-bd5ac539-9b1c-4753-aafe-faca1b68a37f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:54 np0005539550 nova_compute[257631]: 2025-11-29 08:30:54.230 257641 DEBUG oslo_concurrency.lockutils [req-ae625151-9480-4241-b792-876856f11621 req-bd5ac539-9b1c-4753-aafe-faca1b68a37f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:54 np0005539550 nova_compute[257631]: 2025-11-29 08:30:54.230 257641 DEBUG nova.compute.manager [req-ae625151-9480-4241-b792-876856f11621 req-bd5ac539-9b1c-4753-aafe-faca1b68a37f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] No waiting events found dispatching network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:54 np0005539550 nova_compute[257631]: 2025-11-29 08:30:54.230 257641 WARNING nova.compute.manager [req-ae625151-9480-4241-b792-876856f11621 req-bd5ac539-9b1c-4753-aafe-faca1b68a37f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received unexpected event network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 for instance with vm_state active and task_state None.#033[00m
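[annotation] The "No waiting events found" / "Received unexpected event" pair above means neutron delivered network-vif-plugged again after the build's waiter had already been satisfied and popped, so the event is logged and dropped. The registry behind it is essentially a keyed map of waiters; a simplified stand-in (nova's real class keeps per-instance dicts under the -events lock):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            ev = threading.Event()
            self._waiters[(uuid, name)] = ev
            return ev

        def pop_instance_event(self, uuid, name):
            ev = self._waiters.pop((uuid, name), None)
            if ev is None:
                print(f"No waiting events found dispatching {name}")
            return ev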
Nov 29 03:30:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:30:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:54.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:30:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:55.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
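[annotation] The anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100/.102 arriving roughly once a second look like load-balancer health checks against radosgw. The beast access line is regular enough to parse mechanically; a regex inferred from the samples above:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:30:55.429 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("status"), m.group("latency"))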
Nov 29 03:30:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 402 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.299 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.300 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.301 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.301 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.301 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.303 257641 INFO nova.compute.manager [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Terminating instance#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.303 257641 DEBUG nova.compute.manager [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:30:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:56.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:56 np0005539550 kernel: tap3d4f3867-ea (unregistering): left promiscuous mode
Nov 29 03:30:56 np0005539550 NetworkManager[49039]: <info>  [1764405056.6089] device (tap3d4f3867-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.667 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:56Z|00732|binding|INFO|Releasing lport 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 from this chassis (sb_readonly=0)
Nov 29 03:30:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:56Z|00733|binding|INFO|Setting lport 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 down in Southbound
Nov 29 03:30:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:30:56Z|00734|binding|INFO|Removing iface tap3d4f3867-ea ovn-installed in OVS
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.670 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.687 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:56 np0005539550 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Nov 29 03:30:56 np0005539550 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d0000009d.scope: Consumed 4.532s CPU time.
Nov 29 03:30:56 np0005539550 systemd-machined[216673]: Machine qemu-85-instance-0000009d terminated.
Nov 29 03:30:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:56.720 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9b:2b:bd 10.100.0.10'], port_security=['fa:16:3e:9b:2b:bd 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8e50b42f-203e-48d7-a154-42a6b14927c9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=3d4f3867-eaac-4f2e-bc45-77212f42b7c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:30:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:56.721 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 3d4f3867-eaac-4f2e-bc45-77212f42b7c2 in datapath 14ea2b48-9984-443b-82fc-568ae98723fc unbound from our chassis#033[00m
Nov 29 03:30:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:56.722 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14ea2b48-9984-443b-82fc-568ae98723fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:30:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:56.723 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7747ced5-676b-43d6-bd35-aa6697fe1773]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:30:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:30:56.724 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace which is not needed anymore#033[00m
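[annotation] The PortBindingUpdatedEvent match above (up [True] -> [False], chassis cleared) is what drives the "unbound from our chassis" handling and, once no VIF ports remain in the datapath, the namespace teardown. A sketch of such a matcher on ovsdbapp's RowEvent (class layout inferred from the fields the log prints; check ovsdbapp for the exact base API):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Mirrors events=('update',), table='Port_Binding' above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire when the port had a chassis and has just lost it.
            return bool(getattr(old, 'chassis', None)) and not row.chassis

        def run(self, event, row, old):
            print(f"Port {row.logical_port} unbound from our chassis")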
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.736 257641 INFO nova.virt.libvirt.driver [-] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Instance destroyed successfully.#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.736 257641 DEBUG nova.objects.instance [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'resources' on Instance uuid 8e50b42f-203e-48d7-a154-42a6b14927c9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.760 257641 DEBUG nova.virt.libvirt.vif [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:30:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-598845279',display_name='tempest-ServersTestJSON-server-598845279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-598845279',id=157,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:30:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-gevcel89',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:30:54Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=8e50b42f-203e-48d7-a154-42a6b14927c9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.761 257641 DEBUG nova.network.os_vif_util [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "address": "fa:16:3e:9b:2b:bd", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3d4f3867-ea", "ovs_interfaceid": "3d4f3867-eaac-4f2e-bc45-77212f42b7c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.762 257641 DEBUG nova.network.os_vif_util [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9b:2b:bd,bridge_name='br-int',has_traffic_filtering=True,id=3d4f3867-eaac-4f2e-bc45-77212f42b7c2,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d4f3867-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.763 257641 DEBUG os_vif [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:2b:bd,bridge_name='br-int',has_traffic_filtering=True,id=3d4f3867-eaac-4f2e-bc45-77212f42b7c2,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d4f3867-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.764 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.765 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d4f3867-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.768 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:56 np0005539550 nova_compute[257631]: 2025-11-29 08:30:56.772 257641 INFO os_vif [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9b:2b:bd,bridge_name='br-int',has_traffic_filtering=True,id=3d4f3867-eaac-4f2e-bc45-77212f42b7c2,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3d4f3867-ea')#033[00m
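[annotation] The unplug path above boils down to one idempotent OVSDB operation: DelPortCommand(port=tap3d4f3867-ea, bridge=br-int, if_exists=True). Outside of os-vif, the same cleanup is a one-liner, with --if-exists mirroring if_exists=True:

    import subprocess

    def unplug_vif(bridge: str, port: str) -> None:
        # CLI equivalent of the DelPortCommand logged above.
        subprocess.run(
            ["ovs-vsctl", "--if-exists", "del-port", bridge, port],
            check=True,
        )

    # e.g. unplug_vif("br-int", "tap3d4f3867-ea")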
Nov 29 03:30:57 np0005539550 nova_compute[257631]: 2025-11-29 08:30:57.051 257641 DEBUG nova.compute.manager [req-ddeffb3c-8f86-42cc-a478-2a975e83113f req-f8b080ab-75b6-496e-928e-af0c94628606 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received event network-vif-unplugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:57 np0005539550 nova_compute[257631]: 2025-11-29 08:30:57.052 257641 DEBUG oslo_concurrency.lockutils [req-ddeffb3c-8f86-42cc-a478-2a975e83113f req-f8b080ab-75b6-496e-928e-af0c94628606 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:57 np0005539550 nova_compute[257631]: 2025-11-29 08:30:57.053 257641 DEBUG oslo_concurrency.lockutils [req-ddeffb3c-8f86-42cc-a478-2a975e83113f req-f8b080ab-75b6-496e-928e-af0c94628606 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:57 np0005539550 nova_compute[257631]: 2025-11-29 08:30:57.053 257641 DEBUG oslo_concurrency.lockutils [req-ddeffb3c-8f86-42cc-a478-2a975e83113f req-f8b080ab-75b6-496e-928e-af0c94628606 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:57 np0005539550 nova_compute[257631]: 2025-11-29 08:30:57.053 257641 DEBUG nova.compute.manager [req-ddeffb3c-8f86-42cc-a478-2a975e83113f req-f8b080ab-75b6-496e-928e-af0c94628606 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] No waiting events found dispatching network-vif-unplugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:57 np0005539550 nova_compute[257631]: 2025-11-29 08:30:57.054 257641 DEBUG nova.compute.manager [req-ddeffb3c-8f86-42cc-a478-2a975e83113f req-f8b080ab-75b6-496e-928e-af0c94628606 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received event network-vif-unplugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:30:57 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [NOTICE]   (355306) : haproxy version is 2.8.14-c23fe91
Nov 29 03:30:57 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [NOTICE]   (355306) : path to executable is /usr/sbin/haproxy
Nov 29 03:30:57 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [WARNING]  (355306) : Exiting Master process...
Nov 29 03:30:57 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [WARNING]  (355306) : Exiting Master process...
Nov 29 03:30:57 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [ALERT]    (355306) : Current worker (355310) exited with code 143 (Terminated)
Nov 29 03:30:57 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[355292]: [WARNING]  (355306) : All workers exited. Exiting... (0)
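[annotation] Worker exit code 143 here is not a crash: it is the conventional 128 + signal-number encoding, meaning the master soft-stopped the worker with SIGTERM when the agent tore the proxy down. Quick check:

    import signal

    # 128 + SIGTERM(15) == 143: terminated by signal, normal soft-stop.
    assert 128 + signal.SIGTERM == 143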
Nov 29 03:30:57 np0005539550 systemd[1]: libpod-c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195.scope: Deactivated successfully.
Nov 29 03:30:57 np0005539550 podman[355405]: 2025-11-29 08:30:57.077890903 +0000 UTC m=+0.261054061 container died c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:30:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:57.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 402 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.7 MiB/s wr, 201 op/s
Nov 29 03:30:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:30:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:30:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195-userdata-shm.mount: Deactivated successfully.
Nov 29 03:30:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-de242e01594ac284b82f5b4d030c4822ea48d77cba4d040ced0268337a45c296-merged.mount: Deactivated successfully.
Nov 29 03:30:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:30:58.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:30:58 np0005539550 nova_compute[257631]: 2025-11-29 08:30:58.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:30:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:30:59 np0005539550 podman[355405]: 2025-11-29 08:30:59.134474639 +0000 UTC m=+2.317637787 container cleanup c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:30:59 np0005539550 systemd[1]: libpod-conmon-c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195.scope: Deactivated successfully.
Nov 29 03:30:59 np0005539550 nova_compute[257631]: 2025-11-29 08:30:59.270 257641 DEBUG nova.compute.manager [req-6f45bc6f-5179-41b2-9582-869a2020be14 req-2ebf1e2c-b8a0-4b68-830a-7a5b7bf48213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received event network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:30:59 np0005539550 nova_compute[257631]: 2025-11-29 08:30:59.271 257641 DEBUG oslo_concurrency.lockutils [req-6f45bc6f-5179-41b2-9582-869a2020be14 req-2ebf1e2c-b8a0-4b68-830a-7a5b7bf48213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:30:59 np0005539550 nova_compute[257631]: 2025-11-29 08:30:59.271 257641 DEBUG oslo_concurrency.lockutils [req-6f45bc6f-5179-41b2-9582-869a2020be14 req-2ebf1e2c-b8a0-4b68-830a-7a5b7bf48213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:30:59 np0005539550 nova_compute[257631]: 2025-11-29 08:30:59.271 257641 DEBUG oslo_concurrency.lockutils [req-6f45bc6f-5179-41b2-9582-869a2020be14 req-2ebf1e2c-b8a0-4b68-830a-7a5b7bf48213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:30:59 np0005539550 nova_compute[257631]: 2025-11-29 08:30:59.271 257641 DEBUG nova.compute.manager [req-6f45bc6f-5179-41b2-9582-869a2020be14 req-2ebf1e2c-b8a0-4b68-830a-7a5b7bf48213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] No waiting events found dispatching network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:30:59 np0005539550 nova_compute[257631]: 2025-11-29 08:30:59.272 257641 WARNING nova.compute.manager [req-6f45bc6f-5179-41b2-9582-869a2020be14 req-2ebf1e2c-b8a0-4b68-830a-7a5b7bf48213 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received unexpected event network-vif-plugged-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:30:59
Nov 29 03:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'images', 'backups']
Nov 29 03:30:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:30:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:30:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:30:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:30:59.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 393 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.4 MiB/s wr, 235 op/s
Nov 29 03:31:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:00.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:01.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:31:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:31:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:31:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:31:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:31:01 np0005539550 nova_compute[257631]: 2025-11-29 08:31:01.868 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 370 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 225 op/s
Nov 29 03:31:02 np0005539550 podman[355450]: 2025-11-29 08:31:02.291103861 +0000 UTC m=+3.134689256 container remove c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.304 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a94617f-f853-4a32-bbca-7d51baa90712]: (4, ('Sat Nov 29 08:30:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195)\nc6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195\nSat Nov 29 08:30:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (c6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195)\nc6704fa47bec6454b4a2557fd929a18a4ca386e8de9d5b595dfd8b55a8bfd195\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.306 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4f7d8bdb-8c99-4d17-9033-de4fe1245e15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.307 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:31:02 np0005539550 nova_compute[257631]: 2025-11-29 08:31:02.309 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:02 np0005539550 kernel: tap14ea2b48-90: left promiscuous mode
Nov 29 03:31:02 np0005539550 nova_compute[257631]: 2025-11-29 08:31:02.333 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.337 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4d42a846-5578-42cc-9d8a-dc1a22b7876c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.356 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[37c68ca6-e330-4979-a487-a38fdd416895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.357 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e15d81f9-3171-4901-af67-a8f22d01ce90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.375 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63b8fdd3-3975-429f-9500-3a994afaeacd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 809957, 'reachable_time': 41544, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355597, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.378 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:31:02 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:02.378 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[aa9a31cc-d7a2-4370-b7da-4976e0d0805b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:31:02 np0005539550 systemd[1]: run-netns-ovnmeta\x2d14ea2b48\x2d9984\x2d443b\x2d82fc\x2d568ae98723fc.mount: Deactivated successfully.
Nov 29 03:31:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:02.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 89c8166f-e375-44e0-8a21-6d9109b718ad does not exist
Nov 29 03:31:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0111c8f2-1ac8-4178-9c5d-2e5ee5a09f2f does not exist
Nov 29 03:31:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e341d4d8-ae31-418e-be8f-909ccbbfcc6c does not exist
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:31:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:03 np0005539550 nova_compute[257631]: 2025-11-29 08:31:03.641 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:03 np0005539550 podman[355791]: 2025-11-29 08:31:03.890351874 +0000 UTC m=+0.021927328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:04 np0005539550 podman[355791]: 2025-11-29 08:31:04.000813469 +0000 UTC m=+0.132388893 container create 984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wescoff, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:31:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 359 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 1013 KiB/s wr, 213 op/s
Nov 29 03:31:04 np0005539550 systemd[1]: Started libpod-conmon-984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442.scope.
Nov 29 03:31:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:31:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:04.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:04 np0005539550 podman[355791]: 2025-11-29 08:31:04.615684173 +0000 UTC m=+0.747259637 container init 984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wescoff, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:04 np0005539550 podman[355791]: 2025-11-29 08:31:04.631128225 +0000 UTC m=+0.762703639 container start 984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:04 np0005539550 trusting_wescoff[355807]: 167 167
Nov 29 03:31:04 np0005539550 systemd[1]: libpod-984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442.scope: Deactivated successfully.
Nov 29 03:31:04 np0005539550 conmon[355807]: conmon 984f5684990d6f5a567d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442.scope/container/memory.events
Nov 29 03:31:04 np0005539550 podman[355791]: 2025-11-29 08:31:04.97195199 +0000 UTC m=+1.103527404 container attach 984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:31:04 np0005539550 podman[355791]: 2025-11-29 08:31:04.972830263 +0000 UTC m=+1.104405677 container died 984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wescoff, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:31:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d9e922111e0447a0a980831843485f1fee439b0a218e9a905532c24b71bd2c20-merged.mount: Deactivated successfully.
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.037 257641 INFO nova.virt.libvirt.driver [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Deleting instance files /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9_del
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.039 257641 INFO nova.virt.libvirt.driver [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Deletion of /var/lib/nova/instances/8e50b42f-203e-48d7-a154-42a6b14927c9_del complete
Nov 29 03:31:06 np0005539550 podman[355791]: 2025-11-29 08:31:06.058978376 +0000 UTC m=+2.190553790 container remove 984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.103 257641 INFO nova.compute.manager [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Took 9.80 seconds to destroy the instance on the hypervisor.
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.104 257641 DEBUG oslo.service.loopingcall [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.104 257641 DEBUG nova.compute.manager [-] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.104 257641 DEBUG nova.network.neutron [-] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:31:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 262 KiB/s wr, 214 op/s
Nov 29 03:31:06 np0005539550 systemd[1]: libpod-conmon-984f5684990d6f5a567deaf06a97a1dfffee8e4120535e7719cd14838ddc7442.scope: Deactivated successfully.
Nov 29 03:31:06 np0005539550 podman[355832]: 2025-11-29 08:31:06.195197215 +0000 UTC m=+0.022900873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:06.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:06 np0005539550 podman[355832]: 2025-11-29 08:31:06.420217609 +0000 UTC m=+0.247921247 container create fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:31:06 np0005539550 systemd[1]: Started libpod-conmon-fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644.scope.
Nov 29 03:31:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815678eef5396ee21dea09280c29e83eb81f83a6d0f2f9fb4ea5625d462235a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815678eef5396ee21dea09280c29e83eb81f83a6d0f2f9fb4ea5625d462235a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815678eef5396ee21dea09280c29e83eb81f83a6d0f2f9fb4ea5625d462235a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815678eef5396ee21dea09280c29e83eb81f83a6d0f2f9fb4ea5625d462235a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2815678eef5396ee21dea09280c29e83eb81f83a6d0f2f9fb4ea5625d462235a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:06 np0005539550 podman[355832]: 2025-11-29 08:31:06.749514222 +0000 UTC m=+0.577217870 container init fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:31:06 np0005539550 podman[355832]: 2025-11-29 08:31:06.756474779 +0000 UTC m=+0.584178417 container start fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.871 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:06 np0005539550 podman[355832]: 2025-11-29 08:31:06.97192058 +0000 UTC m=+0.799624298 container attach fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.973 257641 DEBUG nova.network.neutron [-] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:31:06 np0005539550 nova_compute[257631]: 2025-11-29 08:31:06.994 257641 INFO nova.compute.manager [-] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Took 0.89 seconds to deallocate network for instance.
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.037 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.037 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.090 257641 DEBUG nova.compute.manager [req-634b5c12-db44-4aa1-95d3-b0a1694c0a85 req-41f5ef68-60fb-4683-b7d7-2398d2e3ccfc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Received event network-vif-deleted-3d4f3867-eaac-4f2e-bc45-77212f42b7c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.093 257641 DEBUG oslo_concurrency.processutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:07.212 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:31:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:07.213 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.214 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:07 np0005539550 compassionate_meninsky[355848]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:31:07 np0005539550 compassionate_meninsky[355848]: --> relative data size: 1.0
Nov 29 03:31:07 np0005539550 compassionate_meninsky[355848]: --> All data devices are unavailable
Nov 29 03:31:07 np0005539550 systemd[1]: libpod-fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644.scope: Deactivated successfully.
Nov 29 03:31:07 np0005539550 podman[355832]: 2025-11-29 08:31:07.649414275 +0000 UTC m=+1.477117953 container died fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947077441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.739 257641 DEBUG oslo_concurrency.processutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.750 257641 DEBUG nova.compute.provider_tree [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.771 257641 DEBUG nova.scheduler.client.report [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.797 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.832 257641 INFO nova.scheduler.client.report [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Deleted allocations for instance 8e50b42f-203e-48d7-a154-42a6b14927c9
Nov 29 03:31:07 np0005539550 nova_compute[257631]: 2025-11-29 08:31:07.894 257641 DEBUG oslo_concurrency.lockutils [None req-29db1c99-4551-4906-896c-c9631959fc8a 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "8e50b42f-203e-48d7-a154-42a6b14927c9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 79 KiB/s wr, 87 op/s
Nov 29 03:31:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2815678eef5396ee21dea09280c29e83eb81f83a6d0f2f9fb4ea5625d462235a-merged.mount: Deactivated successfully.
Nov 29 03:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:31:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:31:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:08 np0005539550 podman[355832]: 2025-11-29 08:31:08.427027082 +0000 UTC m=+2.254730730 container remove fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:31:08 np0005539550 systemd[1]: libpod-conmon-fd8b6d61ba0d2ea725a27cbce32f893bd539391c2b89bfbdbcd60f53743eb644.scope: Deactivated successfully.
Nov 29 03:31:08 np0005539550 nova_compute[257631]: 2025-11-29 08:31:08.644 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:31:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:31:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:31:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:31:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:31:09 np0005539550 podman[356036]: 2025-11-29 08:31:09.143724962 +0000 UTC m=+0.029544501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:09 np0005539550 podman[356036]: 2025-11-29 08:31:09.20465138 +0000 UTC m=+0.090470829 container create 88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:31:09 np0005539550 systemd[1]: Started libpod-conmon-88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496.scope.
Nov 29 03:31:09 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:09 np0005539550 podman[356036]: 2025-11-29 08:31:09.39168735 +0000 UTC m=+0.277506829 container init 88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:31:09 np0005539550 podman[356036]: 2025-11-29 08:31:09.399273562 +0000 UTC m=+0.285093011 container start 88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:31:09 np0005539550 sharp_cray[356052]: 167 167
Nov 29 03:31:09 np0005539550 systemd[1]: libpod-88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496.scope: Deactivated successfully.
Nov 29 03:31:09 np0005539550 podman[356036]: 2025-11-29 08:31:09.409620285 +0000 UTC m=+0.295439754 container attach 88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:31:09 np0005539550 podman[356036]: 2025-11-29 08:31:09.410559489 +0000 UTC m=+0.296378938 container died 88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:09.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8a789a919641efdc506cc72b28cddc2fff7ead34ad9f3d6202133e36fab14197-merged.mount: Deactivated successfully.
Nov 29 03:31:09 np0005539550 podman[356036]: 2025-11-29 08:31:09.605094579 +0000 UTC m=+0.490914038 container remove 88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:31:09 np0005539550 systemd[1]: libpod-conmon-88125f101e21fd2f5adbd269a6e803cfaa384b174ba6b6a72b72ae38406fe496.scope: Deactivated successfully.
Nov 29 03:31:09 np0005539550 podman[356076]: 2025-11-29 08:31:09.826069301 +0000 UTC m=+0.048674317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 83 KiB/s wr, 98 op/s
Nov 29 03:31:10 np0005539550 podman[356076]: 2025-11-29 08:31:10.26824417 +0000 UTC m=+0.490849196 container create 66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:31:10 np0005539550 systemd[1]: Started libpod-conmon-66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55.scope.
Nov 29 03:31:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:10.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:10 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b4568d77251e60cdee3f1cdd96bc8e49828c10f0a37a629ea48fcce4ef05b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b4568d77251e60cdee3f1cdd96bc8e49828c10f0a37a629ea48fcce4ef05b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b4568d77251e60cdee3f1cdd96bc8e49828c10f0a37a629ea48fcce4ef05b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855b4568d77251e60cdee3f1cdd96bc8e49828c10f0a37a629ea48fcce4ef05b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:10 np0005539550 podman[356076]: 2025-11-29 08:31:10.696612108 +0000 UTC m=+0.919217114 container init 66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:31:10 np0005539550 podman[356076]: 2025-11-29 08:31:10.704919899 +0000 UTC m=+0.927524895 container start 66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:31:10 np0005539550 podman[356076]: 2025-11-29 08:31:10.750032315 +0000 UTC m=+0.972637311 container attach 66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]: {
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:    "0": [
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:        {
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "devices": [
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "/dev/loop3"
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            ],
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "lv_name": "ceph_lv0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "lv_size": "7511998464",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "name": "ceph_lv0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "tags": {
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.cluster_name": "ceph",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.crush_device_class": "",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.encrypted": "0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.osd_id": "0",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.type": "block",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:                "ceph.vdo": "0"
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            },
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "type": "block",
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:            "vg_name": "ceph_vg0"
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:        }
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]:    ]
Nov 29 03:31:11 np0005539550 silly_mestorf[356092]: }
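The JSON block above looks like the report of ceph-volume's "lvm list --format json" run by the one-shot cephadm container: one key per OSD id, each entry carrying the LV path, the backing devices, and the identity tags for the cluster and OSD. A minimal parsing sketch, assuming only the shape shown above (the helper name is hypothetical, not part of ceph-volume):

    import json

    def osd_devices(report_text):
        # Map each OSD id in the report to its LV path, backing
        # devices and OSD fsid (key names as seen in the log above).
        mapping = {}
        for osd_id, lvs in json.loads(report_text).items():
            for lv in lvs:
                mapping[osd_id] = {
                    "lv_path": lv["lv_path"],    # /dev/ceph_vg0/ceph_lv0
                    "devices": lv["devices"],    # ["/dev/loop3"]
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }
        return mapping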
Nov 29 03:31:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:11.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
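The beast lines are radosgw's access log; the once-per-second anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and .102 are most likely load-balancer health checks. A sketch for pulling client, request and status out of such lines (the regex is inferred from these samples, not from any rgw format spec):

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+)'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:31:11.455 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("request"), m.group("status"))
    # 192.168.122.102 HEAD / HTTP/1.0 200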
Nov 29 03:31:11 np0005539550 systemd[1]: libpod-66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55.scope: Deactivated successfully.
Nov 29 03:31:11 np0005539550 podman[356076]: 2025-11-29 08:31:11.460488927 +0000 UTC m=+1.683093943 container died 66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:31:11 np0005539550 nova_compute[257631]: 2025-11-29 08:31:11.735 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405056.7337003, 8e50b42f-203e-48d7-a154-42a6b14927c9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:31:11 np0005539550 nova_compute[257631]: 2025-11-29 08:31:11.737 257641 INFO nova.compute.manager [-] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] VM Stopped (Lifecycle Event)
Nov 29 03:31:11 np0005539550 nova_compute[257631]: 2025-11-29 08:31:11.764 257641 DEBUG nova.compute.manager [None req-cf4c0332-0c84-4f03-bf32-9bda929c206d - - - - - -] [instance: 8e50b42f-203e-48d7-a154-42a6b14927c9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:31:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-855b4568d77251e60cdee3f1cdd96bc8e49828c10f0a37a629ea48fcce4ef05b-merged.mount: Deactivated successfully.
Nov 29 03:31:11 np0005539550 nova_compute[257631]: 2025-11-29 08:31:11.873 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:12 np0005539550 podman[356076]: 2025-11-29 08:31:12.009759635 +0000 UTC m=+2.232364641 container remove 66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:31:12 np0005539550 systemd[1]: libpod-conmon-66fd82d86f87a0bef8fc68ce1b450ce6f591538a767b7afbd3910616516b3f55.scope: Deactivated successfully.
Nov 29 03:31:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 28 KiB/s wr, 70 op/s
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.248 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.249 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.266 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.330 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.331 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.340 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.340 257641 INFO nova.compute.claims [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:31:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:12 np0005539550 nova_compute[257631]: 2025-11-29 08:31:12.439 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:12 np0005539550 podman[356277]: 2025-11-29 08:31:12.639941999 +0000 UTC m=+0.027164301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:13 np0005539550 podman[356277]: 2025-11-29 08:31:13.251013727 +0000 UTC m=+0.638236039 container create 029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:31:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327263560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.375 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.936s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
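Nova learns pool capacity here by shelling out to "ceph df --format=json" (the 0.936 s call above). A sketch of the same pattern with plain subprocess; the "pools"/"stats" key names mirror what recent Ceph releases emit but should be treated as an assumption:

    import json
    import subprocess

    def ceph_pool_stats(pool, user="openstack", conf="/etc/ceph/ceph.conf"):
        # Run `ceph df --format=json` and return the named pool's stats dict.
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        for entry in json.loads(out)["pools"]:
            if entry["name"] == pool:
                return entry["stats"]
        raise LookupError(f"pool {pool!r} not in ceph df output")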
Nov 29 03:31:13 np0005539550 systemd[1]: Started libpod-conmon-029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34.scope.
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.384 257641 DEBUG nova.compute.provider_tree [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.402 257641 DEBUG nova.scheduler.client.report [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
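The inventory dict above also shows how schedulable capacity falls out of placement's formula, capacity = (total - reserved) * allocation_ratio: 8 * 4.0 = 32 VCPUs, (7680 - 512) * 1.0 = 7168 MB of RAM, and (20 - 1) * 0.9 = 17.1 GB of disk. Worked out:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(capacity, 1))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1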
Nov 29 03:31:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.434 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.435 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:31:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:13.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.495 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.495 257641 DEBUG nova.network.neutron [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.514 257641 INFO nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:31:13 np0005539550 podman[356277]: 2025-11-29 08:31:13.531332086 +0000 UTC m=+0.918554388 container init 029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dubinsky, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:31:13 np0005539550 podman[356293]: 2025-11-29 08:31:13.531792157 +0000 UTC m=+0.257096570 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:13 np0005539550 podman[356294]: 2025-11-29 08:31:13.537845121 +0000 UTC m=+0.270606553 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.535 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:31:13 np0005539550 podman[356277]: 2025-11-29 08:31:13.542739015 +0000 UTC m=+0.929961317 container start 029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dubinsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:13 np0005539550 confident_dubinsky[356324]: 167 167
Nov 29 03:31:13 np0005539550 systemd[1]: libpod-029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34.scope: Deactivated successfully.
Nov 29 03:31:13 np0005539550 podman[356277]: 2025-11-29 08:31:13.557560702 +0000 UTC m=+0.944782974 container attach 029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:31:13 np0005539550 podman[356277]: 2025-11-29 08:31:13.558107296 +0000 UTC m=+0.945329578 container died 029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dubinsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.609 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.611 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.611 257641 INFO nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Creating image(s)
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.641 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.669 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.692 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.696 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.722 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4be182430df19f4972a282ff01dce52cd9cb6ce56337664d4fff6e01f936148a-merged.mount: Deactivated successfully.
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.762 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
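The "python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ... qemu-img info" wrapper above caps the child's address space at 1 GiB and its CPU time at 30 s, presumably so a malformed image cannot wedge the compute host. The same guard with only the stdlib, as a sketch (limits copied from the flags above; the image path is hypothetical):

    import resource
    import subprocess

    def limit_child():
        # Mirror --as=1073741824 --cpu=30 from the prlimit wrapper above.
        resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
        resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

    info = subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json",
         "/var/lib/nova/instances/_base/some-image"],
        preexec_fn=limit_child, capture_output=True, text=True, check=True,
    ).stdout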
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.762 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.763 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.764 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.871 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.877 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:13 np0005539550 podman[356277]: 2025-11-29 08:31:13.893419411 +0000 UTC m=+1.280641733 container remove 029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_dubinsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:31:13 np0005539550 systemd[1]: libpod-conmon-029b36e1f5e3c4818e912c243d647941fbc837816318595d00936c91b7eecc34.scope: Deactivated successfully.
Nov 29 03:31:13 np0005539550 nova_compute[257631]: 2025-11-29 08:31:13.960 257641 DEBUG nova.policy [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0741d46905e94415a372bd62751dff66', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5970d12b2c42419e889cd48de28c4b86', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:31:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 358 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 29 KiB/s wr, 56 op/s
Nov 29 03:31:14 np0005539550 podman[356449]: 2025-11-29 08:31:14.086021872 +0000 UTC m=+0.025213961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:31:14 np0005539550 podman[356449]: 2025-11-29 08:31:14.152994953 +0000 UTC m=+0.092187002 container create 427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:31:14 np0005539550 systemd[1]: Started libpod-conmon-427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191.scope.
Nov 29 03:31:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70216962f7c21c8422d360e34ea146b74813d87efa4ea2ed350e902bd666f940/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70216962f7c21c8422d360e34ea146b74813d87efa4ea2ed350e902bd666f940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70216962f7c21c8422d360e34ea146b74813d87efa4ea2ed350e902bd666f940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70216962f7c21c8422d360e34ea146b74813d87efa4ea2ed350e902bd666f940/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
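0x7fffffff in these xfs warnings is the largest 32-bit signed time_t, i.e. the classic year-2038 limit; the kernel is noting that timestamps on the remounted overlay cannot represent anything later. Checking the cutoff:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00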
Nov 29 03:31:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:14.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:14 np0005539550 podman[356449]: 2025-11-29 08:31:14.472838866 +0000 UTC m=+0.412030925 container init 427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:31:14 np0005539550 podman[356449]: 2025-11-29 08:31:14.486537943 +0000 UTC m=+0.425729982 container start 427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:31:14 np0005539550 podman[356449]: 2025-11-29 08:31:14.491789727 +0000 UTC m=+0.430981816 container attach 427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:31:14 np0005539550 nova_compute[257631]: 2025-11-29 08:31:14.881 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.004s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:14 np0005539550 nova_compute[257631]: 2025-11-29 08:31:14.945 257641 DEBUG nova.network.neutron [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Successfully created port: c01b1b4d-0c44-49aa-8eee-26fc15a26f4d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:31:14 np0005539550 nova_compute[257631]: 2025-11-29 08:31:14.952 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] resizing rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
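The spawn sequence visible above is: fetch/cache the flat base image, "rbd import" it into the vms pool as <uuid>_disk, then grow the RBD image to the flavor's root-disk size (1073741824 bytes here). Nova does the resize through librbd; a sketch reproducing the same effect with the rbd CLI (function name illustrative):

    import subprocess

    def import_and_resize(base_path, image, size_bytes, pool="vms",
                          user="openstack", conf="/etc/ceph/ceph.conf"):
        # Import the flat base file, then grow it to the flavor size.
        common = ["--id", user, "--conf", conf]
        subprocess.run(["rbd", "import", "--pool", pool, base_path, image,
                        "--image-format=2", *common], check=True)
        mib = size_bytes // (1 << 20)  # rbd sizes default to MiB
        subprocess.run(["rbd", "resize", "--pool", pool, "--image", image,
                        "--size", str(mib), *common], check=True)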
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.140 257641 DEBUG nova.objects.instance [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'migration_context' on Instance uuid 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.154 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.154 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Ensure instance console log exists: /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.155 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.155 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.155 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]: {
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]:        "osd_id": 0,
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]:        "type": "bluestore"
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]:    }
Nov 29 03:31:15 np0005539550 sharp_bardeen[356468]: }
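This second one-shot container reports the activated bluestore OSD keyed by its osd_uuid (the shape matches a ceph-volume raw/list style report); cephadm then persists the host's device report via the config-key set commands a few lines below. Given that JSON, the id-to-device mapping is one comprehension (sketch using the exact data above):

    import json

    report = json.loads("""{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }""")
    print({v["osd_id"]: v["device"] for v in report.values()})
    # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}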
Nov 29 03:31:15 np0005539550 systemd[1]: libpod-427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191.scope: Deactivated successfully.
Nov 29 03:31:15 np0005539550 podman[356449]: 2025-11-29 08:31:15.433392158 +0000 UTC m=+1.372584207 container died 427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:31:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:15.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-70216962f7c21c8422d360e34ea146b74813d87efa4ea2ed350e902bd666f940-merged.mount: Deactivated successfully.
Nov 29 03:31:15 np0005539550 podman[356449]: 2025-11-29 08:31:15.854161394 +0000 UTC m=+1.793353433 container remove 427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:31:15 np0005539550 systemd[1]: libpod-conmon-427c1f8fda0d81cadd351905bcc769196c4092f0a4fd6fe96985d7085c6b3191.scope: Deactivated successfully.
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.967 257641 DEBUG nova.network.neutron [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Successfully updated port: c01b1b4d-0c44-49aa-8eee-26fc15a26f4d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.984 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.984 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquired lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:31:15 np0005539550 nova_compute[257631]: 2025-11-29 08:31:15.984 257641 DEBUG nova.network.neutron [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:31:16 np0005539550 nova_compute[257631]: 2025-11-29 08:31:16.128 257641 DEBUG nova.compute.manager [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received event network-changed-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:31:16 np0005539550 nova_compute[257631]: 2025-11-29 08:31:16.129 257641 DEBUG nova.compute.manager [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Refreshing instance network info cache due to event network-changed-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:31:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 320 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 553 KiB/s rd, 1.0 MiB/s wr, 79 op/s
Nov 29 03:31:16 np0005539550 nova_compute[257631]: 2025-11-29 08:31:16.129 257641 DEBUG oslo_concurrency.lockutils [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:31:16 np0005539550 nova_compute[257631]: 2025-11-29 08:31:16.152 257641 DEBUG nova.network.neutron [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:31:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:16.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d71a1bce-6c50-4a5d-8e39-b979e9c200cd does not exist
Nov 29 03:31:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 654a0813-2700-4289-aa0f-e4c4037993fd does not exist
Nov 29 03:31:16 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d3cab616-f5b7-4822-989d-163c221b5d3c does not exist
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.705692) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076705754, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 795, "num_deletes": 254, "total_data_size": 1039978, "memory_usage": 1054592, "flush_reason": "Manual Compaction"}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076714803, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 1027784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52825, "largest_seqno": 53619, "table_properties": {"data_size": 1023736, "index_size": 1764, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9710, "raw_average_key_size": 20, "raw_value_size": 1015324, "raw_average_value_size": 2110, "num_data_blocks": 77, "num_entries": 481, "num_filter_entries": 481, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405023, "oldest_key_time": 1764405023, "file_creation_time": 1764405076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 9144 microseconds, and 3451 cpu microseconds.
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.714840) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 1027784 bytes OK
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.714859) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.716705) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.716717) EVENT_LOG_v1 {"time_micros": 1764405076716713, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.716733) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 1035959, prev total WAL file size 1035959, number of live WAL files 2.
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.717498) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(1003KB)], [116(11MB)]
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076717579, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12865509, "oldest_snapshot_seqno": -1}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 8709 keys, 11015013 bytes, temperature: kUnknown
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076884338, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11015013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10959263, "index_size": 32854, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 227980, "raw_average_key_size": 26, "raw_value_size": 10806372, "raw_average_value_size": 1240, "num_data_blocks": 1272, "num_entries": 8709, "num_filter_entries": 8709, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405076, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.884832) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11015013 bytes
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.889001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.1 rd, 66.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.3 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(23.2) write-amplify(10.7) OK, records in: 9233, records dropped: 524 output_compression: NoCompression
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.889038) EVENT_LOG_v1 {"time_micros": 1764405076889020, "job": 70, "event": "compaction_finished", "compaction_time_micros": 166937, "compaction_time_cpu_micros": 49193, "output_level": 6, "num_output_files": 1, "total_output_size": 11015013, "num_input_records": 9233, "num_output_records": 8709, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076889546, "job": 70, "event": "table_file_deletion", "file_number": 118}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405076894778, "job": 70, "event": "table_file_deletion", "file_number": 116}
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.717355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.894849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.894903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.894905) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.894906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:31:16.894908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:31:16 np0005539550 nova_compute[257631]: 2025-11-29 08:31:16.913 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.054 257641 DEBUG nova.network.neutron [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Updating instance_info_cache with network_info: [{"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.074 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Releasing lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.075 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance network_info: |[{"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.075 257641 DEBUG oslo_concurrency.lockutils [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.076 257641 DEBUG nova.network.neutron [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Refreshing network info cache for port c01b1b4d-0c44-49aa-8eee-26fc15a26f4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.081 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Start _get_guest_xml network_info=[{"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.089 257641 WARNING nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.098 257641 DEBUG nova.virt.libvirt.host [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.099 257641 DEBUG nova.virt.libvirt.host [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.104 257641 DEBUG nova.virt.libvirt.host [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.105 257641 DEBUG nova.virt.libvirt.host [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.107 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.108 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.109 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.109 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.110 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.110 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.111 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.111 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.112 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.112 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.112 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.113 257641 DEBUG nova.virt.hardware [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.118 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:17.215 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:31:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:17.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589159877' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.586 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.610 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:31:17 np0005539550 nova_compute[257631]: 2025-11-29 08:31:17.613 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:17 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:31:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1023668212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.077 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.079 257641 DEBUG nova.virt.libvirt.vif [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1381867325',display_name='tempest-ServersTestJSON-server-1381867325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1381867325',id=158,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-0pj0c0dw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:13Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=430bbf28-daad-4d0e-a9fe-3a2affb2ae35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.080 257641 DEBUG nova.network.os_vif_util [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.081 257641 DEBUG nova.network.os_vif_util [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:63:5b,bridge_name='br-int',has_traffic_filtering=True,id=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc01b1b4d-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.082 257641 DEBUG nova.objects.instance [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'pci_devices' on Instance uuid 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.107 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <uuid>430bbf28-daad-4d0e-a9fe-3a2affb2ae35</uuid>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <name>instance-0000009e</name>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersTestJSON-server-1381867325</nova:name>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:31:17</nova:creationTime>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:user uuid="0741d46905e94415a372bd62751dff66">tempest-ServersTestJSON-1509574488-project-member</nova:user>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:project uuid="5970d12b2c42419e889cd48de28c4b86">tempest-ServersTestJSON-1509574488</nova:project>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <nova:port uuid="c01b1b4d-0c44-49aa-8eee-26fc15a26f4d">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <entry name="serial">430bbf28-daad-4d0e-a9fe-3a2affb2ae35</entry>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <entry name="uuid">430bbf28-daad-4d0e-a9fe-3a2affb2ae35</entry>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:2e:63:5b"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <target dev="tapc01b1b4d-0c"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/console.log" append="off"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:31:18 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:31:18 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:31:18 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:31:18 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.110 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Preparing to wait for external event network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.111 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.111 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.112 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.113 257641 DEBUG nova.virt.libvirt.vif [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1381867325',display_name='tempest-ServersTestJSON-server-1381867325',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1381867325',id=158,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-0pj0c0dw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:13Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=430bbf28-daad-4d0e-a9fe-3a2affb2ae35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.114 257641 DEBUG nova.network.os_vif_util [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.115 257641 DEBUG nova.network.os_vif_util [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:63:5b,bridge_name='br-int',has_traffic_filtering=True,id=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc01b1b4d-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.116 257641 DEBUG os_vif [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:63:5b,bridge_name='br-int',has_traffic_filtering=True,id=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc01b1b4d-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.118 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.119 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.120 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.125 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.126 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc01b1b4d-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.127 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc01b1b4d-0c, col_values=(('external_ids', {'iface-id': 'c01b1b4d-0c44-49aa-8eee-26fc15a26f4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:63:5b', 'vm-uuid': '430bbf28-daad-4d0e-a9fe-3a2affb2ae35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:31:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 320 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 463 KiB/s rd, 1.0 MiB/s wr, 64 op/s
Nov 29 03:31:18 np0005539550 NetworkManager[49039]: <info>  [1764405078.1314] manager: (tapc01b1b4d-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.131 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.135 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.137 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.137 257641 INFO os_vif [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:63:5b,bridge_name='br-int',has_traffic_filtering=True,id=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc01b1b4d-0c')
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.190 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.190 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.190 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] No VIF found with MAC fa:16:3e:2e:63:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.191 257641 INFO nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Using config drive
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.219 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:31:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:18.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.575 257641 DEBUG nova.network.neutron [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Updated VIF entry in instance network info cache for port c01b1b4d-0c44-49aa-8eee-26fc15a26f4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.575 257641 DEBUG nova.network.neutron [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Updating instance_info_cache with network_info: [{"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
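The instance_info_cache entry logged above is plain JSON, so the fields that matter when debugging can be pulled out with jq once the blob is saved to a file (network_info.json is an illustrative name, not something Nova writes):

    # Fixed IP, MAC and OVS interface id from the cached VIF entry
    jq -r '.[0] | [.network.subnets[0].ips[0].address, .address, .ovs_interfaceid] | @tsv' network_info.json
    # -> 10.100.0.7    fa:16:3e:2e:63:5b    c01b1b4d-0c44-49aa-8eee-26fc15a26f4d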
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.593 257641 DEBUG oslo_concurrency.lockutils [req-5a13e612-1e12-4211-b456-8229812b7072 req-a45adf88-fc69-44d3-917d-584ab22b5f28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.648 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.855 257641 INFO nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Creating config drive at /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/disk.config#033[00m
Nov 29 03:31:18 np0005539550 nova_compute[257631]: 2025-11-29 08:31:18.861 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjrmlem44 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:18.963 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:18.964 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:18.964 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.001 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjrmlem44" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.041 257641 DEBUG nova.storage.rbd_utils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] rbd image 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.044 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/disk.config 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.346 257641 DEBUG oslo_concurrency.processutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/disk.config 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.347 257641 INFO nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Deleting local config drive /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35/disk.config because it was imported into RBD.#033[00m
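The config-drive sequence above (mkisofs into a scratch directory, rbd import into the vms pool, local file deleted once imported) can be replayed by hand when troubleshooting; a sketch built from the commands Nova logged, with an illustrative scratch path standing in for the temporary directory:

    # Build a config-2 ISO from a prepared metadata tree (path is illustrative)
    /usr/bin/mkisofs -o disk.config -ldots -allow-lowercase -allow-multidot -l \
        -publisher "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9" \
        -quiet -J -r -V config-2 /tmp/metadata_dir
    # Import into Ceph as the instance's config-drive image, then verify it landed
    rbd import --pool vms disk.config 430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config \
        --image-format=2 --id openstack --conf /etc/ceph/ceph.conf
    rbd info vms/430bbf28-daad-4d0e-a9fe-3a2affb2ae35_disk.config --id openstack
    rm -f disk.config  # mirrors Nova deleting the local copy after a successful import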
Nov 29 03:31:19 np0005539550 kernel: tapc01b1b4d-0c: entered promiscuous mode
Nov 29 03:31:19 np0005539550 NetworkManager[49039]: <info>  [1764405079.4036] manager: (tapc01b1b4d-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/323)
Nov 29 03:31:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:19Z|00735|binding|INFO|Claiming lport c01b1b4d-0c44-49aa-8eee-26fc15a26f4d for this chassis.
Nov 29 03:31:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:19Z|00736|binding|INFO|c01b1b4d-0c44-49aa-8eee-26fc15a26f4d: Claiming fa:16:3e:2e:63:5b 10.100.0.7
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.404 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.411 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:63:5b 10.100.0.7'], port_security=['fa:16:3e:2e:63:5b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '430bbf28-daad-4d0e-a9fe-3a2affb2ae35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.412 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c01b1b4d-0c44-49aa-8eee-26fc15a26f4d in datapath 14ea2b48-9984-443b-82fc-568ae98723fc bound to our chassis#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.414 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 14ea2b48-9984-443b-82fc-568ae98723fc#033[00m
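The Port_Binding row the agent matched here can be inspected directly in the OVN southbound database; a sketch assuming ovn-sbctl on this host can reach the SB server:

    # Show the binding for the logical port that was just claimed
    ovn-sbctl find Port_Binding logical_port=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d
    # chassis should reference this node; up reads true once the claim below completes
    ovn-sbctl --columns=chassis,up list Port_Binding c01b1b4d-0c44-49aa-8eee-26fc15a26f4d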
Nov 29 03:31:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:19Z|00737|binding|INFO|Setting lport c01b1b4d-0c44-49aa-8eee-26fc15a26f4d ovn-installed in OVS
Nov 29 03:31:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:19Z|00738|binding|INFO|Setting lport c01b1b4d-0c44-49aa-8eee-26fc15a26f4d up in Southbound
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.426 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.426 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[343bc1c2-fd73-4f70-aa5f-efe14a611ca9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.427 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap14ea2b48-91 in ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
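What privsep performs on the agent's behalf here is ordinary veth plumbing: tap14ea2b48-91 ends up inside the ovnmeta- namespace while its peer tap14ea2b48-90 stays in the root namespace for OVS. A minimal equivalent by hand:

    ip netns add ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc
    ip link add tap14ea2b48-90 type veth peer name tap14ea2b48-91
    ip link set tap14ea2b48-91 netns ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc
    ip link set tap14ea2b48-90 up
    ip -n ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc link set tap14ea2b48-91 up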
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.429 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap14ea2b48-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.429 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bffd6947-be88-48e6-8936-e4e0639b8153]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.430 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2915323c-6b3a-4e93-bf1b-d4a8590f095a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539550 systemd-udevd[356762]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.443 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[13deadfa-727b-4a2f-ba7c-e8d25da76168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 systemd-machined[216673]: New machine qemu-86-instance-0000009e.
Nov 29 03:31:19 np0005539550 NetworkManager[49039]: <info>  [1764405079.4542] device (tapc01b1b4d-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:31:19 np0005539550 NetworkManager[49039]: <info>  [1764405079.4555] device (tapc01b1b4d-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:31:19 np0005539550 systemd[1]: Started Virtual Machine qemu-86-instance-0000009e.
Nov 29 03:31:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.469 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3e9430-be3f-485f-8d19-c22d9a34f68c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.497 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[fb96290a-81a9-4590-8044-d64cad487eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 NetworkManager[49039]: <info>  [1764405079.5032] manager: (tap14ea2b48-90): new Veth device (/org/freedesktop/NetworkManager/Devices/324)
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.502 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e831c07a-a033-4de1-b7aa-fe171a8770cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 systemd-udevd[356767]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.534 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7476ca-3271-497c-9e29-74b0d123987b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.536 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b053be-9f46-45d5-a9a1-609402208bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 NetworkManager[49039]: <info>  [1764405079.5606] device (tap14ea2b48-90): carrier: link connected
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.567 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfdfcff-f7d9-47f5-a3cc-89bf863dce3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.582 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7c2dde-1f49-4ee1-9b56-8a643ad1b58b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812722, 'reachable_time': 25480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356795, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.596 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[345c71d9-6934-45eb-8288-e08dce86ccdb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:168b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812722, 'tstamp': 812722}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356796, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.610 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c767a9b0-4cf6-49c8-9f5c-bc4d1cb8bb4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap14ea2b48-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f8:16:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 214], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812722, 'reachable_time': 25480, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356797, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
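The two RTM_NEWLINK replies above are pyroute2 link dumps fetched through privsep inside the namespace; when checking the same state by hand, iproute2 is quicker:

    # The in-namespace end should be UP and carry MAC fa:16:3e:f8:16:8b
    ip -n ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc link show tap14ea2b48-91
    ip -n ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc addr show tap14ea2b48-91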
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.637 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d77cdc-9453-4fc7-89c5-8555e2ba548b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.696 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b34a7f8d-18ca-45e4-99e0-1b8d33aab792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.697 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.698 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.698 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14ea2b48-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.699 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539550 NetworkManager[49039]: <info>  [1764405079.7005] manager: (tap14ea2b48-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Nov 29 03:31:19 np0005539550 kernel: tap14ea2b48-90: entered promiscuous mode
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.702 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.705 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap14ea2b48-90, col_values=(('external_ids', {'iface-id': '42f71355-5b3f-49f9-b3e9-d89b87086d5d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
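The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovs-vsctl operations; the equivalent manual sequence:

    ovs-vsctl --if-exists del-port br-ex tap14ea2b48-90
    ovs-vsctl --may-exist add-port br-int tap14ea2b48-90
    ovs-vsctl set Interface tap14ea2b48-90 external_ids:iface-id=42f71355-5b3f-49f9-b3e9-d89b87086d5d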
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.706 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:19Z|00739|binding|INFO|Releasing lport 42f71355-5b3f-49f9-b3e9-d89b87086d5d from this chassis (sb_readonly=0)
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.724 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.726 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.726 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe1b4e9-7334-4b25-b22e-5ad4317a7a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.727 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/14ea2b48-9984-443b-82fc-568ae98723fc.pid.haproxy
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 14ea2b48-9984-443b-82fc-568ae98723fc
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:31:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:19.727 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'env', 'PROCESS_TAG=haproxy-14ea2b48-9984-443b-82fc-568ae98723fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/14ea2b48-9984-443b-82fc-568ae98723fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
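Once this haproxy comes up it listens on 169.254.169.254:80 inside the namespace only, relaying to the /var/lib/neutron/metadata_proxy socket with the X-OVN-Network-ID header added per the config above. A smoke test from the host (any HTTP status back proves the listener is alive), assuming curl is installed:

    sudo ip netns exec ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc \
        curl -s -o /dev/null -w '%{http_code}\n' http://169.254.169.254/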
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.959 257641 DEBUG nova.compute.manager [req-2acdf21a-ea63-4e78-a75f-598d95c292ff req-ec85fcfb-399d-414b-a5e0-10d559807638 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received event network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.960 257641 DEBUG oslo_concurrency.lockutils [req-2acdf21a-ea63-4e78-a75f-598d95c292ff req-ec85fcfb-399d-414b-a5e0-10d559807638 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.961 257641 DEBUG oslo_concurrency.lockutils [req-2acdf21a-ea63-4e78-a75f-598d95c292ff req-ec85fcfb-399d-414b-a5e0-10d559807638 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.962 257641 DEBUG oslo_concurrency.lockutils [req-2acdf21a-ea63-4e78-a75f-598d95c292ff req-ec85fcfb-399d-414b-a5e0-10d559807638 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.962 257641 DEBUG nova.compute.manager [req-2acdf21a-ea63-4e78-a75f-598d95c292ff req-ec85fcfb-399d-414b-a5e0-10d559807638 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Processing event network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.969 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405079.9689002, 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.970 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] VM Started (Lifecycle Event)#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.975 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.980 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.984 257641 INFO nova.virt.libvirt.driver [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance spawned successfully.#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.985 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:31:19 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.995 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:19.999 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
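The mismatch logged here (DB power_state 0, i.e. NOSTATE, versus VM power_state 1, i.e. RUNNING) is normal mid-spawn; the hypervisor side can be confirmed directly, assuming virsh access on the compute host and taking the domain name from the qemu-86-instance-0000009e machine registered above:

    sudo virsh domstate instance-0000009e
    # -> running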
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.025 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.026 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.027 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.027 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.027 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.028 257641 DEBUG nova.virt.libvirt.driver [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
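The defaults registered above (hw_cdrom_bus=sata, hw_disk_bus=virtio, hw_input_bus=usb, hw_pointer_model=usbtablet, hw_video_model=virtio, hw_vif_model=virtio) are the same image properties an operator can pin explicitly so later rebuilds do not depend on driver defaults; a sketch with the standard client, IMAGE_UUID being a placeholder:

    openstack image set \
        --property hw_disk_bus=virtio \
        --property hw_vif_model=virtio \
        --property hw_video_model=virtio \
        IMAGE_UUID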
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.038 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.039 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405079.9690518, 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.039 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.068 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.071 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405079.9803715, 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.071 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.104 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.107 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.116 257641 INFO nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Took 6.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.117 257641 DEBUG nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.123 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 326 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 485 KiB/s rd, 1.8 MiB/s wr, 94 op/s
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005339014226223711 of space, bias 1.0, pg target 1.6017042678671134 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6463716877210125 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:31:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
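The pg target figures above follow the autoscaler's rule pg_target = capacity_ratio x bias x PG budget, and every row is consistent with a budget of about 300 PGs (for example 100 PGs per OSD across 3 OSDs, an assumption about this cluster); two rows checked with bc:

    # Pool 'vms': 0.005339... x bias 1.0 x 300 ~= 1.6017 (matches the log), then raised to the 32-PG floor
    echo '0.005339014226223711 * 1.0 * 300' | bc -l
    # Pool '.mgr': 2.0538e-05 x 300 ~= 0.006161, quantized to 1
    echo '0.000020538165363856318 * 300' | bc -l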
Nov 29 03:31:20 np0005539550 podman[356867]: 2025-11-29 08:31:20.156991384 +0000 UTC m=+0.069845645 container create d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:20 np0005539550 systemd[1]: Started libpod-conmon-d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406.scope.
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.197 257641 INFO nova.compute.manager [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Took 7.88 seconds to build instance.#033[00m
Nov 29 03:31:20 np0005539550 podman[356867]: 2025-11-29 08:31:20.114749651 +0000 UTC m=+0.027603912 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:31:20 np0005539550 nova_compute[257631]: 2025-11-29 08:31:20.212 257641 DEBUG oslo_concurrency.lockutils [None req-839eb218-a3c8-41ad-b752-2b7a2418c18f 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7f5ccd26b952804cdf8459895f71ef9f1d3282cb3bee77fb441334b0ba71b16/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:20 np0005539550 podman[356867]: 2025-11-29 08:31:20.259657421 +0000 UTC m=+0.172511752 container init d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:31:20 np0005539550 podman[356867]: 2025-11-29 08:31:20.269259335 +0000 UTC m=+0.182113586 container start d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:20 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[356882]: [NOTICE]   (356886) : New worker (356888) forked
Nov 29 03:31:20 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[356882]: [NOTICE]   (356886) : Loading success.
Nov 29 03:31:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:20.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:22 np0005539550 nova_compute[257631]: 2025-11-29 08:31:22.082 257641 DEBUG nova.compute.manager [req-2005a428-1f8b-4b34-953f-704fe8a1eb85 req-279820cf-a74d-45fa-9542-a5b9cbe9d0f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received event network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:22 np0005539550 nova_compute[257631]: 2025-11-29 08:31:22.083 257641 DEBUG oslo_concurrency.lockutils [req-2005a428-1f8b-4b34-953f-704fe8a1eb85 req-279820cf-a74d-45fa-9542-a5b9cbe9d0f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:22 np0005539550 nova_compute[257631]: 2025-11-29 08:31:22.083 257641 DEBUG oslo_concurrency.lockutils [req-2005a428-1f8b-4b34-953f-704fe8a1eb85 req-279820cf-a74d-45fa-9542-a5b9cbe9d0f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:22 np0005539550 nova_compute[257631]: 2025-11-29 08:31:22.083 257641 DEBUG oslo_concurrency.lockutils [req-2005a428-1f8b-4b34-953f-704fe8a1eb85 req-279820cf-a74d-45fa-9542-a5b9cbe9d0f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:22 np0005539550 nova_compute[257631]: 2025-11-29 08:31:22.083 257641 DEBUG nova.compute.manager [req-2005a428-1f8b-4b34-953f-704fe8a1eb85 req-279820cf-a74d-45fa-9542-a5b9cbe9d0f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] No waiting events found dispatching network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:22 np0005539550 nova_compute[257631]: 2025-11-29 08:31:22.084 257641 WARNING nova.compute.manager [req-2005a428-1f8b-4b34-953f-704fe8a1eb85 req-279820cf-a74d-45fa-9542-a5b9cbe9d0f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received unexpected event network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d for instance with vm_state active and task_state None.#033[00m
Nov 29 03:31:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Nov 29 03:31:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.131 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:23 np0005539550 podman[356951]: 2025-11-29 08:31:23.387056771 +0000 UTC m=+0.117439313 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 03:31:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.651 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.769 257641 DEBUG oslo_concurrency.lockutils [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.770 257641 DEBUG oslo_concurrency.lockutils [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.770 257641 DEBUG nova.compute.manager [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.775 257641 DEBUG nova.compute.manager [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.776 257641 DEBUG nova.objects.instance [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'flavor' on Instance uuid 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:31:23 np0005539550 nova_compute[257631]: 2025-11-29 08:31:23.801 257641 DEBUG nova.virt.libvirt.driver [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Nov 29 03:31:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 326 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Nov 29 03:31:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:24.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:25.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 3.5 MiB/s wr, 237 op/s
Nov 29 03:31:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:31:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:27.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.5 MiB/s wr, 211 op/s
Nov 29 03:31:28 np0005539550 nova_compute[257631]: 2025-11-29 08:31:28.135 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:28.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:28 np0005539550 nova_compute[257631]: 2025-11-29 08:31:28.653 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:29.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.6 MiB/s wr, 229 op/s
Nov 29 03:31:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:30.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:31.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Nov 29 03:31:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:33 np0005539550 nova_compute[257631]: 2025-11-29 08:31:33.137 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:33.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:33 np0005539550 nova_compute[257631]: 2025-11-29 08:31:33.655 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:33 np0005539550 nova_compute[257631]: 2025-11-29 08:31:33.846 257641 DEBUG nova.virt.libvirt.driver [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Nov 29 03:31:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 376 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.3 MiB/s wr, 182 op/s
Nov 29 03:31:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:34.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:35.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 393 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.8 MiB/s wr, 161 op/s
Nov 29 03:31:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:36Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2e:63:5b 10.100.0.7
Nov 29 03:31:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:36Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2e:63:5b 10.100.0.7
Nov 29 03:31:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:36.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:37.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.959 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.959 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.959 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:31:37 np0005539550 nova_compute[257631]: 2025-11-29 08:31:37.959 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 393 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 536 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.140 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2312782904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:38.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.444 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.540 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.540 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-0000009e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.656 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.708 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.709 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4086MB free_disk=20.852489471435547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.709 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.709 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.794 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.795 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.795 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:31:38 np0005539550 nova_compute[257631]: 2025-11-29 08:31:38.835 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:39.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201334011' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:39 np0005539550 nova_compute[257631]: 2025-11-29 08:31:39.673 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.838s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:39 np0005539550 nova_compute[257631]: 2025-11-29 08:31:39.683 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:31:39 np0005539550 nova_compute[257631]: 2025-11-29 08:31:39.729 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:31:39 np0005539550 nova_compute[257631]: 2025-11-29 08:31:39.760 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:31:39 np0005539550 nova_compute[257631]: 2025-11-29 08:31:39.760 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 399 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 741 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Nov 29 03:31:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:40.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:40 np0005539550 nova_compute[257631]: 2025-11-29 08:31:40.762 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:40 np0005539550 nova_compute[257631]: 2025-11-29 08:31:40.762 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:31:40 np0005539550 nova_compute[257631]: 2025-11-29 08:31:40.762 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:31:40 np0005539550 nova_compute[257631]: 2025-11-29 08:31:40.847 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:31:40 np0005539550 nova_compute[257631]: 2025-11-29 08:31:40.847 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:31:40 np0005539550 nova_compute[257631]: 2025-11-29 08:31:40.847 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:31:40 np0005539550 nova_compute[257631]: 2025-11-29 08:31:40.847 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:31:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:41.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 429 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 205 op/s
Nov 29 03:31:42 np0005539550 nova_compute[257631]: 2025-11-29 08:31:42.238 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Updating instance_info_cache with network_info: [{"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:31:42 np0005539550 nova_compute[257631]: 2025-11-29 08:31:42.259 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-430bbf28-daad-4d0e-a9fe-3a2affb2ae35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:31:42 np0005539550 nova_compute[257631]: 2025-11-29 08:31:42.259 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:31:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:42.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:42 np0005539550 nova_compute[257631]: 2025-11-29 08:31:42.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:42 np0005539550 nova_compute[257631]: 2025-11-29 08:31:42.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:31:42 np0005539550 nova_compute[257631]: 2025-11-29 08:31:42.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.144 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:43.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.658 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.682 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.682 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.707 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:31:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.768 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.768 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.775 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.776 257641 INFO nova.compute.claims [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:31:43 np0005539550 nova_compute[257631]: 2025-11-29 08:31:43.889 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:31:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 451 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Nov 29 03:31:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561936920' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.308 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.313 257641 DEBUG nova.compute.provider_tree [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.330 257641 DEBUG nova.scheduler.client.report [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.353 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.354 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:31:44 np0005539550 podman[357103]: 2025-11-29 08:31:44.365869588 +0000 UTC m=+0.088469766 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:31:44 np0005539550 podman[357102]: 2025-11-29 08:31:44.379233197 +0000 UTC m=+0.104189705 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.409 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.410 257641 DEBUG nova.network.neutron [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.426 257641 INFO nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:31:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:44.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.445 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.542 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.544 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.544 257641 INFO nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Creating image(s)#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.580 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.615 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.692 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.696 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.728 257641 DEBUG nova.policy [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5b0953fb7cc415fb26cf4ffdd5908c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.765 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.765 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.766 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.766 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.789 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.792 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.891 257641 DEBUG nova.virt.libvirt.driver [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:31:44 np0005539550 nova_compute[257631]: 2025-11-29 08:31:44.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.073 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.147 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] resizing rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
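After importing the flattened base file into the vms pool with the rbd CLI, nova grows the new image to the flavor's 1 GiB root disk. A sketch of that resize step using the python-rbd bindings directly rather than nova's rbd_utils helper; pool, client name, and image name are copied from the log:

    # Sketch: resize the freshly imported RBD image to 1073741824 bytes, as in
    # the `resizing rbd image ... to 1073741824` line above.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '1509e19f-b5e6-496d-a0d9-d6740970fad0_disk') as image:
                image.resize(1073741824)  # bytes
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()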
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.248 257641 DEBUG nova.objects.instance [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.265 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.265 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Ensure instance console log exists: /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.266 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.266 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.267 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
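The anonymous `HEAD / HTTP/1.0` requests that recur every second or two from 192.168.122.100 and .102 have the shape of load-balancer health checks against radosgw. A minimal probe for comparison; the hostname and port are placeholders, since the beast lines record only the client addresses, not the listener:

    # Sketch of an equivalent health probe. The real checks speak HTTP/1.0;
    # http.client sends HTTP/1.1, which RGW answers the same way.
    import http.client

    conn = http.client.HTTPConnection('rgw.example.com', 8080, timeout=2)  # placeholder endpoint
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200, matching http_status=200
    conn.close()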
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.910 257641 DEBUG nova.network.neutron [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Successfully created port: 39164f5e-3f66-4cf5-8cc3-3903f7387b53 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:31:45 np0005539550 nova_compute[257631]: 2025-11-29 08:31:45.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 493 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.1 MiB/s wr, 267 op/s
Nov 29 03:31:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:46.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:46 np0005539550 nova_compute[257631]: 2025-11-29 08:31:46.973 257641 DEBUG nova.network.neutron [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Successfully updated port: 39164f5e-3f66-4cf5-8cc3-3903f7387b53 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.000 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.001 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.001 257641 DEBUG nova.network.neutron [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.097 257641 DEBUG nova.compute.manager [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-changed-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.098 257641 DEBUG nova.compute.manager [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Refreshing instance network info cache due to event network-changed-39164f5e-3f66-4cf5-8cc3-3903f7387b53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.098 257641 DEBUG oslo_concurrency.lockutils [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:47 np0005539550 kernel: tapc01b1b4d-0c (unregistering): left promiscuous mode
Nov 29 03:31:47 np0005539550 NetworkManager[49039]: <info>  [1764405107.2086] device (tapc01b1b4d-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.210 257641 DEBUG nova.network.neutron [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:31:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:47Z|00740|binding|INFO|Releasing lport c01b1b4d-0c44-49aa-8eee-26fc15a26f4d from this chassis (sb_readonly=0)
Nov 29 03:31:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:47Z|00741|binding|INFO|Setting lport c01b1b4d-0c44-49aa-8eee-26fc15a26f4d down in Southbound
Nov 29 03:31:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:47Z|00742|binding|INFO|Removing iface tapc01b1b4d-0c ovn-installed in OVS
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.254 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.255 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.263 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:63:5b 10.100.0.7'], port_security=['fa:16:3e:2e:63:5b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '430bbf28-daad-4d0e-a9fe-3a2affb2ae35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-14ea2b48-9984-443b-82fc-568ae98723fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5970d12b2c42419e889cd48de28c4b86', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1f4c15e1-3db4-4257-8a40-7ffdc4076590', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=deb2b192-93f0-4938-a0e1-77284f619a46, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.264 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c01b1b4d-0c44-49aa-8eee-26fc15a26f4d in datapath 14ea2b48-9984-443b-82fc-568ae98723fc unbound from our chassis#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.266 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 14ea2b48-9984-443b-82fc-568ae98723fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
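What the agent just did: a Port_Binding update matched a subscribed row event, the port was reported unbound from this chassis, and with no VIFs left on datapath 14ea2b48 the metadata namespace becomes garbage. A sketch of the ovsdbapp row-event shape involved, as an illustration rather than neutron's actual class:

    # Sketch: subscribe to SB Port_Binding updates and fire when a port's `up`
    # flips away from [True], as in the matched row logged above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            return getattr(old, 'up', None) == [True] and row.up != [True]

        def run(self, event, row, old):
            print('port %s unbound' % row.logical_port)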
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.268 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[154e2cd8-39a1-41ee-a846-d6f267d2fe23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.270 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc namespace which is not needed anymore#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539550 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000009e.scope: Deactivated successfully.
Nov 29 03:31:47 np0005539550 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000009e.scope: Consumed 14.995s CPU time.
Nov 29 03:31:47 np0005539550 systemd-machined[216673]: Machine qemu-86-instance-0000009e terminated.
Nov 29 03:31:47 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[356882]: [NOTICE]   (356886) : haproxy version is 2.8.14-c23fe91
Nov 29 03:31:47 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[356882]: [NOTICE]   (356886) : path to executable is /usr/sbin/haproxy
Nov 29 03:31:47 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[356882]: [WARNING]  (356886) : Exiting Master process...
Nov 29 03:31:47 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[356882]: [ALERT]    (356886) : Current worker (356888) exited with code 143 (Terminated)
Nov 29 03:31:47 np0005539550 neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc[356882]: [WARNING]  (356886) : All workers exited. Exiting... (0)
Nov 29 03:31:47 np0005539550 systemd[1]: libpod-d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406.scope: Deactivated successfully.
Nov 29 03:31:47 np0005539550 podman[357333]: 2025-11-29 08:31:47.418675343 +0000 UTC m=+0.043170816 container died d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:31:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406-userdata-shm.mount: Deactivated successfully.
Nov 29 03:31:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d7f5ccd26b952804cdf8459895f71ef9f1d3282cb3bee77fb441334b0ba71b16-merged.mount: Deactivated successfully.
Nov 29 03:31:47 np0005539550 podman[357333]: 2025-11-29 08:31:47.464379313 +0000 UTC m=+0.088874796 container cleanup d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:31:47 np0005539550 systemd[1]: libpod-conmon-d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406.scope: Deactivated successfully.
Nov 29 03:31:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:47 np0005539550 podman[357365]: 2025-11-29 08:31:47.531919437 +0000 UTC m=+0.045102005 container remove d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.537 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e927c895-47ab-4d93-b5d0-e7c855d5363c]: (4, ('Sat Nov 29 08:31:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406)\nd337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406\nSat Nov 29 08:31:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc (d337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406)\nd337c3031ea55b7b01186244bf92425ba3863c4909d0253b988f684e8f10d406\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.539 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[58a8f6c6-b88f-4dd7-b1c2-61d85c716c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.540 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14ea2b48-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539550 kernel: tap14ea2b48-90: left promiscuous mode
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.560 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.563 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1ce5a868-07f2-4b8c-8d0c-455ba49b9318]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.566 257641 DEBUG nova.compute.manager [req-2ed53463-375d-45c8-9922-6c643a91d47b req-34ec03ec-cf41-48f9-b902-36ecd2b49208 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received event network-vif-unplugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.567 257641 DEBUG oslo_concurrency.lockutils [req-2ed53463-375d-45c8-9922-6c643a91d47b req-34ec03ec-cf41-48f9-b902-36ecd2b49208 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.567 257641 DEBUG oslo_concurrency.lockutils [req-2ed53463-375d-45c8-9922-6c643a91d47b req-34ec03ec-cf41-48f9-b902-36ecd2b49208 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.567 257641 DEBUG oslo_concurrency.lockutils [req-2ed53463-375d-45c8-9922-6c643a91d47b req-34ec03ec-cf41-48f9-b902-36ecd2b49208 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.567 257641 DEBUG nova.compute.manager [req-2ed53463-375d-45c8-9922-6c643a91d47b req-34ec03ec-cf41-48f9-b902-36ecd2b49208 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] No waiting events found dispatching network-vif-unplugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.568 257641 WARNING nova.compute.manager [req-2ed53463-375d-45c8-9922-6c643a91d47b req-34ec03ec-cf41-48f9-b902-36ecd2b49208 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received unexpected event network-vif-unplugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d for instance with vm_state active and task_state powering-off.#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.588 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[899229d5-1376-4ee7-8b2f-688217e3d25c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.589 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e6232bbf-c76a-4c7f-bae9-f4615092bb4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.607 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[789c4aa5-91b0-4912-abd6-1028789c995e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812715, 'reachable_time': 33851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357394, 'error': None, 'target': 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.609 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:31:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:47.609 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[63ae28de-2142-4d2e-a7e6-32fdfb4d4e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:47 np0005539550 systemd[1]: run-netns-ovnmeta\x2d14ea2b48\x2d9984\x2d443b\x2d82fc\x2d568ae98723fc.mount: Deactivated successfully.
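remove_netns in neutron's privileged ip_lib sits behind privsep but ultimately deletes the namespace the same way `ip netns delete` would. A direct sketch with pyroute2, the library neutron uses for this, with the privsep indirection omitted:

    # Sketch of the namespace removal logged above.
    from pyroute2 import netns

    ns = 'ovnmeta-14ea2b48-9984-443b-82fc-568ae98723fc'
    if ns in netns.listnetns():
        netns.remove(ns)  # equivalent to `ip netns delete <ns>`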
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.905 257641 INFO nova.virt.libvirt.driver [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance shutdown successfully after 24 seconds.#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.912 257641 INFO nova.virt.libvirt.driver [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance destroyed successfully.#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.912 257641 DEBUG nova.objects.instance [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'numa_topology' on Instance uuid 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:47 np0005539550 nova_compute[257631]: 2025-11-29 08:31:47.957 257641 DEBUG nova.compute.manager [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:48 np0005539550 nova_compute[257631]: 2025-11-29 08:31:48.085 257641 DEBUG oslo_concurrency.lockutils [None req-3bbfc37a-0d19-4c54-a8e1-6f1fd7e14751 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
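Reading the 430bbf28 thread end to end: the stop request held the instance lock for 24.315 s while _clean_shutdown polled the guest, resent the ACPI shutdown at the 21 s mark (logged earlier), and saw the domain power off at 24 s. An illustrative loop over libvirt-python, with the timing knobs as assumptions:

    # Sketch of a clean-shutdown loop like the one traced in the log: poll the
    # domain, periodically resend ACPI shutdown, stop when it powers off.
    import time
    import libvirt

    def clean_shutdown(dom, timeout=60, retry_interval=10):
        dom.shutdown()                          # initial ACPI request
        for elapsed in range(1, timeout + 1):
            time.sleep(1)
            state, _reason = dom.state()
            if state == libvirt.VIR_DOMAIN_SHUTOFF:
                return True                     # "Instance shutdown successfully"
            if elapsed % retry_interval == 0:
                dom.shutdown()                  # "resending shutdown"
        return False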
Nov 29 03:31:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 493 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 246 op/s
Nov 29 03:31:48 np0005539550 nova_compute[257631]: 2025-11-29 08:31:48.145 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:48.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:48 np0005539550 nova_compute[257631]: 2025-11-29 08:31:48.660 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.047 257641 DEBUG nova.network.neutron [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating instance_info_cache with network_info: [{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.097 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.097 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Instance network_info: |[{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.098 257641 DEBUG oslo_concurrency.lockutils [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.098 257641 DEBUG nova.network.neutron [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Refreshing network info cache for port 39164f5e-3f66-4cf5-8cc3-3903f7387b53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.103 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Start _get_guest_xml network_info=[{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.109 257641 WARNING nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.120 257641 DEBUG nova.virt.libvirt.host [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.121 257641 DEBUG nova.virt.libvirt.host [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.136 257641 DEBUG nova.virt.libvirt.host [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.138 257641 DEBUG nova.virt.libvirt.host [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.140 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.140 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.141 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.141 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.142 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.142 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.143 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.143 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.144 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.144 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.145 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.145 257641 DEBUG nova.virt.hardware [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
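The hardware.py trace above is nova enumerating CPU topologies: with no flavor or image constraints (all 0:0:0), the limits default to 65536 per dimension, and for a 1-vCPU m1.nano the only product that works is 1 socket x 1 core x 1 thread. A toy version of that enumeration, not nova's code:

    # Toy re-implementation of the selection traced above: (sockets, cores,
    # threads) triples whose product equals the vCPU count, within the limits.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -> "Got 1 possible topologies"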
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.151 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:49.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856229948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.649 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
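The half-second `ceph mon dump` call gives nova the monitor map it needs before writing the guest's rbd disk definition. A sketch of consuming that output; the command line is copied from the log, while the JSON keys ('mons', 'name', 'addr') are recalled from the mon dump format and should be treated as assumptions:

    # Sketch: list monitor names/addresses from `ceph mon dump --format=json`.
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    for mon in json.loads(out).get('mons', []):   # key names are an assumption
        print(mon.get('name'), mon.get('addr'))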
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.686 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.691 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.736 257641 DEBUG nova.compute.manager [req-a8bd0683-0680-4dc6-9087-0ac122f47fdf req-a471f0c7-63fa-4a9b-b77a-54a555b1d6e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received event network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.736 257641 DEBUG oslo_concurrency.lockutils [req-a8bd0683-0680-4dc6-9087-0ac122f47fdf req-a471f0c7-63fa-4a9b-b77a-54a555b1d6e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.737 257641 DEBUG oslo_concurrency.lockutils [req-a8bd0683-0680-4dc6-9087-0ac122f47fdf req-a471f0c7-63fa-4a9b-b77a-54a555b1d6e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.737 257641 DEBUG oslo_concurrency.lockutils [req-a8bd0683-0680-4dc6-9087-0ac122f47fdf req-a471f0c7-63fa-4a9b-b77a-54a555b1d6e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.737 257641 DEBUG nova.compute.manager [req-a8bd0683-0680-4dc6-9087-0ac122f47fdf req-a471f0c7-63fa-4a9b-b77a-54a555b1d6e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] No waiting events found dispatching network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.737 257641 WARNING nova.compute.manager [req-a8bd0683-0680-4dc6-9087-0ac122f47fdf req-a471f0c7-63fa-4a9b-b77a-54a555b1d6e3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received unexpected event network-vif-plugged-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d for instance with vm_state stopped and task_state None.#033[00m
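Both "Received unexpected event" warnings are benign: Neutron pushed network-vif-unplugged/plugged notifications for c01b1b4d while no compute thread had registered a waiter for them, so pop_instance_event found nothing to hand the event to. A toy version of that waiter registry, purely illustrative:

    # Toy sketch of the registry behind pop_instance_event: expected events get
    # a waiter; arrivals without one are logged as unexpected.
    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(uuid, name):
        waiters[(uuid, name)] = threading.Event()

    def pop_instance_event(uuid, name):
        ev = waiters.pop((uuid, name), None)
        if ev is None:
            print('Received unexpected event %s for instance %s' % (name, uuid))
        else:
            ev.set()  # wake the thread blocked waiting on this event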
Nov 29 03:31:49 np0005539550 nova_compute[257631]: 2025-11-29 08:31:49.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:31:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 499 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 3.7 MiB/s wr, 282 op/s
Nov 29 03:31:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:50.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:31:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3848354683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.881 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.882 257641 DEBUG nova.virt.libvirt.vif [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=161,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-od0z31z7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=1509e19f-b5e6-496d-a0d9-d6740970fad0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.883 257641 DEBUG nova.network.os_vif_util [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.883 257641 DEBUG nova.network.os_vif_util [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:f9:49,bridge_name='br-int',has_traffic_filtering=True,id=39164f5e-3f66-4cf5-8cc3-3903f7387b53,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39164f5e-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
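[Annotation] The lines above show nova_to_osvif_vif translating Nova's legacy VIF dict into an os-vif versioned object (VIFOpenVSwitch). A minimal sketch of building and plugging the same object by hand through the public os-vif API — field values are copied from the log; Nova's converter handles many more VIF types and edge cases:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the registered os-vif plugins (ovs, etc.)

    net = network.Network(id='ed50ff83-51d1-4b35-b85c-1cbe6fb812c6',
                          bridge='br-int', mtu=1442)
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='39164f5e-3f66-4cf5-8cc3-3903f7387b53')
    osvif_vif = vif.VIFOpenVSwitch(
        id='39164f5e-3f66-4cf5-8cc3-3903f7387b53',
        address='fa:16:3e:8c:f9:49',
        bridge_name='br-int',
        has_traffic_filtering=True,   # details["port_filter"] in the log
        network=net,
        port_profile=profile,
        preserve_on_delete=False,
        vif_name='tap39164f5e-3f')

    inst = instance_info.InstanceInfo(
        uuid='1509e19f-b5e6-496d-a0d9-d6740970fad0',
        name='multiattach-server-0')
    os_vif.plug(osvif_vif, inst)  # the "Plugging vif ..." step further down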
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.885 257641 DEBUG nova.objects.instance [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.912 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <uuid>1509e19f-b5e6-496d-a0d9-d6740970fad0</uuid>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <name>instance-000000a1</name>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <nova:name>multiattach-server-0</nova:name>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:31:49</nova:creationTime>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <nova:port uuid="39164f5e-3f66-4cf5-8cc3-3903f7387b53">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <entry name="serial">1509e19f-b5e6-496d-a0d9-d6740970fad0</entry>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <entry name="uuid">1509e19f-b5e6-496d-a0d9-d6740970fad0</entry>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1509e19f-b5e6-496d-a0d9-d6740970fad0_disk">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1509e19f-b5e6-496d-a0d9-d6740970fad0_disk.config">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:8c:f9:49"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <target dev="tap39164f5e-3f"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/console.log" append="off"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:31:50 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:31:50 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:31:50 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:31:50 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
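[Annotation] The XML dump ending above (_get_guest_xml) is the complete domain definition Nova hands to libvirt. A rough, simplified equivalent of the launch step with the libvirt-python bindings — Nova's driver defines the domain persistently and does more bookkeeping around events and devices; this only shows the call family, with domain.xml assumed to hold the dump above:

    import libvirt

    with open('domain.xml') as f:   # assumed copy of the XML logged above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        # createXML defines and starts a transient domain in one call.
        dom = conn.createXML(xml, 0)
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()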
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.913 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Preparing to wait for external event network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.913 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.914 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.914 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
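[Annotation] The acquire/release pair above is oslo.concurrency's lockutils serializing access to the per-instance event registry. The same primitive in isolation, as a sketch ('some-resource' is a placeholder name):

    from oslo_concurrency import lockutils

    # In-process lock, like the "<uuid>-events" lock in the log.
    with lockutils.lock('1509e19f-b5e6-496d-a0d9-d6740970fad0-events'):
        pass  # create-or-get the event entry here

    # The decorator form used throughout Nova:
    @lockutils.synchronized('some-resource')
    def do_work():
        pass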
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.914 257641 DEBUG nova.virt.libvirt.vif [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=161,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-od0z31z7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:31:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=1509e19f-b5e6-496d-a0d9-d6740970fad0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.915 257641 DEBUG nova.network.os_vif_util [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.915 257641 DEBUG nova.network.os_vif_util [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:f9:49,bridge_name='br-int',has_traffic_filtering=True,id=39164f5e-3f66-4cf5-8cc3-3903f7387b53,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39164f5e-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.916 257641 DEBUG os_vif [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:f9:49,bridge_name='br-int',has_traffic_filtering=True,id=39164f5e-3f66-4cf5-8cc3-3903f7387b53,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39164f5e-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.917 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.917 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.921 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.921 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39164f5e-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.922 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39164f5e-3f, col_values=(('external_ids', {'iface-id': '39164f5e-3f66-4cf5-8cc3-3903f7387b53', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:f9:49', 'vm-uuid': '1509e19f-b5e6-496d-a0d9-d6740970fad0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
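[Annotation] The ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand) are what os-vif's ovs plugin issues against the local ovsdb-server. A condensed sketch of the same sequence through ovsdbapp's Open_vSwitch schema API — the socket path is an assumption (os-vif reads it from its ovsdb_connection option), and the log actually ran the bridge and port steps as separate transactions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed default socket
    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap39164f5e-3f', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap39164f5e-3f',
            ('external_ids', {
                'iface-id': '39164f5e-3f66-4cf5-8cc3-3903f7387b53',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:8c:f9:49',
                'vm-uuid': '1509e19f-b5e6-496d-a0d9-d6740970fad0'})))

Setting external_ids:iface-id is what lets ovn-controller match this OVS interface to the Neutron port and claim it, as the binding lines further down show.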
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:50 np0005539550 NetworkManager[49039]: <info>  [1764405110.9876] manager: (tap39164f5e-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/326)
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.989 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.993 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:50 np0005539550 nova_compute[257631]: 2025-11-29 08:31:50.993 257641 INFO os_vif [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:f9:49,bridge_name='br-int',has_traffic_filtering=True,id=39164f5e-3f66-4cf5-8cc3-3903f7387b53,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39164f5e-3f')#033[00m
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.192 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.194 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.194 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:8c:f9:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.195 257641 INFO nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Using config drive#033[00m
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.229 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.919 257641 DEBUG nova.network.neutron [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updated VIF entry in instance network info cache for port 39164f5e-3f66-4cf5-8cc3-3903f7387b53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.920 257641 DEBUG nova.network.neutron [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating instance_info_cache with network_info: [{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:51 np0005539550 nova_compute[257631]: 2025-11-29 08:31:51.936 257641 DEBUG oslo_concurrency.lockutils [req-f20d1774-816e-44ea-8187-a7eb27ee3fd1 req-7db662de-3611-4d7e-b31c-80deafc43a85 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.064 257641 INFO nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Creating config drive at /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.072 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplf5h9ic execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
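[Annotation] The mkisofs invocation above is driven through oslo.concurrency's processutils; the following sketch mirrors it with the arguments copied from the log (the /tmp directory holds the rendered config-drive tree, and the publisher string is passed as a single argument even though journald renders it unquoted):

    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config')
    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', iso,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpplf5h9ic')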
Nov 29 03:31:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 511 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.8 MiB/s wr, 277 op/s
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.190 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.191 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.191 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.192 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.192 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.193 257641 INFO nova.compute.manager [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Terminating instance#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.194 257641 DEBUG nova.compute.manager [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.202 257641 INFO nova.virt.libvirt.driver [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Instance destroyed successfully.#033[00m
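[Annotation] "Instance destroyed successfully" is the libvirt side of _shutdown_instance for the interleaved delete of instance 430bbf28 (which was already stopped, per vm_state='stopped' below). A simplified libvirt-python sketch of that step — Nova additionally tears down volumes and VIFs, as the following lines show:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('430bbf28-daad-4d0e-a9fe-3a2affb2ae35')
    if dom.isActive():
        dom.destroy()    # hard power-off of the qemu process
    dom.undefine()       # drop the persistent definition
    conn.close()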
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.202 257641 DEBUG nova.objects.instance [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lazy-loading 'resources' on Instance uuid 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.207 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplf5h9ic" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.239 257641 DEBUG nova.storage.rbd_utils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.243 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.277 257641 DEBUG nova.virt.libvirt.vif [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:31:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1381867325',display_name='tempest-Íñstáñcé-1094005302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1381867325',id=158,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:31:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='5970d12b2c42419e889cd48de28c4b86',ramdisk_id='',reservation_id='r-0pj0c0dw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1509574488',owner_user_name='tempest-ServersTestJSON-1509574488-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:49Z,user_data=None,user_id='0741d46905e94415a372bd62751dff66',uuid=430bbf28-daad-4d0e-a9fe-3a2affb2ae35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.278 257641 DEBUG nova.network.os_vif_util [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converting VIF {"id": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "address": "fa:16:3e:2e:63:5b", "network": {"id": "14ea2b48-9984-443b-82fc-568ae98723fc", "bridge": "br-int", "label": "tempest-ServersTestJSON-1937273828-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5970d12b2c42419e889cd48de28c4b86", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc01b1b4d-0c", "ovs_interfaceid": "c01b1b4d-0c44-49aa-8eee-26fc15a26f4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.279 257641 DEBUG nova.network.os_vif_util [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2e:63:5b,bridge_name='br-int',has_traffic_filtering=True,id=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc01b1b4d-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.279 257641 DEBUG os_vif [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:63:5b,bridge_name='br-int',has_traffic_filtering=True,id=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc01b1b4d-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.282 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.282 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc01b1b4d-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
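[Annotation] Unplugging reverses the earlier AddPortCommand with a single DelPortCommand. Through the same ovsdbapp schema API as the plug sketch above (socket path again assumed):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapc01b1b4d-0c', bridge='br-int',
                             if_exists=True))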
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.349 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.351 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.354 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.357 257641 INFO os_vif [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2e:63:5b,bridge_name='br-int',has_traffic_filtering=True,id=c01b1b4d-0c44-49aa-8eee-26fc15a26f4d,network=Network(14ea2b48-9984-443b-82fc-568ae98723fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc01b1b4d-0c')#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.412 257641 DEBUG oslo_concurrency.processutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config 1509e19f-b5e6-496d-a0d9-d6740970fad0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.413 257641 INFO nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Deleting local config drive /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config because it was imported into RBD.#033[00m
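[Annotation] The two lines above complete the config-drive flow: the local ISO is imported into the Ceph vms pool, then the on-disk copy is removed. The import-then-delete pattern reduces to roughly (command arguments copied from the log):

    import os
    from oslo_concurrency import processutils

    local = ('/var/lib/nova/instances/'
             '1509e19f-b5e6-496d-a0d9-d6740970fad0/disk.config')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', local,
        '1509e19f-b5e6-496d-a0d9-d6740970fad0_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(local)   # the "Deleting local config drive" step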
Nov 29 03:31:52 np0005539550 kernel: tap39164f5e-3f: entered promiscuous mode
Nov 29 03:31:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:52 np0005539550 NetworkManager[49039]: <info>  [1764405112.4557] manager: (tap39164f5e-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/327)
Nov 29 03:31:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:52.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:52Z|00743|binding|INFO|Claiming lport 39164f5e-3f66-4cf5-8cc3-3903f7387b53 for this chassis.
Nov 29 03:31:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:52Z|00744|binding|INFO|39164f5e-3f66-4cf5-8cc3-3903f7387b53: Claiming fa:16:3e:8c:f9:49 10.100.0.12
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.458 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.471 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:f9:49 10.100.0.12'], port_security=['fa:16:3e:8c:f9:49 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1509e19f-b5e6-496d-a0d9-d6740970fad0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=39164f5e-3f66-4cf5-8cc3-3903f7387b53) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.472 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 39164f5e-3f66-4cf5-8cc3-3903f7387b53 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis#033[00m
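[Annotation] PortBindingUpdatedEvent above is an ovsdbapp RowEvent watching the OVN southbound Port_Binding table; the metadata agent reacts when the chassis column flips from empty to this host. A skeletal sketch of such an event class — the agent's real matching logic is more involved; the class/constructor/run shapes follow ovsdbapp's RowEvent, and the chassis check here is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port just gained a chassis (got bound).
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('port %s bound here' % row.logical_port)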
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.473 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.485 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[47dd2045-2156-457b-8c9b-73f373bc7da7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.486 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH taped50ff83-51 in ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
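[Annotation] The agent now builds the namespace plumbing: a veth pair whose -50 end stays in the root namespace (to be wired into br-int) while the -51 peer moves into the ovnmeta- namespace. Neutron performs this through privsep-wrapped pyroute2 calls; a standalone pyroute2 sketch of the same plumbing, with no idempotence or error handling:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6'
    netns.create(ns)                 # fails if the namespace already exists
    ip = IPRoute()
    ip.link('add', ifname='taped50ff83-50', kind='veth',
            peer={'ifname': 'taped50ff83-51', 'net_ns_fd': ns})
    idx = ip.link_lookup(ifname='taped50ff83-50')[0]
    ip.link('set', index=idx, state='up')
    ip.close()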
Nov 29 03:31:52 np0005539550 systemd-machined[216673]: New machine qemu-87-instance-000000a1.
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.489 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface taped50ff83-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.489 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8cddb798-e377-47d8-84e5-c8361ed77a12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.490 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b94f97dc-59a4-44d9-8ee5-d544c8cf0664]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.504 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[a437072d-6ff9-4f6f-a9e4-d80812d570f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 systemd[1]: Started Virtual Machine qemu-87-instance-000000a1.
Nov 29 03:31:52 np0005539550 systemd-udevd[357556]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.528 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[260c85fe-74f2-4d98-b832-43bfe3388794]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.534 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.540 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 NetworkManager[49039]: <info>  [1764405112.5435] device (tap39164f5e-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:31:52 np0005539550 NetworkManager[49039]: <info>  [1764405112.5447] device (tap39164f5e-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:31:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:52Z|00745|binding|INFO|Setting lport 39164f5e-3f66-4cf5-8cc3-3903f7387b53 ovn-installed in OVS
Nov 29 03:31:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:52Z|00746|binding|INFO|Setting lport 39164f5e-3f66-4cf5-8cc3-3903f7387b53 up in Southbound
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.548 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.559 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c0b229-effc-4aff-be4b-55cc5ed06213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 NetworkManager[49039]: <info>  [1764405112.5663] manager: (taped50ff83-50): new Veth device (/org/freedesktop/NetworkManager/Devices/328)
Nov 29 03:31:52 np0005539550 systemd-udevd[357561]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.566 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e69f638c-731d-43da-a347-abb72f12fbd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.596 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[646f701f-abd8-485a-b3a2-409de8688d68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.599 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2514be1c-05c9-48fb-be1d-9e01b9916aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 NetworkManager[49039]: <info>  [1764405112.6192] device (taped50ff83-50): carrier: link connected
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.624 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[519280ad-a992-4ce8-9876-347e5bfcc0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.640 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aefde818-3b87-48fa-a461-c0e16b3bb09c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 18915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357586, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.655 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8bacf2-a590-4cf4-ba02-569bde4b3b6b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:60f2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816028, 'tstamp': 816028}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357587, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.671 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[214faa54-e00f-4391-9184-f9b9e844a8be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 18915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357588, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
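The privsep replies above are netlink dumps: the agent's privileged daemon runs the query inside the ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace and hands back pyroute2-style RTM_NEWLINK/RTM_NEWADDR messages, whose IFLA_*/IFA_* attribute lists are what is printed verbatim in the replies. A minimal sketch of the same query, assuming the pyroute2 library is available, the namespace still exists, and the script runs as root:

# Sketch only: reproduce the link/address dump the privsep daemon returns.
from pyroute2 import NetNS

with NetNS('ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6') as ns:
    for link in ns.get_links():
        print(link.get_attr('IFLA_IFNAME'),     # e.g. 'taped50ff83-51'
              link.get_attr('IFLA_ADDRESS'),    # e.g. 'fa:16:3e:51:60:f2'
              link.get_attr('IFLA_OPERSTATE'))  # e.g. 'UP'
    for addr in ns.get_addr():
        print(addr.get_attr('IFA_ADDRESS'))     # e.g. 'fe80::f816:3eff:fe51:60f2'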
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.700 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdcb387-2cfc-4909-ab72-c4094274c523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.755 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[28047b37-9673-42c6-82a4-4306c4da8473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.756 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.757 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.757 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.758 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 NetworkManager[49039]: <info>  [1764405112.7593] manager: (taped50ff83-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Nov 29 03:31:52 np0005539550 kernel: taped50ff83-50: entered promiscuous mode
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.761 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.761 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
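Between the replies, the agent issues three single-command ovsdb transactions (each logged as "txn n=1"): DelPortCommand drops taped50ff83-50 from br-ex if it is there (a no-op here, hence "Transaction caused no change"), AddPortCommand attaches it to br-int, and DbSetCommand writes external_ids:iface-id so ovn-controller can bind lport 3b04b2c4-a6da-4677-b446-82ad68652b56 to it. A rough equivalent through ovsdbapp's public API, assuming a local ovsdb-server at unix:/run/openvswitch/db.sock and batching the three commands into one transaction for brevity (the agent runs them separately):

# Sketch only: the same three commands issued through ovsdbapp.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('taped50ff83-50', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'taped50ff83-50', may_exist=True))
    txn.add(api.db_set('Interface', 'taped50ff83-50',
                       ('external_ids',
                        {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'})))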
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.762 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:52Z|00747|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:31:52 np0005539550 nova_compute[257631]: 2025-11-29 08:31:52.785 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.787 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.788 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6ab554-1948-43f3-bae3-872f8bcef5f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.789 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.pid.haproxy
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:31:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:31:52.789 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'env', 'PROCESS_TAG=haproxy-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
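The haproxy_cfg dump above is the per-network metadata proxy configuration: inside the ovnmeta namespace it binds 169.254.169.254:80 and forwards every request to the unix socket /var/lib/neutron/metadata_proxy (an haproxy server address beginning with '/' is a unix socket), adding the X-OVN-Network-ID header so the metadata service can tell networks apart. The create_process call then launches haproxy in that namespace through rootwrap; with the rootwrap indirection stripped, the command reduces to the sketch below (root required, paths exactly as logged):

# Sketch only: the process created at utils.py:84, minus the rootwrap wrapper.
import subprocess

netns = 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6'
conf = '/var/lib/neutron/ovn-metadata-proxy/ed50ff83-51d1-4b35-b85c-1cbe6fb812c6.conf'
subprocess.run(
    ['ip', 'netns', 'exec', netns,
     'env', 'PROCESS_TAG=haproxy-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6',
     'haproxy', '-f', conf],
    check=True)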
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.103 257641 DEBUG nova.compute.manager [req-238e1a0f-8917-46f8-b53c-240e2a6fd8c1 req-ad9fdaaa-cd13-4d27-9ffd-14e87c271ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.105 257641 DEBUG oslo_concurrency.lockutils [req-238e1a0f-8917-46f8-b53c-240e2a6fd8c1 req-ad9fdaaa-cd13-4d27-9ffd-14e87c271ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.105 257641 DEBUG oslo_concurrency.lockutils [req-238e1a0f-8917-46f8-b53c-240e2a6fd8c1 req-ad9fdaaa-cd13-4d27-9ffd-14e87c271ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.106 257641 DEBUG oslo_concurrency.lockutils [req-238e1a0f-8917-46f8-b53c-240e2a6fd8c1 req-ad9fdaaa-cd13-4d27-9ffd-14e87c271ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.106 257641 DEBUG nova.compute.manager [req-238e1a0f-8917-46f8-b53c-240e2a6fd8c1 req-ad9fdaaa-cd13-4d27-9ffd-14e87c271ed4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Processing event network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:31:53 np0005539550 podman[357620]: 2025-11-29 08:31:53.222243118 +0000 UTC m=+0.110039104 container create ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:53 np0005539550 podman[357620]: 2025-11-29 08:31:53.137903557 +0000 UTC m=+0.025699763 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:31:53 np0005539550 systemd[1]: Started libpod-conmon-ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003.scope.
Nov 29 03:31:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:31:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21eda19a5090f65cffd31d1cf6ddb29d1dc0970aa66fd2bd58627cd4c02ef9c2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:31:53 np0005539550 podman[357620]: 2025-11-29 08:31:53.453918578 +0000 UTC m=+0.341714594 container init ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:31:53 np0005539550 podman[357620]: 2025-11-29 08:31:53.465872112 +0000 UTC m=+0.353668088 container start ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:31:53 np0005539550 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[357635]: [NOTICE]   (357639) : New worker (357648) forked
Nov 29 03:31:53 np0005539550 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[357635]: [NOTICE]   (357639) : Loading success.
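In this podified deployment the proxy does not run directly on the host: podman creates and starts a container named neutron-haproxy-ovnmeta-<network-uuid> from the neutron-metadata-agent image, and the two haproxy NOTICE lines confirm the master process forked a worker and loaded the config. A quick way to check on it afterwards, sketched with subprocess under the assumption the podman CLI is on the host:

# Sketch only: confirm the per-network haproxy container is running.
import subprocess

out = subprocess.run(
    ['podman', 'ps', '--filter',
     'name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6',
     '--format', '{{.ID}} {{.Status}}'],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # expect the ccfa3191... container, status Up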
Nov 29 03:31:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:53.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.556 257641 INFO nova.virt.libvirt.driver [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Deleting instance files /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35_del#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.556 257641 INFO nova.virt.libvirt.driver [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Deletion of /var/lib/nova/instances/430bbf28-daad-4d0e-a9fe-3a2affb2ae35_del complete#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.611 257641 INFO nova.compute.manager [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Took 1.42 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.612 257641 DEBUG oslo.service.loopingcall [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.612 257641 DEBUG nova.compute.manager [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.612 257641 DEBUG nova.network.neutron [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.662 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.702 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405113.7014852, 1509e19f-b5e6-496d-a0d9-d6740970fad0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.702 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] VM Started (Lifecycle Event)#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.708 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.712 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.717 257641 INFO nova.virt.libvirt.driver [-] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Instance spawned successfully.#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.717 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:31:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.747 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.753 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.754 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.754 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.755 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.756 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.757 257641 DEBUG nova.virt.libvirt.driver [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.763 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.802 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.803 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405113.7016106, 1509e19f-b5e6-496d-a0d9-d6740970fad0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.804 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.823 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.832 257641 INFO nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Took 9.29 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.833 257641 DEBUG nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.835 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405113.7114897, 1509e19f-b5e6-496d-a0d9-d6740970fad0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.835 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.863 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.867 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.891 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
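The Started/Paused/Resumed bursts above are libvirt lifecycle events replayed while the guest is still spawning; each sync compares numeric power states ("current DB power_state: 0, VM power_state: 1") and skips because a task is pending. For reference, the codes come from nova.compute.power_state; a small lookup table of the values involved:

# Reference sketch: nova's power-state codes as compared in the sync
# messages above (values from nova.compute.power_state).
POWER_STATES = {
    0: 'NOSTATE',    # DB default before the first successful sync
    1: 'RUNNING',    # what libvirt reports once the guest starts
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}
print(f"DB={POWER_STATES[0]} VM={POWER_STATES[1]}")  # the mismatch logged above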
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.905 257641 INFO nova.compute.manager [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Took 10.15 seconds to build instance.#033[00m
Nov 29 03:31:53 np0005539550 nova_compute[257631]: 2025-11-29 08:31:53.920 257641 DEBUG oslo_concurrency.lockutils [None req-f2ced6d1-a766-43a1-b7ae-cf3e24758851 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 489 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.2 MiB/s wr, 148 op/s
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.236 257641 DEBUG nova.network.neutron [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.255 257641 INFO nova.compute.manager [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Took 0.64 seconds to deallocate network for instance.#033[00m
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.306 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.306 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.347 257641 DEBUG nova.compute.manager [req-ed2123be-9997-4c3e-b6b0-fe99b07bb2bb req-5479209e-51b5-4ad0-8340-79bad85c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Received event network-vif-deleted-c01b1b4d-0c44-49aa-8eee-26fc15a26f4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:54 np0005539550 podman[357692]: 2025-11-29 08:31:54.350019653 +0000 UTC m=+0.089708618 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.391 257641 DEBUG oslo_concurrency.processutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:31:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:54.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:31:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000559323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.859 257641 DEBUG oslo_concurrency.processutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
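To refresh its disk inventory for the RBD-backed storage, the resource tracker shells out to ceph df, which here returned in 0.468s. A minimal reproduction of that call, assuming the same client id and conf path and that the JSON layout matches recent Ceph releases:

# Sketch only: the 'ceph df' call nova runs via oslo.concurrency processutils.
import json
import subprocess

out = subprocess.run(
    ['ceph', 'df', '--format=json', '--id', 'openstack',
     '--conf', '/etc/ceph/ceph.conf'],
    capture_output=True, text=True, check=True)
stats = json.loads(out.stdout)
# Assumption: top-level 'stats' carries cluster-wide totals in bytes.
print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])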
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.865 257641 DEBUG nova.compute.provider_tree [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.885 257641 DEBUG nova.scheduler.client.report [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
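The unchanged inventory above is what placement schedules against; usable capacity for each resource class is (total - reserved) * allocation_ratio. Worked out from the logged values:

# Worked example: schedulable capacity implied by the inventory above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, v in inventory.items():
    capacity = (v['total'] - v['reserved']) * v['allocation_ratio']
    print(rc, round(capacity, 1))  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1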
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.906 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:54 np0005539550 nova_compute[257631]: 2025-11-29 08:31:54.942 257641 INFO nova.scheduler.client.report [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Deleted allocations for instance 430bbf28-daad-4d0e-a9fe-3a2affb2ae35#033[00m
Nov 29 03:31:55 np0005539550 nova_compute[257631]: 2025-11-29 08:31:55.014 257641 DEBUG oslo_concurrency.lockutils [None req-9ba9603e-ad79-418d-a82d-d2e98fa08eaf 0741d46905e94415a372bd62751dff66 5970d12b2c42419e889cd48de28c4b86 - - default default] Lock "430bbf28-daad-4d0e-a9fe-3a2affb2ae35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:55 np0005539550 nova_compute[257631]: 2025-11-29 08:31:55.193 257641 DEBUG nova.compute.manager [req-8d08d22f-a289-4cbf-965e-55e3be232be7 req-afc95eeb-7be3-4148-8f81-e7e834f0b3d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:55 np0005539550 nova_compute[257631]: 2025-11-29 08:31:55.194 257641 DEBUG oslo_concurrency.lockutils [req-8d08d22f-a289-4cbf-965e-55e3be232be7 req-afc95eeb-7be3-4148-8f81-e7e834f0b3d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:31:55 np0005539550 nova_compute[257631]: 2025-11-29 08:31:55.195 257641 DEBUG oslo_concurrency.lockutils [req-8d08d22f-a289-4cbf-965e-55e3be232be7 req-afc95eeb-7be3-4148-8f81-e7e834f0b3d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:31:55 np0005539550 nova_compute[257631]: 2025-11-29 08:31:55.195 257641 DEBUG oslo_concurrency.lockutils [req-8d08d22f-a289-4cbf-965e-55e3be232be7 req-afc95eeb-7be3-4148-8f81-e7e834f0b3d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:31:55 np0005539550 nova_compute[257631]: 2025-11-29 08:31:55.195 257641 DEBUG nova.compute.manager [req-8d08d22f-a289-4cbf-965e-55e3be232be7 req-afc95eeb-7be3-4148-8f81-e7e834f0b3d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] No waiting events found dispatching network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:31:55 np0005539550 nova_compute[257631]: 2025-11-29 08:31:55.196 257641 WARNING nova.compute.manager [req-8d08d22f-a289-4cbf-965e-55e3be232be7 req-afc95eeb-7be3-4148-8f81-e7e834f0b3d8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received unexpected event network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:31:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:55.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.1 MiB/s wr, 248 op/s
Nov 29 03:31:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:56.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:57 np0005539550 nova_compute[257631]: 2025-11-29 08:31:57.034 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:57 np0005539550 NetworkManager[49039]: <info>  [1764405117.0356] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/330)
Nov 29 03:31:57 np0005539550 NetworkManager[49039]: <info>  [1764405117.0378] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Nov 29 03:31:57 np0005539550 nova_compute[257631]: 2025-11-29 08:31:57.213 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:31:57Z|00748|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:31:57 np0005539550 nova_compute[257631]: 2025-11-29 08:31:57.246 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:57 np0005539550 nova_compute[257631]: 2025-11-29 08:31:57.349 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:31:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:57.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:58 np0005539550 nova_compute[257631]: 2025-11-29 08:31:58.103 257641 DEBUG nova.compute.manager [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-changed-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:31:58 np0005539550 nova_compute[257631]: 2025-11-29 08:31:58.104 257641 DEBUG nova.compute.manager [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Refreshing instance network info cache due to event network-changed-39164f5e-3f66-4cf5-8cc3-3903f7387b53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:31:58 np0005539550 nova_compute[257631]: 2025-11-29 08:31:58.104 257641 DEBUG oslo_concurrency.lockutils [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:31:58 np0005539550 nova_compute[257631]: 2025-11-29 08:31:58.104 257641 DEBUG oslo_concurrency.lockutils [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:31:58 np0005539550 nova_compute[257631]: 2025-11-29 08:31:58.104 257641 DEBUG nova.network.neutron [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Refreshing network info cache for port 39164f5e-3f66-4cf5-8cc3-3903f7387b53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:31:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.5 MiB/s wr, 183 op/s
Nov 29 03:31:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:31:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:31:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:31:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:31:58.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:31:58 np0005539550 nova_compute[257631]: 2025-11-29 08:31:58.664 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:31:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:31:59
Nov 29 03:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', '.mgr', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Nov 29 03:31:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
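This is the mgr balancer's periodic pass: in upmap mode it walked all eleven pools and prepared 0 of a possible 10 changes, i.e. PG placement is already even. The same state can be read back from the cluster; a sketch assuming the ceph CLI with admin access and that balancer status supports JSON output:

# Sketch only: read back the balancer state summarized in the log above.
import json
import subprocess

out = subprocess.run(['ceph', 'balancer', 'status', '--format=json'],
                     capture_output=True, text=True, check=True)
status = json.loads(out.stdout)
print(status.get('mode'), status.get('active'))  # expect 'upmap', True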
Nov 29 03:31:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:31:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:31:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:31:59.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.8 MiB/s wr, 203 op/s
Nov 29 03:32:00 np0005539550 nova_compute[257631]: 2025-11-29 08:32:00.177 257641 DEBUG nova.network.neutron [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updated VIF entry in instance network info cache for port 39164f5e-3f66-4cf5-8cc3-3903f7387b53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:32:00 np0005539550 nova_compute[257631]: 2025-11-29 08:32:00.178 257641 DEBUG nova.network.neutron [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating instance_info_cache with network_info: [{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:00 np0005539550 nova_compute[257631]: 2025-11-29 08:32:00.198 257641 DEBUG oslo_concurrency.lockutils [req-c1f25150-3fb7-48dd-b4f0-734b090fd585 req-6c33b560-1720-46b5-907e-42cc5d2debf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
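The cache entry written above is nova's network_info model serialized to JSON: one OVN-bound VIF on br-int with a fixed IP and an attached floating IP. The addressing can be read straight out of the structure; a sketch over the fields as logged, trimmed to the relevant keys:

# Sketch only: addresses embedded in the network_info entry logged above.
vif = {
    'id': '39164f5e-3f66-4cf5-8cc3-3903f7387b53',
    'network': {'subnets': [{'cidr': '10.100.0.0/28', 'ips': [{
        'address': '10.100.0.12',
        'floating_ips': [{'address': '192.168.122.232'}],
    }]}]},
}
for subnet in vif['network']['subnets']:
    for ip in subnet['ips']:
        print('fixed', ip['address'])                 # 10.100.0.12
        for fip in ip.get('floating_ips', []):
            print('floating', fip['address'])         # 192.168.122.232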
Nov 29 03:32:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:00.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:32:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4800.0 total, 600.0 interval
Cumulative writes: 11K writes, 53K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1688 writes, 7534 keys, 1685 commit groups, 1.0 writes per commit group, ingest: 10.87 MB, 0.02 MB/s
Interval WAL: 1688 writes, 1685 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.2      2.09              0.23        35    0.060       0      0       0.0       0.0
  L6      1/0   10.50 MB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   5.1     36.2     31.0     11.70              1.07        34    0.344    235K    18K       0.0       0.0
 Sum      1/0   10.50 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   6.1     30.7     31.5     13.79              1.30        69    0.200    235K    18K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.0    107.4    108.4      0.75              0.24        12    0.063     54K   3159       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.4      0.0       0.0   0.0     36.2     31.0     11.70              1.07        34    0.344    235K    18K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     34.2      2.08              0.23        34    0.061       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 4800.0 total, 600.0 interval
Flush(GB): cumulative 0.070, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.42 GB write, 0.09 MB/s write, 0.41 GB read, 0.09 MB/s read, 13.8 seconds
Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 45.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000812 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(2594,43.84 MB,14.4225%) FilterBlock(70,660.86 KB,0.212293%) IndexBlock(70,1.11 MB,0.365915%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
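Note: journald renders embedded newlines as #012 and the ESC character as #033, which is why multi-line statistics dumps like the one above arrive flattened and why the nova_compute lines below end in #033[00m (an ANSI color reset). A minimal decoder sketch, assuming Python 3 and only the standard library (the function name is illustrative):

    import re

    def decode_journald_escapes(line: str) -> str:
        """Expand journald octal escapes (#012 = newline, #033 = ESC) and
        drop ANSI SGR color sequences such as ESC[00m."""
        line = re.sub(r'#(\d{3})', lambda m: chr(int(m.group(1), 8)), line)
        return re.sub(r'\x1b\[[0-9;]*m', '', line)

    print(decode_journald_escapes(
        'Uptime(secs): 4800.0 total#012Flush(GB): cumulative 0.070#033[00m'))
    # Uptime(secs): 4800.0 total
    # Flush(GB): cumulative 0.070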
Nov 29 03:32:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:01.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
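Each radosgw request is logged three times (start, done, and a beast access line); the beast line carries the client, status, and latency in one place, and the anonymous "HEAD /" probes arriving every couple of seconds from 192.168.122.100 and .102 look like load-balancer health checks. A small parser sketch, with the field layout inferred from the lines above (group names are illustrative):

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d{3}) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:32:01.529 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # 192.168.122.102 200 0.001000025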
Nov 29 03:32:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 247 op/s
Nov 29 03:32:02 np0005539550 nova_compute[257631]: 2025-11-29 08:32:02.388 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:02.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:02 np0005539550 nova_compute[257631]: 2025-11-29 08:32:02.494 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405107.4935806, 430bbf28-daad-4d0e-a9fe-3a2affb2ae35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:02 np0005539550 nova_compute[257631]: 2025-11-29 08:32:02.496 257641 INFO nova.compute.manager [-] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:32:02 np0005539550 nova_compute[257631]: 2025-11-29 08:32:02.520 257641 DEBUG nova.compute.manager [None req-ae19fda4-7d7e-4e1b-916f-0e63e0230eac - - - - - -] [instance: 430bbf28-daad-4d0e-a9fe-3a2affb2ae35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:03.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:03 np0005539550 nova_compute[257631]: 2025-11-29 08:32:03.667 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
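The _set_new_cache_sizes figures are in bytes; converted to MiB they are consistent with the mon's cache autotuning, and kv_alloc is exactly the 304.00 MB BinnedLRUCache capacity reported in the RocksDB dump above. A quick check:

    MiB = 1 << 20
    for name, val in [('cache_size', 1020054731), ('inc_alloc', 343932928),
                      ('full_alloc', 348127232), ('kv_alloc', 318767104)]:
        print(f'{name:10} {val / MiB:8.2f} MiB')
    # cache_size   972.80 MiB
    # inc_alloc    328.00 MiB
    # full_alloc   332.00 MiB
    # kv_alloc     304.00 MiB  <- matches the block cache capacity above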
Nov 29 03:32:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 407 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 234 op/s
Nov 29 03:32:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.003000076s ======
Nov 29 03:32:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:04.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
Nov 29 03:32:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:05.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.6 MiB/s wr, 239 op/s
Nov 29 03:32:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:32:06Z|00749|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:32:06 np0005539550 nova_compute[257631]: 2025-11-29 08:32:06.440 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:06.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:07 np0005539550 nova_compute[257631]: 2025-11-29 08:32:07.391 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:07.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:32:07Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:f9:49 10.100.0.12
Nov 29 03:32:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:32:07Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:f9:49 10.100.0.12
Nov 29 03:32:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 449 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 638 KiB/s rd, 3.7 MiB/s wr, 132 op/s
Nov 29 03:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:32:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:32:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:08 np0005539550 nova_compute[257631]: 2025-11-29 08:32:08.670 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:32:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:32:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:32:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:32:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:32:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:09.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:32:10.071 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:32:10 np0005539550 nova_compute[257631]: 2025-11-29 08:32:10.072 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:32:10.074 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:32:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 756 KiB/s rd, 4.2 MiB/s wr, 151 op/s
Nov 29 03:32:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:10.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:11.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:11 np0005539550 nova_compute[257631]: 2025-11-29 08:32:11.806 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:11 np0005539550 nova_compute[257631]: 2025-11-29 08:32:11.807 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:11 np0005539550 nova_compute[257631]: 2025-11-29 08:32:11.828 257641 DEBUG nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:32:11 np0005539550 nova_compute[257631]: 2025-11-29 08:32:11.898 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:11 np0005539550 nova_compute[257631]: 2025-11-29 08:32:11.899 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:11 np0005539550 nova_compute[257631]: 2025-11-29 08:32:11.904 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:32:11 np0005539550 nova_compute[257631]: 2025-11-29 08:32:11.905 257641 INFO nova.compute.claims [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.014 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 483 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.6 MiB/s wr, 246 op/s
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.393 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.457 257641 DEBUG nova.compute.manager [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-changed-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.457 257641 DEBUG nova.compute.manager [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Refreshing instance network info cache due to event network-changed-39164f5e-3f66-4cf5-8cc3-3903f7387b53. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.458 257641 DEBUG oslo_concurrency.lockutils [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.458 257641 DEBUG oslo_concurrency.lockutils [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.458 257641 DEBUG nova.network.neutron [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Refreshing network info cache for port 39164f5e-3f66-4cf5-8cc3-3903f7387b53 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:32:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/494470200' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.503 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
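The ceph df call that nova runs through oslo_concurrency (and that the mon audit log above attributes to entity='client.openstack') can be reproduced standalone; a minimal sketch assuming the same credentials and conf path as logged:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)
    # Pool-level usage, similar to what the pgmap DBG lines summarize.
    for pool in stats.get('pools', []):
        print(pool['name'], pool['stats']['bytes_used'])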
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.508 257641 DEBUG nova.compute.provider_tree [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.526 257641 DEBUG nova.scheduler.client.report [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
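Placement derives schedulable capacity from that inventory as (total - reserved) x allocation_ratio; worked out for the values above:

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0
    # MEMORY_MB 7168.0
    # DISK_GB 17.1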
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.552 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.553 257641 DEBUG nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.620 257641 DEBUG nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.641 257641 INFO nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.666 257641 DEBUG nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.821 257641 DEBUG nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.822 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.823 257641 INFO nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Creating image(s)#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.864 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.892 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.919 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.923 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.988 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.989 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.990 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:12 np0005539550 nova_compute[257631]: 2025-11-29 08:32:12.990 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.013 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.016 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:32:13.076 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
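This transaction closes the loop opened at 08:32:10: the agent saw SB_Global nb_cfg move 46 -> 47, waited its 3-second delay, and now acks by writing the new value into Chassis_Private external_ids. A rough ovsdbapp equivalent (the connection setup is an assumption, e.g. a local southbound socket, whereas the agent here connects differently; the record UUID is taken from the log line):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))
    api.db_set('Chassis_Private', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
               ('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),
               if_exists=True).execute(check_error=True)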
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.496 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:13.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.581 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] resizing rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
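Import plus resize is the whole provisioning step here: the cached base image is pushed into the vms pool, then grown to the flavor's 1 GiB root disk (1073741824 bytes). A sketch of the equivalent standalone calls, using the same flags, pool, and image name as the logged commands (rbd sizes default to MiB, so 1024 = 1 GiB):

    import subprocess

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    image = 'f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk'
    auth = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Import the cached base image into the 'vms' pool as a format-2 RBD image.
    subprocess.run(['rbd', 'import', '--pool', 'vms', base, image,
                    '--image-format=2'] + auth, check=True)
    # Grow it to the flavor's 1 GiB root disk.
    subprocess.run(['rbd', 'resize', '--pool', 'vms', image,
                    '--size', '1024'] + auth, check=True)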
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.712 257641 DEBUG nova.objects.instance [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'migration_context' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.741 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.741 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Ensure instance console log exists: /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.742 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.742 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.743 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.744 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.749 257641 WARNING nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.754 257641 DEBUG nova.virt.libvirt.host [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.755 257641 DEBUG nova.virt.libvirt.host [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.758 257641 DEBUG nova.virt.libvirt.host [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.758 257641 DEBUG nova.virt.libvirt.host [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.760 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.760 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.760 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.760 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.761 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.761 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.761 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.761 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.761 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.761 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.762 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.762 257641 DEBUG nova.virt.hardware [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
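With no flavor or image topology constraints, the search space collapses: only sockets:cores:threads triples whose product equals the vCPU count survive, which for one vCPU is just 1:1:1. An illustrative re-implementation of that enumeration (not nova's actual code):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product equals vcpus,
        mirroring what the _get_possible_cpu_topologies debug lines report."""
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))
    # [(1, 1, 1)] -- matches "Got 1 possible topologies" above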
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.764 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.959 257641 DEBUG nova.network.neutron [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updated VIF entry in instance network info cache for port 39164f5e-3f66-4cf5-8cc3-3903f7387b53. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:32:13 np0005539550 nova_compute[257631]: 2025-11-29 08:32:13.960 257641 DEBUG nova.network.neutron [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating instance_info_cache with network_info: [{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:32:14 np0005539550 nova_compute[257631]: 2025-11-29 08:32:14.033 257641 DEBUG oslo_concurrency.lockutils [req-14424080-fe3e-4382-9d92-5ae0558043f9 req-724eadb8-5773-44e8-af8d-6a0c155c3639 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:32:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 187 op/s
Nov 29 03:32:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/278006993' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:14 np0005539550 nova_compute[257631]: 2025-11-29 08:32:14.255 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:14 np0005539550 nova_compute[257631]: 2025-11-29 08:32:14.287 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:14 np0005539550 nova_compute[257631]: 2025-11-29 08:32:14.292 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:14.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3446620902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:14 np0005539550 nova_compute[257631]: 2025-11-29 08:32:14.767 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:14 np0005539550 nova_compute[257631]: 2025-11-29 08:32:14.770 257641 DEBUG nova.objects.instance [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:14 np0005539550 nova_compute[257631]: 2025-11-29 08:32:14.785 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <uuid>f9b5e592-8e5a-4157-8cb4-ea3d779822e7</uuid>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <name>instance-000000a3</name>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerShowV254Test-server-1192277651</nova:name>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:32:13</nova:creationTime>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <nova:user uuid="ce5327e4760541578f2a465a788d04f7">tempest-ServerShowV254Test-1467634121-project-member</nova:user>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <nova:project uuid="d2ecba0524c44f4d89c1560f849c7be4">tempest-ServerShowV254Test-1467634121</nova:project>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <entry name="serial">f9b5e592-8e5a-4157-8cb4-ea3d779822e7</entry>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <entry name="uuid">f9b5e592-8e5a-4157-8cb4-ea3d779822e7</entry>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/console.log" append="off"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:32:14 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:32:14 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:32:14 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:32:14 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
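[Annotation] The block ending at the _get_guest_xml trace above is the complete libvirt domain XML nova generated for instance f9b5e592-8e5a-4157-8cb4-ea3d779822e7. Once the guest is running, the same XML can be read back from libvirt directly; a minimal sketch, assuming libvirt-python and qemu:///system access on this host (the domain name matches the machine systemd-machined registers a few lines further down):

import libvirt

conn = libvirt.open("qemu:///system")
try:
    # "instance-000000a3" is taken from the systemd-machined line below
    dom = conn.lookupByName("instance-000000a3")
    print(dom.XMLDesc(0))  # equivalent to `virsh dumpxml`: nova's generated XML
finally:
    conn.close()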
Nov 29 03:32:14 np0005539550 podman[358055]: 2025-11-29 08:32:14.895971735 +0000 UTC m=+0.063874423 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:14 np0005539550 podman[358053]: 2025-11-29 08:32:14.901071924 +0000 UTC m=+0.069337121 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
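[Annotation] The two podman health_status events above are the periodic healthcheck timers for ovn_metadata_agent and multipathd; per the config_data, each runs the '/openstack/healthcheck' test mounted into the container. The same check can be triggered by hand; a sketch, assuming podman is on PATH and the container names from this host:

import subprocess

r = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   capture_output=True, text=True)
# exit code 0 means healthy, matching health_status=healthy in the event above
print("healthy" if r.returncode == 0 else f"unhealthy: {r.stdout or r.stderr}")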
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.062 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.063 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.063 257641 INFO nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Using config drive#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.098 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.191 257641 DEBUG oslo_concurrency.lockutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.192 257641 DEBUG oslo_concurrency.lockutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.208 257641 DEBUG nova.objects.instance [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.241 257641 DEBUG oslo_concurrency.lockutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.294 257641 INFO nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Creating config drive at /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.304 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5ylt8el execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.449 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj5ylt8el" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.483 257641 DEBUG nova.storage.rbd_utils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.489 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.536 257641 DEBUG oslo_concurrency.lockutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.537 257641 DEBUG oslo_concurrency.lockutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.538 257641 INFO nova.compute.manager [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Attaching volume 6273512e-203d-43b7-bb2f-a59b9ff4579f to /dev/vdb#033[00m
Nov 29 03:32:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:15.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
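[Annotation] The recurring anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100-102 in the radosgw beast lines are load-balancer health probes, each answered 200 with sub-millisecond latency. A sketch of issuing the same probe; the port is an assumption (this log never records which port beast binds; 8080 is the upstream default):

import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)  # port assumed
conn.request("HEAD", "/")
print(conn.getresponse().status)  # 200 for a healthy gateway, as logged
conn.close()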
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.730 257641 DEBUG os_brick.utils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.732 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.742 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.742 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[50eb38fe-c680-475d-be7a-58e1985e081a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.743 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.751 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.751 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8e51da-f5e5-46dd-9e46-b0f2ece41858]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.753 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.762 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.762 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[faef706a-b5ad-4390-bee3-7f1cacdb4e50]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.764 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[e00a9436-e6a3-4cab-a8d0-61c80af5d0bb]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.765 257641 DEBUG oslo_concurrency.processutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.807 257641 DEBUG oslo_concurrency.processutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.811 257641 DEBUG os_brick.initiator.connectors.lightos [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.812 257641 DEBUG os_brick.initiator.connectors.lightos [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.812 257641 DEBUG os_brick.initiator.connectors.lightos [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.813 257641 DEBUG os_brick.utils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:32:15 np0005539550 nova_compute[257631]: 2025-11-29 08:32:15.814 257641 DEBUG nova.virt.block_device [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating existing volume attachment record: a720945f-bc54-4053-b932-cb4f783c48ff _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
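[Annotation] Before updating attachment a720945f-bc54-4053-b932-cb4f783c48ff, nova gathered this host's connector properties through os-brick, which is what drove the multipathd/iscsi/findmnt/nvme probes above. The traced call mirrors os-brick's public helper; a sketch using the exact kwargs from the "==> get_connector_properties" line (requires os-brick and root privileges, since it shells out to multipathd and friends):

from os_brick.initiator import connector

props = connector.get_connector_properties(
    root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
    my_ip="192.168.122.100",
    multipath=True,
    enforce_multipath=True,
    host="compute-0.ctlplane.example.com",
)
# keys visible in the "<== get_connector_properties" return above
print(props["initiator"], props["nqn"], props["multipath"])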
Nov 29 03:32:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 448 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.5 MiB/s wr, 301 op/s
Nov 29 03:32:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.560 257641 DEBUG nova.objects.instance [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.590 257641 DEBUG nova.virt.libvirt.driver [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Attempting to attach volume 6273512e-203d-43b7-bb2f-a59b9ff4579f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.593 257641 DEBUG nova.virt.libvirt.guest [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-6273512e-203d-43b7-bb2f-a59b9ff4579f">
Nov 29 03:32:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:32:16 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  <serial>6273512e-203d-43b7-bb2f-a59b9ff4579f</serial>
Nov 29 03:32:16 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:32:16 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:32:16 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
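[Annotation] The <disk> XML above is handed to libvirt by guest.py:attach_device. A minimal sketch of the underlying libvirt-python call, with the XML abbreviated (one monitor host shown, the <auth> element omitted) and a hypothetical domain name, since the log never records the libvirt name for instance 1509e19f-b5e6-496d-a0d9-d6740970fad0:

import libvirt

DISK_XML = """<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-6273512e-203d-43b7-bb2f-a59b9ff4579f">
    <host name="192.168.122.100" port="6789"/>
  </source>
  <target dev="vdb" bus="virtio"/>
</disk>"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-00000001")  # hypothetical domain name
# attach both to the live guest and to its persistent definition
dom.attachDeviceFlags(DISK_XML,
                      libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()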
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.663 257641 DEBUG oslo_concurrency.processutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.664 257641 INFO nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Deleting local config drive /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config because it was imported into RBD.#033[00m
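[Annotation] The config-drive path for a Ceph-backed instance is visible above as three steps: build the ISO with mkisofs (08:32:15.304), import it into the vms pool with rbd import (08:32:15.489, returning 1.174s later), then delete the local copy. A sketch reproducing the two logged commands verbatim; running this outside nova is purely illustrative:

import subprocess

iso = "/var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config"
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-publisher",
     "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
     "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpj5ylt8el"],
    check=True)
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso,
     "f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)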
Nov 29 03:32:16 np0005539550 systemd-machined[216673]: New machine qemu-88-instance-000000a3.
Nov 29 03:32:16 np0005539550 systemd[1]: Started Virtual Machine qemu-88-instance-000000a3.
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.762 257641 DEBUG nova.virt.libvirt.driver [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.763 257641 DEBUG nova.virt.libvirt.driver [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.763 257641 DEBUG nova.virt.libvirt.driver [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.764 257641 DEBUG nova.virt.libvirt.driver [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:8c:f9:49, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:32:16 np0005539550 nova_compute[257631]: 2025-11-29 08:32:16.944 257641 DEBUG oslo_concurrency.lockutils [None req-00a87f1e-809e-4300-9cfc-f646cac926ee c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.397 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:17.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.603 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405137.6035523, f9b5e592-8e5a-4157-8cb4-ea3d779822e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.605 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.607 257641 DEBUG nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.607 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.611 257641 INFO nova.virt.libvirt.driver [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance spawned successfully.#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.612 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.627 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.632 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.635 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.635 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.636 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.636 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.636 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.637 257641 DEBUG nova.virt.libvirt.driver [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
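[Annotation] The six "Found default" lines above show nova pinning this instance's bus/model choices so they stay stable across reboots and rebuilds (persisted, as far as I can tell, as image-property defaults in the instance's system metadata). Collected:

# defaults registered for instance f9b5e592-8e5a-4157-8cb4-ea3d779822e7
defaults = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}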
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.676 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.676 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405137.6047578, f9b5e592-8e5a-4157-8cb4-ea3d779822e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.676 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] VM Started (Lifecycle Event)#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.723 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.727 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.733 257641 INFO nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Took 4.91 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.734 257641 DEBUG nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.759 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.797 257641 INFO nova.compute.manager [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Took 5.93 seconds to build instance.#033[00m
Nov 29 03:32:17 np0005539550 nova_compute[257631]: 2025-11-29 08:32:17.826 257641 DEBUG oslo_concurrency.lockutils [None req-8cf929b0-9a7c-4cac-b735-f8fa9497a50c ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
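[Annotation] The power-state sync lines above compare "current DB power_state: 0" against "VM power_state: 1"; those integers are nova.compute.power_state constants, so the mismatch simply means the database still says "not started" while libvirt already reports the guest running:

# nova.compute.power_state values relevant to the sync lines above
NOSTATE   = 0x00  # the DB value: instance record not yet marked running
RUNNING   = 0x01  # what libvirt reports once the guest starts
PAUSED    = 0x03
SHUTDOWN  = 0x04
CRASHED   = 0x06
SUSPENDED = 0x07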
Nov 29 03:32:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:32:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:32:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:32:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:32:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d74aa96e-62fa-40ce-8939-52a1d37621ae does not exist
Nov 29 03:32:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1787cbf1-dc89-43a1-b51b-69f09506e609 does not exist
Nov 29 03:32:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d1f7ffe7-b569-4ca3-a2de-0cbba840e8b4 does not exist
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:32:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 278 op/s
Nov 29 03:32:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:18.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:32:18Z|00750|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:32:18 np0005539550 nova_compute[257631]: 2025-11-29 08:32:18.639 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539550 nova_compute[257631]: 2025-11-29 08:32:18.675 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:18 np0005539550 podman[358503]: 2025-11-29 08:32:18.718461367 +0000 UTC m=+0.022640296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:18 np0005539550 podman[358503]: 2025-11-29 08:32:18.870109206 +0000 UTC m=+0.174288105 container create fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 29 03:32:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:32:18.964 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:32:18.965 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:32:18.965 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:19 np0005539550 systemd[1]: Started libpod-conmon-fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517.scope.
Nov 29 03:32:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:32:19 np0005539550 podman[358503]: 2025-11-29 08:32:19.347084252 +0000 UTC m=+0.651263151 container init fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:32:19 np0005539550 podman[358503]: 2025-11-29 08:32:19.356392519 +0000 UTC m=+0.660571418 container start fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gates, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:32:19 np0005539550 podman[358503]: 2025-11-29 08:32:19.36037773 +0000 UTC m=+0.664556659 container attach fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gates, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:32:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3120882658' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:19 np0005539550 crazy_gates[358520]: 167 167
Nov 29 03:32:19 np0005539550 systemd[1]: libpod-fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517.scope: Deactivated successfully.
Nov 29 03:32:19 np0005539550 conmon[358520]: conmon fac427d5d9483d95c4a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517.scope/container/memory.events
Nov 29 03:32:19 np0005539550 podman[358503]: 2025-11-29 08:32:19.369751398 +0000 UTC m=+0.673930317 container died fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gates, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:32:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7619ba7417617abfad56cedeb710a71849694acb0090c48738615412c7c346c1-merged.mount: Deactivated successfully.
Nov 29 03:32:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:19.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:19 np0005539550 podman[358503]: 2025-11-29 08:32:19.66419197 +0000 UTC m=+0.968370909 container remove fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_gates, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:32:19 np0005539550 systemd[1]: libpod-conmon-fac427d5d9483d95c4a447968a265781f7e641dfd02d2b2637065986df418517.scope: Deactivated successfully.
Nov 29 03:32:19 np0005539550 podman[358544]: 2025-11-29 08:32:19.833680832 +0000 UTC m=+0.024566774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:19 np0005539550 podman[358544]: 2025-11-29 08:32:19.941681063 +0000 UTC m=+0.132566975 container create 8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:32:19 np0005539550 systemd[1]: Started libpod-conmon-8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658.scope.
Nov 29 03:32:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:32:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521e6bac6774cd9d3a2bf5afa513cfeff54c77366f9c5769700f978f90c8b877/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521e6bac6774cd9d3a2bf5afa513cfeff54c77366f9c5769700f978f90c8b877/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521e6bac6774cd9d3a2bf5afa513cfeff54c77366f9c5769700f978f90c8b877/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521e6bac6774cd9d3a2bf5afa513cfeff54c77366f9c5769700f978f90c8b877/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/521e6bac6774cd9d3a2bf5afa513cfeff54c77366f9c5769700f978f90c8b877/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:20 np0005539550 podman[358544]: 2025-11-29 08:32:20.049199092 +0000 UTC m=+0.240085024 container init 8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:32:20 np0005539550 podman[358544]: 2025-11-29 08:32:20.055890562 +0000 UTC m=+0.246776474 container start 8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:32:20 np0005539550 podman[358544]: 2025-11-29 08:32:20.059147905 +0000 UTC m=+0.250033817 container attach 8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 278 op/s
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006347928880513679 of space, bias 1.0, pg target 1.9043786641541038 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00432228416387493 of space, bias 1.0, pg target 1.2923629649986041 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:32:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
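The pg_autoscaler lines above all apply the same arithmetic: a pool's raw PG target is its share of the root's capacity (the "using X of space" ratio) times its bias times the root's total PG budget, and the result is snapped to a power of two and floored at the pool's minimum pg_num. A minimal sketch of that arithmetic, assuming a root PG budget of roughly 300 (e.g. 3 OSDs at the default mon_target_pg_per_osd of 100) and illustrative per-pool floors; this is not the mgr module's actual code:

import math

def quantized_pg_target(usage_ratio: float, bias: float,
                        root_pg_budget: int, pg_min: int) -> int:
    # Raw target: capacity share * bias * PGs available under this root.
    raw = usage_ratio * bias * root_pg_budget
    if raw <= pg_min:
        return pg_min
    # Snap up to the next power of two, matching the "quantized to" values.
    return 2 ** math.ceil(math.log2(raw))

# Reproducing two lines from the log (the pg_min floors are assumptions):
print(quantized_pg_target(0.006347928880513679, 1.0, 300, pg_min=32))   # 'vms'  -> 32
print(quantized_pg_target(2.0538165363856318e-05, 1.0, 300, pg_min=1))  # '.mgr' -> 1

Check against the log: 0.006347928880513679 * 1.0 * 300 = 1.904..., which matches the logged "pg target 1.9043786641541038 quantized to 32 (current 32)".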
Nov 29 03:32:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:20.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
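The recurring anonymous "HEAD / HTTP/1.0" requests that beast logs from 192.168.122.100 and 192.168.122.102 about once per second have the shape of load-balancer health probes against radosgw. A hedged reproduction of one probe; the endpoint host and port are assumptions, since the log records only the client side:

import http.client

conn = http.client.HTTPConnection("np0005539550", 8080, timeout=2)  # port is an assumption
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status, resp.reason)  # expect 200 with an empty body, matching http_status=200
conn.close()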
Nov 29 03:32:20 np0005539550 upbeat_hermann[358560]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:32:20 np0005539550 upbeat_hermann[358560]: --> relative data size: 1.0
Nov 29 03:32:20 np0005539550 upbeat_hermann[358560]: --> All data devices are unavailable
Nov 29 03:32:20 np0005539550 systemd[1]: libpod-8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658.scope: Deactivated successfully.
Nov 29 03:32:20 np0005539550 podman[358578]: 2025-11-29 08:32:20.92690294 +0000 UTC m=+0.023754264 container died 8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:32:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-521e6bac6774cd9d3a2bf5afa513cfeff54c77366f9c5769700f978f90c8b877-merged.mount: Deactivated successfully.
Nov 29 03:32:20 np0005539550 podman[358578]: 2025-11-29 08:32:20.981461405 +0000 UTC m=+0.078312709 container remove 8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:32:20 np0005539550 systemd[1]: libpod-conmon-8d7dd2610a8cc2545595f019a89d7b2c8fddf748033517251a4e502c40daf658.scope: Deactivated successfully.
Nov 29 03:32:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:21.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:21 np0005539550 podman[358731]: 2025-11-29 08:32:21.598956308 +0000 UTC m=+0.034132717 container create 21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:32:21 np0005539550 systemd[1]: Started libpod-conmon-21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1.scope.
Nov 29 03:32:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:32:21 np0005539550 podman[358731]: 2025-11-29 08:32:21.673206623 +0000 UTC m=+0.108383052 container init 21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:32:21 np0005539550 podman[358731]: 2025-11-29 08:32:21.584211014 +0000 UTC m=+0.019387453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:21 np0005539550 podman[358731]: 2025-11-29 08:32:21.681802091 +0000 UTC m=+0.116978500 container start 21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:32:21 np0005539550 podman[358731]: 2025-11-29 08:32:21.684539471 +0000 UTC m=+0.119715900 container attach 21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:32:21 np0005539550 vibrant_rhodes[358747]: 167 167
Nov 29 03:32:21 np0005539550 systemd[1]: libpod-21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1.scope: Deactivated successfully.
Nov 29 03:32:21 np0005539550 podman[358731]: 2025-11-29 08:32:21.688377068 +0000 UTC m=+0.123553477 container died 21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:32:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-db0fc21f9e473c17dedea31cd5fb30c7fe849824654fadac7fdbb7db4e61e30e-merged.mount: Deactivated successfully.
Nov 29 03:32:21 np0005539550 podman[358731]: 2025-11-29 08:32:21.72432038 +0000 UTC m=+0.159496789 container remove 21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_rhodes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:32:21 np0005539550 systemd[1]: libpod-conmon-21a6721604f61da2b06a2b07c8ac09092023b23091a24e7bf5576624b32e59c1.scope: Deactivated successfully.
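Each create/init/start/attach/died/remove burst above (upbeat_hermann, vibrant_rhodes, ...) is one short-lived one-shot container run against the ceph image, the way cephadm probes the host; the "167 167" that vibrant_rhodes printed is the ceph uid/gid pair. A sketch of an equivalent invocation; the probe command itself is an assumption:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# One-shot run: podman emits the same container create/init/start/attach/
# died/remove journal events seen above, and --rm triggers the removal.
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],  # hypothetical probe; prints e.g. "167 167"
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())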
Nov 29 03:32:21 np0005539550 podman[358771]: 2025-11-29 08:32:21.892294984 +0000 UTC m=+0.040034697 container create 9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:32:21 np0005539550 systemd[1]: Started libpod-conmon-9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d.scope.
Nov 29 03:32:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:32:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203f4802a8c4657fbb6e71762be9f1a88f23db0810b1f43e5e1e9fc732f9ca00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203f4802a8c4657fbb6e71762be9f1a88f23db0810b1f43e5e1e9fc732f9ca00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203f4802a8c4657fbb6e71762be9f1a88f23db0810b1f43e5e1e9fc732f9ca00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:21 np0005539550 podman[358771]: 2025-11-29 08:32:21.874331688 +0000 UTC m=+0.022071411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/203f4802a8c4657fbb6e71762be9f1a88f23db0810b1f43e5e1e9fc732f9ca00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:21 np0005539550 podman[358771]: 2025-11-29 08:32:21.984606247 +0000 UTC m=+0.132345980 container init 9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:32:21 np0005539550 podman[358771]: 2025-11-29 08:32:21.990987139 +0000 UTC m=+0.138726842 container start 9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:32:21 np0005539550 podman[358771]: 2025-11-29 08:32:21.994462017 +0000 UTC m=+0.142201720 container attach 9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:32:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.5 MiB/s wr, 339 op/s
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.206 257641 INFO nova.compute.manager [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Rebuilding instance#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.214 257641 DEBUG oslo_concurrency.lockutils [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.215 257641 DEBUG oslo_concurrency.lockutils [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.241 257641 INFO nova.compute.manager [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Detaching volume 6273512e-203d-43b7-bb2f-a59b9ff4579f#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.399 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.434 257641 INFO nova.virt.block_device [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Attempting to driver detach volume 6273512e-203d-43b7-bb2f-a59b9ff4579f from mountpoint /dev/vdb#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.446 257641 DEBUG nova.virt.libvirt.driver [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Attempting to detach device vdb from instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.447 257641 DEBUG nova.virt.libvirt.guest [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-6273512e-203d-43b7-bb2f-a59b9ff4579f">
Nov 29 03:32:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <serial>6273512e-203d-43b7-bb2f-a59b9ff4579f</serial>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:32:22 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.456 257641 INFO nova.virt.libvirt.driver [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 from the persistent domain config.#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.457 257641 DEBUG nova.virt.libvirt.driver [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.458 257641 DEBUG nova.virt.libvirt.guest [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-6273512e-203d-43b7-bb2f-a59b9ff4579f">
Nov 29 03:32:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <serial>6273512e-203d-43b7-bb2f-a59b9ff4579f</serial>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:32:22 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:32:22 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:32:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.518 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405142.518547, 1509e19f-b5e6-496d-a0d9-d6740970fad0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.521 257641 DEBUG nova.virt.libvirt.driver [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.523 257641 INFO nova.virt.libvirt.driver [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 from the live domain config.#033[00m
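The sequence above is nova's two-phase volume detach: the same <disk> XML is detached first from the persistent domain definition and then from the live one, after which the driver waits for libvirt's device-removed event (received here at 08:32:22.518 for alias virtio-disk1). A minimal libvirt-python sketch of the two calls, not nova's actual implementation:

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("1509e19f-b5e6-496d-a0d9-d6740970fad0")

disk_xml = "<disk type='network' device='disk'>...</disk>"  # the element logged above

# Phase 1: drop the disk from the persistent definition (synchronous).
dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
# Phase 2: drop it from the running guest; completion is signaled
# asynchronously by a VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED event.
dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)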
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.555 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.581 257641 DEBUG nova.compute.manager [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.633 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'pci_requests' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.648 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.667 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'resources' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.679 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'migration_context' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.701 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.708 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]: {
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:    "0": [
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:        {
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "devices": [
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "/dev/loop3"
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            ],
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "lv_name": "ceph_lv0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "lv_size": "7511998464",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "name": "ceph_lv0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "tags": {
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.cluster_name": "ceph",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.crush_device_class": "",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.encrypted": "0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.osd_id": "0",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.type": "block",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:                "ceph.vdo": "0"
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            },
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "type": "block",
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:            "vg_name": "ceph_vg0"
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:        }
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]:    ]
Nov 29 03:32:22 np0005539550 optimistic_chandrasekhar[358787]: }
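The JSON that optimistic_chandrasekhar prints has the shape of "ceph-volume lvm list --format json" output: a map of OSD id to the LVs backing it, with the ceph.* LV tags repeated in a parsed "tags" object. A small parsing sketch under that assumption:

import json

raw_json = """
{ "0": [ { "lv_path": "/dev/ceph_vg0/ceph_lv0",
           "tags": { "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
                     "ceph.type": "block" } } ] }
"""  # trimmed from the container output above

for osd_id, lvs in json.loads(raw_json).items():
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, type={tags['ceph.type']})")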
Nov 29 03:32:22 np0005539550 systemd[1]: libpod-9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d.scope: Deactivated successfully.
Nov 29 03:32:22 np0005539550 podman[358771]: 2025-11-29 08:32:22.763709332 +0000 UTC m=+0.911449035 container died 9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:32:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-203f4802a8c4657fbb6e71762be9f1a88f23db0810b1f43e5e1e9fc732f9ca00-merged.mount: Deactivated successfully.
Nov 29 03:32:22 np0005539550 podman[358771]: 2025-11-29 08:32:22.906846715 +0000 UTC m=+1.054586418 container remove 9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chandrasekhar, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:32:22 np0005539550 systemd[1]: libpod-conmon-9b1c0ab054b2b3cdff5ee85213b7946a8b9c76e6440a4669a033e969e720f77d.scope: Deactivated successfully.
Nov 29 03:32:22 np0005539550 nova_compute[257631]: 2025-11-29 08:32:22.953 257641 DEBUG nova.objects.instance [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:23.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:23 np0005539550 podman[358998]: 2025-11-29 08:32:23.494232933 +0000 UTC m=+0.020797009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:23 np0005539550 podman[358998]: 2025-11-29 08:32:23.657232211 +0000 UTC m=+0.183796277 container create 9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ride, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:32:23 np0005539550 nova_compute[257631]: 2025-11-29 08:32:23.678 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:23 np0005539550 nova_compute[257631]: 2025-11-29 08:32:23.912 257641 DEBUG oslo_concurrency.lockutils [None req-46870504-f1f6-44d0-80be-de2f8f5cac16 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:23 np0005539550 systemd[1]: Started libpod-conmon-9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81.scope.
Nov 29 03:32:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:32:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 453 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 240 op/s
Nov 29 03:32:24 np0005539550 podman[358998]: 2025-11-29 08:32:24.381265778 +0000 UTC m=+0.907829854 container init 9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ride, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:32:24 np0005539550 podman[358998]: 2025-11-29 08:32:24.390905893 +0000 UTC m=+0.917469939 container start 9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ride, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:32:24 np0005539550 sad_ride[359016]: 167 167
Nov 29 03:32:24 np0005539550 systemd[1]: libpod-9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81.scope: Deactivated successfully.
Nov 29 03:32:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:24.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:24 np0005539550 podman[358998]: 2025-11-29 08:32:24.589113324 +0000 UTC m=+1.115677380 container attach 9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:32:24 np0005539550 podman[358998]: 2025-11-29 08:32:24.590443917 +0000 UTC m=+1.117007973 container died 9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:32:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-aff94e7c6348cbbc0042ec299aa5130c8bf558e3e8918fce2794fe90d3bbfecd-merged.mount: Deactivated successfully.
Nov 29 03:32:24 np0005539550 podman[358998]: 2025-11-29 08:32:24.642245152 +0000 UTC m=+1.168809218 container remove 9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ride, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:32:24 np0005539550 systemd[1]: libpod-conmon-9d547f06dc18e98ac66079c6c0c92ac393b2bf0cf4ba6df3e834f4c45dc5aa81.scope: Deactivated successfully.
Nov 29 03:32:24 np0005539550 podman[359022]: 2025-11-29 08:32:24.726142222 +0000 UTC m=+0.296964409 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
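The ovn_controller line above is podman's scheduled health check completing (health_status=healthy, health_failing_streak=0); per the embedded config_data, the configured test is the container's /openstack/healthcheck script. The same check can be run on demand, sketched here:

import subprocess

# Runs the container's configured healthcheck once; exit code 0 means healthy.
res = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
print("healthy" if res.returncode == 0 else f"unhealthy (rc={res.returncode})")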
Nov 29 03:32:24 np0005539550 podman[359069]: 2025-11-29 08:32:24.81314335 +0000 UTC m=+0.037818021 container create d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:32:24 np0005539550 systemd[1]: Started libpod-conmon-d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d.scope.
Nov 29 03:32:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:32:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599ccb800382c09a653f1f3ef5526b8244b16d2eedcdea35aa810d4775957f09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599ccb800382c09a653f1f3ef5526b8244b16d2eedcdea35aa810d4775957f09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599ccb800382c09a653f1f3ef5526b8244b16d2eedcdea35aa810d4775957f09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/599ccb800382c09a653f1f3ef5526b8244b16d2eedcdea35aa810d4775957f09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:32:24 np0005539550 podman[359069]: 2025-11-29 08:32:24.797812201 +0000 UTC m=+0.022486882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:32:25 np0005539550 podman[359069]: 2025-11-29 08:32:25.041225019 +0000 UTC m=+0.265899700 container init d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:32:25 np0005539550 podman[359069]: 2025-11-29 08:32:25.048289448 +0000 UTC m=+0.272964129 container start d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:32:25 np0005539550 podman[359069]: 2025-11-29 08:32:25.051974862 +0000 UTC m=+0.276649573 container attach d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:32:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:25.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]: {
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]:        "osd_id": 0,
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]:        "type": "bluestore"
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]:    }
Nov 29 03:32:25 np0005539550 hopeful_bouman[359086]: }
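hopeful_bouman's output matches the shape of "ceph-volume raw list": one record per OSD data device, keyed by osd_uuid. A sketch that reads it and relates the fields back to the LVM listing earlier:

import json

raw_list = json.loads("""
{ "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
    "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0, "type": "bluestore" } }
""")  # trimmed from the output above

for osd_uuid, rec in raw_list.items():
    # osd_uuid here equals the ceph.osd_fsid LV tag from the lvm listing.
    print(f"osd.{rec['osd_id']} ({rec['type']}) uuid={osd_uuid} "
          f"on {rec['device']} cluster={rec['ceph_fsid']}")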
Nov 29 03:32:25 np0005539550 systemd[1]: libpod-d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d.scope: Deactivated successfully.
Nov 29 03:32:25 np0005539550 podman[359069]: 2025-11-29 08:32:25.915989202 +0000 UTC m=+1.140663883 container died d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:32:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-599ccb800382c09a653f1f3ef5526b8244b16d2eedcdea35aa810d4775957f09-merged.mount: Deactivated successfully.
Nov 29 03:32:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 474 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 3.8 MiB/s wr, 283 op/s
Nov 29 03:32:26 np0005539550 podman[359069]: 2025-11-29 08:32:26.198289248 +0000 UTC m=+1.422963929 container remove d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:32:26 np0005539550 systemd[1]: libpod-conmon-d1615189e2b93dd9b0a5190a2135d990c7f94f71039d7d50cdba2d8151b3660d.scope: Deactivated successfully.
Nov 29 03:32:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:32:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:26.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:32:27 np0005539550 nova_compute[257631]: 2025-11-29 08:32:27.401 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:27.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.3 MiB/s wr, 166 op/s
Nov 29 03:32:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:28 np0005539550 nova_compute[257631]: 2025-11-29 08:32:28.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c88f0e1e-e01f-48e6-b1ff-3e4898b0dc58 does not exist
Nov 29 03:32:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 90b14027-51cd-4ed3-86cd-2b2962927451 does not exist
Nov 29 03:32:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c60f6f23-f0ba-44ef-bb6f-2eb058025052 does not exist
Nov 29 03:32:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:29.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 162 op/s
Nov 29 03:32:30 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:32:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:31.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:32 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Nov 29 03:32:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 505 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 216 op/s
Nov 29 03:32:32 np0005539550 nova_compute[257631]: 2025-11-29 08:32:32.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:32.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:32 np0005539550 nova_compute[257631]: 2025-11-29 08:32:32.767 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Nov 29 03:32:32 np0005539550 nova_compute[257631]: 2025-11-29 08:32:32.859 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:33.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:33 np0005539550 nova_compute[257631]: 2025-11-29 08:32:33.682 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 527 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 154 op/s
Nov 29 03:32:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:35.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:35 np0005539550 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Nov 29 03:32:35 np0005539550 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000a3.scope: Consumed 13.964s CPU time.
Nov 29 03:32:35 np0005539550 systemd-machined[216673]: Machine qemu-88-instance-000000a3 terminated.
Nov 29 03:32:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 565 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1022 KiB/s rd, 6.1 MiB/s wr, 179 op/s
Nov 29 03:32:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:36.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:36 np0005539550 nova_compute[257631]: 2025-11-29 08:32:36.788 257641 INFO nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance shutdown successfully after 14 seconds.#033[00m
Nov 29 03:32:36 np0005539550 nova_compute[257631]: 2025-11-29 08:32:36.796 257641 INFO nova.virt.libvirt.driver [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance destroyed successfully.#033[00m
Nov 29 03:32:36 np0005539550 nova_compute[257631]: 2025-11-29 08:32:36.802 257641 INFO nova.virt.libvirt.driver [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance destroyed successfully.#033[00m
Nov 29 03:32:37 np0005539550 nova_compute[257631]: 2025-11-29 08:32:37.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:37.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:37 np0005539550 nova_compute[257631]: 2025-11-29 08:32:37.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 555 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 490 KiB/s rd, 4.1 MiB/s wr, 124 op/s
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.210 257641 INFO nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Deleting instance files /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_del#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.211 257641 INFO nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Deletion of /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_del complete#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.463 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.464 257641 INFO nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Creating image(s)#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.488 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:38.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.516 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.559 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.563 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.642 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
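[Editor's note] The two lines above show how Nova probes the cached base image: "qemu-img info" is wrapped in oslo.concurrency's prlimit helper so a malformed image cannot consume unbounded address space or CPU during the probe (--as=1073741824 bytes, --cpu=30 seconds). A minimal Python sketch of the same probe, with the command line copied verbatim from the log (an editor's illustration, not Nova's code):

    import json
    import subprocess

    # Same command Nova logged above: qemu-img runs under prlimit with a
    # 1 GiB address-space cap and a 30 s CPU-time cap.
    out = subprocess.check_output([
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de',
        '--force-share', '--output=json',
    ])
    info = json.loads(out)
    print(info.get('format'), info.get('virtual-size'))  # e.g. format and size in bytes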
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.643 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "6e1589dfec5abd76868fdc022175780e085b08de" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.643 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.644 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "6e1589dfec5abd76868fdc022175780e085b08de" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.669 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.673 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:38 np0005539550 nova_compute[257631]: 2025-11-29 08:32:38.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.234 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.309 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] resizing rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.462 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
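[Editor's note] The sequence above is the RBD-backed root-disk build: Nova shells out to "rbd import" to copy the cached base image into the vms pool, then resizes the new image to the flavor's 1 GiB root disk (1073741824 bytes). A minimal sketch of the resize step using the python-rados/python-rbd bindings; the pool name, client id, and image name are taken from the log, everything else is assumed:

    import rados
    import rbd

    # Connect as the same 'openstack' client, with the same ceph.conf,
    # that the logged CLI calls used.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk') as image:
                image.resize(1073741824)  # grow to the 1 GiB root disk from the log
    finally:
        cluster.shutdown()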
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.462 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Ensure instance console log exists: /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.463 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.463 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.464 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:39 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.465 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.470 257641 WARNING nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.476 257641 DEBUG nova.virt.libvirt.host [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.477 257641 DEBUG nova.virt.libvirt.host [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.481 257641 DEBUG nova.virt.libvirt.host [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.482 257641 DEBUG nova.virt.libvirt.host [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.483 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.483 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:36Z,direct_url=<?>,disk_format='qcow2',id=93eccffb-bacd-407f-af6f-64451dee7b21,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.484 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.484 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.484 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.485 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.485 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.485 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.486 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.486 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.486 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.486 257641 DEBUG nova.virt.hardware [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
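[Editor's note] The nova.virt.hardware lines above record the guest CPU topology search: with flavor and image limits all 0 (unconstrained, so the 65536 maximums apply), Nova enumerates (sockets, cores, threads) triples whose product equals the vCPU count, and for 1 vCPU the only candidate is 1:1:1. A simplified sketch of that enumeration (an editor's illustration of the idea, not Nova's exact algorithm):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield (sockets, cores, threads) triples whose product is vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log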
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.487 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:39.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:32:39 np0005539550 nova_compute[257631]: 2025-11-29 08:32:39.943 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 555 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 485 KiB/s rd, 4.0 MiB/s wr, 122 op/s
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.257 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.291 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.291 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.292 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.292 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.292 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:40.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763932339' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.733 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.761 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.765 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002743773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:40 np0005539550 nova_compute[257631]: 2025-11-29 08:32:40.802 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
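[Editor's note] The "ceph mon dump --format=json" calls above are how Nova discovers the monitor addresses that appear as <host name="..." port="6789"/> entries in the libvirt disk XML a few lines below. A minimal sketch of that lookup; the command line is copied from the log, and the JSON shape (a "mons" array with "addr" values like "192.168.122.100:6789/0") is assumed from the standard monmap format:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    monmap = json.loads(out)
    hosts = []
    for mon in monmap['mons']:
        addr = mon['addr'].partition('/')[0]   # drop the /nonce suffix
        host, _, port = addr.rpartition(':')
        hosts.append((host, port))
    print(hosts)  # e.g. [('192.168.122.100', '6789'), ('192.168.122.102', '6789'), ...]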
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.381 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.381 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:32:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:32:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641420752' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.406 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.408 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <uuid>f9b5e592-8e5a-4157-8cb4-ea3d779822e7</uuid>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <name>instance-000000a3</name>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServerShowV254Test-server-1192277651</nova:name>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:32:39</nova:creationTime>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <nova:user uuid="ce5327e4760541578f2a465a788d04f7">tempest-ServerShowV254Test-1467634121-project-member</nova:user>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <nova:project uuid="d2ecba0524c44f4d89c1560f849c7be4">tempest-ServerShowV254Test-1467634121</nova:project>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="93eccffb-bacd-407f-af6f-64451dee7b21"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <nova:ports/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <entry name="serial">f9b5e592-8e5a-4157-8cb4-ea3d779822e7</entry>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <entry name="uuid">f9b5e592-8e5a-4157-8cb4-ea3d779822e7</entry>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/console.log" append="off"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:32:41 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:32:41 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:32:41 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:32:41 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.498 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.499 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.499 257641 INFO nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Using config drive#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.520 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.539 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'ec2_ids' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:41.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.603 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.604 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4064MB free_disk=20.791507720947266GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.604 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.604 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.810 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.811 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance f9b5e592-8e5a-4157-8cb4-ea3d779822e7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.811 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.812 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.910 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.944 257641 INFO nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Creating config drive at /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config
Nov 29 03:32:41 np0005539550 nova_compute[257631]: 2025-11-29 08:32:41.949 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph_0ctw1o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.083 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph_0ctw1o" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 525 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 523 KiB/s rd, 5.6 MiB/s wr, 177 op/s
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.177 257641 DEBUG nova.storage.rbd_utils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] rbd image f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.180 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3450574795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.334 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.340 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.408 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.620 257641 DEBUG oslo_concurrency.processutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config f9b5e592-8e5a-4157-8cb4-ea3d779822e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.621 257641 INFO nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Deleting local config drive /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7/disk.config because it was imported into RBD.
Nov 29 03:32:42 np0005539550 systemd-machined[216673]: New machine qemu-89-instance-000000a3.
Nov 29 03:32:42 np0005539550 systemd[1]: Started Virtual Machine qemu-89-instance-000000a3.
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.732 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.774 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:32:42 np0005539550 nova_compute[257631]: 2025-11-29 08:32:42.774 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:43.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:43 np0005539550 nova_compute[257631]: 2025-11-29 08:32:43.687 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:43 np0005539550 nova_compute[257631]: 2025-11-29 08:32:43.775 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:43 np0005539550 nova_compute[257631]: 2025-11-29 08:32:43.776 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:32:43 np0005539550 nova_compute[257631]: 2025-11-29 08:32:43.776 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:32:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.009 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.009 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.009 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.009 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:32:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 533 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 181 KiB/s rd, 4.0 MiB/s wr, 130 op/s
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.426 257641 DEBUG nova.compute.manager [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.427 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.428 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for f9b5e592-8e5a-4157-8cb4-ea3d779822e7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.428 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405164.4281504, f9b5e592-8e5a-4157-8cb4-ea3d779822e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.429 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] VM Resumed (Lifecycle Event)
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.434 257641 INFO nova.virt.libvirt.driver [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance spawned successfully.
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.434 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.516 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.520 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:32:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:44.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.530 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.531 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.531 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.532 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.532 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.533 257641 DEBUG nova.virt.libvirt.driver [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.657 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.658 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405164.430799, f9b5e592-8e5a-4157-8cb4-ea3d779822e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.658 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] VM Started (Lifecycle Event)
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.809 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.816 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.840 257641 DEBUG nova.compute.manager [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:32:44 np0005539550 nova_compute[257631]: 2025-11-29 08:32:44.931 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Nov 29 03:32:45 np0005539550 podman[359641]: 2025-11-29 08:32:45.359281736 +0000 UTC m=+0.078142414 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:32:45 np0005539550 podman[359640]: 2025-11-29 08:32:45.390061627 +0000 UTC m=+0.116955149 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 29 03:32:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:45.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:45 np0005539550 nova_compute[257631]: 2025-11-29 08:32:45.909 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:45 np0005539550 nova_compute[257631]: 2025-11-29 08:32:45.910 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:45 np0005539550 nova_compute[257631]: 2025-11-29 08:32:45.910 257641 DEBUG nova.objects.instance [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 29 03:32:46 np0005539550 nova_compute[257631]: 2025-11-29 08:32:46.094 257641 DEBUG oslo_concurrency.lockutils [None req-57c6ae39-88a4-4db4-aa2f-d475d9e801a7 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 473 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.3 MiB/s wr, 189 op/s
Nov 29 03:32:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:46.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.040 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating instance_info_cache with network_info: [{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.085 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.085 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.085 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.086 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.086 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.086 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:32:47 np0005539550 nova_compute[257631]: 2025-11-29 08:32:47.411 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:47.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 452 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 194 op/s
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.483 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.484 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.484 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.484 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.484 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.485 257641 INFO nova.compute.manager [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Terminating instance
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.486 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "refresh_cache-f9b5e592-8e5a-4157-8cb4-ea3d779822e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.486 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquired lock "refresh_cache-f9b5e592-8e5a-4157-8cb4-ea3d779822e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.486 257641 DEBUG nova.network.neutron [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:32:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:48.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.689 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:48 np0005539550 nova_compute[257631]: 2025-11-29 08:32:48.878 257641 DEBUG nova.network.neutron [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:32:49 np0005539550 nova_compute[257631]: 2025-11-29 08:32:49.172 257641 DEBUG nova.network.neutron [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:49 np0005539550 nova_compute[257631]: 2025-11-29 08:32:49.195 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Releasing lock "refresh_cache-f9b5e592-8e5a-4157-8cb4-ea3d779822e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:32:49 np0005539550 nova_compute[257631]: 2025-11-29 08:32:49.196 257641 DEBUG nova.compute.manager [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:32:49 np0005539550 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000a3.scope: Deactivated successfully.
Nov 29 03:32:49 np0005539550 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000a3.scope: Consumed 6.621s CPU time.
Nov 29 03:32:49 np0005539550 systemd-machined[216673]: Machine qemu-89-instance-000000a3 terminated.
Nov 29 03:32:49 np0005539550 nova_compute[257631]: 2025-11-29 08:32:49.418 257641 INFO nova.virt.libvirt.driver [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance destroyed successfully.
Nov 29 03:32:49 np0005539550 nova_compute[257631]: 2025-11-29 08:32:49.419 257641 DEBUG nova.objects.instance [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lazy-loading 'resources' on Instance uuid f9b5e592-8e5a-4157-8cb4-ea3d779822e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:32:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:49.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 452 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.224 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.228 257641 INFO nova.virt.libvirt.driver [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Deleting instance files /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_del
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.229 257641 INFO nova.virt.libvirt.driver [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Deletion of /var/lib/nova/instances/f9b5e592-8e5a-4157-8cb4-ea3d779822e7_del complete
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.330 257641 INFO nova.compute.manager [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Took 1.13 seconds to destroy the instance on the hypervisor.
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.331 257641 DEBUG oslo.service.loopingcall [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.331 257641 DEBUG nova.compute.manager [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.331 257641 DEBUG nova.network.neutron [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:32:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:50.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.906 257641 DEBUG nova.network.neutron [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:32:50 np0005539550 nova_compute[257631]: 2025-11-29 08:32:50.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.052 257641 DEBUG nova.network.neutron [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.079 257641 INFO nova.compute.manager [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Took 0.75 seconds to deallocate network for instance.
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.156 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.157 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.253 257641 DEBUG oslo_concurrency.processutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:51.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3354864543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.704 257641 DEBUG oslo_concurrency.processutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.712 257641 DEBUG nova.compute.provider_tree [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.788 257641 DEBUG nova.scheduler.client.report [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.822 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:51 np0005539550 nova_compute[257631]: 2025-11-29 08:32:51.891 257641 INFO nova.scheduler.client.report [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Deleted allocations for instance f9b5e592-8e5a-4157-8cb4-ea3d779822e7
Nov 29 03:32:52 np0005539550 nova_compute[257631]: 2025-11-29 08:32:52.065 257641 DEBUG oslo_concurrency.lockutils [None req-b29ac9b0-e88f-4105-b9f3-033ee8087009 ce5327e4760541578f2a465a788d04f7 d2ecba0524c44f4d89c1560f849c7be4 - - default default] Lock "f9b5e592-8e5a-4157-8cb4-ea3d779822e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 413 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 243 op/s
Nov 29 03:32:52 np0005539550 nova_compute[257631]: 2025-11-29 08:32:52.447 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:52.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:53.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:53 np0005539550 nova_compute[257631]: 2025-11-29 08:32:53.691 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:32:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:53 np0005539550 nova_compute[257631]: 2025-11-29 08:32:53.984 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:53 np0005539550 nova_compute[257631]: 2025-11-29 08:32:53.984 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.010 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.107 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.107 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.118 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.118 257641 INFO nova.compute.claims [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:32:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 296 KiB/s wr, 195 op/s
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.289 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:32:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:54.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:32:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2861194220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.761 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.769 257641 DEBUG nova.compute.provider_tree [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.815 257641 DEBUG nova.scheduler.client.report [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.863 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.864 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.925 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.925 257641 DEBUG nova.network.neutron [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.949 257641 INFO nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:32:54 np0005539550 nova_compute[257631]: 2025-11-29 08:32:54.988 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.163 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.164 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.165 257641 INFO nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Creating image(s)
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.245 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.277 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.301 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.307 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:55 np0005539550 podman[359765]: 2025-11-29 08:32:55.343177865 +0000 UTC m=+0.083492241 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.387 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
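
Before touching the cached base image, nova inspects it with qemu-img wrapped in oslo's prlimit shim, so a corrupt or malicious image cannot exhaust the host (1 GiB address space, 30 s CPU, per the command line above). A sketch of the same guarded probe; ProcessLimits is oslo.concurrency's wrapper and the path is the base image from the log:

    import json
    from oslo_concurrency import processutils

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'

    # prlimit=... makes execute() re-exec the command under
    # `python -m oslo_concurrency.prlimit --as=... --cpu=...`, as logged.
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', base, '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))

    info = json.loads(out)
    print(info['format'], info['virtual-size'])
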
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.388 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.389 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.389 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
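
The three lockutils lines show the image-cache guard: the lock key is the base image's hash, so concurrent boots of the same image fetch it once, and a held time of 0.000s means this boot found the file already cached. A stripped-down version of the pattern, with a hypothetical fetch_func standing in for the Glance download:

    import os
    from oslo_concurrency import lockutils

    def cached_fetch(image_hash, fetch_func, target_path):
        # Serialize on the base-image hash: one downloader per image; every
        # other request blocks here, then finds the file already present.
        with lockutils.lock(image_hash):
            if not os.path.exists(target_path):
                fetch_func(target_path)
        return target_path
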
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.415 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.419 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:32:55 np0005539550 nova_compute[257631]: 2025-11-29 08:32:55.476 257641 DEBUG nova.policy [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:32:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:55.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 409 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 345 KiB/s wr, 194 op/s
Nov 29 03:32:56 np0005539550 nova_compute[257631]: 2025-11-29 08:32:56.307 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.888s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:32:56 np0005539550 nova_compute[257631]: 2025-11-29 08:32:56.383 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
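
Two steps above: the base file is copied into the `vms` pool as a format-2 image (`rbd import`, 0.888s), then grown to the flavor's 1 GiB root disk. A sketch of both, assuming the python-rados/python-rbd bindings that nova's rbd_utils builds on:

    import subprocess
    import rados
    import rbd

    name = '373a37da-5f23-4b61-b901-b36e7b8a1f46_disk'
    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'

    # 1. Import the flat base file as a format-2 RBD image (matches the log).
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', base, name, '--image-format=2',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    # 2. Resize to 1073741824 bytes, as rbd_utils.resize logs above.
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, name) as image:
                image.resize(1073741824)
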
Nov 29 03:32:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:57 np0005539550 nova_compute[257631]: 2025-11-29 08:32:57.450 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:32:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:57.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:32:57 np0005539550 nova_compute[257631]: 2025-11-29 08:32:57.812 257641 DEBUG nova.objects.instance [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 373a37da-5f23-4b61-b901-b36e7b8a1f46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:32:57 np0005539550 nova_compute[257631]: 2025-11-29 08:32:57.842 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:32:57 np0005539550 nova_compute[257631]: 2025-11-29 08:32:57.842 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Ensure instance console log exists: /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:32:57 np0005539550 nova_compute[257631]: 2025-11-29 08:32:57.842 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:32:57 np0005539550 nova_compute[257631]: 2025-11-29 08:32:57.843 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:32:57 np0005539550 nova_compute[257631]: 2025-11-29 08:32:57.843 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 439 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 133 op/s
Nov 29 03:32:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:32:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:32:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:32:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:32:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:32:58 np0005539550 nova_compute[257631]: 2025-11-29 08:32:58.569 257641 DEBUG nova.network.neutron [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Successfully created port: cb992772-2542-4728-9c87-11f05dddcf88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:32:58 np0005539550 nova_compute[257631]: 2025-11-29 08:32:58.695 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:32:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:32:59
Nov 29 03:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'images']
Nov 29 03:32:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
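
The balancer block above is the mgr's periodic upmap pass: it considered all eleven pools and prepared 0 of an allowed 10 changes, i.e. PG placement is already even. The same information is queryable on demand; a sketch, assuming a keyring with mgr caps (client.admin on a lab cluster):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'balancer', 'status', '--format=json'],
        check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status.get('active'), status.get('mode'))  # e.g. True upmap
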
Nov 29 03:32:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:32:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:32:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:32:59.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:32:59 np0005539550 nova_compute[257631]: 2025-11-29 08:32:59.800 257641 DEBUG nova.network.neutron [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Successfully updated port: cb992772-2542-4728-9c87-11f05dddcf88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:32:59 np0005539550 nova_compute[257631]: 2025-11-29 08:32:59.832 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:32:59 np0005539550 nova_compute[257631]: 2025-11-29 08:32:59.832 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:32:59 np0005539550 nova_compute[257631]: 2025-11-29 08:32:59.832 257641 DEBUG nova.network.neutron [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:00 np0005539550 nova_compute[257631]: 2025-11-29 08:33:00.011 257641 DEBUG nova.network.neutron [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:33:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 439 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 87 op/s
Nov 29 03:33:00 np0005539550 nova_compute[257631]: 2025-11-29 08:33:00.324 257641 DEBUG nova.compute.manager [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-changed-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:00 np0005539550 nova_compute[257631]: 2025-11-29 08:33:00.324 257641 DEBUG nova.compute.manager [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Refreshing instance network info cache due to event network-changed-cb992772-2542-4728-9c87-11f05dddcf88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:00 np0005539550 nova_compute[257631]: 2025-11-29 08:33:00.325 257641 DEBUG oslo_concurrency.lockutils [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.177 257641 DEBUG nova.network.neutron [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updating instance_info_cache with network_info: [{"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.334 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.335 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Instance network_info: |[{"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
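
Everything libvirt will need for the guest's interface element is already in the network_info blob logged twice above: MAC, tap device name, MTU, and fixed IPs. A short extraction sketch over one VIF entry, abridged to the fields used (the same values reappear verbatim in the domain XML further down):

    vif = {
        'id': 'cb992772-2542-4728-9c87-11f05dddcf88',
        'address': 'fa:16:3e:1b:84:b6',
        'devname': 'tapcb992772-25',
        'network': {'subnets': [{'ips': [{'address': '10.100.0.3'}]}],
                    'meta': {'mtu': 1442}},
    }

    mac = vif['address']
    tap = vif['devname']
    mtu = vif['network']['meta']['mtu']
    fixed_ips = [ip['address']
                 for subnet in vif['network']['subnets']
                 for ip in subnet['ips']]
    print(mac, tap, mtu, fixed_ips)
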
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.335 257641 DEBUG oslo_concurrency.lockutils [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.335 257641 DEBUG nova.network.neutron [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Refreshing network info cache for port cb992772-2542-4728-9c87-11f05dddcf88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.337 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Start _get_guest_xml network_info=[{"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.341 257641 WARNING nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.345 257641 DEBUG nova.virt.libvirt.host [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.346 257641 DEBUG nova.virt.libvirt.host [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.351 257641 DEBUG nova.virt.libvirt.host [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.352 257641 DEBUG nova.virt.libvirt.host [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
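
The driver probes for a usable CPU controller, first via cgroups v1 (missing on this RHEL 9 host) and then v2 (found). On a unified-hierarchy host the v2 check amounts to reading the root controllers file; a minimal sketch of that probe:

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # On cgroups v2 this file lists the root-level controllers, e.g.
        # "cpuset cpu io memory hugetlb pids rdma misc".
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # not a unified (v2) cgroup host

    print(has_cgroupsv2_cpu_controller())
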
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.353 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.353 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.354 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.354 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.355 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.355 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.355 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.356 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.356 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.356 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.357 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.357 257641 DEBUG nova.virt.hardware [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
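
The hardware.py run above is the whole topology negotiation for this m1.nano guest: flavor and image impose no limits or preferences (0:0:0), the default ceiling is 65536 per dimension, and the only (sockets, cores, threads) triple that multiplies out to 1 vCPU is 1:1:1. A toy enumeration of the same search:

    from itertools import product

    def possible_topologies(vcpus, max_each=65536):
        # Yield (sockets, cores, threads) whose product equals the vCPU count,
        # bounded by the 65536 default limit shown in the log.
        bound = min(vcpus, max_each)
        for s, c, t in product(range(1, bound + 1), repeat=3):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
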
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.360 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:01.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724192583' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.833 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.923 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:01 np0005539550 nova_compute[257631]: 2025-11-29 08:33:01.929 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 528 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 5.6 MiB/s wr, 179 op/s
Nov 29 03:33:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650003077' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.379 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
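
`ceph mon dump --format=json` runs twice here, apparently once per rbd-backed device (the root disk, then the .config drive checked in between); it is how nova learns the monitor addresses that become the host elements in the XML below. A sketch of pulling the legacy v1 endpoints out of the dump, assuming the addrvec layout current Ceph emits:

    import json
    from oslo_concurrency import processutils

    out, _ = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    hosts = []
    for mon in json.loads(out)['mons']:
        for addr in mon['public_addrs']['addrvec']:
            if addr['type'] == 'v1':           # legacy port 6789, as in the XML
                host, port = addr['addr'].rsplit(':', 1)
                hosts.append((host, port))
    print(hosts)  # [('192.168.122.100', '6789'), ...]
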
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.381 257641 DEBUG nova.virt.libvirt.vif [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1560374091',display_name='tempest-TestNetworkBasicOps-server-1560374091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1560374091',id=165,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/rdKIQ1picW6k9yVP1vIGJCfhx1mYRA+qv1GHgxy6GSeRsc6iyJz5SRou7NDR5yggNyTPyxaWXld++AOQcDLbRNPEryDxwikAmrBHKlKjJKKJoztumbkIHw2GknoBAaw==',key_name='tempest-TestNetworkBasicOps-1173164016',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-0shnke0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=373a37da-5f23-4b61-b901-b36e7b8a1f46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.382 257641 DEBUG nova.network.os_vif_util [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.383 257641 DEBUG nova.network.os_vif_util [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:84:b6,bridge_name='br-int',has_traffic_filtering=True,id=cb992772-2542-4728-9c87-11f05dddcf88,network=Network(0c12fb7c-b7e7-49dd-a7c7-9ead1e551738),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb992772-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
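
nova_to_osvif_vif turns the neutron-style dict into the typed os-vif object shown in the Converted line. A rough sketch of building the same VIFOpenVSwitch by hand with the os_vif library (field names taken from the logged object; illustrative, not nova's exact code path):

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # register the versioned-object classes and plugins

    vif = vif_obj.VIFOpenVSwitch(
        id='cb992772-2542-4728-9c87-11f05dddcf88',
        address='fa:16:3e:1b:84:b6',
        bridge_name='br-int',
        vif_name='tapcb992772-25',
        has_traffic_filtering=True,
        preserve_on_delete=False,
    )
    print(vif.vif_name, vif.bridge_name)
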
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.384 257641 DEBUG nova.objects.instance [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 373a37da-5f23-4b61-b901-b36e7b8a1f46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.399 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <uuid>373a37da-5f23-4b61-b901-b36e7b8a1f46</uuid>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <name>instance-000000a5</name>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkBasicOps-server-1560374091</nova:name>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:33:01</nova:creationTime>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <nova:port uuid="cb992772-2542-4728-9c87-11f05dddcf88">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <entry name="serial">373a37da-5f23-4b61-b901-b36e7b8a1f46</entry>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <entry name="uuid">373a37da-5f23-4b61-b901-b36e7b8a1f46</entry>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/373a37da-5f23-4b61-b901-b36e7b8a1f46_disk">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/373a37da-5f23-4b61-b901-b36e7b8a1f46_disk.config">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:1b:84:b6"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <target dev="tapcb992772-25"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/console.log" append="off"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:33:02 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:33:02 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:33:02 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:33:02 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.400 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Preparing to wait for external event network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.400 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.401 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.401 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
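[Annotation] The prepare/lock sequence above is Nova registering to wait for the network-vif-plugged event that Neutron sends back once the port is wired up. A sketch of that notification expressed as a raw call against Nova's os-server-external-events API; the endpoint URL and token are placeholders, not values from this log, while the server UUID and port tag are:

    import requests  # assumption: 'requests' is available

    NOVA_ENDPOINT = "http://nova-api.example:8774/v2.1"  # hypothetical endpoint
    TOKEN = "<keystone-token>"                           # hypothetical credential

    payload = {
        "events": [{
            "name": "network-vif-plugged",
            "server_uuid": "373a37da-5f23-4b61-b901-b36e7b8a1f46",  # from the log
            "tag": "cb992772-2542-4728-9c87-11f05dddcf88",          # port id from the log
            "status": "completed",
        }]
    }
    resp = requests.post(f"{NOVA_ENDPOINT}/os-server-external-events",
                         json=payload,
                         headers={"X-Auth-Token": TOKEN})
    print(resp.status_code)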
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.402 257641 DEBUG nova.virt.libvirt.vif [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:32:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1560374091',display_name='tempest-TestNetworkBasicOps-server-1560374091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1560374091',id=165,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/rdKIQ1picW6k9yVP1vIGJCfhx1mYRA+qv1GHgxy6GSeRsc6iyJz5SRou7NDR5yggNyTPyxaWXld++AOQcDLbRNPEryDxwikAmrBHKlKjJKKJoztumbkIHw2GknoBAaw==',key_name='tempest-TestNetworkBasicOps-1173164016',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-0shnke0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:32:55Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=373a37da-5f23-4b61-b901-b36e7b8a1f46,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.402 257641 DEBUG nova.network.os_vif_util [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.402 257641 DEBUG nova.network.os_vif_util [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:84:b6,bridge_name='br-int',has_traffic_filtering=True,id=cb992772-2542-4728-9c87-11f05dddcf88,network=Network(0c12fb7c-b7e7-49dd-a7c7-9ead1e551738),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb992772-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.403 257641 DEBUG os_vif [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:84:b6,bridge_name='br-int',has_traffic_filtering=True,id=cb992772-2542-4728-9c87-11f05dddcf88,network=Network(0c12fb7c-b7e7-49dd-a7c7-9ead1e551738),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb992772-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.404 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.405 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.410 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.410 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcb992772-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.411 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcb992772-25, col_values=(('external_ids', {'iface-id': 'cb992772-2542-4728-9c87-11f05dddcf88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:84:b6', 'vm-uuid': '373a37da-5f23-4b61-b901-b36e7b8a1f46'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.412 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:02 np0005539550 NetworkManager[49039]: <info>  [1764405182.4139] manager: (tapcb992772-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.415 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.421 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.422 257641 INFO os_vif [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:84:b6,bridge_name='br-int',has_traffic_filtering=True,id=cb992772-2542-4728-9c87-11f05dddcf88,network=Network(0c12fb7c-b7e7-49dd-a7c7-9ead1e551738),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb992772-25')#033[00m
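[Annotation] The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are issued by os-vif over the OVSDB IDL. Roughly the same effect can be reproduced with ovs-vsctl; a sketch via subprocess, with the port name, iface-id, MAC, and VM UUID copied from the log (quoting the MAC because its colons would otherwise confuse the OVSDB value syntax):

    import subprocess

    PORT = "tapcb992772-25"
    IFACE_ID = "cb992772-2542-4728-9c87-11f05dddcf88"

    # AddPortCommand(bridge=br-int, port=..., may_exist=True) plus the
    # DbSetCommand on the Interface row, as one ovs-vsctl transaction.
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", PORT,
         "--", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}",
         "external_ids:iface-status=active",
         'external_ids:attached-mac="fa:16:3e:1b:84:b6"',
         "external_ids:vm-uuid=373a37da-5f23-4b61-b901-b36e7b8a1f46"],
        check=True,
    )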
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.485 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.486 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.486 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:1b:84:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.487 257641 INFO nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Using config drive#033[00m
Nov 29 03:33:02 np0005539550 nova_compute[257631]: 2025-11-29 08:33:02.519 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.297 257641 INFO nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Creating config drive at /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/disk.config#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.303 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0jnfirwg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.444 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0jnfirwg" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.475 257641 DEBUG nova.storage.rbd_utils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.479 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/disk.config 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.552 257641 DEBUG nova.network.neutron [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updated VIF entry in instance network info cache for port cb992772-2542-4728-9c87-11f05dddcf88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.553 257641 DEBUG nova.network.neutron [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updating instance_info_cache with network_info: [{"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.569 257641 DEBUG oslo_concurrency.lockutils [req-0c5b011c-fe1a-42ad-8733-5fb4f715b0d8 req-b0368863-904b-4bec-93d7-7d2f272a23a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:03.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.647 257641 DEBUG oslo_concurrency.processutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/disk.config 373a37da-5f23-4b61-b901-b36e7b8a1f46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.648 257641 INFO nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Deleting local config drive /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46/disk.config because it was imported into RBD.#033[00m
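[Annotation] At this point the mkisofs-built config drive exists only as the RBD image referenced by the <disk type="network" device="cdrom"> element in the domain XML above. A quick way to confirm the import, assuming the same /etc/ceph/ceph.conf and 'openstack' cephx identity that appear in the rbd import command logged above:

    import subprocess

    # Assumption: run on the compute node, where the ceph.conf and the
    # 'openstack' keyring used by nova are in place.
    subprocess.run(
        ["rbd", "info",
         "vms/373a37da-5f23-4b61-b901-b36e7b8a1f46_disk.config",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )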
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.740 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:03 np0005539550 kernel: tapcb992772-25: entered promiscuous mode
Nov 29 03:33:03 np0005539550 NetworkManager[49039]: <info>  [1764405183.7442] manager: (tapcb992772-25): new Tun device (/org/freedesktop/NetworkManager/Devices/333)
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.744 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:03Z|00751|binding|INFO|Claiming lport cb992772-2542-4728-9c87-11f05dddcf88 for this chassis.
Nov 29 03:33:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:03Z|00752|binding|INFO|cb992772-2542-4728-9c87-11f05dddcf88: Claiming fa:16:3e:1b:84:b6 10.100.0.3
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.758 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:84:b6 10.100.0.3'], port_security=['fa:16:3e:1b:84:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '373a37da-5f23-4b61-b901-b36e7b8a1f46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba924909-f58e-40ed-a02f-8b6d37fda8d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ca008e3-9cc0-429b-8a94-8266d31e7151, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=cb992772-2542-4728-9c87-11f05dddcf88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.760 158978 INFO neutron.agent.ovn.metadata.agent [-] Port cb992772-2542-4728-9c87-11f05dddcf88 in datapath 0c12fb7c-b7e7-49dd-a7c7-9ead1e551738 bound to our chassis#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.761 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c12fb7c-b7e7-49dd-a7c7-9ead1e551738#033[00m
Nov 29 03:33:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:03Z|00753|binding|INFO|Setting lport cb992772-2542-4728-9c87-11f05dddcf88 ovn-installed in OVS
Nov 29 03:33:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:03Z|00754|binding|INFO|Setting lport cb992772-2542-4728-9c87-11f05dddcf88 up in Southbound
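[Annotation] ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound DB; that state change is what ultimately produces the network-vif-plugged notification Nova is waiting on. A sketch for inspecting the binding, assuming ovn-sbctl can reach the Southbound DB from wherever this runs (on many deployments that means a controller node or an explicit --db=... option):

    import subprocess

    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=cb992772-2542-4728-9c87-11f05dddcf88"],
        check=True,
    )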
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:03 np0005539550 nova_compute[257631]: 2025-11-29 08:33:03.769 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.777 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8d5a14-1d70-4577-a5cf-292043673e15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.778 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0c12fb7c-b1 in ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:33:03 np0005539550 systemd-machined[216673]: New machine qemu-90-instance-000000a5.
Nov 29 03:33:03 np0005539550 systemd-udevd[360092]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.780 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0c12fb7c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.780 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bc967bb4-8807-419a-b72f-2d2480f676aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.781 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba9e699-e573-4272-b91c-4932af86c191]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 systemd[1]: Started Virtual Machine qemu-90-instance-000000a5.
Nov 29 03:33:03 np0005539550 NetworkManager[49039]: <info>  [1764405183.7980] device (tapcb992772-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:33:03 np0005539550 NetworkManager[49039]: <info>  [1764405183.7992] device (tapcb992772-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.796 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[667bf875-1bad-4de6-aee9-f98e0cc59eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.823 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[185a3c2d-4998-425b-92eb-d2cb0d9e9f5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.856 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9169bb01-5057-4d01-b5f2-df16c3564a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.863 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c50243-4e4f-481a-ba28-8e7b1889bc73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 systemd-udevd[360108]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:03 np0005539550 NetworkManager[49039]: <info>  [1764405183.8644] manager: (tap0c12fb7c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/334)
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.898 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e96efd5f-adc8-4d77-8017-58c2584cc203]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.902 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[86b599e5-65b3-4cf2-8f9b-46ba09e8b056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 NetworkManager[49039]: <info>  [1764405183.9262] device (tap0c12fb7c-b0): carrier: link connected
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.930 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0e39e185-6ec0-429b-8a9f-5b012ce0735b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.948 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[923b0460-6897-4b6e-ae07-02ad9cd7aa9e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c12fb7c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:b0:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823159, 'reachable_time': 43447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360164, 'error': None, 'target': 'ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.966 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3e07a9f0-66bd-4fd3-8237-73affc4b2e3d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee2:b051'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 823159, 'tstamp': 823159}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360165, 'error': None, 'target': 'ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:03.982 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6a96c7b1-fb6b-4999-84ff-6ad29c86c87d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c12fb7c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e2:b0:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823159, 'reachable_time': 43447, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360166, 'error': None, 'target': 'ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.015 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63ce04ca-9a66-4f27-835e-7be54f7b99ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.077 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b3baa7c5-8640-4fe6-b8f1-060b01f66842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.079 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c12fb7c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.079 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.080 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c12fb7c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:04 np0005539550 NetworkManager[49039]: <info>  [1764405184.0824] manager: (tap0c12fb7c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Nov 29 03:33:04 np0005539550 kernel: tap0c12fb7c-b0: entered promiscuous mode
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.082 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.085 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c12fb7c-b0, col_values=(('external_ids', {'iface-id': '2825232a-c8aa-46c3-8863-29674d562705'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:04 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:04Z|00755|binding|INFO|Releasing lport 2825232a-c8aa-46c3-8863-29674d562705 from this chassis (sb_readonly=0)
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.086 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.103 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c12fb7c-b7e7-49dd-a7c7-9ead1e551738.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c12fb7c-b7e7-49dd-a7c7-9ead1e551738.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.104 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7e30f11b-5c2b-487c-b684-205ff6270f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.104 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/0c12fb7c-b7e7-49dd-a7c7-9ead1e551738.pid.haproxy
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 0c12fb7c-b7e7-49dd-a7c7-9ead1e551738
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:33:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:04.105 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'env', 'PROCESS_TAG=haproxy-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0c12fb7c-b7e7-49dd-a7c7-9ead1e551738.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
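[Annotation] The haproxy instance launched above binds 169.254.169.254:80 inside the ovnmeta-<network> namespace and forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header shown in the generated config. A sketch for checking that the listener is up from the compute node (namespace name from the log; note a real metadata request must originate from the instance so the proxy can map the source IP to a Neutron port, so this only probes the listener):

    import subprocess

    NS = "ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738"
    subprocess.run(
        ["ip", "netns", "exec", NS, "curl", "-sv", "http://169.254.169.254/"],
        check=False,  # any HTTP response at all proves the proxy is listening
    )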
Nov 29 03:33:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 572 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 363 KiB/s rd, 7.9 MiB/s wr, 149 op/s
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.216 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405184.215784, 373a37da-5f23-4b61-b901-b36e7b8a1f46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.216 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] VM Started (Lifecycle Event)#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.244 257641 DEBUG nova.compute.manager [req-d209644b-1a75-47f3-ac2c-db3d6c02839d req-2fe8ef17-ae12-4a35-9c88-fe1e75cc050b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.245 257641 DEBUG oslo_concurrency.lockutils [req-d209644b-1a75-47f3-ac2c-db3d6c02839d req-2fe8ef17-ae12-4a35-9c88-fe1e75cc050b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.245 257641 DEBUG oslo_concurrency.lockutils [req-d209644b-1a75-47f3-ac2c-db3d6c02839d req-2fe8ef17-ae12-4a35-9c88-fe1e75cc050b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.245 257641 DEBUG oslo_concurrency.lockutils [req-d209644b-1a75-47f3-ac2c-db3d6c02839d req-2fe8ef17-ae12-4a35-9c88-fe1e75cc050b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.245 257641 DEBUG nova.compute.manager [req-d209644b-1a75-47f3-ac2c-db3d6c02839d req-2fe8ef17-ae12-4a35-9c88-fe1e75cc050b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Processing event network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.246 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.249 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.252 257641 INFO nova.virt.libvirt.driver [-] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Instance spawned successfully.#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.253 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.269 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.273 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.307 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.308 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.308 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.308 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.309 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.309 257641 DEBUG nova.virt.libvirt.driver [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.317 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.317 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405184.2179413, 373a37da-5f23-4b61-b901-b36e7b8a1f46 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.317 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] VM Paused (Lifecycle Event)
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.370 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.374 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405184.249056, 373a37da-5f23-4b61-b901-b36e7b8a1f46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.374 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] VM Resumed (Lifecycle Event)
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.413 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.416 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.418 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405169.416023, f9b5e592-8e5a-4157-8cb4-ea3d779822e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.418 257641 INFO nova.compute.manager [-] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] VM Stopped (Lifecycle Event)
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.428 257641 INFO nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Took 9.26 seconds to spawn the instance on the hypervisor.
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.429 257641 DEBUG nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.455 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.458 257641 DEBUG nova.compute.manager [None req-3169d47f-ec8e-483e-ad3d-3ed12c823e97 - - - - - -] [instance: f9b5e592-8e5a-4157-8cb4-ea3d779822e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:04 np0005539550 podman[360240]: 2025-11-29 08:33:04.497458895 +0000 UTC m=+0.053247363 container create b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.507 257641 INFO nova.compute.manager [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Took 10.43 seconds to build instance.
Nov 29 03:33:04 np0005539550 systemd[1]: Started libpod-conmon-b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39.scope.
Nov 29 03:33:04 np0005539550 nova_compute[257631]: 2025-11-29 08:33:04.533 257641 DEBUG oslo_concurrency.lockutils [None req-cfa575df-e0a5-49c2-b4e0-a43d4421ae90 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
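The lockutils line above brackets the whole build: the per-instance lock, named after the instance UUID, was held for 10.548 s, matching the 10.43 s build time reported two entries earlier plus bookkeeping. A minimal sketch of the oslo.concurrency pattern behind these messages (the decorated function body is a placeholder; the lock name is the UUID from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("373a37da-5f23-4b61-b901-b36e7b8a1f46")
    def _locked_do_build_and_run_instance():
        # Work done while the per-instance lock is held; the "held 10.548s"
        # figure above is the wall-clock time spent inside this function.
        pass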
Nov 29 03:33:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:04.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
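The radosgw beast entries that recur every second or two throughout this section are anonymous "HEAD /" probes from 192.168.122.100 and 192.168.122.102; the cadence and anonymity suggest load-balancer health checks rather than user traffic. A hedged reproduction of the same probe; the target host and port are assumptions to be checked against the cluster's rgw_frontends setting:

    import http.client

    # Host/port are assumptions; the probes above all log HTTP 200.
    conn = http.client.HTTPConnection("np0005539550", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)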
Nov 29 03:33:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:33:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6d5c525e605f87924b567fd79fd1e43b479049e3cdff1045850564c94aa1e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:04 np0005539550 podman[360240]: 2025-11-29 08:33:04.468001057 +0000 UTC m=+0.023789555 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:33:04 np0005539550 podman[360240]: 2025-11-29 08:33:04.574688165 +0000 UTC m=+0.130476673 container init b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:33:04 np0005539550 podman[360240]: 2025-11-29 08:33:04.57999687 +0000 UTC m=+0.135785348 container start b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:33:04 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [NOTICE]   (360259) : New worker (360261) forked
Nov 29 03:33:04 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [NOTICE]   (360259) : Loading success.
Nov 29 03:33:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:05.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 624 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 9.3 MiB/s wr, 226 op/s
Nov 29 03:33:06 np0005539550 nova_compute[257631]: 2025-11-29 08:33:06.438 257641 DEBUG nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:33:06 np0005539550 nova_compute[257631]: 2025-11-29 08:33:06.439 257641 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:06 np0005539550 nova_compute[257631]: 2025-11-29 08:33:06.439 257641 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:06 np0005539550 nova_compute[257631]: 2025-11-29 08:33:06.440 257641 DEBUG oslo_concurrency.lockutils [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:06 np0005539550 nova_compute[257631]: 2025-11-29 08:33:06.440 257641 DEBUG nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] No waiting events found dispatching network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:33:06 np0005539550 nova_compute[257631]: 2025-11-29 08:33:06.440 257641 WARNING nova.compute.manager [req-01ff9888-c555-45c1-91c8-e05b06db1a46 req-744868e8-93b5-41c9-9e22-fcd151495af7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received unexpected event network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 for instance with vm_state active and task_state None.
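This WARNING is benign: Neutron delivered network-vif-plugged for port cb992772-2542-4728-9c87-11f05dddcf88 after the instance had already reached vm_state active with no task in flight, so no waiter existed and Nova simply dropped the event. If in doubt, the port's binding and status can be confirmed afterwards with, e.g., "openstack port show cb992772-2542-4728-9c87-11f05dddcf88".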
Nov 29 03:33:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:06.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:07 np0005539550 nova_compute[257631]: 2025-11-29 08:33:07.449 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:07.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 624 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.0 MiB/s wr, 312 op/s
Nov 29 03:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:33:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
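These rbd_support lines are the mgr module periodically reloading per-pool mirror-snapshot schedules; the empty "start_after=" on vms, volumes, backups, and images indicates no schedules are configured. Any that existed would be listed with, e.g., "rbd mirror snapshot schedule ls --pool vms" (assuming the rbd CLI and an appropriate keyring on this host).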
Nov 29 03:33:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:08.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:08 np0005539550 nova_compute[257631]: 2025-11-29 08:33:08.742 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:08 np0005539550 nova_compute[257631]: 2025-11-29 08:33:08.758 257641 DEBUG nova.compute.manager [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-changed-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:33:08 np0005539550 nova_compute[257631]: 2025-11-29 08:33:08.758 257641 DEBUG nova.compute.manager [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Refreshing instance network info cache due to event network-changed-cb992772-2542-4728-9c87-11f05dddcf88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:33:08 np0005539550 nova_compute[257631]: 2025-11-29 08:33:08.759 257641 DEBUG oslo_concurrency.lockutils [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:33:08 np0005539550 nova_compute[257631]: 2025-11-29 08:33:08.759 257641 DEBUG oslo_concurrency.lockutils [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:33:08 np0005539550 nova_compute[257631]: 2025-11-29 08:33:08.759 257641 DEBUG nova.network.neutron [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Refreshing network info cache for port cb992772-2542-4728-9c87-11f05dddcf88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:33:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:33:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:33:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:33:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:33:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:33:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:09.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 624 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.0 MiB/s wr, 294 op/s
Nov 29 03:33:10 np0005539550 nova_compute[257631]: 2025-11-29 08:33:10.316 257641 DEBUG nova.network.neutron [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updated VIF entry in instance network info cache for port cb992772-2542-4728-9c87-11f05dddcf88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:33:10 np0005539550 nova_compute[257631]: 2025-11-29 08:33:10.317 257641 DEBUG nova.network.neutron [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updating instance_info_cache with network_info: [{"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
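The cache entry above is plain JSON, so the addressing it records can be pulled out mechanically. A self-contained sketch over a pared-down copy of the logged entry (only the fields used here are reproduced):

    # Pared-down copy of the instance_info_cache entry logged above.
    nw_info = [{
        "id": "cb992772-2542-4728-9c87-11f05dddcf88",
        "address": "fa:16:3e:1b:84:b6",
        "network": {"subnets": [{
            "cidr": "10.100.0.0/28",
            "ips": [{"address": "10.100.0.3",
                     "floating_ips": [{"address": "192.168.122.179"}]}],
        }]},
    }]

    fixed = [ip["address"] for vif in nw_info
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]]
    floating = [fip["address"] for vif in nw_info
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]
                for fip in ip.get("floating_ips", [])]
    print(fixed, floating)  # ['10.100.0.3'] ['192.168.122.179']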
Nov 29 03:33:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:10.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:10 np0005539550 nova_compute[257631]: 2025-11-29 08:33:10.721 257641 DEBUG oslo_concurrency.lockutils [req-d3ea8102-0b1d-4b99-a33e-28e90b39bd1f req-d3f7c90b-7abd-46e2-884f-50ff51b48dd5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:33:11 np0005539550 nova_compute[257631]: 2025-11-29 08:33:11.613 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:11.612 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:33:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:11.615 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
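This pair records OVN's nb_cfg handshake: SB_Global.nb_cfg moved from 47 to 48, and the metadata agent deliberately waits 3 seconds before acknowledging so that bursts of northbound updates coalesce into a single chassis write. The current value can be inspected on the host with, e.g., "ovn-sbctl list SB_Global".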
Nov 29 03:33:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:11.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 624 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 7.0 MiB/s wr, 373 op/s
Nov 29 03:33:12 np0005539550 ceph-osd[84753]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:33:12 np0005539550 nova_compute[257631]: 2025-11-29 08:33:12.451 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:12.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:13.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:13 np0005539550 nova_compute[257631]: 2025-11-29 08:33:13.747 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 624 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.7 MiB/s wr, 305 op/s
Nov 29 03:33:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:14.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:14.617 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
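This transaction is the acknowledgement promised at 08:33:11: after the 3-second delay, the agent stamps neutron:ovn-metadata-sb-cfg=48 into Chassis_Private.external_ids, signalling that this chassis has caught up with northbound change 48. The stored value is visible with, e.g., "ovn-sbctl list Chassis_Private".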
Nov 29 03:33:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:15.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 624 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 1.4 MiB/s wr, 316 op/s
Nov 29 03:33:16 np0005539550 podman[360277]: 2025-11-29 08:33:16.321701375 +0000 UTC m=+0.052593056 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:33:16 np0005539550 podman[360276]: 2025-11-29 08:33:16.336217193 +0000 UTC m=+0.068749696 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
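Both health_status lines report healthy with a zero failing streak; per the embedded config_data, each check simply executes the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/<service>. The same check can be triggered on demand with, e.g., "podman healthcheck run ovn_metadata_agent" (container name taken from the log).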
Nov 29 03:33:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:16.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:17 np0005539550 nova_compute[257631]: 2025-11-29 08:33:17.454 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:17.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 629 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 688 KiB/s wr, 262 op/s
Nov 29 03:33:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:18.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:18 np0005539550 nova_compute[257631]: 2025-11-29 08:33:18.747 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:18.965 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:18.966 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:18.967 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:19Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1b:84:b6 10.100.0.3
Nov 29 03:33:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:19Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1b:84:b6 10.100.0.3
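The DHCPOFFER/DHCPACK pair comes from ovn-controller's pinctrl thread: with OVN there is no per-network dnsmasq, DHCP is answered natively, and the lease handed to fa:16:3e:1b:84:b6 is 10.100.0.3, the fixed IP recorded in the port cache at 08:33:10. The options served are defined in the northbound DHCP_Options table, inspectable with, e.g., "ovn-nbctl list DHCP_Options".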
Nov 29 03:33:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:19.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010905584054531918 of space, bias 1.0, pg target 3.2716752163595757 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004322465917550716 of space, bias 1.0, pg target 1.2837723775125627 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
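The autoscaler figures above are internally consistent: each pool's raw pg target is its usage fraction times its bias times a cluster-wide PG budget, and raw targets far below the current pg_num leave the pool untouched after quantizing to a power of two. A rough reconstruction; the budget of 300 (100 target PGs per OSD x an assumed 3 OSDs) is inferred from the logged numbers, not read from this cluster's configuration:

    # 'vms' figures copied from the log lines above.
    ratio, bias = 0.010905584054531918, 1.0
    pg_budget = 300  # assumed: mon_target_pg_per_osd (100) x 3 OSDs
    raw_target = ratio * bias * pg_budget
    print(raw_target)  # ~3.2716752, matching "pg target 3.2716752163595757"
    # Quantized to a power of two and clamped, so the pool stays at 32 PGs.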
Nov 29 03:33:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 629 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 676 KiB/s wr, 172 op/s
Nov 29 03:33:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.029 257641 DEBUG nova.compute.manager [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.146 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.147 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.173 257641 DEBUG nova.objects.instance [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_requests' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.191 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.191 257641 INFO nova.compute.claims [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.192 257641 DEBUG nova.objects.instance [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.207 257641 DEBUG nova.objects.instance [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.275 257641 INFO nova.compute.resource_tracker [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating resource usage from migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.275 257641 DEBUG nova.compute.resource_tracker [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Starting to track incoming migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.377 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:33:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:21.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2975590236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.914 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
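The resource tracker sizes the RBD-backed disk inventory by shelling out to the exact command captured above (also visible in the mon audit log two entries earlier). A standalone sketch of the same call and the fields Nova cares about, assuming the same /etc/ceph/ceph.conf and client.openstack keyring as the logged run:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    # Total and available bytes drive the DISK_GB inventory reported below.
    print(stats["total_bytes"], stats["total_avail_bytes"])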
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.922 257641 DEBUG nova.compute.provider_tree [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.936 257641 DEBUG nova.scheduler.client.report [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
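The inventory line above determines this node's schedulable capacity, since placement exposes (total - reserved) * allocation_ratio per resource class. Worked out with the logged values:

    inventory = {  # resource class: (total, reserved, allocation_ratio)
        "VCPU": (8, 0, 4.0),
        "MEMORY_MB": (7680, 512, 1.0),
        "DISK_GB": (20, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1 -> ~17 GB of usable disk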
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.958 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:21 np0005539550 nova_compute[257631]: 2025-11-29 08:33:21.959 257641 INFO nova.compute.manager [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Migrating
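"Migrating" marks the start of a resize for instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a: vm_state was stashed at 08:33:21.029, a resize claim succeeded against the inventory above, and migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60 is now tracked as incoming with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c. Its progress could be followed with, e.g., "openstack server migration list --server 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" (the --server filter assumes a reasonably recent client).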
Nov 29 03:33:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 696 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.8 MiB/s wr, 288 op/s
Nov 29 03:33:22 np0005539550 nova_compute[257631]: 2025-11-29 08:33:22.469 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:22.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:23 np0005539550 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:33:23 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:33:23 np0005539550 systemd-logind[788]: New session 69 of user nova.
Nov 29 03:33:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:23.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:23 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:33:23 np0005539550 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:33:23 np0005539550 nova_compute[257631]: 2025-11-29 08:33:23.751 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:23 np0005539550 systemd[360347]: Queued start job for default target Main User Target.
Nov 29 03:33:23 np0005539550 systemd[360347]: Created slice User Application Slice.
Nov 29 03:33:23 np0005539550 systemd[360347]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:33:23 np0005539550 systemd[360347]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:33:23 np0005539550 systemd[360347]: Reached target Paths.
Nov 29 03:33:23 np0005539550 systemd[360347]: Reached target Timers.
Nov 29 03:33:23 np0005539550 systemd[360347]: Starting D-Bus User Message Bus Socket...
Nov 29 03:33:23 np0005539550 systemd[360347]: Starting Create User's Volatile Files and Directories...
Nov 29 03:33:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:23 np0005539550 systemd[360347]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:33:23 np0005539550 systemd[360347]: Reached target Sockets.
Nov 29 03:33:23 np0005539550 systemd[360347]: Finished Create User's Volatile Files and Directories.
Nov 29 03:33:23 np0005539550 systemd[360347]: Reached target Basic System.
Nov 29 03:33:23 np0005539550 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:33:23 np0005539550 systemd[360347]: Reached target Main User Target.
Nov 29 03:33:23 np0005539550 systemd[360347]: Startup finished in 221ms.
Nov 29 03:33:23 np0005539550 systemd[1]: Started Session 69 of User nova.
Nov 29 03:33:24 np0005539550 systemd[1]: session-69.scope: Deactivated successfully.
Nov 29 03:33:24 np0005539550 systemd-logind[788]: Session 69 logged out. Waiting for processes to exit.
Nov 29 03:33:24 np0005539550 systemd-logind[788]: Removed session 69.
Nov 29 03:33:24 np0005539550 systemd-logind[788]: New session 71 of user nova.
Nov 29 03:33:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 708 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 241 op/s
Nov 29 03:33:24 np0005539550 systemd[1]: Started Session 71 of User nova.
Nov 29 03:33:24 np0005539550 systemd[1]: session-71.scope: Deactivated successfully.
Nov 29 03:33:24 np0005539550 systemd-logind[788]: Session 71 logged out. Waiting for processes to exit.
Nov 29 03:33:24 np0005539550 systemd-logind[788]: Removed session 71.
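The short-lived sessions 69 and 71 for UID 42436 (the nova user) beginning right after "Migrating" are consistent with the resize copying data between compute hosts over SSH as nova: each login spins up a user manager, runs briefly, and logs out. Current sessions can be checked with, e.g., "loginctl list-sessions".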
Nov 29 03:33:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:25.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:26 np0005539550 nova_compute[257631]: 2025-11-29 08:33:26.178 257641 INFO nova.compute.manager [None req-6ffc8d86-2653-4ac1-bea5-34e251cac773 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Get console output
Nov 29 03:33:26 np0005539550 nova_compute[257631]: 2025-11-29 08:33:26.189 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
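"Get console output" paired with the privsep note is a benign nuisance: the pty read returned None (likely nothing buffered yet for the freshly booted guest) and Nova ignored it rather than failing the request. The same code path is exercised by, e.g., "openstack console log show 373a37da-5f23-4b61-b901-b36e7b8a1f46".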
Nov 29 03:33:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 725 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 7.3 MiB/s wr, 271 op/s
Nov 29 03:33:26 np0005539550 podman[360420]: 2025-11-29 08:33:26.369030393 +0000 UTC m=+0.097975548 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:33:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:27 np0005539550 nova_compute[257631]: 2025-11-29 08:33:27.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:27.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 702 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.1 MiB/s wr, 257 op/s
Nov 29 03:33:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:28 np0005539550 nova_compute[257631]: 2025-11-29 08:33:28.754 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:29 np0005539550 nova_compute[257631]: 2025-11-29 08:33:29.199 257641 DEBUG nova.compute.manager [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:29 np0005539550 nova_compute[257631]: 2025-11-29 08:33:29.200 257641 DEBUG oslo_concurrency.lockutils [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:29 np0005539550 nova_compute[257631]: 2025-11-29 08:33:29.200 257641 DEBUG oslo_concurrency.lockutils [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:29 np0005539550 nova_compute[257631]: 2025-11-29 08:33:29.200 257641 DEBUG oslo_concurrency.lockutils [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:29 np0005539550 nova_compute[257631]: 2025-11-29 08:33:29.201 257641 DEBUG nova.compute.manager [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:29 np0005539550 nova_compute[257631]: 2025-11-29 08:33:29.201 257641 WARNING nova.compute.manager [req-7d4187fd-4f35-4d46-8c33-53069b18d43d req-7f1b6f80-fae3-431b-81af-39e73328b7d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state resize_migrating.#033[00m
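The five nova_compute lines at 08:33:29.199-.201 are one pass through nova's external-event path: neutron reports network-vif-unplugged, nova takes the per-instance "-events" lock, finds no registered waiter for that event, and logs it as unexpected because the in-flight resize (task_state resize_migrating) is not waiting on it. A stripped-down sketch of that dispatch pattern, with hypothetical names (the real code is nova.compute.manager.InstanceEvents):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        """Toy waiter registry: pop a waiter if one was registered for
        (instance, event), else the caller treats the event as unexpected."""
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = defaultdict(dict)  # instance_uuid -> {event: waiter}

        def prepare(self, uuid, event, waiter):
            with self._lock:
                self._waiters[uuid][event] = waiter

        def pop_instance_event(self, uuid, event):
            with self._lock:                   # the "-events" lock in the log
                return self._waiters[uuid].pop(event, None)

    ev = InstanceEvents()
    w = ev.pop_instance_event("8effb5bc-...", "network-vif-unplugged-5369324b-...")
    if w is None:
        print("Received unexpected event")     # matches the WARNING above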
Nov 29 03:33:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:29.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 702 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1004 KiB/s rd, 7.4 MiB/s wr, 225 op/s
Nov 29 03:33:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:30.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:33:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:33:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:33:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:33:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f926a5f-ff00-4156-95fd-cb53cdb7dc5c does not exist
Nov 29 03:33:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f635ce78-edb7-47e4-b559-1623615f4a34 does not exist
Nov 29 03:33:31 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev be2bf5f9-3121-4ba1-97af-7cf241a3d313 does not exist
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:33:31 np0005539550 nova_compute[257631]: 2025-11-29 08:33:31.338 257641 DEBUG nova.compute.manager [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:31 np0005539550 nova_compute[257631]: 2025-11-29 08:33:31.339 257641 DEBUG oslo_concurrency.lockutils [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:31 np0005539550 nova_compute[257631]: 2025-11-29 08:33:31.339 257641 DEBUG oslo_concurrency.lockutils [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:31 np0005539550 nova_compute[257631]: 2025-11-29 08:33:31.339 257641 DEBUG oslo_concurrency.lockutils [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:31 np0005539550 nova_compute[257631]: 2025-11-29 08:33:31.339 257641 DEBUG nova.compute.manager [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:31 np0005539550 nova_compute[257631]: 2025-11-29 08:33:31.339 257641 WARNING nova.compute.manager [req-688f2c20-90a8-436e-8570-c605183e003c req-5975f3ab-4a24-40b7-8b92-8fedad47c862 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:33:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
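The ceph-mon audit lines show the mgr driving the monitor with JSON mon_commands ({"prefix": "config generate-minimal-conf"}, {"prefix": "auth get", ...}, {"prefix": "osd tree", "states": ["destroyed"], ...}). The same interface is reachable from python-rados; a minimal sketch, assuming a readable /etc/ceph/ceph.conf and client.admin keyring on the host, as on this cephadm-managed node:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same command the mgr dispatched at 08:33:31.
        cmd = json.dumps({"prefix": "osd tree",
                          "states": ["destroyed"], "format": "json"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, json.loads(outbuf or b"{}"))
    finally:
        cluster.shutdown()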
Nov 29 03:33:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:31.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:31 np0005539550 nova_compute[257631]: 2025-11-29 08:33:31.937 257641 INFO nova.network.neutron [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating port 5369324b-4a12-4cff-807c-444de53025fa with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
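Here the resize flow rebinds the instance's port to compute-0 by updating binding:host_id and device_owner on the neutron port. An equivalent call through openstacksdk, as a sketch (the cloud name is hypothetical; credentials come from clouds.yaml):

    import openstack

    conn = openstack.connect(cloud="overcloud")   # hypothetical cloud name
    # binding_host_id maps to the 'binding:host_id' attribute in the log.
    conn.network.update_port(
        "5369324b-4a12-4cff-807c-444de53025fa",
        binding_host_id="compute-0.ctlplane.example.com",
        device_owner="compute:nova",
    )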
Nov 29 03:33:31 np0005539550 podman[360723]: 2025-11-29 08:33:31.876260276 +0000 UTC m=+0.022252375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:32 np0005539550 podman[360723]: 2025-11-29 08:33:32.144937346 +0000 UTC m=+0.290929425 container create 2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:33:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 706 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 9.3 MiB/s wr, 284 op/s
Nov 29 03:33:32 np0005539550 nova_compute[257631]: 2025-11-29 08:33:32.475 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:32 np0005539550 systemd[1]: Started libpod-conmon-2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8.scope.
Nov 29 03:33:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:33:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:32 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:33:32 np0005539550 nova_compute[257631]: 2025-11-29 08:33:32.997 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:32 np0005539550 nova_compute[257631]: 2025-11-29 08:33:32.998 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:32 np0005539550 nova_compute[257631]: 2025-11-29 08:33:32.998 257641 DEBUG nova.network.neutron [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:33:33 np0005539550 podman[360723]: 2025-11-29 08:33:33.267382326 +0000 UTC m=+1.413374425 container init 2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:33:33 np0005539550 podman[360723]: 2025-11-29 08:33:33.27465976 +0000 UTC m=+1.420651839 container start 2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:33:33 np0005539550 bold_wiles[360740]: 167 167
Nov 29 03:33:33 np0005539550 systemd[1]: libpod-2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8.scope: Deactivated successfully.
Nov 29 03:33:33 np0005539550 podman[360723]: 2025-11-29 08:33:33.467515526 +0000 UTC m=+1.613507625 container attach 2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:33 np0005539550 podman[360723]: 2025-11-29 08:33:33.46888423 +0000 UTC m=+1.614876319 container died 2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:33:33 np0005539550 nova_compute[257631]: 2025-11-29 08:33:33.492 257641 DEBUG nova.compute.manager [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-changed-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:33 np0005539550 nova_compute[257631]: 2025-11-29 08:33:33.492 257641 DEBUG nova.compute.manager [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing instance network info cache due to event network-changed-5369324b-4a12-4cff-807c-444de53025fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:33:33 np0005539550 nova_compute[257631]: 2025-11-29 08:33:33.492 257641 DEBUG oslo_concurrency.lockutils [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:33:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:33.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:33 np0005539550 nova_compute[257631]: 2025-11-29 08:33:33.755 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9f38083af8bd1e070b3f2ee005a2b6ffff5b00ecafd339cc1afa1ac2140e1560-merged.mount: Deactivated successfully.
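The podman[360723] burst above (image pull, container create, init, start, attach, died, remove for name=bold_wiles, which printed "167 167") is cephadm probing the ceph image with a short-lived container; the overlay unmount here is its cleanup. A rough reproduction of the lifecycle pattern; the log does not show the command cephadm ran inside the container, so the `id` invocation below is only a stand-in:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm produces the same create/start/died/remove sequence seen above.
    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "id", "-u", "ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    # ceph's uid in these images is 167; the "167 167" in the log is
    # most likely a uid/gid pair from a similar probe.
    print(out.strip())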
Nov 29 03:33:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 731 KiB/s rd, 4.6 MiB/s wr, 184 op/s
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.473 257641 DEBUG nova.network.neutron [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
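The instance_info_cache payload above is plain JSON, so pulling out what an operator usually wants (port id, MAC, fixed IPs, active flag) is a few lines. The literal below is a trimmed copy of values from that log entry:

    import json

    network_info = json.loads("""[{
      "id": "5369324b-4a12-4cff-807c-444de53025fa",
      "address": "fa:16:3e:a3:51:12",
      "network": {"bridge": "br-int",
                  "subnets": [{"ips": [{"address": "10.100.0.11"}]}]},
      "active": false}]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, "active:", vif["active"])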
Nov 29 03:33:34 np0005539550 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:33:34 np0005539550 systemd[360347]: Activating special unit Exit the Session...
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped target Main User Target.
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped target Basic System.
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped target Paths.
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped target Sockets.
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped target Timers.
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:33:34 np0005539550 systemd[360347]: Closed D-Bus User Message Bus Socket.
Nov 29 03:33:34 np0005539550 systemd[360347]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:33:34 np0005539550 systemd[360347]: Removed slice User Application Slice.
Nov 29 03:33:34 np0005539550 systemd[360347]: Reached target Shutdown.
Nov 29 03:33:34 np0005539550 systemd[360347]: Finished Exit the Session.
Nov 29 03:33:34 np0005539550 systemd[360347]: Reached target Exit the Session.
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.500 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.503 257641 DEBUG oslo_concurrency.lockutils [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.503 257641 DEBUG nova.network.neutron [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing network info cache for port 5369324b-4a12-4cff-807c-444de53025fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:33:34 np0005539550 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:33:34 np0005539550 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:33:34 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:33:34 np0005539550 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:33:34 np0005539550 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:33:34 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:33:34 np0005539550 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:33:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:34.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.588 257641 DEBUG os_brick.utils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.591 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.608 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.609 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[d4dfd85f-97f7-47be-9d9a-8486ff393979]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.610 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.622 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.622 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[76f53cc4-5136-4e0d-977f-66a8fcb4fe59]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.625 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.638 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.639 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[1860b658-8331-409e-8511-4cad92306ebd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.640 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[8629f16f-00f2-46ae-9181-cf5f5ba68f10]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.641 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.670 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.673 257641 DEBUG os_brick.initiator.connectors.lightos [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.673 257641 DEBUG os_brick.initiator.connectors.lightos [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.673 257641 DEBUG os_brick.initiator.connectors.lightos [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:33:34 np0005539550 nova_compute[257631]: 2025-11-29 08:33:34.674 257641 DEBUG os_brick.utils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
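The os_brick trace from 08:33:34.588 to .674 is a single get_connector_properties() call fanning out to multipathd, the iSCSI initiator name, findmnt, and nvme; the LightOS ECONNREFUSED is its discovery-client probe failing closed and is harmless here. The call as traced can be reproduced directly; a sketch using the arguments shown in the log (needs root plus the rootwrap config from a nova host):

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    print(props["initiator"], props.get("nqn"))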
Nov 29 03:33:34 np0005539550 podman[360723]: 2025-11-29 08:33:34.770503018 +0000 UTC m=+2.916495097 container remove 2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:34 np0005539550 systemd[1]: libpod-conmon-2a8aeeabc3566043aeed21196d6840e17d94fc1ecc9d27a3dc6e73e293707ff8.scope: Deactivated successfully.
Nov 29 03:33:35 np0005539550 podman[360776]: 2025-11-29 08:33:34.96489277 +0000 UTC m=+0.021886346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:35 np0005539550 podman[360776]: 2025-11-29 08:33:35.074585805 +0000 UTC m=+0.131579361 container create 5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kirch, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:33:35 np0005539550 systemd[1]: Started libpod-conmon-5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293.scope.
Nov 29 03:33:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:33:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d999cbf6a13423dcc13784dbd96920ef9a9375e952217f6a79262b74d070e7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d999cbf6a13423dcc13784dbd96920ef9a9375e952217f6a79262b74d070e7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d999cbf6a13423dcc13784dbd96920ef9a9375e952217f6a79262b74d070e7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d999cbf6a13423dcc13784dbd96920ef9a9375e952217f6a79262b74d070e7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d999cbf6a13423dcc13784dbd96920ef9a9375e952217f6a79262b74d070e7a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
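The kernel lines above are the usual y2038 notice for xfs filesystems created without the bigtime feature: inode timestamps top out at 2038-01-19 (0x7fffffff), but nothing is wrong today. Whether a given filesystem has bigtime can be checked with xfs_info; a sketch, assuming an xfsprogs new enough to print the bigtime= field and using an example mountpoint:

    import re
    import subprocess

    def has_bigtime(mountpoint: str) -> bool:
        out = subprocess.run(["xfs_info", mountpoint],
                             capture_output=True, text=True, check=True).stdout
        m = re.search(r"bigtime=(\d)", out)
        return bool(m and m.group(1) == "1")

    print(has_bigtime("/var/lib/containers"))  # example path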
Nov 29 03:33:35 np0005539550 podman[360776]: 2025-11-29 08:33:35.228010809 +0000 UTC m=+0.285004395 container init 5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:33:35 np0005539550 podman[360776]: 2025-11-29 08:33:35.236116565 +0000 UTC m=+0.293110121 container start 5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kirch, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:33:35 np0005539550 podman[360776]: 2025-11-29 08:33:35.27018679 +0000 UTC m=+0.327180346 container attach 5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kirch, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:33:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:35.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:36 np0005539550 competent_kirch[360792]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:33:36 np0005539550 competent_kirch[360792]: --> relative data size: 1.0
Nov 29 03:33:36 np0005539550 competent_kirch[360792]: --> All data devices are unavailable
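competent_kirch is cephadm running a device scan: ceph-volume saw 0 physical and 1 LVM data device and judged all of them unavailable (already consumed), so no new OSD gets created. The same availability report can be pulled on demand through cephadm's ceph-volume wrapper; a sketch, assuming cephadm is installed on the host as it is here and that the JSON lands on stdout:

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"],
              dev.get("rejected_reasons", []))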
Nov 29 03:33:36 np0005539550 systemd[1]: libpod-5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293.scope: Deactivated successfully.
Nov 29 03:33:36 np0005539550 podman[360807]: 2025-11-29 08:33:36.141362712 +0000 UTC m=+0.030752392 container died 5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kirch, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:33:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 4.1 MiB/s wr, 159 op/s
Nov 29 03:33:36 np0005539550 nova_compute[257631]: 2025-11-29 08:33:36.322 257641 DEBUG nova.network.neutron [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updated VIF entry in instance network info cache for port 5369324b-4a12-4cff-807c-444de53025fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:33:36 np0005539550 nova_compute[257631]: 2025-11-29 08:33:36.324 257641 DEBUG nova.network.neutron [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:33:36 np0005539550 nova_compute[257631]: 2025-11-29 08:33:36.345 257641 DEBUG oslo_concurrency.lockutils [req-093d7c0c-9ac4-4630-82a3-fe1d53c4ce4e req-c4128bce-d521-4f16-a840-0d4c7a8737e0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:33:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1d999cbf6a13423dcc13784dbd96920ef9a9375e952217f6a79262b74d070e7a-merged.mount: Deactivated successfully.
Nov 29 03:33:36 np0005539550 nova_compute[257631]: 2025-11-29 08:33:36.827 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:33:36 np0005539550 nova_compute[257631]: 2025-11-29 08:33:36.829 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:33:36 np0005539550 nova_compute[257631]: 2025-11-29 08:33:36.829 257641 INFO nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Creating image(s)#033[00m
Nov 29 03:33:37 np0005539550 podman[360807]: 2025-11-29 08:33:37.383759806 +0000 UTC m=+1.273149476 container remove 5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:33:37 np0005539550 systemd[1]: libpod-conmon-5a50a8c3bd09dc4f6c5a4f676a57dfeb245e353fbb63eda2e7beaf5af1459293.scope: Deactivated successfully.
Nov 29 03:33:37 np0005539550 nova_compute[257631]: 2025-11-29 08:33:37.473 257641 DEBUG nova.storage.rbd_utils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] creating snapshot(nova-resize) on rbd image(8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
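finish_migration begins by snapshotting the instance disk: rbd_utils creates a 'nova-resize' snap on 8effb5bc-..._disk so the resize can be reverted later. The same operation through python-rbd, as a sketch; the pool name 'vms' is nova's common default and an assumption here, since the log does not name the pool:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        with cluster.open_ioctx("vms") as ioctx:   # pool name: assumption
            with rbd.Image(ioctx,
                           "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk") as img:
                img.create_snap("nova-resize")     # the snapshot logged above
    finally:
        cluster.shutdown()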
Nov 29 03:33:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:37.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:37 np0005539550 nova_compute[257631]: 2025-11-29 08:33:37.878 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:38 np0005539550 podman[361000]: 2025-11-29 08:33:38.023405702 +0000 UTC m=+0.036770195 container create 075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_greider, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:33:38 np0005539550 systemd[1]: Started libpod-conmon-075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393.scope.
Nov 29 03:33:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:33:38 np0005539550 podman[361000]: 2025-11-29 08:33:38.083432945 +0000 UTC m=+0.096797458 container init 075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_greider, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:33:38 np0005539550 podman[361000]: 2025-11-29 08:33:38.090402772 +0000 UTC m=+0.103767265 container start 075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_greider, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:33:38 np0005539550 podman[361000]: 2025-11-29 08:33:38.093439789 +0000 UTC m=+0.106804282 container attach 075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:38 np0005539550 youthful_greider[361016]: 167 167
Nov 29 03:33:38 np0005539550 systemd[1]: libpod-075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393.scope: Deactivated successfully.
Nov 29 03:33:38 np0005539550 podman[361000]: 2025-11-29 08:33:38.096583299 +0000 UTC m=+0.109947792 container died 075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_greider, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:33:38 np0005539550 podman[361000]: 2025-11-29 08:33:38.007395075 +0000 UTC m=+0.020759588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4899d4ad9b582c8ab57854ad7632f5a2869b8bd1266c254d56c741c0f7a05822-merged.mount: Deactivated successfully.
Nov 29 03:33:38 np0005539550 podman[361000]: 2025-11-29 08:33:38.129966966 +0000 UTC m=+0.143331459 container remove 075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_greider, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:33:38 np0005539550 systemd[1]: libpod-conmon-075c94c655cd7e2c3f7a4123f8b5b5f9f1c798d04f89382c50352d9457e18393.scope: Deactivated successfully.
Nov 29 03:33:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 928 KiB/s rd, 3.1 MiB/s wr, 132 op/s
Nov 29 03:33:38 np0005539550 podman[361042]: 2025-11-29 08:33:38.303156402 +0000 UTC m=+0.046981193 container create bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kapitsa, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:33:38 np0005539550 systemd[1]: Started libpod-conmon-bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5.scope.
Nov 29 03:33:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:33:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d96455946fc1e4077ceb7c4bddaba90a62b0631c1449abda540c44f0d0a5fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d96455946fc1e4077ceb7c4bddaba90a62b0631c1449abda540c44f0d0a5fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d96455946fc1e4077ceb7c4bddaba90a62b0631c1449abda540c44f0d0a5fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76d96455946fc1e4077ceb7c4bddaba90a62b0631c1449abda540c44f0d0a5fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:38 np0005539550 podman[361042]: 2025-11-29 08:33:38.282470777 +0000 UTC m=+0.026295588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:38 np0005539550 podman[361042]: 2025-11-29 08:33:38.390205101 +0000 UTC m=+0.134029962 container init bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:33:38 np0005539550 podman[361042]: 2025-11-29 08:33:38.396153792 +0000 UTC m=+0.139978583 container start bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:33:38 np0005539550 podman[361042]: 2025-11-29 08:33:38.39925593 +0000 UTC m=+0.143080761 container attach bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:33:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
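The three radosgw lines above are one health probe: request start, request done, then the beast frontend's access record (client address, user, timestamp, request line, HTTP status, byte count, latency). A minimal Python sketch for pulling status and latency out of such access records; the regex and field names are assumptions fitted to the layout seen here, not part of radosgw:

    import re

    # Assumed pattern for the beast access-log layout seen above:
    # beast: <ptr>: addr - user [timestamp] "request" status bytes - - - latency=...s
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line: str):
        """Return (status, latency_seconds) for a beast access line, or None."""
        m = BEAST_RE.search(line)
        if not m:
            return None
        return int(m.group("status")), float(m.group("latency"))

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:33:38.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    assert parse_beast(line) == (200, 0.0)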
Nov 29 03:33:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Nov 29 03:33:38 np0005539550 nova_compute[257631]: 2025-11-29 08:33:38.757 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:38 np0005539550 nova_compute[257631]: 2025-11-29 08:33:38.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Nov 29 03:33:38 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.029 257641 DEBUG nova.objects.instance [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]: {
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:    "0": [
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:        {
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "devices": [
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "/dev/loop3"
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            ],
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "lv_name": "ceph_lv0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "lv_size": "7511998464",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "name": "ceph_lv0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "tags": {
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.cluster_name": "ceph",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.crush_device_class": "",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.encrypted": "0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.osd_id": "0",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.type": "block",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:                "ceph.vdo": "0"
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            },
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "type": "block",
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:            "vg_name": "ceph_vg0"
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:        }
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]:    ]
Nov 29 03:33:39 np0005539550 condescending_kapitsa[361059]: }
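The JSON block printed by the condescending_kapitsa container above is the per-OSD inventory that `ceph-volume lvm list --format json` emits: a map from OSD id to LV records whose ceph.* tags identify the cluster, OSD fsid, and backing devices. A minimal sketch for extracting the OSD-to-device mapping, assuming the JSON has been captured to a file (the file name is hypothetical):

    import json

    # Parse a captured copy of the ceph-volume inventory shown above
    # (shape assumed from the log; "lvm_list.json" is a hypothetical capture).
    with open("lvm_list.json") as f:
        inventory = json.load(f)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']}")
    # For the block above this prints:
    # osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 fsid=5dd67027-...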
Nov 29 03:33:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:33:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3291194107' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:33:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:33:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3291194107' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:33:39 np0005539550 systemd[1]: libpod-bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5.scope: Deactivated successfully.
Nov 29 03:33:39 np0005539550 podman[361042]: 2025-11-29 08:33:39.170372903 +0000 UTC m=+0.914197694 container died bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kapitsa, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.198 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.198 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Ensure instance console log exists: /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.199 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.200 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.200 257641 DEBUG oslo_concurrency.lockutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.204 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Start _get_guest_xml network_info=[{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:a3:51:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'a25ffe8f-9aa7-43d7-a806-07b7934cc3c4', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd51465d5-c782-4ab5-86e5-16500d7ed93e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a', 'attached_at': '2025-11-29T08:33:36.000000', 'detached_at': '', 'volume_id': 'd51465d5-c782-4ab5-86e5-16500d7ed93e', 'multiattach': True, 'serial': 'd51465d5-c782-4ab5-86e5-16500d7ed93e'}, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': None, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.210 257641 WARNING nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.215 257641 DEBUG nova.virt.libvirt.host [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.216 257641 DEBUG nova.virt.libvirt.host [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.219 257641 DEBUG nova.virt.libvirt.host [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.220 257641 DEBUG nova.virt.libvirt.host [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.221 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.221 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.222 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.222 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.222 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.222 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.224 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.224 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.225 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.225 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.225 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.225 257641 DEBUG nova.virt.hardware [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
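The nova.virt.hardware lines above trace nova's CPU-topology solve: flavor and image set no limits or preferences (0 means unset), the effective maxima default to 65536, and for 1 vCPU the only factorization with sockets*cores*threads == vcpus is 1:1:1. A simplified sketch of that enumeration (not nova's actual code, just the constraint the log describes):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Enumerate (sockets, cores, threads) with s*c*t == vcpus, like the
        _get_possible_cpu_topologies step logged above (simplified sketch)."""
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- matches the log above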
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.225 257641 DEBUG nova.objects.instance [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.239 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:39.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960488735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.714 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
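The subprocess pair above is nova's RBD monitor discovery: it shells out to `ceph mon dump --format=json` as client.openstack (the mon's audit log shows the matching dispatch) and reads the monitor addresses from the returned monmap. A minimal sketch of the same call, assuming the standard mon dump JSON shape with a "mons" list whose entries carry an "addr" field:

    import json
    import subprocess

    # Same discovery call nova logs above; the JSON keys below are assumed
    # from the usual `ceph mon dump --format=json` output shape.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)
    # Each mon entry carries an "addr" like "192.168.122.100:6789/0";
    # strip the trailing nonce to get host:port.
    hosts = [m["addr"].rsplit("/", 1)[0] for m in mon_map["mons"]]
    print(hosts)  # e.g. ['192.168.122.100:6789', ...]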
Nov 29 03:33:39 np0005539550 nova_compute[257631]: 2025-11-29 08:33:39.763 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:33:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-76d96455946fc1e4077ceb7c4bddaba90a62b0631c1449abda540c44f0d0a5fb-merged.mount: Deactivated successfully.
Nov 29 03:33:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1005 KiB/s rd, 2.8 MiB/s wr, 128 op/s
Nov 29 03:33:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:33:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648825264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.239 257641 DEBUG oslo_concurrency.processutils [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.274 257641 DEBUG nova.virt.libvirt.vif [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:32:44Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-3f2qzfjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:a3:51:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.275 257641 DEBUG nova.network.os_vif_util [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:a3:51:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.276 257641 DEBUG nova.network.os_vif_util [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.278 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <uuid>8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a</uuid>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <name>instance-000000a4</name>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <memory>196608</memory>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <nova:name>multiattach-server-0</nova:name>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:33:39</nova:creationTime>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.micro">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:memory>192</nova:memory>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <nova:port uuid="5369324b-4a12-4cff-807c-444de53025fa">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <entry name="serial">8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a</entry>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <entry name="uuid">8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a</entry>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_disk.config">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <shareable/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:a3:51:12"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <target dev="tap5369324b-4a"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a/console.log" append="off"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:33:40 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:33:40 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:33:40 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:33:40 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
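The End _get_guest_xml dump above is the complete libvirt domain nova will define: three RBD-backed disks (the root image, the config-drive cdrom, and the shareable multiattach volume on vdb, all pointed at the same three monitors), an OVS-backed virtio NIC, and a q35/Nehalem guest. A minimal sketch for sanity-checking such a dump offline with ElementTree, assuming the XML has been saved to a file (the file name is hypothetical):

    import xml.etree.ElementTree as ET

    # Parse a saved copy of the domain XML above; "instance-000000a4.xml"
    # is a hypothetical capture, not a path from the log.
    root = ET.parse("instance-000000a4.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = [f'{h.get("name")}:{h.get("port")}'
                     for h in src.findall("host")]
            target = disk.find("target").get("dev")
            print(f'{target}: rbd:{src.get("name")} via {",".join(hosts)}')
    # For the dump above this lists vda, sda, and vdb, each reachable through
    # the monitors at 192.168.122.100/.102/.101 on port 6789.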
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.280 257641 DEBUG nova.virt.libvirt.vif [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:32:44Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-3f2qzfjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:33:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:a3:51:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.280 257641 DEBUG nova.network.os_vif_util [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:a3:51:12"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.281 257641 DEBUG nova.network.os_vif_util [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.283 257641 DEBUG os_vif [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.283 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.284 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.284 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.288 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5369324b-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.289 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5369324b-4a, col_values=(('external_ids', {'iface-id': '5369324b-4a12-4cff-807c-444de53025fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a3:51:12', 'vm-uuid': '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.291 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.292 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:33:40 np0005539550 NetworkManager[49039]: <info>  [1764405220.2923] manager: (tap5369324b-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.297 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.299 257641 INFO os_vif [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a')#033[00m
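The two ovsdbapp transactions above (AddPortCommand, then DbSetCommand writing iface-id, attached-mac, and vm-uuid into the Interface's external_ids) are what lets ovn-controller match the tap device to its logical port a moment later. The same wiring can be reproduced by hand; a sketch driving ovs-vsctl with the values taken from this log:

    import subprocess

    # Equivalent of the AddPortCommand + DbSetCommand transaction logged above,
    # expressed as a single ovs-vsctl invocation (values copied from the log).
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap5369324b-4a",
        "--", "set", "Interface", "tap5369324b-4a",
        "external_ids:iface-id=5369324b-4a12-4cff-807c-444de53025fa",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:a3:51:12",
        "external_ids:vm-uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a",
    ])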
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.583 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.584 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.584 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.584 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:a3:51:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.585 257641 INFO nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Using config drive#033[00m
Nov 29 03:33:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:40.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:40 np0005539550 NetworkManager[49039]: <info>  [1764405220.6925] manager: (tap5369324b-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Nov 29 03:33:40 np0005539550 kernel: tap5369324b-4a: entered promiscuous mode
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:40Z|00756|binding|INFO|Claiming lport 5369324b-4a12-4cff-807c-444de53025fa for this chassis.
Nov 29 03:33:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:40Z|00757|binding|INFO|5369324b-4a12-4cff-807c-444de53025fa: Claiming fa:16:3e:a3:51:12 10.100.0.11
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.702 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:51:12 10.100.0.11'], port_security=['fa:16:3e:a3:51:12 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5369324b-4a12-4cff-807c-444de53025fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.704 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5369324b-4a12-4cff-807c-444de53025fa in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.706 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6
Nov 29 03:33:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:40Z|00758|binding|INFO|Setting lport 5369324b-4a12-4cff-807c-444de53025fa ovn-installed in OVS
Nov 29 03:33:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:40Z|00759|binding|INFO|Setting lport 5369324b-4a12-4cff-807c-444de53025fa up in Southbound
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.713 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.718 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:40 np0005539550 systemd-udevd[361213]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:33:40 np0005539550 systemd-machined[216673]: New machine qemu-91-instance-000000a4.
Nov 29 03:33:40 np0005539550 NetworkManager[49039]: <info>  [1764405220.7425] device (tap5369324b-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.742 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1ffd47-eec5-4efd-8800-2115d99dad69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:33:40 np0005539550 systemd[1]: Started Virtual Machine qemu-91-instance-000000a4.
Nov 29 03:33:40 np0005539550 NetworkManager[49039]: <info>  [1764405220.7444] device (tap5369324b-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.782 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8c27f3-6b8f-45ce-814d-043d01e2489b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.787 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[21139127-2283-4f0a-85d3-8ebae0ff412b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.818 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[29a371e9-072f-429e-9821-5c763fbc7d66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.836 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7e136c3d-958c-4fc5-9f2a-c5d93b4e09e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 18915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361226, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.852 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf00b4c-9a6e-4d98-a877-08073c5d8b37]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361228, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361228, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.853 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.887 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.888 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.888 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.888 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.889 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:33:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:33:40.889 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:33:40 np0005539550 nova_compute[257631]: 2025-11-29 08:33:40.947 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:33:41 np0005539550 podman[361042]: 2025-11-29 08:33:41.144964452 +0000 UTC m=+2.888789243 container remove bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.193 257641 DEBUG nova.compute.manager [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.200 257641 DEBUG oslo_concurrency.lockutils [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.200 257641 DEBUG oslo_concurrency.lockutils [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.200 257641 DEBUG oslo_concurrency.lockutils [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.201 257641 DEBUG nova.compute.manager [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.201 257641 WARNING nova.compute.manager [req-f2df386d-8d03-4883-8901-04875deada08 req-8ba8e7f8-6871-4a95-8310-79d2c453884a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state resize_finish.
Nov 29 03:33:41 np0005539550 systemd[1]: libpod-conmon-bd68d367612ce1c608eca918983e435e14ade98149de6b346950e9a5368090a5.scope: Deactivated successfully.
Nov 29 03:33:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:41.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:41 np0005539550 podman[361387]: 2025-11-29 08:33:41.778066262 +0000 UTC m=+0.039957156 container create 456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:33:41 np0005539550 systemd[1]: Started libpod-conmon-456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d.scope.
Nov 29 03:33:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:33:41 np0005539550 podman[361387]: 2025-11-29 08:33:41.76067956 +0000 UTC m=+0.022570474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:41 np0005539550 podman[361387]: 2025-11-29 08:33:41.859358895 +0000 UTC m=+0.121249779 container init 456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:33:41 np0005539550 podman[361387]: 2025-11-29 08:33:41.866229439 +0000 UTC m=+0.128120333 container start 456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:33:41 np0005539550 podman[361387]: 2025-11-29 08:33:41.869206105 +0000 UTC m=+0.131096999 container attach 456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:33:41 np0005539550 gracious_chaum[361404]: 167 167
Nov 29 03:33:41 np0005539550 systemd[1]: libpod-456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d.scope: Deactivated successfully.
Nov 29 03:33:41 np0005539550 podman[361387]: 2025-11-29 08:33:41.87257705 +0000 UTC m=+0.134467944 container died 456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:33:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e1f0904a3312ad56de7534b342b7cbba8b3d8e590cc117b0b1153d4bc9fa9e4b-merged.mount: Deactivated successfully.
Nov 29 03:33:41 np0005539550 podman[361387]: 2025-11-29 08:33:41.909924648 +0000 UTC m=+0.171815542 container remove 456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:33:41 np0005539550 systemd[1]: libpod-conmon-456fd38b94b6db0169a13df7f8659c2cffc24fd951a1e9d39a501b447e8c376d.scope: Deactivated successfully.
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.974 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.974 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.975 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.975 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:33:41 np0005539550 nova_compute[257631]: 2025-11-29 08:33:41.975 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:33:42 np0005539550 podman[361428]: 2025-11-29 08:33:42.137924695 +0000 UTC m=+0.048185854 container create 9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jackson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:33:42 np0005539550 systemd[1]: Started libpod-conmon-9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738.scope.
Nov 29 03:33:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:33:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39288f5d5d8b47a9e10446a04896137e6b59b610c65cc47c1dd8a2137c32ee74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39288f5d5d8b47a9e10446a04896137e6b59b610c65cc47c1dd8a2137c32ee74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39288f5d5d8b47a9e10446a04896137e6b59b610c65cc47c1dd8a2137c32ee74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39288f5d5d8b47a9e10446a04896137e6b59b610c65cc47c1dd8a2137c32ee74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:33:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 588 KiB/s wr, 132 op/s
Nov 29 03:33:42 np0005539550 podman[361428]: 2025-11-29 08:33:42.120941414 +0000 UTC m=+0.031202583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:33:42 np0005539550 podman[361428]: 2025-11-29 08:33:42.219406293 +0000 UTC m=+0.129667482 container init 9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:33:42 np0005539550 podman[361428]: 2025-11-29 08:33:42.227659562 +0000 UTC m=+0.137920721 container start 9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:33:42 np0005539550 podman[361428]: 2025-11-29 08:33:42.231028908 +0000 UTC m=+0.141290087 container attach 9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jackson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.452 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405222.4512374, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.453 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] VM Resumed (Lifecycle Event)
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.456 257641 DEBUG nova.compute.manager [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:33:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146564964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.468 257641 INFO nova.virt.libvirt.driver [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance running successfully.
Nov 29 03:33:42 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.471 257641 DEBUG nova.virt.libvirt.guest [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.471 257641 DEBUG nova.virt.libvirt.driver [None req-660b0ec1-368a-4d38-b2d8-971e6d64cec4 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.490 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.585 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.589 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:33:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:42.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.652 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.653 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405222.4544387, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.654 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] VM Started (Lifecycle Event)
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.683 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.686 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.724 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] During sync_power_state the instance has a pending task (resize_finish). Skip.
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.729 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.729 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.732 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.733 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.736 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.736 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.737 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.938 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.940 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3703MB free_disk=20.693748474121094GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:33:42 np0005539550 nova_compute[257631]: 2025-11-29 08:33:42.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.022 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Applying migration context for instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a as it has an incoming, in-progress migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60. Migration status is finished _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.023 257641 INFO nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating resource usage from migration 0f0b67e7-e113-40b3-9fa2-648865ec7b60
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.056 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.057 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 373a37da-5f23-4b61-b901-b36e7b8a1f46 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.057 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.057 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.058 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]: {
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]:        "osd_id": 0,
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]:        "type": "bluestore"
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]:    }
Nov 29 03:33:43 np0005539550 admiring_jackson[361462]: }
Nov 29 03:33:43 np0005539550 systemd[1]: libpod-9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738.scope: Deactivated successfully.
Nov 29 03:33:43 np0005539550 podman[361428]: 2025-11-29 08:33:43.10339335 +0000 UTC m=+1.013654509 container died 9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.125 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:33:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-39288f5d5d8b47a9e10446a04896137e6b59b610c65cc47c1dd8a2137c32ee74-merged.mount: Deactivated successfully.
Nov 29 03:33:43 np0005539550 podman[361428]: 2025-11-29 08:33:43.225597022 +0000 UTC m=+1.135858181 container remove 9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:33:43 np0005539550 systemd[1]: libpod-conmon-9919b68fea703c7133d4084399cb3231be47bf5fa872d53ec5dc6fc121aad738.scope: Deactivated successfully.
Nov 29 03:33:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:33:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:33:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 696f4ea5-5a74-4399-937e-0c2fa4c06ff0 does not exist
Nov 29 03:33:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7ee1568f-88b5-4e44-b9e4-0cae1b27f8ed does not exist
Nov 29 03:33:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f6145611-2a35-4009-a045-a686b9ef1a9f does not exist
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.330 257641 DEBUG nova.compute.manager [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.333 257641 DEBUG oslo_concurrency.lockutils [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.334 257641 DEBUG oslo_concurrency.lockutils [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.336 257641 DEBUG oslo_concurrency.lockutils [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.337 257641 DEBUG nova.compute.manager [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.337 257641 WARNING nova.compute.manager [req-15642aae-010c-4f3d-a12e-e62a8ac8d2a2 req-6b29344e-4321-4beb-a679-4dcdc79131e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state resized and task_state None.#033[00m
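The six nova_compute lines above are one round of Nova's external-event handshake: Neutron posts network-vif-plugged to the compute API, and the compute manager pops any waiter registered for that event under the per-instance "-events" lock. Here the instance is in vm_state resized with nothing waiting on the VIF, so the event is discarded with the WARNING; during a resize this is benign noise. A simplified sketch of the pop-or-warn pattern (the class and method names appear in the log, but the body below is illustrative, not Nova's source):

```python
# Simplified pop-or-warn: an external event either wakes a registered waiter
# or, if nothing is waiting, is logged as "unexpected" and dropped.
import threading
import logging

log = logging.getLogger("compute")

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()   # stands in for the "-events" lock
        self._waiters = {}              # (instance, event_name) -> Event

    def prepare(self, instance, event_name):
        waiter = threading.Event()
        with self._lock:
            self._waiters[(instance, event_name)] = waiter
        return waiter

    def pop_instance_event(self, instance, event_name):
        with self._lock:
            waiter = self._waiters.pop((instance, event_name), None)
        if waiter is None:
            log.warning("Received unexpected event %s for instance %s",
                        event_name, instance)
            return
        waiter.set()  # unblock whoever was waiting for the VIF to be plugged
```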
Nov 29 03:33:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:33:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934547085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.601 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.608 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.634 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.660 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.660 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
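The inventory dict reported two lines up determines what Placement will schedule against: capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged values:

```python
# Worked example using the exact inventory data from the log line above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity}")
# VCPU: 32.0   MEMORY_MB: 7168.0   DISK_GB: 17.1
```

So this node overcommits CPU 4:1 (32 schedulable vCPUs on 8), exposes 7168 MB of RAM after the 512 MB reservation, and undercommits disk to about 17 GB of the 20 GB total.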
Nov 29 03:33:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
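The beast lines are radosgw's access log; the anonymous "HEAD / HTTP/1.0" probes arriving every couple of seconds, alternating between 192.168.122.100 and 192.168.122.102, have the shape of load-balancer health checks. A small parser for the field layout as it appears in these lines (the layout is inferred from this log, not from radosgw documentation):

```python
# Parse one beast access-log line: handle, client, user, timestamp,
# request line, HTTP status, bytes sent, latency.
import re

LINE = ('0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:33:43.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

pattern = re.compile(
    r'(?P<handle>0x[0-9a-f]+): (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s')

m = pattern.match(LINE)
print(m.group("client"), m.group("request"), m.group("status"), m.group("latency"))
# 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000
```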
Nov 29 03:33:43 np0005539550 nova_compute[257631]: 2025-11-29 08:33:43.760 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
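The mon line above is its memory autotuner resizing the internal caches; the figures are bytes. Converting them shows the allocations land on round MiB values:

```python
# Unit conversion for the _set_new_cache_sizes line above (values in bytes).
for name, val in {
    "cache_size": 1020054731,
    "inc_alloc":  343932928,
    "full_alloc": 348127232,
    "kv_alloc":   318767104,
}.items():
    print(f"{name}: {val / 2**20:.1f} MiB")
# cache_size: 972.8 MiB, inc_alloc: 328.0 MiB,
# full_alloc: 332.0 MiB, kv_alloc: 304.0 MiB
```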
Nov 29 03:33:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 706 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 47 KiB/s wr, 141 op/s
Nov 29 03:33:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:33:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:44.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:44 np0005539550 nova_compute[257631]: 2025-11-29 08:33:44.659 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:44 np0005539550 nova_compute[257631]: 2025-11-29 08:33:44.660 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:44 np0005539550 nova_compute[257631]: 2025-11-29 08:33:44.660 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
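The "skipping" line reflects nova.conf's reclaim_instance_interval at its default of 0: instance soft-delete is disabled, so deletions are immediate and the periodic reclaim task returns as soon as it checks the option. A sketch of the guard:

```python
# Sketch of the guard behind "CONF.reclaim_instance_interval <= 0, skipping...".
# With the option > 0, deleted instances would sit in SOFT_DELETED until the
# periodic task reclaims ones older than the interval.
reclaim_instance_interval = 0  # nova.conf [DEFAULT] value; 0 disables soft delete

def _reclaim_queued_deletes():
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ...otherwise: find SOFT_DELETED instances older than the interval
    # and destroy them for real.

_reclaim_queued_deletes()
```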
Nov 29 03:33:45 np0005539550 nova_compute[257631]: 2025-11-29 08:33:45.324 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:45.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:45 np0005539550 nova_compute[257631]: 2025-11-29 08:33:45.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 721 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 234 op/s
Nov 29 03:33:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:46.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:47 np0005539550 podman[361669]: 2025-11-29 08:33:47.356713547 +0000 UTC m=+0.084743622 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:33:47 np0005539550 podman[361668]: 2025-11-29 08:33:47.362916374 +0000 UTC m=+0.089525513 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
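The two podman records above are healthcheck events: podman periodically executes the configured test (/openstack/healthcheck, bind-mounted into each container per the config_data) and emits health_status=healthy with health_failing_streak=0. The same check can be triggered by hand; this assumes root access to podman on the host:

```python
# Manually run the container's configured healthcheck; `podman healthcheck run`
# exits 0 when the test command succeeds.
import subprocess

result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0 else f"unhealthy: {result.stdout}")
```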
Nov 29 03:33:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:47.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:47 np0005539550 nova_compute[257631]: 2025-11-29 08:33:47.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.1 MiB/s wr, 220 op/s
Nov 29 03:33:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:48.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:48 np0005539550 nova_compute[257631]: 2025-11-29 08:33:48.773 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:49.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.9 MiB/s wr, 196 op/s
Nov 29 03:33:50 np0005539550 nova_compute[257631]: 2025-11-29 08:33:50.327 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:50.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:51.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Nov 29 03:33:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Nov 29 03:33:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Nov 29 03:33:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Nov 29 03:33:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:52.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:52 np0005539550 nova_compute[257631]: 2025-11-29 08:33:52.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:33:53 np0005539550 nova_compute[257631]: 2025-11-29 08:33:53.190 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:53.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:53 np0005539550 nova_compute[257631]: 2025-11-29 08:33:53.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Nov 29 03:33:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:33:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:54.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:33:55 np0005539550 nova_compute[257631]: 2025-11-29 08:33:55.329 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:33:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:33:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 39 KiB/s wr, 141 op/s
Nov 29 03:33:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:56.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:33:56Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a3:51:12 10.100.0.11
Nov 29 03:33:57 np0005539550 podman[361710]: 2025-11-29 08:33:57.426488585 +0000 UTC m=+0.141400260 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:33:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:57.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 39 KiB/s wr, 141 op/s
Nov 29 03:33:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:33:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:33:58 np0005539550 nova_compute[257631]: 2025-11-29 08:33:58.307 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:33:58.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:58 np0005539550 nova_compute[257631]: 2025-11-29 08:33:58.778 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:33:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:33:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Nov 29 03:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:33:59
Nov 29 03:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control']
Nov 29 03:33:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
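The balancer pass above ran in upmap mode against a 5% max-misplaced ceiling and prepared 0 of a possible 10 changes, i.e. no pg-upmap adjustments were needed and PG placement across the listed pools is already even. To confirm from a client (assumes a reachable cluster and a usable keyring):

```python
# Print the balancer state the mgr logged above; `ceph balancer status`
# reports the mode ("upmap") and whether the balancer is active.
import subprocess

print(subprocess.check_output(["ceph", "balancer", "status"], text=True))
```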
Nov 29 03:33:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:33:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:33:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:33:59.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:33:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Nov 29 03:34:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Nov 29 03:34:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 46 KiB/s wr, 154 op/s
Nov 29 03:34:00 np0005539550 nova_compute[257631]: 2025-11-29 08:34:00.398 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:00 np0005539550 nova_compute[257631]: 2025-11-29 08:34:00.509 257641 DEBUG nova.compute.manager [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-changed-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:00 np0005539550 nova_compute[257631]: 2025-11-29 08:34:00.509 257641 DEBUG nova.compute.manager [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing instance network info cache due to event network-changed-5369324b-4a12-4cff-807c-444de53025fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:00 np0005539550 nova_compute[257631]: 2025-11-29 08:34:00.509 257641 DEBUG oslo_concurrency.lockutils [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:00 np0005539550 nova_compute[257631]: 2025-11-29 08:34:00.510 257641 DEBUG oslo_concurrency.lockutils [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:00 np0005539550 nova_compute[257631]: 2025-11-29 08:34:00.510 257641 DEBUG nova.network.neutron [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing network info cache for port 5369324b-4a12-4cff-807c-444de53025fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:01 np0005539550 nova_compute[257631]: 2025-11-29 08:34:01.035 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:01.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 722 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 41 KiB/s wr, 152 op/s
Nov 29 03:34:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:02.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:02 np0005539550 nova_compute[257631]: 2025-11-29 08:34:02.769 257641 DEBUG nova.network.neutron [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updated VIF entry in instance network info cache for port 5369324b-4a12-4cff-807c-444de53025fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:02 np0005539550 nova_compute[257631]: 2025-11-29 08:34:02.770 257641 DEBUG nova.network.neutron [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:02 np0005539550 nova_compute[257631]: 2025-11-29 08:34:02.806 257641 DEBUG oslo_concurrency.lockutils [req-dc68d7e1-d575-4a62-8b70-fdb0cb462b7c req-5655c713-432a-45fd-ac69-7b995bb07736 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
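The network_info blob cached above carries everything Nova knows about the port; the addresses can be pulled out of it directly. The dict literal below is abbreviated to the fields this example touches:

```python
# Extract fixed and floating IPs from the instance_info_cache entry logged above.
network_info = [{
    "id": "5369324b-4a12-4cff-807c-444de53025fa",
    "address": "fa:16:3e:a3:51:12",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.232",
                                   "type": "floating"}]}],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], ip["address"], "->", floats)
# 5369324b-4a12-4cff-807c-444de53025fa 10.100.0.11 -> ['192.168.122.232']
```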
Nov 29 03:34:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:34:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:03.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:34:03 np0005539550 nova_compute[257631]: 2025-11-29 08:34:03.780 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:03 np0005539550 nova_compute[257631]: 2025-11-29 08:34:03.980 257641 DEBUG nova.compute.manager [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.073 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.074 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.108 257641 DEBUG nova.objects.instance [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_requests' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.126 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.127 257641 INFO nova.compute.claims [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.127 257641 DEBUG nova.objects.instance [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.141 257641 DEBUG nova.objects.instance [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 724 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 38 KiB/s wr, 149 op/s
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.222 257641 INFO nova.compute.resource_tracker [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating resource usage from migration 130a277d-60ed-49bc-a634-8c6cc38feb56#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.223 257641 DEBUG nova.compute.resource_tracker [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Starting to track incoming migration 130a277d-60ed-49bc-a634-8c6cc38feb56 with flavor 709b029f-0458-4e40-a6ee-e1e02b48c06c _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.378 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284249512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.878 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.884 257641 DEBUG nova.compute.provider_tree [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.908 257641 DEBUG nova.scheduler.client.report [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.947 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:04 np0005539550 nova_compute[257631]: 2025-11-29 08:34:04.948 257641 INFO nova.compute.manager [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Migrating#033[00m
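The run from "Stashing vm_state" through "Migrating" above is the resize claim: _prep_resize saves the current vm_state, the resource tracker serializes on the compute_resources lock, tests the new flavor against free resources (including the NUMA fit check), records the incoming migration, re-verifies Placement inventory (the ceph df call supplies the RBD-backed disk figures), and releases the lock after 0.873s. A simplified sketch of the claim pattern, with illustrative names and numbers rather than Nova's actual API:

```python
# Claim pattern: lock, test, then either record the usage or abort.
import threading

compute_resources = threading.Lock()
free = {"VCPU": 32, "MEMORY_MB": 7168, "DISK_GB": 17}   # from the inventory math

def resize_claim(requested):
    with compute_resources:                  # "Lock 'compute_resources' acquired"
        if any(free[rc] < amount for rc, amount in requested.items()):
            raise RuntimeError("insufficient resources; claim aborted")
        for rc, amount in requested.items():  # "Claim successful on node ..."
            free[rc] -= amount

resize_claim({"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 1})
print(free)
```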
Nov 29 03:34:05 np0005539550 nova_compute[257631]: 2025-11-29 08:34:05.401 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:05.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 740 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 998 KiB/s rd, 1.8 MiB/s wr, 81 op/s
Nov 29 03:34:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:06.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:07.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 730 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 850 KiB/s rd, 2.9 MiB/s wr, 106 op/s
Nov 29 03:34:08 np0005539550 systemd[1]: Created slice User Slice of UID 42436.
Nov 29 03:34:08 np0005539550 systemd[1]: Starting User Runtime Directory /run/user/42436...
Nov 29 03:34:08 np0005539550 systemd-logind[788]: New session 72 of user nova.
Nov 29 03:34:08 np0005539550 systemd[1]: Finished User Runtime Directory /run/user/42436.
Nov 29 03:34:08 np0005539550 systemd[1]: Starting User Manager for UID 42436...
Nov 29 03:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:34:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
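The rbd_support handlers above are the mgr's mirror-snapshot scheduler rescanning per-pool schedules across vms, volumes, backups, and images (the empty start_after= appears to be a pagination cursor starting from the beginning). If mirror snapshot schedules are actually configured, the matching CLI lists what these handlers loaded:

```python
# List rbd mirror snapshot schedules across all pools; assumes a reachable
# cluster and that mirror snapshot scheduling is in use.
import subprocess

out = subprocess.run(
    ["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
    capture_output=True, text=True,
)
print(out.stdout or "no schedules configured")
```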
Nov 29 03:34:08 np0005539550 systemd[361818]: Queued start job for default target Main User Target.
Nov 29 03:34:08 np0005539550 systemd[361818]: Created slice User Application Slice.
Nov 29 03:34:08 np0005539550 systemd[361818]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:34:08 np0005539550 systemd[361818]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 03:34:08 np0005539550 systemd[361818]: Reached target Paths.
Nov 29 03:34:08 np0005539550 systemd[361818]: Reached target Timers.
Nov 29 03:34:08 np0005539550 systemd[361818]: Starting D-Bus User Message Bus Socket...
Nov 29 03:34:08 np0005539550 systemd[361818]: Starting Create User's Volatile Files and Directories...
Nov 29 03:34:08 np0005539550 systemd[361818]: Listening on D-Bus User Message Bus Socket.
Nov 29 03:34:08 np0005539550 systemd[361818]: Reached target Sockets.
Nov 29 03:34:08 np0005539550 systemd[361818]: Finished Create User's Volatile Files and Directories.
Nov 29 03:34:08 np0005539550 systemd[361818]: Reached target Basic System.
Nov 29 03:34:08 np0005539550 systemd[361818]: Reached target Main User Target.
Nov 29 03:34:08 np0005539550 systemd[361818]: Startup finished in 159ms.
Nov 29 03:34:08 np0005539550 systemd[1]: Started User Manager for UID 42436.
Nov 29 03:34:08 np0005539550 systemd[1]: Started Session 72 of User nova.
Nov 29 03:34:08 np0005539550 systemd[1]: session-72.scope: Deactivated successfully.
Nov 29 03:34:08 np0005539550 systemd-logind[788]: Session 72 logged out. Waiting for processes to exit.
Nov 29 03:34:08 np0005539550 systemd-logind[788]: Removed session 72.
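Sessions 72 and 74 above belong to the nova user (UID 42436) and open and close within the same second. That pattern is consistent with the resize in progress: cold migration copies the instance directory between compute hosts over SSH as the nova user, and each SSH command opens and tears down a logind session. To correlate such sessions on the host:

```python
# Inspect current logind sessions; short-lived nova sessions from migration
# SSH traffic show up (and vanish) here and in journalctl.
import subprocess

print(subprocess.check_output(["loginctl", "list-sessions"], text=True))
```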
Nov 29 03:34:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:08.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:08 np0005539550 systemd-logind[788]: New session 74 of user nova.
Nov 29 03:34:08 np0005539550 systemd[1]: Started Session 74 of User nova.
Nov 29 03:34:08 np0005539550 nova_compute[257631]: 2025-11-29 08:34:08.782 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:08 np0005539550 systemd[1]: session-74.scope: Deactivated successfully.
Nov 29 03:34:08 np0005539550 systemd-logind[788]: Session 74 logged out. Waiting for processes to exit.
Nov 29 03:34:08 np0005539550 systemd-logind[788]: Removed session 74.
Nov 29 03:34:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:34:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:34:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:34:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:34:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:34:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:09.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 730 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 816 KiB/s rd, 2.8 MiB/s wr, 101 op/s
Nov 29 03:34:10 np0005539550 nova_compute[257631]: 2025-11-29 08:34:10.404 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:11.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:12.069 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:12 np0005539550 nova_compute[257631]: 2025-11-29 08:34:12.069 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:12.070 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
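The metadata agent above saw SB_Global.nb_cfg tick from 48 to 49 and deliberately waited 4 seconds before refreshing its Chassis row; the delay is randomized so that many agents do not all write their liveness back to the southbound DB at the same instant. A sketch of that stagger (the 0-10 range is an assumption, not necessarily the agent's actual bound):

```python
# Staggered write-back: sleep a small random interval before updating the
# chassis table, spreading load across agents after an nb_cfg bump.
import random
import time

def on_sb_global_update(update_chassis):
    delay = random.randint(0, 10)
    print(f"Delaying updating chassis table for {delay} seconds")
    time.sleep(delay)
    update_chassis()
```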
Nov 29 03:34:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 719 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 950 KiB/s rd, 3.9 MiB/s wr, 140 op/s
Nov 29 03:34:12 np0005539550 nova_compute[257631]: 2025-11-29 08:34:12.321 257641 DEBUG nova.compute.manager [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:12 np0005539550 nova_compute[257631]: 2025-11-29 08:34:12.322 257641 DEBUG oslo_concurrency.lockutils [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:12 np0005539550 nova_compute[257631]: 2025-11-29 08:34:12.322 257641 DEBUG oslo_concurrency.lockutils [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:12 np0005539550 nova_compute[257631]: 2025-11-29 08:34:12.322 257641 DEBUG oslo_concurrency.lockutils [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:12 np0005539550 nova_compute[257631]: 2025-11-29 08:34:12.322 257641 DEBUG nova.compute.manager [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:12 np0005539550 nova_compute[257631]: 2025-11-29 08:34:12.322 257641 WARNING nova.compute.manager [req-95a85ac9-fe17-43e1-97f3-29b8ba022fbf req-d969957a-7653-4e79-8013-9d9d5137a965 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state active and task_state resize_migrating.#033[00m
Nov 29 03:34:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:12.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:13.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:13 np0005539550 nova_compute[257631]: 2025-11-29 08:34:13.786 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:13 np0005539550 nova_compute[257631]: 2025-11-29 08:34:13.822 257641 INFO nova.network.neutron [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating port a08044a1-40b5-4987-bfe0-a92ba0c13b97 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:34:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 482 KiB/s rd, 4.0 MiB/s wr, 143 op/s
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.591 257641 DEBUG nova.compute.manager [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.591 257641 DEBUG oslo_concurrency.lockutils [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.591 257641 DEBUG oslo_concurrency.lockutils [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.592 257641 DEBUG oslo_concurrency.lockutils [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.592 257641 DEBUG nova.compute.manager [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.592 257641 WARNING nova.compute.manager [req-b7aa40a0-3a8a-4050-8f91-4d6d3a9a4292 req-b7a0afb7-5bff-459d-896b-dbe9a4b22ef9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state active and task_state resize_migrated.#033[00m
Nov 29 03:34:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:14.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.979 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.980 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:14 np0005539550 nova_compute[257631]: 2025-11-29 08:34:14.980 257641 DEBUG nova.network.neutron [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:15 np0005539550 nova_compute[257631]: 2025-11-29 08:34:15.187 257641 DEBUG nova.compute.manager [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:15 np0005539550 nova_compute[257631]: 2025-11-29 08:34:15.187 257641 DEBUG nova.compute.manager [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing instance network info cache due to event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:15 np0005539550 nova_compute[257631]: 2025-11-29 08:34:15.188 257641 DEBUG oslo_concurrency.lockutils [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:15 np0005539550 nova_compute[257631]: 2025-11-29 08:34:15.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:15.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:16.073 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.0 MiB/s wr, 206 op/s
Nov 29 03:34:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:34:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378171707' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:34:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:34:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3378171707' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:34:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:16.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.430 257641 DEBUG nova.network.neutron [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.453 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.458 257641 DEBUG oslo_concurrency.lockutils [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.458 257641 DEBUG nova.network.neutron [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.532 257641 DEBUG os_brick.utils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.534 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.547 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.547 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaa15b1-2909-4e29-9e59-52f3347b8b72]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.549 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.559 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.559 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c08293-944a-4977-aa42-62e030cd3f4c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.562 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.576 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.576 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[f14a968d-092c-402c-886c-bf8a194a66f7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.579 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[70423ff0-aeec-4ef3-9cae-60b64ecb5feb]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.580 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.622 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.626 257641 DEBUG os_brick.initiator.connectors.lightos [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.627 257641 DEBUG os_brick.initiator.connectors.lightos [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.627 257641 DEBUG os_brick.initiator.connectors.lightos [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:34:17 np0005539550 nova_compute[257631]: 2025-11-29 08:34:17.628 257641 DEBUG os_brick.utils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (94ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:34:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:17.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.036 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.036 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.129 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.216 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.216 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.225 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.225 257641 INFO nova.compute.claims [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:34:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 205 op/s
Nov 29 03:34:18 np0005539550 podman[361853]: 2025-11-29 08:34:18.330117666 +0000 UTC m=+0.072370798 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:34:18 np0005539550 podman[361854]: 2025-11-29 08:34:18.331719137 +0000 UTC m=+0.073587639 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:34:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:34:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:18.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.675 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:18 np0005539550 nova_compute[257631]: 2025-11-29 08:34:18.827 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:18.966 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:18.966 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:18.967 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:18 np0005539550 systemd[1]: Stopping User Manager for UID 42436...
Nov 29 03:34:18 np0005539550 systemd[361818]: Activating special unit Exit the Session...
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped target Main User Target.
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped target Basic System.
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped target Paths.
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped target Sockets.
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped target Timers.
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 03:34:18 np0005539550 systemd[361818]: Closed D-Bus User Message Bus Socket.
Nov 29 03:34:18 np0005539550 systemd[361818]: Stopped Create User's Volatile Files and Directories.
Nov 29 03:34:18 np0005539550 systemd[361818]: Removed slice User Application Slice.
Nov 29 03:34:18 np0005539550 systemd[361818]: Reached target Shutdown.
Nov 29 03:34:18 np0005539550 systemd[361818]: Finished Exit the Session.
Nov 29 03:34:18 np0005539550 systemd[361818]: Reached target Exit the Session.
Nov 29 03:34:18 np0005539550 systemd[1]: user@42436.service: Deactivated successfully.
Nov 29 03:34:18 np0005539550 systemd[1]: Stopped User Manager for UID 42436.
Nov 29 03:34:19 np0005539550 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 29 03:34:19 np0005539550 systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 29 03:34:19 np0005539550 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 29 03:34:19 np0005539550 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 29 03:34:19 np0005539550 systemd[1]: Removed slice User Slice of UID 42436.
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.088 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.093 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.094 257641 INFO nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Creating image(s)#033[00m
Nov 29 03:34:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/159548454' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.135 257641 DEBUG nova.storage.rbd_utils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] creating snapshot(nova-resize) on rbd image(f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:34:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.180 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.187 257641 DEBUG nova.compute.provider_tree [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.219 257641 DEBUG nova.scheduler.client.report [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.256 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.256 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.323 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.323 257641 DEBUG nova.network.neutron [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.344 257641 INFO nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.370 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.411 257641 DEBUG nova.network.neutron [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updated VIF entry in instance network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.412 257641 DEBUG nova.network.neutron [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.495 257641 DEBUG oslo_concurrency.lockutils [req-115cb48f-043d-41c1-902b-34d6f10c36bf req-955c5b66-40f5-49b8-b317-ed12a1dab36e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.518 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.519 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.520 257641 INFO nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Creating image(s)#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.545 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.569 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.591 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.594 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.667 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.669 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.669 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.670 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.694 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:19 np0005539550 nova_compute[257631]: 2025-11-29 08:34:19.697 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9519ccf6-547e-4b98-8a54-543c4639d35a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:34:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:19.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.011 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 9519ccf6-547e-4b98-8a54-543c4639d35a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.048 257641 DEBUG nova.policy [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a57807acb02b45d082f242ec62cd5b6f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.090 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] resizing rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:34:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Nov 29 03:34:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Nov 29 03:34:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01403683638097897 of space, bias 1.0, pg target 4.211050914293691 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004322465917550716 of space, bias 1.0, pg target 1.279449911595012 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5614271080518333 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.214 257641 DEBUG nova.objects.instance [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 724 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 205 op/s
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.227 257641 DEBUG nova.objects.instance [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'migration_context' on Instance uuid 9519ccf6-547e-4b98-8a54-543c4639d35a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.265 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.267 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Ensure instance console log exists: /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.267 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.268 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.268 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.332 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.332 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Ensure instance console log exists: /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.333 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.333 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.334 257641 DEBUG oslo_concurrency.lockutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.337 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Start _get_guest_xml network_info=[{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:39:c1:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '0d15bfbf-e33d-40d1-af8b-67b699bb5dc6', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd51465d5-c782-4ab5-86e5-16500d7ed93e', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'f6eba09a-c0cf-4855-afd5-b265b2f2cadc', 'attached_at': '2025-11-29T08:34:18.000000', 'detached_at': '', 'volume_id': 'd51465d5-c782-4ab5-86e5-16500d7ed93e', 'multiattach': True, 'serial': 'd51465d5-c782-4ab5-86e5-16500d7ed93e'}, 'mount_device': '/dev/vdb', 'guest_format': None, 'boot_index': None, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.341 257641 WARNING nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.345 257641 DEBUG nova.virt.libvirt.host [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.346 257641 DEBUG nova.virt.libvirt.host [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.350 257641 DEBUG nova.virt.libvirt.host [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.351 257641 DEBUG nova.virt.libvirt.host [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
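The two probes above first miss a cgroup-v1 cpu controller and then find one on the v2 unified hierarchy. The same check can be approximated by hand; a sketch assuming the standard RHEL 9 mount layout (this is not the code in nova/virt/libvirt/host.py):

    import os

    def has_cgroupsv2_cpu_controller():
        # On a v2 unified hierarchy the enabled controllers are listed in
        # /sys/fs/cgroup/cgroup.controllers, e.g. "cpuset cpu io memory ...".
        try:
            with open("/sys/fs/cgroup/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False

    def has_cgroupsv1_cpu_controller():
        # Under v1 the cpu controller is a separate mount under /sys/fs/cgroup.
        return os.path.isdir("/sys/fs/cgroup/cpu")

    print("v1 cpu controller:", has_cgroupsv1_cpu_controller())
    print("v2 cpu controller:", has_cgroupsv2_cpu_controller())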
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.352 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.353 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='709b029f-0458-4e40-a6ee-e1e02b48c06c',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.353 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.353 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.353 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.354 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.354 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.354 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.354 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.354 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.354 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.355 257641 DEBUG nova.virt.hardware [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
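The hardware.py sequence above starts from "preferred 0:0:0" (no constraint from flavor or image) with a 65536 ceiling per dimension and ends with the single topology that fits one vCPU. A toy re-derivation of that enumeration, illustrative rather than the nova.virt.hardware implementation:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product equals
        # the vCPU count while respecting the per-dimension limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], matching the log above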
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.355 257641 DEBUG nova.objects.instance [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.377 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.409 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:20.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3756984588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.803 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
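The mon dump that oslo.concurrency timed at 0.426s returns JSON, so the monitor map can be inspected the same way outside Nova. A sketch using the exact command line from the log; it assumes the ceph CLI and the client.openstack keyring are available on the host:

    import json
    import subprocess

    # Same command nova_compute ran via oslo.concurrency above.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon["name"], mon.get("addr"))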
Nov 29 03:34:20 np0005539550 nova_compute[257631]: 2025-11-29 08:34:20.838 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.104 257641 DEBUG nova.network.neutron [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Successfully created port: 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:34:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:21Z|00760|binding|INFO|Releasing lport 2825232a-c8aa-46c3-8863-29674d562705 from this chassis (sb_readonly=0)
Nov 29 03:34:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:21Z|00761|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.249 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3691749104' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.325 257641 DEBUG oslo_concurrency.processutils [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.351 257641 DEBUG nova.virt.libvirt.vif [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=166,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-jvsv1b4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:39:c1:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.352 257641 DEBUG nova.network.os_vif_util [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:39:c1:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.353 257641 DEBUG nova.network.os_vif_util [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.356 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <uuid>f6eba09a-c0cf-4855-afd5-b265b2f2cadc</uuid>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <name>instance-000000a6</name>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <memory>196608</memory>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <nova:name>multiattach-server-1</nova:name>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:34:20</nova:creationTime>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.micro">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:memory>192</nova:memory>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <nova:port uuid="a08044a1-40b5-4987-bfe0-a92ba0c13b97">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <entry name="serial">f6eba09a-c0cf-4855-afd5-b265b2f2cadc</entry>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <entry name="uuid">f6eba09a-c0cf-4855-afd5-b265b2f2cadc</entry>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f6eba09a-c0cf-4855-afd5-b265b2f2cadc_disk.config">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <target dev="vdb" bus="virtio"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <shareable/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:39:c1:ec"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <target dev="tapa08044a1-40"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc/console.log" append="off"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:34:21 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:34:21 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:34:21 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:34:21 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
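Since the full domain XML is dumped above, the disk layout, including the <shareable/> flag that marks the multiattach volume on vdb, can be verified with the standard library alone. A sketch, assuming the XML has been saved to a file named guest.xml (a name chosen here for illustration):

    import xml.etree.ElementTree as ET

    root = ET.parse("guest.xml").getroot()
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        shareable = disk.find("shareable") is not None
        # Prints one line per disk: vda/sda/vdb, its bus, the rbd image
        # name, and whether <shareable/> is set (only vdb in the XML above).
        print(target.get("dev"), target.get("bus"), source.get("name"),
              "shareable" if shareable else "-")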
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.357 257641 DEBUG nova.virt.libvirt.vif [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=166,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:06Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-jvsv1b4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:39:c1:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", 
"details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.357 257641 DEBUG nova.network.os_vif_util [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "vif_mac": "fa:16:3e:39:c1:ec"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.358 257641 DEBUG nova.network.os_vif_util [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.358 257641 DEBUG os_vif [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.359 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.360 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.361 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.374 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.375 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa08044a1-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.376 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa08044a1-40, col_values=(('external_ids', {'iface-id': 'a08044a1-40b5-4987-bfe0-a92ba0c13b97', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:39:c1:ec', 'vm-uuid': 'f6eba09a-c0cf-4855-afd5-b265b2f2cadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
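The AddBridgeCommand at 08:34:21.360 and the AddPortCommand/DbSetCommand pair above are the IDL equivalents of two ovs-vsctl transactions. A hand-run approximation with the same values from the log (illustrative; nova_compute itself talks to ovsdb-server through the ovsdbapp IDL, not the CLI):

    import subprocess

    # Idempotent bridge add, as in AddBridgeCommand (may_exist=True).
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-br", "br-int",
        "--", "set", "Bridge", "br-int", "datapath_type=system",
    ])

    # Port add plus the Neutron bookkeeping keys from DbSetCommand's col_values.
    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tapa08044a1-40",
        "--", "set", "Interface", "tapa08044a1-40",
        "external_ids:iface-id=a08044a1-40b5-4987-bfe0-a92ba0c13b97",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:39:c1:ec",
        "external_ids:vm-uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc",
    ])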
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.378 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 NetworkManager[49039]: <info>  [1764405261.3793] manager: (tapa08044a1-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.381 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.389 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.391 257641 INFO os_vif [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40')#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.451 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.451 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.452 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.452 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:39:c1:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.452 257641 INFO nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Using config drive#033[00m
Nov 29 03:34:21 np0005539550 kernel: tapa08044a1-40: entered promiscuous mode
Nov 29 03:34:21 np0005539550 NetworkManager[49039]: <info>  [1764405261.5390] manager: (tapa08044a1-40): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.539 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:21Z|00762|binding|INFO|Claiming lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 for this chassis.
Nov 29 03:34:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:21Z|00763|binding|INFO|a08044a1-40b5-4987-bfe0-a92ba0c13b97: Claiming fa:16:3e:39:c1:ec 10.100.0.3
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.547 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:c1:ec 10.100.0.3'], port_security=['fa:16:3e:39:c1:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6eba09a-c0cf-4855-afd5-b265b2f2cadc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a08044a1-40b5-4987-bfe0-a92ba0c13b97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.550 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a08044a1-40b5-4987-bfe0-a92ba0c13b97 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.553 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:34:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:21Z|00764|binding|INFO|Setting lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 ovn-installed in OVS
Nov 29 03:34:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:21Z|00765|binding|INFO|Setting lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 up in Southbound
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.561 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.564 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 systemd-udevd[362252]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.573 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6abf7496-0747-41ab-8ee4-8a3554b1f448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:21 np0005539550 systemd-machined[216673]: New machine qemu-92-instance-000000a6.
Nov 29 03:34:21 np0005539550 NetworkManager[49039]: <info>  [1764405261.5863] device (tapa08044a1-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:34:21 np0005539550 NetworkManager[49039]: <info>  [1764405261.5874] device (tapa08044a1-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:34:21 np0005539550 systemd[1]: Started Virtual Machine qemu-92-instance-000000a6.
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.609 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4fc479-dfba-48f0-aa2f-31ceead0add6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.613 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b20c0e74-8942-41ae-8939-145b5aa44891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.645 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[398ce096-d03e-468e-b8a8-7f1362bb0ddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.662 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[22f36537-0dbf-44fb-a21f-c49f44987aa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 18915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362265, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.676 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[edcdd32d-3219-442d-ba1c-c8b9ec5a701c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362267, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362267, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
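The two RTM_NEWADDR replies above show the agent-side interface taped50ff83-51 holding 169.254.169.254/32 (the metadata address) and 10.100.0.2/28 inside the ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace. The same state can be confirmed from a root shell on the compute host; a sketch with the names copied from the log:

    import subprocess

    ns = "ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6"
    # Lists the addresses the privsep replies above reported via netlink.
    subprocess.check_call([
        "ip", "netns", "exec", ns,
        "ip", "addr", "show", "dev", "taped50ff83-51",
    ])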
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.678 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.679 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 nova_compute[257631]: 2025-11-29 08:34:21.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.681 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.681 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.681 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:21.681 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:21.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.041 257641 DEBUG nova.compute.manager [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.041 257641 DEBUG oslo_concurrency.lockutils [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.042 257641 DEBUG oslo_concurrency.lockutils [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.042 257641 DEBUG oslo_concurrency.lockutils [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.042 257641 DEBUG nova.compute.manager [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.042 257641 WARNING nova.compute.manager [req-95e084c3-a3f8-4b45-b058-7013de2452c4 req-c500dc4c-0393-4c58-9be3-9890fe6148ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state active and task_state resize_finish.#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.139 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405262.1393914, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.140 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.142 257641 DEBUG nova.compute.manager [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.145 257641 INFO nova.virt.libvirt.driver [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance running successfully.#033[00m
Nov 29 03:34:22 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.148 257641 DEBUG nova.virt.libvirt.guest [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.148 257641 DEBUG nova.virt.libvirt.driver [None req-da432284-e76d-49bc-bde5-a08b33bf26a8 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.165 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.169 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.192 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.192 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405262.1413512, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.193 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:34:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 702 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.231 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.234 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.274 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.401 257641 DEBUG nova.network.neutron [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Successfully updated port: 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.429 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "refresh_cache-9519ccf6-547e-4b98-8a54-543c4639d35a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.429 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquired lock "refresh_cache-9519ccf6-547e-4b98-8a54-543c4639d35a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.429 257641 DEBUG nova.network.neutron [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.590 257641 DEBUG nova.compute.manager [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received event network-changed-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.590 257641 DEBUG nova.compute.manager [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Refreshing instance network info cache due to event network-changed-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.591 257641 DEBUG oslo_concurrency.lockutils [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9519ccf6-547e-4b98-8a54-543c4639d35a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:22.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:22 np0005539550 nova_compute[257631]: 2025-11-29 08:34:22.693 257641 DEBUG nova.network.neutron [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.624 257641 DEBUG nova.network.neutron [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Updating instance_info_cache with network_info: [{"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.651 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Releasing lock "refresh_cache-9519ccf6-547e-4b98-8a54-543c4639d35a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.652 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Instance network_info: |[{"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.653 257641 DEBUG oslo_concurrency.lockutils [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9519ccf6-547e-4b98-8a54-543c4639d35a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.654 257641 DEBUG nova.network.neutron [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Refreshing network info cache for port 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.658 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Start _get_guest_xml network_info=[{"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.664 257641 WARNING nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.668 257641 DEBUG nova.virt.libvirt.host [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.669 257641 DEBUG nova.virt.libvirt.host [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.673 257641 DEBUG nova.virt.libvirt.host [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.674 257641 DEBUG nova.virt.libvirt.host [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.675 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.675 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.676 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.676 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.676 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.677 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.677 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.677 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.677 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.678 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.678 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.678 257641 DEBUG nova.virt.hardware [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.682 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:23.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.829 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.990 257641 DEBUG nova.compute.manager [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-changed-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.991 257641 DEBUG nova.compute.manager [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Refreshing instance network info cache due to event network-changed-cb992772-2542-4728-9c87-11f05dddcf88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.991 257641 DEBUG oslo_concurrency.lockutils [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.991 257641 DEBUG oslo_concurrency.lockutils [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:23 np0005539550 nova_compute[257631]: 2025-11-29 08:34:23.992 257641 DEBUG nova.network.neutron [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Refreshing network info cache for port cb992772-2542-4728-9c87-11f05dddcf88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.069 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.069 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.070 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.070 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.070 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.071 257641 INFO nova.compute.manager [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Terminating instance#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.072 257641 DEBUG nova.compute.manager [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:34:24 np0005539550 kernel: tapcb992772-25 (unregistering): left promiscuous mode
Nov 29 03:34:24 np0005539550 NetworkManager[49039]: <info>  [1764405264.1326] device (tapcb992772-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:34:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2006964227' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:24Z|00766|binding|INFO|Releasing lport cb992772-2542-4728-9c87-11f05dddcf88 from this chassis (sb_readonly=0)
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.142 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:24Z|00767|binding|INFO|Setting lport cb992772-2542-4728-9c87-11f05dddcf88 down in Southbound
Nov 29 03:34:24 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:24Z|00768|binding|INFO|Removing iface tapcb992772-25 ovn-installed in OVS
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.146 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.152 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:84:b6 10.100.0.3'], port_security=['fa:16:3e:1b:84:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '373a37da-5f23-4b61-b901-b36e7b8a1f46', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ba924909-f58e-40ed-a02f-8b6d37fda8d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1ca008e3-9cc0-429b-8a94-8266d31e7151, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=cb992772-2542-4728-9c87-11f05dddcf88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.153 158978 INFO neutron.agent.ovn.metadata.agent [-] Port cb992772-2542-4728-9c87-11f05dddcf88 in datapath 0c12fb7c-b7e7-49dd-a7c7-9ead1e551738 unbound from our chassis#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.154 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c12fb7c-b7e7-49dd-a7c7-9ead1e551738, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.155 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d1251e0d-a4b1-4a50-a180-9a00531960c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.156 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738 namespace which is not needed anymore#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.167 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:24 np0005539550 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000a5.scope: Deactivated successfully.
Nov 29 03:34:24 np0005539550 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000a5.scope: Consumed 17.509s CPU time.
Nov 29 03:34:24 np0005539550 systemd-machined[216673]: Machine qemu-90-instance-000000a5 terminated.
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.207 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.212 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 691 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.1 MiB/s wr, 216 op/s
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.244 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.259 257641 DEBUG nova.compute.manager [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.259 257641 DEBUG oslo_concurrency.lockutils [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.259 257641 DEBUG oslo_concurrency.lockutils [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.260 257641 DEBUG oslo_concurrency.lockutils [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.260 257641 DEBUG nova.compute.manager [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.260 257641 WARNING nova.compute.manager [req-57e2d95e-4319-4bbd-9600-3bdd8e6161f8 req-4ce3934a-8d20-4bb1-9633-5cd9fdbb75bb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state resized and task_state None.#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.310 257641 INFO nova.virt.libvirt.driver [-] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Instance destroyed successfully.#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.311 257641 DEBUG nova.objects.instance [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 373a37da-5f23-4b61-b901-b36e7b8a1f46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.337 257641 DEBUG nova.virt.libvirt.vif [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1560374091',display_name='tempest-TestNetworkBasicOps-server-1560374091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1560374091',id=165,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI/rdKIQ1picW6k9yVP1vIGJCfhx1mYRA+qv1GHgxy6GSeRsc6iyJz5SRou7NDR5yggNyTPyxaWXld++AOQcDLbRNPEryDxwikAmrBHKlKjJKKJoztumbkIHw2GknoBAaw==',key_name='tempest-TestNetworkBasicOps-1173164016',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-0shnke0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:04Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=373a37da-5f23-4b61-b901-b36e7b8a1f46,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.337 257641 DEBUG nova.network.os_vif_util [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.338 257641 DEBUG nova.network.os_vif_util [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1b:84:b6,bridge_name='br-int',has_traffic_filtering=True,id=cb992772-2542-4728-9c87-11f05dddcf88,network=Network(0c12fb7c-b7e7-49dd-a7c7-9ead1e551738),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb992772-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.339 257641 DEBUG os_vif [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:84:b6,bridge_name='br-int',has_traffic_filtering=True,id=cb992772-2542-4728-9c87-11f05dddcf88,network=Network(0c12fb7c-b7e7-49dd-a7c7-9ead1e551738),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb992772-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.342 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.343 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcb992772-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:24 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [NOTICE]   (360259) : haproxy version is 2.8.14-c23fe91
Nov 29 03:34:24 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [NOTICE]   (360259) : path to executable is /usr/sbin/haproxy
Nov 29 03:34:24 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [WARNING]  (360259) : Exiting Master process...
Nov 29 03:34:24 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [WARNING]  (360259) : Exiting Master process...
Nov 29 03:34:24 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [ALERT]    (360259) : Current worker (360261) exited with code 143 (Terminated)
Nov 29 03:34:24 np0005539550 neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738[360255]: [WARNING]  (360259) : All workers exited. Exiting... (0)
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.348 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 systemd[1]: libpod-b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39.scope: Deactivated successfully.
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.351 257641 INFO os_vif [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1b:84:b6,bridge_name='br-int',has_traffic_filtering=True,id=cb992772-2542-4728-9c87-11f05dddcf88,network=Network(0c12fb7c-b7e7-49dd-a7c7-9ead1e551738),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcb992772-25')#033[00m
Nov 29 03:34:24 np0005539550 podman[362388]: 2025-11-29 08:34:24.35938769 +0000 UTC m=+0.062890947 container died b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39-userdata-shm.mount: Deactivated successfully.
Nov 29 03:34:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8d6d5c525e605f87924b567fd79fd1e43b479049e3cdff1045850564c94aa1e1-merged.mount: Deactivated successfully.
Nov 29 03:34:24 np0005539550 podman[362388]: 2025-11-29 08:34:24.427351746 +0000 UTC m=+0.130854973 container cleanup b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:24 np0005539550 systemd[1]: libpod-conmon-b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39.scope: Deactivated successfully.
Nov 29 03:34:24 np0005539550 podman[362463]: 2025-11-29 08:34:24.51221368 +0000 UTC m=+0.056395343 container remove b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.519 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ac33f9-5fee-4e64-b3ca-694fc11af050]: (4, ('Sat Nov 29 08:34:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738 (b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39)\nb53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39\nSat Nov 29 08:34:24 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738 (b53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39)\nb53f6da54e8536e3f015f8a1637f2f84498cb1f045925a0c41331b1681dcdb39\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.525 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7fa28d2b-7151-489e-ae01-947255d4f1fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.527 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c12fb7c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:24 np0005539550 kernel: tap0c12fb7c-b0: left promiscuous mode
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.531 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.546 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.553 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e22ac4-32ca-40e6-9f11-102267a13506]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.572 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2aeed8-8b76-4c37-b718-18912e702011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.573 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f6b632-1160-464f-ac07-071be665ac31]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.594 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[21bea055-e168-4826-ad90-ac48934346db]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 823151, 'reachable_time': 43352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362524, 'error': None, 'target': 'ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:24 np0005539550 systemd[1]: run-netns-ovnmeta\x2d0c12fb7c\x2db7e7\x2d49dd\x2da7c7\x2d9ead1e551738.mount: Deactivated successfully.
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.600 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:34:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:24.600 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e835daab-c670-4cd2-8183-1818e8d795f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
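[annotation] With the haproxy gone, the agent removes the tap from br-int (the DelPortCommand above, after which the kernel drops tap0c12fb7c-b0 out of promiscuous mode) and the privileged helper deletes the ovnmeta- namespace via remove_netns. A sketch of that last step, assuming the pyroute2 API neutron's ip_lib wraps:

    from pyroute2 import netns

    # Equivalent of neutron.privileged.agent.linux.ip_lib.remove_netns():
    # unmounts and unlinks /var/run/netns/<name>, which is why systemd logs
    # the run-netns-ovnmeta-*.mount deactivation above.
    netns.remove("ovnmeta-0c12fb7c-b7e7-49dd-a7c7-9ead1e551738")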
Nov 29 03:34:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:24.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:34:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827398312' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.748 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
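[annotation] nova-compute shells out to "ceph mon dump --format=json" to learn the monitor addresses for its RBD backend; the three monitors it discovers here are the ones that reappear as <host> entries in the libvirt disk XML below. A hedged sketch of that lookup (the "public_addr" key is an assumption about this Ceph release's mon dump JSON; nova's own parsing differs in detail):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout
    mon_dump = json.loads(out)
    # Each mon entry carries a public address such as 192.168.122.100:6789.
    mons = [m["public_addr"] for m in mon_dump["mons"]]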
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.750 257641 DEBUG nova.virt.libvirt.vif [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1231532861',display_name='tempest-ServersNegativeTestJSON-server-1231532861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1231532861',id=171,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-cohi5hvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:19Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=9519ccf6-547e-4b98-8a54-543c4639d35a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.750 257641 DEBUG nova.network.os_vif_util [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.751 257641 DEBUG nova.network.os_vif_util [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap427af7f8-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.752 257641 DEBUG nova.objects.instance [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9519ccf6-547e-4b98-8a54-543c4639d35a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.777 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <uuid>9519ccf6-547e-4b98-8a54-543c4639d35a</uuid>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <name>instance-000000ab</name>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <nova:name>tempest-ServersNegativeTestJSON-server-1231532861</nova:name>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:34:23</nova:creationTime>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:user uuid="a57807acb02b45d082f242ec62cd5b6f">tempest-ServersNegativeTestJSON-1016750887-project-member</nova:user>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:project uuid="96e72e7660da497a8b6bf9fdb03fe84c">tempest-ServersNegativeTestJSON-1016750887</nova:project>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <nova:port uuid="427af7f8-9f7d-4dfb-8b25-e66ccc99dd34">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <entry name="serial">9519ccf6-547e-4b98-8a54-543c4639d35a</entry>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <entry name="uuid">9519ccf6-547e-4b98-8a54-543c4639d35a</entry>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/9519ccf6-547e-4b98-8a54-543c4639d35a_disk">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/9519ccf6-547e-4b98-8a54-543c4639d35a_disk.config">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:c9:bc:ec"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <target dev="tap427af7f8-9f"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/console.log" append="off"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:34:24 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:34:24 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:34:24 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:34:24 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
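[annotation] The XML dumped above is the complete domain definition nova hands to libvirt: a q35 machine with a custom Nehalem CPU, an RBD-backed virtio root disk plus a SATA cdrom for the config drive (both authenticating as the cephx user "openstack"), a tap interface for br-int at MTU 1442, serial console logging, VNC, and virtio RNG/balloon devices. A minimal sketch of launching such a definition with the libvirt Python bindings (illustrative; nova's driver adds event handling and error paths around this):

    import libvirt

    xml = "..."  # the <domain> document logged above
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)   # persist the definition (instance-000000ab)
    dom.create()                # boot it; systemd-machined then registers
                                # "qemu-93-instance-000000ab", as logged below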
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.778 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Preparing to wait for external event network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.778 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.778 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.778 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.780 257641 DEBUG nova.virt.libvirt.vif [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:34:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1231532861',display_name='tempest-ServersNegativeTestJSON-server-1231532861',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1231532861',id=171,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-cohi5hvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:34:19Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=9519ccf6-547e-4b98-8a54-543c4639d35a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.781 257641 DEBUG nova.network.os_vif_util [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.783 257641 DEBUG nova.network.os_vif_util [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap427af7f8-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.783 257641 DEBUG os_vif [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap427af7f8-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.785 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.786 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.787 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.794 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.795 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap427af7f8-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.796 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap427af7f8-9f, col_values=(('external_ids', {'iface-id': '427af7f8-9f7d-4dfb-8b25-e66ccc99dd34', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c9:bc:ec', 'vm-uuid': '9519ccf6-547e-4b98-8a54-543c4639d35a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:24 np0005539550 NetworkManager[49039]: <info>  [1764405264.7998] manager: (tap427af7f8-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.878 257641 INFO os_vif [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap427af7f8-9f')#033[00m
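[annotation] The os-vif plug is just two OVSDB transactions: an idempotent AddBridgeCommand for br-int, then AddPortCommand plus a DbSetCommand that stamps the Interface row with the external_ids OVN needs (iface-id, attached-mac, vm-uuid). The ovs-vsctl equivalent, sketched in Python for illustration:

    import subprocess

    port = "tap427af7f8-9f"
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int",
                    "--", "set", "Bridge", "br-int", "datapath_type=system"],
                   check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
                    "--", "set", "Interface", port,
                    "external_ids:iface-id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34",
                    "external_ids:iface-status=active",
                    "external_ids:attached-mac=fa:16:3e:c9:bc:ec",
                    "external_ids:vm-uuid=9519ccf6-547e-4b98-8a54-543c4639d35a"],
                   check=True)

ovn-controller watches for the iface-id and claims the matching logical port, which is the "Claiming lport 427af7f8-..." message further down.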
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.983 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.984 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.984 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] No VIF found with MAC fa:16:3e:c9:bc:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:34:24 np0005539550 nova_compute[257631]: 2025-11-29 08:34:24.984 257641 INFO nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Using config drive#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.009 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.224 257641 INFO nova.virt.libvirt.driver [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Deleting instance files /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46_del#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.224 257641 INFO nova.virt.libvirt.driver [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Deletion of /var/lib/nova/instances/373a37da-5f23-4b61-b901-b36e7b8a1f46_del complete#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.291 257641 INFO nova.compute.manager [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.292 257641 DEBUG oslo.service.loopingcall [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.292 257641 DEBUG nova.compute.manager [-] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.292 257641 DEBUG nova.network.neutron [-] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.497 257641 INFO nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Creating config drive at /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/disk.config#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.502 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptr25lbj3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.602 257641 DEBUG nova.network.neutron [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Updated VIF entry in instance network info cache for port 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.603 257641 DEBUG nova.network.neutron [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Updating instance_info_cache with network_info: [{"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.620 257641 DEBUG oslo_concurrency.lockutils [req-2cd5a350-27f1-412d-8d73-b36a3c972442 req-8fcd3f0d-7479-4be2-b7da-603415e9916f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9519ccf6-547e-4b98-8a54-543c4639d35a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.639 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptr25lbj3" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.666 257641 DEBUG nova.storage.rbd_utils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] rbd image 9519ccf6-547e-4b98-8a54-543c4639d35a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.669 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/disk.config 9519ccf6-547e-4b98-8a54-543c4639d35a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.738 257641 DEBUG nova.network.neutron [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updated VIF entry in instance network info cache for port cb992772-2542-4728-9c87-11f05dddcf88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.739 257641 DEBUG nova.network.neutron [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updating instance_info_cache with network_info: [{"id": "cb992772-2542-4728-9c87-11f05dddcf88", "address": "fa:16:3e:1b:84:b6", "network": {"id": "0c12fb7c-b7e7-49dd-a7c7-9ead1e551738", "bridge": "br-int", "label": "tempest-network-smoke--1260795988", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcb992772-25", "ovs_interfaceid": "cb992772-2542-4728-9c87-11f05dddcf88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:25.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.761 257641 DEBUG oslo_concurrency.lockutils [req-e8113416-1a17-40c9-acfb-82ade4bec136 req-644590d7-a02d-4af9-9afa-e6f53ca25d29 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-373a37da-5f23-4b61-b901-b36e7b8a1f46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.833 257641 DEBUG oslo_concurrency.processutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/disk.config 9519ccf6-547e-4b98-8a54-543c4639d35a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.834 257641 INFO nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Deleting local config drive /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a/disk.config because it was imported into RBD.#033[00m
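[annotation] The config-drive path is fully visible here: nova builds an ISO9660 image labelled config-2 with mkisofs, imports it into the vms RBD pool as <uuid>_disk.config (the cdrom device in the domain XML above), then deletes the local file. The two commands as logged, wrapped in Python for consistency with the other sketches:

    import subprocess

    uuid = "9519ccf6-547e-4b98-8a54-543c4639d35a"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"
    subprocess.run(["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/tmp/tmptr25lbj3"],  # staging dir from the log above
                   check=True)
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{uuid}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)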
Nov 29 03:34:25 np0005539550 kernel: tap427af7f8-9f: entered promiscuous mode
Nov 29 03:34:25 np0005539550 systemd-udevd[362528]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:34:25 np0005539550 NetworkManager[49039]: <info>  [1764405265.8911] manager: (tap427af7f8-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Nov 29 03:34:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:25Z|00769|binding|INFO|Claiming lport 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 for this chassis.
Nov 29 03:34:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:25Z|00770|binding|INFO|427af7f8-9f7d-4dfb-8b25-e66ccc99dd34: Claiming fa:16:3e:c9:bc:ec 10.100.0.4
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.892 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:25 np0005539550 NetworkManager[49039]: <info>  [1764405265.9043] device (tap427af7f8-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:34:25 np0005539550 NetworkManager[49039]: <info>  [1764405265.9050] device (tap427af7f8-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.913 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:bc:ec 10.100.0.4'], port_security=['fa:16:3e:c9:bc:ec 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9519ccf6-547e-4b98-8a54-543c4639d35a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.915 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 bound to our chassis#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.916 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a83b79bc-6262-43e7-a9e5-5e808a213726#033[00m
Nov 29 03:34:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:25Z|00771|binding|INFO|Setting lport 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 ovn-installed in OVS
Nov 29 03:34:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:25Z|00772|binding|INFO|Setting lport 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 up in Southbound
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.929 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b9a20fdf-d091-4f09-9189-50df1a972731]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.930 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa83b79bc-61 in ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
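[annotation] Provisioning metadata for the new network is the mirror image of the teardown at the top of this excerpt: the agent creates an ovnmeta- namespace and a veth pair, keeping tapa83b79bc-60 on the host (the new Veth device NetworkManager reports below) and moving tapa83b79bc-61 into the namespace. A simplified pyroute2 sketch of that step, under the assumption that names match the log; the real provision_datapath also sets MTU, addresses and the OVS plug:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726"
    netns.create(ns)
    ipr = IPRoute()
    # Host-side end stays as tapa83b79bc-60; the peer goes into the namespace.
    ipr.link("add", ifname="tapa83b79bc-60", kind="veth",
             peer="tapa83b79bc-61")
    idx = ipr.link_lookup(ifname="tapa83b79bc-61")[0]
    ipr.link("set", index=idx, net_ns_fd=ns)
    ipr.link("set", index=ipr.link_lookup(ifname="tapa83b79bc-60")[0],
             state="up")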
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.930 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.932 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa83b79bc-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.933 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ff05c4c4-1939-4f24-903e-9b91d1d0342f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.933 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9493685-3974-4903-bbe7-d79ef32ee7ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.949 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2f520b-df2e-4abd-92ab-c5beacce976e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:25 np0005539550 systemd-machined[216673]: New machine qemu-93-instance-000000ab.
Nov 29 03:34:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:25.968 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[50015ac0-af75-4d5d-8c7d-ca00acbdb3e2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:25 np0005539550 systemd[1]: Started Virtual Machine qemu-93-instance-000000ab.
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.978 257641 DEBUG nova.network.neutron [-] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:34:25 np0005539550 nova_compute[257631]: 2025-11-29 08:34:25.994 257641 INFO nova.compute.manager [-] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Took 0.70 seconds to deallocate network for instance.
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.002 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6a552f53-c25d-404d-aac4-584ff4543cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.008 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc7dd39-37f9-4126-b521-4e33ac2ecc17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 NetworkManager[49039]: <info>  [1764405266.0122] manager: (tapa83b79bc-60): new Veth device (/org/freedesktop/NetworkManager/Devices/342)
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.042 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.042 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.043 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec50ba6-f01f-41b8-9dcc-c3ba9fa162a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.046 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a282704c-57f6-4c54-a964-006c6eebdacd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 NetworkManager[49039]: <info>  [1764405266.0767] device (tapa83b79bc-60): carrier: link connected
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.081 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d308763a-5595-43e3-a5bd-09f308b8baa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.085 257641 DEBUG nova.compute.manager [req-7a1386c4-8683-476a-afbf-d2def3915d63 req-2478604a-4b42-4c2c-89a8-05a94d8ccce4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-vif-deleted-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.101 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[67e172bf-ad9b-48cf-87f3-881216d1f1c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831374, 'reachable_time': 44441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362641, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.118 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[27a0ae9a-8420-499e-bdf8-49006a198654]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:4191'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 831374, 'tstamp': 831374}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362642, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.137 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[048202e7-a7fb-4a45-a7a0-91cdf3643f75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa83b79bc-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:41:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831374, 'reachable_time': 44441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362643, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.160 257641 DEBUG nova.compute.manager [req-4e2c4acf-8ce4-4b7f-ab11-1324b31784ca req-6962185f-f1a0-4b36-a6ea-b158f1103776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received event network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.160 257641 DEBUG oslo_concurrency.lockutils [req-4e2c4acf-8ce4-4b7f-ab11-1324b31784ca req-6962185f-f1a0-4b36-a6ea-b158f1103776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.161 257641 DEBUG oslo_concurrency.lockutils [req-4e2c4acf-8ce4-4b7f-ab11-1324b31784ca req-6962185f-f1a0-4b36-a6ea-b158f1103776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.161 257641 DEBUG oslo_concurrency.lockutils [req-4e2c4acf-8ce4-4b7f-ab11-1324b31784ca req-6962185f-f1a0-4b36-a6ea-b158f1103776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.161 257641 DEBUG nova.compute.manager [req-4e2c4acf-8ce4-4b7f-ab11-1324b31784ca req-6962185f-f1a0-4b36-a6ea-b158f1103776 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Processing event network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.177 257641 DEBUG oslo_concurrency.processutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.180 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dad0e8d5-18a7-473b-9a56-15f3523390c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 638 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 262 op/s
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.263 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30fb5daf-45b5-41ed-bc5f-d016643a6701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.265 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.265 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.265 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa83b79bc-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:34:26 np0005539550 kernel: tapa83b79bc-60: entered promiscuous mode
Nov 29 03:34:26 np0005539550 NetworkManager[49039]: <info>  [1764405266.2676] manager: (tapa83b79bc-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.268 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.282 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa83b79bc-60, col_values=(('external_ids', {'iface-id': '5551fa67-e815-437e-8413-5562ca9c4d10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
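The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) plug the root-namespace veth end into br-int and tag it with the iface-id that ovn-controller matches against the southbound DB. The agent drives this through ovsdbapp's Python IDL; a rough sketch of the equivalent CLI operations, assuming root and a local ovs-vsctl:

    # Sketch: CLI equivalents of the three logged ovsdbapp transactions.
    import subprocess

    PORT = "tapa83b79bc-60"
    IFACE_ID = "5551fa67-e815-437e-8413-5562ca9c4d10"

    def vsctl(*args: str) -> None:
        subprocess.run(["ovs-vsctl", *args], check=True)

    vsctl("--if-exists", "del-port", "br-ex", PORT)   # DelPortCommand
    vsctl("--may-exist", "add-port", "br-int", PORT)  # AddPortCommand
    vsctl("set", "Interface", PORT,                   # DbSetCommand
          f"external_ids:iface-id={IFACE_ID}")
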
Nov 29 03:34:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:26Z|00773|binding|INFO|Releasing lport 5551fa67-e815-437e-8413-5562ca9c4d10 from this chassis (sb_readonly=0)
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.283 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.300 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.301 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c5956a-ee74-4bcf-a3cc-e2a9996dda64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.302 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/a83b79bc-6262-43e7-a9e5-5e808a213726.pid.haproxy
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID a83b79bc-6262-43e7-a9e5-5e808a213726
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:34:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:26.303 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'env', 'PROCESS_TAG=haproxy-a83b79bc-6262-43e7-a9e5-5e808a213726', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a83b79bc-6262-43e7-a9e5-5e808a213726.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
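Per the config dumped above, haproxy binds 169.254.169.254:80 inside the ovnmeta- namespace and forwards to the unix socket at /var/lib/neutron/metadata_proxy, stamping each request with the X-OVN-Network-ID header. A sketch of exercising it from the host, assuming iproute2 and curl are available; note the metadata service identifies instances by source address, so from the namespace this mainly checks reachability rather than returning real instance data:

    # Sketch: hit the metadata proxy from inside the ovnmeta- namespace.
    # Inside a guest you would query 169.254.169.254 directly instead.
    import subprocess

    NS = "ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726"
    URL = "http://169.254.169.254/openstack/latest/meta_data.json"

    result = subprocess.run(
        ["ip", "netns", "exec", NS, "curl", "-s", "-i", URL],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
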
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.360 257641 DEBUG nova.compute.manager [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-vif-unplugged-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.360 257641 DEBUG oslo_concurrency.lockutils [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.361 257641 DEBUG oslo_concurrency.lockutils [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.361 257641 DEBUG oslo_concurrency.lockutils [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.362 257641 DEBUG nova.compute.manager [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] No waiting events found dispatching network-vif-unplugged-cb992772-2542-4728-9c87-11f05dddcf88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.362 257641 WARNING nova.compute.manager [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received unexpected event network-vif-unplugged-cb992772-2542-4728-9c87-11f05dddcf88 for instance with vm_state deleted and task_state None.
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.362 257641 DEBUG nova.compute.manager [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received event network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.363 257641 DEBUG oslo_concurrency.lockutils [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.363 257641 DEBUG oslo_concurrency.lockutils [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.363 257641 DEBUG oslo_concurrency.lockutils [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.364 257641 DEBUG nova.compute.manager [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] No waiting events found dispatching network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.364 257641 WARNING nova.compute.manager [req-1daa72bc-f210-4ec5-b405-f1fef09e2cc7 req-af222d7a-0863-4a2a-a2bd-e06a612da6e6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Received unexpected event network-vif-plugged-cb992772-2542-4728-9c87-11f05dddcf88 for instance with vm_state deleted and task_state None.
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.453 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405266.4534278, 9519ccf6-547e-4b98-8a54-543c4639d35a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.454 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] VM Started (Lifecycle Event)
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.457 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.461 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.465 257641 INFO nova.virt.libvirt.driver [-] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Instance spawned successfully.
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.466 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.495 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.501 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.502 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.503 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.504 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.504 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.505 257641 DEBUG nova.virt.libvirt.driver [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.510 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.540 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.540 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405266.4535353, 9519ccf6-547e-4b98-8a54-543c4639d35a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.541 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] VM Paused (Lifecycle Event)
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.563 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.569 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405266.4615037, 9519ccf6-547e-4b98-8a54-543c4639d35a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.569 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] VM Resumed (Lifecycle Event)
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.576 257641 INFO nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Took 7.06 seconds to spawn the instance on the hypervisor.
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.577 257641 DEBUG nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.589 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.593 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.633 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:34:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898061968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:34:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:26.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.649 257641 INFO nova.compute.manager [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Took 8.46 seconds to build instance.
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.657 257641 DEBUG oslo_concurrency.processutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
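The resource tracker sizes its DISK_GB inventory from the `ceph df --format=json` call logged above. A minimal sketch of reading the same cluster totals; the JSON key names below match recent Ceph releases but should be treated as assumptions:

    # Sketch: parse cluster capacity from the same command nova runs above.
    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(raw)["stats"]   # assumed key layout
    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
          f"used {stats['total_used_bytes'] / gib:.1f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")
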
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.666 257641 DEBUG nova.compute.provider_tree [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.677 257641 DEBUG oslo_concurrency.lockutils [None req-0d06f5ce-c611-422c-9b68-3fb48ccb1eea a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.691 257641 DEBUG nova.scheduler.client.report [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
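The inventory dict above determines how much placement will schedule onto this node: as I understand placement's capacity check, usable capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the logged values:

    # Effective capacity implied by the inventory record above:
    # capacity = (total - reserved) * allocation_ratio
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
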
Nov 29 03:34:26 np0005539550 podman[362735]: 2025-11-29 08:34:26.713407759 +0000 UTC m=+0.060156668 container create d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:34:26 np0005539550 systemd[1]: Started libpod-conmon-d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921.scope.
Nov 29 03:34:26 np0005539550 podman[362735]: 2025-11-29 08:34:26.689433301 +0000 UTC m=+0.036182200 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:34:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:34:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aafae16892ff7417dcb13d012dc05f15f6239292f1c3144fe716b903c6203a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:26 np0005539550 podman[362735]: 2025-11-29 08:34:26.814935226 +0000 UTC m=+0.161684155 container init d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:26 np0005539550 podman[362735]: 2025-11-29 08:34:26.820417535 +0000 UTC m=+0.167166444 container start d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:34:26 np0005539550 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[362752]: [NOTICE]   (362756) : New worker (362758) forked
Nov 29 03:34:26 np0005539550 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[362752]: [NOTICE]   (362756) : Loading success.
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.889 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:26 np0005539550 nova_compute[257631]: 2025-11-29 08:34:26.989 257641 INFO nova.scheduler.client.report [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 373a37da-5f23-4b61-b901-b36e7b8a1f46
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.049 257641 DEBUG oslo_concurrency.lockutils [None req-861ccccb-a082-453c-b982-1981ff36979d 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "373a37da-5f23-4b61-b901-b36e7b8a1f46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Nov 29 03:34:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Nov 29 03:34:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.687 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.688 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.689 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.689 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.690 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.692 257641 INFO nova.compute.manager [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Terminating instance
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.693 257641 DEBUG nova.compute.manager [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:34:27 np0005539550 kernel: tap427af7f8-9f (unregistering): left promiscuous mode
Nov 29 03:34:27 np0005539550 NetworkManager[49039]: <info>  [1764405267.7523] device (tap427af7f8-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:34:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:27.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:27Z|00774|binding|INFO|Releasing lport 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 from this chassis (sb_readonly=0)
Nov 29 03:34:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:27Z|00775|binding|INFO|Setting lport 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 down in Southbound
Nov 29 03:34:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:27Z|00776|binding|INFO|Removing iface tap427af7f8-9f ovn-installed in OVS
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.759 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.763 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:27.778 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c9:bc:ec 10.100.0.4'], port_security=['fa:16:3e:c9:bc:ec 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9519ccf6-547e-4b98-8a54-543c4639d35a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83b79bc-6262-43e7-a9e5-5e808a213726', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96e72e7660da497a8b6bf9fdb03fe84c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e362fc3-a5d2-4518-8d56-0e9bbfbe70b6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89f1621a-4594-4d70-9442-76b3c597dffc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:34:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:27.780 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 in datapath a83b79bc-6262-43e7-a9e5-5e808a213726 unbound from our chassis
Nov 29 03:34:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:27.781 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a83b79bc-6262-43e7-a9e5-5e808a213726, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:34:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:27.782 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7b035d95-fa02-4408-9cd0-5871b11c0313]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:34:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:27.783 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 namespace which is not needed anymore
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.793 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
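Teardown mirrors provisioning: with the last VIF unbound, the agent stops the haproxy proxy, unplugs the veth from br-int, and removes the now-empty namespace (neutron does this through privsep rather than the CLI). A sketch of those final steps under the same assumptions as above (root, ovs-vsctl, iproute2):

    # Sketch: the last two cleanup steps the agent performs above.
    import subprocess

    NS = "ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726"
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port",
                    "br-int", "tapa83b79bc-60"], check=True)
    subprocess.run(["ip", "netns", "delete", NS], check=True)
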
Nov 29 03:34:27 np0005539550 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000ab.scope: Deactivated successfully.
Nov 29 03:34:27 np0005539550 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000ab.scope: Consumed 1.673s CPU time.
Nov 29 03:34:27 np0005539550 systemd-machined[216673]: Machine qemu-93-instance-000000ab terminated.
Nov 29 03:34:27 np0005539550 podman[362767]: 2025-11-29 08:34:27.852998444 +0000 UTC m=+0.085617094 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:27 np0005539550 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[362752]: [NOTICE]   (362756) : haproxy version is 2.8.14-c23fe91
Nov 29 03:34:27 np0005539550 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[362752]: [NOTICE]   (362756) : path to executable is /usr/sbin/haproxy
Nov 29 03:34:27 np0005539550 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[362752]: [WARNING]  (362756) : Exiting Master process...
Nov 29 03:34:27 np0005539550 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[362752]: [ALERT]    (362756) : Current worker (362758) exited with code 143 (Terminated)
Nov 29 03:34:27 np0005539550 neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726[362752]: [WARNING]  (362756) : All workers exited. Exiting... (0)
Nov 29 03:34:27 np0005539550 systemd[1]: libpod-d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921.scope: Deactivated successfully.
Nov 29 03:34:27 np0005539550 conmon[362752]: conmon d946cacdaef1067c9ef4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921.scope/container/memory.events
Nov 29 03:34:27 np0005539550 podman[362814]: 2025-11-29 08:34:27.914824434 +0000 UTC m=+0.044039059 container died d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.918 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.924 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.940 257641 INFO nova.virt.libvirt.driver [-] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Instance destroyed successfully.#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.941 257641 DEBUG nova.objects.instance [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lazy-loading 'resources' on Instance uuid 9519ccf6-547e-4b98-8a54-543c4639d35a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:34:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921-userdata-shm.mount: Deactivated successfully.
Nov 29 03:34:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0aafae16892ff7417dcb13d012dc05f15f6239292f1c3144fe716b903c6203a3-merged.mount: Deactivated successfully.
Nov 29 03:34:27 np0005539550 podman[362814]: 2025-11-29 08:34:27.960729199 +0000 UTC m=+0.089943824 container cleanup d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.962 257641 DEBUG nova.virt.libvirt.vif [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:34:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1231532861',display_name='tempest-ServersNegativeTestJSON-server-1231532861',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1231532861',id=171,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96e72e7660da497a8b6bf9fdb03fe84c',ramdisk_id='',reservation_id='r-cohi5hvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1016750887',owner_user_name='tempest-ServersNegativeTestJSON-1016750887-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:26Z,user_data=None,user_id='a57807acb02b45d082f242ec62cd5b6f',uuid=9519ccf6-547e-4b98-8a54-543c4639d35a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.963 257641 DEBUG nova.network.os_vif_util [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converting VIF {"id": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "address": "fa:16:3e:c9:bc:ec", "network": {"id": "a83b79bc-6262-43e7-a9e5-5e808a213726", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1830064421-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96e72e7660da497a8b6bf9fdb03fe84c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap427af7f8-9f", "ovs_interfaceid": "427af7f8-9f7d-4dfb-8b25-e66ccc99dd34", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.964 257641 DEBUG nova.network.os_vif_util [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap427af7f8-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.964 257641 DEBUG os_vif [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap427af7f8-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.966 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.972 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap427af7f8-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:27 np0005539550 systemd[1]: libpod-conmon-d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921.scope: Deactivated successfully.
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.974 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.977 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:34:27 np0005539550 nova_compute[257631]: 2025-11-29 08:34:27.979 257641 INFO os_vif [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c9:bc:ec,bridge_name='br-int',has_traffic_filtering=True,id=427af7f8-9f7d-4dfb-8b25-e66ccc99dd34,network=Network(a83b79bc-6262-43e7-a9e5-5e808a213726),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap427af7f8-9f')#033[00m
Nov 29 03:34:28 np0005539550 podman[362855]: 2025-11-29 08:34:28.036794289 +0000 UTC m=+0.045840164 container remove d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.045 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[891263b4-3e6f-4068-845f-4933d71d5b5f]: (4, ('Sat Nov 29 08:34:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921)\nd946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921\nSat Nov 29 08:34:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 (d946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921)\nd946cacdaef1067c9ef4ae6361e52cee3288c91810479dfa3d2dae7d0c8c7921\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.048 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[426cefd1-0d8b-41d0-808f-dfb53ea9d43d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.049 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa83b79bc-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:34:28 np0005539550 kernel: tapa83b79bc-60: left promiscuous mode
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.058 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b06ebbb0-acdd-4796-90c7-ce1a9497d51d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.063 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.072 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.074 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3a071254-d391-4207-b227-cbf946156469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.075 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ae29e2-0945-45e0-bee8-518806715123]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.091 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3e96a2cd-fa79-47ad-8db0-02d6629c638d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831366, 'reachable_time': 20545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362888, 'error': None, 'target': 'ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:28 np0005539550 systemd[1]: run-netns-ovnmeta\x2da83b79bc\x2d6262\x2d43e7\x2da9e5\x2d5e808a213726.mount: Deactivated successfully.
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.095 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a83b79bc-6262-43e7-a9e5-5e808a213726 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:34:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:34:28.095 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[fb220fb6-0483-4dd0-9942-3289c0626857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:34:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 635 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 373 op/s
Nov 29 03:34:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.281 257641 DEBUG nova.compute.manager [req-8c82de22-4577-4357-a488-03c2f3b95561 req-65043dcf-2014-40f4-968d-d9077b19876a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received event network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.281 257641 DEBUG oslo_concurrency.lockutils [req-8c82de22-4577-4357-a488-03c2f3b95561 req-65043dcf-2014-40f4-968d-d9077b19876a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.283 257641 DEBUG oslo_concurrency.lockutils [req-8c82de22-4577-4357-a488-03c2f3b95561 req-65043dcf-2014-40f4-968d-d9077b19876a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.284 257641 DEBUG oslo_concurrency.lockutils [req-8c82de22-4577-4357-a488-03c2f3b95561 req-65043dcf-2014-40f4-968d-d9077b19876a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.284 257641 DEBUG nova.compute.manager [req-8c82de22-4577-4357-a488-03c2f3b95561 req-65043dcf-2014-40f4-968d-d9077b19876a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] No waiting events found dispatching network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.285 257641 WARNING nova.compute.manager [req-8c82de22-4577-4357-a488-03c2f3b95561 req-65043dcf-2014-40f4-968d-d9077b19876a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received unexpected event network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.354 257641 INFO nova.virt.libvirt.driver [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Deleting instance files /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a_del#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.354 257641 INFO nova.virt.libvirt.driver [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Deletion of /var/lib/nova/instances/9519ccf6-547e-4b98-8a54-543c4639d35a_del complete#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.411 257641 INFO nova.compute.manager [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.412 257641 DEBUG oslo.service.loopingcall [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.412 257641 DEBUG nova.compute.manager [-] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.412 257641 DEBUG nova.network.neutron [-] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:34:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:28.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:28 np0005539550 nova_compute[257631]: 2025-11-29 08:34:28.849 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:29 np0005539550 nova_compute[257631]: 2025-11-29 08:34:29.538 257641 DEBUG nova.network.neutron [-] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:29 np0005539550 nova_compute[257631]: 2025-11-29 08:34:29.557 257641 INFO nova.compute.manager [-] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Took 1.14 seconds to deallocate network for instance.#033[00m
Nov 29 03:34:29 np0005539550 nova_compute[257631]: 2025-11-29 08:34:29.609 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:29 np0005539550 nova_compute[257631]: 2025-11-29 08:34:29.610 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:29 np0005539550 nova_compute[257631]: 2025-11-29 08:34:29.625 257641 DEBUG nova.compute.manager [req-0aeed1d7-9a25-42fe-a0a6-8e0c6624b1ba req-5d35d0b5-56a8-4e4b-b324-b4088f172f4b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received event network-vif-deleted-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:29 np0005539550 nova_compute[257631]: 2025-11-29 08:34:29.734 257641 DEBUG oslo_concurrency.processutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:34:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:29.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458336884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.182 257641 DEBUG oslo_concurrency.processutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.189 257641 DEBUG nova.compute.provider_tree [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.214 257641 DEBUG nova.scheduler.client.report [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:34:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 635 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.6 MiB/s wr, 302 op/s
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.239 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.261 257641 INFO nova.scheduler.client.report [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Deleted allocations for instance 9519ccf6-547e-4b98-8a54-543c4639d35a#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.340 257641 DEBUG oslo_concurrency.lockutils [None req-82e3004c-c67d-49c8-b763-7ed32e724757 a57807acb02b45d082f242ec62cd5b6f 96e72e7660da497a8b6bf9fdb03fe84c - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:30Z|00777|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.416 257641 DEBUG nova.compute.manager [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received event network-vif-unplugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.416 257641 DEBUG oslo_concurrency.lockutils [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.417 257641 DEBUG oslo_concurrency.lockutils [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.417 257641 DEBUG oslo_concurrency.lockutils [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.417 257641 DEBUG nova.compute.manager [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] No waiting events found dispatching network-vif-unplugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.417 257641 WARNING nova.compute.manager [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received unexpected event network-vif-unplugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.418 257641 DEBUG nova.compute.manager [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received event network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.422 257641 DEBUG oslo_concurrency.lockutils [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.422 257641 DEBUG oslo_concurrency.lockutils [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.423 257641 DEBUG oslo_concurrency.lockutils [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9519ccf6-547e-4b98-8a54-543c4639d35a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.423 257641 DEBUG nova.compute.manager [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] No waiting events found dispatching network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.423 257641 WARNING nova.compute.manager [req-7021049f-7413-488e-873c-266678d67187 req-f13aa9b2-7a14-423e-80c4-dde82fbe580b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Received unexpected event network-vif-plugged-427af7f8-9f7d-4dfb-8b25-e66ccc99dd34 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:34:30 np0005539550 nova_compute[257631]: 2025-11-29 08:34:30.469 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:30.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:31.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 599 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 347 op/s
Nov 29 03:34:32 np0005539550 nova_compute[257631]: 2025-11-29 08:34:32.538 257641 DEBUG nova.compute.manager [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-changed-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:32 np0005539550 nova_compute[257631]: 2025-11-29 08:34:32.539 257641 DEBUG nova.compute.manager [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing instance network info cache due to event network-changed-5369324b-4a12-4cff-807c-444de53025fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:32 np0005539550 nova_compute[257631]: 2025-11-29 08:34:32.539 257641 DEBUG oslo_concurrency.lockutils [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:32 np0005539550 nova_compute[257631]: 2025-11-29 08:34:32.539 257641 DEBUG oslo_concurrency.lockutils [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:32 np0005539550 nova_compute[257631]: 2025-11-29 08:34:32.540 257641 DEBUG nova.network.neutron [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Refreshing network info cache for port 5369324b-4a12-4cff-807c-444de53025fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:34:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:32.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:34:32 np0005539550 nova_compute[257631]: 2025-11-29 08:34:32.675 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.021 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:34:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4801.4 total, 600.0 interval#012Cumulative writes: 47K writes, 178K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 35K keys, 10K commit groups, 1.0 writes per commit group, ingest: 36.49 MB, 0.06 MB/s#012Interval WAL: 10K writes, 4136 syncs, 2.44 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:34:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:33.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.844 257641 DEBUG nova.network.neutron [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updated VIF entry in instance network info cache for port 5369324b-4a12-4cff-807c-444de53025fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.845 257641 DEBUG nova.network.neutron [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [{"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.856 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.870 257641 DEBUG oslo_concurrency.lockutils [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.870 257641 DEBUG nova.compute.manager [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.870 257641 DEBUG nova.compute.manager [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing instance network info cache due to event network-changed-a08044a1-40b5-4987-bfe0-a92ba0c13b97. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.870 257641 DEBUG oslo_concurrency.lockutils [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.871 257641 DEBUG oslo_concurrency.lockutils [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:34:33 np0005539550 nova_compute[257631]: 2025-11-29 08:34:33.871 257641 DEBUG nova.network.neutron [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Refreshing network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:34:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Nov 29 03:34:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Nov 29 03:34:34 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Nov 29 03:34:34 np0005539550 nova_compute[257631]: 2025-11-29 08:34:34.215 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 1007 KiB/s wr, 200 op/s
Nov 29 03:34:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:34.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:35 np0005539550 nova_compute[257631]: 2025-11-29 08:34:35.165 257641 DEBUG nova.network.neutron [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updated VIF entry in instance network info cache for port a08044a1-40b5-4987-bfe0-a92ba0c13b97. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:34:35 np0005539550 nova_compute[257631]: 2025-11-29 08:34:35.165 257641 DEBUG nova.network.neutron [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [{"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:34:35 np0005539550 nova_compute[257631]: 2025-11-29 08:34:35.182 257641 DEBUG oslo_concurrency.lockutils [req-d074a6e2-0b7f-498e-af2a-93a7a78dbcc0 req-8d80c035-681e-4e20-bfd6-ee015d6425d4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f6eba09a-c0cf-4855-afd5-b265b2f2cadc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:34:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:35.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 944 KiB/s wr, 222 op/s
Nov 29 03:34:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:34:36Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:39:c1:ec 10.100.0.3
Nov 29 03:34:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:36.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:37.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:38 np0005539550 nova_compute[257631]: 2025-11-29 08:34:38.024 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:38 np0005539550 nova_compute[257631]: 2025-11-29 08:34:38.058 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:34:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 98 KiB/s wr, 136 op/s
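The pgmap lines carry the cluster-wide counters in a fixed shape; a sketch for pulling the capacity figures out (regex fitted to these samples only):

    import re

    # Capacity fields of a pgmap summary line as logged above.
    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; "
        r"(?P<data>[\d.]+ [KMGT]iB) data, (?P<used>[\d.]+ [KMGT]iB) used, "
        r"(?P<avail>[\d.]+ [KMGT]iB) / (?P<total>[\d.]+ [KMGT]iB) avail")

    line = ("pgmap v2806: 305 pgs: 305 active+clean; 598 MiB data, "
            "1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 98 KiB/s wr, 136 op/s")
    m = PGMAP.search(line)
    print(m.group("ver"), m.group("used"), "used,", m.group("avail"), "avail")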
Nov 29 03:34:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:38.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:38 np0005539550 nova_compute[257631]: 2025-11-29 08:34:38.857 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:38 np0005539550 nova_compute[257631]: 2025-11-29 08:34:38.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:39 np0005539550 nova_compute[257631]: 2025-11-29 08:34:39.310 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405264.3088443, 373a37da-5f23-4b61-b901-b36e7b8a1f46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:39 np0005539550 nova_compute[257631]: 2025-11-29 08:34:39.311 257641 INFO nova.compute.manager [-] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] VM Stopped (Lifecycle Event)
Nov 29 03:34:39 np0005539550 nova_compute[257631]: 2025-11-29 08:34:39.332 257641 DEBUG nova.compute.manager [None req-9760ea73-d6f4-4711-b9d2-fccf0b0bccef - - - - - -] [instance: 373a37da-5f23-4b61-b901-b36e7b8a1f46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:39.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 598 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 98 KiB/s wr, 136 op/s
Nov 29 03:34:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:40.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:41.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:41 np0005539550 nova_compute[257631]: 2025-11-29 08:34:41.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 636 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 663 KiB/s rd, 1.9 MiB/s wr, 94 op/s
Nov 29 03:34:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:42.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:42 np0005539550 nova_compute[257631]: 2025-11-29 08:34:42.938 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:42 np0005539550 nova_compute[257631]: 2025-11-29 08:34:42.939 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:34:42 np0005539550 nova_compute[257631]: 2025-11-29 08:34:42.939 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:34:42 np0005539550 nova_compute[257631]: 2025-11-29 08:34:42.941 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405267.9358816, 9519ccf6-547e-4b98-8a54-543c4639d35a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:34:42 np0005539550 nova_compute[257631]: 2025-11-29 08:34:42.942 257641 INFO nova.compute.manager [-] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] VM Stopped (Lifecycle Event)
Nov 29 03:34:42 np0005539550 nova_compute[257631]: 2025-11-29 08:34:42.990 257641 DEBUG nova.compute.manager [None req-6d4b866e-ef59-4666-8408-5142403553e0 - - - - - -] [instance: 9519ccf6-547e-4b98-8a54-543c4639d35a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:34:43 np0005539550 nova_compute[257631]: 2025-11-29 08:34:43.026 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:43 np0005539550 nova_compute[257631]: 2025-11-29 08:34:43.213 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:34:43 np0005539550 nova_compute[257631]: 2025-11-29 08:34:43.213 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:34:43 np0005539550 nova_compute[257631]: 2025-11-29 08:34:43.214 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:34:43 np0005539550 nova_compute[257631]: 2025-11-29 08:34:43.214 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
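The Acquiring/Acquired/Releasing triples above are oslo.concurrency's named internal locks; the code path reduces to roughly this pattern (a sketch assuming oslo.concurrency is installed, with the lock body elided):

    from oslo_concurrency import lockutils

    # Same helper nova uses for the refresh_cache-<uuid> locks seen above;
    # the body is a placeholder for the cache refresh itself.
    instance_uuid = "1509e19f-b5e6-496d-a0d9-d6740970fad0"
    with lockutils.lock("refresh_cache-%s" % instance_uuid):
        pass  # refresh the instance's network info cache under the lock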
Nov 29 03:34:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:43.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:43 np0005539550 nova_compute[257631]: 2025-11-29 08:34:43.900 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 647 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 656 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Nov 29 03:34:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:44.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:34:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:34:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:34:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:34:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1a6c3a0d-2def-405c-8a25-ad262668c0d0 does not exist
Nov 29 03:34:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e8b126ef-c458-4671-a605-c24ebf700877 does not exist
Nov 29 03:34:45 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 644f28ac-9058-4e82-9d82-1ad0ede01940 does not exist
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
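Each audit line above embeds the dispatched command as a JSON array after cmd=, so it can be recovered mechanically. A sketch, with the regex fitted to these samples:

    import json
    import re

    # The dispatched command rides in the audit line as a JSON array.
    CMD = re.compile(r"cmd=(\[.*\]): dispatch")

    line = ("log_channel(audit) log [DBG] : from='mgr.14128 "
            "192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' "
            'cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch')
    cmd = json.loads(CMD.search(line).group(1))
    print(cmd[0]["prefix"])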
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.365729) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285367758, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 2203, "num_deletes": 253, "total_data_size": 3745999, "memory_usage": 3821248, "flush_reason": "Manual Compaction"}
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285587916, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 3688570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53620, "largest_seqno": 55822, "table_properties": {"data_size": 3678752, "index_size": 6120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21563, "raw_average_key_size": 20, "raw_value_size": 3658652, "raw_average_value_size": 3528, "num_data_blocks": 266, "num_entries": 1037, "num_filter_entries": 1037, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405077, "oldest_key_time": 1764405077, "file_creation_time": 1764405285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 222218 microseconds, and 9967 cpu microseconds.
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.587973) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 3688570 bytes OK
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.587992) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.695616) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.695692) EVENT_LOG_v1 {"time_micros": 1764405285695681, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.695729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 3736913, prev total WAL file size 3736913, number of live WAL files 2.
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.698810) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(3602KB)], [119(10MB)]
Nov 29 03:34:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285698935, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 14703583, "oldest_snapshot_seqno": -1}
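The rocksdb EVENT_LOG_v1 records above are line-embedded JSON, which makes flush/compaction activity easy to mine out of the mon log. A sketch:

    import json
    import re

    # Everything after the EVENT_LOG_v1 marker is a JSON object.
    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1764405285367758, '
            '"job": 71, "event": "flush_started", "num_entries": 2203}')
    ev = json.loads(EVENT.search(line).group(1))
    print(ev["job"], ev["event"])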
Nov 29 03:34:45 np0005539550 podman[363241]: 2025-11-29 08:34:45.671715264 +0000 UTC m=+0.021397844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:45 np0005539550 podman[363241]: 2025-11-29 08:34:45.877502907 +0000 UTC m=+0.227185477 container create af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.200 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating instance_info_cache with network_info: [{"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.225 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-1509e19f-b5e6-496d-a0d9-d6740970fad0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.226 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.227 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.227 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.228 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.228 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.228 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
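The run_periodic_tasks entries above are oslo.service dispatching methods registered with its periodic_task decorator; in rough outline (a sketch assuming oslo.service is installed, with an illustrative spacing value):

    from oslo_service import periodic_task

    # Rough shape of the ComputeManager tasks being dispatched above; real
    # methods receive a request context, and spacing=60 is made up here.
    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            pass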
Nov 29 03:34:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 647 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.255 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.256 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.256 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.257 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.257 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
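That probe can be reproduced verbatim; the JSON that ceph df returns carries a cluster-wide stats block. A sketch (the availability math is illustrative, not nova's exact accounting, and it assumes the same keyring and conf are readable):

    import json
    import subprocess

    # Same command the resource tracker launches above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print("avail GiB:", round(stats["total_avail_bytes"] / 2**30, 1))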
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 9221 keys, 12793879 bytes, temperature: kUnknown
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405286272065, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 12793879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12733403, "index_size": 36300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23109, "raw_key_size": 239759, "raw_average_key_size": 26, "raw_value_size": 12570426, "raw_average_value_size": 1363, "num_data_blocks": 1411, "num_entries": 9221, "num_filter_entries": 9221, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405285, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.272578) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12793879 bytes
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.279170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 25.7 rd, 22.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.5 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 9746, records dropped: 525 output_compression: NoCompression
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.279218) EVENT_LOG_v1 {"time_micros": 1764405286279200, "job": 72, "event": "compaction_finished", "compaction_time_micros": 573002, "compaction_time_cpu_micros": 43883, "output_level": 6, "num_output_files": 1, "total_output_size": 12793879, "num_input_records": 9746, "num_output_records": 9221, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405286280381, "job": 72, "event": "table_file_deletion", "file_number": 121}
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405286282811, "job": 72, "event": "table_file_deletion", "file_number": 119}
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:45.698589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.283023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.283037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.283039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.283041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:34:46.283043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:34:46 np0005539550 systemd[1]: Started libpod-conmon-af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77.scope.
Nov 29 03:34:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:34:46 np0005539550 podman[363241]: 2025-11-29 08:34:46.36103562 +0000 UTC m=+0.710718200 container init af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_banzai, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:34:46 np0005539550 podman[363241]: 2025-11-29 08:34:46.371102046 +0000 UTC m=+0.720784616 container start af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:34:46 np0005539550 podman[363241]: 2025-11-29 08:34:46.378222105 +0000 UTC m=+0.727904685 container attach af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:34:46 np0005539550 boring_banzai[363259]: 167 167
Nov 29 03:34:46 np0005539550 systemd[1]: libpod-af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77.scope: Deactivated successfully.
Nov 29 03:34:46 np0005539550 conmon[363259]: conmon af661d05c389f3461e6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77.scope/container/memory.events
Nov 29 03:34:46 np0005539550 podman[363241]: 2025-11-29 08:34:46.381365705 +0000 UTC m=+0.731048265 container died af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:34:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3a205d9641b723c0b3a97778163690c7a5a8d9a031b8c6d875c0c476d09c299d-merged.mount: Deactivated successfully.
Nov 29 03:34:46 np0005539550 podman[363241]: 2025-11-29 08:34:46.42371588 +0000 UTC m=+0.773398430 container remove af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:34:46 np0005539550 systemd[1]: libpod-conmon-af661d05c389f3461e6abb9f8ab1f9e16f06f5a917d54a52b238dd84e0cebd77.scope: Deactivated successfully.
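The short-lived boring_banzai container printed "167 167", the uid/gid pair ceph daemons run as; cephadm probes an image this way before deploying. A sketch of the same check, assuming podman is present and that a stat entrypoint is what ran inside (a reconstruction, not taken from the log):

    import subprocess

    # Assumed reconstruction: run stat inside the logged image digest to
    # read the uid/gid owning /var/lib/ceph (expected output: "167 167").
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"])
    print(out.decode().strip())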
Nov 29 03:34:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:34:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:46.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:34:46 np0005539550 podman[363301]: 2025-11-29 08:34:46.58681024 +0000 UTC m=+0.024776290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592504460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.722 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:34:46 np0005539550 podman[363301]: 2025-11-29 08:34:46.805963062 +0000 UTC m=+0.243929092 container create 916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.819 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.819 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.820 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.823 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.824 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.827 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.827 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.827 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.913 257641 DEBUG oslo_concurrency.lockutils [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.913 257641 DEBUG oslo_concurrency.lockutils [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:46 np0005539550 nova_compute[257631]: 2025-11-29 08:34:46.930 257641 INFO nova.compute.manager [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Detaching volume d51465d5-c782-4ab5-86e5-16500d7ed93e
Nov 29 03:34:46 np0005539550 systemd[1]: Started libpod-conmon-916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7.scope.
Nov 29 03:34:46 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:34:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ff05c8aba34243c07121889c3dffbb4b0cf630b5e9cfae512c33be37746406/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ff05c8aba34243c07121889c3dffbb4b0cf630b5e9cfae512c33be37746406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ff05c8aba34243c07121889c3dffbb4b0cf630b5e9cfae512c33be37746406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ff05c8aba34243c07121889c3dffbb4b0cf630b5e9cfae512c33be37746406/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:46 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ff05c8aba34243c07121889c3dffbb4b0cf630b5e9cfae512c33be37746406/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:47 np0005539550 podman[363301]: 2025-11-29 08:34:47.003830785 +0000 UTC m=+0.441796835 container init 916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:34:47 np0005539550 podman[363301]: 2025-11-29 08:34:47.013151751 +0000 UTC m=+0.451117781 container start 916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:34:47 np0005539550 podman[363301]: 2025-11-29 08:34:47.018550388 +0000 UTC m=+0.456516418 container attach 916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.031 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.032 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=20.739410400390625GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.032 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.032 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.103 257641 INFO nova.virt.block_device [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Attempting to driver detach volume d51465d5-c782-4ab5-86e5-16500d7ed93e from mountpoint /dev/vdb
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.112 257641 DEBUG nova.virt.libvirt.driver [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Attempting to detach device vdb from instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.113 257641 DEBUG nova.virt.libvirt.guest [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:34:47 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:34:47 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
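The detach XML above is small enough to inspect with the standard library; a sketch (element trimmed to one mon host) that pulls out the fields the detach path keys on:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the <disk> element logged above.
    disk_xml = """<disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
      <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
    </disk>"""

    disk = ET.fromstring(disk_xml)
    print(disk.find("target").get("dev"), disk.find("serial").text)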
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.144 257641 INFO nova.virt.libvirt.driver [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a from the persistent domain config.
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.144 257641 DEBUG nova.virt.libvirt.driver [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.145 257641 DEBUG nova.virt.libvirt.guest [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:34:47 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:34:47 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:34:47 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.173 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 1509e19f-b5e6-496d-a0d9-d6740970fad0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.174 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.174 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.174 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.174 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.204 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405287.203721, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.205 257641 DEBUG nova.virt.libvirt.driver [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.207 257641 INFO nova.virt.libvirt.driver [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a from the live domain config.
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.316 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.366 257641 INFO nova.virt.libvirt.driver [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Detected multiple connections on this host for volume: d51465d5-c782-4ab5-86e5-16500d7ed93e, skipping target disconnect.
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.519 257641 DEBUG nova.objects.instance [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.565 257641 DEBUG oslo_concurrency.lockutils [None req-7b21ae35-cda6-41c1-9425-28de5f82be54 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
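The Lock "8effb5bc-…" acquire/release pair around do_detach_volume shows Nova serializing work per instance UUID via oslo.concurrency. A minimal sketch of that pattern (the function and worker names here are hypothetical):

    from oslo_concurrency import lockutils

    def detach_volume(instance_uuid, volume_id):
        # One lock per instance UUID: concurrent attach/detach calls against
        # the same instance queue up behind each other, as in the log above.
        with lockutils.lock(instance_uuid):
            do_detach(instance_uuid, volume_id)  # hypothetical worker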
Nov 29 03:34:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:34:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514707342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.748 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
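The resource tracker sizes its RBD-backed disk pool by shelling out to ceph df, exactly as logged above. An equivalent standalone probe; the JSON key layout is assumed from the ceph CLI, so treat it as a sketch:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed key; cluster-wide totals
    print(stats["total_bytes"], stats["total_avail_bytes"])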
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.753 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.767 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
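Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio. Plugging in the inventory logged above reproduces what this host can hand out:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1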
Nov 29 03:34:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
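The steady once-per-second stream of anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100/102 against radosgw's beast frontend is the signature of a load-balancer health probe. An equivalent check; the endpoint and port are hypothetical, since the log does not record where beast listens:

    import requests

    r = requests.head("http://rgw.example.com:8080/", timeout=2)
    assert r.status_code == 200  # matches the 200s beast logs above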
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.789 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:34:47 np0005539550 nova_compute[257631]: 2025-11-29 08:34:47.790 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 03:34:47 np0005539550 laughing_mcnulty[363322]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:34:47 np0005539550 laughing_mcnulty[363322]: --> relative data size: 1.0
Nov 29 03:34:47 np0005539550 laughing_mcnulty[363322]: --> All data devices are unavailable
Nov 29 03:34:47 np0005539550 systemd[1]: libpod-916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7.scope: Deactivated successfully.
Nov 29 03:34:47 np0005539550 conmon[363322]: conmon 916e83ea4dac31ad4e87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7.scope/container/memory.events
Nov 29 03:34:47 np0005539550 podman[363361]: 2025-11-29 08:34:47.927468069 +0000 UTC m=+0.024308109 container died 916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:34:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-04ff05c8aba34243c07121889c3dffbb4b0cf630b5e9cfae512c33be37746406-merged.mount: Deactivated successfully.
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:48 np0005539550 podman[363361]: 2025-11-29 08:34:48.058462393 +0000 UTC m=+0.155302403 container remove 916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mcnulty, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 03:34:48 np0005539550 systemd[1]: libpod-conmon-916e83ea4dac31ad4e878762ce760b228b1eb87243bd50276e1c03a860f875e7.scope: Deactivated successfully.
Nov 29 03:34:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 647 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 276 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Nov 29 03:34:48 np0005539550 podman[363476]: 2025-11-29 08:34:48.466225843 +0000 UTC m=+0.059530602 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:34:48 np0005539550 podman[363475]: 2025-11-29 08:34:48.466795038 +0000 UTC m=+0.060675381 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.629 257641 DEBUG oslo_concurrency.lockutils [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.629 257641 DEBUG oslo_concurrency.lockutils [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.645 257641 INFO nova.compute.manager [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Detaching volume d51465d5-c782-4ab5-86e5-16500d7ed93e
Nov 29 03:34:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:48.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:48 np0005539550 podman[363557]: 2025-11-29 08:34:48.78639013 +0000 UTC m=+0.090789136 container create 9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:34:48 np0005539550 podman[363557]: 2025-11-29 08:34:48.720348163 +0000 UTC m=+0.024747189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.820 257641 INFO nova.virt.block_device [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Attempting to driver detach volume d51465d5-c782-4ab5-86e5-16500d7ed93e from mountpoint /dev/vdb
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.831 257641 DEBUG nova.virt.libvirt.driver [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Attempting to detach device vdb from instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.831 257641 DEBUG nova.virt.libvirt.guest [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:48 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:34:48 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:  <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:34:48 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:34:48 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:34:48 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:34:48 np0005539550 nova_compute[257631]: 2025-11-29 08:34:48.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.211 257641 INFO nova.virt.libvirt.driver [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc from the persistent domain config.
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.212 257641 DEBUG nova.virt.libvirt.driver [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.212 257641 DEBUG nova.virt.libvirt.guest [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:34:49 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-d51465d5-c782-4ab5-86e5-16500d7ed93e">
Nov 29 03:34:49 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:  <serial>d51465d5-c782-4ab5-86e5-16500d7ed93e</serial>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:  <shareable/>
Nov 29 03:34:49 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:34:49 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:34:49 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:34:49 np0005539550 systemd[1]: Started libpod-conmon-9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b.scope.
Nov 29 03:34:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:34:49 np0005539550 podman[363557]: 2025-11-29 08:34:49.273663248 +0000 UTC m=+0.578062284 container init 9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:34:49 np0005539550 podman[363557]: 2025-11-29 08:34:49.281798755 +0000 UTC m=+0.586197761 container start 9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:34:49 np0005539550 podman[363557]: 2025-11-29 08:34:49.285053277 +0000 UTC m=+0.589452303 container attach 9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:34:49 np0005539550 infallible_boyd[363574]: 167 167
Nov 29 03:34:49 np0005539550 systemd[1]: libpod-9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b.scope: Deactivated successfully.
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.325 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405289.325085, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:34:49 np0005539550 podman[363579]: 2025-11-29 08:34:49.32769771 +0000 UTC m=+0.027983252 container died 9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.327 257641 DEBUG nova.virt.libvirt.driver [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.329 257641 INFO nova.virt.libvirt.driver [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully detached device vdb from instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc from the live domain config.
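Detaching from the live domain is asynchronous: Nova issues the detach and then blocks until libvirt delivers the DEVICE_REMOVED event for alias virtio-disk1, and the "(1/8)" marker above shows it will retry up to eight times. A minimal sketch of that wait using libvirt-python's event API; the URI, file name, and timeout are illustrative:

    import threading
    import libvirt

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open("qemu:///system")  # assumed URI
    dom = conn.lookupByUUIDString("f6eba09a-c0cf-4855-afd5-b265b2f2cadc")
    disk_xml = open("detach-disk.xml").read()  # the <disk> element logged above

    removed = threading.Event()

    def on_removed(conn, dom, dev_alias, opaque):
        # Fires with the device alias, e.g. "virtio-disk1" as in the log.
        if dev_alias == "virtio-disk1":
            removed.set()

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_removed, None)

    # Pump the libvirt event loop in the background so the callback can fire.
    def pump():
        while True:
            libvirt.virEventRunDefaultImpl()
    threading.Thread(target=pump, daemon=True).start()

    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    removed.wait(timeout=20)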
Nov 29 03:34:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a83b790e089eb6b2d0358c3ce46ec79b265bd97b930c70fd613cf50d8a764702-merged.mount: Deactivated successfully.
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.675 257641 DEBUG nova.objects.instance [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'flavor' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.711 257641 DEBUG oslo_concurrency.lockutils [None req-2de187e9-3f70-4bba-8551-0db72d662d40 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:34:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:49.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:49 np0005539550 nova_compute[257631]: 2025-11-29 08:34:49.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:34:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 647 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 112 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Nov 29 03:34:50 np0005539550 podman[363579]: 2025-11-29 08:34:50.541159079 +0000 UTC m=+1.241444601 container remove 9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_boyd, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:34:50 np0005539550 systemd[1]: libpod-conmon-9a7f53894d67434c6604e0ddd6989d58abb8a27705778c3d2fd9ce19c604196b.scope: Deactivated successfully.
Nov 29 03:34:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:34:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:50.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:34:50 np0005539550 podman[363603]: 2025-11-29 08:34:50.699822846 +0000 UTC m=+0.019676920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:50 np0005539550 podman[363603]: 2025-11-29 08:34:50.919648206 +0000 UTC m=+0.239502250 container create 23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:34:51 np0005539550 systemd[1]: Started libpod-conmon-23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0.scope.
Nov 29 03:34:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:34:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c5412b714e07f3e202f1831aa74572d5ed52b855ffbdbcd5b8e555f21fbfa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c5412b714e07f3e202f1831aa74572d5ed52b855ffbdbcd5b8e555f21fbfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c5412b714e07f3e202f1831aa74572d5ed52b855ffbdbcd5b8e555f21fbfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131c5412b714e07f3e202f1831aa74572d5ed52b855ffbdbcd5b8e555f21fbfa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:51 np0005539550 podman[363603]: 2025-11-29 08:34:51.564726829 +0000 UTC m=+0.884580883 container init 23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:34:51 np0005539550 podman[363603]: 2025-11-29 08:34:51.573585374 +0000 UTC m=+0.893439418 container start 23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:34:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:34:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:51.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:34:51 np0005539550 podman[363603]: 2025-11-29 08:34:51.885041939 +0000 UTC m=+1.204895983 container attach 23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:34:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 678 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 106 op/s
Nov 29 03:34:52 np0005539550 frosty_payne[363621]: {
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:    "0": [
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:        {
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "devices": [
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "/dev/loop3"
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            ],
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "lv_name": "ceph_lv0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "lv_size": "7511998464",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "name": "ceph_lv0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "tags": {
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.cluster_name": "ceph",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.crush_device_class": "",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.encrypted": "0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.osd_id": "0",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.type": "block",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:                "ceph.vdo": "0"
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            },
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "type": "block",
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:            "vg_name": "ceph_vg0"
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:        }
Nov 29 03:34:52 np0005539550 frosty_payne[363621]:    ]
Nov 29 03:34:52 np0005539550 frosty_payne[363621]: }
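The JSON printed by the short-lived frosty_payne container has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to its logical volumes and ceph.* LV tags (cephadm runs these probes in disposable ceph containers, which is why podman logs a create/start/die/remove cycle around each one). A sketch of parsing it; the file name is a hypothetical capture of that output:

    import json

    with open("lvm_list.json") as f:  # hypothetical capture of the JSON above
        report = json.load(f)
    for osd_id, lvs in report.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 5dd67027-4f06-4800-93bd-47ed1a74c5e6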
Nov 29 03:34:52 np0005539550 podman[363603]: 2025-11-29 08:34:52.339346711 +0000 UTC m=+1.659200755 container died 23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:34:52 np0005539550 systemd[1]: libpod-23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0.scope: Deactivated successfully.
Nov 29 03:34:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-131c5412b714e07f3e202f1831aa74572d5ed52b855ffbdbcd5b8e555f21fbfa-merged.mount: Deactivated successfully.
Nov 29 03:34:52 np0005539550 podman[363603]: 2025-11-29 08:34:52.48863393 +0000 UTC m=+1.808487974 container remove 23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:34:52 np0005539550 systemd[1]: libpod-conmon-23755d118f5f93632a22150101c3170404f64e6b1535e627497b22171984add0.scope: Deactivated successfully.
Nov 29 03:34:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:52.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:53 np0005539550 nova_compute[257631]: 2025-11-29 08:34:53.031 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:53 np0005539550 podman[363782]: 2025-11-29 08:34:53.096415206 +0000 UTC m=+0.038600190 container create 997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:53 np0005539550 systemd[1]: Started libpod-conmon-997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b.scope.
Nov 29 03:34:53 np0005539550 podman[363782]: 2025-11-29 08:34:53.080217635 +0000 UTC m=+0.022402639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:34:53 np0005539550 podman[363782]: 2025-11-29 08:34:53.202803587 +0000 UTC m=+0.144988581 container init 997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:34:53 np0005539550 podman[363782]: 2025-11-29 08:34:53.21041123 +0000 UTC m=+0.152596204 container start 997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:34:53 np0005539550 podman[363782]: 2025-11-29 08:34:53.213746534 +0000 UTC m=+0.155931508 container attach 997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:34:53 np0005539550 fervent_edison[363799]: 167 167
Nov 29 03:34:53 np0005539550 podman[363782]: 2025-11-29 08:34:53.215041117 +0000 UTC m=+0.157226081 container died 997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:34:53 np0005539550 systemd[1]: libpod-997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b.scope: Deactivated successfully.
Nov 29 03:34:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-55535ef6c0560bc22b9d9a9ea1b946336cce7a2813ee4494d9d1ec4f9c71439f-merged.mount: Deactivated successfully.
Nov 29 03:34:53 np0005539550 podman[363782]: 2025-11-29 08:34:53.263763044 +0000 UTC m=+0.205948018 container remove 997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:34:53 np0005539550 systemd[1]: libpod-conmon-997bd3bc410a6df45bf102354dd290767d1993a8f85b0aec8fd54d5a271de49b.scope: Deactivated successfully.
Nov 29 03:34:53 np0005539550 podman[363824]: 2025-11-29 08:34:53.441555147 +0000 UTC m=+0.041397782 container create 5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:34:53 np0005539550 systemd[1]: Started libpod-conmon-5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b.scope.
Nov 29 03:34:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:34:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62bcbf173a7da461ae46447852e8253c5a86758af4f56c8541a453c78f3db0d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62bcbf173a7da461ae46447852e8253c5a86758af4f56c8541a453c78f3db0d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62bcbf173a7da461ae46447852e8253c5a86758af4f56c8541a453c78f3db0d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62bcbf173a7da461ae46447852e8253c5a86758af4f56c8541a453c78f3db0d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:34:53 np0005539550 podman[363824]: 2025-11-29 08:34:53.424020832 +0000 UTC m=+0.023863487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:34:53 np0005539550 podman[363824]: 2025-11-29 08:34:53.520042679 +0000 UTC m=+0.119885314 container init 5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:34:53 np0005539550 podman[363824]: 2025-11-29 08:34:53.528317029 +0000 UTC m=+0.128159664 container start 5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:34:53 np0005539550 podman[363824]: 2025-11-29 08:34:53.532446834 +0000 UTC m=+0.132289469 container attach 5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:34:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:53 np0005539550 nova_compute[257631]: 2025-11-29 08:34:53.905 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:53 np0005539550 nova_compute[257631]: 2025-11-29 08:34:53.939 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 693 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 87 op/s
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]: {
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]:        "osd_id": 0,
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]:        "type": "bluestore"
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]:    }
Nov 29 03:34:54 np0005539550 bold_aryabhata[363840]: }
Nov 29 03:34:54 np0005539550 systemd[1]: libpod-5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b.scope: Deactivated successfully.
Nov 29 03:34:54 np0005539550 podman[363824]: 2025-11-29 08:34:54.385100605 +0000 UTC m=+0.984943240 container died 5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:34:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-62bcbf173a7da461ae46447852e8253c5a86758af4f56c8541a453c78f3db0d7-merged.mount: Deactivated successfully.
Nov 29 03:34:54 np0005539550 podman[363824]: 2025-11-29 08:34:54.542778877 +0000 UTC m=+1.142621512 container remove 5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:34:54 np0005539550 systemd[1]: libpod-conmon-5676cc529a847f1c9060b3f442c0a5cf3c2307df048fe9ed3dfa9e2e82361e8b.scope: Deactivated successfully.
Nov 29 03:34:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:34:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:54.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:34:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e0761f8c-505c-4448-b3d3-d46f19b61ecc does not exist
Nov 29 03:34:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f9664ac0-14a3-4b6d-9f13-8a38965dce5f does not exist
Nov 29 03:34:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fb8598a2-3786-4a3f-9aec-758dc4c8e391 does not exist
Nov 29 03:34:55 np0005539550 ceph-osd[84753]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:34:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:34:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 693 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Nov 29 03:34:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:56.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:57.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:57 np0005539550 nova_compute[257631]: 2025-11-29 08:34:57.913 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:34:58 np0005539550 nova_compute[257631]: 2025-11-29 08:34:58.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 695 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.9 MiB/s wr, 168 op/s
Nov 29 03:34:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:34:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:34:58 np0005539550 podman[363926]: 2025-11-29 08:34:58.360124997 +0000 UTC m=+0.094978292 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:34:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:34:58.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:34:58 np0005539550 nova_compute[257631]: 2025-11-29 08:34:58.908 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:34:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:34:59
Nov 29 03:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images', 'volumes']
Nov 29 03:34:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:34:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:34:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:34:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:34:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 695 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.9 MiB/s wr, 164 op/s
Nov 29 03:35:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:00.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 727 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.2 MiB/s wr, 218 op/s
Nov 29 03:35:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:02.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:03 np0005539550 nova_compute[257631]: 2025-11-29 08:35:03.035 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:03 np0005539550 nova_compute[257631]: 2025-11-29 08:35:03.910 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.063 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.064 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.085 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.157 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.158 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.166 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.166 257641 INFO nova.compute.claims [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:35:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 744 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 3.1 MiB/s wr, 161 op/s
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.315 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:35:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:04.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197922213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.752 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.758 257641 DEBUG nova.compute.provider_tree [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.773 257641 DEBUG nova.scheduler.client.report [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.796 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.797 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.849 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.850 257641 DEBUG nova.network.neutron [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.874 257641 INFO nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.896 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.946 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 03:35:04 np0005539550 nova_compute[257631]: 2025-11-29 08:35:04.960 257641 INFO nova.virt.block_device [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Booting with volume 9b4f6e7c-11e4-4465-94bc-84c3365ec3b9 at /dev/vda
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.066 257641 DEBUG nova.policy [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c5b0953fb7cc415fb26cf4ffdd5908c6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.096 257641 DEBUG os_brick.utils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.098 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.109 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.109 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[cb15fb3a-58d9-4e14-ba29-0ae1593359d3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.110 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.119 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.120 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c59d1b-193e-4fed-b5d6-0644b765e589]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.121 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.128 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.129 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[9b698bce-9714-4942-9761-5a75f1bbff17]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.130 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[50566c0c-2722-466c-b2d2-868ba8fd3a71]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.130 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.157 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.160 257641 DEBUG os_brick.initiator.connectors.lightos [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.160 257641 DEBUG os_brick.initiator.connectors.lightos [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.160 257641 DEBUG os_brick.initiator.connectors.lightos [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.160 257641 DEBUG os_brick.utils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] <== get_connector_properties: return (63ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:35:05 np0005539550 nova_compute[257631]: 2025-11-29 08:35:05.161 257641 DEBUG nova.virt.block_device [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Updating existing volume attachment record: 74a6cac2-6e82-4f8d-982b-70fe8a55fe6a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:35:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.102 257641 DEBUG nova.network.neutron [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Successfully created port: 2d4aa035-2158-4404-80f0-9a1b2897399c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.216 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.218 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.218 257641 INFO nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Creating image(s)
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.219 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.219 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Ensure instance console log exists: /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.220 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.220 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:35:06 np0005539550 nova_compute[257631]: 2025-11-29 08:35:06.220 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:35:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 773 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 193 op/s
Nov 29 03:35:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:06.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.340 257641 DEBUG nova.network.neutron [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Successfully updated port: 2d4aa035-2158-4404-80f0-9a1b2897399c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.364 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "refresh_cache-8aee8cc7-7ff1-451b-88f4-7804819fc2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.365 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquired lock "refresh_cache-8aee8cc7-7ff1-451b-88f4-7804819fc2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.365 257641 DEBUG nova.network.neutron [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.446 257641 DEBUG nova.compute.manager [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-changed-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.447 257641 DEBUG nova.compute.manager [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Refreshing instance network info cache due to event network-changed-2d4aa035-2158-4404-80f0-9a1b2897399c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.447 257641 DEBUG oslo_concurrency.lockutils [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8aee8cc7-7ff1-451b-88f4-7804819fc2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:35:07 np0005539550 nova_compute[257631]: 2025-11-29 08:35:07.527 257641 DEBUG nova.network.neutron [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:35:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:07.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.037 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 777 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 163 op/s
Nov 29 03:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:35:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.536 257641 DEBUG nova.network.neutron [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Updating instance_info_cache with network_info: [{"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.556 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Releasing lock "refresh_cache-8aee8cc7-7ff1-451b-88f4-7804819fc2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.556 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Instance network_info: |[{"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.557 257641 DEBUG oslo_concurrency.lockutils [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8aee8cc7-7ff1-451b-88f4-7804819fc2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.557 257641 DEBUG nova.network.neutron [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Refreshing network info cache for port 2d4aa035-2158-4404-80f0-9a1b2897399c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.561 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Start _get_guest_xml network_info=[{"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '74a6cac2-6e82-4f8d-982b-70fe8a55fe6a', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9b4f6e7c-11e4-4465-94bc-84c3365ec3b9', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9b4f6e7c-11e4-4465-94bc-84c3365ec3b9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8aee8cc7-7ff1-451b-88f4-7804819fc2b2', 'attached_at': '', 'detached_at': '', 'volume_id': '9b4f6e7c-11e4-4465-94bc-84c3365ec3b9', 'serial': '9b4f6e7c-11e4-4465-94bc-84c3365ec3b9', 'multiattach': True}, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.565 257641 WARNING nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.569 257641 DEBUG nova.virt.libvirt.host [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.570 257641 DEBUG nova.virt.libvirt.host [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.576 257641 DEBUG nova.virt.libvirt.host [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.577 257641 DEBUG nova.virt.libvirt.host [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.578 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.578 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.579 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.579 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.579 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.579 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.579 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.580 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.580 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.580 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.580 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.580 257641 DEBUG nova.virt.hardware [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.606 257641 DEBUG nova.storage.rbd_utils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8aee8cc7-7ff1-451b-88f4-7804819fc2b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.610 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:35:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:08 np0005539550 nova_compute[257631]: 2025-11-29 08:35:08.912 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:35:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2782328329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.020 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
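rbd_utils discovers the Ceph monitors by shelling out to the `ceph mon dump --format=json` command logged above; the addresses it returns are the ones that later appear as <host> elements in the guest XML. A hedged sketch of that discovery step; the JSON field names ("mons", "public_addr") are assumptions based on common Ceph output, not something shown in the log:

    # Sketch: run the logged command and extract (host, port) pairs.
    import json
    import subprocess

    def monitor_addresses(user="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf])
        addrs = []
        for mon in json.loads(out).get("mons", []):  # field names assumed
            addr = mon["public_addr"].split("/")[0]  # "192.168.122.100:6789/0" -> "192.168.122.100:6789"
            host, _, port = addr.rpartition(":")
            addrs.append((host, port))
        return addrs  # feeds the <host name=... port=.../> elements below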
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.057 257641 DEBUG nova.virt.libvirt.vif [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:35:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1414722667',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1414722667',id=174,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-0l0sx287',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:04Z,user_data=None,user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8aee8cc7-7ff1-451b-88f4-7804819fc2b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.058 257641 DEBUG nova.network.os_vif_util [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.059 257641 DEBUG nova.network.os_vif_util [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:78:14,bridge_name='br-int',has_traffic_filtering=True,id=2d4aa035-2158-4404-80f0-9a1b2897399c,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d4aa035-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.062 257641 DEBUG nova.objects.instance [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8aee8cc7-7ff1-451b-88f4-7804819fc2b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:35:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:35:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:35:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:35:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.091 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <uuid>8aee8cc7-7ff1-451b-88f4-7804819fc2b2</uuid>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <name>instance-000000ae</name>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-1414722667</nova:name>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:35:08</nova:creationTime>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:user uuid="c5b0953fb7cc415fb26cf4ffdd5908c6">tempest-AttachVolumeMultiAttachTest-573425942-project-member</nova:user>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:project uuid="d4f6db81949d487b853d7567f8a2e6d4">tempest-AttachVolumeMultiAttachTest-573425942</nova:project>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <nova:port uuid="2d4aa035-2158-4404-80f0-9a1b2897399c">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <entry name="serial">8aee8cc7-7ff1-451b-88f4-7804819fc2b2</entry>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <entry name="uuid">8aee8cc7-7ff1-451b-88f4-7804819fc2b2</entry>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8aee8cc7-7ff1-451b-88f4-7804819fc2b2_disk.config">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-9b4f6e7c-11e4-4465-94bc-84c3365ec3b9">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <serial>9b4f6e7c-11e4-4465-94bc-84c3365ec3b9</serial>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <shareable/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:94:78:14"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <target dev="tap2d4aa035-21"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/console.log" append="off"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:35:09 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:35:09 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:35:09 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:35:09 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
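Both disks in the XML above point at the same three monitors and the same Ceph auth secret. A sketch of how such a <disk type="network"> element can be assembled with the standard library, using values copied from the log; this illustrates the shape of the element, not how nova.virt.libvirt actually renders its config objects:

    # Sketch: build the rbd <disk> element with xml.etree.ElementTree.
    import xml.etree.ElementTree as ET

    def rbd_disk(image, dev, bus, secret_uuid, monitors):
        disk = ET.Element("disk", type="network", device="disk")
        ET.SubElement(disk, "driver", name="qemu", type="raw", cache="none")
        source = ET.SubElement(disk, "source", protocol="rbd", name=image)
        for host, port in monitors:
            ET.SubElement(source, "host", name=host, port=port)
        auth = ET.SubElement(disk, "auth", username="openstack")
        ET.SubElement(auth, "secret", type="ceph", uuid=secret_uuid)
        ET.SubElement(disk, "target", dev=dev, bus=bus)
        return disk

    mons = [("192.168.122.100", "6789"), ("192.168.122.102", "6789"), ("192.168.122.101", "6789")]
    print(ET.tostring(rbd_disk("volumes/volume-9b4f6e7c-11e4-4465-94bc-84c3365ec3b9",
                               "vda", "virtio", "b66774a7-56d9-5535-bd8c-681234404870",
                               mons), encoding="unicode"))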
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.092 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Preparing to wait for external event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.092 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.093 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.093 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.094 257641 DEBUG nova.virt.libvirt.vif [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:35:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1414722667',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1414722667',id=174,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-0l0sx287',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:04Z,user_data=None,user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8aee8cc7-7ff1-451b-88f4-7804819fc2b2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.094 257641 DEBUG nova.network.os_vif_util [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.095 257641 DEBUG nova.network.os_vif_util [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:78:14,bridge_name='br-int',has_traffic_filtering=True,id=2d4aa035-2158-4404-80f0-9a1b2897399c,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d4aa035-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.096 257641 DEBUG os_vif [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:78:14,bridge_name='br-int',has_traffic_filtering=True,id=2d4aa035-2158-4404-80f0-9a1b2897399c,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d4aa035-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.097 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.097 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.097 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.102 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d4aa035-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.103 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d4aa035-21, col_values=(('external_ids', {'iface-id': '2d4aa035-2158-4404-80f0-9a1b2897399c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:78:14', 'vm-uuid': '8aee8cc7-7ff1-451b-88f4-7804819fc2b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.104 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539550 NetworkManager[49039]: <info>  [1764405309.1056] manager: (tap2d4aa035-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.110 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.111 257641 INFO os_vif [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:78:14,bridge_name='br-int',has_traffic_filtering=True,id=2d4aa035-2158-4404-80f0-9a1b2897399c,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d4aa035-21')#033[00m
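The plug recorded above is three OVSDB operations: ensure br-int exists, add the tap port, and tag the Interface with external_ids that bind it to the Neutron port. An equivalent hedged sketch using the ovs-vsctl CLI instead of ovsdbapp, with the keys and values copied from the transaction log:

    # Sketch: the AddBridgeCommand / AddPortCommand / DbSetCommand steps as CLI calls.
    import subprocess

    def plug_ovs_port(bridge, port, iface_id, mac, vm_uuid):
        subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", bridge])
        subprocess.check_call(
            ["ovs-vsctl", "--may-exist", "add-port", bridge, port,
             "--", "set", "Interface", port,
             "external_ids:iface-id=" + iface_id,
             "external_ids:iface-status=active",
             "external_ids:attached-mac=" + mac,
             "external_ids:vm-uuid=" + vm_uuid])

    plug_ovs_port("br-int", "tap2d4aa035-21",
                  "2d4aa035-2158-4404-80f0-9a1b2897399c",
                  "fa:16:3e:94:78:14",
                  "8aee8cc7-7ff1-451b-88f4-7804819fc2b2")

Setting iface-id is what lets ovn-controller match the OVS interface to its Port_Binding row and claim the lport, as the ovn_controller lines show a second later.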
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.162 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.163 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.164 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] No VIF found with MAC fa:16:3e:94:78:14, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.166 257641 INFO nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Using config drive#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.197 257641 DEBUG nova.storage.rbd_utils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8aee8cc7-7ff1-451b-88f4-7804819fc2b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.637 257641 INFO nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Creating config drive at /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/disk.config#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.643 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpavoly0uj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.785 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpavoly0uj" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
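The config drive is a plain ISO 9660 image built from a temporary directory. A hedged reconstruction of the mkisofs invocation above as an argv list, which makes the quoting explicit (oslo logs argv joined by spaces, which is why the multi-word -publisher value looks unquoted in the log):

    # Sketch: rebuild the logged mkisofs call.
    import subprocess

    def build_config_drive(iso_path, src_dir, publisher):
        subprocess.check_call(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher,  # one argv element despite the spaces
             "-quiet", "-J", "-r",
             "-V", "config-2",  # the volume label cloud-init probes for
             src_dir])

    # The logged call used src_dir /tmp/tmpavoly0uj and publisher
    # "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9".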
Nov 29 03:35:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:09.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.821 257641 DEBUG nova.storage.rbd_utils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] rbd image 8aee8cc7-7ff1-451b-88f4-7804819fc2b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:09 np0005539550 nova_compute[257631]: 2025-11-29 08:35:09.826 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/disk.config 8aee8cc7-7ff1-451b-88f4-7804819fc2b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.035 257641 DEBUG oslo_concurrency.processutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/disk.config 8aee8cc7-7ff1-451b-88f4-7804819fc2b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.036 257641 INFO nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Deleting local config drive /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2/disk.config because it was imported into RBD.#033[00m
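Because this host stores instance disks in RBD, the ISO is not attached from local disk: it is imported into the vms pool as <uuid>_disk.config, matching the cdrom <source protocol="rbd"> element in the XML above, and the local copy is then deleted. A hedged sketch of that import-then-delete step, using the commands from the log:

    # Sketch: push the ISO into the "vms" pool, then drop the local file,
    # since the guest will read it back over RBD.
    import os
    import subprocess

    def import_config_drive(local_iso, image_name, pool="vms",
                            user="openstack", conf="/etc/ceph/ceph.conf"):
        subprocess.check_call(
            ["rbd", "import", "--pool", pool, local_iso, image_name,
             "--image-format=2", "--id", user, "--conf", conf])
        os.unlink(local_iso)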
Nov 29 03:35:10 np0005539550 kernel: tap2d4aa035-21: entered promiscuous mode
Nov 29 03:35:10 np0005539550 NetworkManager[49039]: <info>  [1764405310.0945] manager: (tap2d4aa035-21): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Nov 29 03:35:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:10Z|00778|binding|INFO|Claiming lport 2d4aa035-2158-4404-80f0-9a1b2897399c for this chassis.
Nov 29 03:35:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:10Z|00779|binding|INFO|2d4aa035-2158-4404-80f0-9a1b2897399c: Claiming fa:16:3e:94:78:14 10.100.0.10
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.096 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.103 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:78:14 10.100.0.10'], port_security=['fa:16:3e:94:78:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8aee8cc7-7ff1-451b-88f4-7804819fc2b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f152d713-a80c-4ab4-9e52-56ad227c55aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2d4aa035-2158-4404-80f0-9a1b2897399c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.104 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2d4aa035-2158-4404-80f0-9a1b2897399c in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 bound to our chassis#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.106 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:35:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:10Z|00780|binding|INFO|Setting lport 2d4aa035-2158-4404-80f0-9a1b2897399c ovn-installed in OVS
Nov 29 03:35:10 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:10Z|00781|binding|INFO|Setting lport 2d4aa035-2158-4404-80f0-9a1b2897399c up in Southbound
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.117 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.120 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.123 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[793e870c-0a8c-4e7a-8462-f3b75d041f41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:10 np0005539550 systemd-udevd[364154]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:35:10 np0005539550 systemd-machined[216673]: New machine qemu-94-instance-000000ae.
Nov 29 03:35:10 np0005539550 NetworkManager[49039]: <info>  [1764405310.1445] device (tap2d4aa035-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:35:10 np0005539550 NetworkManager[49039]: <info>  [1764405310.1458] device (tap2d4aa035-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:35:10 np0005539550 systemd[1]: Started Virtual Machine qemu-94-instance-000000ae.
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.154 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca102aa-c064-4a11-b4b3-c281a533ba75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.158 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8f4d14e8-0275-4494-a3a7-e489cd5de091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.184 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[725457bb-63d9-4fd1-8412-1e46d793bf8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.204 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef1b170-4caa-4f7a-87b3-200b36796ff3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 27254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364162, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.218 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2463e1e0-2eaf-4708-8b28-99ac9f58fefb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364166, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364166, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.219 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.221 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.222 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.223 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.223 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.223 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:10 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:10.224 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
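The privsep netlink replies above (RTM_NEWLINK/RTM_NEWADDR for taped50ff83-51) show what "Provisioning metadata" amounts to: a veth inside the ovnmeta-<network> namespace carrying the metadata address 169.254.169.254/32 plus an in-subnet address 10.100.0.2/28. The agent drives this through pyroute2 under oslo.privsep; the ip(8) commands below are an equivalent hedged illustration, not the agent's own code:

    # Sketch: the namespace plumbing implied by the RTM_NEWADDR reply above.
    import subprocess

    NS = "ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6"
    DEV = "taped50ff83-51"

    for cmd in (["ip", "netns", "exec", NS, "ip", "addr", "add", "169.254.169.254/32", "dev", DEV],
                ["ip", "netns", "exec", NS, "ip", "addr", "add", "10.100.0.2/28", "dev", DEV],
                ["ip", "netns", "exec", NS, "ip", "link", "set", DEV, "up"]):
        subprocess.check_call(cmd)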
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.241 257641 DEBUG nova.network.neutron [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Updated VIF entry in instance network info cache for port 2d4aa035-2158-4404-80f0-9a1b2897399c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.242 257641 DEBUG nova.network.neutron [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Updating instance_info_cache with network_info: [{"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 777 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.3 MiB/s wr, 116 op/s
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.260 257641 DEBUG oslo_concurrency.lockutils [req-b47c204a-7cac-41e5-87ef-04555a1965c1 req-b608ac2a-4091-42b1-beab-ea2113d197ed 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8aee8cc7-7ff1-451b-88f4-7804819fc2b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.341 257641 DEBUG nova.compute.manager [req-cc6335b6-3d77-4016-866c-3956c63bdacd req-509219f8-c290-48aa-96ca-82fb1b7fa394 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.341 257641 DEBUG oslo_concurrency.lockutils [req-cc6335b6-3d77-4016-866c-3956c63bdacd req-509219f8-c290-48aa-96ca-82fb1b7fa394 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.342 257641 DEBUG oslo_concurrency.lockutils [req-cc6335b6-3d77-4016-866c-3956c63bdacd req-509219f8-c290-48aa-96ca-82fb1b7fa394 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.342 257641 DEBUG oslo_concurrency.lockutils [req-cc6335b6-3d77-4016-866c-3956c63bdacd req-509219f8-c290-48aa-96ca-82fb1b7fa394 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.342 257641 DEBUG nova.compute.manager [req-cc6335b6-3d77-4016-866c-3956c63bdacd req-509219f8-c290-48aa-96ca-82fb1b7fa394 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Processing event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.517 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405310.516629, 8aee8cc7-7ff1-451b-88f4-7804819fc2b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.517 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.520 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.523 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.525 257641 INFO nova.virt.libvirt.driver [-] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Instance spawned successfully.#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.526 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.561 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.566 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.570 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.570 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.570 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.571 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.571 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.571 257641 DEBUG nova.virt.libvirt.driver [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
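The six "Found default for ..." lines record the hardware choices libvirt made at first boot, so later operations (reboot, rebuild, migration) keep an identical guest. A hedged sketch of the back-fill, with values and system_metadata keys taken from this log (the resulting image_hw_* keys reappear in the instance dump at 08:35:14.949 below):

    DEFAULTS = {'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio',
                'hw_input_bus': 'usb', 'hw_pointer_model': 'usbtablet',
                'hw_video_model': 'virtio', 'hw_vif_model': 'virtio'}

    def register_undefined_instance_details(system_metadata):
        for prop, chosen in DEFAULTS.items():
            # Only record a default when the image did not set the
            # property itself; existing keys win.
            system_metadata.setdefault('image_' + prop, chosen)
        return system_metadata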
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.599 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.599 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405310.5188768, 8aee8cc7-7ff1-451b-88f4-7804819fc2b2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.599 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.631 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.634 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405310.5222335, 8aee8cc7-7ff1-451b-88f4-7804819fc2b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.635 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.645 257641 INFO nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Took 4.43 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.645 257641 DEBUG nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.656 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.659 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.680 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
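The sync above compares the database power state (0, NOSTATE) against the hypervisor's (1, RUNNING) but defers while task_state is set, because the in-flight spawn owns the instance and will write the final state itself. The guard, sketched with the constant values from nova.compute.power_state:

    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, task_state, vm_power_state):
        if task_state is not None:
            # e.g. 'spawning': racing the active operation could trigger
            # spurious stops, so skip, as logged above.
            return db_power_state
        return vm_power_state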
Nov 29 03:35:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
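The recurring radosgw triplets ("HEAD / HTTP/1.0" from 192.168.122.100 or .102 roughly once per second, user anonymous) look like load-balancer health probes. The beast line carries client, auth user, timestamp, request, status, body bytes and server-side latency; a hedged regex for pulling those fields out, shaped to the lines in this log (other rgw versions may format fields differently):

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d{3}) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:35:10.695 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), float(m.group('latency')))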
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.703 257641 INFO nova.compute.manager [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Took 6.57 seconds to build instance.#033[00m
Nov 29 03:35:10 np0005539550 nova_compute[257631]: 2025-11-29 08:35:10.717 257641 DEBUG oslo_concurrency.lockutils [None req-58cc4d61-6729-412e-818f-b496c9d6b022 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:11.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.0 MiB/s wr, 187 op/s
Nov 29 03:35:12 np0005539550 nova_compute[257631]: 2025-11-29 08:35:12.436 257641 DEBUG nova.compute.manager [req-9917252a-66d6-4e2c-b5e6-8296d5b9bbe3 req-fedff084-d6b7-4284-bf8b-6034bd00a12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:12 np0005539550 nova_compute[257631]: 2025-11-29 08:35:12.436 257641 DEBUG oslo_concurrency.lockutils [req-9917252a-66d6-4e2c-b5e6-8296d5b9bbe3 req-fedff084-d6b7-4284-bf8b-6034bd00a12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:12 np0005539550 nova_compute[257631]: 2025-11-29 08:35:12.436 257641 DEBUG oslo_concurrency.lockutils [req-9917252a-66d6-4e2c-b5e6-8296d5b9bbe3 req-fedff084-d6b7-4284-bf8b-6034bd00a12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:12 np0005539550 nova_compute[257631]: 2025-11-29 08:35:12.437 257641 DEBUG oslo_concurrency.lockutils [req-9917252a-66d6-4e2c-b5e6-8296d5b9bbe3 req-fedff084-d6b7-4284-bf8b-6034bd00a12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:12 np0005539550 nova_compute[257631]: 2025-11-29 08:35:12.437 257641 DEBUG nova.compute.manager [req-9917252a-66d6-4e2c-b5e6-8296d5b9bbe3 req-fedff084-d6b7-4284-bf8b-6034bd00a12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] No waiting events found dispatching network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:12 np0005539550 nova_compute[257631]: 2025-11-29 08:35:12.437 257641 WARNING nova.compute.manager [req-9917252a-66d6-4e2c-b5e6-8296d5b9bbe3 req-fedff084-d6b7-4284-bf8b-6034bd00a12e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received unexpected event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c for instance with vm_state active and task_state None.#033[00m
Nov 29 03:35:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:12.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Nov 29 03:35:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Nov 29 03:35:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Nov 29 03:35:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:13.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:13 np0005539550 nova_compute[257631]: 2025-11-29 08:35:13.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.105 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 806 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.7 MiB/s wr, 174 op/s
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.683 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.683 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.684 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.685 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.685 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.687 257641 INFO nova.compute.manager [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Terminating instance#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.689 257641 DEBUG nova.compute.manager [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:35:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:14.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:14 np0005539550 kernel: tap2d4aa035-21 (unregistering): left promiscuous mode
Nov 29 03:35:14 np0005539550 NetworkManager[49039]: <info>  [1764405314.7389] device (tap2d4aa035-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00782|binding|INFO|Releasing lport 2d4aa035-2158-4404-80f0-9a1b2897399c from this chassis (sb_readonly=0)
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00783|binding|INFO|Setting lport 2d4aa035-2158-4404-80f0-9a1b2897399c down in Southbound
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00784|binding|INFO|Removing iface tap2d4aa035-21 ovn-installed in OVS
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.837 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:14.844 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:78:14 10.100.0.10'], port_security=['fa:16:3e:94:78:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8aee8cc7-7ff1-451b-88f4-7804819fc2b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f152d713-a80c-4ab4-9e52-56ad227c55aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2d4aa035-2158-4404-80f0-9a1b2897399c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:14.847 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2d4aa035-2158-4404-80f0-9a1b2897399c in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:14.849 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
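The "Matched UPDATE: PortBindingUpdatedEvent(...)" dump above is ovsdbapp's row-event machinery firing because the Port_Binding row's chassis column changed (old=Port_Binding(up=[True], chassis=[...])), which is what tells the metadata agent to reconcile the datapath. A hedged sketch of such an event class; neutron's real one is PortBindingUpdatedEvent in neutron.agent.ovn.metadata.agent, and provision_datapath here stands in for the agent's handler:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def matches(self, event, row, old=None):
            # Only a bind/unbind is interesting, i.e. the old row image
            # carries a 'chassis' value that has since changed.
            return (super().matches(event, row, old)
                    and old is not None and hasattr(old, 'chassis'))

        def run(self, event, row, old):
            # Unbound from this chassis: re-provision metadata for the
            # datapath, as in the two INFO lines above.
            self.agent.provision_datapath(row.datapath)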
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.854 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ae.scope: Deactivated successfully.
Nov 29 03:35:14 np0005539550 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000ae.scope: Consumed 4.700s CPU time.
Nov 29 03:35:14 np0005539550 systemd-machined[216673]: Machine qemu-94-instance-000000ae terminated.
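The scope name in the two systemd lines above is the machined line's machine name with systemd's C-style unit escaping applied: a '-' inside a name component is written as \x2d. A small decoder, equivalent to `systemd-escape --unescape`, written in Python to match the other sketches:

    import re

    def systemd_unescape(name):
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r'machine-qemu\x2d94\x2dinstance\x2d000000ae.scope'))
    # machine-qemu-94-instance-000000ae.scope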
Nov 29 03:35:14 np0005539550 kernel: tap2d4aa035-21: entered promiscuous mode
Nov 29 03:35:14 np0005539550 NetworkManager[49039]: <info>  [1764405314.9123] manager: (tap2d4aa035-21): new Tun device (/org/freedesktop/NetworkManager/Devices/346)
Nov 29 03:35:14 np0005539550 kernel: tap2d4aa035-21 (unregistering): left promiscuous mode
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00785|binding|INFO|Claiming lport 2d4aa035-2158-4404-80f0-9a1b2897399c for this chassis.
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00786|binding|INFO|2d4aa035-2158-4404-80f0-9a1b2897399c: Claiming fa:16:3e:94:78:14 10.100.0.10
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.916 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:14.926 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:78:14 10.100.0.10'], port_security=['fa:16:3e:94:78:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8aee8cc7-7ff1-451b-88f4-7804819fc2b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f152d713-a80c-4ab4-9e52-56ad227c55aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2d4aa035-2158-4404-80f0-9a1b2897399c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.930 257641 INFO nova.virt.libvirt.driver [-] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Instance destroyed successfully.#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.931 257641 DEBUG nova.objects.instance [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid 8aee8cc7-7ff1-451b-88f4-7804819fc2b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00787|binding|INFO|Setting lport 2d4aa035-2158-4404-80f0-9a1b2897399c ovn-installed in OVS
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00788|binding|INFO|Setting lport 2d4aa035-2158-4404-80f0-9a1b2897399c up in Southbound
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00789|binding|INFO|Releasing lport 2d4aa035-2158-4404-80f0-9a1b2897399c from this chassis (sb_readonly=1)
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00790|if_status|INFO|Dropped 2 log messages in last 763 seconds (most recently, 763 seconds ago) due to excessive rate
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00791|if_status|INFO|Not setting lport 2d4aa035-2158-4404-80f0-9a1b2897399c down as sb is readonly
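The release at 08:35:14 runs twice because ovn-controller sometimes iterates with a read-only Southbound transaction: under sb_readonly=1 it can only note the intent ("Not setting lport ... down as sb is readonly"), and the next main-loop pass with a writable transaction (sb_readonly=0, a few lines below) performs the write. An illustrative shape of that defer-and-retry, in Python rather than ovn-controller's C; the names readonly and set_lport_down are placeholders, not a real API:

    def release_lport(sb_txn, lport):
        if sb_txn.readonly:
            # Record nothing; the binding module runs again on the next
            # iteration once a writable transaction is available.
            return False
        sb_txn.set_lport_down(lport)
        return True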
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.939 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00792|binding|INFO|Removing iface tap2d4aa035-21 ovn-installed in OVS
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.942 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00793|binding|INFO|Releasing lport 2d4aa035-2158-4404-80f0-9a1b2897399c from this chassis (sb_readonly=0)
Nov 29 03:35:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:14Z|00794|binding|INFO|Setting lport 2d4aa035-2158-4404-80f0-9a1b2897399c down in Southbound
Nov 29 03:35:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:14.951 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:78:14 10.100.0.10'], port_security=['fa:16:3e:94:78:14 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8aee8cc7-7ff1-451b-88f4-7804819fc2b2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f152d713-a80c-4ab4-9e52-56ad227c55aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2d4aa035-2158-4404-80f0-9a1b2897399c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.949 257641 DEBUG nova.virt.libvirt.vif [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-1414722667',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-1414722667',id=174,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:35:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-0l0sx287',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:35:10Z,user_data=None,user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8aee8cc7-7ff1-451b-88f4-7804819fc2b2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.949 257641 DEBUG nova.network.os_vif_util [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "2d4aa035-2158-4404-80f0-9a1b2897399c", "address": "fa:16:3e:94:78:14", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d4aa035-21", "ovs_interfaceid": "2d4aa035-2158-4404-80f0-9a1b2897399c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.950 257641 DEBUG nova.network.os_vif_util [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:78:14,bridge_name='br-int',has_traffic_filtering=True,id=2d4aa035-2158-4404-80f0-9a1b2897399c,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d4aa035-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.951 257641 DEBUG os_vif [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:78:14,bridge_name='br-int',has_traffic_filtering=True,id=2d4aa035-2158-4404-80f0-9a1b2897399c,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d4aa035-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
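The unplug operates on os-vif's versioned objects: the converter above turns nova's VIF dict into the VIFOpenVSwitch shown in the "Converted object" line. The same object can be built directly; the field names below are exactly those in the logged repr, the rest is a sketch:

    import os_vif
    from os_vif.objects import network, vif

    os_vif.initialize()  # load the os-vif plugins (ovs, linux_bridge, ...)
    net = network.Network(id='ed50ff83-51d1-4b35-b85c-1cbe6fb812c6',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(
        id='2d4aa035-2158-4404-80f0-9a1b2897399c',
        address='fa:16:3e:94:78:14',
        bridge_name='br-int',
        vif_name='tap2d4aa035-21',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=net)
    # os_vif.unplug(v, instance_info) then drives the ovs plugin; the
    # instance_info argument (an os-vif InstanceInfo) is elided here.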
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.953 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.953 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d4aa035-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.956 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.957 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:14 np0005539550 nova_compute[257631]: 2025-11-29 08:35:14.960 257641 INFO os_vif [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:78:14,bridge_name='br-int',has_traffic_filtering=True,id=2d4aa035-2158-4404-80f0-9a1b2897399c,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d4aa035-21')#033[00m
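Beneath os-vif, the whole unplug is the single idempotent OVSDB write logged at 08:35:14.953: a DelPortCommand with if_exists=True. A standalone equivalent using ovsdbapp's Open_vSwitch API, assuming the default local OVSDB socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes this a no-op when the port is already gone,
    # so a retried delete cannot fail.
    ovs.del_port('tap2d4aa035-21', bridge='br-int',
                 if_exists=True).execute(check_error=True)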
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.014 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad69a57-8bc5-4361-ae8d-cf73cc9b3ea0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.042 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[805eb39f-2c3e-4a9e-970b-ce3f25ebf267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.045 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c0174621-f3a3-4e89-9c64-9f127915c5b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.069 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[68d7127f-2a94-4661-8af5-4710be14b6e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.092 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ebdbb50f-ee22-40ae-b05b-065f4ebe5aaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 27254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364233, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.109 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aeef9bd3-76ca-46c6-a5d3-509d2802aeb5]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364234, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364234, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.110 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.112 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.114 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.114 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.114 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.114 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.115 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.116 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2d4aa035-2158-4404-80f0-9a1b2897399c in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.117 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.132 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7cee4ede-51b5-40e1-aafd-9ffd189c30a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.163 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d3630f33-eaf5-4da1-b2ec-f7d70e1ececa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.166 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[99e7bfae-f103-4dc5-9950-f6f4ef3ba01e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.196 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c12bcb03-d2aa-4355-ad14-63a93766e8e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.212 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8630d8b4-68bb-4dec-b64d-c57d2d64cc69]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 27254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364255, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.220 257641 DEBUG nova.compute.manager [req-677e9d11-e04d-49bd-b581-ae682b3e5ba0 req-f2dc69ef-72b2-412d-a2b9-500419bce7e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-unplugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.220 257641 DEBUG oslo_concurrency.lockutils [req-677e9d11-e04d-49bd-b581-ae682b3e5ba0 req-f2dc69ef-72b2-412d-a2b9-500419bce7e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.221 257641 DEBUG oslo_concurrency.lockutils [req-677e9d11-e04d-49bd-b581-ae682b3e5ba0 req-f2dc69ef-72b2-412d-a2b9-500419bce7e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.221 257641 DEBUG oslo_concurrency.lockutils [req-677e9d11-e04d-49bd-b581-ae682b3e5ba0 req-f2dc69ef-72b2-412d-a2b9-500419bce7e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.221 257641 DEBUG nova.compute.manager [req-677e9d11-e04d-49bd-b581-ae682b3e5ba0 req-f2dc69ef-72b2-412d-a2b9-500419bce7e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] No waiting events found dispatching network-vif-unplugged-2d4aa035-2158-4404-80f0-9a1b2897399c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.222 257641 DEBUG nova.compute.manager [req-677e9d11-e04d-49bd-b581-ae682b3e5ba0 req-f2dc69ef-72b2-412d-a2b9-500419bce7e8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-unplugged-2d4aa035-2158-4404-80f0-9a1b2897399c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.229 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6b52a3-5cca-4c66-9b9f-55238f4f8a93]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364259, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364259, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.231 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.232 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.233 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.234 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.234 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.234 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.234 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
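The single-command transactions above (DelPortCommand, AddPortCommand, DbSetCommand) are how the metadata agent keeps the namespace tap plugged into br-int with the right iface-id. Roughly the same calls through ovsdbapp's Open_vSwitch schema API; the socket endpoint is an assumption, and the agent issues each command as its own one-command transaction rather than batching as shown:

```python
# Sketch of the OVSDB commands in the transaction lines above, via
# ovsdbapp's Open_vSwitch schema API; the socket path is an assumption.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # DelPortCommand(port=taped50ff83-50, bridge=br-ex, if_exists=True)
    txn.add(api.del_port('taped50ff83-50', bridge='br-ex', if_exists=True))
    # AddPortCommand(bridge=br-int, port=taped50ff83-50, may_exist=True)
    txn.add(api.add_port('br-int', 'taped50ff83-50', may_exist=True))
    # DbSetCommand(table=Interface, record=taped50ff83-50, external_ids...)
    txn.add(api.db_set(
        'Interface', 'taped50ff83-50',
        ('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'})))
```

The "Transaction caused no change" lines that follow each command mean the database rows already matched the requested state, so the commit was a no-op.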
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.235 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2d4aa035-2158-4404-80f0-9a1b2897399c in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.237 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.252 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[455a8080-f089-44b9-b3ec-54ce00603ed4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.279 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9228f9d8-48a3-450c-a1e3-903c970e9405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.282 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd24d2b-3a3d-4e28-9dcd-dee9f7b633b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.307 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d17240-d9f6-4d7e-8e5b-26d9c83bc5f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.327 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7e0d47-f597-4454-aa43-4d96ef29265a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 27254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364265, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.347 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[58397f32-786f-4a3e-8eb2-9676c38c669a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364266, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364266, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
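The privsep replies carrying RTM_NEWADDR results (here and at 08:35:15.229) are netlink address dumps executed inside the ovnmeta-… namespace; note the 'target' field in each message header. A rough equivalent with pyroute2; in the agent this runs behind oslo.privsep, so running it directly as below assumes root:

```python
# Rough pyroute2 equivalent of the RTM_NEWADDR dumps in the privsep
# replies above; the agent does this via oslo.privsep, plain root assumed here.
from pyroute2 import NetNS

with NetNS('ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6') as ns:
    for msg in ns.get_addr(index=2):      # ifindex 2 == taped50ff83-51
        print(msg.get_attr('IFA_LABEL'),
              msg.get_attr('IFA_ADDRESS'),  # 169.254.169.254, then 10.100.0.2
              msg['prefixlen'])             # /32 and /28, as in the replies
```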
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.349 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.350 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.351 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.351 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.352 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:15.352 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.497 257641 INFO nova.virt.libvirt.driver [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Deleting instance files /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2_del#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.499 257641 INFO nova.virt.libvirt.driver [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Deletion of /var/lib/nova/instances/8aee8cc7-7ff1-451b-88f4-7804819fc2b2_del complete#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.560 257641 INFO nova.compute.manager [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.561 257641 DEBUG oslo.service.loopingcall [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.561 257641 DEBUG nova.compute.manager [-] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:35:15 np0005539550 nova_compute[257631]: 2025-11-29 08:35:15.562 257641 DEBUG nova.network.neutron [-] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:35:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:15.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 196 op/s
Nov 29 03:35:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:16.312 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:16.313 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
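The "Matched UPDATE: SbGlobalUpdateEvent(...)" line is ovsdbapp's row-event machinery firing on the SB_Global nb_cfg bump (49 to 50); the agent reacts by scheduling a delayed Chassis-table update. A minimal RowEvent subclass of the same shape; the body is illustrative, only the events/table/conditions tuple mirrors what the log prints:

```python
# Minimal sketch of the ovsdbapp row-event pattern behind "Matched UPDATE:
# SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, ...)".
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        # events=('update',), table='SB_Global', conditions=None
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
        self.event_name = 'SbGlobalUpdateEvent'

    def run(self, event, row, old):
        # The real agent schedules the Chassis update after a delay
        # ("Delaying updating chassis table for 8 seconds").
        print('nb_cfg moved from', getattr(old, 'nb_cfg', '?'), 'to', row.nb_cfg)
```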
Nov 29 03:35:16 np0005539550 nova_compute[257631]: 2025-11-29 08:35:16.313 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:16 np0005539550 nova_compute[257631]: 2025-11-29 08:35:16.333 257641 DEBUG nova.network.neutron [-] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:16 np0005539550 nova_compute[257631]: 2025-11-29 08:35:16.351 257641 INFO nova.compute.manager [-] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Took 0.79 seconds to deallocate network for instance.#033[00m
Nov 29 03:35:16 np0005539550 nova_compute[257631]: 2025-11-29 08:35:16.464 257641 DEBUG nova.compute.manager [req-d5174131-e54a-401c-90e3-851b3a87d3ad req-71cb8f4c-25bc-4504-ac05-05ac68bc60d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-deleted-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:16 np0005539550 nova_compute[257631]: 2025-11-29 08:35:16.577 257641 INFO nova.compute.manager [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Took 0.23 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:35:16 np0005539550 nova_compute[257631]: 2025-11-29 08:35:16.662 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:16 np0005539550 nova_compute[257631]: 2025-11-29 08:35:16.663 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:16.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.011 257641 DEBUG oslo_concurrency.processutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.356 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.357 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.357 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.358 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.358 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] No waiting events found dispatching network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.358 257641 WARNING nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received unexpected event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.359 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.359 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.360 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.360 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.360 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] No waiting events found dispatching network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.361 257641 WARNING nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received unexpected event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.361 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.361 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.362 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.362 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.363 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] No waiting events found dispatching network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.363 257641 WARNING nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received unexpected event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.363 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-unplugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.364 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.364 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.364 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.365 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] No waiting events found dispatching network-vif-unplugged-2d4aa035-2158-4404-80f0-9a1b2897399c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.365 257641 WARNING nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received unexpected event network-vif-unplugged-2d4aa035-2158-4404-80f0-9a1b2897399c for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.366 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.366 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.366 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.367 257641 DEBUG oslo_concurrency.lockutils [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.367 257641 DEBUG nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] No waiting events found dispatching network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.368 257641 WARNING nova.compute.manager [req-61487224-4bf5-457f-bb7f-f4c840b83094 req-a061d035-2226-4131-85c7-708317941b89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Received unexpected event network-vif-plugged-2d4aa035-2158-4404-80f0-9a1b2897399c for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:35:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305517834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.494 257641 DEBUG oslo_concurrency.processutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
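The "Running cmd (subprocess)" / "returned: 0 in 0.483s" pair brackets an oslo.concurrency processutils call, which is how the libvirt driver samples RBD pool capacity for the resource tracker. A standalone equivalent of the same call:

```python
# Standalone equivalent of the processutils call logged above.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)
print(stats['stats']['total_bytes'])  # cluster capacity, as `ceph df` reports it
```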
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.502 257641 DEBUG nova.compute.provider_tree [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.519 257641 DEBUG nova.scheduler.client.report [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
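The inventory dict above fixes the provider's effective capacity: placement treats the usable amount of each resource class as (total - reserved) × allocation_ratio. Worked out for the values logged here:

```python
# Effective capacity implied by the logged inventory, using placement's
# (total - reserved) * allocation_ratio rule:
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    usable = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(rc, usable)  # VCPU 32, MEMORY_MB 7168, DISK_GB 17
```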
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.537 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.556 257641 INFO nova.scheduler.client.report [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocations for instance 8aee8cc7-7ff1-451b-88f4-7804819fc2b2#033[00m
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.662 257641 DEBUG oslo_concurrency.lockutils [None req-91cc7dd8-a8d1-4dcb-8052-75383af1ab25 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8aee8cc7-7ff1-451b-88f4-7804819fc2b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:17.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:17 np0005539550 nova_compute[257631]: 2025-11-29 08:35:17.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.0 MiB/s wr, 195 op/s
Nov 29 03:35:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Nov 29 03:35:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Nov 29 03:35:18 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Nov 29 03:35:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:18.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:18 np0005539550 nova_compute[257631]: 2025-11-29 08:35:18.918 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:18.967 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:18.967 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:18.967 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:19 np0005539550 podman[364293]: 2025-11-29 08:35:19.328922651 +0000 UTC m=+0.061008869 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:35:19 np0005539550 podman[364292]: 2025-11-29 08:35:19.339465179 +0000 UTC m=+0.071551427 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:35:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Nov 29 03:35:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Nov 29 03:35:19 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Nov 29 03:35:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:19.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:19 np0005539550 nova_compute[257631]: 2025-11-29 08:35:19.957 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.015210238111855574 of space, bias 1.0, pg target 4.563071433556672 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005311387667504187 of space, bias 1.0, pg target 1.5721707495812394 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5615879600549042 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 16)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
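Each pg_autoscaler line above derives a per-pool PG target from the pool's share of raw space times its bias times the cluster's PG budget (target PGs per OSD × OSD count, 100 × 3 here assuming the default mon_target_pg_per_osd). The 'vms' and '.mgr' lines reproduce that product exactly; the mgr's per-pool bookkeeping shifts the multiplier slightly for the other pools. The target is then quantized to a power of two and compared against the current pg_num, and no pool moves in this pass. A hedged reconstruction of the arithmetic:

```python
# Hedged reconstruction of the pg_autoscaler arithmetic above, assuming
# the default mon_target_pg_per_osd=100 and this cluster's 3 in/up OSDs.
def pg_target(capacity_ratio, bias, osds=3, target_per_osd=100):
    return capacity_ratio * bias * osds * target_per_osd

# Reproduces the logged targets:
print(pg_target(0.015210238111855574, 1.0))    # ~4.563071433556672   ('vms')
print(pg_target(2.0538165363856318e-05, 1.0))  # ~0.006161449609156895 ('.mgr')
```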
Nov 29 03:35:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 805 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 40 KiB/s wr, 156 op/s
Nov 29 03:35:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:21 np0005539550 nova_compute[257631]: 2025-11-29 08:35:21.934 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:21 np0005539550 nova_compute[257631]: 2025-11-29 08:35:21.935 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:21 np0005539550 nova_compute[257631]: 2025-11-29 08:35:21.935 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:21 np0005539550 nova_compute[257631]: 2025-11-29 08:35:21.936 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:21 np0005539550 nova_compute[257631]: 2025-11-29 08:35:21.936 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:21 np0005539550 nova_compute[257631]: 2025-11-29 08:35:21.937 257641 INFO nova.compute.manager [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Terminating instance#033[00m
Nov 29 03:35:21 np0005539550 nova_compute[257631]: 2025-11-29 08:35:21.938 257641 DEBUG nova.compute.manager [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:35:21 np0005539550 kernel: tapa08044a1-40 (unregistering): left promiscuous mode
Nov 29 03:35:21 np0005539550 NetworkManager[49039]: <info>  [1764405321.9990] device (tapa08044a1-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:22Z|00795|binding|INFO|Releasing lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 from this chassis (sb_readonly=0)
Nov 29 03:35:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:22Z|00796|binding|INFO|Setting lport a08044a1-40b5-4987-bfe0-a92ba0c13b97 down in Southbound
Nov 29 03:35:22 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:22Z|00797|binding|INFO|Removing iface tapa08044a1-40 ovn-installed in OVS
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.011 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.013 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.020 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:39:c1:ec 10.100.0.3'], port_security=['fa:16:3e:39:c1:ec 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f6eba09a-c0cf-4855-afd5-b265b2f2cadc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a08044a1-40b5-4987-bfe0-a92ba0c13b97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.023 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a08044a1-40b5-4987-bfe0-a92ba0c13b97 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.026 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.051 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d948cd93-2506-471d-b171-11dac7f4f0f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:22 np0005539550 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000a6.scope: Deactivated successfully.
Nov 29 03:35:22 np0005539550 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000a6.scope: Consumed 16.158s CPU time.
Nov 29 03:35:22 np0005539550 systemd-machined[216673]: Machine qemu-92-instance-000000a6 terminated.
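In the two scope lines above, systemd escapes "-" inside unit-name components as "\x2d"; reversing that one mapping recovers the machine name that systemd-machined prints on the following line (a simplified illustration; the full escaping rules also cover "/" and non-ASCII bytes):

    # Undo systemd's "\x2d" escaping to get the readable scope name.
    scope = r'machine-qemu\x2d92\x2dinstance\x2d000000a6.scope'
    print(scope.replace(r'\x2d', '-'))
    # machine-qemu-92-instance-000000a6.scope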
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.091 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[937859a1-9b82-47dc-bca2-3227b78e6132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.094 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[295769f8-aff0-4a1f-bd6a-0eabaf75debf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.126 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[69d28ba9-8532-4387-acb4-42fd587e686b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.146 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49a15639-c6b6-433a-92a1-8cfe927313d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 17, 'rx_bytes': 784, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 27254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 
1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364343, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.171 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[93b3bd4b-5dc2-4035-9aec-36945ef4215b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364346, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364346, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
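The two privsep replies above are netlink dumps (an RTM_NEWLINK record, then RTM_NEWADDR records) taken inside the ovnmeta-... namespace named in their 'target' field. A rough user-space equivalent, assuming the pyroute2 library, which produces messages of exactly this shape; the interface and namespace names are copied from the log, and the call needs the same privileges privsep provides:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6') as ns:
        # RTM_NEWLINK: resolve the veth device seen in IFLA_IFNAME.
        idx = ns.link_lookup(ifname='taped50ff83-51')[0]
        # RTM_NEWADDR: its addresses (169.254.169.254/32 and 10.100.0.2/28).
        for msg in ns.get_addr(index=idx):
            print(dict(msg['attrs'])['IFA_ADDRESS'])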
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.174 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.176 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.181 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.181 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.181 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.181 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:22.181 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
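Each "Running txn n=1 command(idx=0)" line above is a single-command ovsdbapp transaction, and because DelPortCommand/AddPortCommand/DbSetCommand are issued with if_exists/may_exist the sequence is idempotent, which is why the IDL reports "Transaction caused no change" when the port is already where it should be. A minimal sketch of the same three commands, assuming ovsdbapp's Open_vSwitch schema API; the database socket path is an assumption for a typical OVS install, and the commands are batched into one transaction here for brevity where the agent ran them one per txn:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        # Move the metadata tap off br-ex and (re)attach it to br-int,
        # then point its iface-id at the Neutron port, as logged above.
        txn.add(ovs.del_port('taped50ff83-50', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'taped50ff83-50', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'taped50ff83-50',
            ('external_ids',
             {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'})))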
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.183 257641 INFO nova.virt.libvirt.driver [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Instance destroyed successfully.#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.183 257641 DEBUG nova.objects.instance [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid f6eba09a-c0cf-4855-afd5-b265b2f2cadc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.198 257641 DEBUG nova.virt.libvirt.vif [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=166,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:34:22Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-jvsv1b4g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:34:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=f6eba09a-c0cf-4855-afd5-b265b2f2cadc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", 
"bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.199 257641 DEBUG nova.network.os_vif_util [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "address": "fa:16:3e:39:c1:ec", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa08044a1-40", "ovs_interfaceid": "a08044a1-40b5-4987-bfe0-a92ba0c13b97", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.199 257641 DEBUG nova.network.os_vif_util [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.200 257641 DEBUG os_vif [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.201 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.202 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa08044a1-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.203 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.205 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:22 np0005539550 nova_compute[257631]: 2025-11-29 08:35:22.208 257641 INFO os_vif [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:39:c1:ec,bridge_name='br-int',has_traffic_filtering=True,id=a08044a1-40b5-4987-bfe0-a92ba0c13b97,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa08044a1-40')#033[00m
Nov 29 03:35:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 774 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 36 KiB/s wr, 140 op/s
Nov 29 03:35:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:22.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
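The anonymous "HEAD /" requests that recur roughly once per second from 192.168.122.100 and .102 have the shape of load-balancer health probes against radosgw's beast frontend. A probe of that form looks roughly like the following; the endpoint host and port are assumptions, since the log records only the client side:

    import http.client

    # Hypothetical RGW endpoint; beast logs the result as
    # '"HEAD / HTTP/1.0" 200 0' with sub-millisecond latency.
    conn = http.client.HTTPConnection('np0005539550', 8080, timeout=2)
    conn.request('HEAD', '/')            # anonymous: no auth headers
    print(conn.getresponse().status)     # expect 200
    conn.close()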
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.107 257641 INFO nova.virt.libvirt.driver [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Deleting instance files /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc_del#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.109 257641 INFO nova.virt.libvirt.driver [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Deletion of /var/lib/nova/instances/f6eba09a-c0cf-4855-afd5-b265b2f2cadc_del complete#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.192 257641 INFO nova.compute.manager [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.194 257641 DEBUG oslo.service.loopingcall [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.195 257641 DEBUG nova.compute.manager [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.195 257641 DEBUG nova.network.neutron [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:35:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:23.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.977 257641 DEBUG nova.compute.manager [req-7d55b8be-1b8f-4a3c-bdc1-56ff28d1d967 req-53cc857e-2c16-48e2-9c45-5c161accba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.977 257641 DEBUG oslo_concurrency.lockutils [req-7d55b8be-1b8f-4a3c-bdc1-56ff28d1d967 req-53cc857e-2c16-48e2-9c45-5c161accba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.977 257641 DEBUG oslo_concurrency.lockutils [req-7d55b8be-1b8f-4a3c-bdc1-56ff28d1d967 req-53cc857e-2c16-48e2-9c45-5c161accba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.978 257641 DEBUG oslo_concurrency.lockutils [req-7d55b8be-1b8f-4a3c-bdc1-56ff28d1d967 req-53cc857e-2c16-48e2-9c45-5c161accba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.978 257641 DEBUG nova.compute.manager [req-7d55b8be-1b8f-4a3c-bdc1-56ff28d1d967 req-53cc857e-2c16-48e2-9c45-5c161accba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:23 np0005539550 nova_compute[257631]: 2025-11-29 08:35:23.978 257641 DEBUG nova.compute.manager [req-7d55b8be-1b8f-4a3c-bdc1-56ff28d1d967 req-53cc857e-2c16-48e2-9c45-5c161accba3d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-unplugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
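The Acquiring/acquired/released triplets around "f6eba09a-...-events" come from oslo.concurrency's named locks serializing access to nova's per-instance event table. A compressed sketch of that pattern, assuming the lockutils API; the _events dict and function body are illustrative, not nova's code:

    from oslo_concurrency import lockutils

    _events = {}  # instance uuid -> {event name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        @lockutils.synchronized(f'{instance_uuid}-events')
        def _pop_event():
            # Held just long enough to pop one entry, matching the
            # 'waited 0.000s' / 'held 0.000s' timings in the log.
            return _events.get(instance_uuid, {}).pop(event_name, None)
        return _pop_event()

When the pop finds no registered waiter, nova logs "No waiting events found dispatching ..." and hands the event to the generic handler, as the next two lines show.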
Nov 29 03:35:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 735 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 58 KiB/s rd, 16 KiB/s wr, 80 op/s
Nov 29 03:35:24 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:24.314 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:24 np0005539550 nova_compute[257631]: 2025-11-29 08:35:24.722 257641 DEBUG nova.network.neutron [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:24 np0005539550 nova_compute[257631]: 2025-11-29 08:35:24.740 257641 INFO nova.compute.manager [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Took 1.54 seconds to deallocate network for instance.#033[00m
Nov 29 03:35:24 np0005539550 nova_compute[257631]: 2025-11-29 08:35:24.788 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:24 np0005539550 nova_compute[257631]: 2025-11-29 08:35:24.789 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:24 np0005539550 nova_compute[257631]: 2025-11-29 08:35:24.877 257641 DEBUG oslo_concurrency.processutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1184908221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:25 np0005539550 nova_compute[257631]: 2025-11-29 08:35:25.370 257641 DEBUG oslo_concurrency.processutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
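The resource tracker shells out for Ceph pool statistics via oslo.concurrency's processutils, as the "Running cmd (subprocess)" / "returned: 0 in 0.493s" pair shows; the matching mon-side audit entry appears two lines earlier. A minimal re-run of the same command, assuming the processutils API and the 'stats' field names of current Ceph JSON output:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(round(stats['total_avail_bytes'] / 1024 ** 3, 1), 'GiB avail')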
Nov 29 03:35:25 np0005539550 nova_compute[257631]: 2025-11-29 08:35:25.378 257641 DEBUG nova.compute.provider_tree [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:25 np0005539550 nova_compute[257631]: 2025-11-29 08:35:25.399 257641 DEBUG nova.scheduler.client.report [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
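Placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio; a worked check with the exact values logged above (the arithmetic is illustrative, not log output):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1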
Nov 29 03:35:25 np0005539550 nova_compute[257631]: 2025-11-29 08:35:25.429 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:25 np0005539550 nova_compute[257631]: 2025-11-29 08:35:25.463 257641 INFO nova.scheduler.client.report [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocations for instance f6eba09a-c0cf-4855-afd5-b265b2f2cadc#033[00m
Nov 29 03:35:25 np0005539550 nova_compute[257631]: 2025-11-29 08:35:25.535 257641 DEBUG oslo_concurrency.lockutils [None req-b93796c1-ccb9-497c-a6dc-88d6f0b837ff c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.095 257641 DEBUG nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.095 257641 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.095 257641 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.096 257641 DEBUG oslo_concurrency.lockutils [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f6eba09a-c0cf-4855-afd5-b265b2f2cadc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.096 257641 DEBUG nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] No waiting events found dispatching network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.096 257641 WARNING nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received unexpected event network-vif-plugged-a08044a1-40b5-4987-bfe0-a92ba0c13b97 for instance with vm_state deleted and task_state None.#033[00m
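The WARNING fires because port-binding churn during teardown apparently delivers a network-vif-plugged notification after the guest is already gone; nova logs it and drops it rather than dispatching. An illustrative reduction of that guard (not nova's actual code):

    import logging

    log = logging.getLogger(__name__)

    def accept_external_event(vm_state, task_state, event_name):
        # Late (un)plug notifications for an already-deleted guest are
        # logged and discarded, exactly as the WARNING above shows.
        if vm_state == 'deleted' and task_state is None:
            log.warning('Received unexpected event %s', event_name)
            return False
        return True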
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.096 257641 DEBUG nova.compute.manager [req-6877105d-f646-469d-8e43-675467eec034 req-40abf393-883d-4d7a-8559-7cc601c4f33f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Received event network-vif-deleted-a08044a1-40b5-4987-bfe0-a92ba0c13b97 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 678 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 9.7 KiB/s wr, 118 op/s
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.533 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.533 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.534 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.534 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.534 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.535 257641 INFO nova.compute.manager [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Terminating instance#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.536 257641 DEBUG nova.compute.manager [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:35:26 np0005539550 kernel: tap5369324b-4a (unregistering): left promiscuous mode
Nov 29 03:35:26 np0005539550 NetworkManager[49039]: <info>  [1764405326.5943] device (tap5369324b-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:26Z|00798|binding|INFO|Releasing lport 5369324b-4a12-4cff-807c-444de53025fa from this chassis (sb_readonly=0)
Nov 29 03:35:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:26Z|00799|binding|INFO|Setting lport 5369324b-4a12-4cff-807c-444de53025fa down in Southbound
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:26Z|00800|binding|INFO|Removing iface tap5369324b-4a ovn-installed in OVS
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.605 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.610 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:51:12 10.100.0.11'], port_security=['fa:16:3e:a3:51:12 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '8', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=5369324b-4a12-4cff-807c-444de53025fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.611 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 5369324b-4a12-4cff-807c-444de53025fa in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.613 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.621 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.630 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef94a32-4b42-4589-9ad5-5f6ba2aaf5aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539550 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Nov 29 03:35:26 np0005539550 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000a4.scope: Consumed 17.756s CPU time.
Nov 29 03:35:26 np0005539550 systemd-machined[216673]: Machine qemu-91-instance-000000a4 terminated.
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.658 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8491c5-b904-48ba-9059-049723bb6fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.660 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[02efb62d-efe5-4773-9c3f-c0574ced8769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.694 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[aa69d8c1-4602-4459-a98e-23531fde78aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.711 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[de95774f-495a-4a2f-a65d-7328a62388d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'taped50ff83-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:60:f2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 19, 'rx_bytes': 784, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 217], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816028, 'reachable_time': 27254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 
1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364462, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:26.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.725 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[36327d1c-5523-42fe-b68b-d6be7aa5b275]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816038, 'tstamp': 816038}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364463, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'taped50ff83-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 816041, 'tstamp': 816041}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364463, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.727 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.729 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.732 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.733 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=taped50ff83-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.733 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.733 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=taped50ff83-50, col_values=(('external_ids', {'iface-id': '3b04b2c4-a6da-4677-b446-82ad68652b56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:26.734 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.755 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.760 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.774 257641 INFO nova.virt.libvirt.driver [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Instance destroyed successfully.#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.775 257641 DEBUG nova.objects.instance [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.796 257641 DEBUG nova.virt.libvirt.vif [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:32:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=164,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:33:42Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-3f2qzfjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:33:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.796 257641 DEBUG nova.network.os_vif_util [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "5369324b-4a12-4cff-807c-444de53025fa", "address": "fa:16:3e:a3:51:12", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5369324b-4a", "ovs_interfaceid": "5369324b-4a12-4cff-807c-444de53025fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.797 257641 DEBUG nova.network.os_vif_util [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.797 257641 DEBUG os_vif [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.799 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5369324b-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
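The DelPortCommand above is ovsdbapp's Open_vSwitch-schema API at work: commands are queued into an OVSDB transaction and committed against the local switch database. A minimal standalone sketch of the same port removal follows; the socket path and timeout are illustrative assumptions, not values from this log.

    # Sketch: delete an OVS port the way the DelPortCommand transaction
    # above does. Socket path and timeout are assumed, not from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes the delete idempotent: a second unplug of the
    # same port becomes a no-op instead of an error.
    api.del_port('tap5369324b-4a', bridge='br-int',
                 if_exists=True).execute(check_error=True)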
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:26 np0005539550 nova_compute[257631]: 2025-11-29 08:35:26.806 257641 INFO os_vif [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a3:51:12,bridge_name='br-int',has_traffic_filtering=True,id=5369324b-4a12-4cff-807c-444de53025fa,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5369324b-4a')#033[00m
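The unplug path runs through the os-vif library: nova converts its own VIF dict into the VIFOpenVSwitch object shown above and hands it to os_vif.unplug(), which dispatches to the 'ovs' plugin. A minimal sketch of that call, with field values copied from this log; the InstanceInfo construction is an illustrative assumption.

    # Sketch of the os_vif.unplug() call logged above. Field values mirror
    # the VIFOpenVSwitch repr in the log; everything else is assumed.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the os-vif plugins, including 'ovs'

    net = network.Network(id='ed50ff83-51d1-4b35-b85c-1cbe6fb812c6',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='5369324b-4a12-4cff-807c-444de53025fa',
        address='fa:16:3e:a3:51:12',
        vif_name='tap5369324b-4a',
        bridge_name='br-int',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a',
        name='multiattach-server-0')

    os_vif.unplug(ovs_vif, inst)  # ovs plugin removes the port from br-int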
Nov 29 03:35:27 np0005539550 nova_compute[257631]: 2025-11-29 08:35:27.027 257641 DEBUG nova.compute.manager [req-d4e5157a-7147-4499-a56a-833638673dda req-21f33d2d-24d1-4fe5-b773-567cfca0baa1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:27 np0005539550 nova_compute[257631]: 2025-11-29 08:35:27.027 257641 DEBUG oslo_concurrency.lockutils [req-d4e5157a-7147-4499-a56a-833638673dda req-21f33d2d-24d1-4fe5-b773-567cfca0baa1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:27 np0005539550 nova_compute[257631]: 2025-11-29 08:35:27.028 257641 DEBUG oslo_concurrency.lockutils [req-d4e5157a-7147-4499-a56a-833638673dda req-21f33d2d-24d1-4fe5-b773-567cfca0baa1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:27 np0005539550 nova_compute[257631]: 2025-11-29 08:35:27.028 257641 DEBUG oslo_concurrency.lockutils [req-d4e5157a-7147-4499-a56a-833638673dda req-21f33d2d-24d1-4fe5-b773-567cfca0baa1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:27 np0005539550 nova_compute[257631]: 2025-11-29 08:35:27.028 257641 DEBUG nova.compute.manager [req-d4e5157a-7147-4499-a56a-833638673dda req-21f33d2d-24d1-4fe5-b773-567cfca0baa1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:27 np0005539550 nova_compute[257631]: 2025-11-29 08:35:27.028 257641 DEBUG nova.compute.manager [req-d4e5157a-7147-4499-a56a-833638673dda req-21f33d2d-24d1-4fe5-b773-567cfca0baa1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-unplugged-5369324b-4a12-4cff-807c-444de53025fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
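The Acquiring/acquired/released triplets above are oslo.concurrency's named internal locks: each instance gets a "<uuid>-events" lock so external events such as network-vif-unplugged are popped from the waiter table atomically. The same primitive as a minimal sketch, with a placeholder critical section:

    # Sketch of the "<uuid>-events" lock pattern in the log. The critical
    # section body is a placeholder, not nova's actual code.
    from oslo_concurrency import lockutils

    instance_uuid = '8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a'

    with lockutils.lock(f'{instance_uuid}-events'):
        pass  # pop or register a waiter for the incoming event here

    # Equivalent decorator form; the "compute_resources" lock seen later
    # in this log is taken through the same machinery.
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass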
Nov 29 03:35:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:35:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3239828883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:35:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:27.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 648 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 83 KiB/s rd, 8.4 KiB/s wr, 111 op/s
Nov 29 03:35:28 np0005539550 nova_compute[257631]: 2025-11-29 08:35:28.489 257641 INFO nova.virt.libvirt.driver [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Deleting instance files /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_del#033[00m
Nov 29 03:35:28 np0005539550 nova_compute[257631]: 2025-11-29 08:35:28.490 257641 INFO nova.virt.libvirt.driver [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Deletion of /var/lib/nova/instances/8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a_del complete#033[00m
Nov 29 03:35:28 np0005539550 nova_compute[257631]: 2025-11-29 08:35:28.547 257641 INFO nova.compute.manager [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Took 2.01 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:35:28 np0005539550 nova_compute[257631]: 2025-11-29 08:35:28.548 257641 DEBUG oslo.service.loopingcall [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:35:28 np0005539550 nova_compute[257631]: 2025-11-29 08:35:28.549 257641 DEBUG nova.compute.manager [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:35:28 np0005539550 nova_compute[257631]: 2025-11-29 08:35:28.549 257641 DEBUG nova.network.neutron [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:35:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:28.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:28 np0005539550 nova_compute[257631]: 2025-11-29 08:35:28.923 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.122 257641 DEBUG nova.compute.manager [req-6df48c70-ba33-4d95-90f3-797406e269f5 req-f66d117f-b102-4699-9ae9-bc90fd589681 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.122 257641 DEBUG oslo_concurrency.lockutils [req-6df48c70-ba33-4d95-90f3-797406e269f5 req-f66d117f-b102-4699-9ae9-bc90fd589681 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.123 257641 DEBUG oslo_concurrency.lockutils [req-6df48c70-ba33-4d95-90f3-797406e269f5 req-f66d117f-b102-4699-9ae9-bc90fd589681 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.124 257641 DEBUG oslo_concurrency.lockutils [req-6df48c70-ba33-4d95-90f3-797406e269f5 req-f66d117f-b102-4699-9ae9-bc90fd589681 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.125 257641 DEBUG nova.compute.manager [req-6df48c70-ba33-4d95-90f3-797406e269f5 req-f66d117f-b102-4699-9ae9-bc90fd589681 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] No waiting events found dispatching network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.125 257641 WARNING nova.compute.manager [req-6df48c70-ba33-4d95-90f3-797406e269f5 req-f66d117f-b102-4699-9ae9-bc90fd589681 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received unexpected event network-vif-plugged-5369324b-4a12-4cff-807c-444de53025fa for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.206 257641 DEBUG nova.network.neutron [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.232 257641 INFO nova.compute.manager [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Took 0.68 seconds to deallocate network for instance.#033[00m
Nov 29 03:35:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.299 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.299 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:29 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.370 257641 DEBUG oslo_concurrency.processutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:29 np0005539550 podman[364495]: 2025-11-29 08:35:29.406839947 +0000 UTC m=+0.135159030 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:35:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:29.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070190506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.873 257641 DEBUG oslo_concurrency.processutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
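The Running cmd / CMD ... returned pair above is oslo.concurrency's processutils.execute() shelling out to ceph so nova's RBD backend can refresh pool capacity. A minimal equivalent that parses the JSON the command returns; the 'stats' field names match current ceph releases and are an assumption for other versions.

    # Sketch of the "ceph df --format=json" call logged above, using the
    # same oslo.concurrency helper nova invokes it with.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)['stats']
    # total_bytes / total_avail_bytes feed the DISK_GB inventory reported
    # to placement (field names assumed from current ceph output).
    print(stats['total_bytes'], stats['total_avail_bytes'])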
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.886 257641 DEBUG nova.compute.provider_tree [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.916 257641 DEBUG nova.scheduler.client.report [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
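Placement derives usable capacity from that inventory as (total - reserved) * allocation_ratio per resource class. Worked out for the values logged here:

    # Capacity as placement computes it from the inventory above:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1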
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.929 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405314.9281757, 8aee8cc7-7ff1-451b-88f4-7804819fc2b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.929 257641 INFO nova.compute.manager [-] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.950 257641 DEBUG nova.compute.manager [None req-25ed12b1-3499-41e9-862a-bbe878a8e014 - - - - - -] [instance: 8aee8cc7-7ff1-451b-88f4-7804819fc2b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.958 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:29 np0005539550 nova_compute[257631]: 2025-11-29 08:35:29.982 257641 INFO nova.scheduler.client.report [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocations for instance 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a#033[00m
Nov 29 03:35:30 np0005539550 nova_compute[257631]: 2025-11-29 08:35:30.057 257641 DEBUG oslo_concurrency.lockutils [None req-cc1876de-e12c-499a-88f4-1046f1766f1b c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 648 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 80 KiB/s rd, 8.1 KiB/s wr, 107 op/s
Nov 29 03:35:30 np0005539550 nova_compute[257631]: 2025-11-29 08:35:30.309 257641 DEBUG nova.compute.manager [req-22bf66d4-9b88-40b5-a082-f224cea35536 req-ac9e3cfc-b79c-4967-b5b5-c29d3533c198 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Received event network-vif-deleted-5369324b-4a12-4cff-807c-444de53025fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:31 np0005539550 nova_compute[257631]: 2025-11-29 08:35:31.802 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:31.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 538 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 14 KiB/s wr, 130 op/s
Nov 29 03:35:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.698 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.698 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.716 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.793 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.793 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.801 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.801 257641 INFO nova.compute.claims [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:35:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:33.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.910 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:33 np0005539550 nova_compute[257631]: 2025-11-29 08:35:33.970 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 512 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 17 KiB/s wr, 106 op/s
Nov 29 03:35:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155703498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.446 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.453 257641 DEBUG nova.compute.provider_tree [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.472 257641 DEBUG nova.scheduler.client.report [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.499 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.500 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.562 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.563 257641 DEBUG nova.network.neutron [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.588 257641 INFO nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.621 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:35:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.732 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.733 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.734 257641 INFO nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Creating image(s)#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.867 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.902 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.932 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.937 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:34 np0005539550 nova_compute[257631]: 2025-11-29 08:35:34.977 257641 DEBUG nova.policy [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdbcdbdc435844ee8d866288c969331b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '368e3a44279843f5947188dd045d65b6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.017 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
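The qemu-img info call above is wrapped in oslo.concurrency's prlimit guard (1 GiB address space, 30 s CPU) so a malformed or hostile image cannot exhaust the compute host while being probed. The same guard is available directly from Python; a minimal sketch:

    # Sketch of the prlimit-guarded qemu-img probe logged above.
    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # matches --as=1073741824
        cpu_time=30)               # matches --cpu=30

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        prlimit=limits)

    info = json.loads(out)
    print(info['format'], info['virtual-size'])  # format and size in bytes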
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.018 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.019 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.019 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.046 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.051 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d05fcfb7-a367-40e8-990a-670bcd45288f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:35Z|00801|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.088 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:35Z|00802|binding|INFO|Releasing lport 3b04b2c4-a6da-4677-b446-82ad68652b56 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.282 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.336 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 d05fcfb7-a367-40e8-990a-670bcd45288f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.431 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] resizing rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
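For an RBD-backed root disk the cached base image is imported into the vms pool (rbd import above) and the clone is then grown to the flavor's root_gb, here 1 GiB. nova's rbd_utils performs the resize through the librbd Python bindings; a minimal sketch, with pool, client id, and image name taken from the log:

    # Sketch of the rbd resize logged above, via the librbd bindings that
    # nova.storage.rbd_utils wraps.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx,
                           'd05fcfb7-a367-40e8-990a-670bcd45288f_disk') as img:
                img.resize(1073741824)  # 1 GiB, matching root_gb=1
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()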
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.474 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.474 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.474 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.475 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.475 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.476 257641 INFO nova.compute.manager [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Terminating instance#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.478 257641 DEBUG nova.compute.manager [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:35:35 np0005539550 kernel: tap39164f5e-3f (unregistering): left promiscuous mode
Nov 29 03:35:35 np0005539550 NetworkManager[49039]: <info>  [1764405335.5326] device (tap39164f5e-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:35:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:35Z|00803|binding|INFO|Releasing lport 39164f5e-3f66-4cf5-8cc3-3903f7387b53 from this chassis (sb_readonly=0)
Nov 29 03:35:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:35Z|00804|binding|INFO|Setting lport 39164f5e-3f66-4cf5-8cc3-3903f7387b53 down in Southbound
Nov 29 03:35:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:35Z|00805|binding|INFO|Removing iface tap39164f5e-3f ovn-installed in OVS
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.554 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:f9:49 10.100.0.12'], port_security=['fa:16:3e:8c:f9:49 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1509e19f-b5e6-496d-a0d9-d6740970fad0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4f6db81949d487b853d7567f8a2e6d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56b7aa4d-4e93-4da8-a338-5b87494d2fcd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=794eeb47-266a-47f4-b2a1-7a89e6c6ba82, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=39164f5e-3f66-4cf5-8cc3-3903f7387b53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.554 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.555 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 39164f5e-3f66-4cf5-8cc3-3903f7387b53 in datapath ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 unbound from our chassis#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.557 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.558 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b172a1c2-d101-4245-98ee-de4d9547595e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.559 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 namespace which is not needed anymore#033[00m
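The metadata agent reacts to those Southbound changes through ovsdbapp's row-event machinery: an event class declares the table and event types it matches, and its run() fires on each matching row (here, the Port_Binding update that unbinds the port triggers the namespace teardown above). A skeletal sketch of that pattern, with placeholder logic:

    # Sketch of the ovsdbapp RowEvent pattern behind the
    # PortBindingUpdatedEvent match logged above; run() is a placeholder.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions); conditions=None matches every
            # update to Port_Binding, as in the log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The real agent compares the old chassis/up values with the
            # new row to decide whether the port left this chassis.
            if getattr(old, 'chassis', None):
                print('port %s unbound' % row.logical_port)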
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.572 257641 DEBUG nova.objects.instance [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'migration_context' on Instance uuid d05fcfb7-a367-40e8-990a-670bcd45288f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.574 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.590 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.590 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Ensure instance console log exists: /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.591 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.591 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.591 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:35 np0005539550 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Nov 29 03:35:35 np0005539550 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000a1.scope: Consumed 24.277s CPU time.
Nov 29 03:35:35 np0005539550 systemd-machined[216673]: Machine qemu-87-instance-000000a1 terminated.
Nov 29 03:35:35 np0005539550 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[357635]: [NOTICE]   (357639) : haproxy version is 2.8.14-c23fe91
Nov 29 03:35:35 np0005539550 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[357635]: [NOTICE]   (357639) : path to executable is /usr/sbin/haproxy
Nov 29 03:35:35 np0005539550 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[357635]: [WARNING]  (357639) : Exiting Master process...
Nov 29 03:35:35 np0005539550 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[357635]: [ALERT]    (357639) : Current worker (357648) exited with code 143 (Terminated)
Nov 29 03:35:35 np0005539550 neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6[357635]: [WARNING]  (357639) : All workers exited. Exiting... (0)
Nov 29 03:35:35 np0005539550 systemd[1]: libpod-ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003.scope: Deactivated successfully.
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.717 257641 INFO nova.virt.libvirt.driver [-] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Instance destroyed successfully.#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.717 257641 DEBUG nova.objects.instance [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lazy-loading 'resources' on Instance uuid 1509e19f-b5e6-496d-a0d9-d6740970fad0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:35 np0005539550 podman[364758]: 2025-11-29 08:35:35.718466619 +0000 UTC m=+0.050186015 container died ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.731 257641 DEBUG nova.virt.libvirt.vif [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:31:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=161,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEFZDUAh1tFHT85mctamdge/Jlh9j7Mmalvlf2a+E48/dJ4b3TzL46vHd8+krJsRkbdr2BabH5xlFnXxT+hxq+KJlLzOnOaQuAWI18v9sbbjA8bZzR2tugMjasg7rWhFwg==',key_name='tempest-keypair-2058861619',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:31:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4f6db81949d487b853d7567f8a2e6d4',ramdisk_id='',reservation_id='r-od0z31z7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-573425942',owner_user_name='tempest-AttachVolumeMultiAttachTest-573425942-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:31:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c5b0953fb7cc415fb26cf4ffdd5908c6',uuid=1509e19f-b5e6-496d-a0d9-d6740970fad0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.731 257641 DEBUG nova.network.os_vif_util [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converting VIF {"id": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "address": "fa:16:3e:8c:f9:49", "network": {"id": "ed50ff83-51d1-4b35-b85c-1cbe6fb812c6", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-524811921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d4f6db81949d487b853d7567f8a2e6d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39164f5e-3f", "ovs_interfaceid": "39164f5e-3f66-4cf5-8cc3-3903f7387b53", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.732 257641 DEBUG nova.network.os_vif_util [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8c:f9:49,bridge_name='br-int',has_traffic_filtering=True,id=39164f5e-3f66-4cf5-8cc3-3903f7387b53,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39164f5e-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.732 257641 DEBUG os_vif [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:f9:49,bridge_name='br-int',has_traffic_filtering=True,id=39164f5e-3f66-4cf5-8cc3-3903f7387b53,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39164f5e-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.734 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.734 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39164f5e-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.735 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.736 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.739 257641 INFO os_vif [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8c:f9:49,bridge_name='br-int',has_traffic_filtering=True,id=39164f5e-3f66-4cf5-8cc3-3903f7387b53,network=Network(ed50ff83-51d1-4b35-b85c-1cbe6fb812c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39164f5e-3f')#033[00m
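
The unplug above goes through ovsdbapp: the DelPortCommand transaction a few lines earlier is what actually removes tap39164f5e-3f from br-int. A minimal standalone sketch of the same call, assuming the default local ovsdb-server socket path (the socket path and timeout are illustrative, not taken from this host):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: default ovsdb-server UNIX socket location.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged DelPortCommand: if_exists=True makes
    # the delete a no-op when the port is already gone.
    api.del_port('tap39164f5e-3f', bridge='br-int',
                 if_exists=True).execute(check_error=True)
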
Nov 29 03:35:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003-userdata-shm.mount: Deactivated successfully.
Nov 29 03:35:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-21eda19a5090f65cffd31d1cf6ddb29d1dc0970aa66fd2bd58627cd4c02ef9c2-merged.mount: Deactivated successfully.
Nov 29 03:35:35 np0005539550 podman[364758]: 2025-11-29 08:35:35.765869802 +0000 UTC m=+0.097589198 container cleanup ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:35:35 np0005539550 systemd[1]: libpod-conmon-ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003.scope: Deactivated successfully.
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.806 257641 DEBUG nova.network.neutron [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Successfully created port: 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.811 257641 DEBUG nova.compute.manager [req-b76e555e-31b7-40bd-8769-f88d603988e5 req-7f967e9a-3d2e-4d30-9c02-c223b0682203 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-vif-unplugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.812 257641 DEBUG oslo_concurrency.lockutils [req-b76e555e-31b7-40bd-8769-f88d603988e5 req-7f967e9a-3d2e-4d30-9c02-c223b0682203 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.812 257641 DEBUG oslo_concurrency.lockutils [req-b76e555e-31b7-40bd-8769-f88d603988e5 req-7f967e9a-3d2e-4d30-9c02-c223b0682203 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.812 257641 DEBUG oslo_concurrency.lockutils [req-b76e555e-31b7-40bd-8769-f88d603988e5 req-7f967e9a-3d2e-4d30-9c02-c223b0682203 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.812 257641 DEBUG nova.compute.manager [req-b76e555e-31b7-40bd-8769-f88d603988e5 req-7f967e9a-3d2e-4d30-9c02-c223b0682203 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] No waiting events found dispatching network-vif-unplugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.812 257641 DEBUG nova.compute.manager [req-b76e555e-31b7-40bd-8769-f88d603988e5 req-7f967e9a-3d2e-4d30-9c02-c223b0682203 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-vif-unplugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
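
The three lockutils lines above show the per-instance event lock that serializes external event dispatch. A sketch of the same pattern with oslo.concurrency, reusing the "<uuid>-events" lock name from the log (the handler body is illustrative):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = '1509e19f-b5e6-496d-a0d9-d6740970fad0'

    def pop_instance_event(event_name):
        # Serializes everyone touching this instance's pending-event
        # table, matching the acquire/release pair logged by inner().
        with lockutils.lock('%s-events' % INSTANCE_UUID):
            # ... look up and remove a waiting event, if any ...
            return None  # the "No waiting events found" path above
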
Nov 29 03:35:35 np0005539550 podman[364815]: 2025-11-29 08:35:35.82291549 +0000 UTC m=+0.037575585 container remove ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
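
After the "container remove" event, the container ID should no longer resolve. A quick check, assuming podman's `container exists` subcommand (exit status 0 when present, non-zero when gone):

    import subprocess

    CID = 'ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003'
    gone = subprocess.run(['podman', 'container', 'exists', CID],
                          check=False).returncode != 0
    print('removed' if gone else 'still present')
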
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.828 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4fca10-b301-486c-bc99-d23eb2dba643]: (4, ('Sat Nov 29 08:35:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 (ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003)\nccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003\nSat Nov 29 08:35:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 (ccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003)\nccfa31919ab2b5a79b76608018823ca7bc31a2d156d83dd84353e27337563003\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.831 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[87167288-7936-4ea8-a898-f1253a6b928d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.832 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=taped50ff83-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.834 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 kernel: taped50ff83-50: left promiscuous mode
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.850 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 nova_compute[257631]: 2025-11-29 08:35:35.852 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.856 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[00cb5685-a798-4b1d-9bad-fd417cbd80b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:35.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
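
The beast access line above packs method, status, size, and latency into one string. A sketch of pulling those fields back out; the regex is written against this log's layout only, not rgw's own format definition:

    import re

    LINE = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:35:35.856 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')

    m = re.search(r'"(\w+) (\S+) ([^"]+)" (\d+) (\d+).*latency=([\d.]+)s',
                  LINE)
    if m:
        method, path, proto, status, size, latency = m.groups()
        print(method, status, float(latency))  # HEAD 200 0.001000025
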
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.873 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[810b1d4d-3cac-4e39-9de3-a98e3d0b0735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.874 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e7379fc1-2f56-4bc6-8fff-b2faa9f2e796]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.890 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1089b5eb-d197-4af7-aefa-8d9a064f8f73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 816021, 'reachable_time': 23306, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364833, 'error': None, 'target': 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
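
That privsep reply is a pyroute2 netlink dump of 'lo' taken inside the ovnmeta- namespace (see the 'target' field in its header). While the namespace still existed, the same IFLA_* attributes could be read directly; a sketch using pyroute2:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6')
    try:
        for link in ns.get_links():
            # get_attr() looks a value up in the message's attrs list,
            # the same list serialized into the reply above.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link.get_attr('IFLA_OPERSTATE'))
    finally:
        ns.close()
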
Nov 29 03:35:35 np0005539550 systemd[1]: run-netns-ovnmeta\x2ded50ff83\x2d51d1\x2d4b35\x2db85c\x2d1cbe6fb812c6.mount: Deactivated successfully.
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.893 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:35:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:35.894 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[098fe84e-1889-40a5-9a19-e4a360686c44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
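
The remove_netns step logged here is neutron's privileged ip_lib deleting the metadata namespace. A sketch of the equivalent call with pyroute2, which ip_lib wraps; it needs the same privileges the privsep daemon holds:

    from pyroute2 import netns

    NS = 'ovnmeta-ed50ff83-51d1-4b35-b85c-1cbe6fb812c6'
    if NS in netns.listnetns():
        netns.remove(NS)  # unlinks /var/run/netns/<NS>
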
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.168 257641 INFO nova.virt.libvirt.driver [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Deleting instance files /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0_del#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.169 257641 INFO nova.virt.libvirt.driver [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Deletion of /var/lib/nova/instances/1509e19f-b5e6-496d-a0d9-d6740970fad0_del complete#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.216 257641 INFO nova.compute.manager [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.216 257641 DEBUG oslo.service.loopingcall [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.217 257641 DEBUG nova.compute.manager [-] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.217 257641 DEBUG nova.network.neutron [-] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
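
The loopingcall line above wraps the deallocation in oslo.service's looping-call machinery so it can be retried. A minimal sketch of that pattern; the interval and stop condition are illustrative, not nova's exact wrapper:

    from oslo_service import loopingcall

    def _attempt():
        # ... try one deallocation; on success stop the loop and hand
        # the result back to the caller blocked in wait() ...
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_attempt)
    result = timer.start(interval=2).wait()  # blocks until Done
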
Nov 29 03:35:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 444 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 112 KiB/s rd, 1.6 MiB/s wr, 154 op/s
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.516 257641 DEBUG nova.network.neutron [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Successfully updated port: 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.533 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.533 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquired lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.534 257641 DEBUG nova.network.neutron [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.613 257641 DEBUG nova.compute.manager [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-changed-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.613 257641 DEBUG nova.compute.manager [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Refreshing instance network info cache due to event network-changed-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.614 257641 DEBUG oslo_concurrency.lockutils [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:36 np0005539550 nova_compute[257631]: 2025-11-29 08:35:36.998 257641 DEBUG nova.network.neutron [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.096 257641 DEBUG nova.network.neutron [-] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.116 257641 INFO nova.compute.manager [-] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Took 0.90 seconds to deallocate network for instance.#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.152 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.153 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.181 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405322.1803331, f6eba09a-c0cf-4855-afd5-b265b2f2cadc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.181 257641 INFO nova.compute.manager [-] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.200 257641 DEBUG nova.compute.manager [None req-32fff566-4151-40f4-bd55-347690790cf3 - - - - - -] [instance: f6eba09a-c0cf-4855-afd5-b265b2f2cadc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.220 257641 DEBUG oslo_concurrency.processutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2610274950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.670 257641 DEBUG oslo_concurrency.processutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
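
The ceph df probe above is a plain subprocess run through oslo.concurrency. A sketch with the same argv as the logged CMD; reading total_avail_bytes out of the JSON is an assumption about what the caller wants, not something this log shows:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])
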
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.675 257641 DEBUG nova.compute.provider_tree [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.693 257641 DEBUG nova.scheduler.client.report [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
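
A worked example of what that inventory record means to Placement: usable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1
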
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.713 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.755 257641 INFO nova.scheduler.client.report [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Deleted allocations for instance 1509e19f-b5e6-496d-a0d9-d6740970fad0#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.822 257641 DEBUG oslo_concurrency.lockutils [None req-2a80aebe-bd38-4b5a-aa7a-931a1f7f97e9 c5b0953fb7cc415fb26cf4ffdd5908c6 d4f6db81949d487b853d7567f8a2e6d4 - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.833 257641 DEBUG nova.network.neutron [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Updating instance_info_cache with network_info: [{"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
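
The network_info blob cached above is a list of VIF dicts. A small walker that recovers the fixed IPs from that structure (for the cache entry above it yields port 9b9d47a1-... with 10.100.0.13):

    def fixed_ips(vifs):
        # vifs: the deserialized network_info list from the cache
        for vif in vifs:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        yield vif['id'], ip['address']
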
Nov 29 03:35:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:37.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.867 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Releasing lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.868 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Instance network_info: |[{"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.869 257641 DEBUG oslo_concurrency.lockutils [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.870 257641 DEBUG nova.network.neutron [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Refreshing network info cache for port 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.875 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Start _get_guest_xml network_info=[{"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.881 257641 WARNING nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.886 257641 DEBUG nova.virt.libvirt.host [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.887 257641 DEBUG nova.virt.libvirt.host [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.890 257641 DEBUG nova.virt.libvirt.host [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.890 257641 DEBUG nova.virt.libvirt.host [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.891 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.891 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.892 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.892 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.892 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.893 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.893 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.893 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.893 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.893 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.894 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.894 257641 DEBUG nova.virt.hardware [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
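
The topology search above factors the vCPU count into (sockets, cores, threads) triples within the 65536-per-dimension limits; with 1 vCPU the only candidate is 1:1:1. A simplified sketch of that enumeration, not nova's exact code:

    def possible_topologies(vcpus, limit=65536):
        for sockets in range(1, min(vcpus, limit) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, limit) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= limit:
                    yield sockets, cores, threads

    print(list(possible_topologies(1)))  # [(1, 1, 1)]
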
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.897 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Nov 29 03:35:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.935 257641 DEBUG nova.compute.manager [req-7a347908-bdae-4c01-941e-621f8b318b12 req-25b6c87f-a26a-4ece-8f42-2b8388b226d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.936 257641 DEBUG oslo_concurrency.lockutils [req-7a347908-bdae-4c01-941e-621f8b318b12 req-25b6c87f-a26a-4ece-8f42-2b8388b226d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.936 257641 DEBUG oslo_concurrency.lockutils [req-7a347908-bdae-4c01-941e-621f8b318b12 req-25b6c87f-a26a-4ece-8f42-2b8388b226d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.937 257641 DEBUG oslo_concurrency.lockutils [req-7a347908-bdae-4c01-941e-621f8b318b12 req-25b6c87f-a26a-4ece-8f42-2b8388b226d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1509e19f-b5e6-496d-a0d9-d6740970fad0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.937 257641 DEBUG nova.compute.manager [req-7a347908-bdae-4c01-941e-621f8b318b12 req-25b6c87f-a26a-4ece-8f42-2b8388b226d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] No waiting events found dispatching network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.937 257641 WARNING nova.compute.manager [req-7a347908-bdae-4c01-941e-621f8b318b12 req-25b6c87f-a26a-4ece-8f42-2b8388b226d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received unexpected event network-vif-plugged-39164f5e-3f66-4cf5-8cc3-3903f7387b53 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:35:37 np0005539550 nova_compute[257631]: 2025-11-29 08:35:37.938 257641 DEBUG nova.compute.manager [req-7a347908-bdae-4c01-941e-621f8b318b12 req-25b6c87f-a26a-4ece-8f42-2b8388b226d2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Received event network-vif-deleted-39164f5e-3f66-4cf5-8cc3-3903f7387b53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 443 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 117 KiB/s rd, 2.4 MiB/s wr, 163 op/s
Nov 29 03:35:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:35:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2400580228' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.329 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.398 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
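
The rbd_utils message above is the miss path of an existence probe. A sketch of the same check with the python rbd/rados bindings; the 'vms' pool name is an assumption (nova's conventional images_rbd_pool), not something this log states:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # assumed pool name
        try:
            img = rbd.Image(
                ioctx, 'd05fcfb7-a367-40e8-990a-670bcd45288f_disk.config')
            img.close()
            print('image exists')
        except rbd.ImageNotFound:
            print('rbd image does not exist')
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
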
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.402 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:38.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:35:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039987664' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.869 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.871 257641 DEBUG nova.virt.libvirt.vif [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:35:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-436148882',display_name='tempest-AttachVolumeNegativeTest-server-436148882',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-436148882',id=175,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCa4PGtbp8ahYwB3qC97gPKXr6OBO5ctbB0DolBrrH+TZ2Tddv/f5c9qn4awX2Z5XCmKY7030fZhqyS/fo6PNAw0OhlTifDTnQShCXUzUT0/flncE3V67fmU0MEVzeTjYg==',key_name='tempest-keypair-1952799986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-8fqoocvz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=d05fcfb7-a367-40e8-990a-670bcd45288f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": 
"9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.872 257641 DEBUG nova.network.os_vif_util [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.873 257641 DEBUG nova.network.os_vif_util [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:f9:f5,bridge_name='br-int',has_traffic_filtering=True,id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9d47a1-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.874 257641 DEBUG nova.objects.instance [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid d05fcfb7-a367-40e8-990a-670bcd45288f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.899 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <uuid>d05fcfb7-a367-40e8-990a-670bcd45288f</uuid>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <name>instance-000000af</name>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <nova:name>tempest-AttachVolumeNegativeTest-server-436148882</nova:name>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:35:37</nova:creationTime>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:user uuid="bdbcdbdc435844ee8d866288c969331b">tempest-AttachVolumeNegativeTest-1895715059-project-member</nova:user>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:project uuid="368e3a44279843f5947188dd045d65b6">tempest-AttachVolumeNegativeTest-1895715059</nova:project>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <nova:port uuid="9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <entry name="serial">d05fcfb7-a367-40e8-990a-670bcd45288f</entry>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <entry name="uuid">d05fcfb7-a367-40e8-990a-670bcd45288f</entry>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/d05fcfb7-a367-40e8-990a-670bcd45288f_disk">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/d05fcfb7-a367-40e8-990a-670bcd45288f_disk.config">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:a7:f9:f5"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <target dev="tap9b9d47a1-5a"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/console.log" append="off"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:35:38 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:35:38 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:35:38 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:35:38 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
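Note: the _get_guest_xml dump above is plain libvirt domain XML. A minimal offline sketch for pulling the key fields back out with Python's standard library; the saved file path "domain.xml" is hypothetical, and the namespace URI is the nova:instance one shown in the metadata block:

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("domain.xml").getroot()
    print(root.findtext("name"))        # instance-000000af
    print(root.findtext("memory"))      # 131072 (KiB, i.e. the 128 MiB flavor)
    for src in root.findall("./devices/disk/source"):
        # RBD-backed disks carry protocol="rbd" and name="pool/image"
        print(src.get("protocol"), src.get("name"))
    flavor = root.find("./metadata/nova:instance/nova:flavor", NOVA_NS)
    print(flavor.get("name") if flavor is not None else None)  # m1.nano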
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.900 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Preparing to wait for external event network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.900 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.900 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.901 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
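Note: the acquire/release pair above is the standard oslo.concurrency pattern guarding the per-instance event list while the network-vif-plugged wait is registered. A minimal sketch of that locking idiom, with illustrative names rather than nova's exact code:

    from oslo_concurrency import lockutils

    instance_uuid = "d05fcfb7-a367-40e8-990a-670bcd45288f"

    # in-process semaphore keyed by name, same "<uuid>-events" key as the log
    with lockutils.lock(f"{instance_uuid}-events"):
        # critical section: create or look up the pending event entry
        pass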
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.901 257641 DEBUG nova.virt.libvirt.vif [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:35:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-436148882',display_name='tempest-AttachVolumeNegativeTest-server-436148882',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-436148882',id=175,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCa4PGtbp8ahYwB3qC97gPKXr6OBO5ctbB0DolBrrH+TZ2Tddv/f5c9qn4awX2Z5XCmKY7030fZhqyS/fo6PNAw0OhlTifDTnQShCXUzUT0/flncE3V67fmU0MEVzeTjYg==',key_name='tempest-keypair-1952799986',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-8fqoocvz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:35:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=d05fcfb7-a367-40e8-990a-670bcd45288f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.901 257641 DEBUG nova.network.os_vif_util [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.902 257641 DEBUG nova.network.os_vif_util [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:f9:f5,bridge_name='br-int',has_traffic_filtering=True,id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9d47a1-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.902 257641 DEBUG os_vif [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:f9:f5,bridge_name='br-int',has_traffic_filtering=True,id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9d47a1-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.903 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.904 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.906 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.906 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b9d47a1-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.907 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b9d47a1-5a, col_values=(('external_ids', {'iface-id': '9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:f9:f5', 'vm-uuid': 'd05fcfb7-a367-40e8-990a-670bcd45288f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.908 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:38 np0005539550 NetworkManager[49039]: <info>  [1764405338.9095] manager: (tap9b9d47a1-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/347)
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.911 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.915 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.916 257641 INFO os_vif [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:f9:f5,bridge_name='br-int',has_traffic_filtering=True,id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9d47a1-5a')#033[00m
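Note: the AddPortCommand/DbSetCommand transaction at 08:35:38.906-.907 is roughly what a single ovs-vsctl invocation would do. A sketch of that CLI equivalent via subprocess, with port name, bridge, and external_ids values copied from the log; os-vif itself talks to OVSDB through the IDL rather than the CLI:

    import subprocess

    port = "tap9b9d47a1-5a"
    subprocess.run(
        ["ovs-vsctl",
         "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:a7:f9:f5",
         "external_ids:vm-uuid=d05fcfb7-a367-40e8-990a-670bcd45288f"],
        check=True,
    )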
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.928 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.985 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.985 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.986 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No VIF found with MAC fa:16:3e:a7:f9:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:35:38 np0005539550 nova_compute[257631]: 2025-11-29 08:35:38.986 257641 INFO nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Using config drive#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.009 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Nov 29 03:35:39 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Nov 29 03:35:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.418 257641 INFO nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Creating config drive at /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.428 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3n2v2zv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.568 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz3n2v2zv" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
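Note: the config drive is built by shelling out to mkisofs exactly as logged. A sketch reproducing that invocation from Python; flags and paths are copied verbatim from the processutils line above, and the /tmp staging directory is whatever held the rendered metadata files:

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "/var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmpz3n2v2zv"],  # staging dir with the rendered metadata
        check=True,
    )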
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.601 257641 DEBUG nova.storage.rbd_utils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image d05fcfb7-a367-40e8-990a-670bcd45288f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.605 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config d05fcfb7-a367-40e8-990a-670bcd45288f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.759 257641 DEBUG nova.network.neutron [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Updated VIF entry in instance network info cache for port 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.760 257641 DEBUG nova.network.neutron [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Updating instance_info_cache with network_info: [{"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.787 257641 DEBUG oslo_concurrency.lockutils [req-7dc704a7-9b06-468d-843d-e93ed8983259 req-57db938a-3d71-4fec-9683-39bad5a0d502 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:35:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:39.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.992 257641 DEBUG oslo_concurrency.processutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config d05fcfb7-a367-40e8-990a-670bcd45288f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:39 np0005539550 nova_compute[257631]: 2025-11-29 08:35:39.993 257641 INFO nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Deleting local config drive /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config because it was imported into RBD.#033[00m
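Note: the two log lines above are an import-then-delete flow: push the local ISO into the Ceph "vms" pool as a v2 RBD image, then drop the local copy. A sketch of the same two steps, with command arguments copied from the log:

    import os
    import subprocess

    local = "/var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f/disk.config"
    image = "d05fcfb7-a367-40e8-990a-670bcd45288f_disk.config"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", local, image,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(local)  # local copy is redundant once it lives in RBD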
Nov 29 03:35:40 np0005539550 kernel: tap9b9d47a1-5a: entered promiscuous mode
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.0520] manager: (tap9b9d47a1-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/348)
Nov 29 03:35:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:40Z|00806|binding|INFO|Claiming lport 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de for this chassis.
Nov 29 03:35:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:40Z|00807|binding|INFO|9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de: Claiming fa:16:3e:a7:f9:f5 10.100.0.13
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.052 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.058 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.061 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.063 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.0646] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.0650] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/350)
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.068 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:f9:f5 10.100.0.13'], port_security=['fa:16:3e:a7:f9:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd05fcfb7-a367-40e8-990a-670bcd45288f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b416ca11-2659-4dee-b2d8-449c4248058d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.069 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b bound to our chassis#033[00m
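Note: the "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is the repr of an ovsdbapp row event; the events/table/conditions fields in that repr map onto the constructor. A hedged sketch of the pattern, assuming ovsdbapp's event module API and simplified relative to the agent's real class:

    from ovsdbapp import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row is updated (e.g. chassis set)."""

        def __init__(self):
            # events tuple, table name, conditions; mirrors the repr above
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # called with the new row and the old column values; the agent
            # reacts when row.chassis changes to this chassis
            print('port bound:', row.logical_port)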
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.071 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.084 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8746bda2-203b-4b21-af05-a3e77eb5295b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.085 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0183ad73-01 in ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.087 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0183ad73-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.087 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[77787635-7dc7-4b59-84ac-9275e9857885]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.088 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c40eb4be-9148-4eaf-838a-7f53e2c7cd8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
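Note: the "Creating VETH tap0183ad73-01 in ovnmeta-..." step above boils down to a veth pair with one end moved into the metadata network namespace. A hedged sketch with pyroute2, the library behind neutron's privileged ip_lib; interface and namespace names are copied from the log, but this is illustrative rather than the agent's exact code path:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b"
    netns.create(ns)  # no-op in the real agent if it already exists

    ip = IPRoute()
    # create the pair: outer end stays in the root namespace for br-int
    ip.link("add", ifname="tap0183ad73-00", kind="veth", peer="tap0183ad73-01")
    # move the inner end into the ovnmeta namespace
    idx = ip.link_lookup(ifname="tap0183ad73-01")[0]
    ip.link("set", index=idx, net_ns_fd=ns)
    ip.close()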
Nov 29 03:35:40 np0005539550 systemd-machined[216673]: New machine qemu-95-instance-000000af.
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.101 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[7f38531c-a9aa-4c50-b948-d6b63c6ca4c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 systemd[1]: Started Virtual Machine qemu-95-instance-000000af.
Nov 29 03:35:40 np0005539550 systemd-udevd[364998]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.127 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[22c01ed4-3cfd-485c-92c6-9e55163d15aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.1418] device (tap9b9d47a1-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.1430] device (tap9b9d47a1-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:35:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Nov 29 03:35:40 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.176 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[119df7e5-af60-41cd-a54a-394143e15b1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.186 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2179db32-7051-403e-a29f-d39d7f5cfc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.1876] manager: (tap0183ad73-00): new Veth device (/org/freedesktop/NetworkManager/Devices/351)
Nov 29 03:35:40 np0005539550 systemd-udevd[365004]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.231 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[beccf4f7-5a00-4ff2-9f9b-535dbea34492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.237 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.236 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[be5a48d4-a621-4532-b7ce-3d428fb89b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.242 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 443 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 3.6 MiB/s wr, 144 op/s
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.2663] device (tap0183ad73-00): carrier: link connected
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.269 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e8661c8d-3379-47a4-b4db-8e825f2f99a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:40Z|00808|binding|INFO|Setting lport 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de ovn-installed in OVS
Nov 29 03:35:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:40Z|00809|binding|INFO|Setting lport 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de up in Southbound
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.287 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.297 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a744a6d7-15a8-408e-96ba-7ffa0abf6709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838793, 'reachable_time': 29276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365028, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.318 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4155c5fa-dbfc-4590-8ef4-ecd3d3b15a6e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:aad0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 838793, 'tstamp': 838793}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365029, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.336 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9ffaa0f2-7cd8-4d55-8921-fe8edc0f677c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838793, 'reachable_time': 29276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365030, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.369 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3919346e-c670-4187-88d4-5a252cb0c94f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.439 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f3983b0c-8d8f-4ec8-9ee2-2aaa5f5799cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.440 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.440 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.441 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0183ad73-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.442 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 kernel: tap0183ad73-00: entered promiscuous mode
Nov 29 03:35:40 np0005539550 NetworkManager[49039]: <info>  [1764405340.4432] manager: (tap0183ad73-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.445 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0183ad73-00, col_values=(('external_ids', {'iface-id': 'c88b07d7-f4c8-49a1-9950-8275afef03b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
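[annotation] The single-command transactions above detach tap0183ad73-00 from br-ex, attach it to br-int, and stamp the Interface row with the OVN logical port id so ovn-controller can bind it. A rough equivalent using ovsdbapp's Open_vSwitch API, batched into one transaction for brevity; the unix socket path and timeout are assumptions, while the command names and arguments mirror the log:

    # Sketch: replay the DelPort/AddPort/DbSet sequence with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')   # assumed endpoint
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap0183ad73-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap0183ad73-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap0183ad73-00',
            ('external_ids', {'iface-id': 'c88b07d7-f4c8-49a1-9950-8275afef03b1'})))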
Nov 29 03:35:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:40Z|00810|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.446 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.464 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.465 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.466 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f50638-eaec-45a4-98e9-b56da3b48cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.467 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:35:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:35:40.468 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'env', 'PROCESS_TAG=haproxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
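[annotation] create_config_file has just written the rendered configuration to /var/lib/neutron/ovn-metadata-proxy/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.conf, and the rootwrap command above execs haproxy on it inside the metadata namespace. A simplified sketch of the same launch without rootwrap; the 'haproxy -c' pre-check is an extra illustrative step, not something the agent runs here:

    # Sketch: validate, then start the generated haproxy inside the namespace.
    import subprocess

    netns = 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '0183ad73-05c1-46e4-ba3e-b87d7a948c3b.conf')

    subprocess.run(['haproxy', '-c', '-f', cfg], check=True)   # syntax check only
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg],
                   check=True)                                 # daemonizes itself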
Nov 29 03:35:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
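[annotation] These anonymous HEAD / probes recur every second or two from 192.168.122.100 and .102 for the rest of this window; they look like external health checks against radosgw rather than client traffic. A small parsing sketch for beast access lines, assuming the field layout shown above:

    # Sketch: pull ip/request/status/latency out of a radosgw "beast:" line.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:35:40.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('req'), m.group('status'), m.group('latency'))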
Nov 29 03:35:40 np0005539550 podman[365063]: 2025-11-29 08:35:40.8355933 +0000 UTC m=+0.056556737 container create 98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:35:40 np0005539550 systemd[1]: Started libpod-conmon-98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf.scope.
Nov 29 03:35:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:35:40 np0005539550 podman[365063]: 2025-11-29 08:35:40.802046568 +0000 UTC m=+0.023010035 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:35:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa6c708963911e3890a2e3f21a6b2cfb8a859d261fdeece94845e7e6269f9902/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:40 np0005539550 podman[365063]: 2025-11-29 08:35:40.91796219 +0000 UTC m=+0.138925647 container init 98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:35:40 np0005539550 podman[365063]: 2025-11-29 08:35:40.923732787 +0000 UTC m=+0.144696224 container start 98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:35:40 np0005539550 nova_compute[257631]: 2025-11-29 08:35:40.934 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:40 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [NOTICE]   (365083) : New worker (365085) forked
Nov 29 03:35:40 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [NOTICE]   (365083) : Loading success.
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.236 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405341.2360737, d05fcfb7-a367-40e8-990a-670bcd45288f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.236 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.259 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.264 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405341.2392366, d05fcfb7-a367-40e8-990a-670bcd45288f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.264 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.289 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.293 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.325 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.494 257641 DEBUG nova.compute.manager [req-f9ec03b9-ae6c-434a-b4e6-3481f20cb4fb req-fab3b9ad-59f4-46c1-b7dc-e7bf2a6905cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.495 257641 DEBUG oslo_concurrency.lockutils [req-f9ec03b9-ae6c-434a-b4e6-3481f20cb4fb req-fab3b9ad-59f4-46c1-b7dc-e7bf2a6905cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.496 257641 DEBUG oslo_concurrency.lockutils [req-f9ec03b9-ae6c-434a-b4e6-3481f20cb4fb req-fab3b9ad-59f4-46c1-b7dc-e7bf2a6905cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.497 257641 DEBUG oslo_concurrency.lockutils [req-f9ec03b9-ae6c-434a-b4e6-3481f20cb4fb req-fab3b9ad-59f4-46c1-b7dc-e7bf2a6905cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.497 257641 DEBUG nova.compute.manager [req-f9ec03b9-ae6c-434a-b4e6-3481f20cb4fb req-fab3b9ad-59f4-46c1-b7dc-e7bf2a6905cf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Processing event network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.498 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
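[annotation] The lines above show Neutron's network-vif-plugged notification arriving and immediately completing a wait that the spawn path registered before plugging the VIF. Reduced to its core, the pattern is a per-instance map of named events behind a lock: the handler pops and sets, the spawner blocks. A hedged sketch with plain threading primitives, not nova's actual classes:

    # Sketch: the register/wait/pop pattern behind "Instance event wait
    # completed" above.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}   # (instance_uuid, event_name) -> threading.Event

        def prepare(self, uuid, name):
            with self._lock:
                return self._events.setdefault((uuid, name), threading.Event())

        def pop(self, uuid, name):
            with self._lock:
                return self._events.pop((uuid, name), None)

    events = InstanceEvents()
    key = ('d05fcfb7-a367-40e8-990a-670bcd45288f', 'network-vif-plugged')
    waiter = events.prepare(*key)

    # External-event handler, on receiving the Neutron notification:
    ev = events.pop(*key)
    if ev is not None:
        ev.set()            # completes the spawn-side wait
    # else: "No waiting events found" -> the unexpected-event WARNING seen later

    waiter.wait(timeout=300)   # spawn side; returns immediately here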
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.504 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405341.5029538, d05fcfb7-a367-40e8-990a-670bcd45288f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.504 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.509 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.514 257641 INFO nova.virt.libvirt.driver [-] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Instance spawned successfully.#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.515 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.535 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.544 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.549 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.550 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.551 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.552 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.552 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.553 257641 DEBUG nova.virt.libvirt.driver [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.568 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
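[annotation] Both syncs above (after the Paused and Resumed lifecycle events) bail out because the instance still has task_state 'spawning'. In outline, using nova's power-state codes (0=NOSTATE, 1=RUNNING, 3=PAUSED), the decision looks like this sketch:

    # Sketch: the skip rule behind "During sync_power_state the instance has
    # a pending task (spawning). Skip." above.
    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            return 'skip: pending task (%s)' % task_state
        if db_power_state != vm_power_state:
            return 'update DB: %s -> %s' % (db_power_state, vm_power_state)
        return 'in sync'

    print(sync_power_state(0, 3, 'spawning'))   # Paused event  -> skip
    print(sync_power_state(0, 1, 'spawning'))   # Resumed event -> skip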
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.630 257641 INFO nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Took 6.90 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.631 257641 DEBUG nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.738 257641 INFO nova.compute.manager [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Took 7.96 seconds to build instance.#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.766 257641 DEBUG oslo_concurrency.lockutils [None req-229ef18e-c3b4-47a8-a891-fe7b869767bb bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.772 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405326.7714689, 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.773 257641 INFO nova.compute.manager [-] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.792 257641 DEBUG nova.compute.manager [None req-ba198f12-75e2-4293-82e6-158cdf288dce - - - - - -] [instance: 8effb5bc-4bb3-46de-82ab-c8a7a7da2c4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:35:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:41.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:41 np0005539550 nova_compute[257631]: 2025-11-29 08:35:41.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 466 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.8 MiB/s wr, 241 op/s
Nov 29 03:35:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:42.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:42 np0005539550 nova_compute[257631]: 2025-11-29 08:35:42.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:42 np0005539550 nova_compute[257631]: 2025-11-29 08:35:42.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:35:42 np0005539550 nova_compute[257631]: 2025-11-29 08:35:42.964 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.678 257641 DEBUG nova.compute.manager [req-40007d96-f4fc-4138-a0e6-ea72bf024734 req-4741cd8b-f6b8-474b-b285-2dc6f3c43357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.678 257641 DEBUG oslo_concurrency.lockutils [req-40007d96-f4fc-4138-a0e6-ea72bf024734 req-4741cd8b-f6b8-474b-b285-2dc6f3c43357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.678 257641 DEBUG oslo_concurrency.lockutils [req-40007d96-f4fc-4138-a0e6-ea72bf024734 req-4741cd8b-f6b8-474b-b285-2dc6f3c43357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.679 257641 DEBUG oslo_concurrency.lockutils [req-40007d96-f4fc-4138-a0e6-ea72bf024734 req-4741cd8b-f6b8-474b-b285-2dc6f3c43357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.679 257641 DEBUG nova.compute.manager [req-40007d96-f4fc-4138-a0e6-ea72bf024734 req-4741cd8b-f6b8-474b-b285-2dc6f3c43357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] No waiting events found dispatching network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.679 257641 WARNING nova.compute.manager [req-40007d96-f4fc-4138-a0e6-ea72bf024734 req-4741cd8b-f6b8-474b-b285-2dc6f3c43357 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received unexpected event network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de for instance with vm_state active and task_state None.#033[00m
Nov 29 03:35:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:43.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.910 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:43 np0005539550 nova_compute[257631]: 2025-11-29 08:35:43.956 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 484 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 8.3 MiB/s wr, 270 op/s
Nov 29 03:35:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.732 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:44.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.750 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Triggering sync for uuid d05fcfb7-a367-40e8-990a-670bcd45288f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.751 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.751 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.777 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
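[annotation] The acquire/release pairs here (and around "compute_resources" below) are oslo.concurrency named locks: one holder per name within the process, with wait and hold times logged. Minimal usage sketch; the function body is a hypothetical stand-in for nova's closure of the same name:

    # Sketch: the named-lock pattern used for the per-instance sync above.
    from oslo_concurrency import lockutils

    def query_driver_power_state_and_sync():
        pass   # hypothetical stand-in

    with lockutils.lock('d05fcfb7-a367-40e8-990a-670bcd45288f'):
        query_driver_power_state_and_sync()

    # Decorator form, matching the "compute_resources" lock further down:
    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        pass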
Nov 29 03:35:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:35:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4293257989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:35:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:35:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4293257989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:44 np0005539550 nova_compute[257631]: 2025-11-29 08:35:44.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
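[annotation] The steady "Running periodic task ComputeManager._*" lines come from oslo.service's periodic task machinery; _reclaim_queued_deletes returns immediately because reclaim_instance_interval is unset (<= 0). A sketch of how such tasks are declared; the spacing value and the config constant are assumptions:

    # Sketch: declaring periodic tasks like the ones being run above.
    from oslo_service import periodic_task

    RECLAIM_INSTANCE_INTERVAL = 0   # stand-in for CONF.reclaim_instance_interval

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)   # assumed spacing
        def _poll_rebooting_instances(self, context):
            pass

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if RECLAIM_INSTANCE_INTERVAL <= 0:
                return   # the "skipping..." line above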
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.807 257641 DEBUG nova.compute.manager [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-changed-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.808 257641 DEBUG nova.compute.manager [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Refreshing instance network info cache due to event network-changed-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.808 257641 DEBUG oslo_concurrency.lockutils [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.808 257641 DEBUG oslo_concurrency.lockutils [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.808 257641 DEBUG nova.network.neutron [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Refreshing network info cache for port 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:35:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:45.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.976 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.976 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.976 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.977 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:35:45 np0005539550 nova_compute[257631]: 2025-11-29 08:35:45.977 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 401 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 5.8 MiB/s wr, 327 op/s
Nov 29 03:35:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3850097264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.409 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
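[annotation] The resource audit shells out to ceph df (0.43s here) to size the RBD-backed storage before reporting capacity. A sketch of the same probe; the JSON keys follow ceph's --format=json layout as I understand it and should be treated as assumptions:

    # Sketch: the "ceph df" probe run above, parsed for capacity numbers.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print('total %.1f GiB, avail %.1f GiB'
          % (stats['total_bytes'] / gib, stats['total_avail_bytes'] / gib))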
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.483 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.484 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000af as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.652 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.653 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4058MB free_disk=20.87621307373047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.653 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.653 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:35:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:46.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.795 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance d05fcfb7-a367-40e8-990a-670bcd45288f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.795 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.795 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.816 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.846 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.847 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
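[annotation] The inventory pushed to Placement encodes schedulable capacity as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs with a 4.0 ratio can back 32 vCPU allocations. Worked numbers straight from the dict above:

    # Sketch: effective capacity implied by the inventory above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1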
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.859 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.877 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:35:46 np0005539550 nova_compute[257631]: 2025-11-29 08:35:46.912 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:35:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:35:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1384786927' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:35:47 np0005539550 nova_compute[257631]: 2025-11-29 08:35:47.369 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:35:47 np0005539550 nova_compute[257631]: 2025-11-29 08:35:47.374 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:35:47 np0005539550 nova_compute[257631]: 2025-11-29 08:35:47.697 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:35:47 np0005539550 nova_compute[257631]: 2025-11-29 08:35:47.752 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:35:47 np0005539550 nova_compute[257631]: 2025-11-29 08:35:47.753 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:35:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:47.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 5.1 MiB/s wr, 292 op/s
Nov 29 03:35:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:48.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:48 np0005539550 nova_compute[257631]: 2025-11-29 08:35:48.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:48 np0005539550 nova_compute[257631]: 2025-11-29 08:35:48.913 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:35:48 np0005539550 nova_compute[257631]: 2025-11-29 08:35:48.915 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:48 np0005539550 nova_compute[257631]: 2025-11-29 08:35:48.960 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:35:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Nov 29 03:35:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Nov 29 03:35:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Nov 29 03:35:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:35:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:35:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 368 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 4.7 MiB/s wr, 266 op/s
Nov 29 03:35:50 np0005539550 podman[365236]: 2025-11-29 08:35:50.313803224 +0000 UTC m=+0.052900794 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:35:50 np0005539550 podman[365235]: 2025-11-29 08:35:50.345696373 +0000 UTC m=+0.084929337 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Nov 29 03:35:50 np0005539550 nova_compute[257631]: 2025-11-29 08:35:50.714 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405335.713703, 1509e19f-b5e6-496d-a0d9-d6740970fad0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:35:50 np0005539550 nova_compute[257631]: 2025-11-29 08:35:50.715 257641 INFO nova.compute.manager [-] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] VM Stopped (Lifecycle Event)
Nov 29 03:35:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:51 np0005539550 nova_compute[257631]: 2025-11-29 08:35:51.047 257641 DEBUG nova.network.neutron [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Updated VIF entry in instance network info cache for port 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:35:51 np0005539550 nova_compute[257631]: 2025-11-29 08:35:51.048 257641 DEBUG nova.network.neutron [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Updating instance_info_cache with network_info: [{"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
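The network_info payload in the cache-update entry above is ordinary JSON once the list is sliced out of the line. A small sketch, assuming the structure shown in this log entry, to recover the fixed and floating addresses:

    import json

    # Assumes the log line contains "network_info: [ ... ]" followed by a
    # trailing source reference, as in the entry above.
    def vif_addresses(line):
        payload = line.split("network_info: ", 1)[1]
        payload = payload[: payload.rindex("]") + 1]  # drop trailing source ref
        addrs = []
        for vif in json.loads(payload):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    addrs.append(ip["address"])
                    addrs.extend(f["address"] for f in ip.get("floating_ips", []))
        return addrs

    # For the entry above this yields ['10.100.0.13', '192.168.122.187'].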
Nov 29 03:35:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:51.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.2 MiB/s wr, 141 op/s
Nov 29 03:35:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:52.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:52 np0005539550 nova_compute[257631]: 2025-11-29 08:35:52.798 257641 DEBUG nova.compute.manager [None req-b208365b-bfe8-46a1-9bb2-d2a81c75eae2 - - - - - -] [instance: 1509e19f-b5e6-496d-a0d9-d6740970fad0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:35:52 np0005539550 nova_compute[257631]: 2025-11-29 08:35:52.805 257641 DEBUG oslo_concurrency.lockutils [req-57b2c60a-f3cb-46b6-af4d-3e6362d60398 req-6710d9c0-b125-4526-a475-4c73111696ca 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-d05fcfb7-a367-40e8-990a-670bcd45288f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:35:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:53.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:53 np0005539550 nova_compute[257631]: 2025-11-29 08:35:53.951 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:53 np0005539550 nova_compute[257631]: 2025-11-29 08:35:53.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 325 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 116 op/s
Nov 29 03:35:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:54Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:f9:f5 10.100.0.13
Nov 29 03:35:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:54Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:f9:f5 10.100.0.13
Nov 29 03:35:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:54.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:54 np0005539550 nova_compute[257631]: 2025-11-29 08:35:54.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:35:55 np0005539550 ovn_controller[148680]: 2025-11-29T08:35:55Z|00811|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:35:55 np0005539550 nova_compute[257631]: 2025-11-29 08:35:55.654 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:55.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 343 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 197 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Nov 29 03:35:56 np0005539550 podman[365449]: 2025-11-29 08:35:56.452304862 +0000 UTC m=+0.087148213 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:35:56 np0005539550 podman[365449]: 2025-11-29 08:35:56.598270332 +0000 UTC m=+0.233113663 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:35:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:56.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:35:57 np0005539550 podman[365603]: 2025-11-29 08:35:57.311499941 +0000 UTC m=+0.073797491 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:35:57 np0005539550 podman[365603]: 2025-11-29 08:35:57.32437369 +0000 UTC m=+0.086671170 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:35:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:35:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:57 np0005539550 podman[365668]: 2025-11-29 08:35:57.557642856 +0000 UTC m=+0.064921702 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, description=keepalived for Ceph, distribution-scope=public, com.redhat.component=keepalived-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, name=keepalived, release=1793, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Nov 29 03:35:57 np0005539550 podman[365668]: 2025-11-29 08:35:57.578512123 +0000 UTC m=+0.085790959 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 29 03:35:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:57.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:35:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:35:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 2.5 MiB/s wr, 78 op/s
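The mgr pgmap DBG lines recur every second or two and are enough to trend capacity and client load without querying the cluster. A parser sketch, assuming the field layout seen in these entries:

    import re

    # Layout assumed from the pgmap lines in this log.
    PGMAP_RE = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
        r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
        r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail'
    )

    line = ('pgmap v2854: 305 pgs: 305 active+clean; 352 MiB data, '
            '1.4 GiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 2.5 MiB/s wr, 78 op/s')
    print(PGMAP_RE.search(line).groupdict())
    # {'ver': '2854', 'pgs': '305', 'data': '352 MiB', 'used': '1.4 GiB',
    #  'avail': '20 GiB', 'total': '21 GiB'}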
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:35:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:35:58.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:58 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d8eceb93-b329-4784-b66e-f391687aa22c does not exist
Nov 29 03:35:58 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a585e061-d043-48de-97b2-bd212c3779c3 does not exist
Nov 29 03:35:58 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2dfd0d0d-75a0-45b6-96ad-2699797c4d64 does not exist
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:35:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:35:59 np0005539550 nova_compute[257631]: 2025-11-29 08:35:58.998 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.324634) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359324704, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1025, "num_deletes": 256, "total_data_size": 1520386, "memory_usage": 1546448, "flush_reason": "Manual Compaction"}
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359339328, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1492367, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55823, "largest_seqno": 56847, "table_properties": {"data_size": 1487202, "index_size": 2625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10711, "raw_average_key_size": 18, "raw_value_size": 1476740, "raw_average_value_size": 2586, "num_data_blocks": 114, "num_entries": 571, "num_filter_entries": 571, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405286, "oldest_key_time": 1764405286, "file_creation_time": 1764405359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 14748 microseconds, and 5268 cpu microseconds.
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.339373) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1492367 bytes OK
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.339402) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.342215) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.342229) EVENT_LOG_v1 {"time_micros": 1764405359342224, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.342253) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1515535, prev total WAL file size 1523696, number of live WAL files 2.
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.342703) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353034' seq:0, type:0; will stop at (end)
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1457KB)], [122(12MB)]
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359342743, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 14286246, "oldest_snapshot_seqno": -1}
Nov 29 03:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:35:59
Nov 29 03:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta']
Nov 29 03:35:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 9261 keys, 13172880 bytes, temperature: kUnknown
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359428439, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 13172880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13111710, "index_size": 36930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 242465, "raw_average_key_size": 26, "raw_value_size": 12947386, "raw_average_value_size": 1398, "num_data_blocks": 1421, "num_entries": 9261, "num_filter_entries": 9261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405359, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.428761) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 13172880 bytes
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.430425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.4 rd, 153.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.2 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(18.4) write-amplify(8.8) OK, records in: 9792, records dropped: 531 output_compression: NoCompression
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.430445) EVENT_LOG_v1 {"time_micros": 1764405359430436, "job": 74, "event": "compaction_finished", "compaction_time_micros": 85833, "compaction_time_cpu_micros": 30010, "output_level": 6, "num_output_files": 1, "total_output_size": 13172880, "num_input_records": 9792, "num_output_records": 9261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359430966, "job": 74, "event": "table_file_deletion", "file_number": 124}
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405359433831, "job": 74, "event": "table_file_deletion", "file_number": 122}
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.342626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433930) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:35:59 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:35:59.433932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
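The rocksdb EVENT_LOG_v1 entries above carry a JSON payload after the marker, and the mon's own write-amplify(8.8) figure can be reproduced from them: compaction output bytes divided by the L0 input bytes. A sketch using the numbers from jobs 73 and 74 above:

    import json

    # Slice the JSON payload out of an EVENT_LOG_v1 line.
    def event(line):
        return json.loads(line.split("EVENT_LOG_v1 ", 1)[1])

    l0_input = 1492367   # "file_size" of L0 table #124 (flush, job 73)
    output = 13172880    # "total_output_size" from compaction_finished (job 74)
    print(f"write-amplify ~ {output / l0_input:.1f}")  # 8.8, matching the log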
Nov 29 03:35:59 np0005539550 podman[365970]: 2025-11-29 08:35:59.487800847 +0000 UTC m=+0.051130598 container create 07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:35:59 np0005539550 systemd[1]: Started libpod-conmon-07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7.scope.
Nov 29 03:35:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:35:59 np0005539550 podman[365970]: 2025-11-29 08:35:59.555982488 +0000 UTC m=+0.119312269 container init 07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:35:59 np0005539550 podman[365970]: 2025-11-29 08:35:59.464848508 +0000 UTC m=+0.028178309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:59 np0005539550 podman[365970]: 2025-11-29 08:35:59.56452441 +0000 UTC m=+0.127854181 container start 07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:35:59 np0005539550 podman[365970]: 2025-11-29 08:35:59.568661832 +0000 UTC m=+0.131991673 container attach 07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:35:59 np0005539550 suspicious_ride[365987]: 167 167
Nov 29 03:35:59 np0005539550 podman[365970]: 2025-11-29 08:35:59.590953345 +0000 UTC m=+0.154283106 container died 07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:35:59 np0005539550 systemd[1]: libpod-07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7.scope: Deactivated successfully.
Nov 29 03:35:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0e0e8ea223a4331fc2cdd2657005d6ed7bd20c4d1f20e2d1e4244bff7790a4aa-merged.mount: Deactivated successfully.
Nov 29 03:35:59 np0005539550 podman[365970]: 2025-11-29 08:35:59.629725506 +0000 UTC m=+0.193055267 container remove 07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:35:59 np0005539550 podman[365984]: 2025-11-29 08:35:59.633156951 +0000 UTC m=+0.101634561 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
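The podman health_status entries above come from the periodic healthcheck timers; the same check can be run on demand, with exit code 0 meaning healthy. A sketch using the container name from the entry above:

    import subprocess

    # "podman healthcheck run" executes the container's configured test;
    # returncode 0 means healthy, non-zero means unhealthy or no healthcheck.
    def is_healthy(name):
        return subprocess.run(
            ["podman", "healthcheck", "run", name],
            capture_output=True,
        ).returncode == 0

    print(is_healthy("ovn_controller"))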
Nov 29 03:35:59 np0005539550 systemd[1]: libpod-conmon-07989744ab10df9bc669b9a60acbde4050004f2b895bd6d6ed94e9e97e39a1f7.scope: Deactivated successfully.
Nov 29 03:35:59 np0005539550 podman[366036]: 2025-11-29 08:35:59.8359445 +0000 UTC m=+0.045865089 container create ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gould, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:35:59 np0005539550 systemd[1]: Started libpod-conmon-ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31.scope.
Nov 29 03:35:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:35:59 np0005539550 podman[366036]: 2025-11-29 08:35:59.81540316 +0000 UTC m=+0.025323679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:35:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23f186fd52e2d0096ab1a120132cd1a5b7ae3691b0eb8c83e5adfe789679ab6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23f186fd52e2d0096ab1a120132cd1a5b7ae3691b0eb8c83e5adfe789679ab6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:35:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23f186fd52e2d0096ab1a120132cd1a5b7ae3691b0eb8c83e5adfe789679ab6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23f186fd52e2d0096ab1a120132cd1a5b7ae3691b0eb8c83e5adfe789679ab6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:35:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:35:59.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:35:59 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d23f186fd52e2d0096ab1a120132cd1a5b7ae3691b0eb8c83e5adfe789679ab6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:35:59 np0005539550 podman[366036]: 2025-11-29 08:35:59.948513501 +0000 UTC m=+0.158434020 container init ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:35:59 np0005539550 podman[366036]: 2025-11-29 08:35:59.955392102 +0000 UTC m=+0.165312601 container start ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:35:59 np0005539550 podman[366036]: 2025-11-29 08:35:59.959101264 +0000 UTC m=+0.169021793 container attach ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:36:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 352 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 2.3 MiB/s wr, 71 op/s
Nov 29 03:36:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:00.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:00 np0005539550 pedantic_gould[366052]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:36:00 np0005539550 pedantic_gould[366052]: --> relative data size: 1.0
Nov 29 03:36:00 np0005539550 pedantic_gould[366052]: --> All data devices are unavailable
Nov 29 03:36:00 np0005539550 systemd[1]: libpod-ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31.scope: Deactivated successfully.
Nov 29 03:36:00 np0005539550 podman[366036]: 2025-11-29 08:36:00.805256029 +0000 UTC m=+1.015176548 container died ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:36:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d23f186fd52e2d0096ab1a120132cd1a5b7ae3691b0eb8c83e5adfe789679ab6-merged.mount: Deactivated successfully.
Nov 29 03:36:00 np0005539550 podman[366036]: 2025-11-29 08:36:00.927121632 +0000 UTC m=+1.137042131 container remove ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:36:00 np0005539550 systemd[1]: libpod-conmon-ebe26c829fa8155a7c88b2c98954163d8e6581f4f8c21380a8705697c7fffd31.scope: Deactivated successfully.
Nov 29 03:36:01 np0005539550 podman[366222]: 2025-11-29 08:36:01.530774553 +0000 UTC m=+0.043887129 container create 9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:36:01 np0005539550 systemd[1]: Started libpod-conmon-9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1.scope.
Nov 29 03:36:01 np0005539550 podman[366222]: 2025-11-29 08:36:01.511507475 +0000 UTC m=+0.024620071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:36:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:36:01 np0005539550 podman[366222]: 2025-11-29 08:36:01.633369336 +0000 UTC m=+0.146481952 container init 9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:36:01 np0005539550 podman[366222]: 2025-11-29 08:36:01.640562575 +0000 UTC m=+0.153675161 container start 9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:36:01 np0005539550 podman[366222]: 2025-11-29 08:36:01.645943868 +0000 UTC m=+0.159056484 container attach 9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 29 03:36:01 np0005539550 recursing_hodgkin[366239]: 167 167
Nov 29 03:36:01 np0005539550 systemd[1]: libpod-9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1.scope: Deactivated successfully.
Nov 29 03:36:01 np0005539550 podman[366222]: 2025-11-29 08:36:01.647710702 +0000 UTC m=+0.160823318 container died 9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:36:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4fd74478e6cde7159a0ba88674dee4027d6279f1e1118ee98d52d3f0f039c973-merged.mount: Deactivated successfully.
Nov 29 03:36:01 np0005539550 podman[366222]: 2025-11-29 08:36:01.872451496 +0000 UTC m=+0.385564062 container remove 9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:36:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:01.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:01 np0005539550 systemd[1]: libpod-conmon-9e27940676c7c0864e22604b48938967701a09c35dea0977b468879bc70d2ca1.scope: Deactivated successfully.
Nov 29 03:36:02 np0005539550 podman[366263]: 2025-11-29 08:36:02.066506879 +0000 UTC m=+0.059820025 container create 14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:02 np0005539550 systemd[1]: Started libpod-conmon-14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8.scope.
Nov 29 03:36:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:36:02 np0005539550 podman[366263]: 2025-11-29 08:36:02.042505574 +0000 UTC m=+0.035818740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:36:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6b40bc588144e76ecff2443953e31aefb3cdb3016f71cdbc392d465fcae0f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6b40bc588144e76ecff2443953e31aefb3cdb3016f71cdbc392d465fcae0f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6b40bc588144e76ecff2443953e31aefb3cdb3016f71cdbc392d465fcae0f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab6b40bc588144e76ecff2443953e31aefb3cdb3016f71cdbc392d465fcae0f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:02 np0005539550 podman[366263]: 2025-11-29 08:36:02.200263256 +0000 UTC m=+0.193576422 container init 14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:02 np0005539550 podman[366263]: 2025-11-29 08:36:02.20686646 +0000 UTC m=+0.200179606 container start 14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:36:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 268 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Nov 29 03:36:02 np0005539550 podman[366263]: 2025-11-29 08:36:02.305517317 +0000 UTC m=+0.298830463 container attach 14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:36:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:02.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:02 np0005539550 serene_jennings[366280]: {
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:    "0": [
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:        {
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "devices": [
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "/dev/loop3"
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            ],
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "lv_name": "ceph_lv0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "lv_size": "7511998464",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "name": "ceph_lv0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "tags": {
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.cluster_name": "ceph",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.crush_device_class": "",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.encrypted": "0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.osd_id": "0",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.type": "block",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:                "ceph.vdo": "0"
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            },
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "type": "block",
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:            "vg_name": "ceph_vg0"
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:        }
Nov 29 03:36:02 np0005539550 serene_jennings[366280]:    ]
Nov 29 03:36:02 np0005539550 serene_jennings[366280]: }
Nov 29 03:36:03 np0005539550 systemd[1]: libpod-14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8.scope: Deactivated successfully.
Nov 29 03:36:03 np0005539550 podman[366263]: 2025-11-29 08:36:03.021569726 +0000 UTC m=+1.014882872 container died 14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 03:36:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ab6b40bc588144e76ecff2443953e31aefb3cdb3016f71cdbc392d465fcae0f7-merged.mount: Deactivated successfully.
Nov 29 03:36:03 np0005539550 podman[366263]: 2025-11-29 08:36:03.113983568 +0000 UTC m=+1.107296724 container remove 14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:36:03 np0005539550 systemd[1]: libpod-conmon-14e452afb31a08625cb01ed9b1193ab983e10c98537fbb883b2026f4526245f8.scope: Deactivated successfully.
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.427 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.429 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.429 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.429 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.429 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.431 257641 INFO nova.compute.manager [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Terminating instance#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.432 257641 DEBUG nova.compute.manager [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:36:03 np0005539550 kernel: tap9b9d47a1-5a (unregistering): left promiscuous mode
Nov 29 03:36:03 np0005539550 NetworkManager[49039]: <info>  [1764405363.4950] device (tap9b9d47a1-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:36:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:03Z|00812|binding|INFO|Releasing lport 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de from this chassis (sb_readonly=0)
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:03Z|00813|binding|INFO|Setting lport 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de down in Southbound
Nov 29 03:36:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:03Z|00814|binding|INFO|Removing iface tap9b9d47a1-5a ovn-installed in OVS
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.512 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:03.517 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:f9:f5 10.100.0.13'], port_security=['fa:16:3e:a7:f9:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd05fcfb7-a367-40e8-990a-670bcd45288f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b416ca11-2659-4dee-b2d8-449c4248058d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:03.519 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b unbound from our chassis#033[00m
Nov 29 03:36:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:03.520 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:36:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:03.523 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[22bdc808-f133-4e5a-b88f-3b329573ea9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:03.524 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace which is not needed anymore#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.532 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539550 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000af.scope: Deactivated successfully.
Nov 29 03:36:03 np0005539550 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000af.scope: Consumed 15.184s CPU time.
Nov 29 03:36:03 np0005539550 systemd-machined[216673]: Machine qemu-95-instance-000000af terminated.
Nov 29 03:36:03 np0005539550 kernel: tap9b9d47a1-5a: entered promiscuous mode
Nov 29 03:36:03 np0005539550 kernel: tap9b9d47a1-5a (unregistering): left promiscuous mode
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.721 257641 INFO nova.virt.libvirt.driver [-] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Instance destroyed successfully.#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.722 257641 DEBUG nova.objects.instance [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'resources' on Instance uuid d05fcfb7-a367-40e8-990a-670bcd45288f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.763 257641 DEBUG nova.virt.libvirt.vif [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:35:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-436148882',display_name='tempest-AttachVolumeNegativeTest-server-436148882',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-436148882',id=175,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCa4PGtbp8ahYwB3qC97gPKXr6OBO5ctbB0DolBrrH+TZ2Tddv/f5c9qn4awX2Z5XCmKY7030fZhqyS/fo6PNAw0OhlTifDTnQShCXUzUT0/flncE3V67fmU0MEVzeTjYg==',key_name='tempest-keypair-1952799986',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:35:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-8fqoocvz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:35:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=d05fcfb7-a367-40e8-990a-670bcd45288f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.764 257641 DEBUG nova.network.os_vif_util [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "address": "fa:16:3e:a7:f9:f5", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b9d47a1-5a", "ovs_interfaceid": "9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.766 257641 DEBUG nova.network.os_vif_util [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:f9:f5,bridge_name='br-int',has_traffic_filtering=True,id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9d47a1-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.767 257641 DEBUG os_vif [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:f9:f5,bridge_name='br-int',has_traffic_filtering=True,id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9d47a1-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.770 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.771 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b9d47a1-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.773 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.781 257641 INFO os_vif [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:f9:f5,bridge_name='br-int',has_traffic_filtering=True,id=9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b9d47a1-5a')#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.824 257641 DEBUG nova.compute.manager [req-ebc48526-a8ff-4267-99ac-d31c4abacbb8 req-75500149-6099-4ba5-856b-dfa780fb63bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-vif-unplugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.824 257641 DEBUG oslo_concurrency.lockutils [req-ebc48526-a8ff-4267-99ac-d31c4abacbb8 req-75500149-6099-4ba5-856b-dfa780fb63bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.825 257641 DEBUG oslo_concurrency.lockutils [req-ebc48526-a8ff-4267-99ac-d31c4abacbb8 req-75500149-6099-4ba5-856b-dfa780fb63bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.825 257641 DEBUG oslo_concurrency.lockutils [req-ebc48526-a8ff-4267-99ac-d31c4abacbb8 req-75500149-6099-4ba5-856b-dfa780fb63bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.825 257641 DEBUG nova.compute.manager [req-ebc48526-a8ff-4267-99ac-d31c4abacbb8 req-75500149-6099-4ba5-856b-dfa780fb63bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] No waiting events found dispatching network-vif-unplugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:03 np0005539550 nova_compute[257631]: 2025-11-29 08:36:03.826 257641 DEBUG nova.compute.manager [req-ebc48526-a8ff-4267-99ac-d31c4abacbb8 req-75500149-6099-4ba5-856b-dfa780fb63bf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-vif-unplugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:36:03 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [NOTICE]   (365083) : haproxy version is 2.8.14-c23fe91
Nov 29 03:36:03 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [NOTICE]   (365083) : path to executable is /usr/sbin/haproxy
Nov 29 03:36:03 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [WARNING]  (365083) : Exiting Master process...
Nov 29 03:36:03 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [WARNING]  (365083) : Exiting Master process...
Nov 29 03:36:03 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [ALERT]    (365083) : Current worker (365085) exited with code 143 (Terminated)
Nov 29 03:36:03 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[365078]: [WARNING]  (365083) : All workers exited. Exiting... (0)
Nov 29 03:36:03 np0005539550 systemd[1]: libpod-98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf.scope: Deactivated successfully.
Nov 29 03:36:03 np0005539550 podman[366438]: 2025-11-29 08:36:03.859894347 +0000 UTC m=+0.187520442 container died 98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:36:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:03.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:04 np0005539550 nova_compute[257631]: 2025-11-29 08:36:04.001 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf-userdata-shm.mount: Deactivated successfully.
Nov 29 03:36:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fa6c708963911e3890a2e3f21a6b2cfb8a859d261fdeece94845e7e6269f9902-merged.mount: Deactivated successfully.
Nov 29 03:36:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 358 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 29 03:36:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:04 np0005539550 podman[366438]: 2025-11-29 08:36:04.58131573 +0000 UTC m=+0.908941825 container cleanup 98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:36:04 np0005539550 systemd[1]: libpod-conmon-98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf.scope: Deactivated successfully.
Nov 29 03:36:04 np0005539550 podman[366514]: 2025-11-29 08:36:04.697714046 +0000 UTC m=+0.077894152 container remove 98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.703 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d928dbd5-eca9-4a92-a1bf-4c5c746ff310]: (4, ('Sat Nov 29 08:36:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf)\n98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf\nSat Nov 29 08:36:04 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf)\n98666cbcb798f873f4115d6ef2ef512159f39f5891db98d02e82addcc7149dbf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.704 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c34cb5-8440-4ce4-9035-4b44d587acac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.706 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:04 np0005539550 nova_compute[257631]: 2025-11-29 08:36:04.707 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:04 np0005539550 kernel: tap0183ad73-00: left promiscuous mode
Nov 29 03:36:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:04.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:04 np0005539550 nova_compute[257631]: 2025-11-29 08:36:04.765 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.769 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dba3c445-b7a1-4234-a037-cde7674faacd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.785 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6e405425-79c0-49d4-a93f-05dd63d499f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.786 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1a1f52-5b50-4e4b-8a04-7426416b748f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:04 np0005539550 podman[366543]: 2025-11-29 08:36:04.788895578 +0000 UTC m=+0.047719465 container create 00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.805 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[831de7a6-e071-4b21-9b4e-b34626e3bb82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 838782, 'reachable_time': 30357, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366559, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.808 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:36:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:04.808 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e6900c01-4047-4d82-9bfd-fc85f7407aa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:04 np0005539550 systemd[1]: run-netns-ovnmeta\x2d0183ad73\x2d05c1\x2d46e4\x2dba3e\x2db87d7a948c3b.mount: Deactivated successfully.
Nov 29 03:36:04 np0005539550 systemd[1]: Started libpod-conmon-00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04.scope.
Nov 29 03:36:04 np0005539550 podman[366543]: 2025-11-29 08:36:04.76238962 +0000 UTC m=+0.021213507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:36:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:36:04 np0005539550 podman[366543]: 2025-11-29 08:36:04.981744381 +0000 UTC m=+0.240568268 container init 00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:36:04 np0005539550 podman[366543]: 2025-11-29 08:36:04.989530954 +0000 UTC m=+0.248354821 container start 00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:36:04 np0005539550 stoic_mahavira[366563]: 167 167
Nov 29 03:36:04 np0005539550 systemd[1]: libpod-00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04.scope: Deactivated successfully.
Nov 29 03:36:05 np0005539550 podman[366543]: 2025-11-29 08:36:05.030814498 +0000 UTC m=+0.289638385 container attach 00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:36:05 np0005539550 podman[366543]: 2025-11-29 08:36:05.031964146 +0000 UTC m=+0.290788013 container died 00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:36:05 np0005539550 nova_compute[257631]: 2025-11-29 08:36:05.064 257641 INFO nova.virt.libvirt.driver [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Deleting instance files /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f_del#033[00m
Nov 29 03:36:05 np0005539550 nova_compute[257631]: 2025-11-29 08:36:05.065 257641 INFO nova.virt.libvirt.driver [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Deletion of /var/lib/nova/instances/d05fcfb7-a367-40e8-990a-670bcd45288f_del complete#033[00m
Nov 29 03:36:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-37745c0d8d0e8551228f5718ef01fdee1480ff4f39a078bb383188111380cc1c-merged.mount: Deactivated successfully.
Nov 29 03:36:05 np0005539550 podman[366543]: 2025-11-29 08:36:05.279575676 +0000 UTC m=+0.538399583 container remove 00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:36:05 np0005539550 systemd[1]: libpod-conmon-00eeafccc22d2a90feab47f35c0e28181273b78bd96eed09931c105f27c9ca04.scope: Deactivated successfully.
Nov 29 03:36:05 np0005539550 podman[366587]: 2025-11-29 08:36:05.486316344 +0000 UTC m=+0.078196491 container create 270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:36:05 np0005539550 podman[366587]: 2025-11-29 08:36:05.438729213 +0000 UTC m=+0.030609380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:36:05 np0005539550 systemd[1]: Started libpod-conmon-270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6.scope.
Nov 29 03:36:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:36:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85b0e427e9379aab2112e1908f1f906b3ec89700128b699d8ac221d47b0985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85b0e427e9379aab2112e1908f1f906b3ec89700128b699d8ac221d47b0985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85b0e427e9379aab2112e1908f1f906b3ec89700128b699d8ac221d47b0985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85b0e427e9379aab2112e1908f1f906b3ec89700128b699d8ac221d47b0985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:05 np0005539550 podman[366587]: 2025-11-29 08:36:05.697592244 +0000 UTC m=+0.289472411 container init 270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:36:05 np0005539550 podman[366587]: 2025-11-29 08:36:05.706011852 +0000 UTC m=+0.297892009 container start 270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:36:05 np0005539550 podman[366587]: 2025-11-29 08:36:05.910224297 +0000 UTC m=+0.502104464 container attach 270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:36:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:05.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
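The recurring anonymous "HEAD / HTTP/1.0" entries in the beast access lines, alternating between 192.168.122.100 and 192.168.122.102 every second or so, look like load-balancer health checks against the RGW frontend. A minimal sketch of such a probe, assuming an HTTP frontend; the port is an assumption, not taken from the log:

```python
# Minimal health probe of the kind that would produce the recurring
# anonymous "HEAD / HTTP/1.0" 200 entries above. Port 8080 is assumed.
import http.client

def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
    """Return True if the RGW beast frontend answers HEAD / with 200."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status == 200
    except OSError:
        return False
    finally:
        conn.close()

if __name__ == "__main__":
    print(rgw_alive("192.168.122.100"))
```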
Nov 29 03:36:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 304 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]: {
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]:        "osd_id": 0,
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]:        "type": "bluestore"
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]:    }
Nov 29 03:36:06 np0005539550 nervous_vaughan[366651]: }
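The JSON block printed by the short-lived nervous_vaughan container matches the shape of "ceph-volume raw list" output: a map keyed by OSD UUID with ceph_fsid, device, osd_id, and type fields. A sketch of parsing it; that cephadm produced it with exactly this command is an assumption inferred from the output shape:

```python
# Sketch: parse ceph-volume raw list output of the shape logged above.
# The command emits JSON on stdout; running it here is an assumption
# about how the logged block was produced.
import json
import subprocess

def list_raw_osds() -> dict:
    out = subprocess.run(
        ["ceph-volume", "raw", "list"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

for osd_uuid, osd in list_raw_osds().items():
    print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}, "
          f"cluster {osd['ceph_fsid']}")
```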
Nov 29 03:36:06 np0005539550 systemd[1]: libpod-270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6.scope: Deactivated successfully.
Nov 29 03:36:06 np0005539550 podman[366587]: 2025-11-29 08:36:06.573536088 +0000 UTC m=+1.165416235 container died 270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:36:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bc85b0e427e9379aab2112e1908f1f906b3ec89700128b699d8ac221d47b0985-merged.mount: Deactivated successfully.
Nov 29 03:36:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:06 np0005539550 podman[366587]: 2025-11-29 08:36:06.799515302 +0000 UTC m=+1.391395479 container remove 270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_vaughan, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:06 np0005539550 systemd[1]: libpod-conmon-270d423f6665b8757d37e882dd2d7c12ae1380c1f9a7c3cd951a1eaa9e2367d6.scope: Deactivated successfully.
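The create, init, start, attach, died, remove sequence for container 270d423f... is podman's full event stream for a one-shot container (a "podman run --rm"-style invocation, as cephadm uses for inventory commands). One way to watch the same stream programmatically; the --since window and the JSON key names are assumptions about podman's event output:

```python
# Sketch: follow podman's event stream, which emits the same
# create/init/start/attach/died/remove sequence seen in the log above.
# One JSON object per line; key names ("Status", "Name", "ID") assumed.
import json
import subprocess

proc = subprocess.Popen(
    ["podman", "events", "--format", "json", "--since", "5m"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    ev = json.loads(line)
    print(ev.get("Status"), ev.get("Name"), ev.get("ID", "")[:12])
```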
Nov 29 03:36:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:36:06 np0005539550 nova_compute[257631]: 2025-11-29 08:36:06.900 257641 INFO nova.compute.manager [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Took 3.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:36:06 np0005539550 nova_compute[257631]: 2025-11-29 08:36:06.901 257641 DEBUG oslo.service.loopingcall [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:36:06 np0005539550 nova_compute[257631]: 2025-11-29 08:36:06.901 257641 DEBUG nova.compute.manager [-] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:36:06 np0005539550 nova_compute[257631]: 2025-11-29 08:36:06.902 257641 DEBUG nova.network.neutron [-] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:36:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:36:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f528ff9-d86d-4643-acfb-c145a7ef0c68 does not exist
Nov 29 03:36:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 62f1150c-76ba-420f-b121-704a38e5f99d does not exist
Nov 29 03:36:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 84442fc4-5c8f-4bdc-a7a3-0a6ef5340683 does not exist
Nov 29 03:36:07 np0005539550 nova_compute[257631]: 2025-11-29 08:36:07.563 257641 DEBUG nova.compute.manager [req-ecd814b5-878d-47cc-8c64-2f1ece313c2e req-038b4743-ac8f-4dbb-9c01-000dee2cd349 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:07 np0005539550 nova_compute[257631]: 2025-11-29 08:36:07.563 257641 DEBUG oslo_concurrency.lockutils [req-ecd814b5-878d-47cc-8c64-2f1ece313c2e req-038b4743-ac8f-4dbb-9c01-000dee2cd349 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:07 np0005539550 nova_compute[257631]: 2025-11-29 08:36:07.564 257641 DEBUG oslo_concurrency.lockutils [req-ecd814b5-878d-47cc-8c64-2f1ece313c2e req-038b4743-ac8f-4dbb-9c01-000dee2cd349 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:07 np0005539550 nova_compute[257631]: 2025-11-29 08:36:07.564 257641 DEBUG oslo_concurrency.lockutils [req-ecd814b5-878d-47cc-8c64-2f1ece313c2e req-038b4743-ac8f-4dbb-9c01-000dee2cd349 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:07 np0005539550 nova_compute[257631]: 2025-11-29 08:36:07.564 257641 DEBUG nova.compute.manager [req-ecd814b5-878d-47cc-8c64-2f1ece313c2e req-038b4743-ac8f-4dbb-9c01-000dee2cd349 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] No waiting events found dispatching network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:07 np0005539550 nova_compute[257631]: 2025-11-29 08:36:07.564 257641 WARNING nova.compute.manager [req-ecd814b5-878d-47cc-8c64-2f1ece313c2e req-038b4743-ac8f-4dbb-9c01-000dee2cd349 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received unexpected event network-vif-plugged-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de for instance with vm_state active and task_state deleting.#033[00m
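The Acquiring/acquired/"released" triplets around pop_instance_event above are oslo.concurrency's standard lock logging: every lockutils lock logs on acquire and release with the wait and hold times. A minimal sketch of the same pattern; the lock name mirrors the per-instance "-events" convention from the log, and the body is illustrative:

```python
# Minimal sketch of the oslo.concurrency locking pattern behind the
# "Acquiring lock ... / Lock ... acquired / Lock ... released" lines.
from oslo_concurrency import lockutils

instance_uuid = "d05fcfb7-a367-40e8-990a-670bcd45288f"

with lockutils.lock(f"{instance_uuid}-events"):
    # Critical section: pop any waiting event for this instance, as
    # nova.compute.manager.InstanceEvents.pop_instance_event does.
    pass
```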
Nov 29 03:36:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:07.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.144 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.145 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.164 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.223 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.224 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.234 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.235 257641 INFO nova.compute.claims [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:36:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:08 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:36:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 124 KiB/s rd, 895 KiB/s wr, 57 op/s
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.309 257641 DEBUG nova.network.neutron [-] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.326 257641 INFO nova.compute.manager [-] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Took 1.42 seconds to deallocate network for instance.#033[00m
Nov 29 03:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:36:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.369 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.405 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3697992587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.819 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.827 257641 DEBUG nova.compute.provider_tree [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.843 257641 DEBUG nova.scheduler.client.report [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
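A worked reading of the inventory dict in the line above: placement schedules against (total - reserved) scaled by allocation_ratio per resource class. With the logged values that is 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk:

```python
# Effective schedulable capacity from the inventory dict logged above.
# Values copied verbatim from the log line.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {effective:g} schedulable")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
```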
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.865 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.866 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:36:08 np0005539550 nova_compute[257631]: 2025-11-29 08:36:08.870 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.465s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.035 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.040 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.041 257641 DEBUG nova.network.neutron [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.057 257641 INFO nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:36:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.070 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:36:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:36:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:36:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:36:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.113 257641 DEBUG oslo_concurrency.processutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.197 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.200 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.201 257641 INFO nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Creating image(s)#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.245 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.286 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.315 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.319 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.345 257641 DEBUG nova.policy [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.384 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
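The qemu-img info call above is wrapped in oslo_concurrency.prlimit with a 1 GiB address-space cap (--as=1073741824) and a 30 s CPU limit (--cpu=30), so a malformed image cannot wedge the compute host. A sketch of the same guarded invocation through processutils; the path is the one from the log:

```python
# Sketch of the prlimit-guarded qemu-img info call logged above.
import json
from oslo_concurrency import processutils

base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
out, _err = processutils.execute(
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", base, "--force-share", "--output=json",
    # Mirrors --as=1073741824 --cpu=30 from the logged command line.
    prlimit=processutils.ProcessLimits(address_space=1024**3, cpu_time=30),
)
print(json.loads(out)["format"])
```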
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.385 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.386 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.386 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.415 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.419 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671910080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.569 257641 DEBUG oslo_concurrency.processutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.575 257641 DEBUG nova.compute.provider_tree [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.590 257641 DEBUG nova.scheduler.client.report [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.619 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.665 257641 INFO nova.scheduler.client.report [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Deleted allocations for instance d05fcfb7-a367-40e8-990a-670bcd45288f#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.704 257641 DEBUG nova.compute.manager [req-59147e1d-3295-43f9-8260-3f8dc4649689 req-15073380-95de-4d22-a53f-06ab2caae63f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Received event network-vif-deleted-9b9d47a1-5aa7-4fbe-b5a3-a5ced381b1de external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.780 257641 DEBUG oslo_concurrency.lockutils [None req-60a2f0e4-4e05-4e25-b142-9c8a9bbcde64 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "d05fcfb7-a367-40e8-990a-670bcd45288f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.841 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:09 np0005539550 nova_compute[257631]: 2025-11-29 08:36:09.917 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
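The two lines above record the root-disk build: the flattened base image is imported into the "vms" pool under the instance disk name, then grown to the flavor's 1 GiB (1073741824-byte) root disk. Nova performs the resize through the librbd binding in nova.storage.rbd_utils; the CLI resize below is an assumed equivalent, while the import command mirrors the logged one exactly:

```python
# Sketch of the import-then-resize sequence recorded above.
import subprocess

base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
disk = "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk"

subprocess.run(
    ["rbd", "import", "--pool", "vms", base, disk,
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
# Assumed CLI equivalent of nova's librbd resize to 1073741824 bytes.
subprocess.run(
    ["rbd", "resize", "--pool", "vms", "--size", "1G", disk,
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
```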
Nov 29 03:36:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:36:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:09.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:36:10 np0005539550 nova_compute[257631]: 2025-11-29 08:36:10.034 257641 DEBUG nova.objects.instance [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:10 np0005539550 nova_compute[257631]: 2025-11-29 08:36:10.053 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:36:10 np0005539550 nova_compute[257631]: 2025-11-29 08:36:10.053 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Ensure instance console log exists: /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:36:10 np0005539550 nova_compute[257631]: 2025-11-29 08:36:10.054 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:10 np0005539550 nova_compute[257631]: 2025-11-29 08:36:10.054 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:10 np0005539550 nova_compute[257631]: 2025-11-29 08:36:10.055 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 63 KiB/s rd, 31 KiB/s wr, 37 op/s
Nov 29 03:36:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:10.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:11 np0005539550 nova_compute[257631]: 2025-11-29 08:36:11.372 257641 DEBUG nova.network.neutron [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Successfully created port: 29e91ea5-9f80-4cbc-9d41-14400204e77d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:36:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 324 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.315 257641 DEBUG nova.network.neutron [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Successfully updated port: 29e91ea5-9f80-4cbc-9d41-14400204e77d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.334 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.334 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.334 257641 DEBUG nova.network.neutron [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.432 257641 DEBUG nova.compute.manager [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-changed-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.433 257641 DEBUG nova.compute.manager [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Refreshing instance network info cache due to event network-changed-29e91ea5-9f80-4cbc-9d41-14400204e77d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.433 257641 DEBUG oslo_concurrency.lockutils [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:12 np0005539550 nova_compute[257631]: 2025-11-29 08:36:12.546 257641 DEBUG nova.network.neutron [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:36:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.613 257641 DEBUG nova.network.neutron [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updating instance_info_cache with network_info: [{"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.642 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.642 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Instance network_info: |[{"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.643 257641 DEBUG oslo_concurrency.lockutils [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.643 257641 DEBUG nova.network.neutron [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Refreshing network info cache for port 29e91ea5-9f80-4cbc-9d41-14400204e77d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.647 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Start _get_guest_xml network_info=[{"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
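The network_info structure logged above carries everything the guest XML needs for the VIF: port ID, MAC, tap device name, and the fixed IP from the single IPv4 subnet. A sketch of pulling those fields out; the dict literal is abridged from the logged JSON:

```python
# Extract the useful fields from the network_info structure logged above.
# Abridged copy of the logged VIF entry (one port, one IPv4 subnet).
vif = {
    "id": "29e91ea5-9f80-4cbc-9d41-14400204e77d",
    "address": "fa:16:3e:26:ba:c9",
    "devname": "tap29e91ea5-9f",
    "network": {
        "bridge": "br-int",
        "subnets": [{
            "cidr": "10.100.0.0/28",
            "gateway": {"address": "10.100.0.1"},
            "ips": [{"address": "10.100.0.4", "type": "fixed"}],
        }],
    },
}

fixed = vif["network"]["subnets"][0]["ips"][0]["address"]
print(f'{vif["devname"]}: {vif["address"]} -> {fixed} '
      f'via {vif["network"]["bridge"]}')
```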
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.652 257641 WARNING nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.659 257641 DEBUG nova.virt.libvirt.host [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.660 257641 DEBUG nova.virt.libvirt.host [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.666 257641 DEBUG nova.virt.libvirt.host [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.667 257641 DEBUG nova.virt.libvirt.host [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.668 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.669 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.669 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.670 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.670 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.670 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.670 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.671 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.671 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.672 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.672 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.672 257641 DEBUG nova.virt.hardware [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
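Annotation: the topology negotiation above starts from unset flavor/image limits and preferences (0:0:0), so the only candidate layout for a single vCPU is sockets=1, cores=1, threads=1. A minimal sketch of the enumeration step, assuming any (sockets, cores, threads) triple whose product equals the vCPU count and fits the 65536 ceilings is a candidate; the function name is illustrative, not nova's actual helper:

# Illustrative sketch of the candidate enumeration logged above; not nova's code.
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Yield (sockets, cores, threads) triples whose product is vcpus."""
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the single topology chosen above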
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.676 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:13 np0005539550 nova_compute[257631]: 2025-11-29 08:36:13.778 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.005 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2365117360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.099 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
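Annotation: the 0.422 s subprocess above is how the RBD backend discovers monitor endpoints before building the disk XML. A sketch that reruns the exact command from the log and extracts host:port pairs; treating 'public_addr' as the field name is an assumption based on recent Ceph JSON output:

import json
import subprocess

# Same command as logged by oslo_concurrency.processutils above.
out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
mon_map = json.loads(out)
# 'public_addr' looks like '192.168.122.100:6789/0'; keep the addr:port part.
hosts = [m["public_addr"].split("/")[0] for m in mon_map.get("mons", [])]
print(hosts)  # these become the <host name=... port="6789"/> elements in the guest XML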
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.130 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.135 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 355 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 111 op/s
Nov 29 03:36:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/34143464' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.596 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.598 257641 DEBUG nova.virt.libvirt.vif [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-543805539',display_name='tempest-TestNetworkBasicOps-server-543805539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-543805539',id=176,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFF/RxjmHeYJoBKtMkJZRKki643MI/hU7Na+xYDUOEhEDBg7sBCj9QxFUwnk/E7QtHRYVpORA08hDdxarUP7tgCpKNPzQOSXYVe75FTyCsnu1eJ/9do+X7jFxH8624Jjtg==',key_name='tempest-TestNetworkBasicOps-266677676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-9gi9z70d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:09Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e7deef4-dac8-4a79-b471-0dc9f8fe15d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.599 257641 DEBUG nova.network.os_vif_util [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.600 257641 DEBUG nova.network.os_vif_util [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ba:c9,bridge_name='br-int',has_traffic_filtering=True,id=29e91ea5-9f80-4cbc-9d41-14400204e77d,network=Network(efb56740-84ec-4ebe-9be0-b9334b3a420a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29e91ea5-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
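Annotation: the Converting/Converted pair above maps nova's port dictionary onto an os-vif VIFOpenVSwitch object. A rough field-by-field illustration with plain dicts; the real converter is nova.network.os_vif_util.nova_to_osvif_vif, and this mirrors only the fields visible in this log:

# Hypothetical mapping sketch; values copied from the VIF JSON above.
vif = {
    "id": "29e91ea5-9f80-4cbc-9d41-14400204e77d",
    "address": "fa:16:3e:26:ba:c9",
    "devname": "tap29e91ea5-9f",
    "active": False,
    "details": {"bridge_name": "br-int", "port_filter": True},
}
osvif_fields = {
    "id": vif["id"],
    "address": vif["address"],
    "vif_name": vif["devname"],
    "active": vif["active"],
    "bridge_name": vif["details"]["bridge_name"],
    "has_traffic_filtering": vif["details"]["port_filter"],
}
print(osvif_fields)  # matches the VIFOpenVSwitch(...) repr logged above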
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.601 257641 DEBUG nova.objects.instance [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.618 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <uuid>1e7deef4-dac8-4a79-b471-0dc9f8fe15d1</uuid>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <name>instance-000000b0</name>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkBasicOps-server-543805539</nova:name>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:36:13</nova:creationTime>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <nova:port uuid="29e91ea5-9f80-4cbc-9d41-14400204e77d">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <entry name="serial">1e7deef4-dac8-4a79-b471-0dc9f8fe15d1</entry>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <entry name="uuid">1e7deef4-dac8-4a79-b471-0dc9f8fe15d1</entry>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk.config">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:26:ba:c9"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <target dev="tap29e91ea5-9f"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/console.log" append="off"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:36:14 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:36:14 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:36:14 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:36:14 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
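Annotation: with _get_guest_xml finished, the domain definition above is handed to libvirt. A minimal libvirt-python sketch of starting such XML as a transient domain; the qemu:///system URI and reading the XML from a local file are assumptions for illustration, and nova itself goes through its own guest wrapper rather than this direct call:

import libvirt

# Assumed: the XML dumped above has been saved to this hypothetical file.
with open("instance-000000b0.xml") as f:
    xml = f.read()

conn = libvirt.open("qemu:///system")
dom = conn.createXML(xml, 0)  # define-and-start a transient domain in one call
print(dom.name(), dom.isActive())
conn.close()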
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.620 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Preparing to wait for external event network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.620 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.621 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.621 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
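Annotation: the Acquiring/acquired/released triple above is oslo.concurrency's named internal lock guarding the per-instance event registry while the network-vif-plugged waiter is registered. The same pattern in miniature, with the lock name copied from the log:

from oslo_concurrency import lockutils

# Serializes access to this instance's event bookkeeping, as in the log above.
with lockutils.lock("1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events"):
    pass  # critical section: create or fetch the event to wait on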
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.622 257641 DEBUG nova.virt.libvirt.vif [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:36:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-543805539',display_name='tempest-TestNetworkBasicOps-server-543805539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-543805539',id=176,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFF/RxjmHeYJoBKtMkJZRKki643MI/hU7Na+xYDUOEhEDBg7sBCj9QxFUwnk/E7QtHRYVpORA08hDdxarUP7tgCpKNPzQOSXYVe75FTyCsnu1eJ/9do+X7jFxH8624Jjtg==',key_name='tempest-TestNetworkBasicOps-266677676',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-9gi9z70d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:36:09Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e7deef4-dac8-4a79-b471-0dc9f8fe15d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.623 257641 DEBUG nova.network.os_vif_util [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.623 257641 DEBUG nova.network.os_vif_util [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:ba:c9,bridge_name='br-int',has_traffic_filtering=True,id=29e91ea5-9f80-4cbc-9d41-14400204e77d,network=Network(efb56740-84ec-4ebe-9be0-b9334b3a420a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29e91ea5-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.624 257641 DEBUG os_vif [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ba:c9,bridge_name='br-int',has_traffic_filtering=True,id=29e91ea5-9f80-4cbc-9d41-14400204e77d,network=Network(efb56740-84ec-4ebe-9be0-b9334b3a420a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29e91ea5-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.624 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.625 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.625 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.629 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.629 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29e91ea5-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.630 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29e91ea5-9f, col_values=(('external_ids', {'iface-id': '29e91ea5-9f80-4cbc-9d41-14400204e77d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:ba:c9', 'vm-uuid': '1e7deef4-dac8-4a79-b471-0dc9f8fe15d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
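Annotation: the AddBridgeCommand, AddPortCommand and DbSetCommand transactions above amount to what a single ovs-vsctl invocation would do. A roughly equivalent sketch, with every value copied from the log (shown through subprocess for self-containment; os-vif talks to ovsdb directly instead):

import subprocess

subprocess.check_call([
    "ovs-vsctl",
    "--may-exist", "add-br", "br-int", "--",                      # AddBridgeCommand (a no-op here)
    "--may-exist", "add-port", "br-int", "tap29e91ea5-9f", "--",  # AddPortCommand
    "set", "Interface", "tap29e91ea5-9f",                         # DbSetCommand external_ids
    "external_ids:iface-id=29e91ea5-9f80-4cbc-9d41-14400204e77d",
    "external_ids:iface-status=active",
    "external_ids:attached-mac=fa:16:3e:26:ba:c9",
    "external_ids:vm-uuid=1e7deef4-dac8-4a79-b471-0dc9f8fe15d1",
])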
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.631 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539550 NetworkManager[49039]: <info>  [1764405374.6327] manager: (tap29e91ea5-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.634 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.639 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.640 257641 INFO os_vif [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:ba:c9,bridge_name='br-int',has_traffic_filtering=True,id=29e91ea5-9f80-4cbc-9d41-14400204e77d,network=Network(efb56740-84ec-4ebe-9be0-b9334b3a420a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29e91ea5-9f')#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.715 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.715 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.715 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:26:ba:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.716 257641 INFO nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Using config drive#033[00m
Nov 29 03:36:14 np0005539550 nova_compute[257631]: 2025-11-29 08:36:14.742 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:14.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.289 257641 INFO nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Creating config drive at /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/disk.config#033[00m
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.294 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16stxrxv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Nov 29 03:36:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Nov 29 03:36:15 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.427 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16stxrxv" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.456 257641 DEBUG nova.storage.rbd_utils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.460 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/disk.config 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.500 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.620 257641 DEBUG oslo_concurrency.processutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/disk.config 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.620 257641 INFO nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Deleting local config drive /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/disk.config because it was imported into RBD.#033[00m
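Annotation: the lines above condense the config-drive round trip: build an ISO9660 image with mkisofs, import it into the vms RBD pool, then delete the local copy. A condensed sketch using the logged paths; the -publisher string and -quiet flag from the logged command are omitted and error handling is elided:

import os
import subprocess

iso = "/var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1/disk.config"
subprocess.check_call([
    "/usr/bin/mkisofs", "-o", iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-J", "-r", "-V", "config-2",
    "/tmp/tmp16stxrxv"])                        # staging tempdir from the log
subprocess.check_call([
    "rbd", "import", "--pool", "vms", iso,
    "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_disk.config",
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
os.remove(iso)                                  # the "Deleting local config drive" step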
Nov 29 03:36:15 np0005539550 kernel: tap29e91ea5-9f: entered promiscuous mode
Nov 29 03:36:15 np0005539550 NetworkManager[49039]: <info>  [1764405375.6743] manager: (tap29e91ea5-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.675 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:15Z|00815|binding|INFO|Claiming lport 29e91ea5-9f80-4cbc-9d41-14400204e77d for this chassis.
Nov 29 03:36:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:15Z|00816|binding|INFO|29e91ea5-9f80-4cbc-9d41-14400204e77d: Claiming fa:16:3e:26:ba:c9 10.100.0.4
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.687 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ba:c9 10.100.0.4'], port_security=['fa:16:3e:26:ba:c9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e7deef4-dac8-4a79-b471-0dc9f8fe15d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5cddafe1-0828-4840-9d29-ccd0781cf88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06147688-112f-4da4-a556-bcb085929d88, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=29e91ea5-9f80-4cbc-9d41-14400204e77d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.688 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 29e91ea5-9f80-4cbc-9d41-14400204e77d in datapath efb56740-84ec-4ebe-9be0-b9334b3a420a bound to our chassis#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.689 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network efb56740-84ec-4ebe-9be0-b9334b3a420a#033[00m
Nov 29 03:36:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:15Z|00817|binding|INFO|Setting lport 29e91ea5-9f80-4cbc-9d41-14400204e77d ovn-installed in OVS
Nov 29 03:36:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:15Z|00818|binding|INFO|Setting lport 29e91ea5-9f80-4cbc-9d41-14400204e77d up in Southbound
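Annotation: after ovn-controller claims the lport and marks it up in the Southbound DB, the binding is observable from the chassis. One way to confirm it, assuming ovn-sbctl is available where ovn-controller runs:

import subprocess

out = subprocess.check_output([
    "ovn-sbctl", "--bare", "--columns=chassis,up",
    "find", "Port_Binding",
    "logical_port=29e91ea5-9f80-4cbc-9d41-14400204e77d"])
print(out.decode())  # expect a chassis UUID and "true" once the claim lands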
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:15 np0005539550 systemd-udevd[367088]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.702 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a287e2c-7188-4aa9-a3b9-80bcb608de4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.703 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapefb56740-81 in ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
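Annotation: the agent is wiring a veth pair with one end inside the ovnmeta- namespace so the metadata proxy can answer on the tenant network. It does this through pyroute2 under privsep; an iproute2 equivalent would look roughly like this, with namespace and interface names copied from the log:

import subprocess

ns = "ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a"
subprocess.check_call(["ip", "netns", "add", ns])   # skip if the agent already created it
subprocess.check_call(["ip", "link", "add", "tapefb56740-80",
                       "type", "veth",
                       "peer", "name", "tapefb56740-81", "netns", ns])
subprocess.check_call(["ip", "-n", ns, "link", "set", "tapefb56740-81", "up"])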
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.705 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapefb56740-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.705 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2f7b8836-c0f2-4d62-b4a8-5ec40f595cf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.706 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6afb36f1-5730-4ad1-ad60-cd9417e11bfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 systemd-machined[216673]: New machine qemu-96-instance-000000b0.
Nov 29 03:36:15 np0005539550 NetworkManager[49039]: <info>  [1764405375.7152] device (tap29e91ea5-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:36:15 np0005539550 NetworkManager[49039]: <info>  [1764405375.7165] device (tap29e91ea5-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.717 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[929bf0b7-182d-4de7-8e64-a3ca7bfa9370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 systemd[1]: Started Virtual Machine qemu-96-instance-000000b0.
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.742 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[93b51a2c-9056-4a34-b9c8-cf66902ea3ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.774 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2545b03f-2c2a-4295-97c5-ada6f78a0a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 NetworkManager[49039]: <info>  [1764405375.7817] manager: (tapefb56740-80): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.780 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[99eba617-fe17-415b-9bca-bbb740d38960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.811 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[41b8350a-03aa-40b5-9577-d19d5ea48445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.813 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[63031839-a407-4ffc-819a-2123ba182d46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 NetworkManager[49039]: <info>  [1764405375.8342] device (tapefb56740-80): carrier: link connected
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.840 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[fb185d57-0c12-48f2-8e98-76473699b243]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.860 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e45f9a-f5fb-493d-b634-ad1776728cf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefb56740-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:79:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842349, 'reachable_time': 42694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367121, 'error': None, 'target': 'ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.879 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5d728699-e999-45a8-b018-a353d912b139]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:7995'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 842349, 'tstamp': 842349}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367122, 'error': None, 'target': 'ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.899 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc4dc67-2241-47cd-aa27-88435d4f1339]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapefb56740-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:79:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 235], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842349, 'reachable_time': 42694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367123, 'error': None, 'target': 'ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.932 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a2856bb0-7363-4cf7-9d30-012e92918b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
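The privsep replies above are pyroute2-style netlink messages: each RTM_NEWADDR/RTM_NEWLINK record carries its attributes under 'attrs' as a list of [name, value] pairs rather than a mapping. A minimal sketch of reading one back, assuming only the structure visible in the log (get_attr is a hypothetical helper; pyroute2's own message classes expose a similar accessor):

# Sketch: pull a named attribute out of the pair-list format logged above.
def get_attr(msg, name, default=None):
    for key, value in msg.get('attrs', []):
        if key == name:
            return value
    return default

# Trimmed from the RTM_NEWADDR reply in this log.
newaddr = {
    'family': 10, 'prefixlen': 64, 'index': 2, 'event': 'RTM_NEWADDR',
    'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:7995'], ['IFA_FLAGS', 192]],
}

assert get_attr(newaddr, 'IFA_ADDRESS') == 'fe80::f816:3eff:fee0:7995'
assert get_attr(newaddr, 'IFA_MISSING', 'n/a') == 'n/a'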
Nov 29 03:36:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.991 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[136d693b-4a72-415d-9a03-9e91dec19d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.993 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefb56740-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.993 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:36:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:15.994 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefb56740-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.996 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:15 np0005539550 NetworkManager[49039]: <info>  [1764405375.9974] manager: (tapefb56740-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Nov 29 03:36:15 np0005539550 kernel: tapefb56740-80: entered promiscuous mode
Nov 29 03:36:15 np0005539550 nova_compute[257631]: 2025-11-29 08:36:15.999 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:16.003 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapefb56740-80, col_values=(('external_ids', {'iface-id': 'e42db986-ad96-4aeb-890a-59686188af34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.004 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.004 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:16Z|00819|binding|INFO|Releasing lport e42db986-ad96-4aeb-890a-59686188af34 from this chassis (sb_readonly=0)
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:16.007 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/efb56740-84ec-4ebe-9be0-b9334b3a420a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/efb56740-84ec-4ebe-9be0-b9334b3a420a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:16.019 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c623cf3b-7316-4634-acaa-6d029cb5a2da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:16.020 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-efb56740-84ec-4ebe-9be0-b9334b3a420a
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/efb56740-84ec-4ebe-9be0-b9334b3a420a.pid.haproxy
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID efb56740-84ec-4ebe-9be0-b9334b3a420a
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
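The haproxy_cfg dump above is the configuration the agent writes to /var/lib/neutron/ovn-metadata-proxy/efb56740-84ec-4ebe-9be0-b9334b3a420a.conf before launching haproxy against it (see the rootwrap command two lines below). A minimal sketch, not agent code, of sanity-checking such a rendered file with haproxy's check mode:

import subprocess

# Path copied from the haproxy invocation logged below.
CONF = '/var/lib/neutron/ovn-metadata-proxy/efb56740-84ec-4ebe-9be0-b9334b3a420a.conf'

# 'haproxy -c -f FILE' parses and validates the config without starting a daemon.
check = subprocess.run(['haproxy', '-c', '-f', CONF], capture_output=True, text=True)
print(check.returncode, check.stderr.strip())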
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.021 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:16.022 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'env', 'PROCESS_TAG=haproxy-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/efb56740-84ec-4ebe-9be0-b9334b3a420a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.041 257641 DEBUG nova.network.neutron [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updated VIF entry in instance network info cache for port 29e91ea5-9f80-4cbc-9d41-14400204e77d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.041 257641 DEBUG nova.network.neutron [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updating instance_info_cache with network_info: [{"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.065 257641 DEBUG oslo_concurrency.lockutils [req-3b526588-88ce-47a0-a27d-d919a5d83437 req-364b6541-6cc1-45d0-a8dd-7397c4d40273 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:36:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.8 MiB/s wr, 233 op/s
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.403 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405376.4030209, 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.403 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] VM Started (Lifecycle Event)
Nov 29 03:36:16 np0005539550 podman[367193]: 2025-11-29 08:36:16.412164805 +0000 UTC m=+0.063381693 container create 687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.426 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.431 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405376.4055939, 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.432 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] VM Paused (Lifecycle Event)
Nov 29 03:36:16 np0005539550 systemd[1]: Started libpod-conmon-687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56.scope.
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.464 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.470 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
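The power_state integers in the record above are nova.compute.power_state codes: the database still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED) mid-spawn, and the later "Resumed" sync at 08:36:18 reports 1 (RUNNING). An abridged map, limited to values I am confident of and that appear in this log:

# Abridged from nova.compute.power_state; only the codes relevant here.
POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED', 4: 'SHUTDOWN'}

# DB power_state 0 vs. VM power_state 3, as logged above:
print(POWER_STATES[0], '->', POWER_STATES[3])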
Nov 29 03:36:16 np0005539550 podman[367193]: 2025-11-29 08:36:16.384051668 +0000 UTC m=+0.035268576 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:36:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:36:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22674373a7b621265177dd354a029d243bfb1ed70af03d0b01b65d285deae0fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:36:16 np0005539550 nova_compute[257631]: 2025-11-29 08:36:16.494 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:36:16 np0005539550 podman[367193]: 2025-11-29 08:36:16.499751917 +0000 UTC m=+0.150968825 container init 687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:36:16 np0005539550 podman[367193]: 2025-11-29 08:36:16.50508582 +0000 UTC m=+0.156302698 container start 687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:36:16 np0005539550 neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a[367210]: [NOTICE]   (367214) : New worker (367216) forked
Nov 29 03:36:16 np0005539550 neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a[367210]: [NOTICE]   (367214) : Loading success.
Nov 29 03:36:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:16.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 274 op/s
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.314 257641 DEBUG nova.compute.manager [req-44fc7eaf-09eb-4cba-bef1-cfecffe3f611 req-e9c46d8c-595d-4efc-8a8d-a829e562aa58 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.315 257641 DEBUG oslo_concurrency.lockutils [req-44fc7eaf-09eb-4cba-bef1-cfecffe3f611 req-e9c46d8c-595d-4efc-8a8d-a829e562aa58 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.315 257641 DEBUG oslo_concurrency.lockutils [req-44fc7eaf-09eb-4cba-bef1-cfecffe3f611 req-e9c46d8c-595d-4efc-8a8d-a829e562aa58 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.315 257641 DEBUG oslo_concurrency.lockutils [req-44fc7eaf-09eb-4cba-bef1-cfecffe3f611 req-e9c46d8c-595d-4efc-8a8d-a829e562aa58 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.316 257641 DEBUG nova.compute.manager [req-44fc7eaf-09eb-4cba-bef1-cfecffe3f611 req-e9c46d8c-595d-4efc-8a8d-a829e562aa58 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Processing event network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.317 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.322 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405378.3217282, 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.322 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] VM Resumed (Lifecycle Event)
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.324 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.328 257641 INFO nova.virt.libvirt.driver [-] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Instance spawned successfully.
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.328 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.344 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.348 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.352 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.353 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.353 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.354 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.354 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.354 257641 DEBUG nova.virt.libvirt.driver [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.365 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.427 257641 INFO nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Took 9.23 seconds to spawn the instance on the hypervisor.
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.427 257641 DEBUG nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.523 257641 INFO nova.compute.manager [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Took 10.32 seconds to build instance.
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.550 257641 DEBUG oslo_concurrency.lockutils [None req-979596e9-631c-4ae1-b834-54248fbaeba3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.717 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405363.715567, d05fcfb7-a367-40e8-990a-670bcd45288f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.717 257641 INFO nova.compute.manager [-] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] VM Stopped (Lifecycle Event)
Nov 29 03:36:18 np0005539550 nova_compute[257631]: 2025-11-29 08:36:18.746 257641 DEBUG nova.compute.manager [None req-04a1c656-cace-4802-934e-4914969af1e9 - - - - - -] [instance: d05fcfb7-a367-40e8-990a-670bcd45288f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:18.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:18.967 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:18.968 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:18.968 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:19 np0005539550 nova_compute[257631]: 2025-11-29 08:36:19.007 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:19 np0005539550 nova_compute[257631]: 2025-11-29 08:36:19.631 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003165967278522241 of space, bias 1.0, pg target 0.9497901835566722 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.6487515703517588 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002319176903033687 of space, bias 1.0, pg target 0.6957530709101061 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
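The pg_autoscaler targets above are reproducible from the numbers in this log: pg target = usage ratio x bias x (OSD count x target PGs per OSD), with the 3 up/in OSDs shown in the osdmap a few lines below and, as an assumption, Ceph's default mon_target_pg_per_osd of 100; the result is then quantized to a power of two. A worked check:

# Worked check of the pg_autoscaler output above. OSDS comes from the
# osdmap in this log ("3 total, 3 up, 3 in"); TARGET_PG_PER_OSD is
# assumed to be the Ceph default. Ratios and biases are copied verbatim.
OSDS = 3
TARGET_PG_PER_OSD = 100

pools = {
    '.mgr':               (2.0538165363856318e-05, 1.0),  # logged target 0.006161...
    'vms':                (0.003165967278522241, 1.0),     # logged target 0.949790...
    'volumes':            (0.0021625052345058625, 1.0),    # logged target 0.648751...
    'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),   # logged target 0.001744...
}

for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * OSDS * TARGET_PG_PER_OSD)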
Nov 29 03:36:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.8 MiB/s wr, 274 op/s
Nov 29 03:36:20 np0005539550 nova_compute[257631]: 2025-11-29 08:36:20.340 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:20 np0005539550 nova_compute[257631]: 2025-11-29 08:36:20.412 257641 DEBUG nova.compute.manager [req-79d319ef-fb7c-437e-a8fd-389a9d23f666 req-208093a8-4b6d-47ee-86b7-b360dc4f99ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:36:20 np0005539550 nova_compute[257631]: 2025-11-29 08:36:20.413 257641 DEBUG oslo_concurrency.lockutils [req-79d319ef-fb7c-437e-a8fd-389a9d23f666 req-208093a8-4b6d-47ee-86b7-b360dc4f99ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:20 np0005539550 nova_compute[257631]: 2025-11-29 08:36:20.413 257641 DEBUG oslo_concurrency.lockutils [req-79d319ef-fb7c-437e-a8fd-389a9d23f666 req-208093a8-4b6d-47ee-86b7-b360dc4f99ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:20 np0005539550 nova_compute[257631]: 2025-11-29 08:36:20.413 257641 DEBUG oslo_concurrency.lockutils [req-79d319ef-fb7c-437e-a8fd-389a9d23f666 req-208093a8-4b6d-47ee-86b7-b360dc4f99ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:20 np0005539550 nova_compute[257631]: 2025-11-29 08:36:20.413 257641 DEBUG nova.compute.manager [req-79d319ef-fb7c-437e-a8fd-389a9d23f666 req-208093a8-4b6d-47ee-86b7-b360dc4f99ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] No waiting events found dispatching network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:36:20 np0005539550 nova_compute[257631]: 2025-11-29 08:36:20.413 257641 WARNING nova.compute.manager [req-79d319ef-fb7c-437e-a8fd-389a9d23f666 req-208093a8-4b6d-47ee-86b7-b360dc4f99ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received unexpected event network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d for instance with vm_state active and task_state None.
Nov 29 03:36:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:20.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:21 np0005539550 podman[367229]: 2025-11-29 08:36:21.333848968 +0000 UTC m=+0.070610882 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 03:36:21 np0005539550 podman[367228]: 2025-11-29 08:36:21.339322724 +0000 UTC m=+0.076050727 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:36:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:21.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 4.3 MiB/s wr, 288 op/s
Nov 29 03:36:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:22.294 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:36:22 np0005539550 nova_compute[257631]: 2025-11-29 08:36:22.295 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:22 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:22.295 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:36:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:22.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:24 np0005539550 nova_compute[257631]: 2025-11-29 08:36:24.010 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.9 MiB/s wr, 262 op/s
Nov 29 03:36:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Nov 29 03:36:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Nov 29 03:36:24 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Nov 29 03:36:24 np0005539550 nova_compute[257631]: 2025-11-29 08:36:24.634 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:24.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:25 np0005539550 nova_compute[257631]: 2025-11-29 08:36:25.340 257641 DEBUG nova.compute.manager [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-changed-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:36:25 np0005539550 nova_compute[257631]: 2025-11-29 08:36:25.341 257641 DEBUG nova.compute.manager [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Refreshing instance network info cache due to event network-changed-29e91ea5-9f80-4cbc-9d41-14400204e77d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:36:25 np0005539550 nova_compute[257631]: 2025-11-29 08:36:25.341 257641 DEBUG oslo_concurrency.lockutils [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:36:25 np0005539550 nova_compute[257631]: 2025-11-29 08:36:25.341 257641 DEBUG oslo_concurrency.lockutils [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:36:25 np0005539550 nova_compute[257631]: 2025-11-29 08:36:25.341 257641 DEBUG nova.network.neutron [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Refreshing network info cache for port 29e91ea5-9f80-4cbc-9d41-14400204e77d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:36:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:25.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 921 B/s wr, 130 op/s
Nov 29 03:36:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:26.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:27 np0005539550 nova_compute[257631]: 2025-11-29 08:36:27.395 257641 DEBUG nova.network.neutron [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updated VIF entry in instance network info cache for port 29e91ea5-9f80-4cbc-9d41-14400204e77d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:36:27 np0005539550 nova_compute[257631]: 2025-11-29 08:36:27.396 257641 DEBUG nova.network.neutron [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updating instance_info_cache with network_info: [{"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:36:27 np0005539550 nova_compute[257631]: 2025-11-29 08:36:27.427 257641 DEBUG oslo_concurrency.lockutils [req-5c71b387-ca0e-4bdd-aa62-2d57f87c0e7f req-e7300257-46a3-477f-826e-45d82a00f596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:36:27 np0005539550 nova_compute[257631]: 2025-11-29 08:36:27.434 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 307 B/s wr, 87 op/s
Nov 29 03:36:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:28.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:29 np0005539550 nova_compute[257631]: 2025-11-29 08:36:29.012 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:29 np0005539550 nova_compute[257631]: 2025-11-29 08:36:29.637 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 246 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 307 B/s wr, 87 op/s
Nov 29 03:36:30 np0005539550 podman[367322]: 2025-11-29 08:36:30.373223993 +0000 UTC m=+0.107328323 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:36:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:31 np0005539550 nova_compute[257631]: 2025-11-29 08:36:31.010 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:31Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:ba:c9 10.100.0.4
Nov 29 03:36:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:31Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:ba:c9 10.100.0.4
Nov 29 03:36:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:31.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 284 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 121 KiB/s rd, 2.5 MiB/s wr, 48 op/s
Nov 29 03:36:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:32.297 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Nov 29 03:36:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Nov 29 03:36:33 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Nov 29 03:36:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:33.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:34 np0005539550 nova_compute[257631]: 2025-11-29 08:36:34.014 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 312 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 66 KiB/s rd, 4.0 MiB/s wr, 58 op/s
Nov 29 03:36:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:34 np0005539550 nova_compute[257631]: 2025-11-29 08:36:34.640 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:35.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 342 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 756 KiB/s rd, 6.7 MiB/s wr, 175 op/s
Nov 29 03:36:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:37.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:38 np0005539550 nova_compute[257631]: 2025-11-29 08:36:38.049 257641 INFO nova.compute.manager [None req-b5e8e1f1-71d4-47ed-8f37-368b2cca0535 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Get console output#033[00m
Nov 29 03:36:38 np0005539550 nova_compute[257631]: 2025-11-29 08:36:38.056 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:36:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.8 MiB/s wr, 300 op/s
Nov 29 03:36:38 np0005539550 nova_compute[257631]: 2025-11-29 08:36:38.568 257641 INFO nova.compute.manager [None req-d9d58767-476e-4a39-88f7-42fac27ca2e3 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Get console output#033[00m
Nov 29 03:36:38 np0005539550 nova_compute[257631]: 2025-11-29 08:36:38.575 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:36:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.017 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:36:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/949269328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:36:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:36:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/949269328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:36:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.504 257641 DEBUG nova.compute.manager [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-changed-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.505 257641 DEBUG nova.compute.manager [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Refreshing instance network info cache due to event network-changed-29e91ea5-9f80-4cbc-9d41-14400204e77d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.505 257641 DEBUG oslo_concurrency.lockutils [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.505 257641 DEBUG oslo_concurrency.lockutils [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.505 257641 DEBUG nova.network.neutron [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Refreshing network info cache for port 29e91ea5-9f80-4cbc-9d41-14400204e77d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.603 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.604 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.604 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.604 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.605 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.606 257641 INFO nova.compute.manager [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Terminating instance#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.607 257641 DEBUG nova.compute.manager [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.642 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 kernel: tap29e91ea5-9f (unregistering): left promiscuous mode
Nov 29 03:36:39 np0005539550 NetworkManager[49039]: <info>  [1764405399.6710] device (tap29e91ea5-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.681 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:39Z|00820|binding|INFO|Releasing lport 29e91ea5-9f80-4cbc-9d41-14400204e77d from this chassis (sb_readonly=0)
Nov 29 03:36:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:39Z|00821|binding|INFO|Setting lport 29e91ea5-9f80-4cbc-9d41-14400204e77d down in Southbound
Nov 29 03:36:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:36:39Z|00822|binding|INFO|Removing iface tap29e91ea5-9f ovn-installed in OVS
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.683 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:39.688 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:ba:c9 10.100.0.4'], port_security=['fa:16:3e:26:ba:c9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e7deef4-dac8-4a79-b471-0dc9f8fe15d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5cddafe1-0828-4840-9d29-ccd0781cf88a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=06147688-112f-4da4-a556-bcb085929d88, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=29e91ea5-9f80-4cbc-9d41-14400204e77d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:36:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:39.689 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 29e91ea5-9f80-4cbc-9d41-14400204e77d in datapath efb56740-84ec-4ebe-9be0-b9334b3a420a unbound from our chassis#033[00m
Nov 29 03:36:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:39.690 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network efb56740-84ec-4ebe-9be0-b9334b3a420a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:36:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:39.692 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1f65f318-dcec-4640-8488-57f0dacf4f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:39.693 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a namespace which is not needed anymore#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Nov 29 03:36:39 np0005539550 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000b0.scope: Consumed 14.366s CPU time.
Nov 29 03:36:39 np0005539550 systemd-machined[216673]: Machine qemu-96-instance-000000b0 terminated.
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.826 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.831 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.844 257641 INFO nova.virt.libvirt.driver [-] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Instance destroyed successfully.#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.845 257641 DEBUG nova.objects.instance [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:36:39 np0005539550 neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a[367210]: [NOTICE]   (367214) : haproxy version is 2.8.14-c23fe91
Nov 29 03:36:39 np0005539550 neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a[367210]: [NOTICE]   (367214) : path to executable is /usr/sbin/haproxy
Nov 29 03:36:39 np0005539550 neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a[367210]: [WARNING]  (367214) : Exiting Master process...
Nov 29 03:36:39 np0005539550 neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a[367210]: [ALERT]    (367214) : Current worker (367216) exited with code 143 (Terminated)
Nov 29 03:36:39 np0005539550 neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a[367210]: [WARNING]  (367214) : All workers exited. Exiting... (0)
Nov 29 03:36:39 np0005539550 systemd[1]: libpod-687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56.scope: Deactivated successfully.
Nov 29 03:36:39 np0005539550 podman[367380]: 2025-11-29 08:36:39.863337467 +0000 UTC m=+0.068053718 container died 687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.895 257641 DEBUG nova.virt.libvirt.vif [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:36:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-543805539',display_name='tempest-TestNetworkBasicOps-server-543805539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-543805539',id=176,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFF/RxjmHeYJoBKtMkJZRKki643MI/hU7Na+xYDUOEhEDBg7sBCj9QxFUwnk/E7QtHRYVpORA08hDdxarUP7tgCpKNPzQOSXYVe75FTyCsnu1eJ/9do+X7jFxH8624Jjtg==',key_name='tempest-TestNetworkBasicOps-266677676',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:36:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-9gi9z70d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:36:18Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=1e7deef4-dac8-4a79-b471-0dc9f8fe15d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.896 257641 DEBUG nova.network.os_vif_util [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.897 257641 DEBUG nova.network.os_vif_util [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:ba:c9,bridge_name='br-int',has_traffic_filtering=True,id=29e91ea5-9f80-4cbc-9d41-14400204e77d,network=Network(efb56740-84ec-4ebe-9be0-b9334b3a420a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29e91ea5-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.897 257641 DEBUG os_vif [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:ba:c9,bridge_name='br-int',has_traffic_filtering=True,id=29e91ea5-9f80-4cbc-9d41-14400204e77d,network=Network(efb56740-84ec-4ebe-9be0-b9334b3a420a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29e91ea5-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.899 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.899 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29e91ea5-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.901 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.904 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.911 257641 INFO os_vif [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:ba:c9,bridge_name='br-int',has_traffic_filtering=True,id=29e91ea5-9f80-4cbc-9d41-14400204e77d,network=Network(efb56740-84ec-4ebe-9be0-b9334b3a420a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29e91ea5-9f')#033[00m
Nov 29 03:36:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-22674373a7b621265177dd354a029d243bfb1ed70af03d0b01b65d285deae0fc-merged.mount: Deactivated successfully.
Nov 29 03:36:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56-userdata-shm.mount: Deactivated successfully.
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.975 257641 DEBUG nova.compute.manager [req-ef57ad4a-f9a5-46f7-8ffb-1f6bd20b869e req-aa376308-edf8-46f0-a45e-41d5786910b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-vif-unplugged-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.976 257641 DEBUG oslo_concurrency.lockutils [req-ef57ad4a-f9a5-46f7-8ffb-1f6bd20b869e req-aa376308-edf8-46f0-a45e-41d5786910b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.976 257641 DEBUG oslo_concurrency.lockutils [req-ef57ad4a-f9a5-46f7-8ffb-1f6bd20b869e req-aa376308-edf8-46f0-a45e-41d5786910b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.976 257641 DEBUG oslo_concurrency.lockutils [req-ef57ad4a-f9a5-46f7-8ffb-1f6bd20b869e req-aa376308-edf8-46f0-a45e-41d5786910b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.976 257641 DEBUG nova.compute.manager [req-ef57ad4a-f9a5-46f7-8ffb-1f6bd20b869e req-aa376308-edf8-46f0-a45e-41d5786910b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] No waiting events found dispatching network-vif-unplugged-29e91ea5-9f80-4cbc-9d41-14400204e77d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:39 np0005539550 nova_compute[257631]: 2025-11-29 08:36:39.977 257641 DEBUG nova.compute.manager [req-ef57ad4a-f9a5-46f7-8ffb-1f6bd20b869e req-aa376308-edf8-46f0-a45e-41d5786910b7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-vif-unplugged-29e91ea5-9f80-4cbc-9d41-14400204e77d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:36:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:39.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:39 np0005539550 podman[367380]: 2025-11-29 08:36:39.989327692 +0000 UTC m=+0.194043943 container cleanup 687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:39 np0005539550 systemd[1]: libpod-conmon-687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56.scope: Deactivated successfully.
Nov 29 03:36:40 np0005539550 podman[367436]: 2025-11-29 08:36:40.107267007 +0000 UTC m=+0.094663289 container remove 687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.114 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b4738f4e-0b66-4bd1-b8c6-29ed1b4fc10d]: (4, ('Sat Nov 29 08:36:39 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a (687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56)\n687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56\nSat Nov 29 08:36:39 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a (687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56)\n687ed6d45daa309499d0d79fe76b9562d27161d0afd0b27e50e7ea883e6e1f56\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.116 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e598b031-b5cd-47c9-a9b1-11cf940f146e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.117 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefb56740-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.118 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:40 np0005539550 kernel: tapefb56740-80: left promiscuous mode
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.133 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.138 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8b922f52-609c-4aa2-9dc8-2b370f1e5289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.153 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7558fe4f-9289-4c4c-a2bb-da02e42b539e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.154 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[52026329-75ac-4da1-be2c-0c23c55ecf29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.174 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d37f4bf0-3749-4cf6-b63d-456704324bb4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842343, 'reachable_time': 34232, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367451, 'error': None, 'target': 'ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.177 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-efb56740-84ec-4ebe-9be0-b9334b3a420a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:36:40 np0005539550 systemd[1]: run-netns-ovnmeta\x2defb56740\x2d84ec\x2d4ebe\x2d9be0\x2db9334b3a420a.mount: Deactivated successfully.
Nov 29 03:36:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:36:40.178 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[d88e59a6-6928-4007-b22a-d5517a74ad92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:36:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 6.8 MiB/s wr, 300 op/s
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.775 257641 INFO nova.virt.libvirt.driver [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Deleting instance files /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_del#033[00m
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.776 257641 INFO nova.virt.libvirt.driver [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Deletion of /var/lib/nova/instances/1e7deef4-dac8-4a79-b471-0dc9f8fe15d1_del complete#033[00m
Nov 29 03:36:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:40.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.835 257641 INFO nova.compute.manager [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.835 257641 DEBUG oslo.service.loopingcall [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.836 257641 DEBUG nova.compute.manager [-] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.836 257641 DEBUG nova.network.neutron [-] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:36:40 np0005539550 nova_compute[257631]: 2025-11-29 08:36:40.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:36:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:36:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:41.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.189 257641 DEBUG nova.compute.manager [req-956ad375-9164-4d13-8ea0-c26f28a43764 req-43f4131b-96c8-4e01-a224-1609e2a03fd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.190 257641 DEBUG oslo_concurrency.lockutils [req-956ad375-9164-4d13-8ea0-c26f28a43764 req-43f4131b-96c8-4e01-a224-1609e2a03fd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.190 257641 DEBUG oslo_concurrency.lockutils [req-956ad375-9164-4d13-8ea0-c26f28a43764 req-43f4131b-96c8-4e01-a224-1609e2a03fd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.190 257641 DEBUG oslo_concurrency.lockutils [req-956ad375-9164-4d13-8ea0-c26f28a43764 req-43f4131b-96c8-4e01-a224-1609e2a03fd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.190 257641 DEBUG nova.compute.manager [req-956ad375-9164-4d13-8ea0-c26f28a43764 req-43f4131b-96c8-4e01-a224-1609e2a03fd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] No waiting events found dispatching network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.191 257641 WARNING nova.compute.manager [req-956ad375-9164-4d13-8ea0-c26f28a43764 req-43f4131b-96c8-4e01-a224-1609e2a03fd6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received unexpected event network-vif-plugged-29e91ea5-9f80-4cbc-9d41-14400204e77d for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:36:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 317 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 6.3 MiB/s wr, 489 op/s
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.691 257641 DEBUG nova.network.neutron [-] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.717 257641 INFO nova.compute.manager [-] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Took 1.88 seconds to deallocate network for instance.#033[00m
Nov 29 03:36:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:42.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.836 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.837 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:36:42 np0005539550 nova_compute[257631]: 2025-11-29 08:36:42.928 257641 DEBUG oslo_concurrency.processutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:36:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721966401' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.378 257641 DEBUG oslo_concurrency.processutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.388 257641 DEBUG nova.compute.provider_tree [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.404 257641 DEBUG nova.scheduler.client.report [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.434 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.470 257641 INFO nova.scheduler.client.report [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.529 257641 DEBUG oslo_concurrency.lockutils [None req-6ff77a67-4bea-485a-9795-4e9d8f1d5134 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.536 257641 DEBUG nova.network.neutron [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updated VIF entry in instance network info cache for port 29e91ea5-9f80-4cbc-9d41-14400204e77d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.537 257641 DEBUG nova.network.neutron [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Updating instance_info_cache with network_info: [{"id": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "address": "fa:16:3e:26:ba:c9", "network": {"id": "efb56740-84ec-4ebe-9be0-b9334b3a420a", "bridge": "br-int", "label": "tempest-network-smoke--182630800", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29e91ea5-9f", "ovs_interfaceid": "29e91ea5-9f80-4cbc-9d41-14400204e77d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
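[The instance_info_cache payload above is plain JSON, so per-port addressing can be recovered from the log itself. A small sketch; the shape is inferred from the logged entry, not from a stable nova interface:

import json

def fixed_ips(network_info_json):
    # Yield (port_id, mac, [fixed ips]) per VIF in a network_info blob.
    for vif in json.loads(network_info_json):
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        yield vif["id"], vif["address"], ips

# Applied to the entry above, this yields:
# ('29e91ea5-9f80-4cbc-9d41-14400204e77d', 'fa:16:3e:26:ba:c9', ['10.100.0.4'])]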
Nov 29 03:36:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:36:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/161833045' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:36:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:36:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/161833045' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.555 257641 DEBUG oslo_concurrency.lockutils [req-0636c080-1f16-4803-b78a-83d9fdd3183b req-6714ee46-e936-4262-875e-34114052a317 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-1e7deef4-dac8-4a79-b471-0dc9f8fe15d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.937 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:36:43 np0005539550 nova_compute[257631]: 2025-11-29 08:36:43.938 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:43.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
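[The beast frontend emits one access line per request with a fixed field order: client, user, timestamp, request line, status, body bytes, then latency. A regex sketch matched against the sample above; the pattern is written for this log format only, not as a general parser:

import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:36:43.982 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.001000025s')
m = BEAST.search(line)
print(m['client'], m['status'], float(m['latency']))
# 192.168.122.102 200 0.001000025

The steady anonymous "HEAD /" probes from 192.168.122.100 and .102 every couple of seconds throughout this section are consistent with load-balancer health checks rather than user traffic.]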
Nov 29 03:36:44 np0005539550 nova_compute[257631]: 2025-11-29 08:36:44.020 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.7 MiB/s wr, 470 op/s
Nov 29 03:36:44 np0005539550 nova_compute[257631]: 2025-11-29 08:36:44.308 257641 DEBUG nova.compute.manager [req-6d0ddf82-5f38-418e-992b-d8f4fbcaec9f req-dae75a74-e51b-4c43-aa3a-73a34532ec76 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Received event network-vif-deleted-29e91ea5-9f80-4cbc-9d41-14400204e77d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.338075) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404338124, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 708, "num_deletes": 259, "total_data_size": 856656, "memory_usage": 871416, "flush_reason": "Manual Compaction"}
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404345516, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 846779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56848, "largest_seqno": 57555, "table_properties": {"data_size": 843135, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8701, "raw_average_key_size": 19, "raw_value_size": 835517, "raw_average_value_size": 1856, "num_data_blocks": 63, "num_entries": 450, "num_filter_entries": 450, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405359, "oldest_key_time": 1764405359, "file_creation_time": 1764405404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 7494 microseconds, and 3220 cpu microseconds.
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.345580) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 846779 bytes OK
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.345597) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.347500) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.347520) EVENT_LOG_v1 {"time_micros": 1764405404347516, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.347534) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 852953, prev total WAL file size 852953, number of live WAL files 2.
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.348154) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303130' seq:72057594037927935, type:22 .. '6C6F676D0032323634' seq:0, type:0; will stop at (end)
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(826KB)], [125(12MB)]
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404348223, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 14019659, "oldest_snapshot_seqno": -1}
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 9177 keys, 13876871 bytes, temperature: kUnknown
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404434167, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 13876871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13815118, "index_size": 37704, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 241768, "raw_average_key_size": 26, "raw_value_size": 13651100, "raw_average_value_size": 1487, "num_data_blocks": 1450, "num_entries": 9177, "num_filter_entries": 9177, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.434805) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 13876871 bytes
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.437119) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.9 rd, 161.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(32.9) write-amplify(16.4) OK, records in: 9711, records dropped: 534 output_compression: NoCompression
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.437148) EVENT_LOG_v1 {"time_micros": 1764405404437135, "job": 76, "event": "compaction_finished", "compaction_time_micros": 86079, "compaction_time_cpu_micros": 34048, "output_level": 6, "num_output_files": 1, "total_output_size": 13876871, "num_input_records": 9711, "num_output_records": 9177, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
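[The amplification factors in the compaction summary follow directly from the byte counts in the surrounding EVENT_LOG entries: job 76 read the 846,779-byte L0 table (#127) plus a 13,172,880-byte L6 table (#125, i.e. input_data_size 14,019,659 minus the L0 file) and wrote 13,876,871 bytes. Checking RocksDB's reported figures:

l0_in = 846_779        # table #127, flushed by job 75
total_in = 14_019_659  # input_data_size from compaction_started
out = 13_876_871       # total_output_size from compaction_finished

print(round(out / l0_in, 1))               # 16.4  -> write-amplify
print(round((total_in + out) / l0_in, 1))  # 32.9  -> read-write-amplify]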
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404437592, "job": 76, "event": "table_file_deletion", "file_number": 127}
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405404441781, "job": 76, "event": "table_file_deletion", "file_number": 125}
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.348041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.441927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.441935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.441936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.441938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:36:44.441939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:36:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:44.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:44 np0005539550 nova_compute[257631]: 2025-11-29 08:36:44.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.945 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:36:45 np0005539550 nova_compute[257631]: 2025-11-29 08:36:45.946 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:36:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:45.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 4.2 MiB/s wr, 443 op/s
Nov 29 03:36:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4036876353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.449 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.615 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.616 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4225MB free_disk=20.90103530883789GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
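[All eleven PCI devices in the resource view are either the emulated Intel chipset (vendor 8086) or virtio functions (vendor 1af4), with numa_node null throughout, as expected for a Nova-on-KVM guest. A quick tally over the logged list, abbreviated to the one field used:

from collections import Counter

vendors = ["8086", "1af4", "8086", "8086", "1af4", "8086",
           "1af4", "1af4", "8086", "1af4", "1af4"]  # in logged order
print(Counter(vendors))  # Counter({'1af4': 6, '8086': 5})]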
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.617 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.617 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.675 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.676 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:36:46 np0005539550 nova_compute[257631]: 2025-11-29 08:36:46.706 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:36:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:46.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:36:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/688401139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:36:47 np0005539550 nova_compute[257631]: 2025-11-29 08:36:47.193 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:36:47 np0005539550 nova_compute[257631]: 2025-11-29 08:36:47.199 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:36:47 np0005539550 nova_compute[257631]: 2025-11-29 08:36:47.325 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:36:47 np0005539550 nova_compute[257631]: 2025-11-29 08:36:47.348 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:36:47 np0005539550 nova_compute[257631]: 2025-11-29 08:36:47.349 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:36:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:48 np0005539550 nova_compute[257631]: 2025-11-29 08:36:48.050 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 315 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.8 MiB/s wr, 344 op/s
Nov 29 03:36:48 np0005539550 nova_compute[257631]: 2025-11-29 08:36:48.350 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:48 np0005539550 nova_compute[257631]: 2025-11-29 08:36:48.350 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:48 np0005539550 nova_compute[257631]: 2025-11-29 08:36:48.410 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:48.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:49 np0005539550 nova_compute[257631]: 2025-11-29 08:36:49.022 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:49 np0005539550 nova_compute[257631]: 2025-11-29 08:36:49.905 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:49 np0005539550 nova_compute[257631]: 2025-11-29 08:36:49.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 289 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 267 op/s
Nov 29 03:36:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:50.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 257 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.2 MiB/s wr, 391 op/s
Nov 29 03:36:52 np0005539550 podman[367581]: 2025-11-29 08:36:52.329579191 +0000 UTC m=+0.063661709 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 03:36:52 np0005539550 podman[367580]: 2025-11-29 08:36:52.339057506 +0000 UTC m=+0.073623486 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
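[These podman events record the periodic healthchecks for the ovn_metadata_agent and multipathd containers (both healthy, failing streak 0); the test is the /openstack/healthcheck script mounted into each container. The current status can be read back with podman inspect; a sketch that tolerates the older Healthcheck key spelling used by earlier podman releases (the key fallback is an assumption, not verified against every version):

import json
import subprocess

def health_status(container):
    data = json.loads(subprocess.run(
        ["podman", "inspect", container],
        check=True, capture_output=True, text=True,
    ).stdout)[0]
    state = data["State"]
    # Newer podman reports State.Health; older releases used State.Healthcheck.
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(health_status("ovn_metadata_agent"))  # e.g. healthy]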
Nov 29 03:36:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:52.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:36:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2410836735' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:36:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:54 np0005539550 nova_compute[257631]: 2025-11-29 08:36:54.023 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 264 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 207 op/s
Nov 29 03:36:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:54.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:54 np0005539550 nova_compute[257631]: 2025-11-29 08:36:54.844 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405399.8419983, 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:36:54 np0005539550 nova_compute[257631]: 2025-11-29 08:36:54.844 257641 INFO nova.compute.manager [-] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] VM Stopped (Lifecycle Event)
Nov 29 03:36:54 np0005539550 nova_compute[257631]: 2025-11-29 08:36:54.859 257641 DEBUG nova.compute.manager [None req-83dcf9c3-e4df-4aa7-9ffa-b9f4cf630259 - - - - - -] [instance: 1e7deef4-dac8-4a79-b471-0dc9f8fe15d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:36:54 np0005539550 nova_compute[257631]: 2025-11-29 08:36:54.907 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:55 np0005539550 nova_compute[257631]: 2025-11-29 08:36:55.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:36:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:36:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:36:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Nov 29 03:36:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:56.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Nov 29 03:36:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Nov 29 03:36:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Nov 29 03:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:36:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:36:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:36:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.6 MiB/s wr, 211 op/s
Nov 29 03:36:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:36:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:36:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:36:58.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:36:59 np0005539550 nova_compute[257631]: 2025-11-29 08:36:59.026 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:36:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:36:59
Nov 29 03:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta']
Nov 29 03:36:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:36:59 np0005539550 nova_compute[257631]: 2025-11-29 08:36:59.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 180 op/s
Nov 29 03:37:00 np0005539550 podman[367625]: 2025-11-29 08:37:00.575001367 +0000 UTC m=+0.105593710 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 03:37:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:01.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 286 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 218 KiB/s rd, 3.0 MiB/s wr, 67 op/s
Nov 29 03:37:02 np0005539550 nova_compute[257631]: 2025-11-29 08:37:02.913 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:03.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:04 np0005539550 nova_compute[257631]: 2025-11-29 08:37:04.028 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 288 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 225 KiB/s rd, 2.5 MiB/s wr, 65 op/s
Nov 29 03:37:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:04 np0005539550 nova_compute[257631]: 2025-11-29 08:37:04.912 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:05.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 452 KiB/s rd, 2.5 MiB/s wr, 85 op/s
Nov 29 03:37:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:08.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 299 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 431 KiB/s rd, 2.4 MiB/s wr, 81 op/s
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 35f2b243-7502-436b-ae72-f0b8ff065831 does not exist
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9adb664f-5208-4d35-a806-30d6c530782b does not exist
Nov 29 03:37:08 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a1ef3bb5-89f9-42a1-bfbc-587211bd1148 does not exist
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:37:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:37:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:37:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:37:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:37:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:37:09 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.497 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.498 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.516 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.585 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.586 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.594 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.594 257641 INFO nova.compute.claims [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:37:09 np0005539550 podman[367979]: 2025-11-29 08:37:09.579366443 +0000 UTC m=+0.025249467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.719 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:09 np0005539550 nova_compute[257631]: 2025-11-29 08:37:09.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:09.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:10.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
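The anonymous "HEAD / HTTP/1.0" requests arriving every ~2 s from 192.168.122.100 and .102 look like load-balancer health probes against the radosgw beast frontend. A rough equivalent with the Python standard library; the target host and port are assumptions (the log only identifies the probing clients), and http.client speaks HTTP/1.1 rather than the probes' HTTP/1.0:

    import http.client

    RGW_HOST = "np0005539550"   # assumed; the log does not name the listener
    RGW_PORT = 8080             # assumed beast port

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)          # the probes above get 200 with empty bodies
    conn.close()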
Nov 29 03:37:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 300 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Nov 29 03:37:10 np0005539550 podman[367979]: 2025-11-29 08:37:10.642258134 +0000 UTC m=+1.088141138 container create 64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:37:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2352694884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:10 np0005539550 nova_compute[257631]: 2025-11-29 08:37:10.950 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
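Before claiming disk, nova shells out to `ceph df` (1.231 s here) to size the backing pool. A sketch of the same call and of reading the cluster-wide byte counters from its JSON; the key names match current Ceph releases but are worth verifying on yours:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]        # key names per current releases
    print("total GiB:", stats["total_bytes"] / 2**30)
    print("avail GiB:", stats["total_avail_bytes"] / 2**30)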
Nov 29 03:37:10 np0005539550 nova_compute[257631]: 2025-11-29 08:37:10.956 257641 DEBUG nova.compute.provider_tree [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:37:10 np0005539550 nova_compute[257631]: 2025-11-29 08:37:10.971 257641 DEBUG nova.scheduler.client.report [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:37:10 np0005539550 nova_compute[257631]: 2025-11-29 08:37:10.992 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
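The inventory logged above determines how much placement will hand out: usable capacity per resource class is (total - reserved) * allocation_ratio, so this node offers 32 VCPU, 7168 MB of RAM, and about 17.1 GB of disk:

    # Worked directly from the inventory in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(usable, 1))   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1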
Nov 29 03:37:10 np0005539550 nova_compute[257631]: 2025-11-29 08:37:10.993 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.052 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.053 257641 DEBUG nova.network.neutron [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.075 257641 INFO nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.089 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.185 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.186 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.186 257641 INFO nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Creating image(s)#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.214 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.242 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.269 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.272 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.298 257641 DEBUG nova.policy [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:37:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:37:11 np0005539550 systemd[1]: Started libpod-conmon-64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13.scope.
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.354 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
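qemu-img info runs under oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed image cannot wedge the compute service. The same guard expressed through oslo.concurrency directly, with the base-image path copied from the log:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024**3, cpu_time=30)
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)   # JSON description of the cached base image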
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.355 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.356 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.356 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.509 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:11 np0005539550 nova_compute[257631]: 2025-11-29 08:37:11.515 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:11 np0005539550 podman[367979]: 2025-11-29 08:37:11.868151908 +0000 UTC m=+2.314034932 container init 64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:37:11 np0005539550 podman[367979]: 2025-11-29 08:37:11.876177537 +0000 UTC m=+2.322060541 container start 64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:37:11 np0005539550 elegant_moore[368073]: 167 167
Nov 29 03:37:11 np0005539550 systemd[1]: libpod-64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13.scope: Deactivated successfully.
Nov 29 03:37:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:37:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:11.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:37:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:12.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.141 257641 DEBUG nova.network.neutron [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Successfully created port: c5532af0-4f88-4953-8da5-1553ef8d3b8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:37:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 300 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 378 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Nov 29 03:37:12 np0005539550 podman[367979]: 2025-11-29 08:37:12.430340432 +0000 UTC m=+2.876223466 container attach 64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:12 np0005539550 podman[367979]: 2025-11-29 08:37:12.432785793 +0000 UTC m=+2.878668847 container died 64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:37:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3b0cc1b4915300280a6452d1ff0a1eed9cae1ca5d4d657740fc52054700cc433-merged.mount: Deactivated successfully.
Nov 29 03:37:12 np0005539550 podman[367979]: 2025-11-29 08:37:12.587501449 +0000 UTC m=+3.033384453 container remove 64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_moore, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:37:12 np0005539550 systemd[1]: libpod-conmon-64ed28eca313748b596089f14d7aca7b87a085b77cb5c6072106130b57e8ae13.scope: Deactivated successfully.
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.692 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
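With an RBD image backend, nova imports the cached base file into the vms pool as <uuid>_disk (1.176 s here). The equivalent standalone invocation, arguments copied verbatim from the log:

    import subprocess

    subprocess.check_call(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488",
         "46075bef-e2f4-434f-8d14-deccfa05cd2f_disk",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])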
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.782 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
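The imported image is then grown to the flavor's 1 GiB root disk (m1.nano has root_gb=1). A sketch of the same resize through the python3-rados/python3-rbd bindings rather than the CLI:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "46075bef-e2f4-434f-8d14-deccfa05cd2f_disk") as image:
                image.resize(1 * 1024**3)   # 1073741824 bytes, as logged
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()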
Nov 29 03:37:12 np0005539550 podman[368153]: 2025-11-29 08:37:12.798588164 +0000 UTC m=+0.073805501 container create 2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:37:12 np0005539550 podman[368153]: 2025-11-29 08:37:12.763657738 +0000 UTC m=+0.038875105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:12 np0005539550 systemd[1]: Started libpod-conmon-2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6.scope.
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.910 257641 DEBUG nova.objects.instance [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 46075bef-e2f4-434f-8d14-deccfa05cd2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1a904cfbc01435bcb2d73f7cce3862cf30eccd8413b9675c5c5059d34b7280/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1a904cfbc01435bcb2d73f7cce3862cf30eccd8413b9675c5c5059d34b7280/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1a904cfbc01435bcb2d73f7cce3862cf30eccd8413b9675c5c5059d34b7280/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1a904cfbc01435bcb2d73f7cce3862cf30eccd8413b9675c5c5059d34b7280/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1a904cfbc01435bcb2d73f7cce3862cf30eccd8413b9675c5c5059d34b7280/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.936 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.936 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Ensure instance console log exists: /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.937 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.937 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:12 np0005539550 nova_compute[257631]: 2025-11-29 08:37:12.938 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
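The acquire/release pairs around _allocate_mdevs come from oslo.concurrency's named locks, which log how long each caller waited for and held the lock. The same pattern in isolation; the function body here is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # Runs with the named lock held, matching the lines above.
        return []

    # The same lock is also available as a context manager:
    with lockutils.lock("vgpu_resources"):
        pass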
Nov 29 03:37:13 np0005539550 podman[368153]: 2025-11-29 08:37:13.001038145 +0000 UTC m=+0.276255512 container init 2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:37:13 np0005539550 podman[368153]: 2025-11-29 08:37:13.007314761 +0000 UTC m=+0.282532108 container start 2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:13 np0005539550 podman[368153]: 2025-11-29 08:37:13.012010498 +0000 UTC m=+0.287227845 container attach 2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.032 257641 DEBUG nova.network.neutron [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Successfully updated port: c5532af0-4f88-4953-8da5-1553ef8d3b8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.053 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.053 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.053 257641 DEBUG nova.network.neutron [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.223 257641 DEBUG nova.network.neutron [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.387 257641 DEBUG nova.compute.manager [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-changed-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.388 257641 DEBUG nova.compute.manager [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Refreshing instance network info cache due to event network-changed-c5532af0-4f88-4953-8da5-1553ef8d3b8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:13 np0005539550 nova_compute[257631]: 2025-11-29 08:37:13.389 257641 DEBUG oslo_concurrency.lockutils [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:13 np0005539550 objective_lovelace[368215]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:37:13 np0005539550 objective_lovelace[368215]: --> relative data size: 1.0
Nov 29 03:37:13 np0005539550 objective_lovelace[368215]: --> All data devices are unavailable
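The short-lived objective_lovelace container prints ceph-volume output ("0 physical, 1 LVM ... All data devices are unavailable"), consistent with a cephadm-driven `ceph-volume lvm batch --report` that found nothing deployable. A hedged way to see what ceph-volume considers usable, assuming the inventory subcommand of the Ceph release in that image:

    import json
    import subprocess

    # The log does not show the exact command cephadm ran inside the container;
    # `ceph-volume inventory` only illustrates the same availability check.
    out = subprocess.check_output(["ceph-volume", "inventory", "--format", "json"])
    for dev in json.loads(out):
        print(dev["path"], "available:", dev["available"], dev.get("rejected_reasons"))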
Nov 29 03:37:13 np0005539550 systemd[1]: libpod-2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6.scope: Deactivated successfully.
Nov 29 03:37:13 np0005539550 podman[368153]: 2025-11-29 08:37:13.863173477 +0000 UTC m=+1.138390824 container died 2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8d1a904cfbc01435bcb2d73f7cce3862cf30eccd8413b9675c5c5059d34b7280-merged.mount: Deactivated successfully.
Nov 29 03:37:13 np0005539550 podman[368153]: 2025-11-29 08:37:13.934949197 +0000 UTC m=+1.210166534 container remove 2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lovelace, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:37:13 np0005539550 systemd[1]: libpod-conmon-2c8497f6d2b7c03b8dff11e92f98e3ac641c7a323affbce40c9fb1daebe389f6.scope: Deactivated successfully.
Nov 29 03:37:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.031 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1206113258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 305 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 265 KiB/s rd, 867 KiB/s wr, 46 op/s
Nov 29 03:37:14 np0005539550 podman[368391]: 2025-11-29 08:37:14.501412946 +0000 UTC m=+0.037574143 container create 9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:37:14 np0005539550 systemd[1]: Started libpod-conmon-9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d.scope.
Nov 29 03:37:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:14 np0005539550 podman[368391]: 2025-11-29 08:37:14.577020121 +0000 UTC m=+0.113181378 container init 9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:37:14 np0005539550 podman[368391]: 2025-11-29 08:37:14.486411514 +0000 UTC m=+0.022572731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.581 257641 DEBUG nova.network.neutron [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updating instance_info_cache with network_info: [{"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
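The cache update above carries the full VIF description. A sketch of reading the fields nova needs later for interface plugging and guest configuration, with the structure abbreviated to the keys the sketch touches (values copied from the log):

    network_info = [{
        "id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f",
        "address": "fa:16:3e:ee:16:c6",
        "devname": "tapc5532af0-4f",
        "network": {"bridge": "br-int",
                    "meta": {"mtu": 1442},
                    "subnets": [{"ips": [{"address": "10.100.0.13"}]}]},
    }]
    vif = network_info[0]
    print(vif["devname"], vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          "mtu", vif["network"]["meta"]["mtu"])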
Nov 29 03:37:14 np0005539550 podman[368391]: 2025-11-29 08:37:14.585249665 +0000 UTC m=+0.121410862 container start 9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:37:14 np0005539550 podman[368391]: 2025-11-29 08:37:14.589031719 +0000 UTC m=+0.125192976 container attach 9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:37:14 np0005539550 nifty_hoover[368408]: 167 167
Nov 29 03:37:14 np0005539550 systemd[1]: libpod-9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d.scope: Deactivated successfully.
Nov 29 03:37:14 np0005539550 podman[368391]: 2025-11-29 08:37:14.59029096 +0000 UTC m=+0.126452157 container died 9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.602 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.603 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Instance network_info: |[{"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.604 257641 DEBUG oslo_concurrency.lockutils [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.605 257641 DEBUG nova.network.neutron [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Refreshing network info cache for port c5532af0-4f88-4953-8da5-1553ef8d3b8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.608 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Start _get_guest_xml network_info=[{"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:37:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d50ab14fd36fbb8fb2b6c1905c34176dee22b8969892ae45a3fdc116610a1335-merged.mount: Deactivated successfully.
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.615 257641 WARNING nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.620 257641 DEBUG nova.virt.libvirt.host [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.621 257641 DEBUG nova.virt.libvirt.host [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:37:14 np0005539550 podman[368391]: 2025-11-29 08:37:14.624709864 +0000 UTC m=+0.160871071 container remove 9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hoover, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.628 257641 DEBUG nova.virt.libvirt.host [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.629 257641 DEBUG nova.virt.libvirt.host [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
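[annotation] The two probes above look for a cgroup v1 CPU controller first, then fall back to cgroup v2 and find one. A minimal sketch of the v2 side of such a check, assuming the standard unified-hierarchy mount at /sys/fs/cgroup (this is the kernel interface the check relies on, not nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # cgroup v2 lists its available controllers in one file at the root
        # of the unified hierarchy; "cpu" appearing there matches the
        # "CPU controller found on host" result logged above.
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # no unified (v2) hierarchy mounted
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())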
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.630 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.630 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.631 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.631 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.631 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.632 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.632 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.632 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.632 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.633 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.633 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.633 257641 DEBUG nova.virt.hardware [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
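[annotation] The hardware.py lines above walk from flavor/image limits (0 meaning "no constraint") to a single candidate topology for one vCPU. A minimal sketch of that enumeration, assuming the only rule is that sockets * cores * threads must equal the vCPU count within the logged 65536 limits; nova's real code adds preference ordering on top:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate every (sockets, cores, threads) factorisation of the
        # vCPU count that respects the per-dimension limits.
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, max_cores) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    # One vCPU admits exactly one topology, matching the log:
    print(possible_topologies(1))  # [(1, 1, 1)]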
Nov 29 03:37:14 np0005539550 systemd[1]: libpod-conmon-9cc824b233aae50f449c4480adc638f3de40d2ca37d1ac8529a9bfb2f505313d.scope: Deactivated successfully.
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.637 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:14 np0005539550 podman[368435]: 2025-11-29 08:37:14.815479755 +0000 UTC m=+0.045180422 container create 1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:14 np0005539550 systemd[1]: Started libpod-conmon-1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516.scope.
Nov 29 03:37:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8761a46e81ba522d331530755e2a94027be6b8fecf6c1242b40c1d89ab0e05/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8761a46e81ba522d331530755e2a94027be6b8fecf6c1242b40c1d89ab0e05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8761a46e81ba522d331530755e2a94027be6b8fecf6c1242b40c1d89ab0e05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed8761a46e81ba522d331530755e2a94027be6b8fecf6c1242b40c1d89ab0e05/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:14 np0005539550 podman[368435]: 2025-11-29 08:37:14.793413308 +0000 UTC m=+0.023113995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:14 np0005539550 podman[368435]: 2025-11-29 08:37:14.904068772 +0000 UTC m=+0.133769449 container init 1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_einstein, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:14 np0005539550 podman[368435]: 2025-11-29 08:37:14.911180198 +0000 UTC m=+0.140880865 container start 1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:37:14 np0005539550 podman[368435]: 2025-11-29 08:37:14.914218384 +0000 UTC m=+0.143919081 container attach 1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_einstein, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:14 np0005539550 nova_compute[257631]: 2025-11-29 08:37:14.960 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1854125805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.113 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.138 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
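[annotation] rbd_utils reports the image as missing by probing it directly. A minimal sketch of such a probe with the python-rados/python-rbd bindings, assuming the client.openstack credentials from the logged command lines; opening a nonexistent image raises rbd.ImageNotFound:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            try:
                # Opening the image is the probe; close it again if it exists.
                rbd.Image(ioctx, "46075bef-e2f4-434f-8d14-deccfa05cd2f_disk.config").close()
                print("exists")
            except rbd.ImageNotFound:
                print("does not exist")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()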
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.143 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2847203017' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.574 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
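[annotation] Both runs of the command return 0 in under half a second. A minimal sketch of issuing the same call and reading the monitor endpoints out of its JSON, which is presumably where the three 192.168.122.x <host> entries in the domain XML below come from:

    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    # The dump carries a "mons" list; address field names vary slightly
    # across Ceph releases, hence the defensive .get() calls.
    for mon in json.loads(out).get("mons", []):
        print(mon.get("name"), mon.get("public_addr") or mon.get("public_addrs"))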
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.576 257641 DEBUG nova.virt.libvirt.vif [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-874759565',display_name='tempest-TestNetworkBasicOps-server-874759565',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-874759565',id=179,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK59mMwdkIRco5e+wx7z1qyWEKx2l9WSUMzqdwFM2UJMLW1lFhIC1s/A3JYUvXkYkjgGNZRfQ1g+CuIKY+jbDApbG/uQdsxXxpYYXSx3CJ40Y4Osyky+kLIItP0l/wcDVA==',key_name='tempest-TestNetworkBasicOps-2011738495',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-as5blxm9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:11Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=46075bef-e2f4-434f-8d14-deccfa05cd2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.577 257641 DEBUG nova.network.os_vif_util [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.577 257641 DEBUG nova.network.os_vif_util [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:16:c6,bridge_name='br-int',has_traffic_filtering=True,id=c5532af0-4f88-4953-8da5-1553ef8d3b8f,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532af0-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.579 257641 DEBUG nova.objects.instance [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 46075bef-e2f4-434f-8d14-deccfa05cd2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.597 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <uuid>46075bef-e2f4-434f-8d14-deccfa05cd2f</uuid>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <name>instance-000000b3</name>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkBasicOps-server-874759565</nova:name>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:37:14</nova:creationTime>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <nova:port uuid="c5532af0-4f88-4953-8da5-1553ef8d3b8f">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <entry name="serial">46075bef-e2f4-434f-8d14-deccfa05cd2f</entry>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <entry name="uuid">46075bef-e2f4-434f-8d14-deccfa05cd2f</entry>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/46075bef-e2f4-434f-8d14-deccfa05cd2f_disk">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/46075bef-e2f4-434f-8d14-deccfa05cd2f_disk.config">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:ee:16:c6"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <target dev="tapc5532af0-4f"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/console.log" append="off"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:37:15 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:37:15 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:37:15 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:37:15 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
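[annotation] End of the generated domain definition. A minimal sketch of what consuming XML like this looks like through the libvirt python binding; nova drives the same API from its driver, and the file name here is hypothetical:

    import libvirt  # libvirt-python

    with open("instance-000000b3.xml") as f:  # hypothetical dump of the XML above
        xml = f.read()

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # and boot it
    finally:
        conn.close()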
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.598 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Preparing to wait for external event network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.599 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.599 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.599 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.600 257641 DEBUG nova.virt.libvirt.vif [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-874759565',display_name='tempest-TestNetworkBasicOps-server-874759565',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-874759565',id=179,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK59mMwdkIRco5e+wx7z1qyWEKx2l9WSUMzqdwFM2UJMLW1lFhIC1s/A3JYUvXkYkjgGNZRfQ1g+CuIKY+jbDApbG/uQdsxXxpYYXSx3CJ40Y4Osyky+kLIItP0l/wcDVA==',key_name='tempest-TestNetworkBasicOps-2011738495',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-as5blxm9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:11Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=46075bef-e2f4-434f-8d14-deccfa05cd2f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.601 257641 DEBUG nova.network.os_vif_util [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.601 257641 DEBUG nova.network.os_vif_util [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:16:c6,bridge_name='br-int',has_traffic_filtering=True,id=c5532af0-4f88-4953-8da5-1553ef8d3b8f,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532af0-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.602 257641 DEBUG os_vif [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:16:c6,bridge_name='br-int',has_traffic_filtering=True,id=c5532af0-4f88-4953-8da5-1553ef8d3b8f,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532af0-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.603 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.604 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.608 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.608 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5532af0-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.609 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5532af0-4f, col_values=(('external_ids', {'iface-id': 'c5532af0-4f88-4953-8da5-1553ef8d3b8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:16:c6', 'vm-uuid': '46075bef-e2f4-434f-8d14-deccfa05cd2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
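[annotation] The transaction pairs an AddPortCommand with a DbSetCommand on the new Interface row. A minimal sketch of the equivalent ovs-vsctl invocation (same bridge, port, and external_ids as logged), useful for reproducing the plug by hand:

    import subprocess

    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tapc5532af0-4f",
        "--", "set", "Interface", "tapc5532af0-4f",
        "external_ids:iface-id=c5532af0-4f88-4953-8da5-1553ef8d3b8f",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:ee:16:c6",
        "external_ids:vm-uuid=46075bef-e2f4-434f-8d14-deccfa05cd2f",
    ])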
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:15 np0005539550 NetworkManager[49039]: <info>  [1764405435.6117] manager: (tapc5532af0-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.613 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.619 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.620 257641 INFO os_vif [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:16:c6,bridge_name='br-int',has_traffic_filtering=True,id=c5532af0-4f88-4953-8da5-1553ef8d3b8f,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532af0-4f')#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.699 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.699 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.699 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:ee:16:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.700 257641 INFO nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Using config drive#033[00m
Nov 29 03:37:15 np0005539550 nova_compute[257631]: 2025-11-29 08:37:15.725 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]: {
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:    "0": [
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:        {
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "devices": [
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "/dev/loop3"
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            ],
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "lv_name": "ceph_lv0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "lv_size": "7511998464",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "name": "ceph_lv0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "tags": {
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.cluster_name": "ceph",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.crush_device_class": "",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.encrypted": "0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.osd_id": "0",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.type": "block",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:                "ceph.vdo": "0"
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            },
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "type": "block",
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:            "vg_name": "ceph_vg0"
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:        }
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]:    ]
Nov 29 03:37:15 np0005539550 xenodochial_einstein[368466]: }
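[annotation] The container output above looks like "ceph-volume lvm list --format json": a mapping from OSD id to the logical volumes backing it. A minimal sketch of reading that mapping, assuming the output has been captured to a hypothetical local file:

    import json

    with open("ceph_volume_lvm_list.json") as f:  # hypothetical capture
        report = json.load(f)

    # Map each OSD id to its LV path, OSD fsid, and device role.
    for osd_id, volumes in report.items():
        for vol in volumes:
            tags = vol.get("tags", {})
            print(f"osd.{osd_id}: {vol.get('lv_path')} "
                  f"fsid={tags.get('ceph.osd_fsid')} type={vol.get('type')}")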
Nov 29 03:37:15 np0005539550 systemd[1]: libpod-1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516.scope: Deactivated successfully.
Nov 29 03:37:15 np0005539550 podman[368435]: 2025-11-29 08:37:15.786942378 +0000 UTC m=+1.016643045 container died 1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:37:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ed8761a46e81ba522d331530755e2a94027be6b8fecf6c1242b40c1d89ab0e05-merged.mount: Deactivated successfully.
Nov 29 03:37:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:16 np0005539550 podman[368435]: 2025-11-29 08:37:16.022804228 +0000 UTC m=+1.252504895 container remove 1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_einstein, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:37:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:16.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:16 np0005539550 systemd[1]: libpod-conmon-1e6173568ef4c5356a2ae90185de6df7850730b1dce8e6db5b8a884fa6d64516.scope: Deactivated successfully.
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.185 257641 DEBUG nova.network.neutron [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updated VIF entry in instance network info cache for port c5532af0-4f88-4953-8da5-1553ef8d3b8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.185 257641 DEBUG nova.network.neutron [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updating instance_info_cache with network_info: [{"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.189 257641 INFO nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Creating config drive at /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/disk.config#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.196 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_y8sf_j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.224 257641 DEBUG oslo_concurrency.lockutils [req-b8de7dfa-1a98-48ea-9266-82fa8d10add0 req-1863865c-a059-4b2c-b7e7-0b50e60edec4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 258 KiB/s rd, 1.9 MiB/s wr, 66 op/s
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.330 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd_y8sf_j" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
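[Editor's note: the config drive being assembled here is an ordinary ISO 9660 image with the volume label config-2, which is what cloud-init probes for when it looks for a config drive. A rough standalone equivalent of the logged invocation, with hypothetical paths; Nova itself drives the same command through oslo.concurrency's processutils:]

```python
import subprocess

def build_config_drive(iso_path, staging_dir, publisher="OpenStack Compute"):
    """Build a config-drive ISO from a staged metadata directory."""
    # -J and -r add Joliet/Rock Ridge extensions; -V config-2 sets the
    # volume label cloud-init searches for.
    cmd = ["mkisofs", "-o", iso_path,
           "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
           "-publisher", publisher, "-quiet", "-J", "-r",
           "-V", "config-2", staging_dir]
    subprocess.run(cmd, check=True)

# Hypothetical usage:
# build_config_drive("/tmp/disk.config", "/tmp/metadata_staging")
```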
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.357 257641 DEBUG nova.storage.rbd_utils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.362 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/disk.config 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.519 257641 DEBUG oslo_concurrency.processutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/disk.config 46075bef-e2f4-434f-8d14-deccfa05cd2f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.520 257641 INFO nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Deleting local config drive /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f/disk.config because it was imported into RBD.#033[00m
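[Editor's note: because this deployment uses the RBD image backend, the freshly built ISO is imported into the vms pool and the local copy deleted, as the preceding lines describe. A minimal sketch of that import-then-delete step under the same assumptions; the helper name is hypothetical, the rbd arguments are copied from the log:]

```python
import os
import subprocess

def import_config_drive_to_rbd(local_path, image_name, pool="vms",
                               client_id="openstack",
                               conf="/etc/ceph/ceph.conf"):
    """Import a local config-drive ISO into RBD, then drop the local copy."""
    subprocess.run(
        ["rbd", "import", "--pool", pool, local_path, image_name,
         "--image-format=2", "--id", client_id, "--conf", conf],
        check=True)
    # Once the image lives in RBD the local file is redundant.
    os.remove(local_path)
```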
Nov 29 03:37:16 np0005539550 kernel: tapc5532af0-4f: entered promiscuous mode
Nov 29 03:37:16 np0005539550 NetworkManager[49039]: <info>  [1764405436.5823] manager: (tapc5532af0-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Nov 29 03:37:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:16Z|00823|binding|INFO|Claiming lport c5532af0-4f88-4953-8da5-1553ef8d3b8f for this chassis.
Nov 29 03:37:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:16Z|00824|binding|INFO|c5532af0-4f88-4953-8da5-1553ef8d3b8f: Claiming fa:16:3e:ee:16:c6 10.100.0.13
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.583 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.589 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.598 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:16:c6 10.100.0.13'], port_security=['fa:16:3e:ee:16:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '46075bef-e2f4-434f-8d14-deccfa05cd2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'af24e467-2d1e-447f-bcc4-9f3cd733b642', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=244cacb4-e852-4547-86ff-006a2141eb2a, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532af0-4f88-4953-8da5-1553ef8d3b8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.600 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532af0-4f88-4953-8da5-1553ef8d3b8f in datapath 4dc7ed86-80fb-4377-a5d0-8edd5e264c14 bound to our chassis#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.601 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4dc7ed86-80fb-4377-a5d0-8edd5e264c14#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.615 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[641029f8-0da4-42de-b955-ec4854ebb656]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.616 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4dc7ed86-81 in ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.618 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4dc7ed86-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.618 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4aec3450-0d70-4656-8a4b-f759acf6a8ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.619 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6852fd-e1c9-47e7-b027-509f9bed74a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 systemd-udevd[368753]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:16 np0005539550 systemd-machined[216673]: New machine qemu-97-instance-000000b3.
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.630 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[28d0519d-4a4f-42f6-821d-20b404543347]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 NetworkManager[49039]: <info>  [1764405436.6327] device (tapc5532af0-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:37:16 np0005539550 NetworkManager[49039]: <info>  [1764405436.6339] device (tapc5532af0-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.657 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e91ac323-77a7-4583-95fa-cc5bcdeb4007]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 podman[368737]: 2025-11-29 08:37:16.658253818 +0000 UTC m=+0.069300140 container create 39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:37:16 np0005539550 systemd[1]: Started Virtual Machine qemu-97-instance-000000b3.
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.660 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.665 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:16Z|00825|binding|INFO|Setting lport c5532af0-4f88-4953-8da5-1553ef8d3b8f ovn-installed in OVS
Nov 29 03:37:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:16Z|00826|binding|INFO|Setting lport c5532af0-4f88-4953-8da5-1553ef8d3b8f up in Southbound
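[Editor's note: at this point ovn-controller has claimed the logical port, marked it ovn-installed in OVS, and set it up in the Southbound database. One way to verify such a binding from the chassis, assuming ovn-sbctl on this host can reach the Southbound DB; the logical port name is the Neutron port UUID:]

```python
import subprocess

def show_port_binding(logical_port):
    """Dump the Southbound Port_Binding row for a logical port."""
    result = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={logical_port}"],
        check=True, capture_output=True, text=True)
    return result.stdout

# print(show_port_binding("c5532af0-4f88-4953-8da5-1553ef8d3b8f"))
```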
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.670 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.696 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8b86dca4-f278-4103-bf11-a13518d1e132]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 systemd[1]: Started libpod-conmon-39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98.scope.
Nov 29 03:37:16 np0005539550 NetworkManager[49039]: <info>  [1764405436.7023] manager: (tap4dc7ed86-80): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.703 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed7b771-3d60-4083-9301-63cf5283ad44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 podman[368737]: 2025-11-29 08:37:16.635254777 +0000 UTC m=+0.046301049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.735 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1f70bbc0-41d6-4f7d-8b5e-21d32b8bf607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.739 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[60d31644-e65d-4b52-a8a7-6b36bd92361e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 podman[368737]: 2025-11-29 08:37:16.743684636 +0000 UTC m=+0.154730868 container init 39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:37:16 np0005539550 podman[368737]: 2025-11-29 08:37:16.750483194 +0000 UTC m=+0.161529416 container start 39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:37:16 np0005539550 podman[368737]: 2025-11-29 08:37:16.754080813 +0000 UTC m=+0.165127025 container attach 39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:37:16 np0005539550 systemd[1]: libpod-39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98.scope: Deactivated successfully.
Nov 29 03:37:16 np0005539550 goofy_ellis[368764]: 167 167
Nov 29 03:37:16 np0005539550 conmon[368764]: conmon 39acbf180977fda381f1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98.scope/container/memory.events
Nov 29 03:37:16 np0005539550 NetworkManager[49039]: <info>  [1764405436.7625] device (tap4dc7ed86-80): carrier: link connected
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.767 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7acf7ba8-d0da-45d8-8cdd-3716c43b3cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.785 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5becb2fd-d938-4ad3-9209-aee8e2e962ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dc7ed86-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848442, 'reachable_time': 34790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368800, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 podman[368794]: 2025-11-29 08:37:16.799352196 +0000 UTC m=+0.026212781 container died 39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.801 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[435c0c25-8f0b-4410-9eb7-2fa8f422966c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:c520'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 848442, 'tstamp': 848442}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368806, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.818 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[70645f3f-0466-4f5f-868b-a7f341f6614c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4dc7ed86-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:c5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 238], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848442, 'reachable_time': 34790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 368809, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
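[Editor's note: the two privsep replies above are pyroute2 netlink messages (RTM_NEWLINK dumps for the tap4dc7ed86-81 veth end inside the ovnmeta namespace), marshalled back to the agent by the privileged daemon. A standalone sketch that reads the same attributes directly, assuming root and the pyroute2 package; the namespace name is taken from the log:]

```python
from pyroute2 import NetNS

# Enter the OVN metadata namespace and dump link state, mirroring the
# attributes visible in the RTM_NEWLINK messages above.
with NetNS("ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14") as ns:
    for link in ns.get_links():
        print(link.get_attr("IFLA_IFNAME"),
              link.get_attr("IFLA_ADDRESS"),
              link.get_attr("IFLA_OPERSTATE"),
              "mtu=%s" % link.get_attr("IFLA_MTU"))
```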
Nov 29 03:37:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-096578da941116f8ef5e487193241a11f82092e2f4f360885100f6eabacb4ac5-merged.mount: Deactivated successfully.
Nov 29 03:37:16 np0005539550 podman[368794]: 2025-11-29 08:37:16.833575375 +0000 UTC m=+0.060435930 container remove 39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:37:16 np0005539550 systemd[1]: libpod-conmon-39acbf180977fda381f18a39f4295bb3877642e9ad62b3cc00de5671b52ddb98.scope: Deactivated successfully.
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.854 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e3d99d-c5a9-4e1c-a0aa-59f2eb6f54c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.916 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d60129-37d4-42f9-afc3-7bd67f188085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.917 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dc7ed86-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.918 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.918 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dc7ed86-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:16 np0005539550 NetworkManager[49039]: <info>  [1764405436.9211] manager: (tap4dc7ed86-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.921 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539550 kernel: tap4dc7ed86-80: entered promiscuous mode
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.925 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4dc7ed86-80, col_values=(('external_ids', {'iface-id': '75cd498f-8b8f-4566-9772-7d1f56716c28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:16 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:16Z|00827|binding|INFO|Releasing lport 75cd498f-8b8f-4566-9772-7d1f56716c28 from this chassis (sb_readonly=0)
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.927 257641 DEBUG nova.compute.manager [req-55a3cd1e-2445-4149-aa07-43d63c8277e3 req-1cffa0f2-45b9-459a-b03e-721eaf29b090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.928 257641 DEBUG oslo_concurrency.lockutils [req-55a3cd1e-2445-4149-aa07-43d63c8277e3 req-1cffa0f2-45b9-459a-b03e-721eaf29b090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.928 257641 DEBUG oslo_concurrency.lockutils [req-55a3cd1e-2445-4149-aa07-43d63c8277e3 req-1cffa0f2-45b9-459a-b03e-721eaf29b090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.928 257641 DEBUG oslo_concurrency.lockutils [req-55a3cd1e-2445-4149-aa07-43d63c8277e3 req-1cffa0f2-45b9-459a-b03e-721eaf29b090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.928 257641 DEBUG nova.compute.manager [req-55a3cd1e-2445-4149-aa07-43d63c8277e3 req-1cffa0f2-45b9-459a-b03e-721eaf29b090 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Processing event network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.930 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.931 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bebe42ca-9c7d-40bd-847a-0864282a24ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.932 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-4dc7ed86-80fb-4377-a5d0-8edd5e264c14
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.pid.haproxy
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 4dc7ed86-80fb-4377-a5d0-8edd5e264c14
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:37:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:16.934 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'env', 'PROCESS_TAG=haproxy-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4dc7ed86-80fb-4377-a5d0-8edd5e264c14.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
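[Editor's note: the rendered configuration binds haproxy to 169.254.169.254:80 inside the ovnmeta namespace and forwards each request, tagged with X-OVN-Network-ID, to the agent's unix socket. A quick smoke test under those assumptions (requires root; the metadata service may still reject the request if the source address does not map to a known port):]

```python
import subprocess

# Namespace name from the log; issue a metadata request from inside it.
ns = "ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14"
subprocess.run(
    ["ip", "netns", "exec", ns, "curl", "-s",
     "http://169.254.169.254/openstack/latest/meta_data.json"],
    check=True)
```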
Nov 29 03:37:16 np0005539550 nova_compute[257631]: 2025-11-29 08:37:16.941 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Nov 29 03:37:17 np0005539550 podman[368864]: 2025-11-29 08:37:17.018482291 +0000 UTC m=+0.046771111 container create d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:37:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Nov 29 03:37:17 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Nov 29 03:37:17 np0005539550 systemd[1]: Started libpod-conmon-d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00.scope.
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.077 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.079 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405437.0786965, 46075bef-e2f4-434f-8d14-deccfa05cd2f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.079 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] VM Started (Lifecycle Event)#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.081 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.085 257641 INFO nova.virt.libvirt.driver [-] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Instance spawned successfully.#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.085 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:37:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:17 np0005539550 podman[368864]: 2025-11-29 08:37:16.999429918 +0000 UTC m=+0.027718778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:37:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68428aa5023b7fe6bfa070fed56fc004e62795e85841901484d3990d63960a6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68428aa5023b7fe6bfa070fed56fc004e62795e85841901484d3990d63960a6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.109 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68428aa5023b7fe6bfa070fed56fc004e62795e85841901484d3990d63960a6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.114 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.117 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.117 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.118 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68428aa5023b7fe6bfa070fed56fc004e62795e85841901484d3990d63960a6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.118 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.118 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.119 257641 DEBUG nova.virt.libvirt.driver [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:17 np0005539550 podman[368864]: 2025-11-29 08:37:17.130065178 +0000 UTC m=+0.158354018 container init d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:37:17 np0005539550 podman[368864]: 2025-11-29 08:37:17.138904528 +0000 UTC m=+0.167193348 container start d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:37:17 np0005539550 podman[368864]: 2025-11-29 08:37:17.142965318 +0000 UTC m=+0.171254158 container attach d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.150 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.150 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405437.0788817, 46075bef-e2f4-434f-8d14-deccfa05cd2f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.150 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.171 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.174 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405437.0811036, 46075bef-e2f4-434f-8d14-deccfa05cd2f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.175 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.179 257641 INFO nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Took 5.99 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.179 257641 DEBUG nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.202 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.207 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.253 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.270 257641 INFO nova.compute.manager [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Took 7.71 seconds to build instance.#033[00m
Nov 29 03:37:17 np0005539550 nova_compute[257631]: 2025-11-29 08:37:17.295 257641 DEBUG oslo_concurrency.lockutils [None req-611ecd64-393b-48c3-92c6-02985bf35091 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
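[Editor's note: the spawn (5.99 s) and overall build (7.71 s) timings above are logged in a stable format, which makes them easy to mine from a journal dump. A small sketch; the log file name is hypothetical:]

```python
import re

# Matches the "Took N seconds to spawn/build" lines Nova logs, as above.
pattern = re.compile(
    r"\[instance: (?P<uuid>[0-9a-f-]+)\] Took (?P<secs>[\d.]+) seconds "
    r"to (?P<what>spawn the instance on the hypervisor|build instance)")

def instance_timings(lines):
    for line in lines:
        m = pattern.search(line)
        if m:
            yield m.group("uuid"), m.group("what"), float(m.group("secs"))

# Hypothetical usage:
# with open("compute.log") as f:
#     for uuid, what, secs in instance_timings(f):
#         print(uuid, what, secs)
```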
Nov 29 03:37:17 np0005539550 podman[368913]: 2025-11-29 08:37:17.347032679 +0000 UTC m=+0.061278860 container create 31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:37:17 np0005539550 systemd[1]: Started libpod-conmon-31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335.scope.
Nov 29 03:37:17 np0005539550 podman[368913]: 2025-11-29 08:37:17.314750829 +0000 UTC m=+0.028997010 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:37:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/291d58d598c555d186e91c604faf2497bf1c4d046dc1e81ca0750ff105e7b90a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:17 np0005539550 podman[368913]: 2025-11-29 08:37:17.434425277 +0000 UTC m=+0.148671458 container init 31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 29 03:37:17 np0005539550 podman[368913]: 2025-11-29 08:37:17.441329148 +0000 UTC m=+0.155575309 container start 31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:37:17 np0005539550 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[368928]: [NOTICE]   (368932) : New worker (368934) forked
Nov 29 03:37:17 np0005539550 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[368928]: [NOTICE]   (368932) : Loading success.
Nov 29 03:37:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
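The anonymous "HEAD / HTTP/1.0" requests that radosgw logs every couple of seconds from 192.168.122.100 and .102 are load-balancer health probes: zero-body, unauthenticated, answered 200 in well under a millisecond. An equivalent probe sketched with the Python standard library (the port is an assumption; this log does not show it):

    import http.client

    # Reproduce the anonymous HEAD health probe seen in the beast access
    # lines above. Port 8080 is an assumed radosgw frontend port.
    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while the gateway is healthy
    conn.close()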
Nov 29 03:37:18 np0005539550 frosty_gould[368886]: {
Nov 29 03:37:18 np0005539550 frosty_gould[368886]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:37:18 np0005539550 frosty_gould[368886]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:37:18 np0005539550 frosty_gould[368886]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:37:18 np0005539550 frosty_gould[368886]:        "osd_id": 0,
Nov 29 03:37:18 np0005539550 frosty_gould[368886]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:37:18 np0005539550 frosty_gould[368886]:        "type": "bluestore"
Nov 29 03:37:18 np0005539550 frosty_gould[368886]:    }
Nov 29 03:37:18 np0005539550 frosty_gould[368886]: }
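The JSON that the short-lived frosty_gould ceph container prints above maps an OSD uuid to its cluster fsid, backing LV, id, and objectstore type (the shape matches ceph-volume's JSON inventory output, which cephadm collects and then stores via the config-key commands below). Parsing it is straightforward:

    import json

    # The inventory JSON printed by the frosty_gould container, verbatim.
    raw = '''{
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "type": "bluestore"
        }
    }'''
    for osd_uuid, osd in json.loads(raw).items():
        print(osd['osd_id'], osd['device'], osd['type'])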
Nov 29 03:37:18 np0005539550 systemd[1]: libpod-d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00.scope: Deactivated successfully.
Nov 29 03:37:18 np0005539550 podman[368864]: 2025-11-29 08:37:18.02848038 +0000 UTC m=+1.056769200 container died d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:37:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:18.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:18 np0005539550 systemd[1]: var-lib-containers-storage-overlay-68428aa5023b7fe6bfa070fed56fc004e62795e85841901484d3990d63960a6e-merged.mount: Deactivated successfully.
Nov 29 03:37:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 44 op/s
Nov 29 03:37:18 np0005539550 podman[368864]: 2025-11-29 08:37:18.328576793 +0000 UTC m=+1.356865633 container remove d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:37:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:37:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:37:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4f988672-1c5f-4208-b3b5-1aca63abc328 does not exist
Nov 29 03:37:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dbab7db2-2275-4bff-8165-432e337d8cfa does not exist
Nov 29 03:37:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c7869a30-919c-42e3-940f-8be824cbc465 does not exist
Nov 29 03:37:18 np0005539550 systemd[1]: libpod-conmon-d9c2233dc818e8d1913b083b20c92574d715c3ab5062b0605e4096728ab90f00.scope: Deactivated successfully.
Nov 29 03:37:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:18.968 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:18.968 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:18.969 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:19 np0005539550 nova_compute[257631]: 2025-11-29 08:37:19.007 257641 DEBUG nova.compute.manager [req-f3533d08-f22b-407a-932e-e2ce163f4ebb req-d70c5a28-7095-4f06-ad10-e6d21bbbd536 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:37:19 np0005539550 nova_compute[257631]: 2025-11-29 08:37:19.007 257641 DEBUG oslo_concurrency.lockutils [req-f3533d08-f22b-407a-932e-e2ce163f4ebb req-d70c5a28-7095-4f06-ad10-e6d21bbbd536 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:19 np0005539550 nova_compute[257631]: 2025-11-29 08:37:19.008 257641 DEBUG oslo_concurrency.lockutils [req-f3533d08-f22b-407a-932e-e2ce163f4ebb req-d70c5a28-7095-4f06-ad10-e6d21bbbd536 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:19 np0005539550 nova_compute[257631]: 2025-11-29 08:37:19.008 257641 DEBUG oslo_concurrency.lockutils [req-f3533d08-f22b-407a-932e-e2ce163f4ebb req-d70c5a28-7095-4f06-ad10-e6d21bbbd536 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:19 np0005539550 nova_compute[257631]: 2025-11-29 08:37:19.009 257641 DEBUG nova.compute.manager [req-f3533d08-f22b-407a-932e-e2ce163f4ebb req-d70c5a28-7095-4f06-ad10-e6d21bbbd536 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] No waiting events found dispatching network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:37:19 np0005539550 nova_compute[257631]: 2025-11-29 08:37:19.009 257641 WARNING nova.compute.manager [req-f3533d08-f22b-407a-932e-e2ce163f4ebb req-d70c5a28-7095-4f06-ad10-e6d21bbbd536 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received unexpected event network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f for instance with vm_state active and task_state None.
Nov 29 03:37:19 np0005539550 nova_compute[257631]: 2025-11-29 08:37:19.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:37:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005329199527731249 of space, bias 1.0, pg target 1.5987598583193747 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002162686988181649 of space, bias 1.0, pg target 0.646643409466313 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
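Each pg_autoscaler pass prints, per pool, the fraction of raw space used, the pool's bias, and the ideal PG count ("pg target") before snapping it to a power of two. The printed targets are consistent with ideal = usage_ratio x bias x 300, where 300 is read here as mon_target_pg_per_osd (default 100) times the 3 OSDs; that factor is an inference from the numbers, not something this log states:

    # Back out the 'vms' pool target from the values logged above.
    usage_ratio = 0.005329199527731249   # "using ... of space"
    bias = 1.0
    pg_budget = 100 * 3                  # assumed: target PGs per OSD x OSD count
    print(usage_ratio * bias * pg_budget)  # 1.5987598583193747, as logged

The quantized figure nonetheless stays at 32 here, which points to a floor such as a per-pool minimum or the autoscaler declining to shrink; the log alone does not say which.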
Nov 29 03:37:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 2.2 MiB/s wr, 43 op/s
Nov 29 03:37:20 np0005539550 nova_compute[257631]: 2025-11-29 08:37:20.611 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:20 np0005539550 NetworkManager[49039]: <info>  [1764405440.9609] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Nov 29 03:37:20 np0005539550 nova_compute[257631]: 2025-11-29 08:37:20.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:20 np0005539550 NetworkManager[49039]: <info>  [1764405440.9621] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Nov 29 03:37:21 np0005539550 nova_compute[257631]: 2025-11-29 08:37:21.205 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:21 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:21Z|00828|binding|INFO|Releasing lport 75cd498f-8b8f-4566-9772-7d1f56716c28 from this chassis (sb_readonly=0)
Nov 29 03:37:21 np0005539550 nova_compute[257631]: 2025-11-29 08:37:21.228 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:21 np0005539550 nova_compute[257631]: 2025-11-29 08:37:21.362 257641 DEBUG nova.compute.manager [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-changed-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:37:21 np0005539550 nova_compute[257631]: 2025-11-29 08:37:21.363 257641 DEBUG nova.compute.manager [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Refreshing instance network info cache due to event network-changed-c5532af0-4f88-4953-8da5-1553ef8d3b8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:37:21 np0005539550 nova_compute[257631]: 2025-11-29 08:37:21.363 257641 DEBUG oslo_concurrency.lockutils [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:37:21 np0005539550 nova_compute[257631]: 2025-11-29 08:37:21.363 257641 DEBUG oslo_concurrency.lockutils [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:37:21 np0005539550 nova_compute[257631]: 2025-11-29 08:37:21.364 257641 DEBUG nova.network.neutron [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Refreshing network info cache for port c5532af0-4f88-4953-8da5-1553ef8d3b8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:37:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:21.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 323 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.7 MiB/s wr, 192 op/s
Nov 29 03:37:22 np0005539550 nova_compute[257631]: 2025-11-29 08:37:22.664 257641 DEBUG nova.network.neutron [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updated VIF entry in instance network info cache for port c5532af0-4f88-4953-8da5-1553ef8d3b8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:37:22 np0005539550 nova_compute[257631]: 2025-11-29 08:37:22.664 257641 DEBUG nova.network.neutron [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updating instance_info_cache with network_info: [{"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:37:22 np0005539550 nova_compute[257631]: 2025-11-29 08:37:22.693 257641 DEBUG oslo_concurrency.lockutils [req-56a26d57-f7e1-4377-a95f-8aefaeb99fc8 req-7166cc07-beef-4938-b2ca-7cef94c143d5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:37:23 np0005539550 podman[369026]: 2025-11-29 08:37:23.318058387 +0000 UTC m=+0.059134538 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 03:37:23 np0005539550 podman[369025]: 2025-11-29 08:37:23.318138829 +0000 UTC m=+0.059747913 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, tcib_managed=true)
Nov 29 03:37:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:23.605 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:37:23 np0005539550 nova_compute[257631]: 2025-11-29 08:37:23.605 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:23.606 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
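Rather than acknowledging the SB_Global nb_cfg bump (51 to 52) immediately, the metadata agent schedules the Chassis_Private update a few seconds out; the DbSetCommand writing neutron:ovn-metadata-sb-cfg = '52' lands at 08:37:27 below. Deferring the write spreads load when many chassis see the same update. The general shape of that delay pattern, sketched with the standard library (range and callback are illustrative, not neutron's exact code):

    import random
    import threading

    def ack_chassis(nb_cfg):
        # Stand-in for the Chassis_Private external_ids update seen below.
        print('acking nb_cfg', nb_cfg)

    delay = random.uniform(0, 10)  # this run happened to pick ~4 seconds
    threading.Timer(delay, ack_chassis, args=(52,)).start()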
Nov 29 03:37:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:23.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:24 np0005539550 nova_compute[257631]: 2025-11-29 08:37:24.035 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 313 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.8 MiB/s wr, 216 op/s
Nov 29 03:37:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Nov 29 03:37:25 np0005539550 nova_compute[257631]: 2025-11-29 08:37:25.614 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Nov 29 03:37:25 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Nov 29 03:37:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:26.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 313 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.4 MiB/s wr, 286 op/s
Nov 29 03:37:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:27.608 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:27.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:37:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:28.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:37:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 313 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.2 MiB/s wr, 265 op/s
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.036 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.045 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.046 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.064 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.204 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.205 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.214 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.215 257641 INFO nova.compute.claims [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.399 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385949141' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.836 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
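Before building, nova shells out to ceph df (the 0.44 s round trip logged above, with the matching mon_command dispatch on the ceph-mon side) to learn pool capacity for the RBD image backend. The same call, sketched standalone; the JSON key names are assumptions from the usual ceph df schema, not shown in this log:

    import json
    import subprocess

    # The exact command nova ran above; --id/--conf select the 'openstack'
    # cephx user and cluster config.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    df = json.loads(out)
    # Key names below are assumptions about the standard 'ceph df' JSON.
    print(df['stats']['total_bytes'], df['stats']['total_avail_bytes'])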
Nov 29 03:37:29 np0005539550 nova_compute[257631]: 2025-11-29 08:37:29.843 257641 DEBUG nova.compute.provider_tree [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:37:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:29.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.011 257641 DEBUG nova.scheduler.client.report [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
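That inventory fixes the node's schedulable capacity under the standard placement formula, (total - reserved) x allocation_ratio:

    # Effective capacity implied by the inventory logged above.
    vcpus = (8 - 0) * 4.0        # 32 schedulable VCPUs
    ram_mb = (7680 - 512) * 1.0  # 7168 MB
    disk_gb = (20 - 1) * 0.9     # 17.1 GB (Ceph-backed, ratio 0.9)
    print(vcpus, ram_mb, disk_gb)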
Nov 29 03:37:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:30.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.109 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.109 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 03:37:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.183 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.184 257641 DEBUG nova.network.neutron [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.237 257641 INFO nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.265 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 03:37:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 313 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.2 MiB/s wr, 283 op/s
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.423 257641 DEBUG nova.policy [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bdbcdbdc435844ee8d866288c969331b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '368e3a44279843f5947188dd045d65b6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.617 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.854 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.855 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.856 257641 INFO nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Creating image(s)
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.884 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.914 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.943 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:37:30 np0005539550 nova_compute[257631]: 2025-11-29 08:37:30.946 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.014 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.015 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.015 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.016 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.040 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.043 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 850e472c-8d49-4ecb-8478-992f11eb6196_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:31Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:16:c6 10.100.0.13
Nov 29 03:37:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:31Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:16:c6 10.100.0.13
Nov 29 03:37:31 np0005539550 podman[369232]: 2025-11-29 08:37:31.348683514 +0000 UTC m=+0.088048715 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.840 257641 DEBUG nova.network.neutron [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Successfully created port: a84e5028-8ed0-4eb3-90e9-ca099984eb99 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.855 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 850e472c-8d49-4ecb-8478-992f11eb6196_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.811s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:31 np0005539550 nova_compute[257631]: 2025-11-29 08:37:31.936 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] resizing rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
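Spawn then materializes the root disk: after confirming 850e472c-..._disk is absent from the vms pool, nova qemu-img-inspects the cached base file, imports it into RBD (0.81 s above), and resizes the image to the flavor's 1 GiB. An equivalent standalone sequence; note nova performs the resize through librbd (rbd_utils.resize), so the rbd resize call here is an illustration, not what actually ran:

    import subprocess

    base = '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488'
    image = '850e472c-8d49-4ecb-8478-992f11eb6196_disk'
    ceph_args = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    # Import the flat base image as an RBD v2 image (what nova ran above).
    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, image,
                           '--image-format=2'] + ceph_args)
    # Grow it to the flavor's root disk size; nova did this via librbd.
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', image,
                           '--size', '1G'] + ceph_args)  # 1073741824 bytes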
Nov 29 03:37:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:31.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:32.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:32 np0005539550 nova_compute[257631]: 2025-11-29 08:37:32.069 257641 DEBUG nova.objects.instance [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'migration_context' on Instance uuid 850e472c-8d49-4ecb-8478-992f11eb6196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:37:32 np0005539550 nova_compute[257631]: 2025-11-29 08:37:32.095 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:37:32 np0005539550 nova_compute[257631]: 2025-11-29 08:37:32.095 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Ensure instance console log exists: /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:37:32 np0005539550 nova_compute[257631]: 2025-11-29 08:37:32.095 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:32 np0005539550 nova_compute[257631]: 2025-11-29 08:37:32.096 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:32 np0005539550 nova_compute[257631]: 2025-11-29 08:37:32.096 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 323 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.1 MiB/s wr, 231 op/s
Nov 29 03:37:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:33.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:34 np0005539550 nova_compute[257631]: 2025-11-29 08:37:34.037 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 343 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.5 MiB/s wr, 227 op/s
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.102 257641 DEBUG nova.network.neutron [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Successfully updated port: a84e5028-8ed0-4eb3-90e9-ca099984eb99 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.136 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.136 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquired lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.137 257641 DEBUG nova.network.neutron [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:37:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.236 257641 DEBUG nova.compute.manager [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-changed-a84e5028-8ed0-4eb3-90e9-ca099984eb99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.236 257641 DEBUG nova.compute.manager [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Refreshing instance network info cache due to event network-changed-a84e5028-8ed0-4eb3-90e9-ca099984eb99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.237 257641 DEBUG oslo_concurrency.lockutils [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.293 257641 DEBUG nova.network.neutron [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:37:35 np0005539550 nova_compute[257631]: 2025-11-29 08:37:35.620 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:37:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:35.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:37:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:36.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 392 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.5 MiB/s wr, 248 op/s
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.616 257641 DEBUG nova.network.neutron [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Updating instance_info_cache with network_info: [{"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
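The network_info payload logged above is plain JSON describing each VIF. A short sketch of reading the MAC and fixed IPs out of it, with the structure trimmed to the relevant keys:

    # Sketch: extract per-VIF address data from a network_info list like
    # the one in the cache-update line above (trimmed to relevant keys).
    network_info = [{
        "id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99",
        "address": "fa:16:3e:f7:c9:46",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.10", "type": "fixed"}]}]},
    }]
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips)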
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.647 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Releasing lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.647 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Instance network_info: |[{"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.648 257641 DEBUG oslo_concurrency.lockutils [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.648 257641 DEBUG nova.network.neutron [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Refreshing network info cache for port a84e5028-8ed0-4eb3-90e9-ca099984eb99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.652 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Start _get_guest_xml network_info=[{"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.656 257641 WARNING nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.663 257641 DEBUG nova.virt.libvirt.host [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.664 257641 DEBUG nova.virt.libvirt.host [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.667 257641 DEBUG nova.virt.libvirt.host [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.668 257641 DEBUG nova.virt.libvirt.host [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.669 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.669 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.670 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.670 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.670 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.670 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.671 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.671 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.671 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.671 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.672 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.672 257641 DEBUG nova.virt.hardware [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
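The nova.virt.hardware lines above enumerate every (sockets, cores, threads) split of the vCPU count within the 65536-per-dimension limits, then sort by preference; with 1 vCPU the only factorization is 1:1:1. A simplified illustration of that search, not nova's actual code:

    # Simplified illustration of the topology search logged above:
    # keep every (sockets, cores, threads) whose product equals the
    # vCPU count, subject to per-dimension maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -> "Got 1 possible topologies"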
Nov 29 03:37:36 np0005539550 nova_compute[257631]: 2025-11-29 08:37:36.675 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713595144' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.126 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
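The monitor endpoints later embedded in the guest disk XML come from this "ceph mon dump" subprocess. A sketch of the same call and of reading monitor addresses from its JSON; the field names are as emitted by current Ceph releases, so verify against your version:

    # Sketch of the subprocess call logged above, plus JSON parsing.
    import json
    import subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)
    for mon in mon_map.get("mons", []):
        print(mon["name"], mon.get("addr"))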
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.164 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.168 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:37:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500899458' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.621 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.623 257641 DEBUG nova.virt.libvirt.vif [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2050034368',display_name='tempest-AttachVolumeNegativeTest-server-2050034368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2050034368',id=181,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN5Fw+YVl/LXCg78vO+EAU7iLZjD4Foeh/qpUlXRUX4yj1EvhYtXjfSgQ22GkRcPqGa0OkWPhkZ89PcRG3pfCX8R3xMT3yI7RwiiviWm2pxtBubDmed887Hy8nUam2un8g==',key_name='tempest-keypair-591583954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-ca09wi6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=850e472c-8d49-4ecb-8478-992f11eb6196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": 
"a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.624 257641 DEBUG nova.network.os_vif_util [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.624 257641 DEBUG nova.network.os_vif_util [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c9:46,bridge_name='br-int',has_traffic_filtering=True,id=a84e5028-8ed0-4eb3-90e9-ca099984eb99,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa84e5028-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.626 257641 DEBUG nova.objects.instance [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 850e472c-8d49-4ecb-8478-992f11eb6196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.641 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <uuid>850e472c-8d49-4ecb-8478-992f11eb6196</uuid>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <name>instance-000000b5</name>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <nova:name>tempest-AttachVolumeNegativeTest-server-2050034368</nova:name>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:37:36</nova:creationTime>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:user uuid="bdbcdbdc435844ee8d866288c969331b">tempest-AttachVolumeNegativeTest-1895715059-project-member</nova:user>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:project uuid="368e3a44279843f5947188dd045d65b6">tempest-AttachVolumeNegativeTest-1895715059</nova:project>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <nova:port uuid="a84e5028-8ed0-4eb3-90e9-ca099984eb99">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <entry name="serial">850e472c-8d49-4ecb-8478-992f11eb6196</entry>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <entry name="uuid">850e472c-8d49-4ecb-8478-992f11eb6196</entry>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/850e472c-8d49-4ecb-8478-992f11eb6196_disk">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/850e472c-8d49-4ecb-8478-992f11eb6196_disk.config">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:f7:c9:46"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <target dev="tapa84e5028-8e"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/console.log" append="off"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:37:37 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:37:37 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:37:37 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:37:37 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
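The domain XML dumped above can be inspected programmatically. A sketch using only the standard library to list each disk's RBD source and monitor endpoints, assuming the XML was saved to domain.xml:

    # Sketch: walk a dumped libvirt domain XML like the one above and
    # print each disk's source plus the RBD monitor hosts.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        print(disk.get("device"), src.get("protocol"), src.get("name"))
        for host in src.findall("host"):
            print("  mon:", host.get("name"), host.get("port"))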
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.642 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Preparing to wait for external event network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.643 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.643 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.643 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.644 257641 DEBUG nova.virt.libvirt.vif [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2050034368',display_name='tempest-AttachVolumeNegativeTest-server-2050034368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2050034368',id=181,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN5Fw+YVl/LXCg78vO+EAU7iLZjD4Foeh/qpUlXRUX4yj1EvhYtXjfSgQ22GkRcPqGa0OkWPhkZ89PcRG3pfCX8R3xMT3yI7RwiiviWm2pxtBubDmed887Hy8nUam2un8g==',key_name='tempest-keypair-591583954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-ca09wi6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:37:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=850e472c-8d49-4ecb-8478-992f11eb6196,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": 
"a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.644 257641 DEBUG nova.network.os_vif_util [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.645 257641 DEBUG nova.network.os_vif_util [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c9:46,bridge_name='br-int',has_traffic_filtering=True,id=a84e5028-8ed0-4eb3-90e9-ca099984eb99,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa84e5028-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.645 257641 DEBUG os_vif [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c9:46,bridge_name='br-int',has_traffic_filtering=True,id=a84e5028-8ed0-4eb3-90e9-ca099984eb99,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa84e5028-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.646 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.646 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.647 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.649 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.649 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa84e5028-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.650 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa84e5028-8e, col_values=(('external_ids', {'iface-id': 'a84e5028-8ed0-4eb3-90e9-ca099984eb99', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:c9:46', 'vm-uuid': '850e472c-8d49-4ecb-8478-992f11eb6196'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
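The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) run as one OVSDB transaction. A hedged sketch of the equivalent transaction; the db.sock endpoint and timeout are assumptions for illustration:

    # Hedged sketch of the transaction logged above: add the tap port to
    # br-int and set the external_ids Neutron uses to bind it.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tapa84e5028-8e", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapa84e5028-8e",
            ("external_ids", {
                "iface-id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99",
                "attached-mac": "fa:16:3e:f7:c9:46",
            })))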
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:37 np0005539550 NetworkManager[49039]: <info>  [1764405457.6938] manager: (tapa84e5028-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/363)
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.699 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.700 257641 INFO os_vif [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:c9:46,bridge_name='br-int',has_traffic_filtering=True,id=a84e5028-8ed0-4eb3-90e9-ca099984eb99,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa84e5028-8e')#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.976 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.977 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.977 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No VIF found with MAC fa:16:3e:f7:c9:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:37:37 np0005539550 nova_compute[257631]: 2025-11-29 08:37:37.979 257641 INFO nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Using config drive#033[00m
Nov 29 03:37:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:37.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.018 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.030 257641 INFO nova.compute.manager [None req-585d0208-7592-4cda-ade7-37b007f46d96 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Get console output#033[00m
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.038 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:37:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:38.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.114 257641 DEBUG nova.network.neutron [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Updated VIF entry in instance network info cache for port a84e5028-8ed0-4eb3-90e9-ca099984eb99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.115 257641 DEBUG nova.network.neutron [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Updating instance_info_cache with network_info: [{"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.131 257641 DEBUG oslo_concurrency.lockutils [req-7234511a-c152-4d5d-be40-5cfd334b68d6 req-2a9fd0ad-5ac1-4f02-95e4-7fc3b8735c89 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:37:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 392 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 193 op/s
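The recurring pgmap lines are the mgr's periodic cluster digest: 305 PGs all active+clean, plus usage and client throughput. The same figures can be pulled on demand; a sketch shelling out to the ceph CLI (assumes a suitably privileged keyring on this node):

    import subprocess

    # 'ceph pg stat' prints the one-line PG summary; 'ceph df' the usage figures.
    print(subprocess.run(['ceph', 'pg', 'stat'], capture_output=True,
                         text=True, check=True).stdout)
    print(subprocess.run(['ceph', 'df'], capture_output=True,
                         text=True, check=True).stdout)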
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.395 257641 INFO nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Creating config drive at /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/disk.config#033[00m
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.401 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6r8x17d8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:38 np0005539550 nova_compute[257631]: 2025-11-29 08:37:38.536 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6r8x17d8" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
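The two processutils lines above bracket the config-drive build: nova writes the metadata tree to a temporary directory and packs it into an ISO9660 volume labelled config-2, which is the label cloud-init searches for. A stripped-down equivalent of the logged command, with the output path and source directory as stand-ins:

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', 'disk.config',            # stand-in for the instance path above
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r',
         '-V', 'config-2',               # the volume label cloud-init looks for
         'metadata_dir/'],               # stand-in for the /tmp/tmp6r8x17d8 tree
        check=True)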
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.002 257641 DEBUG nova.storage.rbd_utils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] rbd image 850e472c-8d49-4ecb-8478-992f11eb6196_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.010 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/disk.config 850e472c-8d49-4ecb-8478-992f11eb6196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.051 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.211 257641 DEBUG oslo_concurrency.processutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/disk.config 850e472c-8d49-4ecb-8478-992f11eb6196_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.213 257641 INFO nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Deleting local config drive /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/disk.config because it was imported into RBD.#033[00m
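Because this host stores instance disks in RBD, the freshly built ISO is immediately imported into the vms pool and the local copy removed, so only the RBD object survives. A sketch of that import-then-delete step, reusing the exact arguments logged above:

    import os
    import subprocess

    local = '/var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196/disk.config'
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', local,
         '850e472c-8d49-4ecb-8478-992f11eb6196_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
    os.unlink(local)   # "Deleting local config drive ... imported into RBD"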
Nov 29 03:37:39 np0005539550 kernel: tapa84e5028-8e: entered promiscuous mode
Nov 29 03:37:39 np0005539550 NetworkManager[49039]: <info>  [1764405459.2673] manager: (tapa84e5028-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/364)
Nov 29 03:37:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:39Z|00829|binding|INFO|Claiming lport a84e5028-8ed0-4eb3-90e9-ca099984eb99 for this chassis.
Nov 29 03:37:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:39Z|00830|binding|INFO|a84e5028-8ed0-4eb3-90e9-ca099984eb99: Claiming fa:16:3e:f7:c9:46 10.100.0.10
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.267 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.276 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c9:46 10.100.0.10'], port_security=['fa:16:3e:f7:c9:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '850e472c-8d49-4ecb-8478-992f11eb6196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5d1b9fcd-bea0-45d6-8ee2-94c92e3180fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a84e5028-8ed0-4eb3-90e9-ca099984eb99) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.277 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a84e5028-8ed0-4eb3-90e9-ca099984eb99 in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b bound to our chassis#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.279 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b#033[00m
Nov 29 03:37:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:39Z|00831|binding|INFO|Setting lport a84e5028-8ed0-4eb3-90e9-ca099984eb99 ovn-installed in OVS
Nov 29 03:37:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:39Z|00832|binding|INFO|Setting lport a84e5028-8ed0-4eb3-90e9-ca099984eb99 up in Southbound
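The claim sequence above is the OVN half of VIF plugging: once the tapa84e5028-8e interface appears in OVS with external_ids:iface-id set to the Neutron port UUID, ovn-controller matches it against the Port_Binding's requested-chassis, claims the lport, marks it ovn-installed in OVS, and flips it up in the Southbound DB. The iface-id wiring can be confirmed from the host with a read-only query; a sketch using names from the log:

    import subprocess

    # The value ovn-controller keys on when claiming the logical port.
    print(subprocess.run(
        ['ovs-vsctl', 'get', 'Interface', 'tapa84e5028-8e',
         'external_ids:iface-id'],
        capture_output=True, text=True, check=True).stdout)
    # expected: "a84e5028-8ed0-4eb3-90e9-ca099984eb99"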
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.287 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.290 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.294 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[54276703-e5ba-4da9-a680-040bbb09cde4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.295 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0183ad73-01 in ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.296 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0183ad73-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.296 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[053fe48a-5269-45fb-8217-fda2ede0fa93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.297 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b450be-3d10-4b78-be21-15f5831eed20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 systemd-udevd[369474]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.310 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[5844aaaa-c1e1-4cd7-9b7e-ce6808d7800d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 systemd-machined[216673]: New machine qemu-98-instance-000000b5.
Nov 29 03:37:39 np0005539550 NetworkManager[49039]: <info>  [1764405459.3263] device (tapa84e5028-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:37:39 np0005539550 NetworkManager[49039]: <info>  [1764405459.3277] device (tapa84e5028-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.328 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3bfec9-10bf-42c0-8245-2184198b8c99]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 systemd[1]: Started Virtual Machine qemu-98-instance-000000b5.
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.358 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2a432ceb-e082-4938-988e-a23b52ccb472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.364 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f584ec1a-acc9-4c36-950f-9751cd698606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 NetworkManager[49039]: <info>  [1764405459.3658] manager: (tap0183ad73-00): new Veth device (/org/freedesktop/NetworkManager/Devices/365)
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.396 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[48c5e5eb-aba7-41cb-b05e-4d4c9618c43e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.400 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d4f5ea30-974e-4563-b261-00f538874be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 NetworkManager[49039]: <info>  [1764405459.4295] device (tap0183ad73-00): carrier: link connected
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.438 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[82138b8b-b5d1-43c3-be39-17f8cbb4c0a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.461 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0f395ea3-dcc0-4844-b452-c3b2cc2a6622]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850709, 'reachable_time': 29152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369505, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.487 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[434e681a-2b42-45e4-b75f-138cc4e7f161]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:aad0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 850709, 'tstamp': 850709}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369506, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.510 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8b041ab6-56d7-4f65-a17a-1bd16b712472]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0183ad73-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:aa:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 240], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850709, 'reachable_time': 29152, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369507, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.549 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff3971c-0e62-4a52-9c93-1e5a8a7286bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.626 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[95974955-13a3-40af-afb5-5afa2d730285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.628 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.629 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.630 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0183ad73-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:39 np0005539550 kernel: tap0183ad73-00: entered promiscuous mode
Nov 29 03:37:39 np0005539550 NetworkManager[49039]: <info>  [1764405459.6337] manager: (tap0183ad73-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.633 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.637 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0183ad73-00, col_values=(('external_ids', {'iface-id': 'c88b07d7-f4c8-49a1-9950-8275afef03b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:37:39 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:39Z|00833|binding|INFO|Releasing lport c88b07d7-f4c8-49a1-9950-8275afef03b1 from this chassis (sb_readonly=0)
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.640 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
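The "Unable to access ... .pid.haproxy" DEBUG above is the expected miss on first provisioning: the agent probes for a pidfile to decide whether a metadata haproxy already runs for this datapath before rendering a fresh config. The probe is essentially a tolerant file read; a minimal sketch (helper name hypothetical):

    def read_pidfile(path):
        # Mirrors get_value_from_file: a missing pidfile means "not running yet".
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return None

    pid = read_pidfile('/var/lib/neutron/external/pids/'
                       '0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy')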
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.653 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[968cd76c-746b-4e72-80d4-75dea3fb6c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.654 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.pid.haproxy
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 0183ad73-05c1-46e4-ba3e-b87d7a948c3b
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:37:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:37:39.656 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'env', 'PROCESS_TAG=haproxy-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0183ad73-05c1-46e4-ba3e-b87d7a948c3b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
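The rendered config above binds 169.254.169.254:80 inside the ovnmeta- namespace, forwards to the metadata UNIX socket, and adds X-OVN-Network-ID so the metadata service can resolve the requesting network. Once haproxy is up (see the podman and NOTICE lines below), the listener can be exercised from the host; a sketch, assuming curl is available:

    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b',
         'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}\n',
         'http://169.254.169.254/'],
        check=True)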
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.656 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.840 257641 DEBUG nova.compute.manager [req-351a4d15-fce2-4f3c-9d43-17be3f0d1157 req-e1757340-809c-47a9-a50f-08fdde31fbc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.846 257641 DEBUG oslo_concurrency.lockutils [req-351a4d15-fce2-4f3c-9d43-17be3f0d1157 req-e1757340-809c-47a9-a50f-08fdde31fbc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.847 257641 DEBUG oslo_concurrency.lockutils [req-351a4d15-fce2-4f3c-9d43-17be3f0d1157 req-e1757340-809c-47a9-a50f-08fdde31fbc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.848 257641 DEBUG oslo_concurrency.lockutils [req-351a4d15-fce2-4f3c-9d43-17be3f0d1157 req-e1757340-809c-47a9-a50f-08fdde31fbc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:39 np0005539550 nova_compute[257631]: 2025-11-29 08:37:39.848 257641 DEBUG nova.compute.manager [req-351a4d15-fce2-4f3c-9d43-17be3f0d1157 req-e1757340-809c-47a9-a50f-08fdde31fbc7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Processing event network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
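The acquire/pop/release sequence above is Nova's external-event rendezvous: the spawning thread registers a waiter for network-vif-plugged-<port>, and the Neutron-driven delivery pops it under the instance's -events lock. A much-reduced sketch of the pattern (names hypothetical; the real code lives in nova.compute.manager.InstanceEvents), which also explains the WARNING logged at 08:37:42 when a second copy of the event arrives after the waiter is gone:

    import threading

    waiters = {}                      # event name -> threading.Event
    lock = threading.Lock()

    def prepare(name):                # called by the spawning thread
        ev = threading.Event()
        with lock:
            waiters[name] = ev
        return ev

    def pop_event(name):              # called when Neutron delivers the event
        with lock:
            ev = waiters.pop(name, None)
        if ev is None:
            print('Received unexpected event', name)   # no one is waiting
        else:
            ev.set()                  # wakes wait_for_instance_event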
Nov 29 03:37:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:39.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:40 np0005539550 podman[369539]: 2025-11-29 08:37:40.052459757 +0000 UTC m=+0.053651592 container create 0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:37:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:40.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:37:40 np0005539550 systemd[1]: Started libpod-conmon-0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567.scope.
Nov 29 03:37:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:37:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f39936376fa3f74de336b6c536775dafc84d42763e89208de1ea7f7af9cebb28/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:37:40 np0005539550 podman[369539]: 2025-11-29 08:37:40.024149505 +0000 UTC m=+0.025341360 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:37:40 np0005539550 podman[369539]: 2025-11-29 08:37:40.127682802 +0000 UTC m=+0.128874637 container init 0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:40 np0005539550 podman[369539]: 2025-11-29 08:37:40.133287421 +0000 UTC m=+0.134479256 container start 0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:37:40 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[369554]: [NOTICE]   (369558) : New worker (369560) forked
Nov 29 03:37:40 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[369554]: [NOTICE]   (369558) : Loading success.
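The haproxy instance itself runs wrapped in a podman container (neutron-haproxy-ovnmeta-<network>), which is why its NOTICE lines arrive under that syslog identifier. The wrapper is easy to locate later when debugging or deprovisioning; a sketch:

    import subprocess

    subprocess.run(
        ['podman', 'ps',
         '--filter', 'name=neutron-haproxy-ovnmeta-0183ad73',
         '--format', '{{.ID}} {{.Names}} {{.Status}}'],
        check=True)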
Nov 29 03:37:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 392 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 199 op/s
Nov 29 03:37:40 np0005539550 nova_compute[257631]: 2025-11-29 08:37:40.859 257641 DEBUG nova.compute.manager [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-changed-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:40 np0005539550 nova_compute[257631]: 2025-11-29 08:37:40.859 257641 DEBUG nova.compute.manager [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Refreshing instance network info cache due to event network-changed-c5532af0-4f88-4953-8da5-1553ef8d3b8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:37:40 np0005539550 nova_compute[257631]: 2025-11-29 08:37:40.859 257641 DEBUG oslo_concurrency.lockutils [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:37:40 np0005539550 nova_compute[257631]: 2025-11-29 08:37:40.860 257641 DEBUG oslo_concurrency.lockutils [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:37:40 np0005539550 nova_compute[257631]: 2025-11-29 08:37:40.860 257641 DEBUG nova.network.neutron [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Refreshing network info cache for port c5532af0-4f88-4953-8da5-1553ef8d3b8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.199 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405461.1986482, 850e472c-8d49-4ecb-8478-992f11eb6196 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.199 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] VM Started (Lifecycle Event)#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.202 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.206 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.210 257641 INFO nova.virt.libvirt.driver [-] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Instance spawned successfully.#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.211 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.335 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.342 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.346 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.347 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.348 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.348 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.349 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.349 257641 DEBUG nova.virt.libvirt.driver [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
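Having spawned the guest, the libvirt driver records the bus and model values it defaulted so that later operations (volume attach, rebuild) keep the same virtual hardware even if the global defaults change. The six values registered above, collected as they would appear as image properties:

    # Defaults recorded for instance 850e472c-... (values exactly as logged).
    registered_defaults = {
        'hw_cdrom_bus':     'sata',
        'hw_disk_bus':      'virtio',
        'hw_input_bus':     'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model':   'virtio',
        'hw_vif_model':     'virtio',
    }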
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.606 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.607 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405461.1987708, 850e472c-8d49-4ecb-8478-992f11eb6196 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.607 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.670 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.673 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405461.2057958, 850e472c-8d49-4ecb-8478-992f11eb6196 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.673 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.700 257641 INFO nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Took 10.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.701 257641 DEBUG nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.704 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.712 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.743 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:37:41 np0005539550 nova_compute[257631]: 2025-11-29 08:37:41.784 257641 INFO nova.compute.manager [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Took 12.61 seconds to build instance.#033[00m
Nov 29 03:37:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:41.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.016 257641 DEBUG oslo_concurrency.lockutils [None req-56b1fae0-ab39-4a55-b5c6-4941d48b1225 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.024 257641 DEBUG nova.compute.manager [req-47007351-4a05-40a7-8f59-91d36ccdfb92 req-faa7b502-3ad2-442b-9b21-d256fd6e6aaa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.024 257641 DEBUG oslo_concurrency.lockutils [req-47007351-4a05-40a7-8f59-91d36ccdfb92 req-faa7b502-3ad2-442b-9b21-d256fd6e6aaa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.024 257641 DEBUG oslo_concurrency.lockutils [req-47007351-4a05-40a7-8f59-91d36ccdfb92 req-faa7b502-3ad2-442b-9b21-d256fd6e6aaa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.024 257641 DEBUG oslo_concurrency.lockutils [req-47007351-4a05-40a7-8f59-91d36ccdfb92 req-faa7b502-3ad2-442b-9b21-d256fd6e6aaa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.025 257641 DEBUG nova.compute.manager [req-47007351-4a05-40a7-8f59-91d36ccdfb92 req-faa7b502-3ad2-442b-9b21-d256fd6e6aaa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] No waiting events found dispatching network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.025 257641 WARNING nova.compute.manager [req-47007351-4a05-40a7-8f59-91d36ccdfb92 req-faa7b502-3ad2-442b-9b21-d256fd6e6aaa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received unexpected event network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:37:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 413 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 207 op/s
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:42 np0005539550 nova_compute[257631]: 2025-11-29 08:37:42.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:43 np0005539550 nova_compute[257631]: 2025-11-29 08:37:43.597 257641 DEBUG nova.network.neutron [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updated VIF entry in instance network info cache for port c5532af0-4f88-4953-8da5-1553ef8d3b8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:37:43 np0005539550 nova_compute[257631]: 2025-11-29 08:37:43.597 257641 DEBUG nova.network.neutron [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updating instance_info_cache with network_info: [{"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:37:43 np0005539550 nova_compute[257631]: 2025-11-29 08:37:43.651 257641 DEBUG oslo_concurrency.lockutils [req-b928f24d-75a6-453f-89be-c99b165e96c6 req-7b10d2e9-709c-4f8c-9118-a94e72c7a483 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:37:43 np0005539550 nova_compute[257631]: 2025-11-29 08:37:43.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:43 np0005539550 nova_compute[257631]: 2025-11-29 08:37:43.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:37:43 np0005539550 nova_compute[257631]: 2025-11-29 08:37:43.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:37:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:43.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:44 np0005539550 nova_compute[257631]: 2025-11-29 08:37:44.042 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:44.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:44 np0005539550 nova_compute[257631]: 2025-11-29 08:37:44.131 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:37:44 np0005539550 nova_compute[257631]: 2025-11-29 08:37:44.131 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:37:44 np0005539550 nova_compute[257631]: 2025-11-29 08:37:44.132 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:37:44 np0005539550 nova_compute[257631]: 2025-11-29 08:37:44.132 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 46075bef-e2f4-434f-8d14-deccfa05cd2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:37:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 421 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 793 KiB/s rd, 4.7 MiB/s wr, 136 op/s
Nov 29 03:37:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:46.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:46.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:46 np0005539550 nova_compute[257631]: 2025-11-29 08:37:46.235 257641 DEBUG nova.compute.manager [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-changed-a84e5028-8ed0-4eb3-90e9-ca099984eb99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:37:46 np0005539550 nova_compute[257631]: 2025-11-29 08:37:46.235 257641 DEBUG nova.compute.manager [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Refreshing instance network info cache due to event network-changed-a84e5028-8ed0-4eb3-90e9-ca099984eb99. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:37:46 np0005539550 nova_compute[257631]: 2025-11-29 08:37:46.236 257641 DEBUG oslo_concurrency.lockutils [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:37:46 np0005539550 nova_compute[257631]: 2025-11-29 08:37:46.236 257641 DEBUG oslo_concurrency.lockutils [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:37:46 np0005539550 nova_compute[257631]: 2025-11-29 08:37:46.236 257641 DEBUG nova.network.neutron [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Refreshing network info cache for port a84e5028-8ed0-4eb3-90e9-ca099984eb99 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:37:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 219 op/s
Nov 29 03:37:47 np0005539550 nova_compute[257631]: 2025-11-29 08:37:47.743 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:48.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:48.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.123 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updating instance_info_cache with network_info: [{"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.139 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-46075bef-e2f4-434f-8d14-deccfa05cd2f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.140 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.140 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.140 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.141 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.141 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.141 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.141 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.164 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.165 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.165 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.166 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.167 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Nov 29 03:37:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833414936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.609 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.685 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.685 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b3 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.688 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.689 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.840 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.841 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3901MB free_disk=20.830760955810547GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.842 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.842 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.956 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 46075bef-e2f4-434f-8d14-deccfa05cd2f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.956 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 850e472c-8d49-4ecb-8478-992f11eb6196 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.957 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:37:48 np0005539550 nova_compute[257631]: 2025-11-29 08:37:48.957 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.002 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.044 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.280 257641 DEBUG nova.network.neutron [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Updated VIF entry in instance network info cache for port a84e5028-8ed0-4eb3-90e9-ca099984eb99. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.280 257641 DEBUG nova.network.neutron [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Updating instance_info_cache with network_info: [{"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.300 257641 DEBUG oslo_concurrency.lockutils [req-4f3bf7bd-bac1-47da-bfcf-fc093ac8afa9 req-0c1e22c9-7362-4dc2-8126-013ce7a087ad 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-850e472c-8d49-4ecb-8478-992f11eb6196" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:37:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:37:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2392959293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.440 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.446 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.464 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.489 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:37:49 np0005539550 nova_compute[257631]: 2025-11-29 08:37:49.489 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:37:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:50.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:50.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Nov 29 03:37:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:37:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:52.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:37:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:52.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 427 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Nov 29 03:37:52 np0005539550 nova_compute[257631]: 2025-11-29 08:37:52.745 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:54.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:54 np0005539550 nova_compute[257631]: 2025-11-29 08:37:54.079 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:54.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 435 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 676 KiB/s wr, 135 op/s
Nov 29 03:37:54 np0005539550 podman[369722]: 2025-11-29 08:37:54.332311251 +0000 UTC m=+0.068547861 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:37:54 np0005539550 podman[369721]: 2025-11-29 08:37:54.358942321 +0000 UTC m=+0.096210607 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:37:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:37:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:37:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:56.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:37:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:56.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:56Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:c9:46 10.100.0.10
Nov 29 03:37:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:37:56Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:c9:46 10.100.0.10
Nov 29 03:37:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 490 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.4 MiB/s wr, 182 op/s
Nov 29 03:37:57 np0005539550 nova_compute[257631]: 2025-11-29 08:37:57.269 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:57 np0005539550 nova_compute[257631]: 2025-11-29 08:37:57.270 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:37:57 np0005539550 nova_compute[257631]: 2025-11-29 08:37:57.748 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:37:58.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:37:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:37:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:37:58.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:37:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:37:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:37:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 490 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 3.3 MiB/s wr, 81 op/s
Nov 29 03:37:59 np0005539550 nova_compute[257631]: 2025-11-29 08:37:59.081 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:37:59
Nov 29 03:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Nov 29 03:37:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:38:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:00.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:00.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 506 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 538 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Nov 29 03:38:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:02.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:02.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.9 MiB/s wr, 153 op/s
Nov 29 03:38:02 np0005539550 podman[369760]: 2025-11-29 08:38:02.337204169 +0000 UTC m=+0.077953074 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:38:02 np0005539550 nova_compute[257631]: 2025-11-29 08:38:02.916 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.056371) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484056463, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1003, "num_deletes": 252, "total_data_size": 1452993, "memory_usage": 1473600, "flush_reason": "Manual Compaction"}
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484067051, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 1435391, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57556, "largest_seqno": 58558, "table_properties": {"data_size": 1430581, "index_size": 2333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11190, "raw_average_key_size": 20, "raw_value_size": 1420623, "raw_average_value_size": 2555, "num_data_blocks": 102, "num_entries": 556, "num_filter_entries": 556, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405405, "oldest_key_time": 1764405405, "file_creation_time": 1764405484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 10733 microseconds, and 4768 cpu microseconds.
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.067117) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 1435391 bytes OK
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.067138) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.068457) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.068470) EVENT_LOG_v1 {"time_micros": 1764405484068466, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.068486) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 1448300, prev total WAL file size 1448300, number of live WAL files 2.
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.069077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(1401KB)], [128(13MB)]
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484069131, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 15312262, "oldest_snapshot_seqno": -1}
Nov 29 03:38:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:04.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:04 np0005539550 nova_compute[257631]: 2025-11-29 08:38:04.125 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 9212 keys, 13395534 bytes, temperature: kUnknown
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484196987, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 13395534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13334158, "index_size": 37263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23045, "raw_key_size": 243294, "raw_average_key_size": 26, "raw_value_size": 13170141, "raw_average_value_size": 1429, "num_data_blocks": 1426, "num_entries": 9212, "num_filter_entries": 9212, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.197408) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 13395534 bytes
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.250031) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.6 rd, 104.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(20.0) write-amplify(9.3) OK, records in: 9733, records dropped: 521 output_compression: NoCompression
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.250100) EVENT_LOG_v1 {"time_micros": 1764405484250075, "job": 78, "event": "compaction_finished", "compaction_time_micros": 127994, "compaction_time_cpu_micros": 34071, "output_level": 6, "num_output_files": 1, "total_output_size": 13395534, "num_input_records": 9733, "num_output_records": 9212, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484251139, "job": 78, "event": "table_file_deletion", "file_number": 130}
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405484255403, "job": 78, "event": "table_file_deletion", "file_number": 128}
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.068987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.255542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.255548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.255551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.255553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:04.255556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Nov 29 03:38:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:06.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:06.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.6 MiB/s wr, 152 op/s
Nov 29 03:38:07 np0005539550 nova_compute[257631]: 2025-11-29 08:38:07.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:38:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:08.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 644 KiB/s wr, 96 op/s
Nov 29 03:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:38:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:38:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Nov 29 03:38:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Nov 29 03:38:08 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Nov 29 03:38:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:38:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:38:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:38:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:38:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
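
Both rbd_support handlers reload per-pool schedules for the same four pools with an empty start_after cursor, i.e. a full reload. The admin-facing view of the same state is available through the rbd CLI; a sketch (pool names taken from the lines above; that both schedule types are queryable this way on this cluster is an assumption of the sketch):

    import subprocess

    # Query the schedules the two mgr handlers above just reloaded.
    POOLS = ["vms", "volumes", "backups", "images"]

    for pool in POOLS:
        for schedule in ("mirror snapshot schedule", "trash purge schedule"):
            out = subprocess.run(
                ["rbd", *schedule.split(), "ls", "--pool", pool],
                capture_output=True, text=True,
            )
            print(pool, schedule, out.stdout.strip() or "(none)")
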
Nov 29 03:38:09 np0005539550 nova_compute[257631]: 2025-11-29 08:38:09.128 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:10.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 490 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 89 KiB/s wr, 105 op/s
Nov 29 03:38:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Nov 29 03:38:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Nov 29 03:38:10 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Nov 29 03:38:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Nov 29 03:38:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:12.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Nov 29 03:38:12 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Nov 29 03:38:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:12.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 474 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 4.7 MiB/s wr, 176 op/s
Nov 29 03:38:12 np0005539550 nova_compute[257631]: 2025-11-29 08:38:12.921 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:14 np0005539550 nova_compute[257631]: 2025-11-29 08:38:14.130 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:14.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 501 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 8.2 MiB/s wr, 246 op/s
Nov 29 03:38:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Nov 29 03:38:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Nov 29 03:38:15 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Nov 29 03:38:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:16.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:16.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 534 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.6 MiB/s rd, 15 MiB/s wr, 434 op/s
Nov 29 03:38:17 np0005539550 nova_compute[257631]: 2025-11-29 08:38:17.923 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:18.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 534 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 12 MiB/s wr, 273 op/s
Nov 29 03:38:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:18.969 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:18.970 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:18.971 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
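
The acquire/wait/release triple above is oslo.concurrency's standard lock logging; the same three debug lines are produced by decorating a method with lockutils.synchronized. A minimal sketch of the pattern (the function body is a placeholder, not neutron's code):

    from oslo_concurrency import lockutils

    # Concurrent callers serialize on the named lock and log their
    # wait time on acquire and hold time on release, as seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # runs with the named lock held
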
Nov 29 03:38:19 np0005539550 nova_compute[257631]: 2025-11-29 08:38:19.157 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:20.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.133 257641 DEBUG oslo_concurrency.lockutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.135 257641 DEBUG oslo_concurrency.lockutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.150 257641 DEBUG nova.objects.instance [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid 850e472c-8d49-4ecb-8478-992f11eb6196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.249 257641 DEBUG oslo_concurrency.lockutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008300690373162107 of space, bias 1.0, pg target 2.490207111948632 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021625052345058625 of space, bias 1.0, pg target 0.644426559882747 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005021127047273404 of space, bias 1.0, pg target 1.4962958600874745 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
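
The autoscaler's "using X of space" figures are ratios against the byte count repeated on every effective_target_ratio line (22535995392 bytes, which matches the 21 GiB total in the pgmap lines — that interpretation is inferred, not stated in the log). Reversing the ratio gives approximate per-pool usage; a sketch using values copied from above:

    # Approximate per-pool usage implied by the autoscaler ratios.
    CAPACITY = 22535995392  # bytes, from the effective_target_ratio lines

    usage = {
        ".mgr":    2.0538165363856318e-05,
        "vms":     0.008300690373162107,
        "volumes": 0.0021625052345058625,
        "images":  0.005021127047273404,
    }

    for pool, ratio in usage.items():
        print(f"{pool:8s} ~{ratio * CAPACITY / 2**20:8.1f} MiB")
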
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 519 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 8.1 MiB/s wr, 213 op/s
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.458 257641 DEBUG oslo_concurrency.lockutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.459 257641 DEBUG oslo_concurrency.lockutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.459 257641 INFO nova.compute.manager [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Attaching volume ce79c491-0b98-4186-bc37-6c4796c913e1 to /dev/vdb#033[00m
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.616 257641 DEBUG os_brick.utils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.618 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.629 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.630 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[d054fde8-eec3-4062-9bde-e0c0c34ca7d6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.631 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.639 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.640 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[bc6981bc-39ca-45d5-9a43-8d91af158d93]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.641 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.649 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.649 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[d70e4418-eab4-4f77-9018-6bbc652285ce]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.650 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[9e06e5ba-5f30-47d2-84f6-dd8c3eb1cd63]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.651 257641 DEBUG oslo_concurrency.processutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.679 257641 DEBUG oslo_concurrency.processutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.682 257641 DEBUG os_brick.initiator.connectors.lightos [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.683 257641 DEBUG os_brick.initiator.connectors.lightos [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.683 257641 DEBUG os_brick.initiator.connectors.lightos [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.683 257641 DEBUG os_brick.utils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
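
The connector-properties dict returned above is assembled from the host probes logged just before it: multipathd state, the iSCSI initiator name, the root filesystem source, and the nvme CLI. A sketch re-running the same probes directly, without the privsep daemon that os-brick routes them through (the dict keys below are the sketch's own, not os-brick's):

    import subprocess

    # Re-run the host probes os-brick logged executing above.
    def run(*cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout.strip()

    props = {
        "multipath_status": run("multipathd", "show", "status"),
        "initiator": run("cat", "/etc/iscsi/initiatorname.iscsi"),
        "root_source": run("findmnt", "-v", "/", "-n", "-o", "SOURCE"),
        "nvme_version": run("nvme", "version"),
    }
    print(props)
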
Nov 29 03:38:20 np0005539550 nova_compute[257631]: 2025-11-29 08:38:20.684 257641 DEBUG nova.virt.block_device [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Updating existing volume attachment record: 25d43d9c-1ef7-40b5-b1d7-69efabe2ceb4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 010c967e-8a45-4147-9ead-44d2984122da does not exist
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0b54fbf5-a955-4ba4-a757-c288b31d6eec does not exist
Nov 29 03:38:20 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ff45c8ef-353a-435a-af50-7cee42bc1c0c does not exist
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:38:21 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:38:21 np0005539550 podman[370246]: 2025-11-29 08:38:21.322983615 +0000 UTC m=+0.039856729 container create 349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:38:21 np0005539550 systemd[1]: Started libpod-conmon-349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94.scope.
Nov 29 03:38:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:38:21 np0005539550 podman[370246]: 2025-11-29 08:38:21.400107348 +0000 UTC m=+0.116980472 container init 349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kirch, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:38:21 np0005539550 podman[370246]: 2025-11-29 08:38:21.304293921 +0000 UTC m=+0.021167045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.401 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.402 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.402 257641 INFO nova.compute.manager [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Unshelving#033[00m
Nov 29 03:38:21 np0005539550 podman[370246]: 2025-11-29 08:38:21.40745444 +0000 UTC m=+0.124327544 container start 349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:38:21 np0005539550 podman[370246]: 2025-11-29 08:38:21.410523926 +0000 UTC m=+0.127397030 container attach 349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:38:21 np0005539550 crazy_kirch[370262]: 167 167
Nov 29 03:38:21 np0005539550 systemd[1]: libpod-349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94.scope: Deactivated successfully.
Nov 29 03:38:21 np0005539550 podman[370246]: 2025-11-29 08:38:21.413363517 +0000 UTC m=+0.130236621 container died 349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kirch, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:38:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-43aeef17783a1ee8fb53ab76cf0964fedf5cbbd3b40e274f3beec84a65f6423f-merged.mount: Deactivated successfully.
Nov 29 03:38:21 np0005539550 podman[370246]: 2025-11-29 08:38:21.467184701 +0000 UTC m=+0.184057805 container remove 349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:38:21 np0005539550 systemd[1]: libpod-conmon-349fdf1a430548ac822636a4920abf3187d171630b996dc6e144b5512be77b94.scope: Deactivated successfully.
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.488 257641 INFO nova.virt.block_device [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Booting with volume d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd at /dev/vdc#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.542 257641 DEBUG nova.objects.instance [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid 850e472c-8d49-4ecb-8478-992f11eb6196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.564 257641 DEBUG nova.virt.libvirt.driver [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Attempting to attach volume ce79c491-0b98-4186-bc37-6c4796c913e1 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.568 257641 DEBUG nova.virt.libvirt.guest [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:38:21 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-ce79c491-0b98-4186-bc37-6c4796c913e1">
Nov 29 03:38:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:38:21 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:38:21 np0005539550 nova_compute[257631]:  <serial>ce79c491-0b98-4186-bc37-6c4796c913e1</serial>
Nov 29 03:38:21 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:38:21 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
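
The attach-device XML above maps directly onto libvirt's network-disk schema: an RBD source with the three monitor addresses, a cephx secret reference, and a virtio target. A sketch that rebuilds the same element tree (nova generates this through its libvirt config objects rather than hand-assembling XML; the helper below is illustrative only, with values copied from the log):

    import xml.etree.ElementTree as ET

    # Rebuild the logged attach-device XML for an RBD-backed disk.
    def rbd_disk_xml(volume, mons, secret_uuid, dev="vdb"):
        disk = ET.Element("disk", type="network", device="disk")
        ET.SubElement(disk, "driver", name="qemu", type="raw",
                      cache="none", discard="unmap")
        src = ET.SubElement(disk, "source", protocol="rbd",
                            name=f"volumes/volume-{volume}")
        for mon in mons:
            ET.SubElement(src, "host", name=mon, port="6789")
        auth = ET.SubElement(disk, "auth", username="openstack")
        ET.SubElement(auth, "secret", type="ceph", uuid=secret_uuid)
        ET.SubElement(disk, "target", dev=dev, bus="virtio")
        ET.SubElement(disk, "serial").text = volume
        return ET.tostring(disk, encoding="unicode")

    print(rbd_disk_xml("ce79c491-0b98-4186-bc37-6c4796c913e1",
                       ["192.168.122.100", "192.168.122.102", "192.168.122.101"],
                       "b66774a7-56d9-5535-bd8c-681234404870"))
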
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.649 257641 DEBUG os_brick.utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.650 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.665 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.666 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[672e4006-ea32-4fd9-a0c7-153517b6183d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539550 podman[370305]: 2025-11-29 08:38:21.66711607 +0000 UTC m=+0.039173583 container create b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.668 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.675 257641 DEBUG nova.virt.libvirt.driver [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.675 257641 DEBUG nova.virt.libvirt.driver [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.676 257641 DEBUG nova.virt.libvirt.driver [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.676 257641 DEBUG nova.virt.libvirt.driver [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] No VIF found with MAC fa:16:3e:f7:c9:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.682 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.683 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[3405ab4a-c2a7-4783-950c-340d17ff5517]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.684 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.693 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.693 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb6cbcf-7448-4ac9-bee2-a7c0f240d070]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.697 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[22aa3908-fbf3-443e-9768-12b3f4af50ab]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.697 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:21 np0005539550 systemd[1]: Started libpod-conmon-b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0.scope.
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.728 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.731 257641 DEBUG os_brick.initiator.connectors.lightos [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.732 257641 DEBUG os_brick.initiator.connectors.lightos [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.732 257641 DEBUG os_brick.initiator.connectors.lightos [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.732 257641 DEBUG os_brick.utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.733 257641 DEBUG nova.virt.block_device [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating existing volume attachment record: a1863a6a-f772-48e4-ba02-325f54dbdba8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:38:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:38:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c69bbc851051abfe34921f12915b683407d3982d899a8f23f4579174c086ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c69bbc851051abfe34921f12915b683407d3982d899a8f23f4579174c086ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c69bbc851051abfe34921f12915b683407d3982d899a8f23f4579174c086ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c69bbc851051abfe34921f12915b683407d3982d899a8f23f4579174c086ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:21 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c69bbc851051abfe34921f12915b683407d3982d899a8f23f4579174c086ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:21 np0005539550 podman[370305]: 2025-11-29 08:38:21.651710968 +0000 UTC m=+0.023768481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:21 np0005539550 podman[370305]: 2025-11-29 08:38:21.757193914 +0000 UTC m=+0.129251447 container init b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:38:21 np0005539550 podman[370305]: 2025-11-29 08:38:21.764140966 +0000 UTC m=+0.136198479 container start b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:38:21 np0005539550 podman[370305]: 2025-11-29 08:38:21.768083654 +0000 UTC m=+0.140141167 container attach b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:38:21 np0005539550 nova_compute[257631]: 2025-11-29 08:38:21.929 257641 DEBUG oslo_concurrency.lockutils [None req-232da8ef-e09f-42f3-a61b-aeb04684d9af bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.470s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:22.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:38:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:22.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 458 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 544 KiB/s rd, 5.4 MiB/s wr, 205 op/s
Nov 29 03:38:22 np0005539550 eager_germain[370327]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:38:22 np0005539550 eager_germain[370327]: --> relative data size: 1.0
Nov 29 03:38:22 np0005539550 eager_germain[370327]: --> All data devices are unavailable
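The arrow-prefixed lines above are ceph-volume batch output from the short-lived `eager_germain` container: cephadm re-evaluates the OSD spec, sees one LVM data device that is already consumed, and reports nothing left to deploy. A hedged sketch of reproducing that planning step in report-only mode, assuming the LV path that appears in the inventory later in this log:

```python
# Hedged sketch: re-run ceph-volume's batch planning as a dry run.
# An LV already tagged for an existing OSD is reported as unavailable,
# which is what the "All data devices are unavailable" line reflects.
import subprocess

result = subprocess.run(
    ["ceph-volume", "lvm", "batch", "--report", "--format", "json",
     "/dev/ceph_vg0/ceph_lv0"],  # LV path taken from the listing below
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```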
Nov 29 03:38:22 np0005539550 systemd[1]: libpod-b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0.scope: Deactivated successfully.
Nov 29 03:38:22 np0005539550 podman[370305]: 2025-11-29 08:38:22.606439996 +0000 UTC m=+0.978497549 container died b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:38:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-19c69bbc851051abfe34921f12915b683407d3982d899a8f23f4579174c086ce-merged.mount: Deactivated successfully.
Nov 29 03:38:22 np0005539550 podman[370305]: 2025-11-29 08:38:22.672930285 +0000 UTC m=+1.044987798 container remove b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:38:22 np0005539550 systemd[1]: libpod-conmon-b8790fa3978dc1ca014dc68ea313ab3c46648df97037ebb0424898f5114be3a0.scope: Deactivated successfully.
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.757 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.758 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.765 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'pci_requests' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.794 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.812 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.812 257641 INFO nova.compute.claims [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.927 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:22 np0005539550 nova_compute[257631]: 2025-11-29 08:38:22.971 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:23 np0005539550 podman[370516]: 2025-11-29 08:38:23.326048783 +0000 UTC m=+0.046631097 container create ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:38:23 np0005539550 systemd[1]: Started libpod-conmon-ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad.scope.
Nov 29 03:38:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:38:23 np0005539550 podman[370516]: 2025-11-29 08:38:23.305528005 +0000 UTC m=+0.026110349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1200237701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:23 np0005539550 podman[370516]: 2025-11-29 08:38:23.416719512 +0000 UTC m=+0.137301856 container init ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:38:23 np0005539550 podman[370516]: 2025-11-29 08:38:23.425058729 +0000 UTC m=+0.145641043 container start ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.429 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
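The `ceph df` round trip above (dispatched at 08:38:22.971, audited by the mon, returned in 0.457s) is how nova's RBD backend samples cluster capacity during a resource claim. A minimal sketch of the same probe, assuming the `client.openstack` keyring and conf path from the logged command; the field names are the ones `ceph df --format=json` emits, not nova's internal API:

```python
# Run the capacity probe nova logged above and pull the cluster-wide
# totals out of the JSON reply.
import json
import subprocess

def ceph_capacity(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        timeout=30,
    )
    stats = json.loads(out)["stats"]
    return stats["total_avail_bytes"], stats["total_bytes"]

avail, total = ceph_capacity()
print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```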
Nov 29 03:38:23 np0005539550 kind_cartwright[370532]: 167 167
Nov 29 03:38:23 np0005539550 systemd[1]: libpod-ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad.scope: Deactivated successfully.
Nov 29 03:38:23 np0005539550 podman[370516]: 2025-11-29 08:38:23.439490047 +0000 UTC m=+0.160072361 container attach ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.439 257641 DEBUG nova.compute.provider_tree [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:23 np0005539550 podman[370516]: 2025-11-29 08:38:23.439970949 +0000 UTC m=+0.160553263 container died ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cartwright, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.476 257641 DEBUG nova.scheduler.client.report [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-02c3cb3089d4f3c4624dc369fcdc573ecc40cdf50e60f1925cb4134e73a35714-merged.mount: Deactivated successfully.
Nov 29 03:38:23 np0005539550 podman[370516]: 2025-11-29 08:38:23.491955898 +0000 UTC m=+0.212538212 container remove ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_cartwright, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:38:23 np0005539550 systemd[1]: libpod-conmon-ed92b0519ddbe72db582dfd5ac50a5c6c5bf5e51a2fa1f728eb17b96faebcfad.scope: Deactivated successfully.
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.500 257641 DEBUG oslo_concurrency.lockutils [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.501 257641 DEBUG oslo_concurrency.lockutils [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.526 257641 INFO nova.compute.manager [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Detaching volume ce79c491-0b98-4186-bc37-6c4796c913e1#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.530 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:23 np0005539550 podman[370559]: 2025-11-29 08:38:23.671172823 +0000 UTC m=+0.040626559 container create 1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:38:23 np0005539550 systemd[1]: Started libpod-conmon-1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac.scope.
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.716 257641 INFO nova.virt.block_device [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Attempting to driver detach volume ce79c491-0b98-4186-bc37-6c4796c913e1 from mountpoint /dev/vdb#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.726 257641 DEBUG nova.virt.libvirt.driver [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Attempting to detach device vdb from instance 850e472c-8d49-4ecb-8478-992f11eb6196 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.727 257641 DEBUG nova.virt.libvirt.guest [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-ce79c491-0b98-4186-bc37-6c4796c913e1">
Nov 29 03:38:23 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <serial>ce79c491-0b98-4186-bc37-6c4796c913e1</serial>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:38:23 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:38:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:38:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c033933ec259f75cdab114ad03995d5e8af77e1f02cacd9dc9142762fd3823e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c033933ec259f75cdab114ad03995d5e8af77e1f02cacd9dc9142762fd3823e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.748 257641 INFO nova.virt.libvirt.driver [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully detached device vdb from instance 850e472c-8d49-4ecb-8478-992f11eb6196 from the persistent domain config.#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.749 257641 DEBUG nova.virt.libvirt.driver [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 850e472c-8d49-4ecb-8478-992f11eb6196 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:38:23 np0005539550 podman[370559]: 2025-11-29 08:38:23.65369754 +0000 UTC m=+0.023151306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c033933ec259f75cdab114ad03995d5e8af77e1f02cacd9dc9142762fd3823e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.749 257641 DEBUG nova.virt.libvirt.guest [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-ce79c491-0b98-4186-bc37-6c4796c913e1">
Nov 29 03:38:23 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <serial>ce79c491-0b98-4186-bc37-6c4796c913e1</serial>
Nov 29 03:38:23 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:38:23 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:38:23 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:38:23 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c033933ec259f75cdab114ad03995d5e8af77e1f02cacd9dc9142762fd3823e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:23 np0005539550 podman[370559]: 2025-11-29 08:38:23.761139064 +0000 UTC m=+0.130592820 container init 1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:38:23 np0005539550 podman[370559]: 2025-11-29 08:38:23.768684871 +0000 UTC m=+0.138138607 container start 1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.770 257641 INFO nova.network.neutron [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating port 6fb79eb2-29a8-4947-8bac-7bed17841673 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Nov 29 03:38:23 np0005539550 podman[370559]: 2025-11-29 08:38:23.773152902 +0000 UTC m=+0.142606628 container attach 1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.810 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405503.80965, 850e472c-8d49-4ecb-8478-992f11eb6196 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.812 257641 DEBUG nova.virt.libvirt.driver [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 850e472c-8d49-4ecb-8478-992f11eb6196 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.814 257641 INFO nova.virt.libvirt.driver [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully detached device vdb from instance 850e472c-8d49-4ecb-8478-992f11eb6196 from the live domain config.#033[00m
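The detach traced above is two-phase: nova removes the disk from the persistent domain definition first, then from the live domain, and waits for libvirt's device-removed event (seen here as `DeviceRemovedEvent ... virtio-disk1`) before declaring success. A hedged sketch of that sequence with the libvirt Python bindings; the trimmed disk XML and instance UUID are the logged values:

```python
# Hedged sketch of nova's two-phase volume detach: persistent config
# first (survives reboots), then the live domain. The live detach is
# asynchronous and completes when libvirt raises DEVICE_REMOVED.
import libvirt

DISK_XML = """<disk type="network" device="disk">
  <source protocol="rbd" name="volumes/volume-ce79c491-0b98-4186-bc37-6c4796c913e1"/>
  <target dev="vdb" bus="virtio"/>
</disk>"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("850e472c-8d49-4ecb-8478-992f11eb6196")
dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
```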
Nov 29 03:38:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:23.929 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:23.930 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:38:23 np0005539550 nova_compute[257631]: 2025-11-29 08:38:23.988 257641 DEBUG nova.objects.instance [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'flavor' on Instance uuid 850e472c-8d49-4ecb-8478-992f11eb6196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:24 np0005539550 nova_compute[257631]: 2025-11-29 08:38:24.036 257641 DEBUG oslo_concurrency.lockutils [None req-9045af43-b9b0-4b10-a9ec-72980e22ee13 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:24.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:24.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:24 np0005539550 nova_compute[257631]: 2025-11-29 08:38:24.160 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 460 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 4.3 MiB/s wr, 158 op/s
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]: {
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:    "0": [
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:        {
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "devices": [
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "/dev/loop3"
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            ],
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "lv_name": "ceph_lv0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "lv_size": "7511998464",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "name": "ceph_lv0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "tags": {
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.cluster_name": "ceph",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.crush_device_class": "",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.encrypted": "0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.osd_id": "0",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.type": "block",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:                "ceph.vdo": "0"
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            },
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "type": "block",
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:            "vg_name": "ceph_vg0"
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:        }
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]:    ]
Nov 29 03:38:24 np0005539550 heuristic_khayyam[370575]: }
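The JSON printed by the `heuristic_khayyam` container has the shape of `ceph-volume lvm list --format json`: top-level keys are OSD ids, each mapping to a list of logical volumes with their `ceph.*` tags. A small sketch for consuming output of that shape (the filename is a placeholder; cephadm reads this from the container's stdout):

```python
# Walk ceph-volume's per-OSD LV listing and print the fields that
# matter for mapping OSDs to storage: the backing device, LV path,
# and the OSD fsid recorded in the LV tags.
import json

with open("ceph-volume-lvm-list.json") as f:  # placeholder path
    osds = json.load(f)

for osd_id, lvs in osds.items():
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, type={tags['ceph.type']})")
```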
Nov 29 03:38:24 np0005539550 systemd[1]: libpod-1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac.scope: Deactivated successfully.
Nov 29 03:38:24 np0005539550 podman[370559]: 2025-11-29 08:38:24.568498458 +0000 UTC m=+0.937952224 container died 1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 29 03:38:24 np0005539550 nova_compute[257631]: 2025-11-29 08:38:24.570 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:24 np0005539550 nova_compute[257631]: 2025-11-29 08:38:24.571 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:24 np0005539550 nova_compute[257631]: 2025-11-29 08:38:24.571 257641 DEBUG nova.network.neutron [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:38:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6c033933ec259f75cdab114ad03995d5e8af77e1f02cacd9dc9142762fd3823e-merged.mount: Deactivated successfully.
Nov 29 03:38:24 np0005539550 podman[370559]: 2025-11-29 08:38:24.635585932 +0000 UTC m=+1.005039668 container remove 1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:38:24 np0005539550 systemd[1]: libpod-conmon-1b50b4dc87d0d42bbd60782d1841cbeaf2f26f30a9ad590e44b9aad5a9102eac.scope: Deactivated successfully.
Nov 29 03:38:24 np0005539550 podman[370587]: 2025-11-29 08:38:24.691588561 +0000 UTC m=+0.087093052 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Nov 29 03:38:24 np0005539550 podman[370594]: 2025-11-29 08:38:24.711881763 +0000 UTC m=+0.101534258 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.075 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.076 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.076 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.076 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.076 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.077 257641 INFO nova.compute.manager [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Terminating instance#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.078 257641 DEBUG nova.compute.manager [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:38:25 np0005539550 kernel: tapa84e5028-8e (unregistering): left promiscuous mode
Nov 29 03:38:25 np0005539550 NetworkManager[49039]: <info>  [1764405505.1296] device (tapa84e5028-8e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:25Z|00834|binding|INFO|Releasing lport a84e5028-8ed0-4eb3-90e9-ca099984eb99 from this chassis (sb_readonly=0)
Nov 29 03:38:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:25Z|00835|binding|INFO|Setting lport a84e5028-8ed0-4eb3-90e9-ca099984eb99 down in Southbound
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.142 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:25Z|00836|binding|INFO|Removing iface tapa84e5028-8e ovn-installed in OVS
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.144 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:25.150 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:c9:46 10.100.0.10'], port_security=['fa:16:3e:f7:c9:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '850e472c-8d49-4ecb-8478-992f11eb6196', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '368e3a44279843f5947188dd045d65b6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d1b9fcd-bea0-45d6-8ee2-94c92e3180fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=93400fd2-d19f-44bb-bf19-75f9854fcf6d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=a84e5028-8ed0-4eb3-90e9-ca099984eb99) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:25.151 158978 INFO neutron.agent.ovn.metadata.agent [-] Port a84e5028-8ed0-4eb3-90e9-ca099984eb99 in datapath 0183ad73-05c1-46e4-ba3e-b87d7a948c3b unbound from our chassis#033[00m
Nov 29 03:38:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:25.153 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0183ad73-05c1-46e4-ba3e-b87d7a948c3b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:25.154 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd1f756-6ad9-4096-b6d6-b4d9c18d925e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:25.154 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b namespace which is not needed anymore#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.160 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539550 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000b5.scope: Deactivated successfully.
Nov 29 03:38:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:25 np0005539550 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000b5.scope: Consumed 15.735s CPU time.
Nov 29 03:38:25 np0005539550 systemd-machined[216673]: Machine qemu-98-instance-000000b5 terminated.
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.320 257641 INFO nova.virt.libvirt.driver [-] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Instance destroyed successfully.#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.320 257641 DEBUG nova.objects.instance [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lazy-loading 'resources' on Instance uuid 850e472c-8d49-4ecb-8478-992f11eb6196 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:25 np0005539550 podman[370796]: 2025-11-29 08:38:25.241368895 +0000 UTC m=+0.021964846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.344 257641 DEBUG nova.virt.libvirt.vif [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:37:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-2050034368',display_name='tempest-AttachVolumeNegativeTest-server-2050034368',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-2050034368',id=181,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN5Fw+YVl/LXCg78vO+EAU7iLZjD4Foeh/qpUlXRUX4yj1EvhYtXjfSgQ22GkRcPqGa0OkWPhkZ89PcRG3pfCX8R3xMT3yI7RwiiviWm2pxtBubDmed887Hy8nUam2un8g==',key_name='tempest-keypair-591583954',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='368e3a44279843f5947188dd045d65b6',ramdisk_id='',reservation_id='r-ca09wi6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1895715059',owner_user_name='tempest-AttachVolumeNegativeTest-1895715059-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='bdbcdbdc435844ee8d866288c969331b',uuid=850e472c-8d49-4ecb-8478-992f11eb6196,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.344 257641 DEBUG nova.network.os_vif_util [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converting VIF {"id": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "address": "fa:16:3e:f7:c9:46", "network": {"id": "0183ad73-05c1-46e4-ba3e-b87d7a948c3b", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-1280517693-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "368e3a44279843f5947188dd045d65b6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa84e5028-8e", "ovs_interfaceid": "a84e5028-8ed0-4eb3-90e9-ca099984eb99", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.345 257641 DEBUG nova.network.os_vif_util [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:c9:46,bridge_name='br-int',has_traffic_filtering=True,id=a84e5028-8ed0-4eb3-90e9-ca099984eb99,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa84e5028-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.345 257641 DEBUG os_vif [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:c9:46,bridge_name='br-int',has_traffic_filtering=True,id=a84e5028-8ed0-4eb3-90e9-ca099984eb99,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa84e5028-8e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.346 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.347 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa84e5028-8e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.348 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.350 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:25 np0005539550 nova_compute[257631]: 2025-11-29 08:38:25.352 257641 INFO os_vif [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:c9:46,bridge_name='br-int',has_traffic_filtering=True,id=a84e5028-8ed0-4eb3-90e9-ca099984eb99,network=Network(0183ad73-05c1-46e4-ba3e-b87d7a948c3b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa84e5028-8e')#033[00m
Nov 29 03:38:25 np0005539550 podman[370796]: 2025-11-29 08:38:25.550756108 +0000 UTC m=+0.331352039 container create 53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:38:25 np0005539550 systemd[1]: Started libpod-conmon-53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb.scope.
Nov 29 03:38:25 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:38:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:26.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:26 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[369554]: [NOTICE]   (369558) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:26 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[369554]: [NOTICE]   (369558) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:26 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[369554]: [WARNING]  (369558) : Exiting Master process...
Nov 29 03:38:26 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[369554]: [ALERT]    (369558) : Current worker (369560) exited with code 143 (Terminated)
Nov 29 03:38:26 np0005539550 neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b[369554]: [WARNING]  (369558) : All workers exited. Exiting... (0)
Nov 29 03:38:26 np0005539550 systemd[1]: libpod-0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567.scope: Deactivated successfully.
Nov 29 03:38:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.290 257641 DEBUG nova.compute.manager [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.291 257641 DEBUG nova.compute.manager [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing instance network info cache due to event network-changed-6fb79eb2-29a8-4947-8bac-7bed17841673. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.292 257641 DEBUG oslo_concurrency.lockutils [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:38:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 424 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 112 op/s
Nov 29 03:38:26 np0005539550 podman[370796]: 2025-11-29 08:38:26.3879126 +0000 UTC m=+1.168508581 container init 53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hugle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:38:26 np0005539550 podman[370796]: 2025-11-29 08:38:26.398353469 +0000 UTC m=+1.178949400 container start 53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:38:26 np0005539550 podman[370796]: 2025-11-29 08:38:26.401780764 +0000 UTC m=+1.182376715 container attach 53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hugle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:26 np0005539550 zen_hugle[370856]: 167 167
Nov 29 03:38:26 np0005539550 systemd[1]: libpod-53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb.scope: Deactivated successfully.
Nov 29 03:38:26 np0005539550 podman[370796]: 2025-11-29 08:38:26.405811554 +0000 UTC m=+1.186407485 container died 53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hugle, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b43028ff2c4eaf48b881859d0fd764386bc65031e761c02e53bf113a42050e91-merged.mount: Deactivated successfully.
Nov 29 03:38:26 np0005539550 podman[370796]: 2025-11-29 08:38:26.450646796 +0000 UTC m=+1.231242727 container remove 53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:38:26 np0005539550 systemd[1]: libpod-conmon-53983637e69e057157ae92d0a52c88829a098e8fc7b6b3efbb07bddfb98910cb.scope: Deactivated successfully.
Nov 29 03:38:26 np0005539550 podman[370810]: 2025-11-29 08:38:26.50967902 +0000 UTC m=+1.262395390 container died 0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f39936376fa3f74de336b6c536775dafc84d42763e89208de1ea7f7af9cebb28-merged.mount: Deactivated successfully.
Nov 29 03:38:26 np0005539550 podman[370810]: 2025-11-29 08:38:26.542768801 +0000 UTC m=+1.295485171 container cleanup 0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:38:26 np0005539550 systemd[1]: libpod-conmon-0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567.scope: Deactivated successfully.
Nov 29 03:38:26 np0005539550 podman[370892]: 2025-11-29 08:38:26.612602743 +0000 UTC m=+0.045092659 container remove 0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.618 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[11ecc777-8cff-45b8-978b-634c97109874]: (4, ('Sat Nov 29 08:38:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567)\n0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567\nSat Nov 29 08:38:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b (0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567)\n0db3ca4c178e770e62b6609ca8aab6a34831d86669d39d762e00eccf8f272567\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.621 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a210ddae-78dc-49d5-a6d5-6aba6554e1df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.622 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0183ad73-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.661 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:26 np0005539550 kernel: tap0183ad73-00: left promiscuous mode
Nov 29 03:38:26 np0005539550 podman[370908]: 2025-11-29 08:38:26.673826782 +0000 UTC m=+0.076557430 container create 00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.685 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.687 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6875bc-9b48-4f0f-907b-df30ea128a99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.702 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[53e78a13-7020-49dc-baa4-0746e364bd80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.705 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d5144b-03af-4704-9368-5ec5e6689eef]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:26 np0005539550 podman[370908]: 2025-11-29 08:38:26.626442326 +0000 UTC m=+0.029173004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:38:26 np0005539550 systemd[1]: Started libpod-conmon-00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b.scope.
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.724 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9fc60763-e83c-4011-8fb0-27c095a2fec7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 850701, 'reachable_time': 22329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370941, 'error': None, 'target': 'ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:26 np0005539550 systemd[1]: run-netns-ovnmeta\x2d0183ad73\x2d05c1\x2d46e4\x2dba3e\x2db87d7a948c3b.mount: Deactivated successfully.
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.728 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0183ad73-05c1-46e4-ba3e-b87d7a948c3b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:26.728 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea1db6c-3211-4cdf-9b4e-af3ce54e809b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:38:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5f1f2909328c5aa030f7ed0800977eac275d83069904dfc40a4110dc1fe165/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5f1f2909328c5aa030f7ed0800977eac275d83069904dfc40a4110dc1fe165/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5f1f2909328c5aa030f7ed0800977eac275d83069904dfc40a4110dc1fe165/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5f1f2909328c5aa030f7ed0800977eac275d83069904dfc40a4110dc1fe165/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:26 np0005539550 podman[370908]: 2025-11-29 08:38:26.767366611 +0000 UTC m=+0.170097289 container init 00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mccarthy, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:38:26 np0005539550 podman[370908]: 2025-11-29 08:38:26.777108903 +0000 UTC m=+0.179839541 container start 00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mccarthy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:38:26 np0005539550 podman[370908]: 2025-11-29 08:38:26.787315416 +0000 UTC m=+0.190046084 container attach 00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mccarthy, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.868 257641 DEBUG nova.compute.manager [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-vif-unplugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.870 257641 DEBUG oslo_concurrency.lockutils [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.870 257641 DEBUG oslo_concurrency.lockutils [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.870 257641 DEBUG oslo_concurrency.lockutils [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.870 257641 DEBUG nova.compute.manager [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] No waiting events found dispatching network-vif-unplugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.870 257641 DEBUG nova.compute.manager [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-vif-unplugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.871 257641 DEBUG nova.compute.manager [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.871 257641 DEBUG oslo_concurrency.lockutils [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.871 257641 DEBUG oslo_concurrency.lockutils [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.871 257641 DEBUG oslo_concurrency.lockutils [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.871 257641 DEBUG nova.compute.manager [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] No waiting events found dispatching network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.871 257641 WARNING nova.compute.manager [req-43a4686f-6e5f-474b-9ca9-e07416899639 req-ef757483-914a-441c-82f3-6ee4be169fa5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received unexpected event network-vif-plugged-a84e5028-8ed0-4eb3-90e9-ca099984eb99 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.889 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.890 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.890 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.890 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.890 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.891 257641 INFO nova.compute.manager [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Terminating instance#033[00m
Nov 29 03:38:26 np0005539550 nova_compute[257631]: 2025-11-29 08:38:26.892 257641 DEBUG nova.compute.manager [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:38:27 np0005539550 kernel: tapc5532af0-4f (unregistering): left promiscuous mode
Nov 29 03:38:27 np0005539550 NetworkManager[49039]: <info>  [1764405507.0432] device (tapc5532af0-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.055 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:27Z|00837|binding|INFO|Releasing lport c5532af0-4f88-4953-8da5-1553ef8d3b8f from this chassis (sb_readonly=0)
Nov 29 03:38:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:27Z|00838|binding|INFO|Setting lport c5532af0-4f88-4953-8da5-1553ef8d3b8f down in Southbound
Nov 29 03:38:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:27Z|00839|binding|INFO|Removing iface tapc5532af0-4f ovn-installed in OVS
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.059 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.082 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b3.scope: Deactivated successfully.
Nov 29 03:38:27 np0005539550 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000b3.scope: Consumed 15.967s CPU time.
Nov 29 03:38:27 np0005539550 systemd-machined[216673]: Machine qemu-97-instance-000000b3 terminated.
Nov 29 03:38:27 np0005539550 NetworkManager[49039]: <info>  [1764405507.1185] manager: (tapc5532af0-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.140 257641 INFO nova.virt.libvirt.driver [-] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Instance destroyed successfully.#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.140 257641 DEBUG nova.objects.instance [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 46075bef-e2f4-434f-8d14-deccfa05cd2f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.202 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:16:c6 10.100.0.13'], port_security=['fa:16:3e:ee:16:c6 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '46075bef-e2f4-434f-8d14-deccfa05cd2f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'af24e467-2d1e-447f-bcc4-9f3cd733b642', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=244cacb4-e852-4547-86ff-006a2141eb2a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=c5532af0-4f88-4953-8da5-1553ef8d3b8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.203 158978 INFO neutron.agent.ovn.metadata.agent [-] Port c5532af0-4f88-4953-8da5-1553ef8d3b8f in datapath 4dc7ed86-80fb-4377-a5d0-8edd5e264c14 unbound from our chassis#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.205 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4dc7ed86-80fb-4377-a5d0-8edd5e264c14, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.206 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d3383e75-a285-44db-adea-aea60e12da23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.207 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 namespace which is not needed anymore#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.219 257641 DEBUG nova.virt.libvirt.vif [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:37:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-874759565',display_name='tempest-TestNetworkBasicOps-server-874759565',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-874759565',id=179,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK59mMwdkIRco5e+wx7z1qyWEKx2l9WSUMzqdwFM2UJMLW1lFhIC1s/A3JYUvXkYkjgGNZRfQ1g+CuIKY+jbDApbG/uQdsxXxpYYXSx3CJ40Y4Osyky+kLIItP0l/wcDVA==',key_name='tempest-TestNetworkBasicOps-2011738495',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-as5blxm9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:37:17Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=46075bef-e2f4-434f-8d14-deccfa05cd2f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.220 257641 DEBUG nova.network.os_vif_util [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "address": "fa:16:3e:ee:16:c6", "network": {"id": "4dc7ed86-80fb-4377-a5d0-8edd5e264c14", "bridge": "br-int", "label": "tempest-network-smoke--370633540", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5532af0-4f", "ovs_interfaceid": "c5532af0-4f88-4953-8da5-1553ef8d3b8f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.220 257641 DEBUG nova.network.os_vif_util [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:16:c6,bridge_name='br-int',has_traffic_filtering=True,id=c5532af0-4f88-4953-8da5-1553ef8d3b8f,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532af0-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.221 257641 DEBUG os_vif [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:16:c6,bridge_name='br-int',has_traffic_filtering=True,id=c5532af0-4f88-4953-8da5-1553ef8d3b8f,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532af0-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.222 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.222 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5532af0-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.224 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.225 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.225 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.227 257641 INFO os_vif [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:16:c6,bridge_name='br-int',has_traffic_filtering=True,id=c5532af0-4f88-4953-8da5-1553ef8d3b8f,network=Network(4dc7ed86-80fb-4377-a5d0-8edd5e264c14),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc5532af0-4f')#033[00m
Nov 29 03:38:27 np0005539550 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[368928]: [NOTICE]   (368932) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:27 np0005539550 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[368928]: [NOTICE]   (368932) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:27 np0005539550 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[368928]: [WARNING]  (368932) : Exiting Master process...
Nov 29 03:38:27 np0005539550 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[368928]: [ALERT]    (368932) : Current worker (368934) exited with code 143 (Terminated)
Nov 29 03:38:27 np0005539550 neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14[368928]: [WARNING]  (368932) : All workers exited. Exiting... (0)
Nov 29 03:38:27 np0005539550 systemd[1]: libpod-31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335.scope: Deactivated successfully.
Nov 29 03:38:27 np0005539550 podman[371034]: 2025-11-29 08:38:27.346254538 +0000 UTC m=+0.046048083 container died 31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:38:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-291d58d598c555d186e91c604faf2497bf1c4d046dc1e81ca0750ff105e7b90a-merged.mount: Deactivated successfully.
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.382 257641 DEBUG nova.network.neutron [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:27 np0005539550 podman[371034]: 2025-11-29 08:38:27.413788693 +0000 UTC m=+0.113582238 container cleanup 31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.418 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.419 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.420 257641 INFO nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Creating image(s)#033[00m
Nov 29 03:38:27 np0005539550 systemd[1]: libpod-conmon-31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335.scope: Deactivated successfully.
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.449 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.456 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.457 257641 DEBUG oslo_concurrency.lockutils [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.457 257641 DEBUG nova.network.neutron [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Refreshing network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:38:27 np0005539550 podman[371064]: 2025-11-29 08:38:27.479286268 +0000 UTC m=+0.040102326 container remove 31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.484 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8566641e-c70a-45f4-a421-fca98a75ddc4]: (4, ('Sat Nov 29 08:38:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 (31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335)\n31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335\nSat Nov 29 08:38:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 (31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335)\n31e9c7827a6ad227dcbdf48326623624d9f466f907beb65697c6bed3a88b6335\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.486 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5e554448-f935-406c-b93b-d625578f1c1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.486 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dc7ed86-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.488 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 kernel: tap4dc7ed86-80: left promiscuous mode
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.493 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3ef312-3377-499b-9d39-f879ec0498d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.512 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[44008340-a579-481e-8cda-d3ab5e6645a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.514 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6a72dee1-670e-46e2-b2c3-d15be9ec7f0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.532 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c3a43e-2b1d-4463-ae94-bb72d6dc407e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 848435, 'reachable_time': 19421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371100, 'error': None, 'target': 'ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.534 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4dc7ed86-80fb-4377-a5d0-8edd5e264c14 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:27.534 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[7d290e4f-3aa4-48f7-9a77-867aee5052f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:27 np0005539550 systemd[1]: run-netns-ovnmeta\x2d4dc7ed86\x2d80fb\x2d4377\x2da5d0\x2d8edd5e264c14.mount: Deactivated successfully.
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.544 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.569 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.572 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "2bf5d9f525d80a9747a9d7380b99de142bb71838" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.572 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "2bf5d9f525d80a9747a9d7380b99de142bb71838" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]: {
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]:        "osd_id": 0,
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]:        "type": "bluestore"
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]:    }
Nov 29 03:38:27 np0005539550 strange_mccarthy[370952]: }
Nov 29 03:38:27 np0005539550 systemd[1]: libpod-00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b.scope: Deactivated successfully.
Nov 29 03:38:27 np0005539550 conmon[370952]: conmon 00b61e82715f6e255ca1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b.scope/container/memory.events
Nov 29 03:38:27 np0005539550 podman[370908]: 2025-11-29 08:38:27.686541548 +0000 UTC m=+1.089272216 container died 00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:27 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1c5f1f2909328c5aa030f7ed0800977eac275d83069904dfc40a4110dc1fe165-merged.mount: Deactivated successfully.
Nov 29 03:38:27 np0005539550 podman[370908]: 2025-11-29 08:38:27.74873005 +0000 UTC m=+1.151460698 container remove 00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_mccarthy, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:38:27 np0005539550 systemd[1]: libpod-conmon-00b61e82715f6e255ca1c6f2de29a532d45057aa3aa920f1b6a4080fe0946b5b.scope: Deactivated successfully.
Nov 29 03:38:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:38:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.904 257641 DEBUG nova.virt.libvirt.imagebackend [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c373ab38-0e65-4d5c-bf51-8234f0ed5ffd/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c373ab38-0e65-4d5c-bf51-8234f0ed5ffd/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.967 257641 DEBUG nova.virt.libvirt.imagebackend [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/c373ab38-0e65-4d5c-bf51-8234f0ed5ffd/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:38:27 np0005539550 nova_compute[257631]: 2025-11-29 08:38:27.968 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] cloning images/c373ab38-0e65-4d5c-bf51-8234f0ed5ffd@snap to None/5748313e-fbb3-409e-83e6-aff548491530_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:38:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6c36358b-9606-46fe-8582-2ea91702c0c9 does not exist
Nov 29 03:38:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6a842b02-5008-45cc-b38a-70bd19c464ea does not exist
Nov 29 03:38:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4f7467b5-d3f3-4bdd-9d58-bd8ccd48a6e9 does not exist
Nov 29 03:38:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:28.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:28.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.311 257641 DEBUG nova.compute.manager [req-48e5e490-e6a1-48e1-85b5-efcea631ea89 req-7b1e8df1-a6ed-4758-ae99-854d3f62d627 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-vif-unplugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.311 257641 DEBUG oslo_concurrency.lockutils [req-48e5e490-e6a1-48e1-85b5-efcea631ea89 req-7b1e8df1-a6ed-4758-ae99-854d3f62d627 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.311 257641 DEBUG oslo_concurrency.lockutils [req-48e5e490-e6a1-48e1-85b5-efcea631ea89 req-7b1e8df1-a6ed-4758-ae99-854d3f62d627 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.312 257641 DEBUG oslo_concurrency.lockutils [req-48e5e490-e6a1-48e1-85b5-efcea631ea89 req-7b1e8df1-a6ed-4758-ae99-854d3f62d627 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.312 257641 DEBUG nova.compute.manager [req-48e5e490-e6a1-48e1-85b5-efcea631ea89 req-7b1e8df1-a6ed-4758-ae99-854d3f62d627 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] No waiting events found dispatching network-vif-unplugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.312 257641 DEBUG nova.compute.manager [req-48e5e490-e6a1-48e1-85b5-efcea631ea89 req-7b1e8df1-a6ed-4758-ae99-854d3f62d627 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-vif-unplugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:38:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 424 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 112 op/s
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.680 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "2bf5d9f525d80a9747a9d7380b99de142bb71838" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.744 257641 INFO nova.virt.libvirt.driver [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Deleting instance files /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196_del#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.745 257641 INFO nova.virt.libvirt.driver [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Deletion of /var/lib/nova/instances/850e472c-8d49-4ecb-8478-992f11eb6196_del complete#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.858 257641 INFO nova.virt.libvirt.driver [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Deleting instance files /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f_del#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.859 257641 INFO nova.virt.libvirt.driver [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Deletion of /var/lib/nova/instances/46075bef-e2f4-434f-8d14-deccfa05cd2f_del complete#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.862 257641 INFO nova.compute.manager [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Took 3.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.862 257641 DEBUG oslo.service.loopingcall [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.866 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'migration_context' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.867 257641 DEBUG nova.compute.manager [-] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.868 257641 DEBUG nova.network.neutron [-] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:38:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:28.932 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.938 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] flattening vms/5748313e-fbb3-409e-83e6-aff548491530_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.987 257641 INFO nova.compute.manager [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Took 2.09 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.987 257641 DEBUG oslo.service.loopingcall [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.987 257641 DEBUG nova.compute.manager [-] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:38:28 np0005539550 nova_compute[257631]: 2025-11-29 08:38:28.988 257641 DEBUG nova.network.neutron [-] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.163 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.293 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Image rbd:vms/5748313e-fbb3-409e-83e6-aff548491530_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.294 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.294 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Ensure instance console log exists: /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.295 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.295 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.295 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.297 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Start _get_guest_xml network_info=[{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdc': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:38:04Z,direct_url=<?>,disk_format='raw',id=c373ab38-0e65-4d5c-bf51-8234f0ed5ffd,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-716574022-shelved',owner='889608c71d13429fb37793575792ae74',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:38:13Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'a1863a6a-f772-48e4-ba02-325f54dbdba8', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd6f82ad4-8ec0-4138-9a4b-77fdefc17bbd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attached', 'instance': '5748313e-fbb3-409e-83e6-aff548491530', 'attached_at': '', 'detached_at': '', 'volume_id': 'd6f82ad4-8ec0-4138-9a4b-77fdefc17bbd', 'serial': 'd6f82ad4-8ec0-4138-9a4b-77fdefc17bbd'}, 'mount_device': '/dev/vdc', 'guest_format': None, 'boot_index': None, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.302 257641 WARNING nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.308 257641 DEBUG nova.virt.libvirt.host [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.308 257641 DEBUG nova.virt.libvirt.host [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.312 257641 DEBUG nova.virt.libvirt.host [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.313 257641 DEBUG nova.virt.libvirt.host [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.314 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.314 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:38:04Z,direct_url=<?>,disk_format='raw',id=c373ab38-0e65-4d5c-bf51-8234f0ed5ffd,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-716574022-shelved',owner='889608c71d13429fb37793575792ae74',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:38:13Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.314 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.314 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.315 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.315 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.315 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.315 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.315 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.315 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.315 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.316 257641 DEBUG nova.virt.hardware [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.316 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.317 257641 DEBUG nova.network.neutron [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updated VIF entry in instance network info cache for port 6fb79eb2-29a8-4947-8bac-7bed17841673. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.318 257641 DEBUG nova.network.neutron [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [{"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.346 257641 DEBUG oslo_concurrency.lockutils [req-987beae2-9c4c-45da-9c3d-35f15333451d req-38b0e26c-96e1-4925-9fe4-240789872668 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5748313e-fbb3-409e-83e6-aff548491530" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.348 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1245411024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.768 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.793 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:29 np0005539550 nova_compute[257631]: 2025-11-29 08:38:29.797 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:30.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.088 257641 DEBUG nova.network.neutron [-] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.141 257641 INFO nova.compute.manager [-] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Took 1.15 seconds to deallocate network for instance.#033[00m
Nov 29 03:38:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:30.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.177 257641 DEBUG nova.network.neutron [-] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.202 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.203 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.204 257641 INFO nova.compute.manager [-] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Took 1.34 seconds to deallocate network for instance.#033[00m
Nov 29 03:38:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:38:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3399572713' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.228 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.254 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.255 257641 DEBUG nova.virt.libvirt.vif [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-716574022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-716574022',id=180,image_ref='c373ab38-0e65-4d5c-bf51-8234f0ed5ffd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1744874454',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-pwfpf00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member',shelved_at='2025-11-29T08:38:13.647915',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='c373ab38-0e65-4d5c-bf51-8234f0ed5ffd'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=5748313e-fbb3-409e-83e6-aff548491530,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.256 257641 DEBUG nova.network.os_vif_util [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.256 257641 DEBUG nova.network.os_vif_util [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
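
[annotation] The two records above show nova_to_osvif_vif translating the Neutron VIF dict into an os-vif VIFOpenVSwitch object. A minimal sketch of constructing the same object directly with the os_vif object model follows; field names and values are copied from the "Converted object" record, but this is illustrative only, not Nova's conversion code, and constructor details may vary across os-vif releases.

    # Illustrative: build the object described by the "Converted object"
    # log record above. Values copied from the log; not Nova's code path.
    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id='6fb79eb2-29a8-4947-8bac-7bed17841673',
        address='fa:16:3e:8b:9f:50',
        bridge_name='br-int',
        vif_name='tap6fb79eb2-29',
        plugin='ovs',
        has_traffic_filtering=True,   # "port_filter": true in the VIF details
        preserve_on_delete=False,
        active=False,                 # port not yet bound by OVN
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='6fb79eb2-29a8-4947-8bac-7bed17841673'),
    )
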
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.257 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.267 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <uuid>5748313e-fbb3-409e-83e6-aff548491530</uuid>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <name>instance-000000b4</name>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-716574022</nova:name>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:38:29</nova:creationTime>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:user uuid="eeba34466b8f4a1bb5f742f1e811053c">tempest-AttachVolumeShelveTestJSON-907905934-project-member</nova:user>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:project uuid="889608c71d13429fb37793575792ae74">tempest-AttachVolumeShelveTestJSON-907905934</nova:project>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="c373ab38-0e65-4d5c-bf51-8234f0ed5ffd"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <nova:port uuid="6fb79eb2-29a8-4947-8bac-7bed17841673">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <entry name="serial">5748313e-fbb3-409e-83e6-aff548491530</entry>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <entry name="uuid">5748313e-fbb3-409e-83e6-aff548491530</entry>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5748313e-fbb3-409e-83e6-aff548491530_disk">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5748313e-fbb3-409e-83e6-aff548491530_disk.config">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <target dev="vdc" bus="virtio"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <serial>d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd</serial>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:8b:9f:50"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <target dev="tap6fb79eb2-29"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/console.log" append="off"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:38:30 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:38:30 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:38:30 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:38:30 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
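
[annotation] _get_guest_xml has now produced the full domain definition for instance-000000b4; the driver's next step is handing it to libvirt. A minimal python-libvirt sketch of that hand-off, assuming `xml` holds the <domain> document logged above; Nova's actual call site adds flags, metadata, and rollback handling not shown here.

    # Minimal sketch, assuming `xml` holds the <domain> document above.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)        # persist instance-000000b4
    dom.createWithFlags(0)           # boot it; flags=0 is a plain start
    print(dom.name(), dom.ID())
    conn.close()
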
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.269 257641 DEBUG nova.compute.manager [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Preparing to wait for external event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.269 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.269 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.269 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
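
[annotation] The three records above are oslo.concurrency's standard acquire/execute/release trace for the per-instance events lock: the "Acquiring"/"acquired"/"released" lines are emitted by the synchronized wrapper itself (lockutils.py:404/409/423). The pattern they come from looks roughly like this sketch, not Nova's exact method body.

    # Sketch of the oslo.concurrency pattern behind the three records above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('5748313e-fbb3-409e-83e6-aff548491530-events')
    def _create_or_get_event():
        # look up or register the pending network-vif-plugged event
        # (body illustrative)
        pass

    _create_or_get_event()
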
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.270 257641 DEBUG nova.virt.libvirt.vif [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-716574022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-716574022',id=180,image_ref='c373ab38-0e65-4d5c-bf51-8234f0ed5ffd',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1744874454',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:37:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-pwfpf00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member',shelved_at='2025-11-29T08:38:13.647915',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='c373ab38-0e65-4d5c-bf51-8234f0ed5ffd'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:38:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=5748313e-fbb3-409e-83e6-aff548491530,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", 
"bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.270 257641 DEBUG nova.network.os_vif_util [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.271 257641 DEBUG nova.network.os_vif_util [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.271 257641 DEBUG os_vif [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.271 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.272 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.272 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.278 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fb79eb2-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.279 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fb79eb2-29, col_values=(('external_ids', {'iface-id': '6fb79eb2-29a8-4947-8bac-7bed17841673', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:9f:50', 'vm-uuid': '5748313e-fbb3-409e-83e6-aff548491530'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
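
[annotation] The ovsdbapp commands above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) are the programmatic form of an ordinary ovs-vsctl transaction. The same change rendered as a CLI call, with all values taken from the logged commands; os-vif itself speaks OVSDB directly rather than shelling out.

    # CLI equivalent of the ovsdbapp transaction logged above.
    import subprocess

    subprocess.run([
        'ovs-vsctl',
        '--may-exist', 'add-port', 'br-int', 'tap6fb79eb2-29',
        '--', 'set', 'Interface', 'tap6fb79eb2-29',
        'external_ids:iface-id=6fb79eb2-29a8-4947-8bac-7bed17841673',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:8b:9f:50',
        'external_ids:vm-uuid=5748313e-fbb3-409e-83e6-aff548491530',
    ], check=True)
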
Nov 29 03:38:30 np0005539550 NetworkManager[49039]: <info>  [1764405510.2825] manager: (tap6fb79eb2-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.284 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.289 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.291 257641 INFO os_vif [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29')#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.311 257641 DEBUG oslo_concurrency.processutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 383 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.376 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.376 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.377 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.377 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No VIF found with MAC fa:16:3e:8b:9f:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.377 257641 INFO nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Using config drive#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.407 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.416 257641 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.417 257641 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.417 257641 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.418 257641 DEBUG oslo_concurrency.lockutils [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.418 257641 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] No waiting events found dispatching network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.418 257641 WARNING nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received unexpected event network-vif-plugged-c5532af0-4f88-4953-8da5-1553ef8d3b8f for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.419 257641 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Received event network-vif-deleted-c5532af0-4f88-4953-8da5-1553ef8d3b8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.419 257641 DEBUG nova.compute.manager [req-d4fe4b4d-6bc6-4eff-b794-9b7d5324745c req-50c6a074-51cf-4f95-8e89-43db1676b2b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Received event network-vif-deleted-a84e5028-8ed0-4eb3-90e9-ca099984eb99 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.455 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.519 257641 DEBUG nova.objects.instance [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'keypairs' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524414112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.755 257641 DEBUG oslo_concurrency.processutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
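
[annotation] The resource tracker shells out to `ceph df --format=json` (started at 08:38:30.311, returned 0 here) to size the RBD-backed DISK_GB inventory. A sketch of running the same command and reading the cluster totals; the JSON field names follow common `ceph df` output and may differ across Ceph releases.

    # Re-run the same `ceph df` the resource tracker used and pull the
    # cluster totals from its JSON output.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)['stats']
    print('total GiB :', stats['total_bytes'] / 2**30)
    print('avail GiB :', stats['total_avail_bytes'] / 2**30)
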
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.760 257641 DEBUG nova.compute.provider_tree [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.774 257641 DEBUG nova.scheduler.client.report [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
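
[annotation] The inventory record above fixes the schedulable capacity of provider a73c606e-2495-4af4-b703-8d4b3001fdf5: placement computes capacity per resource class as (total - reserved) * allocation_ratio. Worked out for the logged values:

    # Placement capacity = (total - reserved) * allocation_ratio,
    # using the inventory values from the record above.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        cap = (v['total'] - v['reserved']) * v['allocation_ratio']
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
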
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.798 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.800 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.827 257641 INFO nova.scheduler.client.report [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 46075bef-e2f4-434f-8d14-deccfa05cd2f#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.863 257641 INFO nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Creating config drive at /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.869 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplmue7m1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.903 257641 DEBUG oslo_concurrency.lockutils [None req-d9eefafc-5cac-4e09-b877-4ec1f8b70705 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "46075bef-e2f4-434f-8d14-deccfa05cd2f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:30 np0005539550 nova_compute[257631]: 2025-11-29 08:38:30.919 257641 DEBUG oslo_concurrency.processutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.005 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplmue7m1" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.034 257641 DEBUG nova.storage.rbd_utils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 5748313e-fbb3-409e-83e6-aff548491530_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.038 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config 5748313e-fbb3-409e-83e6-aff548491530_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.258 257641 DEBUG oslo_concurrency.processutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config 5748313e-fbb3-409e-83e6-aff548491530_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.259 257641 INFO nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Deleting local config drive /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530/disk.config because it was imported into RBD.#033[00m
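
[annotation] Records 08:38:30.869 through 08:38:31.259 show the config-drive flow end to end: build the ISO with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. The same three steps condensed below; all arguments are copied from the logged commands, and the /tmp staging directory is the one Nova happened to create.

    # The config-drive sequence from the records above, condensed:
    # mkisofs -> rbd import -> remove local file.
    import os
    import subprocess

    uuid = '5748313e-fbb3-409e-83e6-aff548491530'
    iso = f'/var/lib/nova/instances/{uuid}/disk.config'

    subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                    '-allow-lowercase', '-allow-multidot', '-l', '-publisher',
                    'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
                    '-quiet', '-J', '-r', '-V', 'config-2',
                    '/tmp/tmpplmue7m1'], check=True)
    subprocess.run(['rbd', 'import', '--pool', 'vms', iso,
                    f'{uuid}_disk.config', '--image-format=2',
                    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
                   check=True)
    os.unlink(iso)  # local copy is redundant once it lives in RBD
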
Nov 29 03:38:31 np0005539550 kernel: tap6fb79eb2-29: entered promiscuous mode
Nov 29 03:38:31 np0005539550 NetworkManager[49039]: <info>  [1764405511.3225] manager: (tap6fb79eb2-29): new Tun device (/org/freedesktop/NetworkManager/Devices/369)
Nov 29 03:38:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:31Z|00840|binding|INFO|Claiming lport 6fb79eb2-29a8-4947-8bac-7bed17841673 for this chassis.
Nov 29 03:38:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:31Z|00841|binding|INFO|6fb79eb2-29a8-4947-8bac-7bed17841673: Claiming fa:16:3e:8b:9f:50 10.100.0.3
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.325 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.329 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9f:50 10.100.0.3'], port_security=['fa:16:3e:8b:9f:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748313e-fbb3-409e-83e6-aff548491530', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c266cfe3-9f5e-4a42-93d4-52df1525211e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=6fb79eb2-29a8-4947-8bac-7bed17841673) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.331 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 6fb79eb2-29a8-4947-8bac-7bed17841673 in datapath d042f24f-c2f0-4843-9727-cc3720586596 bound to our chassis#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.333 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d042f24f-c2f0-4843-9727-cc3720586596#033[00m
Nov 29 03:38:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/496992336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:31Z|00842|binding|INFO|Setting lport 6fb79eb2-29a8-4947-8bac-7bed17841673 ovn-installed in OVS
Nov 29 03:38:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:31Z|00843|binding|INFO|Setting lport 6fb79eb2-29a8-4947-8bac-7bed17841673 up in Southbound
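
[annotation] ovn-controller has now claimed logical port 6fb79eb2-29a8-4947-8bac-7bed17841673 for this chassis and marked it up in the Southbound DB. One way to verify the binding is a standard ovn-sbctl query, wrapped in subprocess here to match the other sketches; run it somewhere that can reach the SB database (a --db URL may be needed in containerized deployments).

    # Check the Southbound Port_Binding that ovn-controller just set up.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=6fb79eb2-29a8-4947-8bac-7bed17841673'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # expect a chassis reference and up=[true] once claimed
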
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.343 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.346 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[708403a2-dd82-4041-802a-f3d2ad1488d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.347 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd042f24f-c1 in ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
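
[annotation] The metadata agent provisions a per-network namespace (ovnmeta-<network-uuid>) and moves one end of a veth pair into it; the privsep/netlink records that follow are that plumbing. To see the result on the compute host, standard iproute2 suffices (again via subprocess for consistency; requires root).

    # Inspect the metadata namespace being provisioned above.
    import subprocess

    ns = 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596'
    subprocess.run(['ip', 'netns', 'exec', ns, 'ip', 'addr', 'show'],
                   check=True)
    # tapd042f24f-c1 should appear inside the namespace; its peer
    # tapd042f24f-c0 stays in the root namespace, where the agent
    # attaches it to br-int.
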
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.349 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd042f24f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.349 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0e77d7ee-712a-4e10-863a-291974f611eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.350 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.350 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cee63966-ee6c-45c7-9229-d26a564ad46f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 systemd-udevd[371558]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:31 np0005539550 systemd-machined[216673]: New machine qemu-99-instance-000000b4.
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.366 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c42b3d-52c5-4a4a-b6ee-370a41275a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 NetworkManager[49039]: <info>  [1764405511.3779] device (tap6fb79eb2-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:38:31 np0005539550 NetworkManager[49039]: <info>  [1764405511.3789] device (tap6fb79eb2-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:38:31 np0005539550 systemd[1]: Started Virtual Machine qemu-99-instance-000000b4.
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.379 257641 DEBUG oslo_concurrency.processutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.388 257641 DEBUG nova.compute.provider_tree [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.395 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a4cbdf7b-9c78-471f-b605-30fc83089b05]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.403 257641 DEBUG nova.scheduler.client.report [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.425 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.426 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[810af8c0-0790-4826-a28e-ebf56f0a21c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.432 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4788eac4-f23c-4347-80d0-7a7ab4fab43c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 NetworkManager[49039]: <info>  [1764405511.4332] manager: (tapd042f24f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/370)
Nov 29 03:38:31 np0005539550 systemd-udevd[371561]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.459 257641 INFO nova.scheduler.client.report [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Deleted allocations for instance 850e472c-8d49-4ecb-8478-992f11eb6196#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.462 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[80d109de-8e0d-4350-b16c-49870ff55bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.464 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[de03462b-0d15-4515-85ee-9db5d450bb93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 NetworkManager[49039]: <info>  [1764405511.4863] device (tapd042f24f-c0): carrier: link connected
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.490 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[149e880c-df8c-452d-b524-c23fb7b63d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.511 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4a614ceb-54a9-4904-9ee7-97f579fd6e6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855915, 'reachable_time': 24513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371589, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.527 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2faeb27e-6752-44ef-bc08-9e07501fe301]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:6732'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 855915, 'tstamp': 855915}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371590, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
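The two large replies just above carry raw netlink records: an RTM_NEWLINK dump for the veth tapd042f24f-c1 and an RTM_NEWADDR record for its fe80:: link-local address, both targeted at the ovnmeta-d042f24f-... namespace named in their headers. A hedged sketch of the equivalent pyroute2 queries that yield those IFLA_*/IFA_* attribute lists (requires root and an existing namespace; the name is copied from the log):

    # Sketch: dump link and address state inside the ovnmeta namespace with
    # pyroute2, producing the same attribute lists seen in the replies above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),      # e.g. tapd042f24f-c1
                  link.get_attr('IFLA_ADDRESS'),     # fa:16:3e:44:67:32
                  link.get_attr('IFLA_OPERSTATE'))   # UP
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'))      # fe80::f816:3eff:fe44:6732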
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.534 257641 DEBUG oslo_concurrency.lockutils [None req-53852d63-44af-4eb4-b879-0da9e9aa3035 bdbcdbdc435844ee8d866288c969331b 368e3a44279843f5947188dd045d65b6 - - default default] Lock "850e472c-8d49-4ecb-8478-992f11eb6196" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.548 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[427082a0-3b30-428e-9449-ffab459b8ba7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 244], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855915, 'reachable_time': 24513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371591, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.587 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[537d9cb0-319e-4cd2-ae88-96433d90576f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.643 257641 DEBUG nova.compute.manager [req-afee78ab-af48-4eb9-b2c3-07ebbe853282 req-3216fcc7-0f45-47e3-9476-58100983a1f0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.643 257641 DEBUG oslo_concurrency.lockutils [req-afee78ab-af48-4eb9-b2c3-07ebbe853282 req-3216fcc7-0f45-47e3-9476-58100983a1f0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.644 257641 DEBUG oslo_concurrency.lockutils [req-afee78ab-af48-4eb9-b2c3-07ebbe853282 req-3216fcc7-0f45-47e3-9476-58100983a1f0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.644 257641 DEBUG oslo_concurrency.lockutils [req-afee78ab-af48-4eb9-b2c3-07ebbe853282 req-3216fcc7-0f45-47e3-9476-58100983a1f0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.644 257641 DEBUG nova.compute.manager [req-afee78ab-af48-4eb9-b2c3-07ebbe853282 req-3216fcc7-0f45-47e3-9476-58100983a1f0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Processing event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
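The Acquiring/acquired/released triplet around the "<instance-uuid>-events" lock is oslo.concurrency's lockutils protocol; the waited/held durations are logged at lockutils.py:404, 409 and 423. The same pattern in isolation, with an illustrative lock name:

    # The oslo.concurrency lock pattern behind the triplet above.
    from oslo_concurrency import lockutils

    # In-process semaphore, like "<instance-uuid>-events"; external=True
    # would use a file lock shared across processes instead.
    with lockutils.lock('instance-events-demo'):
        # _pop_event() runs here while the lock is held; wait and hold
        # durations are logged on entry and exit.
        pass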
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.669 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb38949-632d-4bfb-9a09-0672d83b184a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.671 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.671 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.672 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd042f24f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.674 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539550 NetworkManager[49039]: <info>  [1764405511.6753] manager: (tapd042f24f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Nov 29 03:38:31 np0005539550 kernel: tapd042f24f-c0: entered promiscuous mode
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.677 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.678 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd042f24f-c0, col_values=(('external_ids', {'iface-id': 'e44bf17c-ebb7-4e62-850a-20ff20a74960'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
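Taken together, the three ovsdbapp transactions above detach tapd042f24f-c0 from br-ex, attach it to br-int, and stamp external_ids:iface-id so ovn-controller can rebind the logical port e44bf17c-... . A rough equivalent through ovsdbapp's Open_vSwitch schema API; the OVSDB socket path is an assumption, while names and values come from the log:

    # Rough ovsdbapp equivalent of the three logged transactions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    api.del_port('tapd042f24f-c0', bridge='br-ex', if_exists=True).execute()
    api.add_port('br-int', 'tapd042f24f-c0', may_exist=True).execute()
    api.db_set(
        'Interface', 'tapd042f24f-c0',
        ('external_ids', {'iface-id': 'e44bf17c-ebb7-4e62-850a-20ff20a74960'}),
    ).execute(check_error=True)

Note that the first command was a no-op in this run ("Transaction caused no change"): the port was not attached to br-ex, and if_exists=True makes the deletion tolerant of that.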
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.679 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:31Z|00844|binding|INFO|Releasing lport e44bf17c-ebb7-4e62-850a-20ff20a74960 from this chassis (sb_readonly=0)
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.694 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.696 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.697 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[410bb33c-5a49-4894-943e-58e0eaf8d65b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.698 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:38:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:31.700 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'env', 'PROCESS_TAG=haproxy-d042f24f-c2f0-4843-9727-cc3720586596', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d042f24f-c2f0-4843-9727-cc3720586596.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
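The config block above is written to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf and consumed by the rootwrapped command on the last line, which starts haproxy inside the ovnmeta namespace; haproxy then daemonizes and writes the pidfile the agent failed to find at utils.py:252 a moment earlier. A stripped-down sketch of that spawn step (requires root; the real agent goes through sudo plus neutron-rootwrap rather than calling ip directly):

    # Stripped-down sketch of the logged spawn: run haproxy against the
    # generated config inside the ovnmeta namespace.
    import subprocess

    netns = 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            'd042f24f-c2f0-4843-9727-cc3720586596.conf')

    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-d042f24f-c2f0-4843-9727-cc3720586596',
         'haproxy', '-f', conf],
        check=True)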
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.986 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405511.9861665, 5748313e-fbb3-409e-83e6-aff548491530 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.987 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Started (Lifecycle Event)#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.989 257641 DEBUG nova.compute.manager [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.992 257641 DEBUG nova.virt.libvirt.driver [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:38:31 np0005539550 nova_compute[257631]: 2025-11-29 08:38:31.995 257641 INFO nova.virt.libvirt.driver [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance spawned successfully.#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.006 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.009 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.036 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.037 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405511.9884875, 5748313e-fbb3-409e-83e6-aff548491530 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.037 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:38:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:32.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.059 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.065 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405511.9914176, 5748313e-fbb3-409e-83e6-aff548491530 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.065 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:38:32 np0005539550 podman[371681]: 2025-11-29 08:38:32.074064782 +0000 UTC m=+0.049529529 container create b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.116 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.130 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
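Both "Synchronizing instance power state" messages compare the database's stale power_state 4 against the hypervisor's reported 1; under nova.compute.power_state those integers are SHUTDOWN and RUNNING respectively, and because task_state is still spawning the sync is skipped rather than forcing a correction. A small check of that mapping, assuming nova's power_state constants:

    # The integer states compared by the sync messages above, assuming
    # nova.compute.power_state's constants.
    from nova.compute import power_state

    assert power_state.RUNNING == 1    # what libvirt reports after spawn
    assert power_state.SHUTDOWN == 4   # what the DB still holds (shelved)
    print(power_state.STATE_MAP[power_state.RUNNING],   # 'running'
          power_state.STATE_MAP[power_state.SHUTDOWN])  # 'shutdown'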
Nov 29 03:38:32 np0005539550 podman[371681]: 2025-11-29 08:38:32.047811621 +0000 UTC m=+0.023276388 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:38:32 np0005539550 systemd[1]: Started libpod-conmon-b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103.scope.
Nov 29 03:38:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:38:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:32.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
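The recurring anonymous "HEAD / HTTP/1.0" 200 entries from 192.168.122.100 and .102 look like load-balancer health probes against radosgw's beast frontend, arriving on a roughly two-second cadence throughout this section. A sketch of such a probe; the host and port are assumptions, since the log shows only the probing clients, not the frontend radosgw listens on:

    # Sketch of one health probe of the kind logged above. Host and port
    # are placeholders, not taken from the log.
    import http.client

    conn = http.client.HTTPConnection('rgw.example.test', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # a healthy endpoint returns 200
    conn.close()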
Nov 29 03:38:32 np0005539550 nova_compute[257631]: 2025-11-29 08:38:32.167 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:38:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:38:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c48cf31d185bdda61fe22c6d2af6f7a6c53d800eeb921aa56bf77fd243da31e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:38:32 np0005539550 podman[371681]: 2025-11-29 08:38:32.1985689 +0000 UTC m=+0.174033647 container init b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:38:32 np0005539550 podman[371681]: 2025-11-29 08:38:32.207004899 +0000 UTC m=+0.182469646 container start b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:38:32 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [NOTICE]   (371700) : New worker (371702) forked
Nov 29 03:38:32 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [NOTICE]   (371700) : Loading success.
Nov 29 03:38:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 332 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.8 MiB/s wr, 231 op/s
Nov 29 03:38:33 np0005539550 podman[371712]: 2025-11-29 08:38:33.351714079 +0000 UTC m=+0.080223910 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:38:33 np0005539550 nova_compute[257631]: 2025-11-29 08:38:33.757 257641 DEBUG nova.compute.manager [req-84354e91-cc33-463d-95c2-5a8f59d3de81 req-6bdf1022-f116-41e3-bad3-f5142e0c452d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:33 np0005539550 nova_compute[257631]: 2025-11-29 08:38:33.758 257641 DEBUG oslo_concurrency.lockutils [req-84354e91-cc33-463d-95c2-5a8f59d3de81 req-6bdf1022-f116-41e3-bad3-f5142e0c452d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:33 np0005539550 nova_compute[257631]: 2025-11-29 08:38:33.758 257641 DEBUG oslo_concurrency.lockutils [req-84354e91-cc33-463d-95c2-5a8f59d3de81 req-6bdf1022-f116-41e3-bad3-f5142e0c452d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:33 np0005539550 nova_compute[257631]: 2025-11-29 08:38:33.758 257641 DEBUG oslo_concurrency.lockutils [req-84354e91-cc33-463d-95c2-5a8f59d3de81 req-6bdf1022-f116-41e3-bad3-f5142e0c452d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:33 np0005539550 nova_compute[257631]: 2025-11-29 08:38:33.758 257641 DEBUG nova.compute.manager [req-84354e91-cc33-463d-95c2-5a8f59d3de81 req-6bdf1022-f116-41e3-bad3-f5142e0c452d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] No waiting events found dispatching network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:33 np0005539550 nova_compute[257631]: 2025-11-29 08:38:33.758 257641 WARNING nova.compute.manager [req-84354e91-cc33-463d-95c2-5a8f59d3de81 req-6bdf1022-f116-41e3-bad3-f5142e0c452d 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received unexpected event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Nov 29 03:38:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Nov 29 03:38:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Nov 29 03:38:33 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Nov 29 03:38:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:34 np0005539550 nova_compute[257631]: 2025-11-29 08:38:34.165 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:34.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 346 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.9 MiB/s rd, 5.8 MiB/s wr, 300 op/s
Nov 29 03:38:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:35 np0005539550 nova_compute[257631]: 2025-11-29 08:38:35.281 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:35Z|00845|binding|INFO|Releasing lport e44bf17c-ebb7-4e62-850a-20ff20a74960 from this chassis (sb_readonly=0)
Nov 29 03:38:35 np0005539550 nova_compute[257631]: 2025-11-29 08:38:35.615 257641 DEBUG nova.compute.manager [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:35 np0005539550 nova_compute[257631]: 2025-11-29 08:38:35.650 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:35 np0005539550 nova_compute[257631]: 2025-11-29 08:38:35.696 257641 DEBUG oslo_concurrency.lockutils [None req-15c6ece8-9be5-46f0-98a9-dae575755ba4 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 14.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:36.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:36.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 4.8 MiB/s wr, 374 op/s
Nov 29 03:38:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:38.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 297 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 4.8 MiB/s wr, 374 op/s
Nov 29 03:38:38 np0005539550 nova_compute[257631]: 2025-11-29 08:38:38.423 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:39 np0005539550 nova_compute[257631]: 2025-11-29 08:38:39.170 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:40.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:40 np0005539550 nova_compute[257631]: 2025-11-29 08:38:40.284 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:40 np0005539550 nova_compute[257631]: 2025-11-29 08:38:40.318 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405505.3181083, 850e472c-8d49-4ecb-8478-992f11eb6196 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:40 np0005539550 nova_compute[257631]: 2025-11-29 08:38:40.319 257641 INFO nova.compute.manager [-] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:38:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.4 MiB/s rd, 4.7 MiB/s wr, 376 op/s
Nov 29 03:38:40 np0005539550 nova_compute[257631]: 2025-11-29 08:38:40.354 257641 DEBUG nova.compute.manager [None req-02189231-090c-4556-adfd-e9ab4eb0f13b - - - - - -] [instance: 850e472c-8d49-4ecb-8478-992f11eb6196] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:42 np0005539550 nova_compute[257631]: 2025-11-29 08:38:42.139 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405507.1385524, 46075bef-e2f4-434f-8d14-deccfa05cd2f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:38:42 np0005539550 nova_compute[257631]: 2025-11-29 08:38:42.140 257641 INFO nova.compute.manager [-] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:38:42 np0005539550 nova_compute[257631]: 2025-11-29 08:38:42.158 257641 DEBUG nova.compute.manager [None req-8119b3ae-a140-4e01-a082-dcb029c401d2 - - - - - -] [instance: 46075bef-e2f4-434f-8d14-deccfa05cd2f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:38:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:42.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 1.1 MiB/s wr, 252 op/s
Nov 29 03:38:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:42Z|00846|binding|INFO|Releasing lport e44bf17c-ebb7-4e62-850a-20ff20a74960 from this chassis (sb_readonly=0)
Nov 29 03:38:42 np0005539550 nova_compute[257631]: 2025-11-29 08:38:42.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:42 np0005539550 nova_compute[257631]: 2025-11-29 08:38:42.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:43 np0005539550 nova_compute[257631]: 2025-11-29 08:38:43.333 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:43 np0005539550 nova_compute[257631]: 2025-11-29 08:38:43.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:43 np0005539550 nova_compute[257631]: 2025-11-29 08:38:43.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:38:43 np0005539550 nova_compute[257631]: 2025-11-29 08:38:43.945 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:38:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:44.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:44.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:44 np0005539550 nova_compute[257631]: 2025-11-29 08:38:44.205 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:44 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 29 03:38:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 275 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 681 KiB/s wr, 177 op/s
Nov 29 03:38:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:44Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:9f:50 10.100.0.3
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.211938) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525212031, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 793, "num_deletes": 254, "total_data_size": 1029143, "memory_usage": 1043216, "flush_reason": "Manual Compaction"}
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525221191, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 776540, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58559, "largest_seqno": 59351, "table_properties": {"data_size": 772681, "index_size": 1574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10208, "raw_average_key_size": 21, "raw_value_size": 764573, "raw_average_value_size": 1623, "num_data_blocks": 67, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405485, "oldest_key_time": 1764405485, "file_creation_time": 1764405525, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 9317 microseconds, and 5694 cpu microseconds.
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.221250) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 776540 bytes OK
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.221288) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.224208) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.224235) EVENT_LOG_v1 {"time_micros": 1764405525224226, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.224264) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1025102, prev total WAL file size 1025102, number of live WAL files 2.
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.225196) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323535' seq:0, type:0; will stop at (end)
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(758KB)], [131(12MB)]
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525225256, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 14172074, "oldest_snapshot_seqno": -1}
Nov 29 03:38:45 np0005539550 nova_compute[257631]: 2025-11-29 08:38:45.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 9171 keys, 10517524 bytes, temperature: kUnknown
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525323742, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10517524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10460395, "index_size": 33080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22981, "raw_key_size": 242799, "raw_average_key_size": 26, "raw_value_size": 10301103, "raw_average_value_size": 1123, "num_data_blocks": 1252, "num_entries": 9171, "num_filter_entries": 9171, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405525, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.324042) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10517524 bytes
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.325655) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.8 rd, 106.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(31.8) write-amplify(13.5) OK, records in: 9683, records dropped: 512 output_compression: NoCompression
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.325681) EVENT_LOG_v1 {"time_micros": 1764405525325669, "job": 80, "event": "compaction_finished", "compaction_time_micros": 98558, "compaction_time_cpu_micros": 28710, "output_level": 6, "num_output_files": 1, "total_output_size": 10517524, "num_input_records": 9683, "num_output_records": 9171, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525325968, "job": 80, "event": "table_file_deletion", "file_number": 133}
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405525328059, "job": 80, "event": "table_file_deletion", "file_number": 131}
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.225140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.328089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.328093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.328094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.328096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:38:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:38:45.328097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
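Reading jobs 79 and 80 together: a 776,540-byte L0 table (#133) was flushed, then manually compacted with the existing 12 MB L6 file (#131) into a single 10,517,524-byte output (#134). The write-amplify(13.5) and read-write-amplify(31.8) figures in the compaction summary follow directly from those byte counts, as this small check shows:

    # Recomputing the amplification figures rocksdb logged for job 80 from
    # the byte counts in the surrounding EVENT_LOG_v1 records.
    l0_in = 776_540                 # table #133, flushed by job 79
    total_in = 14_172_074           # job 80 input_data_size (#133 + #131)
    out = 10_517_524                # table #134, written by job 80

    print(round(out / l0_in, 1))               # 13.5 -> write-amplify
    print(round((total_in + out) / l0_in, 1))  # 31.8 -> read-write-amplify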
Nov 29 03:38:45 np0005539550 nova_compute[257631]: 2025-11-29 08:38:45.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:46.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:46.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
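The beast lines are radosgw's access log: client IP, user, timestamp, request line, HTTP status, body bytes, then latency. A regex sketch for pulling those fields out; the field layout is inferred from the lines above rather than from radosgw documentation:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s')

    sample = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
              '[29/Nov/2025:08:38:46.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    m = BEAST_RE.search(sample)
    print(m.group('ip'), m.group('request'), m.group('status'), m.group('latency'))

The anonymous HEAD / requests arriving every two seconds from 192.168.122.100 and .102 look like load-balancer health probes, which would explain the steady 200s at near-zero latency.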
Nov 29 03:38:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 862 KiB/s rd, 2.5 MiB/s wr, 156 op/s
Nov 29 03:38:46 np0005539550 nova_compute[257631]: 2025-11-29 08:38:46.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:46 np0005539550 nova_compute[257631]: 2025-11-29 08:38:46.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:46 np0005539550 nova_compute[257631]: 2025-11-29 08:38:46.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:46 np0005539550 nova_compute[257631]: 2025-11-29 08:38:46.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
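The Acquiring/acquired/released triplets here (and throughout this section) come from oslo.concurrency; the "inner" function named in each line is the wrapper that lockutils generates around the decorated method. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs with the named lock held; oslo emits the DEBUG
        # acquire/release pairs seen above around this call.
        pass

    clean_compute_node_cache()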
Nov 29 03:38:46 np0005539550 nova_compute[257631]: 2025-11-29 08:38:46.943 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:38:46 np0005539550 nova_compute[257631]: 2025-11-29 08:38:46.943 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/832370329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.387 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
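The resource audit shells out to the exact command logged above; `ceph df --format=json` returns cluster-wide totals that the driver converts into free-disk figures. A standalone sketch of the same call; the "stats" keys below are the ones `ceph df` emits, but verify them against your Ceph release:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout

    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f'total {stats["total_bytes"] / gib:.1f} GiB, '
          f'avail {stats["total_avail_bytes"] / gib:.1f} GiB')

On this cluster that works out to roughly the 21 GiB total / 20 GiB available figures the pgmap lines report.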
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.465 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.466 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.466 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.660 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.664 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3956MB free_disk=20.89777374267578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.665 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.666 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.763 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 5748313e-fbb3-409e-83e6-aff548491530 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.765 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.765 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:38:47 np0005539550 nova_compute[257631]: 2025-11-29 08:38:47.820 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:48.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:48.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814579618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:48 np0005539550 nova_compute[257631]: 2025-11-29 08:38:48.250 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:38:48 np0005539550 nova_compute[257631]: 2025-11-29 08:38:48.256 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:48 np0005539550 nova_compute[257631]: 2025-11-29 08:38:48.272 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
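The unchanged inventory implies fixed placement capacities, computed as (total - reserved) * allocation_ratio per resource class. Worked out for the values in the line above:

    # Placement capacity: (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: capacity {cap:g}")
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1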
Nov 29 03:38:48 np0005539550 nova_compute[257631]: 2025-11-29 08:38:48.296 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:38:48 np0005539550 nova_compute[257631]: 2025-11-29 08:38:48.296 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 298 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 862 KiB/s rd, 2.5 MiB/s wr, 156 op/s
Nov 29 03:38:49 np0005539550 nova_compute[257631]: 2025-11-29 08:38:49.207 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:49 np0005539550 nova_compute[257631]: 2025-11-29 08:38:49.296 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:49 np0005539550 nova_compute[257631]: 2025-11-29 08:38:49.297 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:49 np0005539550 nova_compute[257631]: 2025-11-29 08:38:49.297 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:49 np0005539550 nova_compute[257631]: 2025-11-29 08:38:49.298 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
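These periodic tasks are scheduled by oslo.service; `_reclaim_queued_deletes` exits early because `reclaim_instance_interval` is unset (<= 0), so soft-deleted instances are never reclaimed on a timer on this host. A sketch of how such a task is declared; the 60-second spacing is illustrative, not nova's configured value:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # illustrative interval
        def _poll_volume_usage(self, context):
            # Each firing produces a "Running periodic task ..." DEBUG
            # line like the ones above.
            pass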
Nov 29 03:38:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:50.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:50 np0005539550 nova_compute[257631]: 2025-11-29 08:38:50.325 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 300 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 999 KiB/s rd, 2.6 MiB/s wr, 156 op/s
Nov 29 03:38:51 np0005539550 nova_compute[257631]: 2025-11-29 08:38:51.410 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:52.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:52.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 300 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.6 MiB/s wr, 131 op/s
Nov 29 03:38:52 np0005539550 nova_compute[257631]: 2025-11-29 08:38:52.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.045 257641 DEBUG oslo_concurrency.lockutils [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.046 257641 DEBUG oslo_concurrency.lockutils [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.062 257641 INFO nova.compute.manager [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Detaching volume d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd#033[00m
Nov 29 03:38:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:38:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:38:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:54.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.210 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.252 257641 INFO nova.virt.block_device [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Attempting to driver detach volume d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd from mountpoint /dev/vdc#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.260 257641 DEBUG nova.virt.libvirt.driver [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Attempting to detach device vdc from instance 5748313e-fbb3-409e-83e6-aff548491530 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.261 257641 DEBUG nova.virt.libvirt.guest [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd">
Nov 29 03:38:54 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <serial>d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd</serial>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:38:54 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.283 257641 INFO nova.virt.libvirt.driver [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully detached device vdc from instance 5748313e-fbb3-409e-83e6-aff548491530 from the persistent domain config.#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.284 257641 DEBUG nova.virt.libvirt.driver [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance 5748313e-fbb3-409e-83e6-aff548491530 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.285 257641 DEBUG nova.virt.libvirt.guest [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd">
Nov 29 03:38:54 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <serial>d6f82ad4-8ec0-4138-9a4b-77fdefc17bbd</serial>
Nov 29 03:38:54 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Nov 29 03:38:54 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:38:54 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:38:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 303 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 953 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.347 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405534.346691, 5748313e-fbb3-409e-83e6-aff548491530 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.348 257641 DEBUG nova.virt.libvirt.driver [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance 5748313e-fbb3-409e-83e6-aff548491530 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.352 257641 INFO nova.virt.libvirt.driver [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully detached device vdc from instance 5748313e-fbb3-409e-83e6-aff548491530 from the live domain config.#033[00m
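The detach above is two-phased: remove the disk from the persistent domain definition, then hot-unplug it from the live guest and wait for libvirt's asynchronous device-removed event (seen at 08:38:54.347). A minimal sketch with the libvirt Python bindings; the XML is shortened (libvirt matches disks on the target dev), and this is not nova's actual code path, which lives in the driver/guest modules named in the lines above:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-000000b4")

    disk_xml = """<disk type='network' device='disk'>
      <target dev='vdc' bus='virtio'/>
    </disk>"""  # shortened; nova sends the full definition logged above

    # Phase 1: drop the device from the persistent config.
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # Phase 2: hot-unplug from the running guest; completion arrives
    # asynchronously as a VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED event.
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)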
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.566 257641 DEBUG nova.objects.instance [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'flavor' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:54 np0005539550 nova_compute[257631]: 2025-11-29 08:38:54.637 257641 DEBUG oslo_concurrency.lockutils [None req-a6c08243-371b-407c-9718-1f1a2414fcba eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.327 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:55 np0005539550 podman[371849]: 2025-11-29 08:38:55.341964728 +0000 UTC m=+0.061376293 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:38:55 np0005539550 podman[371848]: 2025-11-29 08:38:55.347917646 +0000 UTC m=+0.073179776 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS)
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.894 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.895 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.895 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.896 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.896 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.897 257641 INFO nova.compute.manager [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Terminating instance#033[00m
Nov 29 03:38:55 np0005539550 nova_compute[257631]: 2025-11-29 08:38:55.898 257641 DEBUG nova.compute.manager [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:38:56 np0005539550 kernel: tap6fb79eb2-29 (unregistering): left promiscuous mode
Nov 29 03:38:56 np0005539550 NetworkManager[49039]: <info>  [1764405536.0441] device (tap6fb79eb2-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:38:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:56Z|00847|binding|INFO|Releasing lport 6fb79eb2-29a8-4947-8bac-7bed17841673 from this chassis (sb_readonly=0)
Nov 29 03:38:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:56Z|00848|binding|INFO|Setting lport 6fb79eb2-29a8-4947-8bac-7bed17841673 down in Southbound
Nov 29 03:38:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:38:56Z|00849|binding|INFO|Removing iface tap6fb79eb2-29 ovn-installed in OVS
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.068 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.069 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.074 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:56.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.074 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9f:50 10.100.0.3'], port_security=['fa:16:3e:8b:9f:50 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5748313e-fbb3-409e-83e6-aff548491530', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'c266cfe3-9f5e-4a42-93d4-52df1525211e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=6fb79eb2-29a8-4947-8bac-7bed17841673) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.076 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 6fb79eb2-29a8-4947-8bac-7bed17841673 in datapath d042f24f-c2f0-4843-9727-cc3720586596 unbound from our chassis#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.078 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d042f24f-c2f0-4843-9727-cc3720586596, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.079 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d95b09cf-f28e-4db6-8bf8-9a26be693a5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.080 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace which is not needed anymore#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.094 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Nov 29 03:38:56 np0005539550 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d000000b4.scope: Consumed 14.993s CPU time.
Nov 29 03:38:56 np0005539550 systemd-machined[216673]: Machine qemu-99-instance-000000b4 terminated.
Nov 29 03:38:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:38:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:38:56 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [NOTICE]   (371700) : haproxy version is 2.8.14-c23fe91
Nov 29 03:38:56 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [NOTICE]   (371700) : path to executable is /usr/sbin/haproxy
Nov 29 03:38:56 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [WARNING]  (371700) : Exiting Master process...
Nov 29 03:38:56 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [WARNING]  (371700) : Exiting Master process...
Nov 29 03:38:56 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [ALERT]    (371700) : Current worker (371702) exited with code 143 (Terminated)
Nov 29 03:38:56 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[371696]: [WARNING]  (371700) : All workers exited. Exiting... (0)
Nov 29 03:38:56 np0005539550 systemd[1]: libpod-b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103.scope: Deactivated successfully.
Nov 29 03:38:56 np0005539550 podman[371913]: 2025-11-29 08:38:56.216920348 +0000 UTC m=+0.046810602 container died b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:38:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103-userdata-shm.mount: Deactivated successfully.
Nov 29 03:38:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0c48cf31d185bdda61fe22c6d2af6f7a6c53d800eeb921aa56bf77fd243da31e-merged.mount: Deactivated successfully.
Nov 29 03:38:56 np0005539550 podman[371913]: 2025-11-29 08:38:56.254248804 +0000 UTC m=+0.084139058 container cleanup b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:38:56 np0005539550 systemd[1]: libpod-conmon-b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103.scope: Deactivated successfully.
Nov 29 03:38:56 np0005539550 podman[371944]: 2025-11-29 08:38:56.313548355 +0000 UTC m=+0.040531487 container remove b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.318 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.319 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[238989d3-9b7d-4f84-91c3-53dfdbc48be2]: (4, ('Sat Nov 29 08:38:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103)\nb5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103\nSat Nov 29 08:38:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103)\nb5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
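The privsep reply above captures a wrapper script stopping and then deleting the per-network haproxy side-car container. The CLI equivalent of what that wrapper ran, sketched via subprocess:

    import subprocess

    cid = "b5ea06eddb5d611abfb0e378bc5df009eef8315c26c17548fa91b4604da94103"
    # Stop the haproxy side-car, then remove it, mirroring the wrapper output.
    subprocess.run(["podman", "stop", cid], check=True)
    subprocess.run(["podman", "rm", cid], check=True)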
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.320 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c3c5cf-3d71-4a06-ab8f-cd673b8f1630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.321 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.322 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.332 257641 INFO nova.virt.libvirt.driver [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Instance destroyed successfully.#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.334 257641 DEBUG nova.objects.instance [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'resources' on Instance uuid 5748313e-fbb3-409e-83e6-aff548491530 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:38:56 np0005539550 kernel: tapd042f24f-c0: left promiscuous mode
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.339 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.342 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 877 KiB/s rd, 3.6 MiB/s wr, 137 op/s
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.345 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06d7476c-f270-4e72-8186-9f2287f8a5f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.358 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3652361d-b8d5-4a12-b07f-e8b7f16b3c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.359 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef87e91-2d74-4c9a-867e-cb0f1b47545a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.375 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd890bd-1137-4796-b46a-0f2b159b69a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 855908, 'reachable_time': 41315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371973, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.377 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:38:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:38:56.377 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[90bb8c87-b009-43ed-ae77-8ee57f535c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:38:56 np0005539550 systemd[1]: run-netns-ovnmeta\x2dd042f24f\x2dc2f0\x2d4843\x2d9727\x2dcc3720586596.mount: Deactivated successfully.
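With no VIF left in datapath d042f24f, the agent tears down the ovnmeta namespace; neutron's privileged remove_netns is, underneath, a pyroute2 network-namespace removal. An equivalent standalone sketch (requires root; import path as used by neutron's ip_lib):

    from pyroute2 import netns

    ns = "ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596"
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks /run/netns/<name>, as systemd confirms above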
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.437 257641 DEBUG nova.virt.libvirt.vif [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-716574022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-716574022',id=180,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBSzclE1qHhmNtB+Dlhxw2Ps8HWdSPocEaWlsxDme1qVoS+t7CcJ/Bo6lrhqUu/ZA6JT3SxHX6WwNieCHu9AGemn9sAzHapyUGRyjBFuHCFhJn85rjzwwkttpV/QWN0gNg==',key_name='tempest-keypair-1744874454',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:38:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-pwfpf00q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:38:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=5748313e-fbb3-409e-83e6-aff548491530,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.438 257641 DEBUG nova.network.os_vif_util [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "6fb79eb2-29a8-4947-8bac-7bed17841673", "address": "fa:16:3e:8b:9f:50", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fb79eb2-29", "ovs_interfaceid": "6fb79eb2-29a8-4947-8bac-7bed17841673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.439 257641 DEBUG nova.network.os_vif_util [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.439 257641 DEBUG os_vif [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.441 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.441 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fb79eb2-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
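The DelPortCommand above is ovsdbapp's native-OVSDB equivalent of `ovs-vsctl del-port br-int tap6fb79eb2-29`. A sketch of issuing the same transaction directly; the unix socket path is an assumption for a typical local switch:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# attach a python OVSDB IDL to the local Open_vSwitch database
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# one-command transaction matching the logged DelPortCommand(...)
api.del_port('tap6fb79eb2-29', bridge='br-int', if_exists=True).execute(
    check_error=True)
```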
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.443 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.445 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.448 257641 INFO os_vif [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9f:50,bridge_name='br-int',has_traffic_filtering=True,id=6fb79eb2-29a8-4947-8bac-7bed17841673,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6fb79eb2-29')#033[00m
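The two lines before this show the full flow: nova converts its own VIF model to an os-vif VIFOpenVSwitch object, then hands it to the os-vif library. A minimal sketch of that public API, with field values taken from the log (the Network object is reduced to the id the ovs plugin needs, and any omitted fields are left to their defaults):

```python
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the ovs/linux_bridge/... plugins via stevedore

vif_obj = vif.VIFOpenVSwitch(
    id='6fb79eb2-29a8-4947-8bac-7bed17841673',
    address='fa:16:3e:8b:9f:50',
    bridge_name='br-int',
    vif_name='tap6fb79eb2-29',
    plugin='ovs',
    network=network.Network(id='d042f24f-c2f0-4843-9727-cc3720586596'))
inst = instance_info.InstanceInfo(
    uuid='5748313e-fbb3-409e-83e6-aff548491530',
    name='tempest-AttachVolumeShelveTestJSON-server-716574022')

os_vif.unplug(vif_obj, inst)  # dispatches to the 'ovs' plugin, which removes the port
```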
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.959 257641 DEBUG nova.compute.manager [req-aec5b675-c5d5-4d40-8fbc-0ccf18052fca req-1823c8b9-bfee-4e1d-9686-c8427eca46a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-unplugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.960 257641 DEBUG oslo_concurrency.lockutils [req-aec5b675-c5d5-4d40-8fbc-0ccf18052fca req-1823c8b9-bfee-4e1d-9686-c8427eca46a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.960 257641 DEBUG oslo_concurrency.lockutils [req-aec5b675-c5d5-4d40-8fbc-0ccf18052fca req-1823c8b9-bfee-4e1d-9686-c8427eca46a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.960 257641 DEBUG oslo_concurrency.lockutils [req-aec5b675-c5d5-4d40-8fbc-0ccf18052fca req-1823c8b9-bfee-4e1d-9686-c8427eca46a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.961 257641 DEBUG nova.compute.manager [req-aec5b675-c5d5-4d40-8fbc-0ccf18052fca req-1823c8b9-bfee-4e1d-9686-c8427eca46a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] No waiting events found dispatching network-vif-unplugged-6fb79eb2-29a8-4947-8bac-7bed17841673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.961 257641 DEBUG nova.compute.manager [req-aec5b675-c5d5-4d40-8fbc-0ccf18052fca req-1823c8b9-bfee-4e1d-9686-c8427eca46a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-unplugged-6fb79eb2-29a8-4947-8bac-7bed17841673 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.983 257641 INFO nova.virt.libvirt.driver [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Deleting instance files /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530_del#033[00m
Nov 29 03:38:56 np0005539550 nova_compute[257631]: 2025-11-29 08:38:56.984 257641 INFO nova.virt.libvirt.driver [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Deletion of /var/lib/nova/instances/5748313e-fbb3-409e-83e6-aff548491530_del complete#033[00m
Nov 29 03:38:57 np0005539550 nova_compute[257631]: 2025-11-29 08:38:57.180 257641 INFO nova.compute.manager [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Took 1.28 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:38:57 np0005539550 nova_compute[257631]: 2025-11-29 08:38:57.181 257641 DEBUG oslo.service.loopingcall [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:38:57 np0005539550 nova_compute[257631]: 2025-11-29 08:38:57.181 257641 DEBUG nova.compute.manager [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:38:57 np0005539550 nova_compute[257631]: 2025-11-29 08:38:57.181 257641 DEBUG nova.network.neutron [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:57 np0005539550 nova_compute[257631]: 2025-11-29 08:38:57.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:38:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:38:58.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
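The recurring radosgw "beast:" lines are RGW's access log; the anonymous HEAD / probes every two seconds from 192.168.122.100/.102 are load-balancer health checks. A throwaway parser sketch for exactly the field layout shown here (the regex is an assumption tied to this format, not a general RGW log grammar):

```python
import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.* latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:38:58.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group('client'), m.group('status'), m.group('latency'))
# -> 192.168.122.100 200 0.000000000
```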
Nov 29 03:38:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:38:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:38:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:38:58.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:38:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:38:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:38:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 348 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 190 KiB/s rd, 1.9 MiB/s wr, 50 op/s
Nov 29 03:38:58 np0005539550 nova_compute[257631]: 2025-11-29 08:38:58.704 257641 DEBUG nova.network.neutron [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:38:58 np0005539550 nova_compute[257631]: 2025-11-29 08:38:58.731 257641 INFO nova.compute.manager [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Took 1.55 seconds to deallocate network for instance.#033[00m
Nov 29 03:38:58 np0005539550 nova_compute[257631]: 2025-11-29 08:38:58.790 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:58 np0005539550 nova_compute[257631]: 2025-11-29 08:38:58.790 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:58 np0005539550 nova_compute[257631]: 2025-11-29 08:38:58.807 257641 DEBUG nova.compute.manager [req-f4d3eb77-17ca-44c1-8422-ad64182476f2 req-18aa0f97-a717-4215-89d5-2e44cbdf3a62 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-deleted-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:58 np0005539550 nova_compute[257631]: 2025-11-29 08:38:58.847 257641 DEBUG oslo_concurrency.processutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.075 257641 DEBUG nova.compute.manager [req-24533de5-845b-4aba-be2b-42482a3a66fd req-fcb5bd7c-1a44-4ebc-aa8a-344d51ff89c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.076 257641 DEBUG oslo_concurrency.lockutils [req-24533de5-845b-4aba-be2b-42482a3a66fd req-fcb5bd7c-1a44-4ebc-aa8a-344d51ff89c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5748313e-fbb3-409e-83e6-aff548491530-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.076 257641 DEBUG oslo_concurrency.lockutils [req-24533de5-845b-4aba-be2b-42482a3a66fd req-fcb5bd7c-1a44-4ebc-aa8a-344d51ff89c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.076 257641 DEBUG oslo_concurrency.lockutils [req-24533de5-845b-4aba-be2b-42482a3a66fd req-fcb5bd7c-1a44-4ebc-aa8a-344d51ff89c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.076 257641 DEBUG nova.compute.manager [req-24533de5-845b-4aba-be2b-42482a3a66fd req-fcb5bd7c-1a44-4ebc-aa8a-344d51ff89c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] No waiting events found dispatching network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.077 257641 WARNING nova.compute.manager [req-24533de5-845b-4aba-be2b-42482a3a66fd req-fcb5bd7c-1a44-4ebc-aa8a-344d51ff89c3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Received unexpected event network-vif-plugged-6fb79eb2-29a8-4947-8bac-7bed17841673 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.248 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:38:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:38:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143990179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.268 257641 DEBUG oslo_concurrency.processutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
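As the two lines above show, nova's RBD image backend sizes its storage by shelling out to `ceph df` rather than using librados stats. A minimal reproduction with the same oslo helper nova uses; 'vms' is this deployment's nova pool, and the JSON keys are the standard `ceph df --format=json` layout:

```python
import json
from oslo_concurrency import processutils

# same command and credentials as the logged subprocess call
out, _err = processutils.execute(
    'ceph', 'df', '--format=json', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')

# per-pool stats feed the DISK_GB inventory reported to placement
pools = {pool['name']: pool['stats'] for pool in json.loads(out)['pools']}
print(pools['vms']['bytes_used'], pools['vms']['max_avail'])
```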
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.273 257641 DEBUG nova.compute.provider_tree [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.293 257641 DEBUG nova.scheduler.client.report [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
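The inventory dict above maps to schedulable capacity as (total - reserved) × allocation_ratio, ignoring min_unit/max_unit rounding. A worked check of this host's numbers:

```python
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}

# placement treats (total - reserved) * allocation_ratio as usable capacity
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {cap:g}")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
```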
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.314 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.339 257641 INFO nova.scheduler.client.report [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Deleted allocations for instance 5748313e-fbb3-409e-83e6-aff548491530#033[00m
Nov 29 03:38:59 np0005539550 nova_compute[257631]: 2025-11-29 08:38:59.425 257641 DEBUG oslo_concurrency.lockutils [None req-bb7cc8c2-1642-4bb5-88af-1b8f2b8a8d84 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "5748313e-fbb3-409e-83e6-aff548491530" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:38:59
Nov 29 03:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', '.mgr', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'volumes']
Nov 29 03:38:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:39:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:00.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:00.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 334 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 678 KiB/s rd, 1.9 MiB/s wr, 72 op/s
Nov 29 03:39:01 np0005539550 nova_compute[257631]: 2025-11-29 08:39:01.443 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:01 np0005539550 nova_compute[257631]: 2025-11-29 08:39:01.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:02.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:39:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:02.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:39:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Nov 29 03:39:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:04.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:04 np0005539550 nova_compute[257631]: 2025-11-29 08:39:04.250 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 146 op/s
Nov 29 03:39:04 np0005539550 podman[372020]: 2025-11-29 08:39:04.348647492 +0000 UTC m=+0.083037700 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true)
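The health_status=healthy event above comes from podman's healthcheck timer running the configured '/openstack/healthcheck' test inside the ovn_controller container. A sketch of triggering the same check on demand (container name from the log):

```python
import subprocess

# re-run the container's configured healthcheck; exit status 0 means healthy
rc = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller']).returncode
print('healthy' if rc == 0 else f'unhealthy (rc={rc})')
```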
Nov 29 03:39:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:05 np0005539550 nova_compute[257631]: 2025-11-29 08:39:05.415 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:06.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:06.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 204 op/s
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.445 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.452 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.452 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.471 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.572 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.573 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.583 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.584 257641 INFO nova.compute.claims [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:39:06 np0005539550 nova_compute[257631]: 2025-11-29 08:39:06.741 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065846408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.199 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.206 257641 DEBUG nova.compute.provider_tree [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.237 257641 DEBUG nova.scheduler.client.report [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.269 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.269 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.336 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.337 257641 DEBUG nova.network.neutron [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.359 257641 INFO nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.381 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.482 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.484 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.484 257641 INFO nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Creating image(s)#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.517 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.545 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.570 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.574 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.608 257641 DEBUG nova.policy [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eeba34466b8f4a1bb5f742f1e811053c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '889608c71d13429fb37793575792ae74', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:39:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.645 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
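The qemu-img probe above is wrapped in `python3 -m oslo_concurrency.prlimit` so a malformed base image cannot exhaust memory or CPU while being inspected. The same guard expressed through processutils directly, which builds exactly that wrapper (paths and limits from the logged command):

```python
from oslo_concurrency import processutils

# mirrors the logged flags: --as=1073741824 --cpu=30
limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

out, _err = processutils.execute(
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
    '--force-share', '--output=json',
    env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    prlimit=limits)
```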
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.647 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.648 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.649 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.676 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.680 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 78534f04-30a6-4f58-9768-091f48082c9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:07 np0005539550 nova_compute[257631]: 2025-11-29 08:39:07.976 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 78534f04-30a6-4f58-9768-091f48082c9c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.052 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] resizing rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
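After the `rbd import` of the cached base image, nova grows the disk to the flavor's 1 GiB root_gb, as logged above. A sketch of the same resize through the librbd python bindings, using the pool, image name, and credentials from this log:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '78534f04-30a6-4f58-9768-091f48082c9c_disk') as image:
            image.resize(1073741824)  # grow to 1 GiB, matching the logged resize
            print(image.size())       # -> 1073741824
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```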
Nov 29 03:39:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:08.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.161 257641 DEBUG nova.objects.instance [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'migration_context' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.173 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.174 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Ensure instance console log exists: /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.174 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.174 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.175 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:08.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 267 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 35 KiB/s wr, 210 op/s
Nov 29 03:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:39:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:39:08 np0005539550 nova_compute[257631]: 2025-11-29 08:39:08.521 257641 DEBUG nova.network.neutron [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Successfully created port: df823be2-d3ae-4d3c-b70a-37db097fc356 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
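The port creation above is nova calling Neutron's REST API on the tenant's behalf; _create_port_minimal posts little more than the network id and lets OVN bind it. A rough equivalent from outside nova via openstacksdk (the cloud name is an assumption for a clouds.yaml entry; the network id is the one in this log):

```python
import openstack

conn = openstack.connect(cloud='mycloud')

port = conn.network.create_port(
    network_id='d042f24f-c2f0-4843-9727-cc3720586596')
print(port.id, port.mac_address)
```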
Nov 29 03:39:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:39:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:39:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:39:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:39:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.247 257641 DEBUG nova.network.neutron [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Successfully updated port: df823be2-d3ae-4d3c-b70a-37db097fc356 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.251 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.260 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.260 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.260 257641 DEBUG nova.network.neutron [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.341 257641 DEBUG nova.compute.manager [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.342 257641 DEBUG nova.compute.manager [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing instance network info cache due to event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.342 257641 DEBUG oslo_concurrency.lockutils [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:09 np0005539550 nova_compute[257631]: 2025-11-29 08:39:09.416 257641 DEBUG nova.network.neutron [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:39:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:10.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:10.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 203 op/s
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.587 257641 DEBUG nova.network.neutron [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.610 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.610 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance network_info: |[{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
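
The network_info payload logged above is plain JSON, so its addressing details can be pulled out directly. A trimmed sketch, keeping only the fields read below from the logged entry:

    import json

    # Trimmed excerpt of the VIF entry logged above.
    vif = json.loads('{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", '
                     '"address": "fa:16:3e:bc:b9:6e", '
                     '"network": {"bridge": "br-int", '
                     '"subnets": [{"cidr": "10.100.0.0/28", '
                     '"ips": [{"address": "10.100.0.14"}]}], '
                     '"meta": {"mtu": 1442}}}')

    subnet = vif["network"]["subnets"][0]
    print(vif["id"], vif["address"],                    # port UUID and MAC
          subnet["ips"][0]["address"], subnet["cidr"],  # 10.100.0.14 in /28
          vif["network"]["meta"]["mtu"])                # 1442 (Geneve overhead)
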
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.610 257641 DEBUG oslo_concurrency.lockutils [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.611 257641 DEBUG nova.network.neutron [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.613 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Start _get_guest_xml network_info=[{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.618 257641 WARNING nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.624 257641 DEBUG nova.virt.libvirt.host [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.625 257641 DEBUG nova.virt.libvirt.host [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.628 257641 DEBUG nova.virt.libvirt.host [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.628 257641 DEBUG nova.virt.libvirt.host [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
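
These two searches mirror a simple host probe: on a cgroups-v2 (unified hierarchy) host the active controllers are listed in a single file, and the log shows the cpu controller present there while absent from the v1 hierarchy. A minimal stand-in for that check (not Nova's exact code):

    # Stand-in for the cgroups-v2 probe behind the lines above: on a unified
    # hierarchy, the available controllers are listed in cgroup.controllers.
    from pathlib import Path

    controllers = Path("/sys/fs/cgroup/cgroup.controllers").read_text().split()
    print("cpu" in controllers)  # True on this host, matching the log
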
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.630 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.630 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.631 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.631 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.631 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.631 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.632 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.632 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.632 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.632 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.632 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.633 257641 DEBUG nova.virt.hardware [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
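
Read together, the hardware.py lines above trace Nova's CPU-topology search: with no flavor or image constraints (preferred 0:0:0, limits of 65536 per dimension) and a single vCPU, the only factorization of 1 into sockets x cores x threads is 1:1:1. An illustrative re-derivation of that search (not Nova's implementation):

    # Illustrative: enumerate (sockets, cores, threads) triples whose product
    # equals the vCPU count, within the logged per-dimension limit of 65536.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            for cores in range(1, min(vcpus // sockets, max_each) + 1):
                threads, rem = divmod(vcpus, sockets * cores)
                if rem == 0 and threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
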
Nov 29 03:39:10 np0005539550 nova_compute[257631]: 2025-11-29 08:39:10.635 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/483798028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.067 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
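
This subprocess is how the driver discovers the monitor addresses that later appear as <host> entries in the guest's RBD disk XML. A sketch of the same call and the fields it needs; it requires the client.openstack keyring on the host, and the exact JSON layout varies slightly across Ceph releases:

    import json
    import subprocess

    # Same command as logged above.
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    for mon in json.loads(out)["mons"]:
        # e.g. 192.168.122.100:6789 (address field name varies by release)
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))
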
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.092 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.096 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.331 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405536.3302312, 5748313e-fbb3-409e-83e6-aff548491530 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.332 257641 INFO nova.compute.manager [-] [instance: 5748313e-fbb3-409e-83e6-aff548491530] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.352 257641 DEBUG nova.compute.manager [None req-1652c53e-0462-490b-804c-cdd9eaaa3c02 - - - - - -] [instance: 5748313e-fbb3-409e-83e6-aff548491530] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.446 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252723546' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.568 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.569 257641 DEBUG nova.virt.libvirt.vif [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1172691465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1172691465',id=185,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFU/Um4efU77ova0kz9WpooHJ1R78p7btmWfhWnw2SWonD75t5qli/+3ke1m06M5GJ4lWVadgCjMA2qcBuT9S+rdGvf2uoQyHcOJoYChEuEWX8cHtdlJ+rQFdBRmufcsRw==',key_name='tempest-keypair-701830470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-6fq03z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=78534f04-30a6-4f58-9768-091f48082c9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.569 257641 DEBUG nova.network.os_vif_util [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.571 257641 DEBUG nova.network.os_vif_util [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.572 257641 DEBUG nova.objects.instance [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.587 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <uuid>78534f04-30a6-4f58-9768-091f48082c9c</uuid>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <name>instance-000000b9</name>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-1172691465</nova:name>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:39:10</nova:creationTime>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:user uuid="eeba34466b8f4a1bb5f742f1e811053c">tempest-AttachVolumeShelveTestJSON-907905934-project-member</nova:user>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:project uuid="889608c71d13429fb37793575792ae74">tempest-AttachVolumeShelveTestJSON-907905934</nova:project>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <nova:port uuid="df823be2-d3ae-4d3c-b70a-37db097fc356">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <entry name="serial">78534f04-30a6-4f58-9768-091f48082c9c</entry>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <entry name="uuid">78534f04-30a6-4f58-9768-091f48082c9c</entry>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/78534f04-30a6-4f58-9768-091f48082c9c_disk">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/78534f04-30a6-4f58-9768-091f48082c9c_disk.config">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:bc:b9:6e"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <target dev="tapdf823be2-d3"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/console.log" append="off"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:39:11 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:39:11 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:39:11 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:39:11 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
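
The domain definition dumped above is ordinary libvirt XML, so its storage wiring can be read back mechanically. A short sketch that extracts the RBD source and monitor hosts, using an abbreviated copy of the first <disk> element from the dump:

    import xml.etree.ElementTree as ET

    # Abbreviated copy of the first <disk> element from the XML above.
    XML = """<domain type="kvm"><devices>
      <disk type="network" device="disk">
        <source protocol="rbd" name="vms/78534f04-30a6-4f58-9768-091f48082c9c_disk">
          <host name="192.168.122.100" port="6789"/>
          <host name="192.168.122.102" port="6789"/>
          <host name="192.168.122.101" port="6789"/>
        </source>
        <target dev="vda" bus="virtio"/>
      </disk>
    </devices></domain>"""

    for disk in ET.fromstring(XML).iter("disk"):
        src = disk.find("source")
        print(disk.find("target").get("dev"), src.get("name"),
              [h.get("name") for h in src.findall("host")])
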
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.589 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Preparing to wait for external event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.590 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.590 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.590 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
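
"Preparing to wait for external event network-vif-plugged-<port>" means the spawn path will block until Neutron reports the port as live back through the Nova API; the lock lines around it guard the shared event registry. A stripped-down model of that handshake (not Nova's code; the timeout is illustrative):

    import threading

    events = {}

    def prepare(name):                     # spawn path registers interest
        events[name] = threading.Event()

    def on_external_event(name):           # Neutron notification arrives
        events[name].set()

    name = "network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356"
    prepare(name)
    on_external_event(name)                # normally fired asynchronously
    assert events[name].wait(timeout=300)  # spawn resumes once set
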
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.591 257641 DEBUG nova.virt.libvirt.vif [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1172691465',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1172691465',id=185,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFU/Um4efU77ova0kz9WpooHJ1R78p7btmWfhWnw2SWonD75t5qli/+3ke1m06M5GJ4lWVadgCjMA2qcBuT9S+rdGvf2uoQyHcOJoYChEuEWX8cHtdlJ+rQFdBRmufcsRw==',key_name='tempest-keypair-701830470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-6fq03z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=78534f04-30a6-4f58-9768-091f48082c9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.592 257641 DEBUG nova.network.os_vif_util [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.592 257641 DEBUG nova.network.os_vif_util [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.593 257641 DEBUG os_vif [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.594 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.594 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.595 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.597 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.598 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf823be2-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.598 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf823be2-d3, col_values=(('external_ids', {'iface-id': 'df823be2-d3ae-4d3c-b70a-37db097fc356', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:b9:6e', 'vm-uuid': '78534f04-30a6-4f58-9768-091f48082c9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.600 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:11 np0005539550 NetworkManager[49039]: <info>  [1764405551.6020] manager: (tapdf823be2-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.606 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.606 257641 INFO os_vif [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3')#033[00m
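
The two ovsdb transactions above (an idempotent AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) amount to roughly the following CLI equivalent, sketched here via subprocess; os-vif actually commits this through the OVSDB IDL rather than by shelling out:

    import subprocess

    # Approximate ovs-vsctl equivalent of the ovsdbapp transaction above.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tapdf823be2-d3",
         "--", "set", "Interface", "tapdf823be2-d3",
         "external_ids:iface-id=df823be2-d3ae-4d3c-b70a-37db097fc356",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:bc:b9:6e",
         "external_ids:vm-uuid=78534f04-30a6-4f58-9768-091f48082c9c"],
        check=True)

Setting external_ids:iface-id is what lets ovn-controller match the OVS interface to the logical port and claim it, as the binding lines at 08:39:14 below show.
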
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.651 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.652 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.652 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No VIF found with MAC fa:16:3e:bc:b9:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.652 257641 INFO nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Using config drive#033[00m
Nov 29 03:39:11 np0005539550 nova_compute[257631]: 2025-11-29 08:39:11.677 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:12.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:12.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 333 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.2 MiB/s wr, 195 op/s
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.012 257641 INFO nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Creating config drive at /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config#033[00m
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.019 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9f7iwa0f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.154 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9f7iwa0f" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
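
The config drive is a plain ISO 9660 image labelled config-2, built from a temporary staging directory of metadata files. A sketch of the same invocation; note that the publisher string is a single argv element even though the oslo log line prints the joined command without quotes:

    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "staging_dir"],  # stand-in for the /tmp/tmp... metadata tree above
        check=True)
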
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.285 257641 DEBUG nova.storage.rbd_utils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.289 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config 78534f04-30a6-4f58-9768-091f48082c9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.627 257641 DEBUG nova.network.neutron [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updated VIF entry in instance network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.627 257641 DEBUG nova.network.neutron [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:13 np0005539550 nova_compute[257631]: 2025-11-29 08:39:13.725 257641 DEBUG oslo_concurrency.lockutils [req-61bb69ff-8f4c-48c8-8f8f-1ceee37e73a0 req-16df6337-ada8-4fb5-be26-82ad998f8bc8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:14.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:14.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.253 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.311 257641 DEBUG oslo_concurrency.processutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config 78534f04-30a6-4f58-9768-091f48082c9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.311 257641 INFO nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Deleting local config drive /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config because it was imported into RBD.#033[00m
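
With this backend the finished ISO does not stay on local disk: it is imported into the vms pool as <uuid>_disk.config, the RBD object already referenced by the guest XML above, and the local copy is then removed. A sketch of that pair of steps:

    import os
    import subprocess

    local = ("/var/lib/nova/instances/"
             "78534f04-30a6-4f58-9768-091f48082c9c/disk.config")

    # Same import command as logged above, then the cleanup the driver
    # performs once the image lives in RBD.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local,
         "78534f04-30a6-4f58-9768-091f48082c9c_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.unlink(local)
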
Nov 29 03:39:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 333 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 180 op/s
Nov 29 03:39:14 np0005539550 kernel: tapdf823be2-d3: entered promiscuous mode
Nov 29 03:39:14 np0005539550 NetworkManager[49039]: <info>  [1764405554.3712] manager: (tapdf823be2-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Nov 29 03:39:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:14Z|00850|binding|INFO|Claiming lport df823be2-d3ae-4d3c-b70a-37db097fc356 for this chassis.
Nov 29 03:39:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:14Z|00851|binding|INFO|df823be2-d3ae-4d3c-b70a-37db097fc356: Claiming fa:16:3e:bc:b9:6e 10.100.0.14
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.372 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.379 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:b9:6e 10.100.0.14'], port_security=['fa:16:3e:bc:b9:6e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '78534f04-30a6-4f58-9768-091f48082c9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d2929c7-13e0-4091-aa93-1048c769102b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=df823be2-d3ae-4d3c-b70a-37db097fc356) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
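[note] The "Matched UPDATE: PortBindingUpdatedEvent" line is ovsdbapp's row-event machinery at work: the agent registers a handler against the southbound Port_Binding table and is woken when the chassis column flips from empty to this host. A stripped-down sketch of such a handler (illustrative, not the agent's actual class):

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBound(row_event.RowEvent):
    """Fires when a Port_Binding row gains a chassis."""
    def __init__(self):
        # watch UPDATE events on Port_Binding, no extra match conditions
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
        self.event_name = 'PortBound'

    def run(self, event, row, old):
        # 'old' carries the previous column values; chassis going from []
        # to a Row means the port was just bound, as logged above
        print('lport %s bound' % row.logical_port)

ev = PortBound()   # would be registered with the agent's SB IDL connection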
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.380 158978 INFO neutron.agent.ovn.metadata.agent [-] Port df823be2-d3ae-4d3c-b70a-37db097fc356 in datapath d042f24f-c2f0-4843-9727-cc3720586596 bound to our chassis#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.381 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d042f24f-c2f0-4843-9727-cc3720586596#033[00m
Nov 29 03:39:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:14Z|00852|binding|INFO|Setting lport df823be2-d3ae-4d3c-b70a-37db097fc356 ovn-installed in OVS
Nov 29 03:39:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:14Z|00853|binding|INFO|Setting lport df823be2-d3ae-4d3c-b70a-37db097fc356 up in Southbound
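[note] With the claim committed (ovn-installed in OVS, up in the Southbound DB), the binding can be verified straight from the SB database. A hedged one-liner wrapped in Python; ovn-sbctl must be able to reach this deployment's SB endpoint:

import subprocess

lport = 'df823be2-d3ae-4d3c-b70a-37db097fc356'
# 'find' prints the Port_Binding row; its chassis column should now name this host
subprocess.run(['ovn-sbctl', 'find', 'Port_Binding', f'logical_port={lport}'],
               check=True)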
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.389 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.392 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.394 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[40a0f19b-33c4-4fea-8daf-92559014b0e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.395 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd042f24f-c1 in ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
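[note] Provisioning the datapath means building a veth pair with one end (tapd042f24f-c0) left in the root namespace for OVS and the other (tapd042f24f-c1) pushed into the ovnmeta- namespace. The same plumbing in miniature with pyroute2 (interface and namespace names reused from the log; a sketch of the technique, not the agent's code path, and it needs root):

from pyroute2 import IPRoute, netns

ns = 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596'
netns.create(ns)   # the agent had already created this namespace at this point
ip = IPRoute()
ip.link('add', ifname='tapd042f24f-c0', kind='veth', peer='tapd042f24f-c1')
idx = ip.link_lookup(ifname='tapd042f24f-c1')[0]
ip.link('set', index=idx, net_ns_fd=ns)   # move the inner end into the namespace
ip.close()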
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.398 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd042f24f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.398 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fd080d-da9f-43aa-a714-47eebf3ecb97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.400 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1b3bba29-c130-4f96-8554-35c8635f7167]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 systemd-udevd[372426]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.411 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[5048e442-9755-47da-80db-9489e72119a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 systemd-machined[216673]: New machine qemu-100-instance-000000b9.
Nov 29 03:39:14 np0005539550 NetworkManager[49039]: <info>  [1764405554.4217] device (tapdf823be2-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:39:14 np0005539550 NetworkManager[49039]: <info>  [1764405554.4235] device (tapdf823be2-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:39:14 np0005539550 systemd[1]: Started Virtual Machine qemu-100-instance-000000b9.
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.435 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbcf0c1-a7ee-4133-8191-76c5f5de1d98]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.476 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7e076f6b-e786-4f2c-89d4-0a76545965f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.482 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cdb79f-1fbf-4efe-8008-8abd2e18afcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 NetworkManager[49039]: <info>  [1764405554.4839] manager: (tapd042f24f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/374)
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.520 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4001c16e-5678-4e49-bae9-83ba4e502566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.523 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[64987db9-97a3-4823-8232-5d1a58976da9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 NetworkManager[49039]: <info>  [1764405554.5528] device (tapd042f24f-c0): carrier: link connected
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.558 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8176e5-faa9-4a3d-a06f-cfbaa0187799]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.576 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[482c5ab4-0ffb-4695-a560-aba4565c928f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 860221, 'reachable_time': 41415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372458, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.591 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4907be-f131-417b-9200-2029112ad6ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:6732'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 860221, 'tstamp': 860221}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372459, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.607 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0af2b747-80f5-4fc8-a759-7eb9d9a7015d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 247], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 860221, 'reachable_time': 41415, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372460, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
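[note] The two RTM_NEWLINK dumps above are pyroute2 netlink messages echoed back through the privsep daemon: nested attrs carrying the interface name, MAC, carrier and per-family sysctl state for tapd042f24f-c1 inside the new namespace. Reading the same fields directly takes a few lines (illustrative):

from pyroute2 import IPRoute

ip = IPRoute()
for msg in ip.get_links():
    # each message mirrors the dict in the log; get_attr walks the attrs list
    print(msg.get_attr('IFLA_IFNAME'),
          msg.get_attr('IFLA_ADDRESS'),
          msg.get_attr('IFLA_OPERSTATE'),
          msg.get_attr('IFLA_MTU'))
ip.close()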
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.633 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cd10a31b-aa2f-4ccd-adb1-b4d372949f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.688 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dc151b31-6884-4e65-a3aa-29cf3a2e74f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.690 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.690 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.690 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd042f24f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.692 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539550 NetworkManager[49039]: <info>  [1764405554.6926] manager: (tapd042f24f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Nov 29 03:39:14 np0005539550 kernel: tapd042f24f-c0: entered promiscuous mode
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.694 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd042f24f-c0, col_values=(('external_ids', {'iface-id': 'e44bf17c-ebb7-4e62-850a-20ff20a74960'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.697 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.697 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
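[note] The ENOENT on the .pid.haproxy file is expected on first provisioning: before spawning a proxy, the agent peeks at the pid file to see whether one is already running and treats a missing file as "not running". The pattern reduced to a helper (illustrative, not neutron's exact code):

def get_pid_from_file(path):
    # mirrors get_value_from_file: a missing or unparsable file simply means
    # "no process yet", which is what the DEBUG line above reports
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

pid = get_pid_from_file(
    '/var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy')
print(pid)   # None here, so the agent proceeds to start haproxy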
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.698 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbfcf1a-440b-4585-8c19-7f0e084d9a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.699 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
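[note] The rendered haproxy config above is the entire metadata path for this network: bind 169.254.169.254:80 inside the ovnmeta- namespace, forward to the Unix-socket backend at /var/lib/neutron/metadata_proxy, and stamp X-OVN-Network-ID so the metadata agent can tell which network the request came from. From a guest on this network the proxy answers at the usual link-local address (sketch; run inside the guest, not on the hypervisor):

import requests

# Standard OpenStack metadata endpoint, served through the proxy above.
r = requests.get('http://169.254.169.254/openstack/latest/meta_data.json',
                 timeout=5)
print(r.json().get('uuid'))   # should echo the instance UUID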
Nov 29 03:39:14 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:14.700 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'env', 'PROCESS_TAG=haproxy-d042f24f-c2f0-4843-9727-cc3720586596', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d042f24f-c2f0-4843-9727-cc3720586596.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:39:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:14Z|00854|binding|INFO|Releasing lport e44bf17c-ebb7-4e62-850a-20ff20a74960 from this chassis (sb_readonly=0)
Nov 29 03:39:14 np0005539550 nova_compute[257631]: 2025-11-29 08:39:14.716 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:15 np0005539550 podman[372493]: 2025-11-29 08:39:15.036622784 +0000 UTC m=+0.023476643 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:39:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:15 np0005539550 nova_compute[257631]: 2025-11-29 08:39:15.328 257641 DEBUG nova.compute.manager [req-02b3d1b4-977c-49ee-9d38-69bb8e3d61db req-fba8fa0f-37e4-457f-b57b-329ae542f507 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:15 np0005539550 nova_compute[257631]: 2025-11-29 08:39:15.330 257641 DEBUG oslo_concurrency.lockutils [req-02b3d1b4-977c-49ee-9d38-69bb8e3d61db req-fba8fa0f-37e4-457f-b57b-329ae542f507 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:15 np0005539550 nova_compute[257631]: 2025-11-29 08:39:15.330 257641 DEBUG oslo_concurrency.lockutils [req-02b3d1b4-977c-49ee-9d38-69bb8e3d61db req-fba8fa0f-37e4-457f-b57b-329ae542f507 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:15 np0005539550 nova_compute[257631]: 2025-11-29 08:39:15.331 257641 DEBUG oslo_concurrency.lockutils [req-02b3d1b4-977c-49ee-9d38-69bb8e3d61db req-fba8fa0f-37e4-457f-b57b-329ae542f507 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:15 np0005539550 nova_compute[257631]: 2025-11-29 08:39:15.331 257641 DEBUG nova.compute.manager [req-02b3d1b4-977c-49ee-9d38-69bb8e3d61db req-fba8fa0f-37e4-457f-b57b-329ae542f507 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Processing event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:39:15 np0005539550 podman[372493]: 2025-11-29 08:39:15.523379866 +0000 UTC m=+0.510233705 container create e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:39:15 np0005539550 systemd[1]: Started libpod-conmon-e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c.scope.
Nov 29 03:39:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2abe034371083762fd88bce2469308cfc13100b13d68e0dd7624f95a7d9312ae/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:15 np0005539550 podman[372493]: 2025-11-29 08:39:15.615914491 +0000 UTC m=+0.602768330 container init e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:39:15 np0005539550 podman[372493]: 2025-11-29 08:39:15.624788051 +0000 UTC m=+0.611641890 container start e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 03:39:15 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[372508]: [NOTICE]   (372512) : New worker (372514) forked
Nov 29 03:39:15 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[372508]: [NOTICE]   (372512) : Loading success.
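[note] podman pulled the neutron-metadata-agent-ovn image, then created, initialized and started the per-network haproxy side-car, and haproxy forked its worker ("Loading success."). A quick liveness check against that container (standard podman flags; the container name is taken from the log):

import subprocess

name = 'neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596'
out = subprocess.run(
    ['podman', 'inspect', name, '--format', '{{.State.Status}}'],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())   # expected: running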
Nov 29 03:39:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:39:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:16.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:39:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:16.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 350 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 6.1 MiB/s wr, 185 op/s
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.445 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.446 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405556.4456158, 78534f04-30a6-4f58-9768-091f48082c9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.446 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.448 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.452 257641 INFO nova.virt.libvirt.driver [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance spawned successfully.#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.452 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.809 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.815 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
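[note] "DB power_state: 0, VM power_state: 1" compares Nova's stored state with what libvirt just reported. The numeric codes come from nova/compute/power_state.py; a lookup table makes sync lines like this one readable (values mirror Nova's published constants):

POWER_STATE = {
    0: 'NOSTATE',    # DB not updated yet: the instance is still building
    1: 'RUNNING',    # the hypervisor already reports the domain running
    3: 'PAUSED',
    4: 'SHUTDOWN',
    6: 'CRASHED',
    7: 'SUSPENDED',
}

db_state, vm_state = 0, 1   # straight from the log line above
print(f'DB={POWER_STATE[db_state]}, hypervisor={POWER_STATE[vm_state]}')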
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.819 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.819 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.820 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.820 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.821 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.821 257641 DEBUG nova.virt.libvirt.driver [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.847 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.848 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405556.4458897, 78534f04-30a6-4f58-9768-091f48082c9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.848 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.882 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.887 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405556.4477935, 78534f04-30a6-4f58-9768-091f48082c9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.887 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.900 257641 INFO nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Took 9.42 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.900 257641 DEBUG nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.913 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.918 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.942 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.981 257641 INFO nova.compute.manager [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Took 10.44 seconds to build instance.#033[00m
Nov 29 03:39:16 np0005539550 nova_compute[257631]: 2025-11-29 08:39:16.997 257641 DEBUG oslo_concurrency.lockutils [None req-f495d652-a6a4-445a-902b-22d143657d26 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:17 np0005539550 nova_compute[257631]: 2025-11-29 08:39:17.412 257641 DEBUG nova.compute.manager [req-5e085283-19d7-4a06-98d1-d196d7c42e7b req-b231d5d8-1293-4d67-8a31-f7eb7ae2485c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:17 np0005539550 nova_compute[257631]: 2025-11-29 08:39:17.412 257641 DEBUG oslo_concurrency.lockutils [req-5e085283-19d7-4a06-98d1-d196d7c42e7b req-b231d5d8-1293-4d67-8a31-f7eb7ae2485c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:17 np0005539550 nova_compute[257631]: 2025-11-29 08:39:17.412 257641 DEBUG oslo_concurrency.lockutils [req-5e085283-19d7-4a06-98d1-d196d7c42e7b req-b231d5d8-1293-4d67-8a31-f7eb7ae2485c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:17 np0005539550 nova_compute[257631]: 2025-11-29 08:39:17.412 257641 DEBUG oslo_concurrency.lockutils [req-5e085283-19d7-4a06-98d1-d196d7c42e7b req-b231d5d8-1293-4d67-8a31-f7eb7ae2485c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:17 np0005539550 nova_compute[257631]: 2025-11-29 08:39:17.413 257641 DEBUG nova.compute.manager [req-5e085283-19d7-4a06-98d1-d196d7c42e7b req-b231d5d8-1293-4d67-8a31-f7eb7ae2485c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] No waiting events found dispatching network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:17 np0005539550 nova_compute[257631]: 2025-11-29 08:39:17.413 257641 WARNING nova.compute.manager [req-5e085283-19d7-4a06-98d1-d196d7c42e7b req-b231d5d8-1293-4d67-8a31-f7eb7ae2485c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received unexpected event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 for instance with vm_state active and task_state None.#033[00m
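[note] This WARNING is benign: Neutron sends network-vif-plugged once when the VIF is plugged and again when the port goes fully active. The first delivery was consumed by the spawn path's waiter ("Instance event wait completed in 0 seconds", above), so the second finds no one waiting and is merely logged. The waiter pattern in miniature (a simplified stand-in for prepare/pop/wait_for_instance_event, not Nova's code):

import threading

_waiters = {}   # (event_name, tag) -> threading.Event

def prepare(key):
    ev = threading.Event()
    _waiters[key] = ev
    return ev

def pop(key):
    ev = _waiters.pop(key, None)
    if ev is None:
        print('unexpected event', key)   # -> the WARNING above
    else:
        ev.set()

key = ('network-vif-plugged', 'df823be2-d3ae-4d3c-b70a-37db097fc356')
ev = prepare(key)      # spawn registers before plugging the VIF
pop(key)               # first notification: sets the event
ev.wait(timeout=300)   # spawn unblocks immediately
pop(key)               # late duplicate: no waiter left -> 'unexpected'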
Nov 29 03:39:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:18.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 350 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.7 MiB/s wr, 173 op/s
Nov 29 03:39:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:18.970 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:18.971 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:18.972 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:19 np0005539550 nova_compute[257631]: 2025-11-29 08:39:19.255 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:20.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:20.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005032214021496371 of space, bias 1.0, pg target 1.5096642064489112 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164868032291085 of space, bias 1.0, pg target 0.6472955416550344 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003806103724641727 of space, bias 1.0, pg target 1.1380250136678765 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
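[note] The autoscaler arithmetic above is reproducible: each pool's "pg target" is its share of used space times the cluster-wide PG budget, times the pool's bias. Assuming the default mon_target_pg_per_osd of 100 and three OSDs in this cluster (both assumptions; neither is stated in the log), the budget is 300 and the logged figures fall out directly:

# Back-of-envelope check of the pg_autoscaler targets logged above.
def raw_pg_target(usage_fraction, bias, n_osds=3, pg_per_osd=100):
    return usage_fraction * n_osds * pg_per_osd * bias

print(raw_pg_target(0.005032214021496371, 1.0))    # 'vms'  -> ~1.5097, as logged
print(raw_pg_target(2.0538165363856318e-05, 1.0))  # '.mgr' -> ~0.00616, as logged

The "quantized to" value then snaps to a power of two, but the autoscaler only moves pg_num when the target differs from the current value by a large factor, which is why every pool here stays put.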
Nov 29 03:39:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 375 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.0 MiB/s wr, 207 op/s
Nov 29 03:39:21 np0005539550 nova_compute[257631]: 2025-11-29 08:39:21.019 257641 DEBUG nova.compute.manager [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:21 np0005539550 nova_compute[257631]: 2025-11-29 08:39:21.020 257641 DEBUG nova.compute.manager [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing instance network info cache due to event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:21 np0005539550 nova_compute[257631]: 2025-11-29 08:39:21.020 257641 DEBUG oslo_concurrency.lockutils [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:21 np0005539550 nova_compute[257631]: 2025-11-29 08:39:21.020 257641 DEBUG oslo_concurrency.lockutils [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:21 np0005539550 nova_compute[257631]: 2025-11-29 08:39:21.020 257641 DEBUG nova.network.neutron [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:21 np0005539550 nova_compute[257631]: 2025-11-29 08:39:21.608 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:22.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:22.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 478 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 8.0 MiB/s wr, 321 op/s
Nov 29 03:39:22 np0005539550 nova_compute[257631]: 2025-11-29 08:39:22.563 257641 DEBUG nova.network.neutron [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updated VIF entry in instance network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:22 np0005539550 nova_compute[257631]: 2025-11-29 08:39:22.564 257641 DEBUG nova.network.neutron [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:22 np0005539550 nova_compute[257631]: 2025-11-29 08:39:22.596 257641 DEBUG oslo_concurrency.lockutils [req-4c1cddd9-9181-4473-a0b3-6a5556f780fd req-a08c6b5d-ecf3-421d-89a1-d36f34fa8c2a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
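[annotation] The Acquiring/Acquired/Releasing triplet around "refresh_cache-<instance uuid>" is oslo.concurrency's lock pattern serializing writers of the instance network info cache. A sketch of that pattern, with illustrative function names (only the lock-name convention is taken from the log):

    # Sketch of the oslo.concurrency pattern behind the lock lines above.
    # refresh_fn is a hypothetical callable standing in for the cache refresh.
    from oslo_concurrency import lockutils

    def refresh_network_cache(instance_uuid, refresh_fn):
        # Only one worker may rebuild/write the cache at a time.
        with lockutils.lock('refresh_cache-' + instance_uuid):
            return refresh_fn(instance_uuid)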
Nov 29 03:39:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:24.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.258 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.5 MiB/s wr, 279 op/s
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.641 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.642 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.752 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.969 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.970 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.978 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:39:24 np0005539550 nova_compute[257631]: 2025-11-29 08:39:24.978 257641 INFO nova.compute.claims [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:39:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:25 np0005539550 nova_compute[257631]: 2025-11-29 08:39:25.459 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793967733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:25 np0005539550 nova_compute[257631]: 2025-11-29 08:39:25.943 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
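[annotation] During the resource claim, nova shells out to "ceph df --format=json" to size the RBD-backed DISK_GB inventory reported on the next lines (20 GiB total, 1 GiB reserved). A sketch of that call and the fields such a caller reads; the JSON field names ("stats", "total_bytes", "total_avail_bytes") follow the usual ceph df output layout and are stated as an assumption, not read from this log:

    # Sketch: deriving cluster capacity from "ceph df --format=json",
    # using the exact command line logged above.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]          # assumed field names
    total_gb = stats["total_bytes"] // (1024 ** 3)
    avail_gb = stats["total_avail_bytes"] // (1024 ** 3)
    print(total_gb, avail_gb)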
Nov 29 03:39:25 np0005539550 nova_compute[257631]: 2025-11-29 08:39:25.951 257641 DEBUG nova.compute.provider_tree [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:25 np0005539550 nova_compute[257631]: 2025-11-29 08:39:25.972 257641 DEBUG nova.scheduler.client.report [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:25 np0005539550 nova_compute[257631]: 2025-11-29 08:39:25.998 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:25 np0005539550 nova_compute[257631]: 2025-11-29 08:39:25.999 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.057 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.057 257641 DEBUG nova.network.neutron [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.083 257641 INFO nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.099 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:39:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:26.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.144 257641 INFO nova.virt.block_device [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Booting with volume bd3bcc0c-33d5-4f56-93b8-0804e2446b99 at /dev/vda#033[00m
Nov 29 03:39:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:26.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.251 257641 DEBUG nova.policy [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'facf4db8501041ab9628ff9f5684c992', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.331 257641 DEBUG os_brick.utils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.334 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:26 np0005539550 podman[372595]: 2025-11-29 08:39:26.339321642 +0000 UTC m=+0.071242628 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.350 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.350 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8377dc-45a7-42b4-9add-9e1c439bd37b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.351 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:26 np0005539550 podman[372596]: 2025-11-29 08:39:26.357646936 +0000 UTC m=+0.082349663 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:39:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 7.5 MiB/s wr, 264 op/s
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.368 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.368 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f7e502-a2da-4d6b-9d6e-be7cb79f84e9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.370 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.383 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.384 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[a1722a63-02e2-4bfc-8685-4dd6890c81d9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.385 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a7cb23-c92e-4d38-a138-76b7588a525f]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.385 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.427 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.430 257641 DEBUG os_brick.initiator.connectors.lightos [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.430 257641 DEBUG os_brick.initiator.connectors.lightos [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.430 257641 DEBUG os_brick.initiator.connectors.lightos [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.430 257641 DEBUG os_brick.utils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (98ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
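[annotation] The "==> get_connector_properties" / "<== get_connector_properties" pair is os-brick's trace decorator around its public entry point; the intervening privsep commands (multipathd show status, reading initiatorname.iscsi, findmnt) gather the iSCSI initiator, NVMe host NQN and multipath state returned at 08:39:26.430. The LIGHTOS ECONNREFUSED lines are a non-fatal discovery-client probe. A sketch of the traced call, with argument values copied from the log line at 08:39:26.331:

    # Sketch of the os-brick call traced above; a privileged context
    # (rootwrap/privsep) is assumed to be available, as in nova_compute.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    print(props['initiator'], props['nqn'])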
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.431 257641 DEBUG nova.virt.block_device [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating existing volume attachment record: 59489c41-ae26-4b9a-acce-f2bc534e63ad _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.650 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.852 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:26.852 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:26.853 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:39:26 np0005539550 nova_compute[257631]: 2025-11-29 08:39:26.992 257641 DEBUG nova.network.neutron [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Successfully created port: d3028052-76d1-49d3-9c2d-9126edc45935 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.450 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.451 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.452 257641 INFO nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Creating image(s)#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.452 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.453 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Ensure instance console log exists: /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.453 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.454 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.454 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.954 257641 DEBUG nova.network.neutron [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Successfully updated port: d3028052-76d1-49d3-9c2d-9126edc45935 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.972 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.973 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:27 np0005539550 nova_compute[257631]: 2025-11-29 08:39:27.973 257641 DEBUG nova.network.neutron [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:28 np0005539550 nova_compute[257631]: 2025-11-29 08:39:28.106 257641 DEBUG nova.compute.manager [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:28 np0005539550 nova_compute[257631]: 2025-11-29 08:39:28.106 257641 DEBUG nova.compute.manager [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing instance network info cache due to event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:28 np0005539550 nova_compute[257631]: 2025-11-29 08:39:28.106 257641 DEBUG oslo_concurrency.lockutils [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:28.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:28 np0005539550 nova_compute[257631]: 2025-11-29 08:39:28.201 257641 DEBUG nova.network.neutron [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:39:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:28.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.9 MiB/s wr, 202 op/s
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.182 257641 DEBUG nova.network.neutron [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.205 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.206 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Instance network_info: |[{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.206 257641 DEBUG oslo_concurrency.lockutils [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.206 257641 DEBUG nova.network.neutron [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.210 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Start _get_guest_xml network_info=[{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '59489c41-ae26-4b9a-acce-f2bc534e63ad', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bd3bcc0c-33d5-4f56-93b8-0804e2446b99', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bd3bcc0c-33d5-4f56-93b8-0804e2446b99', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'db9a33b0-f745-4457-b7a6-d22017777a85', 'attached_at': '', 'detached_at': '', 'volume_id': 'bd3bcc0c-33d5-4f56-93b8-0804e2446b99', 'serial': 'bd3bcc0c-33d5-4f56-93b8-0804e2446b99'}, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m

Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.215 257641 WARNING nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.225 257641 DEBUG nova.virt.libvirt.host [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.225 257641 DEBUG nova.virt.libvirt.host [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.232 257641 DEBUG nova.virt.libvirt.host [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.232 257641 DEBUG nova.virt.libvirt.host [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.234 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.234 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.235 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.235 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.235 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.235 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.235 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.236 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.236 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.236 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.236 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.237 257641 DEBUG nova.virt.hardware [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
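[annotation] The topology walk above (limits 65536:65536:65536, 1 vCPU, a single candidate sorted to 1:1:1) is an enumeration of the sockets x cores x threads factorizations of the vCPU count. A standalone sketch of that enumeration (not nova's actual implementation):

    # Standalone sketch, not nova's code: enumerate sockets*cores*threads
    # factorizations of a vCPU count under a per-dimension limit.
    def possible_topologies(vcpus, max_each=65536):
        for sockets in range(1, min(vcpus, max_each) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, max_each) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= max_each:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log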
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.269 257641 DEBUG nova.storage.rbd_utils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image db9a33b0-f745-4457-b7a6-d22017777a85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.273 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.304 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 312a9b7d-d537-4688-9071-5eb8a701c13d does not exist
Nov 29 03:39:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e2c7b222-cdf0-43d7-9ba5-f4414604af82 does not exist
Nov 29 03:39:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 16174c90-554d-4b29-b79f-72b02190c46f does not exist
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:39:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843126663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.800 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
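[annotation] "ceph mon dump --format=json" supplies the monitor address list that becomes the hosts/ports fields of the boot volume's RBD connection_info above (all three monitors on 6789). A parsing sketch; the JSON field names ("mons", "public_addr") follow the usual mon dump layout and are an assumption here, not taken from this log:

    # Sketch: extracting monitor hosts/ports from "ceph mon dump --format=json",
    # using the exact command line logged above. Field names are assumed.
    import json, subprocess

    dump = json.loads(subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]))
    hosts, ports = [], []
    for mon in dump["mons"]:
        # public_addr looks like "192.168.122.100:6789/0"
        addr, port = mon["public_addr"].rsplit("/", 1)[0].rsplit(":", 1)
        hosts.append(addr)
        ports.append(port)
    print(hosts, ports)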
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.829 257641 DEBUG nova.virt.libvirt.vif [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1460949178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1460949178',id=187,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-nbecgxbn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:26Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=db9a33b0-f745-4457-b7a6-d22017777a85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.829 257641 DEBUG nova.network.os_vif_util [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.831 257641 DEBUG nova.network.os_vif_util [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:60:ba,bridge_name='br-int',has_traffic_filtering=True,id=d3028052-76d1-49d3-9c2d-9126edc45935,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3028052-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
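The two entries above show nova_to_osvif_vif translating Nova's legacy VIF dict into an os-vif VIFOpenVSwitch object. A minimal sketch of building and plugging the same object directly with the os_vif library, using only the field values visible in the log repr; the reduced Network object and the interface_id on the port profile are assumptions about which attributes the plug step needs:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    # Values taken from the VIFOpenVSwitch repr logged above.
    net = network.Network(id='c114bc23-cd62-4198-a95d-5595953a88bd',
                          bridge='br-int', mtu=1442)
    port = vif.VIFOpenVSwitch(
        id='d3028052-76d1-49d3-9c2d-9126edc45935',
        address='fa:16:3e:52:60:ba',
        bridge_name='br-int',
        vif_name='tapd3028052-76',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=net,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='d3028052-76d1-49d3-9c2d-9126edc45935'))
    inst = instance_info.InstanceInfo(
        uuid='db9a33b0-f745-4457-b7a6-d22017777a85',
        name='tempest-TestInstancesWithCinderVolumes-server-1460949178')
    os_vif.plug(port, inst)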
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.832 257641 DEBUG nova.objects.instance [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.852 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <uuid>db9a33b0-f745-4457-b7a6-d22017777a85</uuid>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <name>instance-000000bb</name>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-1460949178</nova:name>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:39:29</nova:creationTime>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:user uuid="facf4db8501041ab9628ff9f5684c992">tempest-TestInstancesWithCinderVolumes-911868990-project-member</nova:user>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:project uuid="62ca01275fe34ea0af31d00b34d6d9a5">tempest-TestInstancesWithCinderVolumes-911868990</nova:project>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <nova:port uuid="d3028052-76d1-49d3-9c2d-9126edc45935">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <entry name="serial">db9a33b0-f745-4457-b7a6-d22017777a85</entry>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <entry name="uuid">db9a33b0-f745-4457-b7a6-d22017777a85</entry>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/db9a33b0-f745-4457-b7a6-d22017777a85_disk.config">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-bd3bcc0c-33d5-4f56-93b8-0804e2446b99">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <serial>bd3bcc0c-33d5-4f56-93b8-0804e2446b99</serial>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:52:60:ba"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <target dev="tapd3028052-76"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/console.log" append="off"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:39:29 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:39:29 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:39:29 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:39:29 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
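The domain XML dumped above is what the driver hands to libvirt next. A sketch of replaying the same step by hand with libvirt-python, assuming the XML has been saved to a local file (instance-000000bb.xml is a hypothetical name):

    import libvirt

    with open('instance-000000bb.xml') as f:  # hypothetical dump of the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # register the domain with libvirtd
    dom.create()               # boot it; equivalent to 'virsh start instance-000000bb'
    conn.close()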
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.854 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Preparing to wait for external event network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.855 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.855 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.855 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.856 257641 DEBUG nova.virt.libvirt.vif [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:39:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1460949178',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1460949178',id=187,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-nbecgxbn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:39:26Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=db9a33b0-f745-4457-b7a6-d22017777a85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.856 257641 DEBUG nova.network.os_vif_util [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.857 257641 DEBUG nova.network.os_vif_util [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:60:ba,bridge_name='br-int',has_traffic_filtering=True,id=d3028052-76d1-49d3-9c2d-9126edc45935,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3028052-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.857 257641 DEBUG os_vif [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:60:ba,bridge_name='br-int',has_traffic_filtering=True,id=d3028052-76d1-49d3-9c2d-9126edc45935,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3028052-76') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.858 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.858 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.859 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.863 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.863 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3028052-76, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.864 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3028052-76, col_values=(('external_ids', {'iface-id': 'd3028052-76d1-49d3-9c2d-9126edc45935', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:60:ba', 'vm-uuid': 'db9a33b0-f745-4457-b7a6-d22017777a85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.912 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539550 NetworkManager[49039]: <info>  [1764405569.9131] manager: (tapd3028052-76): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/376)
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.915 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.920 257641 INFO os_vif [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:60:ba,bridge_name='br-int',has_traffic_filtering=True,id=d3028052-76d1-49d3-9c2d-9126edc45935,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3028052-76')#033[00m
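The plug that just succeeded is the two ovsdbapp transactions logged above: an idempotent AddBridgeCommand (which caused no change, since br-int already existed) followed by AddPortCommand plus a DbSetCommand tagging the new Interface row so ovn-controller can match iface-id against the Neutron port and bind it. A sketch of issuing the same transactions with ovsdbapp; the OVSDB socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Txn 1: idempotent bridge creation (a no-op here, br-int exists).
    api.add_br('br-int', may_exist=True,
               datapath_type='system').execute(check_error=True)

    # Txn 2: add the tap port and set the external_ids OVN keys on its
    # Interface row, mirroring the DbSetCommand in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd3028052-76', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd3028052-76',
            ('external_ids', {
                'iface-id': 'd3028052-76d1-49d3-9c2d-9126edc45935',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:52:60:ba',
                'vm-uuid': 'db9a33b0-f745-4457-b7a6-d22017777a85'})))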
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.985 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.985 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.985 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:52:60:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:39:29 np0005539550 nova_compute[257631]: 2025-11-29 08:39:29.986 257641 INFO nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Using config drive#033[00m
Nov 29 03:39:30 np0005539550 nova_compute[257631]: 2025-11-29 08:39:30.014 257641 DEBUG nova.storage.rbd_utils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image db9a33b0-f745-4457-b7a6-d22017777a85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:30 np0005539550 podman[373004]: 2025-11-29 08:39:30.02178231 +0000 UTC m=+0.038119226 container create 6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:39:30 np0005539550 systemd[1]: Started libpod-conmon-6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f.scope.
Nov 29 03:39:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:30 np0005539550 podman[373004]: 2025-11-29 08:39:30.004294266 +0000 UTC m=+0.020631202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:30.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:30 np0005539550 podman[373004]: 2025-11-29 08:39:30.112845989 +0000 UTC m=+0.129182915 container init 6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:39:30 np0005539550 podman[373004]: 2025-11-29 08:39:30.120326644 +0000 UTC m=+0.136663560 container start 6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:39:30 np0005539550 podman[373004]: 2025-11-29 08:39:30.126165949 +0000 UTC m=+0.142502865 container attach 6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:39:30 np0005539550 youthful_hamilton[373037]: 167 167
Nov 29 03:39:30 np0005539550 systemd[1]: libpod-6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f.scope: Deactivated successfully.
Nov 29 03:39:30 np0005539550 podman[373004]: 2025-11-29 08:39:30.128063996 +0000 UTC m=+0.144400912 container died 6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:39:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ff2c06a37cf4771f505de8ccef80d332ebfd8caec69ea46f7548ef270b09e030-merged.mount: Deactivated successfully.
Nov 29 03:39:30 np0005539550 podman[373004]: 2025-11-29 08:39:30.167197937 +0000 UTC m=+0.183534853 container remove 6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_hamilton, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:39:30 np0005539550 systemd[1]: libpod-conmon-6774dc5032a862d3705d270dff1c158a67d837899f60222e423bdf8a0d936b4f.scope: Deactivated successfully.
Nov 29 03:39:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:30.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 507 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.9 MiB/s wr, 202 op/s
Nov 29 03:39:30 np0005539550 podman[373061]: 2025-11-29 08:39:30.329341538 +0000 UTC m=+0.024166650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:30 np0005539550 podman[373061]: 2025-11-29 08:39:30.720997731 +0000 UTC m=+0.415822813 container create 4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:39:30 np0005539550 systemd[1]: Started libpod-conmon-4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2.scope.
Nov 29 03:39:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df80645eb254d418cdbab252d7693eb0227828cb183ce7aeed5a2d7beca101bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df80645eb254d418cdbab252d7693eb0227828cb183ce7aeed5a2d7beca101bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df80645eb254d418cdbab252d7693eb0227828cb183ce7aeed5a2d7beca101bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df80645eb254d418cdbab252d7693eb0227828cb183ce7aeed5a2d7beca101bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df80645eb254d418cdbab252d7693eb0227828cb183ce7aeed5a2d7beca101bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:30 np0005539550 podman[373061]: 2025-11-29 08:39:30.872469808 +0000 UTC m=+0.567294890 container init 4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:39:30 np0005539550 podman[373061]: 2025-11-29 08:39:30.879548154 +0000 UTC m=+0.574373236 container start 4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mclean, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:39:30 np0005539550 podman[373061]: 2025-11-29 08:39:30.974048677 +0000 UTC m=+0.668873789 container attach 4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mclean, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:39:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:31Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:b9:6e 10.100.0.14
Nov 29 03:39:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:31Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:b9:6e 10.100.0.14
Nov 29 03:39:31 np0005539550 friendly_mclean[373078]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:39:31 np0005539550 friendly_mclean[373078]: --> relative data size: 1.0
Nov 29 03:39:31 np0005539550 friendly_mclean[373078]: --> All data devices are unavailable
Nov 29 03:39:31 np0005539550 systemd[1]: libpod-4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2.scope: Deactivated successfully.
Nov 29 03:39:31 np0005539550 podman[373093]: 2025-11-29 08:39:31.804929414 +0000 UTC m=+0.030673892 container died 4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:39:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-df80645eb254d418cdbab252d7693eb0227828cb183ce7aeed5a2d7beca101bf-merged.mount: Deactivated successfully.
Nov 29 03:39:31 np0005539550 podman[373093]: 2025-11-29 08:39:31.856419301 +0000 UTC m=+0.082163759 container remove 4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:39:31 np0005539550 systemd[1]: libpod-conmon-4fc0645b5ed9a080b0e10b0e1ada7dfb81d5542c2b76db327c0abace5767f5f2.scope: Deactivated successfully.
Nov 29 03:39:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:39:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:32.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:39:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:39:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:32.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:39:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 533 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.3 MiB/s wr, 195 op/s
Nov 29 03:39:32 np0005539550 podman[373248]: 2025-11-29 08:39:32.48980622 +0000 UTC m=+0.044755371 container create 3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_faraday, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:39:32 np0005539550 systemd[1]: Started libpod-conmon-3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988.scope.
Nov 29 03:39:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:32 np0005539550 podman[373248]: 2025-11-29 08:39:32.564098423 +0000 UTC m=+0.119047584 container init 3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_faraday, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:39:32 np0005539550 podman[373248]: 2025-11-29 08:39:32.471169188 +0000 UTC m=+0.026118359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:32 np0005539550 podman[373248]: 2025-11-29 08:39:32.572325447 +0000 UTC m=+0.127274598 container start 3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_faraday, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:39:32 np0005539550 podman[373248]: 2025-11-29 08:39:32.576336976 +0000 UTC m=+0.131286157 container attach 3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 03:39:32 np0005539550 funny_faraday[373265]: 167 167
Nov 29 03:39:32 np0005539550 systemd[1]: libpod-3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988.scope: Deactivated successfully.
Nov 29 03:39:32 np0005539550 podman[373248]: 2025-11-29 08:39:32.578846769 +0000 UTC m=+0.133795920 container died 3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_faraday, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:39:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2552063989c2b87daa870ee7bbd35544e4b3245019a88f763ede55d7c171a59c-merged.mount: Deactivated successfully.
Nov 29 03:39:32 np0005539550 podman[373248]: 2025-11-29 08:39:32.615357924 +0000 UTC m=+0.170307075 container remove 3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:39:32 np0005539550 systemd[1]: libpod-conmon-3582d83abfd706b1c7dae43b32bfeb5f6f42f9696675e645ba4ac22c91d13988.scope: Deactivated successfully.
Nov 29 03:39:32 np0005539550 podman[373289]: 2025-11-29 08:39:32.795659975 +0000 UTC m=+0.046823513 container create 1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jemison, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:39:32 np0005539550 systemd[1]: Started libpod-conmon-1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38.scope.
Nov 29 03:39:32 np0005539550 podman[373289]: 2025-11-29 08:39:32.77169208 +0000 UTC m=+0.022855658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f075a728b25e371fa942574e446371b24bce0783df0cbfda6629bb5c99326f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f075a728b25e371fa942574e446371b24bce0783df0cbfda6629bb5c99326f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f075a728b25e371fa942574e446371b24bce0783df0cbfda6629bb5c99326f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f075a728b25e371fa942574e446371b24bce0783df0cbfda6629bb5c99326f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:32 np0005539550 podman[373289]: 2025-11-29 08:39:32.909999951 +0000 UTC m=+0.161163529 container init 1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Nov 29 03:39:32 np0005539550 podman[373289]: 2025-11-29 08:39:32.917956288 +0000 UTC m=+0.169119846 container start 1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:39:32 np0005539550 podman[373289]: 2025-11-29 08:39:32.922329467 +0000 UTC m=+0.173493035 container attach 1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.136 257641 INFO nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Creating config drive at /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/disk.config#033[00m
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.142 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps4jr1u71 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.220 257641 DEBUG nova.network.neutron [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated VIF entry in instance network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.221 257641 DEBUG nova.network.neutron [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.239 257641 DEBUG oslo_concurrency.lockutils [req-71eb86b8-b055-47f3-a9ca-4f18ea8e1f44 req-e26c0ce8-33ec-49e3-886d-10c095e5dfe8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
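The instance_info_cache update two lines above carries the whole VIF model as JSON. A minimal Python sketch of lifting the fixed IPs back out of an entry shaped like that one; the structure is abbreviated to the fields used, and this is not nova's own accessor:

    network_info = [{
        "id": "d3028052-76d1-49d3-9c2d-9126edc45935",
        "address": "fa:16:3e:52:60:ba",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.4"}]}]},
    }]

    fixed_ips = [ip["address"]
                 for vif in network_info
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(fixed_ips)  # ['10.100.0.4']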
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.287 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps4jr1u71" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
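The two processutils lines above show the config drive being built: nova shells out to mkisofs with Joliet (-J) and Rock Ridge (-r) extensions and the config-2 volume label that config-drive consumers probe for, then checks the zero exit code. A runnable sketch of the same call, assuming mkisofs is installed; the output path and payload directory are illustrative and the -publisher string is omitted:

    import pathlib
    import subprocess
    import tempfile

    def build_config_drive(output_iso: str, content_dir: str) -> None:
        # Flags mirror the logged command line.
        cmd = [
            "/usr/bin/mkisofs",
            "-o", output_iso,
            "-ldots", "-allow-lowercase", "-allow-multidot",
            "-l", "-quiet", "-J", "-r",
            "-V", "config-2",   # the label cloud-init looks for
            content_dir,
        ]
        subprocess.run(cmd, check=True)  # CalledProcessError on rc != 0

    with tempfile.TemporaryDirectory() as tmp:
        pathlib.Path(tmp, "meta_data.json").write_text("{}")
        build_config_drive("/tmp/disk.config", tmp)

On distributions that ship genisoimage instead of mkisofs, the same flags apply.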
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.321 257641 DEBUG nova.storage.rbd_utils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] rbd image db9a33b0-f745-4457-b7a6-d22017777a85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:39:33 np0005539550 nova_compute[257631]: 2025-11-29 08:39:33.325 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/disk.config db9a33b0-f745-4457-b7a6-d22017777a85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]: {
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:    "0": [
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:        {
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "devices": [
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "/dev/loop3"
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            ],
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "lv_name": "ceph_lv0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "lv_size": "7511998464",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "name": "ceph_lv0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "tags": {
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.cluster_name": "ceph",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.crush_device_class": "",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.encrypted": "0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.osd_id": "0",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.type": "block",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:                "ceph.vdo": "0"
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            },
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "type": "block",
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:            "vg_name": "ceph_vg0"
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:        }
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]:    ]
Nov 29 03:39:34 np0005539550 sharp_jemison[373306]: }
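The JSON emitted by the sharp_jemison container above is ceph-volume lvm list style output: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form. A small sketch of reducing it to an osd_id -> (device, lv_path) map; the sample string is abbreviated to the fields used:

    import json

    raw = """
    {
      "0": [
        {"devices": ["/dev/loop3"],
         "lv_path": "/dev/ceph_vg0/ceph_lv0",
         "type": "block"}
      ]
    }
    """

    osd_devices = {
        osd_id: [(lv["devices"][0], lv["lv_path"]) for lv in lvs]
        for osd_id, lvs in json.loads(raw).items()
    }
    print(osd_devices)  # {'0': [('/dev/loop3', '/dev/ceph_vg0/ceph_lv0')]}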
Nov 29 03:39:34 np0005539550 systemd[1]: libpod-1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38.scope: Deactivated successfully.
Nov 29 03:39:34 np0005539550 conmon[373306]: conmon 1f89c88f50eeb20d961d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38.scope/container/memory.events
Nov 29 03:39:34 np0005539550 podman[373289]: 2025-11-29 08:39:34.07243219 +0000 UTC m=+1.323595738 container died 1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:39:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:34.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:34.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
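The anonymous "HEAD / HTTP/1.0" 200 entries above arrive from two fixed sources on a roughly two-second cadence, which is the shape of load-balancer health probes against the RGW beast frontend. An equivalent probe in Python; the host and port are placeholders, since the log only shows the probe sources:

    import http.client

    conn = http.client.HTTPConnection("rgw.example.com", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 while the gateway answers
    conn.close()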
Nov 29 03:39:34 np0005539550 nova_compute[257631]: 2025-11-29 08:39:34.261 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.2 MiB/s wr, 117 op/s
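The pgmap line above is the mgr's periodic cluster digest. A sketch of pulling the headline numbers out of that exact message shape (the regex covers only this format):

    import re

    line = ("pgmap v2976: 305 pgs: 305 active+clean; 540 MiB data, "
            "1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, "
            "3.2 MiB/s wr, 117 op/s")
    m = re.search(r"pgmap v(\d+): (\d+) pgs: (\d+) (\S+);", line)
    print(m.groups())  # ('2976', '305', '305', 'active+clean')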
Nov 29 03:39:34 np0005539550 nova_compute[257631]: 2025-11-29 08:39:34.912 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2f075a728b25e371fa942574e446371b24bce0783df0cbfda6629bb5c99326f8-merged.mount: Deactivated successfully.
Nov 29 03:39:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:35 np0005539550 nova_compute[257631]: 2025-11-29 08:39:35.639 257641 DEBUG oslo_concurrency.processutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/disk.config db9a33b0-f745-4457-b7a6-d22017777a85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:35 np0005539550 nova_compute[257631]: 2025-11-29 08:39:35.640 257641 INFO nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Deleting local config drive /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85/disk.config because it was imported into RBD.#033[00m
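The rbd import above pushes the freshly built ISO into the vms pool as <instance-uuid>_disk.config; only after the command returns 0 does nova delete the local copy. The same flow condensed into Python; pool, flags and credentials mirror the logged command, and the unlink-after-success ordering is the point:

    import os
    import subprocess

    def import_config_drive(local_path: str, image_name: str,
                            pool: str = "vms",
                            client_id: str = "openstack",
                            conf: str = "/etc/ceph/ceph.conf") -> None:
        subprocess.run(
            ["rbd", "import", "--pool", pool, local_path, image_name,
             "--image-format=2", "--id", client_id, "--conf", conf],
            check=True,
        )
        os.unlink(local_path)  # safe only once the import succeeded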
Nov 29 03:39:35 np0005539550 kernel: tapd3028052-76: entered promiscuous mode
Nov 29 03:39:35 np0005539550 NetworkManager[49039]: <info>  [1764405575.6962] manager: (tapd3028052-76): new Tun device (/org/freedesktop/NetworkManager/Devices/377)
Nov 29 03:39:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:35Z|00855|binding|INFO|Claiming lport d3028052-76d1-49d3-9c2d-9126edc45935 for this chassis.
Nov 29 03:39:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:35Z|00856|binding|INFO|d3028052-76d1-49d3-9c2d-9126edc45935: Claiming fa:16:3e:52:60:ba 10.100.0.4
Nov 29 03:39:35 np0005539550 nova_compute[257631]: 2025-11-29 08:39:35.699 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.705 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:60:ba 10.100.0.4'], port_security=['fa:16:3e:52:60:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'db9a33b0-f745-4457-b7a6-d22017777a85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c114bc23-cd62-4198-a95d-5595953a88bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ca74358-0566-4f32-a6ba-a0c4dcd1723c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cd3a0f0-9ad7-457d-b2e3-d5300cfee042, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d3028052-76d1-49d3-9c2d-9126edc45935) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.707 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d3028052-76d1-49d3-9c2d-9126edc45935 in datapath c114bc23-cd62-4198-a95d-5595953a88bd bound to our chassis#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.710 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c114bc23-cd62-4198-a95d-5595953a88bd#033[00m
Nov 29 03:39:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:35Z|00857|binding|INFO|Setting lport d3028052-76d1-49d3-9c2d-9126edc45935 ovn-installed in OVS
Nov 29 03:39:35 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:35Z|00858|binding|INFO|Setting lport d3028052-76d1-49d3-9c2d-9126edc45935 up in Southbound
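The four binding messages above are ovn-controller claiming the logical port for this chassis and flipping it up in the southbound database. One way to confirm the claim from the host, assuming ovn-sbctl is present and can reach the SB DB:

    import subprocess

    PORT = "d3028052-76d1-49d3-9c2d-9126edc45935"
    out = subprocess.run(
        ["ovn-sbctl", "--bare", "--columns=chassis",
         "find", "Port_Binding", f"logical_port={PORT}"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # chassis UUID once the claim has landed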
Nov 29 03:39:35 np0005539550 nova_compute[257631]: 2025-11-29 08:39:35.716 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539550 nova_compute[257631]: 2025-11-29 08:39:35.719 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.722 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2358d7-da06-434c-8fba-e75a2bff959c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.723 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc114bc23-c1 in ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.726 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc114bc23-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.726 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ddb6e260-4b15-4875-940c-6024c1cbafd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.727 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9038c8c-9af1-4f06-bab4-64e87ebb2711]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
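The "Creating VETH tapc114bc23-c1 in ovnmeta-..." line is the metadata datapath being plumbed: a veth pair whose peer end moves into the per-network namespace (the RTM_NEWLINK dumps further down are the privsep daemon reading that link back). A pyroute2 sketch of just that step, assuming the namespace already exists and the caller holds CAP_NET_ADMIN; names mirror the log:

    from pyroute2 import IPRoute

    NS = "ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd"  # from the log

    ipr = IPRoute()
    ipr.link("add", ifname="tapc114bc23-c0", kind="veth",
             peer="tapc114bc23-c1")
    idx = ipr.link_lookup(ifname="tapc114bc23-c1")[0]
    ipr.link("set", index=idx, net_ns_fd=NS)  # peer moves into the netns
    ipr.close()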
Nov 29 03:39:35 np0005539550 systemd-machined[216673]: New machine qemu-101-instance-000000bb.
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.737 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[fda6c2be-5eb0-42fc-b132-19e972c088c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 systemd[1]: Started Virtual Machine qemu-101-instance-000000bb.
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.762 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9361b173-269c-497c-8287-c5e51d366c70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 systemd-udevd[373394]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:35 np0005539550 NetworkManager[49039]: <info>  [1764405575.7839] device (tapd3028052-76): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:39:35 np0005539550 NetworkManager[49039]: <info>  [1764405575.7849] device (tapd3028052-76): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.799 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4796764d-88db-4cad-8bba-1fab0ee182f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.805 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[05261e4c-4758-4814-9467-fbac07876473]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 NetworkManager[49039]: <info>  [1764405575.8065] manager: (tapc114bc23-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/378)
Nov 29 03:39:35 np0005539550 systemd-udevd[373399]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.838 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b41742f8-9cd3-40c7-8249-6b2d46e31a94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.842 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e3344f0d-0296-49d6-9d7f-ce1ae021e4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.855 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:35 np0005539550 NetworkManager[49039]: <info>  [1764405575.8669] device (tapc114bc23-c0): carrier: link connected
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.873 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5a2ea6-70f1-40f6-8a8c-d91ca949d8e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 podman[373289]: 2025-11-29 08:39:35.892422138 +0000 UTC m=+3.143585696 container remove 1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_jemison, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.893 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9934dd21-da26-43f4-9626-61f179b9f944]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc114bc23-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:78:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862353, 'reachable_time': 39138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373424, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.923 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[359227ee-b70a-4f62-a63e-8300bc3b9ac2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:784d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 862353, 'tstamp': 862353}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373425, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.947 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a03c98a8-cde3-42d0-ab1d-316b4d0be25c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc114bc23-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:78:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862353, 'reachable_time': 39138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373432, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 systemd[1]: libpod-conmon-1f89c88f50eeb20d961d3acb8c4c4ecacee5caf265cb6b62c3f6f9e2f6f7ed38.scope: Deactivated successfully.
Nov 29 03:39:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:35.980 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[58fcd6c6-a1fb-4ef7-ab8b-5aaa82f20686]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:35 np0005539550 podman[373368]: 2025-11-29 08:39:35.982286137 +0000 UTC m=+1.129166986 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.048 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[16a89d2c-1713-47e6-87fc-bdabe30432db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.049 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc114bc23-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.050 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.050 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc114bc23-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.052 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539550 kernel: tapc114bc23-c0: entered promiscuous mode
Nov 29 03:39:36 np0005539550 NetworkManager[49039]: <info>  [1764405576.0532] manager: (tapc114bc23-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.057 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc114bc23-c0, col_values=(('external_ids', {'iface-id': '1642a0e3-a8d4-4ee4-8971-26f27541a04e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:36Z|00859|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.058 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.059 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.061 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.062 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7717e3-530c-40fa-b488-88146fd87f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.063 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-c114bc23-cd62-4198-a95d-5595953a88bd
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/c114bc23-cd62-4198-a95d-5595953a88bd.pid.haproxy
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID c114bc23-cd62-4198-a95d-5595953a88bd
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:39:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:36.063 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'env', 'PROCESS_TAG=haproxy-c114bc23-cd62-4198-a95d-5595953a88bd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c114bc23-cd62-4198-a95d-5595953a88bd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
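The agent logs the complete haproxy_cfg it just rendered, then execs haproxy inside the ovnmeta namespace through rootwrap. Only two values in that file vary per network; a sketch of rendering the listener stanza from them, trimmed to that stanza since the global/defaults blocks above are static:

    LISTENER = """\
    listen listener
        bind 169.254.169.254:80
        server metadata {socket_path}
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def render(network_id: str,
               socket_path: str = "/var/lib/neutron/metadata_proxy") -> str:
        return LISTENER.format(socket_path=socket_path,
                               network_id=network_id)

    print(render("c114bc23-cd62-4198-a95d-5595953a88bd"))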
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.074 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:36.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:36.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.326 257641 DEBUG nova.compute.manager [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.327 257641 DEBUG oslo_concurrency.lockutils [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.327 257641 DEBUG oslo_concurrency.lockutils [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.328 257641 DEBUG oslo_concurrency.lockutils [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.328 257641 DEBUG nova.compute.manager [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Processing event network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.328 257641 DEBUG nova.compute.manager [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.329 257641 DEBUG oslo_concurrency.lockutils [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.329 257641 DEBUG oslo_concurrency.lockutils [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.330 257641 DEBUG oslo_concurrency.lockutils [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.330 257641 DEBUG nova.compute.manager [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] No waiting events found dispatching network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:36 np0005539550 nova_compute[257631]: 2025-11-29 08:39:36.330 257641 WARNING nova.compute.manager [req-300d3344-efd5-4bd0-a486-b77b3770eef0 req-b71a978d-2fc4-4c71-a48a-30efbf4eb3c4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received unexpected event network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 for instance with vm_state building and task_state spawning.#033[00m
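The acquire/pop/release sequence above ends in a WARNING because the network-vif-plugged event arrived while no waiter was registered for it (the instance was still building). An illustrative pop-or-warn skeleton of that pattern, not nova's actual implementation:

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # event name -> threading.Event

        def expect(self, name: str) -> threading.Event:
            with self._lock:
                self._waiters[name] = ev = threading.Event()
                return ev

        def pop(self, name: str) -> None:
            with self._lock:
                ev = self._waiters.pop(name, None)
            if ev is None:
                print(f"WARNING: unexpected event {name}")
            else:
                ev.set()

    events = InstanceEvents()
    events.pop("network-vif-plugged-d3028052")  # no waiter -> warning path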
Nov 29 03:39:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 145 op/s
Nov 29 03:39:36 np0005539550 podman[373604]: 2025-11-29 08:39:36.460593929 +0000 UTC m=+0.023014732 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:39:36 np0005539550 podman[373604]: 2025-11-29 08:39:36.765826389 +0000 UTC m=+0.328247172 container create 391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 03:39:36 np0005539550 podman[373622]: 2025-11-29 08:39:36.785737582 +0000 UTC m=+0.308731527 container create 62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:39:36 np0005539550 systemd[1]: Started libpod-conmon-391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec.scope.
Nov 29 03:39:36 np0005539550 systemd[1]: Started libpod-conmon-62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f.scope.
Nov 29 03:39:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56667402d78935e6d38138c556e8a0adb793175078c438c7a9fb606791fad71a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:36 np0005539550 podman[373622]: 2025-11-29 08:39:36.760128207 +0000 UTC m=+0.283122162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:36 np0005539550 podman[373604]: 2025-11-29 08:39:36.880075882 +0000 UTC m=+0.442496665 container init 391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:39:36 np0005539550 podman[373604]: 2025-11-29 08:39:36.887934547 +0000 UTC m=+0.450355330 container start 391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 03:39:36 np0005539550 podman[373622]: 2025-11-29 08:39:36.901157425 +0000 UTC m=+0.424151370 container init 62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bartik, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:39:36 np0005539550 podman[373622]: 2025-11-29 08:39:36.909630835 +0000 UTC m=+0.432624780 container start 62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:39:36 np0005539550 podman[373622]: 2025-11-29 08:39:36.913366398 +0000 UTC m=+0.436360323 container attach 62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:39:36 np0005539550 flamboyant_bartik[373680]: 167 167
Nov 29 03:39:36 np0005539550 systemd[1]: libpod-62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f.scope: Deactivated successfully.
Nov 29 03:39:36 np0005539550 podman[373622]: 2025-11-29 08:39:36.916654599 +0000 UTC m=+0.439648524 container died 62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bartik, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:39:36 np0005539550 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[373678]: [NOTICE]   (373686) : New worker (373695) forked
Nov 29 03:39:36 np0005539550 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[373678]: [NOTICE]   (373686) : Loading success.
Nov 29 03:39:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-da260047443b3e8c4aea1f973fe15d9603e373013c842048f63784eafbcf5e4e-merged.mount: Deactivated successfully.
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.002 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.003 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405577.002729, db9a33b0-f745-4457-b7a6-d22017777a85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.003 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] VM Started (Lifecycle Event)#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.005 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.009 257641 INFO nova.virt.libvirt.driver [-] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Instance spawned successfully.#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.009 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.029 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.035 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.039 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.040 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.040 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.040 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.041 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.041 257641 DEBUG nova.virt.libvirt.driver [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:39:37 np0005539550 podman[373622]: 2025-11-29 08:39:37.058010205 +0000 UTC m=+0.581004130 container remove 62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.064 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.065 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405577.0028186, db9a33b0-f745-4457-b7a6-d22017777a85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.065 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:39:37 np0005539550 systemd[1]: libpod-conmon-62e908d9a517ffff8670881ec68be6f8efd4f8d683e2c63c7d1d211dfc8c619f.scope: Deactivated successfully.
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.093 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.099 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405577.0052693, db9a33b0-f745-4457-b7a6-d22017777a85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.099 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.117 257641 INFO nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Took 9.67 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.118 257641 DEBUG nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.125 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.129 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.156 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.187 257641 INFO nova.compute.manager [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Took 12.25 seconds to build instance.#033[00m
Nov 29 03:39:37 np0005539550 nova_compute[257631]: 2025-11-29 08:39:37.220 257641 DEBUG oslo_concurrency.lockutils [None req-f518d4eb-865f-4db9-b19b-14fca741312e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:37 np0005539550 podman[373722]: 2025-11-29 08:39:37.239059355 +0000 UTC m=+0.027881942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:39:37 np0005539550 podman[373722]: 2025-11-29 08:39:37.54852328 +0000 UTC m=+0.337345887 container create 9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:39:37 np0005539550 systemd[1]: Started libpod-conmon-9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04.scope.
Nov 29 03:39:37 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:39:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b4797789ca7de8b643097f554b42d308b8d8cb24c27aca73e8cc9136fb7e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b4797789ca7de8b643097f554b42d308b8d8cb24c27aca73e8cc9136fb7e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b4797789ca7de8b643097f554b42d308b8d8cb24c27aca73e8cc9136fb7e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/175b4797789ca7de8b643097f554b42d308b8d8cb24c27aca73e8cc9136fb7e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:39:37 np0005539550 podman[373722]: 2025-11-29 08:39:37.820963978 +0000 UTC m=+0.609786565 container init 9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:39:37 np0005539550 podman[373722]: 2025-11-29 08:39:37.831169181 +0000 UTC m=+0.619991748 container start 9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:39:37 np0005539550 podman[373722]: 2025-11-29 08:39:37.835087018 +0000 UTC m=+0.623909605 container attach 9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:39:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:38.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:38.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 145 op/s
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]: {
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]:        "osd_id": 0,
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]:        "type": "bluestore"
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]:    }
Nov 29 03:39:38 np0005539550 crazy_lumiere[373738]: }
Nov 29 03:39:38 np0005539550 systemd[1]: libpod-9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04.scope: Deactivated successfully.
Nov 29 03:39:38 np0005539550 podman[373760]: 2025-11-29 08:39:38.767737339 +0000 UTC m=+0.025696429 container died 9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:39:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-175b4797789ca7de8b643097f554b42d308b8d8cb24c27aca73e8cc9136fb7e6-merged.mount: Deactivated successfully.
Nov 29 03:39:39 np0005539550 nova_compute[257631]: 2025-11-29 08:39:39.010 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:39 np0005539550 nova_compute[257631]: 2025-11-29 08:39:39.014 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:39 np0005539550 nova_compute[257631]: 2025-11-29 08:39:39.014 257641 INFO nova.compute.manager [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Shelving#033[00m
Nov 29 03:39:39 np0005539550 nova_compute[257631]: 2025-11-29 08:39:39.038 257641 DEBUG nova.virt.libvirt.driver [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Nov 29 03:39:39 np0005539550 nova_compute[257631]: 2025-11-29 08:39:39.262 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:39 np0005539550 podman[373760]: 2025-11-29 08:39:39.611221018 +0000 UTC m=+0.869180118 container remove 9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:39:39 np0005539550 systemd[1]: libpod-conmon-9c3a26c013113d142d49d7de02a25bd8b1880b48db0c420304d168b6de28dd04.scope: Deactivated successfully.
Nov 29 03:39:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:39:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:39:39 np0005539550 nova_compute[257631]: 2025-11-29 08:39:39.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:40.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f8db12f7-2ad1-4ccf-9237-b71d6b560c28 does not exist
Nov 29 03:39:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 28f97392-34ec-4268-ae0a-4404d72b7227 does not exist
Nov 29 03:39:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 838a2825-b6c9-4bd3-ac54-089a4369271f does not exist
Nov 29 03:39:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:40.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 540 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 172 op/s
Nov 29 03:39:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:39:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:42.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:42.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 541 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.2 MiB/s wr, 273 op/s
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.060 257641 INFO nova.virt.libvirt.driver [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance shutdown successfully after 4 seconds.#033[00m
Nov 29 03:39:43 np0005539550 kernel: tapdf823be2-d3 (unregistering): left promiscuous mode
Nov 29 03:39:43 np0005539550 NetworkManager[49039]: <info>  [1764405583.5679] device (tapdf823be2-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:39:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:43Z|00860|binding|INFO|Releasing lport df823be2-d3ae-4d3c-b70a-37db097fc356 from this chassis (sb_readonly=0)
Nov 29 03:39:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:43Z|00861|binding|INFO|Setting lport df823be2-d3ae-4d3c-b70a-37db097fc356 down in Southbound
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.577 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:43Z|00862|binding|INFO|Removing iface tapdf823be2-d3 ovn-installed in OVS
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.579 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:43.585 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:b9:6e 10.100.0.14'], port_security=['fa:16:3e:bc:b9:6e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '78534f04-30a6-4f58-9768-091f48082c9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d2929c7-13e0-4091-aa93-1048c769102b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=df823be2-d3ae-4d3c-b70a-37db097fc356) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:39:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:43.587 158978 INFO neutron.agent.ovn.metadata.agent [-] Port df823be2-d3ae-4d3c-b70a-37db097fc356 in datapath d042f24f-c2f0-4843-9727-cc3720586596 unbound from our chassis#033[00m
Nov 29 03:39:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:43.589 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d042f24f-c2f0-4843-9727-cc3720586596, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.595 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:43.593 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[71661f4a-3320-42eb-ba8c-6b3dcc9ed275]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:43.596 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace which is not needed anymore#033[00m
Nov 29 03:39:43 np0005539550 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Nov 29 03:39:43 np0005539550 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d000000b9.scope: Consumed 16.078s CPU time.
Nov 29 03:39:43 np0005539550 systemd-machined[216673]: Machine qemu-100-instance-000000b9 terminated.
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.699 257641 INFO nova.virt.libvirt.driver [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance destroyed successfully.#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.699 257641 DEBUG nova.objects.instance [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'numa_topology' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:43 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[372508]: [NOTICE]   (372512) : haproxy version is 2.8.14-c23fe91
Nov 29 03:39:43 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[372508]: [NOTICE]   (372512) : path to executable is /usr/sbin/haproxy
Nov 29 03:39:43 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[372508]: [WARNING]  (372512) : Exiting Master process...
Nov 29 03:39:43 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[372508]: [ALERT]    (372512) : Current worker (372514) exited with code 143 (Terminated)
Nov 29 03:39:43 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[372508]: [WARNING]  (372512) : All workers exited. Exiting... (0)
Nov 29 03:39:43 np0005539550 systemd[1]: libpod-e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c.scope: Deactivated successfully.
Nov 29 03:39:43 np0005539550 conmon[372508]: conmon e5f7872a8b48b310d547 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c.scope/container/memory.events
Nov 29 03:39:43 np0005539550 podman[373858]: 2025-11-29 08:39:43.777148726 +0000 UTC m=+0.055045186 container died e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:39:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2065408949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c-userdata-shm.mount: Deactivated successfully.
Nov 29 03:39:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2abe034371083762fd88bce2469308cfc13100b13d68e0dd7624f95a7d9312ae-merged.mount: Deactivated successfully.
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.943 257641 DEBUG nova.compute.manager [req-2a98e381-475c-454a-81d4-63833381dd04 req-f970644d-97ab-4433-8020-e7de5520bb32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-unplugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.943 257641 DEBUG oslo_concurrency.lockutils [req-2a98e381-475c-454a-81d4-63833381dd04 req-f970644d-97ab-4433-8020-e7de5520bb32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.944 257641 DEBUG oslo_concurrency.lockutils [req-2a98e381-475c-454a-81d4-63833381dd04 req-f970644d-97ab-4433-8020-e7de5520bb32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.944 257641 DEBUG oslo_concurrency.lockutils [req-2a98e381-475c-454a-81d4-63833381dd04 req-f970644d-97ab-4433-8020-e7de5520bb32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.944 257641 DEBUG nova.compute.manager [req-2a98e381-475c-454a-81d4-63833381dd04 req-f970644d-97ab-4433-8020-e7de5520bb32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] No waiting events found dispatching network-vif-unplugged-df823be2-d3ae-4d3c-b70a-37db097fc356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:43 np0005539550 nova_compute[257631]: 2025-11-29 08:39:43.944 257641 WARNING nova.compute.manager [req-2a98e381-475c-454a-81d4-63833381dd04 req-f970644d-97ab-4433-8020-e7de5520bb32 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received unexpected event network-vif-unplugged-df823be2-d3ae-4d3c-b70a-37db097fc356 for instance with vm_state active and task_state shelving.#033[00m
Nov 29 03:39:43 np0005539550 podman[373858]: 2025-11-29 08:39:43.977342931 +0000 UTC m=+0.255239391 container cleanup e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 03:39:43 np0005539550 systemd[1]: libpod-conmon-e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c.scope: Deactivated successfully.
Nov 29 03:39:44 np0005539550 nova_compute[257631]: 2025-11-29 08:39:44.060 257641 INFO nova.virt.libvirt.driver [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Beginning cold snapshot process#033[00m
Nov 29 03:39:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:44.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:44 np0005539550 podman[373888]: 2025-11-29 08:39:44.249968723 +0000 UTC m=+0.246385432 container remove e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.255 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6478b2f7-88ce-4067-95f6-0b09093bdafc]: (4, ('Sat Nov 29 08:39:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c)\ne5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c\nSat Nov 29 08:39:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (e5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c)\ne5f7872a8b48b310d547b1f59358d7f24d90e81b5a5abf12b4a4cd010158dd9c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.257 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9680fb-b154-4aa2-bdf1-ee1e8dd8bd92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.258 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:44 np0005539550 nova_compute[257631]: 2025-11-29 08:39:44.259 257641 DEBUG nova.virt.libvirt.imagebackend [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:39:44 np0005539550 kernel: tapd042f24f-c0: left promiscuous mode
Nov 29 03:39:44 np0005539550 nova_compute[257631]: 2025-11-29 08:39:44.262 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:44.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:44 np0005539550 nova_compute[257631]: 2025-11-29 08:39:44.286 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.289 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bd78f87a-a59e-4523-8eea-16559523cf9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.311 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[51d91ab1-a21e-4ea7-b775-c6de75c976a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.312 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2848058c-4161-4436-8f85-54d638646c20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.326 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ead86f06-4c47-4143-96f7-3756db7857d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 860213, 'reachable_time': 20197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373940, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.329 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:39:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:39:44.329 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[0db0fa15-4e20-4e3e-9e2e-50e1ee276ac2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:39:44 np0005539550 systemd[1]: run-netns-ovnmeta\x2dd042f24f\x2dc2f0\x2d4843\x2d9727\x2dcc3720586596.mount: Deactivated successfully.
Nov 29 03:39:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 541 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 MiB/s rd, 956 KiB/s wr, 257 op/s
Nov 29 03:39:44 np0005539550 nova_compute[257631]: 2025-11-29 08:39:44.533 257641 DEBUG nova.storage.rbd_utils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] creating snapshot(5faa3860c460422da133e06a7b6e15af) on rbd image(78534f04-30a6-4f58-9768-091f48082c9c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:39:44 np0005539550 nova_compute[257631]: 2025-11-29 08:39:44.916 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Nov 29 03:39:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Nov 29 03:39:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.071 257641 DEBUG nova.storage.rbd_utils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] cloning vms/78534f04-30a6-4f58-9768-091f48082c9c_disk@5faa3860c460422da133e06a7b6e15af to images/eb8dd751-ecc7-464b-8f9f-4f4d38e755cf clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:39:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.517 257641 DEBUG nova.storage.rbd_utils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] flattening images/eb8dd751-ecc7-464b-8f9f-4f4d38e755cf flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.954 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.954 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.955 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:39:45 np0005539550 nova_compute[257631]: 2025-11-29 08:39:45.955 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:46.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:46.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.3 MiB/s wr, 255 op/s
Nov 29 03:39:46 np0005539550 nova_compute[257631]: 2025-11-29 08:39:46.518 257641 DEBUG nova.compute.manager [req-359a5335-9d43-4feb-a5d2-5908e37e573a req-7ccf3578-60ae-4811-a89c-7679545a16aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:46 np0005539550 nova_compute[257631]: 2025-11-29 08:39:46.519 257641 DEBUG oslo_concurrency.lockutils [req-359a5335-9d43-4feb-a5d2-5908e37e573a req-7ccf3578-60ae-4811-a89c-7679545a16aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:46 np0005539550 nova_compute[257631]: 2025-11-29 08:39:46.519 257641 DEBUG oslo_concurrency.lockutils [req-359a5335-9d43-4feb-a5d2-5908e37e573a req-7ccf3578-60ae-4811-a89c-7679545a16aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:46 np0005539550 nova_compute[257631]: 2025-11-29 08:39:46.519 257641 DEBUG oslo_concurrency.lockutils [req-359a5335-9d43-4feb-a5d2-5908e37e573a req-7ccf3578-60ae-4811-a89c-7679545a16aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:46 np0005539550 nova_compute[257631]: 2025-11-29 08:39:46.519 257641 DEBUG nova.compute.manager [req-359a5335-9d43-4feb-a5d2-5908e37e573a req-7ccf3578-60ae-4811-a89c-7679545a16aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] No waiting events found dispatching network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:39:46 np0005539550 nova_compute[257631]: 2025-11-29 08:39:46.519 257641 WARNING nova.compute.manager [req-359a5335-9d43-4feb-a5d2-5908e37e573a req-7ccf3578-60ae-4811-a89c-7679545a16aa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received unexpected event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Nov 29 03:39:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:48.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:48.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:48 np0005539550 nova_compute[257631]: 2025-11-29 08:39:48.291 257641 DEBUG nova.storage.rbd_utils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] removing snapshot(5faa3860c460422da133e06a7b6e15af) on rbd image(78534f04-30a6-4f58-9768-091f48082c9c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Nov 29 03:39:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 2.3 MiB/s wr, 255 op/s
Nov 29 03:39:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Nov 29 03:39:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Nov 29 03:39:49 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.315 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.340 257641 DEBUG nova.storage.rbd_utils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] creating snapshot(snap) on rbd image(eb8dd751-ecc7-464b-8f9f-4f4d38e755cf) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
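The remove_snap() and create_snap() calls traced above are thin wrappers in nova's rbd_utils over the librbd Python bindings. A standalone equivalent might look like this; the pool name 'vms', the client id, and the image/snapshot names are assumptions taken from this deployment's log lines:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')   # nova's ephemeral-disk pool here
        try:
            with rbd.Image(ioctx, '78534f04-30a6-4f58-9768-091f48082c9c_disk') as image:
                image.create_snap('snap')   # mirrors create_snap() above
                image.remove_snap('snap')   # mirrors remove_snap() above
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()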
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.375 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
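The instance_info_cache payload above embeds the whole port model as JSON, so pulling out fixed and floating IPs is plain traversal. A self-contained sketch over a trimmed-down copy of that payload:

    import json

    # Trimmed copy of the network_info payload logged above.
    network_info = json.loads('''[{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.14", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.180"}]}]}]}}]''')

    for port in network_info:
        for subnet in port['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(port['id'], ip['address'], floats)
    # -> df823be2-d3ae-4d3c-b70a-37db097fc356 10.100.0.14 ['192.168.122.180']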
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.398 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.399 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.399 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.400 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.400 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.400 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.400 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.419 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.420 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.420 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.420 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.420 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272840259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.896 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
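The resource tracker sizes its RBD-backed storage by shelling out to ceph df, exactly as the two processutils lines show. A standalone version of that call, with the same flags as logged (the stats field names are those emitted by current Ceph releases):

    import json
    import subprocess

    # Same invocation as the processutils lines above.
    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    # Cluster totals; the libvirt driver derives the DISK_GB figures it
    # reports further down from this same ceph df output.
    print(stats['total_bytes'], stats['total_avail_bytes'])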
Nov 29 03:39:49 np0005539550 nova_compute[257631]: 2025-11-29 08:39:49.918 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:50.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Nov 29 03:39:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:50.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 589 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.9 MiB/s wr, 184 op/s
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.370 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.370 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.373 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.374 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000b9 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:39:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Nov 29 03:39:51 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.575 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.577 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3931MB free_disk=20.850975036621094GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
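The hypervisor resource view above dumps every PCI device as JSON. Grouping by vendor:product shows at a glance that this KVM guest exposes only Intel chipset functions (8086:*) and virtio devices (1af4:*); a sketch over a shortened copy of the list:

    import json
    from collections import Counter

    # Shortened copy of the pci_devices list logged above.
    pci_devices = json.loads('''[
      {"address": "0000:00:01.1", "vendor_id": "8086", "product_id": "7010"},
      {"address": "0000:00:05.0", "vendor_id": "1af4", "product_id": "1002"},
      {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"}]''')

    counts = Counter((d['vendor_id'], d['product_id']) for d in pci_devices)
    for (vendor, product), n in sorted(counts.items()):
        print(f'{vendor}:{product} x{n}')    # e.g. 1af4:1000 x2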
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.577 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:51 np0005539550 nova_compute[257631]: 2025-11-29 08:39:51.578 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:51Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:60:ba 10.100.0.4
Nov 29 03:39:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:39:51Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:60:ba 10.100.0.4
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.125 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 78534f04-30a6-4f58-9768-091f48082c9c actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.125 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance db9a33b0-f745-4457-b7a6-d22017777a85 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.125 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.126 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:39:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:52.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:52.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.358 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 682 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 13 MiB/s wr, 293 op/s
Nov 29 03:39:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1664661030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.798 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.803 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.946 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
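The inventory record above fixes what the scheduler can place: placement derives capacity per resource class as int((total - reserved) * allocation_ratio), which for the values logged here gives 32 VCPU, 7168 MEMORY_MB and 17 DISK_GB:

    # Inventory as logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, capacity)    # VCPU 32, MEMORY_MB 7168, DISK_GB 17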
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.993 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:39:52 np0005539550 nova_compute[257631]: 2025-11-29 08:39:52.993 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
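lockutils logs a 'waited' and 'held' duration for every lock transition, so hold times (1.416s for the full resource update here, versus 0.000s for the cache clean earlier) can be scraped straight out of the journal. A small self-contained sketch:

    import re

    LOCK_RE = re.compile(
        r'Lock "(?P<name>[^"]+)" "released" by "(?P<fn>[^"]+)" :: held (?P<held>[\d.]+)s')

    lines = [
        'Lock "compute_resources" "released" by "nova.compute.resource_tracker.'
        'ResourceTracker._update_available_resource" :: held 1.416s inner',
        'Lock "compute_resources" "released" by "nova.compute.resource_tracker.'
        'ResourceTracker.clean_compute_node_cache" :: held 0.000s inner',
    ]
    for line in lines:
        m = LOCK_RE.search(line)
        if m:
            print(f"{m.group('name')}: held {m.group('held')}s by {m.group('fn')}")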
Nov 29 03:39:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:39:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:54.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:39:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:54.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.317 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.344 257641 INFO nova.virt.libvirt.driver [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Snapshot image upload complete#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.345 257641 DEBUG nova.compute.manager [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 686 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 284 op/s
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.423 257641 INFO nova.compute.manager [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Shelve offloading#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.432 257641 INFO nova.virt.libvirt.driver [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance destroyed successfully.#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.433 257641 DEBUG nova.compute.manager [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.435 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.435 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.436 257641 DEBUG nova.network.neutron [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.513 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:39:54 np0005539550 nova_compute[257631]: 2025-11-29 08:39:54.921 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:39:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Nov 29 03:39:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Nov 29 03:39:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Nov 29 03:39:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:56.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:56 np0005539550 nova_compute[257631]: 2025-11-29 08:39:56.163 257641 DEBUG nova.network.neutron [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:56 np0005539550 nova_compute[257631]: 2025-11-29 08:39:56.230 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:56.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 702 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 13 MiB/s wr, 325 op/s
Nov 29 03:39:57 np0005539550 podman[374158]: 2025-11-29 08:39:57.327831517 +0000 UTC m=+0.057460246 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 03:39:57 np0005539550 podman[374157]: 2025-11-29 08:39:57.336057531 +0000 UTC m=+0.069097144 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
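Both health_status lines above come from the healthcheck each container declares in its config_data ('test': '/openstack/healthcheck' on a mounted healthchecks volume). Extracting container name and health from these podman event lines is a one-regex job; the sample string below is an abbreviated copy of the multipathd line:

    import re

    HEALTH_RE = re.compile(
        r'container health_status .*?name=(?P<name>[^,]+), health_status=(?P<status>[^,]+),')

    line = ('2025-11-29 08:39:57.336057531 +0000 UTC m=+0.069097144 container '
            'health_status 32abb6cb4a8e (image=openstack-multipathd, '
            'name=multipathd, health_status=healthy, health_failing_streak=0, ...)')
    m = HEALTH_RE.search(line)
    print(m.group('name'), m.group('status'))    # multipathd healthy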
Nov 29 03:39:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Nov 29 03:39:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Nov 29 03:39:57 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.689 257641 INFO nova.virt.libvirt.driver [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance destroyed successfully.#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.690 257641 DEBUG nova.objects.instance [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'resources' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.715 257641 DEBUG nova.virt.libvirt.vif [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:39:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1172691465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1172691465',id=185,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFU/Um4efU77ova0kz9WpooHJ1R78p7btmWfhWnw2SWonD75t5qli/+3ke1m06M5GJ4lWVadgCjMA2qcBuT9S+rdGvf2uoQyHcOJoYChEuEWX8cHtdlJ+rQFdBRmufcsRw==',key_name='tempest-keypair-701830470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-6fq03z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member',shelved_at='2025-11-29T08:39:54.345339',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='eb8dd751-ecc7-464b-8f9f-4f4d38e755cf'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:44Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=78534f04-30a6-4f58-9768-091f48082c9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.716 257641 DEBUG nova.network.os_vif_util [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.718 257641 DEBUG nova.network.os_vif_util [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.719 257641 DEBUG os_vif [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.722 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.723 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf823be2-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.726 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.729 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.731 257641 INFO os_vif [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3')#033[00m
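The unplug sequence above (Converting VIF, Converted object, Unplugging vif, DelPortCommand) is the os-vif library at work. Reduced to a standalone sketch it is roughly the following; the object fields are copied from the VIFOpenVSwitch dump above, but treat this as an illustration rather than nova's exact driver code, and note it needs the os-vif package and a local OVS with br-int to actually run:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()    # loads the plugin set, including 'ovs'

    # Field values copied from the VIFOpenVSwitch dump above.
    my_vif = vif.VIFOpenVSwitch(
        id='df823be2-d3ae-4d3c-b70a-37db097fc356',
        address='fa:16:3e:bc:b9:6e',
        vif_name='tapdf823be2-d3',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='d042f24f-c2f0-4843-9727-cc3720586596'),
    )
    info = instance_info.InstanceInfo(
        uuid='78534f04-30a6-4f58-9768-091f48082c9c',
        name='instance-000000b9',    # libvirt name for id=185 (0xb9)
    )
    os_vif.unplug(my_vif, info)    # ends in the DelPortCommand logged above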
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.799 257641 DEBUG nova.compute.manager [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.800 257641 DEBUG nova.compute.manager [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing instance network info cache due to event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.800 257641 DEBUG oslo_concurrency.lockutils [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.801 257641 DEBUG oslo_concurrency.lockutils [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:39:57 np0005539550 nova_compute[257631]: 2025-11-29 08:39:57.801 257641 DEBUG nova.network.neutron [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:39:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:39:58.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.152 257641 INFO nova.virt.libvirt.driver [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Deleting instance files /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c_del#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.153 257641 INFO nova.virt.libvirt.driver [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Deletion of /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c_del complete#033[00m
Nov 29 03:39:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:39:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.278 257641 INFO nova.scheduler.client.report [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Deleted allocations for instance 78534f04-30a6-4f58-9768-091f48082c9c#033[00m
Nov 29 03:39:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:39:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:39:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:39:58.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.325 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.326 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:39:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 684 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1006 KiB/s rd, 7.7 MiB/s wr, 286 op/s
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.393 257641 DEBUG oslo_concurrency.processutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.699 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405583.697741, 78534f04-30a6-4f58-9768-091f48082c9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.701 257641 INFO nova.compute.manager [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.721 257641 DEBUG nova.compute.manager [None req-25f67bbd-6301-41bb-bd5f-7e2e6c572b1d - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:39:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:39:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1348037086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.850 257641 DEBUG oslo_concurrency.processutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.855 257641 DEBUG nova.compute.provider_tree [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.880 257641 DEBUG nova.scheduler.client.report [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.919 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:39:58 np0005539550 nova_compute[257631]: 2025-11-29 08:39:58.981 257641 DEBUG oslo_concurrency.lockutils [None req-13f1a5b1-afb2-40b7-b5bb-537088d52d3c eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 19.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
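The lockutils line above closes the shelve: the per-instance lock was held for 19.967s, which lines up with the journal timestamps (released at 08:39:58.981, so acquired around 08:39:39.014, just as the snapshot work began). The check is simple datetime arithmetic:

    from datetime import datetime, timedelta

    fmt = '%Y-%m-%d %H:%M:%S.%f'
    released = datetime.strptime('2025-11-29 08:39:58.981', fmt)  # from the line above
    acquired = released - timedelta(seconds=19.967)               # held duration
    print(acquired.time())    # 08:39:39.014000, just after the shelve began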
Nov 29 03:39:59 np0005539550 nova_compute[257631]: 2025-11-29 08:39:59.320 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:39:59
Nov 29 03:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'images', 'backups', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Nov 29 03:39:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:39:59 np0005539550 nova_compute[257631]: 2025-11-29 08:39:59.612 257641 DEBUG nova.network.neutron [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updated VIF entry in instance network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:39:59 np0005539550 nova_compute[257631]: 2025-11-29 08:39:59.613 257641 DEBUG nova.network.neutron [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": null, "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapdf823be2-d3", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:39:59 np0005539550 nova_compute[257631]: 2025-11-29 08:39:59.634 257641 DEBUG oslo_concurrency.lockutils [req-85fa2890-8890-46a2-944e-4350061cfeb8 req-66161264-56f7-40d3-8faf-fa8080a98d38 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:39:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:39:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733187900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:39:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:39:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/733187900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
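The two audit entries above show client.openstack polling cluster and pool capacity: a `df` followed by `osd pool get-quota` on the volumes pool, which is how the OpenStack services size their Ceph backends. A sketch of the same pair of queries, assuming the client.openstack keyring from the log and the JSON keys emitted by recent Ceph releases:

    import json
    import subprocess

    CEPH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def pool_capacity(pool):
        # Mirrors the two mon_commands dispatched in the audit log above.
        df = json.loads(subprocess.check_output(
            ["ceph", "df", "--format=json", *CEPH]))
        quota = json.loads(subprocess.check_output(
            ["ceph", "osd", "pool", "get-quota", pool, "--format=json", *CEPH]))
        stats = next(p["stats"] for p in df["pools"] if p["name"] == pool)
        return {"used_bytes": stats["bytes_used"],
                "max_avail_bytes": stats["max_avail"],
                "quota_max_bytes": quota["quota_max_bytes"]}

    print(pool_capacity("volumes"))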
Nov 29 03:39:59 np0005539550 nova_compute[257631]: 2025-11-29 08:39:59.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:40:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:40:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:40:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:00.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:40:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:40:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:00.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
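The anonymous `HEAD / HTTP/1.0` requests hitting radosgw every two seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes: zero body, 200 status, sub-millisecond latency. A minimal probe that would produce the same beast access-log line; the port is an assumption, since the frontend port is not visible in this log:

    import http.client

    # Hypothetical RGW endpoint/port; the beast frontend above answers
    # HEAD / with 200 and an empty body, which is all a health check needs.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # expect 200, matching http_status=200 above
    conn.close()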
Nov 29 03:40:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 655 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 900 KiB/s rd, 6.8 MiB/s wr, 269 op/s
Nov 29 03:40:00 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 03:40:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:01Z|00863|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:01 np0005539550 nova_compute[257631]: 2025-11-29 08:40:01.564 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:01 np0005539550 nova_compute[257631]: 2025-11-29 08:40:01.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:40:01 np0005539550 nova_compute[257631]: 2025-11-29 08:40:01.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
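Tasks like `_instance_usage_audit` and `_cleanup_incomplete_migrations` above are oslo.service periodic tasks; `run_periodic_tasks` iterates every method registered with the decorator. A sketch of the registration pattern, assuming oslo.service and oslo.config are installed (the manager and task names here are illustrative, not Nova's actual set):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        """Illustrative manager; Nova's ComputeManager registers its
        audit/cleanup tasks with the same decorator."""

        @periodic_task.periodic_task(spacing=60)
        def _demo_cleanup(self, context):
            print("periodic task fired")

    mgr = DemoManager(cfg.CONF)
    # run_periodic_tasks is the call site shown in the log lines above.
    mgr.run_periodic_tasks(context=None)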
Nov 29 03:40:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:02.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.0 MiB/s wr, 318 op/s
Nov 29 03:40:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:02 np0005539550 nova_compute[257631]: 2025-11-29 08:40:02.726 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.573 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.573 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.573 257641 INFO nova.compute.manager [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Unshelving
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.728 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.729 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.734 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'pci_requests' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.748 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'numa_topology' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.770 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.771 257641 INFO nova.compute.claims [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:40:03 np0005539550 nova_compute[257631]: 2025-11-29 08:40:03.894 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:04.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:04 np0005539550 nova_compute[257631]: 2025-11-29 08:40:04.321 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598227417' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:04 np0005539550 nova_compute[257631]: 2025-11-29 08:40:04.346 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:40:04 np0005539550 nova_compute[257631]: 2025-11-29 08:40:04.353 257641 DEBUG nova.compute.provider_tree [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:40:04 np0005539550 nova_compute[257631]: 2025-11-29 08:40:04.371 257641 DEBUG nova.scheduler.client.report [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:40:04 np0005539550 nova_compute[257631]: 2025-11-29 08:40:04.398 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
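The inventory dict reported to placement above combines totals, reservations, and overcommit ratios; placement's usable capacity per resource class is (total - reserved) * allocation_ratio. Worked from the exact values in the log:

    # Inventory as logged for provider a73c606e-2495-4af4-b703-8d4b3001fdf5.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        # Placement's effective capacity formula.
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1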
Nov 29 03:40:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 586 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 317 op/s
Nov 29 03:40:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:05 np0005539550 nova_compute[257631]: 2025-11-29 08:40:05.163 257641 INFO nova.network.neutron [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating port df823be2-d3ae-4d3c-b70a-37db097fc356 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Nov 29 03:40:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Nov 29 03:40:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Nov 29 03:40:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Nov 29 03:40:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:06.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:06 np0005539550 podman[374263]: 2025-11-29 08:40:06.34496671 +0000 UTC m=+0.081728117 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:40:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 360 KiB/s wr, 221 op/s
Nov 29 03:40:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:06.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:06Z|00864|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.762 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.763 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.763 257641 DEBUG nova.network.neutron [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.849 257641 DEBUG nova.compute.manager [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.850 257641 DEBUG nova.compute.manager [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing instance network info cache due to event network-changed-df823be2-d3ae-4d3c-b70a-37db097fc356. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.850 257641 DEBUG oslo_concurrency.lockutils [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:40:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:06Z|00865|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:06 np0005539550 nova_compute[257631]: 2025-11-29 08:40:06.905 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:07 np0005539550 nova_compute[257631]: 2025-11-29 08:40:07.728 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:08.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.262 257641 DEBUG nova.network.neutron [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.291 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.292 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.293 257641 INFO nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Creating image(s)
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.319 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.322 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.324 257641 DEBUG oslo_concurrency.lockutils [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.324 257641 DEBUG nova.network.neutron [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Refreshing network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.369 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
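The repeated "rbd image ... does not exist" probes above are existence checks made before creating the instance disk; with the python-rbd bindings that check is an ImageNotFound catch. A sketch assuming python3-rados/python3-rbd are installed and reusing the client.openstack credentials from this log:

    import rados
    import rbd

    def rbd_image_exists(pool, name):
        # Same cluster credentials nova_compute uses in the log above.
        with rados.Rados(conffile="/etc/ceph/ceph.conf",
                         rados_id="openstack") as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    with rbd.Image(ioctx, name, read_only=True):
                        return True
                except rbd.ImageNotFound:
                    return False

    print(rbd_image_exists("vms", "78534f04-30a6-4f58-9768-091f48082c9c_disk"))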
Nov 29 03:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:40:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.394 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.397 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "23d94505cf4dcf132165b81c548b62fa84cda166" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.398 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "23d94505cf4dcf132165b81c548b62fa84cda166" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 319 KiB/s wr, 196 op/s
Nov 29 03:40:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:08.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.862 257641 DEBUG nova.virt.libvirt.imagebackend [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/eb8dd751-ecc7-464b-8f9f-4f4d38e755cf/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/eb8dd751-ecc7-464b-8f9f-4f4d38e755cf/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.928 257641 DEBUG nova.virt.libvirt.imagebackend [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/eb8dd751-ecc7-464b-8f9f-4f4d38e755cf/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 29 03:40:08 np0005539550 nova_compute[257631]: 2025-11-29 08:40:08.929 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] cloning images/eb8dd751-ecc7-464b-8f9f-4f4d38e755cf@snap to None/78534f04-30a6-4f58-9768-091f48082c9c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.069 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "23d94505cf4dcf132165b81c548b62fa84cda166" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
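The lines above show the unshelve fast path on Ceph: the shelved Glance image's snapshot in the images pool is cloned copy-on-write into the vms pool, and (per the "flattening" line a few entries below) then flattened so the new disk no longer depends on the parent snapshot. A sketch with the python-rbd bindings; pool and image names are taken from the log, the features value is an assumption, and the parent snapshot must be protected (or RBD clone v2 enabled) for clone to succeed:

    import rados
    import rbd

    with rados.Rados(conffile="/etc/ceph/ceph.conf",
                     rados_id="openstack") as cluster:
        with cluster.open_ioctx("images") as src, \
             cluster.open_ioctx("vms") as dst:
            # Copy-on-write clone, as in "cloning images/...@snap" above.
            rbd.RBD().clone(src, "eb8dd751-ecc7-464b-8f9f-4f4d38e755cf",
                            "snap", dst,
                            "78534f04-30a6-4f58-9768-091f48082c9c_disk",
                            features=rbd.RBD_FEATURE_LAYERING)
            # Flatten detaches the child from its parent, matching the
            # "flattening vms/..._disk" step later in the log.
            with rbd.Image(dst,
                           "78534f04-30a6-4f58-9768-091f48082c9c_disk") as img:
                img.flatten()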
Nov 29 03:40:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:40:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:40:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:40:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:40:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.206 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'migration_context' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.275 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] flattening vms/78534f04-30a6-4f58-9768-091f48082c9c_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.322 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.583 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Image rbd:vms/78534f04-30a6-4f58-9768-091f48082c9c_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.584 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.584 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Ensure instance console log exists: /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.584 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.585 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.585 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.587 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Start _get_guest_xml network_info=[{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:39:38Z,direct_url=<?>,disk_format='raw',id=eb8dd751-ecc7-464b-8f9f-4f4d38e755cf,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1172691465-shelved',owner='889608c71d13429fb37793575792ae74',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:39:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.590 257641 WARNING nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.594 257641 DEBUG nova.virt.libvirt.host [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.594 257641 DEBUG nova.virt.libvirt.host [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.597 257641 DEBUG nova.virt.libvirt.host [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.597 257641 DEBUG nova.virt.libvirt.host [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.598 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.598 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:39:38Z,direct_url=<?>,disk_format='raw',id=eb8dd751-ecc7-464b-8f9f-4f4d38e755cf,min_disk=1,min_ram=0,name='tempest-AttachVolumeShelveTestJSON-server-1172691465-shelved',owner='889608c71d13429fb37793575792ae74',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:39:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.599 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.599 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.599 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.599 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.600 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.600 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.600 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.600 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.601 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.601 257641 DEBUG nova.virt.hardware [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.601 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.623 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.744 257641 DEBUG oslo_concurrency.lockutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.745 257641 DEBUG oslo_concurrency.lockutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.769 257641 DEBUG nova.objects.instance [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:40:09 np0005539550 nova_compute[257631]: 2025-11-29 08:40:09.803 257641 DEBUG oslo_concurrency.lockutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
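The nova.virt.hardware lines above walk a CPU topology search: with no flavor or image constraints the limits default to 65536 sockets/cores/threads, and for a 1-vCPU m1.nano flavor the only factorization is sockets=1, cores=1, threads=1. A toy re-derivation of that search (not Nova's actual implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product is
        exactly vcpus, within the logged limits."""
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    # Matches "Got 1 possible topologies ... cores=1,sockets=1,threads=1".
    print(possible_topologies(1))  # [(1, 1, 1)]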
Nov 29 03:40:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523922733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.057 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.084 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.088 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.136 257641 DEBUG oslo_concurrency.lockutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.136 257641 DEBUG oslo_concurrency.lockutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.137 257641 INFO nova.compute.manager [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Attaching volume 8bde7127-007c-4f43-9a60-267c3b300611 to /dev/vdb
Nov 29 03:40:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:40:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:10.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:40:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.366 257641 DEBUG os_brick.utils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.368 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.378 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.379 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[a097e7ec-98bb-4720-8e11-85bb3a87cad5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.380 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.388 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.388 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[991764e8-9034-4c49-bba8-f59864cd9b4b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.390 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.398 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.398 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[66306f7b-4d16-4e24-a8c1-410e69a509cf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.399 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[f7500f8c-e66d-40a9-a012-3eda8d9abcc2]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.402 257641 DEBUG oslo_concurrency.processutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.434 257641 DEBUG oslo_concurrency.processutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.437 257641 DEBUG os_brick.initiator.connectors.lightos [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.437 257641 DEBUG os_brick.initiator.connectors.lightos [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.437 257641 DEBUG os_brick.initiator.connectors.lightos [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.437 257641 DEBUG os_brick.utils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.438 257641 DEBUG nova.virt.block_device [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating existing volume attachment record: 9097190e-f4e8-4b35-a7f2-734d36d4a64a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
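The attach path above gathers host connector properties through os-brick (multipathd status, iSCSI initiator IQN, NVMe host NQN, root filesystem source), and the trace logs the exact entry point and arguments. A sketch of the same call, assuming os-brick is importable; in practice it needs the privsep/rootwrap setup shown in the trace:

    from os_brick.initiator import connector

    # Same arguments the ==> get_connector_properties trace logs above.
    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    # Expect keys like 'initiator', 'nqn' and 'multipath', matching the
    # <== return dict in the log.
    print(props.get("initiator"), props.get("multipath"))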
Nov 29 03:40:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 584 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 250 KiB/s wr, 179 op/s
Nov 29 03:40:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168127812' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.509 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
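Nova's RBD image backend shells out to `ceph mon dump --format=json` (the 0.421s command above) to discover the monitor addresses that become the `<host>` entries in the disk XML rendered below. A hedged sketch of the same lookup; the `mons`/`addr` JSON keys are long-standing `mon dump` output, but exact fields vary by Ceph release:

```python
import json
import subprocess

# Same command nova_compute just ran, minus the oslo.concurrency plumbing.
out = subprocess.check_output([
    'ceph', 'mon', 'dump', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
])
monmap = json.loads(out)
# Entries look like '192.168.122.100:6789/0'; strip the trailing nonce.
addrs = [m['addr'].split('/')[0] for m in monmap.get('mons', [])]
print(addrs)  # expect the three mon endpoints used in the <source> element
```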
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.511 257641 DEBUG nova.virt.libvirt.vif [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:39:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1172691465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1172691465',id=185,image_ref='eb8dd751-ecc7-464b-8f9f-4f4d38e755cf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-701830470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-6fq03z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member',shelved_at='2025-11-29T08:39:54.345339',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='eb8dd751-ecc7-464b-8f9f-4f4d38e755cf'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=78534f04-30a6-4f58-9768-091f48082c9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.512 257641 DEBUG nova.network.os_vif_util [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.513 257641 DEBUG nova.network.os_vif_util [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.514 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'pci_devices' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.532 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <uuid>78534f04-30a6-4f58-9768-091f48082c9c</uuid>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <name>instance-000000b9</name>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <nova:name>tempest-AttachVolumeShelveTestJSON-server-1172691465</nova:name>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:40:09</nova:creationTime>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:user uuid="eeba34466b8f4a1bb5f742f1e811053c">tempest-AttachVolumeShelveTestJSON-907905934-project-member</nova:user>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:project uuid="889608c71d13429fb37793575792ae74">tempest-AttachVolumeShelveTestJSON-907905934</nova:project>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="eb8dd751-ecc7-464b-8f9f-4f4d38e755cf"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <nova:port uuid="df823be2-d3ae-4d3c-b70a-37db097fc356">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <entry name="serial">78534f04-30a6-4f58-9768-091f48082c9c</entry>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <entry name="uuid">78534f04-30a6-4f58-9768-091f48082c9c</entry>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/78534f04-30a6-4f58-9768-091f48082c9c_disk">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/78534f04-30a6-4f58-9768-091f48082c9c_disk.config">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:bc:b9:6e"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <target dev="tapdf823be2-d3"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/console.log" append="off"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:40:10 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:40:10 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:40:10 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:40:10 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
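`_get_guest_xml` only renders the domain document; the spawn path then hands it to libvirt to start the guest (the `Started Virtual Machine qemu-102-instance-000000b9` unit appears further down). A sketch with the libvirt Python bindings, assuming the XML above has been saved to a local `domain.xml` (hypothetical filename) and a local system URI:

```python
import libvirt

# Assumption: domain.xml holds the <domain> document rendered above.
with open('domain.xml') as f:
    xml = f.read()

conn = libvirt.open('qemu:///system')  # assumption: local libvirtd
try:
    # createXML starts a transient domain directly from XML, which is
    # how an unshelved instance is materialised on the compute host.
    dom = conn.createXML(xml, 0)
    print(dom.name(), dom.UUIDString())
finally:
    conn.close()
```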
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.533 257641 DEBUG nova.compute.manager [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Preparing to wait for external event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.534 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.534 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.535 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.535 257641 DEBUG nova.virt.libvirt.vif [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:39:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1172691465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1172691465',id=185,image_ref='eb8dd751-ecc7-464b-8f9f-4f4d38e755cf',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-701830470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-6fq03z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member',shelved_at='2025-11-29T08:39:54.345339',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='eb8dd751-ecc7-464b-8f9f-4f4d38e755cf'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=78534f04-30a6-4f58-9768-091f48082c9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.536 257641 DEBUG nova.network.os_vif_util [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.536 257641 DEBUG nova.network.os_vif_util [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.537 257641 DEBUG os_vif [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.537 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.538 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.538 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.540 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.540 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf823be2-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.541 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf823be2-d3, col_values=(('external_ids', {'iface-id': 'df823be2-d3ae-4d3c-b70a-37db097fc356', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:b9:6e', 'vm-uuid': '78534f04-30a6-4f58-9768-091f48082c9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
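The two commands above run in a single OVSDB transaction: `AddPortCommand` creates the tap port on br-int, and `DbSetCommand` stamps the Interface row with the `iface-id`/`attached-mac` external_ids that let ovn-controller match the port to its logical switch port (the "Claiming lport" lines below). A sketch of the same transaction through ovsdbapp's public API; the unix-socket endpoint is an assumption:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumption: local ovsdb-server on the default socket path.
idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tapdf823be2-d3', may_exist=True))
    txn.add(api.db_set(
        'Interface', 'tapdf823be2-d3',
        ('external_ids', {
            'iface-id': 'df823be2-d3ae-4d3c-b70a-37db097fc356',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:bc:b9:6e',
            'vm-uuid': '78534f04-30a6-4f58-9768-091f48082c9c',
        })))
```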
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.542 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:10 np0005539550 NetworkManager[49039]: <info>  [1764405610.5439] manager: (tapdf823be2-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.547 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.549 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.550 257641 INFO os_vif [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3')#033[00m
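Nova never talks to OVSDB directly here; it calls `os_vif.plug()` and the 'ovs' plugin issues the transaction above. A heavily trimmed sketch of that entry point; the real VIFOpenVSwitch also carries subnets, MTU and traffic-filtering fields, so treat this as illustrative rather than a guaranteed-minimal working object:

```python
import os_vif
from os_vif import objects

os_vif.initialize()  # loads the in-tree ovs plugin

instance = objects.instance_info.InstanceInfo(
    uuid='78534f04-30a6-4f58-9768-091f48082c9c',
    name='instance-000000b9')

vif = objects.vif.VIFOpenVSwitch(
    id='df823be2-d3ae-4d3c-b70a-37db097fc356',
    address='fa:16:3e:bc:b9:6e',
    bridge_name='br-int',
    vif_name='tapdf823be2-d3',
    port_profile=objects.vif.VIFPortProfileOpenVSwitch(
        interface_id='df823be2-d3ae-4d3c-b70a-37db097fc356'),
    network=objects.network.Network(
        id='d042f24f-c2f0-4843-9727-cc3720586596', bridge='br-int'))

os_vif.plug(vif, instance)  # -> "Successfully plugged vif ..." above
```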
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.591 257641 DEBUG nova.network.neutron [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updated VIF entry in instance network info cache for port df823be2-d3ae-4d3c-b70a-37db097fc356. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.592 257641 DEBUG nova.network.neutron [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [{"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.615 257641 DEBUG oslo_concurrency.lockutils [req-1b4ef030-76ee-4f77-9ffb-23279aa44f77 req-d4b46381-7172-474a-879d-f34547163b90 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-78534f04-30a6-4f58-9768-091f48082c9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.622 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.623 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.623 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] No VIF found with MAC fa:16:3e:bc:b9:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.624 257641 INFO nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Using config drive#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.653 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.676 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:10 np0005539550 nova_compute[257631]: 2025-11-29 08:40:10.723 257641 DEBUG nova.objects.instance [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'keypairs' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.189 257641 INFO nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Creating config drive at /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.196 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfdmyiszd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
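The config drive is a plain ISO 9660 image labelled `config-2`, built from a temp directory of metadata files and (below) imported into RBD. The same invocation reconstructed as an argv list; note the multi-word `-publisher` string is a single argument even though processutils prints it unquoted:

```python
import subprocess

# Rebuild the mkisofs call from the log line above.
subprocess.check_call([
    '/usr/bin/mkisofs',
    '-o', '/var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config',
    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
    '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
    '-quiet', '-J', '-r', '-V', 'config-2',
    '/tmp/tmpfdmyiszd',  # temp dir holding the metadata tree
])
```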
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.298 257641 DEBUG nova.objects.instance [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.319 257641 DEBUG nova.virt.libvirt.driver [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Attempting to attach volume 8bde7127-007c-4f43-9a60-267c3b300611 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.322 257641 DEBUG nova.virt.libvirt.guest [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:40:11 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-8bde7127-007c-4f43-9a60-267c3b300611">
Nov 29 03:40:11 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:40:11 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:40:11 np0005539550 nova_compute[257631]:  <serial>8bde7127-007c-4f43-9a60-267c3b300611</serial>
Nov 29 03:40:11 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:40:11 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
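`attach_device` wraps libvirt's attach-device call; for a running guest Nova applies the change to both the live domain and its persistent definition. A sketch with the libvirt Python bindings, using the disk XML logged above (the URI and flag choice are the conventional ones, stated as assumptions):

```python
import libvirt

DISK_XML = """<disk type="network" device="disk">
  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
  <source protocol="rbd" name="volumes/volume-8bde7127-007c-4f43-9a60-267c3b300611">
    <host name="192.168.122.100" port="6789"/>
    <host name="192.168.122.102" port="6789"/>
    <host name="192.168.122.101" port="6789"/>
  </source>
  <auth username="openstack">
    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
  </auth>
  <target dev="vdb" bus="virtio"/>
  <serial>8bde7127-007c-4f43-9a60-267c3b300611</serial>
</disk>"""

conn = libvirt.open('qemu:///system')  # assumption: local libvirtd
dom = conn.lookupByUUIDString('db9a33b0-f745-4457-b7a6-d22017777a85')
# Attach to the running guest and persist it in the domain config.
dom.attachDeviceFlags(
    DISK_XML,
    libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```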
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.333 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfdmyiszd" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.362 257641 DEBUG nova.storage.rbd_utils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] rbd image 78534f04-30a6-4f58-9768-091f48082c9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.369 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config 78534f04-30a6-4f58-9768-091f48082c9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.459 257641 DEBUG nova.virt.libvirt.driver [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.460 257641 DEBUG nova.virt.libvirt.driver [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.460 257641 DEBUG nova.virt.libvirt.driver [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.461 257641 DEBUG nova.virt.libvirt.driver [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:52:60:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.696 257641 DEBUG oslo_concurrency.processutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config 78534f04-30a6-4f58-9768-091f48082c9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.697 257641 INFO nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Deleting local config drive /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config because it was imported into RBD.#033[00m
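With the RBD image backend the ISO never stays on local disk: it is imported into the `vms` pool as `<uuid>_disk.config` and the local copy removed, matching the two log lines above. A sketch of those two steps:

```python
import os
import subprocess

src = '/var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c/disk.config'

# Same rbd import nova_compute ran above.
subprocess.check_call([
    'rbd', 'import', '--pool', 'vms', src,
    '78534f04-30a6-4f58-9768-091f48082c9c_disk.config',
    '--image-format=2', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf',
])
os.unlink(src)  # "Deleting local config drive ... imported into RBD"
```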
Nov 29 03:40:11 np0005539550 kernel: tapdf823be2-d3: entered promiscuous mode
Nov 29 03:40:11 np0005539550 NetworkManager[49039]: <info>  [1764405611.7498] manager: (tapdf823be2-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Nov 29 03:40:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:11Z|00866|binding|INFO|Claiming lport df823be2-d3ae-4d3c-b70a-37db097fc356 for this chassis.
Nov 29 03:40:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:11Z|00867|binding|INFO|df823be2-d3ae-4d3c-b70a-37db097fc356: Claiming fa:16:3e:bc:b9:6e 10.100.0.14
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.750 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.756 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.761 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.763 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:11 np0005539550 NetworkManager[49039]: <info>  [1764405611.7638] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/382)
Nov 29 03:40:11 np0005539550 NetworkManager[49039]: <info>  [1764405611.7645] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.767 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:b9:6e 10.100.0.14'], port_security=['fa:16:3e:bc:b9:6e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '78534f04-30a6-4f58-9768-091f48082c9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9d2929c7-13e0-4091-aa93-1048c769102b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=df823be2-d3ae-4d3c-b70a-37db097fc356) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.769 158978 INFO neutron.agent.ovn.metadata.agent [-] Port df823be2-d3ae-4d3c-b70a-37db097fc356 in datapath d042f24f-c2f0-4843-9727-cc3720586596 bound to our chassis#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.770 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d042f24f-c2f0-4843-9727-cc3720586596#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.784 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[99706787-3203-4754-90ed-6ba154896ee4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.786 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd042f24f-c1 in ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
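Provisioning metadata for the datapath means building a veth pair with one end (`tapd042f24f-c1`) inside the `ovnmeta-<network>` namespace and the peer (`tapd042f24f-c0`) left in the root namespace to be plugged into br-int. Neutron does this through its privsep'd ip_lib wrappers; a rough equivalent with pyroute2, assuming the namespace does not already exist:

```python
from pyroute2 import IPRoute, netns

NS = 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596'
netns.create(NS)  # assumption: namespace not yet present

ip = IPRoute()
try:
    # Create the pair in the root namespace first ...
    ip.link('add', ifname='tapd042f24f-c0', kind='veth',
            peer='tapd042f24f-c1')
    # ... then move the inner end into the metadata namespace.
    idx = ip.link_lookup(ifname='tapd042f24f-c1')[0]
    ip.link('set', index=idx, net_ns_fd=NS)
finally:
    ip.close()
```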
Nov 29 03:40:11 np0005539550 systemd-udevd[374720]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.787 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd042f24f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.787 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa343a7-00c6-4b05-82de-d49404e0b9cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.788 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1bfb83-e360-4ef6-8681-095d6e540ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 systemd-machined[216673]: New machine qemu-102-instance-000000b9.
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.800 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ca6161-cae5-4a87-83e7-b677e48db8e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 NetworkManager[49039]: <info>  [1764405611.8025] device (tapdf823be2-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:40:11 np0005539550 NetworkManager[49039]: <info>  [1764405611.8033] device (tapdf823be2-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:40:11 np0005539550 systemd[1]: Started Virtual Machine qemu-102-instance-000000b9.
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.824 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[39c7267b-8162-4a3b-9cf8-a313183a0736]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.854 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c492219c-d045-41d6-9869-51a929c15aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.862 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4a83d267-26a1-413b-897b-5ef0a6fb5fb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 systemd-udevd[374723]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:40:11 np0005539550 NetworkManager[49039]: <info>  [1764405611.8650] manager: (tapd042f24f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/384)
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.914 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[06f7eb5e-42ca-4f61-8ab2-c33956306f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.919 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[38255c3e-bade-4eed-92a9-1477bdb80059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 NetworkManager[49039]: <info>  [1764405611.9453] device (tapd042f24f-c0): carrier: link connected
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.952 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2c82b011-7d1e-40f3-9ea9-85bc9fc45eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:11Z|00868|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.971 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfc15ad-44ba-48ec-b05b-d8aa8a79f7b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865960, 'reachable_time': 34030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374752, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
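The RTM_NEWLINK payload logged above is a pyroute2-style netlink reply as it crosses the privsep boundary: a list of message dicts whose 'attrs' member is a list of [name, value] pairs. As a minimal sketch (the helper names below are illustrative, not from the agent's code), fields can be pulled out of such a reply like this:

    # Sketch: walk a pyroute2-style RTM_NEWLINK reply, i.e. a list of dicts
    # whose 'attrs' member holds [name, value] pairs like the payload above.
    def get_attr(msg, name):
        """Return the first value stored under `name` in msg['attrs'], or None."""
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return None

    def summarize_links(reply):
        """Yield (ifname, mac, oper_state, mtu) for each link message."""
        for msg in reply:
            yield (get_attr(msg, 'IFLA_IFNAME'), get_attr(msg, 'IFLA_ADDRESS'),
                   get_attr(msg, 'IFLA_OPERSTATE'), get_attr(msg, 'IFLA_MTU'))

    # For the message above this yields
    # ('tapd042f24f-c1', 'fa:16:3e:44:67:32', 'UP', 1500).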
Nov 29 03:40:11 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.984 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:11.989 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a3f565-655b-4b6f-8c80-cbbc3997f617]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:6732'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 865960, 'tstamp': 865960}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374753, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:11Z|00869|binding|INFO|Setting lport df823be2-d3ae-4d3c-b70a-37db097fc356 ovn-installed in OVS
Nov 29 03:40:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:11Z|00870|binding|INFO|Setting lport df823be2-d3ae-4d3c-b70a-37db097fc356 up in Southbound
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:11.999 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.010 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[92276021-c1b1-4e40-828a-b854d19284c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd042f24f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:67:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 252], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865960, 'reachable_time': 34030, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374754, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.043 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0718fc59-ec44-4c08-aee4-d3ac7042547e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.071 257641 DEBUG oslo_concurrency.lockutils [None req-b290896b-87dd-482b-8c09-ccbe8b4f806e facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.104 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b13801c5-db86-4fa8-8820-e0c0123eb3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.106 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.106 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.106 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd042f24f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:12 np0005539550 kernel: tapd042f24f-c0: entered promiscuous mode
Nov 29 03:40:12 np0005539550 NetworkManager[49039]: <info>  [1764405612.1086] manager: (tapd042f24f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.110 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.111 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd042f24f-c0, col_values=(('external_ids', {'iface-id': 'e44bf17c-ebb7-4e62-850a-20ff20a74960'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
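The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto the del_port/add_port/db_set methods of ovsdbapp's Open vSwitch API. A rough sketch of issuing the same sequence directly, assuming a local ovsdb-server socket (the connection string and variable names are assumptions, not taken from the agent's configuration):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local socket; the agent's real connection is configured elsewhere.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    port = 'tapd042f24f-c0'
    # Same commands as logged: drop the port from br-ex if present, add it to
    # br-int, then point its external_ids:iface-id at the logical port.
    ovs.del_port(port, bridge='br-ex', if_exists=True).execute(check_error=True)
    ovs.add_port('br-int', port, may_exist=True).execute(check_error=True)
    ovs.db_set('Interface', port,
               ('external_ids',
                {'iface-id': 'e44bf17c-ebb7-4e62-850a-20ff20a74960'})
               ).execute(check_error=True)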
Nov 29 03:40:12 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:12Z|00871|binding|INFO|Releasing lport e44bf17c-ebb7-4e62-850a-20ff20a74960 from this chassis (sb_readonly=0)
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.137 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.137 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.138 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[02b76dcd-2d13-4110-925d-d27a6834216d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.139 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/d042f24f-c2f0-4843-9727-cc3720586596.pid.haproxy
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID d042f24f-c2f0-4843-9727-cc3720586596
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:40:12 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:12.139 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'env', 'PROCESS_TAG=haproxy-d042f24f-c2f0-4843-9727-cc3720586596', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d042f24f-c2f0-4843-9727-cc3720586596.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
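Stripped of neutron-rootwrap and the PROCESS_TAG marker, the command above amounts to `ip netns exec <namespace> haproxy -f <conf>`. A hedged stand-alone equivalent (paths copied from the log; this requires root and bypasses the rootwrap filter list the agent relies on):

    import subprocess

    ns = 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            'd042f24f-c2f0-4843-9727-cc3720586596.conf')

    # Start haproxy inside the metadata namespace; it daemonizes itself per
    # the "daemon" directive in the config rendered above.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', conf],
                   check=True)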
Nov 29 03:40:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:12.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.402 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405612.4017498, 78534f04-30a6-4f58-9768-091f48082c9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.403 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.431 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.436 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405612.4020526, 78534f04-30a6-4f58-9768-091f48082c9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.436 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.452 257641 DEBUG nova.compute.manager [req-04795717-3232-4dd3-a656-7d90c66ad68d req-11d0bb85-f5e2-4e0f-b8e3-4dc26654f2d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.452 257641 DEBUG oslo_concurrency.lockutils [req-04795717-3232-4dd3-a656-7d90c66ad68d req-11d0bb85-f5e2-4e0f-b8e3-4dc26654f2d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.452 257641 DEBUG oslo_concurrency.lockutils [req-04795717-3232-4dd3-a656-7d90c66ad68d req-11d0bb85-f5e2-4e0f-b8e3-4dc26654f2d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.453 257641 DEBUG oslo_concurrency.lockutils [req-04795717-3232-4dd3-a656-7d90c66ad68d req-11d0bb85-f5e2-4e0f-b8e3-4dc26654f2d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
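The Acquiring / acquired ("waited Ns") / "released" ("held Ns") triple above is standard oslo.concurrency bookkeeping: one `lockutils.lock()` context emits all three lines at debug level. A minimal sketch of the same pattern (lock name copied from the log; the body is a placeholder):

    from oslo_concurrency import lockutils

    # With oslo debug logging configured, this produces the same
    # acquire/release DEBUG pair seen above, including wait and hold times.
    with lockutils.lock('78534f04-30a6-4f58-9768-091f48082c9c-events'):
        pass  # the _pop_event() work happens here in nova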
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.453 257641 DEBUG nova.compute.manager [req-04795717-3232-4dd3-a656-7d90c66ad68d req-11d0bb85-f5e2-4e0f-b8e3-4dc26654f2d6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Processing event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.454 257641 DEBUG nova.compute.manager [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.457 257641 DEBUG nova.virt.libvirt.driver [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.461 257641 INFO nova.virt.libvirt.driver [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance spawned successfully.#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.465 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.468 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405612.4573374, 78534f04-30a6-4f58-9768-091f48082c9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.469 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:40:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 648 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 3.7 MiB/s wr, 129 op/s
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.488 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 03:40:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.492 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:12 np0005539550 nova_compute[257631]: 2025-11-29 08:40:12.514 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
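The comparison "current DB power_state: 4, VM power_state: 1" above uses nova's numeric power-state constants: 4 is SHUTDOWN (the shelved-offloaded DB record) and 1 is RUNNING (the freshly spawned guest), so the two disagree, but the sync is skipped because task_state is still spawning. For reference, a sketch of the values as defined in nova/compute/power_state.py (listed from memory of that module, not from this log):

    NOSTATE   = 0x00  # 0
    RUNNING   = 0x01  # 1  <- VM power_state in the log line above
    PAUSED    = 0x03  # 3
    SHUTDOWN  = 0x04  # 4  <- DB power_state in the log line above
    CRASHED   = 0x06  # 6
    SUSPENDED = 0x07  # 7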
Nov 29 03:40:12 np0005539550 podman[374827]: 2025-11-29 08:40:12.522654496 +0000 UTC m=+0.023054955 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:40:12 np0005539550 podman[374827]: 2025-11-29 08:40:12.85711626 +0000 UTC m=+0.357516709 container create 0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:40:12 np0005539550 systemd[1]: Started libpod-conmon-0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b.scope.
Nov 29 03:40:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ac36ec0ef3eb7a7e06af4f8a0b8b7c8fe842bdb4af38feebbf15f139a339f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:13 np0005539550 podman[374827]: 2025-11-29 08:40:13.137451715 +0000 UTC m=+0.637852154 container init 0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 03:40:13 np0005539550 podman[374827]: 2025-11-29 08:40:13.145115759 +0000 UTC m=+0.645516178 container start 0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:40:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Nov 29 03:40:13 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [NOTICE]   (374848) : New worker (374850) forked
Nov 29 03:40:13 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [NOTICE]   (374848) : Loading success.
Nov 29 03:40:13 np0005539550 nova_compute[257631]: 2025-11-29 08:40:13.233 257641 DEBUG oslo_concurrency.lockutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:13 np0005539550 nova_compute[257631]: 2025-11-29 08:40:13.233 257641 DEBUG oslo_concurrency.lockutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:13 np0005539550 nova_compute[257631]: 2025-11-29 08:40:13.251 257641 DEBUG nova.objects.instance [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:13 np0005539550 nova_compute[257631]: 2025-11-29 08:40:13.312 257641 DEBUG oslo_concurrency.lockutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Nov 29 03:40:13 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Nov 29 03:40:13 np0005539550 nova_compute[257631]: 2025-11-29 08:40:13.806 257641 DEBUG oslo_concurrency.lockutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:13 np0005539550 nova_compute[257631]: 2025-11-29 08:40:13.806 257641 DEBUG oslo_concurrency.lockutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:13 np0005539550 nova_compute[257631]: 2025-11-29 08:40:13.808 257641 INFO nova.compute.manager [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Attaching volume 86e515f3-ecb8-4e58-99f6-23e4d8ee3e76 to /dev/vdc#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.145 257641 DEBUG os_brick.utils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.146 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.151 257641 DEBUG nova.compute.manager [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:14.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.161 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.162 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[3467b21f-0063-4944-91a8-3c540bd5b44d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.164 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.174 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.174 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[30ff98a7-a544-4b97-8f25-17c6250f766d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.176 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.187 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.187 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[88435392-a139-4977-bcee-a46ae3a54cbf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.189 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[87ecd6ff-7d39-4fa1-a0f6-0d739654529c]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.190 257641 DEBUG oslo_concurrency.processutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.218 257641 DEBUG oslo_concurrency.processutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.220 257641 DEBUG os_brick.initiator.connectors.lightos [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.220 257641 DEBUG os_brick.initiator.connectors.lightos [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.221 257641 DEBUG os_brick.initiator.connectors.lightos [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.221 257641 DEBUG os_brick.utils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
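The "==> get_connector_properties" call trace and the "<== ... return" line above bracket os-brick's public connector helper; the keyword arguments are exactly the ones logged in the call trace. A sketch of the same call (values copied from the log):

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')

    # The returned dict carries the iSCSI initiator IQN, NVMe host NQN and
    # multipath flags shown in the return trace above.
    print(props['initiator'], props['nqn'])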
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.222 257641 DEBUG nova.virt.block_device [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating existing volume attachment record: 21c590f0-94de-4352-b9d7-f49bae8239c5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.283 257641 DEBUG oslo_concurrency.lockutils [None req-81a35afb-45bc-43fb-8287-d9d2c1ace219 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.325 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 671 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 173 op/s
Nov 29 03:40:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.635 257641 DEBUG nova.compute.manager [req-1106ca79-e297-4413-8d56-c53c6a4a4a59 req-1706fbd5-0ca0-4924-aeb3-be386e28aa2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.636 257641 DEBUG oslo_concurrency.lockutils [req-1106ca79-e297-4413-8d56-c53c6a4a4a59 req-1706fbd5-0ca0-4924-aeb3-be386e28aa2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.637 257641 DEBUG oslo_concurrency.lockutils [req-1106ca79-e297-4413-8d56-c53c6a4a4a59 req-1706fbd5-0ca0-4924-aeb3-be386e28aa2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.637 257641 DEBUG oslo_concurrency.lockutils [req-1106ca79-e297-4413-8d56-c53c6a4a4a59 req-1706fbd5-0ca0-4924-aeb3-be386e28aa2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.637 257641 DEBUG nova.compute.manager [req-1106ca79-e297-4413-8d56-c53c6a4a4a59 req-1706fbd5-0ca0-4924-aeb3-be386e28aa2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] No waiting events found dispatching network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:14 np0005539550 nova_compute[257631]: 2025-11-29 08:40:14.638 257641 WARNING nova.compute.manager [req-1106ca79-e297-4413-8d56-c53c6a4a4a59 req-1706fbd5-0ca0-4924-aeb3-be386e28aa2f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received unexpected event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:40:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:15 np0005539550 nova_compute[257631]: 2025-11-29 08:40:15.543 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:16.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.422 257641 DEBUG nova.objects.instance [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.475 257641 DEBUG nova.virt.libvirt.driver [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Attempting to attach volume 86e515f3-ecb8-4e58-99f6-23e4d8ee3e76 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:40:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 631 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 259 op/s
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.478 257641 DEBUG nova.virt.libvirt.guest [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:40:16 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-86e515f3-ecb8-4e58-99f6-23e4d8ee3e76">
Nov 29 03:40:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:40:16 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:40:16 np0005539550 nova_compute[257631]:  <serial>86e515f3-ecb8-4e58-99f6-23e4d8ee3e76</serial>
Nov 29 03:40:16 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:40:16 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Nov 29 03:40:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:16.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.732 257641 DEBUG nova.virt.libvirt.driver [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.735 257641 DEBUG nova.virt.libvirt.driver [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.737 257641 DEBUG nova.virt.libvirt.driver [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.737 257641 DEBUG nova.virt.libvirt.driver [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.737 257641 DEBUG nova.virt.libvirt.driver [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] No VIF found with MAC fa:16:3e:52:60:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:40:16 np0005539550 nova_compute[257631]: 2025-11-29 08:40:16.980 257641 DEBUG oslo_concurrency.lockutils [None req-31bfdced-47cb-4acf-9400-00b53f66ddba facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:17 np0005539550 nova_compute[257631]: 2025-11-29 08:40:17.942 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:17 np0005539550 nova_compute[257631]: 2025-11-29 08:40:17.942 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:40:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:18.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 631 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 259 op/s
Nov 29 03:40:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:18.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:18.971 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:18.972 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:18.973 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:19 np0005539550 nova_compute[257631]: 2025-11-29 08:40:19.329 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:20.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:20 np0005539550 nova_compute[257631]: 2025-11-29 08:40:20.286 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006510053159315094 of space, bias 1.0, pg target 1.9530159477945281 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008658563360785408 of space, bias 1.0, pg target 2.588910444874837 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0034349627186860228 of space, bias 1.0, pg target 1.0201839274497488 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 16)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
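The pg_autoscaler lines above follow a simple shape: pg target is roughly capacity ratio x (OSD count x mon_target_pg_per_osd) x bias, then quantized to a power of two and left alone unless it drifts far from the current pg_num. With the apparent defaults here (3 OSDs, 100 PGs per OSD) the '.mgr' line checks out exactly; the other pools fold in effective-ratio adjustments, so they deviate slightly:

    ratio = 2.0538165363856318e-05     # '.mgr' share of raw space, from the log
    osds, target_per_osd, bias = 3, 100, 1.0

    pg_target = ratio * osds * target_per_osd * bias
    print(pg_target)  # 0.00616144960..., matching "pg target 0.006161449609156895"
    # Quantization then clamps such tiny targets to the pool minimum,
    # hence "quantized to 1 (current 1)".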
Nov 29 03:40:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 617 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.2 MiB/s wr, 288 op/s
Nov 29 03:40:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:20.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:20 np0005539550 nova_compute[257631]: 2025-11-29 08:40:20.545 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 617 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 223 op/s
Nov 29 03:40:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:22.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:23 np0005539550 nova_compute[257631]: 2025-11-29 08:40:23.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:40:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:24.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:24 np0005539550 nova_compute[257631]: 2025-11-29 08:40:24.330 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 617 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 152 op/s
Nov 29 03:40:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:24.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:24 np0005539550 nova_compute[257631]: 2025-11-29 08:40:24.646 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:25 np0005539550 nova_compute[257631]: 2025-11-29 08:40:25.300 257641 DEBUG nova.compute.manager [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:40:25 np0005539550 nova_compute[257631]: 2025-11-29 08:40:25.300 257641 DEBUG nova.compute.manager [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing instance network info cache due to event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:40:25 np0005539550 nova_compute[257631]: 2025-11-29 08:40:25.301 257641 DEBUG oslo_concurrency.lockutils [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:40:25 np0005539550 nova_compute[257631]: 2025-11-29 08:40:25.301 257641 DEBUG oslo_concurrency.lockutils [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:40:25 np0005539550 nova_compute[257631]: 2025-11-29 08:40:25.301 257641 DEBUG nova.network.neutron [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:40:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Nov 29 03:40:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Nov 29 03:40:25 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Nov 29 03:40:25 np0005539550 nova_compute[257631]: 2025-11-29 08:40:25.548 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:26Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:b9:6e 10.100.0.14
Nov 29 03:40:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 617 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 907 KiB/s rd, 134 KiB/s wr, 82 op/s
Nov 29 03:40:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:26.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:27 np0005539550 nova_compute[257631]: 2025-11-29 08:40:27.440 257641 DEBUG nova.compute.manager [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:40:27 np0005539550 nova_compute[257631]: 2025-11-29 08:40:27.440 257641 DEBUG nova.compute.manager [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing instance network info cache due to event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:40:27 np0005539550 nova_compute[257631]: 2025-11-29 08:40:27.440 257641 DEBUG oslo_concurrency.lockutils [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:28 np0005539550 podman[374917]: 2025-11-29 08:40:28.063048662 +0000 UTC m=+0.068448093 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:40:28 np0005539550 podman[374918]: 2025-11-29 08:40:28.0807427 +0000 UTC m=+0.081165065 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
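The two podman events above are periodic container health checks; per the config_data shown, each container's configured test is the mounted /openstack/healthcheck script. The same check the health_status=healthy events correspond to can be driven on demand:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # ('/openstack/healthcheck' here) and exits 0 when it reports healthy.
    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)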
Nov 29 03:40:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:28.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 617 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 946 KiB/s rd, 137 KiB/s wr, 88 op/s
Nov 29 03:40:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:28.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:28.577 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:40:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:28.578 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:40:28 np0005539550 nova_compute[257631]: 2025-11-29 08:40:28.620 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:28 np0005539550 nova_compute[257631]: 2025-11-29 08:40:28.656 257641 DEBUG nova.network.neutron [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated VIF entry in instance network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:40:28 np0005539550 nova_compute[257631]: 2025-11-29 08:40:28.657 257641 DEBUG nova.network.neutron [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
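The cached network_info above is a JSON list with one entry per VIF; the fixed and floating addresses sit a few levels down. A short extraction sketch against a trimmed copy of that structure:

    import json

    # Trimmed copy of the network_info logged above (one VIF, one subnet).
    nw_info_json = '''[{"id": "d3028052-76d1-49d3-9c2d-9126edc45935",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.4",
        "floating_ips": [{"address": "192.168.122.220"}]}]}]}}]'''
    vif = json.loads(nw_info_json)[0]
    ip = vif["network"]["subnets"][0]["ips"][0]
    print(vif["id"], ip["address"], ip["floating_ips"][0]["address"])
    # d3028052-... 10.100.0.4 192.168.122.220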
Nov 29 03:40:28 np0005539550 nova_compute[257631]: 2025-11-29 08:40:28.679 257641 DEBUG oslo_concurrency.lockutils [req-e348dc90-e85c-445a-b147-a8321317ee1c req-f34db3b0-1538-474b-9e08-119ab02695b1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:40:28 np0005539550 nova_compute[257631]: 2025-11-29 08:40:28.679 257641 DEBUG oslo_concurrency.lockutils [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:40:28 np0005539550 nova_compute[257631]: 2025-11-29 08:40:28.680 257641 DEBUG nova.network.neutron [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:40:29 np0005539550 nova_compute[257631]: 2025-11-29 08:40:29.334 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:29 np0005539550 nova_compute[257631]: 2025-11-29 08:40:29.532 257641 DEBUG nova.compute.manager [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:40:29 np0005539550 nova_compute[257631]: 2025-11-29 08:40:29.532 257641 DEBUG nova.compute.manager [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing instance network info cache due to event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:40:29 np0005539550 nova_compute[257631]: 2025-11-29 08:40:29.532 257641 DEBUG oslo_concurrency.lockutils [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:40:30 np0005539550 nova_compute[257631]: 2025-11-29 08:40:30.177 257641 DEBUG nova.network.neutron [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated VIF entry in instance network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:40:30 np0005539550 nova_compute[257631]: 2025-11-29 08:40:30.177 257641 DEBUG nova.network.neutron [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:40:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:30.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:30 np0005539550 nova_compute[257631]: 2025-11-29 08:40:30.214 257641 DEBUG oslo_concurrency.lockutils [req-b0beb79a-eab9-4881-9f94-bdae9b6882ff req-9bfedd77-2309-4ab3-b437-32e5e666edd2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:40:30 np0005539550 nova_compute[257631]: 2025-11-29 08:40:30.214 257641 DEBUG oslo_concurrency.lockutils [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:40:30 np0005539550 nova_compute[257631]: 2025-11-29 08:40:30.214 257641 DEBUG nova.network.neutron [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:40:30 np0005539550 nova_compute[257631]: 2025-11-29 08:40:30.319 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 617 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 635 KiB/s rd, 53 KiB/s wr, 58 op/s
Nov 29 03:40:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:30 np0005539550 nova_compute[257631]: 2025-11-29 08:40:30.550 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:30.581 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
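This transaction is the agent acknowledging nb_cfg 55 from the SB_Global update it matched two seconds earlier (it logged "Delaying updating chassis table for 2 seconds"), by stamping its own Chassis_Private row. A hedged sketch of the same write, assuming `api` is an already-connected ovsdbapp southbound idl backend like the one the agent holds; record UUID and values are taken from the log line:

    api.db_set(
        'Chassis_Private', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),
    ).execute(check_error=True)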
Nov 29 03:40:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:32.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 637 KiB/s rd, 55 KiB/s wr, 57 op/s
Nov 29 03:40:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.340 257641 DEBUG nova.network.neutron [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated VIF entry in instance network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.341 257641 DEBUG nova.network.neutron [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.361 257641 DEBUG oslo_concurrency.lockutils [req-66f7e35d-160c-4b92-b409-e0a9804eb34a req-ecfc565b-0aad-4fea-9f7e-cbf72bedf695 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.599 257641 DEBUG nova.compute.manager [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.600 257641 DEBUG nova.compute.manager [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing instance network info cache due to event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.601 257641 DEBUG oslo_concurrency.lockutils [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.601 257641 DEBUG oslo_concurrency.lockutils [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:40:33 np0005539550 nova_compute[257631]: 2025-11-29 08:40:33.602 257641 DEBUG nova.network.neutron [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:40:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:34.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:34 np0005539550 nova_compute[257631]: 2025-11-29 08:40:34.335 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 619 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 637 KiB/s rd, 41 KiB/s wr, 57 op/s
Nov 29 03:40:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:35 np0005539550 nova_compute[257631]: 2025-11-29 08:40:35.552 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:40:35 np0005539550 nova_compute[257631]: 2025-11-29 08:40:35.671 257641 DEBUG nova.network.neutron [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated VIF entry in instance network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:40:35 np0005539550 nova_compute[257631]: 2025-11-29 08:40:35.672 257641 DEBUG nova.network.neutron [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:40:35 np0005539550 nova_compute[257631]: 2025-11-29 08:40:35.688 257641 DEBUG oslo_concurrency.lockutils [req-f6e1eb46-cfa1-4193-a881-59bc486f2ce9 req-57c7c230-f68e-428f-b20f-3eb333baae07 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:40:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:36.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 621 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 577 KiB/s rd, 237 KiB/s wr, 59 op/s
Nov 29 03:40:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:36.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.689 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.690 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
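The paired "Acquiring lock"/"Lock ... acquired" lines throughout this section come from oslo.concurrency's lock helpers; here nova serializes the whole build on the new instance's UUID. A minimal sketch of the pattern (the body is illustrative):

    from oslo_concurrency import lockutils

    # lockutils.lock() logs the acquire/release pairs seen above; the
    # instance UUID is used as the lock name.
    with lockutils.lock('eb84e8b8-cb17-4f34-8f44-41f2063a152a'):
        pass  # _locked_do_build_and_run_instance work happens here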
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.747 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.833 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.834 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.844 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.844 257641 INFO nova.compute.claims [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Claim successful on node compute-0.ctlplane.example.com
Nov 29 03:40:36 np0005539550 nova_compute[257631]: 2025-11-29 08:40:36.968 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.084 257641 DEBUG oslo_concurrency.lockutils [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.084 257641 DEBUG oslo_concurrency.lockutils [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.103 257641 INFO nova.compute.manager [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Detaching volume 8bde7127-007c-4f43-9a60-267c3b300611
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.322 257641 INFO nova.virt.block_device [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Attempting to driver detach volume 8bde7127-007c-4f43-9a60-267c3b300611 from mountpoint /dev/vdb
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.334 257641 DEBUG nova.virt.libvirt.driver [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Attempting to detach device vdb from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.335 257641 DEBUG nova.virt.libvirt.guest [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-8bde7127-007c-4f43-9a60-267c3b300611">
Nov 29 03:40:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <serial>8bde7127-007c-4f43-9a60-267c3b300611</serial>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:40:37 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:40:37 np0005539550 podman[375007]: 2025-11-29 08:40:37.343312472 +0000 UTC m=+0.085326621 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.344 257641 INFO nova.virt.libvirt.driver [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdb from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the persistent domain config.
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.345 257641 DEBUG nova.virt.libvirt.driver [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.346 257641 DEBUG nova.virt.libvirt.guest [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-8bde7127-007c-4f43-9a60-267c3b300611">
Nov 29 03:40:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <serial>8bde7127-007c-4f43-9a60-267c3b300611</serial>
Nov 29 03:40:37 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:40:37 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:40:37 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.416 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405637.41607, db9a33b0-f745-4457-b7a6-d22017777a85 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.418 257641 DEBUG nova.virt.libvirt.driver [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance db9a33b0-f745-4457-b7a6-d22017777a85 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.420 257641 INFO nova.virt.libvirt.driver [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdb from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the live domain config.
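The two identical <disk> XML blocks above correspond to removing the RBD volume first from the persistent domain definition and then from the live domain, after which nova waits for the DeviceRemovedEvent logged in between. A sketch of the same operation with the libvirt-python bindings (an abbreviated XML is used; libvirt matches the device by its target):

    import libvirt

    disk_xml = """<disk type="network" device="disk">
      <target dev="vdb" bus="virtio"/>
      <serial>8bde7127-007c-4f43-9a60-267c3b300611</serial>
    </disk>"""  # abbreviated from the element logged above

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("db9a33b0-f745-4457-b7a6-d22017777a85")
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # live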
Nov 29 03:40:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2486720418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.460 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
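This is nova's capacity probe against the RBD backend (the matching mon-side audit entry appears just above); the resource tracker reads cluster-wide totals from the JSON. Reproducing the call directly:

    import json
    import subprocess

    # Same command as logged; the "stats" object carries the cluster totals.
    raw = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])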
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.466 257641 DEBUG nova.compute.provider_tree [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.479 257641 DEBUG nova.scheduler.client.report [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
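The schedulable capacity implied by that inventory follows placement's rule capacity = (total - reserved) * allocation_ratio, worked out here from the values logged above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1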
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.499 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.499 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.553 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.553 257641 DEBUG nova.network.neutron [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.566 257641 DEBUG nova.objects.instance [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.571 257641 INFO nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.588 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.606 257641 DEBUG oslo_concurrency.lockutils [None req-bcc6f4b5-be37-434b-9d9d-5ba56b5ca7af facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.521s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.667 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.669 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.669 257641 INFO nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Creating image(s)#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.737 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.768 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.794 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.798 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.835 257641 DEBUG nova.policy [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.877 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
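The qemu-img probe above runs under oslo_concurrency.prlimit with a 1 GiB address-space cap and a 30 s CPU cap, so a hostile or corrupt image cannot exhaust the compute host while being inspected. The same guarded call issued via processutils, as a sketch:

    # Sketch: resource-limited qemu-img info, equivalent to the CMD
    # logged above; parses the JSON the probe returns.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30),
    )
    info = json.loads(out)
    print(info['format'], info['virtual-size'])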
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.878 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.879 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.879 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.908 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:37 np0005539550 nova_compute[257631]: 2025-11-29 08:40:37.912 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:38.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
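These radosgw "beast" triplets recur roughly every two seconds, alternating between 192.168.122.100 and .102 as anonymous HEAD / requests, which is consistent with load-balancer health checks against the object gateway. A small, purely illustrative parser for that access-log line (field layout inferred from the samples in this trace):

    # Sketch: pull client, status and latency out of a beast log line.
    import re

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:40:38.189 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = re.search(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s',
        line)
    print(m.group('client'), m.group('status'), float(m.group('lat')))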
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.448 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
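The import above copies the flattened base image out of the local image cache into the vms pool as a format-2 RBD image (format 2 is the layering-capable format). The exact command nova shelled out to, reissued standalone as a sketch:

    # Sketch: the same rbd import, run directly.
    import subprocess

    subprocess.run(
        ['rbd', 'import',
         '--pool', 'vms',
         '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
         'eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk',
         '--image-format=2',
         '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)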
Nov 29 03:40:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 640 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 945 KiB/s wr, 15 op/s
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.522 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
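The base image is only about 21 MB (size=21430272 in the image_meta later in this trace), so after import nova grows the disk to the flavor's 1 GiB root volume. The same resize through the rbd binding, sketched under the same assumptions as the existence check above:

    # Sketch: grow the instance disk to root_gb (1 GiB = 1073741824 bytes).
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            with rbd.Image(ioctx, 'eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk') as image:
                image.resize(1 * 1024 ** 3)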
Nov 29 03:40:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:38.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.619 257641 DEBUG nova.objects.instance [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid eb84e8b8-cb17-4f34-8f44-41f2063a152a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.643 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.644 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Ensure instance console log exists: /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.644 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.645 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:38 np0005539550 nova_compute[257631]: 2025-11-29 08:40:38.645 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:39 np0005539550 nova_compute[257631]: 2025-11-29 08:40:39.378 257641 DEBUG nova.network.neutron [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Successfully created port: 3026e417-ee61-4f08-8c26-06f78013fc48 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
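Port 3026e417-ee61-4f08-8c26-06f78013fc48 is created through the Neutron API on the instance's tenant network. With openstacksdk the equivalent minimal call looks roughly like this (the cloud name and device_owner here are illustrative; nova drives Neutron through its own client internally):

    # Sketch: create a port on the tempest network via openstacksdk.
    import openstack

    conn = openstack.connect(cloud='overcloud')   # hypothetical clouds.yaml entry
    port = conn.network.create_port(
        network_id='bbe9de28-a3ca-45b3-82e4-243328809a43',
        device_owner='compute:nova',
    )
    print(port.id, port.mac_address)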
Nov 29 03:40:39 np0005539550 nova_compute[257631]: 2025-11-29 08:40:39.398 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:40.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.246 257641 DEBUG nova.network.neutron [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Successfully updated port: 3026e417-ee61-4f08-8c26-06f78013fc48 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.260 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.260 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.261 257641 DEBUG nova.network.neutron [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.322 257641 DEBUG nova.compute.manager [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-changed-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.322 257641 DEBUG nova.compute.manager [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Refreshing instance network info cache due to event network-changed-3026e417-ee61-4f08-8c26-06f78013fc48. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.322 257641 DEBUG oslo_concurrency.lockutils [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.438 257641 DEBUG nova.network.neutron [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:40:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 640 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.8 KiB/s rd, 943 KiB/s wr, 11 op/s
Nov 29 03:40:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:40.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:40 np0005539550 nova_compute[257631]: 2025-11-29 08:40:40.554 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.259 257641 DEBUG oslo_concurrency.lockutils [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.260 257641 DEBUG oslo_concurrency.lockutils [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.274 257641 INFO nova.compute.manager [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Detaching volume 86e515f3-ecb8-4e58-99f6-23e4d8ee3e76#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.616 257641 INFO nova.virt.block_device [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Attempting to driver detach volume 86e515f3-ecb8-4e58-99f6-23e4d8ee3e76 from mountpoint /dev/vdc#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.626 257641 DEBUG nova.virt.libvirt.driver [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Attempting to detach device vdc from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.627 257641 DEBUG nova.virt.libvirt.guest [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-86e515f3-ecb8-4e58-99f6-23e4d8ee3e76">
Nov 29 03:40:41 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <serial>86e515f3-ecb8-4e58-99f6-23e4d8ee3e76</serial>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:40:41 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
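The detach is two-phase: the <disk> element is first removed from the persistent domain definition, then (with up to eight retries, per the "(1/8)" marker below) from the live domain, after which the driver waits for libvirt's asynchronous device-removed event before declaring success. The corresponding python-libvirt calls, sketched:

    # Sketch: two-phase volume detach as seen above (python-libvirt).
    import libvirt

    DISK_XML = '...'  # the <disk type="network"> element logged above

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('db9a33b0-f745-4457-b7a6-d22017777a85')

    # 1) Persistent config: the change survives the next guest restart.
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # 2) Live domain: hot-unplug; completion is asynchronous and is
    #    reported via a VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED callback —
    #    the DeviceRemovedEvent the driver logs a few lines below.
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)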
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.634 257641 INFO nova.virt.libvirt.driver [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdc from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the persistent domain config.#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.634 257641 DEBUG nova.virt.libvirt.driver [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.635 257641 DEBUG nova.virt.libvirt.guest [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-86e515f3-ecb8-4e58-99f6-23e4d8ee3e76">
Nov 29 03:40:41 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <target dev="vdc" bus="virtio"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <serial>86e515f3-ecb8-4e58-99f6-23e4d8ee3e76</serial>
Nov 29 03:40:41 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Nov 29 03:40:41 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:40:41 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.699 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405641.698549, db9a33b0-f745-4457-b7a6-d22017777a85 => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.700 257641 DEBUG nova.virt.libvirt.driver [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance db9a33b0-f745-4457-b7a6-d22017777a85 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.703 257641 INFO nova.virt.libvirt.driver [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully detached device vdc from instance db9a33b0-f745-4457-b7a6-d22017777a85 from the live domain config.#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.898 257641 DEBUG nova.objects.instance [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'flavor' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:41 np0005539550 nova_compute[257631]: 2025-11-29 08:40:41.954 257641 DEBUG oslo_concurrency.lockutils [None req-cb91445b-7953-4dfe-b2c4-bd68f1e3dce0 facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:42.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.251 257641 DEBUG nova.network.neutron [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updating instance_info_cache with network_info: [{"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
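The cache entry above is a list of VIF dicts; everything nova needs to wire the guest (MAC, fixed IPs, MTU, OVS bridge) is nested inside it. A short sketch of walking that structure, trimmed to the fields read here:

    # Sketch: pull addressing details out of the cached network_info
    # (structure copied from the log line above, heavily trimmed).
    network_info = [{
        "id": "3026e417-ee61-4f08-8c26-06f78013fc48",
        "address": "fa:16:3e:1e:10:6a",
        "network": {
            "subnets": [{"cidr": "10.100.0.0/28",
                         "ips": [{"address": "10.100.0.11", "type": "fixed"}]}],
            "meta": {"mtu": 1442},
        },
    }]

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])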
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.278 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.278 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Instance network_info: |[{"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.278 257641 DEBUG oslo_concurrency.lockutils [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.279 257641 DEBUG nova.network.neutron [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Refreshing network info cache for port 3026e417-ee61-4f08-8c26-06f78013fc48 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.281 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Start _get_guest_xml network_info=[{"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.285 257641 WARNING nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.290 257641 DEBUG nova.virt.libvirt.host [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.290 257641 DEBUG nova.virt.libvirt.host [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.293 257641 DEBUG nova.virt.libvirt.host [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.294 257641 DEBUG nova.virt.libvirt.host [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
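The two probes above first look for a cgroups-v1 cpu controller (absent on this host) and then find one via the cgroups-v2 unified hierarchy; the result decides whether CPU shares/quota can be applied to the guest. On a v2 host the check reduces to reading one file, as sketched:

    # Sketch: v2-style detection of a usable 'cpu' controller.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        controllers = Path('/sys/fs/cgroup/cgroup.controllers')
        try:
            return 'cpu' in controllers.read_text().split()
        except OSError:
            return False

    print(has_cgroupsv2_cpu_controller())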
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.295 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.296 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.296 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.296 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.296 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.297 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.297 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.297 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.297 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.298 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.298 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.298 257641 DEBUG nova.virt.hardware [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
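With no topology constraints from flavor or image (preferred 0:0:0, ceilings of 65536 each), the enumeration degenerates to the factorizations of the vCPU count, and for 1 vCPU that is exactly one candidate: 1 socket x 1 core x 1 thread. The search sketched in plain Python (limit handling simplified relative to nova.virt.hardware):

    # Sketch: enumerate (sockets, cores, threads) triples whose product
    # equals the vCPU count, mirroring the 'Got 1 possible topologies'
    # result above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]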
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.300 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 667 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 57 op/s
Nov 29 03:40:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2713204689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.794 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
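The "ceph mon dump --format=json" round-trips (two in this trace, ~0.5 s each) are how nova discovers the monitor addresses that become the <host .../> entries in the guest's RBD disk XML further down. Parsing that dump, as a sketch (JSON field names vary slightly across Ceph releases):

    # Sketch: list monitor endpoints from 'ceph mon dump'.
    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    for mon in json.loads(out)['mons']:
        print(mon['name'], mon.get('addr'))   # e.g. 192.168.122.100:6789/0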
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.858 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:42 np0005539550 nova_compute[257631]: 2025-11-29 08:40:42.863 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:40:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:40:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.247 257641 DEBUG nova.compute.manager [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.248 257641 DEBUG nova.compute.manager [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing instance network info cache due to event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.248 257641 DEBUG oslo_concurrency.lockutils [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.249 257641 DEBUG oslo_concurrency.lockutils [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.249 257641 DEBUG nova.network.neutron [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401561850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.345 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.347 257641 DEBUG nova.virt.libvirt.vif [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1429604794',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1429604794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=190,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNZ+BnKkVfjY6of6GMTavtr+ZOiNYfMudtVmziwMebSFDNerpQamEFJH3x+Vl9cRJZbecAD6+Xjl6aHihOigg85spjIzHavMMidp+NkOyN3qdZUTPVsH+OVXf+hEueHIw==',key_name='tempest-TestSecurityGroupsBasicOps-165052372',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-nfdbbpsd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:37Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=eb84e8b8-cb17-4f34-8f44-41f2063a152a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.347 257641 DEBUG nova.network.os_vif_util [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.348 257641 DEBUG nova.network.os_vif_util [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:10:6a,bridge_name='br-int',has_traffic_filtering=True,id=3026e417-ee61-4f08-8c26-06f78013fc48,network=Network(bbe9de28-a3ca-45b3-82e4-243328809a43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3026e417-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.349 257641 DEBUG nova.objects.instance [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb84e8b8-cb17-4f34-8f44-41f2063a152a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.367 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <uuid>eb84e8b8-cb17-4f34-8f44-41f2063a152a</uuid>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <name>instance-000000be</name>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1429604794</nova:name>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:40:42</nova:creationTime>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <nova:port uuid="3026e417-ee61-4f08-8c26-06f78013fc48">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <entry name="serial">eb84e8b8-cb17-4f34-8f44-41f2063a152a</entry>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <entry name="uuid">eb84e8b8-cb17-4f34-8f44-41f2063a152a</entry>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk.config">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:1e:10:6a"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <target dev="tap3026e417-ee"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/console.log" append="off"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:40:43 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:40:43 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:40:43 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:40:43 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
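
The block ending at </domain> above is the complete guest definition Nova hands to libvirt for instance eb84e8b8: a q35 machine whose disks are served straight from RBD, with the three Ceph monitors listed inline and cephx auth referencing a libvirt secret. As a rough illustration of how such a <disk type="network"> element can be assembled programmatically (lxml sketch, not Nova's actual generator; every value is copied from the dump above):

    from lxml import etree

    # Illustrative rebuild of the RBD root-disk element from the domain XML above.
    disk = etree.Element("disk", type="network", device="disk")
    etree.SubElement(disk, "driver", type="raw", cache="none")
    source = etree.SubElement(
        disk, "source", protocol="rbd",
        name="vms/eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk")
    for mon in ("192.168.122.100", "192.168.122.102", "192.168.122.101"):
        etree.SubElement(source, "host", name=mon, port="6789")
    auth = etree.SubElement(disk, "auth", username="openstack")
    etree.SubElement(auth, "secret", type="ceph",
                     uuid="b66774a7-56d9-5535-bd8c-681234404870")
    etree.SubElement(disk, "target", dev="vda", bus="virtio")
    print(etree.tostring(disk, pretty_print=True).decode())
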
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.368 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Preparing to wait for external event network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.368 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.368 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.368 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.369 257641 DEBUG nova.virt.libvirt.vif [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1429604794',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1429604794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=190,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNZ+BnKkVfjY6of6GMTavtr+ZOiNYfMudtVmziwMebSFDNerpQamEFJH3x+Vl9cRJZbecAD6+Xjl6aHihOigg85spjIzHavMMidp+NkOyN3qdZUTPVsH+OVXf+hEueHIw==',key_name='tempest-TestSecurityGroupsBasicOps-165052372',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-nfdbbpsd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:40:37Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=eb84e8b8-cb17-4f34-8f44-41f2063a152a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.369 257641 DEBUG nova.network.os_vif_util [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.370 257641 DEBUG nova.network.os_vif_util [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:10:6a,bridge_name='br-int',has_traffic_filtering=True,id=3026e417-ee61-4f08-8c26-06f78013fc48,network=Network(bbe9de28-a3ca-45b3-82e4-243328809a43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3026e417-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.370 257641 DEBUG os_vif [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:10:6a,bridge_name='br-int',has_traffic_filtering=True,id=3026e417-ee61-4f08-8c26-06f78013fc48,network=Network(bbe9de28-a3ca-45b3-82e4-243328809a43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3026e417-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
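
Nova has now converted its own VIF dict into the os-vif VIFOpenVSwitch object shown above and handed it to os_vif.plug(), which dispatches to the 'ovs' plugin. A minimal sketch of driving the same entry point directly (values copied from the log; a complete call would also populate the network and port-profile objects, omitted here):

    import os_vif
    from os_vif import objects

    os_vif.initialize()  # loads the 'ovs' plugin, among others

    instance = objects.instance_info.InstanceInfo(
        uuid='eb84e8b8-cb17-4f34-8f44-41f2063a152a',
        name='instance-000000be')
    vif = objects.vif.VIFOpenVSwitch(
        id='3026e417-ee61-4f08-8c26-06f78013fc48',
        address='fa:16:3e:1e:10:6a',
        vif_name='tap3026e417-ee',
        bridge_name='br-int',
        has_traffic_filtering=True)
    os_vif.plug(vif, instance)  # the plug() seen at os_vif/__init__.py:76 above
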
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.371 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.372 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.372 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.377 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.377 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3026e417-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.378 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3026e417-ee, col_values=(('external_ids', {'iface-id': '3026e417-ee61-4f08-8c26-06f78013fc48', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:10:6a', 'vm-uuid': 'eb84e8b8-cb17-4f34-8f44-41f2063a152a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.379 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:43 np0005539550 NetworkManager[49039]: <info>  [1764405643.3804] manager: (tap3026e417-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.381 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.388 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.388 257641 INFO os_vif [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:10:6a,bridge_name='br-int',has_traffic_filtering=True,id=3026e417-ee61-4f08-8c26-06f78013fc48,network=Network(bbe9de28-a3ca-45b3-82e4-243328809a43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3026e417-ee')#033[00m
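
The plug itself is the pair of OVSDB transactions logged just above: AddBridgeCommand (a no-op here, hence "Transaction caused no change"), then AddPortCommand plus a DbSetCommand that stamps the Interface with the external_ids OVN will match on. A standalone sketch of the same operations through ovsdbapp, combined into one transaction for brevity (the socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # path assumed
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap3026e417-ee', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3026e417-ee',
            ('external_ids', {
                'iface-id': '3026e417-ee61-4f08-8c26-06f78013fc48',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:1e:10:6a',
                'vm-uuid': 'eb84e8b8-cb17-4f34-8f44-41f2063a152a'})))
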
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.509 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.510 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.510 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:1e:10:6a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.510 257641 INFO nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Using config drive#033[00m
Nov 29 03:40:43 np0005539550 nova_compute[257631]: 2025-11-29 08:40:43.541 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b59019ab-b1e8-4801-b0a3-b1cc5b90352d does not exist
Nov 29 03:40:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6f22cdbd-7e95-4c97-a302-e75342a1ba71 does not exist
Nov 29 03:40:43 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f546f8d4-7cb1-4838-b2e3-162a7ffa6454 does not exist
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:40:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
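
Interleaved with the instance spawn, the cephadm mgr module is issuing mon commands (config rm, auth get, config generate-minimal-conf) that the monitor dispatches and audits above. The same JSON command interface is reachable from librados; a minimal sketch of one of those dispatches (assumes /etc/ceph/ceph.conf and a usable keyring on the caller):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # keyring assumed
    cluster.connect()
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b'')
    print(ret, out.decode())
    cluster.shutdown()
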
Nov 29 03:40:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.375 257641 INFO nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Creating config drive at /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/disk.config#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.380 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpry9jr8fz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.409 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.428 257641 DEBUG nova.network.neutron [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updated VIF entry in instance network info cache for port 3026e417-ee61-4f08-8c26-06f78013fc48. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.429 257641 DEBUG nova.network.neutron [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updating instance_info_cache with network_info: [{"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.456 257641 DEBUG oslo_concurrency.lockutils [req-cb46b4df-06e6-4e2d-b8cb-1fd772aba7dc req-a2cc5d62-460f-4d49-a49e-592c96f7cacd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 667 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.518 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpry9jr8fz" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
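
The "Using config drive" path culminates in the mkisofs run that just returned 0: an ISO9660 image with volume label config-2, built from a temporary metadata directory. A trimmed reproduction of that invocation (both paths are placeholders for the per-instance tmpdir and output file):

    import subprocess

    # Placeholder paths; the flags mirror the mkisofs invocation logged above.
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', '/tmp/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
         '-V', 'config-2', '/tmp/config_drive_contents'],
        check=True)
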
Nov 29 03:40:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.551 257641 DEBUG nova.storage.rbd_utils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.559 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/disk.config eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:44 np0005539550 podman[375568]: 2025-11-29 08:40:44.577594897 +0000 UTC m=+0.046399175 container create 945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tu, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:40:44 np0005539550 systemd[1]: Started libpod-conmon-945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d.scope.
Nov 29 03:40:44 np0005539550 podman[375568]: 2025-11-29 08:40:44.556332779 +0000 UTC m=+0.025137077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:44 np0005539550 podman[375568]: 2025-11-29 08:40:44.676545571 +0000 UTC m=+0.145349869 container init 945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:44 np0005539550 podman[375568]: 2025-11-29 08:40:44.685038956 +0000 UTC m=+0.153843234 container start 945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tu, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:40:44 np0005539550 podman[375568]: 2025-11-29 08:40:44.688806201 +0000 UTC m=+0.157610499 container attach 945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 03:40:44 np0005539550 relaxed_tu[375603]: 167 167
Nov 29 03:40:44 np0005539550 systemd[1]: libpod-945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d.scope: Deactivated successfully.
Nov 29 03:40:44 np0005539550 podman[375568]: 2025-11-29 08:40:44.692374791 +0000 UTC m=+0.161179089 container died 945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:40:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-73861fc805f83fc49d8d3c247d1701bafa06f2422ee00676e98791f77865ba41-merged.mount: Deactivated successfully.
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.727 257641 DEBUG oslo_concurrency.processutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/disk.config eb84e8b8-cb17-4f34-8f44-41f2063a152a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.728 257641 INFO nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Deleting local config drive /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a/disk.config because it was imported into RBD.#033[00m
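
With Nova's RBD image backend the ISO does not stay local: the two lines above show it being pushed into the vms pool as <uuid>_disk.config via `rbd import`, after which the local copy is deleted. A rough equivalent using the Python rbd binding instead of the CLI (illustrative image name, no error handling):

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    with open('/tmp/disk.config', 'rb') as f:
        data = f.read()
    # Create a format-2 image sized to the ISO, then write it in one pass.
    rbd.RBD().create(ioctx, 'example_disk.config', len(data), old_format=False)
    with rbd.Image(ioctx, 'example_disk.config') as image:
        image.write(data, 0)
    ioctx.close()
    cluster.shutdown()
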
Nov 29 03:40:44 np0005539550 podman[375568]: 2025-11-29 08:40:44.737498953 +0000 UTC m=+0.206303231 container remove 945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:44 np0005539550 systemd[1]: libpod-conmon-945a6c2620923c0680cda09ddfe65adad7a724615a2787530c2fd2e8e890ef7d.scope: Deactivated successfully.
Nov 29 03:40:44 np0005539550 kernel: tap3026e417-ee: entered promiscuous mode
Nov 29 03:40:44 np0005539550 NetworkManager[49039]: <info>  [1764405644.7813] manager: (tap3026e417-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/387)
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.782 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:44Z|00872|binding|INFO|Claiming lport 3026e417-ee61-4f08-8c26-06f78013fc48 for this chassis.
Nov 29 03:40:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:44Z|00873|binding|INFO|3026e417-ee61-4f08-8c26-06f78013fc48: Claiming fa:16:3e:1e:10:6a 10.100.0.11
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.791 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:10:6a 10.100.0.11'], port_security=['fa:16:3e:1e:10:6a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'eb84e8b8-cb17-4f34-8f44-41f2063a152a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbe9de28-a3ca-45b3-82e4-243328809a43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a30ef2cc-c9d3-4e8a-9b25-77976762a1b1 de70ef26-d410-46da-a8db-c626d12e2323', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5b4d98b-ad08-4e05-9935-0f27596c26ff, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=3026e417-ee61-4f08-8c26-06f78013fc48) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.792 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 3026e417-ee61-4f08-8c26-06f78013fc48 in datapath bbe9de28-a3ca-45b3-82e4-243328809a43 bound to our chassis#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.794 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bbe9de28-a3ca-45b3-82e4-243328809a43#033[00m
Nov 29 03:40:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:44Z|00874|binding|INFO|Setting lport 3026e417-ee61-4f08-8c26-06f78013fc48 ovn-installed in OVS
Nov 29 03:40:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:44Z|00875|binding|INFO|Setting lport 3026e417-ee61-4f08-8c26-06f78013fc48 up in Southbound
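
Those four ovn-controller lines are the OVN side of the handshake: the chassis spots an OVS interface whose external_ids:iface-id equals a Port_Binding's logical_port, claims the lport, marks it ovn-installed, and flips it up in the southbound DB, which is what ultimately lets Neutron emit the network-vif-plugged event Nova armed at 08:40:43. A sketch of reading that Port_Binding row back with ovsdbapp's generic commands (the southbound socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/ovn/ovnsb_db.sock', 'OVN_Southbound')  # path assumed
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', '3026e417-ee61-4f08-8c26-06f78013fc48'),
    ).execute(check_error=True)
    print(rows[0]['chassis'], rows[0]['up'])
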
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:44 np0005539550 nova_compute[257631]: 2025-11-29 08:40:44.804 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.808 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe5b644-0dfa-4e6b-9751-2fb8e6acef18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.808 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbbe9de28-a1 in ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.810 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbbe9de28-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.810 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e3d952-0225-48b5-8a1e-a51bf1b94ee0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.811 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7392cd2f-04ac-423b-b508-df63346b3d2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 systemd-udevd[375654]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:40:44 np0005539550 systemd-machined[216673]: New machine qemu-103-instance-000000be.
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.828 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[a7178edd-83e1-4b3e-a29a-b578f3e13fba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 NetworkManager[49039]: <info>  [1764405644.8320] device (tap3026e417-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:40:44 np0005539550 NetworkManager[49039]: <info>  [1764405644.8334] device (tap3026e417-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:40:44 np0005539550 systemd[1]: Started Virtual Machine qemu-103-instance-000000be.
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.842 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d7921c0-a70a-4d4a-b5ce-9a45ed60a750]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.877 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6e49abde-a761-4c15-8b3e-88c60e2fe594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 systemd-udevd[375658]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:40:44 np0005539550 NetworkManager[49039]: <info>  [1764405644.8847] manager: (tapbbe9de28-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/388)
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.884 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[103318a0-4221-4ce0-92e6-b5a6f545d2a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.917 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b2278b8a-af91-4335-b682-c1128da2176b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.920 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8999ee9c-b895-4488-85a7-b21dab84f33d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 podman[375671]: 2025-11-29 08:40:44.941330711 +0000 UTC m=+0.043693066 container create 7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:40:44 np0005539550 NetworkManager[49039]: <info>  [1764405644.9458] device (tapbbe9de28-a0): carrier: link connected
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.952 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[f23ac32e-1687-4e48-8e60-d2883ea1e93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.971 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd170ef-71df-4a44-9893-c5815d569a55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbbe9de28-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:06:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869261, 'reachable_time': 34927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375703, 'error': None, 'target': 'ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:44 np0005539550 systemd[1]: Started libpod-conmon-7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0.scope.
Nov 29 03:40:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:44.989 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8c69e059-1ddf-49d7-917c-46c33ee0b9ab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febd:621'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 869261, 'tstamp': 869261}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375709, 'error': None, 'target': 'ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.006 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[516c60ae-b4cc-4ad3-a098-9af151dbd8ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbbe9de28-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bd:06:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869261, 'reachable_time': 34927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375713, 'error': None, 'target': 'ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
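
The RTM_NEWLINK/RTM_NEWADDR chatter above is the metadata agent, via oslo.privsep, verifying the namespace plumbing it just built: a veth pair whose outer end tapbbe9de28-a0 stays in the root namespace while tapbbe9de28-a1 is placed inside ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43. A condensed sketch of the same plumbing with pyroute2 (the library behind neutron's privileged ip_lib; no cleanup or error handling):

    from pyroute2 import IPRoute
    from pyroute2.netns import create as netns_create

    ns = 'ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43'
    netns_create(ns)  # assumption: the namespace does not already exist

    ipr = IPRoute()
    # Create the veth pair in the root namespace...
    ipr.link('add', ifname='tapbbe9de28-a0', kind='veth', peer='tapbbe9de28-a1')
    # ...then move the inner end into the metadata namespace and bring the
    # outer end up; the agent performs the same steps through privsep.
    peer = ipr.link_lookup(ifname='tapbbe9de28-a1')[0]
    ipr.link('set', index=peer, net_ns_fd=ns)
    outer = ipr.link_lookup(ifname='tapbbe9de28-a0')[0]
    ipr.link('set', index=outer, state='up')
    ipr.close()
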
Nov 29 03:40:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:45 np0005539550 podman[375671]: 2025-11-29 08:40:44.923419778 +0000 UTC m=+0.025782153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eea8108d719473b7e62a07c78170082cd392ded31ebe6fbde1342b9fe483cfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eea8108d719473b7e62a07c78170082cd392ded31ebe6fbde1342b9fe483cfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eea8108d719473b7e62a07c78170082cd392ded31ebe6fbde1342b9fe483cfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eea8108d719473b7e62a07c78170082cd392ded31ebe6fbde1342b9fe483cfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5eea8108d719473b7e62a07c78170082cd392ded31ebe6fbde1342b9fe483cfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:45 np0005539550 podman[375671]: 2025-11-29 08:40:45.03770502 +0000 UTC m=+0.140067395 container init 7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_solomon, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.037 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4c702705-ec81-42ea-80cc-e1a3bd02b2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:45 np0005539550 podman[375671]: 2025-11-29 08:40:45.048959915 +0000 UTC m=+0.151322270 container start 7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:45 np0005539550 podman[375671]: 2025-11-29 08:40:45.053175261 +0000 UTC m=+0.155537616 container attach 7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_solomon, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.102 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[02a2143a-ba60-4f21-8472-f570d5c8c0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.104 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbe9de28-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.105 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.105 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbbe9de28-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:45 np0005539550 kernel: tapbbe9de28-a0: entered promiscuous mode
Nov 29 03:40:45 np0005539550 NetworkManager[49039]: <info>  [1764405645.1078] manager: (tapbbe9de28-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.114 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbbe9de28-a0, col_values=(('external_ids', {'iface-id': 'dbfc0c95-bb59-4f67-943f-4a79ab89b6c4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:45Z|00876|binding|INFO|Releasing lport dbfc0c95-bb59-4f67-943f-4a79ab89b6c4 from this chassis (sb_readonly=0)
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.116 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.120 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bbe9de28-a3ca-45b3-82e4-243328809a43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bbe9de28-a3ca-45b3-82e4-243328809a43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.121 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[761fdfaf-009f-425e-b4a4-e5cc50ee8241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.122 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-bbe9de28-a3ca-45b3-82e4-243328809a43
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/bbe9de28-a3ca-45b3-82e4-243328809a43.pid.haproxy
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID bbe9de28-a3ca-45b3-82e4-243328809a43
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:40:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:45.124 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43', 'env', 'PROCESS_TAG=haproxy-bbe9de28-a3ca-45b3-82e4-243328809a43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bbe9de28-a3ca-45b3-82e4-243328809a43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.263 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405645.2629035, eb84e8b8-cb17-4f34-8f44-41f2063a152a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.264 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] VM Started (Lifecycle Event)#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.281 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.285 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405645.2661169, eb84e8b8-cb17-4f34-8f44-41f2063a152a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.285 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.311 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.315 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.335 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:40:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.396 257641 DEBUG nova.compute.manager [req-54884e5b-847f-46de-92e5-927a44d22646 req-0414c84f-00b8-4c42-9275-642d7da71ba6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.396 257641 DEBUG oslo_concurrency.lockutils [req-54884e5b-847f-46de-92e5-927a44d22646 req-0414c84f-00b8-4c42-9275-642d7da71ba6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.396 257641 DEBUG oslo_concurrency.lockutils [req-54884e5b-847f-46de-92e5-927a44d22646 req-0414c84f-00b8-4c42-9275-642d7da71ba6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:45 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.397 257641 DEBUG oslo_concurrency.lockutils [req-54884e5b-847f-46de-92e5-927a44d22646 req-0414c84f-00b8-4c42-9275-642d7da71ba6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:45 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.397 257641 DEBUG nova.compute.manager [req-54884e5b-847f-46de-92e5-927a44d22646 req-0414c84f-00b8-4c42-9275-642d7da71ba6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Processing event network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.397 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.406 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405645.40673, eb84e8b8-cb17-4f34-8f44-41f2063a152a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.407 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.409 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.415 257641 INFO nova.virt.libvirt.driver [-] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Instance spawned successfully.#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.416 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.445 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.451 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.455 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.456 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.456 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.457 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.457 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.458 257641 DEBUG nova.virt.libvirt.driver [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.486 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.516 257641 INFO nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Took 7.85 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.516 257641 DEBUG nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:40:45 np0005539550 podman[375790]: 2025-11-29 08:40:45.475353575 +0000 UTC m=+0.022161552 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.601 257641 INFO nova.compute.manager [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Took 8.81 seconds to build instance.#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.618 257641 DEBUG oslo_concurrency.lockutils [None req-516cadfc-603a-456f-bebb-952c447c06f7 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:45 np0005539550 podman[375790]: 2025-11-29 08:40:45.655790071 +0000 UTC m=+0.202598018 container create 1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 03:40:45 np0005539550 systemd[1]: Started libpod-conmon-1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac.scope.
Nov 29 03:40:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7139e764d5637fa6b0606764b359d64b780623f67731e20255a39584834f4ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:45 np0005539550 keen_solomon[375710]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:40:45 np0005539550 keen_solomon[375710]: --> relative data size: 1.0
Nov 29 03:40:45 np0005539550 keen_solomon[375710]: --> All data devices are unavailable
Nov 29 03:40:45 np0005539550 systemd[1]: libpod-7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0.scope: Deactivated successfully.
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.943 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:45 np0005539550 nova_compute[257631]: 2025-11-29 08:40:45.943 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:40:45 np0005539550 podman[375790]: 2025-11-29 08:40:45.971061709 +0000 UTC m=+0.517869686 container init 1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:40:45 np0005539550 podman[375790]: 2025-11-29 08:40:45.978187659 +0000 UTC m=+0.524995616 container start 1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:40:46 np0005539550 neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43[375804]: [NOTICE]   (375830) : New worker (375832) forked
Nov 29 03:40:46 np0005539550 neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43[375804]: [NOTICE]   (375830) : Loading success.
Nov 29 03:40:46 np0005539550 podman[375671]: 2025-11-29 08:40:46.107302407 +0000 UTC m=+1.209664772 container died 7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_solomon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:40:46 np0005539550 nova_compute[257631]: 2025-11-29 08:40:46.157 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:46 np0005539550 nova_compute[257631]: 2025-11-29 08:40:46.188 257641 DEBUG nova.network.neutron [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated VIF entry in instance network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:46 np0005539550 nova_compute[257631]: 2025-11-29 08:40:46.189 257641 DEBUG nova.network.neutron [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:46.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-5eea8108d719473b7e62a07c78170082cd392ded31ebe6fbde1342b9fe483cfd-merged.mount: Deactivated successfully.
Nov 29 03:40:46 np0005539550 nova_compute[257631]: 2025-11-29 08:40:46.221 257641 DEBUG oslo_concurrency.lockutils [req-21c2717d-e08a-4c3d-84d5-dab005bacab9 req-1d123a5d-d84a-49a5-81b2-00da028700cc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:46 np0005539550 nova_compute[257631]: 2025-11-29 08:40:46.223 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:46 np0005539550 nova_compute[257631]: 2025-11-29 08:40:46.223 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:40:46 np0005539550 podman[375817]: 2025-11-29 08:40:46.257875557 +0000 UTC m=+0.313699599 container remove 7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:40:46 np0005539550 systemd[1]: libpod-conmon-7b95c7ba79059c97114c625d6e9d22ce12a612ba7d9139d0a8bf19165bc0b8b0.scope: Deactivated successfully.
Nov 29 03:40:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 665 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Nov 29 03:40:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:46.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:46 np0005539550 podman[375983]: 2025-11-29 08:40:46.929719978 +0000 UTC m=+0.043479841 container create 89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:46 np0005539550 systemd[1]: Started libpod-conmon-89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451.scope.
Nov 29 03:40:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:47 np0005539550 podman[375983]: 2025-11-29 08:40:46.912964084 +0000 UTC m=+0.026723967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:47 np0005539550 podman[375983]: 2025-11-29 08:40:47.019327396 +0000 UTC m=+0.133087289 container init 89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:40:47 np0005539550 podman[375983]: 2025-11-29 08:40:47.02818348 +0000 UTC m=+0.141943343 container start 89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:40:47 np0005539550 podman[375983]: 2025-11-29 08:40:47.031692039 +0000 UTC m=+0.145451902 container attach 89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:40:47 np0005539550 crazy_brahmagupta[375999]: 167 167
Nov 29 03:40:47 np0005539550 systemd[1]: libpod-89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451.scope: Deactivated successfully.
Nov 29 03:40:47 np0005539550 podman[375983]: 2025-11-29 08:40:47.035505365 +0000 UTC m=+0.149265238 container died 89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:40:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1fa722881f7c2c235bffccc65e84de3aa9c518febebe626f8d704468199636b2-merged.mount: Deactivated successfully.
Nov 29 03:40:47 np0005539550 podman[375983]: 2025-11-29 08:40:47.071809604 +0000 UTC m=+0.185569467 container remove 89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:40:47 np0005539550 systemd[1]: libpod-conmon-89a51b9a2a741f2097b560ee21cba90b4c31314bddda4ab1bc83d74bb5c3d451.scope: Deactivated successfully.
Nov 29 03:40:47 np0005539550 podman[376023]: 2025-11-29 08:40:47.275416647 +0000 UTC m=+0.050898130 container create 76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:40:47 np0005539550 systemd[1]: Started libpod-conmon-76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc.scope.
Nov 29 03:40:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c459787a0d082a7d3437dfb93475542ba7cf73027de9c12e472821ccde319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c459787a0d082a7d3437dfb93475542ba7cf73027de9c12e472821ccde319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c459787a0d082a7d3437dfb93475542ba7cf73027de9c12e472821ccde319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee4c459787a0d082a7d3437dfb93475542ba7cf73027de9c12e472821ccde319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:47 np0005539550 podman[376023]: 2025-11-29 08:40:47.254053576 +0000 UTC m=+0.029535089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:47 np0005539550 podman[376023]: 2025-11-29 08:40:47.366679876 +0000 UTC m=+0.142161379 container init 76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:40:47 np0005539550 podman[376023]: 2025-11-29 08:40:47.375345595 +0000 UTC m=+0.150827078 container start 76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:40:47 np0005539550 podman[376023]: 2025-11-29 08:40:47.38304249 +0000 UTC m=+0.158523973 container attach 76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_stonebraker, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.498 257641 DEBUG nova.compute.manager [req-d78044ee-9ce7-4441-aca1-7e5d21cacbb1 req-5e135acd-9f26-4b56-ad71-0816fcee3841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.499 257641 DEBUG oslo_concurrency.lockutils [req-d78044ee-9ce7-4441-aca1-7e5d21cacbb1 req-5e135acd-9f26-4b56-ad71-0816fcee3841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.499 257641 DEBUG oslo_concurrency.lockutils [req-d78044ee-9ce7-4441-aca1-7e5d21cacbb1 req-5e135acd-9f26-4b56-ad71-0816fcee3841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.499 257641 DEBUG oslo_concurrency.lockutils [req-d78044ee-9ce7-4441-aca1-7e5d21cacbb1 req-5e135acd-9f26-4b56-ad71-0816fcee3841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.499 257641 DEBUG nova.compute.manager [req-d78044ee-9ce7-4441-aca1-7e5d21cacbb1 req-5e135acd-9f26-4b56-ad71-0816fcee3841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] No waiting events found dispatching network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.499 257641 WARNING nova.compute.manager [req-d78044ee-9ce7-4441-aca1-7e5d21cacbb1 req-5e135acd-9f26-4b56-ad71-0816fcee3841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received unexpected event network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.737 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.755 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.756 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.756 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.967 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.967 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.968 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.968 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.968 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.970 257641 INFO nova.compute.manager [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Terminating instance#033[00m
Nov 29 03:40:47 np0005539550 nova_compute[257631]: 2025-11-29 08:40:47.971 257641 DEBUG nova.compute.manager [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:40:48 np0005539550 kernel: tapdf823be2-d3 (unregistering): left promiscuous mode
Nov 29 03:40:48 np0005539550 NetworkManager[49039]: <info>  [1764405648.0368] device (tapdf823be2-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:40:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:48Z|00877|binding|INFO|Releasing lport df823be2-d3ae-4d3c-b70a-37db097fc356 from this chassis (sb_readonly=0)
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.043 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:48Z|00878|binding|INFO|Setting lport df823be2-d3ae-4d3c-b70a-37db097fc356 down in Southbound
Nov 29 03:40:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:40:48Z|00879|binding|INFO|Removing iface tapdf823be2-d3 ovn-installed in OVS
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.047 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.057 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:b9:6e 10.100.0.14'], port_security=['fa:16:3e:bc:b9:6e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '78534f04-30a6-4f58-9768-091f48082c9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d042f24f-c2f0-4843-9727-cc3720586596', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '889608c71d13429fb37793575792ae74', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9d2929c7-13e0-4091-aa93-1048c769102b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.180', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ccac3ef-f009-44e6-937a-0ec744b8cfbf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=df823be2-d3ae-4d3c-b70a-37db097fc356) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.058 158978 INFO neutron.agent.ovn.metadata.agent [-] Port df823be2-d3ae-4d3c-b70a-37db097fc356 in datapath d042f24f-c2f0-4843-9727-cc3720586596 unbound from our chassis#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.060 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d042f24f-c2f0-4843-9727-cc3720586596, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.062 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[145e7fd2-3295-41f1-b9cd-9689c6cac09c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.063 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 namespace which is not needed anymore#033[00m
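
The "Matched UPDATE ... PortBindingUpdatedEvent" line above is ovsdbapp's row-event dispatch: the metadata agent registers a handler for updates to the Southbound Port_Binding table and reacts when the port's chassis column empties. A simplified, hypothetical handler in the same style (the real class lives in neutron.agent.ovn.metadata.agent; the match logic here is reduced to the chassis transition visible in the log):

```python
# Sketch of the event hook behind the "Matched UPDATE" line above,
# built on ovsdbapp's RowEvent base class (simplified, not neutron's code).
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Fire on 'update' events against the Southbound Port_Binding table.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # 'old' carries the previous column values; a chassis that was set
        # before and is empty now means the port was unbound from a chassis.
        if getattr(old, 'chassis', None) and not row.chassis:
            print(f"Port {row.logical_port} unbound from our chassis")
```
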
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.075 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539550 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000b9.scope: Deactivated successfully.
Nov 29 03:40:48 np0005539550 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d000000b9.scope: Consumed 15.230s CPU time.
Nov 29 03:40:48 np0005539550 systemd-machined[216673]: Machine qemu-102-instance-000000b9 terminated.
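
systemd-machined tracks each qemu guest as a transient machine-*.scope unit; the three lines above show the scope deactivating and machined recording the termination plus consumed CPU time. A sketch that queries such a scope's state after the fact, assuming systemctl is available (the escaped unit name is copied from the log):

```python
# Sketch: query the qemu machine scope named in the log. Once the guest
# has terminated, the unit reports inactive, matching "Deactivated
# successfully" above.
import subprocess

UNIT = "machine-qemu\\x2d102\\x2dinstance\\x2d000000b9.scope"

out = subprocess.run(
    ["systemctl", "show", UNIT, "-p", "ActiveState", "-p", "CPUUsageNSec"],
    capture_output=True, text=True, check=False,
)
print(out.stdout)  # e.g. ActiveState=inactive
```
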
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]: {
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:    "0": [
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:        {
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "devices": [
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "/dev/loop3"
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            ],
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "lv_name": "ceph_lv0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "lv_size": "7511998464",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "name": "ceph_lv0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "tags": {
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.cluster_name": "ceph",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.crush_device_class": "",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.encrypted": "0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.osd_id": "0",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.type": "block",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:                "ceph.vdo": "0"
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            },
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "type": "block",
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:            "vg_name": "ceph_vg0"
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:        }
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]:    ]
Nov 29 03:40:48 np0005539550 flamboyant_stonebraker[376039]: }
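
The JSON block printed by the cephadm-spawned container (the auto-named flamboyant_stonebraker) has the shape of ceph-volume's "lvm list --format json" output: OSD-id keys mapping to LV records that carry ceph.* tags. A sketch that parses a trimmed-down record of that assumed shape:

```python
# Sketch: parse ceph-volume style "lvm list" JSON like the block above
# (assumed shape: {"<osd_id>": [{"devices": [...], "tags": {...}}, ...]}).
import json

raw = """{"0": [{"devices": ["/dev/loop3"],
                 "lv_path": "/dev/ceph_vg0/ceph_lv0",
                 "tags": {"ceph.osd_id": "0",
                          "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
                          "ceph.type": "block"}}]}"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(fsid {lv['tags']['ceph.osd_fsid']})")
```
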
Nov 29 03:40:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:48.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
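
The radosgw beast lines record an anonymous "HEAD / HTTP/1.0" answered with 200 in well under a millisecond, the signature of a load-balancer health probe. A sketch of such a probe (the address and port are assumptions; adjust to the actual beast frontend):

```python
# Sketch: the kind of HEAD health probe radosgw is answering above.
# Host and port are assumptions, not taken from the log.
import http.client

conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # expect 200 from a healthy RGW
conn.close()
```
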
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.216 257641 INFO nova.virt.libvirt.driver [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Instance destroyed successfully.#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.217 257641 DEBUG nova.objects.instance [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lazy-loading 'resources' on Instance uuid 78534f04-30a6-4f58-9768-091f48082c9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:40:48 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [NOTICE]   (374848) : haproxy version is 2.8.14-c23fe91
Nov 29 03:40:48 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [NOTICE]   (374848) : path to executable is /usr/sbin/haproxy
Nov 29 03:40:48 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [WARNING]  (374848) : Exiting Master process...
Nov 29 03:40:48 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [WARNING]  (374848) : Exiting Master process...
Nov 29 03:40:48 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [ALERT]    (374848) : Current worker (374850) exited with code 143 (Terminated)
Nov 29 03:40:48 np0005539550 neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596[374844]: [WARNING]  (374848) : All workers exited. Exiting... (0)
Nov 29 03:40:48 np0005539550 systemd[1]: libpod-0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b.scope: Deactivated successfully.
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.233 257641 DEBUG nova.virt.libvirt.vif [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:39:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeShelveTestJSON-server-1172691465',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumeshelvetestjson-server-1172691465',id=185,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFU/Um4efU77ova0kz9WpooHJ1R78p7btmWfhWnw2SWonD75t5qli/+3ke1m06M5GJ4lWVadgCjMA2qcBuT9S+rdGvf2uoQyHcOJoYChEuEWX8cHtdlJ+rQFdBRmufcsRw==',key_name='tempest-keypair-701830470',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:40:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='889608c71d13429fb37793575792ae74',ramdisk_id='',reservation_id='r-6fq03z6t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeShelveTestJSON-907905934',owner_user_name='tempest-AttachVolumeShelveTestJSON-907905934-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:40:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eeba34466b8f4a1bb5f742f1e811053c',uuid=78534f04-30a6-4f58-9768-091f48082c9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:40:48 np0005539550 systemd[1]: libpod-76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc.scope: Deactivated successfully.
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.233 257641 DEBUG nova.network.os_vif_util [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converting VIF {"id": "df823be2-d3ae-4d3c-b70a-37db097fc356", "address": "fa:16:3e:bc:b9:6e", "network": {"id": "d042f24f-c2f0-4843-9727-cc3720586596", "bridge": "br-int", "label": "tempest-AttachVolumeShelveTestJSON-1228611198-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "889608c71d13429fb37793575792ae74", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf823be2-d3", "ovs_interfaceid": "df823be2-d3ae-4d3c-b70a-37db097fc356", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.234 257641 DEBUG nova.network.os_vif_util [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.234 257641 DEBUG os_vif [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
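
Here nova has converted its VIF dict into an os-vif VIFOpenVSwitch object and hands it to os_vif.unplug(), which dispatches to the "ovs" plugin. A reduced sketch of that call, with field values copied from the log and the InstanceInfo trimmed to essentials (a sketch only; nova populates many more fields than shown here):

```python
# Sketch of the os-vif unplug call logged above. os_vif dispatches to the
# "ovs" plugin, which removes the tap port from br-int.
import os_vif
from os_vif.objects import instance_info, vif as vif_obj

os_vif.initialize()  # load the installed os-vif plugins

vif = vif_obj.VIFOpenVSwitch(
    id="df823be2-d3ae-4d3c-b70a-37db097fc356",
    address="fa:16:3e:bc:b9:6e",
    bridge_name="br-int",
    vif_name="tapdf823be2-d3",
)
instance = instance_info.InstanceInfo(
    uuid="78534f04-30a6-4f58-9768-091f48082c9c",
    name="instance-000000b9",
)
os_vif.unplug(vif, instance)  # mirrors the "Unplugging vif" line
```
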
Nov 29 03:40:48 np0005539550 podman[376023]: 2025-11-29 08:40:48.235440919 +0000 UTC m=+1.010922412 container died 76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_stonebraker, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.237 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.237 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf823be2-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.241 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.244 257641 INFO os_vif [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:b9:6e,bridge_name='br-int',has_traffic_filtering=True,id=df823be2-d3ae-4d3c-b70a-37db097fc356,network=Network(d042f24f-c2f0-4843-9727-cc3720586596),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf823be2-d3')#033[00m
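
The DelPortCommand transaction above (port=tapdf823be2-d3, bridge=br-int, if_exists=True) has a one-line CLI equivalent; a sketch via subprocess, assuming ovs-vsctl is on PATH:

```python
# Sketch: the CLI equivalent of the DelPortCommand transaction above.
import subprocess

subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapdf823be2-d3"],
    check=True,
)
```
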
Nov 29 03:40:48 np0005539550 podman[376094]: 2025-11-29 08:40:48.275401151 +0000 UTC m=+0.108908017 container died 0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:40:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ee4c459787a0d082a7d3437dfb93475542ba7cf73027de9c12e472821ccde319-merged.mount: Deactivated successfully.
Nov 29 03:40:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b-userdata-shm.mount: Deactivated successfully.
Nov 29 03:40:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-67ac36ec0ef3eb7a7e06af4f8a0b8b7c8fe842bdb4af38feebbf15f139a339f1-merged.mount: Deactivated successfully.
Nov 29 03:40:48 np0005539550 podman[376023]: 2025-11-29 08:40:48.35045367 +0000 UTC m=+1.125935153 container remove 76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:40:48 np0005539550 podman[376094]: 2025-11-29 08:40:48.35718835 +0000 UTC m=+0.190695216 container cleanup 0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:40:48 np0005539550 systemd[1]: libpod-conmon-0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b.scope: Deactivated successfully.
Nov 29 03:40:48 np0005539550 systemd[1]: libpod-conmon-76cbcba2a6c1a402c06636f4dcd09963e1719443d4b87e35af39771bccd13acc.scope: Deactivated successfully.
Nov 29 03:40:48 np0005539550 podman[376188]: 2025-11-29 08:40:48.448205434 +0000 UTC m=+0.048266223 container remove 0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.461 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a80e673-df99-486b-b9b2-eaaac12766bb]: (4, ('Sat Nov 29 08:40:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b)\n0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b\nSat Nov 29 08:40:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 (0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b)\n0cc1e0111c61e48756233486bc9cf92dc697a9a474edfc85e5f5971b8c7d438b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
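
The privsep reply above carries the output of the wrapper script that stops and then deletes the haproxy sidecar container. The same stop-then-remove sequence through the podman CLI would look like this (container name copied from the log; "podman stop" sends SIGTERM, which matches haproxy's exit code 143 a few lines earlier):

```python
# Sketch: the stop-then-delete sequence visible in the privsep reply above,
# driven through the podman CLI.
import subprocess

NAME = "neutron-haproxy-ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596"

subprocess.run(["podman", "stop", NAME], check=True)  # SIGTERM -> exit 143
subprocess.run(["podman", "rm", NAME], check=True)    # remove the container
```
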
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.466 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[edf187dd-176f-49e1-8578-d237723e6ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.468 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd042f24f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.471 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539550 kernel: tapd042f24f-c0: left promiscuous mode
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.473 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.487 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ad22927c-4e48-4797-aae5-8e659b5a5a05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 665 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 115 op/s
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.500 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.502 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7c249f-0293-490d-9a96-56fcfea3d83c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.504 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[71377ffe-e635-4286-97ce-ff2b5eaaabc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.522 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ff666fd9-431b-4df9-b45e-02e306e65d18]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865951, 'reachable_time': 40556, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376249, 'error': None, 'target': 'ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:40:48 np0005539550 systemd[1]: run-netns-ovnmeta\x2dd042f24f\x2dc2f0\x2d4843\x2d9727\x2dcc3720586596.mount: Deactivated successfully.
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.524 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:40:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:40:48.524 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[27eb6d11-cc4f-4994-b2b5-c827fd086036]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
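
With the namespace empty, neutron's privileged ip_lib unlinks it, which is what the "Namespace ... deleted" line records. Under the hood this is a pyroute2 netns removal; a sketch assuming root privileges and pyroute2 installed (namespace name copied from the log):

```python
# Sketch: remove a named network namespace the way neutron's privileged
# ip_lib helper does (requires root; pyroute2 must be installed).
from pyroute2 import netns

NS = "ovnmeta-d042f24f-c2f0-4843-9727-cc3720586596"

if NS in netns.listnetns():
    netns.remove(NS)  # unlinks /var/run/netns/<NS>
```
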
Nov 29 03:40:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:48.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.818 257641 INFO nova.virt.libvirt.driver [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Deleting instance files /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c_del#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.819 257641 INFO nova.virt.libvirt.driver [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Deletion of /var/lib/nova/instances/78534f04-30a6-4f58-9768-091f48082c9c_del complete#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.880 257641 INFO nova.compute.manager [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.881 257641 DEBUG oslo.service.loopingcall [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.881 257641 DEBUG nova.compute.manager [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.881 257641 DEBUG nova.network.neutron [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.950 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:40:48 np0005539550 nova_compute[257631]: 2025-11-29 08:40:48.950 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
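
The resource-tracker audit shells out to "ceph df" through oslo_concurrency.processutils, as the line above shows. A sketch that runs the same command with subprocess and pulls the cluster totals from the JSON (client id and conf path copied from the log; the "stats" keys are standard ceph df output):

```python
# Sketch: the "ceph df" probe nova's resource tracker runs above,
# executed with subprocess and parsed from JSON.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
)
stats = json.loads(out.stdout)["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])
```
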
Nov 29 03:40:49 np0005539550 podman[376345]: 2025-11-29 08:40:49.038652716 +0000 UTC m=+0.074817755 container create 418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bohr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:40:49 np0005539550 podman[376345]: 2025-11-29 08:40:49.015412699 +0000 UTC m=+0.051577758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:49 np0005539550 systemd[1]: Started libpod-conmon-418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722.scope.
Nov 29 03:40:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:49 np0005539550 podman[376345]: 2025-11-29 08:40:49.147029498 +0000 UTC m=+0.183194567 container init 418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:40:49 np0005539550 podman[376345]: 2025-11-29 08:40:49.15503346 +0000 UTC m=+0.191198499 container start 418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bohr, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:40:49 np0005539550 podman[376345]: 2025-11-29 08:40:49.15859132 +0000 UTC m=+0.194756379 container attach 418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:40:49 np0005539550 confident_bohr[376364]: 167 167
Nov 29 03:40:49 np0005539550 systemd[1]: libpod-418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722.scope: Deactivated successfully.
Nov 29 03:40:49 np0005539550 podman[376345]: 2025-11-29 08:40:49.178893984 +0000 UTC m=+0.215059023 container died 418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:40:49 np0005539550 podman[376345]: 2025-11-29 08:40:49.215062309 +0000 UTC m=+0.251227348 container remove 418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:40:49 np0005539550 systemd[1]: libpod-conmon-418cf7449e1b3e2f0b16010b2eeedceb39050a25aa0063357f9b4b7c192f4722.scope: Deactivated successfully.
Nov 29 03:40:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-560ff8022f60efa565674a036d68bd35590a9d478bdd7e977174662b1cd2aff4-merged.mount: Deactivated successfully.
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:49 np0005539550 podman[376405]: 2025-11-29 08:40:49.420812216 +0000 UTC m=+0.052561921 container create 64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:40:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046175666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.455 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:49 np0005539550 systemd[1]: Started libpod-conmon-64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d.scope.
Nov 29 03:40:49 np0005539550 podman[376405]: 2025-11-29 08:40:49.397448965 +0000 UTC m=+0.029198690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:40:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:40:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7a13fdc6a402dc66c081df5e1d285c401cedd928de3aab02a810f795d0813d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7a13fdc6a402dc66c081df5e1d285c401cedd928de3aab02a810f795d0813d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7a13fdc6a402dc66c081df5e1d285c401cedd928de3aab02a810f795d0813d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:40:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f7a13fdc6a402dc66c081df5e1d285c401cedd928de3aab02a810f795d0813d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
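
The four xfs lines are the kernel's y2038 notice for these overlay remounts: the filesystem's inode timestamps are capped at 0x7fffffff seconds (xfs without the bigtime feature). A one-liner decoding what that cap means in calendar terms:

```python
# Sketch: decode the 0x7fffffff timestamp cap from the xfs warnings above.
from datetime import datetime, timezone

print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00, the classic 32-bit time_t limit
```
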
Nov 29 03:40:49 np0005539550 podman[376405]: 2025-11-29 08:40:49.5466453 +0000 UTC m=+0.178395025 container init 64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_murdock, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:40:49 np0005539550 podman[376405]: 2025-11-29 08:40:49.553817982 +0000 UTC m=+0.185567687 container start 64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_murdock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:40:49 np0005539550 podman[376405]: 2025-11-29 08:40:49.556942011 +0000 UTC m=+0.188691716 container attach 64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.571 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.572 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000bb as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.577 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.577 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.752 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.753 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3690MB free_disk=20.829952239990234GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.753 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.754 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.856 257641 DEBUG nova.compute.manager [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-unplugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.856 257641 DEBUG oslo_concurrency.lockutils [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.857 257641 DEBUG oslo_concurrency.lockutils [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.857 257641 DEBUG oslo_concurrency.lockutils [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.857 257641 DEBUG nova.compute.manager [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] No waiting events found dispatching network-vif-unplugged-df823be2-d3ae-4d3c-b70a-37db097fc356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.858 257641 DEBUG nova.compute.manager [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-unplugged-df823be2-d3ae-4d3c-b70a-37db097fc356 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.858 257641 DEBUG nova.compute.manager [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.858 257641 DEBUG oslo_concurrency.lockutils [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "78534f04-30a6-4f58-9768-091f48082c9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.858 257641 DEBUG oslo_concurrency.lockutils [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.859 257641 DEBUG oslo_concurrency.lockutils [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.859 257641 DEBUG nova.compute.manager [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] No waiting events found dispatching network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.859 257641 WARNING nova.compute.manager [req-447c065b-6442-49ec-aa01-d6d70d88c744 req-33c9ccfb-70d3-4f17-abad-090ecb3966e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received unexpected event network-vif-plugged-df823be2-d3ae-4d3c-b70a-37db097fc356 for instance with vm_state active and task_state deleting.#033[00m
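
The pop_instance_event / "No waiting events found" lines reflect a rendezvous pattern: an in-flight operation may register a waiter for an event such as network-vif-plugged-<port>, and the Neutron-driven dispatch pops and signals the matching waiter if one exists. During deletion nothing is waiting, so the plugged event above is logged as unexpected. A toy version of that pattern, with hypothetical names rather than nova's real classes:

    import threading

    _waiters = {}   # (instance_uuid, event_name) -> threading.Event
    _lock = threading.Lock()

    def prepare_event(instance_uuid, event_name):
        # Called by code that intends to wait, e.g. before plugging a VIF.
        ev = threading.Event()
        with _lock:   # cf. the '<uuid>-events' lock in the log above
            _waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop_event(instance_uuid, event_name):
        # Called when the external event arrives from the network service.
        with _lock:
            ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            print(f"No waiting events found dispatching {event_name}")
        else:
            ev.set()   # unblocks whoever called prepare_event(...).wait()
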
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.904 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance db9a33b0-f745-4457-b7a6-d22017777a85 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.905 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 78534f04-30a6-4f58-9768-091f48082c9c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.905 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance eb84e8b8-cb17-4f34-8f44-41f2063a152a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.905 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.905 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.947 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.966 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.967 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
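
Given the inventory just logged, Placement-side schedulable capacity for each resource class follows (total - reserved) * allocation_ratio. A worked check against the exact numbers reported for provider a73c606e-2495-4af4-b703-8d4b3001fdf5:

    # Capacity derived from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
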
Nov 29 03:40:49 np0005539550 nova_compute[257631]: 2025-11-29 08:40:49.994 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.021 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.099 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
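
The subprocess being launched is the plain ceph CLI. A minimal stand-in that runs the same command and reads the cluster-wide totals, assuming the CLI, a keyring for client.openstack, and /etc/ceph/ceph.conf are present on the host (the stats field names match recent Ceph JSON output):

    import json
    import subprocess

    # Same command oslo.concurrency's processutils logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    )
    stats = json.loads(out)["stats"]
    print("avail bytes:", stats["total_avail_bytes"],
          "used bytes:", stats["total_used_bytes"])
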
Nov 29 03:40:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:50.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.387 257641 DEBUG nova.network.neutron [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.401 257641 INFO nova.compute.manager [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Took 1.52 seconds to deallocate network for instance.#033[00m
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/551308392' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]: {
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]:        "osd_id": 0,
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]:        "type": "bluestore"
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]:    }
Nov 29 03:40:50 np0005539550 jovial_murdock[376423]: }
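
The short-lived cephadm container (podman's generated name jovial_murdock) printed a JSON map keyed by OSD uuid. Parsing it is straightforward; raw below holds the exact output logged above:

    import json

    raw = """{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"
      }
    }"""

    for osd_uuid, osd in json.loads(raw).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")
    # -> osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0
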
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.448 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:40:50 np0005539550 systemd[1]: libpod-64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d.scope: Deactivated successfully.
Nov 29 03:40:50 np0005539550 podman[376405]: 2025-11-29 08:40:50.461813059 +0000 UTC m=+1.093562754 container died 64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:40:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1f7a13fdc6a402dc66c081df5e1d285c401cedd928de3aab02a810f795d0813d-merged.mount: Deactivated successfully.
Nov 29 03:40:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 665 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 113 op/s
Nov 29 03:40:50 np0005539550 podman[376405]: 2025-11-29 08:40:50.512991804 +0000 UTC m=+1.144741509 container remove 64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:40:50 np0005539550 systemd[1]: libpod-conmon-64368d21dcc9bceb8c2ac446247043b7ac4503d9ec407ec68a407dddc450cd3d.scope: Deactivated successfully.
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98128487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:50.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
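
These recurring anonymous "HEAD / HTTP/1.0" probes are load-balancer health checks against radosgw, and the beast access-log line is regular enough to scrape. The regex below is fitted to this log's layout, not an official format specification:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:40:50.546 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST.match(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("lat"))
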
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.551 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.560 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:40:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b6f73503-88ec-4a19-8551-1ee26cae3b00 does not exist
Nov 29 03:40:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev cbb1343c-ae92-441f-ba24-fc6f49806a67 does not exist
Nov 29 03:40:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b7f1fd48-42fc-429e-9575-f7f818dad93c does not exist
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.583 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.608 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.609 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.609 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:40:50 np0005539550 nova_compute[257631]: 2025-11-29 08:40:50.686 257641 DEBUG oslo_concurrency.processutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:40:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:40:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817919100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.184 257641 DEBUG oslo_concurrency.processutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.189 257641 DEBUG nova.compute.provider_tree [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.203 257641 DEBUG nova.scheduler.client.report [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.226 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.265 257641 INFO nova.scheduler.client.report [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Deleted allocations for instance 78534f04-30a6-4f58-9768-091f48082c9c#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.348 257641 DEBUG oslo_concurrency.lockutils [None req-f8891d00-f451-4dc2-b1ae-c06725552cc8 eeba34466b8f4a1bb5f742f1e811053c 889608c71d13429fb37793575792ae74 - - default default] Lock "78534f04-30a6-4f58-9768-091f48082c9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
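
The held 3.381s above covers the whole do_terminate_instance critical section under the per-instance lock. When hunting slow sections, the wait/held figures in these oslo_concurrency lines can be scraped directly; a small helper, with a hypothetical input file name standing in for an extract of this journal:

    import re

    HOLD = re.compile(
        r'Lock "(?P<name>[^"]+)" "released" by "(?P<owner>[^"]+)"'
        r' :: held (?P<secs>[\d.]+)s'
    )

    with open("nova-compute.log") as f:   # hypothetical journal extract
        for line in f:
            m = HOLD.search(line)
            if m and float(m.group("secs")) > 1.0:
                print(m.group("secs"), m.group("name"), m.group("owner"))
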
Nov 29 03:40:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.971 257641 DEBUG nova.compute.manager [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Received event network-vif-deleted-df823be2-d3ae-4d3c-b70a-37db097fc356 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.971 257641 DEBUG nova.compute.manager [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-changed-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.972 257641 DEBUG nova.compute.manager [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Refreshing instance network info cache due to event network-changed-3026e417-ee61-4f08-8c26-06f78013fc48. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.972 257641 DEBUG oslo_concurrency.lockutils [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.972 257641 DEBUG oslo_concurrency.lockutils [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:40:51 np0005539550 nova_compute[257631]: 2025-11-29 08:40:51.972 257641 DEBUG nova.network.neutron [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Refreshing network info cache for port 3026e417-ee61-4f08-8c26-06f78013fc48 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:40:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:52.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 599 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 164 op/s
Nov 29 03:40:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:52.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:53 np0005539550 nova_compute[257631]: 2025-11-29 08:40:53.241 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:53 np0005539550 nova_compute[257631]: 2025-11-29 08:40:53.382 257641 DEBUG nova.network.neutron [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updated VIF entry in instance network info cache for port 3026e417-ee61-4f08-8c26-06f78013fc48. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:40:53 np0005539550 nova_compute[257631]: 2025-11-29 08:40:53.383 257641 DEBUG nova.network.neutron [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updating instance_info_cache with network_info: [{"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:40:53 np0005539550 nova_compute[257631]: 2025-11-29 08:40:53.402 257641 DEBUG oslo_concurrency.lockutils [req-ec0070f0-22b0-4ba9-8ff8-569c185f5940 req-acb356a2-8599-4f18-bf10-450242ea2549 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
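
The network_info blob cached above is a list of VIFs, each with nested subnets, ips, and floating_ips; walking it recovers the addressing the log prints inline (fixed 10.100.0.11 with floating 192.168.122.225 on port 3026e417-ee61-4f08-8c26-06f78013fc48). A sketch, assuming network_info is that JSON already loaded into Python:

    def dump_addresses(network_info):
        # network_info: the cached list-of-VIFs structure logged above.
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    print(vif["id"], "fixed:", ip["address"])
                    for fip in ip.get("floating_ips", []):
                        print(vif["id"], "floating:", fip["address"])
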
Nov 29 03:40:53 np0005539550 nova_compute[257631]: 2025-11-29 08:40:53.610 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:40:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1708423064' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:40:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:40:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1708423064' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:40:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:40:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/803755045' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:40:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:54.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:54 np0005539550 nova_compute[257631]: 2025-11-29 08:40:54.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 566 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 134 op/s
Nov 29 03:40:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:54 np0005539550 nova_compute[257631]: 2025-11-29 08:40:54.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:40:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:40:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292551620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:40:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:40:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1292551620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:40:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:40:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:56.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 505 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 29 KiB/s wr, 158 op/s
Nov 29 03:40:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:40:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:56.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:40:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:40:58.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:40:58 np0005539550 nova_compute[257631]: 2025-11-29 08:40:58.244 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:40:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:40:58 np0005539550 podman[376557]: 2025-11-29 08:40:58.339512969 +0000 UTC m=+0.065018767 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:40:58 np0005539550 podman[376556]: 2025-11-29 08:40:58.348582018 +0000 UTC m=+0.075241615 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
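
The health_status=healthy fields in the two podman events above come from the configured '/openstack/healthcheck' probes; the same state can be read back on demand via podman inspect (the Go-template field path is assumed to be Docker-compatible, as podman's inspect output normally is):

    import subprocess

    # Query the last recorded health state of the ovn_metadata_agent container.
    status = subprocess.check_output(
        ["podman", "inspect", "--format",
         "{{.State.Health.Status}}", "ovn_metadata_agent"],
        text=True,
    ).strip()
    print(status)   # expected here: "healthy"
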
Nov 29 03:40:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 439 KiB/s wr, 151 op/s
Nov 29 03:40:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:40:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:40:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:40:58.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:40:59 np0005539550 nova_compute[257631]: 2025-11-29 08:40:59.408 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:40:59
Nov 29 03:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'backups', 'default.rgw.meta', 'images', 'vms', 'volumes']
Nov 29 03:40:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:40:59 np0005539550 nova_compute[257631]: 2025-11-29 08:40:59.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:00Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:10:6a 10.100.0.11
Nov 29 03:41:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:00Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:10:6a 10.100.0.11
Nov 29 03:41:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:00.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 507 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 857 KiB/s rd, 437 KiB/s wr, 117 op/s
Nov 29 03:41:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:00.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:00Z|00880|binding|INFO|Releasing lport dbfc0c95-bb59-4f67-943f-4a79ab89b6c4 from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:00Z|00881|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:00 np0005539550 nova_compute[257631]: 2025-11-29 08:41:00.832 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:02.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 536 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 170 op/s
Nov 29 03:41:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:02.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:03 np0005539550 nova_compute[257631]: 2025-11-29 08:41:03.212 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405648.2108846, 78534f04-30a6-4f58-9768-091f48082c9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:03 np0005539550 nova_compute[257631]: 2025-11-29 08:41:03.213 257641 INFO nova.compute.manager [-] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:41:03 np0005539550 nova_compute[257631]: 2025-11-29 08:41:03.238 257641 DEBUG nova.compute.manager [None req-b831cefd-d6e2-460c-aae1-77bfec59cc58 - - - - - -] [instance: 78534f04-30a6-4f58-9768-091f48082c9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:03 np0005539550 nova_compute[257631]: 2025-11-29 08:41:03.246 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:04.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:04 np0005539550 nova_compute[257631]: 2025-11-29 08:41:04.410 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 526 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 371 KiB/s rd, 2.1 MiB/s wr, 127 op/s
Nov 29 03:41:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:04.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.3 MiB/s wr, 145 op/s
Nov 29 03:41:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:06.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:06 np0005539550 nova_compute[257631]: 2025-11-29 08:41:06.894 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:06.894 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:06 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:06.896 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:41:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Nov 29 03:41:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Nov 29 03:41:07 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Nov 29 03:41:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:08.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:08Z|00882|binding|INFO|Releasing lport dbfc0c95-bb59-4f67-943f-4a79ab89b6c4 from this chassis (sb_readonly=0)
Nov 29 03:41:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:08Z|00883|binding|INFO|Releasing lport 1642a0e3-a8d4-4ee4-8971-26f27541a04e from this chassis (sb_readonly=0)
Nov 29 03:41:08 np0005539550 nova_compute[257631]: 2025-11-29 08:41:08.292 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:08 np0005539550 nova_compute[257631]: 2025-11-29 08:41:08.336 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:41:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:41:08 np0005539550 podman[376601]: 2025-11-29 08:41:08.42362773 +0000 UTC m=+0.114455527 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 29 03:41:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 444 KiB/s rd, 2.3 MiB/s wr, 129 op/s
Nov 29 03:41:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:08.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:41:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:41:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:41:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:41:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.183753) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669183889, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1718, "num_deletes": 254, "total_data_size": 2772850, "memory_usage": 2823024, "flush_reason": "Manual Compaction"}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669208315, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 2726750, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59352, "largest_seqno": 61069, "table_properties": {"data_size": 2718957, "index_size": 4606, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17585, "raw_average_key_size": 20, "raw_value_size": 2702832, "raw_average_value_size": 3202, "num_data_blocks": 200, "num_entries": 844, "num_filter_entries": 844, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405525, "oldest_key_time": 1764405525, "file_creation_time": 1764405669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 24646 microseconds, and 7700 cpu microseconds.
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.208405) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 2726750 bytes OK
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.208443) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.210468) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.210497) EVENT_LOG_v1 {"time_micros": 1764405669210491, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.210518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2765490, prev total WAL file size 2765490, number of live WAL files 2.
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.211560) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(2662KB)], [134(10MB)]
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669211646, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13244274, "oldest_snapshot_seqno": -1}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 9488 keys, 11345352 bytes, temperature: kUnknown
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669297834, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11345352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11285425, "index_size": 35110, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23749, "raw_key_size": 250424, "raw_average_key_size": 26, "raw_value_size": 11119847, "raw_average_value_size": 1171, "num_data_blocks": 1332, "num_entries": 9488, "num_filter_entries": 9488, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405669, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.298265) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11345352 bytes
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.300265) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.4 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.0 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 10015, records dropped: 527 output_compression: NoCompression
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.300285) EVENT_LOG_v1 {"time_micros": 1764405669300275, "job": 82, "event": "compaction_finished", "compaction_time_micros": 86346, "compaction_time_cpu_micros": 32190, "output_level": 6, "num_output_files": 1, "total_output_size": 11345352, "num_input_records": 10015, "num_output_records": 9488, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669301032, "job": 82, "event": "table_file_deletion", "file_number": 136}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405669303165, "job": 82, "event": "table_file_deletion", "file_number": 134}
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.211405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.303237) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.303243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.303244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.303246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:41:09 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:41:09.303248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
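
The compaction summary above (JOB 82) prints write-amplify(4.2) and read-write-amplify(9.0), and those figures follow from the EVENT_LOG_v1 payloads in the same run: 2726750 B of L0 input (table #136), 13244274 B of total input, and 11345352 B written out, so out/L0-in ≈ 4.2 and (in+out)/L0-in ≈ 9.0. A minimal sketch that recomputes them from a mon log on stdin — pairing started/finished events by job id and using the L0 file sizes as the denominator is an inference from these lines, not taken from the RocksDB source:

    import json
    import re
    import sys

    EVENT_RE = re.compile(r"EVENT_LOG_v1 ({.*})")  # JSON object after the marker

    def compaction_amplification(lines):
        sizes = {}    # file_number -> file_size, from table_file_creation events
        started = {}  # job id -> compaction_started event
        for line in lines:
            m = EVENT_RE.search(line)
            if not m:
                continue
            ev = json.loads(m.group(1))
            kind = ev.get("event")
            if kind == "table_file_creation":
                sizes[ev["file_number"]] = ev["file_size"]
            elif kind == "compaction_started":
                started[ev["job"]] = ev
            elif kind == "compaction_finished" and ev["job"] in started:
                st = started.pop(ev["job"])
                # Bytes read from the start (non-output) level, L0 here:
                l0_in = sum(sizes.get(f, 0) for f in st.get("files_L0", []))
                total_in = st["input_data_size"]
                out = ev["total_output_size"]
                if l0_in:
                    # write-amplify = bytes written / bytes read from L0;
                    # read-write-amplify = (bytes read + written) / bytes read from L0
                    yield ev["job"], out / l0_in, (total_in + out) / l0_in

    if __name__ == "__main__":
        for job, wa, rwa in compaction_amplification(sys.stdin):
            print(f"job {job}: write-amplify={wa:.1f} read-write-amplify={rwa:.1f}")

For job 82 this yields 11345352/2726750 ≈ 4.2 and (13244274+11345352)/2726750 ≈ 9.0, matching the summary line.
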
Nov 29 03:41:09 np0005539550 nova_compute[257631]: 2025-11-29 08:41:09.412 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:09 np0005539550 nova_compute[257631]: 2025-11-29 08:41:09.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:10.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:10 np0005539550 nova_compute[257631]: 2025-11-29 08:41:10.353 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 242 KiB/s rd, 506 KiB/s wr, 82 op/s
Nov 29 03:41:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:10.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:11 np0005539550 nova_compute[257631]: 2025-11-29 08:41:11.478 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:12.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 106 KiB/s rd, 567 KiB/s wr, 128 op/s
Nov 29 03:41:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:12.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
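
The beast lines follow a fixed access-log shape: request address, client IP, user, timestamp, request line, HTTP status, body bytes, three placeholder columns, and a latency field. The anonymous HEAD / probes arriving from 192.168.122.100 and .102 every ~2 seconds are consistent with load-balancer health checks. A small parser for exactly this shape — the column layout is copied from the lines above, not from RGW documentation:

    import re
    import sys

    BEAST_RE = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def requests(lines):
        for line in lines:
            m = BEAST_RE.search(line)
            if m:
                yield (m.group("client"), m.group("request"),
                       int(m.group("status")), float(m.group("latency")))

    if __name__ == "__main__":
        # e.g. surface anything that is not a fast, successful health probe
        for client, req, status, latency in requests(sys.stdin):
            if status != 200 or latency > 0.5:
                print(client, status, f"{latency:.3f}s", req)
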
Nov 29 03:41:13 np0005539550 nova_compute[257631]: 2025-11-29 08:41:13.294 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:13 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:13.898 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:14.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:14 np0005539550 nova_compute[257631]: 2025-11-29 08:41:14.462 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 460 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 298 KiB/s wr, 83 op/s
Nov 29 03:41:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:14.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Nov 29 03:41:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Nov 29 03:41:15 np0005539550 nova_compute[257631]: 2025-11-29 08:41:15.614 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:15 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Nov 29 03:41:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:16.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 276 KiB/s wr, 82 op/s
Nov 29 03:41:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:16.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:18 np0005539550 nova_compute[257631]: 2025-11-29 08:41:18.016 257641 DEBUG nova.compute.manager [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:18 np0005539550 nova_compute[257631]: 2025-11-29 08:41:18.016 257641 DEBUG nova.compute.manager [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing instance network info cache due to event network-changed-d3028052-76d1-49d3-9c2d-9126edc45935. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:18 np0005539550 nova_compute[257631]: 2025-11-29 08:41:18.016 257641 DEBUG oslo_concurrency.lockutils [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:18 np0005539550 nova_compute[257631]: 2025-11-29 08:41:18.016 257641 DEBUG oslo_concurrency.lockutils [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:18 np0005539550 nova_compute[257631]: 2025-11-29 08:41:18.016 257641 DEBUG nova.network.neutron [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Refreshing network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:18.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:18 np0005539550 nova_compute[257631]: 2025-11-29 08:41:18.296 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 237 KiB/s wr, 70 op/s
Nov 29 03:41:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:18.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:18.972 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:18.973 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:18.973 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:19 np0005539550 nova_compute[257631]: 2025-11-29 08:41:19.462 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:19 np0005539550 nova_compute[257631]: 2025-11-29 08:41:19.583 257641 DEBUG nova.network.neutron [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updated VIF entry in instance network info cache for port d3028052-76d1-49d3-9c2d-9126edc45935. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:19 np0005539550 nova_compute[257631]: 2025-11-29 08:41:19.584 257641 DEBUG nova.network.neutron [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [{"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:19 np0005539550 nova_compute[257631]: 2025-11-29 08:41:19.600 257641 DEBUG oslo_concurrency.lockutils [req-722e2f94-06c0-4d0c-8e05-70c846468712 req-7dadf9a8-cbd2-497c-8e54-8fedb91c45d0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-db9a33b0-f745-4457-b7a6-d22017777a85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
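
Nova debug lines like the instance_info_cache update above embed a full JSON payload inline, which is hard to read in place but machine-recoverable. A sketch that bracket-matches the array after the marker text and pretty-prints it — it assumes, as holds for the payload above, that no brackets or braces occur inside the JSON strings:

    import json
    import sys

    MARKER = "Updating instance_info_cache with network_info: "

    def extract_network_info(line):
        # Bracket-match the JSON array nova embeds after the marker text.
        start = line.index(MARKER) + len(MARKER)
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch in "[{":
                depth += 1
            elif ch in "]}":
                depth -= 1
                if depth == 0:
                    return json.loads(line[start:i + 1])
        raise ValueError("unbalanced payload")

    if __name__ == "__main__":
        for line in sys.stdin:
            if MARKER in line:
                print(json.dumps(extract_network_info(line), indent=2))
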
Nov 29 03:41:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:20.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002191222315280104 of space, bias 1.0, pg target 0.6573666945840313 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.008671467871766238 of space, bias 1.0, pg target 2.601440361529871 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
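
Each pg_autoscaler pool line is arithmetic over the numbers it prints: capacity ratio × bias × a cluster-wide PG budget, then quantized. With the 3 OSDs shown in the osdmap lines and an assumed mon_target_pg_per_osd of 100, the 'volumes' line reproduces almost exactly (0.008671467871766238 × 1.0 × 300 ≈ 2.60144036…), and quantizing to the nearest power of two with a per-pool floor — an assumption consistent with the 1/16/32 values printed, not read from this cluster's config — gives the "quantized to" column:

    def pool_pg_target(capacity_ratio, bias, num_osds=3, target_pg_per_osd=100):
        # "pg target" column: share of raw capacity scaled by bias and the
        # cluster PG budget. target_pg_per_osd=100 is an assumption that
        # reproduces the 'volumes' line above.
        return capacity_ratio * bias * target_pg_per_osd * num_osds

    def quantize(pg_target, pg_num_min=32):
        # Assumed rule: nearest power of two, floored at the pool's minimum
        # (32 by default; apparently 16 for 'cephfs.cephfs.meta' and 1 for
        # '.mgr'), which matches every "quantized to" value printed above.
        if pg_target < 1:
            nearest = 1
        else:
            lo = 1 << (int(pg_target).bit_length() - 1)
            hi = lo * 2
            nearest = lo if pg_target - lo <= hi - pg_target else hi
        return max(pg_num_min, nearest)

    t = pool_pg_target(0.008671467871766238, 1.0)  # Pool 'volumes'
    print(t)            # ~2.60144036..., matching "pg target 2.601440361529871"
    print(quantize(t))  # 32, matching "quantized to 32 (current 32)"
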
Nov 29 03:41:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 458 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 47 KiB/s rd, 222 KiB/s wr, 66 op/s
Nov 29 03:41:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:20.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:22.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 486 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 1.3 MiB/s wr, 51 op/s
Nov 29 03:41:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:22.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:23 np0005539550 nova_compute[257631]: 2025-11-29 08:41:23.298 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:24.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.464 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 504 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:41:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:24.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.969 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.970 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.970 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.970 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.970 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.971 257641 INFO nova.compute.manager [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Terminating instance#033[00m
Nov 29 03:41:24 np0005539550 nova_compute[257631]: 2025-11-29 08:41:24.972 257641 DEBUG nova.compute.manager [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
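
The Acquiring/Acquired/released DEBUG lines around terminate_instance come from oslo.concurrency's lockutils: nova serializes work on one instance by taking a lock named after the instance UUID, plus a "-events" sibling lock guarding the pending-event queue. A sketch of the same pattern — the handler body is hypothetical; only the locking mirrors the lines above:

    from oslo_concurrency import lockutils

    def do_terminate_instance(instance_uuid):
        # Hypothetical stand-in for the real handler.
        with lockutils.lock(instance_uuid):
            with lockutils.lock(instance_uuid + "-events"):
                pass  # clear pending external events for this instance
            # ... shut down the guest, unplug VIFs, free resources ...

    # Decorator form, as in the fixed-name "_check_child_processes" lock
    # the ovn_metadata_agent lines earlier in this log show:
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass
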
Nov 29 03:41:25 np0005539550 kernel: tapd3028052-76 (unregistering): left promiscuous mode
Nov 29 03:41:25 np0005539550 NetworkManager[49039]: <info>  [1764405685.1490] device (tapd3028052-76): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.158 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:25Z|00884|binding|INFO|Releasing lport d3028052-76d1-49d3-9c2d-9126edc45935 from this chassis (sb_readonly=0)
Nov 29 03:41:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:25Z|00885|binding|INFO|Setting lport d3028052-76d1-49d3-9c2d-9126edc45935 down in Southbound
Nov 29 03:41:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:25Z|00886|binding|INFO|Removing iface tapd3028052-76 ovn-installed in OVS
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.161 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.165 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:60:ba 10.100.0.4'], port_security=['fa:16:3e:52:60:ba 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'db9a33b0-f745-4457-b7a6-d22017777a85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c114bc23-cd62-4198-a95d-5595953a88bd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62ca01275fe34ea0af31d00b34d6d9a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '9ca74358-0566-4f32-a6ba-a0c4dcd1723c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6cd3a0f0-9ad7-457d-b2e3-d5300cfee042, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d3028052-76d1-49d3-9c2d-9126edc45935) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.167 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d3028052-76d1-49d3-9c2d-9126edc45935 in datapath c114bc23-cd62-4198-a95d-5595953a88bd unbound from our chassis#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.168 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c114bc23-cd62-4198-a95d-5595953a88bd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.170 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[86bc9afb-95e3-4b92-8267-b0caaeea37a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.170 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd namespace which is not needed anymore#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.176 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539550 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000bb.scope: Deactivated successfully.
Nov 29 03:41:25 np0005539550 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d000000bb.scope: Consumed 19.156s CPU time.
Nov 29 03:41:25 np0005539550 systemd-machined[216673]: Machine qemu-101-instance-000000bb terminated.
Nov 29 03:41:25 np0005539550 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[373678]: [NOTICE]   (373686) : haproxy version is 2.8.14-c23fe91
Nov 29 03:41:25 np0005539550 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[373678]: [NOTICE]   (373686) : path to executable is /usr/sbin/haproxy
Nov 29 03:41:25 np0005539550 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[373678]: [WARNING]  (373686) : Exiting Master process...
Nov 29 03:41:25 np0005539550 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[373678]: [ALERT]    (373686) : Current worker (373695) exited with code 143 (Terminated)
Nov 29 03:41:25 np0005539550 neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd[373678]: [WARNING]  (373686) : All workers exited. Exiting... (0)
Nov 29 03:41:25 np0005539550 systemd[1]: libpod-391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec.scope: Deactivated successfully.
Nov 29 03:41:25 np0005539550 podman[376710]: 2025-11-29 08:41:25.2943773 +0000 UTC m=+0.041107181 container died 391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:41:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec-userdata-shm.mount: Deactivated successfully.
Nov 29 03:41:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-56667402d78935e6d38138c556e8a0adb793175078c438c7a9fb606791fad71a-merged.mount: Deactivated successfully.
Nov 29 03:41:25 np0005539550 podman[376710]: 2025-11-29 08:41:25.333621263 +0000 UTC m=+0.080351134 container cleanup 391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:25 np0005539550 systemd[1]: libpod-conmon-391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec.scope: Deactivated successfully.
Nov 29 03:41:25 np0005539550 podman[376737]: 2025-11-29 08:41:25.394239567 +0000 UTC m=+0.041422469 container remove 391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.400 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9545dca6-cd49-4d3a-b164-7709654bb443]: (4, ('Sat Nov 29 08:41:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd (391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec)\n391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec\nSat Nov 29 08:41:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd (391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec)\n391a1f9161f401187c06f883067489e49e22bb2bd83c2a25a07c55d360d629ec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.402 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5d59b7-20d0-4cf8-af6a-8d528a7f3e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.402 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc114bc23-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.409 257641 INFO nova.virt.libvirt.driver [-] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Instance destroyed successfully.#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.411 257641 DEBUG nova.objects.instance [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lazy-loading 'resources' on Instance uuid db9a33b0-f745-4457-b7a6-d22017777a85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.427 257641 DEBUG nova.virt.libvirt.vif [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:39:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-1460949178',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-1460949178',id=187,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFG6wdTr4YnTt5IOi90oQevRIaDEFT6evKD2WqzrA5InuHLLPBBDt+A3IDlfUfF0+VTQ8wx7jPD+CP0zgY5zll3JN5Id1HeD6V5ixHcQktu+0EcaYFcg2TVX8XapVterdw==',key_name='tempest-TestInstancesWithCinderVolumes-1193741997',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:39:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='62ca01275fe34ea0af31d00b34d6d9a5',ramdisk_id='',reservation_id='r-nbecgxbn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-911868990',owner_user_name='tempest-TestInstancesWithCinderVolumes-911868990-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:39:37Z,user_data=None,user_id='facf4db8501041ab9628ff9f5684c992',uuid=db9a33b0-f745-4457-b7a6-d22017777a85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.427 257641 DEBUG nova.network.os_vif_util [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converting VIF {"id": "d3028052-76d1-49d3-9c2d-9126edc45935", "address": "fa:16:3e:52:60:ba", "network": {"id": "c114bc23-cd62-4198-a95d-5595953a88bd", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1844463313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "62ca01275fe34ea0af31d00b34d6d9a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3028052-76", "ovs_interfaceid": "d3028052-76d1-49d3-9c2d-9126edc45935", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.428 257641 DEBUG nova.network.os_vif_util [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:60:ba,bridge_name='br-int',has_traffic_filtering=True,id=d3028052-76d1-49d3-9c2d-9126edc45935,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3028052-76') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.429 257641 DEBUG os_vif [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:60:ba,bridge_name='br-int',has_traffic_filtering=True,id=d3028052-76d1-49d3-9c2d-9126edc45935,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3028052-76') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.431 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.431 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3028052-76, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.435 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:25 np0005539550 kernel: tapc114bc23-c0: left promiscuous mode
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.437 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.454 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.458 257641 INFO os_vif [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:60:ba,bridge_name='br-int',has_traffic_filtering=True,id=d3028052-76d1-49d3-9c2d-9126edc45935,network=Network(c114bc23-cd62-4198-a95d-5595953a88bd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3028052-76')#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.458 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[78d75f5f-3094-4a88-97cc-6914cb198be0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.478 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eb19b5f0-4967-4eb2-aa98-1c78f6fc6d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.479 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[241273c6-fdec-4507-92f0-253eec1f775a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.494 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[67970480-63b0-4592-8a04-8d9f38f94cad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 862345, 'reachable_time': 30631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376776, 'error': None, 'target': 'ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 systemd[1]: run-netns-ovnmeta\x2dc114bc23\x2dcd62\x2d4198\x2da95d\x2d5595953a88bd.mount: Deactivated successfully.
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.496 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c114bc23-cd62-4198-a95d-5595953a88bd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:41:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:25.496 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[d9057e9c-c591-4b78-883f-c0f62cb438e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.553 257641 DEBUG nova.compute.manager [req-d3a0f831-44c5-4870-8279-ab29b8ce9024 req-c5bc09bc-bbed-46a1-a476-04e2344c72c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-vif-unplugged-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.553 257641 DEBUG oslo_concurrency.lockutils [req-d3a0f831-44c5-4870-8279-ab29b8ce9024 req-c5bc09bc-bbed-46a1-a476-04e2344c72c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.554 257641 DEBUG oslo_concurrency.lockutils [req-d3a0f831-44c5-4870-8279-ab29b8ce9024 req-c5bc09bc-bbed-46a1-a476-04e2344c72c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.554 257641 DEBUG oslo_concurrency.lockutils [req-d3a0f831-44c5-4870-8279-ab29b8ce9024 req-c5bc09bc-bbed-46a1-a476-04e2344c72c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.554 257641 DEBUG nova.compute.manager [req-d3a0f831-44c5-4870-8279-ab29b8ce9024 req-c5bc09bc-bbed-46a1-a476-04e2344c72c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] No waiting events found dispatching network-vif-unplugged-d3028052-76d1-49d3-9c2d-9126edc45935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.554 257641 DEBUG nova.compute.manager [req-d3a0f831-44c5-4870-8279-ab29b8ce9024 req-c5bc09bc-bbed-46a1-a476-04e2344c72c7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-vif-unplugged-d3028052-76d1-49d3-9c2d-9126edc45935 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.671 257641 INFO nova.virt.libvirt.driver [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Deleting instance files /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85_del#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.672 257641 INFO nova.virt.libvirt.driver [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Deletion of /var/lib/nova/instances/db9a33b0-f745-4457-b7a6-d22017777a85_del complete#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.719 257641 INFO nova.compute.manager [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.719 257641 DEBUG oslo.service.loopingcall [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.719 257641 DEBUG nova.compute.manager [-] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:41:25 np0005539550 nova_compute[257631]: 2025-11-29 08:41:25.719 257641 DEBUG nova.network.neutron [-] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:41:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:26.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:26 np0005539550 nova_compute[257631]: 2025-11-29 08:41:26.363 257641 DEBUG nova.network.neutron [-] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:26 np0005539550 nova_compute[257631]: 2025-11-29 08:41:26.381 257641 INFO nova.compute.manager [-] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Took 0.66 seconds to deallocate network for instance.#033[00m
Nov 29 03:41:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 541 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 80 KiB/s rd, 3.7 MiB/s wr, 117 op/s
Nov 29 03:41:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:26.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:26 np0005539550 nova_compute[257631]: 2025-11-29 08:41:26.693 257641 INFO nova.compute.manager [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:41:26 np0005539550 nova_compute[257631]: 2025-11-29 08:41:26.775 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:26 np0005539550 nova_compute[257631]: 2025-11-29 08:41:26.775 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:26 np0005539550 nova_compute[257631]: 2025-11-29 08:41:26.853 257641 DEBUG oslo_concurrency.processutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310980425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.278 257641 DEBUG oslo_concurrency.processutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.284 257641 DEBUG nova.compute.provider_tree [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.306 257641 DEBUG nova.scheduler.client.report [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.339 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.358 257641 INFO nova.scheduler.client.report [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Deleted allocations for instance db9a33b0-f745-4457-b7a6-d22017777a85#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.447 257641 DEBUG oslo_concurrency.lockutils [None req-24df2a25-28c7-4cf9-9c57-e2614e7c97bf facf4db8501041ab9628ff9f5684c992 62ca01275fe34ea0af31d00b34d6d9a5 - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.840 257641 DEBUG nova.compute.manager [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.840 257641 DEBUG oslo_concurrency.lockutils [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.841 257641 DEBUG oslo_concurrency.lockutils [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.841 257641 DEBUG oslo_concurrency.lockutils [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "db9a33b0-f745-4457-b7a6-d22017777a85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.841 257641 DEBUG nova.compute.manager [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] No waiting events found dispatching network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.841 257641 WARNING nova.compute.manager [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received unexpected event network-vif-plugged-d3028052-76d1-49d3-9c2d-9126edc45935 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:41:27 np0005539550 nova_compute[257631]: 2025-11-29 08:41:27.841 257641 DEBUG nova.compute.manager [req-08b20e10-3631-4003-ae18-6e286fac4e8b req-7cf745e4-5fde-4abb-8d02-851aaf69d8ae 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Received event network-vif-deleted-d3028052-76d1-49d3-9c2d-9126edc45935 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:28 np0005539550 nova_compute[257631]: 2025-11-29 08:41:28.017 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:28.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 773 KiB/s rd, 3.6 MiB/s wr, 128 op/s
Nov 29 03:41:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:28.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:28 np0005539550 podman[376828]: 2025-11-29 08:41:28.65060099 +0000 UTC m=+0.064128574 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:41:28 np0005539550 podman[376829]: 2025-11-29 08:41:28.667482577 +0000 UTC m=+0.079557144 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 03:41:29 np0005539550 nova_compute[257631]: 2025-11-29 08:41:29.466 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:30.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:30 np0005539550 nova_compute[257631]: 2025-11-29 08:41:30.436 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 773 KiB/s rd, 3.6 MiB/s wr, 128 op/s
Nov 29 03:41:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:30.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:30 np0005539550 nova_compute[257631]: 2025-11-29 08:41:30.664 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:32.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 551 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 219 op/s
Nov 29 03:41:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:32.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:34.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:34 np0005539550 nova_compute[257631]: 2025-11-29 08:41:34.468 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 496 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.5 MiB/s wr, 220 op/s
Nov 29 03:41:34 np0005539550 nova_compute[257631]: 2025-11-29 08:41:34.540 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:34.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:41:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/491592527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:41:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:41:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/491592527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:41:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:35 np0005539550 nova_compute[257631]: 2025-11-29 08:41:35.489 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:36.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 299 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 280 op/s
Nov 29 03:41:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:36.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Nov 29 03:41:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Nov 29 03:41:37 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Nov 29 03:41:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:38.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 265 op/s
Nov 29 03:41:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:38.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:38 np0005539550 nova_compute[257631]: 2025-11-29 08:41:38.829 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:39 np0005539550 podman[376899]: 2025-11-29 08:41:39.352755873 +0000 UTC m=+0.085200297 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:41:39 np0005539550 nova_compute[257631]: 2025-11-29 08:41:39.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:40 np0005539550 nova_compute[257631]: 2025-11-29 08:41:40.408 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405685.4077642, db9a33b0-f745-4457-b7a6-d22017777a85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:40 np0005539550 nova_compute[257631]: 2025-11-29 08:41:40.408 257641 INFO nova.compute.manager [-] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:41:40 np0005539550 nova_compute[257631]: 2025-11-29 08:41:40.427 257641 DEBUG nova.compute.manager [None req-bfbdf3b9-3023-4ad6-9e8d-008687a20619 - - - - - -] [instance: db9a33b0-f745-4457-b7a6-d22017777a85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:40 np0005539550 nova_compute[257631]: 2025-11-29 08:41:40.491 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 270 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 265 op/s
Nov 29 03:41:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:40.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:41Z|00887|binding|INFO|Releasing lport dbfc0c95-bb59-4f67-943f-4a79ab89b6c4 from this chassis (sb_readonly=0)
Nov 29 03:41:41 np0005539550 nova_compute[257631]: 2025-11-29 08:41:41.863 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:42.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.5 MiB/s wr, 208 op/s
Nov 29 03:41:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:42.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:44 np0005539550 nova_compute[257631]: 2025-11-29 08:41:44.471 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.6 MiB/s wr, 204 op/s
Nov 29 03:41:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:44.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Nov 29 03:41:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Nov 29 03:41:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Nov 29 03:41:45 np0005539550 nova_compute[257631]: 2025-11-29 08:41:45.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:46.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 320 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 521 KiB/s rd, 4.8 MiB/s wr, 172 op/s
Nov 29 03:41:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:46.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:46 np0005539550 nova_compute[257631]: 2025-11-29 08:41:46.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:46 np0005539550 nova_compute[257631]: 2025-11-29 08:41:46.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:41:46 np0005539550 nova_compute[257631]: 2025-11-29 08:41:46.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.116 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.116 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.117 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.117 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid eb84e8b8-cb17-4f34-8f44-41f2063a152a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:47.400 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:47.402 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:41:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:47.403 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:47Z|00888|binding|INFO|Releasing lport dbfc0c95-bb59-4f67-943f-4a79ab89b6c4 from this chassis (sb_readonly=0)
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.559 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.560 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.586 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.661 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.669 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.670 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.676 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.676 257641 INFO nova.compute.claims [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:41:47 np0005539550 nova_compute[257631]: 2025-11-29 08:41:47.778 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/981368804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.236 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.242 257641 DEBUG nova.compute.provider_tree [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.263 257641 DEBUG nova.scheduler.client.report [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.299 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.299 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:41:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 412 KiB/s rd, 3.4 MiB/s wr, 115 op/s
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.572 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.572 257641 DEBUG nova.network.neutron [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.608 257641 INFO nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:41:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:48.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.642 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.801 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.803 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.803 257641 INFO nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Creating image(s)#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.831 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.860 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.888 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.892 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.981 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.983 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.984 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:48 np0005539550 nova_compute[257631]: 2025-11-29 08:41:48.984 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.013 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.017 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.399 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.445 257641 DEBUG nova.policy [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ea98d0ceb3954515a9c726d0d32d30cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c80b570a17ee4094b96a75465fc34ae7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
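[annotation] The failed policy check above is a normal DEBUG-level outcome: the caller holds only the reader and member roles, and network:attach_external_network is admin-only, so Nova simply proceeds without external-network attach rights. A hedged sketch of the equivalent oslo.policy check; the 'role:admin' rule string is assumed for illustration, and deployments may override it:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    # Assumed default rule; real policy files may differ.
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'], 'is_admin': False}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the failed check logged above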
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.501 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] resizing rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
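[annotation] The resize target above is exactly the flavor's 1 GB root disk expressed in bytes (the m1.nano flavor with root_gb=1 appears in the _get_guest_xml lines further down):

    >>> 1 * 1024 ** 3   # root_gb=1, in bytes
    1073741824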
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.612 257641 DEBUG nova.objects.instance [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lazy-loading 'migration_context' on Instance uuid 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.630 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.630 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Ensure instance console log exists: /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.631 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.631 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.631 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.722 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updating instance_info_cache with network_info: [{"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.752 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.753 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.753 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.753 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.943 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.943 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:41:49 np0005539550 nova_compute[257631]: 2025-11-29 08:41:49.944 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770533325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.451 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
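[annotation] update_available_resource shells out to ceph df because the RBD image backend reports pool capacity rather than local filesystem space; the matching mon_command dispatch shows up in the ceph-mon audit lines just above. A minimal sketch of that probe, assuming the same client id and conf path and the standard ceph df JSON layout:

    import json
    import subprocess

    raw = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(raw)['stats']
    # Cluster-wide figures; Nova derives its DISK_GB inventory from these.
    print(stats['total_bytes'], stats['total_avail_bytes'])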
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.496 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 325 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 412 KiB/s rd, 3.4 MiB/s wr, 115 op/s
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.536 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.537 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000be as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:41:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:50.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.708 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.710 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=20.876445770263672GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.710 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.710 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.734 257641 DEBUG nova.network.neutron [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Successfully created port: 8853309e-7522-4c33-8ef8-68e265aa66bc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.779 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance eb84e8b8-cb17-4f34-8f44-41f2063a152a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.780 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.780 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.780 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
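[annotation] Cross-checking the arithmetic against the two tracked allocations above and the reserved memory in the inventory line below:

    used_ram   = 512 MB reserved + 2 x 128 MB = 768 MB
    used_vcpus = 2 x 1 VCPU  = 2 of 8
    used_disk  = 2 x 1 GB    = 2 GB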
Nov 29 03:41:50 np0005539550 nova_compute[257631]: 2025-11-29 08:41:50.869 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:41:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711235245' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:41:51 np0005539550 nova_compute[257631]: 2025-11-29 08:41:51.350 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:51 np0005539550 nova_compute[257631]: 2025-11-29 08:41:51.356 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:41:51 np0005539550 nova_compute[257631]: 2025-11-29 08:41:51.371 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
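[annotation] Placement derives schedulable capacity from this inventory as (total - reserved) x allocation_ratio per resource class:

    VCPU:      (8 - 0)      x 4.0 = 32 schedulable vCPUs
    MEMORY_MB: (7680 - 512) x 1.0 = 7168 MB
    DISK_GB:   (20 - 1)     x 0.9 = 17.1 GB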
Nov 29 03:41:51 np0005539550 nova_compute[257631]: 2025-11-29 08:41:51.401 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:41:51 np0005539550 nova_compute[257631]: 2025-11-29 08:41:51.402 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:41:52 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 85fb07af-56bc-47e5-addc-e9e9cc54c389 does not exist
Nov 29 03:41:52 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7795f7ee-ad7d-441d-acff-b45e158c42c9 does not exist
Nov 29 03:41:52 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 136802a6-7b43-463c-b2b3-a597cd926502 does not exist
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:41:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.247 257641 DEBUG nova.network.neutron [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Successfully updated port: 8853309e-7522-4c33-8ef8-68e265aa66bc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.263 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.263 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquired lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.264 257641 DEBUG nova.network.neutron [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:41:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:41:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.348 257641 DEBUG nova.compute.manager [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-changed-8853309e-7522-4c33-8ef8-68e265aa66bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.349 257641 DEBUG nova.compute.manager [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Refreshing instance network info cache due to event network-changed-8853309e-7522-4c33-8ef8-68e265aa66bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.349 257641 DEBUG oslo_concurrency.lockutils [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:41:52 np0005539550 nova_compute[257631]: 2025-11-29 08:41:52.425 257641 DEBUG nova.network.neutron [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:41:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 358 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 308 KiB/s rd, 4.0 MiB/s wr, 102 op/s
Nov 29 03:41:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:52 np0005539550 podman[377489]: 2025-11-29 08:41:52.647898 +0000 UTC m=+0.046610190 container create 54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:52 np0005539550 systemd[1]: Started libpod-conmon-54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f.scope.
Nov 29 03:41:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:41:52 np0005539550 podman[377489]: 2025-11-29 08:41:52.626394166 +0000 UTC m=+0.025106386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:52 np0005539550 podman[377489]: 2025-11-29 08:41:52.739771225 +0000 UTC m=+0.138483445 container init 54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:52 np0005539550 podman[377489]: 2025-11-29 08:41:52.747285485 +0000 UTC m=+0.145997675 container start 54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:52 np0005539550 podman[377489]: 2025-11-29 08:41:52.750507107 +0000 UTC m=+0.149219327 container attach 54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:52 np0005539550 optimistic_goldberg[377506]: 167 167
Nov 29 03:41:52 np0005539550 systemd[1]: libpod-54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f.scope: Deactivated successfully.
Nov 29 03:41:52 np0005539550 podman[377489]: 2025-11-29 08:41:52.755683218 +0000 UTC m=+0.154395408 container died 54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:41:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-dc75588481f12bf170a49a197b78fd54a64902f82233002ae31ce128ca25668a-merged.mount: Deactivated successfully.
Nov 29 03:41:52 np0005539550 podman[377489]: 2025-11-29 08:41:52.796297256 +0000 UTC m=+0.195009446 container remove 54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:41:52 np0005539550 systemd[1]: libpod-conmon-54a8368def63e48e7985a255da279bcb16ad9df81a05854026ea22841bc7250f.scope: Deactivated successfully.
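[annotation] The create -> init -> start -> attach -> died -> remove sequence above, all within roughly 150 ms, is the footprint of a one-shot helper container: cephadm routinely runs short-lived containers from the ceph image for probes, and the "167 167" printed by optimistic_goldberg looks like a uid/gid check (167 is the ceph uid/gid in the image; the exact probe command below is an assumption). The equivalent pattern, sketched with subprocess:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # `--rm` reproduces the immediate container-remove event seen above;
    # the stat probe is an assumption about what printed "167 167".
    subprocess.run(['podman', 'run', '--rm', IMAGE,
                    'stat', '-c', '%u %g', '/var/lib/ceph'], check=True)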
Nov 29 03:41:52 np0005539550 podman[377531]: 2025-11-29 08:41:52.956844219 +0000 UTC m=+0.040486766 container create e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:41:52 np0005539550 systemd[1]: Started libpod-conmon-e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc.scope.
Nov 29 03:41:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:41:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbad3787033fe069be951b551f2baebcbe5785954f24d309393161c79872913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbad3787033fe069be951b551f2baebcbe5785954f24d309393161c79872913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbad3787033fe069be951b551f2baebcbe5785954f24d309393161c79872913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbad3787033fe069be951b551f2baebcbe5785954f24d309393161c79872913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbad3787033fe069be951b551f2baebcbe5785954f24d309393161c79872913/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
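[annotation] The 0x7fffffff in the xfs remount warnings above is the 32-bit signed time_t maximum, i.e. the classic year-2038 limit for filesystems formatted without bigtime support:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch:
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00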
Nov 29 03:41:53 np0005539550 podman[377531]: 2025-11-29 08:41:52.940629148 +0000 UTC m=+0.024271715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:53 np0005539550 podman[377531]: 2025-11-29 08:41:53.038414173 +0000 UTC m=+0.122056740 container init e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:53 np0005539550 podman[377531]: 2025-11-29 08:41:53.054005767 +0000 UTC m=+0.137648354 container start e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:41:53 np0005539550 podman[377531]: 2025-11-29 08:41:53.058670765 +0000 UTC m=+0.142313332 container attach e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.402 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.547 257641 DEBUG nova.network.neutron [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Updating instance_info_cache with network_info: [{"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.576 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Releasing lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.577 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Instance network_info: |[{"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.577 257641 DEBUG oslo_concurrency.lockutils [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.577 257641 DEBUG nova.network.neutron [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Refreshing network info cache for port 8853309e-7522-4c33-8ef8-68e265aa66bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.580 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Start _get_guest_xml network_info=[{"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.584 257641 WARNING nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.590 257641 DEBUG nova.virt.libvirt.host [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.591 257641 DEBUG nova.virt.libvirt.host [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.595 257641 DEBUG nova.virt.libvirt.host [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.596 257641 DEBUG nova.virt.libvirt.host [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
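[annotation] The two probes above show Nova checking for a CPU controller first under cgroups v1 (absent on this host) and then under the unified v2 hierarchy, where it is found. A rough sketch of the v2 side of that detection, using the standard kernel interface rather than Nova's exact code:

    from pathlib import Path

    # On a cgroups-v2 host, the unified hierarchy lists available
    # controllers in a single space-separated file.
    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    if controllers.exists() and 'cpu' in controllers.read_text().split():
        print('CPU controller found via cgroups v2')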
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.597 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.597 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.597 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.598 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.598 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.598 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.598 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.598 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.598 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.599 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.599 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.599 257641 DEBUG nova.virt.hardware [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
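[annotation] With no topology constraints from the flavor or image (all limits logged as 0:0:0 against caps of 65536), the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A toy enumeration in the spirit of nova.virt.hardware, not its actual code:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple whose product is
        # exactly the vCPU count and respects the per-dimension caps.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]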
Nov 29 03:41:53 np0005539550 nova_compute[257631]: 2025-11-29 08:41:53.602 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:53 np0005539550 suspicious_beaver[377547]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:41:53 np0005539550 suspicious_beaver[377547]: --> relative data size: 1.0
Nov 29 03:41:53 np0005539550 suspicious_beaver[377547]: --> All data devices are unavailable
Nov 29 03:41:53 np0005539550 systemd[1]: libpod-e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc.scope: Deactivated successfully.
Nov 29 03:41:53 np0005539550 podman[377531]: 2025-11-29 08:41:53.926770363 +0000 UTC m=+1.010412910 container died e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:41:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562435216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.056 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.081 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
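nova.storage.rbd_utils probes for the config-drive image before building it, logging "rbd image ... does not exist" when the lookup fails. A hedged sketch of that existence check with the python-rados/python-rbd bindings, reusing the pool, client id, and image name from the surrounding lines:

    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                with rbd.Image(ioctx, '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config'):
                    print('image exists')
            except rbd.ImageNotFound:
                print('rbd image does not exist')  # the case logged here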
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.086 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
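The anonymous "HEAD / HTTP/1.0" requests that radosgw's beast frontend keeps logging are health probes from the other cluster nodes, each answered with 200 at near-zero latency. Reproducing one takes a few lines; the gateway host and port below are assumptions, since the log records only the probing client addresses:

    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=5)  # endpoint assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # a healthy gateway answers 200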
Nov 29 03:41:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4fbad3787033fe069be951b551f2baebcbe5785954f24d309393161c79872913-merged.mount: Deactivated successfully.
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.474 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 4.3 MiB/s wr, 80 op/s
Nov 29 03:41:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:41:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1701503169' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.640 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
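Both "ceph mon dump" subprocess calls above are how the RBD image backend discovers monitor addresses; the results feed the <host> elements in the libvirt disk XML further below. A standalone equivalent using subprocess with the exact logged command, then parsing the mon map (the 'mons'/'addr' keys reflect the usual mon-dump JSON layout and are not shown verbatim in this log):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'mon', 'dump', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out).get('mons', []):
        print(mon.get('name'), mon.get('addr'))  # e.g. 192.168.122.100:6789/0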
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.642 257641 DEBUG nova.virt.libvirt.vif [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-379179721',display_name='tempest-TestEncryptedCinderVolumes-server-379179721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-379179721',id=194,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU6R3weT3GiiYDt9m0b75PL/66gnHnnjWqCRS7wwU7olxjt6ZymuTuevhZjCJC5f1DoHCAQw/cmQ75aXPjgtF/klzEmcYrj3tPKcITrdCstg8d7yxofMGw1A9EhXJR6yA==',key_name='tempest-keypair-1489350560',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c80b570a17ee4094b96a75465fc34ae7',ramdisk_id='',reservation_id='r-s29d0ca2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2095799727',owner_user_name='tempest-TestEncryptedCinderVolumes-2095799727-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea98d0ceb3954515a9c726d0d32d30cb',uuid=60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.642 257641 DEBUG nova.network.os_vif_util [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Converting VIF {"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.643 257641 DEBUG nova.network.os_vif_util [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:80:54,bridge_name='br-int',has_traffic_filtering=True,id=8853309e-7522-4c33-8ef8-68e265aa66bc,network=Network(dbf070db-94dd-4ca0-b2e8-d618a9d52a6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8853309e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.645 257641 DEBUG nova.objects.instance [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:41:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:54.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:54 np0005539550 podman[377531]: 2025-11-29 08:41:54.669229101 +0000 UTC m=+1.752871658 container remove e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_beaver, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:54 np0005539550 systemd[1]: libpod-conmon-e43eb4b0d46f763c48d8d07fb710411be2f59927415cd6e4088583a5397013fc.scope: Deactivated successfully.
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.788 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <uuid>60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe</uuid>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <name>instance-000000c2</name>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestEncryptedCinderVolumes-server-379179721</nova:name>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:41:53</nova:creationTime>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:user uuid="ea98d0ceb3954515a9c726d0d32d30cb">tempest-TestEncryptedCinderVolumes-2095799727-project-member</nova:user>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:project uuid="c80b570a17ee4094b96a75465fc34ae7">tempest-TestEncryptedCinderVolumes-2095799727</nova:project>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <nova:port uuid="8853309e-7522-4c33-8ef8-68e265aa66bc">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <entry name="serial">60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe</entry>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <entry name="uuid">60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe</entry>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:1a:80:54"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <target dev="tap8853309e-75"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/console.log" append="off"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:41:54 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:41:54 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:41:54 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:41:54 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
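The <domain> document ending above is what _get_guest_xml hands to libvirt. A minimal sketch of that hand-off with the libvirt-python bindings; the XML file name is hypothetical, and Nova's real code path goes through its Guest wrapper rather than these raw calls:

    import libvirt

    xml = open('instance-000000c2.xml').read()  # the domain XML logged above
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # persist the definition
    dom.create()               # boot the guest
    conn.close()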
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.790 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Preparing to wait for external event network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.790 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.791 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.791 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.792 257641 DEBUG nova.virt.libvirt.vif [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-379179721',display_name='tempest-TestEncryptedCinderVolumes-server-379179721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-379179721',id=194,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU6R3weT3GiiYDt9m0b75PL/66gnHnnjWqCRS7wwU7olxjt6ZymuTuevhZjCJC5f1DoHCAQw/cmQ75aXPjgtF/klzEmcYrj3tPKcITrdCstg8d7yxofMGw1A9EhXJR6yA==',key_name='tempest-keypair-1489350560',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c80b570a17ee4094b96a75465fc34ae7',ramdisk_id='',reservation_id='r-s29d0ca2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-2095799727',owner_user_name='tempest-TestEncryptedCinderVolumes-2095799727-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:41:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea98d0ceb3954515a9c726d0d32d30cb',uuid=60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.792 257641 DEBUG nova.network.os_vif_util [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Converting VIF {"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.792 257641 DEBUG nova.network.os_vif_util [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:80:54,bridge_name='br-int',has_traffic_filtering=True,id=8853309e-7522-4c33-8ef8-68e265aa66bc,network=Network(dbf070db-94dd-4ca0-b2e8-d618a9d52a6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8853309e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.793 257641 DEBUG os_vif [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:80:54,bridge_name='br-int',has_traffic_filtering=True,id=8853309e-7522-4c33-8ef8-68e265aa66bc,network=Network(dbf070db-94dd-4ca0-b2e8-d618a9d52a6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8853309e-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.793 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.794 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.794 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.797 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.798 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8853309e-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.798 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8853309e-75, col_values=(('external_ids', {'iface-id': '8853309e-7522-4c33-8ef8-68e265aa66bc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:80:54', 'vm-uuid': '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:54 np0005539550 NetworkManager[49039]: <info>  [1764405714.8014] manager: (tap8853309e-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.802 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.806 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.807 257641 INFO os_vif [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:80:54,bridge_name='br-int',has_traffic_filtering=True,id=8853309e-7522-4c33-8ef8-68e265aa66bc,network=Network(dbf070db-94dd-4ca0-b2e8-d618a9d52a6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8853309e-75')#033[00m
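The AddBridgeCommand/AddPortCommand/DbSetCommand entries are os-vif driving ovsdbapp: ensure br-int exists, add the tap port, and stamp the Interface row with the iface-id/attached-mac/vm-uuid external_ids that OVN later uses to bind the port. A sketch of the same two-command transaction against a local ovsdb-server, not os-vif's actual plug path; the socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        'iface-id': '8853309e-7522-4c33-8ef8-68e265aa66bc',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:1a:80:54',
        'vm-uuid': '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe',
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap8853309e-75', may_exist=True))
        txn.add(api.db_set('Interface', 'tap8853309e-75',
                           ('external_ids', external_ids)))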
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.873 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.873 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.873 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] No VIF found with MAC fa:16:3e:1a:80:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.874 257641 INFO nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Using config drive#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.905 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:54 np0005539550 nova_compute[257631]: 2025-11-29 08:41:54.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:41:55 np0005539550 podman[377799]: 2025-11-29 08:41:55.289136088 +0000 UTC m=+0.036998607 container create 148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:41:55 np0005539550 systemd[1]: Started libpod-conmon-148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb.scope.
Nov 29 03:41:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:41:55 np0005539550 podman[377799]: 2025-11-29 08:41:55.272022895 +0000 UTC m=+0.019885444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.458 257641 INFO nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Creating config drive at /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/disk.config#033[00m
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.464 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6esol9_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:41:55 np0005539550 podman[377799]: 2025-11-29 08:41:55.490606816 +0000 UTC m=+0.238469335 container init 148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.495 257641 DEBUG nova.network.neutron [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Updated VIF entry in instance network info cache for port 8853309e-7522-4c33-8ef8-68e265aa66bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.496 257641 DEBUG nova.network.neutron [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Updating instance_info_cache with network_info: [{"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:41:55 np0005539550 podman[377799]: 2025-11-29 08:41:55.498482196 +0000 UTC m=+0.246344715 container start 148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:41:55 np0005539550 practical_yonath[377815]: 167 167
Nov 29 03:41:55 np0005539550 systemd[1]: libpod-148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb.scope: Deactivated successfully.
Nov 29 03:41:55 np0005539550 conmon[377815]: conmon 148fac74f13204995e69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb.scope/container/memory.events
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.519 257641 DEBUG oslo_concurrency.lockutils [req-f3fb2434-f4a7-4fb4-8956-e6b7781fe9a8 req-196e2e8c-3297-499a-b160-5b0d4bb66be9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:41:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.602 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6esol9_o" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.628 257641 DEBUG nova.storage.rbd_utils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] rbd image 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:41:55 np0005539550 nova_compute[257631]: 2025-11-29 08:41:55.631 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/disk.config 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
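Config-drive creation then reaches Ceph in the two steps logged above: mkisofs builds the ISO9660 image under /var/lib/nova/instances, and rbd import pushes it into the vms pool as <uuid>_disk.config. The same pair driven directly with subprocess, arguments copied from the log (the /tmp staging directory holds whatever metadata tree Nova rendered there):

    import subprocess

    iso = '/var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/disk.config'
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp6esol9_o'],
        check=True)
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso,
         '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config',
         '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True)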
Nov 29 03:41:55 np0005539550 podman[377799]: 2025-11-29 08:41:55.911324462 +0000 UTC m=+0.659187001 container attach 148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yonath, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:55 np0005539550 podman[377799]: 2025-11-29 08:41:55.911935407 +0000 UTC m=+0.659797936 container died 148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:41:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:56.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-810711b739b141236c46ff4ddb859cef61954e13a27938417ce690f1cbfd70a5-merged.mount: Deactivated successfully.
Nov 29 03:41:56 np0005539550 podman[377799]: 2025-11-29 08:41:56.498093531 +0000 UTC m=+1.245956040 container remove 148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:41:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 330 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 157 op/s
Nov 29 03:41:56 np0005539550 systemd[1]: libpod-conmon-148fac74f13204995e6970371c70ee1690968f559c69cdfef00340abd4328ddb.scope: Deactivated successfully.
Nov 29 03:41:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:56.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:41:56 np0005539550 podman[377877]: 2025-11-29 08:41:56.696566433 +0000 UTC m=+0.069872729 container create eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:56 np0005539550 systemd[1]: Started libpod-conmon-eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0.scope.
Nov 29 03:41:56 np0005539550 podman[377877]: 2025-11-29 08:41:56.651035711 +0000 UTC m=+0.024342027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:41:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4e895a7e2b8328c58716e2fc9fd12e3faaebbfc6a259c9bfca151eac602909/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4e895a7e2b8328c58716e2fc9fd12e3faaebbfc6a259c9bfca151eac602909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4e895a7e2b8328c58716e2fc9fd12e3faaebbfc6a259c9bfca151eac602909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c4e895a7e2b8328c58716e2fc9fd12e3faaebbfc6a259c9bfca151eac602909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:56 np0005539550 podman[377877]: 2025-11-29 08:41:56.807361757 +0000 UTC m=+0.180668073 container init eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:41:56 np0005539550 podman[377877]: 2025-11-29 08:41:56.815578505 +0000 UTC m=+0.188884801 container start eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:56 np0005539550 podman[377877]: 2025-11-29 08:41:56.818971891 +0000 UTC m=+0.192278187 container attach eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
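The four podman entries above trace the complete lifecycle of a short-lived cephadm helper container: create, init (once the overlay mounts settle), start, and attach. The same sequence can be watched live from the host; a minimal sketch, assuming a podman recent enough to support "podman events --format json" (the name practical_lamarr is the randomly generated container name seen above):

    import json
    import subprocess

    # Stream container lifecycle events from podman, one JSON object per
    # line; create/init/start/attach/died/remove match the journal above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        # Status, ID and Image are podman's JSON event field names.
        print(ev.get("Status"), ev.get("ID", "")[:12], ev.get("Image"))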
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]: {
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:    "0": [
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:        {
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "devices": [
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "/dev/loop3"
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            ],
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "lv_name": "ceph_lv0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "lv_size": "7511998464",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "name": "ceph_lv0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "tags": {
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.cluster_name": "ceph",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.crush_device_class": "",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.encrypted": "0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.osd_id": "0",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.type": "block",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:                "ceph.vdo": "0"
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            },
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "type": "block",
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:            "vg_name": "ceph_vg0"
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:        }
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]:    ]
Nov 29 03:41:57 np0005539550 practical_lamarr[377894]: }
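The JSON printed by practical_lamarr above is ceph-volume's inventory of this host's OSD logical volumes, keyed by OSD id ("0") with one entry per LV. A minimal sketch of consuming it, assuming it comes from "ceph-volume lvm list --format json" (the command itself is not shown in the journal):

    import json
    import subprocess

    # Assumed source of the JSON above: ceph-volume's LVM inventory.
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Shape as seen in the journal: {"0": [{lv_path, devices, tags, ...}]}
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']} "
                  f"osd_fsid={tags['ceph.osd_fsid']}")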
Nov 29 03:41:57 np0005539550 systemd[1]: libpod-eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0.scope: Deactivated successfully.
Nov 29 03:41:57 np0005539550 podman[377877]: 2025-11-29 08:41:57.621754936 +0000 UTC m=+0.995061232 container died eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:41:57 np0005539550 nova_compute[257631]: 2025-11-29 08:41:57.653 257641 DEBUG oslo_concurrency.processutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/disk.config 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:41:57 np0005539550 nova_compute[257631]: 2025-11-29 08:41:57.654 257641 INFO nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Deleting local config drive /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe/disk.config because it was imported into RBD.#033[00m
Nov 29 03:41:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4c4e895a7e2b8328c58716e2fc9fd12e3faaebbfc6a259c9bfca151eac602909-merged.mount: Deactivated successfully.
Nov 29 03:41:57 np0005539550 virtqemud[256287]: End of file while reading data: Input/output error
Nov 29 03:41:57 np0005539550 podman[377877]: 2025-11-29 08:41:57.690125336 +0000 UTC m=+1.063431632 container remove eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:41:57 np0005539550 systemd[1]: libpod-conmon-eeacfa328be472bbe6d4e3f8d46d5b8d43ce94c1257f055279da3c9ea364ffe0.scope: Deactivated successfully.
Nov 29 03:41:57 np0005539550 kernel: tap8853309e-75: entered promiscuous mode
Nov 29 03:41:57 np0005539550 NetworkManager[49039]: <info>  [1764405717.7027] manager: (tap8853309e-75): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Nov 29 03:41:57 np0005539550 systemd-udevd[377927]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:41:57 np0005539550 nova_compute[257631]: 2025-11-29 08:41:57.756 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:57Z|00889|binding|INFO|Claiming lport 8853309e-7522-4c33-8ef8-68e265aa66bc for this chassis.
Nov 29 03:41:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:57Z|00890|binding|INFO|8853309e-7522-4c33-8ef8-68e265aa66bc: Claiming fa:16:3e:1a:80:54 10.100.0.13
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.764 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:80:54 10.100.0.13'], port_security=['fa:16:3e:1a:80:54 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c80b570a17ee4094b96a75465fc34ae7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5aef7224-6192-4b62-9083-c46c2faa4d85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63116488-2ad3-4f39-b89c-533f5dfc7be4, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8853309e-7522-4c33-8ef8-68e265aa66bc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.765 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8853309e-7522-4c33-8ef8-68e265aa66bc in datapath dbf070db-94dd-4ca0-b2e8-d618a9d52a6c bound to our chassis#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.767 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dbf070db-94dd-4ca0-b2e8-d618a9d52a6c#033[00m
Nov 29 03:41:57 np0005539550 NetworkManager[49039]: <info>  [1764405717.7747] device (tap8853309e-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:41:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:57Z|00891|binding|INFO|Setting lport 8853309e-7522-4c33-8ef8-68e265aa66bc ovn-installed in OVS
Nov 29 03:41:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:57Z|00892|binding|INFO|Setting lport 8853309e-7522-4c33-8ef8-68e265aa66bc up in Southbound
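ovn-controller has now claimed the logical port for this chassis, installed it in OVS, and marked it up in the Southbound database. A hedged sketch of verifying that binding from the host, using the generic "find" database command and assuming ovn-sbctl on this node can reach the SB DB:

    import subprocess

    # Hypothetical verification of the claim logged above.
    lport = "8853309e-7522-4c33-8ef8-68e265aa66bc"
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         f"logical_port={lport}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)  # expect a non-empty chassis and up=true once bound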
Nov 29 03:41:57 np0005539550 NetworkManager[49039]: <info>  [1764405717.7764] device (tap8853309e-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:41:57 np0005539550 nova_compute[257631]: 2025-11-29 08:41:57.776 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.781 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a0612f93-9f05-49a1-acd5-a6bae9720d7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.782 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdbf070db-91 in ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
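The privsep daemon performs the veth plumbing the agent logs here. A minimal pyroute2 sketch of the equivalent operation (not neutron's actual code path): create the tapdbf070db-90/91 pair on the host and move the -91 end into the ovnmeta- namespace; names are copied from the journal, and idempotence and cleanup are omitted:

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c"
    netns.create(ns)  # raises if the namespace already exists; not handled
    with IPRoute() as ipr:
        # Create the veth pair on the host...
        ipr.link("add", ifname="tapdbf070db-90", kind="veth",
                 peer="tapdbf070db-91")
        # ...then push the -91 end into the metadata namespace.
        idx = ipr.link_lookup(ifname="tapdbf070db-91")[0]
        ipr.link("set", index=idx, net_ns_fd=ns)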
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.784 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdbf070db-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.784 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[875d4e67-5508-4cf5-b3a8-ab103b23d883]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.787 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0378b05a-2d13-49b2-a35e-42bae8d5e37f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 systemd-machined[216673]: New machine qemu-104-instance-000000c2.
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.797 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[1afcdca0-4707-4106-b7d2-78d03f246273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 systemd[1]: Started Virtual Machine qemu-104-instance-000000c2.
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.810 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e1870770-b38b-4139-9629-6e3a60050665]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.849 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c37cd695-52b3-47b7-ba08-a8f893f96ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 NetworkManager[49039]: <info>  [1764405717.8571] manager: (tapdbf070db-90): new Veth device (/org/freedesktop/NetworkManager/Devices/392)
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.856 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccf23d4-6889-4229-8810-56bca0b76ece]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.897 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[343c637a-b7a2-4af6-bf98-ecea524d261a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.900 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[742c7338-7563-40a7-91e2-335282291cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:57 np0005539550 NetworkManager[49039]: <info>  [1764405717.9227] device (tapdbf070db-90): carrier: link connected
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.929 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5cedce-fdc3-45fe-b63a-54b70de3db1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.948 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[90e3fa63-da2e-486e-9fae-346909c2c301]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbf070db-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:2d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876558, 'reachable_time': 37312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378034, 'error': None, 'target': 'ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.969 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4a6323a6-e3ea-41d8-ac70-849ea224dfb8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:2dbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 876558, 'tstamp': 876558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378038, 'error': None, 'target': 'ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:57.988 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ad216c-bbde-40c2-9851-64a040a100f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdbf070db-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:2d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876558, 'reachable_time': 37312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378042, 'error': None, 'target': 'ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
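The two privsep replies above dump RTM_NEWLINK messages in pyroute2's dict form, where attributes arrive as [name, value] pairs (nested for IFLA_AF_SPEC and similar). A small self-contained helper for pulling fields out of such dumps; the link_msg literal below is a trimmed, hypothetical stand-in for the full message:

    def get_attr(msg, name):
        """Return the first attribute named 'name', or None."""
        for key, value in msg.get("attrs", []):
            if key == name:
                return value
        return None

    link_msg = {"attrs": [["IFLA_IFNAME", "tapdbf070db-91"],
                          ["IFLA_ADDRESS", "fa:16:3e:2c:2d:bf"],
                          ["IFLA_OPERSTATE", "UP"]]}
    assert get_attr(link_msg, "IFLA_IFNAME") == "tapdbf070db-91"
    assert get_attr(link_msg, "IFLA_OPERSTATE") == "UP"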
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.020 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5053eb-321d-4942-9e09-b9e47db2be1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.080 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b8324b88-bd83-414c-9d58-7f6bbecbe8c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.082 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbf070db-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.082 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.082 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdbf070db-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:58 np0005539550 NetworkManager[49039]: <info>  [1764405718.0857] manager: (tapdbf070db-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/393)
Nov 29 03:41:58 np0005539550 kernel: tapdbf070db-90: entered promiscuous mode
Nov 29 03:41:58 np0005539550 nova_compute[257631]: 2025-11-29 08:41:58.085 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.088 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdbf070db-90, col_values=(('external_ids', {'iface-id': 'ebcef6fc-c09d-47a6-b520-0811a6aede28'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:41:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:58Z|00893|binding|INFO|Releasing lport ebcef6fc-c09d-47a6-b520-0811a6aede28 from this chassis (sb_readonly=0)
Nov 29 03:41:58 np0005539550 nova_compute[257631]: 2025-11-29 08:41:58.089 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:58 np0005539550 nova_compute[257631]: 2025-11-29 08:41:58.103 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.105 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dbf070db-94dd-4ca0-b2e8-d618a9d52a6c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dbf070db-94dd-4ca0-b2e8-d618a9d52a6c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.106 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7573a2-6dda-4532-b291-703687710404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.107 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/dbf070db-94dd-4ca0-b2e8-d618a9d52a6c.pid.haproxy
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID dbf070db-94dd-4ca0-b2e8-d618a9d52a6c
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:41:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:41:58.107 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'env', 'PROCESS_TAG=haproxy-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dbf070db-94dd-4ca0-b2e8-d618a9d52a6c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
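In the generated config above, the frontend binds the link-local metadata address 169.254.169.254:80 inside the namespace, and the lone "server metadata /var/lib/neutron/metadata_proxy" line points haproxy at a UNIX socket path rather than a TCP backend. The rootwrap invocation just logged amounts to the following direct sketch, which assumes root and skips neutron-rootwrap and the PROCESS_TAG wrapper:

    import subprocess

    ns = "ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c"
    cfg = ("/var/lib/neutron/ovn-metadata-proxy/"
           "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c.conf")
    # haproxy daemonises itself ("daemon" in the global section above).
    subprocess.run(["ip", "netns", "exec", ns, "haproxy", "-f", cfg],
                   check=True)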
Nov 29 03:41:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:41:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:41:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:41:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:41:58.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
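The beast access-log lines from radosgw follow a fixed shape: request pointer, client IP, user, timestamp, request line, status, byte count, then a latency field repeated from the "req done" entry. A sketch of parsing them, using the line above as test input; the regex is an assumption fitted to these samples, not a documented format:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'(?P<nbytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )
    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:41:58.273 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))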
Nov 29 03:41:58 np0005539550 podman[378113]: 2025-11-29 08:41:58.363931987 +0000 UTC m=+0.038562647 container create faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:41:58 np0005539550 systemd[1]: Started libpod-conmon-faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162.scope.
Nov 29 03:41:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:41:58 np0005539550 podman[378113]: 2025-11-29 08:41:58.348402514 +0000 UTC m=+0.023033194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:58 np0005539550 podman[378113]: 2025-11-29 08:41:58.445510171 +0000 UTC m=+0.120140851 container init faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_payne, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:41:58 np0005539550 podman[378113]: 2025-11-29 08:41:58.455008832 +0000 UTC m=+0.129639492 container start faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_payne, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:41:58 np0005539550 podman[378113]: 2025-11-29 08:41:58.458042208 +0000 UTC m=+0.132672888 container attach faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:41:58 np0005539550 systemd[1]: libpod-faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162.scope: Deactivated successfully.
Nov 29 03:41:58 np0005539550 wizardly_payne[378146]: 167 167
Nov 29 03:41:58 np0005539550 conmon[378146]: conmon faeaa3ffdd38652e675b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162.scope/container/memory.events
Nov 29 03:41:58 np0005539550 podman[378113]: 2025-11-29 08:41:58.46402679 +0000 UTC m=+0.138657450 container died faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_payne, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:41:58 np0005539550 podman[378154]: 2025-11-29 08:41:58.474762651 +0000 UTC m=+0.046769244 container create 69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:41:58 np0005539550 systemd[1]: Started libpod-conmon-69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd.scope.
Nov 29 03:41:58 np0005539550 podman[378113]: 2025-11-29 08:41:58.509577893 +0000 UTC m=+0.184208553 container remove faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_payne, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 287 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 140 op/s
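The pgmap digest above (305 active+clean PGs, 1.5 GiB used of 21 GiB) is also available structurally. A sketch, with the caveat that the pgmap field names are assumed from comparable Reef-era "ceph status --format json" output and should be checked against the running build:

    import json
    import subprocess

    st = json.loads(subprocess.run(
        ["ceph", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)
    pg = st["pgmap"]  # field names assumed, see note above
    print(f"{pg['num_pgs']} pgs, "
          f"{pg['bytes_used'] / 2**30:.1f} GiB used / "
          f"{pg['bytes_total'] / 2**30:.1f} GiB total")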
Nov 29 03:41:58 np0005539550 systemd[1]: libpod-conmon-faeaa3ffdd38652e675b42bab2f76d4288135e60d02ff70955b3a3cf3e3dc162.scope: Deactivated successfully.
Nov 29 03:41:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:41:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a04837d31128987c98bb463b4a418cc3ebad397a071b829ea1a1ffe40fc59bd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:58 np0005539550 podman[378154]: 2025-11-29 08:41:58.450673042 +0000 UTC m=+0.022679665 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:41:58 np0005539550 podman[378154]: 2025-11-29 08:41:58.553772821 +0000 UTC m=+0.125779444 container init 69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 03:41:58 np0005539550 podman[378154]: 2025-11-29 08:41:58.559298751 +0000 UTC m=+0.131305354 container start 69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:41:58 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [NOTICE]   (378184) : New worker (378187) forked
Nov 29 03:41:58 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [NOTICE]   (378184) : Loading success.
Nov 29 03:41:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:41:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:41:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:41:58.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:41:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-529ff21ce1dd583b1d0b7d2ddca2fcaa09417a82b2649663f2ed7b0130d8e55e-merged.mount: Deactivated successfully.
Nov 29 03:41:58 np0005539550 podman[378201]: 2025-11-29 08:41:58.674988238 +0000 UTC m=+0.039856829 container create e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:41:58 np0005539550 systemd[1]: Started libpod-conmon-e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a.scope.
Nov 29 03:41:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:41:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e21aac9a5d76270f3330ceeb8926b202ede8d56283dddb76f71e97589b10b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e21aac9a5d76270f3330ceeb8926b202ede8d56283dddb76f71e97589b10b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e21aac9a5d76270f3330ceeb8926b202ede8d56283dddb76f71e97589b10b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e21aac9a5d76270f3330ceeb8926b202ede8d56283dddb76f71e97589b10b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:41:58 np0005539550 podman[378201]: 2025-11-29 08:41:58.656948292 +0000 UTC m=+0.021816913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:41:58 np0005539550 podman[378214]: 2025-11-29 08:41:58.76158676 +0000 UTC m=+0.067027187 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:41:58 np0005539550 podman[378201]: 2025-11-29 08:41:58.772520316 +0000 UTC m=+0.137388907 container init e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:41:58 np0005539550 podman[378201]: 2025-11-29 08:41:58.779430891 +0000 UTC m=+0.144299482 container start e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:58 np0005539550 podman[378201]: 2025-11-29 08:41:58.786516841 +0000 UTC m=+0.151385462 container attach e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:41:58 np0005539550 podman[378216]: 2025-11-29 08:41:58.789011734 +0000 UTC m=+0.096225746 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
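Both health_status=healthy entries come from podman's periodic healthcheck timer running the configured '/openstack/healthcheck' test inside each container. The same check can be triggered on demand; a minimal sketch, assuming the documented convention that "podman healthcheck run" exits 0 when healthy:

    import subprocess

    for name in ("multipathd", "ovn_metadata_agent"):
        # Runs the container's own healthcheck test once, on demand.
        rc = subprocess.run(["podman", "healthcheck", "run", name],
                            capture_output=True).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")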
Nov 29 03:41:58 np0005539550 nova_compute[257631]: 2025-11-29 08:41:58.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.093 257641 DEBUG nova.compute.manager [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.094 257641 DEBUG oslo_concurrency.lockutils [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.094 257641 DEBUG oslo_concurrency.lockutils [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.094 257641 DEBUG oslo_concurrency.lockutils [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.094 257641 DEBUG nova.compute.manager [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Processing event network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.095 257641 DEBUG nova.compute.manager [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.095 257641 DEBUG oslo_concurrency.lockutils [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.095 257641 DEBUG oslo_concurrency.lockutils [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.095 257641 DEBUG oslo_concurrency.lockutils [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.095 257641 DEBUG nova.compute.manager [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] No waiting events found dispatching network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.095 257641 WARNING nova.compute.manager [req-4269c978-c36e-43e5-823f-1c5b6be151e9 req-0a9b5ed9-d666-4a4c-acc2-ce677b139c00 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received unexpected event network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc for instance with vm_state building and task_state spawning.#033[00m
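
The network-vif-plugged events above arrive through Nova's external-events API, which Neutron calls when a port becomes active; the WARNING is benign here because the second copy of the event raced ahead of any registered waiter while the instance was still spawning. A sketch of the payload Neutron posts to os-server-external-events, with UUIDs copied from the log (authentication and endpoint details omitted):

    # Body shape of POST /v2.1/os-server-external-events (Nova compute API).
    payload = {
        "events": [{
            "name": "network-vif-plugged",
            "server_uuid": "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe",
            "tag": "8853309e-7522-4c33-8ef8-68e265aa66bc",  # the Neutron port ID
            "status": "completed",
        }]
    }
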
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.225 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.226 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405719.224758, 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.226 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] VM Started (Lifecycle Event)#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.232 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.236 257641 INFO nova.virt.libvirt.driver [-] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Instance spawned successfully.#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.236 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.247 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.254 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.257 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.257 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.258 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.258 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.258 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.259 257641 DEBUG nova.virt.libvirt.driver [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.296 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
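
In the sync_power_state lines, the numeric states come from nova.compute.power_state: the database still records NOSTATE while libvirt already reports the guest RUNNING, so the sync is skipped until the spawn task finishes. The constants, reproduced here for decoding "DB power_state: 0, VM power_state: 1":

    # Values from nova.compute.power_state (listed for reference only).
    POWER_STATES = {
        0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
        4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED',
    }
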
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.296 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405719.2259297, 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.296 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.332 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.334 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405719.2310991, 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.334 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.362 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.367 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.371 257641 INFO nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Took 10.57 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.371 257641 DEBUG nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.417 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:41:59
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr']
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
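
The mgr balancer module evaluated all eleven pools in upmap mode and found nothing worth moving: "prepared 0/10 changes" means zero of the up-to-ten allowed upmap changes were generated, consistent with the 305 active+clean PGs reported below. The same state can be inspected from the CLI; a sketch, assuming a working ceph client and admin keyring on the host:

    import subprocess
    # `ceph balancer status` shows the active mode and last optimization;
    # `ceph balancer eval` scores how balanced the current distribution is.
    for cmd in (["ceph", "balancer", "status"], ["ceph", "balancer", "eval"]):
        print(subprocess.run(cmd, capture_output=True, text=True).stdout)
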
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.458 257641 INFO nova.compute.manager [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Took 11.81 seconds to build instance.#033[00m
Nov 29 03:41:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:59Z|00894|binding|INFO|Releasing lport ebcef6fc-c09d-47a6-b520-0811a6aede28 from this chassis (sb_readonly=0)
Nov 29 03:41:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:41:59Z|00895|binding|INFO|Releasing lport dbfc0c95-bb59-4f67-943f-4a79ab89b6c4 from this chassis (sb_readonly=0)
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.478 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.486 257641 DEBUG oslo_concurrency.lockutils [None req-e935a33c-ba52-48b2-99e3-d4b45dfa8f19 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.552 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539550 sad_khorana[378241]: {
Nov 29 03:41:59 np0005539550 sad_khorana[378241]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:41:59 np0005539550 sad_khorana[378241]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:41:59 np0005539550 sad_khorana[378241]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:41:59 np0005539550 sad_khorana[378241]:        "osd_id": 0,
Nov 29 03:41:59 np0005539550 sad_khorana[378241]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:41:59 np0005539550 sad_khorana[378241]:        "type": "bluestore"
Nov 29 03:41:59 np0005539550 sad_khorana[378241]:    }
Nov 29 03:41:59 np0005539550 sad_khorana[378241]: }
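
The JSON block printed by the short-lived ceph container (sad_khorana is a podman auto-generated name) maps an OSD uuid to its device metadata. The key set resembles what `ceph-volume raw list` emits, though the invoking command is not visible in these lines, so that attribution is an assumption. Parsing the payload as logged:

    import json
    # Verbatim payload from the sad_khorana lines above.
    raw = '''{"5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore"}}'''
    for osd_uuid, info in json.loads(raw).items():
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}")
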
Nov 29 03:41:59 np0005539550 systemd[1]: libpod-e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a.scope: Deactivated successfully.
Nov 29 03:41:59 np0005539550 podman[378201]: 2025-11-29 08:41:59.685380356 +0000 UTC m=+1.050248947 container died e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:41:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-14e21aac9a5d76270f3330ceeb8926b202ede8d56283dddb76f71e97589b10b8-merged.mount: Deactivated successfully.
Nov 29 03:41:59 np0005539550 podman[378201]: 2025-11-29 08:41:59.741043844 +0000 UTC m=+1.105912435 container remove e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_khorana, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:41:59 np0005539550 systemd[1]: libpod-conmon-e1f69c67180bb5e650ecd1ff1786127738cbb7f13ca9ca01b42409b64c1ec43a.scope: Deactivated successfully.
Nov 29 03:41:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:41:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:41:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:41:59 np0005539550 nova_compute[257631]: 2025-11-29 08:41:59.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:41:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fac9d018-92e8-4f09-af5b-9e15e316b22e does not exist
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev eba038f0-3900-4d9d-9cbe-ea71d113e105 does not exist
Nov 29 03:41:59 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2dd9aaf1-9573-41db-9470-96e3053756c0 does not exist
Nov 29 03:42:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 287 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Nov 29 03:42:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:00.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
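
The anonymous "HEAD / HTTP/1.0" requests arriving every couple of seconds from 192.168.122.100 and .102 are load-balancer health probes against the radosgw beast frontend. A minimal reproduction, assuming a gateway address and port of rgw.example:8080 (neither is visible in these lines, so both are placeholders):

    import http.client
    # A healthy radosgw answers HEAD / with 200 and an empty body, which is
    # exactly what the beast access lines above record.
    conn = http.client.HTTPConnection("rgw.example", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)
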
Nov 29 03:42:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:42:00 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:42:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:42:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1708 writes, 7633 keys, 1708 commit groups, 1.0 writes per commit group, ingest: 11.05 MB, 0.02 MB/s#012Interval WAL: 1708 writes, 1708 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     34.4      2.38              0.26        41    0.058       0      0       0.0       0.0#012  L6      1/0   10.82 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.3     39.6     34.0     12.75              1.27        40    0.319    294K    21K       0.0       0.0#012 Sum      1/0   10.82 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.3     33.4     34.1     15.13              1.54        81    0.187    294K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8     60.7     60.9      1.35              0.24        12    0.112     58K   3150       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     39.6     34.0     12.75              1.27        40    0.319    294K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     34.5      2.37              0.26        40    0.059       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.080, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.50 GB write, 0.10 MB/s write, 0.49 GB read, 0.09 MB/s read, 15.1 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 53.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000421 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3066,51.68 MB,16.9988%) FilterBlock(82,815.73 KB,0.262045%) IndexBlock(82,1.37 MB,0.449417%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
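
The #012 and #033 runs inside messages like the RocksDB dump above are control characters escaped as three-digit octal codes (#012 is a newline, #033 is ESC for the ANSI color resets trailing the nova lines). A small helper to restore them when post-processing these logs:

    import re

    def unescape_octal(line: str) -> str:
        # '#NNN' -> the character with that octal code, e.g. '#012' -> '\n'.
        return re.sub(r'#(\d{3})', lambda m: chr(int(m.group(1), 8)), line)
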
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.337 257641 DEBUG nova.compute.manager [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-changed-8853309e-7522-4c33-8ef8-68e265aa66bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.337 257641 DEBUG nova.compute.manager [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Refreshing instance network info cache due to event network-changed-8853309e-7522-4c33-8ef8-68e265aa66bc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.338 257641 DEBUG oslo_concurrency.lockutils [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.338 257641 DEBUG oslo_concurrency.lockutils [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.338 257641 DEBUG nova.network.neutron [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Refreshing network info cache for port 8853309e-7522-4c33-8ef8-68e265aa66bc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.341 257641 DEBUG nova.compute.manager [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-changed-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.341 257641 DEBUG nova.compute.manager [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Refreshing instance network info cache due to event network-changed-3026e417-ee61-4f08-8c26-06f78013fc48. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.341 257641 DEBUG oslo_concurrency.lockutils [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.341 257641 DEBUG oslo_concurrency.lockutils [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.341 257641 DEBUG nova.network.neutron [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Refreshing network info cache for port 3026e417-ee61-4f08-8c26-06f78013fc48 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.409 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.409 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.410 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.410 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.410 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.411 257641 INFO nova.compute.manager [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Terminating instance#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.412 257641 DEBUG nova.compute.manager [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:42:01 np0005539550 kernel: tap3026e417-ee (unregistering): left promiscuous mode
Nov 29 03:42:01 np0005539550 NetworkManager[49039]: <info>  [1764405721.4685] device (tap3026e417-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.538 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:01Z|00896|binding|INFO|Releasing lport 3026e417-ee61-4f08-8c26-06f78013fc48 from this chassis (sb_readonly=0)
Nov 29 03:42:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:01Z|00897|binding|INFO|Setting lport 3026e417-ee61-4f08-8c26-06f78013fc48 down in Southbound
Nov 29 03:42:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:01Z|00898|binding|INFO|Removing iface tap3026e417-ee ovn-installed in OVS
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.540 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.545 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:10:6a 10.100.0.11'], port_security=['fa:16:3e:1e:10:6a 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'eb84e8b8-cb17-4f34-8f44-41f2063a152a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bbe9de28-a3ca-45b3-82e4-243328809a43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a30ef2cc-c9d3-4e8a-9b25-77976762a1b1 de70ef26-d410-46da-a8db-c626d12e2323', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5b4d98b-ad08-4e05-9935-0f27596c26ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=3026e417-ee61-4f08-8c26-06f78013fc48) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.547 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 3026e417-ee61-4f08-8c26-06f78013fc48 in datapath bbe9de28-a3ca-45b3-82e4-243328809a43 unbound from our chassis#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.548 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bbe9de28-a3ca-45b3-82e4-243328809a43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.549 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0d88f28d-ebc5-4b37-bee2-cf66c3648efa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.549 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43 namespace which is not needed anymore#033[00m
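
The metadata agent reacts to that Port_Binding row change through ovsdbapp's row-event machinery; the matched-event line even prints the constructor arguments (events=('update',), table='Port_Binding'). A minimal sketch of such a handler, assuming ovsdbapp is installed (the class name and body are illustrative, not neutron's actual implementation):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortUnboundEvent(row_event.RowEvent):
        def __init__(self):
            # Fire on updates to Southbound Port_Binding rows.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Receives the new row plus the previous values of changed
            # columns; the agent tears the namespace down when the port's
            # chassis is cleared, as in the lines above.
            print('port', row.logical_port, 'changed')
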
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.562 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000be.scope: Deactivated successfully.
Nov 29 03:42:01 np0005539550 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d000000be.scope: Consumed 16.548s CPU time.
Nov 29 03:42:01 np0005539550 systemd-machined[216673]: Machine qemu-103-instance-000000be terminated.
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.639 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.642 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.652 257641 INFO nova.virt.libvirt.driver [-] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Instance destroyed successfully.#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.653 257641 DEBUG nova.objects.instance [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid eb84e8b8-cb17-4f34-8f44-41f2063a152a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.665 257641 DEBUG nova.virt.libvirt.vif [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:40:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1429604794',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-1429604794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=190,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDNZ+BnKkVfjY6of6GMTavtr+ZOiNYfMudtVmziwMebSFDNerpQamEFJH3x+Vl9cRJZbecAD6+Xjl6aHihOigg85spjIzHavMMidp+NkOyN3qdZUTPVsH+OVXf+hEueHIw==',key_name='tempest-TestSecurityGroupsBasicOps-165052372',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:40:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-nfdbbpsd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:40:45Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=eb84e8b8-cb17-4f34-8f44-41f2063a152a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.665 257641 DEBUG nova.network.os_vif_util [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.666 257641 DEBUG nova.network.os_vif_util [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:10:6a,bridge_name='br-int',has_traffic_filtering=True,id=3026e417-ee61-4f08-8c26-06f78013fc48,network=Network(bbe9de28-a3ca-45b3-82e4-243328809a43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3026e417-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.667 257641 DEBUG os_vif [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:10:6a,bridge_name='br-int',has_traffic_filtering=True,id=3026e417-ee61-4f08-8c26-06f78013fc48,network=Network(bbe9de28-a3ca-45b3-82e4-243328809a43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3026e417-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.671 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.672 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3026e417-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.674 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.675 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.677 257641 INFO os_vif [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:10:6a,bridge_name='br-int',has_traffic_filtering=True,id=3026e417-ee61-4f08-8c26-06f78013fc48,network=Network(bbe9de28-a3ca-45b3-82e4-243328809a43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3026e417-ee')#033[00m
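
The DelPortCommand transaction above is os-vif driving ovsdbapp's Open_vSwitch API, equivalent to `ovs-vsctl --if-exists del-port br-int tap3026e417-ee`. A sketch of issuing the same transaction directly, assuming local access to the ovsdb-server socket (the socket path is the conventional default, an assumption here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Attach the ovsdb IDL to the local switch database.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Same parameters DelPortCommand logged: port, bridge, if_exists.
    api.del_port('tap3026e417-ee', bridge='br-int',
                 if_exists=True).execute(check_error=True)
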
Nov 29 03:42:01 np0005539550 neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43[375804]: [NOTICE]   (375830) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:01 np0005539550 neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43[375804]: [NOTICE]   (375830) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:01 np0005539550 neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43[375804]: [WARNING]  (375830) : Exiting Master process...
Nov 29 03:42:01 np0005539550 neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43[375804]: [ALERT]    (375830) : Current worker (375832) exited with code 143 (Terminated)
Nov 29 03:42:01 np0005539550 neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43[375804]: [WARNING]  (375830) : All workers exited. Exiting... (0)
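
Despite the ALERT severity, exit code 143 is not a failure: it is 128 + 15, the shell convention for a process killed by SIGTERM, which matches the deliberate container stop that podman records next.

    import signal
    # 'killed by signal N' is reported as exit code 128 + N.
    assert 128 + int(signal.SIGTERM) == 143
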
Nov 29 03:42:01 np0005539550 systemd[1]: libpod-1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac.scope: Deactivated successfully.
Nov 29 03:42:01 np0005539550 podman[378407]: 2025-11-29 08:42:01.718429273 +0000 UTC m=+0.058794759 container died 1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:42:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e7139e764d5637fa6b0606764b359d64b780623f67731e20255a39584834f4ad-merged.mount: Deactivated successfully.
Nov 29 03:42:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:01 np0005539550 podman[378407]: 2025-11-29 08:42:01.764404927 +0000 UTC m=+0.104770393 container cleanup 1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:42:01 np0005539550 systemd[1]: libpod-conmon-1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac.scope: Deactivated successfully.
Nov 29 03:42:01 np0005539550 podman[378459]: 2025-11-29 08:42:01.86255534 +0000 UTC m=+0.069086499 container remove 1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.870 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a88b285d-ff7f-445c-ac90-4ed7c84e16ec]: (4, ('Sat Nov 29 08:42:01 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43 (1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac)\n1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac\nSat Nov 29 08:42:01 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43 (1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac)\n1244c47a872bc7f5df06b34f0264e75d5f8b8afa7f8d039a97aa303ac9801cac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.872 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[245a242b-028f-4c9d-bb91-d04be2772018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.874 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbbe9de28-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:01 np0005539550 kernel: tapbbe9de28-a0: left promiscuous mode
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.881 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f375600-4f59-44fe-ae36-5e50cb3141ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.885 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.898 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.900 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[705be6dc-0ded-4856-8dbf-c502a763a737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.901 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[73cc99e7-de71-46bc-bf8e-661a72a2584a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:01 np0005539550 nova_compute[257631]: 2025-11-29 08:42:01.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.918 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[29b80a0e-2502-4db0-a282-c6dca611c928]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 869253, 'reachable_time': 34396, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378475, 'error': None, 'target': 'ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:01 np0005539550 systemd[1]: run-netns-ovnmeta\x2dbbe9de28\x2da3ca\x2d45b3\x2d82e4\x2d243328809a43.mount: Deactivated successfully.
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.924 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bbe9de28-a3ca-45b3-82e4-243328809a43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:01.924 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed2d6ee-1b34-4cd6-ab80-1275d136bb3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.097 257641 INFO nova.virt.libvirt.driver [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Deleting instance files /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a_del#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.098 257641 INFO nova.virt.libvirt.driver [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Deletion of /var/lib/nova/instances/eb84e8b8-cb17-4f34-8f44-41f2063a152a_del complete#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.146 257641 INFO nova.compute.manager [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.146 257641 DEBUG oslo.service.loopingcall [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.147 257641 DEBUG nova.compute.manager [-] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.147 257641 DEBUG nova.network.neutron [-] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:42:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:02.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
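
Note: the radosgw "beast" lines use a fixed access-log layout: request pointer, client IP, user, timestamp, request line, HTTP status, byte count, and latency. A sketch of extracting the useful fields with a regular expression (field order inferred from the lines in this log, not from radosgw documentation):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:42:02.279 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('status'), m.group('latency'))
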
Nov 29 03:42:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 220 op/s
Nov 29 03:42:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:02.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.759 257641 DEBUG nova.network.neutron [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updated VIF entry in instance network info cache for port 3026e417-ee61-4f08-8c26-06f78013fc48. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.759 257641 DEBUG nova.network.neutron [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updating instance_info_cache with network_info: [{"id": "3026e417-ee61-4f08-8c26-06f78013fc48", "address": "fa:16:3e:1e:10:6a", "network": {"id": "bbe9de28-a3ca-45b3-82e4-243328809a43", "bridge": "br-int", "label": "tempest-network-smoke--498208558", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3026e417-ee", "ovs_interfaceid": "3026e417-ee61-4f08-8c26-06f78013fc48", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.776 257641 DEBUG nova.network.neutron [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Updated VIF entry in instance network info cache for port 8853309e-7522-4c33-8ef8-68e265aa66bc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.776 257641 DEBUG nova.network.neutron [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Updating instance_info_cache with network_info: [{"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
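
Note: the network_info payloads cached above are plain JSON arrays, one entry per VIF. Once the array is sliced out of the log line, the fixed and floating addresses are straightforward to walk; a small sketch using a reduced subset of the payload above (values taken from the log):

    import json

    # Reduced subset of the network_info array above.
    payload = '''[{"id": "8853309e-7522-4c33-8ef8-68e265aa66bc",
        "devname": "tap8853309e-75",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.13",
            "floating_ips": [{"address": "192.168.122.193"}]}]}]}}]'''

    for vif in json.loads(payload):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip['floating_ips']]
                # tap8853309e-75 10.100.0.13 ['192.168.122.193']
                print(vif['devname'], ip['address'], floats)
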
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.783 257641 DEBUG oslo_concurrency.lockutils [req-7363c2c4-773e-4c3c-bc44-d431a3b1b1f0 req-3afced42-1b20-4cfb-8a66-d3c818de6076 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-eb84e8b8-cb17-4f34-8f44-41f2063a152a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.798 257641 DEBUG oslo_concurrency.lockutils [req-c5bb85a6-d362-458d-9c85-345eab06312c req-ed7e5dcb-afcf-4e25-a18b-d09eb8f6401e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.892 257641 DEBUG nova.network.neutron [-] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.923 257641 INFO nova.compute.manager [-] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Took 0.78 seconds to deallocate network for instance.#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.977 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:02 np0005539550 nova_compute[257631]: 2025-11-29 08:42:02.977 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.465 257641 DEBUG nova.compute.manager [req-f66668fc-1052-45dc-bbbc-8bac1b17398b req-65428b7e-590e-46a6-96a8-f4b3b4f39d10 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-vif-deleted-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.490 257641 DEBUG nova.compute.manager [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-vif-unplugged-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.490 257641 DEBUG oslo_concurrency.lockutils [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.490 257641 DEBUG oslo_concurrency.lockutils [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.490 257641 DEBUG oslo_concurrency.lockutils [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.490 257641 DEBUG nova.compute.manager [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] No waiting events found dispatching network-vif-unplugged-3026e417-ee61-4f08-8c26-06f78013fc48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.491 257641 WARNING nova.compute.manager [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received unexpected event network-vif-unplugged-3026e417-ee61-4f08-8c26-06f78013fc48 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.491 257641 DEBUG nova.compute.manager [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received event network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.491 257641 DEBUG oslo_concurrency.lockutils [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.491 257641 DEBUG oslo_concurrency.lockutils [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.491 257641 DEBUG oslo_concurrency.lockutils [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.491 257641 DEBUG nova.compute.manager [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] No waiting events found dispatching network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.492 257641 WARNING nova.compute.manager [req-e1f42a26-c311-46b4-9804-859e41589e99 req-414ed223-979d-4b77-ae95-cfaec93cc506 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Received unexpected event network-vif-plugged-3026e417-ee61-4f08-8c26-06f78013fc48 for instance with vm_state deleted and task_state None.#033[00m
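
Note: the "Acquiring lock / Lock acquired / Lock released" triplets throughout this section are emitted by oslo.concurrency's lockutils whenever code runs under one of its named locks; nova serializes event dispatch with a per-instance "<uuid>-events" lock, as seen above. A minimal sketch of the pattern (lock name and function are illustrative; assumes oslo.concurrency is installed):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('instance-events')   # illustrative lock name
    def pop_event():
        # Critical section; lockutils logs the acquire/release at DEBUG.
        return None

    pop_event()
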
Nov 29 03:42:03 np0005539550 nova_compute[257631]: 2025-11-29 08:42:03.795 257641 DEBUG oslo_concurrency.processutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496273064' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:04 np0005539550 nova_compute[257631]: 2025-11-29 08:42:04.285 257641 DEBUG oslo_concurrency.processutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
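
Note: nova's RBD image backend refreshes storage statistics by shelling out to ceph df, which is what the 0.490 s subprocess above is doing (the matching mon_command dispatch shows up in the ceph-mon log two entries earlier). A sketch of the same call; the JSON key names are assumed from recent Ceph releases and should be verified against your version:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # 'total_bytes' / 'total_avail_bytes' assumed; names vary by release.
    print(stats['total_bytes'], stats['total_avail_bytes'])
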
Nov 29 03:42:04 np0005539550 nova_compute[257631]: 2025-11-29 08:42:04.290 257641 DEBUG nova.compute.provider_tree [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:04 np0005539550 nova_compute[257631]: 2025-11-29 08:42:04.312 257641 DEBUG nova.scheduler.client.report [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
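
Note: Placement derives usable capacity from this inventory as (total - reserved) * allocation_ratio, so the host above advertises 32 vCPUs, 7168 MB of RAM, and 17.1 GB of disk; a one-line check with the values from the log line above:

    inventory = {'VCPU': (8, 0, 4.0),
                 'MEMORY_MB': (7680, 512, 1.0),
                 'DISK_GB': (20, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)   # 32.0, 7168.0, 17.1
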
Nov 29 03:42:04 np0005539550 nova_compute[257631]: 2025-11-29 08:42:04.332 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:04 np0005539550 nova_compute[257631]: 2025-11-29 08:42:04.366 257641 INFO nova.scheduler.client.report [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance eb84e8b8-cb17-4f34-8f44-41f2063a152a#033[00m
Nov 29 03:42:04 np0005539550 nova_compute[257631]: 2025-11-29 08:42:04.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:04 np0005539550 nova_compute[257631]: 2025-11-29 08:42:04.498 257641 DEBUG oslo_concurrency.lockutils [None req-bc66b21e-56ea-4fdd-9eb3-3303b54bed08 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "eb84e8b8-cb17-4f34-8f44-41f2063a152a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 224 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 349 KiB/s wr, 211 op/s
Nov 29 03:42:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:04.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:06.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 224 op/s
Nov 29 03:42:06 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:06Z|00899|binding|INFO|Releasing lport ebcef6fc-c09d-47a6-b520-0811a6aede28 from this chassis (sb_readonly=0)
Nov 29 03:42:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:06.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:06 np0005539550 nova_compute[257631]: 2025-11-29 08:42:06.671 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:06 np0005539550 nova_compute[257631]: 2025-11-29 08:42:06.673 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:06 np0005539550 nova_compute[257631]: 2025-11-29 08:42:06.995 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:08Z|00900|binding|INFO|Releasing lport ebcef6fc-c09d-47a6-b520-0811a6aede28 from this chassis (sb_readonly=0)
Nov 29 03:42:08 np0005539550 nova_compute[257631]: 2025-11-29 08:42:08.166 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:42:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:42:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 17 KiB/s wr, 146 op/s
Nov 29 03:42:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:08.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:42:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:42:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:42:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:42:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:42:09 np0005539550 nova_compute[257631]: 2025-11-29 08:42:09.155 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:09 np0005539550 nova_compute[257631]: 2025-11-29 08:42:09.481 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:09 np0005539550 podman[378555]: 2025-11-29 08:42:09.535486636 +0000 UTC m=+0.102036893 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
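
Note: podman's health-check journal records embed the container config as flat key=value fields; for monitoring, health_status and health_failing_streak are usually all that matters. A sketch of pulling them out (the regex is illustrative, matched against the record layout above):

    import re

    def health_fields(line):
        # Collect the health_* key=value pairs from a podman record.
        return dict(re.findall(r'(health_\w+)=([^,)]*)', line))

    rec = ('container health_status a934e36cc85b (name=ovn_controller, '
           'health_status=healthy, health_failing_streak=0, health_log=,)')
    print(health_fields(rec))
    # {'health_status': 'healthy', 'health_failing_streak': '0', 'health_log': ''}
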
Nov 29 03:42:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:10.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:10 np0005539550 nova_compute[257631]: 2025-11-29 08:42:10.331 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 124 op/s
Nov 29 03:42:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:10.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:11 np0005539550 nova_compute[257631]: 2025-11-29 08:42:11.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:11 np0005539550 nova_compute[257631]: 2025-11-29 08:42:11.718 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:12.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.6 KiB/s wr, 130 op/s
Nov 29 03:42:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:12.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:13 np0005539550 nova_compute[257631]: 2025-11-29 08:42:13.091 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:14Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:80:54 10.100.0.13
Nov 29 03:42:14 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:14Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:80:54 10.100.0.13
Nov 29 03:42:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:14 np0005539550 nova_compute[257631]: 2025-11-29 08:42:14.486 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 176 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 562 KiB/s rd, 798 KiB/s wr, 71 op/s
Nov 29 03:42:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:14.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:14 np0005539550 nova_compute[257631]: 2025-11-29 08:42:14.866 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 239 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 358 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Nov 29 03:42:16 np0005539550 nova_compute[257631]: 2025-11-29 08:42:16.651 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405721.6495905, eb84e8b8-cb17-4f34-8f44-41f2063a152a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:16 np0005539550 nova_compute[257631]: 2025-11-29 08:42:16.651 257641 INFO nova.compute.manager [-] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:42:16 np0005539550 nova_compute[257631]: 2025-11-29 08:42:16.674 257641 DEBUG nova.compute.manager [None req-f4de1950-8b3c-4e09-ba7a-fe45defdc539 - - - - - -] [instance: eb84e8b8-cb17-4f34-8f44-41f2063a152a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:16 np0005539550 nova_compute[257631]: 2025-11-29 08:42:16.758 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.721 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.721 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.738 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.807 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.808 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.813 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.813 257641 INFO nova.compute.claims [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:42:17 np0005539550 nova_compute[257631]: 2025-11-29 08:42:17.991 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136522322' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.468 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.476 257641 DEBUG nova.compute.provider_tree [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.485 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.486 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.505 257641 DEBUG nova.scheduler.client.report [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.511 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:42:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 253 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 4.1 MiB/s wr, 105 op/s
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.549 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.550 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.634 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.634 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.709 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.709 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.712 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.712 257641 INFO nova.compute.claims [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.717 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.717 257641 DEBUG nova.network.neutron [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.758 257641 INFO nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.760 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.817 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.877 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.921 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.922 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.923 257641 INFO nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Creating image(s)#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.956 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:18.973 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:18.974 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:18.974 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:18 np0005539550 nova_compute[257631]: 2025-11-29 08:42:18.993 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.025 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.029 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.063 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.109 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.110 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.111 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.111 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.140 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.146 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.334 257641 DEBUG nova.network.neutron [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Successfully created port: 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.442 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.489 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2969740127' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.528 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] resizing rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
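
This is the RBD image backend at work: the cached base file is pushed into the vms pool with the rbd CLI (the librbd Python binding has no one-call import), and the fresh image is then grown to the flavor's 1 GiB root disk (root_gb=1 on the m1.nano flavor that appears later in this log). The resize in nova.storage.rbd_utils goes through the binding; a minimal equivalent, assuming the rados/rbd Python bindings and the pool/credentials shown in the command lines above:

    # Sketch: resize an RBD image to 1 GiB via the librbd binding.
    import rados
    import rbd

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk') as image:
                image.resize(1 * 1024 ** 3)  # 1073741824 bytes
        finally:
            ioctx.close()
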
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.558 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.563 257641 DEBUG nova.compute.provider_tree [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.578 257641 DEBUG nova.scheduler.client.report [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
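
The inventory dict above is what bounds scheduling on this host: Placement treats usable capacity as (total - reserved) * allocation_ratio per resource class, so this node can hold claims for 32 vCPUs, 7168 MB of RAM and 17.1 GB of disk. A quick check of the arithmetic:

    # Usable capacity implied by the logged inventory:
    # capacity = (total - reserved) * allocation_ratio
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1
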
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.594 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.594 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.596 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.605 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.605 257641 INFO nova.compute.claims [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.670 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.671 257641 DEBUG nova.network.neutron [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.690 257641 INFO nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.729 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.736 257641 DEBUG nova.objects.instance [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'migration_context' on Instance uuid 2d10cd8b-9390-45ac-b686-ea34b999fb8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.760 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.760 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Ensure instance console log exists: /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.761 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.761 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.761 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.844 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.919 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.922 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.922 257641 INFO nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Creating image(s)#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.949 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:19 np0005539550 nova_compute[257631]: 2025-11-29 08:42:19.975 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.002 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.006 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.035 257641 DEBUG nova.policy [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
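
This DEBUG "Policy check ... failed" is expected rather than an error: network:attach_external_network is admin-only by default, and Nova probes it with the requester's credentials to decide whether external networks may be used for the port, then carries on. A sketch of the same probe through oslo.policy; Enforcer/enforce are the real APIs, and the rule string is a simplified stand-in for Nova's stock admin-only default.

    # Sketch: the authorize() probe behind the DEBUG line above.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))  # assumed default

    creds = {'is_admin': False, 'roles': ['reader', 'member'],
             'project_id': '0471b9b208874403aa3f0fbe7504ad19'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
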
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.070 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.071 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.072 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.072 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.098 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.102 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b8166862-6c06-4726-b53e-e53f69cda3df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.240 257641 DEBUG nova.network.neutron [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Successfully updated port: 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.258 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "refresh_cache-2d10cd8b-9390-45ac-b686-ea34b999fb8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.259 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquired lock "refresh_cache-2d10cd8b-9390-45ac-b686-ea34b999fb8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.259 257641 DEBUG nova.network.neutron [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:42:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56151960' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.308 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.313 257641 DEBUG nova.compute.provider_tree [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0032793815722129164 of space, bias 1.0, pg target 0.9838144716638749 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002173410455053043 of space, bias 1.0, pg target 0.6520231365159129 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
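
The pg_autoscaler arithmetic in the block above is reproducible from the logged numbers alone: each pool's raw pg target is capacity_ratio * bias * PG budget, and every line fits a budget of 300 PGs, i.e. mon_target_pg_per_osd=100 across 3 OSDs (the budget is an inference from the values, not stated in the log; the 22535995392 bytes is the ~21 GiB raw capacity also visible in the pgmap lines). Each target is then quantized to a power of two, and since none differs from the current pg_num by the autoscaler's threshold factor (3x by default), nothing gets adjusted.

    # Reproduce the raw "pg target" values from the autoscaler lines above:
    # target = capacity_ratio * bias * (num_osds * mon_target_pg_per_osd)
    PG_BUDGET = 3 * 100  # inferred: 3 OSDs x mon_target_pg_per_osd=100

    pools = {  # name: (capacity_ratio, bias) as logged
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'vms':                (0.0032793815722129164,  1.0),
        'volumes':            (0.002173410455053043,   1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # .mgr 0.006161..., vms 0.983814..., volumes 0.652023..., meta 0.001744...
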
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.328 257641 DEBUG nova.scheduler.client.report [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.363 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.363 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.378 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 b8166862-6c06-4726-b53e-e53f69cda3df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.410 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.410 257641 DEBUG nova.network.neutron [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.454 257641 INFO nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.462 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.500 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:42:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 253 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 4.1 MiB/s wr, 105 op/s
Nov 29 03:42:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.576 257641 DEBUG nova.objects.instance [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid b8166862-6c06-4726-b53e-e53f69cda3df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.592 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.593 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Ensure instance console log exists: /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.593 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.594 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.594 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.618 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.619 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.620 257641 INFO nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Creating image(s)#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.641 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.667 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:20.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.694 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.697 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.762 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.762 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.763 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.763 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.786 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:20 np0005539550 nova_compute[257631]: 2025-11-29 08:42:20.789 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.058 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.123 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.240 257641 DEBUG nova.objects.instance [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.260 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.260 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Ensure instance console log exists: /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.261 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.261 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.261 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.316 257641 DEBUG nova.network.neutron [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.369 257641 DEBUG nova.policy [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:42:21 np0005539550 nova_compute[257631]: 2025-11-29 08:42:21.762 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:22 np0005539550 nova_compute[257631]: 2025-11-29 08:42:22.331 257641 DEBUG nova.compute.manager [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received event network-changed-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:22 np0005539550 nova_compute[257631]: 2025-11-29 08:42:22.332 257641 DEBUG nova.compute.manager [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Refreshing instance network info cache due to event network-changed-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:22 np0005539550 nova_compute[257631]: 2025-11-29 08:42:22.332 257641 DEBUG oslo_concurrency.lockutils [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-2d10cd8b-9390-45ac-b686-ea34b999fb8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 392 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 9.2 MiB/s wr, 189 op/s
Nov 29 03:42:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:22.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:22 np0005539550 nova_compute[257631]: 2025-11-29 08:42:22.730 257641 DEBUG nova.network.neutron [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Successfully created port: 36bbd65b-abae-4c9f-975a-f15edf566301 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.082 257641 DEBUG nova.network.neutron [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Successfully created port: 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.125 257641 DEBUG nova.network.neutron [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Updating instance_info_cache with network_info: [{"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.164 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Releasing lock "refresh_cache-2d10cd8b-9390-45ac-b686-ea34b999fb8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.164 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Instance network_info: |[{"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
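
The network_info cache is embedded verbatim as JSON between the pipe delimiters in the line above, so the interesting facts (MAC fa:16:3e:c1:2e:c5, fixed IP 10.100.0.14, MTU 1442, OVN-bound OVS port) can be lifted straight out of the journal. A small extraction helper, assuming the |[...]| framing shown above:

    # Sketch: pull the network_info JSON out of a journal line like the one above.
    import json
    import re

    def network_info_from_log(line):
        m = re.search(r'network_info: \|(\[.*\])\|', line)
        return json.loads(m.group(1)) if m else None

    # vif = network_info_from_log(line)[0]
    # vif['address']                                    -> 'fa:16:3e:c1:2e:c5'
    # vif['network']['subnets'][0]['ips'][0]['address'] -> '10.100.0.14'
    # vif['network']['meta']['mtu']                     -> 1442
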
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.165 257641 DEBUG oslo_concurrency.lockutils [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-2d10cd8b-9390-45ac-b686-ea34b999fb8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.165 257641 DEBUG nova.network.neutron [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Refreshing network info cache for port 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.167 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Start _get_guest_xml network_info=[{"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.172 257641 WARNING nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.178 257641 DEBUG nova.virt.libvirt.host [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.179 257641 DEBUG nova.virt.libvirt.host [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.181 257641 DEBUG nova.virt.libvirt.host [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.181 257641 DEBUG nova.virt.libvirt.host [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.182 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.182 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.183 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.183 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.183 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.183 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.184 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.184 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.184 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.184 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.184 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.184 257641 DEBUG nova.virt.hardware [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
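[annotation] With no explicit topology in the flavor or image (preferred 0:0:0, limits 65536 each), topology selection degenerates to every factorization of the vCPU count, and for 1 vCPU that is just sockets=1, cores=1, threads=1 — exactly what the log reports. A rough sketch of that enumeration, in the spirit of _get_possible_cpu_topologies (hypothetical function, not the actual Nova code):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, each factor within its limit (sketch only).
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    topos.append((sockets, cores, threads))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log above

For vcpus=4 the same loop would also yield (1, 2, 2), (2, 2, 1), (4, 1, 1) and so on, which is where the preference ordering seen in "Sorted desired topologies" starts to matter.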
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.187 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214675121' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.624 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
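[annotation] The mon dump is how the RBD backend learns the monitor endpoints; the three addresses it returns here (192.168.122.100-102:6789) reappear below as the <host> entries of both RBD disks in the guest XML. A rough re-run of the same discovery step, assuming the client id and conf path shown in the log and the usual field names of ceph's JSON output:

    # Re-run the monitor discovery step logged above and print the mons.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    for mon in json.loads(out)["mons"]:
        # each entry carries the mon name and its public address
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))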
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.650 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.653 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:23 np0005539550 nova_compute[257631]: 2025-11-29 08:42:23.995 257641 DEBUG nova.network.neutron [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Successfully updated port: 36bbd65b-abae-4c9f-975a-f15edf566301 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.025 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.025 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.025 257641 DEBUG nova.network.neutron [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:42:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3697684646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.104 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.106 257641 DEBUG nova.virt.libvirt.vif [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-391566222',display_name='tempest-TestServerMultinode-server-391566222',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-391566222',id=197,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-sescoia7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-1895688433-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:18Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=2d10cd8b-9390-45ac-b686-ea34b999fb8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.106 257641 DEBUG nova.network.os_vif_util [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.107 257641 DEBUG nova.network.os_vif_util [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:2e:c5,bridge_name='br-int',has_traffic_filtering=True,id=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ccc11c-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.108 257641 DEBUG nova.objects.instance [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'pci_devices' on Instance uuid 2d10cd8b-9390-45ac-b686-ea34b999fb8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.131 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <uuid>2d10cd8b-9390-45ac-b686-ea34b999fb8c</uuid>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <name>instance-000000c5</name>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestServerMultinode-server-391566222</nova:name>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:42:23</nova:creationTime>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:user uuid="dda7b3867e5c45a7bb78d049103bc095">tempest-TestServerMultinode-1895688433-project-admin</nova:user>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:project uuid="6519f321a4954567ab99a11cc07cc5ac">tempest-TestServerMultinode-1895688433</nova:project>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <nova:port uuid="72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <entry name="serial">2d10cd8b-9390-45ac-b686-ea34b999fb8c</entry>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <entry name="uuid">2d10cd8b-9390-45ac-b686-ea34b999fb8c</entry>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk.config">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:c1:2e:c5"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <target dev="tap72ccc11c-64"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/console.log" append="off"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:42:24 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:42:24 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:42:24 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:42:24 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
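[annotation] The domain XML above ties the earlier steps together: a q35 machine with the custom Nehalem CPU model chosen at 08:42:23.182, root disk and config drive both served over RBD from the three monitors discovered via mon dump, an OVS-backed virtio interface with MTU 1442, and a virtio RNG fed from /dev/urandom (the flavor's hw_rng:allowed='True' extra spec). A quick standard-library way to pull the RBD sources back out of such a dump, assuming it has been saved to a file named domain.xml:

    # Extract RBD-backed disks and their monitor endpoints from a libvirt
    # domain XML dump like the one above (domain.xml is an assumed path).
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        if source is None or source.get("protocol") != "rbd":
            continue
        hosts = ["%s:%s" % (h.get("name"), h.get("port"))
                 for h in source.findall("host")]
        print(disk.find("target").get("dev"), source.get("name"), hosts)
    # vda vms/2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk ['192.168.122.100:6789', ...]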
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.131 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Preparing to wait for external event network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.132 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.132 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.132 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.133 257641 DEBUG nova.virt.libvirt.vif [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-391566222',display_name='tempest-TestServerMultinode-server-391566222',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-391566222',id=197,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-sescoia7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-1895688433-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:18Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=2d10cd8b-9390-45ac-b686-ea34b999fb8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.133 257641 DEBUG nova.network.os_vif_util [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.133 257641 DEBUG nova.network.os_vif_util [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:2e:c5,bridge_name='br-int',has_traffic_filtering=True,id=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ccc11c-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.134 257641 DEBUG os_vif [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:2e:c5,bridge_name='br-int',has_traffic_filtering=True,id=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ccc11c-64') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.134 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.135 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.135 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.138 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.139 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72ccc11c-64, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.139 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72ccc11c-64, col_values=(('external_ids', {'iface-id': '72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:2e:c5', 'vm-uuid': '2d10cd8b-9390-45ac-b686-ea34b999fb8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.141 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:24 np0005539550 NetworkManager[49039]: <info>  [1764405744.1419] manager: (tap72ccc11c-64): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.143 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.147 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.148 257641 INFO os_vif [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:2e:c5,bridge_name='br-int',has_traffic_filtering=True,id=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ccc11c-64')#033[00m
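[annotation] The successful plug is the two-command OVSDB transaction logged just above: an idempotent add-port on br-int, plus the external_ids that let ovn-controller match the interface to its Neutron port (iface-id) and instance (vm-uuid). A rough command-line equivalent via ovs-vsctl, with values copied from the log — an illustration only, since os-vif speaks OVSDB directly through ovsdbapp rather than shelling out:

    # ovs-vsctl equivalent of the AddPortCommand/DbSetCommand transaction
    # above (values from the log; quoting keeps ':' and '-' literal).
    import subprocess

    port = "tap72ccc11c-64"
    subprocess.check_call([
        "ovs-vsctl",
        "--", "--may-exist", "add-port", "br-int", port,
        "--", "set", "Interface", port,
        'external_ids:iface-id="72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3"',
        'external_ids:iface-status="active"',
        'external_ids:attached-mac="fa:16:3e:c1:2e:c5"',
        'external_ids:vm-uuid="2d10cd8b-9390-45ac-b686-ea34b999fb8c"',
    ])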
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.192 257641 DEBUG nova.network.neutron [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Successfully updated port: 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.210 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.210 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.210 257641 DEBUG nova.network.neutron [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.212 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.213 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.213 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] No VIF found with MAC fa:16:3e:c1:2e:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.213 257641 INFO nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Using config drive#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.239 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.284 257641 DEBUG nova.network.neutron [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:42:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:24.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 432 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 459 KiB/s rd, 11 MiB/s wr, 206 op/s
Nov 29 03:42:24 np0005539550 nova_compute[257631]: 2025-11-29 08:42:24.538 257641 DEBUG nova.network.neutron [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:42:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:24.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.195 257641 DEBUG nova.compute.manager [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-changed-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.196 257641 DEBUG nova.compute.manager [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Refreshing instance network info cache due to event network-changed-36bbd65b-abae-4c9f-975a-f15edf566301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.196 257641 DEBUG oslo_concurrency.lockutils [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.345 257641 INFO nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Creating config drive at /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/disk.config#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.350 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuhww2xzi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.485 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuhww2xzi" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.513 257641 DEBUG nova.storage.rbd_utils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] rbd image 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.518 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/disk.config 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.557 257641 DEBUG nova.compute.manager [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-changed-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.558 257641 DEBUG nova.compute.manager [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Refreshing instance network info cache due to event network-changed-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.558 257641 DEBUG oslo_concurrency.lockutils [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.569 257641 DEBUG nova.network.neutron [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Updated VIF entry in instance network info cache for port 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.570 257641 DEBUG nova.network.neutron [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Updating instance_info_cache with network_info: [{"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.592 257641 DEBUG oslo_concurrency.lockutils [req-30eaab98-4d18-4324-afbf-b3d0aa996c6b req-6ccc611b-a6a5-4aa5-974f-0b8a5e4985a3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-2d10cd8b-9390-45ac-b686-ea34b999fb8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.680 257641 DEBUG oslo_concurrency.processutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/disk.config 2d10cd8b-9390-45ac-b686-ea34b999fb8c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.681 257641 INFO nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Deleting local config drive /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c/disk.config because it was imported into RBD.#033[00m
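[annotation] The config-drive path on this RBD-backed deployment is: render the metadata into a temp dir, build an ISO9660 image with mkisofs (volume label config-2, the label cloud-init probes for), import it into the vms pool as <uuid>_disk.config, then delete the local file — after which the guest sees it as the SATA cdrom defined in the XML above. The two commands, condensed from the log (the temp dir /tmp/tmpuhww2xzi is assumed to already hold the rendered metadata):

    # Condensed restatement of the two commands logged above: build the
    # config-2 ISO, then import it into the Ceph "vms" pool.
    import subprocess

    uuid = "2d10cd8b-9390-45ac-b686-ea34b999fb8c"
    iso = "/var/lib/nova/instances/%s/disk.config" % uuid

    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpuhww2xzi",
    ])
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, uuid + "_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])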
Nov 29 03:42:25 np0005539550 kernel: tap72ccc11c-64: entered promiscuous mode
Nov 29 03:42:25 np0005539550 NetworkManager[49039]: <info>  [1764405745.7296] manager: (tap72ccc11c-64): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.731 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:25Z|00901|binding|INFO|Claiming lport 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 for this chassis.
Nov 29 03:42:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:25Z|00902|binding|INFO|72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3: Claiming fa:16:3e:c1:2e:c5 10.100.0.14
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.741 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:2e:c5 10.100.0.14'], port_security=['fa:16:3e:c1:2e:c5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2d10cd8b-9390-45ac-b686-ea34b999fb8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379eb72a-ec90-4461-897a-adab6a88928f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6519f321a4954567ab99a11cc07cc5ac', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f7ac3748-9331-4cc2-bcd0-273842e7e38b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02c1f455-8d7e-4f22-83b6-df0a05597294, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.742 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 in datapath 379eb72a-ec90-4461-897a-adab6a88928f bound to our chassis#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.744 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 379eb72a-ec90-4461-897a-adab6a88928f#033[00m
Nov 29 03:42:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:25Z|00903|binding|INFO|Setting lport 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 ovn-installed in OVS
Nov 29 03:42:25 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:25Z|00904|binding|INFO|Setting lport 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 up in Southbound
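[annotation] The claim sequence above (claim lport, set ovn-installed in OVS, set the port up in the Southbound DB) is what ultimately produces the network-vif-plugged event Nova began waiting for at 08:42:24.131. Whether a port has been bound to a chassis can be checked directly against the Southbound DB; a hypothetical check, assuming ovn-sbctl is available on the node:

    # Query the Port_Binding row for the logical port claimed above;
    # a non-empty chassis and "up" mean the claim completed (sketch).
    import subprocess

    lport = "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3"
    out = subprocess.check_output(
        ["ovn-sbctl", "--bare", "--columns=chassis,up",
         "find", "Port_Binding", "logical_port=%s" % lport],
        text=True,
    )
    print(out)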
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.752 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:25 np0005539550 nova_compute[257631]: 2025-11-29 08:42:25.756 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.754 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5574b387-fc09-4f86-ab50-331dd8a6b3a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.755 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap379eb72a-e1 in ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.756 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap379eb72a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.756 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[edfb10cb-1010-4dfa-a286-4c646d24807f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.758 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b65b3abf-b0c6-41df-bf96-5e6fbe25f07c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 systemd-udevd[379289]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:25 np0005539550 systemd-machined[216673]: New machine qemu-105-instance-000000c5.
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.770 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[167f0bc9-4200-4b52-ab00-5cd973a37206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 NetworkManager[49039]: <info>  [1764405745.7737] device (tap72ccc11c-64): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:42:25 np0005539550 NetworkManager[49039]: <info>  [1764405745.7746] device (tap72ccc11c-64): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.783 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3a05ee71-e001-4a94-9152-9b8f85972639]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 systemd[1]: Started Virtual Machine qemu-105-instance-000000c5.
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.812 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa5a7e4-4781-4786-b027-1c3a7399d49a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 systemd-udevd[379293]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.817 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[54186f26-40da-46fb-bf6c-0966f7717ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 NetworkManager[49039]: <info>  [1764405745.8183] manager: (tap379eb72a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/396)
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.849 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ee3a9707-8369-45c2-941b-5b0c6cee559a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.853 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c8764f-3eaa-4b67-a1d6-136579444481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 NetworkManager[49039]: <info>  [1764405745.8774] device (tap379eb72a-e0): carrier: link connected
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.884 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[dab83b9e-27ee-4357-b691-06e5d5bec672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.903 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[61a8fa8f-1d29-4580-b589-3f4deb085b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap379eb72a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:58:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879354, 'reachable_time': 15668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379322, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.917 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1eea39-83ac-4815-aa23-20ed31f81b68]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe56:585f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 879354, 'tstamp': 879354}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379323, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.936 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[667383fc-b1f5-444d-934d-e502e86b440d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap379eb72a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:56:58:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879354, 'reachable_time': 15668, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379324, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
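Both giant replies above are pyroute2 netlink messages (RTM_NEWLINK dumps for tap379eb72a-e1, plus the RTM_NEWADDR for its link-local address in between) rendered as Python dicts; everything of interest sits in the attrs list of [name, value] pairs, nested for composites like IFLA_AF_SPEC. A small helper for reading such a dump (msg assumed to hold one of the dicts above; pyroute2's own message objects offer get_attr() for the same job):

    def ifla(msg, name, default=None):
        # attrs is a list of [ATTR_NAME, value] pairs; first match wins.
        for key, value in msg['attrs']:
            if key == name:
                return value
        return default

    # Against the dump above:
    #   ifla(msg, 'IFLA_IFNAME')    -> 'tap379eb72a-e1'
    #   ifla(msg, 'IFLA_ADDRESS')   -> 'fa:16:3e:56:58:5f'
    #   ifla(msg, 'IFLA_OPERSTATE') -> 'UP'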
Nov 29 03:42:25 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:25.972 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[322f5889-d33b-41f0-bde9-699c5e5a35a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.028 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e62ff87b-f102-49cc-8f22-94c8ae081604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.029 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379eb72a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.030 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.030 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap379eb72a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:26 np0005539550 NetworkManager[49039]: <info>  [1764405746.0333] manager: (tap379eb72a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:26 np0005539550 kernel: tap379eb72a-e0: entered promiscuous mode
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.035 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap379eb72a-e0, col_values=(('external_ids', {'iface-id': '788af420-8eef-4a56-95d5-80ebf4f9f71c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
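The three transactions above rewire the root-namespace end of the veth: drop tap379eb72a-e0 from br-ex if present (a no-op here), add it to br-int, and set external_ids:iface-id to the UUID that ovn-controller uses to claim the port, which is what produces the "Releasing lport 788af420-..." line a few entries below. A sketch of the same sequence with ovsdbapp's Open vSwitch schema API (building the connection is elided; ovs stands for an assumed ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl bound to the local ovsdb-server):

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap379eb72a-e0', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap379eb72a-e0', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap379eb72a-e0',
            ('external_ids', {'iface-id': '788af420-8eef-4a56-95d5-80ebf4f9f71c'})))

The agent runs each command in its own transaction here (txn n=1 every time); batching them as above is equivalent in effect.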
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.037 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.038 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.039 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:42:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:26Z|00905|binding|INFO|Releasing lport 788af420-8eef-4a56-95d5-80ebf4f9f71c from this chassis (sb_readonly=0)
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.049 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[260ca436-4afa-4eef-9210-95c523d3f777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.051 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-379eb72a-ec90-4461-897a-adab6a88928f
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/379eb72a-ec90-4461-897a-adab6a88928f.pid.haproxy
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 379eb72a-ec90-4461-897a-adab6a88928f
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:42:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:26.052 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'env', 'PROCESS_TAG=haproxy-379eb72a-ec90-4461-897a-adab6a88928f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/379eb72a-ec90-4461-897a-adab6a88928f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
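The dumped configuration binds the metadata IP 169.254.169.254:80 inside the namespace and forwards every request to the agent's UNIX socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header that identifies the network. Before a launch like the rootwrap command above, such a config can be syntax-checked with haproxy's own parser (paths from the log; -c makes haproxy validate and exit):

    import subprocess

    ns = 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f'
    cfg = '/var/lib/neutron/ovn-metadata-proxy/379eb72a-ec90-4461-897a-adab6a88928f.conf'
    subprocess.run(
        ['ip', 'netns', 'exec', ns, 'haproxy', '-c', '-f', cfg],
        check=True)  # raises CalledProcessError on a config error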
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.056 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:26.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:26 np0005539550 podman[379398]: 2025-11-29 08:42:26.413066102 +0000 UTC m=+0.054408167 container create 04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.430 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405746.4299667, 2d10cd8b-9390-45ac-b686-ea34b999fb8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.431 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:42:26 np0005539550 systemd[1]: Started libpod-conmon-04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879.scope.
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.466 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.470 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405746.4306452, 2d10cd8b-9390-45ac-b686-ea34b999fb8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.470 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:42:26 np0005539550 podman[379398]: 2025-11-29 08:42:26.383590176 +0000 UTC m=+0.024932261 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:42:26 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:42:26 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad3ba4360203651a41952b4188ac6c9975ad8dcd10fc40206f77966c477e350/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.517 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.521 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 432 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 10 MiB/s wr, 319 op/s
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.541 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
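The power-state sync above compares nova's stored value (0) with what libvirt reports (3) and backs off because the instance still has a pending task (spawning). The integers are nova.compute.power_state codes; the commonly used values are reproduced below purely as a decoding aid for lines like this one:

    # nova.compute.power_state codes seen in these messages
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    # DB power_state 0 (NOSTATE) vs VM power_state 3 (PAUSED): the guest is
    # launched paused during spawn and resumed once plugging completes, so
    # syncing now would be premature -- hence the 'pending task (spawning).
    # Skip.' line.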
Nov 29 03:42:26 np0005539550 podman[379398]: 2025-11-29 08:42:26.554233655 +0000 UTC m=+0.195575720 container init 04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:42:26 np0005539550 podman[379398]: 2025-11-29 08:42:26.560497323 +0000 UTC m=+0.201839378 container start 04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:42:26 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [NOTICE]   (379419) : New worker (379421) forked
Nov 29 03:42:26 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [NOTICE]   (379419) : Loading success.
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.614 257641 DEBUG nova.network.neutron [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updating instance_info_cache with network_info: [{"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.634 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.635 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Instance network_info: |[{"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.635 257641 DEBUG oslo_concurrency.lockutils [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.636 257641 DEBUG nova.network.neutron [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Refreshing network info cache for port 36bbd65b-abae-4c9f-975a-f15edf566301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.638 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Start _get_guest_xml network_info=[{"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.643 257641 WARNING nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.648 257641 DEBUG nova.virt.libvirt.host [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.648 257641 DEBUG nova.virt.libvirt.host [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.655 257641 DEBUG nova.virt.libvirt.host [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.656 257641 DEBUG nova.virt.libvirt.host [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
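The pair of probes above is nova checking whether it may apply CPU cgroup controls: the cgroups-v1 hierarchy has no cpu controller on this host, but the unified v2 hierarchy does. An approximation of the v2 probe (nova's helper differs in detail; this just reads the standard kernel interface file):

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # On a cgroups-v2 host this file lists the root controllers,
        # e.g. "cpuset cpu io memory hugetlb pids misc".
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted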
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.657 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.657 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.657 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.658 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.658 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.658 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.658 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.658 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.658 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.659 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.659 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.659 257641 DEBUG nova.virt.hardware [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.662 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
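Each spawn shells out to the ceph CLI (as client.openstack, using /etc/ceph/ceph.conf) to discover the monitor map before touching RBD; the ceph-mon audit entries a few lines below are the server side of this very call. The JSON output is straightforward to consume (key names per the standard mon dump format):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    monmap = json.loads(out)
    # Each entry in 'mons' names a monitor and its v1/v2 addresses.
    for mon in monmap.get('mons', []):
        print(mon['name'], mon.get('public_addrs'))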
Nov 29 03:42:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:26.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.946 257641 DEBUG nova.network.neutron [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updating instance_info_cache with network_info: [{"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.967 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.967 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Instance network_info: |[{"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.968 257641 DEBUG oslo_concurrency.lockutils [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.968 257641 DEBUG nova.network.neutron [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Refreshing network info cache for port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.971 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Start _get_guest_xml network_info=[{"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.976 257641 WARNING nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.980 257641 DEBUG nova.virt.libvirt.host [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.981 257641 DEBUG nova.virt.libvirt.host [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.983 257641 DEBUG nova.virt.libvirt.host [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.984 257641 DEBUG nova.virt.libvirt.host [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.985 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.985 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.985 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.986 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.986 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.986 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.986 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.987 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.987 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.987 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.987 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.987 257641 DEBUG nova.virt.hardware [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:42:26 np0005539550 nova_compute[257631]: 2025-11-29 08:42:26.991 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2311320004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.097 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
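As the processutils lines record, nova.storage.rbd_utils discovers monitor endpoints by shelling out to the ceph CLI rather than via librados. A hedged equivalent using plain subprocess; the "mons"/"name"/"addr" keys reflect the usual `ceph mon dump --format=json` layout but can differ across Ceph releases (newer ones nest an addrvec), so verify against your cluster:

    import json
    import subprocess

    def get_mon_addrs(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        # Same command nova runs via oslo.concurrency processutils above.
        out = subprocess.check_output(
            ["ceph", "mon", "dump", "--format=json",
             "--id", client_id, "--conf", conf])
        dump = json.loads(out)
        # Map monitor name -> address; adjust the keys if your Ceph
        # release reports addresses differently.
        return {m["name"]: m["addr"] for m in dump.get("mons", [])}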
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.134 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.138 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.308 257641 DEBUG nova.compute.manager [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received event network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.309 257641 DEBUG oslo_concurrency.lockutils [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.309 257641 DEBUG oslo_concurrency.lockutils [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.310 257641 DEBUG oslo_concurrency.lockutils [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.310 257641 DEBUG nova.compute.manager [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Processing event network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.310 257641 DEBUG nova.compute.manager [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received event network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.310 257641 DEBUG oslo_concurrency.lockutils [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.310 257641 DEBUG oslo_concurrency.lockutils [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.311 257641 DEBUG oslo_concurrency.lockutils [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.311 257641 DEBUG nova.compute.manager [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] No waiting events found dispatching network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.311 257641 WARNING nova.compute.manager [req-75631d0f-05c0-49a4-9d0a-04e5da91df3c req-be86d680-ccb8-4848-a308-b3896752b941 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received unexpected event network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.312 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
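The Acquiring/acquired/"released" triplets around _pop_event come from a named oslo.concurrency lock scoped per instance; the logged name `pop_instance_event.<locals>._pop_event` is the qualified name of a locally defined decorated function. A minimal sketch of the pattern (the events mapping shape here is assumed, not Nova's actual structure):

    from oslo_concurrency import lockutils

    def pop_instance_event(events, instance_uuid, name):
        # lockutils.synchronized emits the Acquiring/acquired/released
        # DEBUG lines seen above, tagged with the inner function's name.
        @lockutils.synchronized(f"{instance_uuid}-events")
        def _pop_event():
            return events.setdefault(instance_uuid, {}).pop(name, None)
        return _pop_event()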
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.315 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405747.315365, 2d10cd8b-9390-45ac-b686-ea34b999fb8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.316 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.318 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.321 257641 INFO nova.virt.libvirt.driver [-] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Instance spawned successfully.#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.322 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.367 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.373 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.376 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.376 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.377 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.377 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.378 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.378 257641 DEBUG nova.virt.libvirt.driver [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.416 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
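For reading the sync_power_state line above (DB power_state 0 vs VM power_state 1): the integers are nova.compute.power_state constants. The values below are as I recall them from the Nova source and are worth confirming against your tree:

    # nova/compute/power_state.py (recalled values; verify):
    NOSTATE = 0x00    # DB side above: instance record still building
    RUNNING = 0x01    # hypervisor side above: guest already running
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07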
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.450 257641 INFO nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Took 8.53 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.451 257641 DEBUG nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2557013766' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.503 257641 INFO nova.compute.manager [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Took 9.72 seconds to build instance.#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.511 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.536 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.540 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.575 257641 DEBUG oslo_concurrency.lockutils [None req-b9e10481-074a-44fa-8b27-443ce42f67ff dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4293746224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.675 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.677 257641 DEBUG nova.virt.libvirt.vif [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1412022516',display_name='tempest-TestNetworkBasicOps-server-1412022516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1412022516',id=198,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJkMiB7VSMkIXM08/GoMdzhhd8TFMVF3mLsD9mGylGnNIP1Ozss4QNM4Uc7CeoKLc9PlUFbw1O3KhhFIuBDfVCfuAol3+txALEwIQIEwTuJE0phns/J+qqSCVY+ApxMIeg==',key_name='tempest-TestNetworkBasicOps-1848039179',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-bcnx3nky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:19Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=b8166862-6c06-4726-b53e-e53f69cda3df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.677 257641 DEBUG nova.network.os_vif_util [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.678 257641 DEBUG nova.network.os_vif_util [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3d:6e,bridge_name='br-int',has_traffic_filtering=True,id=36bbd65b-abae-4c9f-975a-f15edf566301,network=Network(35110e9e-c6e6-4ced-900a-33a172169d31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36bbd65b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.679 257641 DEBUG nova.objects.instance [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8166862-6c06-4726-b53e-e53f69cda3df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.696 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <uuid>b8166862-6c06-4726-b53e-e53f69cda3df</uuid>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <name>instance-000000c6</name>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkBasicOps-server-1412022516</nova:name>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:42:26</nova:creationTime>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <nova:port uuid="36bbd65b-abae-4c9f-975a-f15edf566301">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <entry name="serial">b8166862-6c06-4726-b53e-e53f69cda3df</entry>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <entry name="uuid">b8166862-6c06-4726-b53e-e53f69cda3df</entry>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/b8166862-6c06-4726-b53e-e53f69cda3df_disk">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/b8166862-6c06-4726-b53e-e53f69cda3df_disk.config">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:e0:3d:6e"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <target dev="tap36bbd65b-ab"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/console.log" append="off"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:42:27 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:42:27 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:42:27 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:42:27 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
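The domain definition dumped above is ordinary libvirt XML plus a nova metadata namespace, so it can be inspected with the standard library alone. A small sketch that pulls the RBD disk sources and their monitor endpoints out of such a dump:

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        # Yield (rbd image name, [mon host:port, ...]) pairs from a
        # libvirt domain definition like the one logged above.
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            src = disk.find("source")
            if src is None or src.get("protocol") != "rbd":
                continue
            hosts = [f'{h.get("name")}:{h.get("port")}'
                     for h in src.findall("host")]
            yield src.get("name"), hosts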
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.696 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Preparing to wait for external event network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.697 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.697 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.697 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.698 257641 DEBUG nova.virt.libvirt.vif [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1412022516',display_name='tempest-TestNetworkBasicOps-server-1412022516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1412022516',id=198,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJkMiB7VSMkIXM08/GoMdzhhd8TFMVF3mLsD9mGylGnNIP1Ozss4QNM4Uc7CeoKLc9PlUFbw1O3KhhFIuBDfVCfuAol3+txALEwIQIEwTuJE0phns/J+qqSCVY+ApxMIeg==',key_name='tempest-TestNetworkBasicOps-1848039179',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-bcnx3nky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:19Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=b8166862-6c06-4726-b53e-e53f69cda3df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.698 257641 DEBUG nova.network.os_vif_util [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.698 257641 DEBUG nova.network.os_vif_util [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3d:6e,bridge_name='br-int',has_traffic_filtering=True,id=36bbd65b-abae-4c9f-975a-f15edf566301,network=Network(35110e9e-c6e6-4ced-900a-33a172169d31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36bbd65b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.699 257641 DEBUG os_vif [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3d:6e,bridge_name='br-int',has_traffic_filtering=True,id=36bbd65b-abae-4c9f-975a-f15edf566301,network=Network(35110e9e-c6e6-4ced-900a-33a172169d31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36bbd65b-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.699 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.700 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.700 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.703 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.703 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36bbd65b-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.704 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36bbd65b-ab, col_values=(('external_ids', {'iface-id': '36bbd65b-abae-4c9f-975a-f15edf566301', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:3d:6e', 'vm-uuid': 'b8166862-6c06-4726-b53e-e53f69cda3df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:27 np0005539550 NetworkManager[49039]: <info>  [1764405747.7065] manager: (tap36bbd65b-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/398)
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.708 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.713 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.714 257641 INFO os_vif [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:3d:6e,bridge_name='br-int',has_traffic_filtering=True,id=36bbd65b-abae-4c9f-975a-f15edf566301,network=Network(35110e9e-c6e6-4ced-900a-33a172169d31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36bbd65b-ab')#033[00m
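The ovsdbapp transaction above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) is the programmatic form of a plug that can also be expressed with ovs-vsctl. A hedged one-function equivalent, with the external_ids keys copied from the DbSetCommand line; the function name and arguments are illustrative, not os-vif's API:

    import subprocess

    def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
        # Mirrors the AddPortCommand/DbSetCommand txn logged above;
        # --may-exist matches may_exist=True in the ovsdbapp command.
        subprocess.check_call([
            "ovs-vsctl", "--may-exist", "add-port", bridge, dev,
            "--", "set", "Interface", dev,
            f"external_ids:iface-id={iface_id}",
            "external_ids:iface-status=active",
            f"external_ids:attached-mac={mac}",
            f"external_ids:vm-uuid={vm_uuid}",
        ])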
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.769 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.770 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.770 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:e0:3d:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.771 257641 INFO nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Using config drive#033[00m
Nov 29 03:42:27 np0005539550 nova_compute[257631]: 2025-11-29 08:42:27.797 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4206811380' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.004 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.006 257641 DEBUG nova.virt.libvirt.vif [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-892821333',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-892821333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=199,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPskF69gJv1PkFDp6sMFcSkGc77HX97zARQ2LpJDHXss7elesntPQ0bKM82mcgUWZSOfl7w6RPJ7rDkGVTramaLBKk3dO7uJ04BIQF5ATuD1RLuWDTwHCU9gWwsxsTzFaQ==',key_name='tempest-TestSecurityGroupsBasicOps-568533765',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-08tv1d98',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:20Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.006 257641 DEBUG nova.network.os_vif_util [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.006 257641 DEBUG nova.network.os_vif_util [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:30:27,bridge_name='br-int',has_traffic_filtering=True,id=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c50a91b-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.007 257641 DEBUG nova.objects.instance [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.020 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <uuid>f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3</uuid>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <name>instance-000000c7</name>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-892821333</nova:name>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:42:26</nova:creationTime>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <nova:port uuid="2c50a91b-2e4c-4a7f-9f14-15004d9b2af6">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <entry name="serial">f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3</entry>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <entry name="uuid">f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3</entry>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk.config">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:64:30:27"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <target dev="tap2c50a91b-2e"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/console.log" append="off"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:42:28 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:42:28 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:42:28 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:42:28 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
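
The dump above is the complete guest definition for instance-000000c7: 128 MiB of RAM, one vCPU, a Nehalem CPU model, an RBD-backed root disk plus config-drive CD-ROM pointed at the three Ceph monitors, and the tap2c50a91b-2e OVS interface. A minimal sketch of the handoff to libvirt using the libvirt-python bindings (the URI is an assumption; nova's real call path in driver.py adds event handling and rollback on failure):

    import libvirt  # libvirt-python bindings

    def define_and_boot(xml: str) -> None:
        # Connect to the system libvirt daemon (URI assumed; nova derives
        # it from its configured virt_type, here kvm).
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persistently define the domain
            dom.create()               # then power it on
        finally:
            conn.close()
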
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.020 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Preparing to wait for external event network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.020 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.021 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.021 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
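
The three lockutils lines above bracket the registration of a waiter for network-vif-plugged-2c50a91b-…: the compute manager records the expected event under a per-instance lock before it plugs the VIF, so a notification that arrives quickly cannot be lost. A toy version of that prepare-then-wait pattern (plain threading and hypothetical names; nova's InstanceEvents does the same bookkeeping on top of eventlet):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, instance_uuid, event_name):
            # Mirrors _create_or_get_event: register the waiter under the
            # per-instance lock *before* triggering the external action.
            with self._lock:
                key = (instance_uuid, event_name)
                return self._events.setdefault(key, threading.Event())

        def deliver(self, instance_uuid, event_name):
            # Called when the external notification (here, from Neutron
            # via the compute API) lands; wakes the waiting spawn thread.
            with self._lock:
                ev = self._events.pop((instance_uuid, event_name), None)
            if ev is not None:
                ev.set()

    # waiter = events.prepare(uuid, "network-vif-plugged-<port-id>")
    # ... plug the VIF, start the domain ...
    # waiter.wait(timeout=300)
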
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.021 257641 DEBUG nova.virt.libvirt.vif [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-892821333',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-892821333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=199,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPskF69gJv1PkFDp6sMFcSkGc77HX97zARQ2LpJDHXss7elesntPQ0bKM82mcgUWZSOfl7w6RPJ7rDkGVTramaLBKk3dO7uJ04BIQF5ATuD1RLuWDTwHCU9gWwsxsTzFaQ==',key_name='tempest-TestSecurityGroupsBasicOps-568533765',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-08tv1d98',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:42:20Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, 
"qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.021 257641 DEBUG nova.network.os_vif_util [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.022 257641 DEBUG nova.network.os_vif_util [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:64:30:27,bridge_name='br-int',has_traffic_filtering=True,id=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c50a91b-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.022 257641 DEBUG os_vif [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:30:27,bridge_name='br-int',has_traffic_filtering=True,id=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c50a91b-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.023 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.023 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.023 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.025 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.025 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c50a91b-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.025 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c50a91b-2e, col_values=(('external_ids', {'iface-id': '2c50a91b-2e4c-4a7f-9f14-15004d9b2af6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:64:30:27', 'vm-uuid': 'f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.026 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
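
The transactions above come from os-vif's ovsdbapp backend: an idempotent add-br, then add-port plus a DbSet that stamps the Interface row with the Neutron port ID (iface-id) and instance UUID, which is what ovn-controller later matches against its Port_Binding rows. Roughly equivalent standalone code, collapsed into a single transaction here (the ovsdb-server socket path and timeout are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # AddBridgeCommand / AddPortCommand / DbSetCommand from the log.
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap2c50a91b-2e", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap2c50a91b-2e",
            ("external_ids", {
                "iface-id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:64:30:27",
                "vm-uuid": "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3",
            })))
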
Nov 29 03:42:28 np0005539550 NetworkManager[49039]: <info>  [1764405748.0276] manager: (tap2c50a91b-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.034 257641 INFO os_vif [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:64:30:27,bridge_name='br-int',has_traffic_filtering=True,id=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c50a91b-2e')
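
os_vif is the library doing the work here: nova converts its VIF dict to a VIFOpenVSwitch object (the Converting/Converted lines) and calls os_vif.plug(), which issued the OVSDB transactions above. A reduced sketch of driving it directly, with values copied from the log (the real object graph also carries Network/Subnet details, and plugging needs privileges; treat this as illustrative, not nova's exact construction):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # load the os-vif plugins (ovs, noop, ...)

    vif = vif_obj.VIFOpenVSwitch(
        id="2c50a91b-2e4c-4a7f-9f14-15004d9b2af6",
        address="fa:16:3e:64:30:27",
        vif_name="tap2c50a91b-2e",
        bridge_name="br-int",
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id="2c50a91b-2e4c-4a7f-9f14-15004d9b2af6"))
    inst = instance_info.InstanceInfo(
        uuid="f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3",
        name="instance-000000c7")

    os_vif.plug(vif, inst)  # same entry point as the plug at __init__.py:76
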
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.081 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.081 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.081 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:64:30:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.082 257641 INFO nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Using config drive
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.102 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:42:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:28.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.495 257641 INFO nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Creating config drive at /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/disk.config
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.502 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuothl2rn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 432 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.2 MiB/s wr, 271 op/s
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.610 257641 INFO nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Creating config drive at /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/disk.config
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.616 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpibtf88ko execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.648 257641 DEBUG nova.network.neutron [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updated VIF entry in instance network info cache for port 36bbd65b-abae-4c9f-975a-f15edf566301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.649 257641 DEBUG nova.network.neutron [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updating instance_info_cache with network_info: [{"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.656 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuothl2rn" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.687 257641 DEBUG nova.storage.rbd_utils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image b8166862-6c06-4726-b53e-e53f69cda3df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.690 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/disk.config b8166862-6c06-4726-b53e-e53f69cda3df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:28.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.728 257641 DEBUG oslo_concurrency.lockutils [req-ebe92811-35de-4d90-8b3e-7503e4dbb210 req-389d421d-d458-4d6c-bbc3-95781e82096c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.754 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpibtf88ko" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.795 257641 DEBUG nova.storage.rbd_utils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.801 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/disk.config f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.884 257641 DEBUG oslo_concurrency.processutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/disk.config b8166862-6c06-4726-b53e-e53f69cda3df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.885 257641 INFO nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Deleting local config drive /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df/disk.config because it was imported into RBD.
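
The config-drive flow for both instances is visible end to end in these lines: stage the metadata under a temp dir, pack it into an ISO9660 image labelled config-2 with mkisofs, rbd-import the ISO into the vms pool as <uuid>_disk.config (the CD-ROM device in the domain XML above), then delete the local copy. A condensed replay of the two commands via oslo.concurrency, matching the processutils lines (flags trimmed; the staging directory contents are assumed already written):

    from oslo_concurrency import processutils

    uuid = "b8166862-6c06-4726-b53e-e53f69cda3df"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1. Pack the staged metadata tree into an ISO; the config-2 volume
    #    label is what cloud-init probes for.
    processutils.execute(
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot",
        "-l", "-J", "-r", "-V", "config-2",
        "/tmp/tmpuothl2rn")  # staging dir from the log

    # 2. Import the ISO into RBD so the guest reads it over the network;
    #    afterwards the local file is deleted, as the INFO line says.
    processutils.execute(
        "rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")
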
Nov 29 03:42:28 np0005539550 kernel: tap36bbd65b-ab: entered promiscuous mode
Nov 29 03:42:28 np0005539550 NetworkManager[49039]: <info>  [1764405748.9424] manager: (tap36bbd65b-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/400)
Nov 29 03:42:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:28Z|00906|binding|INFO|Claiming lport 36bbd65b-abae-4c9f-975a-f15edf566301 for this chassis.
Nov 29 03:42:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:28Z|00907|binding|INFO|36bbd65b-abae-4c9f-975a-f15edf566301: Claiming fa:16:3e:e0:3d:6e 10.100.0.7
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.947 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.953 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3d:6e 10.100.0.7'], port_security=['fa:16:3e:e0:3d:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b8166862-6c06-4726-b53e-e53f69cda3df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35110e9e-c6e6-4ced-900a-33a172169d31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '78033c0f-93bd-4432-9d10-d335deddb5fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=539fb972-8e8f-4365-a267-e878104a66cf, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=36bbd65b-abae-4c9f-975a-f15edf566301) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.954 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 36bbd65b-abae-4c9f-975a-f15edf566301 in datapath 35110e9e-c6e6-4ced-900a-33a172169d31 bound to our chassis
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.957 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35110e9e-c6e6-4ced-900a-33a172169d31
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.971 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc17523-bfa8-4d68-8ae7-141192766d0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.972 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35110e9e-c1 in ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
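
Provisioning metadata for a datapath means building an ovnmeta-<network-uuid> namespace with a veth pair: the -c0 end stays in the root namespace (and is plugged into br-int), the -c1 end moves inside so a namespaced haproxy can answer 169.254.169.254 for that network. A bare-bones version of the step with pyroute2, using the names from the log (the agent itself goes through privsep and also assigns the metadata IP and OVS port):

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31"
    netns.create(ns)

    ip = IPRoute()
    # Create the veth pair in the root namespace ...
    ip.link("add", ifname="tap35110e9e-c0", kind="veth",
            peer="tap35110e9e-c1")
    # ... then move the -c1 end into the metadata namespace.
    idx = ip.link_lookup(ifname="tap35110e9e-c1")[0]
    ip.link("set", index=idx, net_ns_fd=ns)
    ip.close()
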
Nov 29 03:42:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:28Z|00908|binding|INFO|Setting lport 36bbd65b-abae-4c9f-975a-f15edf566301 ovn-installed in OVS
Nov 29 03:42:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:28Z|00909|binding|INFO|Setting lport 36bbd65b-abae-4c9f-975a-f15edf566301 up in Southbound
Nov 29 03:42:28 np0005539550 nova_compute[257631]: 2025-11-29 08:42:28.974 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.974 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35110e9e-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.975 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fc64b012-4d90-4348-864b-3dfd2ce9686a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.980 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9a019687-9c4d-4563-904f-f3f988543af7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
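
The recurring privsep: reply[...] lines are the agent's unprivileged process reading results back from its privileged helper daemon over a unix socket; each reply UUID matches an earlier call. The shape of that mechanism in oslo.privsep, reduced to a toy (the context name, config section, and capability set are assumptions; neutron defines its own contexts):

    from oslo_privsep import capabilities, priv_context

    # Functions decorated with @ctx.entrypoint execute in a separate
    # root helper process; arguments and return values cross a unix
    # socket, which is what the daemon.py:501 reply lines record.
    ctx = priv_context.PrivContext(
        "demo", cfg_section="demo_privsep",
        pypath=__name__ + ".ctx",
        capabilities=[capabilities.CAP_NET_ADMIN])

    @ctx.entrypoint
    def set_link_up(ifname):
        from pyroute2 import IPRoute
        with IPRoute() as ip:
            idx = ip.link_lookup(ifname=ifname)[0]
            ip.link("set", index=idx, state="up")
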
Nov 29 03:42:28 np0005539550 systemd-machined[216673]: New machine qemu-106-instance-000000c6.
Nov 29 03:42:28 np0005539550 systemd-udevd[379711]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:28.992 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[ef3acf1e-2e28-40b7-ab16-bf781894a333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:28 np0005539550 systemd[1]: Started Virtual Machine qemu-106-instance-000000c6.
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.0083] device (tap36bbd65b-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.0095] device (tap36bbd65b-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.009 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d7f7bce4-0914-4f18-be5c-4274711feb2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.015 257641 DEBUG oslo_concurrency.processutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/disk.config f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.015 257641 INFO nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Deleting local config drive /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3/disk.config because it was imported into RBD.
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.047 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7b4f43-325d-48e9-8934-5fff6c7bb1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:29 np0005539550 podman[379687]: 2025-11-29 08:42:29.047121688 +0000 UTC m=+0.081953025 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.055 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1f6e46-8490-48f2-bfe3-f647ab85385e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:29 np0005539550 systemd-udevd[379719]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.0576] manager: (tap35110e9e-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/401)
Nov 29 03:42:29 np0005539550 podman[379688]: 2025-11-29 08:42:29.0677727 +0000 UTC m=+0.098100543 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.0972] manager: (tap2c50a91b-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/402)
Nov 29 03:42:29 np0005539550 kernel: tap2c50a91b-2e: entered promiscuous mode
Nov 29 03:42:29 np0005539550 systemd-udevd[379750]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.099 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7bff79d3-b717-4373-9991-270d4b2ee4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00910|binding|INFO|Claiming lport 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 for this chassis.
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00911|binding|INFO|2c50a91b-2e4c-4a7f-9f14-15004d9b2af6: Claiming fa:16:3e:64:30:27 10.100.0.11
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.103 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0fa1eb-19b9-486d-9d1c-d821c789ef4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.109 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:30:27 10.100.0.11'], port_security=['fa:16:3e:64:30:27 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1e268cee-9d36-413b-9d89-7a6c381c8f3d 23111efd-29d0-4a29-b4b5-9c0e8276cee4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f7c3086-1e59-4178-81ca-2a80fe1fd75b, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.1146] device (tap2c50a91b-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.1153] device (tap2c50a91b-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00912|binding|INFO|Setting lport 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 ovn-installed in OVS
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00913|binding|INFO|Setting lport 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 up in Southbound
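
With the lport claimed, installed in OVS, and set up in the Southbound DB, Neutron will see Port_Binding.up flip to true and emit the network-vif-plugged event nova registered for earlier. One way to check the binding out of band, again via ovsdbapp (the SB socket path is an assumption; ovn-sbctl list Port_Binding shows the same rows):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/ovn/ovnsb_db.sock", "OVN_Southbound")
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    rows = sb.db_find(
        "Port_Binding",
        ("logical_port", "=", "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6"),
    ).execute(check_error=True)
    # A non-empty 'chassis' and up=[True] correspond to the Claiming /
    # "up in Southbound" lines from ovn_controller above.
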
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.121 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.126 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.1344] device (tap35110e9e-c0): carrier: link connected
Nov 29 03:42:29 np0005539550 systemd-machined[216673]: New machine qemu-107-instance-000000c7.
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.141 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[dee0d4c0-1d4a-4885-9e97-c1a866da4fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:29 np0005539550 systemd[1]: Started Virtual Machine qemu-107-instance-000000c7.
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.160 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[267f7338-b42b-4839-a3a1-bc6e0c02c5bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35110e9e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:82:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879679, 'reachable_time': 40421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379799, 'error': None, 'target': 'ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.179 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7148731c-565a-43b0-963f-e21b2d6e9c57]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:82db'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 879679, 'tstamp': 879679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379800, 'error': None, 'target': 'ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.202 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[09ae5ff8-4fe0-4a6c-9c84-a85959ac0188]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35110e9e-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:82:db'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 263], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879679, 'reachable_time': 40421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379805, 'error': None, 'target': 'ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.237 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[46af848e-50e1-4680-ae62-356ca846f605]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.297 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4b68cb-f194-4b74-83cc-78b92306b99a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.299 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35110e9e-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.299 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.299 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35110e9e-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.353 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 kernel: tap35110e9e-c0: entered promiscuous mode
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.358 257641 DEBUG nova.network.neutron [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updated VIF entry in instance network info cache for port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.3599] manager: (tap35110e9e-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.358 257641 DEBUG nova.network.neutron [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updating instance_info_cache with network_info: [{"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.361 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.362 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35110e9e-c0, col_values=(('external_ids', {'iface-id': 'b9ac4386-837b-4ba8-ba0d-37623a2924f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00914|binding|INFO|Releasing lport b9ac4386-837b-4ba8-ba0d-37623a2924f0 from this chassis (sb_readonly=0)
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.363 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.366 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35110e9e-c6e6-4ced-900a-33a172169d31.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35110e9e-c6e6-4ced-900a-33a172169d31.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.366 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d5a96a-b0a0-416d-b9a4-46492e7f833b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.367 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-35110e9e-c6e6-4ced-900a-33a172169d31
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/35110e9e-c6e6-4ced-900a-33a172169d31.pid.haproxy
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 35110e9e-c6e6-4ced-900a-33a172169d31
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.368 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31', 'env', 'PROCESS_TAG=haproxy-35110e9e-c6e6-4ced-900a-33a172169d31', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35110e9e-c6e6-4ced-900a-33a172169d31.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.380 257641 DEBUG oslo_concurrency.lockutils [req-d02ade9d-ce51-45e1-b25d-92dc654967ef req-79ffe8d9-33a0-403d-b83d-b5ba95c8461a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.381 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.525 257641 DEBUG nova.compute.manager [req-409447e5-4883-48c9-b5ce-2fda3b6c327c req-0bae2fb5-92cd-4a45-b7f1-1d5ec19c0c28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.525 257641 DEBUG oslo_concurrency.lockutils [req-409447e5-4883-48c9-b5ce-2fda3b6c327c req-0bae2fb5-92cd-4a45-b7f1-1d5ec19c0c28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.526 257641 DEBUG oslo_concurrency.lockutils [req-409447e5-4883-48c9-b5ce-2fda3b6c327c req-0bae2fb5-92cd-4a45-b7f1-1d5ec19c0c28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.526 257641 DEBUG oslo_concurrency.lockutils [req-409447e5-4883-48c9-b5ce-2fda3b6c327c req-0bae2fb5-92cd-4a45-b7f1-1d5ec19c0c28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.526 257641 DEBUG nova.compute.manager [req-409447e5-4883-48c9-b5ce-2fda3b6c327c req-0bae2fb5-92cd-4a45-b7f1-1d5ec19c0c28 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Processing event network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.666 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405749.6662009, f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.668 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] VM Started (Lifecycle Event)#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.690 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.691 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.692 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.692 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.693 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.694 257641 INFO nova.compute.manager [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Terminating instance#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.696 257641 DEBUG nova.compute.manager [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.698 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.702 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405749.6676676, f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.703 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.721 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.723 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.725 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.727 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.729 257641 INFO nova.virt.libvirt.driver [-] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Instance spawned successfully.#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.729 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:42:29 np0005539550 kernel: tap72ccc11c-64 (unregistering): left promiscuous mode
Nov 29 03:42:29 np0005539550 NetworkManager[49039]: <info>  [1764405749.7394] device (tap72ccc11c-64): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:29 np0005539550 podman[379946]: 2025-11-29 08:42:29.746495076 +0000 UTC m=+0.058908992 container create de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.754 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00915|binding|INFO|Releasing lport 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 from this chassis (sb_readonly=0)
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00916|binding|INFO|Setting lport 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 down in Southbound
Nov 29 03:42:29 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:29Z|00917|binding|INFO|Removing iface tap72ccc11c-64 ovn-installed in OVS
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.763 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.763 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405749.720746, b8166862-6c06-4726-b53e-e53f69cda3df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.763 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] VM Started (Lifecycle Event)#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.765 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.764 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:2e:c5 10.100.0.14'], port_security=['fa:16:3e:c1:2e:c5 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2d10cd8b-9390-45ac-b686-ea34b999fb8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-379eb72a-ec90-4461-897a-adab6a88928f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6519f321a4954567ab99a11cc07cc5ac', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f7ac3748-9331-4cc2-bcd0-273842e7e38b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02c1f455-8d7e-4f22-83b6-df0a05597294, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.769 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.769 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.770 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.770 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.770 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.771 257641 DEBUG nova.virt.libvirt.driver [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:29 np0005539550 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Nov 29 03:42:29 np0005539550 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d000000c5.scope: Consumed 3.090s CPU time.
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.801 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:29 np0005539550 systemd[1]: Started libpod-conmon-de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f.scope.
Nov 29 03:42:29 np0005539550 systemd-machined[216673]: Machine qemu-105-instance-000000c5 terminated.
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.807 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:29 np0005539550 podman[379946]: 2025-11-29 08:42:29.714725472 +0000 UTC m=+0.027139408 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:42:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:42:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b94ca205a38d5d55d200dcf68a6882da2614caffa1aa1755dc7ae2c516bc5fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:29 np0005539550 podman[379946]: 2025-11-29 08:42:29.84902848 +0000 UTC m=+0.161442416 container init de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:42:29 np0005539550 podman[379946]: 2025-11-29 08:42:29.854681793 +0000 UTC m=+0.167095709 container start de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:42:29 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [NOTICE]   (379967) : New worker (379969) forked
Nov 29 03:42:29 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [NOTICE]   (379967) : Loading success.
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.914 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 in datapath e9e8d3c4-db7b-4d58-959e-9279d976835d unbound from our chassis#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.918 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e9e8d3c4-db7b-4d58-959e-9279d976835d#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.929 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[812466b5-7d2c-4be9-a72f-653fb6a19c72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.930 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape9e8d3c4-d1 in ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.935 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape9e8d3c4-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.935 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d9406829-3f43-49ff-8217-7d9bc275c5cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.937 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f6a7c4-a50e-43c6-8d2d-597f02c300d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.949 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[dd58aeaa-b1e0-4491-9d90-ffb462d568e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.973 257641 INFO nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Took 10.05 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.973 257641 DEBUG nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:29 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:29.973 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fe96a3-26cc-4cdf-91b6-2ca299c99d90]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.981 257641 INFO nova.virt.libvirt.driver [-] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Instance destroyed successfully.#033[00m
Nov 29 03:42:29 np0005539550 nova_compute[257631]: 2025-11-29 08:42:29.982 257641 DEBUG nova.objects.instance [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lazy-loading 'resources' on Instance uuid 2d10cd8b-9390-45ac-b686-ea34b999fb8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.004 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[8d4def5a-da32-4081-a21e-882835531e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 NetworkManager[49039]: <info>  [1764405750.0135] manager: (tape9e8d3c4-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/404)
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.010 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[37fce432-8f40-4888-b8de-6d4dd3990f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.018 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.018 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405749.7208178, b8166862-6c06-4726-b53e-e53f69cda3df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.019 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.021 257641 DEBUG nova.virt.libvirt.vif [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:42:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-391566222',display_name='tempest-TestServerMultinode-server-391566222',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-391566222',id=197,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:42:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6519f321a4954567ab99a11cc07cc5ac',ramdisk_id='',reservation_id='r-sescoia7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1895688433',owner_user_name='tempest-TestServerMultinode-1895688433-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:42:27Z,user_data=None,user_id='dda7b3867e5c45a7bb78d049103bc095',uuid=2d10cd8b-9390-45ac-b686-ea34b999fb8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.021 257641 DEBUG nova.network.os_vif_util [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converting VIF {"id": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "address": "fa:16:3e:c1:2e:c5", "network": {"id": "379eb72a-ec90-4461-897a-adab6a88928f", "bridge": "br-int", "label": "tempest-TestServerMultinode-1337172465-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df59ee3c04c04efabfae553312366b99", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72ccc11c-64", "ovs_interfaceid": "72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.022 257641 DEBUG nova.network.os_vif_util [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:2e:c5,bridge_name='br-int',has_traffic_filtering=True,id=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ccc11c-64') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.023 257641 DEBUG os_vif [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:2e:c5,bridge_name='br-int',has_traffic_filtering=True,id=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ccc11c-64') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.027 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.028 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72ccc11c-64, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.032 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.033 257641 INFO os_vif [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:2e:c5,bridge_name='br-int',has_traffic_filtering=True,id=72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3,network=Network(379eb72a-ec90-4461-897a-adab6a88928f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72ccc11c-64')#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.041 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[41686355-7690-4656-a3f0-1c2bba911cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.044 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[fff4aca7-29c7-4fde-b5e0-18f57718bc06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.060 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.067 257641 INFO nova.compute.manager [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Took 11.46 seconds to build instance.#033[00m
Nov 29 03:42:30 np0005539550 NetworkManager[49039]: <info>  [1764405750.0691] device (tape9e8d3c4-d0): carrier: link connected
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.070 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405749.724127, b8166862-6c06-4726-b53e-e53f69cda3df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.070 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.076 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbbebd7-14e2-4a97-a909-6353ffef13a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.091 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.093 257641 DEBUG oslo_concurrency.lockutils [None req-3cb9dda5-d76f-4865-b2b1-5666569c9439 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.095 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[53c24852-6ba5-4ab2-82bf-ebc568fb93a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9e8d3c4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:52:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879773, 'reachable_time': 20496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380015, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.097 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.110 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3840d6ca-4b4a-46eb-9cbb-93dd8133d835]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:5214'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 879773, 'tstamp': 879773}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380019, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.125 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3b6341-92be-40dd-9f79-61e4a53da36b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape9e8d3c4-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:52:14'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 266], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879773, 'reachable_time': 20496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 380020, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.158 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[da433021-af19-474c-99d0-31b061c45b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.216 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[27edc8f3-edb6-408e-a4c8-27fb2ac97acc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.217 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9e8d3c4-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.218 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.218 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape9e8d3c4-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:30 np0005539550 NetworkManager[49039]: <info>  [1764405750.2204] manager: (tape9e8d3c4-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.220 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539550 kernel: tape9e8d3c4-d0: entered promiscuous mode
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.223 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.224 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape9e8d3c4-d0, col_values=(('external_ids', {'iface-id': '145c794a-351b-44e3-94d7-5ac8d914c828'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.225 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:30Z|00918|binding|INFO|Releasing lport 145c794a-351b-44e3-94d7-5ac8d914c828 from this chassis (sb_readonly=0)
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.243 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.245 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e9e8d3c4-db7b-4d58-959e-9279d976835d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e9e8d3c4-db7b-4d58-959e-9279d976835d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.245 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c1b590-2ba6-4695-aebc-05c386abcefb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.246 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-e9e8d3c4-db7b-4d58-959e-9279d976835d
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/e9e8d3c4-db7b-4d58-959e-9279d976835d.pid.haproxy
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID e9e8d3c4-db7b-4d58-959e-9279d976835d
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.246 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'env', 'PROCESS_TAG=haproxy-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e9e8d3c4-db7b-4d58-959e-9279d976835d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:42:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:30.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.458 257641 INFO nova.virt.libvirt.driver [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Deleting instance files /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c_del#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.459 257641 INFO nova.virt.libvirt.driver [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Deletion of /var/lib/nova/instances/2d10cd8b-9390-45ac-b686-ea34b999fb8c_del complete#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.515 257641 INFO nova.compute.manager [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.515 257641 DEBUG oslo.service.loopingcall [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.516 257641 DEBUG nova.compute.manager [-] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:42:30 np0005539550 nova_compute[257631]: 2025-11-29 08:42:30.516 257641 DEBUG nova.network.neutron [-] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:42:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 432 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.9 MiB/s wr, 258 op/s
Nov 29 03:42:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:30 np0005539550 podman[380054]: 2025-11-29 08:42:30.621748324 +0000 UTC m=+0.052381236 container create ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:42:30 np0005539550 systemd[1]: Started libpod-conmon-ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68.scope.
Nov 29 03:42:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:42:30 np0005539550 podman[380054]: 2025-11-29 08:42:30.594088864 +0000 UTC m=+0.024721796 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:42:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da92d758081b86f93be0f26cc860db5f37b4340f6227df9528d133914633a628/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:42:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:30.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:30 np0005539550 podman[380054]: 2025-11-29 08:42:30.71089161 +0000 UTC m=+0.141524542 container init ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 03:42:30 np0005539550 podman[380054]: 2025-11-29 08:42:30.716745438 +0000 UTC m=+0.147378350 container start ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[380070]: [NOTICE]   (380075) : New worker (380077) forked
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[380070]: [NOTICE]   (380075) : Loading success.
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.790 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 in datapath 379eb72a-ec90-4461-897a-adab6a88928f unbound from our chassis#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.792 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 379eb72a-ec90-4461-897a-adab6a88928f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.793 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3d4839-9bb2-4bef-8539-3b6456082476]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:30.793 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f namespace which is not needed anymore#033[00m
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [NOTICE]   (379419) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [NOTICE]   (379419) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [WARNING]  (379419) : Exiting Master process...
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [WARNING]  (379419) : Exiting Master process...
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [ALERT]    (379419) : Current worker (379421) exited with code 143 (Terminated)
Nov 29 03:42:30 np0005539550 neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f[379415]: [WARNING]  (379419) : All workers exited. Exiting... (0)
Nov 29 03:42:30 np0005539550 systemd[1]: libpod-04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879.scope: Deactivated successfully.
Nov 29 03:42:30 np0005539550 podman[380103]: 2025-11-29 08:42:30.934108189 +0000 UTC m=+0.052463639 container died 04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:42:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6ad3ba4360203651a41952b4188ac6c9975ad8dcd10fc40206f77966c477e350-merged.mount: Deactivated successfully.
Nov 29 03:42:30 np0005539550 podman[380103]: 2025-11-29 08:42:30.970763746 +0000 UTC m=+0.089119206 container cleanup 04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:42:30 np0005539550 systemd[1]: libpod-conmon-04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879.scope: Deactivated successfully.
Nov 29 03:42:31 np0005539550 podman[380133]: 2025-11-29 08:42:31.033425052 +0000 UTC m=+0.042253400 container remove 04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.039 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cb3e48fd-be00-4843-b167-6d62fb92d53b]: (4, ('Sat Nov 29 08:42:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f (04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879)\n04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879\nSat Nov 29 08:42:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f (04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879)\n04c80c2cc1a39f7a3447408815ff5da8a6984e5c830906278992c96322565879\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.041 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a191277a-c7a0-46fa-892a-0ded7dbda48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.042 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap379eb72a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.044 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:31 np0005539550 kernel: tap379eb72a-e0: left promiscuous mode
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.062 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.063 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.066 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2f44ab47-f553-4108-a6be-3c3dde043b48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.080 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[564ba3fb-d271-45b3-9237-867929472bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.082 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f0b893-6963-42d3-b523-39945a3d855c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.097 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[961a133e-bee7-4882-bd0f-124769b383c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879347, 'reachable_time': 35906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380148, 'error': None, 'target': 'ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.100 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-379eb72a-ec90-4461-897a-adab6a88928f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:31.100 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[384e9c66-1a79-43cc-b62e-8695052d2da4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:31 np0005539550 systemd[1]: run-netns-ovnmeta\x2d379eb72a\x2dec90\x2d4461\x2d897a\x2dadab6a88928f.mount: Deactivated successfully.
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.965 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.966 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.966 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.966 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.966 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] No waiting events found dispatching network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.967 257641 WARNING nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received unexpected event network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.967 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.967 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.968 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.968 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.968 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Processing event network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.968 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.969 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.969 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.969 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.970 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] No waiting events found dispatching network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.970 257641 WARNING nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received unexpected event network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.970 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received event network-vif-unplugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.970 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.971 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.971 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.971 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] No waiting events found dispatching network-vif-unplugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.971 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received event network-vif-unplugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.972 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received event network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.972 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.972 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.973 257641 DEBUG oslo_concurrency.lockutils [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.973 257641 DEBUG nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] No waiting events found dispatching network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.973 257641 WARNING nova.compute.manager [req-2cc93ef1-ba5a-40ce-93dc-c0464d84a984 req-2187cef5-dedd-4863-8638-ffe743668bf7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received unexpected event network-vif-plugged-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.974 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.978 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.979 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405751.9795928, f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.980 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.985 257641 INFO nova.virt.libvirt.driver [-] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Instance spawned successfully.#033[00m
Nov 29 03:42:31 np0005539550 nova_compute[257631]: 2025-11-29 08:42:31.985 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.003 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.007 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.018 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.018 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.019 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.019 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.019 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.020 257641 DEBUG nova.virt.libvirt.driver [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:42:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:32.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:32 np0005539550 nova_compute[257631]: 2025-11-29 08:42:32.456 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:42:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.9 MiB/s wr, 380 op/s
Nov 29 03:42:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:32.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.638 257641 DEBUG nova.network.neutron [-] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.648 257641 INFO nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Took 13.03 seconds to spawn the instance on the hypervisor.
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.650 257641 DEBUG nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.698 257641 INFO nova.compute.manager [-] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Took 3.18 seconds to deallocate network for instance.
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.765 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.765 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.780 257641 INFO nova.compute.manager [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Took 14.93 seconds to build instance.
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.809 257641 DEBUG oslo_concurrency.lockutils [None req-7d2b4242-8baa-4fd7-b5ac-cc4b0a05a23b de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:42:33 np0005539550 nova_compute[257631]: 2025-11-29 08:42:33.894 257641 DEBUG oslo_concurrency.processutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:34 np0005539550 nova_compute[257631]: 2025-11-29 08:42:34.042 257641 DEBUG nova.compute.manager [req-3ecfdf5a-a5ba-45e1-9cbf-c988f08778ea req-1ffe1216-136a-4487-9aca-83840e10f663 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Received event network-vif-deleted-72ccc11c-64f6-43ee-bbcd-a8e9c7185aa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:42:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:34.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853967639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:34 np0005539550 nova_compute[257631]: 2025-11-29 08:42:34.350 257641 DEBUG oslo_concurrency.processutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
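The `ceph df --format=json` probe above is how the resource tracker samples Ceph capacity behind the DISK_GB inventory it reports next. A minimal sketch of the same probe, assuming the `ceph` CLI, the `openstack` cephx user, and the `/etc/ceph/ceph.conf` path shown in the log are available on the host; the top-level 'stats' block of the JSON reply carries cluster-wide byte totals:

    import json
    import subprocess

    def ceph_df(conf='/etc/ceph/ceph.conf', user='openstack'):
        # Same command oslo.concurrency logs above (returned 0 in 0.456s).
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', user, '--conf', conf])
        return json.loads(out)

    stats = ceph_df()['stats']
    # Cluster-wide counters, matching the "19 GiB / 21 GiB avail" figures
    # in the nearby ceph-mgr pgmap lines.
    print('free: %.1f%%' % (100.0 * stats['total_avail_bytes'] / stats['total_bytes']))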
Nov 29 03:42:34 np0005539550 nova_compute[257631]: 2025-11-29 08:42:34.356 257641 DEBUG nova.compute.provider_tree [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:42:34 np0005539550 nova_compute[257631]: 2025-11-29 08:42:34.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 390 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.1 MiB/s rd, 2.2 MiB/s wr, 360 op/s
Nov 29 03:42:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:34.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.030 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.557 257641 DEBUG oslo_concurrency.lockutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.558 257641 DEBUG oslo_concurrency.lockutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.570 257641 DEBUG nova.scheduler.client.report [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
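The inventory dict above is what Placement uses for admission checks. Assuming Placement's usual capacity rule, capacity = (total - reserved) * allocation_ratio, this host schedules as if it had 32 vCPUs, 7168 MB of RAM, and 17.1 GB of disk; a quick worked check:

    # Worked example for the inventory logged above (assumed capacity rule:
    # (total - reserved) * allocation_ratio).
    inventory = {
        'VCPU': (8, 0, 4.0),
        'MEMORY_MB': (7680, 512, 1.0),
        'DISK_GB': (20, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1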
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.600 257641 DEBUG nova.objects.instance [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lazy-loading 'flavor' on Instance uuid 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.604 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.658 257641 INFO nova.scheduler.client.report [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Deleted allocations for instance 2d10cd8b-9390-45ac-b686-ea34b999fb8c
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.676 257641 DEBUG oslo_concurrency.lockutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:42:35 np0005539550 nova_compute[257631]: 2025-11-29 08:42:35.743 257641 DEBUG oslo_concurrency.lockutils [None req-45bb3bf7-9f4b-4096-b891-505e7251e96e dda7b3867e5c45a7bb78d049103bc095 6519f321a4954567ab99a11cc07cc5ac - - default default] Lock "2d10cd8b-9390-45ac-b686-ea34b999fb8c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:42:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 432 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 MiB/s rd, 3.8 MiB/s wr, 482 op/s
Nov 29 03:42:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:36.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:36 np0005539550 nova_compute[257631]: 2025-11-29 08:42:36.916 257641 DEBUG oslo_concurrency.lockutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:42:36 np0005539550 nova_compute[257631]: 2025-11-29 08:42:36.917 257641 DEBUG oslo_concurrency.lockutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:42:36 np0005539550 nova_compute[257631]: 2025-11-29 08:42:36.917 257641 INFO nova.compute.manager [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Attaching volume 12074156-11fd-4503-8e66-ac1cd1f8e83a to /dev/vdb
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.314 257641 DEBUG os_brick.utils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.315 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.328 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.328 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[26f60e9b-6c2b-4e97-996f-0ac3afabfd67]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.330 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.338 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.338 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[853d4ad6-d2d9-4802-bf2d-3791769c9928]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.339 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.348 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.348 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[3ceae1c2-8970-4966-a40e-563e00a13173]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.350 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[910eec25-a34b-4b61-ab78-a30876b7fd12]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.351 257641 DEBUG oslo_concurrency.processutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.384 257641 DEBUG oslo_concurrency.processutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.387 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.387 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.388 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.388 257641 DEBUG os_brick.utils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
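The call/return pair above is os-brick's standard host probe: it queries multipathd, reads the iSCSI initiator IQN, resolves the root filesystem source, and derives the NVMe host NQN, then hands the result to the volume flow. A sketch of the same call using the arguments the trace shows; illustrative, not Nova's exact call site:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # props carries 'initiator' (iSCSI IQN), 'nqn'/'nvme_hostid' (NVMe),
    # and the multipath flags seen in the return value above.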
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.389 257641 DEBUG nova.virt.block_device [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Updating existing volume attachment record: 783ed1d0-0137-422c-8723-899faeb6c232 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.771 257641 DEBUG nova.compute.manager [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-changed-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.771 257641 DEBUG nova.compute.manager [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Refreshing instance network info cache due to event network-changed-36bbd65b-abae-4c9f-975a-f15edf566301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.772 257641 DEBUG oslo_concurrency.lockutils [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.772 257641 DEBUG oslo_concurrency.lockutils [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:42:37 np0005539550 nova_compute[257631]: 2025-11-29 08:42:37.773 257641 DEBUG nova.network.neutron [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Refreshing network info cache for port 36bbd65b-abae-4c9f-975a-f15edf566301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:42:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:42:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3452707831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:42:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 428 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.8 MiB/s rd, 4.2 MiB/s wr, 380 op/s
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.686 257641 DEBUG os_brick.encryptors [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Using volume encryption metadata '{'encryption_key_id': 'f1784f4d-bd83-4c57-a711-5366b7510394', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-12074156-11fd-4503-8e66-ac1cd1f8e83a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '12074156-11fd-4503-8e66-ac1cd1f8e83a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe', 'attached_at': '', 'detached_at': '', 'volume_id': '12074156-11fd-4503-8e66-ac1cd1f8e83a', 'serial': '12074156-11fd-4503-8e66-ac1cd1f8e83a'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.693 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 29 03:42:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:38.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.716 257641 DEBUG barbicanclient.v1.secrets [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/f1784f4d-bd83-4c57-a711-5366b7510394 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.716 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.746 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.747 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.775 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.776 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.806 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.807 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.826 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.827 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.846 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.847 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.867 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.868 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.899 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.900 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.939 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.940 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.975 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:38 np0005539550 nova_compute[257631]: 2025-11-29 08:42:38.975 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.020 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.021 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.056 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.057 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.102 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.103 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.123 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.124 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.144 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.144 257641 INFO barbicanclient.base [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Calculated Secrets uuid ref: secrets/f1784f4d-bd83-4c57-a711-5366b7510394
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.166 257641 DEBUG barbicanclient.client [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
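The burst of barbicanclient GETs above is Nova fetching the LUKS passphrase for encryption_key_id f1784f4d-bd83-4c57-a711-5366b7510394 before it can define the libvirt secret. A minimal sketch with python-barbicanclient; `sess` stands in for an authenticated keystoneauth1 session, which the log does not show:

    from barbicanclient import client

    barbican = client.Client(session=sess)  # sess: keystoneauth1 session (assumed)
    ref = ('https://barbican-internal.openstack.svc:9311'
           '/secrets/f1784f4d-bd83-4c57-a711-5366b7510394')
    secret = barbican.secrets.get(ref)  # lazy; metadata is fetched on access
    passphrase = secret.payload         # reading .payload issues a further GET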
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.168 257641 DEBUG nova.virt.libvirt.host [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  <usage type="volume">
Nov 29 03:42:39 np0005539550 nova_compute[257631]:    <volume>12074156-11fd-4503-8e66-ac1cd1f8e83a</volume>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  </usage>
Nov 29 03:42:39 np0005539550 nova_compute[257631]: </secret>
Nov 29 03:42:39 np0005539550 nova_compute[257631]: create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
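With the passphrase in hand, the Secret XML above is defined against libvirt and given the passphrase as its value, so QEMU can open the LUKS layer. A rough libvirt-python equivalent, assuming `conn` is an open qemu:///system connection and `passphrase` is the bytes fetched from Barbican:

    import libvirt

    conn = libvirt.open('qemu:///system')
    secret = conn.secretDefineXML(
        '<secret ephemeral="no" private="no">'
        '<usage type="volume">'
        '<volume>12074156-11fd-4503-8e66-ac1cd1f8e83a</volume>'
        '</usage>'
        '</secret>')
    secret.setValue(passphrase)  # LUKS passphrase; held only by libvirtd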
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.179 257641 DEBUG nova.objects.instance [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lazy-loading 'flavor' on Instance uuid 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.259 257641 DEBUG nova.virt.libvirt.driver [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Attempting to attach volume 12074156-11fd-4503-8e66-ac1cd1f8e83a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.263 257641 DEBUG nova.virt.libvirt.guest [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-12074156-11fd-4503-8e66-ac1cd1f8e83a">
Nov 29 03:42:39 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:42:39 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  <serial>12074156-11fd-4503-8e66-ac1cd1f8e83a</serial>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  <encryption format="luks">
Nov 29 03:42:39 np0005539550 nova_compute[257631]:    <secret type="passphrase" uuid="0c1f7d3a-fa57-40b8-bd33-a2c90da2aac2"/>
Nov 29 03:42:39 np0005539550 nova_compute[257631]:  </encryption>
Nov 29 03:42:39 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:42:39 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
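The <disk> element above is then attached to both the live guest and its persistent definition. Roughly equivalent libvirt-python, under the same `conn` assumption as before; `disk_xml` stands for the element just logged (illustrative, not Nova's exact call site):

    import libvirt

    dom = conn.lookupByUUIDString('60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe')
    dom.attachDeviceFlags(
        disk_xml,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)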
Nov 29 03:42:39 np0005539550 nova_compute[257631]: 2025-11-29 08:42:39.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.032 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.284 257641 DEBUG nova.network.neutron [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updated VIF entry in instance network info cache for port 36bbd65b-abae-4c9f-975a-f15edf566301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.284 257641 DEBUG nova.network.neutron [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updating instance_info_cache with network_info: [{"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.315 257641 DEBUG oslo_concurrency.lockutils [req-e34e79be-8b09-45a4-9649-830452146d2e req-3b0c240b-4ab5-4157-b1f0-484c1310811e 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:42:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:40.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:40 np0005539550 podman[380202]: 2025-11-29 08:42:40.354239757 +0000 UTC m=+0.092223705 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:42:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 428 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 4.2 MiB/s wr, 364 op/s
Nov 29 03:42:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.658 257641 DEBUG nova.compute.manager [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-changed-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.658 257641 DEBUG nova.compute.manager [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Refreshing instance network info cache due to event network-changed-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.658 257641 DEBUG oslo_concurrency.lockutils [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.659 257641 DEBUG oslo_concurrency.lockutils [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:42:40 np0005539550 nova_compute[257631]: 2025-11-29 08:42:40.659 257641 DEBUG nova.network.neutron [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Refreshing network info cache for port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:42:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:42 np0005539550 nova_compute[257631]: 2025-11-29 08:42:42.011 257641 DEBUG nova.virt.libvirt.driver [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:42:42 np0005539550 nova_compute[257631]: 2025-11-29 08:42:42.012 257641 DEBUG nova.virt.libvirt.driver [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:42:42 np0005539550 nova_compute[257631]: 2025-11-29 08:42:42.012 257641 DEBUG nova.virt.libvirt.driver [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:42:42 np0005539550 nova_compute[257631]: 2025-11-29 08:42:42.012 257641 DEBUG nova.virt.libvirt.driver [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] No VIF found with MAC fa:16:3e:1a:80:54, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:42:42 np0005539550 nova_compute[257631]: 2025-11-29 08:42:42.273 257641 DEBUG oslo_concurrency.lockutils [None req-ae00ebac-bcdd-4256-90fd-04f523a18dbe ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:42:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 MiB/s rd, 4.3 MiB/s wr, 398 op/s
Nov 29 03:42:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:42.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.388 257641 DEBUG nova.network.neutron [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updated VIF entry in instance network info cache for port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.389 257641 DEBUG nova.network.neutron [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updating instance_info_cache with network_info: [{"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.409 257641 DEBUG oslo_concurrency.lockutils [req-db297c1e-d7d6-4abe-beb6-ebb641fb9411 req-a7787ea0-79d8-4004-b3e5-0e00e9487bb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.550 257641 DEBUG oslo_concurrency.lockutils [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.550 257641 DEBUG oslo_concurrency.lockutils [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.569 257641 INFO nova.compute.manager [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Detaching volume 12074156-11fd-4503-8e66-ac1cd1f8e83a
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.702 257641 INFO nova.virt.block_device [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Attempting to driver detach volume 12074156-11fd-4503-8e66-ac1cd1f8e83a from mountpoint /dev/vdb
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.833 257641 DEBUG os_brick.encryptors [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Using volume encryption metadata '{'encryption_key_id': 'f1784f4d-bd83-4c57-a711-5366b7510394', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-12074156-11fd-4503-8e66-ac1cd1f8e83a', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '12074156-11fd-4503-8e66-ac1cd1f8e83a', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe', 'attached_at': '', 'detached_at': '', 'volume_id': '12074156-11fd-4503-8e66-ac1cd1f8e83a', 'serial': '12074156-11fd-4503-8e66-ac1cd1f8e83a'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.841 257641 DEBUG nova.virt.libvirt.driver [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Attempting to detach device vdb from instance 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:42:43 np0005539550 nova_compute[257631]: 2025-11-29 08:42:43.842 257641 DEBUG nova.virt.libvirt.guest [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-12074156-11fd-4503-8e66-ac1cd1f8e83a">
Nov 29 03:42:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  <serial>12074156-11fd-4503-8e66-ac1cd1f8e83a</serial>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  <encryption format="luks">
Nov 29 03:42:43 np0005539550 nova_compute[257631]:    <secret type="passphrase" uuid="0c1f7d3a-fa57-40b8-bd33-a2c90da2aac2"/>
Nov 29 03:42:43 np0005539550 nova_compute[257631]:  </encryption>
Nov 29 03:42:43 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:42:43 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.052 257641 INFO nova.virt.libvirt.driver [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Successfully detached device vdb from instance 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe from the persistent domain config.
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.054 257641 DEBUG nova.virt.libvirt.driver [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.054 257641 DEBUG nova.virt.libvirt.guest [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-12074156-11fd-4503-8e66-ac1cd1f8e83a">
Nov 29 03:42:44 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  <serial>12074156-11fd-4503-8e66-ac1cd1f8e83a</serial>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  <encryption format="luks">
Nov 29 03:42:44 np0005539550 nova_compute[257631]:    <secret type="passphrase" uuid="0c1f7d3a-fa57-40b8-bd33-a2c90da2aac2"/>
Nov 29 03:42:44 np0005539550 nova_compute[257631]:  </encryption>
Nov 29 03:42:44 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:42:44 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:42:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:44Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:3d:6e 10.100.0.7
Nov 29 03:42:44 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:44Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:3d:6e 10.100.0.7
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.195 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405764.1944952, 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.197 257641 DEBUG nova.virt.libvirt.driver [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.200 257641 INFO nova.virt.libvirt.driver [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Successfully detached device vdb from instance 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe from the live domain config.#033[00m
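The "(1/8)" counter and the DeviceRemovedEvent above show that live detach is asynchronous: Nova issues the detach, then waits for libvirt's device-removed event, retrying up to eight times. A simplified stand-in (hypothetical helper) that polls the live domain XML instead of subscribing to events, same flow:

    import time
    import libvirt

    def live_detach(dom, detach_xml, dev="vdb", timeout=20.0):
        # Mirrors _detach_from_live_with_retry: fire the detach, then wait
        # for the device to vanish from the live domain XML.
        dom.detachDeviceFlags(detach_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if f'dev="{dev}"' not in dom.XMLDesc(0):
                return True   # corresponds to the 08:42:44.200 success line
            time.sleep(0.5)
        return False          # caller would retry, as the (1/8) counter shows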
Nov 29 03:42:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:44.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.445 257641 DEBUG nova.objects.instance [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lazy-loading 'flavor' on Instance uuid 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.502 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:44 np0005539550 nova_compute[257631]: 2025-11-29 08:42:44.509 257641 DEBUG oslo_concurrency.lockutils [None req-d88435ac-ec4f-4e53-8b61-35a23f8c1dd3 ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
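The Acquiring/acquired/released triplets in these lines are oslo.concurrency's standard lock instrumentation. The pattern behind them is roughly the following (hedged sketch; the lock name is the instance UUID, which is why the detach and terminate paths serialize on it):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe")
    def do_detach_volume():
        # Everything here runs under the per-instance lock; the log shows
        # it was held for 0.959s before "released" was printed.
        ...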
Nov 29 03:42:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 376 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.0 MiB/s wr, 294 op/s
Nov 29 03:42:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
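The beast lines are radosgw's access log; these anonymous HEAD / probes from the controller addresses look like load-balancer health checks. A small parser for the fixed fields, assuming this exact one-line format:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<lat>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:42:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.search(line)
    print(m["ip"], m["status"], float(m["lat"]))
    # -> 192.168.122.102 200 0.001000025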
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.034 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.118 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405749.9708376, 2d10cd8b-9390-45ac-b686-ea34b999fb8c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.119 257641 INFO nova.compute.manager [-] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.151 257641 DEBUG nova.compute.manager [None req-d2a5319b-4d74-4b20-8004-ec2fafc221f8 - - - - - -] [instance: 2d10cd8b-9390-45ac-b686-ea34b999fb8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.263 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.263 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.264 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.264 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.264 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.265 257641 INFO nova.compute.manager [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Terminating instance#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.266 257641 DEBUG nova.compute.manager [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:42:45 np0005539550 kernel: tap8853309e-75 (unregistering): left promiscuous mode
Nov 29 03:42:45 np0005539550 NetworkManager[49039]: <info>  [1764405765.3153] device (tap8853309e-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.322 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:45Z|00919|binding|INFO|Releasing lport 8853309e-7522-4c33-8ef8-68e265aa66bc from this chassis (sb_readonly=0)
Nov 29 03:42:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:45Z|00920|binding|INFO|Setting lport 8853309e-7522-4c33-8ef8-68e265aa66bc down in Southbound
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.325 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:45Z|00921|binding|INFO|Removing iface tap8853309e-75 ovn-installed in OVS
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.330 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:80:54 10.100.0.13'], port_security=['fa:16:3e:1a:80:54 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c80b570a17ee4094b96a75465fc34ae7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5aef7224-6192-4b62-9083-c46c2faa4d85', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63116488-2ad3-4f39-b89c-533f5dfc7be4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8853309e-7522-4c33-8ef8-68e265aa66bc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.332 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8853309e-7522-4c33-8ef8-68e265aa66bc in datapath dbf070db-94dd-4ca0-b2e8-d618a9d52a6c unbound from our chassis#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.333 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dbf070db-94dd-4ca0-b2e8-d618a9d52a6c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.335 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d5489f-fb4f-4edb-8d99-68989ab775d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.335 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c namespace which is not needed anymore#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.349 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Nov 29 03:42:45 np0005539550 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d000000c2.scope: Consumed 18.506s CPU time.
Nov 29 03:42:45 np0005539550 systemd-machined[216673]: Machine qemu-104-instance-000000c2 terminated.
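systemd escapes "-" as \x2d inside unit names, which is why the scope above reads machine-qemu\x2d104\x2dinstance\x2d000000c2.scope. A tiny decoder for such names:

    import re

    def unescape_unit(name: str) -> str:
        # Reverses systemd's \xHH escaping (hex escapes only; the "/" to "-"
        # path translation is not needed for machine scopes).
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r'machine-qemu\x2d104\x2dinstance\x2d000000c2.scope'))
    # -> machine-qemu-104-instance-000000c2.scope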
Nov 29 03:42:45 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [NOTICE]   (378184) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:45 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [NOTICE]   (378184) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:45 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [WARNING]  (378184) : Exiting Master process...
Nov 29 03:42:45 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [WARNING]  (378184) : Exiting Master process...
Nov 29 03:42:45 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [ALERT]    (378184) : Current worker (378187) exited with code 143 (Terminated)
Nov 29 03:42:45 np0005539550 neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c[378177]: [WARNING]  (378184) : All workers exited. Exiting... (0)
Nov 29 03:42:45 np0005539550 systemd[1]: libpod-69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd.scope: Deactivated successfully.
Nov 29 03:42:45 np0005539550 podman[380258]: 2025-11-29 08:42:45.510350305 +0000 UTC m=+0.059217470 container died 69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.522 257641 INFO nova.virt.libvirt.driver [-] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Instance destroyed successfully.#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.522 257641 DEBUG nova.objects.instance [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lazy-loading 'resources' on Instance uuid 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.526 257641 DEBUG nova.compute.manager [req-d039cd22-3923-49d4-933b-04c895772601 req-cb7fe801-f0b5-4fd2-b815-9561644ac962 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-vif-unplugged-8853309e-7522-4c33-8ef8-68e265aa66bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.526 257641 DEBUG oslo_concurrency.lockutils [req-d039cd22-3923-49d4-933b-04c895772601 req-cb7fe801-f0b5-4fd2-b815-9561644ac962 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.527 257641 DEBUG oslo_concurrency.lockutils [req-d039cd22-3923-49d4-933b-04c895772601 req-cb7fe801-f0b5-4fd2-b815-9561644ac962 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.527 257641 DEBUG oslo_concurrency.lockutils [req-d039cd22-3923-49d4-933b-04c895772601 req-cb7fe801-f0b5-4fd2-b815-9561644ac962 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.527 257641 DEBUG nova.compute.manager [req-d039cd22-3923-49d4-933b-04c895772601 req-cb7fe801-f0b5-4fd2-b815-9561644ac962 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] No waiting events found dispatching network-vif-unplugged-8853309e-7522-4c33-8ef8-68e265aa66bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.527 257641 DEBUG nova.compute.manager [req-d039cd22-3923-49d4-933b-04c895772601 req-cb7fe801-f0b5-4fd2-b815-9561644ac962 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-vif-unplugged-8853309e-7522-4c33-8ef8-68e265aa66bc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.535 257641 DEBUG nova.virt.libvirt.vif [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:41:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-379179721',display_name='tempest-TestEncryptedCinderVolumes-server-379179721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-379179721',id=194,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMU6R3weT3GiiYDt9m0b75PL/66gnHnnjWqCRS7wwU7olxjt6ZymuTuevhZjCJC5f1DoHCAQw/cmQ75aXPjgtF/klzEmcYrj3tPKcITrdCstg8d7yxofMGw1A9EhXJR6yA==',key_name='tempest-keypair-1489350560',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:41:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c80b570a17ee4094b96a75465fc34ae7',ramdisk_id='',reservation_id='r-s29d0ca2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-2095799727',owner_user_name='tempest-TestEncryptedCinderVolumes-2095799727-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:41:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ea98d0ceb3954515a9c726d0d32d30cb',uuid=60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.536 257641 DEBUG nova.network.os_vif_util [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Converting VIF {"id": "8853309e-7522-4c33-8ef8-68e265aa66bc", "address": "fa:16:3e:1a:80:54", "network": {"id": "dbf070db-94dd-4ca0-b2e8-d618a9d52a6c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-114630403-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c80b570a17ee4094b96a75465fc34ae7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8853309e-75", "ovs_interfaceid": "8853309e-7522-4c33-8ef8-68e265aa66bc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.536 257641 DEBUG nova.network.os_vif_util [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:80:54,bridge_name='br-int',has_traffic_filtering=True,id=8853309e-7522-4c33-8ef8-68e265aa66bc,network=Network(dbf070db-94dd-4ca0-b2e8-d618a9d52a6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8853309e-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.537 257641 DEBUG os_vif [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:80:54,bridge_name='br-int',has_traffic_filtering=True,id=8853309e-7522-4c33-8ef8-68e265aa66bc,network=Network(dbf070db-94dd-4ca0-b2e8-d618a9d52a6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8853309e-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.539 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.539 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8853309e-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.543 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a04837d31128987c98bb463b4a418cc3ebad397a071b829ea1a1ffe40fc59bd7-merged.mount: Deactivated successfully.
Nov 29 03:42:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.549 257641 INFO os_vif [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:80:54,bridge_name='br-int',has_traffic_filtering=True,id=8853309e-7522-4c33-8ef8-68e265aa66bc,network=Network(dbf070db-94dd-4ca0-b2e8-d618a9d52a6c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8853309e-75')#033[00m
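The DelPortCommand transaction above (and the "Successfully unplugged vif" line) boils down to removing the tap port from br-int. The ovs-vsctl equivalent, with --if-exists mirroring if_exists=True, would be:

    import subprocess

    # Same effect as the logged ovsdbapp txn: drop the instance's tap
    # device from the integration bridge, tolerating its absence.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap8853309e-75"],
        check=True)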
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:45 np0005539550 podman[380258]: 2025-11-29 08:42:45.56035276 +0000 UTC m=+0.109219925 container cleanup 69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.565019) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765565099, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1225, "num_deletes": 260, "total_data_size": 1745105, "memory_usage": 1779648, "flush_reason": "Manual Compaction"}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Nov 29 03:42:45 np0005539550 systemd[1]: libpod-conmon-69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd.scope: Deactivated successfully.
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765581691, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 1722866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61070, "largest_seqno": 62294, "table_properties": {"data_size": 1717243, "index_size": 2890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13197, "raw_average_key_size": 20, "raw_value_size": 1705380, "raw_average_value_size": 2599, "num_data_blocks": 127, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405670, "oldest_key_time": 1764405670, "file_creation_time": 1764405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 16821 microseconds, and 5770 cpu microseconds.
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.581834) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 1722866 bytes OK
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.581873) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.583760) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.583773) EVENT_LOG_v1 {"time_micros": 1764405765583769, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.583789) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1739558, prev total WAL file size 1739558, number of live WAL files 2.
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.584596) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323633' seq:72057594037927935, type:22 .. '6C6F676D0032353136' seq:0, type:0; will stop at (end)
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(1682KB)], [137(10MB)]
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765584647, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 13068218, "oldest_snapshot_seqno": -1}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 9606 keys, 12919013 bytes, temperature: kUnknown
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765679045, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 12919013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12856431, "index_size": 37445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 253971, "raw_average_key_size": 26, "raw_value_size": 12687072, "raw_average_value_size": 1320, "num_data_blocks": 1427, "num_entries": 9606, "num_filter_entries": 9606, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.679342) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 12919013 bytes
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.680770) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.3 rd, 136.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(15.1) write-amplify(7.5) OK, records in: 10144, records dropped: 538 output_compression: NoCompression
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.680793) EVENT_LOG_v1 {"time_micros": 1764405765680782, "job": 84, "event": "compaction_finished", "compaction_time_micros": 94472, "compaction_time_cpu_micros": 31164, "output_level": 6, "num_output_files": 1, "total_output_size": 12919013, "num_input_records": 10144, "num_output_records": 9606, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765681237, "job": 84, "event": "table_file_deletion", "file_number": 139}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405765683457, "job": 84, "event": "table_file_deletion", "file_number": 137}
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.584501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.683533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.683537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.683539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.683540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:42:45 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:42:45.683542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
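The EVENT_LOG_v1 entries above carry machine-readable JSON after the marker, which makes the ceph-mon rocksdb activity easy to mine. A trimmed example using fields from the compaction_finished event (94472 µs to write 12919013 bytes is the 136.7 MB/s the summary line reports):

    import json

    def parse_event(line: str) -> dict:
        # Everything after the EVENT_LOG_v1 marker is a JSON object.
        _, payload = line.split("EVENT_LOG_v1", 1)
        return json.loads(payload.strip())

    ev = parse_event('rocksdb: EVENT_LOG_v1 {"job": 84, '
                     '"event": "compaction_finished", '
                     '"compaction_time_micros": 94472, '
                     '"total_output_size": 12919013}')
    mb_s = ev["total_output_size"] / ev["compaction_time_micros"]  # B/us == MB/s
    print(ev["event"], round(mb_s, 1))
    # -> compaction_finished 136.7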
Nov 29 03:42:45 np0005539550 podman[380310]: 2025-11-29 08:42:45.690638457 +0000 UTC m=+0.105393938 container remove 69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.696 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[715260e1-a6e8-4563-8c0e-a8c5c04b0428]: (4, ('Sat Nov 29 08:42:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c (69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd)\n69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd\nSat Nov 29 08:42:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c (69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd)\n69cbb2c0aa0c0fe39560bc6c6d9c9f612e30e58375ff072a496dafc7ac33e9dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.698 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dd07b23f-4581-4768-a755-520f603327e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.699 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbf070db-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.700 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 kernel: tapdbf070db-90: left promiscuous mode
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.715 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.720 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30050d04-92f5-4a40-8b5f-57cc460171a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.737 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0a4f86-ccf4-4e21-ba54-de0fbb7b3a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.741 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c292c6-d28d-45e9-8fbd-a4d598ed7a1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.764 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f987d8-fce0-4608-a212-87ae9748d0da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876550, 'reachable_time': 25471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380330, 'error': None, 'target': 'ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.766 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.766 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[1101fdc0-302b-4c8c-a4f5-ddf7fda5608d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:45 np0005539550 systemd[1]: run-netns-ovnmeta\x2ddbf070db\x2d94dd\x2d4ca0\x2db2e8\x2dd618a9d52a6c.mount: Deactivated successfully.
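The namespace teardown the metadata agent performs through oslo.privsep/pyroute2 is equivalent to a plain iproute2 call (the run-netns mount unit above is systemd noticing the namespace bind mount go away):

    import subprocess

    # Same outcome as the remove_netns call logged at 08:42:45.766.
    subprocess.run(
        ["ip", "netns", "delete",
         "ovnmeta-dbf070db-94dd-4ca0-b2e8-d618a9d52a6c"],
        check=True)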
Nov 29 03:42:45 np0005539550 nova_compute[257631]: 2025-11-29 08:42:45.893 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.895 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:45 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:45.895 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.085 257641 INFO nova.virt.libvirt.driver [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Deleting instance files /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_del#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.086 257641 INFO nova.virt.libvirt.driver [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Deletion of /var/lib/nova/instances/60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe_del complete#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.129 257641 INFO nova.compute.manager [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.129 257641 DEBUG oslo.service.loopingcall [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.130 257641 DEBUG nova.compute.manager [-] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.130 257641 DEBUG nova.network.neutron [-] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
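deallocate_for_instance() ultimately removes the Neutron ports bound to the instance. A hedged openstacksdk sketch of the same lookup-and-delete (the cloud name is a placeholder for a clouds.yaml entry):

    import openstack

    conn = openstack.connect(cloud="overcloud")  # hypothetical cloud entry
    for port in conn.network.ports(
            device_id="60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe"):
        conn.network.delete_port(port, ignore_missing=True)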
Nov 29 03:42:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:46.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:46Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:64:30:27 10.100.0.11
Nov 29 03:42:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:46Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:64:30:27 10.100.0.11
Nov 29 03:42:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 295 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.9 MiB/s wr, 364 op/s
Nov 29 03:42:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:46.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:42:46 np0005539550 nova_compute[257631]: 2025-11-29 08:42:46.938 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
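The _heal_instance_info_cache run above is driven by oslo.service's periodic task machinery; the registration pattern behind those "Running periodic task" lines looks roughly like:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Skips instances being deleted, exactly as logged for
            # 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe above.
            ...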
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.008 257641 DEBUG nova.network.neutron [-] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.036 257641 INFO nova.compute.manager [-] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Took 0.91 seconds to deallocate network for instance.#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.101 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.102 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.122 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.123 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.123 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.123 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8166862-6c06-4726-b53e-e53f69cda3df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.126 257641 DEBUG nova.compute.manager [req-3c52371d-9a38-4428-aceb-32cf06f344ac req-7d08c4cf-823b-4035-81ae-8036849550a9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-vif-deleted-8853309e-7522-4c33-8ef8-68e265aa66bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.187 257641 DEBUG oslo_concurrency.processutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4278621492' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.630 257641 DEBUG oslo_concurrency.processutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.636 257641 DEBUG nova.compute.manager [req-7f6ac123-1205-4728-bdb7-9a5ec0a56ffc req-70f66031-678d-4449-a240-ee865e8fca09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received event network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.637 257641 DEBUG oslo_concurrency.lockutils [req-7f6ac123-1205-4728-bdb7-9a5ec0a56ffc req-70f66031-678d-4449-a240-ee865e8fca09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.638 257641 DEBUG oslo_concurrency.lockutils [req-7f6ac123-1205-4728-bdb7-9a5ec0a56ffc req-70f66031-678d-4449-a240-ee865e8fca09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.638 257641 DEBUG oslo_concurrency.lockutils [req-7f6ac123-1205-4728-bdb7-9a5ec0a56ffc req-70f66031-678d-4449-a240-ee865e8fca09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.638 257641 DEBUG nova.compute.manager [req-7f6ac123-1205-4728-bdb7-9a5ec0a56ffc req-70f66031-678d-4449-a240-ee865e8fca09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] No waiting events found dispatching network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.638 257641 WARNING nova.compute.manager [req-7f6ac123-1205-4728-bdb7-9a5ec0a56ffc req-70f66031-678d-4449-a240-ee865e8fca09 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Received unexpected event network-vif-plugged-8853309e-7522-4c33-8ef8-68e265aa66bc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.641 257641 DEBUG nova.compute.provider_tree [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.655 257641 DEBUG nova.scheduler.client.report [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.685 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.707 257641 INFO nova.scheduler.client.report [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Deleted allocations for instance 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe#033[00m
Nov 29 03:42:47 np0005539550 nova_compute[257631]: 2025-11-29 08:42:47.775 257641 DEBUG oslo_concurrency.lockutils [None req-01056756-fe37-4610-a44d-a686ab0c6e9d ea98d0ceb3954515a9c726d0d32d30cb c80b570a17ee4094b96a75465fc34ae7 - - default default] Lock "60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:48Z|00922|binding|INFO|Releasing lport 145c794a-351b-44e3-94d7-5ac8d914c828 from this chassis (sb_readonly=0)
Nov 29 03:42:48 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:48Z|00923|binding|INFO|Releasing lport b9ac4386-837b-4ba8-ba0d-37623a2924f0 from this chassis (sb_readonly=0)
Nov 29 03:42:48 np0005539550 nova_compute[257631]: 2025-11-29 08:42:48.221 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:48.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 917 KiB/s rd, 4.8 MiB/s wr, 240 op/s
Nov 29 03:42:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:48 np0005539550 nova_compute[257631]: 2025-11-29 08:42:48.884 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updating instance_info_cache with network_info: [{"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:48 np0005539550 nova_compute[257631]: 2025-11-29 08:42:48.906 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:48 np0005539550 nova_compute[257631]: 2025-11-29 08:42:48.907 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:42:48 np0005539550 nova_compute[257631]: 2025-11-29 08:42:48.908 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:49 np0005539550 nova_compute[257631]: 2025-11-29 08:42:49.505 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:49.897 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:49 np0005539550 nova_compute[257631]: 2025-11-29 08:42:49.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:49 np0005539550 nova_compute[257631]: 2025-11-29 08:42:49.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:49 np0005539550 nova_compute[257631]: 2025-11-29 08:42:49.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:42:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 277 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 793 KiB/s rd, 4.3 MiB/s wr, 207 op/s
Nov 29 03:42:50 np0005539550 nova_compute[257631]: 2025-11-29 08:42:50.542 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 03:42:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 03:42:50 np0005539550 nova_compute[257631]: 2025-11-29 08:42:50.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.047 257641 INFO nova.compute.manager [None req-45b216ea-51bb-4889-8537-20fd235a6b1c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Get console output#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.054 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.943 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:42:51 np0005539550 nova_compute[257631]: 2025-11-29 08:42:51.943 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:52Z|00924|binding|INFO|Releasing lport 145c794a-351b-44e3-94d7-5ac8d914c828 from this chassis (sb_readonly=0)
Nov 29 03:42:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:52Z|00925|binding|INFO|Releasing lport b9ac4386-837b-4ba8-ba0d-37623a2924f0 from this chassis (sb_readonly=0)
Nov 29 03:42:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:52.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.417 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020502506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.484 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 861 KiB/s rd, 4.4 MiB/s wr, 234 op/s
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.571 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.571 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.575 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.575 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000c7 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:42:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:52.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.768 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.769 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3748MB free_disk=20.897598266601562GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.769 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.770 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.852 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance b8166862-6c06-4726-b53e-e53f69cda3df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.853 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.853 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.853 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:42:52 np0005539550 nova_compute[257631]: 2025-11-29 08:42:52.919 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/228186484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:53 np0005539550 nova_compute[257631]: 2025-11-29 08:42:53.360 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:42:53 np0005539550 nova_compute[257631]: 2025-11-29 08:42:53.366 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:53 np0005539550 nova_compute[257631]: 2025-11-29 08:42:53.389 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:42:53 np0005539550 nova_compute[257631]: 2025-11-29 08:42:53.417 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:42:53 np0005539550 nova_compute[257631]: 2025-11-29 08:42:53.417 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:53Z|00926|binding|INFO|Releasing lport 145c794a-351b-44e3-94d7-5ac8d914c828 from this chassis (sb_readonly=0)
Nov 29 03:42:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:53Z|00927|binding|INFO|Releasing lport b9ac4386-837b-4ba8-ba0d-37623a2924f0 from this chassis (sb_readonly=0)
Nov 29 03:42:53 np0005539550 nova_compute[257631]: 2025-11-29 08:42:53.520 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:53Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:3d:6e 10.100.0.7
Nov 29 03:42:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:54.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:54 np0005539550 nova_compute[257631]: 2025-11-29 08:42:54.418 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:54 np0005539550 nova_compute[257631]: 2025-11-29 08:42:54.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 804 KiB/s rd, 4.3 MiB/s wr, 202 op/s
Nov 29 03:42:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:42:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:54.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:42:55 np0005539550 nova_compute[257631]: 2025-11-29 08:42:55.545 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:42:55 np0005539550 nova_compute[257631]: 2025-11-29 08:42:55.913 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:42:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:56Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:3d:6e 10.100.0.7
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.132 257641 DEBUG nova.compute.manager [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-changed-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.132 257641 DEBUG nova.compute.manager [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Refreshing instance network info cache due to event network-changed-36bbd65b-abae-4c9f-975a-f15edf566301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.132 257641 DEBUG oslo_concurrency.lockutils [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.133 257641 DEBUG oslo_concurrency.lockutils [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.133 257641 DEBUG nova.network.neutron [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Refreshing network info cache for port 36bbd65b-abae-4c9f-975a-f15edf566301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.200 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.200 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.200 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.201 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.201 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.202 257641 INFO nova.compute.manager [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Terminating instance#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.203 257641 DEBUG nova.compute.manager [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:42:56 np0005539550 kernel: tap36bbd65b-ab (unregistering): left promiscuous mode
Nov 29 03:42:56 np0005539550 NetworkManager[49039]: <info>  [1764405776.2526] device (tap36bbd65b-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:42:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:56Z|00928|binding|INFO|Releasing lport 36bbd65b-abae-4c9f-975a-f15edf566301 from this chassis (sb_readonly=0)
Nov 29 03:42:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:56Z|00929|binding|INFO|Setting lport 36bbd65b-abae-4c9f-975a-f15edf566301 down in Southbound
Nov 29 03:42:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:56Z|00930|binding|INFO|Removing iface tap36bbd65b-ab ovn-installed in OVS
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.262 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.264 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Nov 29 03:42:56 np0005539550 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d000000c6.scope: Consumed 14.714s CPU time.
Nov 29 03:42:56 np0005539550 systemd-machined[216673]: Machine qemu-106-instance-000000c6 terminated.
Nov 29 03:42:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.424 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:3d:6e 10.100.0.7'], port_security=['fa:16:3e:e0:3d:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b8166862-6c06-4726-b53e-e53f69cda3df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35110e9e-c6e6-4ced-900a-33a172169d31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '78033c0f-93bd-4432-9d10-d335deddb5fe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=539fb972-8e8f-4365-a267-e878104a66cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=36bbd65b-abae-4c9f-975a-f15edf566301) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.426 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 36bbd65b-abae-4c9f-975a-f15edf566301 in datapath 35110e9e-c6e6-4ced-900a-33a172169d31 unbound from our chassis#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.428 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35110e9e-c6e6-4ced-900a-33a172169d31, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.430 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[046d52f9-2766-4224-afc4-5d5a83d16bbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.431 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31 namespace which is not needed anymore#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.452 257641 INFO nova.virt.libvirt.driver [-] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Instance destroyed successfully.#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.453 257641 DEBUG nova.objects.instance [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid b8166862-6c06-4726-b53e-e53f69cda3df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.469 257641 DEBUG nova.virt.libvirt.vif [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1412022516',display_name='tempest-TestNetworkBasicOps-server-1412022516',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1412022516',id=198,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJkMiB7VSMkIXM08/GoMdzhhd8TFMVF3mLsD9mGylGnNIP1Ozss4QNM4Uc7CeoKLc9PlUFbw1O3KhhFIuBDfVCfuAol3+txALEwIQIEwTuJE0phns/J+qqSCVY+ApxMIeg==',key_name='tempest-TestNetworkBasicOps-1848039179',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:42:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-bcnx3nky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:42:30Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=b8166862-6c06-4726-b53e-e53f69cda3df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.469 257641 DEBUG nova.network.os_vif_util [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.473 257641 DEBUG nova.network.os_vif_util [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3d:6e,bridge_name='br-int',has_traffic_filtering=True,id=36bbd65b-abae-4c9f-975a-f15edf566301,network=Network(35110e9e-c6e6-4ced-900a-33a172169d31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36bbd65b-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.473 257641 DEBUG os_vif [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3d:6e,bridge_name='br-int',has_traffic_filtering=True,id=36bbd65b-abae-4c9f-975a-f15edf566301,network=Network(35110e9e-c6e6-4ced-900a-33a172169d31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36bbd65b-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.475 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.475 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36bbd65b-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.477 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.482 257641 INFO os_vif [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:3d:6e,bridge_name='br-int',has_traffic_filtering=True,id=36bbd65b-abae-4c9f-975a-f15edf566301,network=Network(35110e9e-c6e6-4ced-900a-33a172169d31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36bbd65b-ab')#033[00m
Nov 29 03:42:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 729 KiB/s rd, 3.6 MiB/s wr, 183 op/s
Nov 29 03:42:56 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [NOTICE]   (379967) : haproxy version is 2.8.14-c23fe91
Nov 29 03:42:56 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [NOTICE]   (379967) : path to executable is /usr/sbin/haproxy
Nov 29 03:42:56 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [WARNING]  (379967) : Exiting Master process...
Nov 29 03:42:56 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [WARNING]  (379967) : Exiting Master process...
Nov 29 03:42:56 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [ALERT]    (379967) : Current worker (379969) exited with code 143 (Terminated)
Nov 29 03:42:56 np0005539550 neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31[379963]: [WARNING]  (379967) : All workers exited. Exiting... (0)
Nov 29 03:42:56 np0005539550 systemd[1]: libpod-de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f.scope: Deactivated successfully.
Nov 29 03:42:56 np0005539550 podman[380507]: 2025-11-29 08:42:56.591971129 +0000 UTC m=+0.048576970 container died de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:42:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:42:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4b94ca205a38d5d55d200dcf68a6882da2614caffa1aa1755dc7ae2c516bc5fe-merged.mount: Deactivated successfully.
Nov 29 03:42:56 np0005539550 podman[380507]: 2025-11-29 08:42:56.629050778 +0000 UTC m=+0.085656599 container cleanup de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:42:56 np0005539550 systemd[1]: libpod-conmon-de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f.scope: Deactivated successfully.
Nov 29 03:42:56 np0005539550 podman[380538]: 2025-11-29 08:42:56.695210082 +0000 UTC m=+0.040227549 container remove de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.700 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5b1b40-43d2-451c-913c-40d660b01173]: (4, ('Sat Nov 29 08:42:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31 (de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f)\nde1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f\nSat Nov 29 08:42:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31 (de1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f)\nde1d54ec4ebd1ac3a6b2ffb4915924b78c738f2e2b62a6e76556091a8239b10f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.702 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bea1d5b5-6794-4711-ac1a-d52b71696f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
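The paired reply lines above are the return leg of an oslo.privsep round trip: the unprivileged agent sends a call over a unix socket to a root helper daemon, which answers with the (4, result) tuples being logged. A minimal sketch of how such a privileged entrypoint is declared and invoked; the context name, capability choice, and stop_container helper are illustrative assumptions, not neutron's actual code:

    from oslo_privsep import capabilities, priv_context

    # Calls decorated with ctx.entrypoint execute inside a separate
    # root-owned daemon process; results travel back over a unix socket
    # and appear as "privsep: reply[...]" debug lines like those above.
    ctx = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_SYS_ADMIN],  # assumption: admin cap
    )

    @ctx.entrypoint
    def stop_container(name):
        """Hypothetical privileged helper; runs in the privsep daemon."""
        import subprocess
        return subprocess.run(['podman', 'stop', name]).returncode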
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.703 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35110e9e-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
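The transaction above is ovsdbapp deleting the metadata tap port from Open vSwitch; if_exists=True makes the delete a no-op when the port is already gone. A minimal sketch of issuing the same call directly, assuming the default local OVSDB socket and a 10 s timeout:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database (socket path is an
    # assumption) and run the same idempotent port delete.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tap35110e9e-c0', if_exists=True).execute(check_error=True)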
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.705 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 kernel: tap35110e9e-c0: left promiscuous mode
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.720 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.723 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b519478a-4c07-4b76-b590-b73e7bdb9230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.735 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aa78d5fa-b049-4e92-87a5-429942393227]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.736 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[19f8127b-28cf-4822-9b13-bf01baa7e3fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
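The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100/.102, recurring roughly every two seconds and always answered 200, look like load-balancer health probes against radosgw. A minimal equivalent probe; the host and port are assumptions:

    import http.client

    # Probe the RGW frontend the way the balancer appears to:
    # a bare HEAD / and a check for HTTP 200.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # expect 200 while rgw is healthy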
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.751 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b68cc6f5-9cba-446e-9823-1d4b0f47748b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879670, 'reachable_time': 36706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380555, 'error': None, 'target': 'ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.754 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:42:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:42:56.754 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[ca058d60-f97a-4274-9a3a-46884dd4f007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:42:56 np0005539550 systemd[1]: run-netns-ovnmeta\x2d35110e9e\x2dc6e6\x2d4ced\x2d900a\x2d33a172169d31.mount: Deactivated successfully.
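The remove_netns step above deletes the per-network metadata namespace; neutron's privileged ip_lib is built on pyroute2, so the operation reduces to a named-namespace removal, after which systemd reports the matching run-netns mount unit deactivated, as logged. A minimal sketch using the namespace name from the log:

    from pyroute2 import netns

    ns = 'ovnmeta-35110e9e-c6e6-4ced-900a-33a172169d31'
    if ns in netns.listnetns():
        netns.remove(ns)  # unlinks /run/netns/<ns>, unmounting it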
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.881 257641 INFO nova.virt.libvirt.driver [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Deleting instance files /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df_del#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.882 257641 INFO nova.virt.libvirt.driver [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Deletion of /var/lib/nova/instances/b8166862-6c06-4726-b53e-e53f69cda3df_del complete#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.944 257641 INFO nova.compute.manager [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Took 0.74 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.945 257641 DEBUG oslo.service.loopingcall [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.945 257641 DEBUG nova.compute.manager [-] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:42:56 np0005539550 nova_compute[257631]: 2025-11-29 08:42:56.946 257641 DEBUG nova.network.neutron [-] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.224 257641 DEBUG nova.compute.manager [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-vif-unplugged-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.226 257641 DEBUG oslo_concurrency.lockutils [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.226 257641 DEBUG oslo_concurrency.lockutils [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.227 257641 DEBUG oslo_concurrency.lockutils [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.227 257641 DEBUG nova.compute.manager [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] No waiting events found dispatching network-vif-unplugged-36bbd65b-abae-4c9f-975a-f15edf566301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.228 257641 DEBUG nova.compute.manager [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-vif-unplugged-36bbd65b-abae-4c9f-975a-f15edf566301 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.228 257641 DEBUG nova.compute.manager [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.229 257641 DEBUG oslo_concurrency.lockutils [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.229 257641 DEBUG oslo_concurrency.lockutils [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.229 257641 DEBUG oslo_concurrency.lockutils [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.230 257641 DEBUG nova.compute.manager [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] No waiting events found dispatching network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.231 257641 WARNING nova.compute.manager [req-4f8a1380-e83a-47c4-bcef-edb2a4cc60a9 req-9d0b78be-ae77-4a4e-a623-8072c9370def 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received unexpected event network-vif-plugged-36bbd65b-abae-4c9f-975a-f15edf566301 for instance with vm_state active and task_state deleting.#033[00m
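The Acquiring/acquired/released triplets above show nova serializing external events per instance under a "<uuid>-events" lock, and the WARNING is the expected outcome when a network-vif-plugged event arrives for an instance already being deleted. A minimal sketch of the lock pattern, with the event-handling body elided:

    from oslo_concurrency import lockutils

    uuid = 'b8166862-6c06-4726-b53e-e53f69cda3df'
    # lockutils logs the waited/held durations seen in the debug lines.
    with lockutils.lock(f'{uuid}-events'):
        pass  # pop or record a pending network-vif-* event here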
Nov 29 03:42:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:42:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:42:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:42:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:42:58.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:42:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 110 KiB/s rd, 428 KiB/s wr, 50 op/s
Nov 29 03:42:58 np0005539550 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:42:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:42:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:42:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:42:58.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.850 257641 DEBUG nova.network.neutron [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updated VIF entry in instance network info cache for port 36bbd65b-abae-4c9f-975a-f15edf566301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:42:58 np0005539550 nova_compute[257631]: 2025-11-29 08:42:58.851 257641 DEBUG nova.network.neutron [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updating instance_info_cache with network_info: [{"id": "36bbd65b-abae-4c9f-975a-f15edf566301", "address": "fa:16:3e:e0:3d:6e", "network": {"id": "35110e9e-c6e6-4ced-900a-33a172169d31", "bridge": "br-int", "label": "tempest-network-smoke--644950316", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36bbd65b-ab", "ovs_interfaceid": "36bbd65b-abae-4c9f-975a-f15edf566301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.121 257641 DEBUG oslo_concurrency.lockutils [req-32161d0b-db4d-4297-bc82-d6372ad4d83c req-9160825c-3012-42f8-bb3c-463bcdaf67f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-b8166862-6c06-4726-b53e-e53f69cda3df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.243 257641 DEBUG nova.compute.manager [req-ee455e26-aea9-4f0f-bc56-e1080357fbe5 req-cfce670b-7c6d-44ca-aa42-e2b801bc17a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Received event network-vif-deleted-36bbd65b-abae-4c9f-975a-f15edf566301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.244 257641 INFO nova.compute.manager [req-ee455e26-aea9-4f0f-bc56-e1080357fbe5 req-cfce670b-7c6d-44ca-aa42-e2b801bc17a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Neutron deleted interface 36bbd65b-abae-4c9f-975a-f15edf566301; detaching it from the instance and deleting it from the info cache#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.244 257641 DEBUG nova.network.neutron [req-ee455e26-aea9-4f0f-bc56-e1080357fbe5 req-cfce670b-7c6d-44ca-aa42-e2b801bc17a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.249 257641 DEBUG nova.network.neutron [-] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:42:59 np0005539550 podman[380557]: 2025-11-29 08:42:59.319243354 +0000 UTC m=+0.062974254 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:42:59 np0005539550 podman[380558]: 2025-11-29 08:42:59.334970202 +0000 UTC m=+0.068042222 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
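The two health_status=healthy events above come from podman's periodic healthcheck timers running each container's configured test (the /openstack/healthcheck mount in config_data). The same check can be run by hand; a minimal sketch:

    import subprocess

    for name in ('multipathd', 'ovn_metadata_agent'):
        # "podman healthcheck run" executes the container's configured
        # test command and exits 0 when the container is healthy.
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')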
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.356 257641 INFO nova.compute.manager [-] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Took 2.41 seconds to deallocate network for instance.#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.363 257641 DEBUG nova.compute.manager [req-ee455e26-aea9-4f0f-bc56-e1080357fbe5 req-cfce670b-7c6d-44ca-aa42-e2b801bc17a4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Detach interface failed, port_id=36bbd65b-abae-4c9f-975a-f15edf566301, reason: Instance b8166862-6c06-4726-b53e-e53f69cda3df could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.415 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.415 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:42:59
Nov 29 03:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'volumes', 'vms', '.mgr', 'images', 'default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log']
Nov 29 03:42:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.488 257641 DEBUG oslo_concurrency.processutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.539 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:59 np0005539550 ovn_controller[148680]: 2025-11-29T08:42:59Z|00931|binding|INFO|Releasing lport 145c794a-351b-44e3-94d7-5ac8d914c828 from this chassis (sb_readonly=0)
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.939 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:42:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:42:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2276654963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.965 257641 DEBUG oslo_concurrency.processutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
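The two processutils lines above show nova's resource tracker shelling out to ceph df (a 0.477 s round trip) to refresh storage statistics. A minimal sketch of the same probe through oslo processutils; the JSON keys read at the end are an assumption about ceph's df output layout:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    print(stats['stats']['total_avail_bytes'])  # assumed key layout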
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.970 257641 DEBUG nova.compute.provider_tree [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:42:59 np0005539550 nova_compute[257631]: 2025-11-29 08:42:59.990 257641 DEBUG nova.scheduler.client.report [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
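The inventory dict above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class; worked out for this host:

    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1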
Nov 29 03:43:00 np0005539550 nova_compute[257631]: 2025-11-29 08:43:00.091 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:00.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:00 np0005539550 nova_compute[257631]: 2025-11-29 08:43:00.520 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405765.5189264, 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:00 np0005539550 nova_compute[257631]: 2025-11-29 08:43:00.521 257641 INFO nova.compute.manager [-] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:43:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 252 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 65 KiB/s wr, 29 op/s
Nov 29 03:43:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:00 np0005539550 nova_compute[257631]: 2025-11-29 08:43:00.582 257641 INFO nova.scheduler.client.report [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance b8166862-6c06-4726-b53e-e53f69cda3df#033[00m
Nov 29 03:43:00 np0005539550 nova_compute[257631]: 2025-11-29 08:43:00.598 257641 DEBUG nova.compute.manager [None req-2ca394a0-3739-49be-8e58-73c720dd8d9a - - - - - -] [instance: 60d1a5dc-0c2d-4d8d-8833-f51c3497bcbe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:00 np0005539550 nova_compute[257631]: 2025-11-29 08:43:00.853 257641 DEBUG oslo_concurrency.lockutils [None req-026cb843-70c7-4b8a-9dde-05e7f1960533 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "b8166862-6c06-4726-b53e-e53f69cda3df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 03c5940d-3367-4d2d-a5ed-958cfa4cb898 does not exist
Nov 29 03:43:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d6361f16-c054-4b32-875e-b5bbf60d8ca6 does not exist
Nov 29 03:43:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bb4a9583-d0f1-498a-91a7-ee647848d849 does not exist
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:43:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:43:01 np0005539550 nova_compute[257631]: 2025-11-29 08:43:01.478 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:01 np0005539550 podman[380885]: 2025-11-29 08:43:01.752014006 +0000 UTC m=+0.055574137 container create e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 29 03:43:01 np0005539550 systemd[1]: Started libpod-conmon-e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949.scope.
Nov 29 03:43:01 np0005539550 podman[380885]: 2025-11-29 08:43:01.72453081 +0000 UTC m=+0.028090961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:01 np0005539550 podman[380885]: 2025-11-29 08:43:01.862588014 +0000 UTC m=+0.166148145 container init e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 03:43:01 np0005539550 podman[380885]: 2025-11-29 08:43:01.873986062 +0000 UTC m=+0.177546173 container start e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:01 np0005539550 podman[380885]: 2025-11-29 08:43:01.877320847 +0000 UTC m=+0.180881038 container attach e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:43:01 np0005539550 laughing_sinoussi[380901]: 167 167
Nov 29 03:43:01 np0005539550 systemd[1]: libpod-e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949.scope: Deactivated successfully.
Nov 29 03:43:01 np0005539550 podman[380885]: 2025-11-29 08:43:01.882914518 +0000 UTC m=+0.186474619 container died e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cc2c00dc8c5dc1db305e8d5a5f4a72a73cfd84633782a0729111cdfaab6b7261-merged.mount: Deactivated successfully.
Nov 29 03:43:01 np0005539550 podman[380885]: 2025-11-29 08:43:01.916775245 +0000 UTC m=+0.220335346 container remove e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:43:01 np0005539550 systemd[1]: libpod-conmon-e7e2c24713a78e203e4770538168ab6aae8148d21eeb9e01a38da6db91ae4949.scope: Deactivated successfully.
Nov 29 03:43:02 np0005539550 podman[380925]: 2025-11-29 08:43:02.099381756 +0000 UTC m=+0.039645944 container create e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:43:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:43:02 np0005539550 systemd[1]: Started libpod-conmon-e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d.scope.
Nov 29 03:43:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c054a1fd609e4fcf3c9554e29488dc945c4d5b586bab9474938dc5f0910018a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:02 np0005539550 podman[380925]: 2025-11-29 08:43:02.084462219 +0000 UTC m=+0.024726437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c054a1fd609e4fcf3c9554e29488dc945c4d5b586bab9474938dc5f0910018a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c054a1fd609e4fcf3c9554e29488dc945c4d5b586bab9474938dc5f0910018a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c054a1fd609e4fcf3c9554e29488dc945c4d5b586bab9474938dc5f0910018a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c054a1fd609e4fcf3c9554e29488dc945c4d5b586bab9474938dc5f0910018a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
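The four xfs messages above note that this overlay's inode timestamps only reach 0x7fffffff seconds past the epoch; a quick check of what that bound means in calendar time:

    from datetime import datetime, timezone

    # 0x7fffffff == 2**31 - 1, the signed 32-bit time_t maximum
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00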
Nov 29 03:43:02 np0005539550 podman[380925]: 2025-11-29 08:43:02.198281299 +0000 UTC m=+0.138545517 container init e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hodgkin, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:43:02 np0005539550 podman[380925]: 2025-11-29 08:43:02.204163648 +0000 UTC m=+0.144427846 container start e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:43:02 np0005539550 podman[380925]: 2025-11-29 08:43:02.207490882 +0000 UTC m=+0.147755080 container attach e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:43:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:02.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 67 KiB/s wr, 56 op/s
Nov 29 03:43:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:02.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:03 np0005539550 happy_hodgkin[380942]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:43:03 np0005539550 happy_hodgkin[380942]: --> relative data size: 1.0
Nov 29 03:43:03 np0005539550 happy_hodgkin[380942]: --> All data devices are unavailable
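The happy_hodgkin output above ("passed data devices: 0 physical, 1 LVM", "All data devices are unavailable") reads like a cephadm-driven ceph-volume batch report finding no usable disks for new OSDs. A minimal sketch of reproducing such a report by hand; the device path is an assumption:

    import subprocess

    report = subprocess.run(
        ['ceph-volume', 'lvm', 'batch', '--report', '--format', 'json',
         '/dev/vdb'],
        capture_output=True, text=True)
    print(report.stdout or report.stderr)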
Nov 29 03:43:03 np0005539550 systemd[1]: libpod-e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d.scope: Deactivated successfully.
Nov 29 03:43:03 np0005539550 podman[380925]: 2025-11-29 08:43:03.040701407 +0000 UTC m=+0.980965605 container died e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:43:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c054a1fd609e4fcf3c9554e29488dc945c4d5b586bab9474938dc5f0910018a9-merged.mount: Deactivated successfully.
Nov 29 03:43:03 np0005539550 podman[380925]: 2025-11-29 08:43:03.092367544 +0000 UTC m=+1.032631742 container remove e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_hodgkin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:43:03 np0005539550 systemd[1]: libpod-conmon-e685d9c8671e3886ad77758ad5c6225d79c0e168d58e2f8c6b3aaf956328e48d.scope: Deactivated successfully.
Nov 29 03:43:03 np0005539550 podman[381114]: 2025-11-29 08:43:03.689340731 +0000 UTC m=+0.056918101 container create ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:43:03 np0005539550 systemd[1]: Started libpod-conmon-ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0.scope.
Nov 29 03:43:03 np0005539550 podman[381114]: 2025-11-29 08:43:03.661317282 +0000 UTC m=+0.028894682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:03 np0005539550 podman[381114]: 2025-11-29 08:43:03.76950502 +0000 UTC m=+0.137082400 container init ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:03 np0005539550 podman[381114]: 2025-11-29 08:43:03.776458046 +0000 UTC m=+0.144035416 container start ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 03:43:03 np0005539550 clever_edison[381130]: 167 167
Nov 29 03:43:03 np0005539550 systemd[1]: libpod-ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0.scope: Deactivated successfully.
Nov 29 03:43:03 np0005539550 podman[381114]: 2025-11-29 08:43:03.780640551 +0000 UTC m=+0.148217951 container attach ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:43:03 np0005539550 podman[381114]: 2025-11-29 08:43:03.781336499 +0000 UTC m=+0.148913869 container died ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9220afb36dfdacb15ad190b5963929c1e39e8ae9b4660df9117b76fedf7fbb7c-merged.mount: Deactivated successfully.
Nov 29 03:43:03 np0005539550 podman[381114]: 2025-11-29 08:43:03.816267503 +0000 UTC m=+0.183844863 container remove ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:03 np0005539550 systemd[1]: libpod-conmon-ed45ef5ac5307b1e5107432f7463e4d90701b257edc511e97f7234413f670fa0.scope: Deactivated successfully.
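The create -> init -> start -> attach -> died -> remove sequence above (and the near-identical runs that follow for xenodochial_fermat, competent_bohr, and sad_hofstadter) is cephadm launching short-lived utility containers from the Ceph image; the "167 167" that clever_edison prints matches the ceph user's uid/gid in Red Hat packaging. A hedged sketch of replaying such lifecycles from podman's own event stream; the --since window and image filter are illustrative assumptions, not values taken from this host:

import json
import subprocess

# Dump recent podman events for containers from the Ceph image and print
# the lifecycle verbs in order. --stream=false exits instead of following
# the stream; field names follow podman's JSON event format.
out = subprocess.run(
    ["podman", "events", "--stream=false", "--format", "json",
     "--since", "10m", "--filter", "image=quay.io/ceph/ceph"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    ev = json.loads(line)  # one JSON object per line
    print(ev.get("Time"), ev.get("Status"), ev.get("Name"))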
Nov 29 03:43:03 np0005539550 nova_compute[257631]: 2025-11-29 08:43:03.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:04 np0005539550 podman[381153]: 2025-11-29 08:43:04.001770356 +0000 UTC m=+0.045593834 container create c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:04 np0005539550 systemd[1]: Started libpod-conmon-c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81.scope.
Nov 29 03:43:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85af504c3f6c3b788bdc0d42598fe1598775e7ee864906ae5ba44a4df8f3406/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85af504c3f6c3b788bdc0d42598fe1598775e7ee864906ae5ba44a4df8f3406/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85af504c3f6c3b788bdc0d42598fe1598775e7ee864906ae5ba44a4df8f3406/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b85af504c3f6c3b788bdc0d42598fe1598775e7ee864906ae5ba44a4df8f3406/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
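These xfs remount warnings are the kernel's y2038 notice: an xfs filesystem without the bigtime feature limits inode timestamps to 32-bit signed epoch seconds, and the 0x7fffffff in each message is that ceiling. A quick check of the date it decodes to:

from datetime import datetime, timezone

# 0x7fffffff is the ceiling printed in the kernel messages above: the
# largest 32-bit signed value, read as seconds since the Unix epoch.
limit = 0x7fffffff
print(datetime.fromtimestamp(limit, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00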
Nov 29 03:43:04 np0005539550 podman[381153]: 2025-11-29 08:43:03.985242358 +0000 UTC m=+0.029065856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:04 np0005539550 podman[381153]: 2025-11-29 08:43:04.086346167 +0000 UTC m=+0.130169675 container init c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:43:04 np0005539550 podman[381153]: 2025-11-29 08:43:04.093243471 +0000 UTC m=+0.137066939 container start c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:43:04 np0005539550 podman[381153]: 2025-11-29 08:43:04.097194761 +0000 UTC m=+0.141018259 container attach c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:43:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:04 np0005539550 nova_compute[257631]: 2025-11-29 08:43:04.511 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 20 KiB/s wr, 29 op/s
Nov 29 03:43:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:04.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
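The radosgw "beast" lines recur about every two seconds as anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102, the shape of load-balancer health checks. A small parser for these access lines; the regex is an editorial sketch of the observed format, not RGW tooling:

import re

# Mirrors the observed layout: pointer, client, user, timestamp, request,
# status, bytes, then a trailing latency field.
BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:43:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000026s')
m = BEAST.search(line)
print(m.group('ip'), m.group('method'), m.group('status'), m.group('latency'))
# 192.168.122.100 HEAD 200 0.001000026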
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]: {
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:    "0": [
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:        {
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "devices": [
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "/dev/loop3"
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            ],
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "lv_name": "ceph_lv0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "lv_size": "7511998464",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "name": "ceph_lv0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "tags": {
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.cluster_name": "ceph",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.crush_device_class": "",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.encrypted": "0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.osd_id": "0",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.type": "block",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:                "ceph.vdo": "0"
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            },
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "type": "block",
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:            "vg_name": "ceph_vg0"
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:        }
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]:    ]
Nov 29 03:43:04 np0005539550 xenodochial_fermat[381170]: }
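The JSON that xenodochial_fermat prints is ceph-volume lvm list --format json output: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form under "tags". A minimal sketch that flattens it to OSD -> device, assuming the block above has been captured to a (hypothetical) file:

import json

# Hypothetical capture: the JSON block above saved verbatim to this file.
with open("ceph-volume-lvm-list.json") as f:
    lvm = json.load(f)

# Flatten: one line per logical volume, keyed by the OSD it backs.
for osd_id, lvs in lvm.items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
              f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")
# osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6)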
Nov 29 03:43:04 np0005539550 systemd[1]: libpod-c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81.scope: Deactivated successfully.
Nov 29 03:43:04 np0005539550 podman[381180]: 2025-11-29 08:43:04.926515197 +0000 UTC m=+0.025771603 container died c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:43:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b85af504c3f6c3b788bdc0d42598fe1598775e7ee864906ae5ba44a4df8f3406-merged.mount: Deactivated successfully.
Nov 29 03:43:04 np0005539550 podman[381180]: 2025-11-29 08:43:04.974169953 +0000 UTC m=+0.073426339 container remove c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:43:04 np0005539550 systemd[1]: libpod-conmon-c8460124c19005012aa7529a3423313d2a77043d9f0f65a5640c5ab8c3876e81.scope: Deactivated successfully.
Nov 29 03:43:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:05 np0005539550 podman[381337]: 2025-11-29 08:43:05.609208573 +0000 UTC m=+0.040442474 container create 6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:43:05 np0005539550 systemd[1]: Started libpod-conmon-6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da.scope.
Nov 29 03:43:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:05 np0005539550 podman[381337]: 2025-11-29 08:43:05.669464898 +0000 UTC m=+0.100698799 container init 6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:43:05 np0005539550 podman[381337]: 2025-11-29 08:43:05.675831949 +0000 UTC m=+0.107065850 container start 6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:43:05 np0005539550 podman[381337]: 2025-11-29 08:43:05.679262856 +0000 UTC m=+0.110496787 container attach 6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:43:05 np0005539550 competent_bohr[381353]: 167 167
Nov 29 03:43:05 np0005539550 systemd[1]: libpod-6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da.scope: Deactivated successfully.
Nov 29 03:43:05 np0005539550 podman[381337]: 2025-11-29 08:43:05.681106253 +0000 UTC m=+0.112340144 container died 6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:43:05 np0005539550 podman[381337]: 2025-11-29 08:43:05.592272595 +0000 UTC m=+0.023506516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-143d65cdb9578ee80eaebfe04d3a76df78ce8caa8c09724d57ca9faf3348030f-merged.mount: Deactivated successfully.
Nov 29 03:43:05 np0005539550 podman[381337]: 2025-11-29 08:43:05.723072685 +0000 UTC m=+0.154306596 container remove 6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:43:05 np0005539550 systemd[1]: libpod-conmon-6e9073d686aa3098978439cb62419bf0e7fb951f0367e27f66b3e8b96236b2da.scope: Deactivated successfully.
Nov 29 03:43:05 np0005539550 podman[381377]: 2025-11-29 08:43:05.920233294 +0000 UTC m=+0.041538872 container create 3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:43:05 np0005539550 systemd[1]: Started libpod-conmon-3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2.scope.
Nov 29 03:43:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44fa92a5cb7c2391d997803d068c35ed46e119366aa1387b74a73e1023621979/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44fa92a5cb7c2391d997803d068c35ed46e119366aa1387b74a73e1023621979/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44fa92a5cb7c2391d997803d068c35ed46e119366aa1387b74a73e1023621979/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44fa92a5cb7c2391d997803d068c35ed46e119366aa1387b74a73e1023621979/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:05 np0005539550 podman[381377]: 2025-11-29 08:43:05.993088698 +0000 UTC m=+0.114394286 container init 3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:43:06 np0005539550 podman[381377]: 2025-11-29 08:43:05.906011404 +0000 UTC m=+0.027317002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:43:06 np0005539550 podman[381377]: 2025-11-29 08:43:06.002433104 +0000 UTC m=+0.123738682 container start 3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:43:06 np0005539550 podman[381377]: 2025-11-29 08:43:06.005639315 +0000 UTC m=+0.126944933 container attach 3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:43:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:06.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:06 np0005539550 nova_compute[257631]: 2025-11-29 08:43:06.481 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 8.8 KiB/s wr, 28 op/s
Nov 29 03:43:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:06.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]: {
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]:        "osd_id": 0,
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]:        "type": "bluestore"
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]:    }
Nov 29 03:43:06 np0005539550 sad_hofstadter[381394]: }
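sad_hofstadter's JSON is the matching bluestore view, keyed by osd_uuid, which is the same value as ceph.osd_fsid in the LVM tags above. A sketch cross-checking the two listings, under the same hypothetical file-capture assumption:

import json

# Hypothetical captures of the two JSON blocks printed above.
with open("ceph-volume-lvm-list.json") as f:
    lvm = json.load(f)
with open("bluestore-list.json") as f:
    inv = json.load(f)

# Each bluestore osd_uuid should appear as ceph.osd_fsid on exactly one LV.
fsid_to_osd = {lv["tags"]["ceph.osd_fsid"]: osd_id
               for osd_id, lvs in lvm.items() for lv in lvs}
for uuid, meta in inv.items():
    assert fsid_to_osd.get(uuid) == str(meta["osd_id"]), uuid
    print(f"osd.{meta['osd_id']} ({meta['type']}) -> {meta['device']}")
# osd.0 (bluestore) -> /dev/mapper/ceph_vg0-ceph_lv0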
Nov 29 03:43:06 np0005539550 systemd[1]: libpod-3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2.scope: Deactivated successfully.
Nov 29 03:43:06 np0005539550 podman[381377]: 2025-11-29 08:43:06.807698232 +0000 UTC m=+0.929003810 container died 3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:43:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-44fa92a5cb7c2391d997803d068c35ed46e119366aa1387b74a73e1023621979-merged.mount: Deactivated successfully.
Nov 29 03:43:06 np0005539550 podman[381377]: 2025-11-29 08:43:06.880764171 +0000 UTC m=+1.002069749 container remove 3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hofstadter, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:43:06 np0005539550 systemd[1]: libpod-conmon-3e58e889bf03163d6970c1ec741b817fe509b062475f3df75b857e9f956ea8c2.scope: Deactivated successfully.
Nov 29 03:43:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:43:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:43:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:07 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:07Z|00932|binding|INFO|Releasing lport 145c794a-351b-44e3-94d7-5ac8d914c828 from this chassis (sb_readonly=0)
Nov 29 03:43:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a8bb5a17-789b-42af-b177-75b76126636a does not exist
Nov 29 03:43:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dff4efad-8254-4397-9025-7ff3f29d1df9 does not exist
Nov 29 03:43:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 22eba995-2db1-4ff5-8113-d60da8acb498 does not exist
Nov 29 03:43:07 np0005539550 nova_compute[257631]: 2025-11-29 08:43:07.322 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:07 np0005539550 nova_compute[257631]: 2025-11-29 08:43:07.645 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:43:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:08.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:43:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:43:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 923 KiB/s wr, 28 op/s
Nov 29 03:43:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:08.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:43:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:43:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:43:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:43:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:43:09 np0005539550 nova_compute[257631]: 2025-11-29 08:43:09.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:09 np0005539550 nova_compute[257631]: 2025-11-29 08:43:09.917 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:43:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 212 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 923 KiB/s wr, 27 op/s
Nov 29 03:43:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:11 np0005539550 podman[381530]: 2025-11-29 08:43:11.360158442 +0000 UTC m=+0.091686441 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:43:11 np0005539550 nova_compute[257631]: 2025-11-29 08:43:11.426 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:11 np0005539550 nova_compute[257631]: 2025-11-29 08:43:11.451 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405776.4505587, b8166862-6c06-4726-b53e-e53f69cda3df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:43:11 np0005539550 nova_compute[257631]: 2025-11-29 08:43:11.452 257641 INFO nova.compute.manager [-] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] VM Stopped (Lifecycle Event)
Nov 29 03:43:11 np0005539550 nova_compute[257631]: 2025-11-29 08:43:11.475 257641 DEBUG nova.compute.manager [None req-a938f2ee-e01c-4122-a341-4cae491abaac - - - - - -] [instance: b8166862-6c06-4726-b53e-e53f69cda3df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:43:11 np0005539550 nova_compute[257631]: 2025-11-29 08:43:11.526 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:11 np0005539550 nova_compute[257631]: 2025-11-29 08:43:11.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Nov 29 03:43:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:12.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:14.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:14 np0005539550 nova_compute[257631]: 2025-11-29 08:43:14.517 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Nov 29 03:43:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:14.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:15 np0005539550 nova_compute[257631]: 2025-11-29 08:43:15.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:15 np0005539550 nova_compute[257631]: 2025-11-29 08:43:15.462 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Nov 29 03:43:16 np0005539550 nova_compute[257631]: 2025-11-29 08:43:16.577 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:17 np0005539550 nova_compute[257631]: 2025-11-29 08:43:17.249 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Nov 29 03:43:18 np0005539550 nova_compute[257631]: 2025-11-29 08:43:18.772 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:18.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:18.974 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:43:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:18.974 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:43:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:18.975 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:43:19 np0005539550 nova_compute[257631]: 2025-11-29 08:43:19.519 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:20 np0005539550 nova_compute[257631]: 2025-11-29 08:43:20.305 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031663307858738137 of space, bias 1.0, pg target 0.9498992357621441 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
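The pg_autoscaler figures above fit one relation: pg target = space ratio x bias x (target PGs per OSD x OSD count). Taking the Ceph default mon_target_pg_per_osd=100 across this cluster's 3 OSDs, an assumption that matches every line here, the multiplier is 300:

# Reproduce the "pg target" values logged by the autoscaler. The total of
# 300 is an assumption (mon_target_pg_per_osd=100 over 3 OSDs) that happens
# to match every figure in the log above.
def pg_target(space_ratio: float, bias: float, total_target_pgs: int = 300) -> float:
    return space_ratio * bias * total_target_pgs

print(pg_target(0.0031663307858738137, 1.0))   # ~0.9498992357621441   ('vms')
print(pg_target(2.0538165363856318e-05, 1.0))  # ~0.006161449609156895 ('.mgr')
print(pg_target(1.4540294062907128e-06, 4.0))  # ~0.0017448352875488555 ('cephfs.cephfs.meta')
# The target is then quantized to a power of two, and pg_num is only
# changed when the ideal differs from the current value by a large factor
# (3x by default), which is why 'vms' stays "quantized to 32 (current 32)".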
Nov 29 03:43:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:20.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 908 KiB/s wr, 100 op/s
Nov 29 03:43:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:20.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:21 np0005539550 nova_compute[257631]: 2025-11-29 08:43:21.647 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:22.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 910 KiB/s wr, 101 op/s
Nov 29 03:43:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:22.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:24.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:24 np0005539550 nova_compute[257631]: 2025-11-29 08:43:24.521 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 246 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 74 op/s
Nov 29 03:43:24 np0005539550 nova_compute[257631]: 2025-11-29 08:43:24.636 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:24.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:26.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 301 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.0 MiB/s wr, 126 op/s
Nov 29 03:43:26 np0005539550 nova_compute[257631]: 2025-11-29 08:43:26.649 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:26.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.180 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.180 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.196 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.277 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.277 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
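The Acquiring / acquired / "released" lines (the quotes around "released" are part of oslo's own message template) come from oslo.concurrency's lockutils, which nova uses to serialize building per instance UUID and to guard the resource tracker. A minimal sketch of both patterns seen here, assuming oslo.concurrency is installed:

```python
# Minimal sketch of the oslo.concurrency locking behind these lines
# (pip install oslo.concurrency). Default locks are in-process
# semaphores; nova uses the same helpers, partly with external locks.
from oslo_concurrency import lockutils

@lockutils.synchronized("3941161c-104e-452f-8d56-54600d37d0f5")
def _locked_do_build_and_run_instance():
    print("building instance")     # one builder per instance UUID at a time

with lockutils.lock("compute_resources"):
    print("claiming resources")    # resource-tracker claims are serialized

_locked_do_build_and_run_instance()
```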
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.285 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.286 257641 INFO nova.compute.claims [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.408 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226504315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.853 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
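The `ceph df` call above is how nova's RBD image backend samples pool capacity before updating inventory. A sketch of the same call and the fields of interest; the JSON keys are an assumption based on recent Ceph releases, so verify against your own cluster's output:

```python
# Sketch of the "ceph df --format=json" call made above.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
df = json.loads(out)
print("total bytes:", df["stats"]["total_bytes"])       # assumed key names
print("avail bytes:", df["stats"]["total_avail_bytes"])
for pool in df["pools"]:
    print(pool["name"], pool["stats"].get("bytes_used"))
```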
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.861 257641 DEBUG nova.compute.provider_tree [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.881 257641 DEBUG nova.scheduler.client.report [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
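Placement treats this inventory as capacity = (total - reserved) * allocation_ratio per resource class, so the host advertises considerably more schedulable VCPUs than its 8 physical ones. Worked out from the numbers above:

```python
# Schedulable capacity implied by the reported inventory:
# capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
```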
Nov 29 03:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.939 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:27 np0005539550 nova_compute[257631]: 2025-11-29 08:43:27.940 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.019 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.020 257641 DEBUG nova.network.neutron [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.049 257641 INFO nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.069 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.160 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.162 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.163 257641 INFO nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Creating image(s)#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.198 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.227 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.258 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.262 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.349 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
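Before reusing the cached base image, nova probes it with qemu-img wrapped in oslo's prlimit helper, which caps the child's address space at 1 GiB and its CPU time at 30 s so a malformed image cannot wedge the compute host. The same invocation by hand (requires oslo.concurrency installed; the path comes from the log line above):

```python
# The wrapped qemu-img probe, as logged, plus JSON parsing.
import json
import subprocess

path = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
out = subprocess.check_output(
    ["python3", "-m", "oslo_concurrency.prlimit",
     "--as=1073741824", "--cpu=30", "--",      # 1 GiB address space, 30 s CPU
     "env", "LC_ALL=C", "LANG=C",
     "qemu-img", "info", path, "--force-share", "--output=json"])
info = json.loads(out)
print(info["format"], info["virtual-size"])   # e.g. "qcow2", size in bytes
```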
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.350 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.351 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.351 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:28.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.381 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.385 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3941161c-104e-452f-8d56-54600d37d0f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.456 257641 DEBUG nova.policy [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01c0b956e2c74d5798d01fc2be0a8bac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b3b0484057a4e3db51366d29c6b684d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.524 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 314 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 93 op/s
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.693 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 3941161c-104e-452f-8d56-54600d37d0f5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.761 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] resizing rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
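With RBD-backed ephemeral disks, the root disk is materialized by importing the cached base file into the `vms` pool and then growing it to the flavor's root_gb; 1073741824 bytes is exactly 1 GiB, matching root_gb=1 of the m1.nano flavor used here. The CLI equivalent of the pair of operations logged above (nova itself performs the resize through the python-rbd bindings):

```python
# CLI equivalent of the import + resize pair, plus the size arithmetic.
import subprocess

root_gb = 1                                   # m1.nano root disk
size_bytes = root_gb * 1024 ** 3
assert size_bytes == 1073741824               # value logged by rbd_utils

image = "3941161c-104e-452f-8d56-54600d37d0f5_disk"
base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
creds = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

subprocess.check_call(
    ["rbd", "import", "--pool", "vms", base, image, "--image-format=2", *creds])
subprocess.check_call(                        # rbd --size defaults to MiB
    ["rbd", "resize", "--size", str(size_bytes // 2**20), f"vms/{image}", *creds])
```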
Nov 29 03:43:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:28.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.857 257641 DEBUG nova.objects.instance [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'migration_context' on Instance uuid 3941161c-104e-452f-8d56-54600d37d0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.869 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.870 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Ensure instance console log exists: /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.870 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.871 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:28 np0005539550 nova_compute[257631]: 2025-11-29 08:43:28.871 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:29 np0005539550 nova_compute[257631]: 2025-11-29 08:43:29.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:29 np0005539550 podman[381778]: 2025-11-29 08:43:29.799003523 +0000 UTC m=+0.058107741 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 03:43:29 np0005539550 podman[381777]: 2025-11-29 08:43:29.830545841 +0000 UTC m=+0.093324832 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
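The podman entries are periodic health-check results for the ovn_metadata_agent and multipathd containers; per the healthcheck config embedded in each line, the check executes the mounted /openstack/healthcheck script, and both containers report healthy with a zero failing streak. The same checks can be run on demand:

```python
# Run the container health checks by hand; "podman healthcheck run"
# executes the configured test and exits 0 when healthy.
import subprocess

for name in ("ovn_metadata_agent", "multipathd"):
    rc = subprocess.call(["podman", "healthcheck", "run", name])
    print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```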
Nov 29 03:43:30 np0005539550 nova_compute[257631]: 2025-11-29 08:43:30.022 257641 DEBUG nova.network.neutron [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Successfully created port: 1dcdca62-3969-4483-aefd-58f07dfb018f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:43:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:30.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 314 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 205 KiB/s rd, 3.8 MiB/s wr, 62 op/s
Nov 29 03:43:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:30.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:30 np0005539550 nova_compute[257631]: 2025-11-29 08:43:30.813 257641 DEBUG nova.network.neutron [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Successfully updated port: 1dcdca62-3969-4483-aefd-58f07dfb018f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:43:30 np0005539550 nova_compute[257631]: 2025-11-29 08:43:30.831 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:30 np0005539550 nova_compute[257631]: 2025-11-29 08:43:30.831 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquired lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:30 np0005539550 nova_compute[257631]: 2025-11-29 08:43:30.831 257641 DEBUG nova.network.neutron [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:43:30 np0005539550 nova_compute[257631]: 2025-11-29 08:43:30.996 257641 DEBUG nova.network.neutron [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:43:31 np0005539550 nova_compute[257631]: 2025-11-29 08:43:31.686 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:32.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.392 257641 DEBUG nova.network.neutron [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updating instance_info_cache with network_info: [{"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
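The port landed on a /28 tenant subnet, and the advertised MTU of 1442 is consistent with a 1500-byte underlay minus OVN's 58-byte Geneve encapsulation overhead (an inference; the log does not state the underlay MTU). Quick checks with the stdlib:

```python
# Sanity checks on the network_info above (stdlib only).
import ipaddress

net = ipaddress.ip_network("10.100.0.0/28")
print(net.num_addresses)            # 16 addresses in a /28
assert ipaddress.ip_address("10.100.0.14") in net   # the fixed IP
assert ipaddress.ip_address("10.100.0.1") in net    # the gateway

mtu = 1500 - 58                     # assumed Geneve overhead on a 1500 underlay
assert mtu == 1442                  # matches "mtu": 1442 in the log
```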
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.417 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Releasing lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.418 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Instance network_info: |[{"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.420 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Start _get_guest_xml network_info=[{"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.424 257641 WARNING nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.429 257641 DEBUG nova.virt.libvirt.host [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.430 257641 DEBUG nova.virt.libvirt.host [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.434 257641 DEBUG nova.virt.libvirt.host [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.435 257641 DEBUG nova.virt.libvirt.host [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.436 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.436 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.437 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.437 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.437 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.437 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.437 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.438 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.438 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.438 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.438 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.438 257641 DEBUG nova.virt.hardware [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
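With every limit and preference at 0:0:0, nova enumerates guest topologies whose sockets x cores x threads product equals the vCPU count, so a 1-vCPU flavor can only yield 1:1:1, which is what appears in the domain XML below. A simplified re-derivation (not nova's actual implementation):

```python
# Simplified re-derivation of the topology search in nova.virt.hardware.
def possible_topologies(vcpus):
    """Yield (sockets, cores, threads) whose product equals vcpus."""
    for sockets in range(1, vcpus + 1):
        for cores in range(1, vcpus + 1):
            for threads in range(1, vcpus + 1):
                if sockets * cores * threads == vcpus:
                    yield (sockets, cores, threads)

print(list(possible_topologies(1)))   # [(1, 1, 1)], the single topology logged
```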
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.441 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 365 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 5.5 MiB/s wr, 116 op/s
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.744 257641 DEBUG nova.compute.manager [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-changed-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.744 257641 DEBUG nova.compute.manager [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Refreshing instance network info cache due to event network-changed-1dcdca62-3969-4483-aefd-58f07dfb018f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.745 257641 DEBUG oslo_concurrency.lockutils [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.745 257641 DEBUG oslo_concurrency.lockutils [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.745 257641 DEBUG nova.network.neutron [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Refreshing network info cache for port 1dcdca62-3969-4483-aefd-58f07dfb018f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:32.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3185131580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.891 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.917 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:32 np0005539550 nova_compute[257631]: 2025-11-29 08:43:32.921 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272498004' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.397 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
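The two `ceph mon dump` calls fetch the monitor map; the monitor addresses extracted from it become the `<host>` entries of the RBD disk source in the domain XML below. A sketch of the call, assuming the usual `mons`/`public_addr` JSON layout (verify on your Ceph release):

```python
# Sketch of the "ceph mon dump --format=json" call and the addresses
# nova extracts for the libvirt <source protocol="rbd"> hosts.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
for mon in json.loads(out)["mons"]:
    print(mon["name"], mon.get("public_addr"))   # e.g. 192.168.122.100:6789/0
```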
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.400 257641 DEBUG nova.virt.libvirt.vif [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-18064033',display_name='tempest-TestStampPattern-server-18064033',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-18064033',id=202,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaYR98uKIjTIgJJtQ2rdiQ8Vh+nL0em9VusryiAjil0FJtSxO+lnk+ODZdl0LgdlhrdeOvooo+j67fYKSbk9O76X3xg1L2IaHTBYV9x6ArSe2Q3HBmSDWpJ9bh8XmHSeQ==',key_name='tempest-TestStampPattern-826635977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b3b0484057a4e3db51366d29c6b684d',ramdisk_id='',reservation_id='r-31vy1fku',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1305027466',owner_user_name='tempest-TestStampPattern-1305027466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:28Z,user_data=None,user_id='01c0b956e2c74d5798d01fc2be0a8bac',uuid=3941161c-104e-452f-8d56-54600d37d0f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.401 257641 DEBUG nova.network.os_vif_util [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converting VIF {"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.402 257641 DEBUG nova.network.os_vif_util [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:3f:dc,bridge_name='br-int',has_traffic_filtering=True,id=1dcdca62-3969-4483-aefd-58f07dfb018f,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dcdca62-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.404 257641 DEBUG nova.objects.instance [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'pci_devices' on Instance uuid 3941161c-104e-452f-8d56-54600d37d0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
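The fully rendered domain XML follows. One unit worth noting: libvirt's `<memory>` element defaults to KiB, so the 131072 below is the flavor's 128 MiB, and `<vcpu>1</vcpu>` matches the 1:1:1 topology chosen above.

```python
# Unit check for the <memory> element below: libvirt defaults to KiB.
flavor_memory_mb = 128                        # m1.nano
assert flavor_memory_mb * 1024 == 131072      # <memory>131072</memory>
```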
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.423 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <uuid>3941161c-104e-452f-8d56-54600d37d0f5</uuid>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <name>instance-000000ca</name>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestStampPattern-server-18064033</nova:name>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:43:32</nova:creationTime>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:user uuid="01c0b956e2c74d5798d01fc2be0a8bac">tempest-TestStampPattern-1305027466-project-member</nova:user>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:project uuid="3b3b0484057a4e3db51366d29c6b684d">tempest-TestStampPattern-1305027466</nova:project>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <nova:port uuid="1dcdca62-3969-4483-aefd-58f07dfb018f">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <entry name="serial">3941161c-104e-452f-8d56-54600d37d0f5</entry>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <entry name="uuid">3941161c-104e-452f-8d56-54600d37d0f5</entry>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/3941161c-104e-452f-8d56-54600d37d0f5_disk">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/3941161c-104e-452f-8d56-54600d37d0f5_disk.config">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:6c:3f:dc"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <target dev="tap1dcdca62-39"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/console.log" append="off"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:43:33 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:43:33 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:43:33 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:43:33 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
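The guest definition dumped above is plain libvirt domain XML, so it can be picked apart with the Python standard library alone. A minimal sketch (element and namespace names are taken from the dump itself; the input file path is hypothetical):

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    # Hypothetical file holding the XML between <domain> and </domain> above.
    root = ET.parse("instance-000000ca.xml").getroot()
    print(root.findtext("uuid"))   # 3941161c-104e-452f-8d56-54600d37d0f5
    print(root.findtext("name"))   # instance-000000ca
    inst = root.find("metadata/nova:instance", NOVA_NS)
    print(inst.findtext("nova:name", namespaces=NOVA_NS))  # tempest-TestStampPattern-server-18064033
    flavor = inst.find("nova:flavor", NOVA_NS)
    print(flavor.get("name"), flavor.findtext("nova:vcpus", namespaces=NOVA_NS))  # m1.nano 1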
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.425 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Preparing to wait for external event network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.426 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.426 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.426 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.427 257641 DEBUG nova.virt.libvirt.vif [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-18064033',display_name='tempest-TestStampPattern-server-18064033',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-18064033',id=202,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaYR98uKIjTIgJJtQ2rdiQ8Vh+nL0em9VusryiAjil0FJtSxO+lnk+ODZdl0LgdlhrdeOvooo+j67fYKSbk9O76X3xg1L2IaHTBYV9x6ArSe2Q3HBmSDWpJ9bh8XmHSeQ==',key_name='tempest-TestStampPattern-826635977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b3b0484057a4e3db51366d29c6b684d',ramdisk_id='',reservation_id='r-31vy1fku',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1305027466',owner_user_name='tempest-TestStampPattern-1305027466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:28Z,user_data=None,user_id='01c0b956e2c74d5798d01fc2be0a8bac',uuid=3941161c-104e-452f-8d56-54600d37d0f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.428 257641 DEBUG nova.network.os_vif_util [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converting VIF {"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.429 257641 DEBUG nova.network.os_vif_util [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:3f:dc,bridge_name='br-int',has_traffic_filtering=True,id=1dcdca62-3969-4483-aefd-58f07dfb018f,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dcdca62-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.429 257641 DEBUG os_vif [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:3f:dc,bridge_name='br-int',has_traffic_filtering=True,id=1dcdca62-3969-4483-aefd-58f07dfb018f,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dcdca62-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.430 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.430 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.431 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.434 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.434 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1dcdca62-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.435 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1dcdca62-39, col_values=(('external_ids', {'iface-id': '1dcdca62-3969-4483-aefd-58f07dfb018f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:3f:dc', 'vm-uuid': '3941161c-104e-452f-8d56-54600d37d0f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
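The AddPortCommand/DbSetCommand transaction above is what os-vif drives through the OVSDB IDL; the same end state can be reached from the CLI. A hedged sketch of the equivalent ovs-vsctl call (port, bridge, and external_ids values copied from the log lines; nova itself does not shell out like this):

    import subprocess

    # One transaction: add the tap port to br-int and set the external_ids
    # that ovn-controller matches on when it claims the logical port.
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap1dcdca62-39",
        "--", "set", "Interface", "tap1dcdca62-39",
        "external_ids:iface-id=1dcdca62-3969-4483-aefd-58f07dfb018f",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:6c:3f:dc",
        "external_ids:vm-uuid=3941161c-104e-452f-8d56-54600d37d0f5",
    ], check=True)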
Nov 29 03:43:33 np0005539550 NetworkManager[49039]: <info>  [1764405813.4374] manager: (tap1dcdca62-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.438 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.443 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.444 257641 INFO os_vif [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:3f:dc,bridge_name='br-int',has_traffic_filtering=True,id=1dcdca62-3969-4483-aefd-58f07dfb018f,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dcdca62-39')#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.508 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.508 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.508 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No VIF found with MAC fa:16:3e:6c:3f:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.509 257641 INFO nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Using config drive#033[00m
Nov 29 03:43:33 np0005539550 nova_compute[257631]: 2025-11-29 08:43:33.535 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 03:43:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 03:43:34 np0005539550 nova_compute[257631]: 2025-11-29 08:43:34.525 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 372 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 833 KiB/s rd, 5.7 MiB/s wr, 143 op/s
Nov 29 03:43:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:34.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.364 257641 INFO nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Creating config drive at /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/disk.config#033[00m
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.370 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa26bewhb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:36.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.508 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa26bewhb" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.540 257641 DEBUG nova.storage.rbd_utils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 3941161c-104e-452f-8d56-54600d37d0f5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.544 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/disk.config 3941161c-104e-452f-8d56-54600d37d0f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 308 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 212 op/s
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.713 257641 DEBUG oslo_concurrency.processutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/disk.config 3941161c-104e-452f-8d56-54600d37d0f5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.714 257641 INFO nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Deleting local config drive /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5/disk.config because it was imported into RBD.#033[00m
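The config-drive sequence above (mkisofs builds the ISO locally, rbd import copies it into the vms pool, then the local file is deleted) is reproducible by hand. A sketch with both commands taken verbatim from the log; running it anywhere but this host and Ceph setup is hypothetical:

    import os
    import subprocess

    base = "/var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5"
    iso = base + "/disk.config"

    # Build the ISO9660 config drive (volume label config-2, Joliet + Rock Ridge).
    subprocess.run([
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-publisher",
        "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpa26bewhb",
    ], check=True)

    # Import it as an RBD image in the vms pool, then drop the local copy,
    # matching the "Deleting local config drive" line above.
    subprocess.run([
        "rbd", "import", "--pool", "vms", iso,
        "3941161c-104e-452f-8d56-54600d37d0f5_disk.config",
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf",
    ], check=True)
    os.remove(iso)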
Nov 29 03:43:36 np0005539550 kernel: tap1dcdca62-39: entered promiscuous mode
Nov 29 03:43:36 np0005539550 NetworkManager[49039]: <info>  [1764405816.7638] manager: (tap1dcdca62-39): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Nov 29 03:43:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:36Z|00933|binding|INFO|Claiming lport 1dcdca62-3969-4483-aefd-58f07dfb018f for this chassis.
Nov 29 03:43:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:36Z|00934|binding|INFO|1dcdca62-3969-4483-aefd-58f07dfb018f: Claiming fa:16:3e:6c:3f:dc 10.100.0.14
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.764 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.775 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:3f:dc 10.100.0.14'], port_security=['fa:16:3e:6c:3f:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3941161c-104e-452f-8d56-54600d37d0f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13244464-ec20-4842-bec9-0ac60372e025', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b3b0484057a4e3db51366d29c6b684d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aefb70fb-aad6-4ac2-b50b-5898c331e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca67c27a-85ad-40df-9c83-890f1ece542e, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1dcdca62-3969-4483-aefd-58f07dfb018f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.777 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1dcdca62-3969-4483-aefd-58f07dfb018f in datapath 13244464-ec20-4842-bec9-0ac60372e025 bound to our chassis#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.779 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13244464-ec20-4842-bec9-0ac60372e025#033[00m
Nov 29 03:43:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:36Z|00935|binding|INFO|Setting lport 1dcdca62-3969-4483-aefd-58f07dfb018f ovn-installed in OVS
Nov 29 03:43:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:36Z|00936|binding|INFO|Setting lport 1dcdca62-3969-4483-aefd-58f07dfb018f up in Southbound
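Once ovn-controller reports the lport up in the Southbound DB, the binding can be confirmed from the CLI. A hedged sketch (the logical_port value is copied from the log; ovn-sbctl must be pointed at this deployment's SB database):

    import subprocess

    # Shows the claiming chassis, the MAC/IP pair, and up=true for the lport.
    subprocess.run([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=1dcdca62-3969-4483-aefd-58f07dfb018f",
    ], check=True)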
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.784 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:36 np0005539550 nova_compute[257631]: 2025-11-29 08:43:36.786 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.792 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4104223b-4eec-428f-a949-6563729bf1a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.793 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13244464-e1 in ovnmeta-13244464-ec20-4842-bec9-0ac60372e025 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.795 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13244464-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.795 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[25d60072-ec99-4f41-beba-080b8600c62d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.796 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c45b5669-c117-4b54-a5c2-2dac86a9d7b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:36.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:36 np0005539550 systemd-machined[216673]: New machine qemu-108-instance-000000ca.
Nov 29 03:43:36 np0005539550 systemd-udevd[381983]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.807 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[e6b2e971-abf8-4ec2-acd9-74ecd6bcdb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 systemd[1]: Started Virtual Machine qemu-108-instance-000000ca.
Nov 29 03:43:36 np0005539550 NetworkManager[49039]: <info>  [1764405816.8194] device (tap1dcdca62-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:43:36 np0005539550 NetworkManager[49039]: <info>  [1764405816.8203] device (tap1dcdca62-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.836 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1f151b6a-1deb-402c-bcb0-8404729a8ddf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.872 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e4c7de-13d0-4fec-867a-fcfff650f2f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 NetworkManager[49039]: <info>  [1764405816.8789] manager: (tap13244464-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/408)
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.878 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[46121797-75b7-476c-b31f-8d10d74b579a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.908 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2d65df68-c70d-4f51-8295-570a6b93697b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.911 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5632b8-9f43-42a6-9f4f-c8e0efd55571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 NetworkManager[49039]: <info>  [1764405816.9305] device (tap13244464-e0): carrier: link connected
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.934 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[7543ef44-1f6d-4568-980f-b31c87e20b65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.949 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[99e96ad0-6246-4f9c-89bb-6b6496673cf8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13244464-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:83:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886459, 'reachable_time': 22739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382015, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:36.985 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0d305fda-227a-43af-ab7a-b612bad35d3c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:8375'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886459, 'tstamp': 886459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382016, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.003 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[83df6e8d-25c7-4046-b8d0-a76945ee62ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13244464-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:83:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886459, 'reachable_time': 22739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382017, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.034 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3d3610-efe0-4ea9-8d5a-a5962484c234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.091 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7e1de0-b9fa-4342-99ba-1dc96a2adb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.092 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13244464-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.093 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.093 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13244464-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.095 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:37 np0005539550 kernel: tap13244464-e0: entered promiscuous mode
Nov 29 03:43:37 np0005539550 NetworkManager[49039]: <info>  [1764405817.0975] manager: (tap13244464-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.098 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.099 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13244464-e0, col_values=(('external_ids', {'iface-id': '58d6f574-370c-46a0-9e95-f7c81493d948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.100 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:37Z|00937|binding|INFO|Releasing lport 58d6f574-370c-46a0-9e95-f7c81493d948 from this chassis (sb_readonly=0)
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.101 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.102 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13244464-ec20-4842-bec9-0ac60372e025.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13244464-ec20-4842-bec9-0ac60372e025.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.102 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[92bf0ad2-da4a-4497-a239-32a025a48a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.103 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-13244464-ec20-4842-bec9-0ac60372e025
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/13244464-ec20-4842-bec9-0ac60372e025.pid.haproxy
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 13244464-ec20-4842-bec9-0ac60372e025
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
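The rendered haproxy_cfg above can be syntax-checked before it is handed to the namespaced haproxy that the next line launches. A sketch (the config path is copied from the launch command below; -c -f is haproxy's standard check-only invocation):

    import subprocess

    # Parse the generated config and exit without starting a proxy.
    subprocess.run([
        "haproxy", "-c", "-f",
        "/var/lib/neutron/ovn-metadata-proxy/13244464-ec20-4842-bec9-0ac60372e025.conf",
    ], check=True)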
Nov 29 03:43:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:37.104 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'env', 'PROCESS_TAG=haproxy-13244464-ec20-4842-bec9-0ac60372e025', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13244464-ec20-4842-bec9-0ac60372e025.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.120 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.326 257641 DEBUG nova.network.neutron [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updated VIF entry in instance network info cache for port 1dcdca62-3969-4483-aefd-58f07dfb018f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.327 257641 DEBUG nova.network.neutron [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updating instance_info_cache with network_info: [{"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.343 257641 DEBUG oslo_concurrency.lockutils [req-0965d2ca-2cfa-43a4-8498-21a1f2e33957 req-14146bd1-e001-46d6-9d3e-93c974ab0369 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:37 np0005539550 podman[382049]: 2025-11-29 08:43:37.449916302 +0000 UTC m=+0.048884318 container create bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:43:37 np0005539550 systemd[1]: Started libpod-conmon-bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429.scope.
Nov 29 03:43:37 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab56709855df16aefd273cba1223d696851acb94f485f8cc5dde2603492e8db/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:37 np0005539550 podman[382049]: 2025-11-29 08:43:37.421876503 +0000 UTC m=+0.020844539 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:43:37 np0005539550 podman[382049]: 2025-11-29 08:43:37.529788184 +0000 UTC m=+0.128756220 container init bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:43:37 np0005539550 podman[382049]: 2025-11-29 08:43:37.536417981 +0000 UTC m=+0.135385997 container start bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:37 np0005539550 neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025[382065]: [NOTICE]   (382069) : New worker (382071) forked
Nov 29 03:43:37 np0005539550 neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025[382065]: [NOTICE]   (382069) : Loading success.
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.813 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405817.8134792, 3941161c-104e-452f-8d56-54600d37d0f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.815 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] VM Started (Lifecycle Event)#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.841 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.845 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405817.813587, 3941161c-104e-452f-8d56-54600d37d0f5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.845 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.871 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.875 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:37 np0005539550 nova_compute[257631]: 2025-11-29 08:43:37.902 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
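The numeric states in the sync message above ("DB power_state: 0, VM power_state: 3") follow nova.compute.power_state; the mapping below is the standard numbering, shown for reference and assumed unmodified in this deployment:

    # Standard nova.compute.power_state numbering (assumed unmodified here).
    POWER_STATES = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                    4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    # The "Paused" sync above: DB still NOSTATE, hypervisor reports PAUSED.
    print(POWER_STATES[0], "->", POWER_STATES[3])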
Nov 29 03:43:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:38.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.437 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.7 MiB/s wr, 166 op/s
Nov 29 03:43:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:38.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.856 257641 DEBUG nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.857 257641 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.857 257641 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.857 257641 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.857 257641 DEBUG nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Processing event network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.858 257641 DEBUG nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.858 257641 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.858 257641 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.858 257641 DEBUG oslo_concurrency.lockutils [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.858 257641 DEBUG nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] No waiting events found dispatching network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.859 257641 WARNING nova.compute.manager [req-e298d826-1d56-4c72-b15b-a71b4fbe2be9 req-4d8c0bdb-1dd5-4db7-82f1-8f137dc38596 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received unexpected event network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.859 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
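The lock/pop sequence above is the usual waiter-registration pattern: the first network-vif-plugged notification woke the registered waiter ("wait completed in 1 seconds"), while the duplicate found no waiting event and was logged as unexpected. A hedged sketch of the pattern (not Nova's actual code) using plain threading:

    import threading

    _events, _lock = {}, threading.Lock()

    def prepare_event(name):
        # The compute manager registers interest before plugging the VIF.
        with _lock:
            _events[name] = threading.Event()

    def pop_event(name):
        # A Neutron notification arrives; no registered waiter means
        # "Received unexpected event", mirroring the WARNING above.
        with _lock:
            ev = _events.pop(name, None)
        if ev is None:
            print(f"Received unexpected event {name}")
        else:
            ev.set()

    def wait_for_event(name, timeout=300):
        with _lock:
            ev = _events[name]
        return ev.wait(timeout)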
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.863 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405818.863125, 3941161c-104e-452f-8d56-54600d37d0f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.864 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.865 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.869 257641 INFO nova.virt.libvirt.driver [-] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Instance spawned successfully.#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.869 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.883 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.889 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.894 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.895 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.895 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.896 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.896 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.896 257641 DEBUG nova.virt.libvirt.driver [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.923 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.966 257641 INFO nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Took 10.80 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:43:38 np0005539550 nova_compute[257631]: 2025-11-29 08:43:38.967 257641 DEBUG nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:39 np0005539550 nova_compute[257631]: 2025-11-29 08:43:39.066 257641 INFO nova.compute.manager [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Took 11.82 seconds to build instance.#033[00m
Nov 29 03:43:39 np0005539550 nova_compute[257631]: 2025-11-29 08:43:39.082 257641 DEBUG oslo_concurrency.lockutils [None req-824c64c6-3878-4f7e-bf83-08acd3fd9a1e 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:39 np0005539550 nova_compute[257631]: 2025-11-29 08:43:39.529 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:43:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:40.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:43:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 292 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 160 op/s
Nov 29 03:43:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.736 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.737 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.737 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.737 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.737 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.738 257641 INFO nova.compute.manager [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Terminating instance#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.739 257641 DEBUG nova.compute.manager [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:43:40 np0005539550 kernel: tap2c50a91b-2e (unregistering): left promiscuous mode
Nov 29 03:43:40 np0005539550 NetworkManager[49039]: <info>  [1764405820.7997] device (tap2c50a91b-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:43:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:40.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.809 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:40Z|00938|binding|INFO|Releasing lport 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 from this chassis (sb_readonly=0)
Nov 29 03:43:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:40Z|00939|binding|INFO|Setting lport 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 down in Southbound
Nov 29 03:43:40 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:40Z|00940|binding|INFO|Removing iface tap2c50a91b-2e ovn-installed in OVS
Nov 29 03:43:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:40.814 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:64:30:27 10.100.0.11'], port_security=['fa:16:3e:64:30:27 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1e268cee-9d36-413b-9d89-7a6c381c8f3d 23111efd-29d0-4a29-b4b5-9c0e8276cee4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6f7c3086-1e59-4178-81ca-2a80fe1fd75b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
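The matcher above is an ovsdbapp RowEvent: the agent subscribes to Port_Binding updates and reacts when a port's up state flips, as in the up=[True] -> [False] transition shown. A hedged sketch in that style (class and hook names follow ovsdbapp; treat the details as assumptions):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the binding's "up" state actually changed.
            return hasattr(old, 'up') and row.up != old.up

        def run(self, event, row, old):
            print(f"lport {row.logical_port} is now up={row.up}")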
Nov 29 03:43:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:40.816 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 in datapath e9e8d3c4-db7b-4d58-959e-9279d976835d unbound from our chassis#033[00m
Nov 29 03:43:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:40.818 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e9e8d3c4-db7b-4d58-959e-9279d976835d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:43:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:40.819 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[00ae8842-cef8-456d-ae07-0c851fc24ed6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:40.819 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d namespace which is not needed anymore#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.836 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539550 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000c7.scope: Deactivated successfully.
Nov 29 03:43:40 np0005539550 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d000000c7.scope: Consumed 16.955s CPU time.
Nov 29 03:43:40 np0005539550 systemd-machined[216673]: Machine qemu-107-instance-000000c7 terminated.
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.978 257641 INFO nova.virt.libvirt.driver [-] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Instance destroyed successfully.#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.979 257641 DEBUG nova.objects.instance [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.993 257641 DEBUG nova.virt.libvirt.vif [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-892821333',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-access_point-892821333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ac',id=199,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPskF69gJv1PkFDp6sMFcSkGc77HX97zARQ2LpJDHXss7elesntPQ0bKM82mcgUWZSOfl7w6RPJ7rDkGVTramaLBKk3dO7uJ04BIQF5ATuD1RLuWDTwHCU9gWwsxsTzFaQ==',key_name='tempest-TestSecurityGroupsBasicOps-568533765',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:42:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-08tv1d98',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:42:33Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.994 257641 DEBUG nova.network.os_vif_util [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.995 257641 DEBUG nova.network.os_vif_util [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:64:30:27,bridge_name='br-int',has_traffic_filtering=True,id=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c50a91b-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.995 257641 DEBUG os_vif [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:30:27,bridge_name='br-int',has_traffic_filtering=True,id=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c50a91b-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.996 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:40 np0005539550 nova_compute[257631]: 2025-11-29 08:43:40.997 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c50a91b-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
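The transaction above is ovsdbapp's DelPortCommand against the local OVS database. A hedged sketch of issuing the same delete through ovsdbapp (the socket path and timeout are assumptions, not from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Typical local ovsdb-server endpoint; adjust for the deployment.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    ovs.del_port('tap2c50a91b-2e', bridge='br-int',
                 if_exists=True).execute(check_error=True)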
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.000 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.001 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.004 257641 INFO os_vif [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:64:30:27,bridge_name='br-int',has_traffic_filtering=True,id=2c50a91b-2e4c-4a7f-9f14-15004d9b2af6,network=Network(e9e8d3c4-db7b-4d58-959e-9279d976835d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c50a91b-2e')#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.079 257641 DEBUG nova.compute.manager [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-changed-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.079 257641 DEBUG nova.compute.manager [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Refreshing instance network info cache due to event network-changed-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.080 257641 DEBUG oslo_concurrency.lockutils [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.080 257641 DEBUG oslo_concurrency.lockutils [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.080 257641 DEBUG nova.network.neutron [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Refreshing network info cache for port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:41 np0005539550 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[380070]: [NOTICE]   (380075) : haproxy version is 2.8.14-c23fe91
Nov 29 03:43:41 np0005539550 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[380070]: [NOTICE]   (380075) : path to executable is /usr/sbin/haproxy
Nov 29 03:43:41 np0005539550 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[380070]: [WARNING]  (380075) : Exiting Master process...
Nov 29 03:43:41 np0005539550 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[380070]: [ALERT]    (380075) : Current worker (380077) exited with code 143 (Terminated)
Nov 29 03:43:41 np0005539550 neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d[380070]: [WARNING]  (380075) : All workers exited. Exiting... (0)
Nov 29 03:43:41 np0005539550 systemd[1]: libpod-ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68.scope: Deactivated successfully.
Nov 29 03:43:41 np0005539550 podman[382148]: 2025-11-29 08:43:41.157848553 +0000 UTC m=+0.241437861 container died ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:43:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68-userdata-shm.mount: Deactivated successfully.
Nov 29 03:43:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-da92d758081b86f93be0f26cc860db5f37b4340f6227df9528d133914633a628-merged.mount: Deactivated successfully.
Nov 29 03:43:41 np0005539550 podman[382148]: 2025-11-29 08:43:41.887668782 +0000 UTC m=+0.971258090 container cleanup ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:43:41 np0005539550 podman[382204]: 2025-11-29 08:43:41.897685195 +0000 UTC m=+0.314771906 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:43:41 np0005539550 systemd[1]: libpod-conmon-ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68.scope: Deactivated successfully.
Nov 29 03:43:41 np0005539550 podman[382235]: 2025-11-29 08:43:41.960892235 +0000 UTC m=+0.045729419 container remove ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:43:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:41.967 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4f170755-b962-462f-bb1e-87070cf5be77]: (4, ('Sat Nov 29 08:43:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d (ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68)\nef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68\nSat Nov 29 08:43:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d (ef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68)\nef9d077630bcdc03fabc11d99ba602cc9e8ac8745538390d95acf240b36b9a68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:41.968 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a0082c20-109d-485a-af31-b8710cd5e18a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:41.969 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape9e8d3c4-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:41 np0005539550 kernel: tape9e8d3c4-d0: left promiscuous mode
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.971 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:41 np0005539550 nova_compute[257631]: 2025-11-29 08:43:41.992 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:41.994 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dd805bdb-11d5-457a-9711-20772787cda0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:42.009 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3e802ae5-1c8c-4d17-9149-20969ad938f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:42.011 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba48e81-64f1-4729-bcdf-906adc59a519]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:42.025 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9affdd7c-b039-4c3e-b862-b614fcd9bd11]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 879766, 'reachable_time': 24430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382248, 'error': None, 'target': 'ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:42.027 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:43:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:42.027 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[043cff7a-4898-4676-a425-2ba5ffdc5d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:42 np0005539550 systemd[1]: run-netns-ovnmeta\x2de9e8d3c4\x2ddb7b\x2d4d58\x2d959e\x2d9279d976835d.mount: Deactivated successfully.
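With the haproxy container gone and the tap port removed, the agent deletes the now-empty namespace and systemd reaps its bind mount, as above. Neutron performs the deletion via privsep, with pyroute2 underneath; a hedged direct equivalent (treat the direct call as an assumption):

    from pyroute2 import netns

    ns = "ovnmeta-e9e8d3c4-db7b-4d58-959e-9279d976835d"  # from the log
    if ns in netns.listnetns():
        netns.remove(ns)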
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.127 257641 INFO nova.virt.libvirt.driver [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Deleting instance files /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_del#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.128 257641 INFO nova.virt.libvirt.driver [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Deletion of /var/lib/nova/instances/f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3_del complete#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.197 257641 INFO nova.compute.manager [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Took 1.46 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.198 257641 DEBUG oslo.service.loopingcall [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.198 257641 DEBUG nova.compute.manager [-] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.199 257641 DEBUG nova.network.neutron [-] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:43:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:42.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.532 257641 DEBUG nova.network.neutron [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updated VIF entry in instance network info cache for port 2c50a91b-2e4c-4a7f-9f14-15004d9b2af6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.533 257641 DEBUG nova.network.neutron [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updating instance_info_cache with network_info: [{"id": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "address": "fa:16:3e:64:30:27", "network": {"id": "e9e8d3c4-db7b-4d58-959e-9279d976835d", "bridge": "br-int", "label": "tempest-network-smoke--768821174", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c50a91b-2e", "ovs_interfaceid": "2c50a91b-2e4c-4a7f-9f14-15004d9b2af6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 285 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.7 MiB/s wr, 257 op/s
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.575 257641 DEBUG oslo_concurrency.lockutils [req-0b57e5a5-8065-42dc-bcd6-8883c2c22808 req-cf2c6594-d5d9-452a-bf57-e1bbc66d0841 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:42.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.846 257641 DEBUG nova.network.neutron [-] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.870 257641 INFO nova.compute.manager [-] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Took 0.67 seconds to deallocate network for instance.#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.905 257641 DEBUG nova.compute.manager [req-8d0f2f7e-6d60-4898-93ee-5a5c2564db9d req-d8fd12e9-b870-4b03-8036-943704cc7332 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-vif-deleted-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.928 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:42 np0005539550 nova_compute[257631]: 2025-11-29 08:43:42.928 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.007 257641 DEBUG oslo_concurrency.processutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.123 257641 DEBUG nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-vif-unplugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.125 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.125 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.126 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.126 257641 DEBUG nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] No waiting events found dispatching network-vif-unplugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.126 257641 WARNING nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received unexpected event network-vif-unplugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.127 257641 DEBUG nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received event network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.127 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.127 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.127 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.128 257641 DEBUG nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] No waiting events found dispatching network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.128 257641 WARNING nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Received unexpected event network-vif-plugged-2c50a91b-2e4c-4a7f-9f14-15004d9b2af6 for instance with vm_state deleted and task_state None.#033[00m
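The two WARNING lines are benign here: Neutron's network-vif-unplugged and network-vif-plugged notifications for port 2c50a91b raced with the teardown and arrived after the instance had already reached vm_state deleted, so there was no waiter to dispatch them to. The surrounding Acquiring/acquired/released triads are oslo.concurrency's named-lock tracing; the pattern behind them is simply:

    # The named internal-lock pattern that produces the acquire/release triads.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3-events')
    def _pop_event():
        # Body runs with the semaphore held; entry and exit emit the
        # "acquired ... waited" and "released ... held" DEBUG lines seen above.
        pass
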
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.128 257641 DEBUG nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-changed-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.128 257641 DEBUG nova.compute.manager [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Refreshing instance network info cache due to event network-changed-1dcdca62-3969-4483-aefd-58f07dfb018f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.129 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.129 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.129 257641 DEBUG nova.network.neutron [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Refreshing network info cache for port 1dcdca62-3969-4483-aefd-58f07dfb018f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961159360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.478 257641 DEBUG oslo_concurrency.processutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
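Nova's RBD image backend sizes its disk inventory by shelling out to ceph df, as the CMD line above records; the 0.47 s round trip includes the monitor-side dispatch logged by ceph-mon two lines earlier. The same call from Python would be, with the caveat that the JSON key names shown are those of recent Ceph releases and may differ on other versions:

    # Reproducing the logged command; JSON keys assumed from recent Ceph.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    print(stats["stats"]["total_avail_bytes"])   # cluster-wide free bytes
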
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.485 257641 DEBUG nova.compute.provider_tree [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.503 257641 DEBUG nova.scheduler.client.report [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
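The inventory in the line above fixes the node's schedulable capacity via placement's standard formula, capacity = (total - reserved) * allocation_ratio. Worked out from the logged values:

    # Capacity implied by the logged inventory (placement's standard formula).
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
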
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.522 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.558 257641 INFO nova.scheduler.client.report [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3#033[00m
Nov 29 03:43:43 np0005539550 nova_compute[257631]: 2025-11-29 08:43:43.654 257641 DEBUG oslo_concurrency.lockutils [None req-d50f7a16-3c15-4f58-bf45-42bdd8473973 de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:44.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:44 np0005539550 nova_compute[257631]: 2025-11-29 08:43:44.530 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 304 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.5 MiB/s wr, 219 op/s
Nov 29 03:43:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:44.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:44 np0005539550 nova_compute[257631]: 2025-11-29 08:43:44.954 257641 DEBUG nova.network.neutron [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updated VIF entry in instance network info cache for port 1dcdca62-3969-4483-aefd-58f07dfb018f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:44 np0005539550 nova_compute[257631]: 2025-11-29 08:43:44.955 257641 DEBUG nova.network.neutron [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updating instance_info_cache with network_info: [{"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:44 np0005539550 nova_compute[257631]: 2025-11-29 08:43:44.975 257641 DEBUG oslo_concurrency.lockutils [req-d5b91537-b6d7-4074-8ccb-5ef7e3694f0a req-5cdcdbd9-837c-474d-9576-fccac5c93981 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.129 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.130 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.148 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.202 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.203 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.208 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.208 257641 INFO nova.compute.claims [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.334 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2063686144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.798 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.807 257641 DEBUG nova.compute.provider_tree [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.825 257641 DEBUG nova.scheduler.client.report [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.851 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.852 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.899 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.899 257641 DEBUG nova.network.neutron [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.931 257641 INFO nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.947 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:43:45 np0005539550 nova_compute[257631]: 2025-11-29 08:43:45.999 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.033 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.034 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.035 257641 INFO nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Creating image(s)#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.071 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.103 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.132 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.136 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.214 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
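Before reusing the cached base image, Nova probes it with qemu-img info, wrapped in oslo_concurrency.prlimit so that a malformed image cannot push the probe past 1 GiB of address space (--as=1073741824) or 30 s of CPU time. Stripped of the wrapper, the probe and the fields of interest look like this (the JSON keys are qemu-img's documented output):

    # The logged probe without the prlimit wrapper.
    import json, subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    info = json.loads(subprocess.run(
        ["qemu-img", "info", base, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True).stdout)
    print(info["format"], info["virtual-size"])   # image format, size in bytes
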
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.216 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.216 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.217 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.243 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.247 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 00298f05-878f-4ef5-8e10-1c1209ac25da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:46 np0005539550 nova_compute[257631]: 2025-11-29 08:43:46.383 257641 DEBUG nova.policy [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
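The policy "failure" above is routine DEBUG noise: Nova probes network:attach_external_network on every boot, and for a plain member/reader token the rule evaluates false, which only means the user may not attach external networks directly; no error is raised. Nova's wrapper ultimately calls the oslo.policy enforcer; a schematic, non-raising probe looks like the following, where the Enforcer wiring and credentials are illustrative rather than Nova's actual setup:

    # Schematic oslo.policy probe; wiring and creds are illustrative.
    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    creds = {"user_id": "4774e2851bc6407cb0fcde15bd24d1b3",
             "project_id": "0471b9b208874403aa3f0fbe7504ad19",
             "roles": ["reader", "member"]}
    allowed = enforcer.enforce("network:attach_external_network", {}, creds)
    print(allowed)   # False for a member/reader token under default policy
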
Nov 29 03:43:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:46.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.8 MiB/s wr, 254 op/s
Nov 29 03:43:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:46.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.118 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 00298f05-878f-4ef5-8e10-1c1209ac25da_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.870s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.187 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
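With the base image cached locally, the root disk is created by importing it into the vms pool and then growing it to the flavor's 1 GiB root disk (1073741824 bytes), exactly as the two lines above record. Nova performs the resize through the python rbd bindings rather than the CLI; an equivalent CLI sketch of both steps:

    # CLI equivalent of the logged import + resize; Nova itself resizes via
    # the python 'rbd' bindings, so the second command is an approximation.
    import subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    disk = "00298f05-878f-4ef5-8e10-1c1209ac25da_disk"
    common = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    subprocess.run(["rbd", "import", "--pool", "vms", base, disk,
                    "--image-format=2", *common], check=True)
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024",  # MiB
                    disk, *common], check=True)
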
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.231 257641 DEBUG nova.network.neutron [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Successfully created port: 75ce1944-3811-4616-bed7-4715c0d228c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.309 257641 DEBUG nova.objects.instance [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 00298f05-878f-4ef5-8e10-1c1209ac25da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.328 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.329 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Ensure instance console log exists: /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.329 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.330 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.330 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:47Z|00941|binding|INFO|Releasing lport 58d6f574-370c-46a0-9e95-f7c81493d948 from this chassis (sb_readonly=0)
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.528 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:43:47 np0005539550 nova_compute[257631]: 2025-11-29 08:43:47.946 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.064 257641 DEBUG nova.network.neutron [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Successfully updated port: 75ce1944-3811-4616-bed7-4715c0d228c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.083 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.084 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.084 257641 DEBUG nova.network.neutron [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.185 257641 DEBUG nova.compute.manager [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-changed-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.186 257641 DEBUG nova.compute.manager [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Refreshing instance network info cache due to event network-changed-75ce1944-3811-4616-bed7-4715c0d228c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.188 257641 DEBUG oslo_concurrency.lockutils [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.258 257641 DEBUG nova.network.neutron [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:43:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:48.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 297 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 192 op/s
Nov 29 03:43:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:48.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:48 np0005539550 nova_compute[257631]: 2025-11-29 08:43:48.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.533 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.899 257641 DEBUG nova.network.neutron [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Updating instance_info_cache with network_info: [{"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.947 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.947 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Instance network_info: |[{"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.948 257641 DEBUG oslo_concurrency.lockutils [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.948 257641 DEBUG nova.network.neutron [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Refreshing network info cache for port 75ce1944-3811-4616-bed7-4715c0d228c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.950 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Start _get_guest_xml network_info=[{"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.955 257641 WARNING nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.959 257641 DEBUG nova.virt.libvirt.host [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.960 257641 DEBUG nova.virt.libvirt.host [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.967 257641 DEBUG nova.virt.libvirt.host [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.968 257641 DEBUG nova.virt.libvirt.host [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
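The four host.py lines above are the driver's CPU-controller discovery: nothing found under cgroups v1 (expected on a cgroup-v2-only RHEL 9 host), then a hit under v2. The v2 side of the check reduces to reading the standard kernel interface file:

    # Minimal v2 check equivalent to what the driver just logged; the path is
    # the standard cgroup-v2 kernel interface.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        f = Path("/sys/fs/cgroup/cgroup.controllers")
        return f.exists() and "cpu" in f.read_text().split()

    print(has_cgroupsv2_cpu_controller())   # True on this host, per the log
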
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.969 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.969 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.969 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.969 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.970 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.970 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.970 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.970 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.970 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.971 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.971 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.971 257641 DEBUG nova.virt.hardware [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
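The eight nova.virt.hardware lines above trace Nova enumerating CPU topologies for the 1-vCPU m1.nano flavor: with no flavor or image constraints (limits 0:0:0), every (sockets, cores, threads) split whose product equals the vCPU count and which fits under the 65536/65536/65536 ceiling is a candidate, and for one vCPU the only candidate is 1:1:1. A minimal Python sketch of that enumeration, a hypothetical helper rather than Nova's actual _get_possible_cpu_topologies, looks like this:

# Hypothetical sketch of the topology search the hardware.py lines log:
# enumerate every (sockets, cores, threads) split whose product equals the
# vCPU count and which fits inside the 65536/65536/65536 limits.
from typing import Iterator, NamedTuple

class Topology(NamedTuple):
    sockets: int
    cores: int
    threads: int

def possible_topologies(vcpus: int, max_sockets: int = 65536,
                        max_cores: int = 65536,
                        max_threads: int = 65536) -> Iterator[Topology]:
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                yield Topology(s, c, t)

print(list(possible_topologies(1)))
# [Topology(sockets=1, cores=1, threads=1)]  -- matches "Got 1 possible topologies"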
Nov 29 03:43:49 np0005539550 nova_compute[257631]: 2025-11-29 08:43:49.974 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:50.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 297 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 182 op/s
Nov 29 03:43:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1865480088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:50 np0005539550 nova_compute[257631]: 2025-11-29 08:43:50.702 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
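The oslo_concurrency.processutils pair above shows Nova shelling out to "ceph mon dump --format=json" to learn the monitor endpoints it later writes into the guest's RBD disk XML. A sketch of the same call, assuming a reachable cluster and the client.openstack keyring from the log:

# Run the same "ceph mon dump" the log shows and pull out the monitor
# addresses (the JSON monmap carries a "mons" list; field layout varies a
# little between Ceph releases, so treat "addr" as an assumption).
import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True).stdout
monmap = json.loads(out)
for mon in monmap.get("mons", []):
    print(mon["name"], mon.get("addr"))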
Nov 29 03:43:50 np0005539550 nova_compute[257631]: 2025-11-29 08:43:50.730 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:50 np0005539550 nova_compute[257631]: 2025-11-29 08:43:50.733 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:50.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:50 np0005539550 nova_compute[257631]: 2025-11-29 08:43:50.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:50 np0005539550 nova_compute[257631]: 2025-11-29 08:43:50.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:43:50 np0005539550 nova_compute[257631]: 2025-11-29 08:43:50.931 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.000 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:43:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754714570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.180 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.181 257641 DEBUG nova.virt.libvirt.vif [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1610630889',display_name='tempest-TestNetworkBasicOps-server-1610630889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1610630889',id=204,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8K/FjbT8v1GRjTGh9SoZLpoCBVN/noNsAf3vQ/McZ7McLFlsW3xA1XQVc8snQFVGkQ3J6EMrcFGbgk03uGYtbawIUNMZKqf7f/ElPShFkpyo44b2NtC+8W9PLnXTQ86w==',key_name='tempest-TestNetworkBasicOps-1736356646',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-tavxz9uv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:45Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=00298f05-878f-4ef5-8e10-1c1209ac25da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.182 257641 DEBUG nova.network.os_vif_util [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.183 257641 DEBUG nova.network.os_vif_util [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bd:85,bridge_name='br-int',has_traffic_filtering=True,id=75ce1944-3811-4616-bed7-4715c0d228c4,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ce1944-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
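The two os_vif_util lines above show the Neutron-style VIF dict being translated into the VIFOpenVSwitch object whose repr the log prints. A plain-dict sketch of that mapping, with field names mirroring the logged repr rather than the real os_vif object model, is:

# Plain-dict sketch of the nova_to_osvif_vif translation logged above;
# the keys mirror the VIFOpenVSwitch repr, not os_vif's actual classes.
def nova_vif_to_ovs(vif: dict) -> dict:
    details = vif.get("details", {})
    return {
        "id": vif["id"],
        "address": vif["address"],
        "vif_name": vif["devname"],                  # tap75ce1944-38
        "bridge_name": details.get("bridge_name"),   # br-int
        "has_traffic_filtering": details.get("port_filter", False),
        "preserve_on_delete": vif.get("preserve_on_delete", False),
        "active": vif.get("active", False),
    }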
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.184 257641 DEBUG nova.objects.instance [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 00298f05-878f-4ef5-8e10-1c1209ac25da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.202 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <uuid>00298f05-878f-4ef5-8e10-1c1209ac25da</uuid>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <name>instance-000000cc</name>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkBasicOps-server-1610630889</nova:name>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:43:49</nova:creationTime>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <nova:port uuid="75ce1944-3811-4616-bed7-4715c0d228c4">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <entry name="serial">00298f05-878f-4ef5-8e10-1c1209ac25da</entry>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <entry name="uuid">00298f05-878f-4ef5-8e10-1c1209ac25da</entry>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/00298f05-878f-4ef5-8e10-1c1209ac25da_disk">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/00298f05-878f-4ef5-8e10-1c1209ac25da_disk.config">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:75:bd:85"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <target dev="tap75ce1944-38"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/console.log" append="off"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:43:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:43:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:43:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:43:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
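The domain XML dumped by _get_guest_xml above is ordinary libvirt XML, so the RBD disk sources and their monitor endpoints can be pulled back out with the standard library. A sketch, assuming domain_xml holds the XML string logged above:

# Extract the RBD-backed disks from the guest XML: for each <disk> with an
# rbd <source>, print the device kind, the pool/image name, and the
# <host> monitor endpoints Nova filled in from "ceph mon dump".
import xml.etree.ElementTree as ET

root = ET.fromstring(domain_xml)
for disk in root.findall("./devices/disk"):
    src = disk.find("source")
    if src is not None and src.get("protocol") == "rbd":
        hosts = [f'{h.get("name")}:{h.get("port")}' for h in src.findall("host")]
        print(disk.get("device"), src.get("name"), hosts)
# e.g. disk vms/00298f05-878f-4ef5-8e10-1c1209ac25da_disk
#      ['192.168.122.100:6789', '192.168.122.102:6789', '192.168.122.101:6789']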
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.204 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Preparing to wait for external event network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.204 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.204 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.205 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
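The three lockutils lines above trace the prepare_for_instance_event pattern: a per-instance "<uuid>-events" lock is held just long enough to create or fetch the event object that the later network-vif-plugged notification will set. A minimal sketch of that pattern, using a plain threading.Event in place of Nova's actual event objects:

# Hypothetical sketch of the _create_or_get_event locking the log traces.
import threading
from collections import defaultdict

_events_lock = threading.Lock()
_events: dict[str, dict[str, threading.Event]] = defaultdict(dict)

def prepare_for_instance_event(instance_uuid: str,
                               event_name: str) -> threading.Event:
    # "Acquiring lock ... -events" / "released ... held 0.000s"
    with _events_lock:
        return _events[instance_uuid].setdefault(event_name, threading.Event())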
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.205 257641 DEBUG nova.virt.libvirt.vif [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1610630889',display_name='tempest-TestNetworkBasicOps-server-1610630889',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1610630889',id=204,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8K/FjbT8v1GRjTGh9SoZLpoCBVN/noNsAf3vQ/McZ7McLFlsW3xA1XQVc8snQFVGkQ3J6EMrcFGbgk03uGYtbawIUNMZKqf7f/ElPShFkpyo44b2NtC+8W9PLnXTQ86w==',key_name='tempest-TestNetworkBasicOps-1736356646',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-tavxz9uv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:43:45Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=00298f05-878f-4ef5-8e10-1c1209ac25da,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.205 257641 DEBUG nova.network.os_vif_util [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.206 257641 DEBUG nova.network.os_vif_util [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:bd:85,bridge_name='br-int',has_traffic_filtering=True,id=75ce1944-3811-4616-bed7-4715c0d228c4,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ce1944-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.207 257641 DEBUG os_vif [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bd:85,bridge_name='br-int',has_traffic_filtering=True,id=75ce1944-3811-4616-bed7-4715c0d228c4,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ce1944-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.210 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.210 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.210 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.214 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.214 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75ce1944-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.215 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75ce1944-38, col_values=(('external_ids', {'iface-id': '75ce1944-3811-4616-bed7-4715c0d228c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:bd:85', 'vm-uuid': '00298f05-878f-4ef5-8e10-1c1209ac25da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.216 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539550 NetworkManager[49039]: <info>  [1764405831.2170] manager: (tap75ce1944-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/410)
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.219 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.223 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.223 257641 INFO os_vif [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:bd:85,bridge_name='br-int',has_traffic_filtering=True,id=75ce1944-3811-4616-bed7-4715c0d228c4,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ce1944-38')#033[00m
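The AddPortCommand and DbSetCommand transactions above are issued over the OVSDB protocol via ovsdbapp; the equivalent one-shot CLI form, shown only for illustration with the names and external_ids taken from the log, would be:

# ovs-vsctl equivalent of the two ovsdbapp commands logged above: add the
# tap port to br-int and stamp the Interface row with the Neutron/Nova ids.
import subprocess

subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap75ce1944-38",
     "--", "set", "Interface", "tap75ce1944-38",
     "external_ids:iface-id=75ce1944-3811-4616-bed7-4715c0d228c4",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:75:bd:85",
     "external_ids:vm-uuid=00298f05-878f-4ef5-8e10-1c1209ac25da"],
    check=True)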
Nov 29 03:43:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:51.298 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.298 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:51.300 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:43:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:51.301 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.304 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.305 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.305 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:75:bd:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.305 257641 INFO nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Using config drive#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.329 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.464 257641 DEBUG nova.network.neutron [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Updated VIF entry in instance network info cache for port 75ce1944-3811-4616-bed7-4715c0d228c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.464 257641 DEBUG nova.network.neutron [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Updating instance_info_cache with network_info: [{"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.492 257641 DEBUG oslo_concurrency.lockutils [req-a6017a34-09de-4b4c-b8e1-dd27a67967f3 req-efd46004-b314-491c-8f88-36aa13c2b3fe 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.887 257641 INFO nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Creating config drive at /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/disk.config#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.894 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrgc2de0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:51 np0005539550 nova_compute[257631]: 2025-11-29 08:43:51.931 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:52 np0005539550 nova_compute[257631]: 2025-11-29 08:43:52.039 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrgc2de0" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
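The mkisofs invocation above builds the config-2 ISO from a temporary metadata tree (/tmp/tmplrgc2de0 in the log). A reduced sketch of that step; the output path, the metadata content, and the single meta_data.json file are assumptions, while the mkisofs flags are copied from the logged command:

# Build a minimal config-2 ISO the way the logged command does: lay out an
# openstack/latest/meta_data.json tree, then run mkisofs over it.
import json
import pathlib
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    md = pathlib.Path(tmp, "openstack", "latest")
    md.mkdir(parents=True)
    (md / "meta_data.json").write_text(
        json.dumps({"uuid": "00298f05-878f-4ef5-8e10-1c1209ac25da"}))
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "/tmp/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", tmp],
        check=True)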
Nov 29 03:43:52 np0005539550 nova_compute[257631]: 2025-11-29 08:43:52.065 257641 DEBUG nova.storage.rbd_utils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 00298f05-878f-4ef5-8e10-1c1209ac25da_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:43:52 np0005539550 nova_compute[257631]: 2025-11-29 08:43:52.068 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/disk.config 00298f05-878f-4ef5-8e10-1c1209ac25da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:52.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 339 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.7 MiB/s wr, 265 op/s
Nov 29 03:43:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:52.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:52 np0005539550 nova_compute[257631]: 2025-11-29 08:43:52.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:52 np0005539550 nova_compute[257631]: 2025-11-29 08:43:52.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.061 257641 DEBUG oslo_concurrency.processutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/disk.config 00298f05-878f-4ef5-8e10-1c1209ac25da_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.993s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.062 257641 INFO nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Deleting local config drive /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/disk.config because it was imported into RBD.#033[00m
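The two lines above complete the config-drive handoff to Ceph: the ISO is imported into the vms pool as an RBD image, then the local copy is removed. A sketch of that import-then-delete sequence, with the paths and image name taken from the log:

# Import the local config drive into RBD, then delete the local file,
# mirroring the "rbd import" + "Deleting local config drive" lines above.
import os
import subprocess

local = "/var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da/disk.config"
subprocess.run(
    ["rbd", "import", "--pool", "vms", local,
     "00298f05-878f-4ef5-8e10-1c1209ac25da_disk.config",
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)
os.unlink(local)  # local ISO is no longer needed once it lives in RBD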
Nov 29 03:43:53 np0005539550 kernel: tap75ce1944-38: entered promiscuous mode
Nov 29 03:43:53 np0005539550 NetworkManager[49039]: <info>  [1764405833.1172] manager: (tap75ce1944-38): new Tun device (/org/freedesktop/NetworkManager/Devices/411)
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.129 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:53Z|00942|binding|INFO|Claiming lport 75ce1944-3811-4616-bed7-4715c0d228c4 for this chassis.
Nov 29 03:43:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:53Z|00943|binding|INFO|75ce1944-3811-4616-bed7-4715c0d228c4: Claiming fa:16:3e:75:bd:85 10.100.0.12
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.141 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:bd:85 10.100.0.12'], port_security=['fa:16:3e:75:bd:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '00298f05-878f-4ef5-8e10-1c1209ac25da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '207bc049-b56d-4bb7-80c9-386faa32c47d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=019e6a06-0d9a-4978-9f73-700cb716f877, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=75ce1944-3811-4616-bed7-4715c0d228c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.143 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 75ce1944-3811-4616-bed7-4715c0d228c4 in datapath faff47d9-b197-488e-92f7-8e8d5ec1eec7 bound to our chassis#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.146 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network faff47d9-b197-488e-92f7-8e8d5ec1eec7#033[00m
Nov 29 03:43:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:53Z|00944|binding|INFO|Setting lport 75ce1944-3811-4616-bed7-4715c0d228c4 ovn-installed in OVS
Nov 29 03:43:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:53Z|00945|binding|INFO|Setting lport 75ce1944-3811-4616-bed7-4715c0d228c4 up in Southbound
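The four ovn_controller binding lines above record the chassis claiming the logical port and marking it up in the southbound database. One way to verify the same state from the CLI (assuming ovn-sbctl can reach the southbound DB from this node):

# Query the southbound Port_Binding row for the lport the controller
# just claimed; the output should show this chassis and up=[true].
import subprocess

subprocess.run(
    ["ovn-sbctl", "find", "Port_Binding",
     "logical_port=75ce1944-3811-4616-bed7-4715c0d228c4"],
    check=True)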
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.152 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.161 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4f52d407-ceb4-493c-a812-be7f2c9cba29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.162 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfaff47d9-b1 in ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.166 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfaff47d9-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.166 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa5d5e6-defa-4740-81ce-29f43c4b8c54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.167 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c32fae1e-f5db-4996-a310-fc6843e82e21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
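The metadata-agent lines above (and the privsep replies around them) correspond to creating the per-network ovnmeta namespace and a veth pair with one end moved inside it. The agent does this through pyroute2 under privsep; an illustrative root-only sketch using plain "ip", with the names taken from the log:

# Recreate the namespace plumbing the agent logs: an ovnmeta-<network>
# namespace, a tapfaff47d9-b0/-b1 veth pair, and the -b1 end moved inside.
import subprocess

ns = "ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7"
subprocess.run(["ip", "netns", "add", ns], check=True)
subprocess.run(["ip", "link", "add", "tapfaff47d9-b0",
                "type", "veth", "peer", "name", "tapfaff47d9-b1"], check=True)
subprocess.run(["ip", "link", "set", "tapfaff47d9-b1", "netns", ns], check=True)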
Nov 29 03:43:53 np0005539550 systemd-machined[216673]: New machine qemu-109-instance-000000cc.
Nov 29 03:43:53 np0005539550 systemd-udevd[382653]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:53 np0005539550 systemd[1]: Started Virtual Machine qemu-109-instance-000000cc.
Nov 29 03:43:53 np0005539550 NetworkManager[49039]: <info>  [1764405833.1859] device (tap75ce1944-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.184 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[67dd0822-b808-464a-8967-26b93a2746b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 NetworkManager[49039]: <info>  [1764405833.1866] device (tap75ce1944-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.215 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[13451995-5244-4f47-b7ee-b5f5f75cb475]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.260 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[0a8939b7-245c-4d3c-a4e7-90313162b83d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 systemd-udevd[382656]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.266 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[018cd690-f58e-4a29-849d-f880082ee958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 NetworkManager[49039]: <info>  [1764405833.2675] manager: (tapfaff47d9-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/412)
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.301 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a11b1d4b-e46e-4e94-8278-d458ab47612e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.304 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[15452000-da45-4d24-994e-25b641d71a42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 NetworkManager[49039]: <info>  [1764405833.3377] device (tapfaff47d9-b0): carrier: link connected
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.343 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9e629394-9e9f-415b-8c60-54e4721d455e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.362 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9588da90-13cb-4d40-a8b4-fdf868b5693e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaff47d9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 888100, 'reachable_time': 22636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382685, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.379 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e0eb49ed-6d20-4746-8c14-b614359a569b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:20e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 888100, 'tstamp': 888100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382686, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.396 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6b66663c-2632-4281-a80c-3388b9ec5418]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfaff47d9-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:20:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 273], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 888100, 'reachable_time': 22636, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382687, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.427 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dfe4162e-4f86-4ae0-af2b-da5c5494ee07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.488 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9c5a67-b2ee-42b0-bd6c-7914a8371e0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.489 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaff47d9-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.490 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.490 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaff47d9-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:53 np0005539550 kernel: tapfaff47d9-b0: entered promiscuous mode
Nov 29 03:43:53 np0005539550 NetworkManager[49039]: <info>  [1764405833.4927] manager: (tapfaff47d9-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/413)
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.542 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.544 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfaff47d9-b0, col_values=(('external_ids', {'iface-id': '9a08cd83-c19d-4e8b-b8f6-ce380a01cac2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.545 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:53Z|00946|binding|INFO|Releasing lport 9a08cd83-c19d-4e8b-b8f6-ce380a01cac2 from this chassis (sb_readonly=0)
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.546 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.548 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/faff47d9-b197-488e-92f7-8e8d5ec1eec7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/faff47d9-b197-488e-92f7-8e8d5ec1eec7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.549 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a2f2342e-2351-4155-89f7-42aaf8a53b2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.550 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-faff47d9-b197-488e-92f7-8e8d5ec1eec7
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/faff47d9-b197-488e-92f7-8e8d5ec1eec7.pid.haproxy
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID faff47d9-b197-488e-92f7-8e8d5ec1eec7
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:43:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:43:53.551 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'env', 'PROCESS_TAG=haproxy-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/faff47d9-b197-488e-92f7-8e8d5ec1eec7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.563 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.956 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.959 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:43:53 np0005539550 nova_compute[257631]: 2025-11-29 08:43:53.959 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:53 np0005539550 podman[382717]: 2025-11-29 08:43:53.905380844 +0000 UTC m=+0.025242290 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:43:54 np0005539550 podman[382717]: 2025-11-29 08:43:54.146099134 +0000 UTC m=+0.265960560 container create 122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.157 257641 DEBUG nova.compute.manager [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.158 257641 DEBUG oslo_concurrency.lockutils [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.158 257641 DEBUG oslo_concurrency.lockutils [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.159 257641 DEBUG oslo_concurrency.lockutils [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.159 257641 DEBUG nova.compute.manager [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Processing event network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.159 257641 DEBUG nova.compute.manager [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.159 257641 DEBUG oslo_concurrency.lockutils [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.159 257641 DEBUG oslo_concurrency.lockutils [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.159 257641 DEBUG oslo_concurrency.lockutils [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.160 257641 DEBUG nova.compute.manager [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] No waiting events found dispatching network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.160 257641 WARNING nova.compute.manager [req-d9d9bffa-ae8c-4d1c-92c6-d8d9f7287741 req-80521360-d6d9-4b52-a3d2-4ba38a929007 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received unexpected event network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 03:43:54 np0005539550 systemd[1]: Started libpod-conmon-122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896.scope.
Nov 29 03:43:54 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:43:54 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8abed935743ea766a671b129a01c00640c25f048e4a8092a4a6d00e8bdbb906/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:43:54 np0005539550 podman[382717]: 2025-11-29 08:43:54.232519621 +0000 UTC m=+0.352381087 container init 122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:43:54 np0005539550 podman[382717]: 2025-11-29 08:43:54.238831141 +0000 UTC m=+0.358692567 container start 122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:43:54 np0005539550 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[382751]: [NOTICE]   (382755) : New worker (382757) forked
Nov 29 03:43:54 np0005539550 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[382751]: [NOTICE]   (382755) : Loading success.
Nov 29 03:43:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163139822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:54.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.424 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.480 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405834.4798086, 00298f05-878f-4ef5-8e10-1c1209ac25da => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.481 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] VM Started (Lifecycle Event)#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.483 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.487 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.490 257641 INFO nova.virt.libvirt.driver [-] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Instance spawned successfully.#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.490 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.521 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.526 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.529 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.530 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.530 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.531 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.531 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.531 257641 DEBUG nova.virt.libvirt.driver [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.537 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.545 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.545 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000cc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.549 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.550 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.565 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 352 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 209 op/s
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.565 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405834.4802725, 00298f05-878f-4ef5-8e10-1c1209ac25da => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.566 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.589 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.592 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405834.4864185, 00298f05-878f-4ef5-8e10-1c1209ac25da => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.593 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.603 257641 INFO nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Took 8.57 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.604 257641 DEBUG nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.610 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.614 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.636 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.762 257641 INFO nova.compute.manager [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Took 9.57 seconds to build instance.#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.780 257641 DEBUG oslo_concurrency.lockutils [None req-9c6fd544-e7ff-44ee-8f88-14f4563fb85a 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.803 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.804 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3870MB free_disk=20.88034439086914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.805 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.805 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:43:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:54.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.908 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 3941161c-104e-452f-8d56-54600d37d0f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.909 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 00298f05-878f-4ef5-8e10-1c1209ac25da actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.909 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.909 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:43:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:54Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:3f:dc 10.100.0.14
Nov 29 03:43:54 np0005539550 ovn_controller[148680]: 2025-11-29T08:43:54Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:3f:dc 10.100.0.14
Nov 29 03:43:54 np0005539550 nova_compute[257631]: 2025-11-29 08:43:54.986 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:43:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:43:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:43:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3560782021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.677 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.680 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.685 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.699 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.717 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.717 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.972 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405820.9708085, f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:43:55 np0005539550 nova_compute[257631]: 2025-11-29 08:43:55.973 257641 INFO nova.compute.manager [-] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:43:56 np0005539550 nova_compute[257631]: 2025-11-29 08:43:56.002 257641 DEBUG nova.compute.manager [None req-0aa7a9ac-4fdd-4aa3-82c2-97e1d2f5b95d - - - - - -] [instance: f4f236d6-b1e2-4aa1-ad05-7ce2ea0f99b3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:43:56 np0005539550 nova_compute[257631]: 2025-11-29 08:43:56.217 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:43:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:56.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 368 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 5.5 MiB/s wr, 304 op/s
Nov 29 03:43:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:43:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:56.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:43:57 np0005539550 nova_compute[257631]: 2025-11-29 08:43:57.712 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:57 np0005539550 nova_compute[257631]: 2025-11-29 08:43:57.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:43:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:43:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:43:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:43:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.0 MiB/s wr, 261 op/s
Nov 29 03:43:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:43:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:43:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:43:58.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:43:59
Nov 29 03:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'vms']
Nov 29 03:43:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
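Editor's note: the balancer pass above found nothing to do: mode upmap, misplaced ceiling 5%, and "prepared 0/10 changes" means zero of the per-run budget of ten upmap changes were generated across the listed pools. A minimal sketch, assuming journal lines as input, that flags such idle runs (regex derived from the lines above; balancer_idle is an illustrative helper):

import re

# "prepared 0/10 changes" -> (generated, per-run budget); layout taken
# from the balancer lines above.
PREPARED_RE = re.compile(r'\[balancer INFO root\] prepared (\d+)/(\d+) changes')

def balancer_idle(line):
    """True when a balancer summary line reports zero prepared changes."""
    m = PREPARED_RE.search(line)
    return bool(m) and int(m.group(1)) == 0

print(balancer_idle('[balancer INFO root] prepared 0/10 changes'))  # True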
Nov 29 03:43:59 np0005539550 nova_compute[257631]: 2025-11-29 08:43:59.539 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:43:59 np0005539550 nova_compute[257631]: 2025-11-29 08:43:59.670 257641 DEBUG nova.compute.manager [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-changed-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:43:59 np0005539550 nova_compute[257631]: 2025-11-29 08:43:59.670 257641 DEBUG nova.compute.manager [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Refreshing instance network info cache due to event network-changed-75ce1944-3811-4616-bed7-4715c0d228c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:43:59 np0005539550 nova_compute[257631]: 2025-11-29 08:43:59.670 257641 DEBUG oslo_concurrency.lockutils [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:43:59 np0005539550 nova_compute[257631]: 2025-11-29 08:43:59.670 257641 DEBUG oslo_concurrency.lockutils [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:43:59 np0005539550 nova_compute[257631]: 2025-11-29 08:43:59.671 257641 DEBUG nova.network.neutron [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Refreshing network info cache for port 75ce1944-3811-4616-bed7-4715c0d228c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:44:00 np0005539550 podman[382837]: 2025-11-29 08:44:00.327267411 +0000 UTC m=+0.059581499 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 03:44:00 np0005539550 podman[382836]: 2025-11-29 08:44:00.340069945 +0000 UTC m=+0.072408033 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:44:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:00.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 254 op/s
Nov 29 03:44:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
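Editor's note: the monitor re-logs this cache allocation about every five seconds; the byte counts above work out to roughly a 0.95 GiB cache with about 0.32 GiB incremental, 0.32 GiB full, and 0.30 GiB KV allocations. A minimal sketch converting the raw counts (regex layout taken from the line above):

import re

# Field layout taken from the mon _set_new_cache_sizes line above.
CACHE_RE = re.compile(
    r'_set_new_cache_sizes cache_size:(\d+) inc_alloc: (\d+) '
    r'full_alloc: (\d+) kv_alloc: (\d+)'
)

line = ('_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 '
        'full_alloc: 348127232 kv_alloc: 318767104')
sizes = [int(n) for n in CACHE_RE.search(line).groups()]
print([round(n / 2**30, 2) for n in sizes])  # [0.95, 0.32, 0.32, 0.3] GiB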
Nov 29 03:44:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:00.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:00 np0005539550 nova_compute[257631]: 2025-11-29 08:44:00.968 257641 DEBUG nova.network.neutron [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Updated VIF entry in instance network info cache for port 75ce1944-3811-4616-bed7-4715c0d228c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:44:00 np0005539550 nova_compute[257631]: 2025-11-29 08:44:00.969 257641 DEBUG nova.network.neutron [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Updating instance_info_cache with network_info: [{"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:44:00 np0005539550 nova_compute[257631]: 2025-11-29 08:44:00.988 257641 DEBUG oslo_concurrency.lockutils [req-5c075062-8898-42eb-bdfc-37fd63a37dfe req-43b576f2-f0b0-41d9-80a9-17ee952606b9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
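Editor's note: this event-driven refresh holds the refresh_cache lock for the instance from 08:43:59.670 (Acquired) to 08:44:00.988 (Releasing), so the Neutron round trip for port 75ce1944 cost about 1.3 s. A minimal sketch that pairs Acquired/Releasing lines per lock name to derive hold times (timestamp layout taken from the nova lines above; hold_times is an illustrative helper):

import re
from datetime import datetime

# Matches the oslo.concurrency lock lines above: ISO timestamp, then
# "Acquired"/"Releasing" and the quoted lock name.
LOCK_RE = re.compile(
    r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ DEBUG '
    r'oslo_concurrency\.lockutils .* (?P<event>Acquired|Releasing) lock '
    r'"(?P<name>[^"]+)"'
)

def hold_times(lines):
    """Pair Acquired/Releasing per lock name, yield (name, seconds held)."""
    acquired = {}
    for line in lines:
        m = LOCK_RE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m['ts'], '%Y-%m-%d %H:%M:%S.%f')
        if m['event'] == 'Acquired':
            acquired[m['name']] = ts
        elif m['name'] in acquired:
            yield m['name'], (ts - acquired.pop(m['name'])).total_seconds()

demo = [
    '2025-11-29 08:43:59.670 257641 DEBUG oslo_concurrency.lockutils [req-x] '
    'Acquired lock "refresh_cache-00298f05" lock',
    '2025-11-29 08:44:00.988 257641 DEBUG oslo_concurrency.lockutils [req-x] '
    'Releasing lock "refresh_cache-00298f05" lock',
]
print(list(hold_times(demo)))  # [('refresh_cache-00298f05', 1.318)]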
Nov 29 03:44:01 np0005539550 nova_compute[257631]: 2025-11-29 08:44:01.220 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:02.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 372 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 258 op/s
Nov 29 03:44:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:02.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:04.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:04 np0005539550 nova_compute[257631]: 2025-11-29 08:44:04.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 374 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.6 MiB/s wr, 181 op/s
Nov 29 03:44:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:04.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:04 np0005539550 nova_compute[257631]: 2025-11-29 08:44:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:06 np0005539550 nova_compute[257631]: 2025-11-29 08:44:06.224 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:06.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 389 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 154 op/s
Nov 29 03:44:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:44:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:44:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:08.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 401 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 501 KiB/s rd, 2.4 MiB/s wr, 52 op/s
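Editor's note: the pgmap digests from ceph-mgr are the cluster-health heartbeat of this excerpt: 305 PGs stay active+clean throughout while stored data grows from 368 MiB to over 460 MiB under a few MiB/s of client I/O. A minimal sketch extracting the interesting fields (layout taken from the pgmap lines above):

import re

# Field layout taken from the ceph-mgr pgmap lines above.
PGMAP_RE = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
    r'(?P<data>[\d.]+ [KMGT]iB) data, .*?; '
    r'(?P<rd>[\d.]+ [KMGT]iB)/s rd, (?P<wr>[\d.]+ [KMGT]iB)/s wr, '
    r'(?P<ops>\d+) op/s'
)

line = ('pgmap v3126: 305 pgs: 305 active+clean; 401 MiB data, 1.6 GiB used, '
        '19 GiB / 21 GiB avail; 501 KiB/s rd, 2.4 MiB/s wr, 52 op/s')
print(PGMAP_RE.search(line).groupdict())
# {'ver': '3126', 'pgs': '305', 'data': '401 MiB',
#  'rd': '501 KiB', 'wr': '2.4 MiB', 'ops': '52'}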
Nov 29 03:44:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:08.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:44:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:44:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:44:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:44:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:44:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:44:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:44:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:44:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:44:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:09 np0005539550 nova_compute[257631]: 2025-11-29 08:44:09.540 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:10.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 401 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 2.3 MiB/s wr, 33 op/s
Nov 29 03:44:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:44:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:44:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:44:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:44:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:44:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:11 np0005539550 nova_compute[257631]: 2025-11-29 08:44:11.227 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:11Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:bd:85 10.100.0.12
Nov 29 03:44:11 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:11Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:bd:85 10.100.0.12
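Editor's note: the OFFER/ACK pair shows OVN's built-in DHCP responder (pinctrl) answering for MAC fa:16:3e:75:bd:85 with 10.100.0.12, the same fixed IP recorded in the nova network-info cache update earlier. A minimal sketch pulling the lease fields out of such lines (layout taken from the ovn_controller lines above):

import re

# MAC/IP layout taken from the ovn-controller pinctrl lines above.
DHCP_RE = re.compile(
    r'\|pinctrl\([^)]*\)\|INFO\|(?P<msg>DHCPOFFER|DHCPACK) '
    r'(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2}) (?P<ip>\d+(?:\.\d+){3})'
)

line = ('2025-11-29T08:44:11Z|00103|pinctrl(ovn_pinctrl0)|INFO|'
        'DHCPACK fa:16:3e:75:bd:85 10.100.0.12')
m = DHCP_RE.search(line)
print(m['msg'], m['mac'], m['ip'])  # DHCPACK fa:16:3e:75:bd:85 10.100.0.12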
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:11 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9d6aaa62-d8c9-4fe8-9b83-c898553b4d65 does not exist
Nov 29 03:44:11 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 09d6cab1-63d2-4505-b93f-3238a47d8525 does not exist
Nov 29 03:44:11 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c0d7d7d7-7740-48e6-95b0-45f7844bbc4a does not exist
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:44:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:44:12 np0005539550 podman[383280]: 2025-11-29 08:44:12.038727303 +0000 UTC m=+0.098076843 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:44:12 np0005539550 podman[383348]: 2025-11-29 08:44:12.266050046 +0000 UTC m=+0.039210344 container create e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cannon, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:44:12 np0005539550 systemd[1]: Started libpod-conmon-e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271.scope.
Nov 29 03:44:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:44:12 np0005539550 podman[383348]: 2025-11-29 08:44:12.248980354 +0000 UTC m=+0.022140682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:44:12 np0005539550 podman[383348]: 2025-11-29 08:44:12.347808904 +0000 UTC m=+0.120969282 container init e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:44:12 np0005539550 podman[383348]: 2025-11-29 08:44:12.355296714 +0000 UTC m=+0.128457012 container start e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:44:12 np0005539550 podman[383348]: 2025-11-29 08:44:12.35867996 +0000 UTC m=+0.131840338 container attach e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Nov 29 03:44:12 np0005539550 quirky_cannon[383363]: 167 167
Nov 29 03:44:12 np0005539550 systemd[1]: libpod-e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271.scope: Deactivated successfully.
Nov 29 03:44:12 np0005539550 conmon[383363]: conmon e0b7f5e274d898c00289 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271.scope/container/memory.events
Nov 29 03:44:12 np0005539550 podman[383348]: 2025-11-29 08:44:12.36264408 +0000 UTC m=+0.135804378 container died e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:44:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f009862f250968973ec757f73cc6667946a94ef4bb1e558447e21084cf214b02-merged.mount: Deactivated successfully.
Nov 29 03:44:12 np0005539550 podman[383348]: 2025-11-29 08:44:12.402431667 +0000 UTC m=+0.175591965 container remove e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_cannon, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:44:12 np0005539550 systemd[1]: libpod-conmon-e0b7f5e274d898c002892f3e8239f809d72a2c142a008ab1857081939ea09271.scope: Deactivated successfully.
Nov 29 03:44:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:12.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:44:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:44:12 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 03:44:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 457 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 497 KiB/s rd, 5.5 MiB/s wr, 103 op/s
Nov 29 03:44:12 np0005539550 podman[383387]: 2025-11-29 08:44:12.576008859 +0000 UTC m=+0.047562274 container create fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:44:12 np0005539550 systemd[1]: Started libpod-conmon-fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d.scope.
Nov 29 03:44:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:44:12 np0005539550 podman[383387]: 2025-11-29 08:44:12.557997544 +0000 UTC m=+0.029550989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:44:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4469271e446f77f02d78aab3ef8e2fc67296da2e740fe181aaad38660f508d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4469271e446f77f02d78aab3ef8e2fc67296da2e740fe181aaad38660f508d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4469271e446f77f02d78aab3ef8e2fc67296da2e740fe181aaad38660f508d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4469271e446f77f02d78aab3ef8e2fc67296da2e740fe181aaad38660f508d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4469271e446f77f02d78aab3ef8e2fc67296da2e740fe181aaad38660f508d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:12 np0005539550 podman[383387]: 2025-11-29 08:44:12.670339117 +0000 UTC m=+0.141892552 container init fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:44:12 np0005539550 podman[383387]: 2025-11-29 08:44:12.676872302 +0000 UTC m=+0.148425727 container start fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:44:12 np0005539550 podman[383387]: 2025-11-29 08:44:12.680026742 +0000 UTC m=+0.151580187 container attach fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:44:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:13 np0005539550 kind_chebyshev[383403]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:44:13 np0005539550 kind_chebyshev[383403]: --> relative data size: 1.0
Nov 29 03:44:13 np0005539550 kind_chebyshev[383403]: --> All data devices are unavailable
Nov 29 03:44:13 np0005539550 systemd[1]: libpod-fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d.scope: Deactivated successfully.
Nov 29 03:44:13 np0005539550 conmon[383403]: conmon fc8f957986d29aa5ac9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d.scope/container/memory.events
Nov 29 03:44:13 np0005539550 podman[383387]: 2025-11-29 08:44:13.504851234 +0000 UTC m=+0.976404659 container died fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:44:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b4469271e446f77f02d78aab3ef8e2fc67296da2e740fe181aaad38660f508d0-merged.mount: Deactivated successfully.
Nov 29 03:44:13 np0005539550 podman[383387]: 2025-11-29 08:44:13.558582274 +0000 UTC m=+1.030135699 container remove fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_chebyshev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:44:13 np0005539550 systemd[1]: libpod-conmon-fc8f957986d29aa5ac9c068cd93f2396e93b76c3fa5492a4a08e97ff0f411b4d.scope: Deactivated successfully.
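Editor's note: quirky_cannon and kind_chebyshev above (and the further probes that follow) are cephadm's throw-away ceph-volume containers: each goes create, init, start, attach, died, remove within about a second. A minimal sketch measuring create-to-remove lifetimes from the podman event lines (timestamp layout taken from the lines above; journald's nanosecond fractions are truncated to microseconds for strptime; lifetimes is an illustrative helper):

import re
from datetime import datetime

# Timestamp/event layout taken from the podman lines above; the first twelve
# hex digits of the container id are enough to key on.
EVENT_RE = re.compile(
    r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC '
    r'm=\+[\d.]+ container (?P<event>\w+) (?P<cid>[0-9a-f]{12})'
)

def lifetimes(lines):
    """Yield (container id prefix, create-to-remove seconds)."""
    created = {}
    for line in lines:
        m = EVENT_RE.search(line)
        if not m:
            continue
        # Keep only six fractional digits so strptime's %f accepts it.
        ts = datetime.strptime(m['ts'][:26], '%Y-%m-%d %H:%M:%S.%f')
        if m['event'] == 'create':
            created[m['cid']] = ts
        elif m['event'] == 'remove' and m['cid'] in created:
            yield m['cid'], (ts - created.pop(m['cid'])).total_seconds()

# For quirky_cannon above: create at 12.266..., remove at 12.402..., ~0.14 s.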
Nov 29 03:44:14 np0005539550 podman[383577]: 2025-11-29 08:44:14.220671309 +0000 UTC m=+0.044170979 container create eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:44:14 np0005539550 systemd[1]: Started libpod-conmon-eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a.scope.
Nov 29 03:44:14 np0005539550 podman[383577]: 2025-11-29 08:44:14.203385511 +0000 UTC m=+0.026885201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:44:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:44:14 np0005539550 podman[383577]: 2025-11-29 08:44:14.322500135 +0000 UTC m=+0.145999835 container init eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rubin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:44:14 np0005539550 podman[383577]: 2025-11-29 08:44:14.329437251 +0000 UTC m=+0.152936911 container start eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:44:14 np0005539550 podman[383577]: 2025-11-29 08:44:14.332627692 +0000 UTC m=+0.156127362 container attach eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rubin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:44:14 np0005539550 keen_rubin[383594]: 167 167
Nov 29 03:44:14 np0005539550 systemd[1]: libpod-eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a.scope: Deactivated successfully.
Nov 29 03:44:14 np0005539550 podman[383577]: 2025-11-29 08:44:14.335124165 +0000 UTC m=+0.158623835 container died eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:44:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6469e767f28d43b7d33ca0235af6388be2b5175125bb1ba65c8f7ac2d06c2947-merged.mount: Deactivated successfully.
Nov 29 03:44:14 np0005539550 podman[383577]: 2025-11-29 08:44:14.372928751 +0000 UTC m=+0.196428421 container remove eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:44:14 np0005539550 systemd[1]: libpod-conmon-eeb52299d2e05ba5f0de36b4b686849f4b20e4b1678cbf7f58ca06982156bf7a.scope: Deactivated successfully.
Nov 29 03:44:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:14.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 464 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 648 KiB/s rd, 5.9 MiB/s wr, 132 op/s
Nov 29 03:44:14 np0005539550 nova_compute[257631]: 2025-11-29 08:44:14.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:14 np0005539550 podman[383617]: 2025-11-29 08:44:14.62697372 +0000 UTC m=+0.105395838 container create 24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:44:14 np0005539550 systemd[1]: Started libpod-conmon-24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed.scope.
Nov 29 03:44:14 np0005539550 podman[383617]: 2025-11-29 08:44:14.600788418 +0000 UTC m=+0.079210556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:44:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:44:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a466a6d6b254a5007bb7a50d9e384dea58333bc9378fe27fbef5b440b215aec0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a466a6d6b254a5007bb7a50d9e384dea58333bc9378fe27fbef5b440b215aec0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a466a6d6b254a5007bb7a50d9e384dea58333bc9378fe27fbef5b440b215aec0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a466a6d6b254a5007bb7a50d9e384dea58333bc9378fe27fbef5b440b215aec0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:14 np0005539550 podman[383617]: 2025-11-29 08:44:14.733200818 +0000 UTC m=+0.211622946 container init 24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hermann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:44:14 np0005539550 podman[383617]: 2025-11-29 08:44:14.742057602 +0000 UTC m=+0.220479680 container start 24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:44:14 np0005539550 podman[383617]: 2025-11-29 08:44:14.744756361 +0000 UTC m=+0.223178489 container attach 24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hermann, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:44:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:14.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.332 257641 DEBUG oslo_concurrency.lockutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.332 257641 DEBUG oslo_concurrency.lockutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.349 257641 DEBUG nova.objects.instance [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'flavor' on Instance uuid 3941161c-104e-452f-8d56-54600d37d0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.381 257641 DEBUG oslo_concurrency.lockutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
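
The Acquiring/acquired/released triplet above is oslo.concurrency's lock tracing: nova serializes operations on an instance by locking on its UUID, so the reserve, attach and detach calls in this journal cannot interleave for instance 3941161c-104e-452f-8d56-54600d37d0f5. A minimal sketch of the same pattern (function body hypothetical):

    from oslo_concurrency import lockutils

    # The "Acquiring lock / acquired / released" DEBUG lines are emitted by
    # the inner() wrapper in oslo_concurrency/lockutils.py around code like:
    instance_uuid = '3941161c-104e-452f-8d56-54600d37d0f5'

    @lockutils.synchronized(instance_uuid)
    def do_reserve():
        # runs with the per-UUID lock held; stand-in for nova's
        # ComputeManager.reserve_block_device_name.<locals>.do_reserve
        return '/dev/vdb'

    print(do_reserve())
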
Nov 29 03:44:15 np0005539550 elated_hermann[383633]: {
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:    "0": [
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:        {
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "devices": [
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "/dev/loop3"
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            ],
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "lv_name": "ceph_lv0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "lv_size": "7511998464",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "name": "ceph_lv0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "tags": {
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.cluster_name": "ceph",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.crush_device_class": "",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.encrypted": "0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.osd_id": "0",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.type": "block",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:                "ceph.vdo": "0"
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            },
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "type": "block",
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:            "vg_name": "ceph_vg0"
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:        }
Nov 29 03:44:15 np0005539550 elated_hermann[383633]:    ]
Nov 29 03:44:15 np0005539550 elated_hermann[383633]: }
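
The JSON printed by the elated_hermann container is ceph-volume lvm list --format json output: OSD ids mapped to their logical volumes, including the ceph.* LV tags cephadm uses to rebuild its device inventory. A short sketch digesting that payload (data inlined and abbreviated from the output above):

    import json

    # Reduce the `ceph-volume lvm list --format json` payload to a per-OSD
    # summary: the top-level keys are OSD ids, each value a list of LVs.
    payload = '''
    { "0": [ { "lv_path": "/dev/ceph_vg0/ceph_lv0",
               "tags": { "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
                         "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
                         "ceph.type": "block" } } ] }
    '''
    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            tags = lv['tags']
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid {tags['ceph.osd_fsid']}, type {tags['ceph.type']})")
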
Nov 29 03:44:15 np0005539550 systemd[1]: libpod-24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed.scope: Deactivated successfully.
Nov 29 03:44:15 np0005539550 podman[383617]: 2025-11-29 08:44:15.535663344 +0000 UTC m=+1.014085432 container died 24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hermann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:44:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a466a6d6b254a5007bb7a50d9e384dea58333bc9378fe27fbef5b440b215aec0-merged.mount: Deactivated successfully.
Nov 29 03:44:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:15 np0005539550 podman[383617]: 2025-11-29 08:44:15.603104141 +0000 UTC m=+1.081526229 container remove 24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hermann, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:44:15 np0005539550 systemd[1]: libpod-conmon-24d738f9d636a5841be38185c62932c7e039435a68e2ea17e93faaf717beb6ed.scope: Deactivated successfully.
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.664 257641 DEBUG oslo_concurrency.lockutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.665 257641 DEBUG oslo_concurrency.lockutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.665 257641 INFO nova.compute.manager [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Attaching volume 9aa47c29-8250-4a09-aa5f-08b05b14f1c9 to /dev/vdb#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.863 257641 DEBUG os_brick.utils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.865 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.880 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.881 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0950b9-1039-41fa-9dba-9c9558d7f3d2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.882 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.893 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.893 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[4697865b-10bf-47f0-8d1a-5eaf5be9bb58]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.895 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.904 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.905 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[2521fe52-052e-4dd4-b826-840253bb0799]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.906 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[37fdebb6-06a6-4a08-9dc5-4554d40948ba]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.907 257641 DEBUG oslo_concurrency.processutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.938 257641 DEBUG oslo_concurrency.processutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.941 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.941 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.941 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.942 257641 DEBUG os_brick.utils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:44:15 np0005539550 nova_compute[257631]: 2025-11-29 08:44:15.942 257641 DEBUG nova.virt.block_device [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updating existing volume attachment record: b49b892a-ac5a-4566-a337-c119c5661ba9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
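
The ==> / <== pair brackets os-brick's get_connector_properties(): the probes in between (multipathd show status, the iSCSI initiator name, findmnt, nvme version) assemble this host's storage identity, which nova then passes to Cinder so the backend can export the volume to the right initiator. Reconstructed from the traced arguments, the call is roughly (a sketch, not nova's exact code; needs os-brick and root privileges):

    from os_brick.initiator import connector

    # Arguments mirror the trace above; the result is the dict logged at
    # "<== get_connector_properties", including 'initiator' (iSCSI IQN) and
    # 'nqn' (NVMe host NQN).
    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com',
    )
    print(props['initiator'], props.get('nqn'))
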
Nov 29 03:44:16 np0005539550 podman[383801]: 2025-11-29 08:44:16.21204674 +0000 UTC m=+0.042900206 container create 5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:44:16 np0005539550 nova_compute[257631]: 2025-11-29 08:44:16.229 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:16 np0005539550 systemd[1]: Started libpod-conmon-5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a.scope.
Nov 29 03:44:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:44:16 np0005539550 podman[383801]: 2025-11-29 08:44:16.195924032 +0000 UTC m=+0.026777528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:44:16 np0005539550 podman[383801]: 2025-11-29 08:44:16.294743923 +0000 UTC m=+0.125597419 container init 5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:44:16 np0005539550 podman[383801]: 2025-11-29 08:44:16.301998897 +0000 UTC m=+0.132852373 container start 5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:44:16 np0005539550 podman[383801]: 2025-11-29 08:44:16.305030213 +0000 UTC m=+0.135883709 container attach 5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:44:16 np0005539550 eloquent_johnson[383817]: 167 167
Nov 29 03:44:16 np0005539550 systemd[1]: libpod-5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a.scope: Deactivated successfully.
Nov 29 03:44:16 np0005539550 podman[383801]: 2025-11-29 08:44:16.309007124 +0000 UTC m=+0.139860600 container died 5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:44:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6a33a44696dd36502697e125cf28bc9332844421c5344ee072e9877fe550e8c3-merged.mount: Deactivated successfully.
Nov 29 03:44:16 np0005539550 podman[383801]: 2025-11-29 08:44:16.347617081 +0000 UTC m=+0.178470557 container remove 5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_johnson, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:44:16 np0005539550 systemd[1]: libpod-conmon-5a369433ff2f3076420f85f22b5d136f74d2f61462e16997429f31eaac04c52a.scope: Deactivated successfully.
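
The create, init, start, attach, died, remove sequence around eloquent_johnson is cephadm's one-shot container pattern: podman runs a single command in the ceph image, the caller captures stdout, and the container is gone within a second. The "167 167" output matches the ceph uid/gid probe cephadm performs; assuming it stats /var/lib/ceph, the equivalent by hand would be:

    import subprocess

    # One-shot probe in the ceph image, mirroring the container lifecycle
    # above. The stat target is an assumption; the image digest is from the log.
    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', image,
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: "167 167"
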
Nov 29 03:44:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:16.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:16 np0005539550 podman[383840]: 2025-11-29 08:44:16.520479265 +0000 UTC m=+0.040996598 container create d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:44:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 484 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 940 KiB/s rd, 5.6 MiB/s wr, 166 op/s
Nov 29 03:44:16 np0005539550 podman[383840]: 2025-11-29 08:44:16.50366733 +0000 UTC m=+0.024184693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:44:16 np0005539550 systemd[1]: Started libpod-conmon-d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b.scope.
Nov 29 03:44:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:44:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48abd9ab11ae3178711d6deecc254fe8addd4351c368b6e2bc04e8ab40ef5d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48abd9ab11ae3178711d6deecc254fe8addd4351c368b6e2bc04e8ab40ef5d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48abd9ab11ae3178711d6deecc254fe8addd4351c368b6e2bc04e8ab40ef5d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f48abd9ab11ae3178711d6deecc254fe8addd4351c368b6e2bc04e8ab40ef5d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:44:16 np0005539550 podman[383840]: 2025-11-29 08:44:16.663935816 +0000 UTC m=+0.184453149 container init d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:44:16 np0005539550 podman[383840]: 2025-11-29 08:44:16.670710287 +0000 UTC m=+0.191227620 container start d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elion, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:44:16 np0005539550 podman[383840]: 2025-11-29 08:44:16.673896338 +0000 UTC m=+0.194413681 container attach d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elion, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:44:16 np0005539550 nova_compute[257631]: 2025-11-29 08:44:16.744 257641 DEBUG nova.objects.instance [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'flavor' on Instance uuid 3941161c-104e-452f-8d56-54600d37d0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:16 np0005539550 nova_compute[257631]: 2025-11-29 08:44:16.769 257641 DEBUG nova.virt.libvirt.driver [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Attempting to attach volume 9aa47c29-8250-4a09-aa5f-08b05b14f1c9 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Nov 29 03:44:16 np0005539550 nova_compute[257631]: 2025-11-29 08:44:16.773 257641 DEBUG nova.virt.libvirt.guest [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:44:16 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-9aa47c29-8250-4a09-aa5f-08b05b14f1c9">
Nov 29 03:44:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:44:16 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:44:16 np0005539550 nova_compute[257631]:  <serial>9aa47c29-8250-4a09-aa5f-08b05b14f1c9</serial>
Nov 29 03:44:16 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:44:16 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
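
The <disk> element nova logs describes an RBD-backed virtio disk: all three monitors as <host> entries, cephx auth for the 'openstack' user through a libvirt secret, target vdb. The attach_device call reduces to libvirt's attachDeviceFlags on both the live and persistent config, roughly (XML abbreviated from the log):

    import libvirt

    DISK_XML = '''<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-9aa47c29-8250-4a09-aa5f-08b05b14f1c9">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
    </disk>'''

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('3941161c-104e-452f-8d56-54600d37d0f5')
    # AFFECT_LIVE hot-plugs into the running guest, AFFECT_CONFIG persists
    # the device across reboots -- matching the behaviour traced above.
    dom.attachDeviceFlags(
        DISK_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
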
Nov 29 03:44:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:16.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:17 np0005539550 nova_compute[257631]: 2025-11-29 08:44:17.112 257641 DEBUG nova.virt.libvirt.driver [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:17 np0005539550 nova_compute[257631]: 2025-11-29 08:44:17.113 257641 DEBUG nova.virt.libvirt.driver [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:17 np0005539550 nova_compute[257631]: 2025-11-29 08:44:17.113 257641 DEBUG nova.virt.libvirt.driver [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:17 np0005539550 nova_compute[257631]: 2025-11-29 08:44:17.113 257641 DEBUG nova.virt.libvirt.driver [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No VIF found with MAC fa:16:3e:6c:3f:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:44:17 np0005539550 nova_compute[257631]: 2025-11-29 08:44:17.324 257641 DEBUG oslo_concurrency.lockutils [None req-4f6206a5-afa6-419f-9169-127256962fdc 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:17 np0005539550 boring_elion[383856]: {
Nov 29 03:44:17 np0005539550 boring_elion[383856]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:44:17 np0005539550 boring_elion[383856]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:44:17 np0005539550 boring_elion[383856]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:44:17 np0005539550 boring_elion[383856]:        "osd_id": 0,
Nov 29 03:44:17 np0005539550 boring_elion[383856]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:44:17 np0005539550 boring_elion[383856]:        "type": "bluestore"
Nov 29 03:44:17 np0005539550 boring_elion[383856]:    }
Nov 29 03:44:17 np0005539550 boring_elion[383856]: }
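
The boring_elion payload has the shape of ceph-volume raw list --format json: OSD fsid mapped to the device-mapper node, OSD id and store type. Joining it with the earlier lvm list output by osd fsid ties each OSD to both its LV path and its dm device:

    import json

    # Data inlined from the two container outputs above; the join key is
    # the OSD fsid, present in both payloads.
    raw = json.loads('''{
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0, "type": "bluestore"}}''')
    lvm = {'5dd67027-4f06-4800-93bd-47ed1a74c5e6': '/dev/ceph_vg0/ceph_lv0'}

    for fsid, info in raw.items():
        print(f"osd.{info['osd_id']} ({info['type']}): "
              f"dm={info['device']} lv={lvm.get(fsid, '?')}")
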
Nov 29 03:44:17 np0005539550 systemd[1]: libpod-d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b.scope: Deactivated successfully.
Nov 29 03:44:17 np0005539550 podman[383840]: 2025-11-29 08:44:17.483478525 +0000 UTC m=+1.003995848 container died d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elion, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:44:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f48abd9ab11ae3178711d6deecc254fe8addd4351c368b6e2bc04e8ab40ef5d6-merged.mount: Deactivated successfully.
Nov 29 03:44:17 np0005539550 podman[383840]: 2025-11-29 08:44:17.548990883 +0000 UTC m=+1.069508216 container remove d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_elion, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 03:44:17 np0005539550 systemd[1]: libpod-conmon-d81dfaaa2ccc60ad42addb8eaf60cb04062cde99b313b02a445269e4fc67a75b.scope: Deactivated successfully.
Nov 29 03:44:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:44:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:44:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b31740a2-64ac-4d7f-9bab-4c6d7eaaabd5 does not exist
Nov 29 03:44:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev aec022f6-8d90-4e9c-bd76-b4de6f6b034e does not exist
Nov 29 03:44:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2d2806f0-fede-473b-8b20-90d91a35535c does not exist
Nov 29 03:44:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:18.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 484 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.4 MiB/s wr, 170 op/s
Nov 29 03:44:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:18.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:18.975 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:18.977 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:18.979 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:44:19 np0005539550 nova_compute[257631]: 2025-11-29 08:44:19.581 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009660207868043924 of space, bias 1.0, pg target 2.898062360413177 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021617782198027173 of space, bias 1.0, pg target 0.6442099095012097 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 16)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
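
Each pg_autoscaler line works out, to a close approximation, as usage_ratio x bias x the root's PG budget; with the usual mon_target_pg_per_osd of 100 and the 3 OSDs in this cluster (both assumptions) the budget is 300, which reproduces the 'vms' line: 0.009660... x 1.0 x 300 = 2.898..., exactly as logged. Targets that small are then quantized up to the pool's minimum power-of-two pg_num, which is why every pool above stays at its current value. A worked check (a simplification, not the module's exact code; the bias-4 pools land within about 1% of the logged targets):

    # Reconstruct "pg target" from the usage ratios and biases logged above.
    root_pg_budget = 100 * 3  # mon_target_pg_per_osd * OSD count (assumed)

    pools = {                 # name: (usage_ratio, bias) from the log
        'vms':                (0.009660207868043924, 1.0),
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(f"{name}: pg target ~ {ratio * bias * root_pg_budget:.6f}")
    # vms -> ~2.898062 (logged 2.898062...), .mgr -> ~0.006161 (logged 0.006161...)
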
Nov 29 03:44:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:20.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 484 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 160 op/s
Nov 29 03:44:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:20 np0005539550 nova_compute[257631]: 2025-11-29 08:44:20.810 257641 DEBUG oslo_concurrency.lockutils [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:20 np0005539550 nova_compute[257631]: 2025-11-29 08:44:20.810 257641 DEBUG oslo_concurrency.lockutils [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:20 np0005539550 nova_compute[257631]: 2025-11-29 08:44:20.826 257641 INFO nova.compute.manager [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Detaching volume 9aa47c29-8250-4a09-aa5f-08b05b14f1c9#033[00m
Nov 29 03:44:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:20.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.042 257641 INFO nova.virt.block_device [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Attempting to driver detach volume 9aa47c29-8250-4a09-aa5f-08b05b14f1c9 from mountpoint /dev/vdb#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.055 257641 DEBUG nova.virt.libvirt.driver [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Attempting to detach device vdb from instance 3941161c-104e-452f-8d56-54600d37d0f5 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.056 257641 DEBUG nova.virt.libvirt.guest [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-9aa47c29-8250-4a09-aa5f-08b05b14f1c9">
Nov 29 03:44:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <serial>9aa47c29-8250-4a09-aa5f-08b05b14f1c9</serial>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:44:21 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.065 257641 INFO nova.virt.libvirt.driver [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully detached device vdb from instance 3941161c-104e-452f-8d56-54600d37d0f5 from the persistent domain config.#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.065 257641 DEBUG nova.virt.libvirt.driver [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 3941161c-104e-452f-8d56-54600d37d0f5 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.066 257641 DEBUG nova.virt.libvirt.guest [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-9aa47c29-8250-4a09-aa5f-08b05b14f1c9">
Nov 29 03:44:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <serial>9aa47c29-8250-4a09-aa5f-08b05b14f1c9</serial>
Nov 29 03:44:21 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:44:21 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:44:21 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.201 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405861.2005432, 3941161c-104e-452f-8d56-54600d37d0f5 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.202 257641 DEBUG nova.virt.libvirt.driver [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 3941161c-104e-452f-8d56-54600d37d0f5 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.205 257641 INFO nova.virt.libvirt.driver [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully detached device vdb from instance 3941161c-104e-452f-8d56-54600d37d0f5 from the live domain config.#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.232 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.423 257641 DEBUG nova.objects.instance [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'flavor' on Instance uuid 3941161c-104e-452f-8d56-54600d37d0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:21 np0005539550 nova_compute[257631]: 2025-11-29 08:44:21.462 257641 DEBUG oslo_concurrency.lockutils [None req-28ac93ee-8ff1-441f-b732-0d33e9dbc436 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
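
The detach above is two-phase: nova first drops the <disk> from the persistent definition, then from the live one, and then waits for libvirt's DEVICE_REMOVED event for alias virtio-disk1 before releasing the attachment; the "(1/8)" marker shows the live detach would be retried up to eight times. A sketch of the same sequence with the libvirt bindings (XML abbreviated; the event loop reduced to its core):

    import libvirt

    DISK_XML = '''<disk type="network" device="disk">
      <source protocol="rbd" name="volumes/volume-9aa47c29-8250-4a09-aa5f-08b05b14f1c9"/>
      <target dev="vdb" bus="virtio"/>
    </disk>'''

    removed = []
    def on_removed(conn, dom, dev, opaque):
        removed.append(dev)  # e.g. "virtio-disk1"

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, on_removed, None)

    dom = conn.lookupByUUIDString('3941161c-104e-452f-8d56-54600d37d0f5')
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # live
    while not removed:                  # qemu confirms asynchronously
        libvirt.virEventRunDefaultImpl()
    print('guest released', removed[0])
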
Nov 29 03:44:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Nov 29 03:44:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Nov 29 03:44:22 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Nov 29 03:44:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 484 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 710 KiB/s wr, 164 op/s
Nov 29 03:44:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:24 np0005539550 nova_compute[257631]: 2025-11-29 08:44:24.151 257641 DEBUG nova.compute.manager [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:24 np0005539550 nova_compute[257631]: 2025-11-29 08:44:24.214 257641 INFO nova.compute.manager [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] instance snapshotting#033[00m
Nov 29 03:44:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:24.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:24 np0005539550 nova_compute[257631]: 2025-11-29 08:44:24.463 257641 INFO nova.virt.libvirt.driver [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Beginning live snapshot process#033[00m
Nov 29 03:44:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 486 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 416 KiB/s wr, 138 op/s
Nov 29 03:44:24 np0005539550 nova_compute[257631]: 2025-11-29 08:44:24.603 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:24 np0005539550 nova_compute[257631]: 2025-11-29 08:44:24.616 257641 DEBUG nova.virt.libvirt.imagebackend [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No parent info for 4873db8c-b414-4e95-acd9-77caabebe722; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Nov 29 03:44:24 np0005539550 nova_compute[257631]: 2025-11-29 08:44:24.868 257641 DEBUG nova.storage.rbd_utils [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] creating snapshot(2e60079a1477405ab79aa187d6ba8f84) on rbd image(3941161c-104e-452f-8d56-54600d37d0f5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:44:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:24.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Nov 29 03:44:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Nov 29 03:44:25 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Nov 29 03:44:25 np0005539550 nova_compute[257631]: 2025-11-29 08:44:25.733 257641 DEBUG nova.storage.rbd_utils [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] cloning vms/3941161c-104e-452f-8d56-54600d37d0f5_disk@2e60079a1477405ab79aa187d6ba8f84 to images/a9d624a0-fd76-47d0-81ab-f89f80fec0c1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:44:25 np0005539550 nova_compute[257631]: 2025-11-29 08:44:25.902 257641 DEBUG nova.storage.rbd_utils [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] flattening images/a9d624a0-fd76-47d0-81ab-f89f80fec0c1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:44:26 np0005539550 nova_compute[257631]: 2025-11-29 08:44:26.235 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:26.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:26 np0005539550 nova_compute[257631]: 2025-11-29 08:44:26.514 257641 DEBUG nova.storage.rbd_utils [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] removing snapshot(2e60079a1477405ab79aa187d6ba8f84) on rbd image(3941161c-104e-452f-8d56-54600d37d0f5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
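
Lines 08:44:24.868 through 08:44:26.514 are one complete nova live-snapshot round trip at the RBD layer: create_snap on the instance disk, clone of that snapshot into the images pool, flatten of the clone, then remove_snap on the source. A minimal sketch of the same sequence with the rbd/rados Python bindings, using the pool, image, and snapshot names from the log (the protect/unprotect steps and connection details are assumptions about how the clone is made valid):

    import rados
    import rbd

    SRC = "3941161c-104e-452f-8d56-54600d37d0f5_disk"
    SNAP = "2e60079a1477405ab79aa187d6ba8f84"
    DST = "a9d624a0-fd76-47d0-81ab-f89f80fec0c1"

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    vms, images = cluster.open_ioctx("vms"), cluster.open_ioctx("images")
    try:
        with rbd.Image(vms, SRC) as disk:
            disk.create_snap(SNAP)           # "creating snapshot(...)"
            disk.protect_snap(SNAP)          # assumed: v1 clones need a protected parent
        rbd.RBD().clone(vms, SRC, SNAP, images, DST)  # "cloning vms/... to images/..."
        with rbd.Image(images, DST) as img:
            img.flatten()                    # "flattening images/..." breaks the parent link
        with rbd.Image(vms, SRC) as disk:
            disk.unprotect_snap(SNAP)
            disk.remove_snap(SNAP)           # "removing snapshot(...)"
    finally:
        vms.close()
        images.close()
        cluster.shutdown()

Flattening before removing the parent snapshot is what lets the uploaded image stand on its own once the instance disk's snapshot is deleted.
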
Nov 29 03:44:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 564 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.4 MiB/s rd, 6.3 MiB/s wr, 225 op/s
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.704901) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866705008, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1214, "num_deletes": 251, "total_data_size": 1923871, "memory_usage": 1949216, "flush_reason": "Manual Compaction"}
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866718637, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 1879311, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62295, "largest_seqno": 63508, "table_properties": {"data_size": 1873521, "index_size": 3120, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13007, "raw_average_key_size": 20, "raw_value_size": 1861649, "raw_average_value_size": 2908, "num_data_blocks": 137, "num_entries": 640, "num_filter_entries": 640, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405765, "oldest_key_time": 1764405765, "file_creation_time": 1764405866, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 13779 microseconds, and 6299 cpu microseconds.
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.718699) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 1879311 bytes OK
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.718718) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.720328) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.720351) EVENT_LOG_v1 {"time_micros": 1764405866720346, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.720368) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 1918398, prev total WAL file size 1918398, number of live WAL files 2.
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.721583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(1835KB)], [140(12MB)]
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866721734, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 14798324, "oldest_snapshot_seqno": -1}
Nov 29 03:44:26 np0005539550 nova_compute[257631]: 2025-11-29 08:44:26.722 257641 DEBUG nova.storage.rbd_utils [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] creating snapshot(snap) on rbd image(a9d624a0-fd76-47d0-81ab-f89f80fec0c1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 9725 keys, 12856200 bytes, temperature: kUnknown
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866817238, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 12856200, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12792965, "index_size": 37842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24325, "raw_key_size": 257300, "raw_average_key_size": 26, "raw_value_size": 12621611, "raw_average_value_size": 1297, "num_data_blocks": 1439, "num_entries": 9725, "num_filter_entries": 9725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764405866, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.817597) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12856200 bytes
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.819226) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.8 rd, 134.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 12.3 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 10246, records dropped: 521 output_compression: NoCompression
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.819249) EVENT_LOG_v1 {"time_micros": 1764405866819238, "job": 86, "event": "compaction_finished", "compaction_time_micros": 95576, "compaction_time_cpu_micros": 48698, "output_level": 6, "num_output_files": 1, "total_output_size": 12856200, "num_input_records": 10246, "num_output_records": 9725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866819873, "job": 86, "event": "table_file_deletion", "file_number": 142}
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764405866822578, "job": 86, "event": "table_file_deletion", "file_number": 140}
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.721009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.822664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.822672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.822676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.822679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:44:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:44:26.822682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
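
The monitor's embedded RocksDB mixes human-readable lines with machine-readable EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, compaction_finished, ...). Pulling those objects out makes the flush/compaction cycle above easy to audit; a small sketch that reads journal lines on stdin:

    import json
    import re
    import sys

    EVENT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    for line in sys.stdin:
        m = EVENT.search(line)
        if m:
            ev = json.loads(m.group(1))
            if ev.get("event") == "compaction_finished":
                print(ev["job"], ev["num_input_records"],
                      ev["num_output_records"], ev["compaction_time_micros"])

The figures reported for job 86 are self-consistent: read-write-amplify 14.7 ≈ (1.8 + 12.3 + 12.3) / 1.8 and write-amplify 6.8 ≈ 12.3 / 1.8, in MB as logged.
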
Nov 29 03:44:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Nov 29 03:44:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Nov 29 03:44:27 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Nov 29 03:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:28.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 629 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 15 MiB/s wr, 327 op/s
Nov 29 03:44:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:28.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:29 np0005539550 nova_compute[257631]: 2025-11-29 08:44:29.225 257641 INFO nova.virt.libvirt.driver [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Snapshot image upload complete#033[00m
Nov 29 03:44:29 np0005539550 nova_compute[257631]: 2025-11-29 08:44:29.225 257641 INFO nova.compute.manager [None req-74e7a0f7-8a52-495b-a7d4-0191988c285d 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Took 5.01 seconds to snapshot the instance on the hypervisor.#033[00m
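
The reported duration matches the driver's own timestamps: snapshotting began at 08:44:24.214 and the upload completed at 08:44:29.225.

    from datetime import datetime

    t0 = datetime.fromisoformat("2025-11-29 08:44:24.214")
    t1 = datetime.fromisoformat("2025-11-29 08:44:29.225")
    print((t1 - t0).total_seconds())  # 5.011 -> "Took 5.01 seconds"
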
Nov 29 03:44:29 np0005539550 nova_compute[257631]: 2025-11-29 08:44:29.586 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:30 np0005539550 podman[384134]: 2025-11-29 08:44:30.423776954 +0000 UTC m=+0.054864639 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:44:30 np0005539550 podman[384135]: 2025-11-29 08:44:30.442706453 +0000 UTC m=+0.066938104 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
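
Both health_status=healthy events are podman's periodic runs of the healthcheck configured in config_data ('test': '/openstack/healthcheck'). The same check can be driven on demand; a sketch via subprocess:

    import subprocess

    # Exit status 0 is what the journal records as health_status=healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"]).returncode
    print("healthy" if rc == 0 else "unhealthy")
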
Nov 29 03:44:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:30.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 629 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 15 MiB/s rd, 15 MiB/s wr, 304 op/s
Nov 29 03:44:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:30.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:31 np0005539550 nova_compute[257631]: 2025-11-29 08:44:31.237 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:32.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 673 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 14 MiB/s rd, 17 MiB/s wr, 406 op/s
Nov 29 03:44:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:32.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:44:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5401.4 total, 600.0 interval#012Cumulative writes: 56K writes, 211K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s#012Cumulative WAL: 56K writes, 20K syncs, 2.70 writes per sync, written: 0.20 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8979 writes, 32K keys, 8979 commit groups, 1.0 writes per commit group, ingest: 34.71 MB, 0.06 MB/s#012Interval WAL: 8979 writes, 3547 syncs, 2.53 writes per sync, written: 0.03 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:44:33 np0005539550 nova_compute[257631]: 2025-11-29 08:44:33.897 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:33 np0005539550 nova_compute[257631]: 2025-11-29 08:44:33.899 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:33 np0005539550 nova_compute[257631]: 2025-11-29 08:44:33.927 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.318 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.319 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
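
The Acquiring/acquired/released triplets are oslo.concurrency's lockutils at work: nova serializes the whole build on a per-instance lock and the resource claim on a host-wide "compute_resources" lock. The logged pattern corresponds to code of roughly this shape (function names here are illustrative, not nova's actual ones):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim(instance_uuid):
        # held for 0.673s in the log while CPU/RAM/disk are claimed
        print("claiming resources for", instance_uuid)

    with lockutils.lock("02ec8d3d-66e9-47c4-a5a4-04389383ad38"):
        instance_claim("02ec8d3d-66e9-47c4-a5a4-04389383ad38")
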
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.327 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.327 257641 INFO nova.compute.claims [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:44:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:34.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.486 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 677 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 8.9 MiB/s wr, 249 op/s
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.589 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:34.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765893841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.943 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
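
The disk-capacity side of the claim shells out to the ceph CLI through oslo.concurrency's processutils rather than using the rados bindings; the 0.457s round trip above is that subprocess. A minimal equivalent (the JSON key names are an assumption based on recent Ceph releases):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
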
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.949 257641 DEBUG nova.compute.provider_tree [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.965 257641 DEBUG nova.scheduler.client.report [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
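
That inventory fixes the schedulable capacity of the node: placement treats (total - reserved) * allocation_ratio as the usable amount per resource class. Worked through for the values logged:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, round((inv["total"] - inv["reserved"]) * inv["allocation_ratio"], 2))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
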
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.992 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:34 np0005539550 nova_compute[257631]: 2025-11-29 08:44:34.993 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.041 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.042 257641 DEBUG nova.network.neutron [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.057 257641 INFO nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.079 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.153 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.154 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.154 257641 INFO nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Creating image(s)#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.180 257641 DEBUG nova.storage.rbd_utils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.207 257641 DEBUG nova.storage.rbd_utils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.234 257641 DEBUG nova.storage.rbd_utils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.237 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "42545339ae39bae2930aaca12b7aeb632bbd4b94" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.238 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "42545339ae39bae2930aaca12b7aeb632bbd4b94" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.410 257641 DEBUG nova.policy [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01c0b956e2c74d5798d01fc2be0a8bac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b3b0484057a4e3db51366d29c6b684d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.532 257641 DEBUG nova.virt.libvirt.imagebackend [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/a9d624a0-fd76-47d0-81ab-f89f80fec0c1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/a9d624a0-fd76-47d0-81ab-f89f80fec0c1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:44:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Nov 29 03:44:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.606 257641 DEBUG nova.virt.libvirt.imagebackend [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/a9d624a0-fd76-47d0-81ab-f89f80fec0c1/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:44:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.608 257641 DEBUG nova.storage.rbd_utils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] cloning images/a9d624a0-fd76-47d0-81ab-f89f80fec0c1@snap to None/02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:44:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:35.621 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:35 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:35.622 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.667 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.773 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "42545339ae39bae2930aaca12b7aeb632bbd4b94" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.944 257641 DEBUG nova.objects.instance [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'migration_context' on Instance uuid 02ec8d3d-66e9-47c4-a5a4-04389383ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.960 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.960 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Ensure instance console log exists: /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.961 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.961 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:35 np0005539550 nova_compute[257631]: 2025-11-29 08:44:35.961 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:36 np0005539550 nova_compute[257631]: 2025-11-29 08:44:36.239 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:36 np0005539550 nova_compute[257631]: 2025-11-29 08:44:36.373 257641 DEBUG nova.network.neutron [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Successfully created port: 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:44:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:36.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 677 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 8.1 MiB/s wr, 266 op/s
Nov 29 03:44:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:36.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.248 257641 DEBUG nova.network.neutron [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Successfully updated port: 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.267 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.267 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquired lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.267 257641 DEBUG nova.network.neutron [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.494 257641 DEBUG nova.network.neutron [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.566 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.567 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.567 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.568 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.568 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.570 257641 INFO nova.compute.manager [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Terminating instance#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.571 257641 DEBUG nova.compute.manager [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:44:37 np0005539550 kernel: tap75ce1944-38 (unregistering): left promiscuous mode
Nov 29 03:44:37 np0005539550 NetworkManager[49039]: <info>  [1764405877.6221] device (tap75ce1944-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.630 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:37Z|00947|binding|INFO|Releasing lport 75ce1944-3811-4616-bed7-4715c0d228c4 from this chassis (sb_readonly=0)
Nov 29 03:44:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:37Z|00948|binding|INFO|Setting lport 75ce1944-3811-4616-bed7-4715c0d228c4 down in Southbound
Nov 29 03:44:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:37Z|00949|binding|INFO|Removing iface tap75ce1944-38 ovn-installed in OVS
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.637 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:bd:85 10.100.0.12'], port_security=['fa:16:3e:75:bd:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '00298f05-878f-4ef5-8e10-1c1209ac25da', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '207bc049-b56d-4bb7-80c9-386faa32c47d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=019e6a06-0d9a-4978-9f73-700cb716f877, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=75ce1944-3811-4616-bed7-4715c0d228c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.638 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 75ce1944-3811-4616-bed7-4715c0d228c4 in datapath faff47d9-b197-488e-92f7-8e8d5ec1eec7 unbound from our chassis#033[00m
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.640 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network faff47d9-b197-488e-92f7-8e8d5ec1eec7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.641 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2c3f18-6c3e-4deb-9eae-b57130dbce66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.642 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 namespace which is not needed anymore#033[00m
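
With the last VIF gone from datapath faff47d9-b197-488e-92f7-8e8d5ec1eec7, the agent deletes the per-network metadata namespace, which also takes down the haproxy running inside it (the container exit that follows). The removal reduces to an "ip netns delete"; sketched here with processutils, though the agent itself goes through privsep-rooted helpers, as the privsep reply line above shows:

    from oslo_concurrency import processutils

    ns = "ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7"
    # Requires root; the root_helper choice is an assumption for this sketch.
    processutils.execute("ip", "netns", "delete", ns,
                         run_as_root=True, root_helper="sudo")
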
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.647 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:37 np0005539550 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000cc.scope: Deactivated successfully.
Nov 29 03:44:37 np0005539550 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d000000cc.scope: Consumed 16.351s CPU time.
Nov 29 03:44:37 np0005539550 systemd-machined[216673]: Machine qemu-109-instance-000000cc terminated.
Nov 29 03:44:37 np0005539550 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[382751]: [NOTICE]   (382755) : haproxy version is 2.8.14-c23fe91
Nov 29 03:44:37 np0005539550 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[382751]: [NOTICE]   (382755) : path to executable is /usr/sbin/haproxy
Nov 29 03:44:37 np0005539550 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[382751]: [WARNING]  (382755) : Exiting Master process...
Nov 29 03:44:37 np0005539550 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[382751]: [ALERT]    (382755) : Current worker (382757) exited with code 143 (Terminated)
Nov 29 03:44:37 np0005539550 neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7[382751]: [WARNING]  (382755) : All workers exited. Exiting... (0)
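
The worker's exit code 143 is the conventional 128-plus-signal encoding for SIGTERM, so this is an orderly kill during teardown rather than a crash:

    import signal

    assert 128 + signal.SIGTERM == 143  # SIGTERM is 15
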
Nov 29 03:44:37 np0005539550 systemd[1]: libpod-122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896.scope: Deactivated successfully.
Nov 29 03:44:37 np0005539550 podman[384425]: 2025-11-29 08:44:37.790261576 +0000 UTC m=+0.048774456 container died 122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.816 257641 INFO nova.virt.libvirt.driver [-] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Instance destroyed successfully.#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.817 257641 DEBUG nova.objects.instance [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 00298f05-878f-4ef5-8e10-1c1209ac25da obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896-userdata-shm.mount: Deactivated successfully.
Nov 29 03:44:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d8abed935743ea766a671b129a01c00640c25f048e4a8092a4a6d00e8bdbb906-merged.mount: Deactivated successfully.
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.831 257641 DEBUG nova.virt.libvirt.vif [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1610630889',display_name='tempest-TestNetworkBasicOps-server-1610630889',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1610630889',id=204,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBM8K/FjbT8v1GRjTGh9SoZLpoCBVN/noNsAf3vQ/McZ7McLFlsW3xA1XQVc8snQFVGkQ3J6EMrcFGbgk03uGYtbawIUNMZKqf7f/ElPShFkpyo44b2NtC+8W9PLnXTQ86w==',key_name='tempest-TestNetworkBasicOps-1736356646',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-tavxz9uv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:43:54Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=00298f05-878f-4ef5-8e10-1c1209ac25da,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.833 257641 DEBUG nova.network.os_vif_util [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "75ce1944-3811-4616-bed7-4715c0d228c4", "address": "fa:16:3e:75:bd:85", "network": {"id": "faff47d9-b197-488e-92f7-8e8d5ec1eec7", "bridge": "br-int", "label": "tempest-network-smoke--1652969305", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75ce1944-38", "ovs_interfaceid": "75ce1944-3811-4616-bed7-4715c0d228c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.834 257641 DEBUG nova.network.os_vif_util [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:75:bd:85,bridge_name='br-int',has_traffic_filtering=True,id=75ce1944-3811-4616-bed7-4715c0d228c4,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ce1944-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.835 257641 DEBUG os_vif [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:bd:85,bridge_name='br-int',has_traffic_filtering=True,id=75ce1944-3811-4616-bed7-4715c0d228c4,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ce1944-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
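Nova hands the actual unplug to the os-vif library here, via its public os_vif.initialize() and os_vif.unplug() entry points (the log even shows the call site in os_vif/__init__.py). A minimal sketch of the same call, rebuilt from the VIFOpenVSwitch fields the log line itself prints; field names and object paths are taken from the log and from os-vif's public objects, but treat this as an illustration, not nova's code:

    # Minimal sketch of the os-vif unplug logged above; values copied
    # from the VIFOpenVSwitch repr in the log line.
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # load the plug/unplug plugins ('ovs' here)

    vif = VIFOpenVSwitch(
        id="75ce1944-3811-4616-bed7-4715c0d228c4",
        address="fa:16:3e:75:bd:85",
        bridge_name="br-int",
        vif_name="tap75ce1944-38",
        plugin="ovs",
        network=Network(id="faff47d9-b197-488e-92f7-8e8d5ec1eec7"),
    )
    info = InstanceInfo(uuid="00298f05-878f-4ef5-8e10-1c1209ac25da",
                        name="instance-000000cc")
    os_vif.unplug(vif, info)  # the operation logged as "Unplugging vif"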
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.837 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.837 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75ce1944-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:37 np0005539550 podman[384425]: 2025-11-29 08:44:37.879831412 +0000 UTC m=+0.138344272 container cleanup 122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.879 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.883 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.887 257641 INFO os_vif [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:75:bd:85,bridge_name='br-int',has_traffic_filtering=True,id=75ce1944-3811-4616-bed7-4715c0d228c4,network=Network(faff47d9-b197-488e-92f7-8e8d5ec1eec7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75ce1944-38')#033[00m
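The DelPortCommand transaction at 08:44:37.837 is ovsdbapp's programmatic form of an ovs-vsctl port deletion; the same operation from the shell (root required) would be, roughly:

    # CLI equivalent of the DelPortCommand(port=tap75ce1944-38,
    # bridge=br-int, if_exists=True) transaction logged above.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap75ce1944-38"],
        check=True,
    )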
Nov 29 03:44:37 np0005539550 systemd[1]: libpod-conmon-122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896.scope: Deactivated successfully.
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.967 257641 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-changed-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.967 257641 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Refreshing instance network info cache due to event network-changed-1cd0b9a8-7d4c-48ab-932c-928e5977ab22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.968 257641 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:37 np0005539550 podman[384467]: 2025-11-29 08:44:37.971239645 +0000 UTC m=+0.056214083 container remove 122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.978 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[77b0ce86-43a5-4705-8de0-3ee8b347a0e1]: (4, ('Sat Nov 29 08:44:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 (122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896)\n122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896\nSat Nov 29 08:44:37 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 (122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896)\n122b7912075d6e350d08129289a2af20e4ae24340c3099a8a863e179e7f14896\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
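The privsep reply above carries the captured stdout of the wrapper that stops and deletes the haproxy side-car container. A podman equivalent of that stop/delete sequence, with the container name copied from the log (the wrapper's exact invocation is not shown, so this is an approximation):

    # Approximate equivalent of the stop/delete sequence whose output
    # the privsep reply above captured; name copied from the log line.
    import subprocess

    name = "neutron-haproxy-ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7"
    subprocess.run(["podman", "stop", name], check=True)
    subprocess.run(["podman", "rm", name], check=True)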
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.980 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9890d54d-5d75-494f-be44-d098d9782998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:37.981 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaff47d9-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:37 np0005539550 nova_compute[257631]: 2025-11-29 08:44:37.982 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:37 np0005539550 kernel: tapfaff47d9-b0: left promiscuous mode
Nov 29 03:44:38 np0005539550 nova_compute[257631]: 2025-11-29 08:44:38.003 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:38.006 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[503b689e-123e-464a-b86f-0c9dbb98d58a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:38.025 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb9f21b-b046-4e3d-811c-9edaf8f7298b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:38.027 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ba38e995-9986-417e-b098-a25356fa9596]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:38.058 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[16b8b3d6-46a8-488a-828b-164fad61888d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 888091, 'reachable_time': 22791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384500, 'error': None, 'target': 'ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:38.064 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:44:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:38.064 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[165e71b4-55d9-4030-8986-52060b8c8274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:38 np0005539550 systemd[1]: run-netns-ovnmeta\x2dfaff47d9\x2db197\x2d488e\x2d92f7\x2d8e8d5ec1eec7.mount: Deactivated successfully.
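The remove_netns call at 08:44:38.064 plus the run-netns mount deactivation above together amount to deleting the network namespace. The same cleanup done by hand (root required) would be:

    # Manual equivalent of the remove_netns call logged above.
    import subprocess

    subprocess.run(
        ["ip", "netns", "delete",
         "ovnmeta-faff47d9-b197-488e-92f7-8e8d5ec1eec7"],
        check=True,
    )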
Nov 29 03:44:38 np0005539550 nova_compute[257631]: 2025-11-29 08:44:38.292 257641 INFO nova.virt.libvirt.driver [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Deleting instance files /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da_del#033[00m
Nov 29 03:44:38 np0005539550 nova_compute[257631]: 2025-11-29 08:44:38.293 257641 INFO nova.virt.libvirt.driver [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Deletion of /var/lib/nova/instances/00298f05-878f-4ef5-8e10-1c1209ac25da_del complete#033[00m
Nov 29 03:44:38 np0005539550 nova_compute[257631]: 2025-11-29 08:44:38.383 257641 INFO nova.compute.manager [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:44:38 np0005539550 nova_compute[257631]: 2025-11-29 08:44:38.383 257641 DEBUG oslo.service.loopingcall [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:44:38 np0005539550 nova_compute[257631]: 2025-11-29 08:44:38.383 257641 DEBUG nova.compute.manager [-] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:44:38 np0005539550 nova_compute[257631]: 2025-11-29 08:44:38.384 257641 DEBUG nova.network.neutron [-] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:44:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:38.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 665 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1023 KiB/s rd, 3.1 MiB/s wr, 206 op/s
Nov 29 03:44:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
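The anonymous "HEAD / HTTP/1.0" requests answered with 200 in about a millisecond are most plausibly load-balancer health probes against radosgw rather than user traffic. A hand-rolled probe of the same shape; the endpoint host and port are assumptions, since the access lines record only the client addresses:

    # Reproduces the "HEAD /" probe seen in the beast access lines;
    # RGW_HOST and RGW_PORT are assumptions, not taken from the log.
    import http.client

    RGW_HOST, RGW_PORT = "np0005539550", 8080  # assumed endpoint
    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 expected, as logged
    conn.close()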
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.154 257641 DEBUG nova.compute.manager [req-4d7fe4b4-f451-40e1-920b-b9ede7b3ebb4 req-0b20283d-ab9e-438a-81a6-0c37f2687257 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-vif-unplugged-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.155 257641 DEBUG oslo_concurrency.lockutils [req-4d7fe4b4-f451-40e1-920b-b9ede7b3ebb4 req-0b20283d-ab9e-438a-81a6-0c37f2687257 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.155 257641 DEBUG oslo_concurrency.lockutils [req-4d7fe4b4-f451-40e1-920b-b9ede7b3ebb4 req-0b20283d-ab9e-438a-81a6-0c37f2687257 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.156 257641 DEBUG oslo_concurrency.lockutils [req-4d7fe4b4-f451-40e1-920b-b9ede7b3ebb4 req-0b20283d-ab9e-438a-81a6-0c37f2687257 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.157 257641 DEBUG nova.compute.manager [req-4d7fe4b4-f451-40e1-920b-b9ede7b3ebb4 req-0b20283d-ab9e-438a-81a6-0c37f2687257 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] No waiting events found dispatching network-vif-unplugged-75ce1944-3811-4616-bed7-4715c0d228c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.157 257641 DEBUG nova.compute.manager [req-4d7fe4b4-f451-40e1-920b-b9ede7b3ebb4 req-0b20283d-ab9e-438a-81a6-0c37f2687257 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-vif-unplugged-75ce1944-3811-4616-bed7-4715c0d228c4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
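The Acquiring/acquired/released triple around pop_instance_event is oslo.concurrency's named-lock pattern: the "<instance-uuid>-events" name serializes external-event handling per instance, which is why both the wait and hold times are about a millisecond here. The same idiom in miniature:

    # The named-lock idiom behind the lockutils lines above; the lock
    # name scopes serialization to one instance's event queue.
    from oslo_concurrency import lockutils

    with lockutils.lock("00298f05-878f-4ef5-8e10-1c1209ac25da-events"):
        pass  # pop and dispatch pending events for this instance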
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.255 257641 DEBUG nova.network.neutron [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updating instance_info_cache with network_info: [{"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.281 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Releasing lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.282 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Instance network_info: |[{"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.282 257641 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.283 257641 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Refreshing network info cache for port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.287 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Start _get_guest_xml network_info=[{"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:44:23Z,direct_url=<?>,disk_format='raw',id=a9d624a0-fd76-47d0-81ab-f89f80fec0c1,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1409388708',owner='3b3b0484057a4e3db51366d29c6b684d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:44:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': 'a9d624a0-fd76-47d0-81ab-f89f80fec0c1'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.293 257641 WARNING nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.300 257641 DEBUG nova.virt.libvirt.host [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.301 257641 DEBUG nova.virt.libvirt.host [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.305 257641 DEBUG nova.virt.libvirt.host [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.305 257641 DEBUG nova.virt.libvirt.host [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.307 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.308 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:44:23Z,direct_url=<?>,disk_format='raw',id=a9d624a0-fd76-47d0-81ab-f89f80fec0c1,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1409388708',owner='3b3b0484057a4e3db51366d29c6b684d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:44:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.309 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.309 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.310 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.310 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.310 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.311 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.312 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.312 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.312 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.313 257641 DEBUG nova.virt.hardware [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
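With no flavor or image constraints (all limits and preferences 0:0:0) and a single vCPU, the only factorization is sockets=1, cores=1, threads=1, which is exactly what the log reports. The "possible topologies" step amounts to enumerating sockets*cores*threads triples whose product equals the vCPU count; a rough sketch of that enumeration (not nova's actual implementation):

    # Rough sketch of the topology enumeration logged above (not
    # nova's real code): all triples whose product equals the vCPU
    # count, within the 65536-per-dimension limits shown in the log.
    def possible_topologies(vcpus, limit=65536):
        for sockets in range(1, min(vcpus, limit) + 1):
            if vcpus % sockets:
                continue
            per_socket = vcpus // sockets
            for cores in range(1, min(per_socket, limit) + 1):
                if per_socket % cores:
                    continue
                threads = per_socket // cores
                if threads <= limit:
                    yield sockets, cores, threads

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log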
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.317 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
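Nova's Ceph-backed storage path shells out to the ceph CLI here rather than using a library binding; the monitor dump can be reproduced and parsed with the exact arguments from the log line:

    # Re-run the subprocess call logged above and parse its JSON
    # output; --id/--conf are copied verbatim from the log line.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out)["epoch"])  # monmap epoch, e3 per the mon lines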
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.507 257641 DEBUG nova.network.neutron [-] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.530 257641 INFO nova.compute.manager [-] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Took 1.15 seconds to deallocate network for instance.#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.567 257641 DEBUG nova.compute.manager [req-2383d317-5eac-42ba-a7b6-2521bde47413 req-6d5fb94a-68b4-47ba-a37d-1f6dc82fbedc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-vif-deleted-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.587 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.587 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.591 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:39 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:39.624 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.671 257641 DEBUG oslo_concurrency.processutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826352960' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.831 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.869 257641 DEBUG nova.storage.rbd_utils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:39 np0005539550 nova_compute[257631]: 2025-11-29 08:44:39.874 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245235894' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.117 257641 DEBUG oslo_concurrency.processutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.125 257641 DEBUG nova.compute.provider_tree [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.146 257641 DEBUG nova.scheduler.client.report [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
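The inventory that placement reports as unchanged implies an effective schedulable capacity of (total - reserved) * allocation_ratio per resource class: 32 VCPUs, 7168 MB of RAM, and about 17.1 GB of disk. Worked out from the numbers in the log line:

    # Effective schedulable capacity implied by the inventory above:
    # (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1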
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.172 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.228 257641 INFO nova.scheduler.client.report [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 00298f05-878f-4ef5-8e10-1c1209ac25da#033[00m
Nov 29 03:44:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:44:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200712796' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.317 257641 DEBUG oslo_concurrency.lockutils [None req-d93a9800-2a74-45ed-841f-d99517ea40eb 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.329 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.330 257641 DEBUG nova.virt.libvirt.vif [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1381850634',display_name='tempest-TestStampPattern-server-1381850634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1381850634',id=207,image_ref='a9d624a0-fd76-47d0-81ab-f89f80fec0c1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaYR98uKIjTIgJJtQ2rdiQ8Vh+nL0em9VusryiAjil0FJtSxO+lnk+ODZdl0LgdlhrdeOvooo+j67fYKSbk9O76X3xg1L2IaHTBYV9x6ArSe2Q3HBmSDWpJ9bh8XmHSeQ==',key_name='tempest-TestStampPattern-826635977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b3b0484057a4e3db51366d29c6b684d',ramdisk_id='',reservation_id='r-8zb0f04m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='3941161c-104e-452f-8d56-54600d37d0f5',image_min_disk='1',image_min_ram='0',image_owner_id='3b3b0484057a4e3db51366d29c6b684d',image_owner_project_name='tempest-TestStampPattern-1305027466',image_owner_user_name='tempest-TestStampPattern-1305027466-project-member',image_user_id='01c0b956e2c74d5798d01fc2be0a8bac',network_allocated='True',owner_project_name='tempest-TestStampPattern-1305027466',owner_user_name='tempest-TestStampPattern-1305027466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:35Z,user_data=None,user_id='01c0b956e2c74d5798d01fc2be0a8bac',uuid=02ec8d3d-66e9-47c4-a5a4-04389383ad38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.331 257641 DEBUG nova.network.os_vif_util [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converting VIF {"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.331 257641 DEBUG nova.network.os_vif_util [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=1cd0b9a8-7d4c-48ab-932c-928e5977ab22,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd0b9a8-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.332 257641 DEBUG nova.objects.instance [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'pci_devices' on Instance uuid 02ec8d3d-66e9-47c4-a5a4-04389383ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.355 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <uuid>02ec8d3d-66e9-47c4-a5a4-04389383ad38</uuid>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <name>instance-000000cf</name>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestStampPattern-server-1381850634</nova:name>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:44:39</nova:creationTime>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:user uuid="01c0b956e2c74d5798d01fc2be0a8bac">tempest-TestStampPattern-1305027466-project-member</nova:user>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:project uuid="3b3b0484057a4e3db51366d29c6b684d">tempest-TestStampPattern-1305027466</nova:project>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="a9d624a0-fd76-47d0-81ab-f89f80fec0c1"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <nova:port uuid="1cd0b9a8-7d4c-48ab-932c-928e5977ab22">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <entry name="serial">02ec8d3d-66e9-47c4-a5a4-04389383ad38</entry>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <entry name="uuid">02ec8d3d-66e9-47c4-a5a4-04389383ad38</entry>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:d2:f8:73"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <target dev="tap1cd0b9a8-7d"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/console.log" append="off"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:44:40 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:44:40 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:44:40 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:44:40 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
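[Editor's note] The XML dumped between "End _get_guest_xml" and the driver.py:7555 marker above is the complete libvirt domain for instance-000000cf: 128 MiB of RAM, one Nehalem vCPU, two RBD-backed disks (vda plus the sda config-drive CD-ROM), and one virtio tap interface. A quick stdlib sketch for pulling that wiring out of such a dump, assuming the XML has been copied into a local file:

    # Sketch: summarize disks and NICs from a domain XML saved out of
    # the debug log above ('domain.xml' is a hypothetical local copy).
    import xml.etree.ElementTree as ET

    dom = ET.parse('domain.xml').getroot()
    for disk in dom.findall('./devices/disk'):
        src, tgt = disk.find('source'), disk.find('target')
        print(disk.get('device'), tgt.get('dev'), tgt.get('bus'),
              src.get('protocol'), src.get('name'))
        # -> disk vda virtio rbd vms/02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk
        # -> cdrom sda sata rbd vms/02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config
    for iface in dom.findall('./devices/interface'):
        print(iface.get('type'), iface.find('mac').get('address'),
              iface.find('target').get('dev'))
        # -> ethernet fa:16:3e:d2:f8:73 tap1cd0b9a8-7d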
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.356 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Preparing to wait for external event network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.356 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.356 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.357 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.357 257641 DEBUG nova.virt.libvirt.vif [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:44:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1381850634',display_name='tempest-TestStampPattern-server-1381850634',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1381850634',id=207,image_ref='a9d624a0-fd76-47d0-81ab-f89f80fec0c1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaYR98uKIjTIgJJtQ2rdiQ8Vh+nL0em9VusryiAjil0FJtSxO+lnk+ODZdl0LgdlhrdeOvooo+j67fYKSbk9O76X3xg1L2IaHTBYV9x6ArSe2Q3HBmSDWpJ9bh8XmHSeQ==',key_name='tempest-TestStampPattern-826635977',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b3b0484057a4e3db51366d29c6b684d',ramdisk_id='',reservation_id='r-8zb0f04m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='3941161c-104e-452f-8d56-54600d37d0f5',image_min_disk='1',image_min_ram='0',image_owner_id='3b3b0484057a4e3db51366d29c6b684d',image_owner_project_name='tempest-TestStampPattern-1305027466',image_owner_user_name='tempest-TestStampPattern-1305027466-project-member',image_user_id='01c0b956e2c74d5798d01fc2be0a8bac',network_allocated='True',owner_project_name='tempest-TestStampPattern-1305027466',owner_user_name='tempest-TestStampPattern-1305027466-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:44:35Z,user_data=None,user_id='01c0b956e2c74d5798d01fc2be0a8bac',uuid=02ec8d3d-66e9-47c4-a5a4-04389383ad38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.358 257641 DEBUG nova.network.os_vif_util [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converting VIF {"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.358 257641 DEBUG nova.network.os_vif_util [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=1cd0b9a8-7d4c-48ab-932c-928e5977ab22,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd0b9a8-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.358 257641 DEBUG os_vif [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=1cd0b9a8-7d4c-48ab-932c-928e5977ab22,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd0b9a8-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.359 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.359 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.360 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.362 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.362 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cd0b9a8-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.362 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cd0b9a8-7d, col_values=(('external_ids', {'iface-id': '1cd0b9a8-7d4c-48ab-932c-928e5977ab22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:f8:73', 'vm-uuid': '02ec8d3d-66e9-47c4-a5a4-04389383ad38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.363 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539550 NetworkManager[49039]: <info>  [1764405880.3650] manager: (tap1cd0b9a8-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.366 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.369 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.370 257641 INFO os_vif [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=1cd0b9a8-7d4c-48ab-932c-928e5977ab22,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd0b9a8-7d')#033[00m
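[Editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand records above are the ovsdbapp transactions the os-vif ovs plugin runs to wire the tap device into br-int and key it back to the Neutron port through external_ids. A rough standalone equivalent, assuming the default OVSDB socket path and a caller permitted to write to it:

    # Sketch of the ovsdbapp transaction behind the AddPortCommand and
    # DbSetCommand records above; socket path and timeout are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap1cd0b9a8-7d', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap1cd0b9a8-7d',
            ('external_ids', {
                'iface-id': '1cd0b9a8-7d4c-48ab-932c-928e5977ab22',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:d2:f8:73',
                'vm-uuid': '02ec8d3d-66e9-47c4-a5a4-04389383ad38'})))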
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.431 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.431 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.432 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No VIF found with MAC fa:16:3e:d2:f8:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.432 257641 INFO nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Using config drive#033[00m
Nov 29 03:44:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:40.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.463 257641 DEBUG nova.storage.rbd_utils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 665 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 1023 KiB/s rd, 3.1 MiB/s wr, 206 op/s
Nov 29 03:44:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.888 257641 INFO nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Creating config drive at /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/disk.config#033[00m
Nov 29 03:44:40 np0005539550 nova_compute[257631]: 2025-11-29 08:44:40.892 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8xm34hn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:40.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.029 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl8xm34hn" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.060 257641 DEBUG nova.storage.rbd_utils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] rbd image 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.063 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/disk.config 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.140 257641 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updated VIF entry in instance network info cache for port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.141 257641 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updating instance_info_cache with network_info: [{"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.157 257641 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.158 257641 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-changed-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.158 257641 DEBUG nova.compute.manager [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Refreshing instance network info cache due to event network-changed-75ce1944-3811-4616-bed7-4715c0d228c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.158 257641 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.159 257641 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.159 257641 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Refreshing network info cache for port 75ce1944-3811-4616-bed7-4715c0d228c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.180 257641 DEBUG nova.compute.utils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Can not refresh info_cache because instance was not found refresh_info_cache_for_instance /usr/lib/python3.9/site-packages/nova/compute/utils.py:1010#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.221 257641 DEBUG oslo_concurrency.processutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/disk.config 02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.221 257641 INFO nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Deleting local config drive /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38/disk.config because it was imported into RBD.#033[00m
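[Editor's note] The config-drive flow above has three steps: mkisofs builds the ISO9660 image locally, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is removed once the import succeeds. A condensed reproduction with oslo.concurrency, with every path and flag copied from the log records (requires the 'openstack' cephx keyring referenced by --id):

    # Sketch reproducing the logged config-drive build and RBD import.
    import os
    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '02ec8d3d-66e9-47c4-a5a4-04389383ad38/disk.config')
    # /tmp/tmpl8xm34hn is the temporary metadata tree nova generated.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpl8xm34hn')
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '02ec8d3d-66e9-47c4-a5a4-04389383ad38_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')
    os.unlink(iso)  # nova deletes the local copy after the import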
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.241 257641 DEBUG nova.compute.manager [req-e9879b91-e880-4742-a570-a938f997fc6e req-aabc99c6-5d46-4863-bb7a-c0e139f4a143 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received event network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.241 257641 DEBUG oslo_concurrency.lockutils [req-e9879b91-e880-4742-a570-a938f997fc6e req-aabc99c6-5d46-4863-bb7a-c0e139f4a143 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.241 257641 DEBUG oslo_concurrency.lockutils [req-e9879b91-e880-4742-a570-a938f997fc6e req-aabc99c6-5d46-4863-bb7a-c0e139f4a143 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.242 257641 DEBUG oslo_concurrency.lockutils [req-e9879b91-e880-4742-a570-a938f997fc6e req-aabc99c6-5d46-4863-bb7a-c0e139f4a143 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "00298f05-878f-4ef5-8e10-1c1209ac25da-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.242 257641 DEBUG nova.compute.manager [req-e9879b91-e880-4742-a570-a938f997fc6e req-aabc99c6-5d46-4863-bb7a-c0e139f4a143 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] No waiting events found dispatching network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.242 257641 WARNING nova.compute.manager [req-e9879b91-e880-4742-a570-a938f997fc6e req-aabc99c6-5d46-4863-bb7a-c0e139f4a143 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Received unexpected event network-vif-plugged-75ce1944-3811-4616-bed7-4715c0d228c4 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:44:41 np0005539550 kernel: tap1cd0b9a8-7d: entered promiscuous mode
Nov 29 03:44:41 np0005539550 NetworkManager[49039]: <info>  [1764405881.2678] manager: (tap1cd0b9a8-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/415)
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.268 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:41Z|00950|binding|INFO|Claiming lport 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 for this chassis.
Nov 29 03:44:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:41Z|00951|binding|INFO|1cd0b9a8-7d4c-48ab-932c-928e5977ab22: Claiming fa:16:3e:d2:f8:73 10.100.0.9
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.274 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:f8:73 10.100.0.9'], port_security=['fa:16:3e:d2:f8:73 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '02ec8d3d-66e9-47c4-a5a4-04389383ad38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13244464-ec20-4842-bec9-0ac60372e025', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b3b0484057a4e3db51366d29c6b684d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aefb70fb-aad6-4ac2-b50b-5898c331e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca67c27a-85ad-40df-9c83-890f1ece542e, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1cd0b9a8-7d4c-48ab-932c-928e5977ab22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.275 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 in datapath 13244464-ec20-4842-bec9-0ac60372e025 bound to our chassis#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.277 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13244464-ec20-4842-bec9-0ac60372e025#033[00m
Nov 29 03:44:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:41Z|00952|binding|INFO|Setting lport 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 ovn-installed in OVS
Nov 29 03:44:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:41Z|00953|binding|INFO|Setting lport 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 up in Southbound
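[Editor's note] ovn-controller has now claimed logical port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 for this chassis, marked it ovn-installed in OVS, and set it up in the southbound database; the network-vif-plugged event nova receives at 08:44:43 below is the downstream result. One way to verify the binding from this host, assuming ovn-sbctl here can reach the SB DB:

    # Sketch: inspect the Port_Binding row ovn-controller just claimed.
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=1cd0b9a8-7d4c-48ab-932c-928e5977ab22'],
        capture_output=True, text=True, check=True).stdout
    print(out)  # expect the chassis set and up=true after the claim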
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.286 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.290 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.292 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e216c632-1cb3-4d96-9be5-75624c193048]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:41 np0005539550 systemd-udevd[384662]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:44:41 np0005539550 systemd-machined[216673]: New machine qemu-110-instance-000000cf.
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.303 257641 INFO nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Port 75ce1944-3811-4616-bed7-4715c0d228c4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.303 257641 DEBUG nova.network.neutron [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:44:41 np0005539550 NetworkManager[49039]: <info>  [1764405881.3095] device (tap1cd0b9a8-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:44:41 np0005539550 NetworkManager[49039]: <info>  [1764405881.3110] device (tap1cd0b9a8-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:44:41 np0005539550 systemd[1]: Started Virtual Machine qemu-110-instance-000000cf.
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.326 257641 DEBUG oslo_concurrency.lockutils [req-383cd0aa-9ee5-47a4-9040-900ff8ddfd06 req-f3206f20-3e7f-4f9d-9628-67462367dd7b 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-00298f05-878f-4ef5-8e10-1c1209ac25da" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.326 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[21a0b4f4-5074-42ae-a0fb-e35be1154a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.328 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[ce22bc3e-62ca-43f2-9302-11e06ae3d16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.352 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1981dbb5-0df3-4ad8-abe8-f8eebd90d27e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.371 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2fe101c1-3496-44c4-b6b0-5a07709ef9be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13244464-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:83:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886459, 'reachable_time': 22739, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384673, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.389 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6278619e-68d4-4043-b302-680fea577a03]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13244464-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886472, 'tstamp': 886472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384675, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13244464-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886475, 'tstamp': 886475}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384675, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.390 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13244464-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:41 np0005539550 nova_compute[257631]: 2025-11-29 08:44:41.392 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.394 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13244464-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.394 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.395 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13244464-e0, col_values=(('external_ids', {'iface-id': '58d6f574-370c-46a0-9e95-f7c81493d948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:44:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:44:41.396 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
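[Editor's note] The privsep replies above are pyroute2 netlink dumps taken inside namespace ovnmeta-13244464-ec20-4842-bec9-0ac60372e025: the metadata agent has tap13244464-e1 up with 169.254.169.254/32 and 10.100.0.2/28, and it re-homes the tap13244464-e0 OVS port onto br-int. A direct look at that namespace with pyroute2 (root required; the namespace name is taken from the log):

    # Sketch: list the addresses the metadata agent provisioned in
    # the ovnmeta namespace shown in the records above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-13244464-ec20-4842-bec9-0ac60372e025') as ns:
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'),
                  addr.get_attr('IFA_ADDRESS'),
                  addr['prefixlen'])
        # expect tap13244464-e1 169.254.169.254 /32 and 10.100.0.2 /28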
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.185 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405882.184789, 02ec8d3d-66e9-47c4-a5a4-04389383ad38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.186 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] VM Started (Lifecycle Event)#033[00m
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.215 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.223 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405882.1850893, 02ec8d3d-66e9-47c4-a5a4-04389383ad38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.223 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.244 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.248 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:44:42 np0005539550 nova_compute[257631]: 2025-11-29 08:44:42.271 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
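[Editor's note] The sync record above reads "DB power_state: 0, VM power_state: 3": libvirt already reports the domain paused (nova launches the guest paused while it waits for the network-vif-plugged event prepared at 08:44:40), while the database still holds NOSTATE. The numeric codes as a quick reference; the mapping mirrors nova.compute.power_state from memory, so verify it against your nova tree:

    # Power-state codes used by the sync_power_state records above
    # (assumed to mirror nova.compute.power_state; verify in your tree).
    POWER_STATES = {
        0: 'NOSTATE',    # DB value while the instance is still building
        1: 'RUNNING',
        3: 'PAUSED',     # what libvirt reports for this domain mid-spawn
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    print(POWER_STATES[3])  # -> PAUSED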
Nov 29 03:44:42 np0005539550 podman[384719]: 2025-11-29 08:44:42.39342261 +0000 UTC m=+0.123005043 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:44:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:42.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 538 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 145 KiB/s rd, 87 KiB/s wr, 162 op/s
Nov 29 03:44:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.402 257641 DEBUG nova.compute.manager [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.402 257641 DEBUG oslo_concurrency.lockutils [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.402 257641 DEBUG oslo_concurrency.lockutils [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.402 257641 DEBUG oslo_concurrency.lockutils [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.403 257641 DEBUG nova.compute.manager [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Processing event network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.403 257641 DEBUG nova.compute.manager [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.403 257641 DEBUG oslo_concurrency.lockutils [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.403 257641 DEBUG oslo_concurrency.lockutils [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.403 257641 DEBUG oslo_concurrency.lockutils [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.403 257641 DEBUG nova.compute.manager [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] No waiting events found dispatching network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.403 257641 WARNING nova.compute.manager [req-5d3894a7-0e67-4989-99ac-f695ed27e3b2 req-e0970e77-bbbd-4c6d-b63a-46ae9f060677 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received unexpected event network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 for instance with vm_state building and task_state spawning.
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.404 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.407 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405883.4075873, 02ec8d3d-66e9-47c4-a5a4-04389383ad38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.407 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] VM Resumed (Lifecycle Event)
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.409 257641 DEBUG nova.virt.libvirt.driver [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.411 257641 INFO nova.virt.libvirt.driver [-] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Instance spawned successfully.
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.411 257641 INFO nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Took 8.26 seconds to spawn the instance on the hypervisor.
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.412 257641 DEBUG nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.440 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.443 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.487 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.509 257641 INFO nova.compute.manager [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Took 9.23 seconds to build instance.
Nov 29 03:44:43 np0005539550 nova_compute[257631]: 2025-11-29 08:44:43.532 257641 DEBUG oslo_concurrency.lockutils [None req-eb9ab807-5506-4496-a4b9-70187196d2ac 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:44:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:44:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:44.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:44:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 519 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 62 KiB/s wr, 165 op/s
Nov 29 03:44:44 np0005539550 nova_compute[257631]: 2025-11-29 08:44:44.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:45 np0005539550 nova_compute[257631]: 2025-11-29 08:44:45.364 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:46Z|00954|binding|INFO|Releasing lport 58d6f574-370c-46a0-9e95-f7c81493d948 from this chassis (sb_readonly=0)
Nov 29 03:44:46 np0005539550 nova_compute[257631]: 2025-11-29 08:44:46.268 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 465 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 58 KiB/s wr, 272 op/s
Nov 29 03:44:46 np0005539550 nova_compute[257631]: 2025-11-29 08:44:46.868 257641 DEBUG nova.compute.manager [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-changed-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:44:46 np0005539550 nova_compute[257631]: 2025-11-29 08:44:46.868 257641 DEBUG nova.compute.manager [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Refreshing instance network info cache due to event network-changed-1cd0b9a8-7d4c-48ab-932c-928e5977ab22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:44:46 np0005539550 nova_compute[257631]: 2025-11-29 08:44:46.869 257641 DEBUG oslo_concurrency.lockutils [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:44:46 np0005539550 nova_compute[257631]: 2025-11-29 08:44:46.869 257641 DEBUG oslo_concurrency.lockutils [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:44:46 np0005539550 nova_compute[257631]: 2025-11-29 08:44:46.869 257641 DEBUG nova.network.neutron [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Refreshing network info cache for port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:44:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 03:44:48 np0005539550 nova_compute[257631]: 2025-11-29 08:44:48.451 257641 DEBUG nova.network.neutron [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updated VIF entry in instance network info cache for port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:44:48 np0005539550 nova_compute[257631]: 2025-11-29 08:44:48.452 257641 DEBUG nova.network.neutron [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updating instance_info_cache with network_info: [{"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:44:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:48.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:48 np0005539550 nova_compute[257631]: 2025-11-29 08:44:48.501 257641 DEBUG oslo_concurrency.lockutils [req-9a8c0a4a-ae1a-4db9-88ce-2c56e268b0dc req-07c32259-cdcb-475a-a98e-aede82955f23 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:44:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 34 KiB/s wr, 264 op/s
Nov 29 03:44:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:48.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:49 np0005539550 nova_compute[257631]: 2025-11-29 08:44:49.504 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:49 np0005539550 nova_compute[257631]: 2025-11-29 08:44:49.596 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:49 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:49Z|00955|binding|INFO|Releasing lport 58d6f574-370c-46a0-9e95-f7c81493d948 from this chassis (sb_readonly=0)
Nov 29 03:44:49 np0005539550 nova_compute[257631]: 2025-11-29 08:44:49.776 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:49 np0005539550 nova_compute[257631]: 2025-11-29 08:44:49.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:49 np0005539550 nova_compute[257631]: 2025-11-29 08:44:49.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:44:49 np0005539550 nova_compute[257631]: 2025-11-29 08:44:49.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:44:50 np0005539550 nova_compute[257631]: 2025-11-29 08:44:50.367 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:44:50 np0005539550 nova_compute[257631]: 2025-11-29 08:44:50.368 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:44:50 np0005539550 nova_compute[257631]: 2025-11-29 08:44:50.368 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:44:50 np0005539550 nova_compute[257631]: 2025-11-29 08:44:50.369 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3941161c-104e-452f-8d56-54600d37d0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:44:50 np0005539550 nova_compute[257631]: 2025-11-29 08:44:50.372 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:50.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 229 op/s
Nov 29 03:44:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:50.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:51 np0005539550 nova_compute[257631]: 2025-11-29 08:44:51.656 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updating instance_info_cache with network_info: [{"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:44:51 np0005539550 nova_compute[257631]: 2025-11-29 08:44:51.675 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:44:51 np0005539550 nova_compute[257631]: 2025-11-29 08:44:51.675 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:44:51 np0005539550 nova_compute[257631]: 2025-11-29 08:44:51.676 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:51 np0005539550 nova_compute[257631]: 2025-11-29 08:44:51.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:51 np0005539550 nova_compute[257631]: 2025-11-29 08:44:51.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:44:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:52.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 229 op/s
Nov 29 03:44:52 np0005539550 nova_compute[257631]: 2025-11-29 08:44:52.814 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405877.8131518, 00298f05-878f-4ef5-8e10-1c1209ac25da => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:44:52 np0005539550 nova_compute[257631]: 2025-11-29 08:44:52.815 257641 INFO nova.compute.manager [-] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] VM Stopped (Lifecycle Event)
Nov 29 03:44:52 np0005539550 nova_compute[257631]: 2025-11-29 08:44:52.840 257641 DEBUG nova.compute.manager [None req-6f5f4272-ff8e-4251-98cb-d785f2f467ef - - - - - -] [instance: 00298f05-878f-4ef5-8e10-1c1209ac25da] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:44:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:52.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:52 np0005539550 nova_compute[257631]: 2025-11-29 08:44:52.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:52 np0005539550 nova_compute[257631]: 2025-11-29 08:44:52.933 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.895 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.940 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.941 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:44:53 np0005539550 nova_compute[257631]: 2025-11-29 08:44:53.942 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:44:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/16394311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.377 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.473 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.474 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000cf as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.478 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.478 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000ca as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:44:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:54.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 440 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 KiB/s wr, 186 op/s
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.670 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.674 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3615MB free_disk=20.896705627441406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.675 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.675 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.799 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 3941161c-104e-452f-8d56-54600d37d0f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.800 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 02ec8d3d-66e9-47c4-a5a4-04389383ad38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.801 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.801 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:44:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:54 np0005539550 nova_compute[257631]: 2025-11-29 08:44:54.919 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:44:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:44:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/342135265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:44:55 np0005539550 nova_compute[257631]: 2025-11-29 08:44:55.373 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:55 np0005539550 nova_compute[257631]: 2025-11-29 08:44:55.386 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:44:55 np0005539550 nova_compute[257631]: 2025-11-29 08:44:55.392 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:44:55 np0005539550 nova_compute[257631]: 2025-11-29 08:44:55.408 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:44:55 np0005539550 nova_compute[257631]: 2025-11-29 08:44:55.433 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:44:55 np0005539550 nova_compute[257631]: 2025-11-29 08:44:55.434 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:44:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:44:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:56Z|00104|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.9
Nov 29 03:44:56 np0005539550 ovn_controller[148680]: 2025-11-29T08:44:56Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d2:f8:73 10.100.0.9
Nov 29 03:44:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:44:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:56.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:44:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 459 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 763 KiB/s wr, 230 op/s
Nov 29 03:44:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:58 np0005539550 nova_compute[257631]: 2025-11-29 08:44:58.102 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:44:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:44:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:44:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:44:58.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 468 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 1002 KiB/s wr, 144 op/s
Nov 29 03:44:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:44:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:44:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:44:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:44:59 np0005539550 nova_compute[257631]: 2025-11-29 08:44:59.429 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:44:59
Nov 29 03:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.rgw.root', 'vms', '.mgr', 'images', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Nov 29 03:44:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:44:59 np0005539550 nova_compute[257631]: 2025-11-29 08:44:59.601 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:00Z|00106|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.9
Nov 29 03:45:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:00Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d2:f8:73 10.100.0.9
Nov 29 03:45:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:00.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:00 np0005539550 nova_compute[257631]: 2025-11-29 08:45:00.557 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 468 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1002 KiB/s wr, 100 op/s
Nov 29 03:45:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:01 np0005539550 podman[384853]: 2025-11-29 08:45:01.335836135 +0000 UTC m=+0.068269528 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:45:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:01Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:f8:73 10.100.0.9
Nov 29 03:45:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:01Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:f8:73 10.100.0.9
Nov 29 03:45:01 np0005539550 podman[384854]: 2025-11-29 08:45:01.354396225 +0000 UTC m=+0.086959942 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 03:45:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:02.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 492 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Nov 29 03:45:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 513 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 125 op/s
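The mgr's pgmap digests compress cluster state into one line every two seconds. A sketch that extracts the PG count, usage, and availability figures; again the pattern is fitted to these specific lines:

import re

PGMAP = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

line = ("pgmap v3159: 305 pgs: 305 active+clean; 513 MiB data, "
        "1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 125 op/s")
m = PGMAP.search(line)
print(m["ver"], m["pgs"], m["used"], m["avail"])  # 3159 305 1.7 GiB 19 GiB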
Nov 29 03:45:04 np0005539550 nova_compute[257631]: 2025-11-29 08:45:04.605 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:04.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:05 np0005539550 nova_compute[257631]: 2025-11-29 08:45:05.559 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:06.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 135 op/s
Nov 29 03:45:06 np0005539550 nova_compute[257631]: 2025-11-29 08:45:06.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:06.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:45:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:45:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:08.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 888 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 29 03:45:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:45:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:45:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:45:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:45:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
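Both rbd_support handlers above reload their per-pool schedules (vms, volumes, backups, images) on a timer; an empty start_after indicates a full reload rather than a paged continuation. A hedged sketch of inspecting those schedules from the CLI via subprocess; the `rbd mirror snapshot schedule ls` and `rbd trash purge schedule ls` subcommands exist in recent Ceph releases, but verify the flags against your version:

import subprocess

def list_schedules(pool: str) -> None:
    # Query the mgr-side schedules the handlers above just reloaded.
    for sub in (["mirror", "snapshot"], ["trash", "purge"]):
        cmd = ["rbd", *sub, "schedule", "ls", "--pool", pool]
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(pool, " ".join(sub), "->", out.stdout.strip() or out.stderr.strip())

for pool in ("vms", "volumes", "backups", "images"):
    list_schedules(pool)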
Nov 29 03:45:09 np0005539550 nova_compute[257631]: 2025-11-29 08:45:09.605 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:10.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:10 np0005539550 nova_compute[257631]: 2025-11-29 08:45:10.561 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 1.9 MiB/s wr, 37 op/s
Nov 29 03:45:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:10.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:11 np0005539550 nova_compute[257631]: 2025-11-29 08:45:11.946 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:11 np0005539550 nova_compute[257631]: 2025-11-29 08:45:11.947 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:11 np0005539550 nova_compute[257631]: 2025-11-29 08:45:11.979 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.092 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.092 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
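The Acquiring/acquired/released triplets bracketing instance_claim are emitted by oslo.concurrency's lock helpers (lockutils.py:404/409/423 in the paths above). A minimal sketch of the same pattern using the public lockutils API; the decorator below is illustrative, not nova's actual wrapper:

from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def instance_claim():
    # The body runs only while the in-process "compute_resources" lock is
    # held; lockutils logs the waited/held durations seen in the journal.
    return "claimed"

print(instance_claim())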
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.099 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.099 257641 INFO nova.compute.claims [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.254 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:12.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 143 KiB/s rd, 1.9 MiB/s wr, 38 op/s
Nov 29 03:45:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/212034809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.732 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
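Before claiming disk, nova shells out to `ceph df` exactly as logged above. A sketch that runs the same command and reads the pool stats back as JSON; the key names follow standard `ceph df --format=json` output (max_avail in bytes), but treat them as an assumption for your release:

import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
df = json.loads(subprocess.check_output(cmd))

print("cluster avail:", df["stats"]["total_avail_bytes"])
for pool in df["pools"]:
    if pool["name"] == "vms":
        # max_avail is what the vms pool can still absorb, in bytes.
        print("vms max_avail:", pool["stats"]["max_avail"])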
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.739 257641 DEBUG nova.compute.provider_tree [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.800 257641 DEBUG nova.scheduler.client.report [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
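The inventory dict above fixes this node's schedulable capacity: placement admits a claim while used + requested <= (total - reserved) * allocation_ratio. Worked out for this host:

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1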
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.836 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.837 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.890 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.890 257641 DEBUG nova.network.neutron [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.909 257641 INFO nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.913 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.936 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.939 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:12 np0005539550 nova_compute[257631]: 2025-11-29 08:45:12.939 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:45:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:12.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.006 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.008 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.008 257641 INFO nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Creating image(s)#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.035 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.066 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.092 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.096 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.164 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
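nova wraps qemu-img in oslo's prlimit runner (1 GiB address space, 30 s CPU) before probing the cached base image. A sketch of the unwrapped probe and the standard JSON fields it relies on:

import json
import subprocess

path = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
info = json.loads(subprocess.check_output(
    ["qemu-img", "info", path, "--force-share", "--output=json"]))

# qemu-img JSON keys: "format" and the guest-visible "virtual-size" in bytes.
print(info["format"], info["virtual-size"])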
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.165 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.165 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.166 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.188 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.192 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:13 np0005539550 podman[385048]: 2025-11-29 08:45:13.352298604 +0000 UTC m=+0.088218833 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.411 257641 DEBUG nova.policy [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4774e2851bc6407cb0fcde15bd24d1b3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0471b9b208874403aa3f0fbe7504ad19', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.477 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.554 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] resizing rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
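The resize target of 1073741824 bytes is exactly the flavor's root_gb=1 expressed in bytes; a one-line check:

root_gb = 1                  # m1.nano root disk, per the flavor logged later
print(root_gb * 1024 ** 3)   # 1073741824 bytes, matching the rbd resize above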
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.669 257641 DEBUG nova.objects.instance [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'migration_context' on Instance uuid 7233dfec-5360-4ae2-a5c7-6a82ac3145c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.682 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.683 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Ensure instance console log exists: /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.683 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.684 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:13 np0005539550 nova_compute[257631]: 2025-11-29 08:45:13.684 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.045 257641 DEBUG nova.network.neutron [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Successfully created port: ec078636-08a9-4e73-b55f-6a3a4d83d29b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:45:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:14.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 521 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 50 KiB/s rd, 1.1 MiB/s wr, 27 op/s
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.607 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.827 257641 DEBUG nova.network.neutron [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Successfully updated port: ec078636-08a9-4e73-b55f-6a3a4d83d29b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.843 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.843 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquired lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.843 257641 DEBUG nova.network.neutron [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:45:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.952 257641 DEBUG nova.compute.manager [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-changed-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.953 257641 DEBUG nova.compute.manager [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Refreshing instance network info cache due to event network-changed-ec078636-08a9-4e73-b55f-6a3a4d83d29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:14 np0005539550 nova_compute[257631]: 2025-11-29 08:45:14.953 257641 DEBUG oslo_concurrency.lockutils [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:15 np0005539550 nova_compute[257631]: 2025-11-29 08:45:15.384 257641 DEBUG nova.network.neutron [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:45:15 np0005539550 nova_compute[257631]: 2025-11-29 08:45:15.563 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:16.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 557 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.5 MiB/s wr, 91 op/s
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.794 257641 DEBUG nova.network.neutron [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updating instance_info_cache with network_info: [{"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
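The network_info blob nova caches is plain JSON once the surrounding log text is stripped. A sketch pulling the bound port's device name, fixed IP, and MTU out of it, with the vif dict trimmed to just the fields used:

import json

network_info = json.loads("""[{"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b",
  "address": "fa:16:3e:53:74:19",
  "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a",
    "subnets": [{"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.3", "type": "fixed"}]}],
    "meta": {"mtu": 1442, "tunneled": true}},
  "devname": "tapec078636-08"}]""")

vif = network_info[0]
ips = [ip["address"]
       for subnet in vif["network"]["subnets"]
       for ip in subnet["ips"] if ip["type"] == "fixed"]
print(vif["devname"], ips, vif["network"]["meta"]["mtu"])
# tapec078636-08 ['10.100.0.3'] 1442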
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.829 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Releasing lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.830 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Instance network_info: |[{"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.830 257641 DEBUG oslo_concurrency.lockutils [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.831 257641 DEBUG nova.network.neutron [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Refreshing network info cache for port ec078636-08a9-4e73-b55f-6a3a4d83d29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.836 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Start _get_guest_xml network_info=[{"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.844 257641 WARNING nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.850 257641 DEBUG nova.virt.libvirt.host [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.851 257641 DEBUG nova.virt.libvirt.host [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.856 257641 DEBUG nova.virt.libvirt.host [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.857 257641 DEBUG nova.virt.libvirt.host [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.858 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.858 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.859 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.859 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.859 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.859 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.860 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.860 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.860 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.860 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.861 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.861 257641 DEBUG nova.virt.hardware [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
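With no flavor or image constraints (limits 0:0:0 against maxima of 65536), every sockets x cores x threads factorization of the vCPU count is admissible, and for vcpus=1 that collapses to the single 1:1:1 topology logged above. A sketch of the enumeration; nova's actual ordering preferences differ, so this only mirrors the search space:

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Yield (sockets, cores, threads) triples whose product is vcpus.
    for s in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % s:
            continue
        for c in range(1, min(vcpus // s, max_cores) + 1):
            if (vcpus // s) % c:
                continue
            t = vcpus // (s * c)
            if t <= max_threads:
                yield (s, c, t)

print(list(possible_topologies(1)))  # [(1, 1, 1)]
print(list(possible_topologies(4)))  # (1, 1, 4), (1, 2, 2), (2, 2, 1), ...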
Nov 29 03:45:16 np0005539550 nova_compute[257631]: 2025-11-29 08:45:16.864 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:16.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279604938' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.340 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.369 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.375 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:45:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/696949588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.843 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
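The two `ceph mon dump` calls feed the libvirt disk definition: nova's rbd backend extracts the monitor addresses to list as <source> hosts in the guest XML. A sketch of that extraction; the JSON layout (mons[].public_addrs.addrvec) matches recent Ceph, but treat the key names as an assumption:

import json
import subprocess

cmd = ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
dump = json.loads(subprocess.check_output(cmd))

# Each mon advertises v1/v2 endpoints; collect the legacy v1 addrs (ip:6789).
addrs = [a["addr"]
         for mon in dump["mons"]
         for a in mon["public_addrs"]["addrvec"]
         if a["type"] == "v1"]
print(addrs)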
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.845 257641 DEBUG nova.virt.libvirt.vif [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:45:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1945607048',display_name='tempest-TestNetworkBasicOps-server-1945607048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1945607048',id=209,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8YYTuPLxWjRdH48dYFWOIbHm3QmgCaIcww5tnBh5TfdDmwK+5KMycSKSEzho4NKJWn4t6JuuudCktUX/JC1QI6JOOE2xZ7zGY3WCixaSY8j6QobQEzNZiXhCr6i7pUpw==',key_name='tempest-TestNetworkBasicOps-1870735197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-x000i7t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:12Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=7233dfec-5360-4ae2-a5c7-6a82ac3145c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.845 257641 DEBUG nova.network.os_vif_util [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.846 257641 DEBUG nova.network.os_vif_util [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:74:19,bridge_name='br-int',has_traffic_filtering=True,id=ec078636-08a9-4e73-b55f-6a3a4d83d29b,network=Network(3ba224d7-2b01-485d-af2a-07d3c8407c0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec078636-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
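The two os_vif_util entries above are the first translation step of the plug path: nova's legacy VIF dict goes into nova_to_osvif_vif and comes back as the much smaller VIFOpenVSwitch object. A minimal stdlib sketch of that field mapping, assuming the "Converting VIF {...}" JSON payload has been saved to vif.json (the filename is hypothetical):

    import json

    # vif.json holds the JSON dict from the "Converting VIF" entry above.
    with open("vif.json") as f:
        vif = json.load(f)

    # The "Converted object" line shows exactly which legacy fields survive:
    osvif_fields = {
        "id": vif["id"],
        "address": vif["address"],
        "bridge_name": vif["details"]["bridge_name"],          # 'br-int'
        "has_traffic_filtering": vif["details"]["port_filter"],
        "vif_name": vif["devname"],                            # 'tapec078636-08'
        "preserve_on_delete": vif["preserve_on_delete"],
        "active": vif["active"],
        "network_id": vif["network"]["id"],
    }
    print(osvif_fields)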
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.848 257641 DEBUG nova.objects.instance [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7233dfec-5360-4ae2-a5c7-6a82ac3145c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.942 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <uuid>7233dfec-5360-4ae2-a5c7-6a82ac3145c2</uuid>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <name>instance-000000d1</name>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestNetworkBasicOps-server-1945607048</nova:name>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:45:16</nova:creationTime>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:user uuid="4774e2851bc6407cb0fcde15bd24d1b3">tempest-TestNetworkBasicOps-828399474-project-member</nova:user>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:project uuid="0471b9b208874403aa3f0fbe7504ad19">tempest-TestNetworkBasicOps-828399474</nova:project>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <nova:port uuid="ec078636-08a9-4e73-b55f-6a3a4d83d29b">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <entry name="serial">7233dfec-5360-4ae2-a5c7-6a82ac3145c2</entry>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <entry name="uuid">7233dfec-5360-4ae2-a5c7-6a82ac3145c2</entry>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk.config">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:53:74:19"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <target dev="tapec078636-08"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/console.log" append="off"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:45:17 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:45:17 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:45:17 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:45:17 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
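The domain XML dumped by _get_guest_xml above is plain libvirt XML plus the nova metadata namespace, so the interesting facts can be pulled out with the stdlib alone. A sketch, assuming the dump has been saved to domain.xml (hypothetical filename):

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    root = ET.parse("domain.xml").getroot()
    print("uuid:  ", root.findtext("uuid"))
    print("memory:", root.findtext("memory"), "KiB")   # 131072 KiB == 128 MiB
    print("flavor:", root.find(".//nova:flavor", NOVA_NS).get("name"))

    # Both disks are served over RBD from the three Ceph monitors:
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        hosts = [h.get("name") for h in src.findall("host")]
        print(disk.get("device"), src.get("name"), hosts)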
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.943 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Preparing to wait for external event network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.943 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.943 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.943 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
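The acquire/release pair above (waited 0.000s, held 0.000s) is the standard oslo.concurrency pattern: a named lock scoped to this instance's event list, taken just long enough to register the network-vif-plugged event being waited on. A minimal sketch of the same pattern; the lock name mirrors the log, the rest is illustrative:

    from oslo_concurrency import lockutils

    instance_uuid = "7233dfec-5360-4ae2-a5c7-6a82ac3145c2"
    events = {}

    def prepare_event(name):
        # lockutils.lock() is a context manager; the "waited"/"held"
        # timings in the log come from its internal wrapper.
        with lockutils.lock(f"{instance_uuid}-events"):
            return events.setdefault(name, object())

    prepare_event("network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b")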
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.944 257641 DEBUG nova.virt.libvirt.vif [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:45:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1945607048',display_name='tempest-TestNetworkBasicOps-server-1945607048',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1945607048',id=209,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8YYTuPLxWjRdH48dYFWOIbHm3QmgCaIcww5tnBh5TfdDmwK+5KMycSKSEzho4NKJWn4t6JuuudCktUX/JC1QI6JOOE2xZ7zGY3WCixaSY8j6QobQEzNZiXhCr6i7pUpw==',key_name='tempest-TestNetworkBasicOps-1870735197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-x000i7t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:45:12Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=7233dfec-5360-4ae2-a5c7-6a82ac3145c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.944 257641 DEBUG nova.network.os_vif_util [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.945 257641 DEBUG nova.network.os_vif_util [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:74:19,bridge_name='br-int',has_traffic_filtering=True,id=ec078636-08a9-4e73-b55f-6a3a4d83d29b,network=Network(3ba224d7-2b01-485d-af2a-07d3c8407c0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec078636-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.945 257641 DEBUG os_vif [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:74:19,bridge_name='br-int',has_traffic_filtering=True,id=ec078636-08a9-4e73-b55f-6a3a4d83d29b,network=Network(3ba224d7-2b01-485d-af2a-07d3c8407c0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec078636-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.946 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.946 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.947 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.949 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.949 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec078636-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.950 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec078636-08, col_values=(('external_ids', {'iface-id': 'ec078636-08a9-4e73-b55f-6a3a4d83d29b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:74:19', 'vm-uuid': '7233dfec-5360-4ae2-a5c7-6a82ac3145c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.951 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:17 np0005539550 NetworkManager[49039]: <info>  [1764405917.9531] manager: (tapec078636-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.953 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:17 np0005539550 nova_compute[257631]: 2025-11-29 08:45:17.962 257641 INFO os_vif [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:74:19,bridge_name='br-int',has_traffic_filtering=True,id=ec078636-08a9-4e73-b55f-6a3a4d83d29b,network=Network(3ba224d7-2b01-485d-af2a-07d3c8407c0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec078636-08')#033[00m
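The AddBridgeCommand, AddPortCommand and DbSetCommand entries above are the whole os-vif plug expressed as ovsdbapp transactions: ensure br-int exists, add the tap port, then stamp the Interface row with the external_ids that let OVN claim it. A sketch of the same calls, assuming a local ovsdb-server at the default unix socket (connection string and timeout are illustrative; os-vif actually issues these as separate transactions, as the txn n=1 entries show):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # The same external_ids the DbSetCommand above writes:
    external_ids = {
        "iface-id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:53:74:19",
        "vm-uuid": "7233dfec-5360-4ae2-a5c7-6a82ac3145c2",
    }

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapec078636-08", may_exist=True))
        txn.add(api.db_set("Interface", "tapec078636-08",
                           ("external_ids", external_ids)))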
Nov 29 03:45:18 np0005539550 nova_compute[257631]: 2025-11-29 08:45:18.045 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:45:18 np0005539550 nova_compute[257631]: 2025-11-29 08:45:18.046 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:45:18 np0005539550 nova_compute[257631]: 2025-11-29 08:45:18.046 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] No VIF found with MAC fa:16:3e:53:74:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:45:18 np0005539550 nova_compute[257631]: 2025-11-29 08:45:18.046 257641 INFO nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Using config drive#033[00m
Nov 29 03:45:18 np0005539550 nova_compute[257631]: 2025-11-29 08:45:18.072 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:18.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 571 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 29 03:45:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:18.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:18.975 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:18.977 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:18.977 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.082 257641 INFO nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Creating config drive at /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/disk.config#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.090 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyq9j_a10 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 886d28a6-7dc7-4643-9dda-0babf4a0c8a9 does not exist
Nov 29 03:45:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0a758c5f-2185-4bdb-b9a3-e7ecc17e384a does not exist
Nov 29 03:45:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev fea3059c-bb6c-4efc-ac41-d4065acb05de does not exist
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
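The mon_command audit entries above (config generate-minimal-conf, auth get, osd tree) are the cephadm mgr module talking to the monitor; the same calls can be issued from Python through librados. A sketch, assuming client.admin credentials in the usual /etc/ceph locations:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Same JSON the mon logs in handle_command above.
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outbuf.decode())
    finally:
        cluster.shutdown()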
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.234 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyq9j_a10" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.262 257641 DEBUG nova.storage.rbd_utils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] rbd image 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.265 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/disk.config 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.430 257641 DEBUG oslo_concurrency.processutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/disk.config 7233dfec-5360-4ae2-a5c7-6a82ac3145c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.431 257641 INFO nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Deleting local config drive /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2/disk.config because it was imported into RBD.#033[00m
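The config-drive sequence above reduces to three steps: build an ISO9660 image with mkisofs, rbd-import it into the vms pool so libvirt can attach it over the network, then delete the local copy. A sketch of the same flow; paths and the /tmp metadata directory are taken from the log, and the quoting of the publisher string (which the log rendering flattens) is restored:

    import os
    import subprocess

    inst = "7233dfec-5360-4ae2-a5c7-6a82ac3145c2"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # 1. Build the config drive; the volume label config-2 is what
    #    cloud-init looks for inside the guest.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpyq9j_a10"],
        check=True)

    # 2. Import it as <uuid>_disk.config in the vms pool...
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)

    # 3. ...and drop the local file, exactly as the driver logs.
    os.unlink(iso)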
Nov 29 03:45:19 np0005539550 kernel: tapec078636-08: entered promiscuous mode
Nov 29 03:45:19 np0005539550 NetworkManager[49039]: <info>  [1764405919.4800] manager: (tapec078636-08): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:19Z|00956|binding|INFO|Claiming lport ec078636-08a9-4e73-b55f-6a3a4d83d29b for this chassis.
Nov 29 03:45:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:19Z|00957|binding|INFO|ec078636-08a9-4e73-b55f-6a3a4d83d29b: Claiming fa:16:3e:53:74:19 10.100.0.3
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.493 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:74:19 10.100.0.3'], port_security=['fa:16:3e:53:74:19 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7233dfec-5360-4ae2-a5c7-6a82ac3145c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8348f212-c25f-450d-beda-4c2e2c4a398c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce71f536-5923-4fc0-8270-107f14f9127c, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ec078636-08a9-4e73-b55f-6a3a4d83d29b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.495 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ec078636-08a9-4e73-b55f-6a3a4d83d29b in datapath 3ba224d7-2b01-485d-af2a-07d3c8407c0a bound to our chassis#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.498 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3ba224d7-2b01-485d-af2a-07d3c8407c0a#033[00m
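The "Matched UPDATE: PortBindingUpdatedEvent(...)" entry above is ovsdbapp's row-event machinery: the metadata agent watches the southbound Port_Binding table and reacts when a port on this chassis goes from unbound (old chassis=[]) to bound. A stripped-down sketch of such an event class; the handler body is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch UPDATEs on Port_Binding with no static conditions,
            # matching the repr in the log entry above.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def run(self, event, row, old):
            # "old" carries only the changed columns; a port was just
            # bound when chassis went from empty to set.
            if not getattr(old, "chassis", None) and row.chassis:
                print(f"Port {row.logical_port} bound to our chassis")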
Nov 29 03:45:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:19Z|00958|binding|INFO|Setting lport ec078636-08a9-4e73-b55f-6a3a4d83d29b ovn-installed in OVS
Nov 29 03:45:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:19Z|00959|binding|INFO|Setting lport ec078636-08a9-4e73-b55f-6a3a4d83d29b up in Southbound
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.500 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.515 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f760246e-2560-46a7-98cd-fb41e102944f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.516 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3ba224d7-21 in ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:45:19 np0005539550 systemd-udevd[385529]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.518 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3ba224d7-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.518 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[230eea1d-90fc-41b4-aa36-6d794d112d2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.520 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[38a8737e-fc4a-44f0-82e2-35d871b531de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
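The privsep replies above are neutron's privileged ip_lib doing namespace and veth work, which boils down to pyroute2 calls. A sketch of the step logged at 08:45:19.516, creating the veth pair and pushing one end into the metadata namespace (interface and namespace names come from the log; the real agent checks for an existing namespace first):

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a"
    netns.create(ns)   # raises if it already exists; the agent checks first

    with IPRoute() as ipr:
        # tap...-20 stays in the root namespace (and is plugged into
        # br-int below); tap...-21 goes into the metadata namespace.
        ipr.link("add", ifname="tap3ba224d7-20", kind="veth",
                 peer="tap3ba224d7-21")
        idx = ipr.link_lookup(ifname="tap3ba224d7-21")[0]
        ipr.link("set", index=idx, net_ns_fd=ns)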
Nov 29 03:45:19 np0005539550 systemd-machined[216673]: New machine qemu-111-instance-000000d1.
Nov 29 03:45:19 np0005539550 NetworkManager[49039]: <info>  [1764405919.5329] device (tapec078636-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.534 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e66bd6-9649-4333-9231-138ef548e551]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 NetworkManager[49039]: <info>  [1764405919.5366] device (tapec078636-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:45:19 np0005539550 systemd[1]: Started Virtual Machine qemu-111-instance-000000d1.
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.561 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[387bb13d-ea8d-4430-aa2e-729122e04907]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.592 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fe6ab1-27ce-4d08-a831-9dc68b554e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.599 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[40482c69-68f9-435b-ae8e-f568eaad27ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 NetworkManager[49039]: <info>  [1764405919.6002] manager: (tap3ba224d7-20): new Veth device (/org/freedesktop/NetworkManager/Devices/418)
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.635 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[accdf9ac-ff48-42b8-8a7d-48eaa5fb891e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.644 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1c62974e-813d-4dda-a1b7-147acee827db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 NetworkManager[49039]: <info>  [1764405919.6651] device (tap3ba224d7-20): carrier: link connected
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.670 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[e0b79404-26be-4467-92e0-c0826b85cc75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.688 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0b3dd15d-fdcd-4140-a8b7-5a25e66ac1b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ba224d7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:5d:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 277], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896732, 'reachable_time': 36690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385584, 'error': None, 'target': 'ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.702 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbb648b-cda1-469b-a4f7-9869bf94d102]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:5dd1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 896732, 'tstamp': 896732}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385587, 'error': None, 'target': 'ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.751 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5abb956d-72b3-49e5-9897-8fb222339b57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ba224d7-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:5d:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 277], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896732, 'reachable_time': 36690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385594, 'error': None, 'target': 'ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
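The two RTM_NEWLINK dumps above are pyroute2 netlink messages rendered as nested lists; since the attrs are plain [name, value] pairs, picking a field out needs no library at all. A sketch with a trimmed-down msg standing in for one of the dicts logged above:

    msg = {"attrs": [["IFLA_IFNAME", "tap3ba224d7-21"],
                     ["IFLA_MTU", 1500],
                     ["IFLA_ADDRESS", "fa:16:3e:5a:5d:d1"]],
           "state": "up"}

    def ifla(msg, name, default=None):
        """Return the first IFLA_* attribute with the given name."""
        for key, value in msg["attrs"]:
            if key == name:
                return value
        return default

    print(ifla(msg, "IFLA_IFNAME"), ifla(msg, "IFLA_ADDRESS"),
          f"mtu={ifla(msg, 'IFLA_MTU')}", f"state={msg['state']}")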
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.788 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[66d8887a-45d6-45c5-a829-2cc1207571ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 podman[385603]: 2025-11-29 08:45:19.837349102 +0000 UTC m=+0.048111969 container create 4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.854 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3d67d510-faf9-4311-95c7-7231f4728ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.856 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ba224d7-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.856 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.857 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ba224d7-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.898 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 kernel: tap3ba224d7-20: entered promiscuous mode
Nov 29 03:45:19 np0005539550 NetworkManager[49039]: <info>  [1764405919.9003] manager: (tap3ba224d7-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/419)
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.902 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3ba224d7-20, col_values=(('external_ids', {'iface-id': '96a414db-4904-46be-8b53-cf5cf81a70fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
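Taken together, the three single-command transactions above replug tap3ba224d7-20 for the metadata datapath: drop it from br-ex if present (a no-op here, hence "Transaction caused no change"), add it to br-int, then set external_ids:iface-id so ovn-controller can rebind the logical port 96a414db-4904-46be-8b53-cf5cf81a70fe. A sketch issuing the same commands through ovsdbapp's Open_vSwitch API, batched into one transaction for brevity (the agent runs one command per transaction, as logged); the database socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap3ba224d7-20', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap3ba224d7-20', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap3ba224d7-20',
                           ('external_ids',
                            {'iface-id': '96a414db-4904-46be-8b53-cf5cf81a70fe'})))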
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:19Z|00960|binding|INFO|Releasing lport 96a414db-4904-46be-8b53-cf5cf81a70fe from this chassis (sb_readonly=0)
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.904 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.905 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3ba224d7-2b01-485d-af2a-07d3c8407c0a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3ba224d7-2b01-485d-af2a-07d3c8407c0a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.906 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[272325ae-7785-4101-b74c-994f9daffd63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
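The "Unable to access ...pid.haproxy" miss at 08:45:19.905 is the expected first-run path: before spawning haproxy the agent tries to read the previous proxy's PID, and neutron's get_value_from_file treats a missing file as "no value", logging at DEBUG and returning None. A sketch of that tolerant-read pattern (a paraphrase of the helper, not a verbatim excerpt):

    def get_value_from_file(filename, converter=None):
        # A missing pidfile is the normal first-run case: log it and return None
        # so the caller simply spawns a fresh haproxy.
        try:
            with open(filename) as f:
                raw = f.read().strip()
        except OSError as err:
            print(f"Unable to access {filename}; Error: {err}")
            return None
        try:
            return converter(raw) if converter else raw
        except ValueError:
            print(f"Unable to convert value in {filename}")
            return None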
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.907 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-3ba224d7-2b01-485d-af2a-07d3c8407c0a
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/3ba224d7-2b01-485d-af2a-07d3c8407c0a.pid.haproxy
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 3ba224d7-2b01-485d-af2a-07d3c8407c0a
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
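The rendered config above is the entire metadata proxy: haproxy binds the link-local 169.254.169.254:80 inside the ovnmeta namespace, forwards every request to the agent's UNIX socket backend ("server metadata /var/lib/neutron/metadata_proxy"), stamps X-OVN-Network-ID so the agent can map the request to network 3ba224d7-2b01-485d-af2a-07d3c8407c0a, and relies on "option forwardfor" to add the instance's address as X-Forwarded-For. A hypothetical sketch probing that backend socket directly from the host (normally haproxy is the only client; 10.100.0.3 is the instance IP from the port binding later in this log):

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect('/var/lib/neutron/metadata_proxy')
    s.sendall(b'GET / HTTP/1.0\r\n'
              b'X-Forwarded-For: 10.100.0.3\r\n'
              b'X-OVN-Network-ID: 3ba224d7-2b01-485d-af2a-07d3c8407c0a\r\n\r\n')
    print(s.recv(4096).decode(errors='replace'))
    s.close()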
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:45:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:19.907 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'env', 'PROCESS_TAG=haproxy-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3ba224d7-2b01-485d-af2a-07d3c8407c0a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
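The spawn above wraps haproxy in rootwrap plus "ip netns exec" so it binds inside the namespace, and the PROCESS_TAG environment marker is what lets the agent find and signal this exact haproxy later. The same invocation, reconstructed directly from the logged argv:

    import subprocess

    net = '3ba224d7-2b01-485d-af2a-07d3c8407c0a'
    cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
           'ip', 'netns', 'exec', f'ovnmeta-{net}',
           'env', f'PROCESS_TAG=haproxy-{net}',
           'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{net}.conf']
    subprocess.check_call(cmd)   # haproxy forks itself ("daemon" in the config above)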
Nov 29 03:45:19 np0005539550 systemd[1]: Started libpod-conmon-4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c.scope.
Nov 29 03:45:19 np0005539550 podman[385603]: 2025-11-29 08:45:19.817398767 +0000 UTC m=+0.028161664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.938 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.938 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:45:19 np0005539550 podman[385603]: 2025-11-29 08:45:19.953442909 +0000 UTC m=+0.164205786 container init 4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haslett, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:45:19 np0005539550 podman[385603]: 2025-11-29 08:45:19.960017545 +0000 UTC m=+0.170780402 container start 4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:45:19 np0005539550 nova_compute[257631]: 2025-11-29 08:45:19.961 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 03:45:19 np0005539550 podman[385603]: 2025-11-29 08:45:19.963525504 +0000 UTC m=+0.174288381 container attach 4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:45:19 np0005539550 loving_haslett[385624]: 167 167
Nov 29 03:45:19 np0005539550 systemd[1]: libpod-4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c.scope: Deactivated successfully.
Nov 29 03:45:19 np0005539550 podman[385603]: 2025-11-29 08:45:19.967131635 +0000 UTC m=+0.177894492 container died 4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haslett, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3b209787f93bfae59c5bc82f5357c419d95b241128c53731cb6ecb91019f1cdb-merged.mount: Deactivated successfully.
Nov 29 03:45:20 np0005539550 podman[385603]: 2025-11-29 08:45:20.00923081 +0000 UTC m=+0.219993667 container remove 4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:45:20 np0005539550 systemd[1]: libpod-conmon-4626d0c8327e1fbe016c2a97ba55644be0db4cae71fb7f538800aa0dbbe9f55c.scope: Deactivated successfully.
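The whole loving_haslett lifecycle above (create, init, start, attach, died, remove in roughly 150 ms) is cephadm running a throwaway container against the ceph image, and the container's only output, "167 167", reads like cephadm's uid/gid probe (167 is the ceph user and group baked into the image). The exact command is not in the log, so the following reconstruction is an assumption:

    import subprocess

    image = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    # Hypothetical equivalent of the probe: ask the image who owns /var/lib/ceph.
    out = subprocess.run(
        ['podman', 'run', '--rm', '--entrypoint', 'stat', image,
         '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # expected "167 167", matching the logged output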
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.155 257641 DEBUG nova.compute.manager [req-8ab852bb-092e-4806-99ff-b2da343c64cf req-3cb5893a-e084-4d88-a128-b6d2a1f036a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.156 257641 DEBUG oslo_concurrency.lockutils [req-8ab852bb-092e-4806-99ff-b2da343c64cf req-3cb5893a-e084-4d88-a128-b6d2a1f036a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.156 257641 DEBUG oslo_concurrency.lockutils [req-8ab852bb-092e-4806-99ff-b2da343c64cf req-3cb5893a-e084-4d88-a128-b6d2a1f036a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.156 257641 DEBUG oslo_concurrency.lockutils [req-8ab852bb-092e-4806-99ff-b2da343c64cf req-3cb5893a-e084-4d88-a128-b6d2a1f036a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.156 257641 DEBUG nova.compute.manager [req-8ab852bb-092e-4806-99ff-b2da343c64cf req-3cb5893a-e084-4d88-a128-b6d2a1f036a6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Processing event network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
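The lock lines around pop_instance_event show the per-instance event queue at work: the network-vif-plugged notification arrives from Neutron over the external-event API, and handing it to the waiter (the spawning thread, which reports "Instance event wait completed" at 08:45:20.378 below) happens under the "<uuid>-events" lock. A sketch of the same guarded-pop pattern with oslo.concurrency:

    from oslo_concurrency import lockutils

    _events = {}   # pending events for one instance, keyed by event name

    @lockutils.synchronized('7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events')
    def _pop_event(name):
        # Remove the event under the per-instance lock so the RPC thread
        # delivering it and the thread waiting on it cannot race.
        return _events.pop(name, None)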
Nov 29 03:45:20 np0005539550 podman[385686]: 2025-11-29 08:45:20.203673081 +0000 UTC m=+0.040027094 container create 0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:45:20 np0005539550 systemd[1]: Started libpod-conmon-0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538.scope.
Nov 29 03:45:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:45:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed3ffb899a680a336230ca12744d404aa16dea47dccfb9d2222948216df35a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed3ffb899a680a336230ca12744d404aa16dea47dccfb9d2222948216df35a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed3ffb899a680a336230ca12744d404aa16dea47dccfb9d2222948216df35a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed3ffb899a680a336230ca12744d404aa16dea47dccfb9d2222948216df35a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ed3ffb899a680a336230ca12744d404aa16dea47dccfb9d2222948216df35a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:20 np0005539550 podman[385686]: 2025-11-29 08:45:20.186197659 +0000 UTC m=+0.022551692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:20 np0005539550 podman[385686]: 2025-11-29 08:45:20.298277455 +0000 UTC m=+0.134631478 container init 0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:45:20 np0005539550 podman[385686]: 2025-11-29 08:45:20.305506308 +0000 UTC m=+0.141860321 container start 0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:45:20 np0005539550 podman[385686]: 2025-11-29 08:45:20.308582356 +0000 UTC m=+0.144936389 container attach 0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:45:20 np0005539550 podman[385724]: 2025-11-29 08:45:20.314103165 +0000 UTC m=+0.057829264 container create 4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007032958484552392 of space, bias 1.0, pg target 2.1098875453657175 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0022557448701842546 of space, bias 1.0, pg target 0.6722119713149078 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.006236877384608226 of space, bias 1.0, pg target 1.8585894606132514 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 16)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 systemd[1]: Started libpod-conmon-4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d.scope.
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
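The autoscaler lines can be checked by hand: each pool's ideal PG count is its used-space fraction times its bias times the cluster's PG budget, i.e. the target PGs per OSD (100 is the default, assumed here) times the OSD count (3, per the "3 total, 3 up, 3 in" osdmap just below). The result is then rounded to a power of two with the pool's pg_num_min as a floor, which is consistent with the quantized values above: 32 for ordinary pools, 16 for the CephFS metadata pool, 1 for .mgr. Checking the 'vms' and '.mgr' lines:

    osds, target_per_osd = 3, 100   # assumes the default mon_target_pg_per_osd
    for pool, usage, bias in [('vms', 0.007032958484552392, 1.0),
                              ('.mgr', 2.0538165363856318e-05, 1.0)]:
        print(pool, usage * bias * osds * target_per_osd)
    # vms  2.1098875453657176   -> matches "pg target 2.1098875453657175"
    # .mgr 0.006161449609156895 -> matches "pg target 0.006161449609156895"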
Nov 29 03:45:20 np0005539550 podman[385724]: 2025-11-29 08:45:20.286604889 +0000 UTC m=+0.030330998 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.374 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405920.374148, 7233dfec-5360-4ae2-a5c7-6a82ac3145c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.375 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] VM Started (Lifecycle Event)#033[00m
Nov 29 03:45:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.378 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.382 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.385 257641 INFO nova.virt.libvirt.driver [-] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Instance spawned successfully.#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.385 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:45:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713b71c082c712738d77f232c281d8cb2933e747b3d774981be293b1fdf64ac1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:20 np0005539550 podman[385724]: 2025-11-29 08:45:20.402365399 +0000 UTC m=+0.146091518 container init 4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.404 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.407 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:20 np0005539550 podman[385724]: 2025-11-29 08:45:20.409807297 +0000 UTC m=+0.153533386 container start 4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.425 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.426 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.426 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.427 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.427 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.428 257641 DEBUG nova.virt.libvirt.driver [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:45:20 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [NOTICE]   (385752) : New worker (385754) forked
Nov 29 03:45:20 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [NOTICE]   (385752) : Loading success.
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.434 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.434 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405920.3744378, 7233dfec-5360-4ae2-a5c7-6a82ac3145c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.434 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.485 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.488 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405920.3809607, 7233dfec-5360-4ae2-a5c7-6a82ac3145c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.488 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:45:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:20.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
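The radosgw beast triplet above is an anonymous HEAD / health probe from a controller (192.168.122.100), the cheapest request rgw will answer: 200 with an empty body and near-zero reported latency. A sketch of the same probe run on the rgw host itself; both the loopback target and port 8080 are assumptions, since this excerpt does not show the beast frontend's bind address:

    import http.client

    conn = http.client.HTTPConnection('localhost', 8080, timeout=5)  # port assumed
    conn.request('HEAD', '/')
    print(conn.getresponse().status)   # expect 200 with no body, as beast logs above
    conn.close()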
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.576 257641 INFO nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Took 7.57 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.577 257641 DEBUG nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 571 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.611 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.615 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.642 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
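The Started/Paused/Resumed burst is libvirt's normal create sequence (the guest is created paused, then resumed once the plugs are in place), and the repeated "pending task (spawning). Skip." lines show the power-state sync deliberately standing down while a task owns the instance. A sketch of that guard (a paraphrase of the manager logic, not a verbatim excerpt):

    def sync_power_state(instance, vm_power_state):
        # While a task (here: 'spawning') owns the instance, syncing the DB
        # power state against the hypervisor would race the task, so skip.
        if instance.task_state is not None:
            print(f"During sync_power_state the instance has a pending task "
                  f"({instance.task_state}). Skip.")
            return
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()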
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.712 257641 INFO nova.compute.manager [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Took 8.66 seconds to build instance.#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.716 257641 DEBUG oslo_concurrency.lockutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.717 257641 DEBUG oslo_concurrency.lockutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.778 257641 DEBUG nova.objects.instance [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'flavor' on Instance uuid 02ec8d3d-66e9-47c4-a5a4-04389383ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.809 257641 DEBUG oslo_concurrency.lockutils [None req-b5d870cc-4673-4478-a746-ef9dba3b0837 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:20 np0005539550 nova_compute[257631]: 2025-11-29 08:45:20.827 257641 DEBUG oslo_concurrency.lockutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:20.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:21 np0005539550 nova_compute[257631]: 2025-11-29 08:45:21.065 257641 DEBUG nova.network.neutron [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updated VIF entry in instance network info cache for port ec078636-08a9-4e73-b55f-6a3a4d83d29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:45:21 np0005539550 nova_compute[257631]: 2025-11-29 08:45:21.065 257641 DEBUG nova.network.neutron [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updating instance_info_cache with network_info: [{"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:21 np0005539550 nova_compute[257631]: 2025-11-29 08:45:21.123 257641 DEBUG oslo_concurrency.lockutils [req-f95e70ad-7d80-4393-8d58-d2530233a08a req-36a16f99-6a50-4862-955c-7cdf72c019db 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
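The network_info blob cached above is what the driver consults for the domain's interface definition; the useful values sit a few levels down. A sketch that keeps only the fields worth pulling out, with the dict literal trimmed from the logged JSON:

    network_info = [{
        "id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b",
        "address": "fa:16:3e:53:74:19",
        "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a",
                    "bridge": "br-int",
                    "subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.3"}]}],
                    "meta": {"mtu": 1442, "tunneled": True}},
        "type": "ovs",
        "devname": "tapec078636-08",
    }]

    vif = network_info[0]
    print(vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"],
          vif["network"]["meta"]["mtu"])   # fa:16:3e:53:74:19 10.100.0.3 1442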
Nov 29 03:45:21 np0005539550 modest_booth[385725]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:45:21 np0005539550 modest_booth[385725]: --> relative data size: 1.0
Nov 29 03:45:21 np0005539550 modest_booth[385725]: --> All data devices are unavailable
Nov 29 03:45:21 np0005539550 systemd[1]: libpod-0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538.scope: Deactivated successfully.
Nov 29 03:45:21 np0005539550 podman[385686]: 2025-11-29 08:45:21.207396091 +0000 UTC m=+1.043750124 container died 0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:45:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9ed3ffb899a680a336230ca12744d404aa16dea47dccfb9d2222948216df35a2-merged.mount: Deactivated successfully.
Nov 29 03:45:21 np0005539550 podman[385686]: 2025-11-29 08:45:21.272727164 +0000 UTC m=+1.109081177 container remove 0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:45:21 np0005539550 systemd[1]: libpod-conmon-0f7bd59f4748406eacfec5b94e2f12f4ee81ef6b0c431a52d1fbeb4ade3fa538.scope: Deactivated successfully.
Nov 29 03:45:21 np0005539550 nova_compute[257631]: 2025-11-29 08:45:21.527 257641 DEBUG oslo_concurrency.lockutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:21 np0005539550 nova_compute[257631]: 2025-11-29 08:45:21.528 257641 DEBUG oslo_concurrency.lockutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:21 np0005539550 nova_compute[257631]: 2025-11-29 08:45:21.528 257641 INFO nova.compute.manager [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Attaching volume 2fa5bbdf-955a-4fa5-ac81-4ccf91494340 to /dev/vdb#033[00m
Nov 29 03:45:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Nov 29 03:45:21 np0005539550 podman[385927]: 2025-11-29 08:45:21.918602538 +0000 UTC m=+0.039706216 container create 2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haslett, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:45:21 np0005539550 systemd[1]: Started libpod-conmon-2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813.scope.
Nov 29 03:45:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Nov 29 03:45:21 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Nov 29 03:45:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:45:21 np0005539550 podman[385927]: 2025-11-29 08:45:21.903078455 +0000 UTC m=+0.024182153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:22 np0005539550 podman[385927]: 2025-11-29 08:45:22.008455382 +0000 UTC m=+0.129559060 container init 2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haslett, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.015 257641 DEBUG os_brick.utils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:45:22 np0005539550 podman[385927]: 2025-11-29 08:45:22.016521866 +0000 UTC m=+0.137625554 container start 2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haslett, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:22 np0005539550 gracious_haslett[385944]: 167 167
Nov 29 03:45:22 np0005539550 podman[385927]: 2025-11-29 08:45:22.020180068 +0000 UTC m=+0.141283766 container attach 2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haslett, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:45:22 np0005539550 systemd[1]: libpod-2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813.scope: Deactivated successfully.
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.017 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:22 np0005539550 conmon[385944]: conmon 2925eb1ca86d8d05e3eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813.scope/container/memory.events
Nov 29 03:45:22 np0005539550 podman[385927]: 2025-11-29 08:45:22.023532423 +0000 UTC m=+0.144636101 container died 2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.035 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.035 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[9f507621-c7b1-48e4-a0cb-fad398f9476a]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.036 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.048 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:22 np0005539550 systemd[1]: var-lib-containers-storage-overlay-63b8f80a1f01b5c8c3c64b210502687359fffc91ff9fe0805a707aedc25dbc68-merged.mount: Deactivated successfully.
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.048 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[d2545d69-8483-4024-a500-ee6b95e2871e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.052 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:22 np0005539550 podman[385927]: 2025-11-29 08:45:22.061687209 +0000 UTC m=+0.182790887 container remove 2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.063 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.063 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[3377b519-caf1-461c-a429-d7a86923a0b2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.064 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[6eb42a88-98f8-4a01-9cae-a38405e0d51b]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.065 257641 DEBUG oslo_concurrency.processutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:45:22 np0005539550 systemd[1]: libpod-conmon-2925eb1ca86d8d05e3ebe4eb8fc704dcb5a62e4a54adf7f60eb5ba3cb089f813.scope: Deactivated successfully.
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.098 257641 DEBUG oslo_concurrency.processutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.100 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.101 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.101 257641 DEBUG os_brick.initiator.connectors.lightos [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.101 257641 DEBUG os_brick.utils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] <== get_connector_properties: return (86ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
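
[editor's note] The privsep round-trips above (multipathd status, the iSCSI initiator name, findmnt, nvme version) are os-brick assembling the connector-properties dict that get_connector_properties returns at 08:45:22.101. A minimal sketch of the same call, assuming os-brick is installed; the root helper is illustrative and the other values mirror the logged dict:

    from os_brick.initiator import connector

    # Collect this host's connector properties, as nova-compute does just
    # before creating a volume attachment (see the dict logged above).
    props = connector.get_connector_properties(
        root_helper='sudo',        # privilege-escalation helper (assumption)
        my_ip='192.168.122.100',   # the 'ip' field in the logged dict
        multipath=True,            # matches 'multipath': True above
        enforce_multipath=False,   # tolerate a missing multipathd
    )
    print(props['initiator'], props.get('nqn'))
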
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.101 257641 DEBUG nova.virt.block_device [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updating existing volume attachment record: aad096ce-1312-4638-a043-99a0bc49b78e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 29 03:45:22 np0005539550 podman[385973]: 2025-11-29 08:45:22.272501644 +0000 UTC m=+0.050325345 container create b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_buck, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:22 np0005539550 systemd[1]: Started libpod-conmon-b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263.scope.
Nov 29 03:45:22 np0005539550 podman[385973]: 2025-11-29 08:45:22.2525825 +0000 UTC m=+0.030406211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:45:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b62103d7b23157f5c49ae6deebfc5927fbaef5c60ed7a383bd2000c2b7bf6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b62103d7b23157f5c49ae6deebfc5927fbaef5c60ed7a383bd2000c2b7bf6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b62103d7b23157f5c49ae6deebfc5927fbaef5c60ed7a383bd2000c2b7bf6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b62103d7b23157f5c49ae6deebfc5927fbaef5c60ed7a383bd2000c2b7bf6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:22 np0005539550 podman[385973]: 2025-11-29 08:45:22.38340859 +0000 UTC m=+0.161232291 container init b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_buck, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:45:22 np0005539550 podman[385973]: 2025-11-29 08:45:22.391261879 +0000 UTC m=+0.169085580 container start b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:45:22 np0005539550 podman[385973]: 2025-11-29 08:45:22.406097744 +0000 UTC m=+0.183921445 container attach b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.428 257641 DEBUG nova.compute.manager [req-2853ef8f-bd70-4fef-9d97-3398bbbf11ad req-f3b6e793-f005-4bda-a8fd-38c291ec1228 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.429 257641 DEBUG oslo_concurrency.lockutils [req-2853ef8f-bd70-4fef-9d97-3398bbbf11ad req-f3b6e793-f005-4bda-a8fd-38c291ec1228 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.429 257641 DEBUG oslo_concurrency.lockutils [req-2853ef8f-bd70-4fef-9d97-3398bbbf11ad req-f3b6e793-f005-4bda-a8fd-38c291ec1228 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.429 257641 DEBUG oslo_concurrency.lockutils [req-2853ef8f-bd70-4fef-9d97-3398bbbf11ad req-f3b6e793-f005-4bda-a8fd-38c291ec1228 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.429 257641 DEBUG nova.compute.manager [req-2853ef8f-bd70-4fef-9d97-3398bbbf11ad req-f3b6e793-f005-4bda-a8fd-38c291ec1228 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] No waiting events found dispatching network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.429 257641 WARNING nova.compute.manager [req-2853ef8f-bd70-4fef-9d97-3398bbbf11ad req-f3b6e793-f005-4bda-a8fd-38c291ec1228 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received unexpected event network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b for instance with vm_state active and task_state None.
Nov 29 03:45:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:22.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 571 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.4 MiB/s wr, 161 op/s
Nov 29 03:45:22 np0005539550 nova_compute[257631]: 2025-11-29 08:45:22.952 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:22.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
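
[editor's note] The anonymous "HEAD / HTTP/1.0" lines from the beast frontend, arriving every two seconds from 192.168.122.100 and .102, are load-balancer health probes. A sketch of an equivalent probe; the port is an assumption, since the log does not show which port beast is bound to:

    import http.client

    # Issue the same bare HEAD / the health checker sends; a 200 response
    # means the gateway is serving requests, matching the log above.
    conn = http.client.HTTPConnection('192.168.122.100', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)
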
Nov 29 03:45:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Nov 29 03:45:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Nov 29 03:45:23 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Nov 29 03:45:23 np0005539550 sweet_buck[385989]: {
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:    "0": [
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:        {
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "devices": [
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "/dev/loop3"
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            ],
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "lv_name": "ceph_lv0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "lv_size": "7511998464",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "name": "ceph_lv0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "tags": {
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.cluster_name": "ceph",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.crush_device_class": "",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.encrypted": "0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.osd_id": "0",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.type": "block",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:                "ceph.vdo": "0"
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            },
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "type": "block",
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:            "vg_name": "ceph_vg0"
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:        }
Nov 29 03:45:23 np0005539550 sweet_buck[385989]:    ]
Nov 29 03:45:23 np0005539550 sweet_buck[385989]: }
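
[editor's note] The JSON printed by the short-lived sweet_buck container is cephadm's LVM inventory of the host's OSDs; its shape matches `ceph-volume lvm list --format json`, keyed by OSD id. A sketch of reducing it to an OSD-to-device map, assuming the output was captured to a file named lvm_list.json (hypothetical):

    import json

    # Map each OSD id to the devices backing its logical volumes.
    with open('lvm_list.json') as f:
        listing = json.load(f)

    devices = {osd_id: [dev for lv in lvs for dev in lv['devices']]
               for osd_id, lvs in listing.items()}
    print(devices)   # with the output above: {'0': ['/dev/loop3']}
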
Nov 29 03:45:23 np0005539550 systemd[1]: libpod-b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263.scope: Deactivated successfully.
Nov 29 03:45:23 np0005539550 podman[385973]: 2025-11-29 08:45:23.175249188 +0000 UTC m=+0.953072889 container died b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:45:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a3b62103d7b23157f5c49ae6deebfc5927fbaef5c60ed7a383bd2000c2b7bf6c-merged.mount: Deactivated successfully.
Nov 29 03:45:23 np0005539550 podman[385973]: 2025-11-29 08:45:23.235650897 +0000 UTC m=+1.013474598 container remove b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:45:23 np0005539550 systemd[1]: libpod-conmon-b37109f38db80e05392648851f32ca35cfb8f2eed3d49c2ce4764422ee740263.scope: Deactivated successfully.
Nov 29 03:45:23 np0005539550 podman[386155]: 2025-11-29 08:45:23.843443716 +0000 UTC m=+0.037461759 container create ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:45:23 np0005539550 systemd[1]: Started libpod-conmon-ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb.scope.
Nov 29 03:45:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:45:23 np0005539550 podman[386155]: 2025-11-29 08:45:23.91827685 +0000 UTC m=+0.112294903 container init ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:45:23 np0005539550 podman[386155]: 2025-11-29 08:45:23.828492138 +0000 UTC m=+0.022510211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:23 np0005539550 podman[386155]: 2025-11-29 08:45:23.925117503 +0000 UTC m=+0.119135566 container start ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:45:23 np0005539550 podman[386155]: 2025-11-29 08:45:23.928618381 +0000 UTC m=+0.122636444 container attach ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:45:23 np0005539550 focused_villani[386171]: 167 167
Nov 29 03:45:23 np0005539550 systemd[1]: libpod-ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb.scope: Deactivated successfully.
Nov 29 03:45:23 np0005539550 podman[386155]: 2025-11-29 08:45:23.931980327 +0000 UTC m=+0.125998380 container died ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:45:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-66fdf9306c6210e95a112b4a76714878d8f68e49f21de3d2b5b9265a08867e7d-merged.mount: Deactivated successfully.
Nov 29 03:45:23 np0005539550 podman[386155]: 2025-11-29 08:45:23.969629479 +0000 UTC m=+0.163647532 container remove ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 03:45:23 np0005539550 systemd[1]: libpod-conmon-ff3400d0d44d326b658a30b78c3f0bb0acca94c1197c381b8cef6c5917db06eb.scope: Deactivated successfully.
Nov 29 03:45:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Nov 29 03:45:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Nov 29 03:45:24 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.120 257641 DEBUG nova.objects.instance [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'flavor' on Instance uuid 02ec8d3d-66e9-47c4-a5a4-04389383ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:45:24 np0005539550 podman[386194]: 2025-11-29 08:45:24.155749199 +0000 UTC m=+0.047030991 container create d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rhodes, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.164 257641 DEBUG nova.virt.libvirt.driver [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Attempting to attach volume 2fa5bbdf-955a-4fa5-ac81-4ccf91494340 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.167 257641 DEBUG nova.virt.libvirt.guest [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] attach device xml: <disk type="network" device="disk">
Nov 29 03:45:24 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-2fa5bbdf-955a-4fa5-ac81-4ccf91494340">
Nov 29 03:45:24 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:  <auth username="openstack">
Nov 29 03:45:24 np0005539550 nova_compute[257631]:    <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:  </auth>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:45:24 np0005539550 nova_compute[257631]:  <serial>2fa5bbdf-955a-4fa5-ac81-4ccf91494340</serial>
Nov 29 03:45:24 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:45:24 np0005539550 nova_compute[257631]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
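
[editor's note] The XML above is what nova hands to libvirt to hot-plug the RBD volume as vdb. A minimal sketch of the underlying libvirt call, assuming python3-libvirt; the XML is abbreviated from the log (one monitor host, auth element omitted):

    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-2fa5bbdf-955a-4fa5-ac81-4ccf91494340">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('02ec8d3d-66e9-47c4-a5a4-04389383ad38')
    # LIVE | CONFIG mirrors nova updating the running guest and the
    # persistent domain definition in a single call.
    dom.attachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
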
Nov 29 03:45:24 np0005539550 systemd[1]: Started libpod-conmon-d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc.scope.
Nov 29 03:45:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:45:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539858dc0773cc831592acc4daa2a5c1fe85086a05f005b5127a4e1f76d7b0d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539858dc0773cc831592acc4daa2a5c1fe85086a05f005b5127a4e1f76d7b0d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:24 np0005539550 podman[386194]: 2025-11-29 08:45:24.138526143 +0000 UTC m=+0.029807955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:45:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539858dc0773cc831592acc4daa2a5c1fe85086a05f005b5127a4e1f76d7b0d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/539858dc0773cc831592acc4daa2a5c1fe85086a05f005b5127a4e1f76d7b0d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:45:24 np0005539550 podman[386194]: 2025-11-29 08:45:24.253028431 +0000 UTC m=+0.144310243 container init d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rhodes, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:45:24 np0005539550 podman[386194]: 2025-11-29 08:45:24.260173952 +0000 UTC m=+0.151455744 container start d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rhodes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:45:24 np0005539550 podman[386194]: 2025-11-29 08:45:24.263804724 +0000 UTC m=+0.155086516 container attach d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.425 257641 DEBUG nova.virt.libvirt.driver [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.425 257641 DEBUG nova.virt.libvirt.driver [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.425 257641 DEBUG nova.virt.libvirt.driver [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.425 257641 DEBUG nova.virt.libvirt.driver [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] No VIF found with MAC fa:16:3e:d2:f8:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:45:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:24.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 600 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 172 op/s
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.658 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:24 np0005539550 nova_compute[257631]: 2025-11-29 08:45:24.941 257641 DEBUG oslo_concurrency.lockutils [None req-4573ca94-36d1-45d4-b9c6-b979d39538d9 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.413s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:24.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]: {
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]:        "osd_id": 0,
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]:        "type": "bluestore"
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]:    }
Nov 29 03:45:25 np0005539550 gifted_rhodes[386227]: }
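
[editor's note] The gifted_rhodes output is the raw-device view of the same OSD, keyed by OSD fsid; the shape matches `ceph-volume raw list`. A sketch of printing the OSD-to-device mapping, assuming the JSON was captured to raw_list.json (hypothetical):

    import json

    with open('raw_list.json') as f:
        osds = json.load(f)

    for osd_uuid, info in osds.items():
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}")
    # with the output above: osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0
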
Nov 29 03:45:25 np0005539550 systemd[1]: libpod-d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc.scope: Deactivated successfully.
Nov 29 03:45:25 np0005539550 podman[386194]: 2025-11-29 08:45:25.088639436 +0000 UTC m=+0.979921238 container died d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rhodes, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:45:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-539858dc0773cc831592acc4daa2a5c1fe85086a05f005b5127a4e1f76d7b0d5-merged.mount: Deactivated successfully.
Nov 29 03:45:25 np0005539550 podman[386194]: 2025-11-29 08:45:25.152375039 +0000 UTC m=+1.043656841 container remove d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rhodes, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:45:25 np0005539550 systemd[1]: libpod-conmon-d48032a176946700424523c262bd8402c700c0fd1fbe48c352121aea01f6eebc.scope: Deactivated successfully.
Nov 29 03:45:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:45:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:45:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f799c18c-11e5-4620-a88a-cbe9f475d4ea does not exist
Nov 29 03:45:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 00d969c6-3d6c-41f1-b6bf-bf0c89f4baaf does not exist
Nov 29 03:45:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev aa188dcf-2165-42af-b82d-7efa488b912f does not exist
Nov 29 03:45:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:25 np0005539550 nova_compute[257631]: 2025-11-29 08:45:25.979 257641 DEBUG nova.compute.manager [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-changed-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:45:25 np0005539550 nova_compute[257631]: 2025-11-29 08:45:25.980 257641 DEBUG nova.compute.manager [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Refreshing instance network info cache due to event network-changed-ec078636-08a9-4e73-b55f-6a3a4d83d29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:45:25 np0005539550 nova_compute[257631]: 2025-11-29 08:45:25.980 257641 DEBUG oslo_concurrency.lockutils [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:45:25 np0005539550 nova_compute[257631]: 2025-11-29 08:45:25.980 257641 DEBUG oslo_concurrency.lockutils [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:45:25 np0005539550 nova_compute[257631]: 2025-11-29 08:45:25.981 257641 DEBUG nova.network.neutron [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Refreshing network info cache for port ec078636-08a9-4e73-b55f-6a3a4d83d29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:45:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:45:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:26.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 692 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 13 MiB/s rd, 18 MiB/s wr, 369 op/s
Nov 29 03:45:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:26.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:27 np0005539550 nova_compute[257631]: 2025-11-29 08:45:27.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.074 257641 DEBUG oslo_concurrency.lockutils [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.075 257641 DEBUG oslo_concurrency.lockutils [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Nov 29 03:45:28 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.202 257641 INFO nova.compute.manager [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Detaching volume 2fa5bbdf-955a-4fa5-ac81-4ccf91494340
Nov 29 03:45:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.379 257641 DEBUG nova.network.neutron [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updated VIF entry in instance network info cache for port ec078636-08a9-4e73-b55f-6a3a4d83d29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.380 257641 DEBUG nova.network.neutron [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updating instance_info_cache with network_info: [{"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
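
[editor's note] The network_info structure logged above carries the port's fixed and floating addresses. A sketch of walking it, trimmed to just the fields used here, with values copied from the log:

    network_info = [{
        'id': 'ec078636-08a9-4e73-b55f-6a3a4d83d29b',
        'network': {'subnets': [{'ips': [{
            'address': '10.100.0.3',
            'floating_ips': [{'address': '192.168.122.190'}],
        }]}]},
    }]

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(ip['address'], '->', floats)
    # -> 10.100.0.3 -> ['192.168.122.190']
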
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.397 257641 INFO nova.virt.block_device [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Attempting to driver detach volume 2fa5bbdf-955a-4fa5-ac81-4ccf91494340 from mountpoint /dev/vdb
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.412 257641 DEBUG nova.virt.libvirt.driver [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Attempting to detach device vdb from instance 02ec8d3d-66e9-47c4-a5a4-04389383ad38 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.413 257641 DEBUG nova.virt.libvirt.guest [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-2fa5bbdf-955a-4fa5-ac81-4ccf91494340">
Nov 29 03:45:28 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <serial>2fa5bbdf-955a-4fa5-ac81-4ccf91494340</serial>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:45:28 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.417 257641 DEBUG oslo_concurrency.lockutils [req-c777f7d0-4996-4d51-9e66-bfc8ac993839 req-950ac3e2-5fe7-497b-954d-e0f2398d5ee8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.425 257641 INFO nova.virt.libvirt.driver [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully detached device vdb from instance 02ec8d3d-66e9-47c4-a5a4-04389383ad38 from the persistent domain config.
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.426 257641 DEBUG nova.virt.libvirt.driver [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 02ec8d3d-66e9-47c4-a5a4-04389383ad38 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.427 257641 DEBUG nova.virt.libvirt.guest [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] detach device xml: <disk type="network" device="disk">
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <source protocol="rbd" name="volumes/volume-2fa5bbdf-955a-4fa5-ac81-4ccf91494340">
Nov 29 03:45:28 np0005539550 nova_compute[257631]:    <host name="192.168.122.100" port="6789"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:    <host name="192.168.122.102" port="6789"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:    <host name="192.168.122.101" port="6789"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  </source>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <target dev="vdb" bus="virtio"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <serial>2fa5bbdf-955a-4fa5-ac81-4ccf91494340</serial>
Nov 29 03:45:28 np0005539550 nova_compute[257631]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 29 03:45:28 np0005539550 nova_compute[257631]: </disk>
Nov 29 03:45:28 np0005539550 nova_compute[257631]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 29 03:45:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 03:45:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:28.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.559 257641 DEBUG nova.virt.libvirt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Received event <DeviceRemovedEvent: 1764405928.5595994, 02ec8d3d-66e9-47c4-a5a4-04389383ad38 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.562 257641 DEBUG nova.virt.libvirt.driver [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 02ec8d3d-66e9-47c4-a5a4-04389383ad38 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.564 257641 INFO nova.virt.libvirt.driver [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully detached device vdb from instance 02ec8d3d-66e9-47c4-a5a4-04389383ad38 from the live domain config.
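
[editor's note] Between the detach call and the "Successfully detached ... live domain config" line, nova waits for libvirt's device-removed event, retrying up to eight times per the "(1/8)" marker above. A minimal sketch of that event wait, assuming python3-libvirt; the stripped-down detach XML that matches only by target is an assumption:

    import threading
    import libvirt

    DETACH_XML = ('<disk type="network" device="disk">'
                  '<target dev="vdb" bus="virtio"/></disk>')

    libvirt.virEventRegisterDefaultImpl()

    def _pump():
        while True:                       # drive libvirt's event loop
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_pump, daemon=True).start()
    removed = threading.Event()

    def on_removed(conn, dom, dev, opaque):
        if dev == 'virtio-disk1':         # device alias from the log above
            removed.set()

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('02ec8d3d-66e9-47c4-a5a4-04389383ad38')
    conn.domainEventRegisterAny(dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED,
                                on_removed, None)
    dom.detachDeviceFlags(DETACH_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    removed.wait(timeout=20)              # nova logs success once this fires
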
Nov 29 03:45:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 698 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 12 MiB/s rd, 20 MiB/s wr, 409 op/s
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.827 257641 DEBUG nova.objects.instance [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'flavor' on Instance uuid 02ec8d3d-66e9-47c4-a5a4-04389383ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:45:28 np0005539550 nova_compute[257631]: 2025-11-29 08:45:28.875 257641 DEBUG oslo_concurrency.lockutils [None req-24f25adb-9f4b-442a-9411-13d9f1c65153 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:29 np0005539550 nova_compute[257631]: 2025-11-29 08:45:29.661 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:29 np0005539550 nova_compute[257631]: 2025-11-29 08:45:29.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:45:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:30.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Nov 29 03:45:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 698 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 232 op/s
Nov 29 03:45:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Nov 29 03:45:30 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.691 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.693 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.693 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.693 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.694 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.695 257641 INFO nova.compute.manager [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Terminating instance
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.696 257641 DEBUG nova.compute.manager [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:45:30 np0005539550 kernel: tap1cd0b9a8-7d (unregistering): left promiscuous mode
Nov 29 03:45:30 np0005539550 NetworkManager[49039]: <info>  [1764405930.7505] device (tap1cd0b9a8-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:45:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:30Z|00961|binding|INFO|Releasing lport 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 from this chassis (sb_readonly=0)
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:30Z|00962|binding|INFO|Setting lport 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 down in Southbound
Nov 29 03:45:30 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:30Z|00963|binding|INFO|Removing iface tap1cd0b9a8-7d ovn-installed in OVS
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.808 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.816 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:f8:73 10.100.0.9'], port_security=['fa:16:3e:d2:f8:73 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '02ec8d3d-66e9-47c4-a5a4-04389383ad38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13244464-ec20-4842-bec9-0ac60372e025', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b3b0484057a4e3db51366d29c6b684d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aefb70fb-aad6-4ac2-b50b-5898c331e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca67c27a-85ad-40df-9c83-890f1ece542e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1cd0b9a8-7d4c-48ab-932c-928e5977ab22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.818 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 in datapath 13244464-ec20-4842-bec9-0ac60372e025 unbound from our chassis
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.819 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13244464-ec20-4842-bec9-0ac60372e025
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.822 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:30 np0005539550 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000cf.scope: Deactivated successfully.
Nov 29 03:45:30 np0005539550 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d000000cf.scope: Consumed 16.375s CPU time.
Nov 29 03:45:30 np0005539550 systemd-machined[216673]: Machine qemu-110-instance-000000cf terminated.
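systemd escapes '-' as \x2d in unit names, which is why the scope above reads machine-qemu\x2d110\x2dinstance\x2d000000cf.scope while machined reports the plain name. A one-liner to decode it (equivalent to systemd-escape --unescape; the snippet is illustrative only):

    # Decode systemd's \xNN escaping back to the readable unit name.
    name = r"machine-qemu\x2d110\x2dinstance\x2d000000cf.scope"
    print(name.encode().decode("unicode_escape"))
    # -> machine-qemu-110-instance-000000cf.scope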
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.850 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7c661286-bd54-482b-a63b-b554315de3eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.889 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc2c03a-eadd-4b1c-bb40-de95267ba60d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.895 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc88fac-aa87-457d-bacf-26e8ff11affa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.930 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[460ca968-efd1-4895-b2da-afa53794722a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.931 257641 INFO nova.virt.libvirt.driver [-] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Instance destroyed successfully.
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.932 257641 DEBUG nova.objects.instance [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'resources' on Instance uuid 02ec8d3d-66e9-47c4-a5a4-04389383ad38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.949 257641 DEBUG nova.virt.libvirt.vif [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:44:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1381850634',display_name='tempest-TestStampPattern-server-1381850634',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1381850634',id=207,image_ref='a9d624a0-fd76-47d0-81ab-f89f80fec0c1',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaYR98uKIjTIgJJtQ2rdiQ8Vh+nL0em9VusryiAjil0FJtSxO+lnk+ODZdl0LgdlhrdeOvooo+j67fYKSbk9O76X3xg1L2IaHTBYV9x6ArSe2Q3HBmSDWpJ9bh8XmHSeQ==',key_name='tempest-TestStampPattern-826635977',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:44:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b3b0484057a4e3db51366d29c6b684d',ramdisk_id='',reservation_id='r-8zb0f04m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='3941161c-104e-452f-8d56-54600d37d0f5',image_min_disk='1',image_min_ram='0',image_owner_id='3b3b0484057a4e3db51366d29c6b684d',image_owner_project_name='tempest-TestStampPattern-1305027466',image_owner_user_name='tempest-TestStampPattern-1305027466-project-member',image_user_id='01c0b956e2c74d5798d01fc2be0a8bac',owner_project_name='tempest-TestStampPattern-1305027466',owner_user_name='tempest-TestStampPattern-1305027466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:44:43Z,user_data=None,user_id='01c0b956e2c74d5798d01fc2be0a8bac',uuid=02ec8d3d-66e9-47c4-a5a4-04389383ad38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.950 257641 DEBUG nova.network.os_vif_util [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converting VIF {"id": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "address": "fa:16:3e:d2:f8:73", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cd0b9a8-7d", "ovs_interfaceid": "1cd0b9a8-7d4c-48ab-932c-928e5977ab22", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.951 257641 DEBUG nova.network.os_vif_util [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=1cd0b9a8-7d4c-48ab-932c-928e5977ab22,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd0b9a8-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.952 257641 DEBUG os_vif [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=1cd0b9a8-7d4c-48ab-932c-928e5977ab22,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd0b9a8-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.949 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1a5803-124a-40cc-bc79-3c4b982a89d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13244464-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:83:75'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 270], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886459, 'reachable_time': 16251, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386348, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.955 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.955 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cd0b9a8-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.957 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.959 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.959 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.962 257641 INFO os_vif [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:f8:73,bridge_name='br-int',has_traffic_filtering=True,id=1cd0b9a8-7d4c-48ab-932c-928e5977ab22,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cd0b9a8-7d')
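The DelPortCommand transaction above is ovsdbapp talking to the local Open vSwitch database on os-vif's behalf. A sketch of issuing the same command directly, assuming the default OVS socket path (the services here hold a long-lived connection rather than building one per call):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local switch database (socket path is an assumption).
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same effect as DelPortCommand(port=tap1cd0b9a8-7d, bridge=br-int,
    # if_exists=True) in the transaction logged above.
    api.del_port("tap1cd0b9a8-7d", bridge="br-int",
                 if_exists=True).execute(check_error=True)

The AddPortCommand and DbSetCommand transactions the metadata agent runs a few lines below map to api.add_port() and api.db_set() on the same interface.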
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.965 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[04300899-f445-4660-8a7c-b5aa06f8493f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap13244464-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886472, 'tstamp': 886472}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386365, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap13244464-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 886475, 'tstamp': 886475}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386365, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.967 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13244464-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:30.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.970 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13244464-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.970 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.970 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13244464-e0, col_values=(('external_ids', {'iface-id': '58d6f574-370c-46a0-9e95-f7c81493d948'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:45:30 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:30.971 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:45:30 np0005539550 nova_compute[257631]: 2025-11-29 08:45:30.985 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.119 257641 DEBUG nova.compute.manager [req-ac8344af-9b45-4fa4-a45f-4c6dce41c238 req-3c164d51-1a1c-4350-90aa-ab5cb731c0e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-vif-unplugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.120 257641 DEBUG oslo_concurrency.lockutils [req-ac8344af-9b45-4fa4-a45f-4c6dce41c238 req-3c164d51-1a1c-4350-90aa-ab5cb731c0e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.120 257641 DEBUG oslo_concurrency.lockutils [req-ac8344af-9b45-4fa4-a45f-4c6dce41c238 req-3c164d51-1a1c-4350-90aa-ab5cb731c0e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.120 257641 DEBUG oslo_concurrency.lockutils [req-ac8344af-9b45-4fa4-a45f-4c6dce41c238 req-3c164d51-1a1c-4350-90aa-ab5cb731c0e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.121 257641 DEBUG nova.compute.manager [req-ac8344af-9b45-4fa4-a45f-4c6dce41c238 req-3c164d51-1a1c-4350-90aa-ab5cb731c0e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] No waiting events found dispatching network-vif-unplugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.121 257641 DEBUG nova.compute.manager [req-ac8344af-9b45-4fa4-a45f-4c6dce41c238 req-3c164d51-1a1c-4350-90aa-ab5cb731c0e1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-vif-unplugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.397 257641 INFO nova.virt.libvirt.driver [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Deleting instance files /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38_del
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.398 257641 INFO nova.virt.libvirt.driver [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Deletion of /var/lib/nova/instances/02ec8d3d-66e9-47c4-a5a4-04389383ad38_del complete
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.457 257641 INFO nova.compute.manager [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.457 257641 DEBUG oslo.service.loopingcall [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.457 257641 DEBUG nova.compute.manager [-] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:45:31 np0005539550 nova_compute[257631]: 2025-11-29 08:45:31.458 257641 DEBUG nova.network.neutron [-] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.250 257641 DEBUG nova.compute.manager [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-changed-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.250 257641 DEBUG nova.compute.manager [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Refreshing instance network info cache due to event network-changed-1cd0b9a8-7d4c-48ab-932c-928e5977ab22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.250 257641 DEBUG oslo_concurrency.lockutils [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.251 257641 DEBUG oslo_concurrency.lockutils [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.251 257641 DEBUG nova.network.neutron [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Refreshing network info cache for port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:45:32 np0005539550 podman[386416]: 2025-11-29 08:45:32.339781449 +0000 UTC m=+0.067472338 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 03:45:32 np0005539550 podman[386415]: 2025-11-29 08:45:32.343337119 +0000 UTC m=+0.070924576 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
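The two podman entries above are periodic container health checks ('test': '/openstack/healthcheck' in each container's config). The same check can be triggered by hand; a sketch using the container name from the log, with everything else illustrative:

    import subprocess

    # Runs the container's configured healthcheck command; exit code 0
    # corresponds to health_status=healthy in the events above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True)
    print(result.returncode)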
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.438 257641 INFO nova.network.neutron [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Port 1cd0b9a8-7d4c-48ab-932c-928e5977ab22 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.439 257641 DEBUG nova.network.neutron [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.442 257641 DEBUG nova.network.neutron [-] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.458 257641 DEBUG oslo_concurrency.lockutils [req-4459f883-71cc-4ec1-a1a4-49a39f30fa66 req-5cb46f7d-4de0-4981-ad22-3d34830b0fbd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-02ec8d3d-66e9-47c4-a5a4-04389383ad38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.459 257641 INFO nova.compute.manager [-] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Took 1.00 seconds to deallocate network for instance.
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.508 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.509 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:32.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:32 np0005539550 nova_compute[257631]: 2025-11-29 08:45:32.607 257641 DEBUG oslo_concurrency.processutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:45:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 613 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 12 MiB/s wr, 302 op/s
Nov 29 03:45:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:32.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/331517399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.047 257641 DEBUG oslo_concurrency.processutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
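Nova polls Ceph pool usage by shelling out to the ceph CLI through oslo.concurrency, as the Running cmd/CMD returned pair above shows. A sketch of the same call with the flags from the log, parsing the JSON a caller would consume:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    stats = json.loads(out)
    # Cluster-wide byte counters reported by "ceph df".
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])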
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.055 257641 DEBUG nova.compute.provider_tree [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.075 257641 DEBUG nova.scheduler.client.report [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.108 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
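The inventory dict above is what placement uses to size this node; effective capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged (standard placement arithmetic, not code from nova):

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1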
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.142 257641 INFO nova.scheduler.client.report [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Deleted allocations for instance 02ec8d3d-66e9-47c4-a5a4-04389383ad38
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.287 257641 DEBUG oslo_concurrency.lockutils [None req-0ba46073-f7a4-47d4-93fc-0476a3d0e513 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.397 257641 DEBUG nova.compute.manager [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.398 257641 DEBUG oslo_concurrency.lockutils [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.398 257641 DEBUG oslo_concurrency.lockutils [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.398 257641 DEBUG oslo_concurrency.lockutils [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "02ec8d3d-66e9-47c4-a5a4-04389383ad38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.399 257641 DEBUG nova.compute.manager [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] No waiting events found dispatching network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:45:33 np0005539550 nova_compute[257631]: 2025-11-29 08:45:33.399 257641 WARNING nova.compute.manager [req-cf873eb1-2e8c-4103-9619-0fae76e7126a req-4247af95-2223-4b0b-ae6d-42ef306c21da 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received unexpected event network-vif-plugged-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 for instance with vm_state deleted and task_state None.
Nov 29 03:45:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:33Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:74:19 10.100.0.3
Nov 29 03:45:33 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:33Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:74:19 10.100.0.3
Nov 29 03:45:34 np0005539550 nova_compute[257631]: 2025-11-29 08:45:34.434 257641 DEBUG nova.compute.manager [req-0f114fe6-2be1-4bd1-b2c3-e7cf50520ff5 req-7ee034db-8f0e-4482-8672-1a4074606da9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Received event network-vif-deleted-1cd0b9a8-7d4c-48ab-932c-928e5977ab22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:45:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:34.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 574 MiB data, 1.8 GiB used, 19 GiB / 21 GiB avail; 977 KiB/s rd, 1.7 MiB/s wr, 205 op/s
Nov 29 03:45:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Nov 29 03:45:34 np0005539550 nova_compute[257631]: 2025-11-29 08:45:34.663 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Nov 29 03:45:34 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Nov 29 03:45:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:34.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Nov 29 03:45:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Nov 29 03:45:35 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Nov 29 03:45:35 np0005539550 nova_compute[257631]: 2025-11-29 08:45:35.957 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:45:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:36.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:45:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 545 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.5 MiB/s wr, 330 op/s
Nov 29 03:45:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Nov 29 03:45:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Nov 29 03:45:36 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Nov 29 03:45:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:36.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:37.141 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:45:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:37.142 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:45:37 np0005539550 nova_compute[257631]: 2025-11-29 08:45:37.143 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:45:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:38.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 488 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 793 KiB/s rd, 4.3 MiB/s wr, 321 op/s
Nov 29 03:45:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Nov 29 03:45:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Nov 29 03:45:38 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Nov 29 03:45:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:38.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:39 np0005539550 nova_compute[257631]: 2025-11-29 08:45:39.667 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Nov 29 03:45:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 488 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 731 KiB/s rd, 4.1 MiB/s wr, 256 op/s
Nov 29 03:45:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Nov 29 03:45:40 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Nov 29 03:45:40 np0005539550 nova_compute[257631]: 2025-11-29 08:45:40.717 257641 INFO nova.compute.manager [None req-24e5f40b-4be8-4d3d-b2e8-181edf434a25 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Get console output#033[00m
Nov 29 03:45:40 np0005539550 nova_compute[257631]: 2025-11-29 08:45:40.724 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:45:40 np0005539550 nova_compute[257631]: 2025-11-29 08:45:40.959 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:40.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.811 257641 DEBUG nova.compute.manager [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-changed-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.811 257641 DEBUG nova.compute.manager [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Refreshing instance network info cache due to event network-changed-1dcdca62-3969-4483-aefd-58f07dfb018f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.811 257641 DEBUG oslo_concurrency.lockutils [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.812 257641 DEBUG oslo_concurrency.lockutils [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.812 257641 DEBUG nova.network.neutron [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Refreshing network info cache for port 1dcdca62-3969-4483-aefd-58f07dfb018f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.846 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.847 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.847 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.847 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.848 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.848 257641 INFO nova.compute.manager [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Terminating instance#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.849 257641 DEBUG nova.compute.manager [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:45:41 np0005539550 kernel: tap1dcdca62-39 (unregistering): left promiscuous mode
Nov 29 03:45:41 np0005539550 NetworkManager[49039]: <info>  [1764405941.9109] device (tap1dcdca62-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:45:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:41Z|00964|binding|INFO|Releasing lport 1dcdca62-3969-4483-aefd-58f07dfb018f from this chassis (sb_readonly=0)
Nov 29 03:45:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:41Z|00965|binding|INFO|Setting lport 1dcdca62-3969-4483-aefd-58f07dfb018f down in Southbound
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:41 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:41Z|00966|binding|INFO|Removing iface tap1dcdca62-39 ovn-installed in OVS
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.921 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:41.928 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:3f:dc 10.100.0.14'], port_security=['fa:16:3e:6c:3f:dc 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3941161c-104e-452f-8d56-54600d37d0f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13244464-ec20-4842-bec9-0ac60372e025', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b3b0484057a4e3db51366d29c6b684d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aefb70fb-aad6-4ac2-b50b-5898c331e692', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ca67c27a-85ad-40df-9c83-890f1ece542e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=1dcdca62-3969-4483-aefd-58f07dfb018f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:41.929 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 1dcdca62-3969-4483-aefd-58f07dfb018f in datapath 13244464-ec20-4842-bec9-0ac60372e025 unbound from our chassis#033[00m
Nov 29 03:45:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:41.930 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13244464-ec20-4842-bec9-0ac60372e025, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:41.934 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c641cd3b-007a-4abd-b9e2-1245eb9df6dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:41 np0005539550 nova_compute[257631]: 2025-11-29 08:45:41.934 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:41 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:41.935 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-13244464-ec20-4842-bec9-0ac60372e025 namespace which is not needed anymore#033[00m
Nov 29 03:45:41 np0005539550 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Nov 29 03:45:41 np0005539550 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d000000ca.scope: Consumed 20.679s CPU time.
Nov 29 03:45:41 np0005539550 systemd-machined[216673]: Machine qemu-108-instance-000000ca terminated.
Nov 29 03:45:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:42Z|00967|binding|INFO|Releasing lport 58d6f574-370c-46a0-9e95-f7c81493d948 from this chassis (sb_readonly=0)
Nov 29 03:45:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:42Z|00968|binding|INFO|Releasing lport 96a414db-4904-46be-8b53-cf5cf81a70fe from this chassis (sb_readonly=0)
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.055 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539550 neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025[382065]: [NOTICE]   (382069) : haproxy version is 2.8.14-c23fe91
Nov 29 03:45:42 np0005539550 neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025[382065]: [NOTICE]   (382069) : path to executable is /usr/sbin/haproxy
Nov 29 03:45:42 np0005539550 neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025[382065]: [WARNING]  (382069) : Exiting Master process...
Nov 29 03:45:42 np0005539550 neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025[382065]: [ALERT]    (382069) : Current worker (382071) exited with code 143 (Terminated)
Nov 29 03:45:42 np0005539550 neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025[382065]: [WARNING]  (382069) : All workers exited. Exiting... (0)
Nov 29 03:45:42 np0005539550 systemd[1]: libpod-bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429.scope: Deactivated successfully.
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.120 257641 INFO nova.virt.libvirt.driver [-] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Instance destroyed successfully.#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.121 257641 DEBUG nova.objects.instance [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lazy-loading 'resources' on Instance uuid 3941161c-104e-452f-8d56-54600d37d0f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:42 np0005539550 podman[386500]: 2025-11-29 08:45:42.126764871 +0000 UTC m=+0.067007356 container died bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.143 257641 DEBUG nova.virt.libvirt.vif [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:43:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-18064033',display_name='tempest-TestStampPattern-server-18064033',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-18064033',id=202,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBaYR98uKIjTIgJJtQ2rdiQ8Vh+nL0em9VusryiAjil0FJtSxO+lnk+ODZdl0LgdlhrdeOvooo+j67fYKSbk9O76X3xg1L2IaHTBYV9x6ArSe2Q3HBmSDWpJ9bh8XmHSeQ==',key_name='tempest-TestStampPattern-826635977',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:43:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b3b0484057a4e3db51366d29c6b684d',ramdisk_id='',reservation_id='r-31vy1fku',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-1305027466',owner_user_name='tempest-TestStampPattern-1305027466-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:44:29Z,user_data=None,user_id='01c0b956e2c74d5798d01fc2be0a8bac',uuid=3941161c-104e-452f-8d56-54600d37d0f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.144 257641 DEBUG nova.network.os_vif_util [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converting VIF {"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.145 257641 DEBUG nova.network.os_vif_util [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:3f:dc,bridge_name='br-int',has_traffic_filtering=True,id=1dcdca62-3969-4483-aefd-58f07dfb018f,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dcdca62-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.145 257641 DEBUG os_vif [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:3f:dc,bridge_name='br-int',has_traffic_filtering=True,id=1dcdca62-3969-4483-aefd-58f07dfb018f,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dcdca62-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.144 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.147 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.148 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1dcdca62-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.150 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.152 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:45:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429-userdata-shm.mount: Deactivated successfully.
Nov 29 03:45:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-eab56709855df16aefd273cba1223d696851acb94f485f8cc5dde2603492e8db-merged.mount: Deactivated successfully.
Nov 29 03:45:42 np0005539550 podman[386500]: 2025-11-29 08:45:42.211764292 +0000 UTC m=+0.152006777 container cleanup bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:45:42 np0005539550 systemd[1]: libpod-conmon-bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429.scope: Deactivated successfully.
Nov 29 03:45:42 np0005539550 podman[386536]: 2025-11-29 08:45:42.278812379 +0000 UTC m=+0.047163485 container remove bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.284 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8fdf4002-afed-4da2-b8cc-cf61b0f88102]: (4, ('Sat Nov 29 08:45:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025 (bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429)\nbc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429\nSat Nov 29 08:45:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-13244464-ec20-4842-bec9-0ac60372e025 (bc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429)\nbc180e0aa667ae6eaa357feaa67bd9ae3c2be5126faaec3f503c4c4db7870429\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.286 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0db944ed-4573-4ce5-9610-fd01a86b1efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.287 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13244464-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.325 257641 DEBUG nova.compute.manager [req-7817176e-5a98-4291-940c-0a0b17a20d66 req-0cbf8170-2dbe-4a58-98e4-44c60a252771 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-vif-unplugged-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.326 257641 DEBUG oslo_concurrency.lockutils [req-7817176e-5a98-4291-940c-0a0b17a20d66 req-0cbf8170-2dbe-4a58-98e4-44c60a252771 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.326 257641 DEBUG oslo_concurrency.lockutils [req-7817176e-5a98-4291-940c-0a0b17a20d66 req-0cbf8170-2dbe-4a58-98e4-44c60a252771 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.327 257641 DEBUG oslo_concurrency.lockutils [req-7817176e-5a98-4291-940c-0a0b17a20d66 req-0cbf8170-2dbe-4a58-98e4-44c60a252771 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.327 257641 DEBUG nova.compute.manager [req-7817176e-5a98-4291-940c-0a0b17a20d66 req-0cbf8170-2dbe-4a58-98e4-44c60a252771 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] No waiting events found dispatching network-vif-unplugged-1dcdca62-3969-4483-aefd-58f07dfb018f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.328 257641 DEBUG nova.compute.manager [req-7817176e-5a98-4291-940c-0a0b17a20d66 req-0cbf8170-2dbe-4a58-98e4-44c60a252771 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-vif-unplugged-1dcdca62-3969-4483-aefd-58f07dfb018f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:45:42 np0005539550 kernel: tap13244464-e0: left promiscuous mode
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.375 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.379 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e122a1-52d4-4e41-884c-9a1a113fc8cf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.380 257641 INFO os_vif [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:3f:dc,bridge_name='br-int',has_traffic_filtering=True,id=1dcdca62-3969-4483-aefd-58f07dfb018f,network=Network(13244464-ec20-4842-bec9-0ac60372e025),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1dcdca62-39')#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.392 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9c2af79b-4234-4688-af8c-c7e8ac30fc8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.393 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[429bd4f3-accd-463b-a626-11d832f5b90e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:42Z|00969|binding|INFO|Releasing lport 96a414db-4904-46be-8b53-cf5cf81a70fe from this chassis (sb_readonly=0)
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.414 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.414 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbc5f76-a48c-494e-8b20-b89a9a3c0af0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 886453, 'reachable_time': 29407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386562, 'error': None, 'target': 'ovnmeta-13244464-ec20-4842-bec9-0ac60372e025', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:42 np0005539550 systemd[1]: run-netns-ovnmeta\x2d13244464\x2dec20\x2d4842\x2dbec9\x2d0ac60372e025.mount: Deactivated successfully.
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.417 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-13244464-ec20-4842-bec9-0ac60372e025 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:45:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:42.417 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[49b6a484-4678-425b-8350-8f502c1e1370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.418 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 397 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 368 KiB/s wr, 183 op/s
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.805 257641 INFO nova.virt.libvirt.driver [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Deleting instance files /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5_del#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.806 257641 INFO nova.virt.libvirt.driver [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Deletion of /var/lib/nova/instances/3941161c-104e-452f-8d56-54600d37d0f5_del complete#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.852 257641 INFO nova.compute.manager [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.852 257641 DEBUG oslo.service.loopingcall [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.853 257641 DEBUG nova.compute.manager [-] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:45:42 np0005539550 nova_compute[257631]: 2025-11-29 08:45:42.853 257641 DEBUG nova.network.neutron [-] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:45:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.380 257641 INFO nova.compute.manager [None req-13726501-c020-49eb-b9a3-bd9f288f4b48 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Get console output#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.384 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:45:44 np0005539550 podman[386572]: 2025-11-29 08:45:44.392604399 +0000 UTC m=+0.115949015 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.445 257641 DEBUG nova.compute.manager [req-d5df8c2d-30ea-4eca-9fc8-7ad910358540 req-32fc483e-e3c6-40a1-8017-cd24c12d8389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.445 257641 DEBUG oslo_concurrency.lockutils [req-d5df8c2d-30ea-4eca-9fc8-7ad910358540 req-32fc483e-e3c6-40a1-8017-cd24c12d8389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "3941161c-104e-452f-8d56-54600d37d0f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.445 257641 DEBUG oslo_concurrency.lockutils [req-d5df8c2d-30ea-4eca-9fc8-7ad910358540 req-32fc483e-e3c6-40a1-8017-cd24c12d8389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.446 257641 DEBUG oslo_concurrency.lockutils [req-d5df8c2d-30ea-4eca-9fc8-7ad910358540 req-32fc483e-e3c6-40a1-8017-cd24c12d8389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.446 257641 DEBUG nova.compute.manager [req-d5df8c2d-30ea-4eca-9fc8-7ad910358540 req-32fc483e-e3c6-40a1-8017-cd24c12d8389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] No waiting events found dispatching network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.446 257641 WARNING nova.compute.manager [req-d5df8c2d-30ea-4eca-9fc8-7ad910358540 req-32fc483e-e3c6-40a1-8017-cd24c12d8389 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received unexpected event network-vif-plugged-1dcdca62-3969-4483-aefd-58f07dfb018f for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:45:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 337 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 280 KiB/s wr, 161 op/s
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.669 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.759 257641 DEBUG nova.network.neutron [-] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.786 257641 INFO nova.compute.manager [-] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Took 1.93 seconds to deallocate network for instance.#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.831 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.834 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.835 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.853 257641 DEBUG nova.compute.manager [req-bda9fc21-4536-4b79-b3f5-6d6a0d8e8119 req-861d1c67-f807-433a-b09d-995e9f7411a0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Received event network-vif-deleted-1dcdca62-3969-4483-aefd-58f07dfb018f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:44 np0005539550 nova_compute[257631]: 2025-11-29 08:45:44.940 257641 DEBUG oslo_concurrency.processutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:44.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2509355315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.379 257641 DEBUG oslo_concurrency.processutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.386 257641 DEBUG nova.compute.provider_tree [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.408 257641 DEBUG nova.scheduler.client.report [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.441 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.467 257641 INFO nova.scheduler.client.report [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Deleted allocations for instance 3941161c-104e-452f-8d56-54600d37d0f5#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.548 257641 DEBUG oslo_concurrency.lockutils [None req-a312336d-4b2c-4f95-a1f9-0123c4a4b9c4 01c0b956e2c74d5798d01fc2be0a8bac 3b3b0484057a4e3db51366d29c6b684d - - default default] Lock "3941161c-104e-452f-8d56-54600d37d0f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Nov 29 03:45:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Nov 29 03:45:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.640 257641 DEBUG nova.network.neutron [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updated VIF entry in instance network info cache for port 1dcdca62-3969-4483-aefd-58f07dfb018f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.641 257641 DEBUG nova.network.neutron [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Updating instance_info_cache with network_info: [{"id": "1dcdca62-3969-4483-aefd-58f07dfb018f", "address": "fa:16:3e:6c:3f:dc", "network": {"id": "13244464-ec20-4842-bec9-0ac60372e025", "bridge": "br-int", "label": "tempest-TestStampPattern-1903790850-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b3b0484057a4e3db51366d29c6b684d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1dcdca62-39", "ovs_interfaceid": "1dcdca62-3969-4483-aefd-58f07dfb018f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.658 257641 DEBUG oslo_concurrency.lockutils [req-cdbf7422-7759-4fcc-9120-6d892c237cad req-ed4457dd-42f6-4321-a330-e723dbd187f1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-3941161c-104e-452f-8d56-54600d37d0f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:45 np0005539550 NetworkManager[49039]: <info>  [1764405945.9204] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/420)
Nov 29 03:45:45 np0005539550 NetworkManager[49039]: <info>  [1764405945.9214] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.926 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405930.9252822, 02ec8d3d-66e9-47c4-a5a4-04389383ad38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.926 257641 INFO nova.compute.manager [-] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:45:45 np0005539550 nova_compute[257631]: 2025-11-29 08:45:45.949 257641 DEBUG nova.compute.manager [None req-629e4c98-90f5-4439-a7a7-5e50ee9daded - - - - - -] [instance: 02ec8d3d-66e9-47c4-a5a4-04389383ad38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:45:46 np0005539550 nova_compute[257631]: 2025-11-29 08:45:46.216 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:46 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:46Z|00970|binding|INFO|Releasing lport 96a414db-4904-46be-8b53-cf5cf81a70fe from this chassis (sb_readonly=0)
Nov 29 03:45:46 np0005539550 nova_compute[257631]: 2025-11-29 08:45:46.246 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:46 np0005539550 nova_compute[257631]: 2025-11-29 08:45:46.436 257641 INFO nova.compute.manager [None req-0730cce4-e5d8-40ea-9acf-d86bd80bf0ee 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Get console output#033[00m
Nov 29 03:45:46 np0005539550 nova_compute[257631]: 2025-11-29 08:45:46.442 329043 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 03:45:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:46.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 315 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 105 KiB/s rd, 2.1 MiB/s wr, 156 op/s
Nov 29 03:45:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.306 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.325 257641 DEBUG nova.compute.manager [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-changed-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.326 257641 DEBUG nova.compute.manager [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Refreshing instance network info cache due to event network-changed-ec078636-08a9-4e73-b55f-6a3a4d83d29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.326 257641 DEBUG oslo_concurrency.lockutils [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.326 257641 DEBUG oslo_concurrency.lockutils [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.326 257641 DEBUG nova.network.neutron [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Refreshing network info cache for port ec078636-08a9-4e73-b55f-6a3a4d83d29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.395 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.396 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.396 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.396 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.397 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.398 257641 INFO nova.compute.manager [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Terminating instance#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.399 257641 DEBUG nova.compute.manager [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:45:47 np0005539550 kernel: tapec078636-08 (unregistering): left promiscuous mode
Nov 29 03:45:47 np0005539550 NetworkManager[49039]: <info>  [1764405947.4504] device (tapec078636-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.459 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:47Z|00971|binding|INFO|Releasing lport ec078636-08a9-4e73-b55f-6a3a4d83d29b from this chassis (sb_readonly=0)
Nov 29 03:45:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:47Z|00972|binding|INFO|Setting lport ec078636-08a9-4e73-b55f-6a3a4d83d29b down in Southbound
Nov 29 03:45:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:45:47Z|00973|binding|INFO|Removing iface tapec078636-08 ovn-installed in OVS
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.460 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.469 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:74:19 10.100.0.3'], port_security=['fa:16:3e:53:74:19 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '7233dfec-5360-4ae2-a5c7-6a82ac3145c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0471b9b208874403aa3f0fbe7504ad19', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8348f212-c25f-450d-beda-4c2e2c4a398c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ce71f536-5923-4fc0-8270-107f14f9127c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ec078636-08a9-4e73-b55f-6a3a4d83d29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.470 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ec078636-08a9-4e73-b55f-6a3a4d83d29b in datapath 3ba224d7-2b01-485d-af2a-07d3c8407c0a unbound from our chassis#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.472 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3ba224d7-2b01-485d-af2a-07d3c8407c0a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.472 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6081b70f-a075-430b-9a02-2db4c8b34666]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.473 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a namespace which is not needed anymore#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.483 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000d1.scope: Deactivated successfully.
Nov 29 03:45:47 np0005539550 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d000000d1.scope: Consumed 14.557s CPU time.
Nov 29 03:45:47 np0005539550 systemd-machined[216673]: Machine qemu-111-instance-000000d1 terminated.
Nov 29 03:45:47 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [NOTICE]   (385752) : haproxy version is 2.8.14-c23fe91
Nov 29 03:45:47 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [NOTICE]   (385752) : path to executable is /usr/sbin/haproxy
Nov 29 03:45:47 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [WARNING]  (385752) : Exiting Master process...
Nov 29 03:45:47 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [WARNING]  (385752) : Exiting Master process...
Nov 29 03:45:47 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [ALERT]    (385752) : Current worker (385754) exited with code 143 (Terminated)
Nov 29 03:45:47 np0005539550 neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a[385748]: [WARNING]  (385752) : All workers exited. Exiting... (0)
Nov 29 03:45:47 np0005539550 systemd[1]: libpod-4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d.scope: Deactivated successfully.
Nov 29 03:45:47 np0005539550 podman[386649]: 2025-11-29 08:45:47.600031354 +0000 UTC m=+0.045199135 container died 4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.617 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.621 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-713b71c082c712738d77f232c281d8cb2933e747b3d774981be293b1fdf64ac1-merged.mount: Deactivated successfully.
Nov 29 03:45:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d-userdata-shm.mount: Deactivated successfully.
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.630 257641 INFO nova.virt.libvirt.driver [-] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Instance destroyed successfully.#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.631 257641 DEBUG nova.objects.instance [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lazy-loading 'resources' on Instance uuid 7233dfec-5360-4ae2-a5c7-6a82ac3145c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:45:47 np0005539550 podman[386649]: 2025-11-29 08:45:47.642125319 +0000 UTC m=+0.087293100 container cleanup 4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:45:47 np0005539550 systemd[1]: libpod-conmon-4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d.scope: Deactivated successfully.
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.655 257641 DEBUG nova.virt.libvirt.vif [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:45:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1945607048',display_name='tempest-TestNetworkBasicOps-server-1945607048',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1945607048',id=209,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA8YYTuPLxWjRdH48dYFWOIbHm3QmgCaIcww5tnBh5TfdDmwK+5KMycSKSEzho4NKJWn4t6JuuudCktUX/JC1QI6JOOE2xZ7zGY3WCixaSY8j6QobQEzNZiXhCr6i7pUpw==',key_name='tempest-TestNetworkBasicOps-1870735197',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:45:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0471b9b208874403aa3f0fbe7504ad19',ramdisk_id='',reservation_id='r-x000i7t0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-828399474',owner_user_name='tempest-TestNetworkBasicOps-828399474-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:45:20Z,user_data=None,user_id='4774e2851bc6407cb0fcde15bd24d1b3',uuid=7233dfec-5360-4ae2-a5c7-6a82ac3145c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.656 257641 DEBUG nova.network.os_vif_util [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converting VIF {"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.656 257641 DEBUG nova.network.os_vif_util [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:74:19,bridge_name='br-int',has_traffic_filtering=True,id=ec078636-08a9-4e73-b55f-6a3a4d83d29b,network=Network(3ba224d7-2b01-485d-af2a-07d3c8407c0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec078636-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.657 257641 DEBUG os_vif [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:74:19,bridge_name='br-int',has_traffic_filtering=True,id=ec078636-08a9-4e73-b55f-6a3a4d83d29b,network=Network(3ba224d7-2b01-485d-af2a-07d3c8407c0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec078636-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.659 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.659 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapec078636-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.660 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.662 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.665 257641 INFO os_vif [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:74:19,bridge_name='br-int',has_traffic_filtering=True,id=ec078636-08a9-4e73-b55f-6a3a4d83d29b,network=Network(3ba224d7-2b01-485d-af2a-07d3c8407c0a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec078636-08')#033[00m
Nov 29 03:45:47 np0005539550 podman[386690]: 2025-11-29 08:45:47.71089516 +0000 UTC m=+0.041432910 container remove 4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.717 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3c15aa12-b966-4166-a780-831661c8278d]: (4, ('Sat Nov 29 08:45:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a (4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d)\n4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d\nSat Nov 29 08:45:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a (4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d)\n4dd791568a97938ca1adda819c32b37dc8889241d239064791a254730e2f722d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.718 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dece422c-ebc0-4801-81f9-34e5daf32ca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.719 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ba224d7-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.720 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 kernel: tap3ba224d7-20: left promiscuous mode
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.734 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 nova_compute[257631]: 2025-11-29 08:45:47.736 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.737 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a3357d80-ac13-4aca-a4aa-3af44f593f5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.753 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2b14700e-c7d3-4f03-8af8-979a2fe44db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.754 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9edf38-99bd-4de2-9c08-e700dd5b026f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.774 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c191b47b-3167-45da-b112-7be452ca3678]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 896725, 'reachable_time': 44522, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386723, 'error': None, 'target': 'ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:47 np0005539550 systemd[1]: run-netns-ovnmeta\x2d3ba224d7\x2d2b01\x2d485d\x2daf2a\x2d07d3c8407c0a.mount: Deactivated successfully.
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.777 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3ba224d7-2b01-485d-af2a-07d3c8407c0a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:45:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:45:47.777 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[2c66e288-7308-4de2-8b27-171f18deb2bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.038 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.096 257641 INFO nova.virt.libvirt.driver [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Deleting instance files /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2_del#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.097 257641 INFO nova.virt.libvirt.driver [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Deletion of /var/lib/nova/instances/7233dfec-5360-4ae2-a5c7-6a82ac3145c2_del complete#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.176 257641 INFO nova.compute.manager [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.177 257641 DEBUG oslo.service.loopingcall [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.177 257641 DEBUG nova.compute.manager [-] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.177 257641 DEBUG nova.network.neutron [-] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:45:48 np0005539550 nova_compute[257631]: 2025-11-29 08:45:48.426 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:48.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 2.7 MiB/s wr, 174 op/s
Nov 29 03:45:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:48.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.202 257641 DEBUG nova.network.neutron [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updated VIF entry in instance network info cache for port ec078636-08a9-4e73-b55f-6a3a4d83d29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.202 257641 DEBUG nova.network.neutron [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updating instance_info_cache with network_info: [{"id": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "address": "fa:16:3e:53:74:19", "network": {"id": "3ba224d7-2b01-485d-af2a-07d3c8407c0a", "bridge": "br-int", "label": "tempest-network-smoke--1321056144", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0471b9b208874403aa3f0fbe7504ad19", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec078636-08", "ovs_interfaceid": "ec078636-08a9-4e73-b55f-6a3a4d83d29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.250 257641 DEBUG oslo_concurrency.lockutils [req-2c4e5159-a4c2-46ee-a941-47d27992a264 req-7b59625b-46e0-41d1-b15f-9d7e6a46e279 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-7233dfec-5360-4ae2-a5c7-6a82ac3145c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.262 257641 DEBUG nova.network.neutron [-] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.276 257641 INFO nova.compute.manager [-] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Took 1.10 seconds to deallocate network for instance.#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.331 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.332 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.367 257641 DEBUG nova.compute.manager [req-cfb61a12-b8e2-422a-ae91-e29ac2f031b1 req-b56b62b7-1181-4402-b082-d68dcab9d4cd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-vif-deleted-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.405 257641 DEBUG oslo_concurrency.processutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.514 257641 DEBUG nova.compute.manager [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-vif-unplugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.514 257641 DEBUG oslo_concurrency.lockutils [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.514 257641 DEBUG oslo_concurrency.lockutils [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.515 257641 DEBUG oslo_concurrency.lockutils [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.515 257641 DEBUG nova.compute.manager [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] No waiting events found dispatching network-vif-unplugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.515 257641 WARNING nova.compute.manager [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received unexpected event network-vif-unplugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.515 257641 DEBUG nova.compute.manager [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received event network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.515 257641 DEBUG oslo_concurrency.lockutils [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.515 257641 DEBUG oslo_concurrency.lockutils [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.516 257641 DEBUG oslo_concurrency.lockutils [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.516 257641 DEBUG nova.compute.manager [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] No waiting events found dispatching network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.516 257641 WARNING nova.compute.manager [req-3e2f7a61-6ff9-40b5-b3a7-5869ca97ab0d req-e8e806ec-f617-4bdd-a99e-a93d36a33f2c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Received unexpected event network-vif-plugged-ec078636-08a9-4e73-b55f-6a3a4d83d29b for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.672 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944757380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.826 257641 DEBUG oslo_concurrency.processutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.833 257641 DEBUG nova.compute.provider_tree [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.849 257641 DEBUG nova.scheduler.client.report [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.890 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.948 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.949 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.954 257641 INFO nova.scheduler.client.report [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Deleted allocations for instance 7233dfec-5360-4ae2-a5c7-6a82ac3145c2#033[00m
Nov 29 03:45:49 np0005539550 nova_compute[257631]: 2025-11-29 08:45:49.979 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:45:50 np0005539550 nova_compute[257631]: 2025-11-29 08:45:50.041 257641 DEBUG oslo_concurrency.lockutils [None req-1de8368e-1175-4916-b0d4-83e59ab5802c 4774e2851bc6407cb0fcde15bd24d1b3 0471b9b208874403aa3f0fbe7504ad19 - - default default] Lock "7233dfec-5360-4ae2-a5c7-6a82ac3145c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:50.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 95 KiB/s rd, 2.2 MiB/s wr, 139 op/s
Nov 29 03:45:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:50 np0005539550 nova_compute[257631]: 2025-11-29 08:45:50.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:52.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 85 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Nov 29 03:45:52 np0005539550 nova_compute[257631]: 2025-11-29 08:45:52.660 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:53 np0005539550 nova_compute[257631]: 2025-11-29 08:45:53.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:53 np0005539550 nova_compute[257631]: 2025-11-29 08:45:53.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
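
The _reclaim_queued_deletes task is a no-op on this host because reclaim_instance_interval is left at its default of 0; soft-deleted instances are only reclaimed by this task when the option is positive, otherwise deletes take effect immediately. A simplified sketch of the guard the DEBUG line reflects (not nova's code):

    reclaim_instance_interval = 0  # nova.conf [DEFAULT] option, in seconds

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: purge SOFT_DELETED instances older than the interval

    _reclaim_queued_deletes()
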
Nov 29 03:45:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:45:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:54.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:45:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 2.2 MiB/s wr, 109 op/s
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.674 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.941 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.942 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.942 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:45:54 np0005539550 nova_compute[257631]: 2025-11-29 08:45:54.942 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
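
With RBD-backed ephemeral storage, the resource audit measures disk by shelling out to ceph df exactly as the "Running cmd" line shows; the ceph-mon handle_command/audit entries a few lines below are the server side of the same call. A minimal reproduction with oslo.concurrency's processutils (the same module the log cites; the JSON keys are standard ceph df output):

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    avail_gib = stats['stats']['total_avail_bytes'] / (1 << 30)
    print(f"cluster avail: {avail_gib:.1f} GiB")
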
Nov 29 03:45:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:55.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713425732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.410 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.563 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.564 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4109MB free_disk=20.921817779541016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.565 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.565 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:45:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.656 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.656 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.673 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.692 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.692 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
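
The inventory payload above is what placement will schedule against: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out for the values in the log, this host can overcommit to 32 vCPUs while memory and disk stay close to physical:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1
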
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.712 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.750 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:45:55 np0005539550 nova_compute[257631]: 2025-11-29 08:45:55.777 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:45:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:45:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600295720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:45:56 np0005539550 nova_compute[257631]: 2025-11-29 08:45:56.246 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:45:56 np0005539550 nova_compute[257631]: 2025-11-29 08:45:56.253 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:45:56 np0005539550 nova_compute[257631]: 2025-11-29 08:45:56.266 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:45:56 np0005539550 nova_compute[257631]: 2025-11-29 08:45:56.298 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:45:56 np0005539550 nova_compute[257631]: 2025-11-29 08:45:56.299 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:45:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:56.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 MiB/s wr, 162 op/s
Nov 29 03:45:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:57.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:57 np0005539550 nova_compute[257631]: 2025-11-29 08:45:57.118 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405942.101043, 3941161c-104e-452f-8d56-54600d37d0f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:45:57 np0005539550 nova_compute[257631]: 2025-11-29 08:45:57.119 257641 INFO nova.compute.manager [-] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:45:57 np0005539550 nova_compute[257631]: 2025-11-29 08:45:57.144 257641 DEBUG nova.compute.manager [None req-f16b06d0-f9aa-4192-b3d8-504eba07f6d7 - - - - - -] [instance: 3941161c-104e-452f-8d56-54600d37d0f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
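
The "VM Stopped (Lifecycle Event)" lines originate from libvirt domain lifecycle callbacks that nova's driver re-emits as LifecycleEvent objects and then reconciles against the database via _get_power_state. A minimal sketch of subscribing to the underlying libvirt events with libvirt-python (connection URI assumed; this is not nova's code):

    import libvirt

    def on_lifecycle(conn, dom, event, detail, opaque):
        # VIR_DOMAIN_EVENT_STOPPED corresponds to the "=> Stopped" events above
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print(f"[instance: {dom.UUIDString()}] VM Stopped (Lifecycle Event)")

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
    while True:               # event loop; nova runs this in a native thread
        libvirt.virEventRunDefaultImpl()
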
Nov 29 03:45:57 np0005539550 nova_compute[257631]: 2025-11-29 08:45:57.298 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:45:57 np0005539550 nova_compute[257631]: 2025-11-29 08:45:57.661 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:45:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:45:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:45:58.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 423 KiB/s wr, 116 op/s
Nov 29 03:45:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:45:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:45:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:45:59.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:45:59
Nov 29 03:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Nov 29 03:45:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
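
These five lines are one pass of the ceph-mgr balancer: upmap mode, a 5% max-misplaced budget, and "prepared 0/10 changes", meaning the PGs of the listed pools are already evenly mapped so no upmap entries were generated. The same state can be read back with the standard balancer status command, wrapped here in Python:

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'balancer', 'status', '--format=json'],
        capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print(status.get('mode'), status.get('active'))  # e.g. "upmap" True
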
Nov 29 03:45:59 np0005539550 nova_compute[257631]: 2025-11-29 08:45:59.676 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:45:59 np0005539550 nova_compute[257631]: 2025-11-29 08:45:59.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:00.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 103 op/s
Nov 29 03:46:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:01.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:02.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 104 op/s
Nov 29 03:46:02 np0005539550 nova_compute[257631]: 2025-11-29 08:46:02.628 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764405947.6271214, 7233dfec-5360-4ae2-a5c7-6a82ac3145c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:02 np0005539550 nova_compute[257631]: 2025-11-29 08:46:02.628 257641 INFO nova.compute.manager [-] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:46:02 np0005539550 nova_compute[257631]: 2025-11-29 08:46:02.652 257641 DEBUG nova.compute.manager [None req-947c7667-8c8e-451a-b18b-06490f70e8d5 - - - - - -] [instance: 7233dfec-5360-4ae2-a5c7-6a82ac3145c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:02 np0005539550 nova_compute[257631]: 2025-11-29 08:46:02.662 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:03.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:03 np0005539550 podman[386852]: 2025-11-29 08:46:03.355988784 +0000 UTC m=+0.078573640 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:46:03 np0005539550 podman[386851]: 2025-11-29 08:46:03.367676959 +0000 UTC m=+0.090228184 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
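
The two podman records are periodic healthcheck results: each container's config_data mounts a probe script into the container at /openstack and declares 'test': '/openstack/healthcheck', and health_status=healthy with health_failing_streak=0 means the most recent run passed. The same check can be triggered by hand; a sketch using the container name from the log:

    import subprocess

    rc = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent']).returncode
    print('healthy' if rc == 0 else 'unhealthy')  # exit code 0 means healthy
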
Nov 29 03:46:03 np0005539550 nova_compute[257631]: 2025-11-29 08:46:03.732 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:04.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 247 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 142 KiB/s wr, 80 op/s
Nov 29 03:46:04 np0005539550 nova_compute[257631]: 2025-11-29 08:46:04.679 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:05.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:06.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 258 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 690 KiB/s wr, 95 op/s
Nov 29 03:46:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:07.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:07 np0005539550 nova_compute[257631]: 2025-11-29 08:46:07.664 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:07 np0005539550 nova_compute[257631]: 2025-11-29 08:46:07.945 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:46:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:46:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:08.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 266 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 542 KiB/s rd, 1.3 MiB/s wr, 41 op/s
Nov 29 03:46:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:09.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:46:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:46:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:46:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:46:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:46:09 np0005539550 nova_compute[257631]: 2025-11-29 08:46:09.682 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:10.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 278 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 198 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Nov 29 03:46:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:11.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:11 np0005539550 nova_compute[257631]: 2025-11-29 08:46:11.995 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:11 np0005539550 nova_compute[257631]: 2025-11-29 08:46:11.995 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.016 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.135 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.136 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.144 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.144 257641 INFO nova.compute.claims [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.286 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:12.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 214 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.667 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3188528991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.778 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.784 257641 DEBUG nova.compute.provider_tree [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.797 257641 DEBUG nova.scheduler.client.report [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.816 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.816 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.857 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.858 257641 DEBUG nova.network.neutron [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.875 257641 INFO nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.892 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.989 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.992 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:46:12 np0005539550 nova_compute[257631]: 2025-11-29 08:46:12.993 257641 INFO nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Creating image(s)#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.027 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:13.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.056 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.087 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.092 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.195 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
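
Before using the cached base image, nova probes it with qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child's address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the compute service. A direct reproduction of the logged command, parsing the JSON it returns:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
        '--as=1073741824', '--cpu=30', '--',
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488',
        '--force-share', '--output=json')
    info = json.loads(out)
    print(info['format'], info['virtual-size'])
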
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.196 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.197 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.198 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.230 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.234 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5c287733-d168-4706-9ec4-10a10472896c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.516 257641 DEBUG nova.policy [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '344d31c1be5d41b1af2cc7427e8e0457', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '053dbfa33fdc48d6ad41e28ea7a34860', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
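
The policy check failure above is expected for a plain project member: network:attach_external_network is an admin-only rule by default, and the request credentials carry only the member and reader roles, so the instance simply cannot attach directly to an external network. A hedged sketch with oslo.policy (the registered check string is illustrative; nova registers its own defaults):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))  # illustrative default

    creds = {'roles': ['member', 'reader'],
             'project_id': '053dbfa33fdc48d6ad41e28ea7a34860'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))  # False
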
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.519 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 5c287733-d168-4706-9ec4-10a10472896c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.607 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] resizing rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
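
The Ceph-backed boot path is visible end to end here: rbd import pushes the cached base file into the vms pool as <uuid>_disk, then nova resizes the image up to the flavor's 1 GiB root disk. A sketch of the resize step with the python rados/rbd bindings, reusing the client name and conf path from the log:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')
        try:
            with rbd.Image(ioctx, '5c287733-d168-4706-9ec4-10a10472896c_disk') as image:
                image.resize(1073741824)  # bytes, matching the logged target
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
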
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.723 257641 DEBUG nova.objects.instance [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'migration_context' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.738 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.738 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Ensure instance console log exists: /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.739 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.739 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:13 np0005539550 nova_compute[257631]: 2025-11-29 08:46:13.740 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:14 np0005539550 nova_compute[257631]: 2025-11-29 08:46:14.083 257641 DEBUG nova.network.neutron [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Successfully created port: 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
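
Port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b is created by nova on the instance's behalf through the neutron API (_create_port_minimal), then bound and updated, as the "Successfully updated port" line further down shows. A rough equivalent with openstacksdk, assuming a configured clouds.yaml entry and a known network UUID (both assumptions, not in the log):

    import openstack

    conn = openstack.connect(cloud='default')   # assumed clouds.yaml entry
    port = conn.network.create_port(
        network_id='<network-uuid>')            # placeholder, not a real UUID
    print(port.id)
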
Nov 29 03:46:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:14.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 257 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 2.5 MiB/s wr, 84 op/s
Nov 29 03:46:14 np0005539550 nova_compute[257631]: 2025-11-29 08:46:14.684 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:15 np0005539550 nova_compute[257631]: 2025-11-29 08:46:15.304 257641 DEBUG nova.network.neutron [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Successfully updated port: 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:46:15 np0005539550 nova_compute[257631]: 2025-11-29 08:46:15.328 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:15 np0005539550 nova_compute[257631]: 2025-11-29 08:46:15.328 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquired lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:15 np0005539550 nova_compute[257631]: 2025-11-29 08:46:15.328 257641 DEBUG nova.network.neutron [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:46:15 np0005539550 podman[387132]: 2025-11-29 08:46:15.359146857 +0000 UTC m=+0.095365354 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:46:15 np0005539550 nova_compute[257631]: 2025-11-29 08:46:15.509 257641 DEBUG nova.network.neutron [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:46:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:16.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 217 KiB/s rd, 3.2 MiB/s wr, 93 op/s
Nov 29 03:46:16 np0005539550 nova_compute[257631]: 2025-11-29 08:46:16.757 257641 DEBUG nova.compute.manager [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-changed-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:16 np0005539550 nova_compute[257631]: 2025-11-29 08:46:16.757 257641 DEBUG nova.compute.manager [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Refreshing instance network info cache due to event network-changed-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:46:16 np0005539550 nova_compute[257631]: 2025-11-29 08:46:16.758 257641 DEBUG oslo_concurrency.lockutils [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.052 257641 DEBUG nova.network.neutron [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updating instance_info_cache with network_info: [{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.076 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Releasing lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.077 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance network_info: |[{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
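The network_info blob nova-compute logs here is plain JSON, so the fields the rest of the spawn path cares about (port id, MAC, fixed IP, MTU) can be pulled out directly. A small sketch against a trimmed copy of the logged entry:

```python
# Extract the key fields from the network_info JSON logged above.
# The literal below is a trimmed copy of the single VIF entry.
import json

network_info_json = """[{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b",
  "address": "fa:16:3e:eb:84:f7",
  "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494",
    "subnets": [{"cidr": "10.100.0.0/28",
      "ips": [{"address": "10.100.0.10", "type": "fixed"}]}],
    "meta": {"mtu": 1442}}}]"""

vif = json.loads(network_info_json)[0]
print(vif["id"])       # port UUID
print(vif["address"])  # fa:16:3e:eb:84:f7
print(vif["network"]["subnets"][0]["ips"][0]["address"])  # 10.100.0.10
print(vif["network"]["meta"]["mtu"])                      # 1442
```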
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.077 257641 DEBUG oslo_concurrency.lockutils [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.077 257641 DEBUG nova.network.neutron [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Refreshing network info cache for port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.079 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Start _get_guest_xml network_info=[{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.084 257641 WARNING nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.093 257641 DEBUG nova.virt.libvirt.host [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.094 257641 DEBUG nova.virt.libvirt.host [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.099 257641 DEBUG nova.virt.libvirt.host [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.100 257641 DEBUG nova.virt.libvirt.host [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
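The two probe pairs above show nova checking where the CPU cgroup controller lives: the v1 check comes back "CPU controller missing" and the v2 check succeeds, as expected on an EL9 host running the unified hierarchy. The v2 side of that probe reduces to reading the root controllers file; a sketch:

```python
# Sketch of the cgroup v2 check behind _has_cgroupsv2_cpu_controller:
# on a unified-hierarchy host the enabled controllers are listed in
# /sys/fs/cgroup/cgroup.controllers.
from pathlib import Path

def has_cgroupsv2_cpu_controller() -> bool:
    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    if not controllers.exists():
        return False  # not booted with cgroup v2
    return "cpu" in controllers.read_text().split()

print(has_cgroupsv2_cpu_controller())  # True on this host, per the log
```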
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.101 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.101 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.101 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.102 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.102 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.102 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.102 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.102 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.103 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.103 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.103 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.103 257641 DEBUG nova.virt.hardware [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
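The nova.virt.hardware trace above walks topology selection end to end: flavor and image impose no limits or preferences (all 0:0:0), the limits therefore default to 65536 per dimension, and for a 1-vCPU guest the only factorization is sockets=1, cores=1, threads=1, which is what lands in the domain XML below. The enumeration step is, in essence:

```python
# Not nova's exact code: a sketch of the "possible topologies" search in
# the log, i.e., all (sockets, cores, threads) triples whose product is
# the vCPU count, capped at the logged 65536-per-dimension limit.
def possible_topologies(vcpus: int, max_each: int = 65536):
    bound = min(vcpus, max_each)
    return [
        (s, c, t)
        for s in range(1, bound + 1)
        for c in range(1, bound + 1)
        for t in range(1, bound + 1)
        if s * c * t == vcpus
    ]

print(possible_topologies(1))  # [(1, 1, 1)] -> "Got 1 possible topologies"
```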
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.106 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:46:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3374480825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.556 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.583 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.587 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:17 np0005539550 nova_compute[257631]: 2025-11-29 08:46:17.672 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:46:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/68238377' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.060 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
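Both ceph mon dump subprocess calls above (one for the root disk, one for the config-drive image) are how nova's RBD backend learns the monitor addresses that later appear in the <host> elements of the disk XML. The command line is logged verbatim and can be rerun standalone; the JSON field names used below are the usual mon-dump schema, which the log itself does not show:

```python
# Re-run the exact command logged by oslo_concurrency.processutils and
# list the monitors. "mons"/"name"/"public_addr" are the usual mon-dump
# JSON fields (an assumption; the log only shows the command and rc=0).
import json
import subprocess

out = subprocess.run(
    ["ceph", "mon", "dump", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True, capture_output=True, text=True,
).stdout

for mon in json.loads(out)["mons"]:
    print(mon["name"], mon["public_addr"])
```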
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.062 257641 DEBUG nova.virt.libvirt.vif [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1551813917',display_name='tempest-TestServerAdvancedOps-server-1551813917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1551813917',id=211,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='053dbfa33fdc48d6ad41e28ea7a34860',ramdisk_id='',reservation_id='r-l0d1ky0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-979394289',owner_user_name='tempest-TestServerAdvancedOps-979394289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:12Z,user_data=None,user_id='344d31c1be5d41b1af2cc7427e8e0457',uuid=5c287733-d168-4706-9ec4-10a10472896c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.062 257641 DEBUG nova.network.os_vif_util [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converting VIF {"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.063 257641 DEBUG nova.network.os_vif_util [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.064 257641 DEBUG nova.objects.instance [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.088 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <uuid>5c287733-d168-4706-9ec4-10a10472896c</uuid>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <name>instance-000000d3</name>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestServerAdvancedOps-server-1551813917</nova:name>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:46:17</nova:creationTime>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:user uuid="344d31c1be5d41b1af2cc7427e8e0457">tempest-TestServerAdvancedOps-979394289-project-member</nova:user>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:project uuid="053dbfa33fdc48d6ad41e28ea7a34860">tempest-TestServerAdvancedOps-979394289</nova:project>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <nova:port uuid="8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <entry name="serial">5c287733-d168-4706-9ec4-10a10472896c</entry>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <entry name="uuid">5c287733-d168-4706-9ec4-10a10472896c</entry>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5c287733-d168-4706-9ec4-10a10472896c_disk">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/5c287733-d168-4706-9ec4-10a10472896c_disk.config">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:eb:84:f7"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <target dev="tap8f35a6af-5d"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/console.log" append="off"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:46:18 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:46:18 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:46:18 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:46:18 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
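Everything between the "Start _get_guest_xml" and "End _get_guest_xml" markers is a complete libvirt domain definition: a 128 MiB, 1-vCPU q35 guest with a Nehalem CPU model, an RBD-backed virtio root disk and SATA config-drive CD-ROM pointed at the three monitors, an OVS tap with MTU 1442, a pty serial console logged to console.log, VNC graphics, and a virtio RNG fed from /dev/urandom. Turning such XML into a running guest is a defineXML/create pair in the libvirt Python binding; a sketch, with domain_xml assumed to hold the text above:

```python
# Sketch: define and boot a domain from XML like the one logged above,
# via the standard libvirt-python binding. `domain_xml` is assumed to
# contain the <domain type="kvm"> ... </domain> text from the log.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.defineXML(domain_xml)  # persist instance-000000d3
    dom.create()                      # start the guest
    print(dom.name(), dom.ID())
finally:
    conn.close()
```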
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.089 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Preparing to wait for external event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.090 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.090 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.090 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.091 257641 DEBUG nova.virt.libvirt.vif [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:46:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1551813917',display_name='tempest-TestServerAdvancedOps-server-1551813917',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1551813917',id=211,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='053dbfa33fdc48d6ad41e28ea7a34860',ramdisk_id='',reservation_id='r-l0d1ky0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-979394289',owner_user_name='tempest-TestServerAdvancedOps-979394289-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:46:12Z,user_data=None,user_id='344d31c1be5d41b1af2cc7427e8e0457',uuid=5c287733-d168-4706-9ec4-10a10472896c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.092 257641 DEBUG nova.network.os_vif_util [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converting VIF {"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.093 257641 DEBUG nova.network.os_vif_util [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.093 257641 DEBUG os_vif [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.094 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.095 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.096 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.100 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.101 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f35a6af-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.102 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f35a6af-5d, col_values=(('external_ids', {'iface-id': '8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:84:f7', 'vm-uuid': '5c287733-d168-4706-9ec4-10a10472896c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:18 np0005539550 NetworkManager[49039]: <info>  [1764405978.1042] manager: (tap8f35a6af-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.103 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.105 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.110 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.111 257641 INFO os_vif [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d')#033[00m
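The ovsdbapp transaction lines spell out how os-vif plugs the port: an idempotent AddBridgeCommand for br-int (a no-op here, hence "Transaction caused no change"), then AddPortCommand plus a DbSetCommand writing the external_ids (iface-id, attached-mac, vm-uuid) that ovn-controller matches against the logical port to bind it. The equivalent calls through ovsdbapp's Open_vSwitch API look roughly like this; the OVSDB socket path and timeout are assumptions:

```python
# Sketch of the plug transaction traced above using ovsdbapp's
# Open_vSwitch schema API. The OVSDB socket path is an assumption.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server(
    "unix:/run/openvswitch/db.sock", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tap8f35a6af-5d", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap8f35a6af-5d",
        ("external_ids", {
            "iface-id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:eb:84:f7",
            "vm-uuid": "5c287733-d168-4706-9ec4-10a10472896c",
        })))
```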
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.165 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.166 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.166 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] No VIF found with MAC fa:16:3e:eb:84:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.167 257641 INFO nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Using config drive#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.194 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.322 257641 DEBUG nova.network.neutron [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updated VIF entry in instance network info cache for port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.323 257641 DEBUG nova.network.neutron [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updating instance_info_cache with network_info: [{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.339 257641 DEBUG oslo_concurrency.lockutils [req-ab95c6e8-b372-4114-a4ad-f12a8d41d3f3 req-a63d012a-4399-4a63-968d-c4c51ef20929 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.540 257641 INFO nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Creating config drive at /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/disk.config#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.545 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmdou3v7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:18.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
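
The anonymous HEAD / probes above recur every second or two, alternating between 192.168.122.100 and 192.168.122.102; together with the haproxy and keepalived containers that appear further down, the pattern is consistent with load-balancer health checks against radosgw rather than client traffic. A minimal sketch of such a probe (endpoint and port are assumptions; the log does not show the checker's configuration):

    import http.client

    def rgw_is_healthy(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        # A HEAD / answered with 200 counts as healthy; anything else,
        # or a connection error, counts as unhealthy.
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()
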
Nov 29 03:46:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 154 KiB/s rd, 3.3 MiB/s wr, 92 op/s
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.700 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplmdou3v7" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
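
The two processutils lines bracket the config-drive build: Nova stages the metadata files in a temp directory (/tmp/tmplmdou3v7) and packs them into an ISO9660 image with the volume label config-2, which is the label cloud-init scans block devices for. A sketch of the same invocation with illustrative paths (the helper name is not Nova's):

    import subprocess

    def build_config_drive(metadata_dir: str, out_path: str) -> None:
        # -J (Joliet) and -r (Rock Ridge) keep long/lowercase names readable;
        # the flags mirror the logged command, minus the -publisher string.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", out_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
            check=True,
        )
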
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.735 257641 DEBUG nova.storage.rbd_utils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] rbd image 5c287733-d168-4706-9ec4-10a10472896c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.739 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/disk.config 5c287733-d168-4706-9ec4-10a10472896c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.925 257641 DEBUG oslo_concurrency.processutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/disk.config 5c287733-d168-4706-9ec4-10a10472896c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.926 257641 INFO nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Deleting local config drive /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c/disk.config because it was imported into RBD.#033[00m
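
Because this deployment backs ephemeral storage with Ceph, the freshly built ISO is pushed into the vms RBD pool and the local copy deleted, exactly as the three lines above describe. An illustrative equivalent of that import-then-delete sequence:

    import os
    import subprocess

    def import_config_drive(local_path: str, image_name: str) -> None:
        subprocess.run(
            ["rbd", "import", "--pool", "vms", local_path, image_name,
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True,
        )
        # Remove the local file only after a successful import
        # (check=True would have raised on failure).
        os.unlink(local_path)
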
Nov 29 03:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:18.976 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:18.977 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:18.977 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:18 np0005539550 kernel: tap8f35a6af-5d: entered promiscuous mode
Nov 29 03:46:18 np0005539550 NetworkManager[49039]: <info>  [1764405978.9859] manager: (tap8f35a6af-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/423)
Nov 29 03:46:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:18Z|00974|binding|INFO|Claiming lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for this chassis.
Nov 29 03:46:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:18Z|00975|binding|INFO|8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b: Claiming fa:16:3e:eb:84:f7 10.100.0.10
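
The claim sequence above is OVN's half of VIF plugging: once the tap device appears on br-int with an iface-id matching the logical port, ovn-controller claims the port for this chassis, installs flows, and a few lines below marks it up in the southbound database. One way to inspect the binding from the chassis — illustrative, assuming ovn-sbctl can reach the southbound DB:

    import subprocess

    def show_port_binding(logical_port: str) -> str:
        # The "chassis" column of the result is populated once the port is
        # claimed; "up" flips to true when the binding completes.
        result = subprocess.run(
            ["ovn-sbctl", "find", "Port_Binding", f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout
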
Nov 29 03:46:18 np0005539550 nova_compute[257631]: 2025-11-29 08:46:18.987 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:18.996 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:84:f7 10.100.0.10'], port_security=['fa:16:3e:eb:84:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5c287733-d168-4706-9ec4-10a10472896c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3048113f-fdf7-404e-95c4-103607bd9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '053dbfa33fdc48d6ad41e28ea7a34860', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9e770e0-4f26-4f6d-b80b-88a92d385749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f357b6-c4f1-44c5-abd1-dd13220dd7b2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:18.998 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b in datapath 3048113f-fdf7-404e-95c4-103607bd9494 bound to our chassis#033[00m
Nov 29 03:46:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:18.998 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3048113f-fdf7-404e-95c4-103607bd9494 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:46:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:19.000 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4262f2d1-61d7-4d49-b550-581542186ba3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:19 np0005539550 systemd-udevd[387293]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:46:19 np0005539550 systemd-machined[216673]: New machine qemu-112-instance-000000d3.
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539550 NetworkManager[49039]: <info>  [1764405979.0314] device (tap8f35a6af-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:46:19 np0005539550 NetworkManager[49039]: <info>  [1764405979.0330] device (tap8f35a6af-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:46:19 np0005539550 systemd[1]: Started Virtual Machine qemu-112-instance-000000d3.
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.036 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:19Z|00976|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b ovn-installed in OVS
Nov 29 03:46:19 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:19Z|00977|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b up in Southbound
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.038 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:19.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.484 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405979.4829826, 5c287733-d168-4706-9ec4-10a10472896c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.484 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.503 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.507 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405979.486951, 5c287733-d168-4706-9ec4-10a10472896c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.507 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.529 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.532 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.569 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
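
The power_state integers in the sync messages map to Nova's power-state constants: 0 is NOSTATE, 1 RUNNING, 3 PAUSED. Libvirt starts the guest paused while devices are plugged, hence the transient Paused event; and because the instance still has a task (spawning), the sync is skipped rather than letting lifecycle events fight the in-progress build. A compact restatement of that decision (constants per nova.compute.power_state; the helper itself is illustrative):

    # nova.compute.power_state constants
    NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

    def should_sync_power_state(task_state, db_power_state, vm_power_state) -> bool:
        # Mirrors the "pending task ... Skip" lines: while a task such as
        # "spawning" owns the instance, lifecycle events do not update the DB.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state
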
Nov 29 03:46:19 np0005539550 nova_compute[257631]: 2025-11-29 08:46:19.686 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003162332205006514 of space, bias 1.0, pg target 0.9486996615019542 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
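
Each pg_autoscaler line above is self-checking: the pg target is the pool's capacity fraction times its bias times an overall PG budget, which here works out to 300 — consistent with the default mon_target_pg_per_osd of 100 across this cluster's three OSDs (an inference; the budget itself is not logged). For 'vms': 0.003162332205 x 1.0 x 300 = 0.9487. The target is then quantized to a power of two, and the autoscaler leaves pg_num alone unless the quantized value differs from the current one by more than its threshold (3x by default), which is why every pool stays at its current count. The arithmetic, re-derived:

    def raw_pg_target(usage_fraction: float, bias: float, pg_budget: int = 300) -> float:
        # pg_budget = mon_target_pg_per_osd (default 100) * 3 OSDs -- an
        # assumption that reproduces every pg target printed above.
        return usage_fraction * bias * pg_budget

    # 'vms' pool line: using 0.003162332205006514 of space, bias 1.0
    assert abs(raw_pg_target(0.003162332205006514, 1.0) - 0.9486996615019542) < 1e-12
    # 'cephfs.cephfs.meta' line: the bias of 4.0 explains the 4x multiplier
    assert abs(raw_pg_target(1.4540294062907128e-06, 4.0) - 0.0017448352875488555) < 1e-12
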
Nov 29 03:46:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 204 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 156 KiB/s rd, 2.7 MiB/s wr, 107 op/s
Nov 29 03:46:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:22.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 190 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 77 KiB/s rd, 1.8 MiB/s wr, 98 op/s
Nov 29 03:46:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:23.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.104 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.674 257641 DEBUG nova.compute.manager [req-058e8b1f-a16d-4eef-8e92-115cef118013 req-827bd183-88a5-466e-ab77-dccf189a5fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.675 257641 DEBUG oslo_concurrency.lockutils [req-058e8b1f-a16d-4eef-8e92-115cef118013 req-827bd183-88a5-466e-ab77-dccf189a5fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.675 257641 DEBUG oslo_concurrency.lockutils [req-058e8b1f-a16d-4eef-8e92-115cef118013 req-827bd183-88a5-466e-ab77-dccf189a5fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.676 257641 DEBUG oslo_concurrency.lockutils [req-058e8b1f-a16d-4eef-8e92-115cef118013 req-827bd183-88a5-466e-ab77-dccf189a5fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.676 257641 DEBUG nova.compute.manager [req-058e8b1f-a16d-4eef-8e92-115cef118013 req-827bd183-88a5-466e-ab77-dccf189a5fc3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Processing event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.677 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
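
Those lines close a synchronization loop: spawn registered an expectation for network-vif-plugged before starting the guest, and proceeds only once Neutron's callback (relayed via the API request at 08:46:23.674) pops the event — 4 seconds after OVN actually bound the port. A simplified version of the register/pop pattern (names are illustrative, not Nova's implementation):

    import threading

    class InstanceEvents:
        def __init__(self) -> None:
            self._waiters: dict[tuple, threading.Event] = {}
            self._lock = threading.Lock()

        def expect(self, instance_uuid: str, event_name: str) -> threading.Event:
            ev = threading.Event()
            with self._lock:
                self._waiters[(instance_uuid, event_name)] = ev
            return ev  # the spawn thread calls ev.wait(timeout=...) on this

        def pop(self, instance_uuid: str, event_name: str) -> bool:
            with self._lock:
                ev = self._waiters.pop((instance_uuid, event_name), None)
            if ev is None:
                return False  # no waiter registered -> "unexpected event"
            ev.set()
            return True
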
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.682 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405983.681691, 5c287733-d168-4706-9ec4-10a10472896c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.683 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.685 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.690 257641 INFO nova.virt.libvirt.driver [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance spawned successfully.#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.691 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.714 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.724 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.730 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.731 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.732 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.732 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.734 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.735 257641 DEBUG nova.virt.libvirt.driver [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
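
The six "Found default for ..." lines record the bus and device models Nova picked where the image specified none (sata cdrom; virtio disk, video and vif; usb input with a usbtablet pointer). Persisting the choices pins the guest's virtual hardware, so later upgrades that change libvirt or Nova defaults cannot silently alter a running instance's device ABI. Roughly what gets stored — the shape below is an assumption, not Nova's exact format:

    chosen_defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_defaults(system_metadata: dict, defaults: dict) -> None:
        for prop, value in defaults.items():
            # Fill gaps only; an explicit image property always wins.
            system_metadata.setdefault(f"image_{prop}", value)
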
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.749 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.795 257641 INFO nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Took 10.80 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.796 257641 DEBUG nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.888 257641 INFO nova.compute.manager [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Took 11.80 seconds to build instance.#033[00m
Nov 29 03:46:23 np0005539550 nova_compute[257631]: 2025-11-29 08:46:23.927 257641 DEBUG oslo_concurrency.lockutils [None req-b90ea072-fc91-426a-9832-5825f427f50d 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:24.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 213 KiB/s rd, 1.8 MiB/s wr, 100 op/s
Nov 29 03:46:24 np0005539550 nova_compute[257631]: 2025-11-29 08:46:24.688 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:25.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:25 np0005539550 nova_compute[257631]: 2025-11-29 08:46:25.779 257641 DEBUG nova.compute.manager [req-dd982a3b-c4c9-4b5a-a410-e8d3b41da174 req-82c7be72-532c-4eec-a5d2-46da1671441c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:25 np0005539550 nova_compute[257631]: 2025-11-29 08:46:25.780 257641 DEBUG oslo_concurrency.lockutils [req-dd982a3b-c4c9-4b5a-a410-e8d3b41da174 req-82c7be72-532c-4eec-a5d2-46da1671441c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:25 np0005539550 nova_compute[257631]: 2025-11-29 08:46:25.781 257641 DEBUG oslo_concurrency.lockutils [req-dd982a3b-c4c9-4b5a-a410-e8d3b41da174 req-82c7be72-532c-4eec-a5d2-46da1671441c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:25 np0005539550 nova_compute[257631]: 2025-11-29 08:46:25.781 257641 DEBUG oslo_concurrency.lockutils [req-dd982a3b-c4c9-4b5a-a410-e8d3b41da174 req-82c7be72-532c-4eec-a5d2-46da1671441c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:25 np0005539550 nova_compute[257631]: 2025-11-29 08:46:25.781 257641 DEBUG nova.compute.manager [req-dd982a3b-c4c9-4b5a-a410-e8d3b41da174 req-82c7be72-532c-4eec-a5d2-46da1671441c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:25 np0005539550 nova_compute[257631]: 2025-11-29 08:46:25.782 257641 WARNING nova.compute.manager [req-dd982a3b-c4c9-4b5a-a410-e8d3b41da174 req-82c7be72-532c-4eec-a5d2-46da1671441c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state active and task_state None.#033[00m
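
This warning is benign here: it is a second network-vif-plugged for the same port, delivered after spawn already consumed the first, so the pop finds no registered waiter and the instance is already active with no task. In terms of the InstanceEvents sketch above (same caveat — illustrative, not Nova's code):

    events = InstanceEvents()
    waiter = events.expect("5c287733-d168-4706-9ec4-10a10472896c", "network-vif-plugged")
    events.pop("5c287733-d168-4706-9ec4-10a10472896c", "network-vif-plugged")  # True: wakes spawn
    events.pop("5c287733-d168-4706-9ec4-10a10472896c", "network-vif-plugged")  # False: logged as unexpected
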
Nov 29 03:46:26 np0005539550 podman[387519]: 2025-11-29 08:46:26.548739304 +0000 UTC m=+0.061405685 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:46:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:26.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.4 MiB/s wr, 103 op/s
Nov 29 03:46:26 np0005539550 podman[387519]: 2025-11-29 08:46:26.651536305 +0000 UTC m=+0.164202686 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:46:26 np0005539550 nova_compute[257631]: 2025-11-29 08:46:26.933 257641 DEBUG nova.objects.instance [None req-a98dd84a-7021-47f4-acb9-29db12898df7 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:26 np0005539550 nova_compute[257631]: 2025-11-29 08:46:26.957 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405986.9572816, 5c287733-d168-4706-9ec4-10a10472896c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:26 np0005539550 nova_compute[257631]: 2025-11-29 08:46:26.958 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:46:26 np0005539550 nova_compute[257631]: 2025-11-29 08:46:26.976 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:26 np0005539550 nova_compute[257631]: 2025-11-29 08:46:26.979 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.001 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:46:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:27.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:27 np0005539550 podman[387678]: 2025-11-29 08:46:27.410791808 +0000 UTC m=+0.205447130 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:46:27 np0005539550 kernel: tap8f35a6af-5d (unregistering): left promiscuous mode
Nov 29 03:46:27 np0005539550 NetworkManager[49039]: <info>  [1764405987.4179] device (tap8f35a6af-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:46:27 np0005539550 podman[387678]: 2025-11-29 08:46:27.423261104 +0000 UTC m=+0.217916426 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:46:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:27Z|00978|binding|INFO|Releasing lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b from this chassis (sb_readonly=0)
Nov 29 03:46:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:27Z|00979|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b down in Southbound
Nov 29 03:46:27 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:27Z|00980|binding|INFO|Removing iface tap8f35a6af-5d ovn-installed in OVS
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.438 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.440 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:27.445 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:84:f7 10.100.0.10'], port_security=['fa:16:3e:eb:84:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5c287733-d168-4706-9ec4-10a10472896c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3048113f-fdf7-404e-95c4-103607bd9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '053dbfa33fdc48d6ad41e28ea7a34860', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9e770e0-4f26-4f6d-b80b-88a92d385749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f357b6-c4f1-44c5-abd1-dd13220dd7b2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:27.446 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b in datapath 3048113f-fdf7-404e-95c4-103607bd9494 unbound from our chassis#033[00m
Nov 29 03:46:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:27.447 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3048113f-fdf7-404e-95c4-103607bd9494 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:46:27 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:27.449 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3082a3d7-6eed-4a65-8617-6b86505afd0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.471 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:27 np0005539550 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Nov 29 03:46:27 np0005539550 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d000000d3.scope: Consumed 3.818s CPU time.
Nov 29 03:46:27 np0005539550 systemd-machined[216673]: Machine qemu-112-instance-000000d3 terminated.
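
This teardown is the suspend requested at 08:46:26 (task_state suspending), not a failure: Nova implements suspend with libvirt's managed save, which writes guest RAM to disk and then stops the qemu process, so systemd-machined deregisters the machine and the tap port is released in the lines that follow. The underlying call, sketched with libvirt-python (connection URI assumed):

    import libvirt

    def suspend_to_disk(domain_name: str) -> None:
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.lookupByName(domain_name)  # e.g. "instance-000000d3"
            dom.managedSave(0)  # saves RAM to disk; the qemu process then exits
        finally:
            conn.close()
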
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.606 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.618 257641 DEBUG nova.compute.manager [None req-a98dd84a-7021-47f4-acb9-29db12898df7 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:27 np0005539550 podman[387750]: 2025-11-29 08:46:27.639373763 +0000 UTC m=+0.059100497 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.buildah.version=1.28.2, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Nov 29 03:46:27 np0005539550 podman[387750]: 2025-11-29 08:46:27.651195152 +0000 UTC m=+0.070921886 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, name=keepalived, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, description=keepalived for Ceph, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:46:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
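
These handle_command/audit pairs are the cephadm mgr module persisting each host's refreshed device inventory into the monitors' config-key store, one mgr/cephadm/host.<name> key per host plus a .devices.N key for the device list. The stored blobs can be read back the same way — illustrative, assuming an admin keyring:

    import subprocess

    def get_cephadm_host_record(host: str) -> str:
        result = subprocess.run(
            ["ceph", "config-key", "get", f"mgr/cephadm/host.{host}"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout  # JSON describing the host as cephadm last saw it
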
Nov 29 03:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.881 257641 DEBUG nova.compute.manager [req-cee9e29f-bdf4-4bbd-8875-7b1a18b7877a req-9eb68bba-1fef-46f9-8978-7146f7b28784 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.882 257641 DEBUG oslo_concurrency.lockutils [req-cee9e29f-bdf4-4bbd-8875-7b1a18b7877a req-9eb68bba-1fef-46f9-8978-7146f7b28784 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.882 257641 DEBUG oslo_concurrency.lockutils [req-cee9e29f-bdf4-4bbd-8875-7b1a18b7877a req-9eb68bba-1fef-46f9-8978-7146f7b28784 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.883 257641 DEBUG oslo_concurrency.lockutils [req-cee9e29f-bdf4-4bbd-8875-7b1a18b7877a req-9eb68bba-1fef-46f9-8978-7146f7b28784 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.883 257641 DEBUG nova.compute.manager [req-cee9e29f-bdf4-4bbd-8875-7b1a18b7877a req-9eb68bba-1fef-46f9-8978-7146f7b28784 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:27 np0005539550 nova_compute[257631]: 2025-11-29 08:46:27.883 257641 WARNING nova.compute.manager [req-cee9e29f-bdf4-4bbd-8875-7b1a18b7877a req-9eb68bba-1fef-46f9-8978-7146f7b28784 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state suspended and task_state None.#033[00m
Nov 29 03:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:28 np0005539550 nova_compute[257631]: 2025-11-29 08:46:28.106 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:28.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 634 KiB/s wr, 108 op/s
Nov 29 03:46:28 np0005539550 podman[388065]: 2025-11-29 08:46:28.906014535 +0000 UTC m=+0.066879444 container create 0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_feynman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:28 np0005539550 systemd[1]: Started libpod-conmon-0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc.scope.
Nov 29 03:46:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:28 np0005539550 podman[388065]: 2025-11-29 08:46:28.880249813 +0000 UTC m=+0.041114822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:28 np0005539550 podman[388065]: 2025-11-29 08:46:28.981076194 +0000 UTC m=+0.141941133 container init 0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_feynman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:46:28 np0005539550 podman[388065]: 2025-11-29 08:46:28.986548273 +0000 UTC m=+0.147413192 container start 0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_feynman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:28 np0005539550 podman[388065]: 2025-11-29 08:46:28.990506643 +0000 UTC m=+0.151371592 container attach 0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:28 np0005539550 boring_feynman[388081]: 167 167
Nov 29 03:46:28 np0005539550 systemd[1]: libpod-0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc.scope: Deactivated successfully.
Nov 29 03:46:28 np0005539550 podman[388065]: 2025-11-29 08:46:28.99237015 +0000 UTC m=+0.153235069 container died 0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_feynman, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9dbe38689fbd7db40747f9dc2fa1adc3321bc1606edc3206e0542eb2dd1eb08b-merged.mount: Deactivated successfully.
Nov 29 03:46:29 np0005539550 podman[388065]: 2025-11-29 08:46:29.027300624 +0000 UTC m=+0.188165543 container remove 0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:46:29 np0005539550 systemd[1]: libpod-conmon-0af97c397d1d1ccfe0af033a50019d2d74817bbc10bb74085dd3d2c4a2f3d2fc.scope: Deactivated successfully.
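
The create/init/start/attach/died/remove sequence above completes in roughly 100 ms: cephadm launches a one-shot container, reads its stdout, and removes it (podman assigns throwaway names such as boring_feynman). The "167 167" printed by the container matches cephadm probing the ceph UID and GID inside the image; 167 is the ceph user and group in these builds. A rough, hand-run equivalent, assuming the probe is a stat of /var/lib/ceph (which is how cephadm's extract_uid_gid helper appears to work; treat the exact invocation as an approximation):

    import subprocess

    # One-shot container like the ones cephadm spawns above; the stat
    # command is assumed to mirror cephadm's uid/gid probe.
    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
    out = subprocess.run(
        ['podman', 'run', '--rm', IMAGE, 'stat', '-c', '%u %g', '/var/lib/ceph'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # "167 167"
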
Nov 29 03:46:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:29.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:29 np0005539550 podman[388105]: 2025-11-29 08:46:29.186529633 +0000 UTC m=+0.039874210 container create 875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:46:29 np0005539550 systemd[1]: Started libpod-conmon-875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6.scope.
Nov 29 03:46:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea7ad6ceb428b1a321014f8d80a231c2966dcbd0f7f355f02c5f3dbbc8fb177/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea7ad6ceb428b1a321014f8d80a231c2966dcbd0f7f355f02c5f3dbbc8fb177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea7ad6ceb428b1a321014f8d80a231c2966dcbd0f7f355f02c5f3dbbc8fb177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aea7ad6ceb428b1a321014f8d80a231c2966dcbd0f7f355f02c5f3dbbc8fb177/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:29 np0005539550 podman[388105]: 2025-11-29 08:46:29.168310192 +0000 UTC m=+0.021654779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:29 np0005539550 podman[388105]: 2025-11-29 08:46:29.276142371 +0000 UTC m=+0.129486948 container init 875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:29 np0005539550 podman[388105]: 2025-11-29 08:46:29.285954479 +0000 UTC m=+0.139299046 container start 875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:46:29 np0005539550 podman[388105]: 2025-11-29 08:46:29.289445988 +0000 UTC m=+0.142790645 container attach 875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:46:29 np0005539550 nova_compute[257631]: 2025-11-29 08:46:29.690 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]: [
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:    {
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "available": false,
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "ceph_device": false,
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "lsm_data": {},
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "lvs": [],
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "path": "/dev/sr0",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "rejected_reasons": [
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "Insufficient space (<5GB)",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "Has a FileSystem"
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        ],
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        "sys_api": {
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "actuators": null,
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "device_nodes": "sr0",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "devname": "sr0",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "human_readable_size": "482.00 KB",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "id_bus": "ata",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "model": "QEMU DVD-ROM",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "nr_requests": "2",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "parent": "/dev/sr0",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "partitions": {},
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "path": "/dev/sr0",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "removable": "1",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "rev": "2.5+",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "ro": "0",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "rotational": "1",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "sas_address": "",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "sas_device_handle": "",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "scheduler_mode": "mq-deadline",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "sectors": 0,
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "sectorsize": "2048",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "size": 493568.0,
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "support_discard": "2048",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "type": "disk",
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:            "vendor": "QEMU"
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:        }
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]:    }
Nov 29 03:46:30 np0005539550 jolly_maxwell[388122]: ]
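
The JSON array printed by jolly_maxwell is a ceph-volume inventory report for this host: /dev/sr0 (the QEMU DVD-ROM) is marked unavailable because it is under 5 GB and already carries a filesystem, so cephadm will never select it for an OSD. A minimal sketch of filtering such a report down to deployable devices (the sample below is abbreviated from the array above):

    import json

    # Keep only devices that could take a new OSD: flagged available
    # and carrying no rejection reasons.
    inventory_json = '''
    [
      {"available": false, "path": "/dev/sr0",
       "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}
    ]
    '''
    usable = [d['path'] for d in json.loads(inventory_json)
              if d.get('available') and not d.get('rejected_reasons')]
    print(usable)  # [] -- nothing on this host qualifies
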
Nov 29 03:46:30 np0005539550 systemd[1]: libpod-875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6.scope: Deactivated successfully.
Nov 29 03:46:30 np0005539550 podman[388105]: 2025-11-29 08:46:30.466001431 +0000 UTC m=+1.319345988 container died 875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:46:30 np0005539550 systemd[1]: libpod-875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6.scope: Consumed 1.209s CPU time.
Nov 29 03:46:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-aea7ad6ceb428b1a321014f8d80a231c2966dcbd0f7f355f02c5f3dbbc8fb177-merged.mount: Deactivated successfully.
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.521 257641 DEBUG nova.compute.manager [req-79dd6d5b-23b4-4e30-9c73-fc98954ae11c req-fb82f1e8-a262-4c2e-b83c-36f69bfc3bf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.521 257641 DEBUG oslo_concurrency.lockutils [req-79dd6d5b-23b4-4e30-9c73-fc98954ae11c req-fb82f1e8-a262-4c2e-b83c-36f69bfc3bf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.522 257641 DEBUG oslo_concurrency.lockutils [req-79dd6d5b-23b4-4e30-9c73-fc98954ae11c req-fb82f1e8-a262-4c2e-b83c-36f69bfc3bf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.522 257641 DEBUG oslo_concurrency.lockutils [req-79dd6d5b-23b4-4e30-9c73-fc98954ae11c req-fb82f1e8-a262-4c2e-b83c-36f69bfc3bf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.522 257641 DEBUG nova.compute.manager [req-79dd6d5b-23b4-4e30-9c73-fc98954ae11c req-fb82f1e8-a262-4c2e-b83c-36f69bfc3bf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.522 257641 WARNING nova.compute.manager [req-79dd6d5b-23b4-4e30-9c73-fc98954ae11c req-fb82f1e8-a262-4c2e-b83c-36f69bfc3bf5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state suspended and task_state None.
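
The lock/pop sequence above is nova's per-instance event table at work: when neutron delivers network-vif-plugged, the compute manager pops a waiter registered under the instance UUID and event name. Here the instance is suspended and nothing registered a waiter, so the event is logged as unexpected and dropped. A stripped-down sketch of that waiter-table pattern (illustrative only; nova's real InstanceEvents does considerably more):

    import threading

    # Events are matched to pre-registered waiters keyed by (uuid, name);
    # pop() returning None is the "unexpected event" path seen above.
    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            with self._lock:
                ev = threading.Event()
                self._waiters[(uuid, name)] = ev
                return ev

        def pop(self, uuid, name):
            with self._lock:
                return self._waiters.pop((uuid, name), None)

    events = InstanceEvents()
    uuid = '5c287733-d168-4706-9ec4-10a10472896c'
    if events.pop(uuid, 'network-vif-plugged-8f35a6af') is None:
        print('Received unexpected event')
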
Nov 29 03:46:30 np0005539550 podman[388105]: 2025-11-29 08:46:30.530194196 +0000 UTC m=+1.383538753 container remove 875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_maxwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:30 np0005539550 systemd[1]: libpod-conmon-875a6e1055706d6bc5ccad57ac82f5a76d130259d440d981059d8d098bdd68c6.scope: Deactivated successfully.
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:46:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:30.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 35 KiB/s wr, 106 op/s
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.776 257641 INFO nova.compute.manager [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Resuming
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.777 257641 DEBUG nova.objects.instance [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'flavor' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:30 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 847379b9-19a8-40a5-b939-5e4181794316 does not exist
Nov 29 03:46:30 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 761220a2-df77-4aeb-b78a-3f0501c7228b does not exist
Nov 29 03:46:30 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 620f13b6-25dc-4499-aea0-28e8ad199a2b does not exist
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.817 257641 DEBUG oslo_concurrency.lockutils [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.817 257641 DEBUG oslo_concurrency.lockutils [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquired lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:46:30 np0005539550 nova_compute[257631]: 2025-11-29 08:46:30.817 257641 DEBUG nova.network.neutron [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:46:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
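
The burst of handle_command/audit pairs above is the cephadm mgr module persisting its per-host state (config-key set under mgr/cephadm/host.*) and gathering what OSD deployment needs: the client.admin and client.bootstrap-osd keyrings plus a minimal ceph.conf. The same mon commands can be issued by hand; a sketch assuming an admin keyring on this host (the config-key set is shown for shape only; overwriting cephadm's keys manually is not advisable):

    import subprocess

    # Hand-run equivalents of the audited mon commands above.
    conf = subprocess.run(['ceph', 'config', 'generate-minimal-conf'],
                          capture_output=True, text=True, check=True).stdout
    keyring = subprocess.run(['ceph', 'auth', 'get', 'client.bootstrap-osd'],
                             capture_output=True, text=True, check=True).stdout
    subprocess.run(['ceph', 'config-key', 'set',
                    'mgr/cephadm/host.compute-0.devices.0', '{}'], check=True)
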
Nov 29 03:46:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:31.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:46:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:31 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:46:31 np0005539550 podman[389584]: 2025-11-29 08:46:31.366668053 +0000 UTC m=+0.037382787 container create d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dijkstra, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:31 np0005539550 systemd[1]: Started libpod-conmon-d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8.scope.
Nov 29 03:46:31 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:31 np0005539550 podman[389584]: 2025-11-29 08:46:31.435961927 +0000 UTC m=+0.106676681 container init d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:31 np0005539550 podman[389584]: 2025-11-29 08:46:31.442321907 +0000 UTC m=+0.113036641 container start d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dijkstra, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:46:31 np0005539550 podman[389584]: 2025-11-29 08:46:31.446045732 +0000 UTC m=+0.116760466 container attach d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:46:31 np0005539550 podman[389584]: 2025-11-29 08:46:31.35074415 +0000 UTC m=+0.021458904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:31 np0005539550 silly_dijkstra[389601]: 167 167
Nov 29 03:46:31 np0005539550 systemd[1]: libpod-d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8.scope: Deactivated successfully.
Nov 29 03:46:31 np0005539550 podman[389584]: 2025-11-29 08:46:31.447040847 +0000 UTC m=+0.117755581 container died d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dijkstra, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:46:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3d6665d874e675e0b6890c4f8ac30bdf056c65aff7328e4f79bd3493bc0601b0-merged.mount: Deactivated successfully.
Nov 29 03:46:31 np0005539550 podman[389584]: 2025-11-29 08:46:31.485012658 +0000 UTC m=+0.155727392 container remove d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dijkstra, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:46:31 np0005539550 systemd[1]: libpod-conmon-d0475b7c045c2a74bff43bf7ce2c553066d04557dd6bf069c6569751021239c8.scope: Deactivated successfully.
Nov 29 03:46:31 np0005539550 podman[389652]: 2025-11-29 08:46:31.637033544 +0000 UTC m=+0.040086986 container create 4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:31 np0005539550 systemd[1]: Started libpod-conmon-4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4.scope.
Nov 29 03:46:31 np0005539550 podman[389652]: 2025-11-29 08:46:31.618242978 +0000 UTC m=+0.021296440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:31 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a97962fe7598ddcadc8635e37701ad52034bd34494755a1710ad4da8d5b5338/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a97962fe7598ddcadc8635e37701ad52034bd34494755a1710ad4da8d5b5338/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a97962fe7598ddcadc8635e37701ad52034bd34494755a1710ad4da8d5b5338/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a97962fe7598ddcadc8635e37701ad52034bd34494755a1710ad4da8d5b5338/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:31 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a97962fe7598ddcadc8635e37701ad52034bd34494755a1710ad4da8d5b5338/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:31 np0005539550 podman[389652]: 2025-11-29 08:46:31.741148898 +0000 UTC m=+0.144202360 container init 4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:46:31 np0005539550 podman[389652]: 2025-11-29 08:46:31.750342931 +0000 UTC m=+0.153396403 container start 4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:46:31 np0005539550 podman[389652]: 2025-11-29 08:46:31.755753498 +0000 UTC m=+0.158806960 container attach 4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:46:32 np0005539550 hopeful_goldstine[389691]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:46:32 np0005539550 hopeful_goldstine[389691]: --> relative data size: 1.0
Nov 29 03:46:32 np0005539550 hopeful_goldstine[389691]: --> All data devices are unavailable
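
The hopeful_goldstine output reads like a ceph-volume lvm batch report: one LVM data device was passed in and judged unavailable. That is consistent with the lvm list output further down, where /dev/ceph_vg0/ceph_lv0 already carries OSD 0, so there is nothing left to create. The decision can be read straight off the LV tags; a sketch:

    # An LV whose tags carry ceph.osd_id is already a deployed OSD; the
    # tag string below mirrors the lv_tags field in the JSON further down.
    lv_tags = ('ceph.block_device=/dev/ceph_vg0/ceph_lv0,'
               'ceph.osd_id=0,ceph.type=block')
    tags = dict(t.split('=', 1) for t in lv_tags.split(','))
    print('ceph.osd_id' in tags)  # True -> device counted as unavailable
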
Nov 29 03:46:32 np0005539550 systemd[1]: libpod-4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4.scope: Deactivated successfully.
Nov 29 03:46:32 np0005539550 podman[389652]: 2025-11-29 08:46:32.530649487 +0000 UTC m=+0.933702949 container died 4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7a97962fe7598ddcadc8635e37701ad52034bd34494755a1710ad4da8d5b5338-merged.mount: Deactivated successfully.
Nov 29 03:46:32 np0005539550 podman[389652]: 2025-11-29 08:46:32.586242244 +0000 UTC m=+0.989295686 container remove 4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:46:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:32.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:32 np0005539550 systemd[1]: libpod-conmon-4b00518c6244156f4b38a19d838dcdac0dae5ecfdbba12e66827752a82e575b4.scope: Deactivated successfully.
Nov 29 03:46:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 87 op/s
Nov 29 03:46:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:33.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:33 np0005539550 podman[389856]: 2025-11-29 08:46:33.150809251 +0000 UTC m=+0.037793218 container create ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_edison, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Nov 29 03:46:33 np0005539550 nova_compute[257631]: 2025-11-29 08:46:33.151 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:46:33 np0005539550 systemd[1]: Started libpod-conmon-ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8.scope.
Nov 29 03:46:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:33 np0005539550 podman[389856]: 2025-11-29 08:46:33.229054021 +0000 UTC m=+0.116038018 container init ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_edison, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:46:33 np0005539550 podman[389856]: 2025-11-29 08:46:33.134454847 +0000 UTC m=+0.021438844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:33 np0005539550 podman[389856]: 2025-11-29 08:46:33.235543195 +0000 UTC m=+0.122527182 container start ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:46:33 np0005539550 podman[389856]: 2025-11-29 08:46:33.239143876 +0000 UTC m=+0.126127863 container attach ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_edison, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:46:33 np0005539550 exciting_edison[389873]: 167 167
Nov 29 03:46:33 np0005539550 systemd[1]: libpod-ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8.scope: Deactivated successfully.
Nov 29 03:46:33 np0005539550 conmon[389873]: conmon ad9dbf6d94baeef64a18 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8.scope/container/memory.events
Nov 29 03:46:33 np0005539550 podman[389856]: 2025-11-29 08:46:33.241475065 +0000 UTC m=+0.128459032 container died ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:46:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-70cdd92f6a245fd3eca2657ac816bbb0df3d9378ce313e2131a9c58708a21059-merged.mount: Deactivated successfully.
Nov 29 03:46:33 np0005539550 podman[389856]: 2025-11-29 08:46:33.27724763 +0000 UTC m=+0.164231597 container remove ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:46:33 np0005539550 systemd[1]: libpod-conmon-ad9dbf6d94baeef64a1871f52e51e4a2001144644eaacc0537e0c70c23d836b8.scope: Deactivated successfully.
Nov 29 03:46:33 np0005539550 podman[389897]: 2025-11-29 08:46:33.430690623 +0000 UTC m=+0.045797460 container create 182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:46:33 np0005539550 systemd[1]: Started libpod-conmon-182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb.scope.
Nov 29 03:46:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a085ab7fcb370869217a4661400d4e40ebe0cc608dfb3753f9ae17c60b76e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a085ab7fcb370869217a4661400d4e40ebe0cc608dfb3753f9ae17c60b76e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a085ab7fcb370869217a4661400d4e40ebe0cc608dfb3753f9ae17c60b76e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36a085ab7fcb370869217a4661400d4e40ebe0cc608dfb3753f9ae17c60b76e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:33 np0005539550 podman[389897]: 2025-11-29 08:46:33.501185427 +0000 UTC m=+0.116292284 container init 182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:46:33 np0005539550 podman[389897]: 2025-11-29 08:46:33.410560824 +0000 UTC m=+0.025667701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:33 np0005539550 podman[389897]: 2025-11-29 08:46:33.510227886 +0000 UTC m=+0.125334733 container start 182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:33 np0005539550 podman[389897]: 2025-11-29 08:46:33.513630182 +0000 UTC m=+0.128737019 container attach 182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:46:33 np0005539550 podman[389911]: 2025-11-29 08:46:33.551603173 +0000 UTC m=+0.081404391 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Nov 29 03:46:33 np0005539550 podman[389915]: 2025-11-29 08:46:33.560288483 +0000 UTC m=+0.090440010 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]: {
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:    "0": [
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:        {
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "devices": [
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "/dev/loop3"
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            ],
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "lv_name": "ceph_lv0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "lv_size": "7511998464",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "name": "ceph_lv0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "tags": {
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.cluster_name": "ceph",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.crush_device_class": "",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.encrypted": "0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.osd_id": "0",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.type": "block",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:                "ceph.vdo": "0"
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            },
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "type": "block",
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:            "vg_name": "ceph_vg0"
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:        }
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]:    ]
Nov 29 03:46:34 np0005539550 inspiring_ptolemy[389916]: }
Nov 29 03:46:34 np0005539550 systemd[1]: libpod-182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb.scope: Deactivated successfully.
Nov 29 03:46:34 np0005539550 podman[389897]: 2025-11-29 08:46:34.286210193 +0000 UTC m=+0.901317040 container died 182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:46:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-36a085ab7fcb370869217a4661400d4e40ebe0cc608dfb3753f9ae17c60b76e8-merged.mount: Deactivated successfully.
Nov 29 03:46:34 np0005539550 podman[389897]: 2025-11-29 08:46:34.339948522 +0000 UTC m=+0.955055369 container remove 182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:34 np0005539550 systemd[1]: libpod-conmon-182d15bc0b97cbfec32230028132445d0a42e2f5f059607034c3e833620e8feb.scope: Deactivated successfully.
Nov 29 03:46:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:34.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 72 op/s
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.854 257641 DEBUG nova.network.neutron [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updating instance_info_cache with network_info: [{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.870 257641 DEBUG oslo_concurrency.lockutils [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Releasing lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.878 257641 DEBUG nova.virt.libvirt.vif [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:46:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1551813917',display_name='tempest-TestServerAdvancedOps-server-1551813917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1551813917',id=211,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='053dbfa33fdc48d6ad41e28ea7a34860',ramdisk_id='',reservation_id='r-l0d1ky0j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-979394289',owner_user_name='tempest-TestServerAdvancedOps-979394289-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:46:27Z,user_data=None,user_id='344d31c1be5d41b1af2cc7427e8e0457',uuid=5c287733-d168-4706-9ec4-10a10472896c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.878 257641 DEBUG nova.network.os_vif_util [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converting VIF {"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.879 257641 DEBUG nova.network.os_vif_util [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.880 257641 DEBUG os_vif [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.881 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.881 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.881 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.884 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.885 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f35a6af-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.885 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f35a6af-5d, col_values=(('external_ids', {'iface-id': '8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:84:f7', 'vm-uuid': '5c287733-d168-4706-9ec4-10a10472896c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.885 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.885 257641 INFO os_vif [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d')#033[00m
Nov 29 03:46:34 np0005539550 nova_compute[257631]: 2025-11-29 08:46:34.907 257641 DEBUG nova.objects.instance [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:34 np0005539550 podman[390112]: 2025-11-29 08:46:34.913917737 +0000 UTC m=+0.045346729 container create e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mestorf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:46:34 np0005539550 systemd[1]: Started libpod-conmon-e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204.scope.
Nov 29 03:46:34 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:34 np0005539550 podman[390112]: 2025-11-29 08:46:34.896928717 +0000 UTC m=+0.028357729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:34 np0005539550 podman[390112]: 2025-11-29 08:46:34.993962343 +0000 UTC m=+0.125391355 container init e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:35 np0005539550 podman[390112]: 2025-11-29 08:46:35.001892973 +0000 UTC m=+0.133321965 container start e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mestorf, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:35 np0005539550 podman[390112]: 2025-11-29 08:46:35.005191277 +0000 UTC m=+0.136620269 container attach e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:46:35 np0005539550 distracted_mestorf[390129]: 167 167
Nov 29 03:46:35 np0005539550 systemd[1]: libpod-e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204.scope: Deactivated successfully.
Nov 29 03:46:35 np0005539550 podman[390112]: 2025-11-29 08:46:35.007025013 +0000 UTC m=+0.138454005 container died e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mestorf, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:46:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-da9cc96652029e934b621838a9923a30d4bcba2032fc680fb6f7e80ed72138fa-merged.mount: Deactivated successfully.
Nov 29 03:46:35 np0005539550 podman[390112]: 2025-11-29 08:46:35.045079416 +0000 UTC m=+0.176508408 container remove e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:46:35 np0005539550 systemd[1]: libpod-conmon-e012a482deae62c59f11018660753cf3a12399bdd9bcbc8a496bafa569639204.scope: Deactivated successfully.
Nov 29 03:46:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:35.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:35 np0005539550 podman[390154]: 2025-11-29 08:46:35.204741645 +0000 UTC m=+0.039486510 container create 78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ishizaka, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:46:35 np0005539550 systemd[1]: Started libpod-conmon-78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69.scope.
Nov 29 03:46:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:46:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9549b453c95ae893b7011ac7bead927b7d606c7ead52010dbc425c0437cef00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9549b453c95ae893b7011ac7bead927b7d606c7ead52010dbc425c0437cef00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9549b453c95ae893b7011ac7bead927b7d606c7ead52010dbc425c0437cef00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9549b453c95ae893b7011ac7bead927b7d606c7ead52010dbc425c0437cef00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:46:35 np0005539550 podman[390154]: 2025-11-29 08:46:35.187484979 +0000 UTC m=+0.022229874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:46:35 np0005539550 podman[390154]: 2025-11-29 08:46:35.285337855 +0000 UTC m=+0.120082730 container init 78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ishizaka, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:46:35 np0005539550 podman[390154]: 2025-11-29 08:46:35.29107322 +0000 UTC m=+0.125818085 container start 78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ishizaka, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:46:35 np0005539550 podman[390154]: 2025-11-29 08:46:35.294087196 +0000 UTC m=+0.128832081 container attach 78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ishizaka, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:46:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]: {
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]:        "osd_id": 0,
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]:        "type": "bluestore"
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]:    }
Nov 29 03:46:36 np0005539550 wonderful_ishizaka[390171]: }
Nov 29 03:46:36 np0005539550 systemd[1]: libpod-78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69.scope: Deactivated successfully.
Nov 29 03:46:36 np0005539550 podman[390154]: 2025-11-29 08:46:36.13157685 +0000 UTC m=+0.966321735 container died 78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:46:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f9549b453c95ae893b7011ac7bead927b7d606c7ead52010dbc425c0437cef00-merged.mount: Deactivated successfully.
Nov 29 03:46:36 np0005539550 podman[390154]: 2025-11-29 08:46:36.187967227 +0000 UTC m=+1.022712092 container remove 78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:46:36 np0005539550 systemd[1]: libpod-conmon-78e7c0f20946a3d5a0fbcfa2ef00a48e09df603e3f4f2a284cf1a26ed32ddb69.scope: Deactivated successfully.
Nov 29 03:46:36 np0005539550 kernel: tap8f35a6af-5d: entered promiscuous mode
Nov 29 03:46:36 np0005539550 NetworkManager[49039]: <info>  [1764405996.2288] manager: (tap8f35a6af-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/424)
Nov 29 03:46:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:36Z|00981|binding|INFO|Claiming lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for this chassis.
Nov 29 03:46:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:36Z|00982|binding|INFO|8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b: Claiming fa:16:3e:eb:84:f7 10.100.0.10
Nov 29 03:46:36 np0005539550 nova_compute[257631]: 2025-11-29 08:46:36.229 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:46:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:36.239 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:84:f7 10.100.0.10'], port_security=['fa:16:3e:eb:84:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5c287733-d168-4706-9ec4-10a10472896c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3048113f-fdf7-404e-95c4-103607bd9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '053dbfa33fdc48d6ad41e28ea7a34860', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e9e770e0-4f26-4f6d-b80b-88a92d385749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f357b6-c4f1-44c5-abd1-dd13220dd7b2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:36.241 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b in datapath 3048113f-fdf7-404e-95c4-103607bd9494 bound to our chassis#033[00m
Nov 29 03:46:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:36.242 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3048113f-fdf7-404e-95c4-103607bd9494 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:46:36 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:36.244 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8c11e3cb-570a-406d-bcaa-37a6ba061197]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:36Z|00983|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b ovn-installed in OVS
Nov 29 03:46:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:36Z|00984|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b up in Southbound
Nov 29 03:46:36 np0005539550 nova_compute[257631]: 2025-11-29 08:46:36.246 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:36 np0005539550 nova_compute[257631]: 2025-11-29 08:46:36.248 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:46:36 np0005539550 nova_compute[257631]: 2025-11-29 08:46:36.253 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 595910ea-7e20-47a5-8b0c-6a08a345eb14 does not exist
Nov 29 03:46:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6adff863-1f72-4063-b843-396d9fc90e77 does not exist
Nov 29 03:46:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4bf5ff70-ff9c-410f-988a-5b8f95f5b136 does not exist
Nov 29 03:46:36 np0005539550 systemd-udevd[390218]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:46:36 np0005539550 systemd-machined[216673]: New machine qemu-113-instance-000000d3.
Nov 29 03:46:36 np0005539550 NetworkManager[49039]: <info>  [1764405996.2842] device (tap8f35a6af-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:46:36 np0005539550 NetworkManager[49039]: <info>  [1764405996.2848] device (tap8f35a6af-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:46:36 np0005539550 systemd[1]: Started Virtual Machine qemu-113-instance-000000d3.
Nov 29 03:46:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:36.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 22 KiB/s wr, 64 op/s
Nov 29 03:46:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:37.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:37 np0005539550 nova_compute[257631]: 2025-11-29 08:46:37.109 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 5c287733-d168-4706-9ec4-10a10472896c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:46:37 np0005539550 nova_compute[257631]: 2025-11-29 08:46:37.110 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405997.109115, 5c287733-d168-4706-9ec4-10a10472896c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:37 np0005539550 nova_compute[257631]: 2025-11-29 08:46:37.110 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:46:37 np0005539550 nova_compute[257631]: 2025-11-29 08:46:37.127 257641 DEBUG nova.compute.manager [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:46:37 np0005539550 nova_compute[257631]: 2025-11-29 08:46:37.128 257641 DEBUG nova.objects.instance [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 03:46:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.149 257641 DEBUG nova.compute.manager [req-ef9afeea-f1c3-4fe8-ab9a-ac452a63b8d2 req-62e1adca-b473-454b-ab04-f2ca884fc0b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.149 257641 DEBUG oslo_concurrency.lockutils [req-ef9afeea-f1c3-4fe8-ab9a-ac452a63b8d2 req-62e1adca-b473-454b-ab04-f2ca884fc0b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.150 257641 DEBUG oslo_concurrency.lockutils [req-ef9afeea-f1c3-4fe8-ab9a-ac452a63b8d2 req-62e1adca-b473-454b-ab04-f2ca884fc0b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.150 257641 DEBUG oslo_concurrency.lockutils [req-ef9afeea-f1c3-4fe8-ab9a-ac452a63b8d2 req-62e1adca-b473-454b-ab04-f2ca884fc0b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.150 257641 DEBUG nova.compute.manager [req-ef9afeea-f1c3-4fe8-ab9a-ac452a63b8d2 req-62e1adca-b473-454b-ab04-f2ca884fc0b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.150 257641 WARNING nova.compute.manager [req-ef9afeea-f1c3-4fe8-ab9a-ac452a63b8d2 req-62e1adca-b473-454b-ab04-f2ca884fc0b3 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.153 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.166 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.173 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.176 257641 INFO nova.virt.libvirt.driver [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance running successfully.#033[00m
Nov 29 03:46:38 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.179 257641 DEBUG nova.virt.libvirt.guest [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.180 257641 DEBUG nova.compute.manager [None req-4c1f7dd5-115d-4186-a054-85e2f519c4d4 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.218 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.219 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764405997.1121862, 5c287733-d168-4706-9ec4-10a10472896c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.219 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.253 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:38 np0005539550 nova_compute[257631]: 2025-11-29 08:46:38.257 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:38.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 974 KiB/s rd, 170 B/s wr, 75 op/s
Nov 29 03:46:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:39.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:39 np0005539550 nova_compute[257631]: 2025-11-29 08:46:39.698 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:40.133 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:40 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:40.134 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:46:40 np0005539550 nova_compute[257631]: 2025-11-29 08:46:40.135 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:40 np0005539550 nova_compute[257631]: 2025-11-29 08:46:40.337 257641 DEBUG nova.compute.manager [req-11656aee-1d63-406b-a1b8-c4d1c2bf65d5 req-88fadc45-e5b3-42d4-9019-b043f753ab3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:40 np0005539550 nova_compute[257631]: 2025-11-29 08:46:40.338 257641 DEBUG oslo_concurrency.lockutils [req-11656aee-1d63-406b-a1b8-c4d1c2bf65d5 req-88fadc45-e5b3-42d4-9019-b043f753ab3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:40 np0005539550 nova_compute[257631]: 2025-11-29 08:46:40.338 257641 DEBUG oslo_concurrency.lockutils [req-11656aee-1d63-406b-a1b8-c4d1c2bf65d5 req-88fadc45-e5b3-42d4-9019-b043f753ab3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:40 np0005539550 nova_compute[257631]: 2025-11-29 08:46:40.338 257641 DEBUG oslo_concurrency.lockutils [req-11656aee-1d63-406b-a1b8-c4d1c2bf65d5 req-88fadc45-e5b3-42d4-9019-b043f753ab3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:40 np0005539550 nova_compute[257631]: 2025-11-29 08:46:40.338 257641 DEBUG nova.compute.manager [req-11656aee-1d63-406b-a1b8-c4d1c2bf65d5 req-88fadc45-e5b3-42d4-9019-b043f753ab3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:40 np0005539550 nova_compute[257631]: 2025-11-29 08:46:40.339 257641 WARNING nova.compute.manager [req-11656aee-1d63-406b-a1b8-c4d1c2bf65d5 req-88fadc45-e5b3-42d4-9019-b043f753ab3c 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state active and task_state None.#033[00m
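The Acquiring/acquired/released trio followed by the WARNING is nova popping a waiter for the network-vif-plugged event under a per-instance lock and finding none registered. A simplified stand-in for that pattern, using the same oslo_concurrency primitive; the dict and function are illustrative, not nova's actual code:

    from oslo_concurrency import lockutils

    _events = {}  # instance uuid -> {event name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        # Same lock-name scheme as the '<uuid>-events' lock in the log.
        with lockutils.lock(f'{instance_uuid}-events'):
            waiter = _events.get(instance_uuid, {}).pop(event_name, None)
        if waiter is None:
            # Corresponds to "No waiting events found dispatching ..." and
            # the subsequent "Received unexpected event" WARNING.
            print(f'unexpected event {event_name}')
        return waiter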
Nov 29 03:46:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:46:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:40.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:46:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 370 KiB/s rd, 85 B/s wr, 106 op/s
Nov 29 03:46:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:41.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:42 np0005539550 nova_compute[257631]: 2025-11-29 08:46:42.287 257641 DEBUG nova.objects.instance [None req-6cfb8483-3d57-4ba8-b7d0-b9888979bfec 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:42 np0005539550 nova_compute[257631]: 2025-11-29 08:46:42.307 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406002.3076541, 5c287733-d168-4706-9ec4-10a10472896c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:42 np0005539550 nova_compute[257631]: 2025-11-29 08:46:42.308 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:46:42 np0005539550 nova_compute[257631]: 2025-11-29 08:46:42.339 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:42 np0005539550 nova_compute[257631]: 2025-11-29 08:46:42.342 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:42 np0005539550 nova_compute[257631]: 2025-11-29 08:46:42.368 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
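The sync line above reads "DB power_state: 1, VM power_state: 3"; the integers are nova.compute.power_state constants. A lookup table with the values as defined in that module makes the transition readable:

    # Values as defined in nova.compute.power_state.
    POWER_STATE = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    # The DB still says RUNNING while libvirt already reports PAUSED, so
    # nova logs the "Paused" lifecycle event and, because task_state is
    # 'suspending', skips the power-state sync.
    print(POWER_STATE[1], '->', POWER_STATE[3])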
Nov 29 03:46:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:42.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 144 op/s
Nov 29 03:46:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:43.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:43 np0005539550 kernel: tap8f35a6af-5d (unregistering): left promiscuous mode
Nov 29 03:46:43 np0005539550 NetworkManager[49039]: <info>  [1764406003.0969] device (tap8f35a6af-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:46:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:43Z|00985|binding|INFO|Releasing lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b from this chassis (sb_readonly=0)
Nov 29 03:46:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:43Z|00986|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b down in Southbound
Nov 29 03:46:43 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:43Z|00987|binding|INFO|Removing iface tap8f35a6af-5d ovn-installed in OVS
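The three binding|INFO lines are ovn-controller releasing the logical port as the tap device disappears during suspend. The same transition can be watched from the Southbound database; a sketch assuming ovn-sbctl is installed and configured to reach the SB DB:

    import subprocess

    # chassis becomes empty and up=false while the lport is released.
    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b'],
        capture_output=True, text=True, check=True)
    print(out.stdout)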
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.111 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:43.116 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:84:f7 10.100.0.10'], port_security=['fa:16:3e:eb:84:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5c287733-d168-4706-9ec4-10a10472896c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3048113f-fdf7-404e-95c4-103607bd9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '053dbfa33fdc48d6ad41e28ea7a34860', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e9e770e0-4f26-4f6d-b80b-88a92d385749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f357b6-c4f1-44c5-abd1-dd13220dd7b2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:43.117 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b in datapath 3048113f-fdf7-404e-95c4-103607bd9494 unbound from our chassis#033[00m
Nov 29 03:46:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:43.118 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3048113f-fdf7-404e-95c4-103607bd9494 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:46:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:43.119 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d2780960-5634-4b8a-af2c-2857ab66b7a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.127 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539550 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Nov 29 03:46:43 np0005539550 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d000000d3.scope: Consumed 6.071s CPU time.
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.156 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539550 systemd-machined[216673]: Machine qemu-113-instance-000000d3 terminated.
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.275 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.280 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.292 257641 DEBUG nova.compute.manager [None req-6cfb8483-3d57-4ba8-b7d0-b9888979bfec 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.299 257641 DEBUG nova.compute.manager [req-f56df960-3416-47ed-8d1c-0c5f27946181 req-c7859f5c-64bb-4372-9228-d085a27190f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.300 257641 DEBUG oslo_concurrency.lockutils [req-f56df960-3416-47ed-8d1c-0c5f27946181 req-c7859f5c-64bb-4372-9228-d085a27190f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.300 257641 DEBUG oslo_concurrency.lockutils [req-f56df960-3416-47ed-8d1c-0c5f27946181 req-c7859f5c-64bb-4372-9228-d085a27190f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.300 257641 DEBUG oslo_concurrency.lockutils [req-f56df960-3416-47ed-8d1c-0c5f27946181 req-c7859f5c-64bb-4372-9228-d085a27190f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.300 257641 DEBUG nova.compute.manager [req-f56df960-3416-47ed-8d1c-0c5f27946181 req-c7859f5c-64bb-4372-9228-d085a27190f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:43 np0005539550 nova_compute[257631]: 2025-11-29 08:46:43.301 257641 WARNING nova.compute.manager [req-f56df960-3416-47ed-8d1c-0c5f27946181 req-c7859f5c-64bb-4372-9228-d085a27190f9 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state active and task_state suspending.#033[00m
Nov 29 03:46:44 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:44.136 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
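This DbSetCommand is the metadata agent acknowledging nb_cfg=62 (from the SB_Global update it delayed four seconds earlier) by writing it into its Chassis_Private record. A sketch of that transaction; sb_api is assumed to be an already-connected ovsdbapp OVN-Southbound API object:

    def ack_sb_cfg(sb_api, chassis_private_uuid, nb_cfg):
        # Mirrors DbSetCommand(table=Chassis_Private, record=...,
        # col_values=(('external_ids', {...}),)) in the log line above.
        with sb_api.transaction(check_error=True) as txn:
            txn.add(sb_api.db_set(
                'Chassis_Private', chassis_private_uuid,
                ('external_ids',
                 {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)})))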
Nov 29 03:46:44 np0005539550 nova_compute[257631]: 2025-11-29 08:46:44.558 257641 INFO nova.compute.manager [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Resuming#033[00m
Nov 29 03:46:44 np0005539550 nova_compute[257631]: 2025-11-29 08:46:44.559 257641 DEBUG nova.objects.instance [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'flavor' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:44 np0005539550 nova_compute[257631]: 2025-11-29 08:46:44.599 257641 DEBUG oslo_concurrency.lockutils [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:44 np0005539550 nova_compute[257631]: 2025-11-29 08:46:44.600 257641 DEBUG oslo_concurrency.lockutils [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquired lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:44 np0005539550 nova_compute[257631]: 2025-11-29 08:46:44.600 257641 DEBUG nova.network.neutron [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:46:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:44.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 03:46:44 np0005539550 nova_compute[257631]: 2025-11-29 08:46:44.698 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:45.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:45 np0005539550 nova_compute[257631]: 2025-11-29 08:46:45.419 257641 DEBUG nova.compute.manager [req-0160d058-ba4d-4865-ab0c-6da1a0505b94 req-3d32c44d-61da-42db-a3dd-b72c262a0654 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:45 np0005539550 nova_compute[257631]: 2025-11-29 08:46:45.419 257641 DEBUG oslo_concurrency.lockutils [req-0160d058-ba4d-4865-ab0c-6da1a0505b94 req-3d32c44d-61da-42db-a3dd-b72c262a0654 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:45 np0005539550 nova_compute[257631]: 2025-11-29 08:46:45.420 257641 DEBUG oslo_concurrency.lockutils [req-0160d058-ba4d-4865-ab0c-6da1a0505b94 req-3d32c44d-61da-42db-a3dd-b72c262a0654 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:45 np0005539550 nova_compute[257631]: 2025-11-29 08:46:45.420 257641 DEBUG oslo_concurrency.lockutils [req-0160d058-ba4d-4865-ab0c-6da1a0505b94 req-3d32c44d-61da-42db-a3dd-b72c262a0654 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:45 np0005539550 nova_compute[257631]: 2025-11-29 08:46:45.420 257641 DEBUG nova.compute.manager [req-0160d058-ba4d-4865-ab0c-6da1a0505b94 req-3d32c44d-61da-42db-a3dd-b72c262a0654 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:45 np0005539550 nova_compute[257631]: 2025-11-29 08:46:45.421 257641 WARNING nova.compute.manager [req-0160d058-ba4d-4865-ab0c-6da1a0505b94 req-3d32c44d-61da-42db-a3dd-b72c262a0654 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state suspended and task_state resuming.#033[00m
Nov 29 03:46:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:46 np0005539550 podman[390345]: 2025-11-29 08:46:46.349874954 +0000 UTC m=+0.087628018 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:46:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:46.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 108 KiB/s rd, 0 B/s wr, 178 op/s
Nov 29 03:46:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.296 257641 DEBUG nova.network.neutron [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updating instance_info_cache with network_info: [{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.314 257641 DEBUG oslo_concurrency.lockutils [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Releasing lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.319 257641 DEBUG nova.virt.libvirt.vif [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:46:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1551813917',display_name='tempest-TestServerAdvancedOps-server-1551813917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1551813917',id=211,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='053dbfa33fdc48d6ad41e28ea7a34860',ramdisk_id='',reservation_id='r-l0d1ky0j',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-979394289',owner_user_name='tempest-TestServerAdvancedOps-979394289-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:46:43Z,user_data=None,user_id='344d31c1be5d41b1af2cc7427e8e0457',uuid=5c287733-d168-4706-9ec4-10a10472896c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.319 257641 DEBUG nova.network.os_vif_util [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converting VIF {"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.320 257641 DEBUG nova.network.os_vif_util [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.320 257641 DEBUG os_vif [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.321 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.321 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.321 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.324 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f35a6af-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.324 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f35a6af-5d, col_values=(('external_ids', {'iface-id': '8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:84:f7', 'vm-uuid': '5c287733-d168-4706-9ec4-10a10472896c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.324 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.325 257641 INFO os_vif [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d')#033[00m
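The Converting/Converted/Plugging/plugged sequence above is os-vif translating nova's VIF dict into a VIFOpenVSwitch object and plugging it, which emits the AddBridgeCommand/AddPortCommand/DbSetCommand transactions in the DEBUG lines (both no-ops here, since the port already exists). A condensed sketch using os-vif's public API, with field values copied from the log; it needs root and the os-vif ovs plugin installed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    net = network.Network(id='3048113f-fdf7-404e-95c4-103607bd9494',
                          bridge='br-int')
    v = vif.VIFOpenVSwitch(
        id='8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b',
        address='fa:16:3e:eb:84:f7',
        vif_name='tap8f35a6af-5d',
        bridge_name='br-int',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='5c287733-d168-4706-9ec4-10a10472896c',
        name='instance-000000d3')

    # Issues the AddBridge/AddPort/DbSet OVSDB transactions logged above.
    os_vif.plug(v, inst)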
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.344 257641 DEBUG nova.objects.instance [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:47 np0005539550 kernel: tap8f35a6af-5d: entered promiscuous mode
Nov 29 03:46:47 np0005539550 NetworkManager[49039]: <info>  [1764406007.4051] manager: (tap8f35a6af-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/425)
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:47Z|00988|binding|INFO|Claiming lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for this chassis.
Nov 29 03:46:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:47Z|00989|binding|INFO|8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b: Claiming fa:16:3e:eb:84:f7 10.100.0.10
Nov 29 03:46:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:47.413 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:84:f7 10.100.0.10'], port_security=['fa:16:3e:eb:84:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5c287733-d168-4706-9ec4-10a10472896c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3048113f-fdf7-404e-95c4-103607bd9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '053dbfa33fdc48d6ad41e28ea7a34860', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e9e770e0-4f26-4f6d-b80b-88a92d385749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f357b6-c4f1-44c5-abd1-dd13220dd7b2, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:47.415 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b in datapath 3048113f-fdf7-404e-95c4-103607bd9494 bound to our chassis#033[00m
Nov 29 03:46:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:47.416 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3048113f-fdf7-404e-95c4-103607bd9494 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:46:47 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:47.417 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[76d1aa3a-cfcc-448a-aed3-f304aaff8509]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:46:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:47Z|00990|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b up in Southbound
Nov 29 03:46:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:47Z|00991|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b ovn-installed in OVS
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.423 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.424 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:47 np0005539550 nova_compute[257631]: 2025-11-29 08:46:47.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:47 np0005539550 systemd-udevd[390384]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:46:47 np0005539550 NetworkManager[49039]: <info>  [1764406007.4444] device (tap8f35a6af-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:46:47 np0005539550 NetworkManager[49039]: <info>  [1764406007.4457] device (tap8f35a6af-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:46:47 np0005539550 systemd-machined[216673]: New machine qemu-114-instance-000000d3.
Nov 29 03:46:47 np0005539550 systemd[1]: Started Virtual Machine qemu-114-instance-000000d3.
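The scope for qemu-113 deactivating earlier and qemu-114 starting here shows that this suspend/resume cycle tears down the QEMU process and boots a fresh one (consistent with libvirt managed save), which systemd-machined registers as a new machine. The registered machines can be listed from the host; machinectl ships with systemd:

    import subprocess

    # qemu-114-instance-000000d3 appears here once the new scope starts.
    print(subprocess.run(['machinectl', 'list'],
                         capture_output=True, text=True).stdout)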
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.159 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.557 257641 DEBUG nova.virt.libvirt.host [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Removed pending event for 5c287733-d168-4706-9ec4-10a10472896c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.557 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406008.556832, 5c287733-d168-4706-9ec4-10a10472896c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.557 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Started (Lifecycle Event)#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.579 257641 DEBUG nova.compute.manager [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.580 257641 DEBUG nova.objects.instance [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.585 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.590 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.613 257641 INFO nova.virt.libvirt.driver [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance running successfully.#033[00m
Nov 29 03:46:48 np0005539550 virtqemud[256287]: argument unsupported: QEMU guest agent is not configured
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.616 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.616 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406008.560183, 5c287733-d168-4706-9ec4-10a10472896c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.616 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.619 257641 DEBUG nova.virt.libvirt.guest [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
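The virtqemud "argument unsupported" line and nova's "Failed to set time: agent not configured" are the same failure: sync_guest_time tries to set the guest clock after resume, but the image has no qemu-guest-agent channel. The underlying libvirt call looks like this; the domain name is taken from the systemd scope above, and the script needs libvirt-python plus access to qemu:///system:

    import time
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-000000d3')
    now = time.time()
    # Without a guest agent this raises libvirtError ("QEMU guest agent is
    # not configured"), which nova logs at DEBUG and ignores.
    dom.setTime({'seconds': int(now), 'nseconds': 0})
    conn.close()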
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.620 257641 DEBUG nova.compute.manager [None req-b67c4a29-8355-4576-ab7c-bb6e49402e75 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 0 B/s wr, 176 op/s
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.642 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.645 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:46:48 np0005539550 nova_compute[257631]: 2025-11-29 08:46:48.687 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Nov 29 03:46:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:49 np0005539550 nova_compute[257631]: 2025-11-29 08:46:49.733 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:49 np0005539550 nova_compute[257631]: 2025-11-29 08:46:49.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:49 np0005539550 nova_compute[257631]: 2025-11-29 08:46:49.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:46:49 np0005539550 nova_compute[257631]: 2025-11-29 08:46:49.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
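_heal_instance_info_cache is one of nova's oslo_service periodic tasks: each run picks an instance and forcefully refreshes its network info cache from neutron (the "Forcefully refreshing" line below). The registration pattern, with a hypothetical manager class and an assumed interval:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # interval is an assumption
        def _heal_instance_info_cache(self, context):
            # Pick one instance and rebuild its info_cache from neutron.
            pass

Calling run_periodic_tasks(context) on such a manager is what produces the "Running periodic task ..." DEBUG line above.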
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.497 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.497 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.497 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.498 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:50.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 141 op/s
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.676 257641 DEBUG nova.compute.manager [req-a3f64867-9b88-4ab2-8c88-22e5ed021aeb req-083efcc8-044d-48f7-9a25-e3361eafccfc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.677 257641 DEBUG oslo_concurrency.lockutils [req-a3f64867-9b88-4ab2-8c88-22e5ed021aeb req-083efcc8-044d-48f7-9a25-e3361eafccfc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.677 257641 DEBUG oslo_concurrency.lockutils [req-a3f64867-9b88-4ab2-8c88-22e5ed021aeb req-083efcc8-044d-48f7-9a25-e3361eafccfc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.677 257641 DEBUG oslo_concurrency.lockutils [req-a3f64867-9b88-4ab2-8c88-22e5ed021aeb req-083efcc8-044d-48f7-9a25-e3361eafccfc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.678 257641 DEBUG nova.compute.manager [req-a3f64867-9b88-4ab2-8c88-22e5ed021aeb req-083efcc8-044d-48f7-9a25-e3361eafccfc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:50 np0005539550 nova_compute[257631]: 2025-11-29 08:46:50.678 257641 WARNING nova.compute.manager [req-a3f64867-9b88-4ab2-8c88-22e5ed021aeb req-083efcc8-044d-48f7-9a25-e3361eafccfc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state active and task_state None.#033[00m
Nov 29 03:46:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:51.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:52.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 0 B/s wr, 90 op/s
Nov 29 03:46:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:53.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.161 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.391 257641 DEBUG nova.compute.manager [req-4fc0511a-d44e-4c1d-be73-beec7a33a3cc req-9778d9c4-c730-4d93-b046-628809e53641 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.391 257641 DEBUG oslo_concurrency.lockutils [req-4fc0511a-d44e-4c1d-be73-beec7a33a3cc req-9778d9c4-c730-4d93-b046-628809e53641 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.392 257641 DEBUG oslo_concurrency.lockutils [req-4fc0511a-d44e-4c1d-be73-beec7a33a3cc req-9778d9c4-c730-4d93-b046-628809e53641 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.392 257641 DEBUG oslo_concurrency.lockutils [req-4fc0511a-d44e-4c1d-be73-beec7a33a3cc req-9778d9c4-c730-4d93-b046-628809e53641 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.392 257641 DEBUG nova.compute.manager [req-4fc0511a-d44e-4c1d-be73-beec7a33a3cc req-9778d9c4-c730-4d93-b046-628809e53641 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.393 257641 WARNING nova.compute.manager [req-4fc0511a-d44e-4c1d-be73-beec7a33a3cc req-9778d9c4-c730-4d93-b046-628809e53641 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.405 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.406 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.406 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.407 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.407 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
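
Every oslo.concurrency lock shows up as an acquire line ("waited Ns") paired with a release line ("held Ns"), so lock contention can be measured straight from the journal. A small sketch that totals both per lock name, assuming only the line shapes visible above (the regex is mine, not an oslo API):

    import re
    from collections import defaultdict

    # Matches the lockutils DEBUG lines above (shape inferred from this journal).
    LOCK_RE = re.compile(
        r'Lock "(?P<name>[^"]+)" '
        r'(?:acquired by .* waited (?P<waited>[0-9.]+)s'
        r'|"released" by .* held (?P<held>[0-9.]+)s)'
    )

    def lock_stats(lines):
        """Accumulate total wait and hold seconds per lock name."""
        stats = defaultdict(lambda: {"waited": 0.0, "held": 0.0})
        for line in lines:
            m = LOCK_RE.search(line)
            if m and m.group("waited") is not None:
                stats[m.group("name")]["waited"] += float(m.group("waited"))
            elif m:
                stats[m.group("name")]["held"] += float(m.group("held"))
        return dict(stats)

Here both event locks are uncontended (waited 0.000s, held 0.000s); the interesting number comes later, when the per-instance lock is held for the full 4.5 s teardown.
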
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.408 257641 INFO nova.compute.manager [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Terminating instance#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.409 257641 DEBUG nova.compute.manager [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:46:53 np0005539550 kernel: tap8f35a6af-5d (unregistering): left promiscuous mode
Nov 29 03:46:53 np0005539550 NetworkManager[49039]: <info>  [1764406013.4576] device (tap8f35a6af-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:46:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:53Z|00992|binding|INFO|Releasing lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b from this chassis (sb_readonly=0)
Nov 29 03:46:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:53Z|00993|binding|INFO|Setting lport 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b down in Southbound
Nov 29 03:46:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:46:53Z|00994|binding|INFO|Removing iface tap8f35a6af-5d ovn-installed in OVS
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.462 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:53.478 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:84:f7 10.100.0.10'], port_security=['fa:16:3e:eb:84:f7 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5c287733-d168-4706-9ec4-10a10472896c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3048113f-fdf7-404e-95c4-103607bd9494', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '053dbfa33fdc48d6ad41e28ea7a34860', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e9e770e0-4f26-4f6d-b80b-88a92d385749', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f0f357b6-c4f1-44c5-abd1-dd13220dd7b2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:46:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:53.480 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b in datapath 3048113f-fdf7-404e-95c4-103607bd9494 unbound from our chassis#033[00m
Nov 29 03:46:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:53.480 158978 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3048113f-fdf7-404e-95c4-103607bd9494 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 29 03:46:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:46:53.481 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c8742dd3-18e7-4f79-9cf3-8632d52b135a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
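
With the port unbound and no other metadata port left on the network, the agent tears down the per-network metadata namespace. Those namespaces are conventionally named ovnmeta-<network-uuid>; a quick existence check, assuming that naming convention (verify it against your neutron version):

    import subprocess

    def metadata_namespace_exists(network_id):
        """Look for an ovnmeta-<network-uuid> namespace in `ip netns list`."""
        out = subprocess.run(["ip", "netns", "list"],
                             capture_output=True, text=True, check=True).stdout
        return any(line.split()[0] == f"ovnmeta-{network_id}"
                   for line in out.splitlines() if line.strip())

    # e.g. metadata_namespace_exists('3048113f-fdf7-404e-95c4-103607bd9494')
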
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539550 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d000000d3.scope: Deactivated successfully.
Nov 29 03:46:53 np0005539550 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d000000d3.scope: Consumed 5.285s CPU time.
Nov 29 03:46:53 np0005539550 systemd-machined[216673]: Machine qemu-114-instance-000000d3 terminated.
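
The scope name and the machined name differ only by escaping: systemd encodes '-' as \x2d in unit names, so machine-qemu\x2d114\x2dinstance\x2d000000d3.scope and qemu-114-instance-000000d3 refer to the same VM. A tiny sketch to correlate the two (it undoes only the \xXX escapes seen here; `systemd-escape --unescape` does the full job):

    import re

    def unescape_unit_name(name):
        """Undo systemd \\xXX escaping, e.g. 'qemu\\x2d114' -> 'qemu-114'."""
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    assert unescape_unit_name(
        r'machine-qemu\x2d114\x2dinstance\x2d000000d3.scope'
    ) == 'machine-qemu-114-instance-000000d3.scope'
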
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.634 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.642 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.652 257641 INFO nova.virt.libvirt.driver [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Instance destroyed successfully.#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.652 257641 DEBUG nova.objects.instance [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lazy-loading 'resources' on Instance uuid 5c287733-d168-4706-9ec4-10a10472896c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.686 257641 DEBUG nova.virt.libvirt.vif [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:46:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1551813917',display_name='tempest-TestServerAdvancedOps-server-1551813917',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1551813917',id=211,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:46:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='053dbfa33fdc48d6ad41e28ea7a34860',ramdisk_id='',reservation_id='r-l0d1ky0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-979394289',owner_user_name='tempest-TestServerAdvancedOps-979394289-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:46:48Z,user_data=None,user_id='344d31c1be5d41b1af2cc7427e8e0457',uuid=5c287733-d168-4706-9ec4-10a10472896c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.686 257641 DEBUG nova.network.os_vif_util [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converting VIF {"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.687 257641 DEBUG nova.network.os_vif_util [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.687 257641 DEBUG os_vif [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.689 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.689 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f35a6af-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.693 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.698 257641 INFO os_vif [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:84:f7,bridge_name='br-int',has_traffic_filtering=True,id=8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b,network=Network(3048113f-fdf7-404e-95c4-103607bd9494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f35a6af-5d')#033[00m
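
The DelPortCommand in the transaction above is os-vif removing the tap from br-int through ovsdbapp, with if_exists=True making the delete idempotent. The equivalent manual cleanup for a leaked port, wrapped in Python (standard ovs-vsctl usage, not a call taken from this log):

    import subprocess

    def del_ovs_port(bridge, port):
        """Idempotently remove a port from an OVS bridge (mirrors DelPortCommand)."""
        subprocess.run(["ovs-vsctl", "--if-exists", "del-port", bridge, port],
                       check=True)

    # e.g. del_ovs_port("br-int", "tap8f35a6af-5d")
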
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.841 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updating instance_info_cache with network_info: [{"id": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "address": "fa:16:3e:eb:84:f7", "network": {"id": "3048113f-fdf7-404e-95c4-103607bd9494", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1142492760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "053dbfa33fdc48d6ad41e28ea7a34860", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f35a6af-5d", "ovs_interfaceid": "8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.872 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-5c287733-d168-4706-9ec4-10a10472896c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.872 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:46:53 np0005539550 nova_compute[257631]: 2025-11-29 08:46:53.873 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:54 np0005539550 nova_compute[257631]: 2025-11-29 08:46:54.344 257641 INFO nova.virt.libvirt.driver [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Deleting instance files /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c_del#033[00m
Nov 29 03:46:54 np0005539550 nova_compute[257631]: 2025-11-29 08:46:54.345 257641 INFO nova.virt.libvirt.driver [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Deletion of /var/lib/nova/instances/5c287733-d168-4706-9ec4-10a10472896c_del complete#033[00m
Nov 29 03:46:54 np0005539550 nova_compute[257631]: 2025-11-29 08:46:54.393 257641 INFO nova.compute.manager [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:46:54 np0005539550 nova_compute[257631]: 2025-11-29 08:46:54.394 257641 DEBUG oslo.service.loopingcall [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:46:54 np0005539550 nova_compute[257631]: 2025-11-29 08:46:54.395 257641 DEBUG nova.compute.manager [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:46:54 np0005539550 nova_compute[257631]: 2025-11-29 08:46:54.395 257641 DEBUG nova.network.neutron [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:46:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:46:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:54.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:46:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 152 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 81 KiB/s rd, 911 KiB/s wr, 78 op/s
Nov 29 03:46:54 np0005539550 nova_compute[257631]: 2025-11-29 08:46:54.778 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:55.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.544 257641 DEBUG nova.compute.manager [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.545 257641 DEBUG oslo_concurrency.lockutils [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.545 257641 DEBUG oslo_concurrency.lockutils [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.545 257641 DEBUG oslo_concurrency.lockutils [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.546 257641 DEBUG nova.compute.manager [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.546 257641 DEBUG nova.compute.manager [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-unplugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.546 257641 DEBUG nova.compute.manager [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.546 257641 DEBUG oslo_concurrency.lockutils [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "5c287733-d168-4706-9ec4-10a10472896c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.546 257641 DEBUG oslo_concurrency.lockutils [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.547 257641 DEBUG oslo_concurrency.lockutils [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.547 257641 DEBUG nova.compute.manager [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] No waiting events found dispatching network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.547 257641 WARNING nova.compute.manager [req-59ad76c1-19f6-4dd3-8728-832fb64d9950 req-2f3097b8-aa7f-4f8a-ba7e-2dfafb625170 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received unexpected event network-vif-plugged-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b for instance with vm_state active and task_state deleting.#033[00m
Nov 29 03:46:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.945 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:46:55 np0005539550 nova_compute[257631]: 2025-11-29 08:46:55.945 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561818659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.384 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
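
The resource tracker shells out to `ceph df --format=json` (the mon audit entries above show the dispatch) to size the RBD-backed disk inventory. A minimal sketch of the same call, assuming the cluster-level JSON field names of recent Ceph releases:

    import json
    import subprocess

    def ceph_cluster_usage(user="openstack", conf="/etc/ceph/ceph.conf"):
        """Return (total, used, avail) bytes from `ceph df --format=json`."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            capture_output=True, text=True, check=True).stdout
        stats = json.loads(out)["stats"]  # field names assumed; check your release
        return (stats["total_bytes"], stats["total_used_bytes"],
                stats["total_avail_bytes"])

At roughly 0.44 s per invocation, and with three invocations in this two-second window, the cost of these subprocess calls is visible in the 0.712 s hold on the compute_resources lock released at 08:46:57.252.
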
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.538 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.540 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4034MB free_disk=20.971885681152344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.540 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.540 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:56.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.643 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 5c287733-d168-4706-9ec4-10a10472896c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.643 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.643 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:46:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 134 KiB/s rd, 2.2 MiB/s wr, 61 op/s
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.750 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.896 257641 DEBUG nova.network.neutron [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.913 257641 INFO nova.compute.manager [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Took 2.52 seconds to deallocate network for instance.#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.959 257641 DEBUG nova.compute.manager [req-e794ceeb-f679-4a6b-8afb-5248452f81ed req-94f7ca74-a2e1-4769-8e51-1e7c49846cbb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Received event network-vif-deleted-8f35a6af-5dab-42cc-8ea4-6d0f5d28f31b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:46:56 np0005539550 nova_compute[257631]: 2025-11-29 08:46:56.962 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:46:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:57.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561395682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.196 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.201 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.226 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
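
Placement turns each inventory into usable capacity as (total - reserved) × allocation_ratio, so the inventory above advertises 32 VCPU, 7168 MB of RAM, and 17.1 GB of disk. A worked check with the numbers copied from the log line (the formula is placement's capacity check; worth confirming against your release):

    # Inventory copied from the report line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
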
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.252 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.252 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.252 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.297 257641 DEBUG oslo_concurrency.processutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:46:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:46:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516855714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.733 257641 DEBUG oslo_concurrency.processutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.738 257641 DEBUG nova.compute.provider_tree [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.766 257641 DEBUG nova.scheduler.client.report [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.784 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.834 257641 INFO nova.scheduler.client.report [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Deleted allocations for instance 5c287733-d168-4706-9ec4-10a10472896c#033[00m
Nov 29 03:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:57 np0005539550 nova_compute[257631]: 2025-11-29 08:46:57.908 257641 DEBUG oslo_concurrency.lockutils [None req-a9bb38f8-3fdc-4d29-8775-672cb48ca5aa 344d31c1be5d41b1af2cc7427e8e0457 053dbfa33fdc48d6ad41e28ea7a34860 - - default default] Lock "5c287733-d168-4706-9ec4-10a10472896c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
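
That 4.502 s hold on the per-instance lock brackets the whole teardown: terminate at 08:46:53.408, hypervisor destroy done in 0.98 s, network deallocated in 2.52 s, placement allocations deleted at 08:46:57.834. Every stage is greppable by instance UUID; a small timeline-extraction sketch (the marker strings are copied from the messages above, the helper itself is hypothetical):

    import re

    # Stage markers copied from the nova-compute messages in this journal.
    MARKERS = (
        "Terminating instance",
        "Instance destroyed successfully",
        "Took",                    # "Took N seconds to destroy/deallocate ..."
        "Deleted allocations",
    )

    def termination_timeline(lines, uuid):
        """Yield (timestamp, message) for the teardown stages of one instance."""
        for line in lines:
            if uuid in line and any(marker in line for marker in MARKERS):
                ts = re.search(r'\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+', line)
                yield (ts.group(0) if ts else "?", line.strip())
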
Nov 29 03:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:46:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:46:58 np0005539550 nova_compute[257631]: 2025-11-29 08:46:58.253 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:58 np0005539550 nova_compute[257631]: 2025-11-29 08:46:58.254 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:46:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:46:58.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 157 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 157 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Nov 29 03:46:58 np0005539550 nova_compute[257631]: 2025-11-29 08:46:58.692 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:46:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:46:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:46:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:46:59.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:46:59
Nov 29 03:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root']
Nov 29 03:46:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:46:59 np0005539550 nova_compute[257631]: 2025-11-29 08:46:59.825 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:00 np0005539550 nova_compute[257631]: 2025-11-29 08:47:00.432 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:00.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 176 KiB/s rd, 3.8 MiB/s wr, 111 op/s
Nov 29 03:47:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:01.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:01 np0005539550 nova_compute[257631]: 2025-11-29 08:47:01.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:47:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3452965658' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:47:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:47:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3452965658' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
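
The df plus get-quota pair from 192.168.122.10 matches the periodic capacity poll a Cinder RBD backend performs against its volumes pool. The quota half of it from Python, mirroring the mon_command in the audit line (JSON field names assumed):

    import json
    import subprocess

    def pool_quota(pool="volumes", user="openstack", conf="/etc/ceph/ceph.conf"):
        """Fetch a pool's quota, as in the `osd pool get-quota` audit entry."""
        out = subprocess.run(
            ["ceph", "osd", "pool", "get-quota", pool, "--format=json",
             "--id", user, "--conf", conf],
            capture_output=True, text=True, check=True).stdout
        q = json.loads(out)
        # 0 is assumed to mean "no quota set" for both fields.
        return q.get("quota_max_bytes"), q.get("quota_max_objects")
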
Nov 29 03:47:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:02.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 173 KiB/s rd, 3.8 MiB/s wr, 107 op/s
Nov 29 03:47:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:03.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:03 np0005539550 nova_compute[257631]: 2025-11-29 08:47:03.694 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:04 np0005539550 podman[390600]: 2025-11-29 08:47:04.327333772 +0000 UTC m=+0.063667723 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 03:47:04 np0005539550 podman[390601]: 2025-11-29 08:47:04.32725735 +0000 UTC m=+0.054535871 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
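
The two podman entries are the periodic container health checks for multipathd and ovn_metadata_agent, both healthy with a zero failing streak; the 'test': '/openstack/healthcheck' entry in config_data is the command they run. The same check on demand (standard podman CLI, nothing deployment-specific):

    import subprocess

    def container_healthy(name):
        """Run the container's configured healthcheck; exit status 0 is healthy."""
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    # e.g. container_healthy("ovn_metadata_agent")
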
Nov 29 03:47:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.003000076s ======
Nov 29 03:47:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:04.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
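Each radosgw request appears three times: a "starting new request" marker, a "req done" summary with op and HTTP status, and a beast access-log line carrying client IP, request line, byte count, and latency. The anonymous "HEAD /" from 192.168.122.100 and 192.168.122.102 every ~2 s is the signature of load-balancer health probes rather than real S3 traffic. A sketch for aggregating per-client latency from the beast lines, assuming the exact format printed above:

    import re
    from collections import defaultdict

    # beast: 0x...: <ip> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s'
    )

    latencies = defaultdict(list)

    def record(line):
        m = BEAST_RE.search(line)
        if m:
            latencies[m.group("ip")].append(float(m.group("lat")))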
Nov 29 03:47:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 3.8 MiB/s wr, 118 op/s
Nov 29 03:47:04 np0005539550 nova_compute[257631]: 2025-11-29 08:47:04.827 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:05.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:06.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 145 KiB/s rd, 2.9 MiB/s wr, 90 op/s
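The mgr pgmap lines are one-line cluster summaries: PG states, logical data, raw used/available capacity, and current client I/O. All 305 PGs active+clean with ~1.5 GiB of 21 GiB raw used is a healthy, lightly loaded cluster (167 MiB logical at 3x replication plus overhead); the shifting rd/wr rates track the tempest volume activity below. A sketch for extracting the capacity fields, assuming the format shown:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail"
    )

    def pgmap_capacity(line):
        m = PGMAP_RE.search(line)
        return m.groupdict() if m else None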
Nov 29 03:47:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:07.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:07 np0005539550 nova_compute[257631]: 2025-11-29 08:47:07.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:47:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:47:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:08.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.7 MiB/s wr, 105 op/s
Nov 29 03:47:08 np0005539550 nova_compute[257631]: 2025-11-29 08:47:08.651 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406013.650246, 5c287733-d168-4706-9ec4-10a10472896c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:08 np0005539550 nova_compute[257631]: 2025-11-29 08:47:08.651 257641 INFO nova.compute.manager [-] [instance: 5c287733-d168-4706-9ec4-10a10472896c] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:47:08 np0005539550 nova_compute[257631]: 2025-11-29 08:47:08.688 257641 DEBUG nova.compute.manager [None req-a6928620-7f59-459e-916b-6bfd49da08a8 - - - - - -] [instance: 5c287733-d168-4706-9ec4-10a10472896c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:08 np0005539550 nova_compute[257631]: 2025-11-29 08:47:08.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:47:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:47:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:47:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:47:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
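In the two blocks above, the rbd_support mgr module is periodically reloading its mirror-snapshot and trash-purge schedules for each pool (vms, volumes, backups, images); an empty start_after simply means a full reload, not an error. A hedged sketch for listing what is actually scheduled, shelling out to the rbd CLI (assumes the CLI is installed and that this subcommand accepts --format json, as most rbd listing commands do):

    import json
    import subprocess

    def mirror_snapshot_schedules(pool):
        """List rbd mirror snapshot schedules for a pool via the rbd CLI."""
        out = subprocess.run(
            ["rbd", "mirror", "snapshot", "schedule", "ls",
             "--pool", pool, "--recursive", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out) if out.strip() else []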
Nov 29 03:47:09 np0005539550 nova_compute[257631]: 2025-11-29 08:47:09.862 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:10.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 201 KiB/s wr, 98 op/s
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.657251) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030657338, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 1958, "num_deletes": 258, "total_data_size": 3291546, "memory_usage": 3353608, "flush_reason": "Manual Compaction"}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030671278, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 2049691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63509, "largest_seqno": 65466, "table_properties": {"data_size": 2042770, "index_size": 3674, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 18455, "raw_average_key_size": 21, "raw_value_size": 2027416, "raw_average_value_size": 2376, "num_data_blocks": 161, "num_entries": 853, "num_filter_entries": 853, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764405867, "oldest_key_time": 1764405867, "file_creation_time": 1764406030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 14090 microseconds, and 6035 cpu microseconds.
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.671353) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 2049691 bytes OK
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.671375) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.673277) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.673300) EVENT_LOG_v1 {"time_micros": 1764406030673295, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.673318) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 3283366, prev total WAL file size 3283366, number of live WAL files 2.
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.674421) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323534' seq:72057594037927935, type:22 .. '6D6772737461740032353036' seq:0, type:0; will stop at (end)
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(2001KB)], [143(12MB)]
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030674489, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 14905891, "oldest_snapshot_seqno": -1}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 10118 keys, 12177955 bytes, temperature: kUnknown
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030760205, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 12177955, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12114251, "index_size": 37287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 265899, "raw_average_key_size": 26, "raw_value_size": 11938301, "raw_average_value_size": 1179, "num_data_blocks": 1420, "num_entries": 10118, "num_filter_entries": 10118, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.760579) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 12177955 bytes
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.762734) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.7 rd, 141.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 12.3 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(13.2) write-amplify(5.9) OK, records in: 10578, records dropped: 460 output_compression: NoCompression
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.762753) EVENT_LOG_v1 {"time_micros": 1764406030762743, "job": 88, "event": "compaction_finished", "compaction_time_micros": 85808, "compaction_time_cpu_micros": 32816, "output_level": 6, "num_output_files": 1, "total_output_size": 12177955, "num_input_records": 10578, "num_output_records": 10118, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030763172, "job": 88, "event": "table_file_deletion", "file_number": 145}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406030765245, "job": 88, "event": "table_file_deletion", "file_number": 143}
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.674171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.765430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.765437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.765439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.765441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:10.765442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
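The monitor's RocksDB trace above is one complete manual-compaction cycle: job 87 flushes a ~3.3 MB memtable to L0 table #145 (2,049,691 bytes), job 88 compacts 1@0 + 1@6 into a new 12,177,955-byte L6 table #146 (dropping 460 of 10,578 records), and both input files are deleted. The EVENT_LOG_v1 payloads are single-line JSON after a fixed prefix, so the stats are easy to mine; note the logged write-amplify(5.9) is simply output bytes over new L0 input bytes, 12177955 / 2049691 ≈ 5.94. A sketch, assuming the prefix format shown here:

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def rocksdb_events(lines):
        """Yield parsed EVENT_LOG_v1 dicts (flush_started, compaction_finished, ...)."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))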
Nov 29 03:47:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:11.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:12.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 14 KiB/s wr, 96 op/s
Nov 29 03:47:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:13.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:13 np0005539550 nova_compute[257631]: 2025-11-29 08:47:13.697 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:14.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 180 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 478 KiB/s wr, 126 op/s
Nov 29 03:47:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Nov 29 03:47:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Nov 29 03:47:14 np0005539550 nova_compute[257631]: 2025-11-29 08:47:14.865 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:14 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Nov 29 03:47:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:15.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:15 np0005539550 nova_compute[257631]: 2025-11-29 08:47:15.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:16.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.0 MiB/s wr, 131 op/s
Nov 29 03:47:16 np0005539550 nova_compute[257631]: 2025-11-29 08:47:16.973 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:16.974 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:16.975 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:47:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:17 np0005539550 podman[390696]: 2025-11-29 08:47:17.363876947 +0000 UTC m=+0.100453033 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 03:47:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:18.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 219 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.9 MiB/s wr, 109 op/s
Nov 29 03:47:18 np0005539550 nova_compute[257631]: 2025-11-29 08:47:18.699 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:18.977 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:18.977 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:18.978 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
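The three lines above are the metadata agent's process monitor serializing its child-process check behind a named oslo.concurrency lock; the waited/held durations (0.001s / 0.000s) are logged by lockutils itself. The same pattern, as a minimal sketch with the real oslo.concurrency API:

    from oslo_concurrency import lockutils

    # Same shape as ProcessMonitor._check_child_processes: a named in-process
    # lock; lockutils emits the acquire/release DEBUG lines seen in the journal.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # inspect managed children here, respawn any that have died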
Nov 29 03:47:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:19.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:19 np0005539550 nova_compute[257631]: 2025-11-29 08:47:19.867 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:19.977 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
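Having seen SB_Global nb_cfg move to 63 (and after its deliberate 3-second delay, logged at 08:47:16), the agent acknowledges the new sequence number by writing neutron:ovn-metadata-sb-cfg=63 into its Chassis_Private row through an ovsdbapp transaction. For illustration only, the same update via ovsdbapp's generic db_set API, assuming an already-connected southbound API object sb_api (connection setup omitted; these names are assumptions, not the agent's code):

    # Record UUID and value taken from the DbSetCommand in the log line above.
    with sb_api.transaction(check_error=True) as txn:
        txn.add(sb_api.db_set(
            "Chassis_Private",
            "a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8",
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": "63"}),
        ))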
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.335 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.335 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.352 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001344795447143123 of space, bias 1.0, pg target 0.4034386341429369 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031503364624046156 of space, bias 1.0, pg target 0.9451009387213847 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
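Each pg_autoscaler pass computes, per pool, the fraction of raw capacity in use, scales it by the pool's bias and the cluster's PG budget, then rounds to a power of two with a floor (hence "quantized to 32"). The figures above are consistent with a budget of 300 PGs, i.e. 3 OSDs x mon_target_pg_per_osd=100: for 'vms', 0.001344795 x 1.0 x 300 ≈ 0.4034, exactly the logged pg target. A simplified sketch of that arithmetic (the real module also honors target_size_ratio and min/max PG bounds):

    def raw_pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        """Unquantized PG target, as in the pg_autoscaler lines above."""
        return usage_ratio * bias * osds * target_pg_per_osd

    # 'vms':                0.001344795447143123 * 1.0 * 300 ~= 0.4034
    # 'cephfs.cephfs.meta': 1.4540294062907128e-06 * 4.0 * 300 ~= 0.0017448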
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.541 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.541 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.550 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.550 257641 INFO nova.compute.claims [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:47:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:20.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 235 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.0 MiB/s wr, 127 op/s
Nov 29 03:47:20 np0005539550 nova_compute[257631]: 2025-11-29 08:47:20.737 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439902347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.178 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
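To size the disk claim for the new instance, nova's RBD backend shells out to ceph df (the dispatch is visible in the mon audit line above) and reads capacity from the JSON. A minimal sketch of that call with oslo.concurrency's processutils, which is what the traced execute() wrapper is:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    # e.g. stats["stats"]["total_avail_bytes"] for free raw capacity
    # (key names per the `ceph df --format=json` output schema)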
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.184 257641 DEBUG nova.compute.provider_tree [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.203 257641 DEBUG nova.scheduler.client.report [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
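The inventory in that line fixes the host's schedulable capacity: placement allows allocations up to (total - reserved) x allocation_ratio per resource class, so this node advertises 8 x 4.0 = 32 VCPU, (7680 - 512) x 1.0 = 7168 MB of RAM, and (20 - 1) x 0.9 = 17.1 GB of disk. Worked directly from the logged values:

    # Capacity math behind the inventory line above (illustrative only).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1 (floating-point rounding aside)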
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.231 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.232 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.300 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.300 257641 DEBUG nova.network.neutron [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.333 257641 INFO nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.354 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.428 257641 INFO nova.virt.block_device [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Booting with volume snapshot 91a127b9-1cd6-49a6-97a2-a9c44bf2b597 at /dev/vda#033[00m
Nov 29 03:47:21 np0005539550 nova_compute[257631]: 2025-11-29 08:47:21.844 257641 DEBUG nova.policy [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b576a51181b5425aa6e44a0eb0a22803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:47:22 np0005539550 nova_compute[257631]: 2025-11-29 08:47:22.416 257641 DEBUG nova.network.neutron [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Successfully created port: d0d81d7d-191f-4b7c-96af-3c8434b48320 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:47:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:22.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 240 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 113 op/s
Nov 29 03:47:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:23.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.420 257641 DEBUG nova.network.neutron [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Successfully updated port: d0d81d7d-191f-4b7c-96af-3c8434b48320 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.445 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "refresh_cache-914862a1-ed26-42cf-a984-a50e43676d80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.446 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquired lock "refresh_cache-914862a1-ed26-42cf-a984-a50e43676d80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.446 257641 DEBUG nova.network.neutron [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.578 257641 DEBUG nova.compute.manager [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received event network-changed-d0d81d7d-191f-4b7c-96af-3c8434b48320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.579 257641 DEBUG nova.compute.manager [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Refreshing instance network info cache due to event network-changed-d0d81d7d-191f-4b7c-96af-3c8434b48320. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.579 257641 DEBUG oslo_concurrency.lockutils [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-914862a1-ed26-42cf-a984-a50e43676d80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.642 257641 DEBUG nova.network.neutron [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:47:23 np0005539550 nova_compute[257631]: 2025-11-29 08:47:23.760 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:24 np0005539550 nova_compute[257631]: 2025-11-29 08:47:24.279 257641 DEBUG nova.network.neutron [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Updating instance_info_cache with network_info: [{"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:24 np0005539550 nova_compute[257631]: 2025-11-29 08:47:24.296 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Releasing lock "refresh_cache-914862a1-ed26-42cf-a984-a50e43676d80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:24 np0005539550 nova_compute[257631]: 2025-11-29 08:47:24.297 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Instance network_info: |[{"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:47:24 np0005539550 nova_compute[257631]: 2025-11-29 08:47:24.297 257641 DEBUG oslo_concurrency.lockutils [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-914862a1-ed26-42cf-a984-a50e43676d80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:24 np0005539550 nova_compute[257631]: 2025-11-29 08:47:24.297 257641 DEBUG nova.network.neutron [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Refreshing network info cache for port d0d81d7d-191f-4b7c-96af-3c8434b48320 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
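The cached network_info entries above are per-VIF dicts holding everything the virt driver needs: port id, MAC, fixed IP, MTU, the OVS bridge, and the tap device name. A sketch that flattens one entry, assuming the structure printed in the log:

    def summarize_vif(vif):
        """Flatten the interesting fields of one cached network_info entry."""
        subnet = vif["network"]["subnets"][0]
        return {
            "port_id": vif["id"],
            "mac": vif["address"],
            "ip": subnet["ips"][0]["address"],
            "mtu": vif["network"]["meta"]["mtu"],
            "bridge": vif["details"]["bridge_name"],
            "devname": vif["devname"],
        }

    # From the entry above: port d0d81d7d-..., fa:16:3e:cc:5d:5a, 10.100.0.10,
    # mtu 1442, bridge br-int, tap device tapd0d81d7d-19.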
Nov 29 03:47:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:24.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 407 KiB/s rd, 4.1 MiB/s wr, 100 op/s
Nov 29 03:47:24 np0005539550 nova_compute[257631]: 2025-11-29 08:47:24.888 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:25.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:25 np0005539550 nova_compute[257631]: 2025-11-29 08:47:25.873 257641 DEBUG nova.network.neutron [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Updated VIF entry in instance network info cache for port d0d81d7d-191f-4b7c-96af-3c8434b48320. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:47:25 np0005539550 nova_compute[257631]: 2025-11-29 08:47:25.874 257641 DEBUG nova.network.neutron [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Updating instance_info_cache with network_info: [{"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:47:25 np0005539550 nova_compute[257631]: 2025-11-29 08:47:25.894 257641 DEBUG oslo_concurrency.lockutils [req-43f15a7a-6533-49f8-8200-3051d947059d req-785d0424-b2c5-4aad-ab2a-209105bc7d4a 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-914862a1-ed26-42cf-a984-a50e43676d80" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.021 257641 DEBUG os_brick.utils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.023 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.035 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.036 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d75f19-43e3-47f4-a69f-b8c590fea9ec]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.037 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.046 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.047 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[a27b8545-5a77-4c67-9a68-bf4daaed74fa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.049 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.063 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.064 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[34bf7bb5-8a8d-472a-9a14-7579bc96e413]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.065 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[2266ff80-1b8f-4039-9558-3bae46637c1e]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.066 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.102 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.105 257641 DEBUG os_brick.initiator.connectors.lightos [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.105 257641 DEBUG os_brick.initiator.connectors.lightos [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.106 257641 DEBUG os_brick.initiator.connectors.lightos [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.106 257641 DEBUG os_brick.utils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 29 03:47:26 np0005539550 nova_compute[257631]: 2025-11-29 08:47:26.107 257641 DEBUG nova.virt.block_device [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Updating existing volume attachment record: 4db65d67-6002-43a1-b283-b01500d0ded7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
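The get_connector_properties trace above (the ==> call and <== return pair) is os-brick's public entry point for collecting initiator-side facts: iSCSI IQN, NVMe NQN, multipath state, system UUID. A minimal standalone sketch of the same call, with argument values copied from the logged trace; this is illustrative, not Nova's actual call site, and which keys come back depends on what tooling (multipathd, nvme-cli, iscsi-initiator-utils) is installed:

    from os_brick.initiator import connector

    # Same arguments nova-compute logged for the ==> call above.
    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com',
    )
    # Keys mirror the logged <== return, e.g. 'initiator' and 'nqn'.
    print(props.get('initiator'), props.get('nqn'))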
Nov 29 03:47:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:26.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 345 KiB/s rd, 3.5 MiB/s wr, 85 op/s
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.086 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.088 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.088 257641 INFO nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Creating image(s)
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.089 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.089 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Ensure instance console log exists: /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.089 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.090 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.090 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.092 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Start _get_guest_xml network_info=[{"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '4db65d67-6002-43a1-b283-b01500d0ded7', 'device_type': 'disk', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-04876073-8d9f-4092-96a1-ca8b66cb7194', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '04876073-8d9f-4092-96a1-ca8b66cb7194', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '914862a1-ed26-42cf-a984-a50e43676d80', 'attached_at': '', 'detached_at': '', 'volume_id': '04876073-8d9f-4092-96a1-ca8b66cb7194', 'serial': '04876073-8d9f-4092-96a1-ca8b66cb7194'}, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.098 257641 WARNING nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.103 257641 DEBUG nova.virt.libvirt.host [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.103 257641 DEBUG nova.virt.libvirt.host [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.107 257641 DEBUG nova.virt.libvirt.host [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.107 257641 DEBUG nova.virt.libvirt.host [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.109 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.109 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.109 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.110 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.110 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.110 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.110 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.110 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.111 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.111 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.111 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.111 257641 DEBUG nova.virt.hardware [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.138 257641 DEBUG nova.storage.rbd_utils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 914862a1-ed26-42cf-a984-a50e43676d80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.141 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:27.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039718221' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.595 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
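The mon dump above is how the RBD backend discovers monitor addresses before rendering the <disk> sources in the guest XML. A sketch of the same probe run directly, assuming the same client id and conf file as the logged command; the JSON field names follow the standard monmap layout:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    monmap = json.loads(out)
    for mon in monmap['mons']:
        # Depending on the Ceph release, public_addr or public_addrs is set.
        print(mon['name'], mon.get('public_addr') or mon.get('public_addrs'))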
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.628 257641 DEBUG nova.virt.libvirt.vif [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-448548342',display_name='tempest-TestVolumeBootPattern-server-448548342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-448548342',id=214,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-2fm7vexc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:21Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=914862a1-ed26-42cf-a984-a50e43676d80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.629 257641 DEBUG nova.network.os_vif_util [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.630 257641 DEBUG nova.network.os_vif_util [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:5d:5a,bridge_name='br-int',has_traffic_filtering=True,id=d0d81d7d-191f-4b7c-96af-3c8434b48320,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0d81d7d-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.631 257641 DEBUG nova.objects.instance [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'pci_devices' on Instance uuid 914862a1-ed26-42cf-a984-a50e43676d80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.649 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <uuid>914862a1-ed26-42cf-a984-a50e43676d80</uuid>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <name>instance-000000d6</name>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestVolumeBootPattern-server-448548342</nova:name>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:47:27</nova:creationTime>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:user uuid="b576a51181b5425aa6e44a0eb0a22803">tempest-TestVolumeBootPattern-1614567902-project-member</nova:user>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:project uuid="b7ffcb23bac14ee49474df9aee5f7dae">tempest-TestVolumeBootPattern-1614567902</nova:project>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <nova:port uuid="d0d81d7d-191f-4b7c-96af-3c8434b48320">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <entry name="serial">914862a1-ed26-42cf-a984-a50e43676d80</entry>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <entry name="uuid">914862a1-ed26-42cf-a984-a50e43676d80</entry>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/914862a1-ed26-42cf-a984-a50e43676d80_disk.config">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-04876073-8d9f-4092-96a1-ca8b66cb7194">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <serial>04876073-8d9f-4092-96a1-ca8b66cb7194</serial>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:cc:5d:5a"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <target dev="tapd0d81d7d-19"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/console.log" append="off"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:47:27 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:47:27 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:47:27 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:47:27 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
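The <domain> document logged above is what gets handed to libvirtd to start the guest. A bare-bones sketch of that handoff with libvirt-python; this is simplified, Nova goes through its own Guest wrapper and passes extra launch flags, so the snippet only shows the core API:

    import libvirt

    with open('domain.xml') as f:      # the <domain> XML logged above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.createXML(xml, 0)   # define-and-start a transient domain
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()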
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.650 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Preparing to wait for external event network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.651 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.651 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.651 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.652 257641 DEBUG nova.virt.libvirt.vif [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-448548342',display_name='tempest-TestVolumeBootPattern-server-448548342',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-448548342',id=214,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-2fm7vexc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:21Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=914862a1-ed26-42cf-a984-a50e43676d80,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.652 257641 DEBUG nova.network.os_vif_util [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.653 257641 DEBUG nova.network.os_vif_util [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:5d:5a,bridge_name='br-int',has_traffic_filtering=True,id=d0d81d7d-191f-4b7c-96af-3c8434b48320,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0d81d7d-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.653 257641 DEBUG os_vif [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:5d:5a,bridge_name='br-int',has_traffic_filtering=True,id=d0d81d7d-191f-4b7c-96af-3c8434b48320,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0d81d7d-19') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.654 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.654 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.655 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.658 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.658 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0d81d7d-19, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.659 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0d81d7d-19, col_values=(('external_ids', {'iface-id': 'd0d81d7d-191f-4b7c-96af-3c8434b48320', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:5d:5a', 'vm-uuid': '914862a1-ed26-42cf-a984-a50e43676d80'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
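The two transactions above (AddPortCommand, then DbSetCommand on the Interface row) are what the os-vif ovs plugin issues through ovsdbapp to wire the tap device into br-int. A sketch of the same calls via ovsdbapp's public API; the OVSDB socket path is an assumption, and the port/external_ids values are copied from the logged commands:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local OVSDB endpoint; adjust for the deployment.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapd0d81d7d-19', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd0d81d7d-19',
            ('external_ids',
             {'iface-id': 'd0d81d7d-191f-4b7c-96af-3c8434b48320',
              'iface-status': 'active',
              'attached-mac': 'fa:16:3e:cc:5d:5a',
              'vm-uuid': '914862a1-ed26-42cf-a984-a50e43676d80'})))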
Nov 29 03:47:27 np0005539550 NetworkManager[49039]: <info>  [1764406047.6609] manager: (tapd0d81d7d-19): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.663 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.666 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.667 257641 INFO os_vif [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:5d:5a,bridge_name='br-int',has_traffic_filtering=True,id=d0d81d7d-191f-4b7c-96af-3c8434b48320,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0d81d7d-19')
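"Successfully plugged vif VIFOpenVSwitch(...)" above is os-vif's plug() entry point; the field names in the logged repr are the object's actual fields. A hand-built sketch of an equivalent standalone call, assuming the field values from the log; the Network sub-object and InstanceInfo contents are simplified stand-ins, not the objects Nova actually constructed:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    port = vif.VIFOpenVSwitch(
        id='d0d81d7d-191f-4b7c-96af-3c8434b48320',
        address='fa:16:3e:cc:5d:5a',
        bridge_name='br-int',
        vif_name='tapd0d81d7d-19',
        has_traffic_filtering=True,
        network=network.Network(id='3d510715-dc99-4870-8ae9-ff599ae1a9c2',
                                bridge='br-int'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='d0d81d7d-191f-4b7c-96af-3c8434b48320'))

    inst = instance_info.InstanceInfo(
        uuid='914862a1-ed26-42cf-a984-a50e43676d80',
        name='instance-000000d6')

    os_vif.plug(port, inst)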
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.716 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.716 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.717 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No VIF found with MAC fa:16:3e:cc:5d:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.717 257641 INFO nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Using config drive
Nov 29 03:47:27 np0005539550 nova_compute[257631]: 2025-11-29 08:47:27.739 257641 DEBUG nova.storage.rbd_utils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 914862a1-ed26-42cf-a984-a50e43676d80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:47:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.037 257641 INFO nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Creating config drive at /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/disk.config
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.042 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ecafxiz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.177 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ecafxiz" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.205 257641 DEBUG nova.storage.rbd_utils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 914862a1-ed26-42cf-a984-a50e43676d80_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.209 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/disk.config 914862a1-ed26-42cf-a984-a50e43676d80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.378 257641 DEBUG oslo_concurrency.processutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/disk.config 914862a1-ed26-42cf-a984-a50e43676d80_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.379 257641 INFO nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Deleting local config drive /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80/disk.config because it was imported into RBD.
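The config drive sequence above is two shell-outs: mkisofs builds the ISO9660 "config-2" image, then rbd import moves it into the vms pool so the <disk device="cdrom"> in the domain XML can serve it over RBD. A sketch of the same pair through oslo.concurrency, with argument values copied from the log; the /tmp staging directory is just the ephemeral path this run happened to use:

    from oslo_concurrency import processutils

    iso = ('/var/lib/nova/instances/'
           '914862a1-ed26-42cf-a984-a50e43676d80/disk.config')
    # Build the ISO9660 config drive from the staged metadata tree.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
        '-allow-multidot', '-l', '-publisher',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp8ecafxiz')
    # Import the ISO into the Ceph vms pool, then the local copy can go.
    processutils.execute(
        'rbd', 'import', '--pool', 'vms', iso,
        '914862a1-ed26-42cf-a984-a50e43676d80_disk.config',
        '--image-format=2', '--id', 'openstack',
        '--conf', '/etc/ceph/ceph.conf')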
Nov 29 03:47:28 np0005539550 kernel: tapd0d81d7d-19: entered promiscuous mode
Nov 29 03:47:28 np0005539550 NetworkManager[49039]: <info>  [1764406048.4385] manager: (tapd0d81d7d-19): new Tun device (/org/freedesktop/NetworkManager/Devices/427)
Nov 29 03:47:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:28Z|00995|binding|INFO|Claiming lport d0d81d7d-191f-4b7c-96af-3c8434b48320 for this chassis.
Nov 29 03:47:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:28Z|00996|binding|INFO|d0d81d7d-191f-4b7c-96af-3c8434b48320: Claiming fa:16:3e:cc:5d:5a 10.100.0.10
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.488 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.496 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.504 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:5a 10.100.0.10'], port_security=['fa:16:3e:cc:5d:5a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '914862a1-ed26-42cf-a984-a50e43676d80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e58ae615-f827-4417-b716-6b0f5aa1e5ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d0d81d7d-191f-4b7c-96af-3c8434b48320) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.505 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d0d81d7d-191f-4b7c-96af-3c8434b48320 in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 bound to our chassis#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.506 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:47:28 np0005539550 systemd-machined[216673]: New machine qemu-115-instance-000000d6.
Nov 29 03:47:28 np0005539550 systemd-udevd[390873]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.517 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[89ed5d46-2d93-4f8b-b811-d42905e81505]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.518 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d510715-d1 in ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
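Here the agent builds the plumbing for the metadata service: a network namespace named after the datapath plus a veth pair whose inner end (tap3d510715-d1) is moved into it. The agent drives this through privsep and pyroute2, as the surrounding privsep replies show, but the net effect corresponds to the following ip(8) sequence; a sketch only, with names taken from the log:

    import subprocess

    ns = "ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2"
    outer, inner = "tap3d510715-d0", "tap3d510715-d1"

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("ip", "netns", "add", ns)                       # namespace per datapath
    sh("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    sh("ip", "link", "set", inner, "netns", ns)        # inner end into the ns
    sh("ip", "netns", "exec", ns, "ip", "link", "set", inner, "up")
    sh("ip", "link", "set", outer, "up")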
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.520 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d510715-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.520 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[95c8ab36-4264-4d52-a094-e260061cf34b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.521 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ffcc4007-af4c-4203-aff5-112e0d296e92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 NetworkManager[49039]: <info>  [1764406048.5319] device (tapd0d81d7d-19): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:47:28 np0005539550 NetworkManager[49039]: <info>  [1764406048.5333] device (tapd0d81d7d-19): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.532 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[95e49706-3f88-48bb-ad13-1811167aff3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 systemd[1]: Started Virtual Machine qemu-115-instance-000000d6.
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.554 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[65b207ea-ebce-4523-8d68-4358f354dfe8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:28Z|00997|binding|INFO|Setting lport d0d81d7d-191f-4b7c-96af-3c8434b48320 ovn-installed in OVS
Nov 29 03:47:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:28Z|00998|binding|INFO|Setting lport d0d81d7d-191f-4b7c-96af-3c8434b48320 up in Southbound
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.559 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.590 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[37b939d4-62d3-41e9-865a-d6051138c313]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.595 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9d158900-ec01-4b09-baa7-6624a4c475cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 NetworkManager[49039]: <info>  [1764406048.5966] manager: (tap3d510715-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/428)
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.625 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a9c756-e6bc-4633-8d0e-de1abf9259a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.627 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[96d27168-813d-4fa8-a12b-77c0e59da7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 NetworkManager[49039]: <info>  [1764406048.6467] device (tap3d510715-d0): carrier: link connected
Nov 29 03:47:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:28.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.651 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[58c493b1-8fc0-4745-9790-83f912aa6b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 337 KiB/s rd, 3.0 MiB/s wr, 79 op/s
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.667 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[12b7277d-87b1-42b3-88c2-8bf953b202e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909631, 'reachable_time': 15155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390905, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.683 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f886d036-8e32-48f0-8128-3072bebabfec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6190'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 909631, 'tstamp': 909631}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390906, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.697 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e9315f90-6769-4dbc-a772-8bf44b6c5545]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 288], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909631, 'reachable_time': 15155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390907, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.731 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[30df6c9b-0dd4-418d-86fe-02e75d6d70ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.792 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[85cbacd8-d2be-4523-8ba8-c2182ac35190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.793 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.794 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.794 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:28 np0005539550 kernel: tap3d510715-d0: entered promiscuous mode
Nov 29 03:47:28 np0005539550 NetworkManager[49039]: <info>  [1764406048.7968] manager: (tap3d510715-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.796 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.800 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.800 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
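The three ovsdbapp transactions above, a no-op delete from br-ex, an add to br-int, and a DbSetCommand writing external_ids:iface-id, are what tie the outer veth end to the OVN logical port. For illustration, an equivalent ovs-vsctl sequence (port name and iface-id from the log):

    import subprocess

    port = "tap3d510715-d0"
    iface_id = "9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db"

    # Mirrors DelPortCommand(if_exists=True), AddPortCommand(may_exist=True),
    # and the DbSetCommand on the Interface row, respectively.
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
                   check=True)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
                   check=True)
    subprocess.run(["ovs-vsctl", "set", "Interface", port,
                    f"external_ids:iface-id={iface_id}"], check=True)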
Nov 29 03:47:28 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:28Z|00999|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.803 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.804 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b147cccf-d7e4-4179-bace-aa82067657c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.805 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:47:28 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:28.806 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'env', 'PROCESS_TAG=haproxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d510715-dc99-4870-8ae9-ff599ae1a9c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
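The rendered configuration binds the link-local metadata address on port 80 inside the namespace, forwards requests to the agent at /var/lib/neutron/metadata_proxy, and stamps each one with the network ID via X-OVN-Network-ID; the command above then launches haproxy in that namespace through rootwrap. Once the proxy is up it can be exercised from inside the namespace; a sketch, with an illustrative request path and assuming curl is on the host:

    import subprocess

    ns = "ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2"

    # From inside the namespace the well-known metadata address answers on
    # port 80; haproxy adds X-OVN-Network-ID before handing the request on.
    subprocess.run(
        ["ip", "netns", "exec", ns, "curl", "-s",
         "http://169.254.169.254/openstack/latest/meta_data.json"],
        check=True,
    )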
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.815 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.840 257641 DEBUG nova.compute.manager [req-046b6632-b71e-4688-87be-2a7441116bd6 req-63b30b04-3e18-45ee-ad7c-3cb31c1a6faf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received event network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.841 257641 DEBUG oslo_concurrency.lockutils [req-046b6632-b71e-4688-87be-2a7441116bd6 req-63b30b04-3e18-45ee-ad7c-3cb31c1a6faf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.842 257641 DEBUG oslo_concurrency.lockutils [req-046b6632-b71e-4688-87be-2a7441116bd6 req-63b30b04-3e18-45ee-ad7c-3cb31c1a6faf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.842 257641 DEBUG oslo_concurrency.lockutils [req-046b6632-b71e-4688-87be-2a7441116bd6 req-63b30b04-3e18-45ee-ad7c-3cb31c1a6faf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:28 np0005539550 nova_compute[257631]: 2025-11-29 08:47:28.842 257641 DEBUG nova.compute.manager [req-046b6632-b71e-4688-87be-2a7441116bd6 req-63b30b04-3e18-45ee-ad7c-3cb31c1a6faf 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Processing event network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:47:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:29.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:29 np0005539550 podman[390940]: 2025-11-29 08:47:29.195380736 +0000 UTC m=+0.050952461 container create b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:47:29 np0005539550 systemd[1]: Started libpod-conmon-b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe.scope.
Nov 29 03:47:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:29 np0005539550 podman[390940]: 2025-11-29 08:47:29.17184937 +0000 UTC m=+0.027421115 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:47:29 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419f845ce1079c83c8185cb694a4ad223e84c4db4cfb2a427d511c59068e4dda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:29 np0005539550 podman[390940]: 2025-11-29 08:47:29.282288915 +0000 UTC m=+0.137860660 container init b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:47:29 np0005539550 podman[390940]: 2025-11-29 08:47:29.28724073 +0000 UTC m=+0.142812455 container start b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 03:47:29 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [NOTICE]   (390960) : New worker (390962) forked
Nov 29 03:47:29 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [NOTICE]   (390960) : Loading success.
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.569 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.570 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406049.5697253, 914862a1-ed26-42cf-a984-a50e43676d80 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.571 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] VM Started (Lifecycle Event)#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.574 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.577 257641 INFO nova.virt.libvirt.driver [-] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Instance spawned successfully.#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.577 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.602 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.606 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.607 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.607 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.608 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.608 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.608 257641 DEBUG nova.virt.libvirt.driver [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.611 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.646 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.647 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406049.5698295, 914862a1-ed26-42cf-a984-a50e43676d80 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.647 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.670 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.674 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406049.5734437, 914862a1-ed26-42cf-a984-a50e43676d80 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.674 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.679 257641 INFO nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Took 2.59 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.679 257641 DEBUG nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.691 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.694 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.722 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
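The burst of Started/Paused/Resumed lifecycle events above ends in two "Skip." lines: while the instance still carries a pending task (task_state spawning), the power-state sync deliberately leaves the database record alone so the in-flight build is not clobbered. The decision reduces to roughly the following sketch (constants and return values are illustrative stand-ins, not Nova's real enums):

    # Power states as the log prints them: DB power_state 0, VM power_state 1.
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:         # e.g. 'spawning': pending task
            return "skip"                  # the "Skip." lines in the log
        if db_power_state != vm_power_state:
            return "update-db"             # adopt the hypervisor's view
        return "in-sync"

    assert sync_power_state(NOSTATE, RUNNING, "spawning") == "skip"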
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.753 257641 INFO nova.compute.manager [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Took 9.35 seconds to build instance.#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.770 257641 DEBUG oslo_concurrency.lockutils [None req-e7f698b7-a352-4799-ac48-e7122a1e84ae b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.789 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.789 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.805 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.878 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.878 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.884 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.884 257641 INFO nova.compute.claims [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.891 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:29 np0005539550 nova_compute[257631]: 2025-11-29 08:47:29.997 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737864329' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.437 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.442 257641 DEBUG nova.compute.provider_tree [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.461 257641 DEBUG nova.scheduler.client.report [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
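The inventory reported above also fixes how much of the host the scheduler may hand out: Placement computes capacity per resource class as (total - reserved) * allocation_ratio. Applying that to the exact values in the logged inventory:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    capacity = {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
                for rc, v in inventory.items()}
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB about 17.1 (19 * 0.9)
    print(capacity)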
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.492 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.493 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.552 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.552 257641 DEBUG nova.network.neutron [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.570 257641 INFO nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.586 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:47:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:30.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 253 KiB/s rd, 1.5 MiB/s wr, 76 op/s
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.678 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.680 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.680 257641 INFO nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Creating image(s)#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.707 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.737 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.763 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.767 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.805 257641 DEBUG nova.policy [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de2965680b714b539553cf0792584e1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.846 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
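Note that qemu-img info is not run bare: it is wrapped in python3 -m oslo_concurrency.prlimit, which caps the child's address space at 1 GiB and its CPU time at 30 seconds so that probing a corrupt or hostile image cannot exhaust the compute host. A sketch reproducing the logged command verbatim:

    import subprocess

    base = ("/var/lib/nova/instances/_base/"
            "f62ef5f82502d01c82174408aec7f3ac942e2488")

    # prlimit applies the address-space and CPU limits before exec'ing
    # qemu-img; --force-share lets the probe run even if the image is in use.
    info = subprocess.run(
        ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", base, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(info)  # JSON description of the cached base image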
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.847 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.848 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.849 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
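[annotation] The Acquiring/acquired/released trio above is oslo.concurrency's standard lock tracing; nova serializes base-image fetches on the image hash so concurrent boots from the same image do not race. A minimal sketch of the pattern, assuming oslo.concurrency is installed:

    # Sketch of the lock pattern traced above; the lock name is the
    # base-image hash taken from the log.
    from oslo_concurrency import lockutils

    with lockutils.lock("f62ef5f82502d01c82174408aec7f3ac942e2488"):
        # fetch_func_sync populates the cached base image here
        pass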
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.875 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:47:30 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.879 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8d6c8cb2-2581-465e-a048-10774b364ab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:30.999 257641 DEBUG nova.compute.manager [req-53261766-a8c4-4b30-8fb3-c93990a23f14 req-70334b1a-0481-4c11-937a-032d845a35a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received event network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.000 257641 DEBUG oslo_concurrency.lockutils [req-53261766-a8c4-4b30-8fb3-c93990a23f14 req-70334b1a-0481-4c11-937a-032d845a35a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.001 257641 DEBUG oslo_concurrency.lockutils [req-53261766-a8c4-4b30-8fb3-c93990a23f14 req-70334b1a-0481-4c11-937a-032d845a35a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.001 257641 DEBUG oslo_concurrency.lockutils [req-53261766-a8c4-4b30-8fb3-c93990a23f14 req-70334b1a-0481-4c11-937a-032d845a35a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.002 257641 DEBUG nova.compute.manager [req-53261766-a8c4-4b30-8fb3-c93990a23f14 req-70334b1a-0481-4c11-937a-032d845a35a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] No waiting events found dispatching network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.002 257641 WARNING nova.compute.manager [req-53261766-a8c4-4b30-8fb3-c93990a23f14 req-70334b1a-0481-4c11-937a-032d845a35a1 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received unexpected event network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 for instance with vm_state active and task_state None.
Nov 29 03:47:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:31.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
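[annotation] The anonymous "HEAD / HTTP/1.0" 200 entries from radosgw's beast frontend are periodic load-balancer health probes; they recur roughly every second below, alternating between clients 192.168.122.100 and 192.168.122.102. The equivalent probe by hand, where host and port are assumptions since the log does not record the listening endpoint:

    # Hand-rolled equivalent of the balancer probe; host and port are
    # assumed, not taken from the log.
    import http.client

    conn = http.client.HTTPConnection("np0005539550", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # expect 200 while radosgw is up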
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.163 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8d6c8cb2-2581-465e-a048-10774b364ab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.232 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] resizing rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
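[annotation] Sequence so far for instance 8d6c8cb2: no existing RBD image was found, the cached base image was pushed into the vms pool with rbd import, and the image was then grown to the flavor's 1 GiB root disk (1073741824 bytes). A sketch replaying the same flow with the rbd CLI, arguments copied from the log; note nova performs the resize through librbd (rbd_utils.resize) rather than the CLI, so the second command is an equivalent, not a transcript:

    # Replay of the import-then-resize flow recorded above (sketch).
    import subprocess

    base = "/var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488"
    image = "8d6c8cb2-2581-465e-a048-10774b364ab5_disk"
    ceph = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "import", "--pool", "vms", base, image,
                    "--image-format=2", *ceph], check=True)
    # 1073741824 bytes == 1024 MiB; rbd resize takes MiB by default
    subprocess.run(["rbd", "resize", "--pool", "vms", "--size", "1024",
                    image, *ceph], check=True)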
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.336 257641 DEBUG nova.objects.instance [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'migration_context' on Instance uuid 8d6c8cb2-2581-465e-a048-10774b364ab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.359 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.360 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Ensure instance console log exists: /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.360 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.361 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.361 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.422 257641 DEBUG nova.network.neutron [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Successfully created port: 8f9c52aa-0f9a-4731-b369-e0132cf94c40 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.830 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.831 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.832 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.832 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.832 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.834 257641 INFO nova.compute.manager [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Terminating instance
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.835 257641 DEBUG nova.compute.manager [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 03:47:31 np0005539550 kernel: tapd0d81d7d-19 (unregistering): left promiscuous mode
Nov 29 03:47:31 np0005539550 NetworkManager[49039]: <info>  [1764406051.8837] device (tapd0d81d7d-19): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:47:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:31Z|01000|binding|INFO|Releasing lport d0d81d7d-191f-4b7c-96af-3c8434b48320 from this chassis (sb_readonly=0)
Nov 29 03:47:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:31Z|01001|binding|INFO|Setting lport d0d81d7d-191f-4b7c-96af-3c8434b48320 down in Southbound
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.890 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:31 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:31Z|01002|binding|INFO|Removing iface tapd0d81d7d-19 ovn-installed in OVS
Nov 29 03:47:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:31.896 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:5d:5a 10.100.0.10'], port_security=['fa:16:3e:cc:5d:5a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '914862a1-ed26-42cf-a984-a50e43676d80', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e58ae615-f827-4417-b716-6b0f5aa1e5ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=d0d81d7d-191f-4b7c-96af-3c8434b48320) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:47:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:31.897 158978 INFO neutron.agent.ovn.metadata.agent [-] Port d0d81d7d-191f-4b7c-96af-3c8434b48320 in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 unbound from our chassis
Nov 29 03:47:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:31.899 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 29 03:47:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:31.900 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[327e1c70-8f3a-4a35-943c-2fb38962a7ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:31 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:31.900 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace which is not needed anymore
Nov 29 03:47:31 np0005539550 nova_compute[257631]: 2025-11-29 08:47:31.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:31 np0005539550 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d000000d6.scope: Deactivated successfully.
Nov 29 03:47:31 np0005539550 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d000000d6.scope: Consumed 3.455s CPU time.
Nov 29 03:47:31 np0005539550 systemd-machined[216673]: Machine qemu-115-instance-000000d6 terminated.
Nov 29 03:47:32 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [NOTICE]   (390960) : haproxy version is 2.8.14-c23fe91
Nov 29 03:47:32 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [NOTICE]   (390960) : path to executable is /usr/sbin/haproxy
Nov 29 03:47:32 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [WARNING]  (390960) : Exiting Master process...
Nov 29 03:47:32 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [WARNING]  (390960) : Exiting Master process...
Nov 29 03:47:32 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [ALERT]    (390960) : Current worker (390962) exited with code 143 (Terminated)
Nov 29 03:47:32 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[390956]: [WARNING]  (390960) : All workers exited. Exiting... (0)
Nov 29 03:47:32 np0005539550 systemd[1]: libpod-b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe.scope: Deactivated successfully.
Nov 29 03:47:32 np0005539550 podman[391227]: 2025-11-29 08:47:32.035653151 +0000 UTC m=+0.043373358 container died b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe-userdata-shm.mount: Deactivated successfully.
Nov 29 03:47:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-419f845ce1079c83c8185cb694a4ad223e84c4db4cfb2a427d511c59068e4dda-merged.mount: Deactivated successfully.
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.070 257641 INFO nova.virt.libvirt.driver [-] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Instance destroyed successfully.
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.071 257641 DEBUG nova.objects.instance [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'resources' on Instance uuid 914862a1-ed26-42cf-a984-a50e43676d80 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:47:32 np0005539550 podman[391227]: 2025-11-29 08:47:32.078749552 +0000 UTC m=+0.086469749 container cleanup b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:47:32 np0005539550 systemd[1]: libpod-conmon-b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe.scope: Deactivated successfully.
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.093 257641 DEBUG nova.virt.libvirt.vif [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:47:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-448548342',display_name='tempest-TestVolumeBootPattern-server-448548342',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-448548342',id=214,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-2fm7vexc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:47:29Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=914862a1-ed26-42cf-a984-a50e43676d80,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.094 257641 DEBUG nova.network.os_vif_util [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "address": "fa:16:3e:cc:5d:5a", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0d81d7d-19", "ovs_interfaceid": "d0d81d7d-191f-4b7c-96af-3c8434b48320", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.095 257641 DEBUG nova.network.os_vif_util [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:5d:5a,bridge_name='br-int',has_traffic_filtering=True,id=d0d81d7d-191f-4b7c-96af-3c8434b48320,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0d81d7d-19') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.095 257641 DEBUG os_vif [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:5d:5a,bridge_name='br-int',has_traffic_filtering=True,id=d0d81d7d-191f-4b7c-96af-3c8434b48320,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0d81d7d-19') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.097 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.097 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0d81d7d-19, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.135 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.139 257641 INFO os_vif [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:5d:5a,bridge_name='br-int',has_traffic_filtering=True,id=d0d81d7d-191f-4b7c-96af-3c8434b48320,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0d81d7d-19')
Nov 29 03:47:32 np0005539550 podman[391267]: 2025-11-29 08:47:32.142875464 +0000 UTC m=+0.043579733 container remove b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.148 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[32841e25-5d04-459e-98b1-af18f597ef2b]: (4, ('Sat Nov 29 08:47:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe)\nb3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe\nSat Nov 29 08:47:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (b3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe)\nb3b306b6d2dc48ea0b8b4f6d9e599b6f39d8fc478f6041a0146d0fc17dbbadfe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.151 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[63fff321-b785-4fcc-9a58-35541cd39cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.152 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:32 np0005539550 kernel: tap3d510715-d0: left promiscuous mode
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.167 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8ec086-568c-493e-a08f-8d250a68a9cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.169 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.173 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.179 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7168ec95-45e8-4045-bd1d-99003d4bbea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.180 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2c67ca2d-9942-415b-a134-19fe08cf17ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.195 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c9aeae7b-6db6-49c4-b509-55cd5f6a4fd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 909624, 'reachable_time': 26397, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391346, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.199 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:47:32 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:32.199 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[96e0814b-46e9-4a06-a715-0b23ef2dc5d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:32 np0005539550 systemd[1]: run-netns-ovnmeta\x2d3d510715\x2ddc99\x2d4870\x2d8ae9\x2dff599ae1a9c2.mount: Deactivated successfully.
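[annotation] With the last VIF gone from network 3d510715, the metadata agent tore down its proxy: the haproxy sidecar container was stopped and removed, the tap3d510715-d0 port was dropped from OVS, and the ovnmeta- namespace was deleted through the agent's privsep helper (remove_netns), after which systemd reaped the namespace bind mount above. A sketch of the same namespace cleanup done by hand, assuming iproute2 is available:

    # Manual equivalent of the remove_netns call logged above (sketch).
    import subprocess

    subprocess.run(["ip", "netns", "delete",
                    "ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2"], check=True)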
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.378 257641 INFO nova.virt.libvirt.driver [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Deleting instance files /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80_del
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.379 257641 INFO nova.virt.libvirt.driver [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Deletion of /var/lib/nova/instances/914862a1-ed26-42cf-a984-a50e43676d80_del complete
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.430 257641 INFO nova.compute.manager [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Took 0.59 seconds to destroy the instance on the hypervisor.
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.430 257641 DEBUG oslo.service.loopingcall [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.431 257641 DEBUG nova.compute.manager [-] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.431 257641 DEBUG nova.network.neutron [-] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.439 257641 DEBUG nova.network.neutron [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Successfully updated port: 8f9c52aa-0f9a-4731-b369-e0132cf94c40 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.450 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.450 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquired lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:47:32 np0005539550 nova_compute[257631]: 2025-11-29 08:47:32.450 257641 DEBUG nova.network.neutron [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 03:47:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:32.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 253 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 504 KiB/s rd, 1.0 MiB/s wr, 69 op/s
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.123 257641 DEBUG nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received event network-vif-unplugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.124 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.124 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.125 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.125 257641 DEBUG nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] No waiting events found dispatching network-vif-unplugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.126 257641 DEBUG nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received event network-vif-unplugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.126 257641 DEBUG nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-changed-8f9c52aa-0f9a-4731-b369-e0132cf94c40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.127 257641 DEBUG nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Refreshing instance network info cache due to event network-changed-8f9c52aa-0f9a-4731-b369-e0132cf94c40. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.127 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:47:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:33.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:33 np0005539550 nova_compute[257631]: 2025-11-29 08:47:33.514 257641 DEBUG nova.network.neutron [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 03:47:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:34.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 328 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.3 MiB/s wr, 158 op/s
Nov 29 03:47:34 np0005539550 nova_compute[257631]: 2025-11-29 08:47:34.912 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:35 np0005539550 podman[391358]: 2025-11-29 08:47:35.312711348 +0000 UTC m=+0.051349771 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 03:47:35 np0005539550 podman[391357]: 2025-11-29 08:47:35.344750958 +0000 UTC m=+0.085557356 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
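[annotation] The two health_status=healthy events above are podman's periodic healthcheck timers firing for the ovn_metadata_agent and multipathd containers; each runs the /openstack/healthcheck script mounted into the container. The same check can be triggered on demand with the podman CLI (container name taken from the log):

    # Trigger one healthcheck run by hand; exit code 0 means healthy.
    import subprocess

    r = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"])
    print("healthy" if r.returncode == 0 else "unhealthy")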
Nov 29 03:47:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.685 257641 DEBUG nova.network.neutron [-] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.706 257641 INFO nova.compute.manager [-] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Took 3.28 seconds to deallocate network for instance.
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.739 257641 DEBUG nova.compute.manager [req-9a1469eb-2b5e-4ea2-8ea3-a77c30e7b664 req-6f69f77f-3b3e-42d1-9f25-79a4be0e36dc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received event network-vif-deleted-d0d81d7d-191f-4b7c-96af-3c8434b48320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.902 257641 INFO nova.compute.manager [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Took 0.20 seconds to detach 1 volumes for instance.
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.904 257641 DEBUG nova.compute.manager [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Deleting volume: 04876073-8d9f-4092-96a1-ca8b66cb7194 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.946 257641 DEBUG nova.network.neutron [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updating instance_info_cache with network_info: [{"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.969 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Releasing lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.969 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Instance network_info: |[{"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.970 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.970 257641 DEBUG nova.network.neutron [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Refreshing network info cache for port 8f9c52aa-0f9a-4731-b369-e0132cf94c40 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.974 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Start _get_guest_xml network_info=[{"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.979 257641 WARNING nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.984 257641 DEBUG nova.virt.libvirt.host [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.985 257641 DEBUG nova.virt.libvirt.host [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.988 257641 DEBUG nova.virt.libvirt.host [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.988 257641 DEBUG nova.virt.libvirt.host [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.989 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.989 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.990 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.990 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.990 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.991 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.991 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.991 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.991 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.992 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.992 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.992 257641 DEBUG nova.virt.hardware [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:47:35 np0005539550 nova_compute[257631]: 2025-11-29 08:47:35.995 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.093 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.094 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.162 257641 DEBUG oslo_concurrency.processutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655348592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.436 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.466 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.470 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384095364' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1384095364' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2297897491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.606 257641 DEBUG oslo_concurrency.processutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.613 257641 DEBUG nova.compute.provider_tree [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.629 257641 DEBUG nova.scheduler.client.report [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.650 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:36.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 339 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 147 op/s
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.694 257641 INFO nova.scheduler.client.report [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Deleted allocations for instance 914862a1-ed26-42cf-a984-a50e43676d80#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.791 257641 DEBUG oslo_concurrency.lockutils [None req-9c5831a7-0ace-4b69-aac7-f7260c056e0e b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:47:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4148940385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.906 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.908 257641 DEBUG nova.virt.libvirt.vif [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-816728519',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-816728519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=216,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIqztyNxcFNGX+Jw/TzqSaf7IKcldRL0GvIZInM3adtkD/2hkuxV9cfJutZL0j7me7Di9qNueMjWPCgJZ95kQGG++Pk0vOKucPX3FrXhBYJOxZ/WNUp453mg0OmkvApWRQ==',key_name='tempest-TestSecurityGroupsBasicOps-1327998220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-cffgh20a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:30Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=8d6c8cb2-2581-465e-a048-10774b364ab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.908 257641 DEBUG nova.network.os_vif_util [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.909 257641 DEBUG nova.network.os_vif_util [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:38:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f9c52aa-0f9a-4731-b369-e0132cf94c40,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9c52aa-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.910 257641 DEBUG nova.objects.instance [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d6c8cb2-2581-465e-a048-10774b364ab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.927 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <uuid>8d6c8cb2-2581-465e-a048-10774b364ab5</uuid>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <name>instance-000000d8</name>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-816728519</nova:name>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:47:35</nova:creationTime>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:user uuid="de2965680b714b539553cf0792584e1e">tempest-TestSecurityGroupsBasicOps-1136856573-project-member</nova:user>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:project uuid="75423dfb570f4b2bbc2f8de4f3a65d18">tempest-TestSecurityGroupsBasicOps-1136856573</nova:project>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <nova:port uuid="8f9c52aa-0f9a-4731-b369-e0132cf94c40">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <entry name="serial">8d6c8cb2-2581-465e-a048-10774b364ab5</entry>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <entry name="uuid">8d6c8cb2-2581-465e-a048-10774b364ab5</entry>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8d6c8cb2-2581-465e-a048-10774b364ab5_disk">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8d6c8cb2-2581-465e-a048-10774b364ab5_disk.config">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:27:38:f4"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <target dev="tap8f9c52aa-0f"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/console.log" append="off"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:47:36 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:47:36 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:47:36 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:47:36 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.928 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Preparing to wait for external event network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.928 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.928 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.928 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.929 257641 DEBUG nova.virt.libvirt.vif [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-816728519',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-816728519',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=216,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIqztyNxcFNGX+Jw/TzqSaf7IKcldRL0GvIZInM3adtkD/2hkuxV9cfJutZL0j7me7Di9qNueMjWPCgJZ95kQGG++Pk0vOKucPX3FrXhBYJOxZ/WNUp453mg0OmkvApWRQ==',key_name='tempest-TestSecurityGroupsBasicOps-1327998220',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-cffgh20a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:47:30Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=8d6c8cb2-2581-465e-a048-10774b364ab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.929 257641 DEBUG nova.network.os_vif_util [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.930 257641 DEBUG nova.network.os_vif_util [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:38:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f9c52aa-0f9a-4731-b369-e0132cf94c40,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9c52aa-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.930 257641 DEBUG os_vif [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:38:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f9c52aa-0f9a-4731-b369-e0132cf94c40,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9c52aa-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.930 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.931 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.931 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.934 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.934 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8f9c52aa-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.935 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8f9c52aa-0f, col_values=(('external_ids', {'iface-id': '8f9c52aa-0f9a-4731-b369-e0132cf94c40', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:38:f4', 'vm-uuid': '8d6c8cb2-2581-465e-a048-10774b364ab5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.936 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:36 np0005539550 NetworkManager[49039]: <info>  [1764406056.9375] manager: (tap8f9c52aa-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.938 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.942 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.943 257641 INFO os_vif [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:38:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f9c52aa-0f9a-4731-b369-e0132cf94c40,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9c52aa-0f')#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.995 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.996 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.996 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] No VIF found with MAC fa:16:3e:27:38:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:47:36 np0005539550 nova_compute[257631]: 2025-11-29 08:47:36.996 257641 INFO nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Using config drive#033[00m
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.020 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.123313) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057123353, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 497, "num_deletes": 251, "total_data_size": 503435, "memory_usage": 513848, "flush_reason": "Manual Compaction"}
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057128051, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 498197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65467, "largest_seqno": 65963, "table_properties": {"data_size": 495437, "index_size": 795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6778, "raw_average_key_size": 19, "raw_value_size": 489836, "raw_average_value_size": 1379, "num_data_blocks": 35, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406030, "oldest_key_time": 1764406030, "file_creation_time": 1764406057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 4803 microseconds, and 1621 cpu microseconds.
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.128113) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 498197 bytes OK
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.128137) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.129490) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.129518) EVENT_LOG_v1 {"time_micros": 1764406057129512, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.129538) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 500569, prev total WAL file size 500569, number of live WAL files 2.
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.129998) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(486KB)], [146(11MB)]
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057130064, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 12676152, "oldest_snapshot_seqno": -1}
Nov 29 03:47:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:37.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 9959 keys, 10769585 bytes, temperature: kUnknown
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057195367, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 10769585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10708200, "index_size": 35352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263315, "raw_average_key_size": 26, "raw_value_size": 10536187, "raw_average_value_size": 1057, "num_data_blocks": 1331, "num_entries": 9959, "num_filter_entries": 9959, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.195625) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 10769585 bytes
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.197090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.8 rd, 164.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.6 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(47.1) write-amplify(21.6) OK, records in: 10473, records dropped: 514 output_compression: NoCompression
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.197109) EVENT_LOG_v1 {"time_micros": 1764406057197100, "job": 90, "event": "compaction_finished", "compaction_time_micros": 65394, "compaction_time_cpu_micros": 25675, "output_level": 6, "num_output_files": 1, "total_output_size": 10769585, "num_input_records": 10473, "num_output_records": 9959, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057197319, "job": 90, "event": "table_file_deletion", "file_number": 148}
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406057199755, "job": 90, "event": "table_file_deletion", "file_number": 146}
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.129915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.199822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.199828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.199830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.199833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:47:37.199835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.222 257641 DEBUG nova.network.neutron [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updated VIF entry in instance network info cache for port 8f9c52aa-0f9a-4731-b369-e0132cf94c40. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.223 257641 DEBUG nova.network.neutron [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updating instance_info_cache with network_info: [{"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.238 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.238 257641 DEBUG nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received event network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.239 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "914862a1-ed26-42cf-a984-a50e43676d80-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.239 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.239 257641 DEBUG oslo_concurrency.lockutils [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "914862a1-ed26-42cf-a984-a50e43676d80-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.239 257641 DEBUG nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] No waiting events found dispatching network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.239 257641 WARNING nova.compute.manager [req-0380cf18-0a50-45e7-8ee5-17e45dbfea71 req-b4addc06-fc29-430d-b77b-c5f5aaf848eb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Received unexpected event network-vif-plugged-d0d81d7d-191f-4b7c-96af-3c8434b48320 for instance with vm_state active and task_state deleting.
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.390 257641 INFO nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Creating config drive at /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/disk.config
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.397 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplyjxo06 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.536 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplyjxo06" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.567 257641 DEBUG nova.storage.rbd_utils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] rbd image 8d6c8cb2-2581-465e-a048-10774b364ab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.571 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/disk.config 8d6c8cb2-2581-465e-a048-10774b364ab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.725 257641 DEBUG oslo_concurrency.processutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/disk.config 8d6c8cb2-2581-465e-a048-10774b364ab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.726 257641 INFO nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Deleting local config drive /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5/disk.config because it was imported into RBD.
Nov 29 03:47:37 np0005539550 kernel: tap8f9c52aa-0f: entered promiscuous mode
Nov 29 03:47:37 np0005539550 NetworkManager[49039]: <info>  [1764406057.7764] manager: (tap8f9c52aa-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/431)
Nov 29 03:47:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:37Z|01003|binding|INFO|Claiming lport 8f9c52aa-0f9a-4731-b369-e0132cf94c40 for this chassis.
Nov 29 03:47:37 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:37Z|01004|binding|INFO|8f9c52aa-0f9a-4731-b369-e0132cf94c40: Claiming fa:16:3e:27:38:f4 10.100.0.14
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.778 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.781 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.783 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:37 np0005539550 nova_compute[257631]: 2025-11-29 08:47:37.802 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:37 np0005539550 NetworkManager[49039]: <info>  [1764406057.8036] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Nov 29 03:47:37 np0005539550 NetworkManager[49039]: <info>  [1764406057.8062] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/433)
Nov 29 03:47:37 np0005539550 systemd-machined[216673]: New machine qemu-116-instance-000000d8.
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.806 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:38:f4 10.100.0.14'], port_security=['fa:16:3e:27:38:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8d6c8cb2-2581-465e-a048-10774b364ab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21923162-fe0c-4f50-88b5-19d1d684fafc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8aeea424-b712-4a7c-90b2-3d14349e7c5b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdba913e-28a6-4e48-bfcc-da07411da078, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f9c52aa-0f9a-4731-b369-e0132cf94c40) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.807 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f9c52aa-0f9a-4731-b369-e0132cf94c40 in datapath 21923162-fe0c-4f50-88b5-19d1d684fafc bound to our chassis
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.809 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 21923162-fe0c-4f50-88b5-19d1d684fafc
Nov 29 03:47:37 np0005539550 systemd-udevd[391684]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.820 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d7fc571a-2bd2-4ced-8b47-5de7528a33ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.821 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap21923162-f1 in ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 03:47:37 np0005539550 NetworkManager[49039]: <info>  [1764406057.8226] device (tap8f9c52aa-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:47:37 np0005539550 NetworkManager[49039]: <info>  [1764406057.8233] device (tap8f9c52aa-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.822 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap21923162-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.822 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[82115a1e-e86e-47ec-a4db-e00cfe5e573d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.824 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0a87862d-5e8c-401c-ae8e-3fd61e7e914f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 systemd[1]: Started Virtual Machine qemu-116-instance-000000d8.
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.833 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c8f2e3-2f50-472d-8e0b-1fa28edd7618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.858 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[80faee34-e46f-4179-a62c-b704c8d15c45]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.886 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[45fd7eb0-1821-4b35-a31f-28082333dce9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 NetworkManager[49039]: <info>  [1764406057.9011] manager: (tap21923162-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/434)
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.900 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[707a39ba-63d4-4f32-a1f5-6a1c85df37dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.936 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[748b5cfc-8929-466d-b821-a5d688912258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.939 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9e17ddc3-4fd0-4002-907b-f48390132fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 NetworkManager[49039]: <info>  [1764406057.9576] device (tap21923162-f0): carrier: link connected
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.962 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[4eeeede9-b9e0-4216-b95d-0269b2547759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.978 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2f04acf0-f175-4efa-b248-259f4be6f75a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21923162-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:39:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910562, 'reachable_time': 30831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391717, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:37 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:37.992 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4c56a53e-8caf-44ee-a598-ce53c7f3e3c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:399b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 910562, 'tstamp': 910562}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391718, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.007 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[416190f2-c26e-41e0-b26d-64109a9d954c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap21923162-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:39:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910562, 'reachable_time': 30831, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391719, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.027 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.048 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd2c354-bcdf-49f2-a3c7-8fdbaf724c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.050 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:38Z|01005|binding|INFO|Setting lport 8f9c52aa-0f9a-4731-b369-e0132cf94c40 ovn-installed in OVS
Nov 29 03:47:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:38Z|01006|binding|INFO|Setting lport 8f9c52aa-0f9a-4731-b369-e0132cf94c40 up in Southbound
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.064 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.110 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb9d2d5-e288-4ea0-b0e6-2c890a4685f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.112 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21923162-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.112 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.113 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21923162-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.114 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:38 np0005539550 NetworkManager[49039]: <info>  [1764406058.1152] manager: (tap21923162-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Nov 29 03:47:38 np0005539550 kernel: tap21923162-f0: entered promiscuous mode
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.117 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.118 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap21923162-f0, col_values=(('external_ids', {'iface-id': '19e88d44-d740-439a-9106-0dfa712bfe7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:47:38 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:38Z|01007|binding|INFO|Releasing lport 19e88d44-d740-439a-9106-0dfa712bfe7d from this chassis (sb_readonly=0)
Nov 29 03:47:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.133 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.134 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/21923162-fe0c-4f50-88b5-19d1d684fafc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/21923162-fe0c-4f50-88b5-19d1d684fafc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.136 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3e3305a1-0a71-4d97-9193-164125c978cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.136 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-21923162-fe0c-4f50-88b5-19d1d684fafc
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/21923162-fe0c-4f50-88b5-19d1d684fafc.pid.haproxy
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 21923162-fe0c-4f50-88b5-19d1d684fafc
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:47:38 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:38.137 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'env', 'PROCESS_TAG=haproxy-21923162-fe0c-4f50-88b5-19d1d684fafc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/21923162-fe0c-4f50-88b5-19d1d684fafc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 03:47:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Nov 29 03:47:38 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.200 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406058.2000964, 8d6c8cb2-2581-465e-a048-10774b364ab5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.201 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] VM Started (Lifecycle Event)
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.233 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.241 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406058.2013347, 8d6c8cb2-2581-465e-a048-10774b364ab5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.242 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] VM Paused (Lifecycle Event)
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.265 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.268 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.295 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:47:38 np0005539550 podman[391793]: 2025-11-29 08:47:38.484543432 +0000 UTC m=+0.050813067 container create 8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:38 np0005539550 systemd[1]: Started libpod-conmon-8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0.scope.
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.535 257641 DEBUG nova.compute.manager [req-6edca17e-11ea-4882-8862-54d4f9922f3b req-e0842a57-bb5b-4c5c-86a1-3f8a262b56e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.536 257641 DEBUG oslo_concurrency.lockutils [req-6edca17e-11ea-4882-8862-54d4f9922f3b req-e0842a57-bb5b-4c5c-86a1-3f8a262b56e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.536 257641 DEBUG oslo_concurrency.lockutils [req-6edca17e-11ea-4882-8862-54d4f9922f3b req-e0842a57-bb5b-4c5c-86a1-3f8a262b56e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.537 257641 DEBUG oslo_concurrency.lockutils [req-6edca17e-11ea-4882-8862-54d4f9922f3b req-e0842a57-bb5b-4c5c-86a1-3f8a262b56e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.537 257641 DEBUG nova.compute.manager [req-6edca17e-11ea-4882-8862-54d4f9922f3b req-e0842a57-bb5b-4c5c-86a1-3f8a262b56e7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Processing event network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.538 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.542 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406058.5425074, 8d6c8cb2-2581-465e-a048-10774b364ab5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.543 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] VM Resumed (Lifecycle Event)
Nov 29 03:47:38 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:38 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c38a8412ba57d01b27df411495609211d1febccb88b6f8210ece8db292c78cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.548 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.552 257641 INFO nova.virt.libvirt.driver [-] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Instance spawned successfully.
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.553 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:47:38 np0005539550 podman[391793]: 2025-11-29 08:47:38.461635162 +0000 UTC m=+0.027904807 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:47:38 np0005539550 podman[391793]: 2025-11-29 08:47:38.561890109 +0000 UTC m=+0.128159764 container init 8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.562 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.566 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:47:38 np0005539550 podman[391793]: 2025-11-29 08:47:38.567499981 +0000 UTC m=+0.133769616 container start 8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.580 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.580 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.581 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.582 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.583 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.584 257641 DEBUG nova.virt.libvirt.driver [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.590 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:47:38 np0005539550 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[391808]: [NOTICE]   (391812) : New worker (391814) forked
Nov 29 03:47:38 np0005539550 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[391808]: [NOTICE]   (391812) : Loading success.
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.650 257641 INFO nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Took 7.97 seconds to spawn the instance on the hypervisor.
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.651 257641 DEBUG nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:47:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 339 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Nov 29 03:47:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:38.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
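The recurring anonymous "HEAD / HTTP/1.0" 200 entries from 192.168.122.100 and 192.168.122.102 look like load-balancer health probes against this host's radosgw. A sketch reproducing one such probe; the hostname and port are assumptions, since the log never shows where the beast frontend listens:

```python
# Hypothetical health probe; 8080 is a guess at the beast frontend port.
import http.client

conn = http.client.HTTPConnection("np0005539550", 8080, timeout=2)
conn.request("HEAD", "/")          # matches the "HEAD /" in the access log
print(conn.getresponse().status)   # 200 means the gateway answered
conn.close()
```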
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.708 257641 INFO nova.compute.manager [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Took 8.86 seconds to build instance.
Nov 29 03:47:38 np0005539550 nova_compute[257631]: 2025-11-29 08:47:38.731 257641 DEBUG oslo_concurrency.lockutils [None req-b4569492-e7b0-4c9b-93d9-9da44f767b7c de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
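The lockutils line above shows the whole build serialized under a per-instance named lock, held 8.942s end to end. The decorator form of the same oslo.concurrency pattern, as a sketch assuming oslo.concurrency is installed:

```python
# Sketch of the named-lock pattern behind the acquire/release lines.
from oslo_concurrency import lockutils

@lockutils.synchronized("8d6c8cb2-2581-465e-a048-10774b364ab5")
def _locked_do_build_and_run_instance():
    # Nova's real body spawns the guest; a placeholder stands in here.
    pass

_locked_do_build_and_run_instance()
```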
Nov 29 03:47:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:39.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:47:39 np0005539550 nova_compute[257631]: 2025-11-29 08:47:39.913 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2d9d9829-1618-486c-87bd-9d3afa4d1875 does not exist
Nov 29 03:47:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1f2510b8-263a-4378-ba94-36e73143ec5b does not exist
Nov 29 03:47:40 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d25b0b88-ab53-4dfa-962a-c40bda95e312 does not exist
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
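Each handle_command above is the cephadm mgr module driving the mon with JSON payloads that map one-to-one onto ceph CLI calls. A sketch replaying two of them, assuming an admin keyring is available on the host:

```python
# Replays two of the mon commands dispatched above via the ceph CLI.
import subprocess

for cmd in (
    ["ceph", "config", "generate-minimal-conf"],
    ["ceph", "auth", "get", "client.bootstrap-osd"],
):
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(out.stdout)
```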
Nov 29 03:47:40 np0005539550 nova_compute[257631]: 2025-11-29 08:47:40.634 257641 DEBUG nova.compute.manager [req-69019565-91b3-407b-8d97-78e7eaec2d83 req-7ccda83c-d10a-439b-a1dc-71b8cfb4e616 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:40 np0005539550 nova_compute[257631]: 2025-11-29 08:47:40.634 257641 DEBUG oslo_concurrency.lockutils [req-69019565-91b3-407b-8d97-78e7eaec2d83 req-7ccda83c-d10a-439b-a1dc-71b8cfb4e616 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:47:40 np0005539550 nova_compute[257631]: 2025-11-29 08:47:40.634 257641 DEBUG oslo_concurrency.lockutils [req-69019565-91b3-407b-8d97-78e7eaec2d83 req-7ccda83c-d10a-439b-a1dc-71b8cfb4e616 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:47:40 np0005539550 nova_compute[257631]: 2025-11-29 08:47:40.635 257641 DEBUG oslo_concurrency.lockutils [req-69019565-91b3-407b-8d97-78e7eaec2d83 req-7ccda83c-d10a-439b-a1dc-71b8cfb4e616 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:47:40 np0005539550 nova_compute[257631]: 2025-11-29 08:47:40.635 257641 DEBUG nova.compute.manager [req-69019565-91b3-407b-8d97-78e7eaec2d83 req-7ccda83c-d10a-439b-a1dc-71b8cfb4e616 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] No waiting events found dispatching network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:47:40 np0005539550 nova_compute[257631]: 2025-11-29 08:47:40.637 257641 WARNING nova.compute.manager [req-69019565-91b3-407b-8d97-78e7eaec2d83 req-7ccda83c-d10a-439b-a1dc-71b8cfb4e616 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received unexpected event network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 for instance with vm_state active and task_state None.
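The sequence above ends in a WARNING because the network-vif-plugged event arrived after the spawn had already finished: Nova pops the per-instance event registry under the "-events" lock, finds no waiter, and flags the event as unexpected. A toy version of that pop-or-warn flow, using a plain dict rather than Nova's InstanceEvents:

```python
# Illustrative pop-or-warn dispatch; names here are hypothetical.
import threading

_waiters = {}                 # (instance, tag) -> threading.Event
_events_lock = threading.Lock()

def pop_instance_event(instance, tag):
    with _events_lock:        # mirrors the "-events" lock lines above
        waiter = _waiters.pop((instance, tag), None)
    if waiter is None:
        print(f"No waiting events found dispatching {tag}")
        print(f"Received unexpected event {tag} for instance "
              "with vm_state active and task_state None.")
    else:
        waiter.set()          # wakes the thread blocked on this event

pop_instance_event("8d6c8cb2-2581-465e-a048-10774b364ab5",
                   "network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40")
```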
Nov 29 03:47:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.3 MiB/s wr, 302 op/s
Nov 29 03:47:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:40.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:40 np0005539550 podman[391964]: 2025-11-29 08:47:40.997727779 +0000 UTC m=+0.054717726 container create 207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nash, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:47:41 np0005539550 systemd[1]: Started libpod-conmon-207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754.scope.
Nov 29 03:47:41 np0005539550 podman[391964]: 2025-11-29 08:47:40.96658281 +0000 UTC m=+0.023572807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:41 np0005539550 podman[391964]: 2025-11-29 08:47:41.07959827 +0000 UTC m=+0.136588207 container init 207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nash, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:41 np0005539550 podman[391964]: 2025-11-29 08:47:41.087335016 +0000 UTC m=+0.144324933 container start 207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nash, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:47:41 np0005539550 podman[391964]: 2025-11-29 08:47:41.090453715 +0000 UTC m=+0.147443632 container attach 207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nash, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:47:41 np0005539550 affectionate_nash[391981]: 167 167
Nov 29 03:47:41 np0005539550 systemd[1]: libpod-207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754.scope: Deactivated successfully.
Nov 29 03:47:41 np0005539550 conmon[391981]: conmon 207dd53b42458c98c6e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754.scope/container/memory.events
Nov 29 03:47:41 np0005539550 podman[391964]: 2025-11-29 08:47:41.095972895 +0000 UTC m=+0.152962812 container died 207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 03:47:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-be6c02cf3a0504a856f19d8f7a8d596749c1081e9040083fd6ae88b0e3990e91-merged.mount: Deactivated successfully.
Nov 29 03:47:41 np0005539550 podman[391964]: 2025-11-29 08:47:41.139133027 +0000 UTC m=+0.196122964 container remove 207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nash, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:47:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:41 np0005539550 systemd[1]: libpod-conmon-207dd53b42458c98c6e76ae5c6e3acc121f9b93d817cf6fd9df0e0776b3e2754.scope: Deactivated successfully.
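The create/init/start/attach/died/remove burst above is cephadm probing the host with a throwaway ceph container that lives for roughly 100 ms; the "167 167" it printed is the ceph uid/gid pair. A sketch of the same one-shot pattern; the stat command is illustrative, since the log does not show what cephadm actually executed inside the container:

```python
# One-shot probe container in the style of the lifecycle above.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

result = subprocess.run(
    # --rm makes podman emit the died/remove events right after exit.
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "167 167"
```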
Nov 29 03:47:41 np0005539550 podman[392004]: 2025-11-29 08:47:41.323739248 +0000 UTC m=+0.042436384 container create a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:47:41 np0005539550 systemd[1]: Started libpod-conmon-a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733.scope.
Nov 29 03:47:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8700d64a4a674c20ef08ebe008137a0c07e0f78f43aaee988422c400c8315f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8700d64a4a674c20ef08ebe008137a0c07e0f78f43aaee988422c400c8315f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8700d64a4a674c20ef08ebe008137a0c07e0f78f43aaee988422c400c8315f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8700d64a4a674c20ef08ebe008137a0c07e0f78f43aaee988422c400c8315f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8700d64a4a674c20ef08ebe008137a0c07e0f78f43aaee988422c400c8315f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:41 np0005539550 podman[392004]: 2025-11-29 08:47:41.303826155 +0000 UTC m=+0.022523321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:41 np0005539550 podman[392004]: 2025-11-29 08:47:41.41389917 +0000 UTC m=+0.132596326 container init a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:41 np0005539550 podman[392004]: 2025-11-29 08:47:41.420124648 +0000 UTC m=+0.138821794 container start a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:47:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:41 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:47:41 np0005539550 podman[392004]: 2025-11-29 08:47:41.492386576 +0000 UTC m=+0.211083712 container attach a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:47:41 np0005539550 nova_compute[257631]: 2025-11-29 08:47:41.938 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:42 np0005539550 angry_shirley[392020]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:47:42 np0005539550 angry_shirley[392020]: --> relative data size: 1.0
Nov 29 03:47:42 np0005539550 angry_shirley[392020]: --> All data devices are unavailable
Nov 29 03:47:42 np0005539550 systemd[1]: libpod-a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733.scope: Deactivated successfully.
Nov 29 03:47:42 np0005539550 podman[392004]: 2025-11-29 08:47:42.206780004 +0000 UTC m=+0.925477140 container died a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b8700d64a4a674c20ef08ebe008137a0c07e0f78f43aaee988422c400c8315f5-merged.mount: Deactivated successfully.
Nov 29 03:47:42 np0005539550 podman[392004]: 2025-11-29 08:47:42.279664519 +0000 UTC m=+0.998361645 container remove a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shirley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 03:47:42 np0005539550 systemd[1]: libpod-conmon-a4e3959495e5be634bd2ae3f327eb09cddbed0728044ca42ee1bdb9cd912e733.scope: Deactivated successfully.
Nov 29 03:47:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 312 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.8 MiB/s wr, 313 op/s
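ceph-mgr publishes one of these pgmap digests every couple of seconds; they are the quickest health signal in this log (305 pgs, all active+clean). A regex sketch for pulling the figures out of a digest line:

```python
# Extracts version, pg count, data size, and rates from a pgmap digest.
import re

line = ("pgmap v3251: 305 pgs: 305 active+clean; 312 MiB data, "
        "1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, "
        "3.8 MiB/s wr, 313 op/s")

m = re.search(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs:.*?"
    r"(?P<data>[\d.]+ \w+) data.*?"
    r"(?P<rd>[\d.]+ \w+/s) rd, (?P<wr>[\d.]+ \w+/s) wr, (?P<ops>\d+) op/s",
    line,
)
if m:
    print(m.groupdict())
```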
Nov 29 03:47:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:42.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:42 np0005539550 podman[392189]: 2025-11-29 08:47:42.918988227 +0000 UTC m=+0.042099246 container create d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 03:47:42 np0005539550 systemd[1]: Started libpod-conmon-d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec.scope.
Nov 29 03:47:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:42 np0005539550 podman[392189]: 2025-11-29 08:47:42.984443453 +0000 UTC m=+0.107554502 container init d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:47:42 np0005539550 podman[392189]: 2025-11-29 08:47:42.991166283 +0000 UTC m=+0.114277292 container start d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:47:42 np0005539550 podman[392189]: 2025-11-29 08:47:42.994654282 +0000 UTC m=+0.117765301 container attach d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:47:42 np0005539550 podman[392189]: 2025-11-29 08:47:42.899497254 +0000 UTC m=+0.022608303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:42 np0005539550 zen_carson[392205]: 167 167
Nov 29 03:47:42 np0005539550 systemd[1]: libpod-d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec.scope: Deactivated successfully.
Nov 29 03:47:42 np0005539550 podman[392189]: 2025-11-29 08:47:42.99655905 +0000 UTC m=+0.119670079 container died d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-178a91e04e94d9b5753ac945f77c1dac1e4528074794c924519ec0538b6d647f-merged.mount: Deactivated successfully.
Nov 29 03:47:43 np0005539550 podman[392189]: 2025-11-29 08:47:43.037567578 +0000 UTC m=+0.160678597 container remove d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_carson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:47:43 np0005539550 systemd[1]: libpod-conmon-d34d9d4348eaaa9bc839c01f14107b8f26170288fc1e64c718a2e88eb3f6f1ec.scope: Deactivated successfully.
Nov 29 03:47:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:43 np0005539550 podman[392231]: 2025-11-29 08:47:43.204404289 +0000 UTC m=+0.040557488 container create 2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:47:43 np0005539550 systemd[1]: Started libpod-conmon-2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc.scope.
Nov 29 03:47:43 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5af7f2a60b5a8768d3baf2c26ea40be3174fbd1044fa34c41cf56ec8b9c135a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5af7f2a60b5a8768d3baf2c26ea40be3174fbd1044fa34c41cf56ec8b9c135a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5af7f2a60b5a8768d3baf2c26ea40be3174fbd1044fa34c41cf56ec8b9c135a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:43 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5af7f2a60b5a8768d3baf2c26ea40be3174fbd1044fa34c41cf56ec8b9c135a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:43 np0005539550 podman[392231]: 2025-11-29 08:47:43.27362108 +0000 UTC m=+0.109774279 container init 2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:47:43 np0005539550 podman[392231]: 2025-11-29 08:47:43.186617839 +0000 UTC m=+0.022771058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:43 np0005539550 podman[392231]: 2025-11-29 08:47:43.284088475 +0000 UTC m=+0.120241684 container start 2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:47:43 np0005539550 podman[392231]: 2025-11-29 08:47:43.289878952 +0000 UTC m=+0.126032171 container attach 2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:43 np0005539550 nova_compute[257631]: 2025-11-29 08:47:43.801 257641 DEBUG nova.compute.manager [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-changed-8f9c52aa-0f9a-4731-b369-e0132cf94c40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:47:43 np0005539550 nova_compute[257631]: 2025-11-29 08:47:43.803 257641 DEBUG nova.compute.manager [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Refreshing instance network info cache due to event network-changed-8f9c52aa-0f9a-4731-b369-e0132cf94c40. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:47:43 np0005539550 nova_compute[257631]: 2025-11-29 08:47:43.804 257641 DEBUG oslo_concurrency.lockutils [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:47:43 np0005539550 nova_compute[257631]: 2025-11-29 08:47:43.804 257641 DEBUG oslo_concurrency.lockutils [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:47:43 np0005539550 nova_compute[257631]: 2025-11-29 08:47:43.805 257641 DEBUG nova.network.neutron [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Refreshing network info cache for port 8f9c52aa-0f9a-4731-b369-e0132cf94c40 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:47:44 np0005539550 modest_moser[392248]: {
Nov 29 03:47:44 np0005539550 modest_moser[392248]:    "0": [
Nov 29 03:47:44 np0005539550 modest_moser[392248]:        {
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "devices": [
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "/dev/loop3"
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            ],
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "lv_name": "ceph_lv0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "lv_size": "7511998464",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "name": "ceph_lv0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "tags": {
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.cluster_name": "ceph",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.crush_device_class": "",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.encrypted": "0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.osd_id": "0",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.type": "block",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:                "ceph.vdo": "0"
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            },
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "type": "block",
Nov 29 03:47:44 np0005539550 modest_moser[392248]:            "vg_name": "ceph_vg0"
Nov 29 03:47:44 np0005539550 modest_moser[392248]:        }
Nov 29 03:47:44 np0005539550 modest_moser[392248]:    ]
Nov 29 03:47:44 np0005539550 modest_moser[392248]: }
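The JSON block the modest_moser container just printed has the shape of ceph-volume lvm list --format json output: a single OSD (osd.0) whose block device is the LV /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3. A sketch of parsing it down to the useful fields; raw below is an abridged copy of the output above:

```python
# Parses the ceph-volume-style JSON printed by modest_moser above.
import json

raw = """
{"0": [{"devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "lv_size": "7511998464",
        "tags": {"ceph.osd_id": "0",
                 "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
                 "ceph.type": "block"},
        "type": "block"}]}
"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} ({size_gib:.1f} GiB) "
              f"on {','.join(lv['devices'])}")
```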
Nov 29 03:47:44 np0005539550 systemd[1]: libpod-2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc.scope: Deactivated successfully.
Nov 29 03:47:44 np0005539550 podman[392231]: 2025-11-29 08:47:44.04654873 +0000 UTC m=+0.882701929 container died 2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:47:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f5af7f2a60b5a8768d3baf2c26ea40be3174fbd1044fa34c41cf56ec8b9c135a-merged.mount: Deactivated successfully.
Nov 29 03:47:44 np0005539550 podman[392231]: 2025-11-29 08:47:44.097790516 +0000 UTC m=+0.933943705 container remove 2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_moser, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:47:44 np0005539550 systemd[1]: libpod-conmon-2bbb4e4906d603ad062093ed6edb51abf95d0b8a17090148b49bdfb6b05c21fc.scope: Deactivated successfully.
Nov 29 03:47:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 481 KiB/s wr, 243 op/s
Nov 29 03:47:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:44.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:44 np0005539550 podman[392409]: 2025-11-29 08:47:44.693622504 +0000 UTC m=+0.036843673 container create b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:47:44 np0005539550 systemd[1]: Started libpod-conmon-b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d.scope.
Nov 29 03:47:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:44 np0005539550 podman[392409]: 2025-11-29 08:47:44.677891456 +0000 UTC m=+0.021112635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:44 np0005539550 podman[392409]: 2025-11-29 08:47:44.779475447 +0000 UTC m=+0.122696646 container init b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:47:44 np0005539550 podman[392409]: 2025-11-29 08:47:44.786127475 +0000 UTC m=+0.129348644 container start b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:47:44 np0005539550 podman[392409]: 2025-11-29 08:47:44.789304185 +0000 UTC m=+0.132525374 container attach b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:44 np0005539550 angry_euler[392427]: 167 167
Nov 29 03:47:44 np0005539550 systemd[1]: libpod-b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d.scope: Deactivated successfully.
Nov 29 03:47:44 np0005539550 podman[392409]: 2025-11-29 08:47:44.794105847 +0000 UTC m=+0.137327016 container died b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:47:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-47b7e82dca79df53790d4e042556f01776a0752e435372ac2d93359749370f3e-merged.mount: Deactivated successfully.
Nov 29 03:47:44 np0005539550 podman[392409]: 2025-11-29 08:47:44.827773779 +0000 UTC m=+0.170994948 container remove b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:47:44 np0005539550 systemd[1]: libpod-conmon-b09131ab8d4380a4037a44437db9678766f3f7c02d149cf1e3b818a4c389b44d.scope: Deactivated successfully.
Nov 29 03:47:44 np0005539550 nova_compute[257631]: 2025-11-29 08:47:44.915 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:47:45 np0005539550 podman[392451]: 2025-11-29 08:47:45.033621578 +0000 UTC m=+0.050926430 container create ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:47:45 np0005539550 systemd[1]: Started libpod-conmon-ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5.scope.
Nov 29 03:47:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:47:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff976e81165751e43fd17e6ce92fa85fad136a828b48ead3d65c249976a938d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:45 np0005539550 podman[392451]: 2025-11-29 08:47:45.008979594 +0000 UTC m=+0.026284476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:47:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff976e81165751e43fd17e6ce92fa85fad136a828b48ead3d65c249976a938d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff976e81165751e43fd17e6ce92fa85fad136a828b48ead3d65c249976a938d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff976e81165751e43fd17e6ce92fa85fad136a828b48ead3d65c249976a938d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:47:45 np0005539550 podman[392451]: 2025-11-29 08:47:45.118607469 +0000 UTC m=+0.135912321 container init ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:47:45 np0005539550 podman[392451]: 2025-11-29 08:47:45.125990125 +0000 UTC m=+0.143294977 container start ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:47:45 np0005539550 podman[392451]: 2025-11-29 08:47:45.129193317 +0000 UTC m=+0.146498189 container attach ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:45 np0005539550 nova_compute[257631]: 2025-11-29 08:47:45.219 257641 DEBUG nova.network.neutron [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updated VIF entry in instance network info cache for port 8f9c52aa-0f9a-4731-b369-e0132cf94c40. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:47:45 np0005539550 nova_compute[257631]: 2025-11-29 08:47:45.220 257641 DEBUG nova.network.neutron [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updating instance_info_cache with network_info: [{"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:45 np0005539550 nova_compute[257631]: 2025-11-29 08:47:45.258 257641 DEBUG oslo_concurrency.lockutils [req-faf8a547-a8e9-478d-afc8-5ff27b291128 req-0b1cb929-785d-4af6-bb90-6c0e1f2844ee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Nov 29 03:47:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Nov 29 03:47:45 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]: {
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]:        "osd_id": 0,
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]:        "type": "bluestore"
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]:    }
Nov 29 03:47:45 np0005539550 lucid_albattani[392467]: }
Nov 29 03:47:45 np0005539550 systemd[1]: libpod-ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5.scope: Deactivated successfully.
Nov 29 03:47:46 np0005539550 podman[392488]: 2025-11-29 08:47:46.003154713 +0000 UTC m=+0.025374943 container died ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_albattani, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:47:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ff976e81165751e43fd17e6ce92fa85fad136a828b48ead3d65c249976a938d8-merged.mount: Deactivated successfully.
Nov 29 03:47:46 np0005539550 podman[392488]: 2025-11-29 08:47:46.060014461 +0000 UTC m=+0.082234661 container remove ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:47:46 np0005539550 systemd[1]: libpod-conmon-ea7e608490fe185229d0dccceffb0f8263bbfa7bac11275a53d3d1198a6a47f5.scope: Deactivated successfully.
Nov 29 03:47:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:47:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:47:46 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev eeceb8e2-bd58-48c1-941a-734a578307df does not exist
Nov 29 03:47:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e56c19b9-d30e-4e57-899c-ed8a597f6f10 does not exist
Nov 29 03:47:46 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5d095a2d-c480-4560-a6fa-a0eff1b60e2d does not exist
Nov 29 03:47:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 21 KiB/s wr, 232 op/s
Nov 29 03:47:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:46.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:46 np0005539550 nova_compute[257631]: 2025-11-29 08:47:46.942 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:47 np0005539550 nova_compute[257631]: 2025-11-29 08:47:47.069 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406052.0665605, 914862a1-ed26-42cf-a984-a50e43676d80 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:47:47 np0005539550 nova_compute[257631]: 2025-11-29 08:47:47.069 257641 INFO nova.compute.manager [-] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:47:47 np0005539550 nova_compute[257631]: 2025-11-29 08:47:47.098 257641 DEBUG nova.compute.manager [None req-a2224e42-87f4-42da-b8ca-252471bcb404 - - - - - -] [instance: 914862a1-ed26-42cf-a984-a50e43676d80] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:47:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:47.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:47 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:47 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:47:48 np0005539550 podman[392553]: 2025-11-29 08:47:48.420083956 +0000 UTC m=+0.147100436 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 20 KiB/s wr, 199 op/s
Nov 29 03:47:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:48.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:49.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:49 np0005539550 nova_compute[257631]: 2025-11-29 08:47:49.917 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 328 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.6 MiB/s wr, 157 op/s
Nov 29 03:47:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:50.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:50 np0005539550 nova_compute[257631]: 2025-11-29 08:47:50.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:50 np0005539550 nova_compute[257631]: 2025-11-29 08:47:50.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:47:50 np0005539550 nova_compute[257631]: 2025-11-29 08:47:50.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:47:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:51.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:51 np0005539550 nova_compute[257631]: 2025-11-29 08:47:51.521 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:47:51 np0005539550 nova_compute[257631]: 2025-11-29 08:47:51.521 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:47:51 np0005539550 nova_compute[257631]: 2025-11-29 08:47:51.521 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:47:51 np0005539550 nova_compute[257631]: 2025-11-29 08:47:51.521 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8d6c8cb2-2581-465e-a048-10774b364ab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:51Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:38:f4 10.100.0.14
Nov 29 03:47:51 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:51Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:38:f4 10.100.0.14
Nov 29 03:47:51 np0005539550 nova_compute[257631]: 2025-11-29 08:47:51.945 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 360 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 4.7 MiB/s wr, 149 op/s
Nov 29 03:47:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:52.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:47:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:53.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:47:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 403 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 7.1 MiB/s wr, 179 op/s
Nov 29 03:47:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:54.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:54 np0005539550 nova_compute[257631]: 2025-11-29 08:47:54.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:54 np0005539550 nova_compute[257631]: 2025-11-29 08:47:54.994 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updating instance_info_cache with network_info: [{"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.007 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-8d6c8cb2-2581-465e-a048-10774b364ab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.008 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.008 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:55.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.946 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.946 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.946 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.947 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:47:55 np0005539550 nova_compute[257631]: 2025-11-29 08:47:55.947 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690292819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.423 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.508 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.508 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000d8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:47:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 405 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.6 MiB/s wr, 177 op/s
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.670 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.672 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3907MB free_disk=20.85297393798828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.673 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.673 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:56.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.786 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 8d6c8cb2-2581-465e-a048-10774b364ab5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.787 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.787 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.836 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:47:56 np0005539550 nova_compute[257631]: 2025-11-29 08:47:56.948 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:57.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:47:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3212363587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:47:57 np0005539550 nova_compute[257631]: 2025-11-29 08:47:57.302 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:47:57 np0005539550 nova_compute[257631]: 2025-11-29 08:47:57.308 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:47:57 np0005539550 nova_compute[257631]: 2025-11-29 08:47:57.460 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:47:57 np0005539550 nova_compute[257631]: 2025-11-29 08:47:57.492 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:47:57 np0005539550 nova_compute[257631]: 2025-11-29 08:47:57.493 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:47:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.332 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.332 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.332 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.333 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.333 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.335 257641 INFO nova.compute.manager [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Terminating instance#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.335 257641 DEBUG nova.compute.manager [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:47:58 np0005539550 kernel: tap8f9c52aa-0f (unregistering): left promiscuous mode
Nov 29 03:47:58 np0005539550 NetworkManager[49039]: <info>  [1764406078.3974] device (tap8f9c52aa-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:47:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:58Z|01008|binding|INFO|Releasing lport 8f9c52aa-0f9a-4731-b369-e0132cf94c40 from this chassis (sb_readonly=0)
Nov 29 03:47:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:58Z|01009|binding|INFO|Setting lport 8f9c52aa-0f9a-4731-b369-e0132cf94c40 down in Southbound
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 ovn_controller[148680]: 2025-11-29T08:47:58Z|01010|binding|INFO|Removing iface tap8f9c52aa-0f ovn-installed in OVS
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.408 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.415 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:38:f4 10.100.0.14'], port_security=['fa:16:3e:27:38:f4 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '8d6c8cb2-2581-465e-a048-10774b364ab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-21923162-fe0c-4f50-88b5-19d1d684fafc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75423dfb570f4b2bbc2f8de4f3a65d18', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6b0b27c2-85a4-42c9-bec5-1f8a0407b6d6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdba913e-28a6-4e48-bfcc-da07411da078, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=8f9c52aa-0f9a-4731-b369-e0132cf94c40) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.417 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 8f9c52aa-0f9a-4731-b369-e0132cf94c40 in datapath 21923162-fe0c-4f50-88b5-19d1d684fafc unbound from our chassis#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.418 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 21923162-fe0c-4f50-88b5-19d1d684fafc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.419 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[15662b8e-4d33-4cb4-9b05-e4bcd36f387c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.420 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc namespace which is not needed anymore#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.425 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d000000d8.scope: Deactivated successfully.
Nov 29 03:47:58 np0005539550 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d000000d8.scope: Consumed 13.499s CPU time.
Nov 29 03:47:58 np0005539550 systemd-machined[216673]: Machine qemu-116-instance-000000d8 terminated.
Nov 29 03:47:58 np0005539550 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[391808]: [NOTICE]   (391812) : haproxy version is 2.8.14-c23fe91
Nov 29 03:47:58 np0005539550 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[391808]: [NOTICE]   (391812) : path to executable is /usr/sbin/haproxy
Nov 29 03:47:58 np0005539550 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[391808]: [WARNING]  (391812) : Exiting Master process...
Nov 29 03:47:58 np0005539550 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[391808]: [ALERT]    (391812) : Current worker (391814) exited with code 143 (Terminated)
Nov 29 03:47:58 np0005539550 neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc[391808]: [WARNING]  (391812) : All workers exited. Exiting... (0)
Nov 29 03:47:58 np0005539550 systemd[1]: libpod-8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0.scope: Deactivated successfully.
Nov 29 03:47:58 np0005539550 podman[392704]: 2025-11-29 08:47:58.553425281 +0000 UTC m=+0.044030545 container died 8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.553 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.558 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.569 257641 INFO nova.virt.libvirt.driver [-] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Instance destroyed successfully.#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.570 257641 DEBUG nova.objects.instance [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lazy-loading 'resources' on Instance uuid 8d6c8cb2-2581-465e-a048-10774b364ab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.585 257641 DEBUG nova.virt.libvirt.vif [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:47:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-816728519',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1136856573-gen-1-816728519',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1136856573-ge',id=216,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIqztyNxcFNGX+Jw/TzqSaf7IKcldRL0GvIZInM3adtkD/2hkuxV9cfJutZL0j7me7Di9qNueMjWPCgJZ95kQGG++Pk0vOKucPX3FrXhBYJOxZ/WNUp453mg0OmkvApWRQ==',key_name='tempest-TestSecurityGroupsBasicOps-1327998220',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75423dfb570f4b2bbc2f8de4f3a65d18',ramdisk_id='',reservation_id='r-cffgh20a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1136856573',owner_user_name='tempest-TestSecurityGroupsBasicOps-1136856573-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:47:38Z,user_data=None,user_id='de2965680b714b539553cf0792584e1e',uuid=8d6c8cb2-2581-465e-a048-10774b364ab5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.586 257641 DEBUG nova.network.os_vif_util [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converting VIF {"id": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "address": "fa:16:3e:27:38:f4", "network": {"id": "21923162-fe0c-4f50-88b5-19d1d684fafc", "bridge": "br-int", "label": "tempest-network-smoke--1457068308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75423dfb570f4b2bbc2f8de4f3a65d18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8f9c52aa-0f", "ovs_interfaceid": "8f9c52aa-0f9a-4731-b369-e0132cf94c40", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.587 257641 DEBUG nova.network.os_vif_util [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:38:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f9c52aa-0f9a-4731-b369-e0132cf94c40,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9c52aa-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.588 257641 DEBUG os_vif [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:38:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f9c52aa-0f9a-4731-b369-e0132cf94c40,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9c52aa-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.589 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.590 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8f9c52aa-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0-userdata-shm.mount: Deactivated successfully.
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.594 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:47:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9c38a8412ba57d01b27df411495609211d1febccb88b6f8210ece8db292c78cc-merged.mount: Deactivated successfully.
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.597 257641 INFO os_vif [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:38:f4,bridge_name='br-int',has_traffic_filtering=True,id=8f9c52aa-0f9a-4731-b369-e0132cf94c40,network=Network(21923162-fe0c-4f50-88b5-19d1d684fafc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8f9c52aa-0f')#033[00m
Nov 29 03:47:58 np0005539550 podman[392704]: 2025-11-29 08:47:58.60673991 +0000 UTC m=+0.097345184 container cleanup 8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:47:58 np0005539550 systemd[1]: libpod-conmon-8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0.scope: Deactivated successfully.
Nov 29 03:47:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 405 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.1 MiB/s wr, 163 op/s
Nov 29 03:47:58 np0005539550 podman[392759]: 2025-11-29 08:47:58.676217899 +0000 UTC m=+0.044635651 container remove 8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 29 03:47:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:47:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:47:58.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.682 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[62530e12-4f5e-4f1d-9504-3e7d19a389ca]: (4, ('Sat Nov 29 08:47:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc (8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0)\n8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0\nSat Nov 29 08:47:58 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc (8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0)\n8abc87eab05e18656b847c3995a967376fe40130e3ac561bc7b9db731f3ad5a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.685 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[eb25fb9a-fff6-4fa0-adb0-928b2e38e0a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.687 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21923162-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:47:58 np0005539550 kernel: tap21923162-f0: left promiscuous mode
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.693 257641 DEBUG nova.compute.manager [req-9f2abc98-32a6-452b-8b7f-8e5cfef83d96 req-b6db15b1-8a82-413e-98e0-002ec93ba586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-vif-unplugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.693 257641 DEBUG oslo_concurrency.lockutils [req-9f2abc98-32a6-452b-8b7f-8e5cfef83d96 req-b6db15b1-8a82-413e-98e0-002ec93ba586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.694 257641 DEBUG oslo_concurrency.lockutils [req-9f2abc98-32a6-452b-8b7f-8e5cfef83d96 req-b6db15b1-8a82-413e-98e0-002ec93ba586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.694 257641 DEBUG oslo_concurrency.lockutils [req-9f2abc98-32a6-452b-8b7f-8e5cfef83d96 req-b6db15b1-8a82-413e-98e0-002ec93ba586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.694 257641 DEBUG nova.compute.manager [req-9f2abc98-32a6-452b-8b7f-8e5cfef83d96 req-b6db15b1-8a82-413e-98e0-002ec93ba586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] No waiting events found dispatching network-vif-unplugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.695 257641 DEBUG nova.compute.manager [req-9f2abc98-32a6-452b-8b7f-8e5cfef83d96 req-b6db15b1-8a82-413e-98e0-002ec93ba586 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-vif-unplugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
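The Acquiring/acquired/released triplet around the event pop is oslo.concurrency's lockutils at work; the same pattern in miniature, with the guarded body elided (lock name copied from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('8d6c8cb2-2581-465e-a048-10774b364ab5-events')
    def _pop_event():
        # critical section: pop the matching network-vif-unplugged event
        pass

    _pop_event()  # emits the same Acquiring/acquired/released DEBUG lines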
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.695 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.702 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.705 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b8accd73-2f9c-433e-a755-781c5b5881d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.718 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0f763263-5bc9-4005-afb8-503f772a8773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.719 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bfad291f-4a45-478e-9d51-0e9f9d7d04ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.736 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[229baafc-18db-4edf-9515-80a4ed29bfc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 910554, 'reachable_time': 19721, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 392780, 'error': None, 'target': 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:47:58 np0005539550 systemd[1]: run-netns-ovnmeta\x2d21923162\x2dfe0c\x2d4f50\x2d88b5\x2d19d1d684fafc.mount: Deactivated successfully.
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.739 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:47:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:58.740 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[6bc88598-26fd-47cb-a865-bc1df54f66c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
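The large privsep reply above is a pyroute2 RTM_NEWLINK dump taken inside the ovnmeta- namespace, followed here by the namespace removal. Both operations, reduced to direct pyroute2 calls — a sketch of what neutron's privileged ip_lib does over privsep, not the agent's exact code path:

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-21923162-fe0c-4f50-88b5-19d1d684fafc'

    with NetNS(ns_name) as ns:
        for link in ns.get_links():        # RTM_NEWLINK messages, as logged
            print(link.get_attr('IFLA_IFNAME'), link.get('state'))

    netns.remove(ns_name)                  # what remove_netns() boils down to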
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.992 257641 INFO nova.virt.libvirt.driver [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Deleting instance files /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5_del#033[00m
Nov 29 03:47:58 np0005539550 nova_compute[257631]: 2025-11-29 08:47:58.993 257641 INFO nova.virt.libvirt.driver [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Deletion of /var/lib/nova/instances/8d6c8cb2-2581-465e-a048-10774b364ab5_del complete#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.038 257641 INFO nova.compute.manager [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.039 257641 DEBUG oslo.service.loopingcall [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.040 257641 DEBUG nova.compute.manager [-] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.040 257641 DEBUG nova.network.neutron [-] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:47:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:47:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:47:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:47:59.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:47:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:47:59
Nov 29 03:47:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:47:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:47:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'backups', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes']
Nov 29 03:47:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.493 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:59.524 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:47:59 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:47:59.524 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
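SbGlobalUpdateEvent is an ovsdbapp row event matched on any UPDATE of the SB_Global table (events=('update',), conditions=None, exactly as logged). The subscription pattern looks roughly like the sketch below; the handler body and the commented registration call are assumptions:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # Match any UPDATE on SB_Global, as in the logged event spec.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # e.g. schedule the delayed Chassis_Private nb_cfg ack
            print('nb_cfg is now', row.nb_cfg)

    # idl.notify_handler.watch_event(SbGlobalUpdateEvent())  # registration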
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.525 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.922 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.948 257641 DEBUG nova.network.neutron [-] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:47:59 np0005539550 nova_compute[257631]: 2025-11-29 08:47:59.963 257641 INFO nova.compute.manager [-] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Took 0.92 seconds to deallocate network for instance.#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.013 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.014 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.078 257641 DEBUG oslo_concurrency.processutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.329 257641 DEBUG nova.compute.manager [req-c1e612df-47c9-45fa-bd5b-eb3d616d3f81 req-5c8590fc-0a29-4f87-abe8-58d05d84e7ec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-vif-deleted-8f9c52aa-0f9a-4731-b369-e0132cf94c40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417215259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.630 257641 DEBUG oslo_concurrency.processutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
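The disk side of the resource tracker's inventory comes from the ceph df call logged above. The same query, reduced to its essentials — the stats keys below are standard ceph df JSON output:

    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    total_gb = stats['total_bytes'] / 1024 ** 3
    avail_gb = stats['total_avail_bytes'] / 1024 ** 3
    print(f'{avail_gb:.1f} GiB free of {total_gb:.1f} GiB')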
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.635 257641 DEBUG nova.compute.provider_tree [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.655 257641 DEBUG nova.scheduler.client.report [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
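Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the unchanged provider above offers 32 VCPU, 7168 MB of RAM and 17.1 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, round(capacity, 2))  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1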
Nov 29 03:48:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 367 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 682 KiB/s rd, 6.1 MiB/s wr, 174 op/s
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.683 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:00.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.757 257641 INFO nova.scheduler.client.report [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Deleted allocations for instance 8d6c8cb2-2581-465e-a048-10774b364ab5#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.840 257641 DEBUG nova.compute.manager [req-5ee3298a-1700-4008-9e9a-226c6ada57da req-7ceabd10-a328-4b78-a156-cfd4566466ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received event network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.841 257641 DEBUG oslo_concurrency.lockutils [req-5ee3298a-1700-4008-9e9a-226c6ada57da req-7ceabd10-a328-4b78-a156-cfd4566466ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.841 257641 DEBUG oslo_concurrency.lockutils [req-5ee3298a-1700-4008-9e9a-226c6ada57da req-7ceabd10-a328-4b78-a156-cfd4566466ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.841 257641 DEBUG oslo_concurrency.lockutils [req-5ee3298a-1700-4008-9e9a-226c6ada57da req-7ceabd10-a328-4b78-a156-cfd4566466ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.841 257641 DEBUG nova.compute.manager [req-5ee3298a-1700-4008-9e9a-226c6ada57da req-7ceabd10-a328-4b78-a156-cfd4566466ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] No waiting events found dispatching network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.842 257641 WARNING nova.compute.manager [req-5ee3298a-1700-4008-9e9a-226c6ada57da req-7ceabd10-a328-4b78-a156-cfd4566466ff 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Received unexpected event network-vif-plugged-8f9c52aa-0f9a-4731-b369-e0132cf94c40 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:48:00 np0005539550 nova_compute[257631]: 2025-11-29 08:48:00.859 257641 DEBUG oslo_concurrency.lockutils [None req-395876f5-f203-4144-b9b3-8f7f405c87fa de2965680b714b539553cf0792584e1e 75423dfb570f4b2bbc2f8de4f3a65d18 - - default default] Lock "8d6c8cb2-2581-465e-a048-10774b364ab5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:01.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 345 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 398 KiB/s rd, 3.9 MiB/s wr, 125 op/s
Nov 29 03:48:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:02.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:02 np0005539550 nova_compute[257631]: 2025-11-29 08:48:02.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Nov 29 03:48:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Nov 29 03:48:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:03.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:03.526 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
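That DbSetCommand is the metadata agent acknowledging nb_cfg 64 into its own Chassis_Private record. As a generic ovsdbapp call against the OVN southbound database it would look like the sketch below; the connection endpoint is assumed, and the if_exists keyword assumes an ovsdbapp recent enough to log it, as this deployment's does:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642',  # endpoint assumed
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    sb.db_set('Chassis_Private', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),
              if_exists=True).execute(check_error=True)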
Nov 29 03:48:03 np0005539550 nova_compute[257631]: 2025-11-29 08:48:03.592 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Nov 29 03:48:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Nov 29 03:48:04 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Nov 29 03:48:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 328 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 29 03:48:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:04.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:04 np0005539550 nova_compute[257631]: 2025-11-29 08:48:04.924 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:05.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Nov 29 03:48:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Nov 29 03:48:05 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Nov 29 03:48:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:06 np0005539550 podman[392809]: 2025-11-29 08:48:06.312848256 +0000 UTC m=+0.054695065 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:48:06 np0005539550 podman[392808]: 2025-11-29 08:48:06.346745354 +0000 UTC m=+0.089120936 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
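Both health_status=healthy entries come from podman running the configured '/openstack/healthcheck' test inside the containers. The equivalent manual probe, container name taken from the log:

    import subprocess

    # Exit status 0 means healthy, matching health_status=healthy above.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                   check=True)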
Nov 29 03:48:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 342 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 6.7 MiB/s wr, 226 op/s
Nov 29 03:48:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:48:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:48:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:48:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:48:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:48:08 np0005539550 nova_compute[257631]: 2025-11-29 08:48:08.618 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 236 op/s
Nov 29 03:48:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:08.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:08 np0005539550 nova_compute[257631]: 2025-11-29 08:48:08.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:48:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:48:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:48:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:48:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:48:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:09.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:09 np0005539550 nova_compute[257631]: 2025-11-29 08:48:09.926 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:10 np0005539550 nova_compute[257631]: 2025-11-29 08:48:10.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Nov 29 03:48:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 325 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 6.3 MiB/s wr, 237 op/s
Nov 29 03:48:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Nov 29 03:48:10 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Nov 29 03:48:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:10.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:10 np0005539550 nova_compute[257631]: 2025-11-29 08:48:10.701 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:11.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 313 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.8 MiB/s wr, 242 op/s
Nov 29 03:48:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:12.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:13.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:13 np0005539550 nova_compute[257631]: 2025-11-29 08:48:13.567 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406078.5660717, 8d6c8cb2-2581-465e-a048-10774b364ab5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:13 np0005539550 nova_compute[257631]: 2025-11-29 08:48:13.567 257641 INFO nova.compute.manager [-] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:48:13 np0005539550 nova_compute[257631]: 2025-11-29 08:48:13.590 257641 DEBUG nova.compute.manager [None req-c9cade63-e4e6-4d44-9754-9c3ef332b5b2 - - - - - -] [instance: 8d6c8cb2-2581-465e-a048-10774b364ab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:13 np0005539550 nova_compute[257631]: 2025-11-29 08:48:13.621 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.3 MiB/s wr, 236 op/s
Nov 29 03:48:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:14.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:14 np0005539550 nova_compute[257631]: 2025-11-29 08:48:14.928 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:15.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:15 np0005539550 nova_compute[257631]: 2025-11-29 08:48:15.931 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:15 np0005539550 nova_compute[257631]: 2025-11-29 08:48:15.932 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:15 np0005539550 nova_compute[257631]: 2025-11-29 08:48:15.932 257641 INFO nova.compute.manager [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Unshelving#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.013 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.014 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.019 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'pci_requests' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.030 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.040 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.040 257641 INFO nova.compute.claims [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.158 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057975497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.613 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.619 257641 DEBUG nova.compute.provider_tree [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.635 257641 DEBUG nova.scheduler.client.report [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.658 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 731 KiB/s wr, 146 op/s
Nov 29 03:48:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:16.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:16 np0005539550 nova_compute[257631]: 2025-11-29 08:48:16.939 257641 INFO nova.network.neutron [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating port bef873ea-4c5b-4b48-8022-9c3171bdab37 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
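The port update above is the usual Neutron binding handoff during unshelve: point binding:host_id at the target host and set device_owner. Through openstacksdk the same PUT looks like this; the clouds.yaml entry name is an assumption:

    import openstack

    conn = openstack.connect(cloud='overcloud')  # cloud entry assumed
    conn.network.update_port(
        'bef873ea-4c5b-4b48-8022-9c3171bdab37',
        binding_host_id='compute-0.ctlplane.example.com',
        device_owner='compute:nova',
    )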
Nov 29 03:48:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:17.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:17 np0005539550 nova_compute[257631]: 2025-11-29 08:48:17.604 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:17 np0005539550 nova_compute[257631]: 2025-11-29 08:48:17.604 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:17 np0005539550 nova_compute[257631]: 2025-11-29 08:48:17.605 257641 DEBUG nova.network.neutron [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:17 np0005539550 nova_compute[257631]: 2025-11-29 08:48:17.853 257641 DEBUG nova.compute.manager [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:17 np0005539550 nova_compute[257631]: 2025-11-29 08:48:17.854 257641 DEBUG nova.compute.manager [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing instance network info cache due to event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:17 np0005539550 nova_compute[257631]: 2025-11-29 08:48:17.854 257641 DEBUG oslo_concurrency.lockutils [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:18 np0005539550 nova_compute[257631]: 2025-11-29 08:48:18.624 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 KiB/s wr, 136 op/s
Nov 29 03:48:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:18.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:18.978 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:18.978 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:18.978 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
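[editor's note] The Acquiring/acquired/released triple from the metadata agent (and the matching triples in the nova_compute lines) is oslo.concurrency's standard debug logging around a synchronized section; "held 0.000s" means the critical section itself was nearly instant. The pattern behind those three lines, sketched with lockutils' real decorator (the function body is a placeholder, not Neutron's code):

    # The "Acquiring lock ... / Lock ... acquired / Lock ... released" triples
    # are emitted by oslo_concurrency.lockutils around sections like this one.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # critical section: one caller at a time within this process
        pass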
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.005 257641 DEBUG nova.network.neutron [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.029 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
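[editor's note] The cache update above stores the full network_info blob for the port: one fixed IP, 10.100.0.6 on 10.100.0.0/28, with floating IP 192.168.122.227 attached, bound by the ovn driver on br-int with MTU 1442, and still "active": false until the VIF is plugged. A small parsing sketch over a trimmed copy of that JSON:

    # Extract fixed and floating addresses from a (trimmed) copy of the
    # network_info JSON in the cache-update line above. Parsing sketch only,
    # not a Nova API.
    import json

    blob = '''[{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.6",
        "floating_ips": [{"address": "192.168.122.227"}]}]}]}}]'''

    for vif in json.loads(blob):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("  floating:", fip["address"])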
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.032 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.033 257641 INFO nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Creating image(s)#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.074 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.079 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.082 257641 DEBUG oslo_concurrency.lockutils [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.082 257641 DEBUG nova.network.neutron [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.146 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.187 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.193 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "fe384358f2437fb41f9f311580b078fc43e959c5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.194 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "fe384358f2437fb41f9f311580b078fc43e959c5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:19.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:19 np0005539550 podman[392979]: 2025-11-29 08:48:19.375639524 +0000 UTC m=+0.106437725 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
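[editor's note] The podman event above is a periodic health check of the ovn_controller container: per its config_data, the check bind-mounts /var/lib/openstack/healthchecks/ovn_controller into the container and executes /openstack/healthcheck, and this run reported healthy with a failing streak of 0. The same check can be triggered by hand (a sketch using podman's healthcheck subcommand):

    # Run the container's configured health check on demand; exit code 0
    # means healthy. Sketch only; container name taken from the event above.
    import subprocess

    res = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True,
    )
    print("healthy" if res.returncode == 0 else "unhealthy: " + res.stdout)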
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.495 257641 DEBUG nova.virt.libvirt.imagebackend [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image locations are: [{'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.556 257641 DEBUG nova.virt.libvirt.imagebackend [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Selected location: {'url': 'rbd://b66774a7-56d9-5535-bd8c-681234404870/images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.556 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] cloning images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07@snap to None/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.667 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "fe384358f2437fb41f9f311580b078fc43e959c5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.814 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.886 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] flattening vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Nov 29 03:48:19 np0005539550 nova_compute[257631]: 2025-11-29 08:48:19.930 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.385 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Image rbd:vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
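[editor's note] Taken together, the rbd_utils lines show the unshelve image path: the shelved Glance snapshot images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07@snap is cloned to vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk, then flattened so the new disk stops depending on the parent snapshot (letting nova delete the shelved image afterwards). A rough CLI equivalent (a sketch; nova actually drives librbd through its Python bindings, not the CLI):

    # Clone-then-flatten, as nova.storage.rbd_utils does above.
    # Pool/image names and the client id are taken from the log lines.
    import subprocess

    SRC = "images/e21e97a9-475e-4ed4-bd3f-ad8000b59f07@snap"
    DST = "vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk"
    AUTH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    subprocess.run(["rbd", "clone", SRC, DST] + AUTH, check=True)
    # flatten copies the parent data in, detaching the clone from the snapshot
    subprocess.run(["rbd", "flatten", DST] + AUTH, check=True)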
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.386 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.386 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Ensure instance console log exists: /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.386 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.387 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.387 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.389 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Start _get_guest_xml network_info=[{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:47:58Z,direct_url=<?>,disk_format='raw',id=e21e97a9-475e-4ed4-bd3f-ad8000b59f07,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1254459075-shelved',owner='2e636ab14fe94059b82b9cbcf8831d87',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:48:06Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.392 257641 WARNING nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.396 257641 DEBUG nova.virt.libvirt.host [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.397 257641 DEBUG nova.virt.libvirt.host [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.178915410385259e-06 of space, bias 1.0, pg target 0.0024536746231155777 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003150154708728829 of space, bias 1.0, pg target 0.9450464126186487 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0040716458449655685 of space, bias 1.0, pg target 1.2214937534896706 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
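[editor's note] The _maybe_adjust pass above computes, for each pool, usage_ratio × bias × a cluster-wide PG budget, then quantizes to a power of two and leaves pg_num alone when the deviation from the current value is small. The budget works out to about 300 here, consistent with the default mon_target_pg_per_osd=100 across 3 OSDs; that factor is an inference from the numbers, not something the log states, and a couple of pools (the rgw ones) differ by a fraction of a percent. A quick check in Python:

    # Reproduce the pg_autoscaler arithmetic from the log lines above.
    # The factor 300 is inferred (mon_target_pg_per_osd=100 * 3 OSDs).
    pools = {
        ".mgr":    (2.0538165363856318e-05, 1.0),   # logged target 0.006161...
        "volumes": (0.003150154708728829,  1.0),    # logged target 0.945046...
        "images":  (0.0040716458449655685, 1.0),    # logged target 1.221493...
    }
    for name, (ratio, bias) in pools.items():
        print(name, "pg target ~", ratio * bias * 300)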
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.399 257641 DEBUG nova.virt.libvirt.host [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.400 257641 DEBUG nova.virt.libvirt.host [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.401 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.401 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-29T08:47:58Z,direct_url=<?>,disk_format='raw',id=e21e97a9-475e-4ed4-bd3f-ad8000b59f07,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1254459075-shelved',owner='2e636ab14fe94059b82b9cbcf8831d87',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-29T08:48:06Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.401 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.402 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.402 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.402 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.402 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.402 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.402 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.402 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.403 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.403 257641 DEBUG nova.virt.hardware [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
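[editor's note] The hardware.py lines above walk nova's CPU-topology selection: with flavor and image limits and preferences all unset (logged as 0:0:0), the enumerator considers every (sockets, cores, threads) factorization of the vCPU count, which for 1 vCPU leaves only 1:1:1. A toy re-implementation of that enumeration (an illustration of the idea, not nova's actual code):

    # Enumerate (sockets, cores, threads) tuples whose product equals the
    # vCPU count, within per-dimension limits. For vcpus=1 only (1,1,1) fits,
    # matching "Possible topologies [VirtCPUTopology(cores=1,sockets=1,
    # threads=1)]" above.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        for s in range(1, min(vcpus, max_s) + 1):
            for c in range(1, min(vcpus, max_c) + 1):
                for t in range(1, min(vcpus, max_t) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]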
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.403 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.416 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.577 257641 DEBUG nova.network.neutron [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updated VIF entry in instance network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.578 257641 DEBUG nova.network.neutron [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.603 257641 DEBUG oslo_concurrency.lockutils [req-67c45255-207f-4d43-8b4a-e9c864aa017b req-15263d59-687a-4559-a7e3-955ab06c49b5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 256 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 916 KiB/s wr, 107 op/s
Nov 29 03:48:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:20.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1244804696' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.917 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
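[editor's note] The "Running cmd"/"CMD ... returned: 0 in 0.501s" pair is oslo.concurrency's processutils logging around a fork/exec: nova shells out to ceph mon dump --format=json to learn the monitor addresses it will embed in the guest's RBD disk XML, and the ceph-mon audit lines show the same command arriving as a mon_command from client.openstack. A minimal sketch of that call (processutils.execute is the real helper; the JSON field names are as ceph emits them, worth verifying on your release):

    # Same subprocess call nova issues above, via oslo.concurrency.
    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    # 'mons'/'public_addr' as produced by ceph mon dump JSON output
    hosts = [m.get('public_addr') for m in json.loads(out)['mons']]
    print(hosts)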
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.945 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:20 np0005539550 nova_compute[257631]: 2025-11-29 08:48:20.949 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:21.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921628575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.445 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.446 257641 DEBUG nova.virt.libvirt.vif [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:47:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1254459075',display_name='tempest-TestShelveInstance-server-1254459075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1254459075',id=215,image_ref='e21e97a9-475e-4ed4-bd3f-ad8000b59f07',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-892403284',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-id4k7nof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member',shelved_at='2025-11-29T08:48:07.036984',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='e21e97a9-475e-4ed4-bd3f-ad8000b59f07'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:15Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=6b7b3384-3de2-4098-b77f-3658f82eedfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.447 257641 DEBUG nova.network.os_vif_util [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.447 257641 DEBUG nova.network.os_vif_util [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
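[editor's note] The "Converting VIF"/"Converted object" pair shows nova translating its network_info dict into an os-vif VIFOpenVSwitch object before handing plugging over to the os-vif ovs plugin. A sketch of building the same object with the os_vif library; the field names come from the Converted-object line above, but treat the exact constructor usage as an assumption:

    # Build the VIFOpenVSwitch object logged above with the os_vif library.
    # Values are copied from the log; constructor details are a sketch.
    import os_vif
    from os_vif.objects import network as osv_network, vif as osv_vif

    os_vif.initialize()  # load os-vif plugins (ovs, linux_bridge, ...)
    net = osv_network.Network(id='7e328485-18b8-4dc7-b012-0dd256b9b97f',
                              bridge='br-int', mtu=1442)
    vif = osv_vif.VIFOpenVSwitch(
        id='bef873ea-4c5b-4b48-8022-9c3171bdab37',
        address='fa:16:3e:cd:1a:28',
        bridge_name='br-int',
        vif_name='tapbef873ea-4c',
        network=net)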
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.449 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.474 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <uuid>6b7b3384-3de2-4098-b77f-3658f82eedfc</uuid>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <name>instance-000000d7</name>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestShelveInstance-server-1254459075</nova:name>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:48:20</nova:creationTime>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:user uuid="14d446574294425e9bc89e596ea56dc9">tempest-TestShelveInstance-498716578-project-member</nova:user>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:project uuid="2e636ab14fe94059b82b9cbcf8831d87">tempest-TestShelveInstance-498716578</nova:project>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="e21e97a9-475e-4ed4-bd3f-ad8000b59f07"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <nova:port uuid="bef873ea-4c5b-4b48-8022-9c3171bdab37">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <entry name="serial">6b7b3384-3de2-4098-b77f-3658f82eedfc</entry>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <entry name="uuid">6b7b3384-3de2-4098-b77f-3658f82eedfc</entry>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:cd:1a:28"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <target dev="tapbef873ea-4c"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/console.log" append="off"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:48:21 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:48:21 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:48:21 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:48:21 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
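[editor's note] The XML emitted by _get_guest_xml boots the unshelved guest straight from the flattened RBD image (disk type="network" with the three mon hosts), attaches the config drive as a second RBD-backed cdrom, gives it a virtio NIC on br-int at MTU 1442, and a q35 PCIe layout padded with pcie-root-port controllers for hotplug headroom. Nova defines the domain through libvirt; a minimal equivalent with the libvirt Python bindings (domain.xml is a hypothetical file holding the XML above):

    # Define and start a guest from domain XML like the block above.
    import libvirt

    with open('domain.xml') as f:   # hypothetical file with the XML above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)       # persistent definition, as nova does
    dom.create()                    # boot the guest
    conn.close()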
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.475 257641 DEBUG nova.compute.manager [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Preparing to wait for external event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.476 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.476 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.477 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
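The three lockutils lines above (acquiring, acquired, released) are the standard oslo.concurrency pattern around Nova's per-instance event registry. A toy sketch of the same pattern, assuming only that the lock is named "<uuid>-events" as logged; this is not Nova's implementation:

    from oslo_concurrency import lockutils

    _events = {}  # toy registry: instance uuid -> {event name: marker}

    def prepare_for_instance_event(instance_uuid, event_name):
        # lockutils.lock() is the primitive behind the
        # "Acquiring lock ... / released ..." DEBUG lines above.
        with lockutils.lock(f"{instance_uuid}-events"):
            per_instance = _events.setdefault(instance_uuid, {})
            return per_instance.setdefault(event_name, object())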
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.478 257641 DEBUG nova.virt.libvirt.vif [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:47:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1254459075',display_name='tempest-TestShelveInstance-server-1254459075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1254459075',id=215,image_ref='e21e97a9-475e-4ed4-bd3f-ad8000b59f07',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-892403284',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:47:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-id4k7nof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member',shelved_at='2025-11-29T08:48:07.036984',shelved_host='compute-1.ctlplane.example.com',shelved_image_id='e21e97a9-475e-4ed4-bd3f-ad8000b59f07'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:15Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=6b7b3384-3de2-4098-b77f-3658f82eedfc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", 
"ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.479 257641 DEBUG nova.network.os_vif_util [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.480 257641 DEBUG nova.network.os_vif_util [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.480 257641 DEBUG os_vif [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
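The plug call at os_vif/__init__.py:76 is os-vif's public entry point. A sketch of that surface, with field values copied from the VIFOpenVSwitch repr above; the object graph here is deliberately partial (no network, no trunk details), so whether the ovs plugin accepts it exactly as written is an assumption, and the call needs the privileges the agent runs with:

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # loads the registered plugins (ovs among them)

    vif = vif_obj.VIFOpenVSwitch(
        id="bef873ea-4c5b-4b48-8022-9c3171bdab37",
        address="fa:16:3e:cd:1a:28",
        bridge_name="br-int",
        vif_name="tapbef873ea-4c",
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id="bef873ea-4c5b-4b48-8022-9c3171bdab37"),
    )
    instance = instance_info.InstanceInfo(
        uuid="6b7b3384-3de2-4098-b77f-3658f82eedfc",
        name="instance-000000d7",
    )
    os_vif.plug(vif, instance)  # -> "Successfully plugged vif ..." on success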
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.481 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.483 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.483 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.489 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.490 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbef873ea-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.490 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbef873ea-4c, col_values=(('external_ids', {'iface-id': 'bef873ea-4c5b-4b48-8022-9c3171bdab37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:1a:28', 'vm-uuid': '6b7b3384-3de2-4098-b77f-3658f82eedfc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
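The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) are batched into one OVSDB transaction. A sketch of the equivalent direct ovsdbapp usage; the db.sock endpoint is an assumption here, os-vif wires up its own connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        "iface-id": "bef873ea-4c5b-4b48-8022-9c3171bdab37",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:cd:1a:28",
        "vm-uuid": "6b7b3384-3de2-4098-b77f-3658f82eedfc",
    }
    # One transaction, two commands: matches txn n=1 idx=0 / idx=1 above.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port("br-int", "tapbef873ea-4c", may_exist=True))
        txn.add(ovs.db_set("Interface", "tapbef873ea-4c",
                           ("external_ids", external_ids)))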
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.492 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:21 np0005539550 NetworkManager[49039]: <info>  [1764406101.4932] manager: (tapbef873ea-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/436)
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.501 257641 INFO os_vif [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c')#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.576 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.577 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.577 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No VIF found with MAC fa:16:3e:cd:1a:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.578 257641 INFO nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Using config drive#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.614 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.636 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:21 np0005539550 nova_compute[257631]: 2025-11-29 08:48:21.688 257641 DEBUG nova.objects.instance [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'keypairs' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 294 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.8 MiB/s wr, 127 op/s
Nov 29 03:48:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:22.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:22 np0005539550 nova_compute[257631]: 2025-11-29 08:48:22.872 257641 INFO nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Creating config drive at /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config#033[00m
Nov 29 03:48:22 np0005539550 nova_compute[257631]: 2025-11-29 08:48:22.878 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpli60h7kk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.017 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpli60h7kk" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
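The config-drive build is an ordinary mkisofs run; -V config-2 sets the volume label that cloud-init probes for. The exact CMD line above, reproduced through the same oslo.concurrency helper (the staging directory /tmp/tmpli60h7kk stands for whatever metadata tree Nova populated):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r",
        "-V", "config-2",       # the label guests look for
        "/tmp/tmpli60h7kk",     # staging dir with the metadata files
    )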
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.046 257641 DEBUG nova.storage.rbd_utils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.050 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.194 257641 DEBUG oslo_concurrency.processutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config 6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.195 257641 INFO nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Deleting local config drive /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config because it was imported into RBD.#033[00m
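With RBD-backed instances the finished ISO is then pushed into the Ceph "vms" pool and the local copy dropped, which is what the two lines above record. The same pair of steps as a sketch:

    import os
    from oslo_concurrency import processutils

    local = ("/var/lib/nova/instances/"
             "6b7b3384-3de2-4098-b77f-3658f82eedfc/disk.config")
    image = "6b7b3384-3de2-4098-b77f-3658f82eedfc_disk.config"

    # Same rbd CLI call as the CMD line above: a format-2 image,
    # authenticated as the "openstack" cephx user.
    processutils.execute(
        "rbd", "import", "--pool", "vms", local, image,
        "--image-format=2", "--id", "openstack",
        "--conf", "/etc/ceph/ceph.conf")

    os.unlink(local)  # "Deleting local config drive ... imported into RBD."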
Nov 29 03:48:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:23.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:23 np0005539550 kernel: tapbef873ea-4c: entered promiscuous mode
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.2452] manager: (tapbef873ea-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Nov 29 03:48:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:23Z|01011|binding|INFO|Claiming lport bef873ea-4c5b-4b48-8022-9c3171bdab37 for this chassis.
Nov 29 03:48:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:23Z|01012|binding|INFO|bef873ea-4c5b-4b48-8022-9c3171bdab37: Claiming fa:16:3e:cd:1a:28 10.100.0.6
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.245 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.252 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.2625] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.261 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.2633] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/439)
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.266 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:1a:28 10.100.0.6'], port_security=['fa:16:3e:cd:1a:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6b7b3384-3de2-4098-b77f-3658f82eedfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'aa85fffb-3c65-4631-aec4-f04bc3fcc9b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=bef873ea-4c5b-4b48-8022-9c3171bdab37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.267 158978 INFO neutron.agent.ovn.metadata.agent [-] Port bef873ea-4c5b-4b48-8022-9c3171bdab37 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f bound to our chassis#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.268 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e328485-18b8-4dc7-b012-0dd256b9b97f#033[00m
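The metadata agent reacted because its PortBindingUpdatedEvent matched this Port_Binding update (old chassis [], new chassis pointing at this host). A sketch of such a matcher built on ovsdbapp's RowEvent; neutron's real conditions are more involved, so the match logic here is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingChassisEvent(row_event.RowEvent):
        """Fire when a Port_Binding row is claimed by our chassis."""

        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old):
            # 'old' only carries columns that changed; a claim is the
            # chassis column going from empty to this chassis.
            if not hasattr(old, "chassis"):
                return False
            return (not old.chassis and row.chassis
                    and row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print(f"port {row.logical_port} bound to our chassis")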
Nov 29 03:48:23 np0005539550 systemd-machined[216673]: New machine qemu-117-instance-000000d7.
Nov 29 03:48:23 np0005539550 systemd-udevd[393303]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.280 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c4666852-66bd-4bb4-9232-71585eccb1d3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.281 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e328485-11 in ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
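Provisioning the datapath means a veth pair: one end (tap7e328485-11) inside the ovnmeta namespace, the other (tap7e328485-10) left in the root namespace to be wired into br-int. The equivalent iproute2 plumbing, sketched without the idempotence checks the agent performs:

    import subprocess

    ns = "ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f"
    outer, inner = "tap7e328485-10", "tap7e328485-11"

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    sh("ip", "netns", "add", ns)   # the agent had already created it above
    sh("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    sh("ip", "link", "set", inner, "netns", ns)
    sh("ip", "-n", ns, "link", "set", inner, "up")
    sh("ip", "link", "set", outer, "up")
    # br-int attachment and the iface-id external_id follow, as the
    # AddPortCommand/DbSetCommand lines further down show.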
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.283 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e328485-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.284 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c542cb3f-29ea-41d2-a3dd-52bd8b0de1d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.284 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[611ebc47-247c-4fa0-bb1f-d7562599b650]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.2909] device (tapbef873ea-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.2915] device (tapbef873ea-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.295 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c6152a-0cc3-494e-be08-fd2517922681]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 systemd[1]: Started Virtual Machine qemu-117-instance-000000d7.
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.312 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d5dba512-50f1-4637-9e2b-4809bb480e86]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.344 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[930f6645-eaae-4fce-92b2-615014deefb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.3660] manager: (tap7e328485-10): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.365 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[cf596e52-61f1-4d87-9784-953e8276b5c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 systemd-udevd[393306]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.397 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[438f47b8-c7fd-4694-8830-3dcffcfff6e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.407 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[92a641ab-5a68-4e1a-9732-422413417224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.4338] device (tap7e328485-10): carrier: link connected
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.437 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[9e45e0ff-1895-48b5-91ae-9475004f4f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.453 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[15f97840-7cfd-4ac2-8b18-d3ca37e70154]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915109, 'reachable_time': 21410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393335, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.470 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6de2135c-0a70-4f35-8502-87596b582dab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:c011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 915109, 'tstamp': 915109}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393336, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.489 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[96c41d18-3ec0-435c-88ea-11d2e0564483]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 294], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915109, 'reachable_time': 21410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393337, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.521 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[60010a7c-be2b-4967-9a2a-1fe28c512cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.528 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.532 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.563 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:23Z|01013|binding|INFO|Setting lport bef873ea-4c5b-4b48-8022-9c3171bdab37 ovn-installed in OVS
Nov 29 03:48:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:23Z|01014|binding|INFO|Setting lport bef873ea-4c5b-4b48-8022-9c3171bdab37 up in Southbound
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.575 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.579 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4e2c5a25-b47c-44de-8f87-e7fd6d8c9c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.580 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.580 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.581 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e328485-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.582 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 NetworkManager[49039]: <info>  [1764406103.5827] manager: (tap7e328485-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Nov 29 03:48:23 np0005539550 kernel: tap7e328485-10: entered promiscuous mode
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.586 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e328485-10, col_values=(('external_ids', {'iface-id': '239aff46-c81c-4ca8-9f81-35eea8cc0198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:23 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:23Z|01015|binding|INFO|Releasing lport 239aff46-c81c-4ca8-9f81-35eea8cc0198 from this chassis (sb_readonly=0)
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.601 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.602 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.603 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bcdae9-2ea5-45f5-ad27-da132af151b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.603 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:48:23 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:23.604 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'env', 'PROCESS_TAG=haproxy-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e328485-18b8-4dc7-b012-0dd256b9b97f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
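The rootwrap command boils down to: run haproxy inside the ovnmeta namespace against the config rendered above, tagged with PROCESS_TAG so the agent can find it again. Stripped of the privilege separation (this sketch assumes it already runs as root):

    import subprocess

    net = "7e328485-18b8-4dc7-b012-0dd256b9b97f"
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net}",
         "env", f"PROCESS_TAG=haproxy-{net}",
         "haproxy", "-f", f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"],
        check=True)  # haproxy daemonizes itself ("daemon" in the config)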
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.800 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406103.7999206, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.807 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Started (Lifecycle Event)#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.836 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.842 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406103.8000815, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.842 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.852 257641 DEBUG nova.compute.manager [req-4f4cef9b-b40f-40ba-8a4e-6f85d45f6885 req-64ecee42-a3d6-481e-96d5-b94a8d7b8320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.853 257641 DEBUG oslo_concurrency.lockutils [req-4f4cef9b-b40f-40ba-8a4e-6f85d45f6885 req-64ecee42-a3d6-481e-96d5-b94a8d7b8320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.853 257641 DEBUG oslo_concurrency.lockutils [req-4f4cef9b-b40f-40ba-8a4e-6f85d45f6885 req-64ecee42-a3d6-481e-96d5-b94a8d7b8320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.854 257641 DEBUG oslo_concurrency.lockutils [req-4f4cef9b-b40f-40ba-8a4e-6f85d45f6885 req-64ecee42-a3d6-481e-96d5-b94a8d7b8320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.854 257641 DEBUG nova.compute.manager [req-4f4cef9b-b40f-40ba-8a4e-6f85d45f6885 req-64ecee42-a3d6-481e-96d5-b94a8d7b8320 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Processing event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.855 257641 DEBUG nova.compute.manager [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.859 257641 DEBUG nova.virt.libvirt.driver [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.862 257641 INFO nova.virt.libvirt.driver [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance spawned successfully.#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.869 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.873 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406103.859155, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.874 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.897 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.904 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:48:23 np0005539550 nova_compute[257631]: 2025-11-29 08:48:23.924 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
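The sync decision logged here weighs three inputs: DB power_state 4 (SHUTDOWN, left over from the shelved record), hypervisor power_state 1 (RUNNING), and task_state 'spawning'. A toy version of that check, not Nova's code:

    RUNNING, SHUTDOWN = 1, 4  # nova.compute.power_state values seen above

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            # A pending task (here 'spawning') owns the instance; syncing
            # now would race with it, so the handler skips.
            return f"skip: pending task {task_state}"
        if db_state != vm_state:
            return f"update DB {db_state} -> {vm_state}"
        return "in sync"

    print(sync_power_state(SHUTDOWN, RUNNING, "spawning"))  # skip: ...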
Nov 29 03:48:23 np0005539550 podman[393412]: 2025-11-29 08:48:23.958824142 +0000 UTC m=+0.051429452 container create 1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:48:24 np0005539550 systemd[1]: Started libpod-conmon-1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4.scope.
Nov 29 03:48:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:24 np0005539550 podman[393412]: 2025-11-29 08:48:23.931986793 +0000 UTC m=+0.024592133 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:48:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7966c4af1b21e08c84362c009f7cc30dcadac759dab33dfce3b98135dfd48db5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:24 np0005539550 podman[393412]: 2025-11-29 08:48:24.045322961 +0000 UTC m=+0.137928281 container init 1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 03:48:24 np0005539550 podman[393412]: 2025-11-29 08:48:24.052042481 +0000 UTC m=+0.144647791 container start 1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:24 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [NOTICE]   (393432) : New worker (393434) forked
Nov 29 03:48:24 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [NOTICE]   (393432) : Loading success.
Nov 29 03:48:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Nov 29 03:48:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Nov 29 03:48:24 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Nov 29 03:48:24 np0005539550 nova_compute[257631]: 2025-11-29 08:48:24.639 257641 DEBUG nova.compute.manager [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:48:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 331 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 7.2 MiB/s wr, 210 op/s
Nov 29 03:48:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:24.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:24 np0005539550 nova_compute[257631]: 2025-11-29 08:48:24.732 257641 DEBUG oslo_concurrency.lockutils [None req-4e47d2de-9f7a-4aee-83b2-2059458cd26f 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:24 np0005539550 nova_compute[257631]: 2025-11-29 08:48:24.932 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:25 np0005539550 nova_compute[257631]: 2025-11-29 08:48:25.979 257641 DEBUG nova.compute.manager [req-fbd9af6b-1279-40b1-a60a-8dbe89437e6d req-813953a7-1abd-4feb-9cba-3f2b95bc4ada 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:25 np0005539550 nova_compute[257631]: 2025-11-29 08:48:25.980 257641 DEBUG oslo_concurrency.lockutils [req-fbd9af6b-1279-40b1-a60a-8dbe89437e6d req-813953a7-1abd-4feb-9cba-3f2b95bc4ada 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:25 np0005539550 nova_compute[257631]: 2025-11-29 08:48:25.981 257641 DEBUG oslo_concurrency.lockutils [req-fbd9af6b-1279-40b1-a60a-8dbe89437e6d req-813953a7-1abd-4feb-9cba-3f2b95bc4ada 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:25 np0005539550 nova_compute[257631]: 2025-11-29 08:48:25.981 257641 DEBUG oslo_concurrency.lockutils [req-fbd9af6b-1279-40b1-a60a-8dbe89437e6d req-813953a7-1abd-4feb-9cba-3f2b95bc4ada 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:25 np0005539550 nova_compute[257631]: 2025-11-29 08:48:25.982 257641 DEBUG nova.compute.manager [req-fbd9af6b-1279-40b1-a60a-8dbe89437e6d req-813953a7-1abd-4feb-9cba-3f2b95bc4ada 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] No waiting events found dispatching network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:25 np0005539550 nova_compute[257631]: 2025-11-29 08:48:25.982 257641 WARNING nova.compute.manager [req-fbd9af6b-1279-40b1-a60a-8dbe89437e6d req-813953a7-1abd-4feb-9cba-3f2b95bc4ada 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received unexpected event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:48:26 np0005539550 nova_compute[257631]: 2025-11-29 08:48:26.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.2 MiB/s wr, 254 op/s
Nov 29 03:48:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:27.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:27 np0005539550 nova_compute[257631]: 2025-11-29 08:48:27.324 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 279 op/s
Nov 29 03:48:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:28.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:29.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:29 np0005539550 nova_compute[257631]: 2025-11-29 08:48:29.936 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Nov 29 03:48:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.2 MiB/s rd, 6.4 MiB/s wr, 258 op/s
Nov 29 03:48:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Nov 29 03:48:30 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Nov 29 03:48:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:31.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:31 np0005539550 nova_compute[257631]: 2025-11-29 08:48:31.446 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:31 np0005539550 nova_compute[257631]: 2025-11-29 08:48:31.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 225 op/s
Nov 29 03:48:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:33.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 76 op/s
Nov 29 03:48:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:34.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:34 np0005539550 nova_compute[257631]: 2025-11-29 08:48:34.978 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:35.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:36 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:36Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:1a:28 10.100.0.6
Nov 29 03:48:36 np0005539550 nova_compute[257631]: 2025-11-29 08:48:36.531 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 20 KiB/s wr, 47 op/s
Nov 29 03:48:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:48:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:36.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:48:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:37.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:37 np0005539550 podman[393500]: 2025-11-29 08:48:37.355907529 +0000 UTC m=+0.083229737 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:48:37 np0005539550 podman[393501]: 2025-11-29 08:48:37.37608686 +0000 UTC m=+0.103108090 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 03:48:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 474 KiB/s rd, 6.4 KiB/s wr, 24 op/s
Nov 29 03:48:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:38.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:48:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640786827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:48:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:48:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640786827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:48:39 np0005539550 nova_compute[257631]: 2025-11-29 08:48:39.154 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:39.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:39 np0005539550 nova_compute[257631]: 2025-11-29 08:48:39.981 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 520 KiB/s rd, 20 KiB/s wr, 46 op/s
Nov 29 03:48:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:40.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:41 np0005539550 nova_compute[257631]: 2025-11-29 08:48:41.567 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:42.146 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.146 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:42 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:42.147 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:48:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 539 KiB/s rd, 17 KiB/s wr, 44 op/s
Nov 29 03:48:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:42.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.859 257641 DEBUG nova.compute.manager [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.860 257641 DEBUG nova.compute.manager [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing instance network info cache due to event network-changed-bef873ea-4c5b-4b48-8022-9c3171bdab37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.860 257641 DEBUG oslo_concurrency.lockutils [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.860 257641 DEBUG oslo_concurrency.lockutils [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.860 257641 DEBUG nova.network.neutron [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Refreshing network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.914 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.915 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.915 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.915 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.915 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.916 257641 INFO nova.compute.manager [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Terminating instance#033[00m
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.917 257641 DEBUG nova.compute.manager [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:48:42 np0005539550 kernel: tapbef873ea-4c (unregistering): left promiscuous mode
Nov 29 03:48:42 np0005539550 NetworkManager[49039]: <info>  [1764406122.9876] device (tapbef873ea-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:48:42 np0005539550 nova_compute[257631]: 2025-11-29 08:48:42.995 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:42Z|01016|binding|INFO|Releasing lport bef873ea-4c5b-4b48-8022-9c3171bdab37 from this chassis (sb_readonly=0)
Nov 29 03:48:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:42Z|01017|binding|INFO|Setting lport bef873ea-4c5b-4b48-8022-9c3171bdab37 down in Southbound
Nov 29 03:48:42 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:42Z|01018|binding|INFO|Removing iface tapbef873ea-4c ovn-installed in OVS
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.003 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:1a:28 10.100.0.6'], port_security=['fa:16:3e:cd:1a:28 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6b7b3384-3de2-4098-b77f-3658f82eedfc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'aa85fffb-3c65-4631-aec4-f04bc3fcc9b5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=bef873ea-4c5b-4b48-8022-9c3171bdab37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.004 158978 INFO neutron.agent.ovn.metadata.agent [-] Port bef873ea-4c5b-4b48-8022-9c3171bdab37 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f unbound from our chassis#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.006 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e328485-18b8-4dc7-b012-0dd256b9b97f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.007 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bea436b7-8337-41a5-a06e-882bb6f79db8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.008 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace which is not needed anymore#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.019 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d000000d7.scope: Deactivated successfully.
Nov 29 03:48:43 np0005539550 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d000000d7.scope: Consumed 13.817s CPU time.
Nov 29 03:48:43 np0005539550 systemd-machined[216673]: Machine qemu-117-instance-000000d7 terminated.
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.138 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [NOTICE]   (393432) : haproxy version is 2.8.14-c23fe91
Nov 29 03:48:43 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [NOTICE]   (393432) : path to executable is /usr/sbin/haproxy
Nov 29 03:48:43 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [WARNING]  (393432) : Exiting Master process...
Nov 29 03:48:43 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [WARNING]  (393432) : Exiting Master process...
Nov 29 03:48:43 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [ALERT]    (393432) : Current worker (393434) exited with code 143 (Terminated)
Nov 29 03:48:43 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[393428]: [WARNING]  (393432) : All workers exited. Exiting... (0)
Nov 29 03:48:43 np0005539550 systemd[1]: libpod-1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4.scope: Deactivated successfully.
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.144 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 podman[393565]: 2025-11-29 08:48:43.150487782 +0000 UTC m=+0.044321573 container died 1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.156 257641 INFO nova.virt.libvirt.driver [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Instance destroyed successfully.#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.157 257641 DEBUG nova.objects.instance [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'resources' on Instance uuid 6b7b3384-3de2-4098-b77f-3658f82eedfc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.172 257641 DEBUG nova.virt.libvirt.vif [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:47:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1254459075',display_name='tempest-TestShelveInstance-server-1254459075',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1254459075',id=215,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA6ZJ0e8HULKzf15gAhzf0Pozq+BNpsY6JGJkWj4En/gstCBUIDEBBlhlRRy2j+ObTo/Olxsq8yNRaWE1A2BtIbVFq5FCJEzTcF45GwsvPUIsE1i0kjUMi8fiaEVQJMjJA==',key_name='tempest-TestShelveInstance-892403284',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:48:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-id4k7nof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:48:24Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=6b7b3384-3de2-4098-b77f-3658f82eedfc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.173 257641 DEBUG nova.network.os_vif_util [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.174 257641 DEBUG nova.network.os_vif_util [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.174 257641 DEBUG os_vif [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.176 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.177 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbef873ea-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.178 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.180 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4-userdata-shm.mount: Deactivated successfully.
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.183 257641 INFO os_vif [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:1a:28,bridge_name='br-int',has_traffic_filtering=True,id=bef873ea-4c5b-4b48-8022-9c3171bdab37,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbef873ea-4c')#033[00m
Nov 29 03:48:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7966c4af1b21e08c84362c009f7cc30dcadac759dab33dfce3b98135dfd48db5-merged.mount: Deactivated successfully.
Nov 29 03:48:43 np0005539550 podman[393565]: 2025-11-29 08:48:43.191681994 +0000 UTC m=+0.085515775 container cleanup 1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:48:43 np0005539550 systemd[1]: libpod-conmon-1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4.scope: Deactivated successfully.
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.212 257641 DEBUG nova.compute.manager [req-a6d916ab-7aa4-4c48-aef9-9955dfd0e64e req-cd2df03c-270e-420e-84f5-6fd0739ba6a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-unplugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.214 257641 DEBUG oslo_concurrency.lockutils [req-a6d916ab-7aa4-4c48-aef9-9955dfd0e64e req-cd2df03c-270e-420e-84f5-6fd0739ba6a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.214 257641 DEBUG oslo_concurrency.lockutils [req-a6d916ab-7aa4-4c48-aef9-9955dfd0e64e req-cd2df03c-270e-420e-84f5-6fd0739ba6a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.214 257641 DEBUG oslo_concurrency.lockutils [req-a6d916ab-7aa4-4c48-aef9-9955dfd0e64e req-cd2df03c-270e-420e-84f5-6fd0739ba6a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.215 257641 DEBUG nova.compute.manager [req-a6d916ab-7aa4-4c48-aef9-9955dfd0e64e req-cd2df03c-270e-420e-84f5-6fd0739ba6a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] No waiting events found dispatching network-vif-unplugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.215 257641 DEBUG nova.compute.manager [req-a6d916ab-7aa4-4c48-aef9-9955dfd0e64e req-cd2df03c-270e-420e-84f5-6fd0739ba6a8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-unplugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:48:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:43.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:43 np0005539550 podman[393619]: 2025-11-29 08:48:43.258537626 +0000 UTC m=+0.042786614 container remove 1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.264 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[12975dbe-b2a8-4839-bbba-a1fa4ced4d39]: (4, ('Sat Nov 29 08:48:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4)\n1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4\nSat Nov 29 08:48:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4)\n1a18a29529297358e8cd513d0e7850565d6249df7e3d59a5f58770155595fdc4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.267 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[19abc9cb-ac6a-4dfe-bdde-2d00d38a0447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.268 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:43 np0005539550 kernel: tap7e328485-10: left promiscuous mode
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.270 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.276 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3471deaa-86d5-42df-8a38-4431b92de5dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.287 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.292 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2c821272-7710-461f-ae8e-cbd87c450f22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.293 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[2631a2b1-cf8b-41e3-8ea0-152f863de310]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.310 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[257a934b-c013-478c-aed7-7dffeeb89348]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 915100, 'reachable_time': 19352, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393637, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.313 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:48:43 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:43.314 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[dde69839-8e40-456e-8383-8dab92cb61f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:43 np0005539550 systemd[1]: run-netns-ovnmeta\x2d7e328485\x2d18b8\x2d4dc7\x2db012\x2d0dd256b9b97f.mount: Deactivated successfully.
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.590 257641 INFO nova.virt.libvirt.driver [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Deleting instance files /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc_del#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.591 257641 INFO nova.virt.libvirt.driver [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Deletion of /var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc_del complete#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.646 257641 INFO nova.compute.manager [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
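[editor's note] The `_del` suffix in those paths reflects the driver's rename-then-delete pattern: the instance directory is moved aside first, so an interrupted cleanup never leaves a half-deleted tree under the live name. A simplified sketch of that pattern (paths from the log, error handling trimmed):

```python
# Sketch: rename the instance directory to "<uuid>_del", then remove it.
import os
import shutil

inst_base = '/var/lib/nova/instances/6b7b3384-3de2-4098-b77f-3658f82eedfc'
target = inst_base + '_del'

if os.path.exists(inst_base):
    os.rename(inst_base, target)          # atomic move out of the live name
shutil.rmtree(target, ignore_errors=True)
```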
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.646 257641 DEBUG oslo.service.loopingcall [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.647 257641 DEBUG nova.compute.manager [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:48:43 np0005539550 nova_compute[257631]: 2025-11-29 08:48:43.647 257641 DEBUG nova.network.neutron [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
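[editor's note] The loopingcall line shows how the retry is structured: network deallocation runs inside an oslo.service looping call, and the manager blocks on `.wait()` until the wrapped function signals completion. Nova actually uses a backoff variant; the fixed-interval sketch below shows the same wait-for-done shape, with `deallocate_for_instance` as a hypothetical stand-in for the neutron call in the log:

```python
from oslo_service import loopingcall

attempts = {'n': 0}

def deallocate_for_instance():
    """Hypothetical stand-in that fails transiently on the first call."""
    if attempts['n'] < 2:
        raise RuntimeError('transient neutron error')

def _deallocate_with_retries():
    attempts['n'] += 1
    try:
        deallocate_for_instance()
    except Exception:
        if attempts['n'] < 3:
            return                       # swallow and let the loop retry
        raise                            # give up: wait() re-raises this
    raise loopingcall.LoopingCallDone()  # success: stop looping

timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
timer.start(interval=2).wait()           # blocks, like the DEBUG line above
```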
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.104 257641 DEBUG nova.network.neutron [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updated VIF entry in instance network info cache for port bef873ea-4c5b-4b48-8022-9c3171bdab37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.104 257641 DEBUG nova.network.neutron [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [{"id": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "address": "fa:16:3e:cd:1a:28", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbef873ea-4c", "ovs_interfaceid": "bef873ea-4c5b-4b48-8022-9c3171bdab37", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.121 257641 DEBUG oslo_concurrency.lockutils [req-24380e0c-e4f8-482c-9988-a75c113a2841 req-f7fbe6a4-504f-4bd4-8ff5-33d4e850a2b6 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-6b7b3384-3de2-4098-b77f-3658f82eedfc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.356 257641 DEBUG nova.network.neutron [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.372 257641 INFO nova.compute.manager [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Took 0.73 seconds to deallocate network for instance.#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.420 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.421 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
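[editor's note] The Acquiring/"acquired" pair here (and the later "released" line with its waited/held timings) is the logging that oslo.concurrency's `synchronized` decorator emits around a critical section. A minimal sketch of guarding a resource-tracker-style update the same way, mirroring nova's `synchronized_with_prefix('nova-')` convention:

```python
from oslo_concurrency import lockutils

# nova defines: synchronized = lockutils.synchronized_with_prefix('nova-')
synchronized = lockutils.synchronized_with_prefix('nova-')

@synchronized('compute_resources')
def update_usage():
    # Mutate in-memory resource accounting here; the lock serializes this
    # with instance_claim and friends, producing the waited/held log lines.
    pass

update_usage()
```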
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.471 257641 DEBUG oslo_concurrency.processutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 241 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 759 KiB/s rd, 29 KiB/s wr, 71 op/s
Nov 29 03:48:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:44.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Nov 29 03:48:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Nov 29 03:48:44 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Nov 29 03:48:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150676638' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.965 257641 DEBUG oslo_concurrency.processutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
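[editor's note] That half-second `ceph df` round trip is a plain subprocess probe through the same processutils module that logs the CMD lines. A sketch of issuing it and reading cluster free space from the JSON, assuming the ceph CLI and the client.openstack keyring are available on the host (flags taken from the log line):

```python
import json

from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

stats = json.loads(out)
# Cluster-wide availability; per-pool detail lives in the top-level
# 'pools' list of the same document.
print(stats['stats']['total_avail_bytes'])
```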
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.975 257641 DEBUG nova.compute.manager [req-b0db23df-c4c8-4a74-b45b-1cb5b20fbd26 req-3dbd0b55-b47a-40bc-bd32-daba3a59e866 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-deleted-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.981 257641 DEBUG nova.compute.provider_tree [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:44 np0005539550 nova_compute[257631]: 2025-11-29 08:48:44.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.005 257641 DEBUG nova.scheduler.client.report [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
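[editor's note] The inventory payload above is exactly what placement compares against: the schedulable capacity of each resource class is (total - reserved) * allocation_ratio. Worked against the logged numbers:

```python
# Effective capacity implied by the inventory in the log line above.
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f'{rc}: {capacity:g}')
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
```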
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.029 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.066 257641 INFO nova.scheduler.client.report [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Deleted allocations for instance 6b7b3384-3de2-4098-b77f-3658f82eedfc#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.153 257641 DEBUG oslo_concurrency.lockutils [None req-87d17dab-5c03-4d2f-8b22-2fd8ab4cb159 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.239s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:45.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.278 257641 DEBUG nova.compute.manager [req-8ffe040c-f6b3-4426-9ee0-e866ebca88e6 req-45dd399e-84a2-47d0-ba70-de2b83d31aec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.279 257641 DEBUG oslo_concurrency.lockutils [req-8ffe040c-f6b3-4426-9ee0-e866ebca88e6 req-45dd399e-84a2-47d0-ba70-de2b83d31aec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.280 257641 DEBUG oslo_concurrency.lockutils [req-8ffe040c-f6b3-4426-9ee0-e866ebca88e6 req-45dd399e-84a2-47d0-ba70-de2b83d31aec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.280 257641 DEBUG oslo_concurrency.lockutils [req-8ffe040c-f6b3-4426-9ee0-e866ebca88e6 req-45dd399e-84a2-47d0-ba70-de2b83d31aec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "6b7b3384-3de2-4098-b77f-3658f82eedfc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.281 257641 DEBUG nova.compute.manager [req-8ffe040c-f6b3-4426-9ee0-e866ebca88e6 req-45dd399e-84a2-47d0-ba70-de2b83d31aec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] No waiting events found dispatching network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:48:45 np0005539550 nova_compute[257631]: 2025-11-29 08:48:45.281 257641 WARNING nova.compute.manager [req-8ffe040c-f6b3-4426-9ee0-e866ebca88e6 req-45dd399e-84a2-47d0-ba70-de2b83d31aec 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Received unexpected event network-vif-plugged-bef873ea-4c5b-4b48-8022-9c3171bdab37 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:48:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 212 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 700 KiB/s rd, 30 KiB/s wr, 92 op/s
Nov 29 03:48:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:46.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:46 np0005539550 nova_compute[257631]: 2025-11-29 08:48:46.951 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:46 np0005539550 nova_compute[257631]: 2025-11-29 08:48:46.952 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:46 np0005539550 nova_compute[257631]: 2025-11-29 08:48:46.970 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.026 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.028 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.037 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.038 257641 INFO nova.compute.claims [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.152 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:47.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841033632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.630 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.638 257641 DEBUG nova.compute.provider_tree [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.660 257641 DEBUG nova.scheduler.client.report [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.689 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.690 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.762 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.763 257641 DEBUG nova.network.neutron [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.787 257641 INFO nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.808 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.910 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.912 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.913 257641 INFO nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Creating image(s)#033[00m
Nov 29 03:48:47 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.951 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:47.999 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.045 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.051 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.089 257641 DEBUG nova.policy [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '01dac15e9e59419d98c66eb08a434612', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56b9fd3ff0794089a4e086cd4ca0c36d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.124 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
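[editor's note] The prlimit wrapper in that command line is how nova keeps `qemu-img info` from being abused by a hostile image: the probe runs under a 1 GiB address-space cap and a 30 s CPU cap. A sketch of the same guarded call through processutils, with the path and limits from the log:

```python
import json

from oslo_concurrency import processutils

base = ('/var/lib/nova/instances/_base/'
        'f62ef5f82502d01c82174408aec7f3ac942e2488')

# processutils builds the "python3 -m oslo_concurrency.prlimit --as=... \
# --cpu=30 -- env LC_ALL=C LANG=C qemu-img info ..." wrapper seen above.
out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info', base, '--force-share', '--output=json',
    prlimit=processutils.ProcessLimits(address_space=1 * 1024 ** 3,
                                       cpu_time=30))

info = json.loads(out)
print(info['format'], info['virtual-size'])
```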
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.125 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "f62ef5f82502d01c82174408aec7f3ac942e2488" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.126 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.127 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "f62ef5f82502d01c82174408aec7f3ac942e2488" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:48 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:48.150 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.161 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.166 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8dae1bd9-84b7-4bad-95d7-218720568a12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.201 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:48 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev aed7435b-315f-49bc-81bf-dbe735469450 does not exist
Nov 29 03:48:48 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 56959b90-922f-4461-a3fa-5f537d3d76ec does not exist
Nov 29 03:48:48 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4de9af5d-6ad3-4be4-af0a-c922acb8bbde does not exist
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:48:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.485 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 8dae1bd9-84b7-4bad-95d7-218720568a12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.557 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] resizing rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
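[editor's note] The import above goes through the rbd CLI, but the follow-up resize to 1073741824 bytes (the flavor's 1 GiB root disk) is done with the python rbd bindings. A rough sketch with the pool, image name, and credentials from the log, assuming python-rados and python-rbd are installed:

```python
import rados
import rbd

NEW_SIZE = 1073741824  # 1 GiB, per the resize log line above

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')
    try:
        name = '8dae1bd9-84b7-4bad-95d7-218720568a12_disk'
        with rbd.Image(ioctx, name) as image:
            if image.size() < NEW_SIZE:
                image.resize(NEW_SIZE)  # grow only; a root disk is never shrunk
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```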
Nov 29 03:48:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 657 KiB/s rd, 30 KiB/s wr, 91 op/s
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.691 257641 DEBUG nova.objects.instance [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lazy-loading 'migration_context' on Instance uuid 8dae1bd9-84b7-4bad-95d7-218720568a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.712 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.713 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Ensure instance console log exists: /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.714 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.714 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.715 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:48.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:48 np0005539550 podman[394241]: 2025-11-29 08:48:48.836765465 +0000 UTC m=+0.038091565 container create b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:48:48 np0005539550 systemd[1]: Started libpod-conmon-b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f.scope.
Nov 29 03:48:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:48 np0005539550 podman[394241]: 2025-11-29 08:48:48.82038081 +0000 UTC m=+0.021706910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:48 np0005539550 podman[394241]: 2025-11-29 08:48:48.92589221 +0000 UTC m=+0.127218310 container init b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:48:48 np0005539550 podman[394241]: 2025-11-29 08:48:48.933269797 +0000 UTC m=+0.134595907 container start b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:48 np0005539550 podman[394241]: 2025-11-29 08:48:48.93734902 +0000 UTC m=+0.138675090 container attach b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:48:48 np0005539550 nova_compute[257631]: 2025-11-29 08:48:48.938 257641 DEBUG nova.network.neutron [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Successfully created port: 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:48:48 np0005539550 adoring_nobel[394258]: 167 167
Nov 29 03:48:48 np0005539550 systemd[1]: libpod-b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f.scope: Deactivated successfully.
Nov 29 03:48:48 np0005539550 podman[394241]: 2025-11-29 08:48:48.943210098 +0000 UTC m=+0.144536178 container died b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:48:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b1763254f22eb6079161996b152b58fb387596d089a0a591bbcd4961f01b5ae8-merged.mount: Deactivated successfully.
Nov 29 03:48:48 np0005539550 podman[394241]: 2025-11-29 08:48:48.985436107 +0000 UTC m=+0.186762187 container remove b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
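[editor's note] That create, init, start, attach, died, remove sequence is the full lifetime of a short-lived `podman run --rm` helper that cephadm launches from the ceph image. An illustrative sketch of the same shape; the image digest is from the log, but the mount and the `ceph-volume inventory` command are placeholders, not the exact cephadm invocation:

```python
import subprocess

image = ('quay.io/ceph/ceph@sha256:'
         '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

# --rm makes podman emit exactly the lifecycle events seen above and then
# delete the container as soon as the command exits.
subprocess.run(
    ['podman', 'run', '--rm',
     '-v', '/etc/ceph:/etc/ceph:z',        # placeholder mount
     image, 'ceph-volume', 'inventory'],   # placeholder command
    check=True)
```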
Nov 29 03:48:49 np0005539550 systemd[1]: libpod-conmon-b6fc28401bc6c651d5f1f65cee4c116574184919d454f107e5def737aa10d43f.scope: Deactivated successfully.
Nov 29 03:48:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:48:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:49 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:48:49 np0005539550 podman[394282]: 2025-11-29 08:48:49.224341043 +0000 UTC m=+0.064072223 container create 4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:49 np0005539550 systemd[1]: Started libpod-conmon-4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9.scope.
Nov 29 03:48:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:49.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9aead9407d88f4dfad228a50bb1b0ee314f7da169efc739473883db367dce2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9aead9407d88f4dfad228a50bb1b0ee314f7da169efc739473883db367dce2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9aead9407d88f4dfad228a50bb1b0ee314f7da169efc739473883db367dce2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9aead9407d88f4dfad228a50bb1b0ee314f7da169efc739473883db367dce2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9aead9407d88f4dfad228a50bb1b0ee314f7da169efc739473883db367dce2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:49 np0005539550 podman[394282]: 2025-11-29 08:48:49.204417278 +0000 UTC m=+0.044148458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:49 np0005539550 podman[394282]: 2025-11-29 08:48:49.30723062 +0000 UTC m=+0.146961810 container init 4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:48:49 np0005539550 podman[394282]: 2025-11-29 08:48:49.319443509 +0000 UTC m=+0.159174659 container start 4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:48:49 np0005539550 podman[394282]: 2025-11-29 08:48:49.322786864 +0000 UTC m=+0.162518024 container attach 4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.816 257641 DEBUG nova.network.neutron [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Successfully updated port: 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.836 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.837 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquired lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.837 257641 DEBUG nova.network.neutron [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.934 257641 DEBUG nova.compute.manager [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-changed-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.935 257641 DEBUG nova.compute.manager [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Refreshing instance network info cache due to event network-changed-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.935 257641 DEBUG oslo_concurrency.lockutils [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:48:49 np0005539550 nova_compute[257631]: 2025-11-29 08:48:49.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.015 257641 DEBUG nova.network.neutron [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:48:50 np0005539550 wonderful_perlman[394298]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:48:50 np0005539550 wonderful_perlman[394298]: --> relative data size: 1.0
Nov 29 03:48:50 np0005539550 wonderful_perlman[394298]: --> All data devices are unavailable
Nov 29 03:48:50 np0005539550 systemd[1]: libpod-4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9.scope: Deactivated successfully.
Nov 29 03:48:50 np0005539550 podman[394282]: 2025-11-29 08:48:50.149926819 +0000 UTC m=+0.989657979 container died 4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:48:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d9aead9407d88f4dfad228a50bb1b0ee314f7da169efc739473883db367dce2c-merged.mount: Deactivated successfully.
Nov 29 03:48:50 np0005539550 podman[394282]: 2025-11-29 08:48:50.227749676 +0000 UTC m=+1.067480826 container remove 4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:48:50 np0005539550 systemd[1]: libpod-conmon-4966aae816ced99819f221a6de4f122fa044cd2a8ee878eea8947f8d815979c9.scope: Deactivated successfully.
Nov 29 03:48:50 np0005539550 podman[394314]: 2025-11-29 08:48:50.308143399 +0000 UTC m=+0.120772194 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:48:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 219 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 425 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Nov 29 03:48:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:50.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:50 np0005539550 podman[394494]: 2025-11-29 08:48:50.88910454 +0000 UTC m=+0.039210662 container create 3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_nightingale, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.906 257641 DEBUG nova.network.neutron [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Updating instance_info_cache with network_info: [{"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:50 np0005539550 systemd[1]: Started libpod-conmon-3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28.scope.
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.922 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Releasing lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.922 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Instance network_info: |[{"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.922 257641 DEBUG oslo_concurrency.lockutils [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.922 257641 DEBUG nova.network.neutron [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Refreshing network info cache for port 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.925 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Start _get_guest_xml network_info=[{"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_format': None, 'size': 0, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'image_id': '4873db8c-b414-4e95-acd9-77caabebe722'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.931 257641 WARNING nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.938 257641 DEBUG nova.virt.libvirt.host [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.939 257641 DEBUG nova.virt.libvirt.host [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.942 257641 DEBUG nova.virt.libvirt.host [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.942 257641 DEBUG nova.virt.libvirt.host [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.943 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.944 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T07:48:23Z,direct_url=<?>,disk_format='qcow2',id=4873db8c-b414-4e95-acd9-77caabebe722,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='313f5427e3624aa189013c3cc05bee02',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T07:48:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.944 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.945 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.945 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.945 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.945 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.946 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.946 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.946 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.946 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.947 257641 DEBUG nova.virt.hardware [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 03:48:50 np0005539550 nova_compute[257631]: 2025-11-29 08:48:50.950 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:50 np0005539550 podman[394494]: 2025-11-29 08:48:50.871845758 +0000 UTC m=+0.021951900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:50 np0005539550 podman[394494]: 2025-11-29 08:48:50.970642571 +0000 UTC m=+0.120748693 container init 3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_nightingale, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 29 03:48:50 np0005539550 podman[394494]: 2025-11-29 08:48:50.979647877 +0000 UTC m=+0.129754009 container start 3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_nightingale, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:48:50 np0005539550 podman[394494]: 2025-11-29 08:48:50.983087563 +0000 UTC m=+0.133193735 container attach 3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:48:50 np0005539550 inspiring_nightingale[394510]: 167 167
Nov 29 03:48:50 np0005539550 systemd[1]: libpod-3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28.scope: Deactivated successfully.
Nov 29 03:48:50 np0005539550 podman[394494]: 2025-11-29 08:48:50.986628661 +0000 UTC m=+0.136734793 container died 3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_nightingale, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:48:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0d8f95e97ed757f2cddf1927a9ae517214272d6c8640908219a9c1ce4b464bf7-merged.mount: Deactivated successfully.
Nov 29 03:48:51 np0005539550 podman[394494]: 2025-11-29 08:48:51.03052831 +0000 UTC m=+0.180634422 container remove 3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_nightingale, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:48:51 np0005539550 systemd[1]: libpod-conmon-3e01be8decba4e419cfc3e76632ca210abd8fd30d5edf13b2349b5b98c0e2f28.scope: Deactivated successfully.
Nov 29 03:48:51 np0005539550 podman[394554]: 2025-11-29 08:48:51.207669663 +0000 UTC m=+0.037113530 container create d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:51 np0005539550 systemd[1]: Started libpod-conmon-d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a.scope.
Nov 29 03:48:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce23b5e7c09141abf5cf4d2077d86a39320b9caaafc6f7faf8a3593da701431/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce23b5e7c09141abf5cf4d2077d86a39320b9caaafc6f7faf8a3593da701431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce23b5e7c09141abf5cf4d2077d86a39320b9caaafc6f7faf8a3593da701431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce23b5e7c09141abf5cf4d2077d86a39320b9caaafc6f7faf8a3593da701431/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:51 np0005539550 podman[394554]: 2025-11-29 08:48:51.192388591 +0000 UTC m=+0.021832478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:51 np0005539550 podman[394554]: 2025-11-29 08:48:51.297081891 +0000 UTC m=+0.126525808 container init d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:48:51 np0005539550 podman[394554]: 2025-11-29 08:48:51.303116252 +0000 UTC m=+0.132560129 container start d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:48:51 np0005539550 podman[394554]: 2025-11-29 08:48:51.306668191 +0000 UTC m=+0.136112088 container attach d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:48:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879262987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.400 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.430 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.435 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:48:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455123844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.869 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.870 257641 DEBUG nova.virt.libvirt.vif [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:48:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1470590521',display_name='tempest-TestServerBasicOps-server-1470590521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1470590521',id=218,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/M7edo5b6KW4Db6z+vbxGOAVJ7Yg3aHNlLjdEnvoE8RI2KqGq39F9EgOj78xH716z2Q0O3Cv66UM2Q54F47ITtnbMTRzYAzFEfv9yqVsIaSnlCwborXFjq3RQE8jj3A==',key_name='tempest-TestServerBasicOps-1340609054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b9fd3ff0794089a4e086cd4ca0c36d',ramdisk_id='',reservation_id='r-swkxo9fv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-919100436',owner_user_name='tempest-TestServerBasicOps-919100436-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='01dac15e9e59419d98c66eb08a434612',uuid=8dae1bd9-84b7-4bad-95d7-218720568a12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.871 257641 DEBUG nova.network.os_vif_util [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Converting VIF {"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.872 257641 DEBUG nova.network.os_vif_util [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:5c:71,bridge_name='br-int',has_traffic_filtering=True,id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2,network=Network(1ebad506-af98-4cc8-a8ed-1212b7b6ee2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap882b2916-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.873 257641 DEBUG nova.objects.instance [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dae1bd9-84b7-4bad-95d7-218720568a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.890 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <uuid>8dae1bd9-84b7-4bad-95d7-218720568a12</uuid>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <name>instance-000000da</name>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestServerBasicOps-server-1470590521</nova:name>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:48:50</nova:creationTime>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:user uuid="01dac15e9e59419d98c66eb08a434612">tempest-TestServerBasicOps-919100436-project-member</nova:user>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:project uuid="56b9fd3ff0794089a4e086cd4ca0c36d">tempest-TestServerBasicOps-919100436</nova:project>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <nova:root type="image" uuid="4873db8c-b414-4e95-acd9-77caabebe722"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <nova:port uuid="882b2916-31ab-4fc4-b3c9-6d4653c8d5a2">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <entry name="serial">8dae1bd9-84b7-4bad-95d7-218720568a12</entry>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <entry name="uuid">8dae1bd9-84b7-4bad-95d7-218720568a12</entry>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8dae1bd9-84b7-4bad-95d7-218720568a12_disk">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:51 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:51 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/8dae1bd9-84b7-4bad-95d7-218720568a12_disk.config">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:0f:5c:71"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <target dev="tap882b2916-31"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/console.log" append="off"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:48:51 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:48:51 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:48:51 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:48:51 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.891 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Preparing to wait for external event network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.891 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.892 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.892 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.892 257641 DEBUG nova.virt.libvirt.vif [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:48:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1470590521',display_name='tempest-TestServerBasicOps-server-1470590521',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1470590521',id=218,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/M7edo5b6KW4Db6z+vbxGOAVJ7Yg3aHNlLjdEnvoE8RI2KqGq39F9EgOj78xH716z2Q0O3Cv66UM2Q54F47ITtnbMTRzYAzFEfv9yqVsIaSnlCwborXFjq3RQE8jj3A==',key_name='tempest-TestServerBasicOps-1340609054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56b9fd3ff0794089a4e086cd4ca0c36d',ramdisk_id='',reservation_id='r-swkxo9fv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-919100436',owner_user_name='tempest-TestServerBasicOps-919100436-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:48:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='01dac15e9e59419d98c66eb08a434612',uuid=8dae1bd9-84b7-4bad-95d7-218720568a12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.893 257641 DEBUG nova.network.os_vif_util [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Converting VIF {"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.893 257641 DEBUG nova.network.os_vif_util [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:5c:71,bridge_name='br-int',has_traffic_filtering=True,id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2,network=Network(1ebad506-af98-4cc8-a8ed-1212b7b6ee2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap882b2916-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.894 257641 DEBUG os_vif [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:5c:71,bridge_name='br-int',has_traffic_filtering=True,id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2,network=Network(1ebad506-af98-4cc8-a8ed-1212b7b6ee2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap882b2916-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.894 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.895 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.895 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.910 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.910 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap882b2916-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.911 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap882b2916-31, col_values=(('external_ids', {'iface-id': '882b2916-31ab-4fc4-b3c9-6d4653c8d5a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:5c:71', 'vm-uuid': '8dae1bd9-84b7-4bad-95d7-218720568a12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.912 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:51 np0005539550 NetworkManager[49039]: <info>  [1764406131.9146] manager: (tap882b2916-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.915 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.920 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.922 257641 INFO os_vif [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:5c:71,bridge_name='br-int',has_traffic_filtering=True,id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2,network=Network(1ebad506-af98-4cc8-a8ed-1212b7b6ee2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap882b2916-31')#033[00m
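
The plug sequence above is os-vif's ovs plugin driving two ovsdbapp transactions: AddPortCommand attaches tap882b2916-31 to br-int, and DbSetCommand stamps the Interface's external_ids with the Neutron port ID, MAC, and instance UUID that ovn-controller later matches against. A rough hand-run equivalent using the ovs-vsctl CLI rather than ovsdbapp (os-vif itself talks to ovsdb-server directly; this is only an illustration, with the values taken from this log):

import subprocess

port = "tap882b2916-31"
# One ovs-vsctl invocation covering both logged transactions.
subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
     "--", "set", "Interface", port,
     "external_ids:iface-id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:0f:5c:71",
     "external_ids:vm-uuid=8dae1bd9-84b7-4bad-95d7-218720568a12"],
    check=True,
)
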
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.934 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.934 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.940 257641 DEBUG nova.network.neutron [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Updated VIF entry in instance network info cache for port 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.940 257641 DEBUG nova.network.neutron [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Updating instance_info_cache with network_info: [{"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.952 257641 DEBUG oslo_concurrency.lockutils [req-92d434af-da68-406e-8d5e-99e54db48fe4 req-5a75f634-ed86-434d-a010-5f0e1ead307f 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.992 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.992 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.992 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] No VIF found with MAC fa:16:3e:0f:5c:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:48:51 np0005539550 nova_compute[257631]: 2025-11-29 08:48:51.993 257641 INFO nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Using config drive#033[00m
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.020 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]: {
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:    "0": [
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:        {
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "devices": [
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "/dev/loop3"
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            ],
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "lv_name": "ceph_lv0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "lv_size": "7511998464",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "name": "ceph_lv0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "tags": {
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.cluster_name": "ceph",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.crush_device_class": "",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.encrypted": "0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.osd_id": "0",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.type": "block",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:                "ceph.vdo": "0"
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            },
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "type": "block",
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:            "vg_name": "ceph_vg0"
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:        }
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]:    ]
Nov 29 03:48:52 np0005539550 happy_ptolemy[394571]: }
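
The happy_ptolemy container (a podman auto-generated name for a short-lived cephadm helper) prints what appears to be ceph-volume lvm list style JSON: one key per OSD ID, each mapping to the logical volumes backing that OSD. A short sketch for pulling out the useful fields, assuming the JSON has been captured to a hypothetical file ceph_volume_list.json:

import json

# Layout matches the dump above: {"<osd_id>": [{lv fields...}, ...]}.
with open("ceph_volume_list.json") as f:
    osds = json.load(f)

for osd_id, lvs in osds.items():
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(fsid {lv['tags']['ceph.osd_fsid']}, {size_gib:.1f} GiB)")
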
Nov 29 03:48:52 np0005539550 systemd[1]: libpod-d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a.scope: Deactivated successfully.
Nov 29 03:48:52 np0005539550 podman[394554]: 2025-11-29 08:48:52.068838608 +0000 UTC m=+0.898282475 container died d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:48:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4ce23b5e7c09141abf5cf4d2077d86a39320b9caaafc6f7faf8a3593da701431-merged.mount: Deactivated successfully.
Nov 29 03:48:52 np0005539550 podman[394554]: 2025-11-29 08:48:52.121157528 +0000 UTC m=+0.950601415 container remove d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:48:52 np0005539550 systemd[1]: libpod-conmon-d5c867aac978dc491ec2dc70a31475e5818f518fc0c283aff57fbd92a185d49a.scope: Deactivated successfully.
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.464 257641 INFO nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Creating config drive at /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/disk.config#033[00m
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.476 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplekvv7qr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.641 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplekvv7qr" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.679 257641 DEBUG nova.storage.rbd_utils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] rbd image 8dae1bd9-84b7-4bad-95d7-218720568a12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.682 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/disk.config 8dae1bd9-84b7-4bad-95d7-218720568a12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:48:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 235 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Nov 29 03:48:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:52.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
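
The radosgw beast lines record an anonymous "HEAD / HTTP/1.0" answered with 200 in about a millisecond, the typical shape of a load-balancer health probe. A minimal sketch of issuing the same probe; the host and port here are hypothetical, since the access log only shows the client address:

import http.client

# HEAD / against the RGW frontend; 200 means the gateway is serving.
conn = http.client.HTTPConnection("rgw.example.com", 8080, timeout=2)
conn.request("HEAD", "/")
print(conn.getresponse().status)
conn.close()
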
Nov 29 03:48:52 np0005539550 podman[394816]: 2025-11-29 08:48:52.834663557 +0000 UTC m=+0.054887675 container create 2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:48:52 np0005539550 systemd[1]: Started libpod-conmon-2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8.scope.
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.883 257641 DEBUG oslo_concurrency.processutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/disk.config 8dae1bd9-84b7-4bad-95d7-218720568a12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.884 257641 INFO nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Deleting local config drive /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12/disk.config because it was imported into RBD.#033[00m
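
The config-drive flow in this window is: mkisofs builds an ISO9660 image labelled config-2 from a temporary metadata directory, rbd import copies it into the vms pool as <uuid>_disk.config, and the local file is then deleted. The same two commands replayed by hand; paths, pool, and image names are taken from this log, while the staging directory is hypothetical, standing in for Nova's tmpdir:

import subprocess

inst = "8dae1bd9-84b7-4bad-95d7-218720568a12"
iso = f"/var/lib/nova/instances/{inst}/disk.config"

# Build the config-2 ISO from a staged metadata tree.
subprocess.run(
    ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
     "-allow-multidot", "-l", "-J", "-r", "-V", "config-2",
     "/tmp/metadata_dir"],
    check=True,
)

# Import it into the vms RBD pool, as in the log above.
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso, f"{inst}_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
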
Nov 29 03:48:52 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:52 np0005539550 podman[394816]: 2025-11-29 08:48:52.813914597 +0000 UTC m=+0.034138735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:52 np0005539550 podman[394816]: 2025-11-29 08:48:52.926303651 +0000 UTC m=+0.146527769 container init 2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:48:52 np0005539550 podman[394816]: 2025-11-29 08:48:52.937990483 +0000 UTC m=+0.158214591 container start 2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:48:52 np0005539550 podman[394816]: 2025-11-29 08:48:52.941628044 +0000 UTC m=+0.161852162 container attach 2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:48:52 np0005539550 heuristic_kowalevski[394859]: 167 167
Nov 29 03:48:52 np0005539550 systemd[1]: libpod-2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8.scope: Deactivated successfully.
Nov 29 03:48:52 np0005539550 podman[394816]: 2025-11-29 08:48:52.945191853 +0000 UTC m=+0.165415961 container died 2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:48:52 np0005539550 kernel: tap882b2916-31: entered promiscuous mode
Nov 29 03:48:52 np0005539550 NetworkManager[49039]: <info>  [1764406132.9527] manager: (tap882b2916-31): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Nov 29 03:48:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:52Z|01019|binding|INFO|Claiming lport 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 for this chassis.
Nov 29 03:48:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:52Z|01020|binding|INFO|882b2916-31ab-4fc4-b3c9-6d4653c8d5a2: Claiming fa:16:3e:0f:5c:71 10.100.0.11
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.959 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.963 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:5c:71 10.100.0.11'], port_security=['fa:16:3e:0f:5c:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dae1bd9-84b7-4bad-95d7-218720568a12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b9fd3ff0794089a4e086cd4ca0c36d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8db62d0-26b9-4cd2-8074-3bef6ec0dd74 f32c0d02-bfab-4a17-8d0a-3d3d92c87366', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf8b6146-f390-4c84-9a4c-6f550bc45407, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.965 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 in datapath 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d bound to our chassis#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.967 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d#033[00m
Nov 29 03:48:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8a733549f55364400e727e6129b45c58bd0fe2baaf3b96b50778ceab840f547a-merged.mount: Deactivated successfully.
Nov 29 03:48:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:52Z|01021|binding|INFO|Setting lport 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 ovn-installed in OVS
Nov 29 03:48:52 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:52Z|01022|binding|INFO|Setting lport 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 up in Southbound
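
ovn-controller has now claimed the lport for this chassis, marked it ovn-installed in OVS, and set it up in the Southbound database; that up transition is what ultimately lets Neutron emit the network-vif-plugged event Nova is waiting on. A sketch for checking the binding state by hand, assuming ovn-sbctl on this host can reach the SB database:

import subprocess

lport = "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2"
# chassis becomes non-empty and up flips to [true] once the claim lands.
out = subprocess.run(
    ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
     f"logical_port={lport}"],
    capture_output=True, text=True, check=True,
).stdout
print(out)
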
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.978 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.979 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7192f0bb-7d4b-4604-8f4f-09b660ed26e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.981 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ebad506-a1 in ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.983 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ebad506-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.983 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[a98fbaa7-fd8f-4d5e-8b96-b3fbfecc8559]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.985 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4878e8d0-4306-4592-b381-136d367dd37f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:52 np0005539550 nova_compute[257631]: 2025-11-29 08:48:52.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:52 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:52.996 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[60c1265f-3d2b-4fce-8dc1-9bc7b2a57ba3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 podman[394816]: 2025-11-29 08:48:53.002906928 +0000 UTC m=+0.223131026 container remove 2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 03:48:53 np0005539550 systemd-udevd[394930]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:53 np0005539550 systemd[1]: libpod-conmon-2bb4e5790f67515dcae53b97dc1134a6d67be0519de6eddec5671d4f980f04d8.scope: Deactivated successfully.
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.013 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce70af8-e8fe-4c98-a3c0-35564d111705]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 systemd-machined[216673]: New machine qemu-118-instance-000000da.
Nov 29 03:48:53 np0005539550 NetworkManager[49039]: <info>  [1764406133.0269] device (tap882b2916-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:48:53 np0005539550 NetworkManager[49039]: <info>  [1764406133.0280] device (tap882b2916-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:48:53 np0005539550 systemd[1]: Started Virtual Machine qemu-118-instance-000000da.
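
systemd-machined registers the guest as machine qemu-118-instance-000000da, i.e. libvirt domain instance-000000da. A sketch for confirming the domain from the libvirt side, assuming the libvirt-python bindings and local access to qemu:///system:

import libvirt

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByName("instance-000000da")
    print(dom.name(), "active:", bool(dom.isActive()))
finally:
    conn.close()
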
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.045 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[a7110c5b-fee9-466d-bb22-39a9f9323d2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.049 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5b68efad-1171-48c9-b4cd-222794ab34ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 NetworkManager[49039]: <info>  [1764406133.0509] manager: (tap1ebad506-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/444)
Nov 29 03:48:53 np0005539550 systemd-udevd[394937]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.079 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[d470bab9-33c5-405a-824f-38402fc16eb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.085 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d1c872-d1a6-4828-9cb2-f79b454f2074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 NetworkManager[49039]: <info>  [1764406133.1085] device (tap1ebad506-a0): carrier: link connected
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.113 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c31af04c-5bd5-4491-806c-a5c4559f9164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.131 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b8aaddc1-2059-412a-af0b-f37aa3c53db2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ebad506-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:81:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918077, 'reachable_time': 32144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394970, 'error': None, 'target': 'ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.148 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d280d012-b118-4f1c-a99e-b8fed16be04f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:813a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 918077, 'tstamp': 918077}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394977, 'error': None, 'target': 'ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.167 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6a565eab-9512-4225-8f2b-2e5755efa9ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ebad506-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:81:3a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 297], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918077, 'reachable_time': 32144, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 394982, 'error': None, 'target': 'ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
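
The two large privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK and RTM_NEWADDR) taken inside the ovnmeta- namespace while the agent wires up the metadata veth pair. The same link data can be read directly with pyroute2; a sketch, assuming the namespace from this log still exists on the host:

from pyroute2 import NetNS

# Open the metadata namespace and list its links: the same name, MAC,
# and oper-state fields the RTM_NEWLINK dumps above carry.
ns = NetNS("ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d")
try:
    for link in ns.get_links():
        print(link.get_attr("IFLA_IFNAME"),
              link.get_attr("IFLA_ADDRESS"),
              link["state"])
finally:
    ns.close()
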
Nov 29 03:48:53 np0005539550 podman[394969]: 2025-11-29 08:48:53.18276375 +0000 UTC m=+0.053357817 container create 4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.199 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7345a2a1-ca7b-4efc-8dda-ecb3865388f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 systemd[1]: Started libpod-conmon-4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6.scope.
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.249 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[853d4b57-0e69-4180-8bad-5b5fbdd18a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:48:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.253 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ebad506-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.254 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.254 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ebad506-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec70f7caf9abf96288641840517561325ba2b7f8f64b644c711d6f9977b23316/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:53 np0005539550 podman[394969]: 2025-11-29 08:48:53.162054812 +0000 UTC m=+0.032648929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.256 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:53 np0005539550 kernel: tap1ebad506-a0: entered promiscuous mode
Nov 29 03:48:53 np0005539550 NetworkManager[49039]: <info>  [1764406133.2568] manager: (tap1ebad506-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.258 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:48:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec70f7caf9abf96288641840517561325ba2b7f8f64b644c711d6f9977b23316/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.259 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ebad506-a0, col_values=(('external_ids', {'iface-id': '03e76e85-7a25-46d7-a701-941f7a1ac12d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:48:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec70f7caf9abf96288641840517561325ba2b7f8f64b644c711d6f9977b23316/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec70f7caf9abf96288641840517561325ba2b7f8f64b644c711d6f9977b23316/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:53 np0005539550 ovn_controller[148680]: 2025-11-29T08:48:53Z|01023|binding|INFO|Releasing lport 03e76e85-7a25-46d7-a701-941f7a1ac12d from this chassis (sb_readonly=0)
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.260 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:48:53 np0005539550 podman[394969]: 2025-11-29 08:48:53.27224582 +0000 UTC m=+0.142839907 container init 4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:48:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:53.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
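The anonymous "HEAD / HTTP/1.0" requests logged by beast here, and roughly every two seconds for the rest of this excerpt, alternate between 192.168.122.100 and .102 and always return 200 with an empty body, which looks like load-balancer health probes against the RGW frontend. A minimal sketch of such a probe; the target host and port are assumptions, since the log records only the client side:

import http.client

# Hypothetical endpoint: the log shows the probing clients, not the listening socket.
conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # a healthy gateway answers 200 with no body, as logged above
conn.close()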
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.277 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.279 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ebad506-af98-4cc8-a8ed-1212b7b6ee2d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ebad506-af98-4cc8-a8ed-1212b7b6ee2d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.280 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[49521d7e-52c0-4787-a0a3-118e480e74bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.280 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/1ebad506-af98-4cc8-a8ed-1212b7b6ee2d.pid.haproxy
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 03:48:53 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:48:53.281 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'env', 'PROCESS_TAG=haproxy-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ebad506-af98-4cc8-a8ed-1212b7b6ee2d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
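Taken together, the agent lines above show the whole provisioning sequence for network 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d: probe the haproxy pidfile (the ENOENT a few lines up just means no proxy is running yet), render the config printed above, then launch haproxy inside the ovnmeta- namespace through rootwrap. A condensed sketch of that flow; the paths and command line are taken from the log, the helper itself is hypothetical:

import subprocess

NET = "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d"
PIDFILE = f"/var/lib/neutron/external/pids/{NET}.pid.haproxy"
CONF = f"/var/lib/neutron/ovn-metadata-proxy/{NET}.conf"

def proxy_pid():
    # Mirrors get_value_from_file: a missing pidfile means "not running yet", not an error.
    try:
        with open(PIDFILE) as f:
            return int(f.read().strip())
    except FileNotFoundError:
        return None

if proxy_pid() is None:
    # Same command the agent logs above, minus the PROCESS_TAG env decoration.
    subprocess.check_call([
        "sudo", "neutron-rootwrap", "/etc/neutron/rootwrap.conf",
        "ip", "netns", "exec", f"ovnmeta-{NET}",
        "haproxy", "-f", CONF,
    ])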
Nov 29 03:48:53 np0005539550 podman[394969]: 2025-11-29 08:48:53.286178378 +0000 UTC m=+0.156772445 container start 4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:48:53 np0005539550 podman[394969]: 2025-11-29 08:48:53.289568783 +0000 UTC m=+0.160162850 container attach 4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.483 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406133.4828298, 8dae1bd9-84b7-4bad-95d7-218720568a12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.484 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] VM Started (Lifecycle Event)
Nov 29 03:48:53 np0005539550 podman[395066]: 2025-11-29 08:48:53.645891642 +0000 UTC m=+0.048929146 container create 0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:48:53 np0005539550 systemd[1]: Started libpod-conmon-0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f.scope.
Nov 29 03:48:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:48:53 np0005539550 podman[395066]: 2025-11-29 08:48:53.619004719 +0000 UTC m=+0.022042223 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:48:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e3d558f50f86b1e3b14d77d1c742b6a6663cd4aee90be04860cc3c77c9e7630/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.719 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.726 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406133.4841511, 8dae1bd9-84b7-4bad-95d7-218720568a12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.726 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] VM Paused (Lifecycle Event)
Nov 29 03:48:53 np0005539550 podman[395066]: 2025-11-29 08:48:53.730652653 +0000 UTC m=+0.133690147 container init 0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 03:48:53 np0005539550 podman[395066]: 2025-11-29 08:48:53.736535631 +0000 UTC m=+0.139573115 container start 0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.747 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.752 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:48:53 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [NOTICE]   (395086) : New worker (395088) forked
Nov 29 03:48:53 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [NOTICE]   (395086) : Loading success.
Nov 29 03:48:53 np0005539550 nova_compute[257631]: 2025-11-29 08:48:53.778 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] During sync_power_state the instance has a pending task (spawning). Skip.
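The Paused lifecycle event above races the in-flight spawn: the database still records power_state 0 while libvirt reports 3. In nova's compute.power_state module those values are NOSTATE=0, RUNNING=1, PAUSED=3, SHUTDOWN=4, CRASHED=6, SUSPENDED=7, and the sync handler refuses to touch the record while a task is pending, which is exactly the "pending task (spawning). Skip." INFO line. A toy condensation of that decision (the real _sync_instance_power_state handles many more cases):

# Power-state constants as defined in nova/compute/power_state.py.
NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

def sync_power_state(db_power_state, vm_power_state, task_state):
    # Hypothetical one-condition version of ComputeManager._sync_instance_power_state.
    if task_state is not None:
        return f"pending task ({task_state}), skip"
    if db_power_state != vm_power_state:
        return f"update DB power_state {db_power_state} -> {vm_power_state}"
    return "in sync"

print(sync_power_state(NOSTATE, PAUSED, "spawning"))  # matches the Skip above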
Nov 29 03:48:54 np0005539550 youthful_golick[394992]: {
Nov 29 03:48:54 np0005539550 youthful_golick[394992]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:48:54 np0005539550 youthful_golick[394992]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:48:54 np0005539550 youthful_golick[394992]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:48:54 np0005539550 youthful_golick[394992]:        "osd_id": 0,
Nov 29 03:48:54 np0005539550 youthful_golick[394992]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:48:54 np0005539550 youthful_golick[394992]:        "type": "bluestore"
Nov 29 03:48:54 np0005539550 youthful_golick[394992]:    }
Nov 29 03:48:54 np0005539550 youthful_golick[394992]: }
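The one-shot ceph container (podman's auto-generated name youthful_golick) prints a ceph-volume style inventory: one object per OSD fsid carrying the backing logical volume, the OSD id, and the store type. A minimal sketch pulling the useful fields out of exactly the document shown above:

import json

raw = """{
  "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
    "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
    "device": "/dev/mapper/ceph_vg0-ceph_lv0",
    "osd_id": 0,
    "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
    "type": "bluestore"
  }
}"""

for info in json.loads(raw).values():
    print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}")
    # -> osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0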
Nov 29 03:48:54 np0005539550 systemd[1]: libpod-4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6.scope: Deactivated successfully.
Nov 29 03:48:54 np0005539550 conmon[394992]: conmon 4b9df22bb4876cb67792 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6.scope/container/memory.events
Nov 29 03:48:54 np0005539550 podman[395113]: 2025-11-29 08:48:54.323912133 +0000 UTC m=+0.025105170 container died 4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:48:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ec70f7caf9abf96288641840517561325ba2b7f8f64b644c711d6f9977b23316-merged.mount: Deactivated successfully.
Nov 29 03:48:54 np0005539550 podman[395113]: 2025-11-29 08:48:54.37774968 +0000 UTC m=+0.078942677 container remove 4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:48:54 np0005539550 systemd[1]: libpod-conmon-4b9df22bb4876cb67792edd11f3e49c78fd474db6bf539fd9739e4d29ef36dd6.scope: Deactivated successfully.
Nov 29 03:48:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:48:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:48:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e4338bdc-456e-47bd-a601-72551b32a2a9 does not exist
Nov 29 03:48:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f80e78a6-e816-4477-9121-41e991259fd2 does not exist
Nov 29 03:48:54 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9eb94313-e885-4752-8bee-d9d3c71c378c does not exist
Nov 29 03:48:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:48:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 81 op/s
Nov 29 03:48:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:48:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:54.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.796 257641 DEBUG nova.compute.manager [req-dfd7edb9-2fee-405e-bed1-78eadc9deb25 req-ca3702f1-1f75-48f8-9465-92e89f8b9e72 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.796 257641 DEBUG oslo_concurrency.lockutils [req-dfd7edb9-2fee-405e-bed1-78eadc9deb25 req-ca3702f1-1f75-48f8-9465-92e89f8b9e72 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.797 257641 DEBUG oslo_concurrency.lockutils [req-dfd7edb9-2fee-405e-bed1-78eadc9deb25 req-ca3702f1-1f75-48f8-9465-92e89f8b9e72 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.797 257641 DEBUG oslo_concurrency.lockutils [req-dfd7edb9-2fee-405e-bed1-78eadc9deb25 req-ca3702f1-1f75-48f8-9465-92e89f8b9e72 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.797 257641 DEBUG nova.compute.manager [req-dfd7edb9-2fee-405e-bed1-78eadc9deb25 req-ca3702f1-1f75-48f8-9465-92e89f8b9e72 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Processing event network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.798 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.804 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406134.804069, 8dae1bd9-84b7-4bad-95d7-218720568a12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.804 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] VM Resumed (Lifecycle Event)
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.806 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.809 257641 INFO nova.virt.libvirt.driver [-] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Instance spawned successfully.
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.809 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.847 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.852 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.856 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.856 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.857 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.857 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.857 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.858 257641 DEBUG nova.virt.libvirt.driver [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.886 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.917 257641 INFO nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Took 7.01 seconds to spawn the instance on the hypervisor.
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.918 257641 DEBUG nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.985 257641 INFO nova.compute.manager [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Took 7.97 seconds to build instance.
Nov 29 03:48:54 np0005539550 nova_compute[257631]: 2025-11-29 08:48:54.989 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:48:55 np0005539550 nova_compute[257631]: 2025-11-29 08:48:55.001 257641 DEBUG oslo_concurrency.lockutils [None req-9bd4da4f-d96e-4409-8e4c-91eb4cbbbfde 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:48:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:55.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:48:55 np0005539550 nova_compute[257631]: 2025-11-29 08:48:55.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:48:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 263 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.3 MiB/s wr, 116 op/s
Nov 29 03:48:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:56.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:56 np0005539550 nova_compute[257631]: 2025-11-29 08:48:56.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:48:56 np0005539550 nova_compute[257631]: 2025-11-29 08:48:56.988 257641 DEBUG nova.compute.manager [req-37754a1a-1837-4384-9f6f-67ba2b7d939d req-8b469ab6-7d90-4dbc-9b43-e5e51d01ba40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:48:56 np0005539550 nova_compute[257631]: 2025-11-29 08:48:56.989 257641 DEBUG oslo_concurrency.lockutils [req-37754a1a-1837-4384-9f6f-67ba2b7d939d req-8b469ab6-7d90-4dbc-9b43-e5e51d01ba40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:48:56 np0005539550 nova_compute[257631]: 2025-11-29 08:48:56.989 257641 DEBUG oslo_concurrency.lockutils [req-37754a1a-1837-4384-9f6f-67ba2b7d939d req-8b469ab6-7d90-4dbc-9b43-e5e51d01ba40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:48:56 np0005539550 nova_compute[257631]: 2025-11-29 08:48:56.990 257641 DEBUG oslo_concurrency.lockutils [req-37754a1a-1837-4384-9f6f-67ba2b7d939d req-8b469ab6-7d90-4dbc-9b43-e5e51d01ba40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:48:56 np0005539550 nova_compute[257631]: 2025-11-29 08:48:56.990 257641 DEBUG nova.compute.manager [req-37754a1a-1837-4384-9f6f-67ba2b7d939d req-8b469ab6-7d90-4dbc-9b43-e5e51d01ba40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] No waiting events found dispatching network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:48:56 np0005539550 nova_compute[257631]: 2025-11-29 08:48:56.990 257641 WARNING nova.compute.manager [req-37754a1a-1837-4384-9f6f-67ba2b7d939d req-8b469ab6-7d90-4dbc-9b43-e5e51d01ba40 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received unexpected event network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 for instance with vm_state active and task_state None.
Nov 29 03:48:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:57.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.939 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.939 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.940 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.940 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:48:57 np0005539550 nova_compute[257631]: 2025-11-29 08:48:57.940 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.151 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406123.1500094, 6b7b3384-3de2-4098-b77f-3658f82eedfc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.152 257641 INFO nova.compute.manager [-] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] VM Stopped (Lifecycle Event)
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.184 257641 DEBUG nova.compute.manager [None req-d669ed3d-652a-4007-acde-9f213e131f06 - - - - - -] [instance: 6b7b3384-3de2-4098-b77f-3658f82eedfc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:48:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:48:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:48:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373344537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.415 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
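To size the shared-storage disk inventory, nova-compute shells out to the exact ceph df invocation logged here (the same call reappears at 08:48:58.849 for the Placement update). A minimal sketch of the same call; it assumes the client.openstack keyring referenced by --id is readable on the host:

import json
import subprocess

# Same command nova runs above; ceph df reports cluster-wide and per-pool usage.
out = subprocess.check_output([
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
stats = json.loads(out)["stats"]
print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free "
      f"of {stats['total_bytes'] / 2**30:.1f} GiB")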
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.511 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.513 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000da as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:48:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.3 MiB/s wr, 115 op/s
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.706 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.707 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3929MB free_disk=20.96719741821289GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.708 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.708 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:48:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:48:58.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.811 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 8dae1bd9-84b7-4bad-95d7-218720568a12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.812 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.812 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:48:58 np0005539550 nova_compute[257631]: 2025-11-29 08:48:58.849 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.152 257641 DEBUG nova.compute.manager [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-changed-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.152 257641 DEBUG nova.compute.manager [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Refreshing instance network info cache due to event network-changed-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.153 257641 DEBUG oslo_concurrency.lockutils [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.154 257641 DEBUG oslo_concurrency.lockutils [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.155 257641 DEBUG nova.network.neutron [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Refreshing network info cache for port 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:48:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:48:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381055498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.261 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.268 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.280 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
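The inventory dict above is what the resource tracker pushes to Placement; effective schedulable capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged numbers shows how 8 physical cores become 32 schedulable VCPUs while disk is held back below its raw size:

inventory = {  # values copied from the set_inventory_for_provider line above
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1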
Nov 29 03:48:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:48:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:48:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:48:59.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.304 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.305 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:48:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:48:59
Nov 29 03:48:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:48:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:48:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'vms', 'backups', 'images', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta']
Nov 29 03:48:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:48:59 np0005539550 nova_compute[257631]: 2025-11-29 08:48:59.992 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 158 op/s
Nov 29 03:49:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:00.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:00 np0005539550 nova_compute[257631]: 2025-11-29 08:49:00.868 257641 DEBUG nova.network.neutron [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Updated VIF entry in instance network info cache for port 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:49:00 np0005539550 nova_compute[257631]: 2025-11-29 08:49:00.869 257641 DEBUG nova.network.neutron [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Updating instance_info_cache with network_info: [{"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
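The refreshed cache entry above carries the instance's whole VIF model: fixed IP 10.100.0.11 on the /28 tempest subnet, NATed to floating IP 192.168.122.202, bound by the ovn driver on br-int with MTU 1442. A small sketch walking that structure to pair fixed and floating addresses, with the entry trimmed to just the fields used:

network_info = [{  # abridged copy of the cache entry logged above
    "id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/28",
        "ips": [{
            "address": "10.100.0.11",
            "floating_ips": [{"address": "192.168.122.202"}],
        }],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", ", ".join(floats) or "no floating IP")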
Nov 29 03:49:00 np0005539550 nova_compute[257631]: 2025-11-29 08:49:00.892 257641 DEBUG oslo_concurrency.lockutils [req-4352a085-5abd-4a7e-8f6a-87be34efe8ed req-135ba74e-7299-48f8-be39-81112f1efcee 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-8dae1bd9-84b7-4bad-95d7-218720568a12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:49:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:01 np0005539550 nova_compute[257631]: 2025-11-29 08:49:01.425 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:49:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:01.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:01 np0005539550 nova_compute[257631]: 2025-11-29 08:49:01.428 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:49:01 np0005539550 nova_compute[257631]: 2025-11-29 08:49:01.916 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.5 MiB/s wr, 154 op/s
Nov 29 03:49:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:02.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:03.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:03 np0005539550 nova_compute[257631]: 2025-11-29 08:49:03.917 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Nov 29 03:49:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:04.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:04 np0005539550 nova_compute[257631]: 2025-11-29 08:49:04.995 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:05.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Nov 29 03:49:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:06.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:06 np0005539550 nova_compute[257631]: 2025-11-29 08:49:06.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:07.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:08 np0005539550 podman[395236]: 2025-11-29 08:49:08.387716368 +0000 UTC m=+0.105621195 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 03:49:08 np0005539550 podman[395235]: 2025-11-29 08:49:08.42256255 +0000 UTC m=+0.138808975 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
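
The two podman health_status events above fire when the configured healthcheck ('test': '/openstack/healthcheck' in config_data) runs inside each container. One way to read the same state on demand; the Go-template path mirrors Docker's and is an assumption here:

    import subprocess

    status = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)  # "healthy" would match health_status=healthy above
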
Nov 29 03:49:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:49:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:49:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:49:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:49:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:49:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Nov 29 03:49:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:08.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:08Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:5c:71 10.100.0.11
Nov 29 03:49:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:08Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:5c:71 10.100.0.11
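
The DHCPOFFER/DHCPACK for fa:16:3e:0f:5c:71 -> 10.100.0.11 come from ovn-controller's pinctrl thread: OVN answers DHCP for the logical port natively, with no dnsmasq process involved. The options it serves live in the northbound DHCP_Options table; a sketch that dumps them (assumes ovn-nbctl is installed and can reach the NB database):

    import subprocess

    # Lists the per-subnet DHCP options ovn-controller is answering from.
    print(subprocess.run(
        ["ovn-nbctl", "list", "DHCP_Options"],
        capture_output=True, text=True, check=True,
    ).stdout)
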
Nov 29 03:49:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:49:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:49:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:49:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:49:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:49:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:09.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:09 np0005539550 nova_compute[257631]: 2025-11-29 08:49:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:09 np0005539550 nova_compute[257631]: 2025-11-29 08:49:09.997 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.693669) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150693798, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1282, "num_deletes": 258, "total_data_size": 1993718, "memory_usage": 2019920, "flush_reason": "Manual Compaction"}
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Nov 29 03:49:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 301 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 965 KiB/s wr, 164 op/s
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150705430, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1936669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65964, "largest_seqno": 67245, "table_properties": {"data_size": 1930463, "index_size": 3408, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13718, "raw_average_key_size": 20, "raw_value_size": 1917747, "raw_average_value_size": 2845, "num_data_blocks": 149, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406058, "oldest_key_time": 1764406058, "file_creation_time": 1764406150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 11878 microseconds, and 6228 cpu microseconds.
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.705565) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1936669 bytes OK
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.705598) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.708293) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.708335) EVENT_LOG_v1 {"time_micros": 1764406150708326, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.708360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1987918, prev total WAL file size 1987918, number of live WAL files 2.
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.709585) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353135' seq:72057594037927935, type:22 .. '6C6F676D0032373636' seq:0, type:0; will stop at (end)
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1891KB)], [149(10MB)]
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150709668, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12706254, "oldest_snapshot_seqno": -1}
Nov 29 03:49:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:10.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 10096 keys, 12540953 bytes, temperature: kUnknown
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150801822, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 12540953, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12476514, "index_size": 38086, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 267282, "raw_average_key_size": 26, "raw_value_size": 12300000, "raw_average_value_size": 1218, "num_data_blocks": 1444, "num_entries": 10096, "num_filter_entries": 10096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406150, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.802211) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 12540953 bytes
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.803591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.7 rd, 135.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 10.3 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.0) write-amplify(6.5) OK, records in: 10633, records dropped: 537 output_compression: NoCompression
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.803610) EVENT_LOG_v1 {"time_micros": 1764406150803601, "job": 92, "event": "compaction_finished", "compaction_time_micros": 92298, "compaction_time_cpu_micros": 30337, "output_level": 6, "num_output_files": 1, "total_output_size": 12540953, "num_input_records": 10633, "num_output_records": 10096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150804260, "job": 92, "event": "table_file_deletion", "file_number": 151}
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406150806555, "job": 92, "event": "table_file_deletion", "file_number": 149}
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.709381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.806656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.806659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.806661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.806662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:49:10 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:49:10.806663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
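
The JOB 91/92 sequence above is a manual flush-plus-compaction of the mon store: a ~1.9 MB L0 flush (table #151) merged with the existing L6 file (#149) into one 12.5 MB L6 file (#152), after which both inputs and the old WAL are deleted. The amplification figures rocksdb printed can be recomputed from the byte counts in the same events:

    # Reproducing rocksdb's own summary for JOB 92 from the logged fields.
    input_total = 12706254   # "input_data_size": L0 #151 plus L6 #149
    input_l0    = 1936669    # flushed table #151
    output      = 12540953   # compacted table #152

    print(f"write-amplify({output / input_l0:.1f})")                       # 6.5
    print(f"read-write-amplify({(input_total + output) / input_l0:.1f})")  # 13.0
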
Nov 29 03:49:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:11.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:11 np0005539550 nova_compute[257631]: 2025-11-29 08:49:11.922 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Nov 29 03:49:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:12.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:13.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 206 op/s
Nov 29 03:49:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:14.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:15 np0005539550 nova_compute[257631]: 2025-11-29 08:49:15.001 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:15.890 159086 DEBUG eventlet.wsgi.server [-] (159086) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:15.893 159086 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: Accept: */*#015
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: Connection: close#015
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: Content-Type: text/plain#015
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: Host: 169.254.169.254#015
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: User-Agent: curl/7.84.0#015
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: X-Forwarded-For: 10.100.0.11#015
Nov 29 03:49:15 np0005539550 ovn_metadata_agent[158973]: X-Ovn-Network-Id: 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:16.592 159086 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 03:49:16 np0005539550 haproxy-metadata-proxy-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395088]: 10.100.0.11:54466 [29/Nov/2025:08:49:15.888] listener listener/metadata 0/0/0/705/705 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:16.593 159086 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 0.7000916#033[00m
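
The exchange from 08:49:15.890 to 08:49:16.593 traces one guest metadata fetch end to end: curl inside the instance hits the link-local address, the haproxy-metadata-proxy in the ovnmeta- namespace forwards to the metadata agent, and the agent proxies to nova with the client IP and network ID attached as X-Forwarded-For and X-Ovn-Network-Id headers. The guest side reduces to:

    # What the instance at 10.100.0.11 ran (via curl) to produce the above.
    import urllib.request

    url = "http://169.254.169.254/latest/meta-data/public-ipv4"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())  # -> 192.168.122.202, the floating IP
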
Nov 29 03:49:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 326 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:16.742 159086 DEBUG eventlet.wsgi.server [-] (159086) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:16.743 159086 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: Accept: */*#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: Connection: close#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: Content-Length: 100#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: Content-Type: application/x-www-form-urlencoded#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: Host: 169.254.169.254#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: User-Agent: curl/7.84.0#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: X-Forwarded-For: 10.100.0.11#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: X-Ovn-Network-Id: 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d#015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: #015
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 03:49:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:16.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:16.887 159086 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 03:49:16 np0005539550 haproxy-metadata-proxy-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395088]: 10.100.0.11:54480 [29/Nov/2025:08:49:16.741] listener listener/metadata 0/0/0/146/146 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 29 03:49:16 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:16.888 159086 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.1446226#033[00m
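
The POST that follows is the guest (CirrOS images do this at boot) storing its password through the same proxy path; the 100-byte body of repeated "test" resurfaces further down as password_0 in the instance's system_metadata. A guest-side sketch; note that nova's password endpoint is effectively write-once and rejects overwriting an already-set value:

    import urllib.request

    body = b"test" * 25   # 100 bytes, matching Content-Length: 100 above
    req = urllib.request.Request(
        "http://169.254.169.254/openstack/2013-10-17/password",
        data=body, method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)  # 200, as both the proxy and wsgi lines report
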
Nov 29 03:49:16 np0005539550 nova_compute[257631]: 2025-11-29 08:49:16.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:16 np0005539550 nova_compute[257631]: 2025-11-29 08:49:16.924 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 331 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 160 op/s
Nov 29 03:49:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:18.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.837 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.838 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.838 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.839 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.839 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.841 257641 INFO nova.compute.manager [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Terminating instance#033[00m
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.844 257641 DEBUG nova.compute.manager [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:49:18 np0005539550 kernel: tap882b2916-31 (unregistering): left promiscuous mode
Nov 29 03:49:18 np0005539550 NetworkManager[49039]: <info>  [1764406158.9081] device (tap882b2916-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:49:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:18Z|01024|binding|INFO|Releasing lport 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 from this chassis (sb_readonly=0)
Nov 29 03:49:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:18Z|01025|binding|INFO|Setting lport 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 down in Southbound
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.918 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:18Z|01026|binding|INFO|Removing iface tap882b2916-31 ovn-installed in OVS
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.926 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:5c:71 10.100.0.11'], port_security=['fa:16:3e:0f:5c:71 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dae1bd9-84b7-4bad-95d7-218720568a12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56b9fd3ff0794089a4e086cd4ca0c36d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8db62d0-26b9-4cd2-8074-3bef6ec0dd74 f32c0d02-bfab-4a17-8d0a-3d3d92c87366', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bf8b6146-f390-4c84-9a4c-6f550bc45407, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.928 158978 INFO neutron.agent.ovn.metadata.agent [-] Port 882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 in datapath 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d unbound from our chassis#033[00m
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.931 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ebad506-af98-4cc8-a8ed-1212b7b6ee2d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.933 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[8c980c54-1db8-47fc-9be8-0f5b165adfe7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.934 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d namespace which is not needed anymore#033[00m
Nov 29 03:49:18 np0005539550 nova_compute[257631]: 2025-11-29 08:49:18.944 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:18 np0005539550 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d000000da.scope: Deactivated successfully.
Nov 29 03:49:18 np0005539550 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d000000da.scope: Consumed 14.942s CPU time.
Nov 29 03:49:18 np0005539550 systemd-machined[216673]: Machine qemu-118-instance-000000da terminated.
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.979 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.980 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:18.980 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.072 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.079 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.089 257641 INFO nova.virt.libvirt.driver [-] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Instance destroyed successfully.#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.089 257641 DEBUG nova.objects.instance [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lazy-loading 'resources' on Instance uuid 8dae1bd9-84b7-4bad-95d7-218720568a12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.104 257641 DEBUG nova.virt.libvirt.vif [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:48:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1470590521',display_name='tempest-TestServerBasicOps-server-1470590521',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1470590521',id=218,image_ref='4873db8c-b414-4e95-acd9-77caabebe722',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGJ/M7edo5b6KW4Db6z+vbxGOAVJ7Yg3aHNlLjdEnvoE8RI2KqGq39F9EgOj78xH716z2Q0O3Cv66UM2Q54F47ITtnbMTRzYAzFEfv9yqVsIaSnlCwborXFjq3RQE8jj3A==',key_name='tempest-TestServerBasicOps-1340609054',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:48:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56b9fd3ff0794089a4e086cd4ca0c36d',ramdisk_id='',reservation_id='r-swkxo9fv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='4873db8c-b414-4e95-acd9-77caabebe722',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-919100436',owner_user_name='tempest-TestServerBasicOps-919100436-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='01dac15e9e59419d98c66eb08a434612',uuid=8dae1bd9-84b7-4bad-95d7-218720568a12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.105 257641 DEBUG nova.network.os_vif_util [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Converting VIF {"id": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "address": "fa:16:3e:0f:5c:71", "network": {"id": "1ebad506-af98-4cc8-a8ed-1212b7b6ee2d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1675712983-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56b9fd3ff0794089a4e086cd4ca0c36d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap882b2916-31", "ovs_interfaceid": "882b2916-31ab-4fc4-b3c9-6d4653c8d5a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.106 257641 DEBUG nova.network.os_vif_util [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:5c:71,bridge_name='br-int',has_traffic_filtering=True,id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2,network=Network(1ebad506-af98-4cc8-a8ed-1212b7b6ee2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap882b2916-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.107 257641 DEBUG os_vif [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:5c:71,bridge_name='br-int',has_traffic_filtering=True,id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2,network=Network(1ebad506-af98-4cc8-a8ed-1212b7b6ee2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap882b2916-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.110 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap882b2916-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.112 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.114 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.117 257641 INFO os_vif [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:5c:71,bridge_name='br-int',has_traffic_filtering=True,id=882b2916-31ab-4fc4-b3c9-6d4653c8d5a2,network=Network(1ebad506-af98-4cc8-a8ed-1212b7b6ee2d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap882b2916-31')#033[00m
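
Unplugging the VIF is a single ovsdb transaction: os-vif issues the DelPortCommand shown above against the integration bridge. Shelling out would look like this (a sketch; os-vif speaks to ovsdb directly rather than via the CLI):

    import subprocess

    # Equivalent of DelPortCommand(port=tap882b2916-31, bridge=br-int,
    # if_exists=True) from the transaction log above.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap882b2916-31"],
        check=True)
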
Nov 29 03:49:19 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [NOTICE]   (395086) : haproxy version is 2.8.14-c23fe91
Nov 29 03:49:19 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [NOTICE]   (395086) : path to executable is /usr/sbin/haproxy
Nov 29 03:49:19 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [WARNING]  (395086) : Exiting Master process...
Nov 29 03:49:19 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [WARNING]  (395086) : Exiting Master process...
Nov 29 03:49:19 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [ALERT]    (395086) : Current worker (395088) exited with code 143 (Terminated)
Nov 29 03:49:19 np0005539550 neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d[395082]: [WARNING]  (395086) : All workers exited. Exiting... (0)
Nov 29 03:49:19 np0005539550 systemd[1]: libpod-0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f.scope: Deactivated successfully.
Nov 29 03:49:19 np0005539550 podman[395358]: 2025-11-29 08:49:19.14138534 +0000 UTC m=+0.065809628 container died 0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:49:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f-userdata-shm.mount: Deactivated successfully.
Nov 29 03:49:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4e3d558f50f86b1e3b14d77d1c742b6a6663cd4aee90be04860cc3c77c9e7630-merged.mount: Deactivated successfully.
Nov 29 03:49:19 np0005539550 podman[395358]: 2025-11-29 08:49:19.187993687 +0000 UTC m=+0.112417995 container cleanup 0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:49:19 np0005539550 systemd[1]: libpod-conmon-0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f.scope: Deactivated successfully.
Nov 29 03:49:19 np0005539550 podman[395414]: 2025-11-29 08:49:19.264107972 +0000 UTC m=+0.048575967 container remove 0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.269 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[9e069875-edba-4d84-a235-72c6ccc6a91d]: (4, ('Sat Nov 29 08:49:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d (0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f)\n0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f\nSat Nov 29 08:49:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d (0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f)\n0117be4be36c98fe9c7e6a42b2dc748988293be49abec13eee1b7532b2e8767f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.272 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e388154d-4a8c-4515-b51d-74b42cf8eeb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.273 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ebad506-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.275 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539550 kernel: tap1ebad506-a0: left promiscuous mode
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.295 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.299 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ed22f9d2-273b-4d22-bbe3-c4e21d0fd047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.324 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd016cf-3c5d-40e0-8284-5de5cf7980e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.326 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[00f2da23-c5a7-4182-875d-5256e4f89d14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.349 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[839ecbcc-a9c5-428b-918e-8ec1a60dd422]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 918070, 'reachable_time': 29359, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395433, 'error': None, 'target': 'ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.351 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:49:19 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:19.352 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d15347-a014-439d-a7ac-4104ffa539e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:19 np0005539550 systemd[1]: run-netns-ovnmeta\x2d1ebad506\x2daf98\x2d4cc8\x2da8ed\x2d1212b7b6ee2d.mount: Deactivated successfully.
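The two privsep replies above are neutron's ip_lib dumping the loopback link inside the ovnmeta namespace and then removing the namespace itself; neutron.privileged.agent.linux.ip_lib drives pyroute2 for both steps. A minimal sketch of the same sequence, assuming pyroute2 is installed and the namespace still exists:

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-1ebad506-af98-4cc8-a8ed-1212b7b6ee2d'

    # Dump the links inside the namespace (the big IFLA_* reply above is one such
    # message, for 'lo'), then remove the namespace, which is what produces the
    # "Namespace ... deleted" line and the systemd mount deactivation.
    with NetNS(ns_name) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link.get_attr('IFLA_OPERSTATE'))
    netns.remove(ns_name)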
Nov 29 03:49:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
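The anonymous "HEAD / HTTP/1.0" requests that recur at ~2 s intervals from 192.168.122.100 and .102 are load-balancer health probes against radosgw's beast frontend. A sketch of an equivalent probe; the target host and port are assumptions, as the access log does not show the listening port:

    import http.client

    # Host/port assumed; substitute the rgw beast endpoint for this node.
    conn = http.client.HTTPConnection('localhost', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 while radosgw is up, as logged above
    conn.close()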
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.608 257641 INFO nova.virt.libvirt.driver [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Deleting instance files /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12_del#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.610 257641 INFO nova.virt.libvirt.driver [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Deletion of /var/lib/nova/instances/8dae1bd9-84b7-4bad-95d7-218720568a12_del complete#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.687 257641 INFO nova.compute.manager [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.688 257641 DEBUG oslo.service.loopingcall [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.688 257641 DEBUG nova.compute.manager [-] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.689 257641 DEBUG nova.network.neutron [-] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.857 257641 DEBUG nova.compute.manager [req-3333bf70-8ea9-463d-acb5-4baa08e63cca req-6ed09101-2398-42e4-975c-a10dbe88a720 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-vif-unplugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.857 257641 DEBUG oslo_concurrency.lockutils [req-3333bf70-8ea9-463d-acb5-4baa08e63cca req-6ed09101-2398-42e4-975c-a10dbe88a720 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.858 257641 DEBUG oslo_concurrency.lockutils [req-3333bf70-8ea9-463d-acb5-4baa08e63cca req-6ed09101-2398-42e4-975c-a10dbe88a720 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.858 257641 DEBUG oslo_concurrency.lockutils [req-3333bf70-8ea9-463d-acb5-4baa08e63cca req-6ed09101-2398-42e4-975c-a10dbe88a720 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.859 257641 DEBUG nova.compute.manager [req-3333bf70-8ea9-463d-acb5-4baa08e63cca req-6ed09101-2398-42e4-975c-a10dbe88a720 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] No waiting events found dispatching network-vif-unplugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:19 np0005539550 nova_compute[257631]: 2025-11-29 08:49:19.859 257641 DEBUG nova.compute.manager [req-3333bf70-8ea9-463d-acb5-4baa08e63cca req-6ed09101-2398-42e4-975c-a10dbe88a720 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-vif-unplugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
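The acquire/release pair around pop_instance_event above is nova serializing external events per instance with oslo.concurrency. The same pattern, sketched (lock name taken from the log; the body is a placeholder):

    from oslo_concurrency import lockutils

    instance_uuid = '8dae1bd9-84b7-4bad-95d7-218720568a12'

    # nova.compute.manager takes "<uuid>-events" before touching the per-instance
    # event dict; with no waiter registered, the vif-unplugged event above is
    # dispatched to the handler directly ("No waiting events found").
    with lockutils.lock(f'{instance_uuid}-events'):
        pass  # pop the waiting event here, if one was registered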
Nov 29 03:49:20 np0005539550 nova_compute[257631]: 2025-11-29 08:49:20.005 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002193221605713754 of space, bias 1.0, pg target 0.6579664817141262 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0054166230457844775 of space, bias 1.0, pg target 1.6249869137353432 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
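The pg_autoscaler lines above fit a simple model: the raw PG target is roughly capacity_ratio * bias * an overall PG budget (about 300 in this cluster: 0.002193... * 300 = 0.65797 for 'vms', matching the logged target), quantized to a power of two and clamped to the pool minimum. A rough sketch of that arithmetic; the function name, fixed budget, and clamping details are assumptions, and the real logic lives in the mgr pg_autoscaler module:

    def approx_pg_target(capacity_ratio: float, bias: float,
                         budget: int = 300, pg_min: int = 32) -> int:
        """Hypothetical approximation of the quantization seen in the log."""
        raw = capacity_ratio * bias * budget
        target = 1
        while target < raw:          # round up to the next power of two
            target *= 2
        return max(target, pg_min)   # never shrink below the pool minimum

    # 'vms' above: 0.002193... * 1.0 * 300 ~= 0.658 -> quantized to 32
    print(approx_pg_target(0.002193221605713754, 1.0))                # 32
    # 'cephfs.cephfs.meta': bias 4.0, tiny ratio -> clamped to its minimum of 16
    print(approx_pg_target(1.4540294062907128e-06, 4.0, pg_min=16))   # 16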
Nov 29 03:49:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 308 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 1.9 MiB/s wr, 182 op/s
Nov 29 03:49:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:20.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:21.123 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:21.125 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:49:21 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:21.126 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
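The DbSetCommand above is the metadata agent acknowledging nb_cfg 66 by stamping its Chassis_Private record. Roughly the same transaction through ovsdbapp's public API, as a sketch; the southbound socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Socket path assumed; the agent holds a long-lived connection like this.
    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Mirrors the logged txn: set external_ids on the agent's Chassis_Private row.
    api.db_set('Chassis_Private', 'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
               ('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'})
               ).execute(check_error=True)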
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.136 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.147 257641 DEBUG nova.network.neutron [-] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.174 257641 INFO nova.compute.manager [-] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Took 1.48 seconds to deallocate network for instance.#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.221 257641 DEBUG nova.compute.manager [req-5b4eb749-cb7c-4c2b-96b8-c69b25e81b56 req-03f42322-620d-44d6-9b35-b412bead63d7 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-vif-deleted-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.223 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.224 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.278 257641 DEBUG oslo_concurrency.processutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:21 np0005539550 podman[395436]: 2025-11-29 08:49:21.414964837 +0000 UTC m=+0.142699082 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
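The podman line above is a periodic container health check; per the embedded config_data, it mounts /var/lib/openstack/healthchecks/ovn_controller and runs /openstack/healthcheck inside the container. A sketch of triggering the same check by hand, driven via subprocess:

    import subprocess

    # `podman healthcheck run <name>` exits 0 when the container's configured
    # healthcheck passes, matching health_status=healthy in the journal above.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if result.returncode == 0 else 'unhealthy')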
Nov 29 03:49:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:21.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868819727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.738 257641 DEBUG oslo_concurrency.processutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
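nova's RBD image backend shells out to the exact `ceph df` command logged above and reads the JSON to refresh pool capacity. A self-contained sketch of that probe; the output keys are assumed from the `ceph df --format=json` schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # e.g. total vs. available bytes, which feed the DISK_GB inventory below
    print(stats['total_bytes'], stats['total_avail_bytes'])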
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.746 257641 DEBUG nova.compute.provider_tree [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.773 257641 DEBUG nova.scheduler.client.report [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
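Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, so the numbers above work out as:

    # Worked from the inventory dict in the log line above.
    vcpu_capacity = (8 - 0) * 4.0       # 32 schedulable VCPUs
    ram_capacity = (7680 - 512) * 1.0   # 7168 MB
    disk_capacity = (20 - 1) * 0.9      # 17.1 GB
    print(vcpu_capacity, ram_capacity, disk_capacity)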
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.815 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.853 257641 INFO nova.scheduler.client.report [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Deleted allocations for instance 8dae1bd9-84b7-4bad-95d7-218720568a12#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.943 257641 DEBUG oslo_concurrency.lockutils [None req-04c2d5d9-d8cb-4190-a2c6-26050d60b93b 01dac15e9e59419d98c66eb08a434612 56b9fd3ff0794089a4e086cd4ca0c36d - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.953 257641 DEBUG nova.compute.manager [req-08992b91-4726-44f3-84cf-fc1a2a4c228f req-8ebc16cd-870f-47da-a5c8-aca922c23cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received event network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.953 257641 DEBUG oslo_concurrency.lockutils [req-08992b91-4726-44f3-84cf-fc1a2a4c228f req-8ebc16cd-870f-47da-a5c8-aca922c23cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.954 257641 DEBUG oslo_concurrency.lockutils [req-08992b91-4726-44f3-84cf-fc1a2a4c228f req-8ebc16cd-870f-47da-a5c8-aca922c23cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.954 257641 DEBUG oslo_concurrency.lockutils [req-08992b91-4726-44f3-84cf-fc1a2a4c228f req-8ebc16cd-870f-47da-a5c8-aca922c23cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "8dae1bd9-84b7-4bad-95d7-218720568a12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.955 257641 DEBUG nova.compute.manager [req-08992b91-4726-44f3-84cf-fc1a2a4c228f req-8ebc16cd-870f-47da-a5c8-aca922c23cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] No waiting events found dispatching network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:21 np0005539550 nova_compute[257631]: 2025-11-29 08:49:21.955 257641 WARNING nova.compute.manager [req-08992b91-4726-44f3-84cf-fc1a2a4c228f req-8ebc16cd-870f-47da-a5c8-aca922c23cfa 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Received unexpected event network-vif-plugged-882b2916-31ab-4fc4-b3c9-6d4653c8d5a2 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:49:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 277 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.0 MiB/s wr, 175 op/s
Nov 29 03:49:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:22.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:23.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:24 np0005539550 nova_compute[257631]: 2025-11-29 08:49:24.114 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 161 op/s
Nov 29 03:49:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:24.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:25 np0005539550 nova_compute[257631]: 2025-11-29 08:49:25.008 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:25.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Nov 29 03:49:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:26.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:27.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 151 op/s
Nov 29 03:49:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:28.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:29 np0005539550 nova_compute[257631]: 2025-11-29 08:49:29.118 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:29.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:30 np0005539550 nova_compute[257631]: 2025-11-29 08:49:30.010 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 134 op/s
Nov 29 03:49:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:30.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:31.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:32 np0005539550 nova_compute[257631]: 2025-11-29 08:49:32.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 695 KiB/s rd, 2.5 MiB/s wr, 110 op/s
Nov 29 03:49:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:32 np0005539550 nova_compute[257631]: 2025-11-29 08:49:32.891 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:34 np0005539550 nova_compute[257631]: 2025-11-29 08:49:34.088 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406159.085729, 8dae1bd9-84b7-4bad-95d7-218720568a12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:34 np0005539550 nova_compute[257631]: 2025-11-29 08:49:34.088 257641 INFO nova.compute.manager [-] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:49:34 np0005539550 nova_compute[257631]: 2025-11-29 08:49:34.121 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:34 np0005539550 nova_compute[257631]: 2025-11-29 08:49:34.125 257641 DEBUG nova.compute.manager [None req-b1e305af-1e20-4d09-a633-00d28b3e0b3d - - - - - -] [instance: 8dae1bd9-84b7-4bad-95d7-218720568a12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 444 KiB/s rd, 2.2 MiB/s wr, 78 op/s
Nov 29 03:49:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:34.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:35 np0005539550 nova_compute[257631]: 2025-11-29 08:49:35.012 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:35.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 191 KiB/s rd, 773 KiB/s wr, 42 op/s
Nov 29 03:49:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:37.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 43 KiB/s wr, 5 op/s
Nov 29 03:49:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:39 np0005539550 nova_compute[257631]: 2025-11-29 08:49:39.124 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:39 np0005539550 podman[395544]: 2025-11-29 08:49:39.357313331 +0000 UTC m=+0.075020958 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:49:39 np0005539550 podman[395545]: 2025-11-29 08:49:39.387747373 +0000 UTC m=+0.103818089 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:49:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:39.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:40 np0005539550 nova_compute[257631]: 2025-11-29 08:49:40.014 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 297 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 43 KiB/s wr, 17 op/s
Nov 29 03:49:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:40.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:41.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 33 KiB/s rd, 21 KiB/s wr, 45 op/s
Nov 29 03:49:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:42.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.003 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.003 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.004 257641 INFO nova.compute.manager [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Unshelving#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.064 257641 INFO nova.virt.block_device [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Booting with volume 77422021-a8a1-4df0-9b4a-24ac0dd34ac3 at /dev/vda#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.230 257641 DEBUG os_brick.utils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.233 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.245 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.246 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[19a7ecc3-051d-4047-9953-312a3b368ad2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.248 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.256 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.257 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[3e44b309-2db0-4791-bbc1-b43f12e25e06]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.258 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.267 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.268 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[bae6476b-f9a6-46e4-860e-9c8689728acc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.269 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[40752571-9ff7-4f6c-9ddf-4346c00b295a]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.270 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.304 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.307 257641 DEBUG os_brick.initiator.connectors.lightos [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.308 257641 DEBUG os_brick.initiator.connectors.lightos [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.308 257641 DEBUG os_brick.initiator.connectors.lightos [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.309 257641 DEBUG os_brick.utils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
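The trace above ("==> get_connector_properties ... <== return (77ms)") brackets a single os-brick call; the multipathd, initiatorname.iscsi, and findmnt subprocesses in between are its probes. The same call, sketched with the arguments shown in the trace (running it for real needs the privsep helpers os-brick shells out to):

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com',
    )
    # Yields the dict logged above: iSCSI initiator IQN, NVMe host NQN, etc.
    print(props['initiator'], props['nqn'])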
Nov 29 03:49:43 np0005539550 nova_compute[257631]: 2025-11-29 08:49:43.309 257641 DEBUG nova.virt.block_device [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating existing volume attachment record: 50ac5631-c875-4dcf-b755-17af20132ed1 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Nov 29 03:49:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:43.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Nov 29 03:49:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Nov 29 03:49:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.126 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.422 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.423 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.430 257641 DEBUG nova.objects.instance [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'pci_requests' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.446 257641 DEBUG nova.objects.instance [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'numa_topology' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.466 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.467 257641 INFO nova.compute.claims [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:49:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 3.6 KiB/s wr, 63 op/s
Nov 29 03:49:44 np0005539550 nova_compute[257631]: 2025-11-29 08:49:44.743 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:44.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:45 np0005539550 nova_compute[257631]: 2025-11-29 08:49:45.016 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:49:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2039906572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:49:45 np0005539550 nova_compute[257631]: 2025-11-29 08:49:45.155 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
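To size the RBD-backed disk inventory, the compute service shells out to the ceph CLI (the command and its 0 exit status are logged above) rather than querying librados here. A sketch of the same call via oslo.concurrency; the JSON fields used below come from ceph's documented "df" output, but the summary printed is illustrative:

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print('%.1f GiB free of %.1f GiB raw' % (
        stats['total_avail_bytes'] / 2**30,
        stats['total_bytes'] / 2**30))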
Nov 29 03:49:45 np0005539550 nova_compute[257631]: 2025-11-29 08:49:45.162 257641 DEBUG nova.compute.provider_tree [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:49:45 np0005539550 nova_compute[257631]: 2025-11-29 08:49:45.179 257641 DEBUG nova.scheduler.client.report [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
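Placement derives each resource class's schedulable capacity from such an inventory as (total - reserved) * allocation_ratio, so the unchanged inventory above advertises 32 VCPU, 7168 MEMORY_MB and 17.1 DISK_GB. A worked check of those numbers:

    # Effective capacity Placement computes from the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print('%s: %.1f' % (rc, capacity))  # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 17.1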
Nov 29 03:49:45 np0005539550 nova_compute[257631]: 2025-11-29 08:49:45.203 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:45 np0005539550 nova_compute[257631]: 2025-11-29 08:49:45.394 257641 INFO nova.network.neutron [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating port ce1430a7-5573-42ed-950f-6b729838e557 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
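Binding the port to the destination host, as logged above, is an ordinary Neutron port update. A hedged equivalent using openstacksdk rather than Nova's internal Neutron client; the cloud name is an assumption:

    import openstack

    conn = openstack.connect(cloud='mycloud')   # cloud name is an assumption
    conn.network.update_port(
        'ce1430a7-5573-42ed-950f-6b729838e557',
        binding_host_id='compute-0.ctlplane.example.com',  # binding:host_id
        device_owner='compute:nova')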
Nov 29 03:49:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:45.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:46 np0005539550 nova_compute[257631]: 2025-11-29 08:49:46.170 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:46 np0005539550 nova_compute[257631]: 2025-11-29 08:49:46.171 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:46 np0005539550 nova_compute[257631]: 2025-11-29 08:49:46.171 257641 DEBUG nova.network.neutron [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:49:46 np0005539550 nova_compute[257631]: 2025-11-29 08:49:46.716 257641 DEBUG nova.compute.manager [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-changed-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:46 np0005539550 nova_compute[257631]: 2025-11-29 08:49:46.717 257641 DEBUG nova.compute.manager [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing instance network info cache due to event network-changed-ce1430a7-5573-42ed-950f-6b729838e557. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:49:46 np0005539550 nova_compute[257631]: 2025-11-29 08:49:46.718 257641 DEBUG oslo_concurrency.lockutils [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 78 op/s
Nov 29 03:49:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:46.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:47.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.589 257641 DEBUG nova.network.neutron [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.645 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.648 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.649 257641 INFO nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Creating image(s)#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.650 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.650 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Ensure instance console log exists: /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.651 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.651 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.652 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.656 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Start _get_guest_xml network_info=[{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': '50ac5631-c875-4dcf-b755-17af20132ed1', 'device_type': 'disk', 'delete_on_termination': True, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-77422021-a8a1-4df0-9b4a-24ac0dd34ac3', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '77422021-a8a1-4df0-9b4a-24ac0dd34ac3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8', 'attached_at': '', 'detached_at': '', 'volume_id': '77422021-a8a1-4df0-9b4a-24ac0dd34ac3', 'serial': '77422021-a8a1-4df0-9b4a-24ac0dd34ac3'}, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.658 257641 DEBUG oslo_concurrency.lockutils [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.658 257641 DEBUG nova.network.neutron [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing network info cache for port ce1430a7-5573-42ed-950f-6b729838e557 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.666 257641 WARNING nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.675 257641 DEBUG nova.virt.libvirt.host [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.676 257641 DEBUG nova.virt.libvirt.host [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.680 257641 DEBUG nova.virt.libvirt.host [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.681 257641 DEBUG nova.virt.libvirt.host [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
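The v1 probe fails and the v2 probe succeeds because this host mounts the unified cgroup hierarchy, which advertises its enabled controllers in a single file. A simplified stand-in for the v2 check (Nova's actual probe in nova.virt.libvirt.host is more involved):

    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        # The unified hierarchy lists enabled controllers, space separated,
        # in this one file; pure-v1 hosts do not have it.
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False

    print(has_cgroupsv2_cpu_controller())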
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.683 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.683 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.684 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.684 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.685 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.685 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.686 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.686 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.687 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.687 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.688 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.688 257641 DEBUG nova.virt.hardware [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
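With flavor and image limits all unset (the 0:0:0 lines above, capped only at 65536), every factorization of the vCPU count into sockets x cores x threads is admissible, and for one vCPU only 1:1:1 exists. A toy version of that enumeration, not Nova's actual search:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate sockets * cores * threads factorizations within the
        # (here effectively unconstrained) limits reported in the log.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -- "Got 1 possible topologies"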
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.689 257641 DEBUG nova.objects.instance [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.750 257641 DEBUG nova.storage.rbd_utils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:47 np0005539550 nova_compute[257631]: 2025-11-29 08:49:47.755 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:49:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006181935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.177 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.200 257641 DEBUG nova.virt.libvirt.vif [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-965324894',display_name='tempest-TestShelveInstance-server-965324894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-965324894',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-691552797',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-081uojmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:43Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=e82bbbb4-4776-445f-9d1d-aea28fbbcaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.200 257641 DEBUG nova.network.os_vif_util [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.201 257641 DEBUG nova.network.os_vif_util [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
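Once the VIF dict has been converted to a VIFOpenVSwitch object, the plug itself is delegated to the os-vif library. A hand-built sketch of that handoff; Nova derives these objects from its own network model (os_vif_util.nova_to_osvif_vif) instead, the field set below is abbreviated, and actually plugging requires root plus a running Open vSwitch:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the ovs/linux_bridge/noop plugins

    inst = instance_info.InstanceInfo(
        uuid='e82bbbb4-4776-445f-9d1d-aea28fbbcaa8',
        name='instance-000000dc')
    net = network.Network(id='7e328485-18b8-4dc7-b012-0dd256b9b97f',
                          bridge='br-int')
    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id='ce1430a7-5573-42ed-950f-6b729838e557')
    v = vif.VIFOpenVSwitch(
        id='ce1430a7-5573-42ed-950f-6b729838e557',
        address='fa:16:3e:0a:48:c9',
        bridge_name='br-int',
        vif_name='tapce1430a7-55',
        network=net,
        port_profile=profile)

    os_vif.plug(v, inst)  # emits ovsdb transactions like those seen below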
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.202 257641 DEBUG nova.objects.instance [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'pci_devices' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.223 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <uuid>e82bbbb4-4776-445f-9d1d-aea28fbbcaa8</uuid>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <name>instance-000000dc</name>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestShelveInstance-server-965324894</nova:name>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:49:47</nova:creationTime>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:user uuid="14d446574294425e9bc89e596ea56dc9">tempest-TestShelveInstance-498716578-project-member</nova:user>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:project uuid="2e636ab14fe94059b82b9cbcf8831d87">tempest-TestShelveInstance-498716578</nova:project>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <nova:port uuid="ce1430a7-5573-42ed-950f-6b729838e557">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <entry name="serial">e82bbbb4-4776-445f-9d1d-aea28fbbcaa8</entry>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <entry name="uuid">e82bbbb4-4776-445f-9d1d-aea28fbbcaa8</entry>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-77422021-a8a1-4df0-9b4a-24ac0dd34ac3">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <serial>77422021-a8a1-4df0-9b4a-24ac0dd34ac3</serial>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:0a:48:c9"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <target dev="tapce1430a7-55"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/console.log" append="off"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <input type="keyboard" bus="usb"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:49:48 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:49:48 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:49:48 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:49:48 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
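After the XML dump, the driver hands the document to libvirt to define and launch the guest. A minimal equivalent with the libvirt-python bindings, assuming the <domain> document above was saved to domain.xml; error handling is omitted:

    import libvirt

    xml = open('domain.xml').read()    # the <domain> document dumped above

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)          # make the guest persistent
    dom.createWithFlags(0)             # and boot it
    print(dom.name(), dom.UUIDString())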
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.224 257641 DEBUG nova.compute.manager [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Preparing to wait for external event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.225 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.225 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.225 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.226 257641 DEBUG nova.virt.libvirt.vif [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-965324894',display_name='tempest-TestShelveInstance-server-965324894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-965324894',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-691552797',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-081uojmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:43Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=e82bbbb4-4776-445f-9d1d-aea28fbbcaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.226 257641 DEBUG nova.network.os_vif_util [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.227 257641 DEBUG nova.network.os_vif_util [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.227 257641 DEBUG os_vif [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.227 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.228 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.228 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.231 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.231 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce1430a7-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.231 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapce1430a7-55, col_values=(('external_ids', {'iface-id': 'ce1430a7-5573-42ed-950f-6b729838e557', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0a:48:c9', 'vm-uuid': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.233 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:48 np0005539550 NetworkManager[49039]: <info>  [1764406188.2348] manager: (tapce1430a7-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.235 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.241 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.243 257641 INFO os_vif [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55')#033[00m
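The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp commands batched into single OVSDB transactions (txn n=1, command idx 0 and 1). A sketch of issuing the same port transaction directly with ovsdbapp, assuming the default local OVSDB socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Default local OVSDB socket; an assumption for this host.
    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction carrying both commands, mirroring the txn above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapce1430a7-55', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapce1430a7-55',
            ('external_ids', {
                'iface-id': 'ce1430a7-5573-42ed-950f-6b729838e557',
                'attached-mac': 'fa:16:3e:0a:48:c9'})))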
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.316 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.316 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.317 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] No VIF found with MAC fa:16:3e:0a:48:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.317 257641 INFO nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Using config drive#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.348 257641 DEBUG nova.storage.rbd_utils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.385 257641 DEBUG nova.objects.instance [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.436 257641 DEBUG nova.objects.instance [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'keypairs' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 255 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 79 op/s
Nov 29 03:49:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:48.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.887 257641 INFO nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Creating config drive at /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config#033[00m
Nov 29 03:49:48 np0005539550 nova_compute[257631]: 2025-11-29 08:49:48.898 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ijfm30m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
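
A config drive is nothing more than an ISO 9660 image built from a staging directory of metadata files, which is why the whole step is a single mkisofs run. The same invocation through oslo.concurrency, arguments copied from the log (note that nova passes the publisher string as one argument even though the debug line renders it unquoted):

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2',        # the volume label cloud-init probes for
        '/tmp/tmp8ijfm30m')      # staging directory holding the metadata tree
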
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.003 257641 DEBUG nova.network.neutron [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updated VIF entry in instance network info cache for port ce1430a7-5573-42ed-950f-6b729838e557. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.004 257641 DEBUG nova.network.neutron [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.025 257641 DEBUG oslo_concurrency.lockutils [req-977030ab-b777-40d7-aec2-083d71b332eb req-4b568e8a-3ce2-424a-8587-f325ec50abb5 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.050 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ijfm30m" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.079 257641 DEBUG nova.storage.rbd_utils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] rbd image e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.083 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.352 257641 DEBUG oslo_concurrency.processutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.353 257641 INFO nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deleting local config drive /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8/disk.config because it was imported into RBD.#033[00m
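
The flow on RBD-backed computes is: probe for the image (the "does not exist" debug line), build the ISO locally, rbd-import it, then delete the local copy. The probe looks roughly like this with the ceph Python bindings, reusing the pool, client id and conf path from the import command (a sketch; nova's rbd_utils wraps this in its own connection handling):

    # Opening a nonexistent image raises rbd.ImageNotFound.
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_disk.config'):
            exists = True
    except rbd.ImageNotFound:
        exists = False       # the case logged above
    finally:
        ioctx.close()
        cluster.shutdown()
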
Nov 29 03:49:49 np0005539550 kernel: tapce1430a7-55: entered promiscuous mode
Nov 29 03:49:49 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:49Z|01027|binding|INFO|Claiming lport ce1430a7-5573-42ed-950f-6b729838e557 for this chassis.
Nov 29 03:49:49 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:49Z|01028|binding|INFO|ce1430a7-5573-42ed-950f-6b729838e557: Claiming fa:16:3e:0a:48:c9 10.100.0.8
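
ovn-controller claims the lport because the Interface's external_ids:iface-id written during the plug matches a Southbound Port_Binding whose options:requested-chassis names this host. One way to watch the claim land, assuming ovn-sbctl on this node can reach the Southbound DB with its default connection settings:

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=logical_port,chassis,up',
         'find', 'Port_Binding',
         'logical_port=ce1430a7-5573-42ed-950f-6b729838e557'],
        capture_output=True, text=True, check=True)
    print(out.stdout)   # chassis and up fill in once the claim completes
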
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.4366] manager: (tapce1430a7-55): new Tun device (/org/freedesktop/NetworkManager/Devices/447)
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.435 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.444 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.447 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.457 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.461 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.4651] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.4678] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.468 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:48:c9 10.100.0.8'], port_security=['fa:16:3e:0a:48:c9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '7', 'neutron:security_group_ids': '96207713-2918-4e0d-9816-6691655bac56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ce1430a7-5573-42ed-950f-6b729838e557) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.470 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ce1430a7-5573-42ed-950f-6b729838e557 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f bound to our chassis#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.472 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e328485-18b8-4dc7-b012-0dd256b9b97f#033[00m
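
The "Matched UPDATE" line above is ovsdbapp's event machinery: the agent registers row events against the Southbound Port_Binding table and reacts when a port flips to bound on its own chassis. The pattern, reduced to a sketch (this mirrors, but is not, the agent's actual class):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            super().__init__(('update',), 'Port_Binding', None)
            self.chassis_name = chassis_name

        def match_fn(self, event, row, old):
            # 'old' carries only the columns that changed; seeing
            # 'chassis' there means the binding itself moved.
            return (hasattr(old, 'chassis') and row.chassis
                    and row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)
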
Nov 29 03:49:49 np0005539550 systemd-machined[216673]: New machine qemu-119-instance-000000dc.
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.489 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ba2ca5-b462-4d20-bd8e-db1915a59f0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.491 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e328485-11 in ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.493 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e328485-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.493 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d6218da8-b85a-4844-83da-ede10684e54d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.495 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[74d15f53-ab8b-4f81-b19f-635e8bf9171b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
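
Behind those privsep replies the agent is building a veth pair: tap7e328485-10 stays in the root namespace to be plugged into br-int, while its peer tap7e328485-11 is pushed into the ovnmeta- namespace to carry 169.254.169.254. The same plumbing, sketched with pyroute2 (which neutron drives through privsep; run as root, names copied from the log):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f'
    netns.create(ns)     # raises if the namespace already exists
    ipr = IPRoute()
    ipr.link('add', ifname='tap7e328485-10', kind='veth',
             peer='tap7e328485-11')
    peer = ipr.link_lookup(ifname='tap7e328485-11')[0]
    ipr.link('set', index=peer, net_ns_fd=ns)   # move inner end into the ns
    ipr.link('set', index=ipr.link_lookup(ifname='tap7e328485-10')[0],
             state='up')
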
Nov 29 03:49:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:49.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.507 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[3edc86e3-c965-44ec-89b8-cf40a69bf4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 systemd[1]: Started Virtual Machine qemu-119-instance-000000dc.
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.532 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[bc736b3d-d2de-4fe2-a2d7-2318279a372b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 systemd-udevd[395733]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.5674] device (tapce1430a7-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.5686] device (tapce1430a7-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.570 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[1d71d2db-9489-48a5-934a-03201d097565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.5832] manager: (tap7e328485-10): new Veth device (/org/freedesktop/NetworkManager/Devices/450)
Nov 29 03:49:49 np0005539550 systemd-udevd[395739]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.582 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[41b4fbbf-eaae-4e40-92cb-314177e58ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.625 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[c59578de-ff80-4a14-94a2-f6efdd8c1b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.629 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[3a86dd59-7ac0-46e5-a369-d3e5e71a80e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.6526] device (tap7e328485-10): carrier: link connected
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.661 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[33a9fc5d-d04d-4b3b-bf8d-6de9d04d00c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.681 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd0d571-afe1-4fac-b436-ab7aa15253a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923731, 'reachable_time': 31883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395763, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.696 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.702 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[39cd4696-aeab-4a40-aa13-2cf8315cc0ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:c011'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 923731, 'tstamp': 923731}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395764, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.717 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.726 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc68ff4-1eb1-49cd-a3ba-ce3834aa455c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e328485-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:c0:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 300], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923731, 'reachable_time': 31883, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395765, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:49Z|01029|binding|INFO|Setting lport ce1430a7-5573-42ed-950f-6b729838e557 ovn-installed in OVS
Nov 29 03:49:49 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:49Z|01030|binding|INFO|Setting lport ce1430a7-5573-42ed-950f-6b729838e557 up in Southbound
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.728 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.760 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb02ae8-eee7-496a-a9fa-de0d0b30e7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.819 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[460407ae-d886-451d-a05c-f69fdbade2ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.820 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.821 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.821 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e328485-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:49 np0005539550 NetworkManager[49039]: <info>  [1764406189.8237] manager: (tap7e328485-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.823 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 kernel: tap7e328485-10: entered promiscuous mode
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.828 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.829 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e328485-10, col_values=(('external_ids', {'iface-id': '239aff46-c81c-4ca8-9f81-35eea8cc0198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:49:49 np0005539550 ovn_controller[148680]: 2025-11-29T08:49:49Z|01031|binding|INFO|Releasing lport 239aff46-c81c-4ca8-9f81-35eea8cc0198 from this chassis (sb_readonly=0)
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.830 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.845 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.846 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.847 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.849 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d55eb7-db67-49b6-a577-2e0ad9d7a070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.849 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/7e328485-18b8-4dc7-b012-0dd256b9b97f.pid.haproxy
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 7e328485-18b8-4dc7-b012-0dd256b9b97f
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:49:49 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:49:49.850 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'env', 'PROCESS_TAG=haproxy-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e328485-18b8-4dc7-b012-0dd256b9b97f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
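
The rendered configuration binds the metadata address inside the namespace, proxies to the agent's UNIX socket, and injects X-OVN-Network-ID so the agent can tell which datapath a request came from. Stripped of sudo and rootwrap, the spawn above reduces to roughly:

    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec',
         'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f',
         'env', 'PROCESS_TAG=haproxy-7e328485-18b8-4dc7-b012-0dd256b9b97f',
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/7e328485-18b8-4dc7-b012-0dd256b9b97f.conf'],
        check=True)   # haproxy detaches itself: 'daemon' is set in the config
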
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.908 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406189.907604, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.908 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Started (Lifecycle Event)#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.928 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.933 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406189.9078062, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.933 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.953 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.957 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:49 np0005539550 nova_compute[257631]: 2025-11-29 08:49:49.979 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
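
The numeric states in the sync message decode against nova.compute.power_state: the database still holds 4 (SHUTDOWN, left over from the shelve offload) while the hypervisor now reports 3 (PAUSED), because the guest has been created but not yet resumed:

    # nova.compute.power_state values referenced by the log line above.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(POWER_STATES[4], '->', POWER_STATES[3])   # DB: SHUTDOWN, VM: PAUSED
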
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.018 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:50 np0005539550 podman[395839]: 2025-11-29 08:49:50.226498765 +0000 UTC m=+0.050372422 container create 059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:49:50 np0005539550 systemd[1]: Started libpod-conmon-059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294.scope.
Nov 29 03:49:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Nov 29 03:49:50 np0005539550 podman[395839]: 2025-11-29 08:49:50.20031843 +0000 UTC m=+0.024192137 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:49:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Nov 29 03:49:50 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:49:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Nov 29 03:49:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3f2e9afd509683bb62293a4dd6eb3f1ef7622e98836979fb76b0898f52a7636/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:50 np0005539550 podman[395839]: 2025-11-29 08:49:50.32656833 +0000 UTC m=+0.150442027 container init 059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 03:49:50 np0005539550 podman[395839]: 2025-11-29 08:49:50.332350015 +0000 UTC m=+0.156223712 container start 059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:49:50 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [NOTICE]   (395858) : New worker (395860) forked
Nov 29 03:49:50 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [NOTICE]   (395858) : Loading success.
Nov 29 03:49:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Nov 29 03:49:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Nov 29 03:49:50 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Nov 29 03:49:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 227 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 3.7 KiB/s wr, 61 op/s
Nov 29 03:49:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:50.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.808 257641 DEBUG nova.compute.manager [req-701f6be1-cccd-4e1c-99c1-08c77de710ab req-cf423c38-48f3-4b1f-80b1-d7fa578eb3c2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.808 257641 DEBUG oslo_concurrency.lockutils [req-701f6be1-cccd-4e1c-99c1-08c77de710ab req-cf423c38-48f3-4b1f-80b1-d7fa578eb3c2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.808 257641 DEBUG oslo_concurrency.lockutils [req-701f6be1-cccd-4e1c-99c1-08c77de710ab req-cf423c38-48f3-4b1f-80b1-d7fa578eb3c2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.808 257641 DEBUG oslo_concurrency.lockutils [req-701f6be1-cccd-4e1c-99c1-08c77de710ab req-cf423c38-48f3-4b1f-80b1-d7fa578eb3c2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.809 257641 DEBUG nova.compute.manager [req-701f6be1-cccd-4e1c-99c1-08c77de710ab req-cf423c38-48f3-4b1f-80b1-d7fa578eb3c2 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Processing event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.809 257641 DEBUG nova.compute.manager [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
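
This pair of lines ("Received event" from Neutron's callback, "Instance event wait completed" from the spawning thread) is nova's prepare-then-wait handshake: the worker arms a waiter before plugging the VIF so the notification cannot be missed. Its shape, reduced to a threading sketch (nova's real version is ComputeVirtAPI.wait_for_instance_event and runs on eventlet):

    import threading

    events = {}   # (instance_uuid, event_name) -> threading.Event

    def prepare(instance, name):
        ev = threading.Event()
        events[(instance, name)] = ev
        return ev

    def deliver(instance, name):             # runs on network-vif-plugged
        ev = events.pop((instance, name), None)
        if ev:
            ev.set()
        else:
            print('unexpected event', name)  # cf. the WARNING at 08:49:52

    waiter = prepare('e82bbbb4', 'network-vif-plugged')
    # ... plug the VIF, start the guest ...
    deliver('e82bbbb4', 'network-vif-plugged')
    waiter.wait(timeout=300)

The second network-vif-plugged at 08:49:52, sent when OVN marks the port active, finds no armed waiter, which is exactly the "Received unexpected event" warning logged below.
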
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.813 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406190.8134434, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.814 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Resumed (Lifecycle Event)#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.815 257641 DEBUG nova.virt.libvirt.driver [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.818 257641 INFO nova.virt.libvirt.driver [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance spawned successfully.#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.819 257641 DEBUG nova.compute.manager [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.844 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.848 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.907 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:49:50 np0005539550 nova_compute[257631]: 2025-11-29 08:49:50.928 257641 DEBUG oslo_concurrency.lockutils [None req-9d3aea5e-b301-4e0b-8600-90f41bdcca5a 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 7.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:51.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:52 np0005539550 podman[395870]: 2025-11-29 08:49:52.35774992 +0000 UTC m=+0.095020870 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:49:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 147 op/s
Nov 29 03:49:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:52.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:52 np0005539550 nova_compute[257631]: 2025-11-29 08:49:52.917 257641 DEBUG nova.compute.manager [req-29f2bd74-df98-4266-8900-07bf25ec8df2 req-9bfe4dd6-f7a9-45b9-b5fc-a5caf69ac779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:49:52 np0005539550 nova_compute[257631]: 2025-11-29 08:49:52.917 257641 DEBUG oslo_concurrency.lockutils [req-29f2bd74-df98-4266-8900-07bf25ec8df2 req-9bfe4dd6-f7a9-45b9-b5fc-a5caf69ac779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:49:52 np0005539550 nova_compute[257631]: 2025-11-29 08:49:52.917 257641 DEBUG oslo_concurrency.lockutils [req-29f2bd74-df98-4266-8900-07bf25ec8df2 req-9bfe4dd6-f7a9-45b9-b5fc-a5caf69ac779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:49:52 np0005539550 nova_compute[257631]: 2025-11-29 08:49:52.918 257641 DEBUG oslo_concurrency.lockutils [req-29f2bd74-df98-4266-8900-07bf25ec8df2 req-9bfe4dd6-f7a9-45b9-b5fc-a5caf69ac779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:49:52 np0005539550 nova_compute[257631]: 2025-11-29 08:49:52.918 257641 DEBUG nova.compute.manager [req-29f2bd74-df98-4266-8900-07bf25ec8df2 req-9bfe4dd6-f7a9-45b9-b5fc-a5caf69ac779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] No waiting events found dispatching network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:49:52 np0005539550 nova_compute[257631]: 2025-11-29 08:49:52.918 257641 WARNING nova.compute.manager [req-29f2bd74-df98-4266-8900-07bf25ec8df2 req-9bfe4dd6-f7a9-45b9-b5fc-a5caf69ac779 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received unexpected event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 for instance with vm_state active and task_state None.#033[00m
Nov 29 03:49:53 np0005539550 nova_compute[257631]: 2025-11-29 08:49:53.232 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:49:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:53.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:53 np0005539550 nova_compute[257631]: 2025-11-29 08:49:53.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:49:53 np0005539550 nova_compute[257631]: 2025-11-29 08:49:53.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:49:53 np0005539550 nova_compute[257631]: 2025-11-29 08:49:53.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
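
_heal_instance_info_cache is an oslo.service periodic task that picks one instance per pass and force-refreshes its network info cache from Neutron. The registration pattern looks roughly like this (a sketch; the 60-second spacing is illustrative, nova's interval is configurable):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)   # illustrative interval
        def _heal_instance_info_cache(self, context):
            # refresh one instance's network info cache per pass
            pass

    Manager().run_periodic_tasks(context=None)
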
Nov 29 03:49:54 np0005539550 nova_compute[257631]: 2025-11-29 08:49:54.116 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:49:54 np0005539550 nova_compute[257631]: 2025-11-29 08:49:54.116 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:49:54 np0005539550 nova_compute[257631]: 2025-11-29 08:49:54.117 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 03:49:54 np0005539550 nova_compute[257631]: 2025-11-29 08:49:54.117 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:49:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 21 KiB/s wr, 143 op/s
Nov 29 03:49:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:54.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:55 np0005539550 nova_compute[257631]: 2025-11-29 08:49:55.024 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:55.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:49:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Nov 29 03:49:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Nov 29 03:49:55 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Nov 29 03:49:55 np0005539550 nova_compute[257631]: 2025-11-29 08:49:55.745 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:49:55 np0005539550 nova_compute[257631]: 2025-11-29 08:49:55.762 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:49:55 np0005539550 nova_compute[257631]: 2025-11-29 08:49:55.763 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:49:55 np0005539550 nova_compute[257631]: 2025-11-29 08:49:55.764 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:49:55 np0005539550 nova_compute[257631]: 2025-11-29 08:49:55.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
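[editor's note] The network_info blob cached at 08:49:55.745 above is plain JSON. A short sketch (shape assumed from that log line, not from Nova's own object model) that pulls the fixed and floating addresses out of it:

    def addresses(network_info):
        # One VIF -> subnets -> fixed IPs, each optionally carrying
        # floating IPs (exactly the nesting seen in the cached blob above).
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield ip['address'], [f['address'] for f in ip.get('floating_ips', [])]

    # Trimmed to the fields used here; values copied from the log line above.
    cached = [{'network': {'subnets': [{'ips': [{'address': '10.100.0.8',
               'floating_ips': [{'address': '192.168.122.185'}]}]}]}}]
    for fixed, floating in addresses(cached):
        print(fixed, floating)   # -> 10.100.0.8 ['192.168.122.185']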
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:49:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b9932e1d-ccd9-429e-a94e-d62c242117d6 does not exist
Nov 29 03:49:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 337745d9-536c-4aee-82b6-9234f75357fd does not exist
Nov 29 03:49:56 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 38729443-d0a8-47fc-ac4e-311db901605f does not exist
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:49:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 25 KiB/s wr, 191 op/s
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:49:56 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:49:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:56.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:57 np0005539550 podman[396222]: 2025-11-29 08:49:57.019547944 +0000 UTC m=+0.038403382 container create 68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 03:49:57 np0005539550 systemd[1]: Started libpod-conmon-68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f.scope.
Nov 29 03:49:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:49:57 np0005539550 podman[396222]: 2025-11-29 08:49:57.002387135 +0000 UTC m=+0.021242593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:57 np0005539550 podman[396222]: 2025-11-29 08:49:57.183727504 +0000 UTC m=+0.202582972 container init 68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 03:49:57 np0005539550 podman[396222]: 2025-11-29 08:49:57.19515633 +0000 UTC m=+0.214011768 container start 68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:49:57 np0005539550 interesting_borg[396238]: 167 167
Nov 29 03:49:57 np0005539550 systemd[1]: libpod-68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f.scope: Deactivated successfully.
Nov 29 03:49:57 np0005539550 conmon[396238]: conmon 68ec9258067cc79e911a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f.scope/container/memory.events
Nov 29 03:49:57 np0005539550 podman[396222]: 2025-11-29 08:49:57.317263216 +0000 UTC m=+0.336118754 container attach 68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:49:57 np0005539550 podman[396222]: 2025-11-29 08:49:57.321780159 +0000 UTC m=+0.340635637 container died 68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:49:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:49:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:49:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-81cb4ffb692042e4c837e0b1bf05228b035efb4e5c91e1ec17cdabf54d035c95-merged.mount: Deactivated successfully.
Nov 29 03:49:57 np0005539550 podman[396222]: 2025-11-29 08:49:57.666926178 +0000 UTC m=+0.685781626 container remove 68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:49:57 np0005539550 systemd[1]: libpod-conmon-68ec9258067cc79e911ac6e5c4578b7db89976bfe61369e312a521eb62bde62f.scope: Deactivated successfully.
Nov 29 03:49:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:57 np0005539550 podman[396264]: 2025-11-29 08:49:57.882196426 +0000 UTC m=+0.026573736 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:49:58 np0005539550 podman[396264]: 2025-11-29 08:49:58.096753627 +0000 UTC m=+0.241130847 container create 421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:49:58 np0005539550 systemd[1]: Started libpod-conmon-421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194.scope.
Nov 29 03:49:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:49:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c920270f70141a4f40bd06290f278cdd93320ffa0f509039e6a91dbcf8a39f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c920270f70141a4f40bd06290f278cdd93320ffa0f509039e6a91dbcf8a39f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c920270f70141a4f40bd06290f278cdd93320ffa0f509039e6a91dbcf8a39f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c920270f70141a4f40bd06290f278cdd93320ffa0f509039e6a91dbcf8a39f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c920270f70141a4f40bd06290f278cdd93320ffa0f509039e6a91dbcf8a39f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:49:58 np0005539550 podman[396264]: 2025-11-29 08:49:58.220304289 +0000 UTC m=+0.364681519 container init 421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 29 03:49:58 np0005539550 podman[396264]: 2025-11-29 08:49:58.226378161 +0000 UTC m=+0.370755381 container start 421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:49:58 np0005539550 podman[396264]: 2025-11-29 08:49:58.230157416 +0000 UTC m=+0.374534656 container attach 421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:49:58 np0005539550 nova_compute[257631]: 2025-11-29 08:49:58.250 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:49:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:49:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:49:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 153 op/s
Nov 29 03:49:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:49:58.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:58 np0005539550 nova_compute[257631]: 2025-11-29 08:49:58.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:49:58 np0005539550 nova_compute[257631]: 2025-11-29 08:49:58.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:49:59 np0005539550 happy_diffie[396280]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:49:59 np0005539550 happy_diffie[396280]: --> relative data size: 1.0
Nov 29 03:49:59 np0005539550 happy_diffie[396280]: --> All data devices are unavailable
Nov 29 03:49:59 np0005539550 systemd[1]: libpod-421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194.scope: Deactivated successfully.
Nov 29 03:49:59 np0005539550 podman[396264]: 2025-11-29 08:49:59.088155821 +0000 UTC m=+1.232533041 container died 421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:49:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-63c920270f70141a4f40bd06290f278cdd93320ffa0f509039e6a91dbcf8a39f-merged.mount: Deactivated successfully.
Nov 29 03:49:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:49:59
Nov 29 03:49:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:49:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:49:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log']
Nov 29 03:49:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:49:59 np0005539550 podman[396264]: 2025-11-29 08:49:59.502074461 +0000 UTC m=+1.646451681 container remove 421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_diffie, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:49:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:49:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:49:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:49:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:49:59 np0005539550 systemd[1]: libpod-conmon-421666a1530915919a5af120cf153a45e3d800ff5e8393cc591485fe3b6db194.scope: Deactivated successfully.
Nov 29 03:49:59 np0005539550 nova_compute[257631]: 2025-11-29 08:49:59.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:49:59 np0005539550 nova_compute[257631]: 2025-11-29 08:49:59.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.023 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:50:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.307 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.309 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.309 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.310 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.314 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
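[editor's note] The resource audit above shells out to ceph df to size the RBD-backed disk pool (the same command returns again at 08:50:00.880 and 08:50:01.323 below). A minimal sketch of the equivalent call, assuming oslo.concurrency is installed and the openstack keyring referenced by the command exists on the host; the JSON field names follow recent Ceph releases and are an assumption here:

    import json
    from oslo_concurrency import processutils

    # Same command as in the log line above; execute() returns (stdout, stderr).
    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    # Cluster-wide byte counters from which free/total disk is derived.
    print('total GiB:', stats['total_bytes'] / gib,
          'avail GiB:', stats['total_avail_bytes'] / gib)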
Nov 29 03:50:00 np0005539550 podman[396469]: 2025-11-29 08:50:00.687733468 +0000 UTC m=+0.040538515 container create 49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:50:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:00 np0005539550 systemd[1]: Started libpod-conmon-49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26.scope.
Nov 29 03:50:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 130 op/s
Nov 29 03:50:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:50:00 np0005539550 podman[396469]: 2025-11-29 08:50:00.75932096 +0000 UTC m=+0.112126027 container init 49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:50:00 np0005539550 podman[396469]: 2025-11-29 08:50:00.670009695 +0000 UTC m=+0.022814872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:00 np0005539550 podman[396469]: 2025-11-29 08:50:00.766013708 +0000 UTC m=+0.118818745 container start 49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:00 np0005539550 podman[396469]: 2025-11-29 08:50:00.769342561 +0000 UTC m=+0.122147608 container attach 49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:00 np0005539550 confident_ramanujan[396485]: 167 167
Nov 29 03:50:00 np0005539550 systemd[1]: libpod-49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26.scope: Deactivated successfully.
Nov 29 03:50:00 np0005539550 podman[396469]: 2025-11-29 08:50:00.771621858 +0000 UTC m=+0.124426905 container died 49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:50:00 np0005539550 systemd[1]: var-lib-containers-storage-overlay-754d2a922449f08fe13578508c4838ab4448795dd6f315e26e07e156e14cbc0e-merged.mount: Deactivated successfully.
Nov 29 03:50:00 np0005539550 podman[396469]: 2025-11-29 08:50:00.806178093 +0000 UTC m=+0.158983140 container remove 49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:50:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:00.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:00 np0005539550 systemd[1]: libpod-conmon-49ba44de9e5f7699d9b9b74641854cd11c69023b4cc23db4d143641d0a887f26.scope: Deactivated successfully.
Nov 29 03:50:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1029974931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.880 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.978 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:50:00 np0005539550 nova_compute[257631]: 2025-11-29 08:50:00.979 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 29 03:50:01 np0005539550 podman[396513]: 2025-11-29 08:50:01.008724793 +0000 UTC m=+0.059443469 container create a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:50:01 np0005539550 systemd[1]: Started libpod-conmon-a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149.scope.
Nov 29 03:50:01 np0005539550 podman[396513]: 2025-11-29 08:50:00.978125767 +0000 UTC m=+0.028844493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:01 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:50:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e76559f5d8bb312364782e8af4b291bf2e56eca110dfc985746e59191af293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e76559f5d8bb312364782e8af4b291bf2e56eca110dfc985746e59191af293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e76559f5d8bb312364782e8af4b291bf2e56eca110dfc985746e59191af293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:01 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e76559f5d8bb312364782e8af4b291bf2e56eca110dfc985746e59191af293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:01 np0005539550 podman[396513]: 2025-11-29 08:50:01.101613458 +0000 UTC m=+0.152332164 container init a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_gates, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:50:01 np0005539550 podman[396513]: 2025-11-29 08:50:01.110027488 +0000 UTC m=+0.160746164 container start a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_gates, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 03:50:01 np0005539550 podman[396513]: 2025-11-29 08:50:01.113802713 +0000 UTC m=+0.164521389 container attach a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.179 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.181 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3890MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.182 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.182 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.278 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.279 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.279 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.323 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:50:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:01.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:01 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]: {
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:    "0": [
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:        {
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "devices": [
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "/dev/loop3"
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            ],
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "lv_name": "ceph_lv0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "lv_size": "7511998464",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "name": "ceph_lv0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "tags": {
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.cluster_name": "ceph",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.crush_device_class": "",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.encrypted": "0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.osd_id": "0",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.type": "block",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:                "ceph.vdo": "0"
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            },
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "type": "block",
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:            "vg_name": "ceph_vg0"
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:        }
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]:    ]
Nov 29 03:50:01 np0005539550 flamboyant_gates[396530]: }
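[editor's note] The JSON printed by the flamboyant_gates container above matches the shape of `ceph-volume lvm list --format json`: a map of OSD id to a list of LV records, whose ceph.* tags carry the cluster/OSD identity cephadm uses when adopting OSDs. A small parsing sketch under that shape assumption:

    import json

    def summarize(lvm_list: str) -> None:
        # Top level maps OSD id -> list of LV records (shape as logged above).
        for osd_id, lvs in json.loads(lvm_list).items():
            for lv in lvs:
                tags = lv.get('tags', {})
                print(f"osd.{osd_id}: {lv['lv_path']} type={lv['type']} "
                      f"fsid={tags.get('ceph.osd_fsid')} "
                      f"devices={','.join(lv.get('devices', []))}")

    # Fed the container output above, this prints:
    # osd.0: /dev/ceph_vg0/ceph_lv0 type=block fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6 devices=/dev/loop3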
Nov 29 03:50:01 np0005539550 systemd[1]: libpod-a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149.scope: Deactivated successfully.
Nov 29 03:50:01 np0005539550 podman[396513]: 2025-11-29 08:50:01.903830797 +0000 UTC m=+0.954549473 container died a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:50:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-55e76559f5d8bb312364782e8af4b291bf2e56eca110dfc985746e59191af293-merged.mount: Deactivated successfully.
Nov 29 03:50:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859638094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.971 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:50:01 np0005539550 nova_compute[257631]: 2025-11-29 08:50:01.976 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:50:02 np0005539550 nova_compute[257631]: 2025-11-29 08:50:02.005 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
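[editor's note] The inventory reported to Placement above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked out with the exact values from that log line (arithmetic illustration only, not Nova code):

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1, consistent with the
    #    single 1-VCPU/128MB instance leaving free_vcpus=7 of 8 physical above.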
Nov 29 03:50:02 np0005539550 podman[396513]: 2025-11-29 08:50:02.020738764 +0000 UTC m=+1.071457440 container remove a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_gates, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:50:02 np0005539550 systemd[1]: libpod-conmon-a7e8045faa0848226732d54a586cd102f0197784c3b4e533068b69bed37bf149.scope: Deactivated successfully.
Nov 29 03:50:02 np0005539550 nova_compute[257631]: 2025-11-29 08:50:02.040 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:50:02 np0005539550 nova_compute[257631]: 2025-11-29 08:50:02.041 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:02 np0005539550 podman[396715]: 2025-11-29 08:50:02.675823189 +0000 UTC m=+0.055320334 container create 9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 03:50:02 np0005539550 systemd[1]: Started libpod-conmon-9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a.scope.
Nov 29 03:50:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.4 KiB/s wr, 46 op/s
Nov 29 03:50:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:50:02 np0005539550 podman[396715]: 2025-11-29 08:50:02.659509872 +0000 UTC m=+0.039007037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:02.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
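
The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and .102, repeating roughly every two seconds throughout this section, are load-balancer health probes against the radosgw beast frontend; a 200 with an empty body is all they need. A probe of the same shape in Python, with the port as a placeholder since the frontend's listen address is not shown in these lines:

    import http.client

    # HEAD / against an RGW beast frontend: any 200 means the gateway is serving.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)  # port is a placeholder
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)  # expect 200 with a zero-length body, as in the beast log lines
    conn.close()
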
Nov 29 03:50:02 np0005539550 podman[396715]: 2025-11-29 08:50:02.90876719 +0000 UTC m=+0.288264385 container init 9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:50:02 np0005539550 podman[396715]: 2025-11-29 08:50:02.92234435 +0000 UTC m=+0.301841495 container start 9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:50:02 np0005539550 peaceful_newton[396731]: 167 167
Nov 29 03:50:02 np0005539550 systemd[1]: libpod-9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a.scope: Deactivated successfully.
Nov 29 03:50:02 np0005539550 podman[396715]: 2025-11-29 08:50:02.972221618 +0000 UTC m=+0.351718803 container attach 9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:50:02 np0005539550 podman[396715]: 2025-11-29 08:50:02.973158192 +0000 UTC m=+0.352655347 container died 9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:50:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f61a7944d88898a44f57a084cf097db325b97bddf09ef383b9010b29e990fd50-merged.mount: Deactivated successfully.
Nov 29 03:50:03 np0005539550 podman[396715]: 2025-11-29 08:50:03.025392699 +0000 UTC m=+0.404889844 container remove 9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_newton, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:50:03 np0005539550 systemd[1]: libpod-conmon-9225011eb3d0e06dcb8c62dbdd33a39d28d98152451dfd8a4200b6f25b14a44a.scope: Deactivated successfully.
Nov 29 03:50:03 np0005539550 nova_compute[257631]: 2025-11-29 08:50:03.041 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:03 np0005539550 podman[396755]: 2025-11-29 08:50:03.201784304 +0000 UTC m=+0.045194592 container create 8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:50:03 np0005539550 systemd[1]: Started libpod-conmon-8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b.scope.
Nov 29 03:50:03 np0005539550 nova_compute[257631]: 2025-11-29 08:50:03.252 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:50:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af88815b978b13a05561d12f434f8b3d3dcf7576841b82176f6a6ded05b34151/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af88815b978b13a05561d12f434f8b3d3dcf7576841b82176f6a6ded05b34151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af88815b978b13a05561d12f434f8b3d3dcf7576841b82176f6a6ded05b34151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:50:03 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af88815b978b13a05561d12f434f8b3d3dcf7576841b82176f6a6ded05b34151/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
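
The four xfs warnings fire as podman bind-mounts paths into the fresh overlay: without the bigtime feature, xfs inode timestamps are 32-bit and top out at 0x7fffffff seconds after the epoch. That limit decodes to exactly the 2038 date the kernel is warning about:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t; xfs without bigtime stops here.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
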
Nov 29 03:50:03 np0005539550 podman[396755]: 2025-11-29 08:50:03.183279041 +0000 UTC m=+0.026689349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:50:03 np0005539550 podman[396755]: 2025-11-29 08:50:03.28668785 +0000 UTC m=+0.130098158 container init 8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:50:03 np0005539550 podman[396755]: 2025-11-29 08:50:03.295765587 +0000 UTC m=+0.139175895 container start 8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:50:03 np0005539550 podman[396755]: 2025-11-29 08:50:03.298905405 +0000 UTC m=+0.142315713 container attach 8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:50:03 np0005539550 ovn_controller[148680]: 2025-11-29T08:50:03Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0a:48:c9 10.100.0.8
Nov 29 03:50:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:03.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]: {
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]:        "osd_id": 0,
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]:        "type": "bluestore"
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]:    }
Nov 29 03:50:04 np0005539550 jovial_mcnulty[396772]: }
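
Where flamboyant_gates listed LVM metadata, jovial_mcnulty's JSON has the shape of "ceph-volume raw list" output: keyed by OSD uuid, each record carrying ceph_fsid, device, osd_id and type. A sketch inverting it into an osd_id-to-device map (structure inferred from the lines above; the filename is illustrative):

    import json

    with open("ceph-volume-raw-list.json") as f:  # hypothetical capture of the JSON above
        raw = json.load(f)

    # Build osd_id -> (device, objectstore type) from the uuid-keyed records.
    by_osd = {rec["osd_id"]: (rec["device"], rec["type"]) for rec in raw.values()}
    print(by_osd)  # {0: ('/dev/mapper/ceph_vg0-ceph_lv0', 'bluestore')}
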
Nov 29 03:50:04 np0005539550 systemd[1]: libpod-8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b.scope: Deactivated successfully.
Nov 29 03:50:04 np0005539550 podman[396755]: 2025-11-29 08:50:04.130838189 +0000 UTC m=+0.974248487 container died 8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:50:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-af88815b978b13a05561d12f434f8b3d3dcf7576841b82176f6a6ded05b34151-merged.mount: Deactivated successfully.
Nov 29 03:50:04 np0005539550 podman[396755]: 2025-11-29 08:50:04.192931083 +0000 UTC m=+1.036341371 container remove 8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 03:50:04 np0005539550 systemd[1]: libpod-conmon-8b8930dfac4eaf3d3d89146ecc54db725eea1c279c88ddb5d8f5c4ff4aec332b.scope: Deactivated successfully.
Nov 29 03:50:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:50:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:50:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 40243490-492e-4e34-b584-99555c50d45b does not exist
Nov 29 03:50:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5f660022-cfac-4fdf-b25a-9c678b4f1d8d does not exist
Nov 29 03:50:04 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e9cc8af0-f089-4fb6-8ae0-ae17cb21e129 does not exist
Nov 29 03:50:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 227 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.3 MiB/s wr, 90 op/s
Nov 29 03:50:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:04.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:05 np0005539550 nova_compute[257631]: 2025-11-29 08:50:05.025 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:05 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:50:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:05 np0005539550 nova_compute[257631]: 2025-11-29 08:50:05.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 227 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 102 op/s
Nov 29 03:50:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:06.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:08 np0005539550 nova_compute[257631]: 2025-11-29 08:50:08.254 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:50:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:50:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:50:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:50:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:50:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 29 03:50:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:08.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:50:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:50:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:50:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:50:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:50:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:09.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:10 np0005539550 nova_compute[257631]: 2025-11-29 08:50:10.026 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:10 np0005539550 podman[396860]: 2025-11-29 08:50:10.32551102 +0000 UTC m=+0.059544651 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 03:50:10 np0005539550 podman[396859]: 2025-11-29 08:50:10.334835033 +0000 UTC m=+0.072716651 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
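
Each podman health_status record embeds the container's full config_data, including the healthcheck stanza ('mount': ..., 'test': '/openstack/healthcheck') and the volume list in source:destination:options form. A small sketch splitting two of those specs, handy when auditing what a healthcheck can actually see; the spec strings are copied from the log, the helper itself is illustrative:

    # Podman/Kolla volume specs are "source:destination[:options]".
    volumes = [
        "/run/openvswitch:/run/openvswitch:z",
        "/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z",
    ]
    for spec in volumes:
        src, dst, *opts = spec.split(":")
        print(f"{src} -> {dst} ({opts[0] if opts else 'rw'})")
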
Nov 29 03:50:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Nov 29 03:50:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:10.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:10 np0005539550 nova_compute[257631]: 2025-11-29 08:50:10.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:11.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:11.610 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:11 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:11.611 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
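
The "Delaying updating chassis table for 6 seconds" line shows the metadata agent smearing its acknowledgement of the new nb_cfg (67, per the SB_Global update just above) instead of writing back immediately; with many chassis this avoids a synchronized stampede on the southbound DB, and the DbSetCommand at 08:50:17 below is that delayed write landing. A toy version of the pattern, with the delay bound an assumption rather than neutron's actual constant:

    import random
    import threading

    def delayed_ack(write_back, max_delay=10):
        # Spread chassis write-backs over [0, max_delay] seconds (bound is illustrative).
        delay = random.randint(0, max_delay)
        print(f"Delaying updating chassis table for {delay} seconds")
        threading.Timer(delay, write_back).start()

    delayed_ack(lambda: print("external_ids['neutron:ovn-metadata-sb-cfg'] = '67'"))
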
Nov 29 03:50:11 np0005539550 nova_compute[257631]: 2025-11-29 08:50:11.660 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Nov 29 03:50:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:12.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:13 np0005539550 nova_compute[257631]: 2025-11-29 08:50:13.256 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:13.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 550 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Nov 29 03:50:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:14.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:15 np0005539550 nova_compute[257631]: 2025-11-29 08:50:15.028 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:15.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 744 KiB/s wr, 31 op/s
Nov 29 03:50:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:16.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:17.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:17.613 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:18 np0005539550 nova_compute[257631]: 2025-11-29 08:50:18.257 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 112 KiB/s rd, 735 KiB/s wr, 13 op/s
Nov 29 03:50:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:18.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:18.980 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:18.981 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:18.982 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:19.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:20 np0005539550 nova_compute[257631]: 2025-11-29 08:50:20.030 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005319748336590359 of space, bias 1.0, pg target 1.5959245009771077 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
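
Each pg_autoscaler pair reports the pool's share of raw capacity, its bias, and the ideal PG count before quantization; the "quantized to N" step rounds that ideal to the nearest power of two and clamps it to the pool's pg_num_min, which is why tiny targets like 1.6 still yield 32 while '.mgr' (pg_num_min 1) gets 1 and the cephfs metadata pool (pg_num_min 16) gets 16. A sketch of that step, with the default floor of 32 taken as an assumption consistent with the values above:

    def quantize_pgs(pg_target, pg_num_min=32):
        # Round the ideal target to the nearest power of two, then clamp to the
        # pool's pg_num_min (assumed default 32; '.mgr' uses 1, cephfs metadata 16).
        p = 1
        while p * 2 <= max(pg_target, 1):  # largest power of two <= target...
            p *= 2
        if pg_target / p > 1.5:            # ...bumped up when the next one is nearer
            p *= 2
        return max(p, pg_num_min)

    print(quantize_pgs(1.5959245))              # -> 32, as for 'volumes' above
    print(quantize_pgs(0.00616, pg_num_min=1))  # -> 1, as for '.mgr'
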
Nov 29 03:50:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 13 KiB/s wr, 44 op/s
Nov 29 03:50:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:20.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Nov 29 03:50:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:22.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:23 np0005539550 nova_compute[257631]: 2025-11-29 08:50:23.258 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:23 np0005539550 podman[396956]: 2025-11-29 08:50:23.366953136 +0000 UTC m=+0.095381568 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 03:50:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:23.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Nov 29 03:50:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:24.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:24 np0005539550 nova_compute[257631]: 2025-11-29 08:50:24.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:24 np0005539550 nova_compute[257631]: 2025-11-29 08:50:24.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 03:50:25 np0005539550 nova_compute[257631]: 2025-11-29 08:50:25.035 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.019 257641 DEBUG nova.compute.manager [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-changed-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.019 257641 DEBUG nova.compute.manager [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing instance network info cache due to event network-changed-ce1430a7-5573-42ed-950f-6b729838e557. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.019 257641 DEBUG oslo_concurrency.lockutils [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.020 257641 DEBUG oslo_concurrency.lockutils [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.020 257641 DEBUG nova.network.neutron [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Refreshing network info cache for port ce1430a7-5573-42ed-950f-6b729838e557 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.062 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.063 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.063 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.063 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.063 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.064 257641 INFO nova.compute.manager [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Terminating instance#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.065 257641 DEBUG nova.compute.manager [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
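
The Acquiring/acquired/released triplets around "compute_resources" and the instance UUID are oslo.concurrency's standard lock logging; nova wraps each critical section this way, so the journal records both wait time (waited 0.001s) and hold time (held 0.859s earlier in this section). The same pattern in miniature with the real lockutils API; the function body is illustrative, not nova's:

    from oslo_concurrency import lockutils

    # synchronized() emits the same Acquiring/acquired/released DEBUG lines
    # seen above, including the waited/held timings.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        pass  # critical section: the resource tracker update would go here

    update_available_resource()
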
Nov 29 03:50:26 np0005539550 kernel: tapce1430a7-55 (unregistering): left promiscuous mode
Nov 29 03:50:26 np0005539550 NetworkManager[49039]: <info>  [1764406226.2194] device (tapce1430a7-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.228 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:50:26Z|01032|binding|INFO|Releasing lport ce1430a7-5573-42ed-950f-6b729838e557 from this chassis (sb_readonly=0)
Nov 29 03:50:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:50:26Z|01033|binding|INFO|Setting lport ce1430a7-5573-42ed-950f-6b729838e557 down in Southbound
Nov 29 03:50:26 np0005539550 ovn_controller[148680]: 2025-11-29T08:50:26Z|01034|binding|INFO|Removing iface tapce1430a7-55 ovn-installed in OVS
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.231 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.237 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0a:48:c9 10.100.0.8'], port_security=['fa:16:3e:0a:48:c9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e82bbbb4-4776-445f-9d1d-aea28fbbcaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e636ab14fe94059b82b9cbcf8831d87', 'neutron:revision_number': '9', 'neutron:security_group_ids': '96207713-2918-4e0d-9816-6691655bac56', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d402cfd4-158a-4fe2-be8d-72cfa52ed799, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ce1430a7-5573-42ed-950f-6b729838e557) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.238 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ce1430a7-5573-42ed-950f-6b729838e557 in datapath 7e328485-18b8-4dc7-b012-0dd256b9b97f unbound from our chassis#033[00m
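
The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's row-event dispatch: the metadata agent registers an event class against the southbound Port_Binding table and is called back when a row changes. A simplified sketch of such a class (illustrative, not neutron's actual implementation):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Simplified sketch: react to Port_Binding row updates."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr printed in the log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries the previous values of the changed columns
            # (here: up=[True] and the old chassis reference).
            print('port %s up=%s' % (row.logical_port, row.up))
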
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.241 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e328485-18b8-4dc7-b012-0dd256b9b97f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.242 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d4076e59-95bd-4db6-8ff0-e2f415f966c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.243 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f namespace which is not needed anymore#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.249 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Nov 29 03:50:26 np0005539550 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d000000dc.scope: Consumed 14.928s CPU time.
Nov 29 03:50:26 np0005539550 systemd-machined[216673]: Machine qemu-119-instance-000000dc terminated.
Nov 29 03:50:26 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [NOTICE]   (395858) : haproxy version is 2.8.14-c23fe91
Nov 29 03:50:26 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [NOTICE]   (395858) : path to executable is /usr/sbin/haproxy
Nov 29 03:50:26 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [WARNING]  (395858) : Exiting Master process...
Nov 29 03:50:26 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [WARNING]  (395858) : Exiting Master process...
Nov 29 03:50:26 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [ALERT]    (395858) : Current worker (395860) exited with code 143 (Terminated)
Nov 29 03:50:26 np0005539550 neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f[395854]: [WARNING]  (395858) : All workers exited. Exiting... (0)
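
Exit code 143 in the ALERT above is the conventional 128+signal encoding, i.e. the haproxy worker was stopped by SIGTERM rather than crashing:

    import signal

    code = 143
    if code > 128:
        # 143 - 128 = 15 -> SIGTERM: a clean shutdown, not a crash.
        print(signal.Signals(code - 128).name)  # prints 'SIGTERM'
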
Nov 29 03:50:26 np0005539550 systemd[1]: libpod-059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294.scope: Deactivated successfully.
Nov 29 03:50:26 np0005539550 podman[397007]: 2025-11-29 08:50:26.375679684 +0000 UTC m=+0.043845298 container died 059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:50:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f3f2e9afd509683bb62293a4dd6eb3f1ef7622e98836979fb76b0898f52a7636-merged.mount: Deactivated successfully.
Nov 29 03:50:26 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294-userdata-shm.mount: Deactivated successfully.
Nov 29 03:50:26 np0005539550 podman[397007]: 2025-11-29 08:50:26.412005153 +0000 UTC m=+0.080170767 container cleanup 059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 03:50:26 np0005539550 systemd[1]: libpod-conmon-059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294.scope: Deactivated successfully.
Nov 29 03:50:26 np0005539550 podman[397038]: 2025-11-29 08:50:26.475030331 +0000 UTC m=+0.041501710 container remove 059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.480 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[7a6fd250-3fe6-4246-9d2f-c51e2e521227]: (4, ('Sat Nov 29 08:50:26 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294)\n059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294\nSat Nov 29 08:50:26 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f (059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294)\n059f1b276ca839867b8e7ea83a34212f232971124a079d77c53da72eff369294\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.482 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[4580fc27-0be2-4fec-882b-13b9bffa9a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.483 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e328485-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.485 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 kernel: tap7e328485-10: left promiscuous mode
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.544 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.545 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[01734284-60f6-4fb1-be47-e28d052c83fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.547 257641 INFO nova.virt.libvirt.driver [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Instance destroyed successfully.#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.547 257641 DEBUG nova.objects.instance [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lazy-loading 'resources' on Instance uuid e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.558 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[06af7ed9-bbca-4bf0-b8aa-7817f1a019de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.560 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e20c08-6d67-4523-a5e0-1b8a644f7da7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.572 257641 DEBUG nova.virt.libvirt.vif [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-29T08:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-965324894',display_name='tempest-TestShelveInstance-server-965324894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-965324894',id=220,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFD/+OETyA9Gkljsz2W5PflUlfWT5bS3tM7MSqY4jYxRfqsQn4zIu3YXrN04BT+xMXAEVtwQTmrwJ33WHrUnatG2eD4TuM6i7mJQ/IeECsfPQo6L0sbJAJwESalXFW3BvQ==',key_name='tempest-TestShelveInstance-691552797',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:49:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e636ab14fe94059b82b9cbcf8831d87',ramdisk_id='',reservation_id='r-081uojmw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestShelveInstance-498716578',owner_user_name='tempest-TestShelveInstance-498716578-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:49:50Z,user_data=None,user_id='14d446574294425e9bc89e596ea56dc9',uuid=e82bbbb4-4776-445f-9d1d-aea28fbbcaa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.572 257641 DEBUG nova.network.os_vif_util [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converting VIF {"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.573 257641 DEBUG nova.network.os_vif_util [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.574 257641 DEBUG os_vif [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.575 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.576 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce1430a7-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
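
Both DelPortCommand transactions (tap7e328485-10 earlier, tapce1430a7-55 here) go through ovsdbapp's Open_vSwitch API. A rough standalone equivalent; the socket path below is an assumption, not read from this host's configuration:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed endpoint

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Same shape as DelPortCommand(port=..., bridge=br-int, if_exists=True).
    api.del_port('tapce1430a7-55', bridge='br-int',
                 if_exists=True).execute(check_error=True)
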
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.577 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.578 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[e399ac6c-27b3-4bd1-b08a-58d111f538f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 923722, 'reachable_time': 29344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397064, 'error': None, 'target': 'ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.580 257641 INFO os_vif [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0a:48:c9,bridge_name='br-int',has_traffic_filtering=True,id=ce1430a7-5573-42ed-950f-6b729838e557,network=Network(7e328485-18b8-4dc7-b012-0dd256b9b97f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapce1430a7-55')#033[00m
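
The unplug sequence above uses os-vif's public entry points. A compact sketch filling in only the fields an OVS unplug needs, with values taken from the VIFOpenVSwitch repr in the log (object construction abbreviated; a real caller passes fully populated objects):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    my_vif = vif.VIFOpenVSwitch(
        id='ce1430a7-5573-42ed-950f-6b729838e557',
        address='fa:16:3e:0a:48:c9',
        bridge_name='br-int',
        vif_name='tapce1430a7-55',
        network=network.Network(id='7e328485-18b8-4dc7-b012-0dd256b9b97f'))
    inst = instance_info.InstanceInfo(
        uuid='e82bbbb4-4776-445f-9d1d-aea28fbbcaa8',
        name='tempest-TestShelveInstance-server-965324894')
    os_vif.unplug(my_vif, inst)  # same call path as the log lines above
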
Nov 29 03:50:26 np0005539550 systemd[1]: run-netns-ovnmeta\x2d7e328485\x2d18b8\x2d4dc7\x2db012\x2d0dd256b9b97f.mount: Deactivated successfully.
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.580 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 03:50:26 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:26.580 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[44be6fec-1a36-496f-bc12-78e73e78e43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
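
The remove_netns call above is neutron's privsep-wrapped helper; with the privilege separation stripped away it reduces to a single pyroute2 call (sketch; needs the privileges privsep normally provides):

    from pyroute2 import netns

    # Equivalent of neutron.privileged.agent.linux.ip_lib.remove_netns().
    netns.remove('ovnmeta-7e328485-18b8-4dc7-b012-0dd256b9b97f')
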
Nov 29 03:50:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 16 KiB/s wr, 76 op/s
Nov 29 03:50:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
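
The radosgw "beast" access-log lines recur throughout this capture: anonymous HEAD / probes every couple of seconds from 192.168.122.100 and .102, which look like load-balancer health checks. A hypothetical regex for pulling fields out of the format shown:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<size>\d+) .* '
        r'latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:50:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('status'), m.group('latency'))
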
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.840 257641 INFO nova.virt.libvirt.driver [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deleting instance files /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_del#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.841 257641 INFO nova.virt.libvirt.driver [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deletion of /var/lib/nova/instances/e82bbbb4-4776-445f-9d1d-aea28fbbcaa8_del complete#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.906 257641 DEBUG nova.compute.manager [req-52bf73ae-41a4-49fe-96b9-ddfbcbc0ded0 req-664cb871-c5ee-464f-8a80-e6a7f95258be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-unplugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.906 257641 DEBUG oslo_concurrency.lockutils [req-52bf73ae-41a4-49fe-96b9-ddfbcbc0ded0 req-664cb871-c5ee-464f-8a80-e6a7f95258be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.907 257641 DEBUG oslo_concurrency.lockutils [req-52bf73ae-41a4-49fe-96b9-ddfbcbc0ded0 req-664cb871-c5ee-464f-8a80-e6a7f95258be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.908 257641 DEBUG oslo_concurrency.lockutils [req-52bf73ae-41a4-49fe-96b9-ddfbcbc0ded0 req-664cb871-c5ee-464f-8a80-e6a7f95258be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
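
The Acquiring/acquired/released triplet above is oslo.concurrency's standard lock trace; the pattern behind it is simply (sketch):

    from oslo_concurrency import lockutils

    # Emits the DEBUG "Acquiring"/"acquired"/"released" lines seen above.
    with lockutils.lock('e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events'):
        pass  # pop and dispatch the instance event while holding the lock
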
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.908 257641 DEBUG nova.compute.manager [req-52bf73ae-41a4-49fe-96b9-ddfbcbc0ded0 req-664cb871-c5ee-464f-8a80-e6a7f95258be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] No waiting events found dispatching network-vif-unplugged-ce1430a7-5573-42ed-950f-6b729838e557 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.908 257641 DEBUG nova.compute.manager [req-52bf73ae-41a4-49fe-96b9-ddfbcbc0ded0 req-664cb871-c5ee-464f-8a80-e6a7f95258be 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-unplugged-ce1430a7-5573-42ed-950f-6b729838e557 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.943 257641 INFO nova.compute.manager [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.944 257641 DEBUG oslo.service.loopingcall [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.945 257641 DEBUG nova.compute.manager [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 03:50:26 np0005539550 nova_compute[257631]: 2025-11-29 08:50:26.945 257641 DEBUG nova.network.neutron [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 03:50:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:27.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:27 np0005539550 nova_compute[257631]: 2025-11-29 08:50:27.968 257641 DEBUG nova.network.neutron [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updated VIF entry in instance network info cache for port ce1430a7-5573-42ed-950f-6b729838e557. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:50:27 np0005539550 nova_compute[257631]: 2025-11-29 08:50:27.969 257641 DEBUG nova.network.neutron [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [{"id": "ce1430a7-5573-42ed-950f-6b729838e557", "address": "fa:16:3e:0a:48:c9", "network": {"id": "7e328485-18b8-4dc7-b012-0dd256b9b97f", "bridge": "br-int", "label": "tempest-TestShelveInstance-1062373600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e636ab14fe94059b82b9cbcf8831d87", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapce1430a7-55", "ovs_interfaceid": "ce1430a7-5573-42ed-950f-6b729838e557", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:27 np0005539550 nova_compute[257631]: 2025-11-29 08:50:27.973 257641 DEBUG nova.network.neutron [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:27 np0005539550 nova_compute[257631]: 2025-11-29 08:50:27.995 257641 DEBUG oslo_concurrency.lockutils [req-4880e0a2-54db-492d-83dc-129e138f962b req-0cc8c065-da6e-4695-b07a-e5ebdcdaa760 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:50:28 np0005539550 nova_compute[257631]: 2025-11-29 08:50:28.002 257641 INFO nova.compute.manager [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 29 03:50:28 np0005539550 nova_compute[257631]: 2025-11-29 08:50:28.127 257641 DEBUG nova.compute.manager [req-05d8a916-fbaf-418c-b917-cb76c37e4cef req-1ca048e4-ec8d-49c5-9591-afb5a2178211 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-deleted-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:28 np0005539550 nova_compute[257631]: 2025-11-29 08:50:28.260 257641 INFO nova.compute.manager [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Took 0.26 seconds to detach 1 volumes for instance.#033[00m
Nov 29 03:50:28 np0005539550 nova_compute[257631]: 2025-11-29 08:50:28.261 257641 DEBUG nova.compute.manager [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Deleting volume: 77422021-a8a1-4df0-9b4a-24ac0dd34ac3 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Nov 29 03:50:28 np0005539550 nova_compute[257631]: 2025-11-29 08:50:28.547 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:28 np0005539550 nova_compute[257631]: 2025-11-29 08:50:28.548 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:28 np0005539550 nova_compute[257631]: 2025-11-29 08:50:28.611 257641 DEBUG oslo_concurrency.processutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 16 KiB/s wr, 77 op/s
Nov 29 03:50:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:28.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.007 257641 DEBUG nova.compute.manager [req-e81bf3a6-b9e1-4709-8d69-2d691420aaca req-cabeb51c-b091-4930-9076-b022450a03f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.009 257641 DEBUG oslo_concurrency.lockutils [req-e81bf3a6-b9e1-4709-8d69-2d691420aaca req-cabeb51c-b091-4930-9076-b022450a03f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.013 257641 DEBUG oslo_concurrency.lockutils [req-e81bf3a6-b9e1-4709-8d69-2d691420aaca req-cabeb51c-b091-4930-9076-b022450a03f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.014 257641 DEBUG oslo_concurrency.lockutils [req-e81bf3a6-b9e1-4709-8d69-2d691420aaca req-cabeb51c-b091-4930-9076-b022450a03f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.014 257641 DEBUG nova.compute.manager [req-e81bf3a6-b9e1-4709-8d69-2d691420aaca req-cabeb51c-b091-4930-9076-b022450a03f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] No waiting events found dispatching network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.015 257641 WARNING nova.compute.manager [req-e81bf3a6-b9e1-4709-8d69-2d691420aaca req-cabeb51c-b091-4930-9076-b022450a03f8 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Received unexpected event network-vif-plugged-ce1430a7-5573-42ed-950f-6b729838e557 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 03:50:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/950176848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.095 257641 DEBUG oslo_concurrency.processutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
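
The ceph df run above (returned 0 in 0.484s) goes through oslo.concurrency's processutils. Reproducing it and reading the cluster totals out of the JSON (assumes the same client keyring and /etc/ceph/ceph.conf are present):

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # 'stats' holds cluster-wide totals; 'pools' the per-pool breakdown.
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])
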
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.102 257641 DEBUG nova.compute.provider_tree [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.196 257641 DEBUG nova.scheduler.client.report [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
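
The inventory dict above is what placement checks allocations against; usable capacity per resource class is (total - reserved) * allocation_ratio, so for this provider:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1
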
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.549 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.580 257641 INFO nova.scheduler.client.report [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Deleted allocations for instance e82bbbb4-4776-445f-9d1d-aea28fbbcaa8#033[00m
Nov 29 03:50:29 np0005539550 nova_compute[257631]: 2025-11-29 08:50:29.684 257641 DEBUG oslo_concurrency.lockutils [None req-11104ede-7bc9-4dd9-96a6-993cf1b92728 14d446574294425e9bc89e596ea56dc9 2e636ab14fe94059b82b9cbcf8831d87 - - default default] Lock "e82bbbb4-4776-445f-9d1d-aea28fbbcaa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:30 np0005539550 nova_compute[257631]: 2025-11-29 08:50:30.037 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 227 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 85 op/s
Nov 29 03:50:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:30.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:31 np0005539550 nova_compute[257631]: 2025-11-29 08:50:31.087 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:31 np0005539550 nova_compute[257631]: 2025-11-29 08:50:31.088 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 03:50:31 np0005539550 nova_compute[257631]: 2025-11-29 08:50:31.176 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
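
_run_pending_deletes is one of the periodic tasks driven by oslo.service, as the "Running periodic task" line above shows. The registration pattern looks roughly like this (sketch only; the spacing value is illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=600)  # illustrative interval
        def _run_pending_deletes(self, context):
            # scan for deleted instances whose files still need cleanup
            pass
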
Nov 29 03:50:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:31.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:31 np0005539550 nova_compute[257631]: 2025-11-29 08:50:31.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 186 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 84 op/s
Nov 29 03:50:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:32.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:33.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 196 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 587 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 29 03:50:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:34.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:35 np0005539550 nova_compute[257631]: 2025-11-29 08:50:35.040 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:35.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:35 np0005539550 nova_compute[257631]: 2025-11-29 08:50:35.865 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:35 np0005539550 nova_compute[257631]: 2025-11-29 08:50:35.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:36 np0005539550 nova_compute[257631]: 2025-11-29 08:50:36.094 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:36 np0005539550 nova_compute[257631]: 2025-11-29 08:50:36.620 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 604 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 29 03:50:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:37.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 506 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Nov 29 03:50:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:38.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:50:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1580023705' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:50:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:50:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1580023705' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:50:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:39.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:40 np0005539550 nova_compute[257631]: 2025-11-29 08:50:40.042 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 446 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Nov 29 03:50:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:40.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:41 np0005539550 podman[397165]: 2025-11-29 08:50:41.3256371 +0000 UTC m=+0.069274125 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd)
Nov 29 03:50:41 np0005539550 podman[397166]: 2025-11-29 08:50:41.369319313 +0000 UTC m=+0.103015910 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 03:50:41 np0005539550 nova_compute[257631]: 2025-11-29 08:50:41.503 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406226.50232, e82bbbb4-4776-445f-9d1d-aea28fbbcaa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:50:41 np0005539550 nova_compute[257631]: 2025-11-29 08:50:41.504 257641 INFO nova.compute.manager [-] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] VM Stopped (Lifecycle Event)#033[00m
Nov 29 03:50:41 np0005539550 nova_compute[257631]: 2025-11-29 08:50:41.526 257641 DEBUG nova.compute.manager [None req-e0c9c0bb-4c1d-459b-a5c0-4a4d95e4ea94 - - - - - -] [instance: e82bbbb4-4776-445f-9d1d-aea28fbbcaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:50:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:41.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
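The anonymous `HEAD / HTTP/1.0` hits from 192.168.122.100 and .102, recurring every second or two with 200/zero-byte responses, are load-balancer liveness probes against the radosgw beast frontend. A sketch of issuing the same probe by hand; the target host and port are assumptions, since the access log does not record the listening endpoint:

```python
import http.client

def probe_rgw(host: str, port: int, timeout: float = 2.0) -> int:
    # Same anonymous probe as the beast access-log entries: HEAD / and
    # read back only the status line (200 here means the gateway answers).
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().status
    finally:
        conn.close()

print(probe_rgw("np0005539550", 8080))   # hypothetical endpoint and port
```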
Nov 29 03:50:41 np0005539550 nova_compute[257631]: 2025-11-29 08:50:41.621 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 388 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Nov 29 03:50:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:42.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:43.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 1.0 MiB/s wr, 54 op/s
Nov 29 03:50:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:44.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:45 np0005539550 nova_compute[257631]: 2025-11-29 08:50:45.044 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:46 np0005539550 nova_compute[257631]: 2025-11-29 08:50:46.654 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 34 KiB/s wr, 23 op/s
Nov 29 03:50:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:47.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 31 KiB/s wr, 16 op/s
Nov 29 03:50:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:49.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:50 np0005539550 nova_compute[257631]: 2025-11-29 08:50:50.046 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 31 KiB/s wr, 16 op/s
Nov 29 03:50:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.220 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:51.220 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:50:51 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:51.222 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
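The "Delaying updating chassis table for 7 seconds" entry shows the agent deliberately spreading its response to the SB_Global nb_cfg bump (67 to 68) instead of writing back immediately, so a fleet of agents does not hit the southbound DB at once. A toy sketch of that jitter pattern; the bound and the randomness are assumptions about the mechanism, not the agent's actual code:

```python
import random
import threading

def schedule_chassis_update(update_fn, max_delay: float = 10.0) -> float:
    # Pick a per-agent random delay before acknowledging the new nb_cfg,
    # mirroring the "Delaying updating chassis table for N seconds" log line.
    delay = random.uniform(0.0, max_delay)
    threading.Timer(delay, update_fn).start()
    return delay

d = schedule_chassis_update(lambda: print("chassis external_ids updated"))
print(f"Delaying updating chassis table for {d:.0f} seconds")
```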
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.286377) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251286556, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1175, "num_deletes": 253, "total_data_size": 1818635, "memory_usage": 1848256, "flush_reason": "Manual Compaction"}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251302345, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 1796726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67246, "largest_seqno": 68420, "table_properties": {"data_size": 1791102, "index_size": 2954, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12452, "raw_average_key_size": 20, "raw_value_size": 1779702, "raw_average_value_size": 2893, "num_data_blocks": 131, "num_entries": 615, "num_filter_entries": 615, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406151, "oldest_key_time": 1764406151, "file_creation_time": 1764406251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 16028 microseconds, and 9191 cpu microseconds.
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.302443) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 1796726 bytes OK
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.302470) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.304447) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.304479) EVENT_LOG_v1 {"time_micros": 1764406251304473, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.304503) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 1813373, prev total WAL file size 1813373, number of live WAL files 2.
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.305617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(1754KB)], [152(11MB)]
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251305689, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 14337679, "oldest_snapshot_seqno": -1}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 10188 keys, 12486684 bytes, temperature: kUnknown
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251380843, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 12486684, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12421612, "index_size": 38527, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 269976, "raw_average_key_size": 26, "raw_value_size": 12243452, "raw_average_value_size": 1201, "num_data_blocks": 1458, "num_entries": 10188, "num_filter_entries": 10188, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406251, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.381307) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12486684 bytes
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.382973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.3 rd, 165.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 12.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(14.9) write-amplify(6.9) OK, records in: 10711, records dropped: 523 output_compression: NoCompression
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.383004) EVENT_LOG_v1 {"time_micros": 1764406251382990, "job": 94, "event": "compaction_finished", "compaction_time_micros": 75348, "compaction_time_cpu_micros": 30768, "output_level": 6, "num_output_files": 1, "total_output_size": 12486684, "num_input_records": 10711, "num_output_records": 10188, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251383804, "job": 94, "event": "table_file_deletion", "file_number": 154}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406251388128, "job": 94, "event": "table_file_deletion", "file_number": 152}
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.305424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.388198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.388204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.388207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.388210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:50:51 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:50:51.388213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
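The ceph-mon rocksdb burst above is one manual-compaction cycle: JOB 93 flushes the memtable to L0 table #154 (1,796,726 bytes), JOB 94 compacts #154 with L6 table #152 into #155 (write-amplify 6.9), then both inputs and the old WAL are deleted. The EVENT_LOG_v1 entries are embedded JSON, so the cycle can be mined straight from the journal; a sketch of such a parser (the regex is an assumption about the journald line prefix, not anything rocksdb provides):

```python
import json
import re

EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def rocksdb_events(lines):
    # Yield the JSON payload of every rocksdb EVENT_LOG_v1 journal line.
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

def summarize(lines):
    for ev in rocksdb_events(lines):
        if ev.get("event") == "flush_finished":
            print(f"job {ev['job']}: flush finished, lsm_state={ev['lsm_state']}")
        elif ev.get("event") == "compaction_finished":
            mib = ev["total_output_size"] / 2**20
            secs = ev["compaction_time_micros"] / 1e6
            print(f"job {ev['job']}: L{ev['output_level']} output "
                  f"{mib:.1f} MiB in {secs:.3f}s")
```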
Nov 29 03:50:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:51.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.613 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.614 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.633 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.656 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.739 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.740 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.751 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.751 257641 INFO nova.compute.claims [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 03:50:51 np0005539550 nova_compute[257631]: 2025-11-29 08:50:51.935 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:50:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2396200513' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.406 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
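That `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` subprocess is how the RBD-backed driver samples pool capacity while claiming resources for the new instance. A sketch of running the same command and reading the cluster-wide stats; the JSON key names (stats.total_bytes, stats.total_avail_bytes) follow the ceph df schema of recent releases and may differ on older ones:

```python
import json
import subprocess

def ceph_capacity(conf: str = "/etc/ceph/ceph.conf", user: str = "openstack"):
    # The exact command from the log; returns (total, available) bytes.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
    stats = json.loads(out)["stats"]
    return stats["total_bytes"], stats["total_avail_bytes"]

total, avail = ceph_capacity()
print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```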
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.416 257641 DEBUG nova.compute.provider_tree [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.439 257641 DEBUG nova.scheduler.client.report [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
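The inventory record above pins down what Placement will let the scheduler consume on this node: per resource class, effective capacity is (total - reserved) x allocation_ratio, the standard Placement capacity rule. A worked check against the logged values:

```python
inventory = {  # copied from the scheduler report client log line above
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} consumable")
# VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1 -- the ceilings Placement enforces.
```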
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.470 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.471 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.534 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.535 257641 DEBUG nova.network.neutron [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.563 257641 INFO nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.598 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.662 257641 INFO nova.virt.block_device [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Booting with volume 10605c29-d705-4bde-bb14-15733badfd18 at /dev/vda#033[00m
Nov 29 03:50:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 20 KiB/s wr, 16 op/s
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.858 257641 DEBUG nova.policy [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b576a51181b5425aa6e44a0eb0a22803', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 03:50:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:52.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.868 257641 DEBUG os_brick.utils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.871 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.891 268278 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.891 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[703a1745-d8a2-4625-9100-96db3c53b81d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.893 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.903 268278 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.903 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[df95c898-8197-45f9-98e4-ed6e3653386b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:babbd27d8a8', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.905 268278 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.918 268278 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.918 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd5faf3-35ff-4d40-bdea-930f58f933bf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.920 268278 DEBUG oslo.privsep.daemon [-] privsep: reply[dfed1048-f94c-45cb-9850-d5d71ea1cd52]: (4, '9851e351-ef5d-4a0c-9f85-d561f6a4210f') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.920 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.957 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.960 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.960 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.960 257641 DEBUG os_brick.initiator.connectors.lightos [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.961 257641 DEBUG os_brick.utils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:babbd27d8a8', 'do_local_attach': False, 'nvme_hostid': 'a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'system uuid': '9851e351-ef5d-4a0c-9f85-d561f6a4210f', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:a5e1c538-e584-4a91-80ec-e9f1f28a5aed', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Nov 29 03:50:52 np0005539550 nova_compute[257631]: 2025-11-29 08:50:52.961 257641 DEBUG nova.virt.block_device [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating existing volume attachment record: bf0cb26d-07c5-4f16-a847-e44ab361b6a4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
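The privsep exchanges at 08:50:52.871 through .918 show os-brick assembling the connector properties returned at .961: it probes multipathd, reads the iSCSI initiator name, and resolves the root filesystem source. A condensed sketch of the same probes; the commands are copied from the log, while the dict shape is illustrative rather than os-brick's exact return value:

```python
import subprocess

def run(*cmd) -> str:
    return subprocess.check_output(cmd, text=True).strip()

def connector_probe() -> dict:
    props = {}
    # "multipathd show status" succeeding is what marks multipath as usable.
    props["multipath"] = "path checker states" in run("multipathd", "show", "status")
    # /etc/iscsi/initiatorname.iscsi holds "InitiatorName=iqn...."
    initiator_line = run("cat", "/etc/iscsi/initiatorname.iscsi")
    props["initiator"] = initiator_line.split("=", 1)[1]
    # findmnt reports what backs / ("overlay" in this log).
    props["root_source"] = run("findmnt", "-v", "/", "-n", "-o", "SOURCE")
    return props

print(connector_probe())
```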
Nov 29 03:50:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:53.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:54 np0005539550 podman[397263]: 2025-11-29 08:50:54.337533676 +0000 UTC m=+0.090181599 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:50:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:50:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1609673868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:50:54 np0005539550 nova_compute[257631]: 2025-11-29 08:50:54.640 257641 DEBUG nova.network.neutron [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Successfully created port: ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 03:50:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 17 KiB/s wr, 15 op/s
Nov 29 03:50:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:54.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:54 np0005539550 nova_compute[257631]: 2025-11-29 08:50:54.941 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:54 np0005539550 nova_compute[257631]: 2025-11-29 08:50:54.942 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:50:54 np0005539550 nova_compute[257631]: 2025-11-29 08:50:54.942 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:50:54 np0005539550 nova_compute[257631]: 2025-11-29 08:50:54.976 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 03:50:54 np0005539550 nova_compute[257631]: 2025-11-29 08:50:54.976 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.001 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.003 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.004 257641 INFO nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Creating image(s)#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.004 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.005 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Ensure instance console log exists: /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.005 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.005 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.006 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.504 257641 DEBUG nova.network.neutron [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Successfully updated port: ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.527 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.528 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquired lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.528 257641 DEBUG nova.network.neutron [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 03:50:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:55.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.688 257641 DEBUG nova.compute.manager [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-changed-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.689 257641 DEBUG nova.compute.manager [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Refreshing instance network info cache due to event network-changed-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:50:55 np0005539550 nova_compute[257631]: 2025-11-29 08:50:55.689 257641 DEBUG oslo_concurrency.lockutils [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:50:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:50:56 np0005539550 nova_compute[257631]: 2025-11-29 08:50:56.636 257641 DEBUG nova.network.neutron [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 03:50:56 np0005539550 nova_compute[257631]: 2025-11-29 08:50:56.658 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 KiB/s rd, 255 B/s wr, 9 op/s
Nov 29 03:50:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:56.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:56 np0005539550 nova_compute[257631]: 2025-11-29 08:50:56.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:57.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.694 257641 DEBUG nova.network.neutron [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating instance_info_cache with network_info: [{"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.722 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Releasing lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.723 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Instance network_info: |[{"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.723 257641 DEBUG oslo_concurrency.lockutils [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.724 257641 DEBUG nova.network.neutron [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Refreshing network info cache for port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.727 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Start _get_guest_xml network_info=[{"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'attachment_id': 'bf0cb26d-07c5-4f16-a847-e44ab361b6a4', 'device_type': 'disk', 'delete_on_termination': False, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-10605c29-d705-4bde-bb14-15733badfd18', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '10605c29-d705-4bde-bb14-15733badfd18', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9249872d-f151-438a-bddd-e6b41e397647', 'attached_at': '', 'detached_at': '', 'volume_id': '10605c29-d705-4bde-bb14-15733badfd18', 'serial': '10605c29-d705-4bde-bb14-15733badfd18'}, 'mount_device': '/dev/vda', 'guest_format': None, 'boot_index': 0, 'disk_bus': 'virtio', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
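The block_device_mapping inside that _get_guest_xml dump carries the RBD connection_info Cinder handed back: the image name, the three monitor endpoints, and the cephx user (secret_uuid redacted in the log). A sketch of pulling out the pieces a libvirt network-disk definition needs from such a dict, with the values copied from the log:

```python
connection_info = {   # abridged from the log line above; secret stays redacted
    "driver_volume_type": "rbd",
    "data": {
        "name": "volumes/volume-10605c29-d705-4bde-bb14-15733badfd18",
        "hosts": ["192.168.122.100", "192.168.122.102", "192.168.122.101"],
        "ports": ["6789", "6789", "6789"],
        "auth_username": "openstack",
    },
}

data = connection_info["data"]
monitors = [f"{h}:{p}" for h, p in zip(data["hosts"], data["ports"])]
pool, image = data["name"].split("/", 1)
print(f"rbd {pool}/{image} via {', '.join(monitors)} as {data['auth_username']}")
```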
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.733 257641 WARNING nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
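This warning fires because the `socket` PCI NUMA affinity policy assumes each NUMA node maps to exactly one CPU socket, which does not hold on this virtualized host. A hypothetical helper expressing the same check (the numa_cell_sockets input shape is an assumption, not Nova's internal representation):

def socket_pci_numa_affinity_supported(numa_cell_sockets):
    # numa_cell_sockets: NUMA cell id -> set of socket ids seen in that cell.
    # The 'socket' policy is only meaningful when every cell is backed by
    # exactly one socket.
    return all(len(sockets) == 1 for sockets in numa_cell_sockets.values())

print(socket_pci_numa_affinity_supported({0: {0, 1}}))  # False -> the warning above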
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.741 257641 DEBUG nova.virt.libvirt.host [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.742 257641 DEBUG nova.virt.libvirt.host [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.746 257641 DEBUG nova.virt.libvirt.host [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.747 257641 DEBUG nova.virt.libvirt.host [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
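The two probes above look for a CPU controller first in the legacy cgroup v1 layout and then in the unified v2 hierarchy, which is what this host runs. A rough standalone analogue, assuming the standard /sys/fs/cgroup layout (Nova performs the v1 probe through libvirt rather than by path inspection):

import os

def has_cgroup_cpu_controller():
    v2_file = "/sys/fs/cgroup/cgroup.controllers"
    if os.path.exists(v2_file):
        # Unified (v2) hierarchy: controllers are listed in one file,
        # e.g. "cpuset cpu io memory ..." -> "CPU controller found on host".
        with open(v2_file) as f:
            return "cpu" in f.read().split()
    # Legacy (v1) hierarchy: each controller has its own mount point.
    return os.path.isdir("/sys/fs/cgroup/cpu")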
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.748 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.749 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T07:48:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='b4d0f3a6-e3dc-4216-aee8-148280e428cc',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.749 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.749 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.750 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.750 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.750 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.750 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.751 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.751 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.751 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.752 257641 DEBUG nova.virt.hardware [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
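With no flavor or image constraints (all limits at the 65536 default), the topology search just factorizes the vCPU count, which for one vCPU yields the single 1:1:1 result logged above. A compact sketch of that enumeration:

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    # Enumerate (sockets, cores, threads) triples whose product equals vcpus,
    # capped per dimension; limits above vcpus are effectively unbounded.
    found = []
    for s in range(1, min(max_sockets, vcpus) + 1):
        for c in range(1, min(max_cores, vcpus) + 1):
            for t in range(1, min(max_threads, vcpus) + 1):
                if s * c * t == vcpus:
                    found.append((s, c, t))
    return found

print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"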
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.789 257641 DEBUG nova.storage.rbd_utils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 9249872d-f151-438a-bddd-e6b41e397647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.794 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
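Nova shells out to `ceph mon dump` here to learn the monitor addresses that show up as the hosts/ports lists in the volume connection_info. A sketch of running and parsing the same command; the JSON field names ('mons', 'addr') match current Ceph releases but are an assumption here, and the address carries an 'ip:port/nonce' suffix that must be stripped:

import json
import subprocess

def get_mon_addrs(user="openstack", conf="/etc/ceph/ceph.conf"):
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf],
        check=True, capture_output=True, text=True).stdout
    hosts, ports = [], []
    for mon in json.loads(out)["mons"]:
        addr = mon["addr"].split("/")[0]   # drop the trailing /nonce
        host, port = addr.rsplit(":", 1)
        hosts.append(host)
        ports.append(port)
    return hosts, ports   # e.g. ['192.168.122.100', ...], ['6789', ...]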
Nov 29 03:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:57 np0005539550 nova_compute[257631]: 2025-11-29 08:50:57.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:58 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:50:58.226 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:50:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060019845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.252 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:50:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:50:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.290 257641 DEBUG nova.virt.libvirt.vif [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:50:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2004089128',display_name='tempest-TestVolumeBootPattern-server-2004089128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2004089128',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-qem3gv2n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:50:52Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=9249872d-f151-438a-bddd-e6b41e397647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.292 257641 DEBUG nova.network.os_vif_util [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.293 257641 DEBUG nova.network.os_vif_util [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:01:7c,bridge_name='br-int',has_traffic_filtering=True,id=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebfc0be1-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
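The conversion logged above maps the Neutron port dict onto an os-vif object; note the tap device name is simply "tap" plus the first 11 characters of the port UUID (hence tapebfc0be1-60). A toy version for type "ovs" ports only, with a dataclass standing in for the real VIFOpenVSwitch:

from dataclasses import dataclass

@dataclass
class OvsVif:
    id: str
    address: str
    bridge_name: str
    vif_name: str
    active: bool
    preserve_on_delete: bool

def nova_to_osvif(vif: dict) -> OvsVif:
    # Simplified: the real nova_to_osvif_vif dispatches on vif["type"].
    return OvsVif(
        id=vif["id"],
        address=vif["address"],
        bridge_name=vif["details"].get("bridge_name", vif["network"]["bridge"]),
        vif_name="tap" + vif["id"][:11],   # devname convention: tapebfc0be1-60
        active=vif.get("active", False),
        preserve_on_delete=vif.get("preserve_on_delete", False),
    )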
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.295 257641 DEBUG nova.objects.instance [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'pci_devices' on Instance uuid 9249872d-f151-438a-bddd-e6b41e397647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.310 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] End _get_guest_xml xml=<domain type="kvm">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <uuid>9249872d-f151-438a-bddd-e6b41e397647</uuid>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <name>instance-000000de</name>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <memory>131072</memory>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <vcpu>1</vcpu>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <metadata>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <nova:name>tempest-TestVolumeBootPattern-server-2004089128</nova:name>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <nova:creationTime>2025-11-29 08:50:57</nova:creationTime>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <nova:flavor name="m1.nano">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:memory>128</nova:memory>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:disk>1</nova:disk>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:swap>0</nova:swap>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:vcpus>1</nova:vcpus>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      </nova:flavor>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <nova:owner>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:user uuid="b576a51181b5425aa6e44a0eb0a22803">tempest-TestVolumeBootPattern-1614567902-project-member</nova:user>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:project uuid="b7ffcb23bac14ee49474df9aee5f7dae">tempest-TestVolumeBootPattern-1614567902</nova:project>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      </nova:owner>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <nova:ports>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <nova:port uuid="ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        </nova:port>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      </nova:ports>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </nova:instance>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  </metadata>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <sysinfo type="smbios">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <system>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <entry name="manufacturer">RDO</entry>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <entry name="product">OpenStack Compute</entry>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <entry name="serial">9249872d-f151-438a-bddd-e6b41e397647</entry>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <entry name="uuid">9249872d-f151-438a-bddd-e6b41e397647</entry>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <entry name="family">Virtual Machine</entry>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </system>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  </sysinfo>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <os>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <boot dev="hd"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <smbios mode="sysinfo"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  </os>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <features>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <acpi/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <apic/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <vmcoreinfo/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  </features>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <clock offset="utc">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <timer name="hpet" present="no"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  </clock>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <cpu mode="custom" match="exact">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <model>Nehalem</model>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  </cpu>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  <devices>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <disk type="network" device="cdrom">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <driver type="raw" cache="none"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="vms/9249872d-f151-438a-bddd-e6b41e397647_disk.config">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <target dev="sda" bus="sata"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <disk type="network" device="disk">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <source protocol="rbd" name="volumes/volume-10605c29-d705-4bde-bb14-15733badfd18">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.100" port="6789"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.102" port="6789"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <host name="192.168.122.101" port="6789"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      </source>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <auth username="openstack">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:        <secret type="ceph" uuid="b66774a7-56d9-5535-bd8c-681234404870"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      </auth>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <target dev="vda" bus="virtio"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <serial>10605c29-d705-4bde-bb14-15733badfd18</serial>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </disk>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <interface type="ethernet">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <mac address="fa:16:3e:c0:01:7c"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <mtu size="1442"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <target dev="tapebfc0be1-60"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </interface>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <serial type="pty">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <log file="/var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/console.log" append="off"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </serial>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <video>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <model type="virtio"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </video>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <input type="tablet" bus="usb"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <rng model="virtio">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <backend model="random">/dev/urandom</backend>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </rng>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <controller type="usb" index="0"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    <memballoon model="virtio">
Nov 29 03:50:58 np0005539550 nova_compute[257631]:      <stats period="10"/>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:    </memballoon>
Nov 29 03:50:58 np0005539550 nova_compute[257631]:  </devices>
Nov 29 03:50:58 np0005539550 nova_compute[257631]: </domain>
Nov 29 03:50:58 np0005539550 nova_compute[257631]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
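For a dump like the domain XML above, a few lines of ElementTree are enough to sanity-check what libvirt will be asked to build. A small inspection sketch (paste the <domain> document into xml_text):

import xml.etree.ElementTree as ET

def summarize_domain(xml_text):
    dom = ET.fromstring(xml_text)
    print("name:", dom.findtext("name"),
          "| vcpus:", dom.findtext("vcpu"),
          "| memory (KiB):", dom.findtext("memory"))
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        # For this guest: the sda cdrom is the config drive image in the vms
        # pool, and the vda disk is the Cinder volume in the volumes pool.
        print(disk.get("device"), "->", tgt.get("dev"),
              "| rbd:", src.get("name") if src is not None else None)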
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.312 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Preparing to wait for external event network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.313 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.313 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.314 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
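The lock choreography above registers an event *before* the VIF is plugged, so the network-vif-plugged notification that Neutron sends later cannot be missed. A threading-based analogue of the pattern (Nova itself uses eventlet primitives, not threading):

import threading

_events_lock = threading.Lock()
_events = {}   # (instance_uuid, event_name) -> threading.Event

def prepare_for_instance_event(instance_uuid, event_name):
    # Mirrors the acquire/create/release sequence in the three log lines above.
    with _events_lock:
        return _events.setdefault((instance_uuid, event_name), threading.Event())

def emit_event(instance_uuid, event_name):
    # Called when the external event (e.g. network-vif-plugged-<port>) arrives.
    with _events_lock:
        ev = _events.get((instance_uuid, event_name))
    if ev:
        ev.set()   # wakes the spawn path waiting on this event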
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.315 257641 DEBUG nova.virt.libvirt.vif [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T08:50:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2004089128',display_name='tempest-TestVolumeBootPattern-server-2004089128',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2004089128',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-qem3gv2n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T08:50:52Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=9249872d-f151-438a-bddd-e6b41e397647,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.315 257641 DEBUG nova.network.os_vif_util [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.316 257641 DEBUG nova.network.os_vif_util [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:01:7c,bridge_name='br-int',has_traffic_filtering=True,id=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebfc0be1-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.316 257641 DEBUG os_vif [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:01:7c,bridge_name='br-int',has_traffic_filtering=True,id=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebfc0be1-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.317 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.318 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.318 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.323 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.324 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebfc0be1-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.324 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapebfc0be1-60, col_values=(('external_ids', {'iface-id': 'ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:01:7c', 'vm-uuid': '9249872d-f151-438a-bddd-e6b41e397647'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
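The two ovsdbapp commands above are the programmatic equivalent of a single ovs-vsctl transaction: add the tap port to br-int (idempotently, may_exist=True) and stamp the Interface row with the external_ids that let OVN match the port to its logical port. A hedged sketch of that equivalent invocation:

import subprocess

def plug_ovs_port(bridge, dev, iface_id, mac, vm_uuid):
    subprocess.run([
        "ovs-vsctl",
        "--may-exist", "add-port", bridge, dev,   # AddPortCommand
        "--", "set", "Interface", dev,            # DbSetCommand
        f"external_ids:iface-id={iface_id}",
        "external_ids:iface-status=active",
        f"external_ids:attached-mac={mac}",
        f"external_ids:vm-uuid={vm_uuid}",
    ], check=True)

It is the iface-id external_id written here that ovn-controller matches against the logical port, producing the "Claiming lport" messages further down.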
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.326 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:58 np0005539550 NetworkManager[49039]: <info>  [1764406258.3272] manager: (tapebfc0be1-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.328 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.333 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.334 257641 INFO os_vif [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:01:7c,bridge_name='br-int',has_traffic_filtering=True,id=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebfc0be1-60')#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.541 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.542 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.542 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] No VIF found with MAC fa:16:3e:c0:01:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.543 257641 INFO nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Using config drive#033[00m
Nov 29 03:50:58 np0005539550 nova_compute[257631]: 2025-11-29 08:50:58.587 257641 DEBUG nova.storage.rbd_utils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 9249872d-f151-438a-bddd-e6b41e397647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:50:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:50:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:50:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:50:58.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:50:59
Nov 29 03:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.control']
Nov 29 03:50:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:50:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:50:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:50:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:50:59.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.704 257641 INFO nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Creating config drive at /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/disk.config#033[00m
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.710 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxxgekath execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.847 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxxgekath" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
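The config drive is just an ISO9660 image with the "config-2" volume label; the mkisofs flags above preserve long, dotted, lower-case names so the metadata paths survive. A trimmed sketch of the same build (the openstack/latest layout and meta_data.json name follow the standard config-drive convention; the content here is placeholder data):

import json
import os
import subprocess
import tempfile

def build_config_drive(iso_path, meta_data):
    with tempfile.TemporaryDirectory() as tmp:
        latest = os.path.join(tmp, "openstack", "latest")
        os.makedirs(latest)
        with open(os.path.join(latest, "meta_data.json"), "w") as f:
            json.dump(meta_data, f)
        # Same flag set as the logged command, minus -publisher and -quiet.
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-J", "-r", "-V", "config-2", tmp],
            check=True)

build_config_drive("/tmp/disk.config",
                   {"uuid": "9249872d-f151-438a-bddd-e6b41e397647"})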
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.899 257641 DEBUG nova.storage.rbd_utils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] rbd image 9249872d-f151-438a-bddd-e6b41e397647_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.904 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/disk.config 9249872d-f151-438a-bddd-e6b41e397647_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.953 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.954 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.996 257641 DEBUG nova.network.neutron [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updated VIF entry in instance network info cache for port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 03:50:59 np0005539550 nova_compute[257631]: 2025-11-29 08:50:59.997 257641 DEBUG nova.network.neutron [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating instance_info_cache with network_info: [{"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.039 257641 DEBUG oslo_concurrency.lockutils [req-f7206f64-8bf8-4e92-8c8b-ea9929f4b33b req-399e68cc-98af-44a4-9cef-5432d4062e99 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.050 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.098 257641 DEBUG oslo_concurrency.processutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/disk.config 9249872d-f151-438a-bddd-e6b41e397647_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.099 257641 INFO nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Deleting local config drive /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647/disk.config because it was imported into RBD.#033[00m
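Because this deployment backs instance storage with RBD, the freshly built ISO is pushed into the vms pool and the local copy removed, exactly as the two log lines above describe. The equivalent two steps:

import os
import subprocess

def import_config_drive(local_path, pool, image_name,
                        user="openstack", conf="/etc/ceph/ceph.conf"):
    # Same rbd import invocation as logged above.
    subprocess.run(
        ["rbd", "import", "--pool", pool, local_path, image_name,
         "--image-format=2", "--id", user, "--conf", conf],
        check=True)
    os.unlink(local_path)   # the local file is redundant once it lives in RBD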
Nov 29 03:51:00 np0005539550 kernel: tapebfc0be1-60: entered promiscuous mode
Nov 29 03:51:00 np0005539550 NetworkManager[49039]: <info>  [1764406260.1643] manager: (tapebfc0be1-60): new Tun device (/org/freedesktop/NetworkManager/Devices/453)
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.230 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:00Z|01035|binding|INFO|Claiming lport ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc for this chassis.
Nov 29 03:51:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:00Z|01036|binding|INFO|ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc: Claiming fa:16:3e:c0:01:7c 10.100.0.5
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.237 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.250 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:01:7c 10.100.0.5'], port_security=['fa:16:3e:c0:01:7c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9249872d-f151-438a-bddd-e6b41e397647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '2', 'neutron:security_group_ids': '94ca66f4-d521-4114-adbd-83f0454e0911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.252 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 bound to our chassis#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.254 158978 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2#033[00m
Nov 29 03:51:00 np0005539550 systemd-machined[216673]: New machine qemu-120-instance-000000de.
Nov 29 03:51:00 np0005539550 systemd-udevd[397432]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.268 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[112f59a8-09f3-4afa-b21f-4b7b57da734c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.269 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3d510715-d1 in ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
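Provisioning metadata for the datapath means giving the ovnmeta-<network> namespace a veth pair: the -d1 end lives inside the namespace while the -d0 end stays in the root namespace (NetworkManager spots it a few lines down) and is attached to br-int. The agent drives this through pyroute2 under privsep; a rough iproute2-level sketch of the same steps:

import subprocess

def provision_metadata_veth(ns, outer="tap3d510715-d0", inner="tap3d510715-d1"):
    def ip(*args):
        subprocess.run(("ip",) + args, check=True)
    ip("netns", "add", ns)
    ip("link", "add", outer, "type", "veth", "peer", "name", inner)
    ip("link", "set", inner, "netns", ns)   # inner end into ovnmeta-<net>
    ip("-n", ns, "link", "set", inner, "up")
    ip("link", "set", outer, "up")          # outer end is then plugged into br-int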
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.271 267230 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3d510715-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.272 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[0e84786f-a8c0-4f0b-91a8-4ca6b75a1505]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.272 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b34734b3-8261-49f0-9d96-59ef07cc981f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 NetworkManager[49039]: <info>  [1764406260.2778] device (tapebfc0be1-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 03:51:00 np0005539550 NetworkManager[49039]: <info>  [1764406260.2789] device (tapebfc0be1-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.285 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[094cbc2e-58ec-4d3c-9f2b-c23da891b096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 systemd[1]: Started Virtual Machine qemu-120-instance-000000de.
Nov 29 03:51:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:00Z|01037|binding|INFO|Setting lport ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc ovn-installed in OVS
Nov 29 03:51:00 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:00Z|01038|binding|INFO|Setting lport ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc up in Southbound
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.294 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.312 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ead1b036-9487-4e18-ba47-b607e481d73e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.346 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[32f8835d-d164-4e66-9470-76689543078d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 NetworkManager[49039]: <info>  [1764406260.3594] manager: (tap3d510715-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/454)
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.359 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4ad9e5-a462-477a-8ebe-b3c48a066a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.403 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[92353a9e-46cd-4726-a20e-4d35e50eb613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.406 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[2102854e-8ecd-4a94-a3e3-a9db31ba8852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 NetworkManager[49039]: <info>  [1764406260.4345] device (tap3d510715-d0): carrier: link connected
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.443 267470 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd449b8-ad85-43e5-a636-597ec2ecc3e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.462 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[70c6536f-4282-4b73-8679-e6f892829f0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930809, 'reachable_time': 39674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397465, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.484 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c45bc3-bc4f-44c8-a8eb-12804b952fde]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe63:6190'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 930809, 'tstamp': 930809}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397466, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.504 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[ef5b2abb-782d-416f-b22f-3209ff5cf866]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3d510715-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:63:61:90'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930809, 'reachable_time': 39674, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397467, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
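The two large replies above are RTM_NEWLINK dumps for the namespace end of the veth, serialized as plain dicts and lists on their way across the privsep channel. The useful facts (name, MAC, MTU, operational state, link kind) sit in the 'attrs' pair list. A small helper to pull them out, written against the exact shape shown above:

    # Extract key fields from an RTM_NEWLINK message in the dict form above.
    def attr(msg, name, default=None):
        for key, value in msg.get('attrs', []):   # 'attrs' is a list of [name, value] pairs
            if key == name:
                return value
        return default

    def summarize_link(msg):
        linkinfo = attr(msg, 'IFLA_LINKINFO', {'attrs': []})
        return {
            'ifname': attr(msg, 'IFLA_IFNAME'),
            'mac': attr(msg, 'IFLA_ADDRESS'),
            'mtu': attr(msg, 'IFLA_MTU'),
            'state': msg.get('state'),
            'kind': attr(linkinfo, 'IFLA_INFO_KIND'),
        }

    # For the dumps above this yields:
    # {'ifname': 'tap3d510715-d1', 'mac': 'fa:16:3e:63:61:90',
    #  'mtu': 1500, 'state': 'up', 'kind': 'veth'}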
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.541 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9aaddb-17bf-42ae-ada1-4f25d015e20f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.595 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a72781-5085-4025-82fd-fa0e55d147ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.596 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.597 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 03:51:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:00.597 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3d510715-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
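These two transactions, plus the DbSetCommand logged at 08:51:01.499 below, are the OVS side of the plug: remove any stale tap3d510715-d0 from br-ex (the "Transaction caused no change" record shows that if_exists delete was a no-op), add the port to br-int, then point its external_ids:iface-id at the OVN port so ovn-controller can bind it. The same three commands issued directly through ovsdbapp, batched into a single transaction for brevity; the local socket path is the usual default and an assumption here:

    # Sketch: the DelPort/AddPort/DbSet sequence from these records, via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/var/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap3d510715-d0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap3d510715-d0', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap3d510715-d0',
            ('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'})))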
Nov 29 03:51:00 np0005539550 NetworkManager[49039]: <info>  [1764406260.5997] manager: (tap3d510715-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.599 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:51:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:00.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:00 np0005539550 nova_compute[257631]: 2025-11-29 08:51:00.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:01 np0005539550 kernel: tap3d510715-d0: entered promiscuous mode
Nov 29 03:51:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:01.499 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3d510715-d0, col_values=(('external_ids', {'iface-id': '9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.500 257641 DEBUG nova.compute.manager [req-56487661-d056-4f90-ab2e-660e075b57af req-e6ccb746-c31d-429a-832b-7d5abfb26ccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:51:01 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:01Z|01039|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.502 257641 DEBUG oslo_concurrency.lockutils [req-56487661-d056-4f90-ab2e-660e075b57af req-e6ccb746-c31d-429a-832b-7d5abfb26ccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.502 257641 DEBUG oslo_concurrency.lockutils [req-56487661-d056-4f90-ab2e-660e075b57af req-e6ccb746-c31d-429a-832b-7d5abfb26ccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.503 257641 DEBUG oslo_concurrency.lockutils [req-56487661-d056-4f90-ab2e-660e075b57af req-e6ccb746-c31d-429a-832b-7d5abfb26ccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
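The Acquiring/acquired/released triplets around "<instance-uuid>-events" are oslo.concurrency's lock logging: each external-event handler serializes on a per-instance lock before popping its waiter, and the records report the wait and hold times. The same pattern, reduced to its essentials (lock name copied from the records; the body is illustrative):

    # Sketch: the lock pattern behind the three lockutils records above.
    from oslo_concurrency import lockutils

    with lockutils.lock('9249872d-f151-438a-bddd-e6b41e397647-events'):
        # pop the waiter matching network-vif-plugged-... here; the lock is
        # held only long enough to mutate the pending-event table
        pass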
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.503 257641 DEBUG nova.compute.manager [req-56487661-d056-4f90-ab2e-660e075b57af req-e6ccb746-c31d-429a-832b-7d5abfb26ccc 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Processing event network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.504 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.514 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:01.514 158978 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:01.515 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[94710e3d-8399-47db-9dac-857eb56f0f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:01.516 158978 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: global
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    log         /dev/log local0 debug
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    log-tag     haproxy-metadata-proxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    user        root
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    group       root
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    maxconn     1024
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    pidfile     /var/lib/neutron/external/pids/3d510715-dc99-4870-8ae9-ff599ae1a9c2.pid.haproxy
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    daemon
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: defaults
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    log global
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    mode http
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    option httplog
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    option dontlognull
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    option http-server-close
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    option forwardfor
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    retries                 3
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    timeout http-request    30s
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    timeout connect         30s
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    timeout client          32s
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    timeout server          32s
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    timeout http-keep-alive 30s
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: listen listener
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    bind 169.254.169.254:80
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]:    http-request add-header X-OVN-Network-ID 3d510715-dc99-4870-8ae9-ff599ae1a9c2
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 03:51:01 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:01.517 158978 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'env', 'PROCESS_TAG=haproxy-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3d510715-dc99-4870-8ae9-ff599ae1a9c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
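The agent writes the rendered configuration above to /var/lib/neutron/ovn-metadata-proxy/<network-id>.conf and then launches haproxy inside the ovnmeta namespace through rootwrap. Minus the sudo/rootwrap indirection, the logged command amounts to the sketch below; in this deployment "haproxy" is evidently itself a wrapper that starts the podman container seen a few records further on:

    # Sketch: the create_process invocation from the record above, sans rootwrap.
    import subprocess

    NET = '3d510715-dc99-4870-8ae9-ff599ae1a9c2'
    subprocess.run(
        ['ip', 'netns', 'exec', f'ovnmeta-{NET}',
         'env', f'PROCESS_TAG=haproxy-{NET}',
         'haproxy', '-f', f'/var/lib/neutron/ovn-metadata-proxy/{NET}.conf'],
        check=True)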
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.528 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.528 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.529 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.529 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.529 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
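update_available_resource measures Ceph-backed disk capacity by shelling out to ceph df, exactly the subprocess logged here (it returns at 08:51:02.010, 0.480s later). Reproduced with oslo.concurrency directly; the fields read afterwards follow the standard ceph df JSON layout:

    # Sketch: the 'ceph df' capacity probe from this record.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    free_gib = stats['total_avail_bytes'] / 1024**3   # feeds the free_disk figure below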
Nov 29 03:51:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.889 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406261.8887033, 9249872d-f151-438a-bddd-e6b41e397647 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.890 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] VM Started (Lifecycle Event)#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.893 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.898 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.902 257641 INFO nova.virt.libvirt.driver [-] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Instance spawned successfully.#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.903 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.928 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.933 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.934 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.934 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.935 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.935 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.935 257641 DEBUG nova.virt.libvirt.driver [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
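The six "Found default for ..." records persist the virtual-hardware choices the driver just made, so the instance keeps the same buses and device models across reboots, rebuilds and migrations even if nova's defaults later change. In outline; the storage location shown is a simplification of nova's instance system metadata, not its exact code:

    # Sketch: pinning the defaults from the six records above onto the instance.
    DEFAULTS = {
        'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio', 'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet', 'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(instance):
        for prop, value in DEFAULTS.items():
            # record a default only when the image did not define the property
            instance.system_metadata.setdefault('image_' + prop, value)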
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.940 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.976 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.976 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406261.8901916, 9249872d-f151-438a-bddd-e6b41e397647 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.977 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] VM Paused (Lifecycle Event)#033[00m
Nov 29 03:51:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288158530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.994 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.997 257641 DEBUG nova.virt.driver [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] Emitting event <LifecycleEvent: 1764406261.89746, 9249872d-f151-438a-bddd-e6b41e397647 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 03:51:01 np0005539550 nova_compute[257631]: 2025-11-29 08:51:01.998 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] VM Resumed (Lifecycle Event)#033[00m
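Between 08:51:01.889 and 08:51:01.998 the libvirt driver emits Started, Paused and Resumed lifecycle events while the spawn is still in flight; each time, the power-state sync sees task_state 'spawning' and backs off rather than fight the in-progress operation. The guard, in outline (simplified from these records, not nova's exact code):

    # Sketch: why each lifecycle event above ends in "pending task ... Skip."
    def handle_lifecycle_event(instance, vm_power_state):
        if instance.task_state is not None:        # e.g. 'spawning'
            return 'skip'                          # the spawn owns the instance
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state  # reconcile the DB with the hypervisor
            instance.save()
        return 'synced'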
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.006 257641 INFO nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Took 7.00 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.007 257641 DEBUG nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.010 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:02 np0005539550 podman[397562]: 2025-11-29 08:51:01.929883511 +0000 UTC m=+0.052708150 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.033 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.035 257641 DEBUG nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.132 257641 INFO nova.compute.manager [None req-e92455d5-90d8-4a6a-90a1-2d72f6b6cc9b - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.151 257641 INFO nova.compute.manager [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Took 10.46 seconds to build instance.#033[00m
Nov 29 03:51:02 np0005539550 podman[397562]: 2025-11-29 08:51:02.155360385 +0000 UTC m=+0.278185054 container create ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.170 257641 DEBUG oslo_concurrency.lockutils [None req-ae3a31e7-fd68-4ab7-b4a1-8cc232788555 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.171 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.171 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:51:02 np0005539550 systemd[1]: Started libpod-conmon-ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60.scope.
Nov 29 03:51:02 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:51:02 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd1f7795504e4c2fd61c0dd31f13341f7f204bf9db27aa8012c56510d8c67fe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:02 np0005539550 podman[397562]: 2025-11-29 08:51:02.382785417 +0000 UTC m=+0.505610076 container init ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.388 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.389 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3974MB free_disk=20.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.390 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.390 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:02 np0005539550 podman[397562]: 2025-11-29 08:51:02.392826159 +0000 UTC m=+0.515650788 container start ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 03:51:02 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[397581]: [NOTICE]   (397585) : New worker (397587) forked
Nov 29 03:51:02 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[397581]: [NOTICE]   (397585) : Loading success.
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.486 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 9249872d-f151-438a-bddd-e6b41e397647 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.486 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.486 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.505 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.544 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.544 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
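The inventory pushed to placement in the two records above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class; a quick check against the logged values:

    # Capacity implied by the inventory in the two records above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1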
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.563 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.587 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:51:02 np0005539550 nova_compute[257631]: 2025-11-29 08:51:02.646 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:51:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 116 KiB/s rd, 12 KiB/s wr, 5 op/s
Nov 29 03:51:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:02.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:51:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464681879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.076 257641 DEBUG nova.compute.manager [req-df2d41ea-1dc5-4f67-a4cf-b89a05cb615b req-8eecd2b2-9883-43cb-a2b8-44a00a897150 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.077 257641 DEBUG oslo_concurrency.lockutils [req-df2d41ea-1dc5-4f67-a4cf-b89a05cb615b req-8eecd2b2-9883-43cb-a2b8-44a00a897150 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.077 257641 DEBUG oslo_concurrency.lockutils [req-df2d41ea-1dc5-4f67-a4cf-b89a05cb615b req-8eecd2b2-9883-43cb-a2b8-44a00a897150 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.077 257641 DEBUG oslo_concurrency.lockutils [req-df2d41ea-1dc5-4f67-a4cf-b89a05cb615b req-8eecd2b2-9883-43cb-a2b8-44a00a897150 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.078 257641 DEBUG nova.compute.manager [req-df2d41ea-1dc5-4f67-a4cf-b89a05cb615b req-8eecd2b2-9883-43cb-a2b8-44a00a897150 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] No waiting events found dispatching network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.078 257641 WARNING nova.compute.manager [req-df2d41ea-1dc5-4f67-a4cf-b89a05cb615b req-8eecd2b2-9883-43cb-a2b8-44a00a897150 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received unexpected event network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc for instance with vm_state active and task_state None.#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.079 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.084 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.105 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.145 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.145 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:51:03 np0005539550 nova_compute[257631]: 2025-11-29 08:51:03.328 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:04 np0005539550 nova_compute[257631]: 2025-11-29 08:51:04.146 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 53 op/s
Nov 29 03:51:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:04.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:05 np0005539550 nova_compute[257631]: 2025-11-29 08:51:05.052 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:05.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 12 KiB/s wr, 53 op/s
Nov 29 03:51:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:06.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:51:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:51:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
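Each handle_command line above, paired with its audit entry, is a JSON mon command dispatched by the cephadm mgr module (mgr.compute-0.pdhsqi) to persist per-host state under config-key. The same kind of call can be issued from Python through librados; a sketch under the assumption of a reachable cluster and the default admin keyring (the value is a placeholder, since the audit log elides it):

    import json
    import rados

    # Assumes /etc/ceph/ceph.conf and a usable client keyring exist.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    cmd = {"prefix": "config-key set",
           "key": "mgr/cephadm/host.compute-2.devices.0",
           "val": "{}"}   # placeholder; the log does not show the value
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outs)
    cluster.shutdown()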
Nov 29 03:51:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:07.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7ac43e89-aa34-4670-813f-4d2e9723e940 does not exist
Nov 29 03:51:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dc19af87-6608-4d4e-8d3f-46a33fb009b4 does not exist
Nov 29 03:51:07 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2e310e24-9391-4013-bb96-e137862cc353 does not exist
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:51:07 np0005539550 nova_compute[257631]: 2025-11-29 08:51:07.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:07 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:51:08 np0005539550 nova_compute[257631]: 2025-11-29 08:51:08.331 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:51:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
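The rbd_support module is reloading its mirror-snapshot schedules for each pool (vms, volumes, backups, images); the empty start_after just means it scans from the beginning. The loaded schedules can be inspected from the CLI; a small wrapper sketch (pool name taken from the log lines above, the rest is the standard rbd command):

    import json
    import subprocess

    # Lists the schedules the MirrorSnapshotScheduleHandler loads above;
    # --recursive includes per-image entries under the pool.
    out = subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls",
         "--pool", "volumes", "--recursive", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(json.loads(out.stdout))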
Nov 29 03:51:08 np0005539550 podman[397894]: 2025-11-29 08:51:08.503706763 +0000 UTC m=+0.047750426 container create 203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:51:08 np0005539550 systemd[1]: Started libpod-conmon-203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423.scope.
Nov 29 03:51:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:51:08 np0005539550 podman[397894]: 2025-11-29 08:51:08.483430025 +0000 UTC m=+0.027473698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:08 np0005539550 podman[397894]: 2025-11-29 08:51:08.598473295 +0000 UTC m=+0.142516978 container init 203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 03:51:08 np0005539550 podman[397894]: 2025-11-29 08:51:08.612486486 +0000 UTC m=+0.156530129 container start 203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:51:08 np0005539550 podman[397894]: 2025-11-29 08:51:08.620146037 +0000 UTC m=+0.164189780 container attach 203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:51:08 np0005539550 systemd[1]: libpod-203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423.scope: Deactivated successfully.
Nov 29 03:51:08 np0005539550 NetworkManager[49039]: <info>  [1764406268.6245] manager: (patch-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/456)
Nov 29 03:51:08 np0005539550 nova_compute[257631]: 2025-11-29 08:51:08.624 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:08 np0005539550 NetworkManager[49039]: <info>  [1764406268.6258] manager: (patch-br-int-to-provnet-13a7b82e-0590-40fb-a89e-97ecddababc5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Nov 29 03:51:08 np0005539550 pedantic_rhodes[397911]: 167 167
Nov 29 03:51:08 np0005539550 conmon[397911]: conmon 203843f5f6bc5833cc0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423.scope/container/memory.events
Nov 29 03:51:08 np0005539550 podman[397894]: 2025-11-29 08:51:08.634152528 +0000 UTC m=+0.178196171 container died 203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9b5a897be5c0b340e22851484d13b9a8e5edcf2b6662aea0675e1260fa50635c-merged.mount: Deactivated successfully.
Nov 29 03:51:08 np0005539550 podman[397894]: 2025-11-29 08:51:08.696401316 +0000 UTC m=+0.240444979 container remove 203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_rhodes, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 03:51:08 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:08Z|01040|binding|INFO|Releasing lport 9b7ae33f-c1c7-4a13-97b3-0ae6cb40a1db from this chassis (sb_readonly=0)
Nov 29 03:51:08 np0005539550 nova_compute[257631]: 2025-11-29 08:51:08.702 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:08 np0005539550 systemd[1]: libpod-conmon-203843f5f6bc5833cc0c9fd287ef948a8786046d2e4e61a708573c1e6dc94423.scope: Deactivated successfully.
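The create, init, start, attach, died, remove sequence for pedantic_rhodes (a random podman name) is the footprint of a short-lived probe container that cephadm runs with --rm; its only output, "167 167", looks like the ceph uid/gid pair. The pattern reduces to a one-liner; a sketch with the image digest from the log (the stat command is only a guess at what produced that output):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm yields exactly the create/init/start/attach/died/remove events
    # seen above; the container runs one command and is discarded.
    subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "stat", "-c", "%u %g", "/var/lib/ceph"],  # hypothetical payload
        check=True,
    )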
Nov 29 03:51:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:51:08 np0005539550 podman[397939]: 2025-11-29 08:51:08.869805446 +0000 UTC m=+0.048499055 container create d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:08.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:08 np0005539550 systemd[1]: Started libpod-conmon-d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454.scope.
Nov 29 03:51:08 np0005539550 podman[397939]: 2025-11-29 08:51:08.846191875 +0000 UTC m=+0.024885544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:51:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e65a3f9c14ca2e55ce5f2be1dfb9553ef2bd9b13e49840b31c67ac5cec3ea49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e65a3f9c14ca2e55ce5f2be1dfb9553ef2bd9b13e49840b31c67ac5cec3ea49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e65a3f9c14ca2e55ce5f2be1dfb9553ef2bd9b13e49840b31c67ac5cec3ea49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e65a3f9c14ca2e55ce5f2be1dfb9553ef2bd9b13e49840b31c67ac5cec3ea49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e65a3f9c14ca2e55ce5f2be1dfb9553ef2bd9b13e49840b31c67ac5cec3ea49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:08 np0005539550 podman[397939]: 2025-11-29 08:51:08.961719407 +0000 UTC m=+0.140413096 container init d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:51:08 np0005539550 podman[397939]: 2025-11-29 08:51:08.975684286 +0000 UTC m=+0.154377895 container start d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:51:08 np0005539550 podman[397939]: 2025-11-29 08:51:08.979350258 +0000 UTC m=+0.158043907 container attach d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:51:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:51:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:51:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:51:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:51:09 np0005539550 nova_compute[257631]: 2025-11-29 08:51:09.203 257641 DEBUG nova.compute.manager [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-changed-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:51:09 np0005539550 nova_compute[257631]: 2025-11-29 08:51:09.205 257641 DEBUG nova.compute.manager [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Refreshing instance network info cache due to event network-changed-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 03:51:09 np0005539550 nova_compute[257631]: 2025-11-29 08:51:09.205 257641 DEBUG oslo_concurrency.lockutils [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:51:09 np0005539550 nova_compute[257631]: 2025-11-29 08:51:09.206 257641 DEBUG oslo_concurrency.lockutils [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:51:09 np0005539550 nova_compute[257631]: 2025-11-29 08:51:09.206 257641 DEBUG nova.network.neutron [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Refreshing network info cache for port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 03:51:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:09.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:09 np0005539550 reverent_bose[397953]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:51:09 np0005539550 reverent_bose[397953]: --> relative data size: 1.0
Nov 29 03:51:09 np0005539550 reverent_bose[397953]: --> All data devices are unavailable
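reverent_bose is another cephadm probe, evidently running ceph-volume against the drive-group spec: one LVM data device was passed in and rejected, hence "All data devices are unavailable" (it is already consumed by the existing OSD). Availability and the rejection reasons can be checked the same way; a sketch using the standard inventory subcommand (running it on this host is the assumption):

    import json
    import subprocess

    # Per device: path, whether it is available, and why it was rejected
    # ("LVM detected", "locked", ...), the basis for the verdict above.
    out = subprocess.run(["ceph-volume", "inventory", "--format", "json"],
                         capture_output=True, text=True, check=True)
    for dev in json.loads(out.stdout):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))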
Nov 29 03:51:09 np0005539550 systemd[1]: libpod-d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454.scope: Deactivated successfully.
Nov 29 03:51:09 np0005539550 podman[397939]: 2025-11-29 08:51:09.793035645 +0000 UTC m=+0.971729254 container died d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 03:51:10 np0005539550 nova_compute[257631]: 2025-11-29 08:51:10.054 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:10 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9e65a3f9c14ca2e55ce5f2be1dfb9553ef2bd9b13e49840b31c67ac5cec3ea49-merged.mount: Deactivated successfully.
Nov 29 03:51:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:51:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:10.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:10 np0005539550 podman[397939]: 2025-11-29 08:51:10.898843282 +0000 UTC m=+2.077536891 container remove d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bose, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 03:51:10 np0005539550 nova_compute[257631]: 2025-11-29 08:51:10.935 257641 DEBUG nova.network.neutron [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updated VIF entry in instance network info cache for port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:51:10 np0005539550 nova_compute[257631]: 2025-11-29 08:51:10.936 257641 DEBUG nova.network.neutron [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating instance_info_cache with network_info: [{"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:51:10 np0005539550 nova_compute[257631]: 2025-11-29 08:51:10.962 257641 DEBUG oslo_concurrency.lockutils [req-3639f379-9632-481f-a2d6-af634bf61c06 req-6f74d70f-42d7-4be3-a2d5-9919cef0c3dd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
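The instance_info_cache entry written at 08:51:10.936 is plain JSON, so the fixed and floating addresses for port ebfc0be1-60fa can be pulled out mechanically. A sketch over a trimmed copy of that entry:

    import json

    # Trimmed copy of the network_info entry logged above.
    network_info = json.loads("""[{
      "id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc",
      "network": {"subnets": [{"ips": [{
          "address": "10.100.0.5", "type": "fixed",
          "floating_ips": [{"address": "192.168.122.200", "type": "floating"}]
      }]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)
    # ebfc0be1-... 10.100.0.5 -> ['192.168.122.200']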
Nov 29 03:51:11 np0005539550 systemd[1]: libpod-conmon-d7189ff886dab4dcf3542d901322ab0046ac55734df52ead27ed1e1618362454.scope: Deactivated successfully.
Nov 29 03:51:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:11.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:11 np0005539550 podman[398123]: 2025-11-29 08:51:11.757082084 +0000 UTC m=+0.104575328 container create d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 03:51:11 np0005539550 podman[398123]: 2025-11-29 08:51:11.688522388 +0000 UTC m=+0.036015672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:11 np0005539550 systemd[1]: Started libpod-conmon-d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9.scope.
Nov 29 03:51:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:51:11 np0005539550 podman[398123]: 2025-11-29 08:51:11.847121968 +0000 UTC m=+0.194615242 container init d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:51:11 np0005539550 podman[398123]: 2025-11-29 08:51:11.854370279 +0000 UTC m=+0.201863523 container start d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 03:51:11 np0005539550 podman[398123]: 2025-11-29 08:51:11.857289612 +0000 UTC m=+0.204782886 container attach d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:51:11 np0005539550 admiring_cartwright[398151]: 167 167
Nov 29 03:51:11 np0005539550 systemd[1]: libpod-d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9.scope: Deactivated successfully.
Nov 29 03:51:11 np0005539550 conmon[398151]: conmon d485ffc64888b96d8f40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9.scope/container/memory.events
Nov 29 03:51:11 np0005539550 podman[398123]: 2025-11-29 08:51:11.862480082 +0000 UTC m=+0.209973326 container died d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:51:11 np0005539550 podman[398137]: 2025-11-29 08:51:11.871076418 +0000 UTC m=+0.071060390 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Nov 29 03:51:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-580f16414a7fddf9be8223f41e7e441bc44ef4fb7411bf53468eb26cc5b32b96-merged.mount: Deactivated successfully.
Nov 29 03:51:11 np0005539550 podman[398138]: 2025-11-29 08:51:11.891781156 +0000 UTC m=+0.088991959 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
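Both health_status=healthy events come from podman's scheduled healthchecks: each container's config_data mounts a check script and sets test=/openstack/healthcheck. The same check can be triggered by hand; a minimal sketch (container name from the log; podman healthcheck run is the stock CLI):

    import subprocess

    # Executes the configured test ('/openstack/healthcheck') inside the
    # container; exit code 0 corresponds to health_status=healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else "unhealthy")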
Nov 29 03:51:11 np0005539550 podman[398123]: 2025-11-29 08:51:11.906285569 +0000 UTC m=+0.253778823 container remove d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 03:51:11 np0005539550 systemd[1]: libpod-conmon-d485ffc64888b96d8f40183513322de425f6cb1d2f2bed263c153c7c81f7cfd9.scope: Deactivated successfully.
Nov 29 03:51:12 np0005539550 podman[398200]: 2025-11-29 08:51:12.094644454 +0000 UTC m=+0.054993228 container create 2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elbakyan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:12 np0005539550 systemd[1]: Started libpod-conmon-2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31.scope.
Nov 29 03:51:12 np0005539550 podman[398200]: 2025-11-29 08:51:12.069005322 +0000 UTC m=+0.029354136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:51:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e113c2f19f16fc842f5bf78e98b7d3b241f158ccacba5d5939e69b5779dad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e113c2f19f16fc842f5bf78e98b7d3b241f158ccacba5d5939e69b5779dad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e113c2f19f16fc842f5bf78e98b7d3b241f158ccacba5d5939e69b5779dad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba4e113c2f19f16fc842f5bf78e98b7d3b241f158ccacba5d5939e69b5779dad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:12 np0005539550 podman[398200]: 2025-11-29 08:51:12.202322209 +0000 UTC m=+0.162670983 container init 2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:51:12 np0005539550 podman[398200]: 2025-11-29 08:51:12.210155995 +0000 UTC m=+0.170504749 container start 2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elbakyan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:51:12 np0005539550 podman[398200]: 2025-11-29 08:51:12.213496738 +0000 UTC m=+0.173845522 container attach 2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elbakyan, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:51:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 29 03:51:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:12.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:12 np0005539550 nova_compute[257631]: 2025-11-29 08:51:12.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
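_instance_usage_audit here, like _poll_rebooting_instances and _check_instance_build_time earlier, is a method the compute manager registers with oslo.service's periodic-task machinery; run_periodic_tasks then emits one "Running periodic task ..." line per invocation. In outline (a minimal sketch of the oslo.service pattern, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_rebooting_instances(self, context):
            # Invoked by run_periodic_tasks() roughly every `spacing`
            # seconds, producing the DEBUG lines seen in the log.
            pass

    Manager().run_periodic_tasks(context=None)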
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]: {
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:    "0": [
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:        {
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "devices": [
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "/dev/loop3"
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            ],
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "lv_name": "ceph_lv0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "lv_size": "7511998464",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "name": "ceph_lv0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "tags": {
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.cluster_name": "ceph",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.crush_device_class": "",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.encrypted": "0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.osd_id": "0",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.type": "block",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:                "ceph.vdo": "0"
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            },
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "type": "block",
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:            "vg_name": "ceph_vg0"
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:        }
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]:    ]
Nov 29 03:51:13 np0005539550 wonderful_elbakyan[398216]: }
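wonderful_elbakyan is evidently `ceph-volume lvm list --format json`: the payload maps OSD ids to their logical volumes, tags included. Extracting the osd-to-device mapping from it takes a few lines; the snippet below hard-codes an abbreviated copy of the output above:

    import json

    # Abbreviated copy of the JSON printed by the container above.
    lvm_list = json.loads("""{
      "0": [{
        "devices": ["/dev/loop3"],
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "type": "block",
        "tags": {"ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}
      }]
    }""")

    for osd_id, lvs in lvm_list.items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices']}"
                  f" (fsid {lv['tags']['ceph.osd_fsid']})")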
Nov 29 03:51:13 np0005539550 systemd[1]: libpod-2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31.scope: Deactivated successfully.
Nov 29 03:51:13 np0005539550 podman[398226]: 2025-11-29 08:51:13.121060585 +0000 UTC m=+0.042465524 container died 2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:51:13 np0005539550 nova_compute[257631]: 2025-11-29 08:51:13.333 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ba4e113c2f19f16fc842f5bf78e98b7d3b241f158ccacba5d5939e69b5779dad-merged.mount: Deactivated successfully.
Nov 29 03:51:13 np0005539550 podman[398226]: 2025-11-29 08:51:13.447491186 +0000 UTC m=+0.368896145 container remove 2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:51:13 np0005539550 systemd[1]: libpod-conmon-2b8ae41bbee257c9527178b480dd247a71dd27b2bb9f5ae5b2e26de05c8aae31.scope: Deactivated successfully.
Nov 29 03:51:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:14 np0005539550 podman[398381]: 2025-11-29 08:51:14.035509604 +0000 UTC m=+0.055221343 container create 08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 29 03:51:14 np0005539550 systemd[1]: Started libpod-conmon-08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d.scope.
Nov 29 03:51:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:51:14 np0005539550 podman[398381]: 2025-11-29 08:51:14.009498703 +0000 UTC m=+0.029210462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:14 np0005539550 podman[398381]: 2025-11-29 08:51:14.114175423 +0000 UTC m=+0.133887212 container init 08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_fermi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:51:14 np0005539550 podman[398381]: 2025-11-29 08:51:14.12084897 +0000 UTC m=+0.140560699 container start 08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:14 np0005539550 podman[398381]: 2025-11-29 08:51:14.123979438 +0000 UTC m=+0.143691217 container attach 08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_fermi, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:51:14 np0005539550 gifted_fermi[398397]: 167 167
Nov 29 03:51:14 np0005539550 podman[398381]: 2025-11-29 08:51:14.126689716 +0000 UTC m=+0.146401435 container died 08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_fermi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:51:14 np0005539550 systemd[1]: libpod-08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d.scope: Deactivated successfully.
Nov 29 03:51:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c495d42e419e4de8192272fdbb32abb04ee95720dd17c341a986065d9ea87a77-merged.mount: Deactivated successfully.
Nov 29 03:51:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 86 op/s
Nov 29 03:51:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:14.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:15Z|00118|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.5
Nov 29 03:51:15 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:15Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c0:01:7c 10.100.0.5
Nov 29 03:51:15 np0005539550 nova_compute[257631]: 2025-11-29 08:51:15.057 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:15 np0005539550 podman[398381]: 2025-11-29 08:51:15.263970061 +0000 UTC m=+1.283681830 container remove 08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_fermi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:51:15 np0005539550 systemd[1]: libpod-conmon-08326c396b64ab26ddffa405bf464cb63b4c5acf3fc0c815c47f140c0abe289d.scope: Deactivated successfully.
Nov 29 03:51:15 np0005539550 podman[398470]: 2025-11-29 08:51:15.506199984 +0000 UTC m=+0.046110625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:51:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:15.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:15 np0005539550 podman[398470]: 2025-11-29 08:51:15.765990557 +0000 UTC m=+0.305901138 container create 26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:51:15 np0005539550 systemd[1]: Started libpod-conmon-26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d.scope.
Nov 29 03:51:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:51:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97abd21443813d9ad783d292883809b6b76adeafbb750a06f508ec9ad17d133b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97abd21443813d9ad783d292883809b6b76adeafbb750a06f508ec9ad17d133b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97abd21443813d9ad783d292883809b6b76adeafbb750a06f508ec9ad17d133b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97abd21443813d9ad783d292883809b6b76adeafbb750a06f508ec9ad17d133b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:51:15 np0005539550 podman[398470]: 2025-11-29 08:51:15.933805687 +0000 UTC m=+0.473716268 container init 26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:51:15 np0005539550 podman[398470]: 2025-11-29 08:51:15.953248883 +0000 UTC m=+0.493159454 container start 26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:51:15 np0005539550 podman[398470]: 2025-11-29 08:51:15.959572402 +0000 UTC m=+0.499482973 container attach 26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:51:16 np0005539550 nova_compute[257631]: 2025-11-29 08:51:16.025 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 880 KiB/s rd, 11 KiB/s wr, 41 op/s
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]: {
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]:        "osd_id": 0,
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]:        "type": "bluestore"
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]:    }
Nov 29 03:51:16 np0005539550 optimistic_shaw[398486]: }
Nov 29 03:51:16 np0005539550 systemd[1]: libpod-26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d.scope: Deactivated successfully.
Nov 29 03:51:16 np0005539550 podman[398470]: 2025-11-29 08:51:16.829052905 +0000 UTC m=+1.368963446 container died 26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:51:16 np0005539550 conmon[398486]: conmon 26a82598787e43ba9324 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d.scope/container/memory.events
Nov 29 03:51:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-97abd21443813d9ad783d292883809b6b76adeafbb750a06f508ec9ad17d133b-merged.mount: Deactivated successfully.
Nov 29 03:51:16 np0005539550 podman[398470]: 2025-11-29 08:51:16.881697243 +0000 UTC m=+1.421607844 container remove 26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:51:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:16.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:16 np0005539550 systemd[1]: libpod-conmon-26a82598787e43ba9324800aa799bd7f8cd078ec0febcf255b514f8f160f5b4d.scope: Deactivated successfully.
Nov 29 03:51:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:51:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:51:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8bfe5432-65e9-42dd-8196-7f39916d4d0d does not exist
Nov 29 03:51:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8a58d45b-acfc-481c-81f6-45e741548e84 does not exist
Nov 29 03:51:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e38c1f79-0218-401c-bfad-ade1bff4dccc does not exist
Nov 29 03:51:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:17.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:51:18 np0005539550 nova_compute[257631]: 2025-11-29 08:51:18.336 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 13 KiB/s wr, 60 op/s
Nov 29 03:51:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:18.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:18Z|00120|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.6 does not match offer 10.100.0.5
Nov 29 03:51:18 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:18Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c0:01:7c 10.100.0.5
Nov 29 03:51:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:18.982 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:51:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:18.983 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:51:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:51:18.984 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:51:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:19.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:20Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:01:7c 10.100.0.5
Nov 29 03:51:20 np0005539550 nova_compute[257631]: 2025-11-29 08:51:20.061 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:20 np0005539550 ovn_controller[148680]: 2025-11-29T08:51:20Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:01:7c 10.100.0.5
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004327009759445375 of space, bias 1.0, pg target 1.2981029278336125 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Nov 29 03:51:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 13 KiB/s wr, 44 op/s
Nov 29 03:51:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:20.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:20 np0005539550 nova_compute[257631]: 2025-11-29 08:51:20.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:51:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:21.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 26 KiB/s wr, 45 op/s
Nov 29 03:51:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:22.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:23 np0005539550 nova_compute[257631]: 2025-11-29 08:51:23.340 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:23.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 26 KiB/s wr, 45 op/s
Nov 29 03:51:24 np0005539550 nova_compute[257631]: 2025-11-29 08:51:24.855 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:24.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:25 np0005539550 nova_compute[257631]: 2025-11-29 08:51:25.064 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:25 np0005539550 podman[398575]: 2025-11-29 08:51:25.413111352 +0000 UTC m=+0.129738738 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 03:51:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:25.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 15 KiB/s wr, 26 op/s
Nov 29 03:51:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:26.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:27.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:28 np0005539550 nova_compute[257631]: 2025-11-29 08:51:28.343 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 292 KiB/s rd, 19 KiB/s wr, 25 op/s
Nov 29 03:51:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:28.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:29.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:30 np0005539550 nova_compute[257631]: 2025-11-29 08:51:30.066 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 43 KiB/s rd, 24 KiB/s wr, 6 op/s
Nov 29 03:51:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:30.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:31 np0005539550 nova_compute[257631]: 2025-11-29 08:51:31.063 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:31.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s wr, 2 op/s
Nov 29 03:51:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:32.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:33 np0005539550 nova_compute[257631]: 2025-11-29 08:51:33.401 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:33.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 1 op/s
Nov 29 03:51:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:34.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:35 np0005539550 nova_compute[257631]: 2025-11-29 08:51:35.069 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:35.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:35 np0005539550 nova_compute[257631]: 2025-11-29 08:51:35.900 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 1 op/s
Nov 29 03:51:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:36.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:37.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:38 np0005539550 nova_compute[257631]: 2025-11-29 08:51:38.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 1 op/s
Nov 29 03:51:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:38.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:39.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:40 np0005539550 nova_compute[257631]: 2025-11-29 08:51:40.070 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s wr, 1 op/s
Nov 29 03:51:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:40.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:41.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:42 np0005539550 podman[398662]: 2025-11-29 08:51:42.315410574 +0000 UTC m=+0.047500500 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:51:42 np0005539550 podman[398661]: 2025-11-29 08:51:42.324207474 +0000 UTC m=+0.056517295 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 03:51:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.3 KiB/s wr, 4 op/s
Nov 29 03:51:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:42.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Nov 29 03:51:43 np0005539550 nova_compute[257631]: 2025-11-29 08:51:43.406 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:43.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Nov 29 03:51:43 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Nov 29 03:51:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 KiB/s wr, 17 op/s
Nov 29 03:51:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:44.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:45 np0005539550 nova_compute[257631]: 2025-11-29 08:51:45.071 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:45.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 KiB/s wr, 18 op/s
Nov 29 03:51:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:46.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:47.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:48 np0005539550 nova_compute[257631]: 2025-11-29 08:51:48.449 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 KiB/s wr, 18 op/s
Nov 29 03:51:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:48.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:49.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:50 np0005539550 nova_compute[257631]: 2025-11-29 08:51:50.074 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 KiB/s wr, 42 op/s
Nov 29 03:51:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:50.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:51.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 577 KiB/s rd, 5.4 KiB/s wr, 51 op/s
Nov 29 03:51:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:52.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:53 np0005539550 nova_compute[257631]: 2025-11-29 08:51:53.451 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:53.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 218 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 532 KiB/s rd, 736 KiB/s wr, 52 op/s
Nov 29 03:51:54 np0005539550 nova_compute[257631]: 2025-11-29 08:51:54.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:51:54 np0005539550 nova_compute[257631]: 2025-11-29 08:51:54.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:51:54 np0005539550 nova_compute[257631]: 2025-11-29 08:51:54.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:51:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:55 np0005539550 nova_compute[257631]: 2025-11-29 08:51:55.079 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:51:55 np0005539550 nova_compute[257631]: 2025-11-29 08:51:55.546 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 03:51:55 np0005539550 nova_compute[257631]: 2025-11-29 08:51:55.546 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:51:55 np0005539550 nova_compute[257631]: 2025-11-29 08:51:55.546 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:51:55 np0005539550 nova_compute[257631]: 2025-11-29 08:51:55.546 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9249872d-f151-438a-bddd-e6b41e397647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:51:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:51:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:55.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:51:56 np0005539550 podman[398755]: 2025-11-29 08:51:56.439770814 +0000 UTC m=+0.182145870 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:51:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:51:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 236 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 1.1 MiB/s wr, 44 op/s
Nov 29 03:51:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:56.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:57.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:51:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:51:58 np0005539550 nova_compute[257631]: 2025-11-29 08:51:58.454 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:51:58 np0005539550 nova_compute[257631]: 2025-11-29 08:51:58.497 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating instance_info_cache with network_info: [{"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 03:51:58 np0005539550 nova_compute[257631]: 2025-11-29 08:51:58.524 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 03:51:58 np0005539550 nova_compute[257631]: 2025-11-29 08:51:58.525 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 03:51:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Nov 29 03:51:58 np0005539550 nova_compute[257631]: 2025-11-29 08:51:58.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:58 np0005539550 nova_compute[257631]: 2025-11-29 08:51:58.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:51:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:51:58.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:51:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Nov 29 03:51:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Nov 29 03:51:59 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Nov 29 03:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:51:59
Nov 29 03:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'vms', 'backups']
Nov 29 03:51:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:51:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:51:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:51:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:51:59.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:00 np0005539550 nova_compute[257631]: 2025-11-29 08:52:00.080 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Nov 29 03:52:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 248 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 37 op/s
Nov 29 03:52:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Nov 29 03:52:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Nov 29 03:52:00 np0005539550 nova_compute[257631]: 2025-11-29 08:52:00.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:00 np0005539550 nova_compute[257631]: 2025-11-29 08:52:00.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:00 np0005539550 nova_compute[257631]: 2025-11-29 08:52:00.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:52:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:00.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:52:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.0 total, 600.0 interval
Cumulative writes: 15K writes, 68K keys, 15K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
Cumulative WAL: 15K writes, 15K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1675 writes, 7398 keys, 1675 commit groups, 1.0 writes per commit group, ingest: 10.99 MB, 0.02 MB/s
Interval WAL: 1675 writes, 1675 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     37.2      2.46              0.30        47    0.052       0      0       0.0       0.0
  L6      1/0   11.91 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.5     44.0     38.0     13.26              1.47        46    0.288    356K    24K       0.0       0.0
 Sum      1/0   11.91 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.5     37.1     37.9     15.72              1.77        93    0.169    356K    24K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.5    134.2    136.0      0.59              0.23        12    0.049     62K   3093       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     44.0     38.0     13.26              1.47        46    0.288    356K    24K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     37.2      2.45              0.30        46    0.053       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.0 total, 600.0 interval
Flush(GB): cumulative 0.089, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.58 GB write, 0.10 MB/s write, 0.57 GB read, 0.10 MB/s read, 15.7 seconds
Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 62.07 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000695 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3530,59.49 MB,19.5699%) FilterBlock(94,978.36 KB,0.314286%) IndexBlock(94,1.62 MB,0.532637%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 03:52:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:01.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Nov 29 03:52:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Nov 29 03:52:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Nov 29 03:52:01 np0005539550 nova_compute[257631]: 2025-11-29 08:52:01.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:01 np0005539550 nova_compute[257631]: 2025-11-29 08:52:01.953 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:01 np0005539550 nova_compute[257631]: 2025-11-29 08:52:01.954 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:01 np0005539550 nova_compute[257631]: 2025-11-29 08:52:01.954 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:01 np0005539550 nova_compute[257631]: 2025-11-29 08:52:01.954 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:52:01 np0005539550 nova_compute[257631]: 2025-11-29 08:52:01.955 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3197270359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.440 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.532 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.533 257641 DEBUG nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] skipping disk for instance-000000de as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.749 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.751 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3904MB free_disk=20.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.751 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.752 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 254 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.6 MiB/s wr, 44 op/s
Nov 29 03:52:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Nov 29 03:52:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Nov 29 03:52:02 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.849 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Instance 9249872d-f151-438a-bddd-e6b41e397647 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.850 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.850 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:52:02 np0005539550 nova_compute[257631]: 2025-11-29 08:52:02.899 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:52:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:02.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:52:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636064854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:52:03 np0005539550 nova_compute[257631]: 2025-11-29 08:52:03.354 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:52:03 np0005539550 nova_compute[257631]: 2025-11-29 08:52:03.360 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:52:03 np0005539550 nova_compute[257631]: 2025-11-29 08:52:03.404 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:52:03 np0005539550 nova_compute[257631]: 2025-11-29 08:52:03.440 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:52:03 np0005539550 nova_compute[257631]: 2025-11-29 08:52:03.440 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:03 np0005539550 nova_compute[257631]: 2025-11-29 08:52:03.503 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:04 np0005539550 nova_compute[257631]: 2025-11-29 08:52:04.441 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 273 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 29 03:52:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:04.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:05 np0005539550 nova_compute[257631]: 2025-11-29 08:52:05.084 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 29 03:52:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1372135349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 29 03:52:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:05.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Nov 29 03:52:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Nov 29 03:52:06 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Nov 29 03:52:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 294 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 60 op/s
Nov 29 03:52:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:06.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:07.015 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:07 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:07.016 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 03:52:07 np0005539550 nova_compute[257631]: 2025-11-29 08:52:07.057 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:07.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:52:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:52:08 np0005539550 nova_compute[257631]: 2025-11-29 08:52:08.506 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 303 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.1 MiB/s wr, 68 op/s
Nov 29 03:52:08 np0005539550 nova_compute[257631]: 2025-11-29 08:52:08.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:08.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:52:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:52:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:52:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:52:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:52:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:09.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:10 np0005539550 nova_compute[257631]: 2025-11-29 08:52:10.085 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 316 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.7 MiB/s wr, 147 op/s
Nov 29 03:52:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:10.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:11.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 341 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.2 MiB/s wr, 154 op/s
Nov 29 03:52:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:12.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:13 np0005539550 podman[398837]: 2025-11-29 08:52:13.364907307 +0000 UTC m=+0.089934962 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:52:13 np0005539550 podman[398836]: 2025-11-29 08:52:13.372890127 +0000 UTC m=+0.104355623 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:52:13 np0005539550 nova_compute[257631]: 2025-11-29 08:52:13.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:13.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:13 np0005539550 nova_compute[257631]: 2025-11-29 08:52:13.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 341 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.1 MiB/s wr, 146 op/s
Nov 29 03:52:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:14.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:15 np0005539550 nova_compute[257631]: 2025-11-29 08:52:15.087 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:15.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 341 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Nov 29 03:52:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:16.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:17 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:17.019 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:17.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:52:18 np0005539550 nova_compute[257631]: 2025-11-29 08:52:18.557 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7be496db-a6a3-4697-8d12-39f3082651b5 does not exist
Nov 29 03:52:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev afdb9260-b863-4450-a6cc-85875af2c0bc does not exist
Nov 29 03:52:18 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 56d10054-a185-4da2-82b5-616dfea14c5a does not exist
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:52:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:52:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 341 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 113 op/s
Nov 29 03:52:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:18.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:18.982 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:18.982 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:18.983 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:52:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:19 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:52:19 np0005539550 podman[399195]: 2025-11-29 08:52:19.260077531 +0000 UTC m=+0.044951626 container create 7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:52:19 np0005539550 systemd[1]: Started libpod-conmon-7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa.scope.
Nov 29 03:52:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:52:19 np0005539550 podman[399195]: 2025-11-29 08:52:19.24324291 +0000 UTC m=+0.028117025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:52:19 np0005539550 podman[399195]: 2025-11-29 08:52:19.348837003 +0000 UTC m=+0.133711108 container init 7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haibt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:52:19 np0005539550 podman[399195]: 2025-11-29 08:52:19.355647503 +0000 UTC m=+0.140521608 container start 7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haibt, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 03:52:19 np0005539550 podman[399195]: 2025-11-29 08:52:19.359202402 +0000 UTC m=+0.144076507 container attach 7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:52:19 np0005539550 festive_haibt[399212]: 167 167
Nov 29 03:52:19 np0005539550 systemd[1]: libpod-7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa.scope: Deactivated successfully.
Nov 29 03:52:19 np0005539550 podman[399195]: 2025-11-29 08:52:19.384505186 +0000 UTC m=+0.169379281 container died 7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 03:52:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7fc9173e9021ce48667f4b6883c54335d84cc018ad852201fe4abc690f5126c9-merged.mount: Deactivated successfully.
Nov 29 03:52:19 np0005539550 podman[399195]: 2025-11-29 08:52:19.433966314 +0000 UTC m=+0.218840399 container remove 7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haibt, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 29 03:52:19 np0005539550 systemd[1]: libpod-conmon-7f051552754218b5d1c13e53105f078584c6b0d1402324d3c5f02d24f2be26fa.scope: Deactivated successfully.
Nov 29 03:52:19 np0005539550 podman[399236]: 2025-11-29 08:52:19.665002207 +0000 UTC m=+0.046878365 container create a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:52:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:19.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
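These radosgw triplets recur every couple of seconds for the rest of the log: anonymous HEAD / probes from 192.168.122.100 and .102, most likely load-balancer health checks, each answered with 200. A minimal sketch for extracting client, request, status, and latency from a beast access line of this shape:

    import re

    # Field layout taken from the beast lines in this log.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:52:19.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        print(m["client"], m["request"], m["status"], m["latency"])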
Nov 29 03:52:19 np0005539550 systemd[1]: Started libpod-conmon-a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab.scope.
Nov 29 03:52:19 np0005539550 podman[399236]: 2025-11-29 08:52:19.648256997 +0000 UTC m=+0.030133195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:52:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:52:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b073e3d4f2d11ebb0a6c94fddc8cddfb40ba96259267d51caa188d30f9f1eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b073e3d4f2d11ebb0a6c94fddc8cddfb40ba96259267d51caa188d30f9f1eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b073e3d4f2d11ebb0a6c94fddc8cddfb40ba96259267d51caa188d30f9f1eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b073e3d4f2d11ebb0a6c94fddc8cddfb40ba96259267d51caa188d30f9f1eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:19 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83b073e3d4f2d11ebb0a6c94fddc8cddfb40ba96259267d51caa188d30f9f1eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
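The xfs "supports timestamps until 2038 (0x7fffffff)" notices fire once per bind-mount as each container starts: these filesystems were created without the bigtime feature, so their inode timestamps are 32-bit signed seconds. The cutoff in the message is simply the largest such value as an epoch time:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t; xfs without bigtime stops here.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00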
Nov 29 03:52:19 np0005539550 podman[399236]: 2025-11-29 08:52:19.778436636 +0000 UTC m=+0.160312824 container init a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:52:19 np0005539550 podman[399236]: 2025-11-29 08:52:19.786139029 +0000 UTC m=+0.168015217 container start a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 29 03:52:19 np0005539550 podman[399236]: 2025-11-29 08:52:19.79099486 +0000 UTC m=+0.172871108 container attach a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:52:20 np0005539550 nova_compute[257631]: 2025-11-29 08:52:20.090 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263

Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.399503303554811e-05 of space, bias 1.0, pg target 0.004198509910664433 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0063099423622743345 of space, bias 1.0, pg target 1.8929827086823003 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
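Each pg_autoscaler pass above applies the same per-pool arithmetic: the pool's share of raw capacity, times its bias, times the cluster-wide PG budget, gives the fractional "pg target", which is then quantized to a power of two ('.mgr' at 2.05e-05 of space yields 0.006 and lands on 1). A simplified sketch of that calculation, assuming a budget of 300 PGs, which is consistent with the ratios and targets logged here (e.g. mon_target_pg_per_osd=100 across 3 OSDs); the real module also applies per-pool min/max clamps and only changes pg_num when the target is far enough from the current value, which is why tiny targets above still read "quantized to 32 (current 32)":

    def pg_target(capacity_ratio: float, bias: float, pg_budget: int = 300) -> int:
        """Simplified pg_autoscaler target: ratio * bias * budget, to a power of two."""
        raw = capacity_ratio * bias * pg_budget
        if raw <= 1:
            return 1
        lo = 1 << (int(raw).bit_length() - 1)   # nearest powers of two around raw
        hi = lo * 2
        return lo if raw - lo <= hi - raw else hi

    print(pg_target(2.0538165363856318e-05, 1.0))  # '.mgr': raw 0.00616 -> 1, as logged
    print(pg_target(0.0063099423622743345, 1.0))   # 'volumes': raw 1.89 -> 2 before clamping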
Nov 29 03:52:20 np0005539550 hardcore_swartz[399253]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:52:20 np0005539550 hardcore_swartz[399253]: --> relative data size: 1.0
Nov 29 03:52:20 np0005539550 hardcore_swartz[399253]: --> All data devices are unavailable
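This hardcore_swartz output is a ceph-volume batch/report style pass over the drive group: one LVM data device was offered and all were reported unavailable, because /dev/ceph_vg0/ceph_lv0 is already consumed by osd.0 (see the listing a few seconds below). A minimal sketch of checking device availability the same way, assuming ceph-volume is invocable in the environment (here it runs inside the ceph container):

    import json
    import subprocess

    # `ceph-volume inventory --format json` lists devices with availability
    # and the reasons a device was rejected (e.g. already holding an LV).
    out = subprocess.check_output(
        ["ceph-volume", "inventory", "--format", "json"], text=True
    )
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))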
Nov 29 03:52:20 np0005539550 systemd[1]: libpod-a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab.scope: Deactivated successfully.
Nov 29 03:52:20 np0005539550 podman[399236]: 2025-11-29 08:52:20.659241093 +0000 UTC m=+1.041117291 container died a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:52:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-83b073e3d4f2d11ebb0a6c94fddc8cddfb40ba96259267d51caa188d30f9f1eb-merged.mount: Deactivated successfully.
Nov 29 03:52:20 np0005539550 podman[399236]: 2025-11-29 08:52:20.715190563 +0000 UTC m=+1.097066721 container remove a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 03:52:20 np0005539550 systemd[1]: libpod-conmon-a6d49ab7ba5dd336d45ef0eac7b19e4bdd5ac47d2f986eb670b33fd76bfa4fab.scope: Deactivated successfully.
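The create, init, start, attach, died, remove sequence that brackets each of these short-lived containers is the normal lifecycle of a cephadm podman invocation; the same events can be streamed live instead of read back from the journal. A minimal sketch, assuming a podman recent enough to support Go-template output for events (field names can vary slightly across versions):

    import json
    import subprocess

    # Stream podman events as JSON; each one mirrors a journal line above
    # (create, init, start, attach, died, remove).
    proc = subprocess.Popen(
        ["podman", "events", "--format", "{{json .}}"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))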
Nov 29 03:52:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 341 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 105 op/s
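The pgmap DBG lines are the mgr's periodic cluster digest: PG states, raw usage, and client throughput (here 305 PGs, all active+clean, on a 21 GiB cluster). A minimal sketch for pulling the usage fields out of a line with this layout:

    import re

    PGMAP = re.compile(
        r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
        r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
        r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
    )

    line = ("pgmap v3408: 305 pgs: 305 active+clean; 341 MiB data, "
            "1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.2 MiB/s wr, 105 op/s")
    m = PGMAP.search(line)
    if m:
        print(m["pgs"], "pgs;", m["used"], "used of", m["total"])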
Nov 29 03:52:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:20.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:21 np0005539550 podman[399420]: 2025-11-29 08:52:21.591925728 +0000 UTC m=+0.047290095 container create 02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:52:21 np0005539550 systemd[1]: Started libpod-conmon-02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761.scope.
Nov 29 03:52:21 np0005539550 podman[399420]: 2025-11-29 08:52:21.569701241 +0000 UTC m=+0.025065638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:52:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:52:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:21.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:21 np0005539550 podman[399420]: 2025-11-29 08:52:21.714836534 +0000 UTC m=+0.170200921 container init 02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:52:21 np0005539550 podman[399420]: 2025-11-29 08:52:21.729509571 +0000 UTC m=+0.184873938 container start 02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 29 03:52:21 np0005539550 podman[399420]: 2025-11-29 08:52:21.733797649 +0000 UTC m=+0.189162016 container attach 02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:52:21 np0005539550 optimistic_goldstine[399436]: 167 167
Nov 29 03:52:21 np0005539550 systemd[1]: libpod-02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761.scope: Deactivated successfully.
Nov 29 03:52:21 np0005539550 conmon[399436]: conmon 02de531ea55d46d44cb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761.scope/container/memory.events
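This conmon warning is benign: conmon watches the container's cgroup-v2 memory.events file to report OOM kills, but these ceph-volume helpers exit so quickly that the scope is often gone before the file can be opened. For a long-lived scope the same counters can be read directly; a sketch using the scope path format visible in this log:

    from pathlib import Path

    scope = Path("/sys/fs/cgroup/machine.slice/"
                 "libpod-02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761.scope/"
                 "container/memory.events")
    if scope.exists():
        # memory.events holds "name count" pairs: low, high, max, oom, oom_kill.
        counters = dict(line.split() for line in scope.read_text().splitlines())
        print("oom_kill events:", counters.get("oom_kill", "0"))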
Nov 29 03:52:21 np0005539550 podman[399420]: 2025-11-29 08:52:21.740057195 +0000 UTC m=+0.195421592 container died 02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 03:52:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-e5f3e95e91255522393217676ef4a5cc7150ad5796811e1b3ccfc7d294a9e408-merged.mount: Deactivated successfully.
Nov 29 03:52:21 np0005539550 podman[399420]: 2025-11-29 08:52:21.94200596 +0000 UTC m=+0.397370327 container remove 02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:52:21 np0005539550 systemd[1]: libpod-conmon-02de531ea55d46d44cb011736a74531898612a89406d5167e6ce5d091607c761.scope: Deactivated successfully.
Nov 29 03:52:22 np0005539550 podman[399460]: 2025-11-29 08:52:22.198030139 +0000 UTC m=+0.053308366 container create a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lalande, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:52:22 np0005539550 systemd[1]: Started libpod-conmon-a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772.scope.
Nov 29 03:52:22 np0005539550 podman[399460]: 2025-11-29 08:52:22.174285844 +0000 UTC m=+0.029564091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:52:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:52:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac6d81b87b1570da417e4b76176e462fbff9e7eb6891dc465e39c2dd5ae0210/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac6d81b87b1570da417e4b76176e462fbff9e7eb6891dc465e39c2dd5ae0210/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac6d81b87b1570da417e4b76176e462fbff9e7eb6891dc465e39c2dd5ae0210/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac6d81b87b1570da417e4b76176e462fbff9e7eb6891dc465e39c2dd5ae0210/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:22 np0005539550 podman[399460]: 2025-11-29 08:52:22.335936379 +0000 UTC m=+0.191214596 container init a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 29 03:52:22 np0005539550 podman[399460]: 2025-11-29 08:52:22.345071698 +0000 UTC m=+0.200349925 container start a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:52:22 np0005539550 podman[399460]: 2025-11-29 08:52:22.349255353 +0000 UTC m=+0.204533670 container attach a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lalande, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 03:52:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 350 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 64 op/s
Nov 29 03:52:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:22.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]: {
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:    "0": [
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:        {
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "devices": [
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "/dev/loop3"
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            ],
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "lv_name": "ceph_lv0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "lv_size": "7511998464",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "name": "ceph_lv0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "tags": {
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.cluster_name": "ceph",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.crush_device_class": "",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.encrypted": "0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.osd_id": "0",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.type": "block",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:                "ceph.vdo": "0"
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            },
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "type": "block",
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:            "vg_name": "ceph_vg0"
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:        }
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]:    ]
Nov 29 03:52:23 np0005539550 lucid_lalande[399476]: }
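The lucid_lalande JSON above is `ceph-volume lvm list --format json` output: one entry per OSD id, with the LV path and the ceph.* LV tags that tie /dev/ceph_vg0/ceph_lv0 (on /dev/loop3) to osd.0 in cluster b66774a7-56d9-5535-bd8c-681234404870. A minimal sketch for folding it into an osd_id to device map, assuming the output has been captured to a string:

    import json

    def osd_devices(lvm_list_json: str) -> dict:
        """Map OSD id -> (lv_path, backing devices) from `ceph-volume lvm list`."""
        result = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[int(osd_id)] = (lv["lv_path"], lv.get("devices", []))
        return result

    # For the listing above: {0: ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"])}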
Nov 29 03:52:23 np0005539550 systemd[1]: libpod-a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772.scope: Deactivated successfully.
Nov 29 03:52:23 np0005539550 conmon[399476]: conmon a70c6917e6f719ca20b3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772.scope/container/memory.events
Nov 29 03:52:23 np0005539550 podman[399460]: 2025-11-29 08:52:23.211935426 +0000 UTC m=+1.067213663 container died a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:52:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2ac6d81b87b1570da417e4b76176e462fbff9e7eb6891dc465e39c2dd5ae0210-merged.mount: Deactivated successfully.
Nov 29 03:52:23 np0005539550 podman[399460]: 2025-11-29 08:52:23.295945129 +0000 UTC m=+1.151223356 container remove a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_lalande, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 03:52:23 np0005539550 systemd[1]: libpod-conmon-a70c6917e6f719ca20b3c080f5fc41ffe3b35bd0fab6a527644ba242404ff772.scope: Deactivated successfully.
Nov 29 03:52:23 np0005539550 nova_compute[257631]: 2025-11-29 08:52:23.561 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:23.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:24 np0005539550 podman[399638]: 2025-11-29 08:52:24.081119952 +0000 UTC m=+0.059310696 container create 27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:52:24 np0005539550 systemd[1]: Started libpod-conmon-27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765.scope.
Nov 29 03:52:24 np0005539550 podman[399638]: 2025-11-29 08:52:24.053619513 +0000 UTC m=+0.031810257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:52:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:52:24 np0005539550 podman[399638]: 2025-11-29 08:52:24.197187137 +0000 UTC m=+0.175377881 container init 27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 03:52:24 np0005539550 podman[399638]: 2025-11-29 08:52:24.209136236 +0000 UTC m=+0.187326940 container start 27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:52:24 np0005539550 podman[399638]: 2025-11-29 08:52:24.212307015 +0000 UTC m=+0.190497759 container attach 27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:52:24 np0005539550 mystifying_kalam[399655]: 167 167
Nov 29 03:52:24 np0005539550 systemd[1]: libpod-27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765.scope: Deactivated successfully.
Nov 29 03:52:24 np0005539550 podman[399638]: 2025-11-29 08:52:24.221384833 +0000 UTC m=+0.199575577 container died 27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:52:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-64e241b69ec4ef4c5f7344948e8a0013106d1269e4cce9e0fbade410d4a6edbb-merged.mount: Deactivated successfully.
Nov 29 03:52:24 np0005539550 podman[399638]: 2025-11-29 08:52:24.275485847 +0000 UTC m=+0.253676591 container remove 27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:52:24 np0005539550 systemd[1]: libpod-conmon-27e73c140f7a6acb7d78c613f628fe97b308a9db61938e0f12f22e1116538765.scope: Deactivated successfully.
Nov 29 03:52:24 np0005539550 podman[399679]: 2025-11-29 08:52:24.529550786 +0000 UTC m=+0.053855889 container create 81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 03:52:24 np0005539550 systemd[1]: Started libpod-conmon-81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122.scope.
Nov 29 03:52:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:52:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7049625f0bcc998ee01981418564beb970bd58a199db900d43a92860d0c703a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:24 np0005539550 podman[399679]: 2025-11-29 08:52:24.503910474 +0000 UTC m=+0.028215657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:52:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7049625f0bcc998ee01981418564beb970bd58a199db900d43a92860d0c703a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7049625f0bcc998ee01981418564beb970bd58a199db900d43a92860d0c703a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7049625f0bcc998ee01981418564beb970bd58a199db900d43a92860d0c703a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:52:24 np0005539550 podman[399679]: 2025-11-29 08:52:24.608311907 +0000 UTC m=+0.132617020 container init 81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:52:24 np0005539550 podman[399679]: 2025-11-29 08:52:24.626549604 +0000 UTC m=+0.150854697 container start 81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 29 03:52:24 np0005539550 podman[399679]: 2025-11-29 08:52:24.630204085 +0000 UTC m=+0.154509268 container attach 81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:52:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 355 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 500 KiB/s wr, 55 op/s
Nov 29 03:52:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:24.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:25 np0005539550 nova_compute[257631]: 2025-11-29 08:52:25.093 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:25 np0005539550 loving_cray[399696]: {
Nov 29 03:52:25 np0005539550 loving_cray[399696]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:52:25 np0005539550 loving_cray[399696]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:52:25 np0005539550 loving_cray[399696]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:52:25 np0005539550 loving_cray[399696]:        "osd_id": 0,
Nov 29 03:52:25 np0005539550 loving_cray[399696]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:52:25 np0005539550 loving_cray[399696]:        "type": "bluestore"
Nov 29 03:52:25 np0005539550 loving_cray[399696]:    }
Nov 29 03:52:25 np0005539550 loving_cray[399696]: }
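The loving_cray block reports the same OSD keyed by its osd_uuid, and the device now appears under its device-mapper name: device-mapper joins VG and LV with a single dash and escapes any literal dash inside either name by doubling it. A small sketch of that mangling rule:

    def dm_name(vg: str, lv: str) -> str:
        """Device-mapper node for an LV; dashes inside names are doubled."""
        return f"/dev/mapper/{vg.replace('-', '--')}-{lv.replace('-', '--')}"

    print(dm_name("ceph_vg0", "ceph_lv0"))  # /dev/mapper/ceph_vg0-ceph_lv0, as logged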
Nov 29 03:52:25 np0005539550 systemd[1]: libpod-81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122.scope: Deactivated successfully.
Nov 29 03:52:25 np0005539550 podman[399679]: 2025-11-29 08:52:25.501185376 +0000 UTC m=+1.025490469 container died 81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:52:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c7049625f0bcc998ee01981418564beb970bd58a199db900d43a92860d0c703a-merged.mount: Deactivated successfully.
Nov 29 03:52:25 np0005539550 podman[399679]: 2025-11-29 08:52:25.584567123 +0000 UTC m=+1.108872226 container remove 81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:52:25 np0005539550 systemd[1]: libpod-conmon-81b815740d0245b99a3e38f5fd03bb521b5a69cd2542bcda74cd16bafd7c9122.scope: Deactivated successfully.
Nov 29 03:52:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:52:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:52:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8cb472c0-a1b0-4814-9b93-7e89a227d16a does not exist
Nov 29 03:52:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ee32ed0d-132f-4080-abd3-0664a0cde346 does not exist
Nov 29 03:52:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e6763c43-f670-4c91-b662-ad348255ded5 does not exist
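These progress-module warnings mean cephadm asked to complete events the mgr no longer tracks (already finished or pruned); they are harmless. The module's current view can be dumped for inspection; a sketch, assuming a reachable cluster with an admin keyring and that the `ceph progress json` subcommand is available in this release:

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "progress", "json"], text=True)
    progress = json.loads(out)
    # In-flight events carry an id, a message, and a completion fraction.
    for ev in progress.get("events", []):
        print(ev.get("id"), ev.get("message"))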
Nov 29 03:52:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:52:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 355 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 500 KiB/s wr, 55 op/s
Nov 29 03:52:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:26.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:27 np0005539550 podman[399783]: 2025-11-29 08:52:27.375732905 +0000 UTC m=+0.106050915 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
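The ovn_controller health_status line comes from the periodic healthcheck podman runs for that container: the 'test' command from config_data (/openstack/healthcheck, mounted read-only into the container) exited 0, hence health_status=healthy with a failing streak of 0. The same check can be driven by hand; a minimal sketch using the container name from the log:

    import subprocess

    # `podman healthcheck run` executes the container's configured test command;
    # exit status 0 means healthy, matching health_status=healthy above.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy")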
Nov 29 03:52:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:27.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:28 np0005539550 nova_compute[257631]: 2025-11-29 08:52:28.565 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 570 KiB/s wr, 61 op/s
Nov 29 03:52:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:28.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:29.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:30 np0005539550 nova_compute[257631]: 2025-11-29 08:52:30.095 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 580 KiB/s wr, 77 op/s
Nov 29 03:52:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:31.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 592 KiB/s wr, 128 op/s
Nov 29 03:52:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:32.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:33 np0005539550 nova_compute[257631]: 2025-11-29 08:52:33.568 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:52:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:33.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 161 KiB/s wr, 99 op/s
Nov 29 03:52:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:34.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:35 np0005539550 nova_compute[257631]: 2025-11-29 08:52:35.099 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:35.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 96 KiB/s wr, 77 op/s
Nov 29 03:52:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:36.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:37.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:38 np0005539550 nova_compute[257631]: 2025-11-29 08:52:38.572 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 359 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 96 KiB/s wr, 77 op/s
Nov 29 03:52:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:38.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:39.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:40 np0005539550 nova_compute[257631]: 2025-11-29 08:52:40.103 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 376 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 94 op/s
Nov 29 03:52:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:40.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:41.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 398 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 114 op/s
Nov 29 03:52:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:42.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:43 np0005539550 nova_compute[257631]: 2025-11-29 08:52:43.575 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:43.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:44 np0005539550 podman[399871]: 2025-11-29 08:52:44.318011587 +0000 UTC m=+0.051031598 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 03:52:44 np0005539550 podman[399870]: 2025-11-29 08:52:44.328361906 +0000 UTC m=+0.065007068 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:52:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 405 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Nov 29 03:52:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:44.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:45 np0005539550 nova_compute[257631]: 2025-11-29 08:52:45.104 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:45.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 405 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Nov 29 03:52:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:52:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:46.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:52:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:47.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:48 np0005539550 nova_compute[257631]: 2025-11-29 08:52:48.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 405 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 397 KiB/s rd, 2.2 MiB/s wr, 77 op/s
Nov 29 03:52:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:49.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:50 np0005539550 nova_compute[257631]: 2025-11-29 08:52:50.107 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 405 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 502 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Nov 29 03:52:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:51.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/768132898' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/768132898' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:52:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 1 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 295 active+clean; 370 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 140 KiB/s wr, 87 op/s
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/61645589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 29 03:52:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/61645589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 29 03:52:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:52.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:53 np0005539550 nova_compute[257631]: 2025-11-29 08:52:53.628 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Nov 29 03:52:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Nov 29 03:52:54 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Nov 29 03:52:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 3 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 287 active+clean; 293 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 409 KiB/s rd, 25 KiB/s wr, 140 op/s
Nov 29 03:52:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:54.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:55 np0005539550 nova_compute[257631]: 2025-11-29 08:52:55.109 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:52:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 3 active+clean+snaptrim, 15 active+clean+snaptrim_wait, 287 active+clean; 207 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 27 KiB/s wr, 184 op/s
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.972 257641 DEBUG nova.compute.manager [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-changed-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.973 257641 DEBUG nova.compute.manager [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Refreshing instance network info cache due to event network-changed-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.973 257641 DEBUG oslo_concurrency.lockutils [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.974 257641 DEBUG oslo_concurrency.lockutils [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquired lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 03:52:56 np0005539550 nova_compute[257631]: 2025-11-29 08:52:56.974 257641 DEBUG nova.network.neutron [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Refreshing network info cache for port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 03:52:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:52:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:56.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.022 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.022 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.023 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.023 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.023 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.025 257641 INFO nova.compute.manager [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Terminating instance#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.026 257641 DEBUG nova.compute.manager [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 03:52:57 np0005539550 kernel: tapebfc0be1-60 (unregistering): left promiscuous mode
Nov 29 03:52:57 np0005539550 NetworkManager[49039]: <info>  [1764406377.0944] device (tapebfc0be1-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 03:52:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:52:57Z|01041|binding|INFO|Releasing lport ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc from this chassis (sb_readonly=0)
Nov 29 03:52:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:52:57Z|01042|binding|INFO|Setting lport ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc down in Southbound
Nov 29 03:52:57 np0005539550 ovn_controller[148680]: 2025-11-29T08:52:57Z|01043|binding|INFO|Removing iface tapebfc0be1-60 ovn-installed in OVS
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.141 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.147 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:01:7c 10.100.0.5'], port_security=['fa:16:3e:c0:01:7c 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9249872d-f151-438a-bddd-e6b41e397647', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ffcb23bac14ee49474df9aee5f7dae', 'neutron:revision_number': '4', 'neutron:security_group_ids': '94ca66f4-d521-4114-adbd-83f0454e0911', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2432be5b-087b-4981-ab5e-ea6b1be12111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>], logical_port=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fdd1238c8b0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.149 158978 INFO neutron.agent.ovn.metadata.agent [-] Port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc in datapath 3d510715-dc99-4870-8ae9-ff599ae1a9c2 unbound from our chassis#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.150 158978 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3d510715-dc99-4870-8ae9-ff599ae1a9c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.151 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[aaf498e9-3860-4b92-9d95-fed0c71486e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.152 158978 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 namespace which is not needed anymore#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.159 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:57 np0005539550 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d000000de.scope: Deactivated successfully.
Nov 29 03:52:57 np0005539550 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d000000de.scope: Consumed 19.169s CPU time.
Nov 29 03:52:57 np0005539550 systemd-machined[216673]: Machine qemu-120-instance-000000de terminated.
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.224 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.247 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.252 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.265 257641 INFO nova.virt.libvirt.driver [-] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Instance destroyed successfully.#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.266 257641 DEBUG nova.objects.instance [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lazy-loading 'resources' on Instance uuid 9249872d-f151-438a-bddd-e6b41e397647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.289 257641 DEBUG nova.virt.libvirt.vif [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T08:50:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2004089128',display_name='tempest-TestVolumeBootPattern-server-2004089128',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2004089128',id=222,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEHlo38mb7s9ySq/cdi6P777CZR2z+Afm9Wa+lTfRoAMzUkpqw+8CUb0JbGjLvJJBhmx7BRYWkJB9ViGobLhvgEEMVD0rXS3of0skum5gZvlaPu98ryoqeuiqaHIJoj7oQ==',key_name='tempest-TestVolumeBootPattern-597611796',keypairs=<?>,launch_index=0,launched_at=2025-11-29T08:51:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b7ffcb23bac14ee49474df9aee5f7dae',ramdisk_id='',reservation_id='r-qem3gv2n',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1614567902',owner_user_name='tempest-TestVolumeBootPattern-1614567902-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T08:51:02Z,user_data=None,user_id='b576a51181b5425aa6e44a0eb0a22803',uuid=9249872d-f151-438a-bddd-e6b41e397647,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.290 257641 DEBUG nova.network.os_vif_util [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converting VIF {"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.291 257641 DEBUG nova.network.os_vif_util [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:01:7c,bridge_name='br-int',has_traffic_filtering=True,id=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebfc0be1-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.292 257641 DEBUG os_vif [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:01:7c,bridge_name='br-int',has_traffic_filtering=True,id=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebfc0be1-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.295 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.295 257641 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebfc0be1-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.300 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.304 257641 INFO os_vif [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:01:7c,bridge_name='br-int',has_traffic_filtering=True,id=ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc,network=Network(3d510715-dc99-4870-8ae9-ff599ae1a9c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapebfc0be1-60')#033[00m
Nov 29 03:52:57 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[397581]: [NOTICE]   (397585) : haproxy version is 2.8.14-c23fe91
Nov 29 03:52:57 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[397581]: [NOTICE]   (397585) : path to executable is /usr/sbin/haproxy
Nov 29 03:52:57 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[397581]: [WARNING]  (397585) : Exiting Master process...
Nov 29 03:52:57 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[397581]: [ALERT]    (397585) : Current worker (397587) exited with code 143 (Terminated)
Nov 29 03:52:57 np0005539550 neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2[397581]: [WARNING]  (397585) : All workers exited. Exiting... (0)
Nov 29 03:52:57 np0005539550 systemd[1]: libpod-ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60.scope: Deactivated successfully.
Nov 29 03:52:57 np0005539550 podman[399990]: 2025-11-29 08:52:57.387205107 +0000 UTC m=+0.126769804 container died ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.421 257641 DEBUG nova.compute.manager [req-d0b06484-0af5-4eed-9181-706b09171594 req-7bf4493f-f427-4c17-8ba7-b98670ed5db4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-vif-unplugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.421 257641 DEBUG oslo_concurrency.lockutils [req-d0b06484-0af5-4eed-9181-706b09171594 req-7bf4493f-f427-4c17-8ba7-b98670ed5db4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.421 257641 DEBUG oslo_concurrency.lockutils [req-d0b06484-0af5-4eed-9181-706b09171594 req-7bf4493f-f427-4c17-8ba7-b98670ed5db4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.421 257641 DEBUG oslo_concurrency.lockutils [req-d0b06484-0af5-4eed-9181-706b09171594 req-7bf4493f-f427-4c17-8ba7-b98670ed5db4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.421 257641 DEBUG nova.compute.manager [req-d0b06484-0af5-4eed-9181-706b09171594 req-7bf4493f-f427-4c17-8ba7-b98670ed5db4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] No waiting events found dispatching network-vif-unplugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.422 257641 DEBUG nova.compute.manager [req-d0b06484-0af5-4eed-9181-706b09171594 req-7bf4493f-f427-4c17-8ba7-b98670ed5db4 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-vif-unplugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 03:52:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60-userdata-shm.mount: Deactivated successfully.
Nov 29 03:52:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-acd1f7795504e4c2fd61c0dd31f13341f7f204bf9db27aa8012c56510d8c67fe-merged.mount: Deactivated successfully.
Nov 29 03:52:57 np0005539550 podman[399990]: 2025-11-29 08:52:57.584776962 +0000 UTC m=+0.324341669 container cleanup ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:52:57 np0005539550 systemd[1]: libpod-conmon-ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60.scope: Deactivated successfully.
Nov 29 03:52:57 np0005539550 podman[400056]: 2025-11-29 08:52:57.66259275 +0000 UTC m=+0.051000938 container remove ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.667 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[6b82dbcb-e74b-466d-9b52-83e2775fb35b]: (4, ('Sat Nov 29 08:52:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60)\nab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60\nSat Nov 29 08:52:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 (ab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60)\nab183f757f7649cbc633eb7a44a48ccbe69bb2069283abac3c9baecaf106ba60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.669 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[3b612a72-0573-4c8b-9b8a-9eb0f2a0a48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.670 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3d510715-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.672 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:57 np0005539550 kernel: tap3d510715-d0: left promiscuous mode
Nov 29 03:52:57 np0005539550 podman[400041]: 2025-11-29 08:52:57.687562665 +0000 UTC m=+0.124571129 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.689 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.693 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[853ac9e0-9e60-47ed-8855-cd877712d973]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.696 257641 INFO nova.virt.libvirt.driver [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Deleting instance files /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647_del#033[00m
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.697 257641 INFO nova.virt.libvirt.driver [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Deletion of /var/lib/nova/instances/9249872d-f151-438a-bddd-e6b41e397647_del complete#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.704 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[45a7437c-7e29-44d2-8422-4413b4ade35e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.705 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[b1240966-92cb-4ad6-aa14-45321723fe33]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.722 267230 DEBUG oslo.privsep.daemon [-] privsep: reply[afeb0445-b3d0-447f-9142-8f78d64f0db0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930800, 'reachable_time': 33015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400089, 'error': None, 'target': 'ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.725 159091 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 03:52:57 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:52:57.725 159091 DEBUG oslo.privsep.daemon [-] privsep: reply[16ecedb8-878c-4b42-8e6d-8f9861bff644]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 03:52:57 np0005539550 systemd[1]: run-netns-ovnmeta\x2d3d510715\x2ddc99\x2d4870\x2d8ae9\x2dff599ae1a9c2.mount: Deactivated successfully.
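The mount unit name above is systemd path escaping at work: '/' becomes '-' and a literal '-' becomes '\x2d', so the unit maps back to /run/netns/ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2. systemd-escape --unescape --path does this for real; a minimal Python sketch that ignores corner cases such as leading dots:

import re

def unit_to_path(unit_name):
    """Invert systemd path escaping for a .mount unit name."""
    name = unit_name.rsplit('.', 1)[0]   # drop the ".mount" suffix
    name = name.replace('-', '/')        # plain '-' separates path components
    # '\x2d' is an escaped literal '-'; decode after restoring the separators
    name = re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), name)
    return '/' + name

print(unit_to_path(r'run-netns-ovnmeta\x2d3d510715\x2ddc99\x2d4870\x2d8ae9\x2dff599ae1a9c2.mount'))
# /run/netns/ovnmeta-3d510715-dc99-4870-8ae9-ff599ae1a9c2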
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.754 257641 INFO nova.compute.manager [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.754 257641 DEBUG oslo.service.loopingcall [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.755 257641 DEBUG nova.compute.manager [-] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 03:52:57 np0005539550 nova_compute[257631]: 2025-11-29 08:52:57.755 257641 DEBUG nova.network.neutron [-] [instance: 9249872d-f151-438a-bddd-e6b41e397647] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
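The teardown order above is deliberate: the guest is destroyed first, then network deallocation runs inside an oslo.service looping call so transient Neutron failures are retried instead of failing the whole delete (the "Waiting for function ... to return" line is loopingcall reporting that wait). A toy sketch of the RetryDecorator pattern driving it, assuming oslo.service is installed; the retry parameters here are illustrative, not nova's actual values:

from oslo_service import loopingcall

attempts = {'n': 0}

@loopingcall.RetryDecorator(max_retry_count=3, inc_sleep_time=1,
                            max_sleep_time=5, exceptions=(RuntimeError,))
def deallocate_network():
    # Fails twice, then succeeds; RetryDecorator sleeps and re-invokes the
    # function while one of the listed exceptions keeps being raised.
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise RuntimeError('neutron not ready')
    return 'deallocated'

print(deallocate_network())   # retries twice, then prints 'deallocated'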
Nov 29 03:52:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
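Those anonymous HEAD / probes arriving every couple of seconds from 192.168.122.100 and .102 have the shape of load-balancer health checks. The beast access-log line is regular enough to parse; a small sketch using the entry above as the sample:

import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:08:52:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000025s')

m = BEAST_RE.search(line)
print(m.group('client'), m.group('status'), m.group('latency'))
# 192.168.122.102 200 0.001000025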
Nov 29 03:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:52:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:52:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 245 KiB/s rd, 8.7 KiB/s wr, 160 op/s
Nov 29 03:52:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:52:58.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.478 257641 DEBUG nova.network.neutron [-] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:52:59
Nov 29 03:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'backups', 'cephfs.cephfs.meta', 'images']
Nov 29 03:52:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
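"prepared 0/10 changes" means the upmap balancer evaluated its per-round budget of 10 changes across the listed pools and found nothing worth moving: the PG distribution is already within the 0.05 max-misplaced bound. The same information is available on demand; a sketch assuming the ceph CLI and a usable keyring on the host:

import subprocess

# Prints the active mode ('upmap' in the log above) and any plan in flight.
print(subprocess.check_output(['ceph', 'balancer', 'status'], text=True))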
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.499 257641 INFO nova.compute.manager [-] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Took 1.74 seconds to deallocate network for instance.
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.513 257641 DEBUG nova.compute.manager [req-7acf3a7b-ec1f-40da-9d26-4c4620bf0753 req-aa68bd5e-970e-41b3-abd7-fcf9574f2ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.514 257641 DEBUG oslo_concurrency.lockutils [req-7acf3a7b-ec1f-40da-9d26-4c4620bf0753 req-aa68bd5e-970e-41b3-abd7-fcf9574f2ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Acquiring lock "9249872d-f151-438a-bddd-e6b41e397647-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.514 257641 DEBUG oslo_concurrency.lockutils [req-7acf3a7b-ec1f-40da-9d26-4c4620bf0753 req-aa68bd5e-970e-41b3-abd7-fcf9574f2ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.514 257641 DEBUG oslo_concurrency.lockutils [req-7acf3a7b-ec1f-40da-9d26-4c4620bf0753 req-aa68bd5e-970e-41b3-abd7-fcf9574f2ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.515 257641 DEBUG nova.compute.manager [req-7acf3a7b-ec1f-40da-9d26-4c4620bf0753 req-aa68bd5e-970e-41b3-abd7-fcf9574f2ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] No waiting events found dispatching network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.515 257641 WARNING nova.compute.manager [req-7acf3a7b-ec1f-40da-9d26-4c4620bf0753 req-aa68bd5e-970e-41b3-abd7-fcf9574f2ac0 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received unexpected event network-vif-plugged-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc for instance with vm_state active and task_state deleting.
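The Acquiring/acquired/released triplet around "<instance>-events" is oslo.concurrency's named-lock machinery, which nova uses to serialize event handling per instance; the WARNING that follows is benign here, since a vif-plugged event for an instance already being deleted simply has no registered waiter. A minimal sketch of the same lock pattern (lock name copied from the log; the function body is illustrative):

from oslo_concurrency import lockutils

@lockutils.synchronized('9249872d-f151-438a-bddd-e6b41e397647-events')
def pop_event():
    # Runs with the named lock held; this is what emits the DEBUG triplet.
    return 'network-vif-plugged'

# The decorator and the context-manager form take the same lock:
with lockutils.lock('9249872d-f151-438a-bddd-e6b41e397647-events'):
    pass

print(pop_event())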
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.553 257641 DEBUG nova.compute.manager [req-99a94b68-56e2-48bc-a299-e35967af5713 req-b11f3170-c1a8-4772-85c7-5ee319932fcd 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Received event network-vif-deleted-ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.716 257641 INFO nova.compute.manager [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Took 0.22 seconds to detach 1 volumes for instance.
Nov 29 03:52:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:52:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:52:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:52:59.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.934 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.934 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.936 257641 DEBUG nova.network.neutron [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updated VIF entry in instance network info cache for port ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.936 257641 DEBUG nova.network.neutron [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating instance_info_cache with network_info: [{"id": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "address": "fa:16:3e:c0:01:7c", "network": {"id": "3d510715-dc99-4870-8ae9-ff599ae1a9c2", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-1804740577-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b7ffcb23bac14ee49474df9aee5f7dae", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapebfc0be1-60", "ovs_interfaceid": "ebfc0be1-60fa-4986-9d8d-5eda3cebd1fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.966 257641 DEBUG oslo_concurrency.lockutils [req-42980eeb-f5ce-4469-9afd-2712f9d30a47 req-7c9fbb6a-3056-4437-ba17-e09a67ce63cb 322529fd0a444bda9d365f78b23f9b7c ecbeee0b8f0d4eb4a70c29ecb044cf2a - - default default] Releasing lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.967 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquired lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.967 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 03:52:59 np0005539550 nova_compute[257631]: 2025-11-29 08:52:59.967 257641 DEBUG nova.objects.instance [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9249872d-f151-438a-bddd-e6b41e397647 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.126 257641 DEBUG oslo_concurrency.processutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.275 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.286 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598995755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.590 257641 DEBUG oslo_concurrency.processutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
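Each resource-tracker pass shells out to ceph df (the 0.46-0.57 s subprocess round trips logged here), because with the RBD image backend the DISK_GB inventory comes from cluster capacity rather than the local filesystem. A sketch of the same call using the client id and conf path from the log; the 'stats' keys follow ceph df's standard JSON layout:

import json
import subprocess

cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
       '--conf', '/etc/ceph/ceph.conf']
stats = json.loads(subprocess.check_output(cmd))['stats']
# Raw cluster capacity/availability; compare the "19 GiB / 21 GiB avail"
# figures in the pgmap lines above.
print(stats['total_bytes'], stats['total_avail_bytes'])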
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.598 257641 DEBUG nova.compute.provider_tree [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.623 257641 DEBUG nova.scheduler.client.report [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
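How that inventory becomes schedulable capacity: placement admits allocations while used + requested <= (total - reserved) * allocation_ratio. Plugging in the numbers above gives 32 vCPUs, 7168 MB of RAM and 17.1 GB of disk:

# Inventory figures copied from the report line above.
inv = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, v in inv.items():
    print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
# VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1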
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.652 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.723 257641 DEBUG nova.network.neutron [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.752 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Releasing lock "refresh_cache-9249872d-f151-438a-bddd-e6b41e397647" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.753 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.754 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.813 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:53:00.814 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:53:00 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:53:00.815 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
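The agent reacts to the SB_Global nb_cfg bump through ovsdbapp's row-event machinery, and deliberately waits (3 seconds here) before writing its own Chassis_Private acknowledgement so a fleet of agents doesn't stampede the southbound DB. A minimal sketch of such a handler; the constructor arguments follow ovsdbapp.backend.ovs_idl.event.RowEvent (events, table, conditions), matching the values printed in the log:

from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    """Fire on any UPDATE of the single SB_Global row."""

    def __init__(self):
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        # e.g. schedule the delayed Chassis_Private write seen at 08:53:03
        print('nb_cfg moved to', row.nb_cfg)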
Nov 29 03:53:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 151 KiB/s rd, 7.1 KiB/s wr, 154 op/s
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.816 257641 INFO nova.scheduler.client.report [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Deleted allocations for instance 9249872d-f151-438a-bddd-e6b41e397647
Nov 29 03:53:00 np0005539550 nova_compute[257631]: 2025-11-29 08:53:00.911 257641 DEBUG oslo_concurrency.lockutils [None req-b8bb94ff-d5e1-4120-996b-7449040fea82 b576a51181b5425aa6e44a0eb0a22803 b7ffcb23bac14ee49474df9aee5f7dae - - default default] Lock "9249872d-f151-438a-bddd-e6b41e397647" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:00.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Nov 29 03:53:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Nov 29 03:53:01 np0005539550 ceph-mon[74435]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Nov 29 03:53:01 np0005539550 nova_compute[257631]: 2025-11-29 08:53:01.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:01 np0005539550 nova_compute[257631]: 2025-11-29 08:53:01.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
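_reclaim_queued_deletes is a no-op here because reclaim_instance_interval is at its default of 0, so deletes are immediate. Raising it in nova.conf turns deletion into soft delete: instances linger in SOFT_DELETED, restorable, until this periodic task reclaims them. A sketch of the knob (the value is illustrative):

[DEFAULT]
# >0 enables soft delete; the periodic task above then actually reclaims
# instances that have been SOFT_DELETED for longer than this many seconds.
reclaim_instance_interval = 604800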
Nov 29 03:53:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:01.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.299 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 201 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 87 KiB/s rd, 4.9 KiB/s wr, 119 op/s
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.955 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.956 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.956 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.957 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:53:02 np0005539550 nova_compute[257631]: 2025-11-29 08:53:02.957 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:53:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:03.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417641255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.477 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.756 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.758 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4065MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.759 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.759 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:53:03 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:53:03.817 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
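That transaction is the delayed acknowledgement itself: one DbSetCommand merging the neutron:ovn-metadata-sb-cfg key into Chassis_Private.external_ids. Given an ovsdbapp backend handle (sb_idl below is assumed, e.g. the agent's southbound API object), the equivalent call is roughly:

# Mirrors the logged DbSetCommand; record UUID and key copied from the log.
sb_idl.db_set(
    'Chassis_Private',
    'a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8',
    ('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),
).execute(check_error=True)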
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.840 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.841 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:53:03 np0005539550 nova_compute[257631]: 2025-11-29 08:53:03.867 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:53:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:03.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:53:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644846243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:53:04 np0005539550 nova_compute[257631]: 2025-11-29 08:53:04.436 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:53:04 np0005539550 nova_compute[257631]: 2025-11-29 08:53:04.443 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:53:04 np0005539550 nova_compute[257631]: 2025-11-29 08:53:04.468 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:53:04 np0005539550 nova_compute[257631]: 2025-11-29 08:53:04.502 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:53:04 np0005539550 nova_compute[257631]: 2025-11-29 08:53:04.502 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 164 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 2.4 KiB/s wr, 71 op/s
Nov 29 03:53:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:05.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:05 np0005539550 nova_compute[257631]: 2025-11-29 08:53:05.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:05.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:06 np0005539550 nova_compute[257631]: 2025-11-29 08:53:06.503 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 153 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Nov 29 03:53:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:07.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:07 np0005539550 nova_compute[257631]: 2025-11-29 08:53:07.302 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:07.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:53:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:53:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 1023 B/s wr, 34 op/s
Nov 29 03:53:08 np0005539550 nova_compute[257631]: 2025-11-29 08:53:08.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:09.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:53:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:53:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:53:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:53:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:53:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:09.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:10 np0005539550 nova_compute[257631]: 2025-11-29 08:53:10.281 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 1023 B/s wr, 21 op/s
Nov 29 03:53:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:11.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:12 np0005539550 nova_compute[257631]: 2025-11-29 08:53:12.261 257641 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764406377.2602422, 9249872d-f151-438a-bddd-e6b41e397647 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 03:53:12 np0005539550 nova_compute[257631]: 2025-11-29 08:53:12.262 257641 INFO nova.compute.manager [-] [instance: 9249872d-f151-438a-bddd-e6b41e397647] VM Stopped (Lifecycle Event)
Nov 29 03:53:12 np0005539550 nova_compute[257631]: 2025-11-29 08:53:12.285 257641 DEBUG nova.compute.manager [None req-f29bea39-f179-4b63-a097-106c7eb3a164 - - - - - -] [instance: 9249872d-f151-438a-bddd-e6b41e397647] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 03:53:12 np0005539550 nova_compute[257631]: 2025-11-29 08:53:12.346 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 644 B/s wr, 18 op/s
Nov 29 03:53:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:13.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Nov 29 03:53:14 np0005539550 nova_compute[257631]: 2025-11-29 08:53:14.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:15.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:15 np0005539550 nova_compute[257631]: 2025-11-29 08:53:15.286 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:15 np0005539550 podman[400170]: 2025-11-29 08:53:15.33670962 +0000 UTC m=+0.069418169 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible)
Nov 29 03:53:15 np0005539550 podman[400171]: 2025-11-29 08:53:15.337333696 +0000 UTC m=+0.068385143 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
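Those health_status=healthy records are podman's healthcheck timer running each container's configured test command (the /openstack/healthcheck mount visible in config_data). The same check can be exercised by hand with standard podman commands; container names are taken from the log, and note that older podman versions spell the inspect field .State.Healthcheck rather than .State.Health:

import subprocess

for name in ('multipathd', 'ovn_metadata_agent'):
    # Exit code 0 == healthy; the run is also recorded in container state.
    subprocess.run(['podman', 'healthcheck', 'run', name], check=False)
    status = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}', name],
        text=True).strip()
    print(name, status)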
Nov 29 03:53:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:15.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 255 B/s wr, 2 op/s
Nov 29 03:53:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:17.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:17 np0005539550 nova_compute[257631]: 2025-11-29 08:53:17.349 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:17.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Nov 29 03:53:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:53:18.983 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:53:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:53:18.983 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:53:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:53:18.983 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:53:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:19.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:20 np0005539550 nova_compute[257631]: 2025-11-29 08:53:20.288 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
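The arithmetic behind those pg targets is recoverable from the log itself: pg_target = capacity_ratio * bias * (OSD count * mon_target_pg_per_osd). With the 3 up/in OSDs from the osdmap lines above and ceph's default mon_target_pg_per_osd = 100, every logged target reproduces; the result is then quantized to a power of two and never pushed below the pool's current/minimum pg_num, which is why these tiny ratios leave the pools at 32 (or 16/1). A quick check, offered as a reconstruction of the formula rather than the mgr's exact code:

# capacity_ratio and bias copied verbatim from the pg_autoscaler lines above
pools = {
    '.mgr':               (2.0538165363856318e-05, 1.0),  # logged target 0.006161...
    'volumes':            (0.0021614147124511445,  1.0),  # logged target 0.648424...
    'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),  # logged target 0.001744...
}
osds, target_pg_per_osd = 3, 100   # 3 up/in OSDs; ceph default per-OSD target
for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * osds * target_pg_per_osd)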
Nov 29 03:53:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:21.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
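The paired radosgw entries recurring through this section are health probes: every two seconds, 192.168.122.100 and 192.168.122.102 each send an anonymous HEAD / that is answered 200 with zero bytes. A hedged parser for the beast access-line shape seen here (the field layout is inferred from these samples, not from radosgw documentation):

```python
import re

# address - user [time] "request" status bytes ... latency= (inferred layout)
BEAST = re.compile(
    r'beast: (?P<req>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s')

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:08:53:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
print(m.group('ip'), m.group('status'), m.group('latency'))
# -> 192.168.122.100 200 0.000000000
```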
Nov 29 03:53:22 np0005539550 nova_compute[257631]: 2025-11-29 08:53:22.353 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:22 np0005539550 nova_compute[257631]: 2025-11-29 08:53:22.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:53:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:23.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:23.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:25.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:25 np0005539550 nova_compute[257631]: 2025-11-29 08:53:25.291 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:25.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:27.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 22a43844-0cee-4816-93de-e5d00f30cdbc does not exist
Nov 29 03:53:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c04ba172-bcd5-4332-ae4a-29f53afb843a does not exist
Nov 29 03:53:27 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4580f775-ecbd-4be7-8768-bc1537913752 does not exist
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:53:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
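This burst of mon_commands from mgr.compute-0.pdhsqi is the cephadm module refreshing its deployment material: a minimal ceph.conf, the client.admin and client.bootstrap-osd keyrings, and the tree of destroyed OSDs. The same queries can be replayed by hand; a sketch assuming the ceph CLI and an admin keyring are present on the node:

```python
import json
import subprocess

def ceph(*args: str) -> str:
    # thin wrapper over the ceph CLI; raises on non-zero exit
    return subprocess.run(['ceph', *args], capture_output=True,
                          text=True, check=True).stdout

minimal_conf = ceph('config', 'generate-minimal-conf')
admin_keyring = ceph('auth', 'get', 'client.admin')
destroyed = json.loads(ceph('osd', 'tree', 'destroyed', '--format', 'json'))
print(len(destroyed.get('nodes', [])), 'nodes in the destroyed-OSD tree')
```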
Nov 29 03:53:27 np0005539550 nova_compute[257631]: 2025-11-29 08:53:27.355 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:27 np0005539550 podman[400538]: 2025-11-29 08:53:27.833102243 +0000 UTC m=+0.041721736 container create 96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:27 np0005539550 systemd[1]: Started libpod-conmon-96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4.scope.
Nov 29 03:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:27 np0005539550 podman[400538]: 2025-11-29 08:53:27.816300272 +0000 UTC m=+0.024919785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:27 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:53:27 np0005539550 podman[400538]: 2025-11-29 08:53:27.947474505 +0000 UTC m=+0.156094078 container init 96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Nov 29 03:53:27 np0005539550 podman[400538]: 2025-11-29 08:53:27.956896881 +0000 UTC m=+0.165516404 container start 96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:53:27 np0005539550 podman[400538]: 2025-11-29 08:53:27.962949323 +0000 UTC m=+0.171568896 container attach 96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:53:27 np0005539550 nice_stonebraker[400555]: 167 167
Nov 29 03:53:27 np0005539550 systemd[1]: libpod-96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4.scope: Deactivated successfully.
Nov 29 03:53:27 np0005539550 podman[400538]: 2025-11-29 08:53:27.967847255 +0000 UTC m=+0.176466788 container died 96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:53:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:28 np0005539550 systemd[1]: var-lib-containers-storage-overlay-979307d50a6314782758fe5b12dbf4fb930d8f3ab3c3b0407d3c0ff0d1adc40b-merged.mount: Deactivated successfully.
Nov 29 03:53:28 np0005539550 podman[400538]: 2025-11-29 08:53:28.033034397 +0000 UTC m=+0.241653900 container remove 96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_stonebraker, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:28 np0005539550 podman[400552]: 2025-11-29 08:53:28.034464973 +0000 UTC m=+0.148416236 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 03:53:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:53:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:53:28 np0005539550 systemd[1]: libpod-conmon-96c8944a6fdaff61310021a985f4af13b23efef1e114f6c66e9a0291101b2ce4.scope: Deactivated successfully.
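The create → init → start → attach → died → remove sequence for nice_stonebraker, all inside ~200 ms, is the journal footprint of a one-shot `podman run --rm`: cephadm shells out to a throwaway ceph container, reads one line of stdout ("167 167" here, plausibly the ceph uid/gid), and lets podman tear everything down. A hedged reproduction, with the image digest taken from the log and the command purely illustrative:

```python
import subprocess

IMAGE = ('quay.io/ceph/ceph@sha256:'
         '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')
# --rm yields exactly the create/init/start/attach/died/remove lifecycle
out = subprocess.run(
    ['podman', 'run', '--rm', IMAGE, 'stat', '-c', '%u %g', '/var/lib/ceph'],
    capture_output=True, text=True, check=True)
print(out.stdout.strip())  # "167 167" on a stock ceph image (ceph uid/gid)
```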
Nov 29 03:53:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:28 np0005539550 podman[400605]: 2025-11-29 08:53:28.270206263 +0000 UTC m=+0.071633704 container create 46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:28 np0005539550 systemd[1]: Started libpod-conmon-46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8.scope.
Nov 29 03:53:28 np0005539550 podman[400605]: 2025-11-29 08:53:28.24531039 +0000 UTC m=+0.046737911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:28 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:53:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1b20a2d432610fccf423cc57877feddf446fea7e458a4eba120d0ecbd56742/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1b20a2d432610fccf423cc57877feddf446fea7e458a4eba120d0ecbd56742/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1b20a2d432610fccf423cc57877feddf446fea7e458a4eba120d0ecbd56742/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1b20a2d432610fccf423cc57877feddf446fea7e458a4eba120d0ecbd56742/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:28 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b1b20a2d432610fccf423cc57877feddf446fea7e458a4eba120d0ecbd56742/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:28 np0005539550 podman[400605]: 2025-11-29 08:53:28.389704094 +0000 UTC m=+0.191131605 container init 46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:53:28 np0005539550 podman[400605]: 2025-11-29 08:53:28.404025733 +0000 UTC m=+0.205453204 container start 46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ramanujan, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 03:53:28 np0005539550 podman[400605]: 2025-11-29 08:53:28.409155781 +0000 UTC m=+0.210583312 container attach 46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ramanujan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 03:53:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:29.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:29 np0005539550 vigilant_ramanujan[400622]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:53:29 np0005539550 vigilant_ramanujan[400622]: --> relative data size: 1.0
Nov 29 03:53:29 np0005539550 vigilant_ramanujan[400622]: --> All data devices are unavailable
Nov 29 03:53:29 np0005539550 systemd[1]: libpod-46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8.scope: Deactivated successfully.
Nov 29 03:53:29 np0005539550 podman[400605]: 2025-11-29 08:53:29.266708316 +0000 UTC m=+1.068135787 container died 46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ramanujan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-9b1b20a2d432610fccf423cc57877feddf446fea7e458a4eba120d0ecbd56742-merged.mount: Deactivated successfully.
Nov 29 03:53:29 np0005539550 podman[400605]: 2025-11-29 08:53:29.378219207 +0000 UTC m=+1.179646668 container remove 46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ramanujan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:29 np0005539550 systemd[1]: libpod-conmon-46c3acb3ccdfef82d611ed25dd44970671a29d6f1f3fb45baa20d890c0e196e8.scope: Deactivated successfully.
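vigilant_ramanujan is the same one-shot pattern used as a device probe: its three output lines read like a `ceph-volume lvm batch` dry run that was handed one LVM device, found it already consumed, and reported "All data devices are unavailable" without creating anything. A sketch of such a probe, under the assumption that the candidate device is the LV named later in this log and that batch's --report/--format flags behave as recalled:

```python
import subprocess

# Dry-run report: would ceph-volume create any OSDs from this device?
probe = subprocess.run(
    ['ceph-volume', 'lvm', 'batch', '--report', '--format', 'json',
     '/dev/ceph_vg0/ceph_lv0'],
    capture_output=True, text=True)
print(probe.returncode, probe.stdout or probe.stderr)
```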
Nov 29 03:53:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:30 np0005539550 podman[400793]: 2025-11-29 08:53:30.210787726 +0000 UTC m=+0.053099820 container create f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chaum, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:53:30 np0005539550 systemd[1]: Started libpod-conmon-f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e.scope.
Nov 29 03:53:30 np0005539550 podman[400793]: 2025-11-29 08:53:30.188335154 +0000 UTC m=+0.030647298 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:53:30 np0005539550 nova_compute[257631]: 2025-11-29 08:53:30.294 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:30 np0005539550 podman[400793]: 2025-11-29 08:53:30.311950607 +0000 UTC m=+0.154262721 container init f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:53:30 np0005539550 podman[400793]: 2025-11-29 08:53:30.318999984 +0000 UTC m=+0.161312108 container start f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chaum, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 03:53:30 np0005539550 podman[400793]: 2025-11-29 08:53:30.323135487 +0000 UTC m=+0.165447591 container attach f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:53:30 np0005539550 clever_chaum[400809]: 167 167
Nov 29 03:53:30 np0005539550 systemd[1]: libpod-f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e.scope: Deactivated successfully.
Nov 29 03:53:30 np0005539550 conmon[400809]: conmon f414ffd7b2142bd2c0ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e.scope/container/memory.events
Nov 29 03:53:30 np0005539550 podman[400793]: 2025-11-29 08:53:30.326815509 +0000 UTC m=+0.169127633 container died f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:53:30 np0005539550 systemd[1]: var-lib-containers-storage-overlay-80c11e6bea1d632b6b6d2ed34c56b1ef112e62a15d7e03712e5cdbe8f295ad77-merged.mount: Deactivated successfully.
Nov 29 03:53:30 np0005539550 podman[400793]: 2025-11-29 08:53:30.376704858 +0000 UTC m=+0.219016982 container remove f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:53:30 np0005539550 systemd[1]: libpod-conmon-f414ffd7b2142bd2c0adc13f914a8d0a204247397e7b871361b312200519d99e.scope: Deactivated successfully.
Nov 29 03:53:30 np0005539550 podman[400832]: 2025-11-29 08:53:30.577153695 +0000 UTC m=+0.039974291 container create 751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 29 03:53:30 np0005539550 systemd[1]: Started libpod-conmon-751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a.scope.
Nov 29 03:53:30 np0005539550 podman[400832]: 2025-11-29 08:53:30.56014743 +0000 UTC m=+0.022968066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:53:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b351b01302be665e55421c2b1de87ee0ee941cd8d503a6258c07f33f804894bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b351b01302be665e55421c2b1de87ee0ee941cd8d503a6258c07f33f804894bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b351b01302be665e55421c2b1de87ee0ee941cd8d503a6258c07f33f804894bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b351b01302be665e55421c2b1de87ee0ee941cd8d503a6258c07f33f804894bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:30 np0005539550 podman[400832]: 2025-11-29 08:53:30.678135053 +0000 UTC m=+0.140955659 container init 751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:30 np0005539550 podman[400832]: 2025-11-29 08:53:30.695131698 +0000 UTC m=+0.157952294 container start 751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 03:53:30 np0005539550 podman[400832]: 2025-11-29 08:53:30.699082997 +0000 UTC m=+0.161903703 container attach 751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:31.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:31 np0005539550 silly_swirles[400849]: {
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:    "0": [
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:        {
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "devices": [
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "/dev/loop3"
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            ],
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "lv_name": "ceph_lv0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "lv_size": "7511998464",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "name": "ceph_lv0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "tags": {
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.cluster_name": "ceph",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.crush_device_class": "",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.encrypted": "0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.osd_id": "0",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.type": "block",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:                "ceph.vdo": "0"
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            },
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "type": "block",
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:            "vg_name": "ceph_vg0"
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:        }
Nov 29 03:53:31 np0005539550 silly_swirles[400849]:    ]
Nov 29 03:53:31 np0005539550 silly_swirles[400849]: }
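The JSON that silly_swirles prints has the shape of `ceph-volume lvm list --format json`: top-level keys are OSD ids, each mapping to a list of LV records whose `tags` carry the cluster and OSD fsids. A minimal consumer of that shape, using a literal trimmed from the block above:

```python
import json

raw = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
                 "tags": {"ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"},
                 "type": "block", "vg_name": "ceph_vg0"}]}'''
for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(osd_id, lv['lv_path'], lv['tags']['ceph.osd_fsid'])
# -> 0 /dev/ceph_vg0/ceph_lv0 5dd67027-4f06-4800-93bd-47ed1a74c5e6
```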
Nov 29 03:53:31 np0005539550 systemd[1]: libpod-751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a.scope: Deactivated successfully.
Nov 29 03:53:31 np0005539550 podman[400832]: 2025-11-29 08:53:31.509912593 +0000 UTC m=+0.972733199 container died 751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 03:53:31 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b351b01302be665e55421c2b1de87ee0ee941cd8d503a6258c07f33f804894bb-merged.mount: Deactivated successfully.
Nov 29 03:53:31 np0005539550 podman[400832]: 2025-11-29 08:53:31.578085499 +0000 UTC m=+1.040906135 container remove 751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_swirles, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 03:53:31 np0005539550 systemd[1]: libpod-conmon-751883d90054740dd254b13ca2cf0401d920255486ad21e790818b0804e65f2a.scope: Deactivated successfully.
Nov 29 03:53:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:31.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:32 np0005539550 podman[401009]: 2025-11-29 08:53:32.330471141 +0000 UTC m=+0.038430473 container create 047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_swartz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:53:32 np0005539550 systemd[1]: Started libpod-conmon-047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e.scope.
Nov 29 03:53:32 np0005539550 podman[401009]: 2025-11-29 08:53:32.313364613 +0000 UTC m=+0.021323975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:32 np0005539550 nova_compute[257631]: 2025-11-29 08:53:32.413 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:53:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:53:32 np0005539550 podman[401009]: 2025-11-29 08:53:32.456743442 +0000 UTC m=+0.164702804 container init 047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_swartz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:53:32 np0005539550 podman[401009]: 2025-11-29 08:53:32.463722807 +0000 UTC m=+0.171682149 container start 047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:53:32 np0005539550 podman[401009]: 2025-11-29 08:53:32.466702831 +0000 UTC m=+0.174662173 container attach 047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_swartz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:53:32 np0005539550 objective_swartz[401025]: 167 167
Nov 29 03:53:32 np0005539550 systemd[1]: libpod-047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e.scope: Deactivated successfully.
Nov 29 03:53:32 np0005539550 podman[401009]: 2025-11-29 08:53:32.46983384 +0000 UTC m=+0.177793192 container died 047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_swartz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:53:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-cad3676b3de8907a594827821538b631cf71926111d0718fa8853d0a0cbf3f61-merged.mount: Deactivated successfully.
Nov 29 03:53:32 np0005539550 podman[401009]: 2025-11-29 08:53:32.507633646 +0000 UTC m=+0.215592988 container remove 047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_swartz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:53:32 np0005539550 systemd[1]: libpod-conmon-047277debb19bf262f3157784ea426813207f6e9b9916c4074eb2eeeff6f6a8e.scope: Deactivated successfully.
Nov 29 03:53:32 np0005539550 podman[401048]: 2025-11-29 08:53:32.736760901 +0000 UTC m=+0.080204809 container create d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:53:32 np0005539550 systemd[1]: Started libpod-conmon-d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e.scope.
Nov 29 03:53:32 np0005539550 podman[401048]: 2025-11-29 08:53:32.702949475 +0000 UTC m=+0.046393443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:53:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:53:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468c22d6c98c4a488b27670b477e06cbb831d9cb048c90e8b628fdec8ed9e486/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468c22d6c98c4a488b27670b477e06cbb831d9cb048c90e8b628fdec8ed9e486/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468c22d6c98c4a488b27670b477e06cbb831d9cb048c90e8b628fdec8ed9e486/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:32 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/468c22d6c98c4a488b27670b477e06cbb831d9cb048c90e8b628fdec8ed9e486/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:53:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:32 np0005539550 podman[401048]: 2025-11-29 08:53:32.845480142 +0000 UTC m=+0.188924120 container init d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:53:32 np0005539550 podman[401048]: 2025-11-29 08:53:32.864496608 +0000 UTC m=+0.207940496 container start d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 03:53:32 np0005539550 podman[401048]: 2025-11-29 08:53:32.868323464 +0000 UTC m=+0.211767452 container attach d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:53:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:33.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
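The recurring radosgw triplets (starting new request / req done / beast access line) are anonymous HEAD / probes arriving roughly every 2 s from 192.168.122.100 and .102, which looks like a load-balancer health check (an assumption; the log does not name the prober). A minimal sketch for pulling client, status, and latency out of the beast lines, with the field layout inferred only from the samples in this log:

    import re

    # Field layout assumed from the beast samples above.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<when>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:53:33.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.search(line)
    print(m.group('client'), m.group('status'), float(m.group('latency')))
    # -> 192.168.122.100 200 0.001000025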
Nov 29 03:53:33 np0005539550 infallible_wing[401065]: {
Nov 29 03:53:33 np0005539550 infallible_wing[401065]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:53:33 np0005539550 infallible_wing[401065]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:53:33 np0005539550 infallible_wing[401065]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:53:33 np0005539550 infallible_wing[401065]:        "osd_id": 0,
Nov 29 03:53:33 np0005539550 infallible_wing[401065]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:53:33 np0005539550 infallible_wing[401065]:        "type": "bluestore"
Nov 29 03:53:33 np0005539550 infallible_wing[401065]:    }
Nov 29 03:53:33 np0005539550 infallible_wing[401065]: }
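The JSON emitted by infallible_wing is an OSD listing keyed by OSD UUID, presumably cephadm running a ceph-volume-style device scan in a one-shot container (an assumption; the container name is podman-generated and the command itself is not logged). It parses directly:

    import json

    # Payload copied verbatim from the infallible_wing lines above.
    payload = '''
    {
       "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
           "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
           "device": "/dev/mapper/ceph_vg0-ceph_lv0",
           "osd_id": 0,
           "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
           "type": "bluestore"
       }
    }
    '''
    for osd_uuid, osd in json.loads(payload).items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")
    # -> osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0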
Nov 29 03:53:33 np0005539550 systemd[1]: libpod-d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e.scope: Deactivated successfully.
Nov 29 03:53:33 np0005539550 podman[401048]: 2025-11-29 08:53:33.839627825 +0000 UTC m=+1.183071763 container died d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:53:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-468c22d6c98c4a488b27670b477e06cbb831d9cb048c90e8b628fdec8ed9e486-merged.mount: Deactivated successfully.
Nov 29 03:53:33 np0005539550 podman[401048]: 2025-11-29 08:53:33.914801306 +0000 UTC m=+1.258245194 container remove d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:53:33 np0005539550 systemd[1]: libpod-conmon-d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e.scope: Deactivated successfully.
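Taken together, the podman events above trace the complete lifecycle of the one-shot container d4923bbbb774…: image pull, create, init, start, attach, died, remove, with the libpod-conmon scope started and deactivated around it. A sketch that reconstructs such a timeline from journal text; the event layout is assumed from these samples, the parenthesized label lists are ignored, and hashes are shortened only in the output:

    import re
    from collections import defaultdict

    # Matches the podman event lines seen above.
    EVENT = re.compile(r'(?P<time>\d\d:\d\d:\d\d\.\d+) \+0000 UTC m=\+\S+ '
                       r'(?:container|image) (?P<event>\w+) (?P<id>[0-9a-f]{12,})')

    journal_lines = [
        "podman[401048]: 2025-11-29 08:53:32.702949475 +0000 UTC m=+0.046393443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1",
        "podman[401048]: 2025-11-29 08:53:32.736760901 +0000 UTC m=+0.080204809 container create d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e",
        "podman[401048]: 2025-11-29 08:53:32.845480142 +0000 UTC m=+0.188924120 container init d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e",
        "podman[401048]: 2025-11-29 08:53:32.864496608 +0000 UTC m=+0.207940496 container start d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e",
        "podman[401048]: 2025-11-29 08:53:32.868323464 +0000 UTC m=+0.211767452 container attach d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e",
        "podman[401048]: 2025-11-29 08:53:33.839627825 +0000 UTC m=+1.183071763 container died d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e",
        "podman[401048]: 2025-11-29 08:53:33.914801306 +0000 UTC m=+1.258245194 container remove d4923bbbb774f5006107dec91540731932d9abdbafef4fddf1e3f3a3db731d0e",
    ]

    timeline = defaultdict(list)
    for line in journal_lines:
        m = EVENT.search(line)
        if m:
            timeline[m.group('id')[:12]].append((m.group('time'), m.group('event')))

    for short_id, events in sorted(timeline.items()):
        print(short_id, '->', ' -> '.join(e for _, e in sorted(events)))
    # 0f5473a1e726 -> pull
    # d4923bbbb774 -> create -> init -> start -> attach -> died -> remove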
Nov 29 03:53:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:53:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:33.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:53:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:34 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d482214b-f54f-4c91-9db3-ca6e61a6828c does not exist
Nov 29 03:53:34 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 39b53445-77f2-4f39-b51a-bae520830152 does not exist
Nov 29 03:53:34 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 26ac260f-6a30-4c31-8f97-3e9d79a461eb does not exist
Nov 29 03:53:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:34 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:53:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:35.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:35 np0005539550 nova_compute[257631]: 2025-11-29 08:53:35.296 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:35.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
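The recurring _set_new_cache_sizes lines appear to be the monitor's memory autotuner splitting its cache budget between incremental osdmaps, full osdmaps, and the RocksDB (kv) cache; that interpretation is an assumption, but the logged allocations can at least be sanity-checked against the budget:

    # Values copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    parts = {'inc_alloc': 343932928, 'full_alloc': 348127232, 'kv_alloc': 318767104}

    for name, alloc in parts.items():
        print(f"{name}: {alloc / cache_size:.1%}")
    print(f"total: {sum(parts.values()) / cache_size:.1%}")
    # inc_alloc: 33.7%, full_alloc: 34.1%, kv_alloc: 31.3%; total: 99.1%
    # (allocations land just under the budget, consistent with rounding
    #  down to fixed-size chunks)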
Nov 29 03:53:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:37.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:37 np0005539550 nova_compute[257631]: 2025-11-29 08:53:37.415 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:37.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:39.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:39.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:40 np0005539550 nova_compute[257631]: 2025-11-29 08:53:40.301 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:41.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:41.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:42 np0005539550 nova_compute[257631]: 2025-11-29 08:53:42.419 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:43.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:44.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:45.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:45 np0005539550 nova_compute[257631]: 2025-11-29 08:53:45.336 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:46.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:46 np0005539550 podman[401205]: 2025-11-29 08:53:46.39403304 +0000 UTC m=+0.104848976 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 03:53:46 np0005539550 podman[401206]: 2025-11-29 08:53:46.410609585 +0000 UTC m=+0.121425681 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
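The health_status=healthy events record podman running each container's configured healthcheck; per the config_data shown above, both containers mount /var/lib/openstack/healthchecks/<name> at /openstack and run /openstack/healthcheck as the test. The same check can be driven by hand; a sketch using the podman CLI from Python, with container names taken from the log:

    import subprocess

    # 'podman healthcheck run NAME' executes the container's configured test
    # command and returns exit status 0 when the check passes.
    for name in ('multipathd', 'ovn_metadata_agent'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')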
Nov 29 03:53:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:47.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:47 np0005539550 nova_compute[257631]: 2025-11-29 08:53:47.422 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:47 np0005539550 ovn_controller[148680]: 2025-11-29T08:53:47Z|01044|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 29 03:53:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:48.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:49.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:50.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:50 np0005539550 nova_compute[257631]: 2025-11-29 08:53:50.337 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:51.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:52 np0005539550 nova_compute[257631]: 2025-11-29 08:53:52.477 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:53.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:55 np0005539550 nova_compute[257631]: 2025-11-29 08:53:55.338 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:53:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:57.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:57 np0005539550 nova_compute[257631]: 2025-11-29 08:53:57.480 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:53:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:53:58.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:53:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:53:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:53:58 np0005539550 podman[401299]: 2025-11-29 08:53:58.346445837 +0000 UTC m=+0.088753343 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:53:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:53:58 np0005539550 nova_compute[257631]: 2025-11-29 08:53:58.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:53:58 np0005539550 nova_compute[257631]: 2025-11-29 08:53:58.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:53:58 np0005539550 nova_compute[257631]: 2025-11-29 08:53:58.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:53:58 np0005539550 nova_compute[257631]: 2025-11-29 08:53:58.984 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:53:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:53:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:53:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:53:59.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:53:59
Nov 29 03:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', 'images', 'volumes', '.mgr']
Nov 29 03:53:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
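Here the balancer module runs its periodic upmap optimization across the listed pools and prepares 0 of at most 10 changes: with all 305 PGs active+clean there is nothing to move. Its state can also be queried from the CLI; a sketch assuming the stock 'ceph balancer status' command with the usual '-f json' formatting:

    import json
    import subprocess

    # Standard mgr command; '-f json' requests machine-readable output.
    out = subprocess.check_output(['ceph', 'balancer', 'status', '-f', 'json'])
    status = json.loads(out)
    print(status.get('active'), status.get('mode'))
    # expected on this cluster: True upmap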
Nov 29 03:53:59 np0005539550 nova_compute[257631]: 2025-11-29 08:53:59.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:01.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.003000076s ======
Nov 29 03:54:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:01.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
Nov 29 03:54:01 np0005539550 nova_compute[257631]: 2025-11-29 08:54:01.273 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:01 np0005539550 nova_compute[257631]: 2025-11-29 08:54:01.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:02 np0005539550 nova_compute[257631]: 2025-11-29 08:54:02.482 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:02 np0005539550 nova_compute[257631]: 2025-11-29 08:54:02.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:02 np0005539550 nova_compute[257631]: 2025-11-29 08:54:02.988 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:02 np0005539550 nova_compute[257631]: 2025-11-29 08:54:02.989 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:02 np0005539550 nova_compute[257631]: 2025-11-29 08:54:02.989 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:54:02 np0005539550 nova_compute[257631]: 2025-11-29 08:54:02.990 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:54:02 np0005539550 nova_compute[257631]: 2025-11-29 08:54:02.991 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:03.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241420780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.460 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
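For the resource audit, nova shells out to ceph df (the DEBUG lines here pair with the client.openstack dispatch on the monitor) to size its RBD-backed storage. The call is reproducible verbatim; a sketch, with the JSON key names assumed from the usual ceph df output rather than confirmed by this log:

    import json
    import subprocess

    # Exactly the command nova logs above.
    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    gib = 1024 ** 3
    print(f"avail {stats['total_avail_bytes'] / gib:.1f} GiB "
          f"of {stats['total_bytes'] / gib:.1f} GiB")
    # should line up with the pgmap lines: 19 GiB / 21 GiB avail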
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.689 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.691 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4085MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.691 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.692 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.942 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.943 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:54:03 np0005539550 nova_compute[257631]: 2025-11-29 08:54:03.959 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:54:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:54:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2772437377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:54:04 np0005539550 nova_compute[257631]: 2025-11-29 08:54:04.421 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:54:04 np0005539550 nova_compute[257631]: 2025-11-29 08:54:04.428 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:54:04 np0005539550 nova_compute[257631]: 2025-11-29 08:54:04.448 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:54:04 np0005539550 nova_compute[257631]: 2025-11-29 08:54:04.450 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:54:04 np0005539550 nova_compute[257631]: 2025-11-29 08:54:04.451 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
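The inventory nova reports to placement turns physical capacity into schedulable capacity as (total - reserved) x allocation_ratio. Recomputing from the inventory data logged above shows what this host can accept: 32 vCPUs, 7168 MB of RAM, and 17.1 GB of disk:

    # Inventory copied from the 'Inventory has not changed' line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 17.1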
Nov 29 03:54:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:05.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:05 np0005539550 nova_compute[257631]: 2025-11-29 08:54:05.452 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:05 np0005539550 nova_compute[257631]: 2025-11-29 08:54:05.453 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:05 np0005539550 nova_compute[257631]: 2025-11-29 08:54:05.453 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:05 np0005539550 nova_compute[257631]: 2025-11-29 08:54:05.453 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:54:06 np0005539550 nova_compute[257631]: 2025-11-29 08:54:06.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:07.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:07 np0005539550 nova_compute[257631]: 2025-11-29 08:54:07.484 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:54:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:54:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:09.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:54:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:54:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:54:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:54:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
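The rbd_support module periodically reloads its per-pool mirror-snapshot and trash-purge schedules; every load_schedules line here ends at 'start_after=' with nothing following, i.e. no schedules are defined. The same schedules are visible from the rbd CLI; a sketch assuming the stock schedule subcommands, with pool names taken from the log:

    import subprocess

    # Pool names from the load_schedules lines above.
    for pool in ('vms', 'volumes', 'backups', 'images'):
        subprocess.run(['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '--pool', pool])
        subprocess.run(['rbd', 'trash', 'purge', 'schedule', 'ls', '--pool', pool])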
Nov 29 03:54:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:09.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:10 np0005539550 nova_compute[257631]: 2025-11-29 08:54:10.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:11.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:11 np0005539550 nova_compute[257631]: 2025-11-29 08:54:11.276 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:11.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:12 np0005539550 nova_compute[257631]: 2025-11-29 08:54:12.487 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:13.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:13 np0005539550 systemd-logind[788]: New session 75 of user zuul.
Nov 29 03:54:13 np0005539550 systemd[1]: Started Session 75 of User zuul.
Nov 29 03:54:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:13.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:14 np0005539550 nova_compute[257631]: 2025-11-29 08:54:14.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:54:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:15.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:15.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:16 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45218 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:16 np0005539550 nova_compute[257631]: 2025-11-29 08:54:16.277 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:54:16 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37008 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:16 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45227 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:16 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37014 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:17.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:17.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 03:54:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195149578' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 03:54:17 np0005539550 podman[401658]: 2025-11-29 08:54:17.381149482 +0000 UTC m=+0.103615984 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:54:17 np0005539550 podman[401657]: 2025-11-29 08:54:17.39225682 +0000 UTC m=+0.118962819 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
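
Both podman health_status records above report healthy; per their config_data, the configured test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/<name>. The same check can be triggered on demand with `podman healthcheck run NAME`, which exits 0 when the container is healthy. A small wrapper, purely illustrative:

    import subprocess

    def container_healthy(name: str) -> bool:
        """Run the container's configured healthcheck; exit code 0 means healthy."""
        rc = subprocess.run(
            ["podman", "healthcheck", "run", name],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode
        return rc == 0

    for name in ("ovn_metadata_agent", "multipathd"):
        print(name, "healthy" if container_healthy(name) else "unhealthy")
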
Nov 29 03:54:17 np0005539550 nova_compute[257631]: 2025-11-29 08:54:17.490 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:17 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.44890 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:18 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.44896 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:54:18.984 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:54:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:54:18.985 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:54:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:54:18.986 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
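
The three ovn_metadata_agent lines above are oslo.concurrency's standard acquire/acquired/released DEBUG trace around ProcessMonitor._check_child_processes (lockutils.py:404/409/423 is the library's `inner` wrapper). Application code gets this trace by decorating the guarded function; a minimal sketch, assuming oslo.concurrency is installed as it is on this node:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Critical section; entering and leaving it emits the same
        # "Acquiring lock" / "acquired" / "released" DEBUG lines as above.
        pass

    check_child_processes()
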
Nov 29 03:54:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:19.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:19.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
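
Each pg_autoscaler pair above multiplies the pool's share of raw capacity by its bias and a root PG budget to get the raw "pg target", then quantizes to a power of two with a per-pool floor. The logged numbers are consistent with a budget of about 300 (0.0021614 * 1.0 * 300 = 0.648 for 'volumes'; 1.454e-06 * 4.0 * 300 = 0.00174 for 'cephfs.cephfs.meta') and floors of 32 by default, 16 for the CephFS metadata pool, and 1 for '.mgr'. Both the budget and the floors are inferences from these lines, not values read from this cluster's configuration; a sketch of the arithmetic:

    def next_pow2(x: float) -> int:
        """Smallest power of two >= x (floor of 1)."""
        n = 1
        while n < x:
            n *= 2
        return n

    def quantized_target(capacity_ratio: float, bias: float,
                         root_budget: float = 300.0, pg_num_min: int = 32) -> int:
        raw = capacity_ratio * bias * root_budget   # the "pg target" in the log
        return max(pg_num_min, next_pow2(raw))      # the "quantized to" value

    print(quantized_target(0.0021614147124511445, 1.0))                   # volumes -> 32
    print(quantized_target(1.4540294062907128e-06, 4.0, pg_num_min=16))   # cephfs.cephfs.meta -> 16
    print(quantized_target(2.0538165363856318e-05, 1.0, pg_num_min=1))    # .mgr -> 1
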
Nov 29 03:54:20 np0005539550 ovs-vsctl[401755]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
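
The ovs-vsctl ERR above is harmless in itself: something ran `ovs-vsctl get Open_vSwitch . other_config:dpdk-init` and the key is simply not set on this host. Adding --if-exists makes the same query return empty output with exit status 0 instead of logging an error. A tolerant probe, sketched in Python:

    import subprocess
    from typing import Optional

    def ovs_other_config(key: str) -> Optional[str]:
        """Return other_config:<key> from the Open_vSwitch table, or None if unset."""
        out = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
             "other_config:" + key],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out.strip('"') or None

    print(ovs_other_config("dpdk-init"))  # None here, matching the ERR above
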
Nov 29 03:54:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:21.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:21 np0005539550 nova_compute[257631]: 2025-11-29 08:54:21.278 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:21.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:21 np0005539550 virtqemud[256287]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 03:54:21 np0005539550 virtqemud[256287]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 03:54:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:21 np0005539550 virtqemud[256287]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
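
The three virtqemud failures above are connection attempts to the read-only sockets of the modular libvirt daemons (virtnetworkd, virtnwfilterd, virtstoraged); on a compute node that only drives QEMU those socket units are often simply not activated, and virtqemud carries on without them. A quick existence/connect probe over the same paths, as a sketch:

    import os
    import socket

    SOCKS = [
        "/var/run/libvirt/virtnetworkd-sock-ro",
        "/var/run/libvirt/virtnwfilterd-sock-ro",
        "/var/run/libvirt/virtstoraged-sock-ro",
    ]

    for path in SOCKS:
        if not os.path.exists(path):
            print(path, "missing (socket unit not active)")
            continue
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            try:
                s.connect(path)
                print(path, "connectable")
            except OSError as exc:
                print(path, "exists but not connectable:", exc)
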
Nov 29 03:54:21 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45254 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:22 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45260 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:22 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: cache status {prefix=cache status} (starting...)
Nov 29 03:54:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 03:54:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 03:54:22 np0005539550 nova_compute[257631]: 2025-11-29 08:54:22.493 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:22 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: client ls {prefix=client ls} (starting...)
Nov 29 03:54:22 np0005539550 lvm[402117]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 03:54:22 np0005539550 lvm[402117]: VG ceph_vg0 finished
Nov 29 03:54:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:22 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37038 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:23.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:23 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:23 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:23 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:23.110+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
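
The (95) Operation not supported reply above just means the requested command lives in a mgr module (prometheus) that is not loaded; the reply text itself names the fix. A check-then-enable round trip, sketched with the admin CLI (assumes a reachable cluster and admin keyring; enabling prometheus also starts its metrics exporter):

    import json
    import subprocess

    def mgr_module_enabled(module: str) -> bool:
        out = subprocess.run(
            ["ceph", "mgr", "module", "ls", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return module in json.loads(out).get("enabled_modules", [])

    if not mgr_module_enabled("prometheus"):
        subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)
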
Nov 29 03:54:23 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 03:54:23 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 03:54:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:23.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 03:54:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046719137' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 03:54:23 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37053 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:23 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 03:54:23 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 03:54:23 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 03:54:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:54:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837126215' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:54:23 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 03:54:24 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45320 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1652903836' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37083 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:24 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:24.189+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.44923 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45335 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: ops {prefix=ops} (starting...)
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172454413' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.44938 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/509020408' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 03:54:24 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 03:54:24 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37122 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:25 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: session ls {prefix=session ls} (starting...)
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497575223' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 03:54:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:25.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:25 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: status {prefix=status} (starting...)
Nov 29 03:54:25 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37134 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:25.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:25 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.44965 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:25 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:25 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:25.320+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120820191' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 03:54:25 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45380 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:25 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 03:54:25 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:25.739+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1347958471' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 03:54:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2249883875' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.44983 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.195700) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466195850, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 2219, "num_deletes": 254, "total_data_size": 3941546, "memory_usage": 4003936, "flush_reason": "Manual Compaction"}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466220047, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 3872042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68421, "largest_seqno": 70639, "table_properties": {"data_size": 3861851, "index_size": 6492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21680, "raw_average_key_size": 20, "raw_value_size": 3841300, "raw_average_value_size": 3707, "num_data_blocks": 282, "num_entries": 1036, "num_filter_entries": 1036, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406252, "oldest_key_time": 1764406252, "file_creation_time": 1764406466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 24427 microseconds, and 13375 cpu microseconds.
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.220154) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 3872042 bytes OK
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.220183) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.222595) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.222631) EVENT_LOG_v1 {"time_micros": 1764406466222623, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.222653) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 3932407, prev total WAL file size 3932407, number of live WAL files 2.
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.224710) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(3781KB)], [155(11MB)]
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466224798, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 16358726, "oldest_snapshot_seqno": -1}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1896130725' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2604986550' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 03:54:26 np0005539550 nova_compute[257631]: 2025-11-29 08:54:26.280 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 10699 keys, 14433350 bytes, temperature: kUnknown
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466311027, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 14433350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14363164, "index_size": 42361, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26757, "raw_key_size": 281790, "raw_average_key_size": 26, "raw_value_size": 14174362, "raw_average_value_size": 1324, "num_data_blocks": 1616, "num_entries": 10699, "num_filter_entries": 10699, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.311289) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 14433350 bytes
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.312521) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.6 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.9 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 11224, records dropped: 525 output_compression: NoCompression
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.312537) EVENT_LOG_v1 {"time_micros": 1764406466312529, "job": 96, "event": "compaction_finished", "compaction_time_micros": 86285, "compaction_time_cpu_micros": 35947, "output_level": 6, "num_output_files": 1, "total_output_size": 14433350, "num_input_records": 11224, "num_output_records": 10699, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466313157, "job": 96, "event": "table_file_deletion", "file_number": 157}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406466315156, "job": 96, "event": "table_file_deletion", "file_number": 155}
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.224559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.315191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.315196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.315198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.315199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:54:26.315201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
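
RocksDB narrates the flush and compaction above twice: as prose and as machine-readable EVENT_LOG_v1 records (flush_started, table_file_creation, flush_finished, compaction_started, compaction_finished, table_file_deletion) whose payload after the "EVENT_LOG_v1 " marker is plain JSON. A sketch that pulls those events out of captured journal text:

    import json
    import sys

    MARK = "EVENT_LOG_v1 "

    # Usage sketch, e.g.:  journalctl -t ceph-mon | python3 rocksdb_events.py
    # (the script name is hypothetical)
    for line in sys.stdin:
        idx = line.find(MARK)
        if idx < 0:
            continue
        ev = json.loads(line[idx + len(MARK):])
        print(ev["event"], ev.get("job"), ev.get("time_micros"))
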
Nov 29 03:54:26 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45001 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45404 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37173 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:26.642+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 03:54:26 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2333399874' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:26 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45416 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 03:54:26 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954974487' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 03:54:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:27.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/451275160' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45428 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:27.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37209 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:27 np0005539550 nova_compute[257631]: 2025-11-29 08:54:27.495 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3119828697' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45440 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37215 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 03:54:27 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1096647099' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 03:54:27 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45049 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:27 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:27.998+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45452 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37230 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45461 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 03:54:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1840916141' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2cbfa5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337289216 unmapped: 42893312 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db50000 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2b3e0c00 session 0x556d2dec8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2adf8400 session 0x556d2ab8c960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2ab33000 session 0x556d2b523c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2b522960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2dcd6400 session 0x556d2da68000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db50000 session 0x556d2dfc9e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 43073536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2ab33000 session 0x556d2a023a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 43073536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3840724 data_alloc: 251658240 data_used: 40755200
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 heartbeat osd_stat(store_statfs(0x1a8dfa000/0x0/0x1bfc00000, data 0x4af5ebf/0x4cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [])
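
The heartbeat lines embed a store_statfs tuple in hex. Reading it as BlueStore's store_statfs_t fields in order (available / internally reserved / total, then data stored / allocated) is an assumption on my part, but it fits: 0x1a8dfa000/0x0/0x1bfc00000 decodes to roughly 6.6 GiB free of 7.0 GiB, and three such OSDs match the 19 GiB / 21 GiB the pgmap lines report. A decoding sketch:

    import re

    STATFS_RE = re.compile(
        r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+), "
        r"data (0x[0-9a-f]+)/(0x[0-9a-f]+)"
    )

    def gib(n: int) -> str:
        return "%.2f GiB" % (n / 2**30)

    line = ("osd.0 348 heartbeat osd_stat(store_statfs(0x1a8dfa000/0x0/0x1bfc00000, "
            "data 0x4af5ebf/0x4cc3000, compress 0x0/0x0/0x0, omap 0x63a, "
            "meta 0x11d3f9c6), peers [1,2] op hist [])")

    m = STATFS_RE.search(line)
    avail, reserved, total, stored, allocated = (int(x, 16) for x in m.groups())
    print("avail", gib(avail), "of", gib(total), "; stored", gib(stored))
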
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2adf8400 session 0x556d2d81b860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 43073536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2ad31680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 338157568 unmapped: 42024960 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 heartbeat osd_stat(store_statfs(0x1a8df9000/0x0/0x1bfc00000, data 0x4af5ecf/0x4cc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337108992 unmapped: 43073536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2ad52000 session 0x556d2cbfaf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2b627400 session 0x556d2c9e3e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 44056576 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabfe00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 44056576 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3783416 data_alloc: 251658240 data_used: 41545728
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 44056576 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 44056576 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.726502419s of 10.823136330s, submitted: 107
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2dcd6400 session 0x556d2dd32f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2d68a800 session 0x556d2c872000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db43000 session 0x556d2dec8f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336125952 unmapped: 44056576 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 heartbeat osd_stat(store_statfs(0x1a99e2000/0x0/0x1bfc00000, data 0x430eebf/0x44dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db48400 session 0x556d2c9e2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 heartbeat osd_stat(store_statfs(0x1a99e5000/0x0/0x1bfc00000, data 0x430ee3d/0x44d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789102 data_alloc: 251658240 data_used: 42598400
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 heartbeat osd_stat(store_statfs(0x1a99e5000/0x0/0x1bfc00000, data 0x430ee3d/0x44d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3789934 data_alloc: 251658240 data_used: 42610688
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336134144 unmapped: 44048384 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2dd71000 session 0x556d2c9832c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2ab33000 session 0x556d2dec83c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2d68a800 session 0x556d2d901680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db43000 session 0x556d2d9043c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db48400 session 0x556d2c72ba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2ab33000 session 0x556d2c872b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 43974656 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 heartbeat osd_stat(store_statfs(0x1a8ed3000/0x0/0x1bfc00000, data 0x4e20e3d/0x4feb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 43974656 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 43974656 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.500480652s of 12.812397003s, submitted: 25
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 336207872 unmapped: 43974656 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3871316 data_alloc: 251658240 data_used: 42614784
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2d68a800 session 0x556d2de8fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db43000 session 0x556d2dabe960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2db48400 session 0x556d2c983a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2dd71000 session 0x556d2b5583c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337862656 unmapped: 42319872 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 heartbeat osd_stat(store_statfs(0x1a7f22000/0x0/0x1bfc00000, data 0x5dd1e3d/0x5f9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337870848 unmapped: 42311680 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337756160 unmapped: 42426368 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 heartbeat osd_stat(store_statfs(0x1a7f0f000/0x0/0x1bfc00000, data 0x5de2b83/0x5faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 42418176 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 heartbeat osd_stat(store_statfs(0x1a7f0f000/0x0/0x1bfc00000, data 0x5de2b83/0x5faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 ms_handle_reset con 0x556d2d68a800 session 0x556d2dabef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 42418176 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4001897 data_alloc: 251658240 data_used: 42909696
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 ms_handle_reset con 0x556d2db48400 session 0x556d2c9e2000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 ms_handle_reset con 0x556d2dcd6400 session 0x556d2dfc8960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 337764352 unmapped: 42418176 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 heartbeat osd_stat(store_statfs(0x1a7f0f000/0x0/0x1bfc00000, data 0x5de2b83/0x5faf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 338493440 unmapped: 41689088 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 ms_handle_reset con 0x556d2db41000 session 0x556d2dabfc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 340148224 unmapped: 40034304 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 heartbeat osd_stat(store_statfs(0x1a7f0e000/0x0/0x1bfc00000, data 0x5de2b93/0x5fb0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x11d3f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 39886848 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 handle_osd_map epochs [350,350], i have 350, src has [1,350]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 349 handle_osd_map epochs [350,350], i have 350, src has [1,350]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 350 ms_handle_reset con 0x556d2ab33000 session 0x556d2cad72c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 340295680 unmapped: 39886848 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4092070 data_alloc: 268435456 data_used: 53297152
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.543397427s of 10.097340584s, submitted: 116
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 351 ms_handle_reset con 0x556d2d68a800 session 0x556d2dec8780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 351 ms_handle_reset con 0x556d2dcd6400 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 351 ms_handle_reset con 0x556d2db48400 session 0x556d2b563860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 351 ms_handle_reset con 0x556d3589e000 session 0x556d2ab94000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 345079808 unmapped: 35102720 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 352 ms_handle_reset con 0x556d2ab33000 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 35594240 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2dcd6400 session 0x556d2d10ed20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2d68a800 session 0x556d2cbfa5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2db48400 session 0x556d2d904780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 344588288 unmapped: 35594240 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d3589e000 session 0x556d2d9041e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2ab33000 session 0x556d2d290d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346685440 unmapped: 33497088 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2d68a800 session 0x556d2cbfa3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 heartbeat osd_stat(store_statfs(0x1a6464000/0x0/0x1bfc00000, data 0x66e5181/0x68b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [0,0,0,0,0,0,2])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346718208 unmapped: 33464320 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4113358 data_alloc: 268435456 data_used: 53506048
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346742784 unmapped: 33439744 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d300dc400 session 0x556d2c872000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2db48400 session 0x556d2da692c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2dcd6400 session 0x556d2ad30000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2ab33000 session 0x556d2c982f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2d68a800 session 0x556d2d900b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 ms_handle_reset con 0x556d2db48400 session 0x556d2dce5e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346750976 unmapped: 33431552 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 32915456 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 347348992 unmapped: 32833536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 heartbeat osd_stat(store_statfs(0x1a64e5000/0x0/0x1bfc00000, data 0x6667171/0x6839000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 347348992 unmapped: 32833536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4180266 data_alloc: 268435456 data_used: 54099968
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.346259117s of 10.630858421s, submitted: 102
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 347848704 unmapped: 32333824 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d300dc400 session 0x556d2dce4000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348315648 unmapped: 31866880 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a6416000/0x0/0x1bfc00000, data 0x6733d9a/0x6907000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 31858688 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2debcc00 session 0x556d2cfefc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a6411000/0x0/0x1bfc00000, data 0x6738d9a/0x690c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348323840 unmapped: 31858688 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a6411000/0x0/0x1bfc00000, data 0x6738d9a/0x690c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 31850496 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4192872 data_alloc: 268435456 data_used: 54104064
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348332032 unmapped: 31850496 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2cad70e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348340224 unmapped: 31842304 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2d68a800 session 0x556d2d9054a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2ca30b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 349528064 unmapped: 30654464 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d300dc400 session 0x556d2ab88d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db43000 session 0x556d2cbfba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 349544448 unmapped: 30638080 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2adfac00 session 0x556d2cbfb2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a63ef000/0x0/0x1bfc00000, data 0x675adaa/0x692f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 349962240 unmapped: 30220288 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4210754 data_alloc: 268435456 data_used: 58662912
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.935436726s of 10.123374939s, submitted: 66
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d300dc400 session 0x556d2d0bd0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a5faa000/0x0/0x1bfc00000, data 0x6b9edd3/0x6d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351379456 unmapped: 28803072 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c915c00 session 0x556d2ad31680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351084544 unmapped: 29097984 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a6630000/0x0/0x1bfc00000, data 0x5c87e0c/0x5e5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2b4d2000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351092736 unmapped: 29089792 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351092736 unmapped: 29089792 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2cfd6780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2deba000 session 0x556d2ca31e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a6ebf000/0x0/0x1bfc00000, data 0x5c88de9/0x5e5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 28917760 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4094074 data_alloc: 268435456 data_used: 52629504
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2deba000 session 0x556d2c9821e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 28917760 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2b8cd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2d10e780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351264768 unmapped: 28917760 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2c872000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2b563860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 33857536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 33857536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2dce5860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 33857536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4042026 data_alloc: 268435456 data_used: 48828416
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c915c00 session 0x556d2c982f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a707c000/0x0/0x1bfc00000, data 0x5acdde9/0x5ca2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346324992 unmapped: 33857536 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d300dc400 session 0x556d2d0bde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.819220543s of 10.708851814s, submitted: 68
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2b563c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 33890304 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346292224 unmapped: 33890304 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2b5583c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2deba000 session 0x556d2dabe960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a7057000/0x0/0x1bfc00000, data 0x5af1df9/0x5cc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2d1cdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 33775616 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 33775616 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4012101 data_alloc: 268435456 data_used: 49479680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 33775616 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 33775616 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a7312000/0x0/0x1bfc00000, data 0x56add87/0x5881000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 33775616 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 33775616 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2e1a2800 session 0x556d2d1ea5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabfc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346406912 unmapped: 33775616 heap: 380182528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4012101 data_alloc: 268435456 data_used: 49479680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2d10f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2dec90e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2deba000 session 0x556d2de8e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2cc43800 session 0x556d2d103c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabf0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2dabfc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2b563c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346644480 unmapped: 37740544 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030832291s of 10.306634903s, submitted: 89
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 37732352 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 37732352 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a67d5000/0x0/0x1bfc00000, data 0x6373df9/0x6549000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346652672 unmapped: 37732352 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a6433000/0x0/0x1bfc00000, data 0x670fdf9/0x68e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 346349568 unmapped: 38035456 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4161047 data_alloc: 268435456 data_used: 50044928
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 347267072 unmapped: 37117952 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2deba000 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2adfa800 session 0x556d2dce5860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2b563860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2c872000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2b8cd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2deba000 session 0x556d2cfd6780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2debb800 session 0x556d2b4d2000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2ad31680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2cbfb2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348700672 unmapped: 35684352 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a604e000/0x0/0x1bfc00000, data 0x6af9e09/0x6cd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2d904b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 35676160 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 348708864 unmapped: 35676160 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2cbfa960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c915c00 session 0x556d2ab8c960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a6029000/0x0/0x1bfc00000, data 0x6b1de2c/0x6cf5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352567296 unmapped: 31817728 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4299018 data_alloc: 268435456 data_used: 62689280
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2b56a780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2b5234a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2dce41e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2c873a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2b627000 session 0x556d2b559a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c8a6800 session 0x556d2b56ad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2dd32f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 31809536 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2d904780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2ac08f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2c9e3e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 31809536 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9e3a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.926271439s of 11.271222115s, submitted: 93
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2acaba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 31809536 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352575488 unmapped: 31809536 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2b951000 session 0x556d2ca30000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 357040128 unmapped: 27344896 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4374191 data_alloc: 285212672 data_used: 70701056
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a5e1d000/0x0/0x1bfc00000, data 0x6d27e5f/0x6f01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 22962176 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c8a6800 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47400 session 0x556d2cfef0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2acaa1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 361422848 unmapped: 22962176 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2b951000 session 0x556d2d0bd0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 361562112 unmapped: 22822912 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c8a6800 session 0x556d2de8fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 361603072 unmapped: 22781952 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db49000 session 0x556d2d905e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 361603072 unmapped: 22781952 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4408458 data_alloc: 285212672 data_used: 74493952
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db47800 session 0x556d2cbfb4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2b951000 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c8a6800 session 0x556d2cbfb680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2da69c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2d0bde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2cad70e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db48400 session 0x556d2d904d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab33000 session 0x556d2df88d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369557504 unmapped: 14827520 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a50be000/0x0/0x1bfc00000, data 0x7a78eb1/0x7c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [0,0,0,0,3])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2b951000 session 0x556d2d904780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2ab34c00 session 0x556d2dec83c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369565696 unmapped: 14819328 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.309836388s of 10.019254684s, submitted: 177
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 heartbeat osd_stat(store_statfs(0x1a50cc000/0x0/0x1bfc00000, data 0x7a54eb1/0x7c2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370352128 unmapped: 14032896 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371744768 unmapped: 12640256 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2db49000 session 0x556d2c873a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 ms_handle_reset con 0x556d2c8a6800 session 0x556d2ab95860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2ab33000 session 0x556d2df88960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 365625344 unmapped: 18759680 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4380409 data_alloc: 285212672 data_used: 66576384
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2b4e8800 session 0x556d2dce4000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2debec00 session 0x556d2b454d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2dd71000 session 0x556d2dce4960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 363372544 unmapped: 21012480 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce5860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2b4e8800 session 0x556d2d901680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 heartbeat osd_stat(store_statfs(0x1a5b46000/0x0/0x1bfc00000, data 0x6ffebb6/0x71d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x12edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 362840064 unmapped: 21544960 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2deba000 session 0x556d2ab88d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2debec00 session 0x556d2df89680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 heartbeat osd_stat(store_statfs(0x1a6c93000/0x0/0x1bfc00000, data 0x4ee4b44/0x50bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354279424 unmapped: 30105600 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 30146560 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2adf8400 session 0x556d2cbfb0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2df88780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 ms_handle_reset con 0x556d2adf8400 session 0x556d2dabe1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 30146560 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3975229 data_alloc: 251658240 data_used: 45010944
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 30146560 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354238464 unmapped: 30146560 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 heartbeat osd_stat(store_statfs(0x1a6c93000/0x0/0x1bfc00000, data 0x4ee4b11/0x50b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.917956352s of 10.072295189s, submitted: 240
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354189312 unmapped: 30195712 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2ab33000 session 0x556d2b9f0000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 heartbeat osd_stat(store_statfs(0x1a7855000/0x0/0x1bfc00000, data 0x4ee4b11/0x50b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 30212096 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2dec2c00 session 0x556d2de9f860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2deba000 session 0x556d2b5125a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2ab33000 session 0x556d2d0bd4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2adf8400 session 0x556d2b9f10e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2db51c00 session 0x556d2cbfa5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2b9f1e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354172928 unmapped: 30212096 heap: 384385024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3860727 data_alloc: 251658240 data_used: 36802560
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 ms_handle_reset con 0x556d2dec2c00 session 0x556d2b8ccb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 356 handle_osd_map epochs [357,357], i have 357, src has [1,357]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 ms_handle_reset con 0x556d2ab33000 session 0x556d2cbfa1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 350609408 unmapped: 45842432 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 ms_handle_reset con 0x556d2adf8400 session 0x556d2cbfa960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2c983c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 ms_handle_reset con 0x556d2db51c00 session 0x556d2b559a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 ms_handle_reset con 0x556d2b4e8800 session 0x556d2b56af00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 46735360 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 46735360 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 46735360 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 ms_handle_reset con 0x556d2ab33000 session 0x556d2b56a780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 heartbeat osd_stat(store_statfs(0x1a78d7000/0x0/0x1bfc00000, data 0x4e60448/0x5036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 349716480 unmapped: 46735360 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906319 data_alloc: 251658240 data_used: 36806656
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 ms_handle_reset con 0x556d2adf8400 session 0x556d2b56b680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 356851712 unmapped: 39600128 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 357 handle_osd_map epochs [358,358], i have 358, src has [1,358]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 356810752 unmapped: 39641088 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2ab892c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7080000/0x0/0x1bfc00000, data 0x56b6051/0x588d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [0,0,0,0,0,1,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.893651962s of 10.257349014s, submitted: 169
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353615872 unmapped: 42835968 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2ab8c960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353771520 unmapped: 42680320 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 42131456 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3993574 data_alloc: 251658240 data_used: 37081088
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 42237952 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a6f4a000/0x0/0x1bfc00000, data 0x57eb061/0x59c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 42237952 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab34c00 session 0x556d2dec81e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b951000 session 0x556d2df88780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 42229760 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354222080 unmapped: 42229760 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a6f4a000/0x0/0x1bfc00000, data 0x57eb051/0x59c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,6])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 43343872 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3904484 data_alloc: 251658240 data_used: 36855808
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7afe000/0x0/0x1bfc00000, data 0x4c39051/0x4e10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353107968 unmapped: 43343872 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7afe000/0x0/0x1bfc00000, data 0x4c39051/0x4e10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1,2])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 43212800 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7add000/0x0/0x1bfc00000, data 0x4c5a051/0x4e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 43212800 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.946757793s of 10.685288429s, submitted: 52
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 43212800 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7adc000/0x0/0x1bfc00000, data 0x4c5b051/0x4e32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37245 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 43212800 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3906144 data_alloc: 251658240 data_used: 36855808
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353239040 unmapped: 43212800 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45470 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7add000/0x0/0x1bfc00000, data 0x4c5b041/0x4e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [0,0,0,0,0,0,1,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2c872780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3913367 data_alloc: 251658240 data_used: 37965824
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7ad2000/0x0/0x1bfc00000, data 0x4c66041/0x4e3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x132ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.496929646s of 10.376410484s, submitted: 22
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353247232 unmapped: 43204608 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adf8400 session 0x556d2b50e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353255424 unmapped: 43196416 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3915709 data_alloc: 251658240 data_used: 37978112
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2dabe1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353468416 unmapped: 42983424 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2dfc8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2d10f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adf8400 session 0x556d2dfc92c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b951000 session 0x556d2d10ef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2c873a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a81f3000/0x0/0x1bfc00000, data 0x558406a/0x575b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351723520 unmapped: 44728320 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a87ea000/0x0/0x1bfc00000, data 0x4c83093/0x4e59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c8d6c00 session 0x556d2d0bd2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351739904 unmapped: 44711936 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2da68b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 44703744 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adf8400 session 0x556d2cfd74a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b951000 session 0x556d2dce52c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2ca314a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351748096 unmapped: 44703744 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3887589 data_alloc: 251658240 data_used: 33914880
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c8a6800 session 0x556d2cad6780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1ccf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2b454d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adf8400 session 0x556d2dfc81e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 44793856 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48400 session 0x556d2ab94000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2d18b400 session 0x556d2b522f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351657984 unmapped: 44793856 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351600640 unmapped: 44851200 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8ae8000/0x0/0x1bfc00000, data 0x4c91070/0x4e66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.928224564s of 10.046876907s, submitted: 88
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2da681e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 351617024 unmapped: 44834816 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adf8400 session 0x556d2b454d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dbbe800 session 0x556d2b56af00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352043008 unmapped: 44408832 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3962563 data_alloc: 251658240 data_used: 42315776
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 44392448 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 44392448 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 44392448 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8a00000/0x0/0x1bfc00000, data 0x4d79070/0x4f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 44392448 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 44392448 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3963203 data_alloc: 251658240 data_used: 42356736
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 44392448 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352059392 unmapped: 44392448 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 44384256 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8a00000/0x0/0x1bfc00000, data 0x4d79070/0x4f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [0,0,0,0,0,0,0,3,21,7])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.607605934s of 10.002253532s, submitted: 104
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354902016 unmapped: 41549824 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 41844736 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4012011 data_alloc: 251658240 data_used: 42405888
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2de9e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354615296 unmapped: 41836544 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 41820160 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2a80b000 session 0x556d2de8ef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354320384 unmapped: 42131456 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2a80b000 session 0x556d2df881e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce41e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a83d1000/0x0/0x1bfc00000, data 0x53a6080/0x557c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 42123264 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a83d1000/0x0/0x1bfc00000, data 0x53a6080/0x557c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2b8cc960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 42139648 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4122971 data_alloc: 251658240 data_used: 44703744
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 42139648 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 42139648 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 42139648 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a790e000/0x0/0x1bfc00000, data 0x5e69080/0x603f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 42139648 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354312192 unmapped: 42139648 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4122971 data_alloc: 251658240 data_used: 44703744
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48800 session 0x556d2cbfbc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dcd7000 session 0x556d2dec94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2a80b000 session 0x556d2b56b2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9832c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.025803566s of 12.161929131s, submitted: 80
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355180544 unmapped: 41271296 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2ad31680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48800 session 0x556d2da68960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b56ad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d10ed20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2a80b000 session 0x556d2d1ea5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 42270720 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 42270720 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b951000 session 0x556d2de9fe00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2df88960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a76fa000/0x0/0x1bfc00000, data 0x607d08e/0x6254000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354181120 unmapped: 42270720 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b951c00 session 0x556d2b56ba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354189312 unmapped: 42262528 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4219939 data_alloc: 268435456 data_used: 55308288
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354680832 unmapped: 41771008 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355336192 unmapped: 41115648 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7328000/0x0/0x1bfc00000, data 0x644708e/0x661e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355344384 unmapped: 41107456 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7328000/0x0/0x1bfc00000, data 0x644708e/0x661e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354328576 unmapped: 42123264 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2acaa3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354336768 unmapped: 42115072 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4086517 data_alloc: 268435456 data_used: 47022080
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354213888 unmapped: 42237952 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.148336411s of 10.982084274s, submitted: 121
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354115584 unmapped: 42336256 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354951168 unmapped: 41500672 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354492416 unmapped: 41959424 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7e62000/0x0/0x1bfc00000, data 0x591602c/0x5aec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2cbfb860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2d900f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 41869312 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4055289 data_alloc: 268435456 data_used: 48091136
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2cbfa5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354590720 unmapped: 41861120 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8101000/0x0/0x1bfc00000, data 0x503dfca/0x5213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 41852928 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48400 session 0x556d2c974960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8101000/0x0/0x1bfc00000, data 0x503dfca/0x5213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a873b000/0x0/0x1bfc00000, data 0x503dfca/0x5213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 41844736 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 41697280 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b9f14a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48400 session 0x556d2cbfb0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2dfc9c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dec6000 session 0x556d2ac08f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2d9054a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 41689088 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4057119 data_alloc: 268435456 data_used: 48091136
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8719000/0x0/0x1bfc00000, data 0x505ffca/0x5235000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 41689088 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 354762752 unmapped: 41689088 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.293984413s of 10.618840218s, submitted: 85
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355999744 unmapped: 40452096 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 356016128 unmapped: 40435712 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8151000/0x0/0x1bfc00000, data 0x561ffca/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 39321600 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4119715 data_alloc: 251658240 data_used: 48717824
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 357130240 unmapped: 39321600 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 39362560 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 357089280 unmapped: 39362560 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 43409408 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2cbfbc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2acaa3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2dfc8960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 43409408 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3953185 data_alloc: 234881024 data_used: 37863424
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8cb7000/0x0/0x1bfc00000, data 0x4ac1fca/0x4c97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 43409408 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353042432 unmapped: 43409408 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.950197220s of 10.087405205s, submitted: 82
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2d1021e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353206272 unmapped: 43245568 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353091584 unmapped: 43360256 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8c92000/0x0/0x1bfc00000, data 0x4ae5fed/0x4cbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 43606016 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3980298 data_alloc: 251658240 data_used: 40701952
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8c92000/0x0/0x1bfc00000, data 0x4ae5fed/0x4cbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352845824 unmapped: 43606016 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 43589632 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 43589632 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 352862208 unmapped: 43589632 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8c92000/0x0/0x1bfc00000, data 0x4ae5fed/0x4cbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 43417600 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3983664 data_alloc: 251658240 data_used: 40706048
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2dabe960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 43417600 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8bc0000/0x0/0x1bfc00000, data 0x4bb7fed/0x4d8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 43417600 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 43417600 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.030935287s of 10.926078796s, submitted: 19
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353304576 unmapped: 43147264 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8b7e000/0x0/0x1bfc00000, data 0x4bf9fed/0x4dd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x122af9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353034240 unmapped: 43417600 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3988700 data_alloc: 251658240 data_used: 40759296
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353787904 unmapped: 42663936 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 353796096 unmapped: 42655744 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355975168 unmapped: 40476672 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2b9f0000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355975168 unmapped: 40476672 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a716b000/0x0/0x1bfc00000, data 0x546cfed/0x5643000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 356843520 unmapped: 39608320 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4068226 data_alloc: 251658240 data_used: 40873984
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 356990976 unmapped: 39460864 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9e21e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2dce5e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 356990976 unmapped: 39460864 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a7149000/0x0/0x1bfc00000, data 0x548efed/0x5665000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355278848 unmapped: 41172992 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.612613201s of 10.247053146s, submitted: 80
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355287040 unmapped: 41164800 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48800 session 0x556d2c9e23c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2ca314a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355483648 unmapped: 40968192 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4017930 data_alloc: 234881024 data_used: 37576704
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2cbfbe00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 41009152 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 41009152 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355442688 unmapped: 41009152 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48800 session 0x556d2dabf0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a74b3000/0x0/0x1bfc00000, data 0x5124fed/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 41000960 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dce43c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 40992768 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4018511 data_alloc: 234881024 data_used: 37580800
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a74b3000/0x0/0x1bfc00000, data 0x5124fed/0x52fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 41050112 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8241000/0x0/0x1bfc00000, data 0x4396f8b/0x456c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 41041920 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2c9752c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 41041920 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 41041920 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debe800 session 0x556d2cfef0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.966403008s of 11.471721649s, submitted: 82
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2cbfb0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a8242000/0x0/0x1bfc00000, data 0x4396f68/0x456b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 41041920 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3929397 data_alloc: 251658240 data_used: 40067072
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2d102d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 41033728 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 41033728 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a858b000/0x0/0x1bfc00000, data 0x404ef68/0x4223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 41033728 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 41025536 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 41025536 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3881727 data_alloc: 234881024 data_used: 36827136
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a858b000/0x0/0x1bfc00000, data 0x404ef68/0x4223000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dfc85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 41025536 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db48800 session 0x556d2d901a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9e2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b522f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 356491264 unmapped: 39960576 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2b9f14a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debe800 session 0x556d2de9e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2de8ef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2dce41e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2b8cc960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 358563840 unmapped: 37888000 heap: 396451840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2da68960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2b4d94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debe800 session 0x556d2c9e30e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2c72b0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2ac19a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 41148416 heap: 400654336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 40869888 heap: 400654336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4062060 data_alloc: 234881024 data_used: 37126144
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.011410713s of 10.509191513s, submitted: 107
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a6e79000/0x0/0x1bfc00000, data 0x5760f68/0x5935000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a6e79000/0x0/0x1bfc00000, data 0x5760f68/0x5935000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,1,0,2])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 40919040 heap: 400654336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2dec81e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2ab95a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2da69e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2c666b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 40976384 heap: 400654336 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2d0b5c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c9e30e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2b4d94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dcd6800 session 0x556d2da68960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dce41e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2de8ef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370515968 unmapped: 38019072 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2b558960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2de9fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dcd6800 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dcd6800 session 0x556d2ab95860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2d9050e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b9f10e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2de9e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db4e000 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2dd32f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cfd65a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 48750592 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dcd6800 session 0x556d2d102d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2cfef0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d300eac00 session 0x556d2c9752c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2db51c00 session 0x556d2cbfb4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9e25a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2df883c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dcd6800 session 0x556d2b5585a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2b56b2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4271470 data_alloc: 234881024 data_used: 37117952
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 48906240 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2dce4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a5328000/0x0/0x1bfc00000, data 0x72aefea/0x7486000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dce43c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 48873472 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 48857088 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 364380160 unmapped: 44154880 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368902144 unmapped: 39632896 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b950800 session 0x556d2c983860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dd70400 session 0x556d2d901e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4477277 data_alloc: 268435456 data_used: 63299584
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368902144 unmapped: 39632896 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.346193314s of 10.113143921s, submitted: 80
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dec0000 session 0x556d2de8e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368918528 unmapped: 39616512 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dec0000 session 0x556d2de9ed20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a5304000/0x0/0x1bfc00000, data 0x72d101d/0x74aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 39624704 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dd70400 session 0x556d2c983c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2debec00 session 0x556d2de9f860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368910336 unmapped: 39624704 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374128640 unmapped: 34406400 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4578071 data_alloc: 268435456 data_used: 74043392
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376766464 unmapped: 31768576 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376766464 unmapped: 31768576 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a52d7000/0x0/0x1bfc00000, data 0x72fb03d/0x74d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376766464 unmapped: 31768576 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377389056 unmapped: 31145984 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381444096 unmapped: 27090944 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4680496 data_alloc: 285212672 data_used: 74498048
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381976576 unmapped: 26558464 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.913048744s of 10.114478111s, submitted: 127
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a458f000/0x0/0x1bfc00000, data 0x803c03d/0x8217000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,1,0,0,0,0,8])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382427136 unmapped: 26107904 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a4575000/0x0/0x1bfc00000, data 0x805003d/0x822b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384286720 unmapped: 24248320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a4575000/0x0/0x1bfc00000, data 0x805003d/0x822b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,5,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384294912 unmapped: 24240128 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c947000 session 0x556d2ca314a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2deb3000 session 0x556d2b563c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384311296 unmapped: 24223744 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a457a000/0x0/0x1bfc00000, data 0x805703d/0x8232000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,4])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4725244 data_alloc: 285212672 data_used: 76087296
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 385114112 unmapped: 23420928 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a426a000/0x0/0x1bfc00000, data 0x836a02d/0x8544000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 385712128 unmapped: 22822912 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380370944 unmapped: 28164096 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380379136 unmapped: 28155904 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c947000 session 0x556d2a023a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adfa400 session 0x556d2d10eb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2b5583c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380272640 unmapped: 28262400 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a5466000/0x0/0x1bfc00000, data 0x716ff98/0x7347000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d32e9a800 session 0x556d2d904d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dd70400 session 0x556d2cad6780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adfa400 session 0x556d2da68d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2b9f1e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c947000 session 0x556d2df89680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4545238 data_alloc: 268435456 data_used: 60837888
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dd70400 session 0x556d2ab892c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381771776 unmapped: 26763264 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d32e9a800 session 0x556d2d900f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adfa400 session 0x556d2c982000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.789817333s of 10.055060387s, submitted: 183
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381771776 unmapped: 26763264 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2acab860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c947000 session 0x556d2d10e780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381771776 unmapped: 26763264 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2dd70400 session 0x556d2b50fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2deb3000 session 0x556d2c8723c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381771776 unmapped: 26763264 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2adfa400 session 0x556d2dabdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381771776 unmapped: 26763264 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 heartbeat osd_stat(store_statfs(0x1a4f1d000/0x0/0x1bfc00000, data 0x76b601a/0x7890000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4547625 data_alloc: 268435456 data_used: 60837888
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381788160 unmapped: 26746880 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2b626400 session 0x556d2d1ccf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381968384 unmapped: 26566656 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381968384 unmapped: 26566656 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381984768 unmapped: 26550272 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381984768 unmapped: 26550272 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 handle_osd_map epochs [359,359], i have 359, src has [1,359]
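The two handle_osd_map lines above show osd.0 stepping from epoch 358 to 359 as the peer advertises maps [1,359]. The lag is just the difference between the source's newest epoch and the epoch the OSD already has, which a one-line parse recovers:

```python
import re

# Sketch: compute map lag from a handle_osd_map line; "lag" here is purely
# (newest epoch advertised by src) - (epoch this OSD has).
LINE = "osd.0 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]"
first, last, have, src_lo, src_hi = map(
    int,
    re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]",
              LINE).groups(),
)
print(f"received epochs {first}..{last}; lag before applying: {src_hi - have}")
```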
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 358 ms_handle_reset con 0x556d2c947000 session 0x556d2dfc9a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a4eef000/0x0/0x1bfc00000, data 0x76e1d60/0x78be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4593769 data_alloc: 268435456 data_used: 63873024
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 ms_handle_reset con 0x556d2dd70400 session 0x556d2da68780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381984768 unmapped: 26550272 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 ms_handle_reset con 0x556d2debec00 session 0x556d2dabcd20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.134259701s of 10.176558495s, submitted: 18
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381984768 unmapped: 26550272 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 ms_handle_reset con 0x556d2b626400 session 0x556d2cfd74a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a4eef000/0x0/0x1bfc00000, data 0x76e1d60/0x78be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1344f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382001152 unmapped: 26533888 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382001152 unmapped: 26533888 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 ms_handle_reset con 0x556d2adfa400 session 0x556d2b4d2000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 26517504 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4508022 data_alloc: 268435456 data_used: 58544128
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 26517504 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 26517504 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 ms_handle_reset con 0x556d2b34a400 session 0x556d2dfc9680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 26517504 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a504d000/0x0/0x1bfc00000, data 0x7177cbb/0x7350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382017536 unmapped: 26517504 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 ms_handle_reset con 0x556d2dcda000 session 0x556d2dfc85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382025728 unmapped: 26509312 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 heartbeat osd_stat(store_statfs(0x1a504d000/0x0/0x1bfc00000, data 0x7177cbb/0x7350000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4509496 data_alloc: 268435456 data_used: 58544128
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382025728 unmapped: 26509312 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382025728 unmapped: 26509312 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382025728 unmapped: 26509312 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.826899529s of 11.879174232s, submitted: 46
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 382025728 unmapped: 26509312 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 heartbeat osd_stat(store_statfs(0x1a504a000/0x0/0x1bfc00000, data 0x7179a32/0x7353000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 ms_handle_reset con 0x556d2debb000 session 0x556d2d10e000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380190720 unmapped: 28344320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4518123 data_alloc: 268435456 data_used: 58552320
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 ms_handle_reset con 0x556d2b34a400 session 0x556d2b5623c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380190720 unmapped: 28344320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380190720 unmapped: 28344320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380190720 unmapped: 28344320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 heartbeat osd_stat(store_statfs(0x1a5023000/0x0/0x1bfc00000, data 0x719ea94/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380190720 unmapped: 28344320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 heartbeat osd_stat(store_statfs(0x1a5023000/0x0/0x1bfc00000, data 0x719ea94/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380190720 unmapped: 28344320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 heartbeat osd_stat(store_statfs(0x1a5023000/0x0/0x1bfc00000, data 0x719ea94/0x7379000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4531651 data_alloc: 268435456 data_used: 59445248
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380190720 unmapped: 28344320 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 360 handle_osd_map epochs [361,361], i have 361, src has [1,361]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380297216 unmapped: 28237824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380297216 unmapped: 28237824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2dd33e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380297216 unmapped: 28237824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.706720352s of 11.431601524s, submitted: 44
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2dcda000 session 0x556d2dabe780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380297216 unmapped: 28237824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 heartbeat osd_stat(store_statfs(0x1a501d000/0x0/0x1bfc00000, data 0x71a469d/0x7380000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b951800 session 0x556d2c9834a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2deb2c00 session 0x556d2c9e3860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2c975860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b34a400 session 0x556d2b4d3860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b951800 session 0x556d2b512b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2dcda000 session 0x556d2d9052c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2cc43800 session 0x556d2dabfe00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2b56ba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b34a400 session 0x556d2de8f860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4566401 data_alloc: 268435456 data_used: 59449344
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380297216 unmapped: 28237824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c872f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b950800 session 0x556d2df88b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380297216 unmapped: 28237824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 heartbeat osd_stat(store_statfs(0x1a4c97000/0x0/0x1bfc00000, data 0x752a6ad/0x7707000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b951800 session 0x556d2da685a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372588544 unmapped: 35946496 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375357440 unmapped: 33177600 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375382016 unmapped: 33153024 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 heartbeat osd_stat(store_statfs(0x1a5adc000/0x0/0x1bfc00000, data 0x66e069d/0x68bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,0,0,0,5,3,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4390142 data_alloc: 251658240 data_used: 46559232
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2df88780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375595008 unmapped: 32940032 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5d000 session 0x556d2acaa000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375595008 unmapped: 32940032 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2db51c00 session 0x556d2ac09680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2dcd6800 session 0x556d2dabf0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2db43000 session 0x556d2de8e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2db43000 session 0x556d2d901e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2d1ccf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375472128 unmapped: 33062912 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2dcd6800 session 0x556d2ac08f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375472128 unmapped: 33062912 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b34a400 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 heartbeat osd_stat(store_statfs(0x1a5aa2000/0x0/0x1bfc00000, data 0x672065e/0x68fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375414784 unmapped: 33120256 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.125961304s of 10.496381760s, submitted: 180
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b950800 session 0x556d2c9e23c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4409278 data_alloc: 251658240 data_used: 49905664
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2d10e000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375422976 unmapped: 33112064 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 heartbeat osd_stat(store_statfs(0x1a5aa2000/0x0/0x1bfc00000, data 0x672065e/0x68fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375422976 unmapped: 33112064 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b34a400 session 0x556d2b5581e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2db43000 session 0x556d2b522f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 heartbeat osd_stat(store_statfs(0x1a5aa2000/0x0/0x1bfc00000, data 0x672065e/0x68fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375447552 unmapped: 33087488 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375447552 unmapped: 33087488 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b626400 session 0x556d2ac19a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2adfa400 session 0x556d2df88960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375447552 unmapped: 33087488 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2de8e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2adfa400 session 0x556d2df89860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b626400 session 0x556d2df89860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2db43000 session 0x556d2df88960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2dcd6800 session 0x556d2ac19a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4344849 data_alloc: 251658240 data_used: 49672192
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375472128 unmapped: 33062912 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2b522f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 heartbeat osd_stat(store_statfs(0x1a620a000/0x0/0x1bfc00000, data 0x5fb865e/0x6194000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2adfa400 session 0x556d2d10e000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2b626400 session 0x556d2ac09680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 ms_handle_reset con 0x556d2db43000 session 0x556d2da685a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2dcda000 session 0x556d2de8f860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2b34a400 session 0x556d2dabdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2b4d3860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2adfa400 session 0x556d2c9834a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2b626400 session 0x556d2b5623c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375504896 unmapped: 33030144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2db43000 session 0x556d2dec85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2dec8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 heartbeat osd_stat(store_statfs(0x1a61e6000/0x0/0x1bfc00000, data 0x5fd83f3/0x61b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375504896 unmapped: 33030144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375504896 unmapped: 33030144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375504896 unmapped: 33030144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2ab33000 session 0x556d2d900b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.764363289s of 10.239460945s, submitted: 106
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4405829 data_alloc: 251658240 data_used: 51589120
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 30285824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2dec1400 session 0x556d2dce41e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378249216 unmapped: 30285824 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2dec6800 session 0x556d2d1ea780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379535360 unmapped: 28999680 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 ms_handle_reset con 0x556d2d75ac00 session 0x556d2cfd7860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 ms_handle_reset con 0x556d2d75ac00 session 0x556d2cfd7680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 heartbeat osd_stat(store_statfs(0x1a5eed000/0x0/0x1bfc00000, data 0x62c8416/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 ms_handle_reset con 0x556d2dbbf800 session 0x556d2dabd2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 ms_handle_reset con 0x556d2dcd7400 session 0x556d2d0bd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379543552 unmapped: 28991488 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 heartbeat osd_stat(store_statfs(0x1a5eed000/0x0/0x1bfc00000, data 0x62c8416/0x64a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1eb0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 29097984 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4361794 data_alloc: 251658240 data_used: 50282496
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 29097984 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 heartbeat osd_stat(store_statfs(0x1a5f12000/0x0/0x1bfc00000, data 0x62880f8/0x6466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 heartbeat osd_stat(store_statfs(0x1a5f12000/0x0/0x1bfc00000, data 0x62880f8/0x6466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379437056 unmapped: 29097984 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 heartbeat osd_stat(store_statfs(0x1a5f12000/0x0/0x1bfc00000, data 0x62880f8/0x6466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 29081600 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 heartbeat osd_stat(store_statfs(0x1a5f12000/0x0/0x1bfc00000, data 0x62880f8/0x6466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 29081600 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 4801.4 total, 600.0 interval
    Cumulative writes: 47K writes, 178K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s
    Cumulative WAL: 47K writes, 17K syncs, 2.74 writes per sync, written: 0.17 GB, 0.04 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 10K writes, 35K keys, 10K commit groups, 1.0 writes per commit group, ingest: 36.49 MB, 0.06 MB/s
    Interval WAL: 10K writes, 4136 syncs, 2.44 writes per sync, written: 0.04 GB, 0.06 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
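The stats dump above reached the journal as a single record with its newlines escaped as #012 (rsyslog's octal escaping of control characters, where newline is 012); the block has been re-wrapped here for readability. Restoring such records mechanically is a one-liner:

```python
# rsyslog escapes control characters as #ooo (octal), so embedded newlines in
# multi-line records like the RocksDB stats dump appear as "#012".
raw = ("rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **"
       "#012Uptime(secs): 4801.4 total, 600.0 interval")
print(raw.replace("#012", "\n"))
```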
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379453440 unmapped: 29081600 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.035510063s of 10.120718002s, submitted: 126
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ad5cc00 session 0x556d2d1cdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4367090 data_alloc: 251658240 data_used: 50290688
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380551168 unmapped: 27983872 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 27959296 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 27959296 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a5f32000/0x0/0x1bfc00000, data 0x6289d01/0x6469000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 27959296 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ab33000 session 0x556d2ac08f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380583936 unmapped: 27951104 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4405672 data_alloc: 251658240 data_used: 50524160
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383770624 unmapped: 24764416 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2d75ac00 session 0x556d2b50fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380624896 unmapped: 27910144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380624896 unmapped: 27910144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380624896 unmapped: 27910144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a5c0c000/0x0/0x1bfc00000, data 0x65b1d11/0x6792000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380624896 unmapped: 27910144 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4399272 data_alloc: 251658240 data_used: 50524160
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380633088 unmapped: 27901952 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380633088 unmapped: 27901952 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dbbf800 session 0x556d2ca31a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a5c0c000/0x0/0x1bfc00000, data 0x65b1d11/0x6792000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.699443817s of 12.002340317s, submitted: 27
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dcd7400 session 0x556d2d9010e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380641280 unmapped: 27893760 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dec1400 session 0x556d2b5221e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2c947000 session 0x556d2c982d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380657664 unmapped: 27877376 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dec1400 session 0x556d2acaba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ab33000 session 0x556d2d9014a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2d75ac00 session 0x556d2c873e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dbbf800 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce5e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380657664 unmapped: 27877376 heap: 408535040 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2c947000 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2b34a400 session 0x556d2b5585a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4487931 data_alloc: 251658240 data_used: 50532352
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380674048 unmapped: 36257792 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 36413440 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dcd7400 session 0x556d2cbfa960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 36413440 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a4ff9000/0x0/0x1bfc00000, data 0x71c3d21/0x73a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dec6800 session 0x556d2dabd4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 36413440 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a4ff9000/0x0/0x1bfc00000, data 0x71c3d21/0x73a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dec6800 session 0x556d2c6661e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ab33000 session 0x556d2ca31c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 36413440 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2b34a400 session 0x556d2d0bd4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2c947000 session 0x556d2cfefc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dcd7400 session 0x556d2acab860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4515365 data_alloc: 251658240 data_used: 50544640
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380518400 unmapped: 36413440 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380542976 unmapped: 36388864 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a4ff7000/0x0/0x1bfc00000, data 0x71c3d44/0x73a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.498466492s of 10.389091492s, submitted: 33
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380575744 unmapped: 36356096 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dec6800 session 0x556d2d900b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380690432 unmapped: 36241408 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2c8d6800 session 0x556d2b5623c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a4ff7000/0x0/0x1bfc00000, data 0x71c4d44/0x73a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ad5d400 session 0x556d2cfd6d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380821504 unmapped: 36110336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4654163 data_alloc: 268435456 data_used: 63193088
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380821504 unmapped: 36110336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380821504 unmapped: 36110336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 381894656 unmapped: 35037184 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a4672000/0x0/0x1bfc00000, data 0x7b3bd44/0x7d1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383098880 unmapped: 33832960 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383172608 unmapped: 33759232 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a465d000/0x0/0x1bfc00000, data 0x7b48d44/0x7d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4689071 data_alloc: 268435456 data_used: 63275008
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383172608 unmapped: 33759232 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383172608 unmapped: 33759232 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383172608 unmapped: 33759232 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.234232903s of 10.256481171s, submitted: 155
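[annotation] The _kv_sync_thread utilization lines give idle time within a measurement window plus the number of submitted transactions, so the busy fraction and commit rate fall out directly:

```python
# Turn a _kv_sync_thread utilization line into a busy fraction and a
# commit rate. Pure arithmetic on the numbers in the line.
import re

LINE = ("_kv_sync_thread utilization: idle 9.234232903s of 10.256481171s, "
        "submitted: 155")

idle, window = (float(x) for x in re.findall(r"([\d.]+)s", LINE))
submitted = int(re.search(r"submitted: (\d+)", LINE).group(1))

busy = 1 - idle / window
print(f"kv sync thread busy {100*busy:.1f}% of the {window:.1f}s window, "
      f"{submitted/window:.1f} transactions/s")
```

Here that is about 10% busy at ~15 commits/s; the later samples in this section stay in the same single-digit-to-low-teens range.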
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383893504 unmapped: 33038336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a42c3000/0x0/0x1bfc00000, data 0x7ef8d44/0x80db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2dcd6000 session 0x556d2dabda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383893504 unmapped: 33038336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce4f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4733831 data_alloc: 268435456 data_used: 63406080
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2ad5d400 session 0x556d2b8cc960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383893504 unmapped: 33038336 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 ms_handle_reset con 0x556d2c8d6800 session 0x556d2dabfa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383909888 unmapped: 33021952 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384516096 unmapped: 32415744 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 heartbeat osd_stat(store_statfs(0x1a4013000/0x0/0x1bfc00000, data 0x81a7d54/0x838b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384524288 unmapped: 32407552 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
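[annotation] A handle_osd_map line of the form "epochs [365,365], i have 364, src has [1,365]" means the message carries maps for epochs 365..365, the OSD is currently at 364, and the sender holds 1..365; the OSD then applies the missing range, which is why the epoch prefix on subsequent lines steps forward. A hedged model of that bookkeeping (not Ceph code):

```python
# Hedged model of the handle_osd_map bookkeeping: given the epochs
# advertised in the message and the epoch the OSD already has, compute
# which maps it still needs to apply.
def missing_epochs(msg_first: int, msg_last: int, have: int) -> range:
    """Epochs in [msg_first, msg_last] newer than what we already have."""
    return range(max(msg_first, have + 1), msg_last + 1)

# "handle_osd_map epochs [365,365], i have 364, src has [1,365]"
print(list(missing_epochs(365, 365, 364)))   # -> [365]
# later: "epochs [370,371], i have 370" -> only 371 is new
print(list(missing_epochs(370, 371, 370)))   # -> [371]
```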
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 365 ms_handle_reset con 0x556d2ddf4000 session 0x556d2d900d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384598016 unmapped: 32333824 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 365 ms_handle_reset con 0x556d2dec6800 session 0x556d2d0bd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 365 ms_handle_reset con 0x556d2debc400 session 0x556d2c9e3680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4774268 data_alloc: 268435456 data_used: 68022272
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384663552 unmapped: 32268288 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 365 ms_handle_reset con 0x556d2ab33000 session 0x556d2cbfa780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384679936 unmapped: 32251904 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384679936 unmapped: 32251904 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384679936 unmapped: 32251904 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 365 heartbeat osd_stat(store_statfs(0x1a4010000/0x0/0x1bfc00000, data 0x81a9bb9/0x838d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 365 ms_handle_reset con 0x556d2ad5d400 session 0x556d2d0b4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.736770630s of 11.665035248s, submitted: 61
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384770048 unmapped: 32161792 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 366 heartbeat osd_stat(store_statfs(0x1a4010000/0x0/0x1bfc00000, data 0x81a9bb9/0x838d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 366 ms_handle_reset con 0x556d2c8d6800 session 0x556d2a023a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4699597 data_alloc: 268435456 data_used: 63410176
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2ddf4000 session 0x556d2ab95a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 385073152 unmapped: 31858688 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9752c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2ad5d400 session 0x556d2d904780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 385286144 unmapped: 31645696 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 385302528 unmapped: 31629312 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2adfa400 session 0x556d2dec9860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2b626400 session 0x556d2dec9c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 heartbeat osd_stat(store_statfs(0x1a45ba000/0x0/0x1bfc00000, data 0x7bfe6b3/0x7de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384262144 unmapped: 32669696 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2c8d6800 session 0x556d2dec9a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 32661504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2ab33000 session 0x556d2c873a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4517652 data_alloc: 251658240 data_used: 52228096
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 32661504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 32661504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2dec0000 session 0x556d2dabe960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2dd70400 session 0x556d2cbfad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384270336 unmapped: 32661504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 heartbeat osd_stat(store_statfs(0x1a6242000/0x0/0x1bfc00000, data 0x5f77705/0x615c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
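[annotation] The "op hist [...]" suffix that occasionally becomes non-zero (as in the line above) looks like a power-of-two histogram (pow2_hist_t), where bucket i counts ops whose queued value fell in roughly [2^(i-1), 2^i) of the underlying unit; both the histogram type and the unit are assumptions here, so treat the decoding as illustrative only:

```python
# Illustrative only: "op hist" is assumed to be a power-of-two histogram
# (pow2_hist_t); bucket widths and the time unit are not confirmed.
import re

LINE = "peers [1,2] op hist [0,0,0,0,0,0,0,0,1]"
m = re.search(r"op hist \[([\d,]*)\]", LINE)
buckets = [int(x) for x in m.group(1).split(",") if x]

for i, n in enumerate(buckets):
    if n:
        lo = 0 if i == 0 else 2 ** (i - 1)
        print(f"bucket {i}: {n} op(s), value ~[{lo}, {2**i})")
```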
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 36823040 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 ms_handle_reset con 0x556d2ad5d400 session 0x556d2c8734a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 380108800 unmapped: 36823040 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.186067581s of 10.667894363s, submitted: 304
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2b34a400 session 0x556d2b56b2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4352672 data_alloc: 251658240 data_used: 50057216
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370278400 unmapped: 46653440 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 heartbeat osd_stat(store_statfs(0x1a623e000/0x0/0x1bfc00000, data 0x5f79323/0x615e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,6])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2b34a400 session 0x556d2b8cc960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370245632 unmapped: 46686208 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 heartbeat osd_stat(store_statfs(0x1a74aa000/0x0/0x1bfc00000, data 0x4d10300/0x4ef4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,4,2])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2ab33000 session 0x556d2acab860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370360320 unmapped: 46571520 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2db48400 session 0x556d2ac19c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370360320 unmapped: 46571520 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2ad5d400 session 0x556d2c872780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370360320 unmapped: 46571520 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2dd70400 session 0x556d2da69680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2ab33000 session 0x556d2dd35860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4119194 data_alloc: 234881024 data_used: 37269504
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2ad5d400 session 0x556d2de8f0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370360320 unmapped: 46571520 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2b34a400 session 0x556d2d1034a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2ad5d000 session 0x556d2ac08960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2db51c00 session 0x556d2cfd65a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2db48400 session 0x556d2b50e780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2b951000 session 0x556d2dce5a40
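[annotation] Bursts like the run of ms_handle_reset lines above show many short-lived sessions cycling over a small set of connection pointers, which is typical of polling clients rather than a fault. Counting resets per connection makes the churn visible; a sketch reading journal text from stdin:

```python
# Sketch: count ms_handle_reset events per connection pointer to make
# session churn visible. Reads journal text from stdin.
import re
import sys
from collections import Counter

resets = Counter()
for line in sys.stdin:
    m = re.search(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)",
                  line)
    if m:
        resets[m.group(1)] += 1

for con, n in resets.most_common(5):
    print(f"{con}: {n} resets")
```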
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370540544 unmapped: 46391296 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2ab33000 session 0x556d2d0bde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d0bc780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 heartbeat osd_stat(store_statfs(0x1a781f000/0x0/0x1bfc00000, data 0x49922f2/0x4b75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370540544 unmapped: 46391296 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 369 ms_handle_reset con 0x556d2ad5d400 session 0x556d2b562d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370556928 unmapped: 46374912 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 369 ms_handle_reset con 0x556d2ab33000 session 0x556d2ac192c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 369 ms_handle_reset con 0x556d2ad5d400 session 0x556d2b558960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 369 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c8730e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387342336 unmapped: 29589504 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 370 ms_handle_reset con 0x556d2b951000 session 0x556d2cfee960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.705849648s of 10.003185272s, submitted: 174
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 370 heartbeat osd_stat(store_statfs(0x1a6c87000/0x0/0x1bfc00000, data 0x552fd9c/0x5716000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 370 ms_handle_reset con 0x556d2adf8400 session 0x556d2c872b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 370 ms_handle_reset con 0x556d2dbbe800 session 0x556d2dabf860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4198728 data_alloc: 251658240 data_used: 41922560
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 370 ms_handle_reset con 0x556d2ab33000 session 0x556d2cbfaf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377176064 unmapped: 39755776 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 ms_handle_reset con 0x556d2db48400 session 0x556d2d0b4780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b9f1e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377176064 unmapped: 39755776 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 ms_handle_reset con 0x556d2ad5d400 session 0x556d2ac09e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c9e25a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 ms_handle_reset con 0x556d2db48400 session 0x556d2ca310e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377176064 unmapped: 39755776 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 39731200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 heartbeat osd_stat(store_statfs(0x1a6c5a000/0x0/0x1bfc00000, data 0x555bacb/0x5742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377200640 unmapped: 39731200 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 ms_handle_reset con 0x556d2b951000 session 0x556d2ab94000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 heartbeat osd_stat(store_statfs(0x1a6c5a000/0x0/0x1bfc00000, data 0x555bacb/0x5742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4190957 data_alloc: 251658240 data_used: 47149056
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377257984 unmapped: 39673856 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 ms_handle_reset con 0x556d2b34a400 session 0x556d2cbfb0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 heartbeat osd_stat(store_statfs(0x1a7c14000/0x0/0x1bfc00000, data 0x45a4a98/0x4789000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.635681152s of 10.139612198s, submitted: 109
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4069593 data_alloc: 234881024 data_used: 36585472
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a7c11000/0x0/0x1bfc00000, data 0x45a66a1/0x478c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372981760 unmapped: 43950080 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373268480 unmapped: 43663360 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4145585 data_alloc: 234881024 data_used: 37007360
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373268480 unmapped: 43663360 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372850688 unmapped: 44081152 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a71fb000/0x0/0x1bfc00000, data 0x4fbd6a1/0x51a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372850688 unmapped: 44081152 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372809728 unmapped: 44122112 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a71f7000/0x0/0x1bfc00000, data 0x4fc16a1/0x51a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372809728 unmapped: 44122112 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4146291 data_alloc: 234881024 data_used: 37085184
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372817920 unmapped: 44113920 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372817920 unmapped: 44113920 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a71f7000/0x0/0x1bfc00000, data 0x4fc16a1/0x51a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372817920 unmapped: 44113920 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a71f7000/0x0/0x1bfc00000, data 0x4fc16a1/0x51a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372817920 unmapped: 44113920 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2dbbe800 session 0x556d2dd33e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ab33000 session 0x556d2ab8c960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.689316750s of 14.531596184s, submitted: 94
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 48807936 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3816857196' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
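[annotation] The monitor pairs each handle_command with an audit-channel record carrying the caller (from=), the authenticated entity (entity=), and the command as a JSON list (cmd=[...]), ending in the outcome (here "dispatch"). A parser sketch whose field layout is copied from the line above:

```python
# Sketch: pull the who/what out of a mon audit-channel record. The field
# layout (from=, entity=, cmd=[...]: outcome) is copied from the log line.
import json
import re

LINE = ("log_channel(audit) log [DBG] : from='client.? "
        "192.168.122.100:0/3816857196' entity='client.admin' "
        "cmd=[{\"prefix\": \"mgr module ls\"}]: dispatch")

m = re.search(r"from='([^']*)' entity='([^']*)' cmd=(\[.*\]): (\w+)", LINE)
src, entity, cmd, outcome = m.groups()
print(entity, "->", json.loads(cmd)[0]["prefix"], f"({outcome}, from {src})")
```

For this record that prints: client.admin -> mgr module ls (dispatch, from client.? 192.168.122.100:0/3816857196).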
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3975967 data_alloc: 234881024 data_used: 30543872
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d905680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 48807936 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 48807936 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 48807936 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a8213000/0x0/0x1bfc00000, data 0x3fa56a1/0x418b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 48807936 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 48807936 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a8213000/0x0/0x1bfc00000, data 0x3fa56a1/0x418b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2b34a400 session 0x556d2ac19a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2b951000 session 0x556d2d905a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2db48400 session 0x556d2b512b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3985757 data_alloc: 234881024 data_used: 30543872
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ab33000 session 0x556d2b9f03c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 48807936 heap: 416931840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d102000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2b34a400 session 0x556d2dabd0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2b951000 session 0x556d2b50fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2db48400 session 0x556d2dfc8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ab33000 session 0x556d2b4d8000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dce41e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389865472 unmapped: 34324480 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2b34a400 session 0x556d2cfef680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2b951000 session 0x556d2c8730e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a7495000/0x0/0x1bfc00000, data 0x4d226b1/0x4f09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379486208 unmapped: 44703744 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2dbbe800 session 0x556d2dabde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2adfa400 session 0x556d2ab95a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ab33000 session 0x556d2b4d2000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cfefc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 44695552 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 heartbeat osd_stat(store_statfs(0x1a7495000/0x0/0x1bfc00000, data 0x4d226b1/0x4f09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2dec1400 session 0x556d2cbfb680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.377917290s of 10.092967987s, submitted: 40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2d75ac00 session 0x556d2c9e2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2d75ac00 session 0x556d2d0bc780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379494400 unmapped: 44695552 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2adfa400 session 0x556d2c872780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 ms_handle_reset con 0x556d2ab33000 session 0x556d2d0bde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4055194 data_alloc: 234881024 data_used: 35262464
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379322368 unmapped: 44867584 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 ms_handle_reset con 0x556d2dec1400 session 0x556d2dabe960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 ms_handle_reset con 0x556d2dd70400 session 0x556d2d0bcb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dabf680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8725000/0x0/0x1bfc00000, data 0x3a9142b/0x3c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8725000/0x0/0x1bfc00000, data 0x3a9142b/0x3c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 heartbeat osd_stat(store_statfs(0x1a8725000/0x0/0x1bfc00000, data 0x3a9142b/0x3c78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3977814 data_alloc: 234881024 data_used: 29868032
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 55361536 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.162723541s of 10.435199738s, submitted: 51
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 ms_handle_reset con 0x556d2b34a400 session 0x556d2b513c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 ms_handle_reset con 0x556d2b951000 session 0x556d2b523a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3980796 data_alloc: 234881024 data_used: 29880320
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55410688 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a8722000/0x0/0x1bfc00000, data 0x3a93034/0x3c7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55410688 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55410688 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55410688 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 374 heartbeat osd_stat(store_statfs(0x1a8722000/0x0/0x1bfc00000, data 0x3a93034/0x3c7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 55410688 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 374 ms_handle_reset con 0x556d2adfa400 session 0x556d2b512f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4026046 data_alloc: 234881024 data_used: 29892608
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 55345152 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367042560 unmapped: 57147392 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367042560 unmapped: 57147392 heap: 424189952 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 374 ms_handle_reset con 0x556d2dec0000 session 0x556d2b9f1e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367042560 unmapped: 65544192 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 374 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dce43c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 374 ms_handle_reset con 0x556d2dec1400 session 0x556d2ac183c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2adfa400 session 0x556d2d900b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 65306624 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2b34a400 session 0x556d2d0bd2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.472581863s of 10.060535431s, submitted: 63
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a78e9000/0x0/0x1bfc00000, data 0x48cc044/0x4ab5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1385f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4114388 data_alloc: 234881024 data_used: 30302208
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367378432 unmapped: 65208320 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367427584 unmapped: 65159168 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 65183744 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a74d6000/0x0/0x1bfc00000, data 0x48cdd67/0x4ab8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 367542272 unmapped: 65044480 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2b626400 session 0x556d2d900960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d1ea5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372441088 unmapped: 60145664 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2ab33000 session 0x556d2cfee960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2adfa400 session 0x556d2ac08960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4166699 data_alloc: 251658240 data_used: 39325696
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372858880 unmapped: 59727872 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2b626400 session 0x556d2b4d2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2b34a400 session 0x556d2b513c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373039104 unmapped: 59547648 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373039104 unmapped: 59547648 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a7449000/0x0/0x1bfc00000, data 0x488eda6/0x4a79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2ab33000 session 0x556d2cbfb2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373039104 unmapped: 59547648 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c9e25a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a7449000/0x0/0x1bfc00000, data 0x488eda6/0x4a79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2adfa400 session 0x556d2b563a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373039104 unmapped: 59547648 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2b626400 session 0x556d2c9832c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2dec1400 session 0x556d2dec8d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.465644360s of 10.009433746s, submitted: 436
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2ab33000 session 0x556d2ab8d0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dec85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2adfa400 session 0x556d2d0bda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4163986 data_alloc: 251658240 data_used: 39317504
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373063680 unmapped: 59523072 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373063680 unmapped: 59523072 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2d75ac00 session 0x556d2d905e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2dd70400 session 0x556d2cfef2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373071872 unmapped: 59514880 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374988800 unmapped: 57597952 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a7c83000/0x0/0x1bfc00000, data 0x4122d86/0x430b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2ad5d000 session 0x556d2acaa000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371056640 unmapped: 61530112 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4082462 data_alloc: 234881024 data_used: 33456128
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371056640 unmapped: 61530112 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 ms_handle_reset con 0x556d2adfa400 session 0x556d2df88d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371482624 unmapped: 61104128 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a78cf000/0x0/0x1bfc00000, data 0x44d5de8/0x46bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371482624 unmapped: 61104128 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371482624 unmapped: 61104128 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371482624 unmapped: 61104128 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4088574 data_alloc: 234881024 data_used: 33452032
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371482624 unmapped: 61104128 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.093915939s of 11.725696564s, submitted: 120
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371482624 unmapped: 61104128 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 heartbeat osd_stat(store_statfs(0x1a78cf000/0x0/0x1bfc00000, data 0x44d5de8/0x46bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371359744 unmapped: 61227008 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371376128 unmapped: 61210624 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2d75ac00 session 0x556d2d1034a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370016256 unmapped: 62570496 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4119943 data_alloc: 234881024 data_used: 33345536
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370728960 unmapped: 61857792 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370425856 unmapped: 62160896 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a7532000/0x0/0x1bfc00000, data 0x486fb6d/0x4a5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370434048 unmapped: 62152704 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370434048 unmapped: 62152704 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a74b7000/0x0/0x1bfc00000, data 0x48ebb6d/0x4ad7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 61890560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4125283 data_alloc: 234881024 data_used: 33570816
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 61890560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a74b7000/0x0/0x1bfc00000, data 0x48ebb6d/0x4ad7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 61890560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 61890560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.320468903s of 11.244595528s, submitted: 76
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370696192 unmapped: 61890560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370827264 unmapped: 61759488 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4125179 data_alloc: 234881024 data_used: 33574912
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a7494000/0x0/0x1bfc00000, data 0x490eb6d/0x4afa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 61743104 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370843648 unmapped: 61743104 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2dec1400 session 0x556d2acab860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370933760 unmapped: 61652992 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2b626400 session 0x556d2d905c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370933760 unmapped: 61652992 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a6f3e000/0x0/0x1bfc00000, data 0x4e63bcf/0x5050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370933760 unmapped: 61652992 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2b626400 session 0x556d2cfd7e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4165281 data_alloc: 234881024 data_used: 33574912
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370933760 unmapped: 61652992 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d0bd0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 heartbeat osd_stat(store_statfs(0x1a6f3e000/0x0/0x1bfc00000, data 0x4e63bcf/0x5050000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2adfa400 session 0x556d2df88960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2d75ac00 session 0x556d2ac192c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 ms_handle_reset con 0x556d2debc400 session 0x556d2c9834a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370933760 unmapped: 61652992 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2dec1400 session 0x556d2ca30b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370950144 unmapped: 61636608 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2b951000 session 0x556d2df88780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370966528 unmapped: 61620224 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.159174919s of 10.961967468s, submitted: 32
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2b626400 session 0x556d2de8fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce41e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374407168 unmapped: 58179584 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2d75ac00 session 0x556d2cad72c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4112717 data_alloc: 234881024 data_used: 34172928
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369106944 unmapped: 63479808 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369106944 unmapped: 63479808 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 heartbeat osd_stat(store_statfs(0x1a7827000/0x0/0x1bfc00000, data 0x45798f4/0x4767000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 63528960 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2b626400 session 0x556d2da69e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 63528960 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2b951000 session 0x556d2c9e21e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2dec1400 session 0x556d2dec83c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 ms_handle_reset con 0x556d2db53000 session 0x556d2c72b0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 63528960 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4115195 data_alloc: 234881024 data_used: 34156544
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369057792 unmapped: 63528960 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 369074176 unmapped: 63512576 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a7822000/0x0/0x1bfc00000, data 0x457b50d/0x476b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368156672 unmapped: 64430080 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a7822000/0x0/0x1bfc00000, data 0x457b50d/0x476b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 63905792 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.666988373s of 10.768574715s, submitted: 31
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373309440 unmapped: 59277312 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4229041 data_alloc: 251658240 data_used: 40095744
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373055488 unmapped: 59531264 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6f1e000/0x0/0x1bfc00000, data 0x4e8050d/0x5070000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371367936 unmapped: 61218816 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2d75b000 session 0x556d2d0bda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2b626400 session 0x556d2c9e25a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371367936 unmapped: 61218816 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371376128 unmapped: 61210624 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6ef6000/0x0/0x1bfc00000, data 0x4ea756f/0x5098000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371376128 unmapped: 61210624 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4241584 data_alloc: 251658240 data_used: 40591360
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371376128 unmapped: 61210624 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6ef6000/0x0/0x1bfc00000, data 0x4ea756f/0x5098000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371376128 unmapped: 61210624 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2b951000 session 0x556d2b4d2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 371376128 unmapped: 61210624 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2db53000 session 0x556d2cfee960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372572160 unmapped: 60014592 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6cb4000/0x0/0x1bfc00000, data 0x50e956f/0x52da000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,19])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2dec1400 session 0x556d2d1ea5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2f943400 session 0x556d2d0bd2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372875264 unmapped: 59711488 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.386208534s of 10.341666222s, submitted: 73
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4263474 data_alloc: 251658240 data_used: 40640512
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372482048 unmapped: 60104704 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6c75000/0x0/0x1bfc00000, data 0x512256f/0x5313000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372482048 unmapped: 60104704 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372482048 unmapped: 60104704 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372465664 unmapped: 60121088 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372465664 unmapped: 60121088 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4276784 data_alloc: 251658240 data_used: 40722432
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372465664 unmapped: 60121088 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372465664 unmapped: 60121088 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6c55000/0x0/0x1bfc00000, data 0x514056f/0x5331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6c55000/0x0/0x1bfc00000, data 0x514056f/0x5331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 372465664 unmapped: 60121088 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373383168 unmapped: 59203584 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373383168 unmapped: 59203584 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4272332 data_alloc: 251658240 data_used: 40722432
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 59195392 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 59195392 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373391360 unmapped: 59195392 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.222734451s of 13.356945038s, submitted: 54
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2b626400 session 0x556d2dfc9a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2b951000 session 0x556d2d9012c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2db53000 session 0x556d2da68f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2dec1400 session 0x556d2dce45a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2dd43000 session 0x556d2dce54a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6c5c000/0x0/0x1bfc00000, data 0x514056f/0x5331000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373563392 unmapped: 59023360 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374898688 unmapped: 57688064 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6241000/0x0/0x1bfc00000, data 0x5b535d1/0x5d45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4355588 data_alloc: 251658240 data_used: 41078784
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 58318848 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2b626400 session 0x556d2df88960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 58318848 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2b951000 session 0x556d2c872960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6236000/0x0/0x1bfc00000, data 0x5b665d1/0x5d58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374267904 unmapped: 58318848 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2db53000 session 0x556d2d9045a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2dec1400 session 0x556d2ca303c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374276096 unmapped: 58310656 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374284288 unmapped: 58302464 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4392889 data_alloc: 251658240 data_used: 44761088
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375406592 unmapped: 57180160 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375406592 unmapped: 57180160 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6233000/0x0/0x1bfc00000, data 0x5b67604/0x5d5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375406592 unmapped: 57180160 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2dcdd800 session 0x556d2d0b4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375406592 unmapped: 57180160 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6233000/0x0/0x1bfc00000, data 0x5b67604/0x5d5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375406592 unmapped: 57180160 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.337758064s of 11.698511124s, submitted: 96
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4400471 data_alloc: 251658240 data_used: 44752896
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375422976 unmapped: 57163776 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2dcdd800 session 0x556d2cfd6780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375431168 unmapped: 57155584 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a622f000/0x0/0x1bfc00000, data 0x5b69614/0x5d5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2ab33000 session 0x556d2b50fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d300dd800 session 0x556d2d0bc780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2dcdd400 session 0x556d2d901c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 heartbeat osd_stat(store_statfs(0x1a6230000/0x0/0x1bfc00000, data 0x5b695b2/0x5d5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x13c6f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375439360 unmapped: 57147392 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 ms_handle_reset con 0x556d2b626400 session 0x556d2c8734a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375439360 unmapped: 57147392 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 379 ms_handle_reset con 0x556d2ab33000 session 0x556d2d0bd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393764864 unmapped: 38821888 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 379 ms_handle_reset con 0x556d2b626400 session 0x556d2d10f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4499854 data_alloc: 251658240 data_used: 45404160
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378716160 unmapped: 53870592 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 380 ms_handle_reset con 0x556d2dcdd400 session 0x556d2d290d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379330560 unmapped: 53256192 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 380 heartbeat osd_stat(store_statfs(0x1a40d8000/0x0/0x1bfc00000, data 0x6b1f04c/0x6d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 ms_handle_reset con 0x556d300dd800 session 0x556d2d1ea3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379404288 unmapped: 53182464 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 ms_handle_reset con 0x556d2dcdd800 session 0x556d2de8e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 ms_handle_reset con 0x556d2ab33000 session 0x556d2cbfba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 ms_handle_reset con 0x556d2b626400 session 0x556d2c872780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 ms_handle_reset con 0x556d2dcdd400 session 0x556d2da68b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379469824 unmapped: 53116928 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 heartbeat osd_stat(store_statfs(0x1a48c4000/0x0/0x1bfc00000, data 0x632fe31/0x6527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 53108736 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 heartbeat osd_stat(store_statfs(0x1a48c4000/0x0/0x1bfc00000, data 0x632fe31/0x6527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 heartbeat osd_stat(store_statfs(0x1a48c4000/0x0/0x1bfc00000, data 0x632fe31/0x6527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4467843 data_alloc: 251658240 data_used: 42131456
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379478016 unmapped: 53108736 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.663994789s of 11.150954247s, submitted: 173
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 382 ms_handle_reset con 0x556d300dd800 session 0x556d2da69860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a48c4000/0x0/0x1bfc00000, data 0x632fe31/0x6527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 382 ms_handle_reset con 0x556d2b951000 session 0x556d2cfeed20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375521280 unmapped: 57065472 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375521280 unmapped: 57065472 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 382 heartbeat osd_stat(store_statfs(0x1a50f1000/0x0/0x1bfc00000, data 0x5b04b7c/0x5cfc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375529472 unmapped: 57057280 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 382 ms_handle_reset con 0x556d2ab33000 session 0x556d2d9041e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375529472 unmapped: 57057280 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4345614 data_alloc: 234881024 data_used: 33566720
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375545856 unmapped: 57040896 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec0800 session 0x556d2dec8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d300ddc00 session 0x556d2b4d30e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a50ed000/0x0/0x1bfc00000, data 0x5b067cd/0x5d00000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375554048 unmapped: 57032704 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2b626400 session 0x556d2c983860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375578624 unmapped: 57008128 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d300dd800 session 0x556d2dce45a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ab33000 session 0x556d2b4d2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2b626400 session 0x556d2d0bda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375652352 unmapped: 56934400 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dcdd400 session 0x556d2cfeeb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec0800 session 0x556d2dec83c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d300dd800 session 0x556d2de8fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375676928 unmapped: 56909824 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2f943400 session 0x556d2d900b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ab33000 session 0x556d2c872960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4356174 data_alloc: 234881024 data_used: 29687808
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dcdd400 session 0x556d2dd33e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375676928 unmapped: 56909824 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.268640518s of 10.315971375s, submitted: 113
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 375676928 unmapped: 56909824 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec0800 session 0x556d2ab8c960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2adfa400 session 0x556d2dce4f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d103e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a4b8b000/0x0/0x1bfc00000, data 0x606c738/0x6263000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ab33000 session 0x556d2b56ad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370892800 unmapped: 61693952 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec0800 session 0x556d2c9e2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a5611000/0x0/0x1bfc00000, data 0x4bb36e6/0x4daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2f943400 session 0x556d2cfd6d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2b626400 session 0x556d2d9045a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec1400 session 0x556d2de9fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce4000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370909184 unmapped: 61677568 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dcdd400 session 0x556d2b4d2000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dabdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec0800 session 0x556d2b9f14a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec0800 session 0x556d2dabe1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ab33000 session 0x556d2d905e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 54951936 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c983a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4275698 data_alloc: 234881024 data_used: 30470144
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377634816 unmapped: 54951936 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dcdd400 session 0x556d2c9e3a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dec1400 session 0x556d2a0225a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 heartbeat osd_stat(store_statfs(0x1a565c000/0x0/0x1bfc00000, data 0x559c6d6/0x5792000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377528320 unmapped: 55058432 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377536512 unmapped: 55050240 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2f943400 session 0x556d2dce5860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2dcdc800 session 0x556d2cbfb4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373669888 unmapped: 58916864 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 ms_handle_reset con 0x556d2ab35400 session 0x556d2c9752c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 384 ms_handle_reset con 0x556d2dec0800 session 0x556d2d0bda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4205076 data_alloc: 234881024 data_used: 33820672
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.210289955s of 10.094809532s, submitted: 97
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 384 ms_handle_reset con 0x556d2db4c000 session 0x556d2dabd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a68da000/0x0/0x1bfc00000, data 0x431f337/0x4514000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 384 ms_handle_reset con 0x556d2dcdc800 session 0x556d2cbfa960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a68db000/0x0/0x1bfc00000, data 0x431f2d5/0x4513000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x14e0f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202754 data_alloc: 234881024 data_used: 33828864
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 370270208 unmapped: 62316544 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 373547008 unmapped: 59039744 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 374595584 unmapped: 57991168 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376832000 unmapped: 55754752 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 384 heartbeat osd_stat(store_statfs(0x1a4ffa000/0x0/0x1bfc00000, data 0x4a582d5/0x4c4c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4287414 data_alloc: 234881024 data_used: 35008512
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376832000 unmapped: 55754752 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376832000 unmapped: 55754752 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376840192 unmapped: 55746560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376840192 unmapped: 55746560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a4ff6000/0x0/0x1bfc00000, data 0x4a59ede/0x4c4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376840192 unmapped: 55746560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.949145317s of 14.257130623s, submitted: 111
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4281946 data_alloc: 234881024 data_used: 35012608
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376840192 unmapped: 55746560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2dec0800 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376840192 unmapped: 55746560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2f943400 session 0x556d2acaba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2b34a800 session 0x556d2dabdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2b951400 session 0x556d2dabde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2b34a800 session 0x556d2ab894a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376840192 unmapped: 55746560 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 55730176 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cfd72c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 55730176 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a4dbf000/0x0/0x1bfc00000, data 0x4c99ede/0x4e8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4303122 data_alloc: 234881024 data_used: 35012608
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d300ddc00 session 0x556d2da68f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2db53000 session 0x556d2df88780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376856576 unmapped: 55730176 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2ab35400 session 0x556d2d901860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2ab33000 session 0x556d2b9f14a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2dcdd400 session 0x556d2b512b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376864768 unmapped: 55721984 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 ms_handle_reset con 0x556d2ab35400 session 0x556d2cbfa3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a5815000/0x0/0x1bfc00000, data 0x4244ece/0x4439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376864768 unmapped: 55721984 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376864768 unmapped: 55721984 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376864768 unmapped: 55721984 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4187437 data_alloc: 234881024 data_used: 30294016
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376864768 unmapped: 55721984 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376864768 unmapped: 55721984 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 376864768 unmapped: 55721984 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a5815000/0x0/0x1bfc00000, data 0x4244ece/0x4439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377806848 unmapped: 54779904 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377806848 unmapped: 54779904 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4202797 data_alloc: 234881024 data_used: 32456704
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377806848 unmapped: 54779904 heap: 432586752 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a5815000/0x0/0x1bfc00000, data 0x4244ece/0x4439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.682620049s of 15.909779549s, submitted: 44
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377806848 unmapped: 63176704 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2dec9a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a5014000/0x0/0x1bfc00000, data 0x4a44ef1/0x4c3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dec0800 session 0x556d2ab88d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 62627840 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9e2b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab35400 session 0x556d2ab8d0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2cfef860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 62627840 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdd400 session 0x556d2cfef2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2f943400 session 0x556d2c72ba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1021e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab35400 session 0x556d2cfd6960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378421248 unmapped: 62562304 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4399445 data_alloc: 234881024 data_used: 33529856
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378486784 unmapped: 62496768 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a3ce0000/0x0/0x1bfc00000, data 0x5963cf8/0x5b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 61702144 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d2905a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383811584 unmapped: 57171968 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdd400 session 0x556d2b50e780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384851968 unmapped: 56131584 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2dec9c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2b8cd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2ca31c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389365760 unmapped: 51617792 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a3c95000/0x0/0x1bfc00000, data 0x59aecf8/0x5ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,3,0,1,0,15,2])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2cfee960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456051 data_alloc: 251658240 data_used: 42823680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386326528 unmapped: 54657024 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.058428764s of 10.070032120s, submitted: 99
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d900960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdd400 session 0x556d2ab8d0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386351104 unmapped: 54632448 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2db51400 session 0x556d2b8cc000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2de8e000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2acaaf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393306112 unmapped: 47677440 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393338880 unmapped: 47644672 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dbbe000 session 0x556d2cfd7680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2cfef680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 42377216 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2cad6780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d0bc3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2cfef860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a31e0000/0x0/0x1bfc00000, data 0x6462d21/0x665e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4690952 data_alloc: 251658240 data_used: 53731328
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393846784 unmapped: 47136768 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691244 data_alloc: 251658240 data_used: 53731328
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a2555000/0x0/0x1bfc00000, data 0x70edd5a/0x72e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2db51400 session 0x556d2d9045a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dbbe000 session 0x556d2d0bde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.364130020s of 12.096626282s, submitted: 100
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2de9f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2cfd7e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396918784 unmapped: 44064768 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1b6e000/0x0/0x1bfc00000, data 0x7ad2d8d/0x7cd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396943360 unmapped: 44040192 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4787403 data_alloc: 268435456 data_used: 55635968
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 397819904 unmapped: 43163648 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400220160 unmapped: 40763392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1aed000/0x0/0x1bfc00000, data 0x7b53d8d/0x7d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400220160 unmapped: 40763392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1aed000/0x0/0x1bfc00000, data 0x7b53d8d/0x7d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400220160 unmapped: 40763392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400228352 unmapped: 40755200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d30619c00 session 0x556d2df88960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2debc000 session 0x556d2b50e3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4814324 data_alloc: 268435456 data_used: 59850752
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dec7800 session 0x556d2c72ba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400228352 unmapped: 40755200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2ca31e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400228352 unmapped: 40755200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d30619c00 session 0x556d2b4d90e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2debc000 session 0x556d2cbfb860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 39903232 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.222202778s of 10.293131828s, submitted: 128
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403972096 unmapped: 37011456 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1acc000/0x0/0x1bfc00000, data 0x7b73def/0x7d72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 31662080 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4911206 data_alloc: 285212672 data_used: 72708096
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409346048 unmapped: 31637504 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 31629312 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409370624 unmapped: 31612928 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2ab88d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab35400 session 0x556d2d904780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a16e9000/0x0/0x1bfc00000, data 0x7f48def/0x8147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410607616 unmapped: 30375936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b34a800 session 0x556d2cfefc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc9860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 30834688 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ca87400 session 0x556d2dabef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ab35400 session 0x556d2dabfa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a16ed000/0x0/0x1bfc00000, data 0x7f4eb74/0x814f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [3,2,3,1,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4968211 data_alloc: 285212672 data_used: 78028800
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 422912000 unmapped: 18071552 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416800768 unmapped: 24182784 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d30619c00 session 0x556d2cfd74a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2debc000 session 0x556d2d9054a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418021376 unmapped: 22962176 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a0a6e000/0x0/0x1bfc00000, data 0x8bccbd6/0x8dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.058282375s of 10.044799805s, submitted: 101
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 422068224 unmapped: 18915328 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 387 handle_osd_map epochs [388,388], i have 388, src has [1,388]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 388 ms_handle_reset con 0x556d2ab35400 session 0x556d2cfefc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 388 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dec83c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 388 ms_handle_reset con 0x556d2db53000 session 0x556d2ac19a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421019648 unmapped: 19963904 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 388 heartbeat osd_stat(store_statfs(0x19f85f000/0x0/0x1bfc00000, data 0x9ddd904/0x9fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5217874 data_alloc: 285212672 data_used: 84238336
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 19931136 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2ab33000 session 0x556d2df881e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2b34a800 session 0x556d2c9e23c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 389 heartbeat osd_stat(store_statfs(0x19fdec000/0x0/0x1bfc00000, data 0x8ee2643/0x90e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2ab35400 session 0x556d2de8f4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424468480 unmapped: 16515072 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424943616 unmapped: 16039936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2db53000 session 0x556d2cfeeb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424943616 unmapped: 16039936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424943616 unmapped: 16039936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5068775 data_alloc: 285212672 data_used: 74317824
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 16007168 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 16007168 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a04b1000/0x0/0x1bfc00000, data 0x9189275/0x938c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2debc000 session 0x556d2da685a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d2905a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2db42400 session 0x556d2d1cdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ca87400 session 0x556d2cfef860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411475968 unmapped: 29507584 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ab35400 session 0x556d2da69860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.420536995s of 10.264038086s, submitted: 352
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411475968 unmapped: 29507584 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2b34a800 session 0x556d2b454d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411484160 unmapped: 29499392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a1a01000/0x0/0x1bfc00000, data 0x7c3af7a/0x7e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4799495 data_alloc: 251658240 data_used: 51978240
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2dec85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ad5d000 session 0x556d2de9fe00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ca87400 session 0x556d2d0bde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820199 data_alloc: 268435456 data_used: 55132160
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a19fe000/0x0/0x1bfc00000, data 0x7c3df7a/0x7e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 handle_osd_map epochs [392,392], i have 392, src has [1,392]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 391 handle_osd_map epochs [392,392], i have 392, src has [1,392]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 29483008 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a19fa000/0x0/0x1bfc00000, data 0x7c3fb83/0x7e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 29483008 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 392 ms_handle_reset con 0x556d2db42400 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 392 ms_handle_reset con 0x556d2db53000 session 0x556d2c9821e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.621694565s of 10.127067566s, submitted: 42
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419889152 unmapped: 21094400 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 392 ms_handle_reset con 0x556d2db53000 session 0x556d2b5583c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419897344 unmapped: 21086208 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4944626 data_alloc: 268435456 data_used: 68145152
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419897344 unmapped: 21086208 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418865152 unmapped: 22118400 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 21020672 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a0942000/0x0/0x1bfc00000, data 0x8cf1be5/0x8ef6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 393 ms_handle_reset con 0x556d2ca87400 session 0x556d2d9043c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 19865600 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 19865600 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4933270 data_alloc: 268435456 data_used: 62816256
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 19865600 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421126144 unmapped: 19857408 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a1536000/0x0/0x1bfc00000, data 0x81038fa/0x8308000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421134336 unmapped: 19849216 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a1536000/0x0/0x1bfc00000, data 0x81038fa/0x8308000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421134336 unmapped: 19849216 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.330789566s of 10.588541031s, submitted: 151
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4927694 data_alloc: 268435456 data_used: 62820352
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a1515000/0x0/0x1bfc00000, data 0x81248fa/0x8329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1515000/0x0/0x1bfc00000, data 0x81248fa/0x8329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4938804 data_alloc: 268435456 data_used: 63123456
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1511000/0x0/0x1bfc00000, data 0x8126503/0x832c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4938964 data_alloc: 268435456 data_used: 63127552
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.563270569s of 13.047464371s, submitted: 30
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a150f000/0x0/0x1bfc00000, data 0x8129503/0x832f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4937964 data_alloc: 268435456 data_used: 63139840
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d0bd4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a14f9000/0x0/0x1bfc00000, data 0x813f503/0x8345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dd45c00 session 0x556d2d291680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dec3000 session 0x556d2cbfb860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a14f9000/0x0/0x1bfc00000, data 0x813f503/0x8345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cbfa780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5008471 data_alloc: 268435456 data_used: 63135744
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ca87400 session 0x556d2de9fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.587997437s of 10.050396919s, submitted: 37
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db53000 session 0x556d2c72b0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421707776 unmapped: 23478272 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a0c51000/0x0/0x1bfc00000, data 0x89e84f3/0x8bed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421707776 unmapped: 23478272 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dd45c00 session 0x556d2ab892c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421715968 unmapped: 23470080 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5018024 data_alloc: 268435456 data_used: 64372736
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 422305792 unmapped: 22880256 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a0c4c000/0x0/0x1bfc00000, data 0x89ed4f3/0x8bf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 21815296 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db42400 session 0x556d2d10f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db53000 session 0x556d2de9e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a15ca000/0x0/0x1bfc00000, data 0x806f491/0x8273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4943682 data_alloc: 268435456 data_used: 64090112
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab35400 session 0x556d2dec94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab33000 session 0x556d2c982000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a15cb000/0x0/0x1bfc00000, data 0x806f491/0x8273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.307431221s of 10.017060280s, submitted: 61
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2b34a000 session 0x556d2dabf4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab32400 session 0x556d2d900960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 30056448 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a15cb000/0x0/0x1bfc00000, data 0x806f491/0x8273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab33000 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415137792 unmapped: 30048256 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab35400 session 0x556d2dec85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 30040064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4837114 data_alloc: 268435456 data_used: 60567552
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 30040064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 30040064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a20f8000/0x0/0x1bfc00000, data 0x75443bd/0x7745000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 27942912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412614656 unmapped: 32571392 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1aff000/0x0/0x1bfc00000, data 0x7b3e3bd/0x7d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 32129024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4890160 data_alloc: 268435456 data_used: 61841408
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1a7c000/0x0/0x1bfc00000, data 0x7bc13bd/0x7dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 32129024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1a7c000/0x0/0x1bfc00000, data 0x7bc13bd/0x7dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d300ddc00 session 0x556d2b9f1c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db42400 session 0x556d2b562000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.311465263s of 11.517497063s, submitted: 143
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a21a1000/0x0/0x1bfc00000, data 0x749c3bd/0x769d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4826815 data_alloc: 268435456 data_used: 59699200
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2cfef680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 395 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabe960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 395 ms_handle_reset con 0x556d2ab32400 session 0x556d2c9e3a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 32178176 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 395 ms_handle_reset con 0x556d2ab35400 session 0x556d2dce4f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2179000/0x0/0x1bfc00000, data 0x74c10e0/0x76c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 32178176 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 ms_handle_reset con 0x556d300ddc00 session 0x556d2c9e3680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 32169984 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 ms_handle_reset con 0x556d2ab32400 session 0x556d2dec92c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2175000/0x0/0x1bfc00000, data 0x74c2e57/0x76c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4832126 data_alloc: 268435456 data_used: 59707392
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 32169984 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc9e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2172000/0x0/0x1bfc00000, data 0x74c8df5/0x76cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4829554 data_alloc: 268435456 data_used: 59707392
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2171000/0x0/0x1bfc00000, data 0x74c9df5/0x76cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.398031235s of 11.863055229s, submitted: 72
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2dabeb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413040640 unmapped: 32145408 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db53000 session 0x556d2da69e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dd45800 session 0x556d2cfef860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2da69860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab33000 session 0x556d2d102000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413040640 unmapped: 32145408 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2d9041e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2dce4d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db53000 session 0x556d2de9e3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2dabda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab33000 session 0x556d2b513c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2dabe780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4905470 data_alloc: 268435456 data_used: 59715584
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a1931000/0x0/0x1bfc00000, data 0x7d099fe/0x7f0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2d900d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2c872780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a1931000/0x0/0x1bfc00000, data 0x7d099fe/0x7f0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2dec8f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce43c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a1931000/0x0/0x1bfc00000, data 0x7d099fe/0x7f0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2b8cc000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 29212672 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dec7800 session 0x556d2d9001e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dd45800 session 0x556d2d905e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2de9eb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2b626000 session 0x556d2dd33e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2de9f860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414081024 unmapped: 31105024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a168a000/0x0/0x1bfc00000, data 0x7fb09fe/0x81b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4932596 data_alloc: 268435456 data_used: 59719680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414081024 unmapped: 31105024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 29368320 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db51400 session 0x556d2cfd7860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.765192986s of 11.799226761s, submitted: 81
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d1ccd20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 29368320 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2b56ad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 29368320 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a168b000/0x0/0x1bfc00000, data 0x7fb09cb/0x81b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 29270016 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4989851 data_alloc: 268435456 data_used: 64364544
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416342016 unmapped: 28844032 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2b626000 session 0x556d2b5585a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 32047104 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db51400 session 0x556d2c9e2f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dec7800 session 0x556d2b50e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 36151296 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2da69680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2b626000 session 0x556d2d901860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db51400 session 0x556d2cfd6960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2de8f4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a44e7000/0x0/0x1bfc00000, data 0x5155969/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4485045 data_alloc: 251658240 data_used: 43249664
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a44e7000/0x0/0x1bfc00000, data 0x5155969/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.329918861s of 10.000563622s, submitted: 145
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 398 ms_handle_reset con 0x556d2dec7800 session 0x556d2dfc9860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 39657472 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a445b000/0x0/0x1bfc00000, data 0x51e06bd/0x53e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406577152 unmapped: 38608896 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a3fa3000/0x0/0x1bfc00000, data 0x52876bd/0x5488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4510129 data_alloc: 251658240 data_used: 36737024
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a3fa3000/0x0/0x1bfc00000, data 0x52876bd/0x5488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 398 ms_handle_reset con 0x556d38d0dc00 session 0x556d2ab8d0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406831104 unmapped: 38354944 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4544035 data_alloc: 251658240 data_used: 36745216
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39f4000/0x0/0x1bfc00000, data 0x58372c6/0x5a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406831104 unmapped: 38354944 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406831104 unmapped: 38354944 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406839296 unmapped: 38346752 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.460285187s of 10.763207436s, submitted: 123
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2b513c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 38002688 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab32400 session 0x556d2dce4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 38002688 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4580727 data_alloc: 251658240 data_used: 36823040
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 38002688 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3647000/0x0/0x1bfc00000, data 0x5be42d6/0x5de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407166976 unmapped: 38019072 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9830e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2d1ea5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db51400 session 0x556d2de9ed20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2b512b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d1eb680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2cfd70e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab32400 session 0x556d2d9001e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4928000/0x0/0x1bfc00000, data 0x45c42d6/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45085 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4382559 data_alloc: 234881024 data_used: 33337344
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2df88780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2de9f2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394223616 unmapped: 50962432 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394223616 unmapped: 50962432 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.770096779s of 10.134560585s, submitted: 58
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394199040 unmapped: 50987008 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394199040 unmapped: 50987008 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2de8e780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dabcd20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4333232 data_alloc: 234881024 data_used: 31100928
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386596864 unmapped: 58589184 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d9041e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a51c7000/0x0/0x1bfc00000, data 0x40642d6/0x4267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6111000/0x0/0x1bfc00000, data 0x311a2d6/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4152146 data_alloc: 234881024 data_used: 21069824
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6111000/0x0/0x1bfc00000, data 0x311a2d6/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6111000/0x0/0x1bfc00000, data 0x311a2d6/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.270173073s of 12.335928917s, submitted: 25
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4190044 data_alloc: 234881024 data_used: 21090304
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391307264 unmapped: 53878784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 53542912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57c5000/0x0/0x1bfc00000, data 0x3a662d6/0x3c69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 53542912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 53542912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2dec85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391528448 unmapped: 53657600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5724000/0x0/0x1bfc00000, data 0x3b072d6/0x3d0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4241134 data_alloc: 234881024 data_used: 22396928
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391528448 unmapped: 53657600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d904000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d900960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2dabf4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2c982000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2dec94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2de9f2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2df88780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d9001e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2d1eb680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 53592064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2de9ed20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2d1ea5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 53592064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c9830e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2dce4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2b513c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2b50e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2acaad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a49ac000/0x0/0x1bfc00000, data 0x487c3aa/0x4a82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 56664064 heap: 448864256 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d1ccd20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2de9f860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2dd33e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2de9eb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2da69680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b558960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 57925632 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2da692c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2de8fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2dd32f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dabde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2dfc8d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2dfc81e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444547 data_alloc: 234881024 data_used: 22429696
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 58441728 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2d0b4780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.393129349s of 11.138072968s, submitted: 184
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2cc42000 session 0x556d2de9e1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 58441728 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392060928 unmapped: 58425344 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392060928 unmapped: 58425344 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e64000/0x0/0x1bfc00000, data 0x54023dd/0x560a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392069120 unmapped: 58417152 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4471720 data_alloc: 234881024 data_used: 25657344
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392069120 unmapped: 58417152 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2de9fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e59000/0x0/0x1bfc00000, data 0x540d3dd/0x5615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392077312 unmapped: 58408960 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2d1cde00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2dabf680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4f800 session 0x556d2b56b2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 56664064 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2cbfa5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 56664064 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2c9e2780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390135808 unmapped: 60350464 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4452843 data_alloc: 234881024 data_used: 32940032
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391036928 unmapped: 59449344 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.089821815s of 10.173573494s, submitted: 29
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393633792 unmapped: 56852480 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a580e000/0x0/0x1bfc00000, data 0x4a5038b/0x4c58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393699328 unmapped: 56786944 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394764288 unmapped: 55721984 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395567104 unmapped: 54919168 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4563269 data_alloc: 234881024 data_used: 37036032
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d291680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2d9054a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395583488 unmapped: 54902784 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4f800 session 0x556d2dfc94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395599872 unmapped: 54886400 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395608064 unmapped: 54878208 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a51b9000/0x0/0x1bfc00000, data 0x49c92f6/0x4bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395608064 unmapped: 54878208 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395608064 unmapped: 54878208 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2d0bd4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2b50f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4428925 data_alloc: 234881024 data_used: 33460224
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2d291e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c48000/0x0/0x1bfc00000, data 0x46212f6/0x4826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,1,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 53616640 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.676429749s of 10.251730919s, submitted: 194
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398491648 unmapped: 51994624 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398499840 unmapped: 51986432 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a538b000/0x0/0x1bfc00000, data 0x4ede2f6/0x50e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,2,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398196736 unmapped: 52289536 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dec81e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab32400 session 0x556d2ab95860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393633792 unmapped: 56852480 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4418939 data_alloc: 234881024 data_used: 29736960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 56811520 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a592d000/0x0/0x1bfc00000, data 0x45c22f6/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 56811520 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a592d000/0x0/0x1bfc00000, data 0x45c22f6/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 56811520 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2dd32f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2dd32780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5ca8000/0x0/0x1bfc00000, data 0x45c22e6/0x47c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4415291 data_alloc: 234881024 data_used: 29761536
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c87000/0x0/0x1bfc00000, data 0x45e32e6/0x47e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.973278999s of 13.644609451s, submitted: 82
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230151 data_alloc: 234881024 data_used: 21344256
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2da68960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6cde000/0x0/0x1bfc00000, data 0x358c2e6/0x3790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230151 data_alloc: 234881024 data_used: 21344256
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389046272 unmapped: 61440000 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389046272 unmapped: 61440000 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6cd8000/0x0/0x1bfc00000, data 0x35922e6/0x3796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389046272 unmapped: 61440000 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.106501579s of 10.134304047s, submitted: 13
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4283103 data_alloc: 234881024 data_used: 21344256
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc8780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389128192 unmapped: 65028096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389128192 unmapped: 65028096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620e000/0x0/0x1bfc00000, data 0x405c2e6/0x4260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306839 data_alloc: 234881024 data_used: 21344256
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2c8730e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389152768 unmapped: 65003520 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620e000/0x0/0x1bfc00000, data 0x405c2e6/0x4260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620e000/0x0/0x1bfc00000, data 0x405c2e6/0x4260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389152768 unmapped: 65003520 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4386532 data_alloc: 234881024 data_used: 28667904
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.490860939s of 10.591947556s, submitted: 13
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620c000/0x0/0x1bfc00000, data 0x405d2e6/0x4261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4387392 data_alloc: 234881024 data_used: 28667904
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6207000/0x0/0x1bfc00000, data 0x40632e6/0x4267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2ac183c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2c9832c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390922240 unmapped: 63234048 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4465576 data_alloc: 234881024 data_used: 28741632
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5806000/0x0/0x1bfc00000, data 0x4a63348/0x4c68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390922240 unmapped: 63234048 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5803000/0x0/0x1bfc00000, data 0x4a66348/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4f800 session 0x556d2dfc9e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2d291a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2de9e000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390922240 unmapped: 63234048 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b563a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.726837158s of 11.113820076s, submitted: 39
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2dabf680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2aafd800 session 0x556d2dce5680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabf0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2de9fe00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dfc83c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391143424 unmapped: 63012864 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2b5134a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391143424 unmapped: 63012864 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391143424 unmapped: 63012864 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489706 data_alloc: 234881024 data_used: 28745728
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2cfef0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a55a6000/0x0/0x1bfc00000, data 0x4cc137b/0x4ec8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2c9e21e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a55a6000/0x0/0x1bfc00000, data 0x4cc137b/0x4ec8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2b559a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2cad70e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce5a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392208384 unmapped: 61947904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2ac183c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2c8730e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4346826 data_alloc: 234881024 data_used: 26660864
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392208384 unmapped: 61947904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392208384 unmapped: 61947904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a644c000/0x0/0x1bfc00000, data 0x3e1b37b/0x4022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2dd32780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2dd32f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2ab95860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.578354836s of 10.390926361s, submitted: 44
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2dec81e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393510912 unmapped: 64847872 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 64684032 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d300dd400 session 0x556d2d1021e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec2000 session 0x556d2d9003c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 64684032 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4366596 data_alloc: 234881024 data_used: 25120768
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cbfba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390529024 unmapped: 67829760 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394231808 unmapped: 64126976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392298496 unmapped: 66060288 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4912000/0x0/0x1bfc00000, data 0x47b735b/0x49bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4912000/0x0/0x1bfc00000, data 0x47b735b/0x49bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 66125824 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 66125824 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a48eb000/0x0/0x1bfc00000, data 0x47dd35b/0x49e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4462371 data_alloc: 234881024 data_used: 25239552
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2dec94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a48eb000/0x0/0x1bfc00000, data 0x47dd35b/0x49e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 65929216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 65929216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 65929216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394657792 unmapped: 63700992 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.439727783s of 12.403476715s, submitted: 96
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394657792 unmapped: 63700992 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4607603 data_alloc: 234881024 data_used: 35815424
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396345344 unmapped: 62013440 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3a8b000/0x0/0x1bfc00000, data 0x563e35b/0x5843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,6])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2de9e3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 62930944 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 62930944 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395681792 unmapped: 62676992 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4653122 data_alloc: 234881024 data_used: 36085760
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a37fc000/0x0/0x1bfc00000, data 0x58cd35b/0x5ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a37fc000/0x0/0x1bfc00000, data 0x58cd35b/0x5ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a37fc000/0x0/0x1bfc00000, data 0x58cd35b/0x5ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396009472 unmapped: 62349312 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.237027168s of 10.552100182s, submitted: 74
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d300dd400 session 0x556d2cbfb4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2da69c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2debf000 session 0x556d2c9e3c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4656518 data_alloc: 234881024 data_used: 36139008
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 397377536 unmapped: 60981248 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d3b587400 session 0x556d2da690e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398327808 unmapped: 60030976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3277000/0x0/0x1bfc00000, data 0x5e4a35b/0x604f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398327808 unmapped: 60030976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3277000/0x0/0x1bfc00000, data 0x5e4a35b/0x604f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,12])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 56393728 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2ecc000/0x0/0x1bfc00000, data 0x61f436b/0x63fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398450688 unmapped: 59908096 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2e93000/0x0/0x1bfc00000, data 0x623536b/0x643b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4727187 data_alloc: 234881024 data_used: 36171776
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398458880 unmapped: 59899904 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406519808 unmapped: 51838976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2de9eb40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2dabf0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2debf000 session 0x556d2b454d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 59138048 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399237120 unmapped: 59121664 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399245312 unmapped: 59113472 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d300dd400 session 0x556d2b5132c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4b800 session 0x556d2d0bda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4795751 data_alloc: 234881024 data_used: 36540416
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4b800 session 0x556d2d904960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.026745796s of 10.590575218s, submitted: 91
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 59105280 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a25b6000/0x0/0x1bfc00000, data 0x6b1036b/0x6d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2cfd65a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a25b6000/0x0/0x1bfc00000, data 0x6b1036b/0x6d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401416192 unmapped: 56942592 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2dce5c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 56934400 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2debf000 session 0x556d2dabf4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 56934400 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 56934400 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4840481 data_alloc: 251658240 data_used: 41115648
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403660800 unmapped: 54697984 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a1384000/0x0/0x1bfc00000, data 0x6ba337b/0x6daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403660800 unmapped: 54697984 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 54673408 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 54673408 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a1146000/0x0/0x1bfc00000, data 0x6ddf57b/0x6fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dbbe400 session 0x556d2c9e30e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 54673408 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4857271 data_alloc: 251658240 data_used: 41127936
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403693568 unmapped: 54665216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.675286293s of 10.106954575s, submitted: 72
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 399 handle_osd_map epochs [400,400], i have 400, src has [1,400]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 400 ms_handle_reset con 0x556d2ab33000 session 0x556d2df88d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403701760 unmapped: 54657024 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403701760 unmapped: 54657024 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 400 ms_handle_reset con 0x556d2db4b800 session 0x556d2b562d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 54640640 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2dbbe400 session 0x556d2b563860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2dd44c00 session 0x556d2d81af00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2b34a800 session 0x556d2b522f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2debf000 session 0x556d2c9e3e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a1132000/0x0/0x1bfc00000, data 0x6df2300/0x6ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2db11000 session 0x556d2df89e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2ab33000 session 0x556d2c8732c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428711936 unmapped: 41689088 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 402 ms_handle_reset con 0x556d2b34a800 session 0x556d2acaba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 402 ms_handle_reset con 0x556d2db4b800 session 0x556d2d81a780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198280 data_alloc: 268435456 data_used: 60989440
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428744704 unmapped: 41656320 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2d901c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db4b800 session 0x556d2ad30f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425000960 unmapped: 45400064 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2b4d3860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2df88000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2dc08800 session 0x556d2d81a1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabf4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2dce5c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d0bda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 heartbeat osd_stat(store_statfs(0x19e7fc000/0x0/0x1bfc00000, data 0x9721bad/0x9932000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 44204032 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2dfc85a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db4b800 session 0x556d2dabf0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2da69c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 heartbeat osd_stat(store_statfs(0x19e7c6000/0x0/0x1bfc00000, data 0x9756bbd/0x9968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5256784 data_alloc: 268435456 data_used: 62124032
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.167340279s of 10.341951370s, submitted: 175
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 5401.4 total, 600.0 interval
                                              Cumulative writes: 56K writes, 211K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s
                                              Cumulative WAL: 56K writes, 20K syncs, 2.70 writes per sync, written: 0.20 GB, 0.04 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 8979 writes, 32K keys, 8979 commit groups, 1.0 writes per commit group, ingest: 34.71 MB, 0.06 MB/s
                                              Interval WAL: 8979 writes, 3547 syncs, 2.53 writes per sync, written: 0.03 GB, 0.06 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2dec94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2d9003c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d1021e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 heartbeat osd_stat(store_statfs(0x19e7c6000/0x0/0x1bfc00000, data 0x9756bbd/0x9968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db40400 session 0x556d2d1ea000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2cad6f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2dd32f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2ac183c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d905e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 44179456 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db4d000 session 0x556d2cbfaf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2df885a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2d0b4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2cfd63c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 44179456 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d291e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a400 session 0x556d2b4d30e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2db4d000 session 0x556d2b56a780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a400 session 0x556d2b5221e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc9e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad52400 session 0x556d2dce5e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 44171264 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5260674 data_alloc: 268435456 data_used: 62132224
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 44171264 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b4543c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2dec2000 session 0x556d2dabd0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad52400 session 0x556d2ca30000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ab33000 session 0x556d2d0b4000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19e7be000/0x0/0x1bfc00000, data 0x975a838/0x996f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 44154880 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cfd63c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d300dd400 session 0x556d2c982000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2cc42c00 session 0x556d2ad31680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ab33000 session 0x556d2ca303c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426287104 unmapped: 44113920 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d0b4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad52400 session 0x556d2ca310e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2dec2000 session 0x556d2d1030e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fbf0000/0x0/0x1bfc00000, data 0x832a828/0x853e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b50f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426303488 unmapped: 44097536 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2cc42c00 session 0x556d2a0225a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x832a818/0x853d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a400 session 0x556d2d900b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426311680 unmapped: 44089344 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5075474 data_alloc: 268435456 data_used: 58626048
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426319872 unmapped: 44081152 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: mgrc ms_handle_reset ms_handle_reset con 0x556d2ad5c400
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1950343944
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: mgrc handle_mgr_configure stats_period=5
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2c8d9400 session 0x556d2de9fa40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426459136 unmapped: 43941888 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.647686958s of 10.896045685s, submitted: 84
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2db42000 session 0x556d2ac183c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426860544 unmapped: 43540480 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b7000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5047087 data_alloc: 268435456 data_used: 68329472
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a800 session 0x556d2c872d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b7000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5047087 data_alloc: 268435456 data_used: 68329472
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b7000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.866680145s of 10.913626671s, submitted: 22
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 432676864 unmapped: 37724160 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b8000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5116831 data_alloc: 285212672 data_used: 74747904
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a05ed000/0x0/0x1bfc00000, data 0x792e805/0x7b41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a05ed000/0x0/0x1bfc00000, data 0x792e805/0x7b41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2c8d9400 session 0x556d2dabf4a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2cc42c00 session 0x556d2acaba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5202650 data_alloc: 285212672 data_used: 74747904
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4b000/0x0/0x1bfc00000, data 0x82cf867/0x84e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4b000/0x0/0x1bfc00000, data 0x82cf867/0x84e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4b000/0x0/0x1bfc00000, data 0x82cf867/0x84e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5202650 data_alloc: 285212672 data_used: 74747904
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.437170029s of 12.675600052s, submitted: 60
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4a000/0x0/0x1bfc00000, data 0x82d0867/0x84e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5198846 data_alloc: 285212672 data_used: 74752000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d5e3800 session 0x556d2d81af00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433676288 unmapped: 36724736 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 436666368 unmapped: 33734656 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 437846016 unmapped: 32555008 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19f36d000/0x0/0x1bfc00000, data 0x8bad867/0x8dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438755328 unmapped: 31645696 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5335234 data_alloc: 285212672 data_used: 84652032
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438755328 unmapped: 31645696 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438763520 unmapped: 31637504 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19f36d000/0x0/0x1bfc00000, data 0x8bad867/0x8dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 31113216 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 31113216 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2db47000 session 0x556d2b50e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.948060989s of 14.110429764s, submitted: 28
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442073088 unmapped: 28327936 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2d68b400 session 0x556d2da69680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2c8d9400 session 0x556d2dec8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2d68b400 session 0x556d2df89a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5402262 data_alloc: 301989888 data_used: 93487104
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2cc42c00 session 0x556d2c9e25a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 451633152 unmapped: 27172864 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 406 ms_handle_reset con 0x556d2d5e3800 session 0x556d2dabcf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d350000/0x0/0x1bfc00000, data 0xa7b6311/0xa9cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 27049984 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2db47000 session 0x556d2ca30960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 25853952 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 455901184 unmapped: 22904832 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2c8d9400 session 0x556d2de8ef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2cc42c00 session 0x556d2de8f0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2d5e3800 session 0x556d2de8fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 456187904 unmapped: 22618112 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ca43000/0x0/0x1bfc00000, data 0xb0c00b2/0xb2d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5688051 data_alloc: 301989888 data_used: 97107968
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ca3b000/0x0/0x1bfc00000, data 0xb0ca050/0xb2e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 20496384 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2db47000 session 0x556d2df883c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2d68b400 session 0x556d2de8e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 18333696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 18333696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1025a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2ad52400 session 0x556d2cbfaf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2db4d000 session 0x556d2d904d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2db11c00 session 0x556d2b9f10e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460488704 unmapped: 18317312 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2c8d9400 session 0x556d2d1ea780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 heartbeat osd_stat(store_statfs(0x19e63f000/0x0/0x1bfc00000, data 0x94c6dd3/0x96df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [0,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2cc42c00 session 0x556d2dce52c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460578816 unmapped: 18227200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5467115 data_alloc: 301989888 data_used: 96972800
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.246646881s of 11.109315872s, submitted: 228
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 17154048 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 17072128 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 heartbeat osd_stat(store_statfs(0x19e6ae000/0x0/0x1bfc00000, data 0x9453976/0x966a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2ad52400 session 0x556d2b8cd680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2c8d9400 session 0x556d2c8730e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 462348288 unmapped: 16457728 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 410 ms_handle_reset con 0x556d2ab33000 session 0x556d2a0225a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 19800064 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 19800064 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 412 heartbeat osd_stat(store_statfs(0x19f778000/0x0/0x1bfc00000, data 0x838f24e/0x85a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2db11c00 session 0x556d2d1032c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5091980 data_alloc: 268435456 data_used: 70852608
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2db41c00 session 0x556d2dce4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2ab33000 session 0x556d2da68f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 450420736 unmapped: 28385280 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2c8d9400 session 0x556d2dabfc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a1f29000/0x0/0x1bfc00000, data 0x5bdff53/0x5df5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 413 ms_handle_reset con 0x556d2ad52400 session 0x556d2cfd6960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2b626000 session 0x556d2dfc8780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835556 data_alloc: 251658240 data_used: 53211136
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.235727310s of 10.002073288s, submitted: 231
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2ab33000 session 0x556d2d81b2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426352640 unmapped: 52453376 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2c8d7800 session 0x556d2da69a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2f943000 session 0x556d2acaad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d300dd000 session 0x556d2b5472c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2dec1400 session 0x556d2c9821e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2ab33000 session 0x556d2df88000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427401216 unmapped: 51404800 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2db5000/0x0/0x1bfc00000, data 0x4d52917/0x4f69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427401216 unmapped: 51404800 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427409408 unmapped: 51396608 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427409408 unmapped: 51396608 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dfc8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4713492 data_alloc: 251658240 data_used: 37957632
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2b5623c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2f943000 session 0x556d2d1eb0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d300dd000 session 0x556d2d1eba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3696000/0x0/0x1bfc00000, data 0x44712f6/0x4687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ab33000 session 0x556d2b9f0000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dec9680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2dabf0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3696000/0x0/0x1bfc00000, data 0x44712f6/0x4687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2f943000 session 0x556d2b9f0000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4534601 data_alloc: 234881024 data_used: 26701824
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416571392 unmapped: 62234624 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 62275584 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416538624 unmapped: 62267392 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416538624 unmapped: 62267392 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416538624 unmapped: 62267392 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570921 data_alloc: 234881024 data_used: 29847552
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.859339714s of 15.267506599s, submitted: 73
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4571209 data_alloc: 234881024 data_used: 29855744
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 60022784 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 60022784 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4628243 data_alloc: 234881024 data_used: 29999104
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a2faa000/0x0/0x1bfc00000, data 0x4b5c328/0x4d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a2faa000/0x0/0x1bfc00000, data 0x4b5c328/0x4d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4628243 data_alloc: 234881024 data_used: 29999104
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dec0400 session 0x556d2c975860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2b5623c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.229384422s of 15.399053574s, submitted: 41
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2cfd7860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ab33000 session 0x556d2da69a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2da68f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3464000/0x0/0x1bfc00000, data 0x46a2358/0x48b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4562666 data_alloc: 234881024 data_used: 24821760
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2d68a400 session 0x556d2b522f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ab33000 session 0x556d2a0225a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3464000/0x0/0x1bfc00000, data 0x46a2358/0x48b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d2f6/0x3673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363470 data_alloc: 218103808 data_used: 17207296
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.333456993s of 11.540062904s, submitted: 69
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407166976 unmapped: 71639040 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d2f6/0x3673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2cfd70e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b9f10e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416818 data_alloc: 234881024 data_used: 24432640
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46a9000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416818 data_alloc: 234881024 data_used: 24432640
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.415445328s of 10.419466972s, submitted: 1
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407166976 unmapped: 71639040 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407199744 unmapped: 71606272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416466 data_alloc: 234881024 data_used: 24432640
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407257088 unmapped: 71548928 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416466 data_alloc: 234881024 data_used: 24432640
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2d1eb680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416466 data_alloc: 234881024 data_used: 24432640
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2f943000 session 0x556d2d81b2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2a80b000 session 0x556d2dabed20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b4d90e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.516037941s of 14.591504097s, submitted: 348
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2d0bda40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 71516160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 71516160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 71516160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46a9000/0x0/0x1bfc00000, data 0x345d316/0x3675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4418944 data_alloc: 234881024 data_used: 24449024
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410017792 unmapped: 68788224 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2dce5a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406306816 unmapped: 72499200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd84400 session 0x556d2ca30000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd84000 session 0x556d2dce4780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d0b4000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2df894a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2ac09e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406306816 unmapped: 72499200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd84400 session 0x556d2d9041e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ca86800 session 0x556d2dfc8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3f44000/0x0/0x1bfc00000, data 0x378d316/0x39a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396515 data_alloc: 218103808 data_used: 17211392
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2de9e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2b627400 session 0x556d2d9003c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ca86800 session 0x556d2c8732c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 75685888 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437a000/0x0/0x1bfc00000, data 0x378d306/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 75702272 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b5221e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.527547836s of 11.853938103s, submitted: 113
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2dabd2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 75735040 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392030 data_alloc: 218103808 data_used: 17207296
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437b000/0x0/0x1bfc00000, data 0x378d2f6/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 75735040 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 75735040 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 75726848 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437b000/0x0/0x1bfc00000, data 0x378d2f6/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407126016 unmapped: 75874304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 75841536 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4466270 data_alloc: 234881024 data_used: 27590656
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 75841536 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d904f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ca86800 session 0x556d2cbfbc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437b000/0x0/0x1bfc00000, data 0x378d2f6/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489310 data_alloc: 234881024 data_used: 34934784
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280815125s of 11.295031548s, submitted: 2
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2dabfc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409903104 unmapped: 73097216 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d1eb0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409911296 unmapped: 76767232 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd84400 session 0x556d2df88000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409919488 unmapped: 76759040 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38b1000/0x0/0x1bfc00000, data 0x42572f6/0x446d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38ac000/0x0/0x1bfc00000, data 0x425907b/0x4471000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409919488 unmapped: 76759040 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410853376 unmapped: 75825152 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4638340 data_alloc: 234881024 data_used: 36003840
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 72032256 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3085000/0x0/0x1bfc00000, data 0x4a8107b/0x4c99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d1ccd20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b50e3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4657302 data_alloc: 234881024 data_used: 36233216
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.646044731s of 11.046380997s, submitted: 123
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415121408 unmapped: 71557120 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3084000/0x0/0x1bfc00000, data 0x4a8108a/0x4c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3060000/0x0/0x1bfc00000, data 0x4aa508a/0x4cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651338 data_alloc: 234881024 data_used: 36237312
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2ca86800 session 0x556d2d9043c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2ab8c960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2db10c00 session 0x556d2d10e000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 70787072 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2db10c00 session 0x556d2d10fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d10e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2dec9c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2ca86800 session 0x556d2dec8000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2dec9a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2c983860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2c983a40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2acaaf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 68427776 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2ca86800 session 0x556d2b5234a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a282e000/0x0/0x1bfc00000, data 0x52d510f/0x54f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd70c00 session 0x556d2d904960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 66748416 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d904780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4834861 data_alloc: 251658240 data_used: 47263744
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 66748416 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a23cf000/0x0/0x1bfc00000, data 0x57330fc/0x594e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 66740224 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.567632675s of 10.894236565s, submitted: 71
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2b4d2000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420085760 unmapped: 66592768 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2dce54a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420085760 unmapped: 66592768 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dc09800 session 0x556d2de8f0e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2a80b400 session 0x556d2dce4000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2a80b400 session 0x556d2c8732c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 62185472 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dabe960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dc09800 session 0x556d2d81a780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2c9e3e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4899776 data_alloc: 268435456 data_used: 55554048
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d81a960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 61874176 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a238a000/0x0/0x1bfc00000, data 0x577b08b/0x5994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2b558960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424984576 unmapped: 61693952 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2e51000/0x0/0x1bfc00000, data 0x4cb2e02/0x4ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2e51000/0x0/0x1bfc00000, data 0x4cb2e02/0x4ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2e51000/0x0/0x1bfc00000, data 0x4cb2e02/0x4ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4780724 data_alloc: 251658240 data_used: 48349184
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2a80b400 session 0x556d2acaba40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dfc9680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 59924480 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 59924480 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.362530708s of 11.454614639s, submitted: 34
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 59924480 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a2e46000/0x0/0x1bfc00000, data 0x4cbba0b/0x4ed7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426819584 unmapped: 59858944 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783006 data_alloc: 251658240 data_used: 48893952
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2dc09800 session 0x556d2de9e000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2dd71800 session 0x556d2b56ad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 57425920 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a23d0000/0x0/0x1bfc00000, data 0x531aa0b/0x5536000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429047808 unmapped: 57630720 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429064192 unmapped: 57614336 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a2101000/0x0/0x1bfc00000, data 0x55f1a0b/0x580d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431521792 unmapped: 55156736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 55148544 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4912544 data_alloc: 251658240 data_used: 49897472
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 55148544 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 55140352 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a1e08000/0x0/0x1bfc00000, data 0x58eaa0b/0x5b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.472081184s of 10.912376404s, submitted: 166
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908188 data_alloc: 251658240 data_used: 49905664
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2debd000 session 0x556d2ac08960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2b951c00 session 0x556d2d1021e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2a80b400 session 0x556d2dec83c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a2183000/0x0/0x1bfc00000, data 0x4f6c9fb/0x5187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2db10c00 session 0x556d2de9e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2ca86800 session 0x556d2b562d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4787244 data_alloc: 251658240 data_used: 44290048
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2a80b400 session 0x556d2b5583c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427581440 unmapped: 59097088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2b951c00 session 0x556d2d901680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2b627400 session 0x556d2ac192c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2ca86800 session 0x556d2dce4f00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 420 ms_handle_reset con 0x556d2debd000 session 0x556d2d1cdc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438558720 unmapped: 48119808 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 420 ms_handle_reset con 0x556d2db10c00 session 0x556d2b5234a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438566912 unmapped: 48111616 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2f66000/0x0/0x1bfc00000, data 0x43d7433/0x45f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 421 ms_handle_reset con 0x556d2a80b400 session 0x556d2d10e960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431218688 unmapped: 55459840 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a32ed000/0x0/0x1bfc00000, data 0x4403172/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431218688 unmapped: 55459840 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a32ed000/0x0/0x1bfc00000, data 0x4403172/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649164 data_alloc: 251658240 data_used: 39026688
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 55451648 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 55451648 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.133255005s of 12.484390259s, submitted: 105
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 66445312 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 ms_handle_reset con 0x556d2ca86800 session 0x556d2b5623c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4479290 data_alloc: 234881024 data_used: 25804800
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a3d6f000/0x0/0x1bfc00000, data 0x35d5d19/0x37f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420257792 unmapped: 66420736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420257792 unmapped: 66420736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a3d6f000/0x0/0x1bfc00000, data 0x35d5d19/0x37f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4479290 data_alloc: 234881024 data_used: 25804800
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a3d6f000/0x0/0x1bfc00000, data 0x35d5d19/0x37f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420257792 unmapped: 66420736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 423444480 unmapped: 63234048 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 ms_handle_reset con 0x556d2c8d7800 session 0x556d2df89680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.028622627s of 10.275989532s, submitted: 80
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2ab894a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 58810368 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424091648 unmapped: 66568192 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x4a99d19/0x4cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424091648 unmapped: 66568192 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4670546 data_alloc: 234881024 data_used: 28876800
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424091648 unmapped: 66568192 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x4a99d19/0x4cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 423 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d9052c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417521664 unmapped: 73138176 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417538048 unmapped: 73121792 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4644654 data_alloc: 234881024 data_used: 28934144
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a3325000/0x0/0x1bfc00000, data 0x43c9a90/0x45e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.459264755s of 10.672084808s, submitted: 53
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 71942144 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4647720 data_alloc: 234881024 data_used: 28934144
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3317000/0x0/0x1bfc00000, data 0x43d6699/0x45f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651880 data_alloc: 234881024 data_used: 29356032
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3317000/0x0/0x1bfc00000, data 0x43d6699/0x45f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3315000/0x0/0x1bfc00000, data 0x43d9699/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3315000/0x0/0x1bfc00000, data 0x43d9699/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 424 ms_handle_reset con 0x556d2a80b400 session 0x556d2b9f0000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4655828 data_alloc: 234881024 data_used: 30093312
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.640201569s of 12.666808128s, submitted: 20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 424 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d81be00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 75866112 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 75866112 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ca86800 session 0x556d2dfc9e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414810112 unmapped: 75849728 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 75841536 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 75841536 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3e30000/0x0/0x1bfc00000, data 0x38ba580/0x3add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540438 data_alloc: 234881024 data_used: 21798912
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2db10c00 session 0x556d2d9054a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 76259328 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 76259328 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 76259328 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2c6661e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dabd2c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dec1000 session 0x556d2cbfad20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416079872 unmapped: 74579968 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2ca31e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2e1a2800 session 0x556d2dec8d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd84800 session 0x556d2d904d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2ca301e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2cbfa3c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 74743808 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2c9e30e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601573 data_alloc: 234881024 data_used: 24948736
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 74735616 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3a85000/0x0/0x1bfc00000, data 0x404d625/0x3e89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 74735616 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.550464630s of 11.660453796s, submitted: 28
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd42000 session 0x556d2acab860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2dec8b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 78389248 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2d10f860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3a85000/0x0/0x1bfc00000, data 0x404d625/0x3e89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 78381056 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd84800 session 0x556d2d10e780
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 78381056 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4721983 data_alloc: 234881024 data_used: 28655616
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2db51000 session 0x556d2dce45a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 78381056 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dc08000 session 0x556d2dabe000
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2dec9860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 78028800 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 78028800 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2de2000/0x0/0x1bfc00000, data 0x4cf15c3/0x4b2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 78028800 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4726105 data_alloc: 234881024 data_used: 28737536
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2de2000/0x0/0x1bfc00000, data 0x4cf15c3/0x4b2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.725426674s of 11.887782097s, submitted: 30
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419667968 unmapped: 74670080 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770169 data_alloc: 234881024 data_used: 28729344
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 74661888 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2839000/0x0/0x1bfc00000, data 0x52925c3/0x50cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 74661888 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 68239360 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 68239360 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 68231168 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4864877 data_alloc: 251658240 data_used: 41721856
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a283f000/0x0/0x1bfc00000, data 0x52945c3/0x50cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a283f000/0x0/0x1bfc00000, data 0x52945c3/0x50cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.929699898s of 10.120367050s, submitted: 52
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427270144 unmapped: 67067904 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908561 data_alloc: 251658240 data_used: 44994560
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d1eb680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dec1000 session 0x556d2b513c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427270144 unmapped: 67067904 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ad52800 session 0x556d2b50e5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2638000/0x0/0x1bfc00000, data 0x549b5c3/0x52d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 70303744 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 70303744 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2688000/0x0/0x1bfc00000, data 0x5445590/0x527e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886082 data_alloc: 251658240 data_used: 41627648
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2686000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.330945969s of 10.648733139s, submitted: 88
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886098 data_alloc: 251658240 data_used: 41627648
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2686000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd84800 session 0x556d2df89e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426876928 unmapped: 67461120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426876928 unmapped: 67461120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a268e000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4880610 data_alloc: 251658240 data_used: 41627648
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426876928 unmapped: 67461120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2de8f680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2b5581e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2db51000 session 0x556d2de8ef00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a268e000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.472830772s of 10.495430946s, submitted: 4
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ad52800 session 0x556d2dce4b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4872896 data_alloc: 251658240 data_used: 41492480
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a268e000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426893312 unmapped: 67444736 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426893312 unmapped: 67444736 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d291680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2cbfa1e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ad52800 session 0x556d2d81a960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28c0000/0x0/0x1bfc00000, data 0x5216580/0x504e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851682 data_alloc: 251658240 data_used: 41312256
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28c0000/0x0/0x1bfc00000, data 0x5216580/0x504e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2b627400 session 0x556d2d10fc20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2b951c00 session 0x556d2c9e2b40
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2c915c00 session 0x556d2d81af00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426909696 unmapped: 67428352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a28eb000/0x0/0x1bfc00000, data 0x4dff295/0x5022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426909696 unmapped: 67428352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2a80b400 session 0x556d2dd345a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 69795840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 69795840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4697398 data_alloc: 251658240 data_used: 36945920
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 69795840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.024978638s of 11.152261734s, submitted: 36
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2b627400 session 0x556d2d904960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 69632000 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 427 ms_handle_reset con 0x556d2ad52800 session 0x556d2b4d30e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424714240 unmapped: 69623808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33e5000/0x0/0x1bfc00000, data 0x4302c3d/0x4527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33e5000/0x0/0x1bfc00000, data 0x4302c3d/0x4527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4708353 data_alloc: 251658240 data_used: 37019648
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33e5000/0x0/0x1bfc00000, data 0x4302c3d/0x4527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 428 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424730624 unmapped: 69607424 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424730624 unmapped: 69607424 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a33e3000/0x0/0x1bfc00000, data 0x4304846/0x452a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424730624 unmapped: 69607424 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4710655 data_alloc: 251658240 data_used: 37019648
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dec1000 session 0x556d2d1ea5a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a33e3000/0x0/0x1bfc00000, data 0x4304846/0x452a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dd70800 session 0x556d2ca30d20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.204053879s of 11.287176132s, submitted: 45
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2a80b400 session 0x556d2df892c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a33e2000/0x0/0x1bfc00000, data 0x43048a8/0x452b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424747008 unmapped: 69591040 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2ad52800 session 0x556d2d1ccf00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4719096 data_alloc: 251658240 data_used: 38354944
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b627400 session 0x556d2cbfb860
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791514 data_alloc: 251658240 data_used: 38965248
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425279488 unmapped: 69058560 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425279488 unmapped: 69058560 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791514 data_alloc: 251658240 data_used: 38965248
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.591960907s of 13.724922180s, submitted: 31
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dd70800 session 0x556d2ac19c20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dec1000 session 0x556d2dec94a0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2a80b400 session 0x556d2cad70e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2ad52800 session 0x556d2da68960
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4799802 data_alloc: 251658240 data_used: 40140800
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4852746 data_alloc: 251658240 data_used: 47648768
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b951c00 session 0x556d2cfd70e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.680100441s of 11.751850128s, submitted: 4
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2db51000 session 0x556d2ab892c0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dd70800 session 0x556d2dce50e0
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2a80b400 session 0x556d2b4d3680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b92000/0x0/0x1bfc00000, data 0x4b56846/0x4d7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [1,1])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4850717 data_alloc: 251658240 data_used: 48275456
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2ad52800 session 0x556d2dabcd20
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a40f9000/0x0/0x1bfc00000, data 0x35ef846/0x3815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 64970752 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 63815680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d2000/0x0/0x1bfc00000, data 0x3f0e846/0x4134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4680866 data_alloc: 234881024 data_used: 34750464
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d2000/0x0/0x1bfc00000, data 0x3f0e846/0x4134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d2000/0x0/0x1bfc00000, data 0x3f0e846/0x4134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.299698830s of 12.514649391s, submitted: 127
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4670962 data_alloc: 234881024 data_used: 34750464
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429735936 unmapped: 64602112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429744128 unmapped: 64593920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b627400 session 0x556d2da69e00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429744128 unmapped: 64593920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429744128 unmapped: 64593920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b951c00 session 0x556d2cfef680
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.399600983s of 21.432668686s, submitted: 8
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2db51000 session 0x556d2de8fe00
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429924352 unmapped: 64413696 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429924352 unmapped: 64413696 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429924352 unmapped: 64413696 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4676511 data_alloc: 234881024 data_used: 34836480
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4676511 data_alloc: 234881024 data_used: 34836480
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429948928 unmapped: 64389120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.514173508s of 13.534637451s, submitted: 5
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4678591 data_alloc: 234881024 data_used: 34877440
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693807 data_alloc: 234881024 data_used: 36077568
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693983 data_alloc: 234881024 data_used: 36073472
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.544389725s of 11.950169563s, submitted: 8
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693167 data_alloc: 234881024 data_used: 36057088
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4692815 data_alloc: 234881024 data_used: 36057088
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4692815 data_alloc: 234881024 data_used: 36057088
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:28 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b627400 session 0x556d2dec83c0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b951c00 session 0x556d2d9045a0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429801472 unmapped: 64536576 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429809664 unmapped: 64528384 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.238044739s of 18.349649429s, submitted: 8
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4698909 data_alloc: 234881024 data_used: 35725312
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429826048 unmapped: 64512000 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429826048 unmapped: 64512000 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2dcc5c00 session 0x556d2b547860
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a37a8000/0x0/0x1bfc00000, data 0x3fee650/0x4166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712557 data_alloc: 234881024 data_used: 35729408
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2c8d9800 session 0x556d2b546f00
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2db51c00 session 0x556d2b56b680
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a37a8000/0x0/0x1bfc00000, data 0x3fee650/0x4166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429842432 unmapped: 64495616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2b627400 session 0x556d2c6661e0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429842432 unmapped: 64495616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430612480 unmapped: 63725568 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2b951c00 session 0x556d2dd33e00
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2c8d9800 session 0x556d2acab860
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4777565 data_alloc: 234881024 data_used: 35729408
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f38000/0x0/0x1bfc00000, data 0x485d6b2/0x49d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f38000/0x0/0x1bfc00000, data 0x485d6b2/0x49d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430170112 unmapped: 64167936 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4777565 data_alloc: 234881024 data_used: 35729408
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.582045555s of 15.460376740s, submitted: 42
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430170112 unmapped: 64167936 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430170112 unmapped: 64167936 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2f34000/0x0/0x1bfc00000, data 0x485f3d5/0x49d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 432 ms_handle_reset con 0x556d2dcc5c00 session 0x556d2b4d2000
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 64143360 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 433 ms_handle_reset con 0x556d2dcd6400 session 0x556d2ca30960
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2dd0000/0x0/0x1bfc00000, data 0x49beeed/0x4b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 64815104 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2c946800 session 0x556d2b513c20
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2ad5c000 session 0x556d2dabed20
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 66068480 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4810492 data_alloc: 234881024 data_used: 35737600
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2b627400 session 0x556d2dce4f00
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 66068480 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 66068480 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2c8d9800 session 0x556d2d1032c0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428040192 unmapped: 66297856 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2dcc5c00 session 0x556d2de8f4a0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b951c00 session 0x556d2de9e000
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1928000/0x0/0x1bfc00000, data 0x4cc6c8e/0x4e46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428048384 unmapped: 66289664 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428064768 unmapped: 66273280 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863765 data_alloc: 234881024 data_used: 35762176
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863765 data_alloc: 234881024 data_used: 35762176
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863765 data_alloc: 234881024 data_used: 35762176
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.074008942s of 20.379779816s, submitted: 88
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1361000/0x0/0x1bfc00000, data 0x528c9e9/0x540d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4898372 data_alloc: 251658240 data_used: 37662720
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1361000/0x0/0x1bfc00000, data 0x528c9e9/0x540d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b627400 session 0x556d2d102d20
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429244416 unmapped: 65093632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2c8d9800 session 0x556d2d1030e0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2c946800 session 0x556d2dfc9680
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2deba000 session 0x556d2d9005a0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429244416 unmapped: 65093632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429244416 unmapped: 65093632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a128e000/0x0/0x1bfc00000, data 0x535ea0c/0x54e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [1])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430055424 unmapped: 64282624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964757 data_alloc: 251658240 data_used: 45170688
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.676950455s of 12.754966736s, submitted: 19
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a128c000/0x0/0x1bfc00000, data 0x5360a0c/0x54e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964989 data_alloc: 251658240 data_used: 45170688
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a128c000/0x0/0x1bfc00000, data 0x5360a0c/0x54e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 63733760 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964989 data_alloc: 251658240 data_used: 45170688
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430268416 unmapped: 64069632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a113c000/0x0/0x1bfc00000, data 0x57a8a0c/0x562a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5004793 data_alloc: 251658240 data_used: 45223936
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a113c000/0x0/0x1bfc00000, data 0x57a8a0c/0x562a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.769065857s of 14.865639687s, submitted: 37
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b627400 session 0x556d2acaba40
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b951c00 session 0x556d2d81b680
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1144000/0x0/0x1bfc00000, data 0x57a8a0c/0x562a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2ad5c000 session 0x556d2de9ed20
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4999833 data_alloc: 251658240 data_used: 45346816
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2c8d9800 session 0x556d2ad30000
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1145000/0x0/0x1bfc00000, data 0x57a89e9/0x5629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431562752 unmapped: 62775296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2dd44400 session 0x556d2d0bc1e0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d300ddc00 session 0x556d2de9ed20
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2c946800 session 0x556d2ca310e0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2dd44400 session 0x556d2d901a40
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2d75b400 session 0x556d2b559a40
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2b627400 session 0x556d2c9e21e0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2ad5c000 session 0x556d2d81b680
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428113920 unmapped: 66224128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2b627400 session 0x556d2d900b40
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a251e000/0x0/0x1bfc00000, data 0x3ff977e/0x4177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770683 data_alloc: 251658240 data_used: 37744640
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 437 ms_handle_reset con 0x556d2a80b400 session 0x556d2dd35860
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 437 ms_handle_reset con 0x556d2ad52800 session 0x556d2b512f00
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.628050804s of 10.047689438s, submitted: 132
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 437 ms_handle_reset con 0x556d2c946800 session 0x556d2cbfa3c0
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4760989 data_alloc: 251658240 data_used: 37629952
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a261c000/0x0/0x1bfc00000, data 0x3f2148c/0x4150000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 ms_handle_reset con 0x556d2a80b400 session 0x556d2de8fc20
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 68722688 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 68722688 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 68722688 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
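The repeating prioritycache tune_memory lines are the OSD memory autotuner: target is the configured osd_memory_target (4 GiB here), mapped/unmapped/heap come from the tcmalloc heap statistics, and old mem/new mem is the aggregate cache budget the tuner settles on (steady at ~2.8 GB because the process is well under target). A minimal, hypothetical parser for these lines (field names are mine, not Ceph's) that also derives how much of the heap sits unmapped:

    import re

    # Hypothetical helper: pull the numeric fields out of a
    # "prioritycache tune_memory ..." journal line.
    TUNE_RE = re.compile(
        r"tune_memory target: (?P<target>\d+) mapped: (?P<mapped>\d+) "
        r"unmapped: (?P<unmapped>\d+) heap: (?P<heap>\d+) "
        r"old mem: (?P<old_mem>\d+) new mem: (?P<new_mem>\d+)"
    )

    def parse_tune_memory(line):
        m = TUNE_RE.search(line)
        if m is None:
            return None
        fields = {k: int(v) for k, v in m.groupdict().items()}
        # Share of the tcmalloc heap that is unmapped (freed but not yet
        # returned to the OS) -- a rough fragmentation signal.
        fields["unmapped_frac"] = fields["unmapped"] / fields["heap"]
        return fields

    sample = ("prioritycache tune_memory target: 4294967296 mapped: 425648128 "
              "unmapped: 68689920 heap: 494338048 old mem: 2845415832 "
              "new mem: 2845415832")
    print(parse_tune_memory(sample))  # unmapped_frac is about 0.139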
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'config diff' '{prefix=config diff}'
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 68853760 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'config show' '{prefix=config show}'
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425099264 unmapped: 69238784 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424894464 unmapped: 69443584 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 03:54:29 np0005539550 ceph-osd[84753]: do_command 'log dump' '{prefix=log dump}'
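The do_command pairs are admin-socket requests arriving at osd.0 ('config diff', 'config show', 'counter dump', 'counter schema', 'log dump'), apparently a collector sweeping the daemon's introspection commands; 'result is 0 bytes' is simply the reply size this debug line reports. The same commands can be replayed by hand through the ceph CLI's daemon wrapper; a small sketch, assuming it runs on the OSD host with the default admin socket in place:

    import json
    import subprocess

    def osd_admin_command(daemon, *command):
        # Equivalent to, e.g.:  ceph daemon osd.0 counter dump
        out = subprocess.run(
            ["ceph", "daemon", daemon, *command],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    counters = osd_admin_command("osd.0", "counter", "dump")
    print(len(counters), "counter groups")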
Nov 29 03:54:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:29.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
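Every radosgw request leaves three lines: the 'starting new request' marker, the 'req done' summary, and a beast access-log record carrying client IP, user ('anonymous' here), timestamp, request line, HTTP status, byte count and latency. The anonymous HEAD / probes alternating between 192.168.122.100 and .102 every two seconds look like load-balancer health checks. A hypothetical regex for splitting the beast record (group names are mine):

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s'
    )

    sample = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
              '[29/Nov/2025:08:54:29.091 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.000000000s')
    m = BEAST_RE.search(sample)
    print(m.group("client"), m.group("request"),
          m.group("status"), m.group("latency"))
    # -> 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000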
Nov 29 03:54:29 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45488 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:29 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37263 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:29 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45094 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 03:54:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3106588293' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 03:54:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:29 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 03:54:29 np0005539550 podman[403201]: 2025-11-29 08:54:29.406845014 +0000 UTC m=+0.133738909 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 03:54:29 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45503 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:29 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37278 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:29 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45109 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 03:54:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633494285' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37296 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45124 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 03:54:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/272877073' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45527 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:30.277+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
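The mgr rejects 'healthcheck history ls' with (95) Operation not supported because that command is provided by the prometheus mgr module, which is not loaded on this cluster. The error text carries its own remedy, run once against the cluster:

    ceph mgr module enable prometheus

After that, the 'mgr module ls' query dispatched further down in this log would show prometheus among the enabled modules.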
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45133 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37311 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37323 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:30 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45151 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:30 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 03:54:30 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4293942550' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 03:54:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:31.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:31 np0005539550 nova_compute[257631]: 2025-11-29 08:54:31.282 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:31 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45169 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 29 03:54:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958189089' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 29 03:54:31 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37350 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:31 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:31.566+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:31 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
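The monitor's _set_new_cache_sizes line shows mon.compute-0 dividing its cache budget between incremental osdmaps, full osdmaps and the RocksDB key-value cache; the three allocations should account for nearly all of cache_size. Checking the arithmetic on the values just logged:

    # Values copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc  = 343932928   # incremental osdmap cache
    full_alloc = 348127232   # full osdmap cache
    kv_alloc   = 318767104   # rocksdb key-value cache

    total = inc_alloc + full_alloc + kv_alloc
    print(total, f"allocated, {total / cache_size:.1%} of cache_size")
    # -> 1010827264 allocated, 99.1% of cache_size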
Nov 29 03:54:31 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45181 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 03:54:31 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171313568' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3837724057' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1042063632' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 03:54:32 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45199 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267732668' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:32 np0005539550 nova_compute[257631]: 2025-11-29 08:54:32.497 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1597377115' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 03:54:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:32 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45232 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:32 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T08:54:32.901+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:32 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2524553455' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 03:54:32 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020650254' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 03:54:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:33.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147152108' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165970960' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 03:54:33 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45665 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 03:54:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6001.4 total, 600.0 interval
Cumulative writes: 61K writes, 229K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s
Cumulative WAL: 61K writes, 22K syncs, 2.67 writes per sync, written: 0.22 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4840 writes, 17K keys, 4840 commit groups, 1.0 writes per commit group, ingest: 15.96 MB, 0.03 MB/s
Interval WAL: 4840 writes, 2054 syncs, 2.36 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
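RocksDB emits this dump as one multi-line message; rsyslog stores it as a single record and escapes each embedded control character as '#' plus three octal digits (#012 for newline, #033 for the ESC that starts ANSI color codes). A small decoder for that escaping, assuming the default #-octal format:

    import re

    # rsyslog's control-character escaping: '#' followed by three
    # octal digits per control byte.
    OCTAL_RE = re.compile(r"#([0-7]{3})")

    def unescape_rsyslog(text):
        return OCTAL_RE.sub(lambda m: chr(int(m.group(1), 8)), text)

    print(unescape_rsyslog("** DB Stats **#012Uptime(secs): 6001.4 total"))
    # prints the header and the uptime field on separate lines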
Nov 29 03:54:33 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45674 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262731917' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 29 03:54:33 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732643100' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 29 03:54:33 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45680 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:34 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45686 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:34 np0005539550 systemd[1]: Starting Hostname Service...
Nov 29 03:54:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 29 03:54:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3200114691' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 29 03:54:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951603567' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:34 np0005539550 systemd[1]: Started Hostname Service.
Nov 29 03:54:34 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45698 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:34 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37449 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:34 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 03:54:34 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717567010' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 03:54:34 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45716 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:34 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37458 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:35 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37467 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:35 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45737 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:35.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:35 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37482 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:35.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:35 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45755 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:54:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:54:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:54:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
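The two mon_commands above are the cephadm mgr (mgr.compute-0.pdhsqi) refreshing what it distributes to managed hosts: a minimal ceph.conf plus the client.admin keyring. Both are reproducible from any admin node with the very commands it dispatched:

    ceph config generate-minimal-conf
    ceph auth get client.admin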
Nov 29 03:54:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:54:35 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37497 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:35 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45767 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:35 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 03:54:35 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1351625979' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37512 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c11e52aa-7210-42a4-9295-d66376cb5c89 does not exist
Nov 29 03:54:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4e36f38a-94a6-47c9-a269-ff1098f17e96 does not exist
Nov 29 03:54:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 91bca01c-57fd-48f6-a6d6-9c9fc8cd481a does not exist
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:54:36 np0005539550 nova_compute[257631]: 2025-11-29 08:54:36.284 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1035879221' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:36 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45776 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:36 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37548 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676447210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:36 np0005539550 podman[404740]: 2025-11-29 08:54:36.854538389 +0000 UTC m=+0.072534997 container create 4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:54:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:36 np0005539550 podman[404740]: 2025-11-29 08:54:36.82861013 +0000 UTC m=+0.046606768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:36 np0005539550 systemd[1]: Started libpod-conmon-4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f.scope.
Nov 29 03:54:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:54:36 np0005539550 podman[404740]: 2025-11-29 08:54:36.991474906 +0000 UTC m=+0.209471534 container init 4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 29 03:54:37 np0005539550 podman[404740]: 2025-11-29 08:54:36.999875157 +0000 UTC m=+0.217871775 container start 4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:54:37 np0005539550 podman[404740]: 2025-11-29 08:54:37.004087772 +0000 UTC m=+0.222084380 container attach 4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:54:37 np0005539550 quirky_booth[404790]: 167 167
Nov 29 03:54:37 np0005539550 systemd[1]: libpod-4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f.scope: Deactivated successfully.
Nov 29 03:54:37 np0005539550 podman[404808]: 2025-11-29 08:54:37.048545835 +0000 UTC m=+0.024904135 container died 4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:54:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b4f9148251c2d25110931be00c5faebb4302c7310b6d09fd49af36b0787a2a2c-merged.mount: Deactivated successfully.
Nov 29 03:54:37 np0005539550 podman[404808]: 2025-11-29 08:54:37.095715725 +0000 UTC m=+0.072074055 container remove 4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_booth, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:37 np0005539550 systemd[1]: libpod-conmon-4ff482027efa1f3e64c6acd6f72c1b266f16060b79c1a02e269f6c5f14b4602f.scope: Deactivated successfully.
Nov 29 03:54:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:37.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
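The quirky_booth container above is a typical cephadm probe: a throwaway run of the ceph image (create, init, start, attach, died, remove, all within roughly a second) whose only output is '167 167', the uid and gid of the ceph user baked into the image, which cephadm needs before writing files on the host. As a rough illustration (the exact invocation may differ), the probe amounts to:

    podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 stat -c '%u %g' /var/lib/ceph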
Nov 29 03:54:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 03:54:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/138124854' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 03:54:37 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37563 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:54:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:37 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:54:37 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45361 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:37 np0005539550 podman[404836]: 2025-11-29 08:54:37.301027714 +0000 UTC m=+0.040134565 container create 927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 29 03:54:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:37 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45815 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:37 np0005539550 systemd[1]: Started libpod-conmon-927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb.scope.
Nov 29 03:54:37 np0005539550 podman[404836]: 2025-11-29 08:54:37.286554342 +0000 UTC m=+0.025661193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:37 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:54:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fd151170cf39272acd0ce6bba9572133779c594ab0b2621655a18b57066402/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fd151170cf39272acd0ce6bba9572133779c594ab0b2621655a18b57066402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fd151170cf39272acd0ce6bba9572133779c594ab0b2621655a18b57066402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fd151170cf39272acd0ce6bba9572133779c594ab0b2621655a18b57066402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:37 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fd151170cf39272acd0ce6bba9572133779c594ab0b2621655a18b57066402/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
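The burst of xfs remount messages is the kernel noting that these filesystems use 32-bit inode timestamps, which overflow in January 2038 (0x7fffffff). It is informational only; the xfs 'bigtime' feature lifts the limit. On a reasonably recent xfsprogs, whether a filesystem has it can be checked with, for example:

    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'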
Nov 29 03:54:37 np0005539550 podman[404836]: 2025-11-29 08:54:37.404250508 +0000 UTC m=+0.143357429 container init 927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:54:37 np0005539550 podman[404836]: 2025-11-29 08:54:37.420502925 +0000 UTC m=+0.159609786 container start 927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:37 np0005539550 podman[404836]: 2025-11-29 08:54:37.424803682 +0000 UTC m=+0.163910564 container attach 927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:54:37 np0005539550 nova_compute[257631]: 2025-11-29 08:54:37.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:37 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45373 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:37 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45382 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 03:54:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1608151325' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 03:54:38 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45400 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:38 np0005539550 ecstatic_neumann[404874]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:54:38 np0005539550 ecstatic_neumann[404874]: --> relative data size: 1.0
Nov 29 03:54:38 np0005539550 ecstatic_neumann[404874]: --> All data devices are unavailable
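ecstatic_neumann is another cephadm helper, this time wrapping ceph-volume: it sees one LVM-backed candidate data device but reports every data device unavailable (already consumed by the existing OSD), so no new OSD is deployed. The underlying device report can be inspected with ceph-volume's inventory subcommand, for example:

    cephadm ceph-volume -- inventory --format json-pretty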
Nov 29 03:54:38 np0005539550 systemd[1]: libpod-927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb.scope: Deactivated successfully.
Nov 29 03:54:38 np0005539550 podman[404836]: 2025-11-29 08:54:38.311169747 +0000 UTC m=+1.050276608 container died 927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:38 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c7fd151170cf39272acd0ce6bba9572133779c594ab0b2621655a18b57066402-merged.mount: Deactivated successfully.
Nov 29 03:54:38 np0005539550 podman[404836]: 2025-11-29 08:54:38.370665617 +0000 UTC m=+1.109772468 container remove 927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:54:38 np0005539550 systemd[1]: libpod-conmon-927a5fb35e4321253c90250807da318ae86880ace83d5a809ae22fab69869fcb.scope: Deactivated successfully.
Nov 29 03:54:38 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45412 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:38 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37605 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:38 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45427 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 03:54:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478387535' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 29 03:54:39 np0005539550 podman[405214]: 2025-11-29 08:54:39.045121638 +0000 UTC m=+0.040688799 container create e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:39 np0005539550 systemd[1]: Started libpod-conmon-e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7.scope.
Nov 29 03:54:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:39.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
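[editor's note] The radosgw "beast:" access lines follow a fixed layout: request pointer, client IP, user, timestamp, request line, status, bytes, and latency. A hedged sketch of pulling those fields out with a regex; the pattern is inferred from the sample lines here, not from radosgw's source:

    # Parse a radosgw "beast:" access-log line like the ones above.
    # The regex is inferred from these samples only.
    import re

    BEAST_RE = re.compile(
        r'beast: (?P<ptr>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:54:39.102 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('ip'), m.group('status'), m.group('latency'))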
Nov 29 03:54:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:54:39 np0005539550 podman[405214]: 2025-11-29 08:54:39.0260118 +0000 UTC m=+0.021578981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:39 np0005539550 podman[405214]: 2025-11-29 08:54:39.13548896 +0000 UTC m=+0.131056201 container init e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:54:39 np0005539550 podman[405214]: 2025-11-29 08:54:39.143833389 +0000 UTC m=+0.139400590 container start e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:54:39 np0005539550 podman[405214]: 2025-11-29 08:54:39.14787146 +0000 UTC m=+0.143438671 container attach e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hertz, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:54:39 np0005539550 nervous_hertz[405252]: 167 167
Nov 29 03:54:39 np0005539550 systemd[1]: libpod-e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7.scope: Deactivated successfully.
Nov 29 03:54:39 np0005539550 podman[405214]: 2025-11-29 08:54:39.152567268 +0000 UTC m=+0.148134469 container died e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 29 03:54:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3d56d7e541c4b32e3705f0dfb1c58587983c2d6332add3df2ec232ae11b1e415-merged.mount: Deactivated successfully.
Nov 29 03:54:39 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45439 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:39 np0005539550 podman[405214]: 2025-11-29 08:54:39.217446682 +0000 UTC m=+0.213013873 container remove e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hertz, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 03:54:39 np0005539550 systemd[1]: libpod-conmon-e53f2d31c4138523ae447faa937d516ebc1023d42f643638c813b843943698f7.scope: Deactivated successfully.
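[editor's note] Each short-lived cephadm helper above runs a full create → init → start → attach → died → remove cycle in about a second. The same sequence is visible through podman's event stream; a minimal sketch, assuming `podman events --format json` emits one JSON object per line on this host (field casing may differ across podman versions):

    # Replay container lifecycle events like the sequence above.
    # Assumes `podman events --format json` prints one JSON object
    # per line and exits once --until is reached.
    import json
    import subprocess

    proc = subprocess.run(
        ['podman', 'events', '--since', '1m', '--until', 'now',
         '--filter', 'type=container', '--format', 'json'],
        capture_output=True, text=True, check=True,
    )
    for line in proc.stdout.splitlines():
        ev = json.loads(line)
        # Keys are capitalized in podman 4.x JSON output (assumption).
        print(ev.get('Status') or ev.get('status'),
              ev.get('Name') or ev.get('name'))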
Nov 29 03:54:39 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45881 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:39.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 29 03:54:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/60409289' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 29 03:54:39 np0005539550 podman[405280]: 2025-11-29 08:54:39.422252828 +0000 UTC m=+0.053086080 container create f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:54:39 np0005539550 systemd[1]: Started libpod-conmon-f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b.scope.
Nov 29 03:54:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:54:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30bead9f40350efc0b22d2d89d2cc92730e99fb6a70632c1684da393982a6515/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30bead9f40350efc0b22d2d89d2cc92730e99fb6a70632c1684da393982a6515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30bead9f40350efc0b22d2d89d2cc92730e99fb6a70632c1684da393982a6515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:39 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30bead9f40350efc0b22d2d89d2cc92730e99fb6a70632c1684da393982a6515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
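[editor's note] The kernel's warning that these XFS mounts support timestamps only until 2038 (0x7fffffff) is plain 32-bit time_t arithmetic; the cutoff decodes as follows:

    # 0x7fffffff is the largest signed 32-bit time_t; XFS without the
    # bigtime feature cannot store timestamps past this instant.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00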
Nov 29 03:54:39 np0005539550 podman[405280]: 2025-11-29 08:54:39.401422077 +0000 UTC m=+0.032255329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:39 np0005539550 podman[405280]: 2025-11-29 08:54:39.523259306 +0000 UTC m=+0.154092618 container init f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 29 03:54:39 np0005539550 podman[405280]: 2025-11-29 08:54:39.531296267 +0000 UTC m=+0.162129509 container start f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:54:39 np0005539550 podman[405280]: 2025-11-29 08:54:39.535230066 +0000 UTC m=+0.166063348 container attach f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:54:39 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45463 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:39 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 29 03:54:39 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817880777' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 29 03:54:40 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45484 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 03:54:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 03:54:40 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 29 03:54:40 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117757558' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]: {
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:    "0": [
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:        {
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "devices": [
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "/dev/loop3"
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            ],
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "lv_name": "ceph_lv0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "lv_size": "7511998464",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "name": "ceph_lv0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "tags": {
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.cluster_name": "ceph",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.crush_device_class": "",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.encrypted": "0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.osd_id": "0",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.type": "block",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:                "ceph.vdo": "0"
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            },
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "type": "block",
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:            "vg_name": "ceph_vg0"
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:        }
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]:    ]
Nov 29 03:54:40 np0005539550 musing_mahavira[405310]: }
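[editor's note] The JSON musing_mahavira printed above is a ceph-volume lvm inventory keyed by OSD id. A minimal sketch recovering the OSD identity from the flat lv_tags string, using a trimmed inline copy of the payload above (the full payload would parse the same way):

    # Extract OSD identity from ceph-volume lvm-list style JSON.
    # `payload` is a trimmed inline copy of the block logged above.
    import json

    payload = '''{"0": [{"lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_tags": "ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0"}]}'''

    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            # lv_tags is a flat "key=value,key=value" string.
            tags = dict(kv.split('=', 1) for kv in lv['lv_tags'].split(','))
            print(osd_id, tags['ceph.osd_fsid'], lv['lv_path'])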
Nov 29 03:54:40 np0005539550 systemd[1]: libpod-f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b.scope: Deactivated successfully.
Nov 29 03:54:40 np0005539550 podman[405280]: 2025-11-29 08:54:40.33881368 +0000 UTC m=+0.969646922 container died f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:54:40 np0005539550 systemd[1]: var-lib-containers-storage-overlay-30bead9f40350efc0b22d2d89d2cc92730e99fb6a70632c1684da393982a6515-merged.mount: Deactivated successfully.
Nov 29 03:54:40 np0005539550 podman[405280]: 2025-11-29 08:54:40.469274765 +0000 UTC m=+1.100108007 container remove f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:54:40 np0005539550 systemd[1]: libpod-conmon-f2195ef15e886ff46f4c59ed0b6752dfe267382b45ceb3f7014864cc1290127b.scope: Deactivated successfully.
Nov 29 03:54:40 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45911 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:40 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37677 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 29 03:54:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023488612' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 29 03:54:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:41 np0005539550 podman[405591]: 2025-11-29 08:54:41.124957787 +0000 UTC m=+0.064455514 container create a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:54:41 np0005539550 systemd[1]: Started libpod-conmon-a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad.scope.
Nov 29 03:54:41 np0005539550 podman[405591]: 2025-11-29 08:54:41.100283309 +0000 UTC m=+0.039781056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:54:41 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45529 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:41 np0005539550 podman[405591]: 2025-11-29 08:54:41.217747929 +0000 UTC m=+0.157245676 container init a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:54:41 np0005539550 podman[405591]: 2025-11-29 08:54:41.225651157 +0000 UTC m=+0.165148874 container start a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 29 03:54:41 np0005539550 elated_mirzakhani[405610]: 167 167
Nov 29 03:54:41 np0005539550 systemd[1]: libpod-a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad.scope: Deactivated successfully.
Nov 29 03:54:41 np0005539550 podman[405591]: 2025-11-29 08:54:41.24253145 +0000 UTC m=+0.182029177 container attach a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:54:41 np0005539550 podman[405591]: 2025-11-29 08:54:41.24375359 +0000 UTC m=+0.183251327 container died a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:54:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d8ecf2d6d20bcd7dca3b6355030230d760bec563a2f37e9e2d22f04b2f20044d-merged.mount: Deactivated successfully.
Nov 29 03:54:41 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45932 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:41 np0005539550 podman[405591]: 2025-11-29 08:54:41.298225324 +0000 UTC m=+0.237723051 container remove a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mirzakhani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:54:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:41.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:41 np0005539550 nova_compute[257631]: 2025-11-29 08:54:41.335 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:41 np0005539550 systemd[1]: libpod-conmon-a945d00e901ef2cd157b0ee85317ece40b53a249f82ac27e0b574eaae48cd4ad.scope: Deactivated successfully.
Nov 29 03:54:41 np0005539550 podman[405661]: 2025-11-29 08:54:41.544596751 +0000 UTC m=+0.044599848 container create 3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_turing, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:54:41 np0005539550 systemd[1]: Started libpod-conmon-3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238.scope.
Nov 29 03:54:41 np0005539550 podman[405661]: 2025-11-29 08:54:41.530459567 +0000 UTC m=+0.030462674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:54:41 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:54:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d48a0e522f23584c16f8bfd14467d31d639ca39acc017985f882b27c45ebf1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d48a0e522f23584c16f8bfd14467d31d639ca39acc017985f882b27c45ebf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d48a0e522f23584c16f8bfd14467d31d639ca39acc017985f882b27c45ebf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:41 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04d48a0e522f23584c16f8bfd14467d31d639ca39acc017985f882b27c45ebf1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:54:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 29 03:54:41 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/343919863' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 29 03:54:41 np0005539550 podman[405661]: 2025-11-29 08:54:41.667271641 +0000 UTC m=+0.167274758 container init 3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_turing, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 03:54:41 np0005539550 podman[405661]: 2025-11-29 08:54:41.673781894 +0000 UTC m=+0.173784981 container start 3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:54:41 np0005539550 podman[405661]: 2025-11-29 08:54:41.677369174 +0000 UTC m=+0.177372281 container attach 3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:54:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:41 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45941 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:41 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37701 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 29 03:54:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454099543' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 29 03:54:42 np0005539550 nova_compute[257631]: 2025-11-29 08:54:42.534 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]: {
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]:        "osd_id": 0,
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]:        "type": "bluestore"
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]:    }
Nov 29 03:54:42 np0005539550 beautiful_turing[405682]: }
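[editor's note] beautiful_turing's JSON is the complementary view: keyed by OSD uuid and pointing at the device-mapper path of the bluestore block device. A short sketch (same approach, trimmed inline copy of the payload) inverting it into a device → osd_id map:

    # Invert the osd_uuid-keyed JSON above into device -> osd_id.
    import json

    payload = '''{"5dd67027-4f06-4800-93bd-47ed1a74c5e6":
      {"device": "/dev/mapper/ceph_vg0-ceph_lv0", "osd_id": 0,
       "type": "bluestore"}}'''

    by_device = {v['device']: v['osd_id']
                 for v in json.loads(payload).values()}
    print(by_device)  # {'/dev/mapper/ceph_vg0-ceph_lv0': 0}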
Nov 29 03:54:42 np0005539550 systemd[1]: libpod-3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238.scope: Deactivated successfully.
Nov 29 03:54:42 np0005539550 podman[405661]: 2025-11-29 08:54:42.577002851 +0000 UTC m=+1.077005958 container died 3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:54:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-04d48a0e522f23584c16f8bfd14467d31d639ca39acc017985f882b27c45ebf1-merged.mount: Deactivated successfully.
Nov 29 03:54:42 np0005539550 podman[405661]: 2025-11-29 08:54:42.624652893 +0000 UTC m=+1.124655990 container remove 3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:54:42 np0005539550 systemd[1]: libpod-conmon-3f72bd3d97e101799980d0806d12810068465d68c81a6ba1972bf6644b303238.scope: Deactivated successfully.
Nov 29 03:54:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:54:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:54:42 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:42 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0fa7a9ec-4998-4853-ad23-587f8fb79da3 does not exist
Nov 29 03:54:42 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1f18ecb1-382d-4e41-a674-b5668699c287 does not exist
Nov 29 03:54:42 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 07cc418d-bd44-472c-8fab-474ce5009d2e does not exist
Nov 29 03:54:42 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37716 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:42 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45962 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:43.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37728 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45977 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
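[editor's note] The pg_autoscaler figures above are internally consistent: each pool's pg target is its usage fraction times its bias times a cluster budget of roughly 300 PGs, which matches the default 100 PGs per OSD across the three ~7.5 GB OSDs behind the 22535995392-byte total. This reading is inferred from the logged numbers, not from the autoscaler source; a worked check:

    # Reproduce the logged pg targets from usage * bias * budget.
    # budget=300 is inferred from the figures (~100 PGs/OSD * 3 OSDs).
    budget = 300
    pools = {
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'volumes':            (0.0021614147124511445, 1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * budget)
    # -> 0.00616144960..., 0.64842441373..., 0.00174483528...
    #    matching the "pg target" values logged above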
Nov 29 03:54:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:43 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45571 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 29 03:54:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778481131' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 29 03:54:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:43 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:54:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 29 03:54:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211378413' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45998 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37749 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46004 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37755 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45592 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 29 03:54:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3831198336' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 29 03:54:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:45.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:45 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 29 03:54:45 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22446820' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 29 03:54:45 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45604 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:45 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37779 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:45 np0005539550 ovs-appctl[406830]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:45 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45613 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:45 np0005539550 ovs-appctl[406838]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:45 np0005539550 ovs-appctl[406848]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 03:54:46 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46034 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:46 np0005539550 nova_compute[257631]: 2025-11-29 08:54:46.336 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:46 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37788 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45631 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:47.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 03:54:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1263279368' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 03:54:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45640 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
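Note: each pg_autoscaler pair above logs a pool's share of raw capacity and its bias, then the PG target derived from them. The targets printed here are consistent with ratio x bias x 300, i.e. a cluster-wide budget of mon_target_pg_per_osd = 100 across 3 OSDs (an inference from the numbers, not stated in the log); the result is then quantized to a power of two, with tiny targets left at the pool's current pg_num (32 here, 16 for the metadata pool):

    # Reproducing two pg targets from the autoscaler lines above, assuming a
    # 300-PG budget (mon_target_pg_per_osd=100 x 3 OSDs -- an assumption):
    BUDGET = 100 * 3
    for pool, ratio, bias in [
        ("volumes", 0.0021614147124511445, 1.0),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
    ]:
        print(pool, ratio * bias * BUDGET)
    # volumes ~0.6484244137353433, cephfs.cephfs.meta ~0.0017448352875488554,
    # matching the "pg target" values logged above.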
Nov 29 03:54:47 np0005539550 nova_compute[257631]: 2025-11-29 08:54:47.536 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 29 03:54:47 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3307240702' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 29 03:54:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 03:54:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3149991991' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:48 np0005539550 podman[407433]: 2025-11-29 08:54:48.330781917 +0000 UTC m=+0.063651294 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 03:54:48 np0005539550 podman[407416]: 2025-11-29 08:54:48.334217033 +0000 UTC m=+0.065146281 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
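Note: the two podman `health_status=healthy` events above are emitted by podman's periodic healthcheck runs; the `healthcheck` entry inside each container's embedded config_data mounts an `/openstack/healthcheck` script that the check executes. A sketch of querying the same state on demand (container name taken from the event; assumes the container defines a healthcheck, as its config_data shows):

    import subprocess

    # Ask podman for the current health state of the ovn_metadata_agent container.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "healthy", per the event above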
Nov 29 03:54:48 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37815 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:48 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46082 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:48 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45664 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639093317' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:49.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:49 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45673 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 03:54:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:49.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Nov 29 03:54:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/612551705' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Nov 29 03:54:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/220740761' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:49 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46109 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242303500' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:50 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37854 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707449652' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:51.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:51 np0005539550 nova_compute[257631]: 2025-11-29 08:54:51.339 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:51.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46133 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3421414835' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46139 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37872 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Nov 29 03:54:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1875405057' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45703 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539550 nova_compute[257631]: 2025-11-29 08:54:52.585 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:52 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37884 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:52 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46169 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:52 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37890 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:53.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46181 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:53 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:54:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:53.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Nov 29 03:54:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498456589' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Nov 29 03:54:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Nov 29 03:54:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030152672' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37917 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46208 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45742 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37926 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46223 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 03:54:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3269832113' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 03:54:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:55.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:55.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Nov 29 03:54:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/314104981' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Nov 29 03:54:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45784 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37953 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.37959 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:56 np0005539550 nova_compute[257631]: 2025-11-29 08:54:56.384 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45802 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:54:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 29 03:54:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/172812152' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 29 03:54:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45808 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:57.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Nov 29 03:54:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4246028580' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Nov 29 03:54:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:57.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:57 np0005539550 virtqemud[256287]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
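Note: the virtqemud warning above means the read-only socket of virtstoraged (modular libvirt's storage daemon) is absent on this host. That is typically benign on a Ceph-backed compute node that never uses libvirt storage pools, but it is worth confirming the socket units are intentionally inactive (e.g. with `systemctl status virtstoraged.socket`). A trivial check of the path named in the log:

    import os

    # Path copied from the virtqemud message above.
    sock = "/var/run/libvirt/virtstoraged-sock-ro"
    print(sock, "exists" if os.path.exists(sock) else "missing")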
Nov 29 03:54:57 np0005539550 nova_compute[257631]: 2025-11-29 08:54:57.588 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:54:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:54:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:54:58 np0005539550 systemd[1]: Starting Time & Date Service...
Nov 29 03:54:59 np0005539550 systemd[1]: Started Time & Date Service.
Nov 29 03:54:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:54:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:54:59.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45826 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:54:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:54:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:54:59.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:54:59
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'images']
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
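Note: the balancer pass above ran in upmap mode with a 5% max-misplaced budget and prepared 0 of a possible 10 changes, the expected outcome while every PG in the surrounding pgmap lines is active+clean and the data set is tiny. Loosely, that budget caps how much of the cluster a single plan may set in motion:

    # Rough reading of "max misplaced 0.050000" against the 305 PGs reported
    # in the surrounding pgmap lines (an illustration, not balancer internals):
    pgs, max_misplaced = 305, 0.05
    print(int(pgs * max_misplaced))  # at most ~15 PGs' worth of movement per plan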
Nov 29 03:54:59 np0005539550 podman[409194]: 2025-11-29 08:54:59.553211393 +0000 UTC m=+0.096074985 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45832 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:54:59 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 03:54:59 np0005539550 nova_compute[257631]: 2025-11-29 08:54:59.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:00 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45850 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:55:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:00 np0005539550 nova_compute[257631]: 2025-11-29 08:55:00.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:00 np0005539550 nova_compute[257631]: 2025-11-29 08:55:00.922 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:55:00 np0005539550 nova_compute[257631]: 2025-11-29 08:55:00.922 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:55:00 np0005539550 nova_compute[257631]: 2025-11-29 08:55:00.950 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:55:01 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.45856 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 03:55:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:01.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:01.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:01 np0005539550 nova_compute[257631]: 2025-11-29 08:55:01.425 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:02 np0005539550 nova_compute[257631]: 2025-11-29 08:55:02.625 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:03.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:03.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:03 np0005539550 nova_compute[257631]: 2025-11-29 08:55:03.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:03 np0005539550 nova_compute[257631]: 2025-11-29 08:55:03.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:03 np0005539550 nova_compute[257631]: 2025-11-29 08:55:03.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:55:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:04 np0005539550 nova_compute[257631]: 2025-11-29 08:55:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:04 np0005539550 nova_compute[257631]: 2025-11-29 08:55:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:04 np0005539550 nova_compute[257631]: 2025-11-29 08:55:04.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:55:04 np0005539550 nova_compute[257631]: 2025-11-29 08:55:04.953 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:55:04 np0005539550 nova_compute[257631]: 2025-11-29 08:55:04.953 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:55:04 np0005539550 nova_compute[257631]: 2025-11-29 08:55:04.953 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:55:04 np0005539550 nova_compute[257631]: 2025-11-29 08:55:04.954 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:55:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:05.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:05.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3158463580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:05 np0005539550 nova_compute[257631]: 2025-11-29 08:55:05.400 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:55:05 np0005539550 nova_compute[257631]: 2025-11-29 08:55:05.608 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:55:05 np0005539550 nova_compute[257631]: 2025-11-29 08:55:05.610 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3868MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:55:05 np0005539550 nova_compute[257631]: 2025-11-29 08:55:05.610 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:55:05 np0005539550 nova_compute[257631]: 2025-11-29 08:55:05.610 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.129 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.130 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.157 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.428 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:55:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1746553917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.609 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.618 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.644 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.646 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:55:06 np0005539550 nova_compute[257631]: 2025-11-29 08:55:06.647 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
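The Acquiring / "acquired :: waited" / "released :: held" triplet bracketing the resource update comes from oslo.concurrency's named in-process locks. A minimal sketch of the same pattern, with an illustrative function name:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # While this runs, the named lock is held; lockutils emits the
        # "Acquiring lock", "acquired ... waited", and "released ... held"
        # DEBUG lines seen above around each call.
        pass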
Nov 29 03:55:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:07.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:07.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
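The beast lines that repeat every two seconds are load-balancer health probes ("HEAD / HTTP/1.0" from the two controller addresses). A rough parser for the access-log format as logged, assuming the field order stays fixed:

    import re

    BEAST = re.compile(
        r"beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] "
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:55:07.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.search(line)
    print(m["ip"], m["status"], float(m["lat"]))  # 192.168.122.100 200 0.001000025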
Nov 29 03:55:07 np0005539550 nova_compute[257631]: 2025-11-29 08:55:07.629 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:55:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:55:08 np0005539550 nova_compute[257631]: 2025-11-29 08:55:08.648 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:09.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:55:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:55:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:55:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:55:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
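The rbd_support handlers above reload per-pool mirror-snapshot and trash-purge schedules for the vms, volumes, backups, and images pools. The corresponding schedules can be listed from the CLI; a sketch under those assumptions (pool names taken from the log, subcommand spellings per the stock rbd tool):

    import subprocess

    # List mirror-snapshot and trash-purge schedules for the pools the mgr
    # module iterates above. check=False since pools without mirroring
    # configured may simply return an error.
    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)
        subprocess.run(["rbd", "trash", "purge", "schedule", "ls",
                        "--pool", pool, "--recursive"], check=False)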
Nov 29 03:55:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:09.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:11.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:11.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:11 np0005539550 nova_compute[257631]: 2025-11-29 08:55:11.457 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:12 np0005539550 nova_compute[257631]: 2025-11-29 08:55:12.671 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:12 np0005539550 nova_compute[257631]: 2025-11-29 08:55:12.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:13.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:13.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:15.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:16 np0005539550 nova_compute[257631]: 2025-11-29 08:55:16.459 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:16 np0005539550 nova_compute[257631]: 2025-11-29 08:55:16.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:17.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:17.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:17 np0005539550 nova_compute[257631]: 2025-11-29 08:55:17.674 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:18 np0005539550 podman[409329]: 2025-11-29 08:55:18.839502486 +0000 UTC m=+0.065713146 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 03:55:18 np0005539550 podman[409328]: 2025-11-29 08:55:18.861929777 +0000 UTC m=+0.093365448 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
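The two health_status events above carry the container's full config_data inline as a repr'd Python dict (single quotes, True/False), not JSON. A small sketch for pulling the healthcheck definition back out of such a journal line, assuming the dict's closing brace is the last '}' on the line, which holds for these events:

    import ast
    import re

    def healthcheck_from_event(journal_msg: str):
        # config_data={...} is a Python dict literal, so ast.literal_eval is
        # the right parser here, not json.loads.
        m = re.search(r"config_data=(\{.*\})", journal_msg)
        if not m:
            return None
        cfg = ast.literal_eval(m.group(1))
        # e.g. {'mount': '/var/lib/openstack/healthchecks/multipathd',
        #       'test': '/openstack/healthcheck'}
        return cfg.get("healthcheck")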
Nov 29 03:55:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:55:18.986 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:55:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:55:18.987 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:55:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:55:18.987 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:55:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:19.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:19.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
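Each pool's "pg target" above is consistent with capacity_ratio x bias x (target PGs per OSD x OSD count); with the default mon_target_pg_per_osd of 100 and three OSDs, the multiplier is 300, and the result is then quantized toward a power of two no lower than the current pg_num. The 300 multiplier is an inference from these numbers, not a quoted config. A rough check against the logged values:

    # Reproduce the autoscaler arithmetic visible above (inferred, not authoritative).
    POOL_BUDGET = 100 * 3  # mon_target_pg_per_osd (default 100) * 3 OSDs, inferred

    for pool, ratio, bias in [
        ("volumes", 0.0021614147124511445, 1.0),
        ("images", 0.0019031427391587568, 1.0),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
    ]:
        target = ratio * bias * POOL_BUDGET
        print(pool, round(target, 6))
        # volumes -> 0.648424, images -> 0.570943, cephfs.cephfs.meta -> 0.001745,
        # matching the "pg target" value logged for each pool.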
Nov 29 03:55:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:21.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:21.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:21 np0005539550 nova_compute[257631]: 2025-11-29 08:55:21.461 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:22 np0005539550 nova_compute[257631]: 2025-11-29 08:55:22.677 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:23.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:25.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:25.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:25 np0005539550 nova_compute[257631]: 2025-11-29 08:55:25.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:26 np0005539550 nova_compute[257631]: 2025-11-29 08:55:26.463 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:55:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:27.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:55:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:27.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:27 np0005539550 nova_compute[257631]: 2025-11-29 08:55:27.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:29 np0005539550 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 03:55:29 np0005539550 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 03:55:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:29.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:29.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:30 np0005539550 podman[409375]: 2025-11-29 08:55:30.401362158 +0000 UTC m=+0.131331009 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:55:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:30 np0005539550 nova_compute[257631]: 2025-11-29 08:55:30.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:30 np0005539550 nova_compute[257631]: 2025-11-29 08:55:30.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 03:55:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:31.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:31.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:31 np0005539550 nova_compute[257631]: 2025-11-29 08:55:31.498 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.760427) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531760561, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1147, "num_deletes": 251, "total_data_size": 1478760, "memory_usage": 1507288, "flush_reason": "Manual Compaction"}
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531769110, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 1050502, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70640, "largest_seqno": 71786, "table_properties": {"data_size": 1045003, "index_size": 2509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 17712, "raw_average_key_size": 23, "raw_value_size": 1032203, "raw_average_value_size": 1354, "num_data_blocks": 108, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406467, "oldest_key_time": 1764406467, "file_creation_time": 1764406531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 8718 microseconds, and 4203 cpu microseconds.
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.769172) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 1050502 bytes OK
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.769188) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.770614) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.770632) EVENT_LOG_v1 {"time_micros": 1764406531770629, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.770647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1472556, prev total WAL file size 1472556, number of live WAL files 2.
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.771631) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353035' seq:72057594037927935, type:22 .. '6D6772737461740032373537' seq:0, type:0; will stop at (end)
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(1025KB)], [158(13MB)]
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531771789, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 15483852, "oldest_snapshot_seqno": -1}
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 10973 keys, 12038245 bytes, temperature: kUnknown
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531878139, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 12038245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11970212, "index_size": 39504, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 289574, "raw_average_key_size": 26, "raw_value_size": 11780633, "raw_average_value_size": 1073, "num_data_blocks": 1496, "num_entries": 10973, "num_filter_entries": 10973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406531, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.878513) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 12038245 bytes
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.881743) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.5 rd, 113.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(26.2) write-amplify(11.5) OK, records in: 11461, records dropped: 488 output_compression: NoCompression
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.881776) EVENT_LOG_v1 {"time_micros": 1764406531881760, "job": 98, "event": "compaction_finished", "compaction_time_micros": 106441, "compaction_time_cpu_micros": 68100, "output_level": 6, "num_output_files": 1, "total_output_size": 12038245, "num_input_records": 11461, "num_output_records": 10973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
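JOB 98's throughput and amplification figures follow directly from its own event fields: 15,483,852 input bytes and 12,038,245 output bytes over 106,441 microseconds. A quick check of the logged summary (write-amplify is output over the ~1.0 MB read from L0; read-write-amplify adds both reads and the write; RocksDB uses exact byte counts, so the rounded MB figures give 26.3 rather than the logged 26.2):

    # Verify JOB 98's logged rates from its own event fields.
    input_bytes = 15_483_852          # "input_data_size" (L0 1.0 MB + L6 13.8 MB)
    output_bytes = 12_038_245         # "total_output_size"
    secs = 106_441 / 1e6              # "compaction_time_micros"
    l0_mb, l6_mb, out_mb = 1.0, 13.8, 11.5  # MB figures from the summary line

    print(round(input_bytes / secs / 1e6, 1))         # 145.5 -> "MB/sec: 145.5 rd"
    print(round(output_bytes / secs / 1e6, 1))        # 113.1 -> "113.1 wr"
    print(round(out_mb / l0_mb, 1))                   # 11.5  -> "write-amplify(11.5)"
    print(round((l0_mb + l6_mb + out_mb) / l0_mb, 1)) # 26.3  ~ "read-write-amplify(26.2)"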
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531882290, "job": 98, "event": "table_file_deletion", "file_number": 160}
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406531886897, "job": 98, "event": "table_file_deletion", "file_number": 158}
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.771308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.887041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.887052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.887056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.887060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:31 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:55:31.887064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:55:32 np0005539550 nova_compute[257631]: 2025-11-29 08:55:32.723 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:33.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:33.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:35.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:35.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:36 np0005539550 nova_compute[257631]: 2025-11-29 08:55:36.502 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:37.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:37.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:37 np0005539550 nova_compute[257631]: 2025-11-29 08:55:37.726 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:39.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:39.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:39 np0005539550 nova_compute[257631]: 2025-11-29 08:55:39.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:41.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:41.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:41 np0005539550 nova_compute[257631]: 2025-11-29 08:55:41.541 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:41 np0005539550 nova_compute[257631]: 2025-11-29 08:55:41.933 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:55:41 np0005539550 nova_compute[257631]: 2025-11-29 08:55:41.934 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 03:55:41 np0005539550 nova_compute[257631]: 2025-11-29 08:55:41.952 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
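The recurring "Running periodic task ComputeManager._*" lines throughout this window come from oslo.service's PeriodicTasks machinery. A minimal sketch of how such a task is declared; class and spacing here are illustrative, not Nova's actual code:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _run_pending_deletes(self, context):
            # run_periodic_tasks() invokes this on schedule; oslo_service
            # logs the "Running periodic task ..." DEBUG line before each call.
            pass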
Nov 29 03:55:42 np0005539550 nova_compute[257631]: 2025-11-29 08:55:42.759 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:43.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:43.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b8f1f785-12e1-4871-a75a-529721c74cfe does not exist
Nov 29 03:55:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9011b113-043e-4d6c-a2db-bf853238549a does not exist
Nov 29 03:55:44 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 600128b5-812f-40fc-804e-474f535fa7ce does not exist
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:44 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
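The mgr housekeeping calls audited above ("config generate-minimal-conf", "auth get", "osd tree ... destroyed") arrive at the monitor as structured mon_commands. The same commands can be issued from Python through librados; a sketch, assuming admin credentials are readable locally:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        for cmd in (
            {"prefix": "config generate-minimal-conf"},
            {"prefix": "osd tree", "states": ["destroyed"], "format": "json"},
        ):
            # mon_command takes the JSON-encoded command and an input buffer;
            # it returns (retcode, output bytes, error string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:80])
    finally:
        cluster.shutdown()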
Nov 29 03:55:44 np0005539550 systemd[1]: session-75.scope: Deactivated successfully.
Nov 29 03:55:44 np0005539550 systemd[1]: session-75.scope: Consumed 2min 52.003s CPU time, 967.4M memory peak, read 319.1M from disk, written 335.8M to disk.
Nov 29 03:55:44 np0005539550 systemd-logind[788]: Session 75 logged out. Waiting for processes to exit.
Nov 29 03:55:44 np0005539550 systemd-logind[788]: Removed session 75.
Nov 29 03:55:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:45 np0005539550 systemd-logind[788]: New session 76 of user zuul.
Nov 29 03:55:45 np0005539550 systemd[1]: Started Session 76 of User zuul.
Nov 29 03:55:45 np0005539550 podman[409735]: 2025-11-29 08:55:45.128428164 +0000 UTC m=+0.059067419 container create 1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wing, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 03:55:45 np0005539550 systemd[1]: Started libpod-conmon-1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f.scope.
Nov 29 03:55:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:45 np0005539550 podman[409735]: 2025-11-29 08:55:45.101176872 +0000 UTC m=+0.031816117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:55:45 np0005539550 podman[409735]: 2025-11-29 08:55:45.23052919 +0000 UTC m=+0.161168445 container init 1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:55:45 np0005539550 podman[409735]: 2025-11-29 08:55:45.242815287 +0000 UTC m=+0.173454512 container start 1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 03:55:45 np0005539550 podman[409735]: 2025-11-29 08:55:45.246430648 +0000 UTC m=+0.177069883 container attach 1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:55:45 np0005539550 boring_wing[409775]: 167 167
Nov 29 03:55:45 np0005539550 systemd[1]: libpod-1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f.scope: Deactivated successfully.
Nov 29 03:55:45 np0005539550 podman[409735]: 2025-11-29 08:55:45.253617608 +0000 UTC m=+0.184256863 container died 1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:55:45 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fe495965d3ed72b943747c564220e3cd0ad501cde08d20082ea84b8f22c9b18b-merged.mount: Deactivated successfully.
Nov 29 03:55:45 np0005539550 podman[409735]: 2025-11-29 08:55:45.304652625 +0000 UTC m=+0.235291880 container remove 1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_wing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:45 np0005539550 systemd[1]: libpod-conmon-1b41e4823e698fa6f844ff95dcef5dc0ce1106682b7e84e2e256cfb8ab11a51f.scope: Deactivated successfully.
Nov 29 03:55:45 np0005539550 systemd[1]: session-76.scope: Deactivated successfully.
Nov 29 03:55:45 np0005539550 systemd-logind[788]: Session 76 logged out. Waiting for processes to exit.
Nov 29 03:55:45 np0005539550 systemd-logind[788]: Removed session 76.
Nov 29 03:55:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:45.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:45 np0005539550 systemd-logind[788]: New session 77 of user zuul.
Nov 29 03:55:45 np0005539550 podman[409801]: 2025-11-29 08:55:45.503723758 +0000 UTC m=+0.052543476 container create 67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 29 03:55:45 np0005539550 systemd[1]: Started Session 77 of User zuul.
Nov 29 03:55:45 np0005539550 systemd[1]: Started libpod-conmon-67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a.scope.
Nov 29 03:55:45 np0005539550 podman[409801]: 2025-11-29 08:55:45.481258595 +0000 UTC m=+0.030078333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:55:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abd652753dad3d24c21f75f49212d71125d36c0eab0bbb8c9fc31040962cdec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abd652753dad3d24c21f75f49212d71125d36c0eab0bbb8c9fc31040962cdec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abd652753dad3d24c21f75f49212d71125d36c0eab0bbb8c9fc31040962cdec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abd652753dad3d24c21f75f49212d71125d36c0eab0bbb8c9fc31040962cdec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abd652753dad3d24c21f75f49212d71125d36c0eab0bbb8c9fc31040962cdec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:45 np0005539550 podman[409801]: 2025-11-29 08:55:45.597544096 +0000 UTC m=+0.146363814 container init 67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:55:45 np0005539550 podman[409801]: 2025-11-29 08:55:45.611236479 +0000 UTC m=+0.160056227 container start 67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 03:55:45 np0005539550 podman[409801]: 2025-11-29 08:55:45.616696025 +0000 UTC m=+0.165515763 container attach 67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 29 03:55:45 np0005539550 systemd[1]: session-77.scope: Deactivated successfully.
Nov 29 03:55:45 np0005539550 systemd-logind[788]: Session 77 logged out. Waiting for processes to exit.
Nov 29 03:55:45 np0005539550 systemd-logind[788]: Removed session 77.
Nov 29 03:55:46 np0005539550 funny_northcutt[409819]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:55:46 np0005539550 funny_northcutt[409819]: --> relative data size: 1.0
Nov 29 03:55:46 np0005539550 funny_northcutt[409819]: --> All data devices are unavailable
Nov 29 03:55:46 np0005539550 systemd[1]: libpod-67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a.scope: Deactivated successfully.
Nov 29 03:55:46 np0005539550 podman[409801]: 2025-11-29 08:55:46.517299277 +0000 UTC m=+1.066119005 container died 67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:55:46 np0005539550 nova_compute[257631]: 2025-11-29 08:55:46.593 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-6abd652753dad3d24c21f75f49212d71125d36c0eab0bbb8c9fc31040962cdec-merged.mount: Deactivated successfully.
Nov 29 03:55:46 np0005539550 podman[409801]: 2025-11-29 08:55:46.633278639 +0000 UTC m=+1.182098357 container remove 67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_northcutt, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:46 np0005539550 systemd[1]: libpod-conmon-67a9ea8f91708a6cb6ebe2798a23f71b62c100225915d967cfb50f0c0e8c538a.scope: Deactivated successfully.
Nov 29 03:55:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:47.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:47.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:47 np0005539550 podman[410014]: 2025-11-29 08:55:47.521125363 +0000 UTC m=+0.054341082 container create 522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:55:47 np0005539550 systemd[1]: Started libpod-conmon-522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343.scope.
Nov 29 03:55:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:55:47 np0005539550 podman[410014]: 2025-11-29 08:55:47.498676421 +0000 UTC m=+0.031892210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:47 np0005539550 podman[410014]: 2025-11-29 08:55:47.610564571 +0000 UTC m=+0.143780360 container init 522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:55:47 np0005539550 podman[410014]: 2025-11-29 08:55:47.621754971 +0000 UTC m=+0.154970720 container start 522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:55:47 np0005539550 podman[410014]: 2025-11-29 08:55:47.627491865 +0000 UTC m=+0.160707614 container attach 522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:55:47 np0005539550 boring_turing[410030]: 167 167
Nov 29 03:55:47 np0005539550 systemd[1]: libpod-522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343.scope: Deactivated successfully.
Nov 29 03:55:47 np0005539550 podman[410014]: 2025-11-29 08:55:47.629254499 +0000 UTC m=+0.162470218 container died 522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:55:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1612a42f0aa2df54307d5971e6a35de773d6730d97f4a08698cc2d1ce5e479f4-merged.mount: Deactivated successfully.
Nov 29 03:55:47 np0005539550 podman[410014]: 2025-11-29 08:55:47.670493131 +0000 UTC m=+0.203708880 container remove 522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:55:47 np0005539550 systemd[1]: libpod-conmon-522ce1552c7b72100fc8ca69670841d9b73a202b0d2a350a25850acc89e0d343.scope: Deactivated successfully.
Nov 29 03:55:47 np0005539550 nova_compute[257631]: 2025-11-29 08:55:47.795 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:47 np0005539550 podman[410054]: 2025-11-29 08:55:47.913138095 +0000 UTC m=+0.071063690 container create 7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:55:47 np0005539550 systemd[1]: Started libpod-conmon-7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6.scope.
Nov 29 03:55:47 np0005539550 podman[410054]: 2025-11-29 08:55:47.884845847 +0000 UTC m=+0.042771532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:55:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3644e7e731533f77d052e9958c7d5eb9ba3c2fc68dd920410ad5e219f0df8ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3644e7e731533f77d052e9958c7d5eb9ba3c2fc68dd920410ad5e219f0df8ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3644e7e731533f77d052e9958c7d5eb9ba3c2fc68dd920410ad5e219f0df8ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:47 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3644e7e731533f77d052e9958c7d5eb9ba3c2fc68dd920410ad5e219f0df8ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:48 np0005539550 podman[410054]: 2025-11-29 08:55:48.012995014 +0000 UTC m=+0.170920619 container init 7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:55:48 np0005539550 podman[410054]: 2025-11-29 08:55:48.019661901 +0000 UTC m=+0.177587496 container start 7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 29 03:55:48 np0005539550 podman[410054]: 2025-11-29 08:55:48.024178704 +0000 UTC m=+0.182104319 container attach 7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]: {
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:    "0": [
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:        {
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "devices": [
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "/dev/loop3"
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            ],
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "lv_name": "ceph_lv0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "lv_size": "7511998464",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "name": "ceph_lv0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "tags": {
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.cluster_name": "ceph",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.crush_device_class": "",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.encrypted": "0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.osd_id": "0",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.type": "block",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:                "ceph.vdo": "0"
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            },
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "type": "block",
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:            "vg_name": "ceph_vg0"
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:        }
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]:    ]
Nov 29 03:55:48 np0005539550 serene_aryabhata[410070]: }
Nov 29 03:55:48 np0005539550 systemd[1]: libpod-7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6.scope: Deactivated successfully.
Nov 29 03:55:48 np0005539550 podman[410054]: 2025-11-29 08:55:48.787929131 +0000 UTC m=+0.945854716 container died 7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:55:48 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c3644e7e731533f77d052e9958c7d5eb9ba3c2fc68dd920410ad5e219f0df8ed-merged.mount: Deactivated successfully.
Nov 29 03:55:48 np0005539550 podman[410054]: 2025-11-29 08:55:48.841923892 +0000 UTC m=+0.999849477 container remove 7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:55:48 np0005539550 systemd[1]: libpod-conmon-7cc51b8283d52e5bbba3e51586fc30a8d8bae9877f5595c4b00a5924a5cc15b6.scope: Deactivated successfully.
Nov 29 03:55:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:48 np0005539550 podman[410090]: 2025-11-29 08:55:48.953031723 +0000 UTC m=+0.069231373 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 03:55:49 np0005539550 podman[410131]: 2025-11-29 08:55:49.058050092 +0000 UTC m=+0.077848260 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:55:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:49.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:49.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:49 np0005539550 podman[410268]: 2025-11-29 08:55:49.598743016 +0000 UTC m=+0.052821653 container create e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:55:49 np0005539550 systemd[1]: Started libpod-conmon-e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330.scope.
Nov 29 03:55:49 np0005539550 podman[410268]: 2025-11-29 08:55:49.575938595 +0000 UTC m=+0.030017232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:55:49 np0005539550 podman[410268]: 2025-11-29 08:55:49.696596855 +0000 UTC m=+0.150675502 container init e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 29 03:55:49 np0005539550 podman[410268]: 2025-11-29 08:55:49.703944539 +0000 UTC m=+0.158023176 container start e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:55:49 np0005539550 podman[410268]: 2025-11-29 08:55:49.707829336 +0000 UTC m=+0.161907973 container attach e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:55:49 np0005539550 infallible_babbage[410284]: 167 167
Nov 29 03:55:49 np0005539550 systemd[1]: libpod-e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330.scope: Deactivated successfully.
Nov 29 03:55:49 np0005539550 conmon[410284]: conmon e19144aabde7d0e3dca6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330.scope/container/memory.events
Nov 29 03:55:49 np0005539550 podman[410268]: 2025-11-29 08:55:49.711531239 +0000 UTC m=+0.165609836 container died e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:55:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-efc27fdd714c01d17834e5060ec51697dded499bf406196a3f5579a678db9b37-merged.mount: Deactivated successfully.
Nov 29 03:55:49 np0005539550 podman[410268]: 2025-11-29 08:55:49.759632933 +0000 UTC m=+0.213711550 container remove e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_babbage, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:55:49 np0005539550 systemd[1]: libpod-conmon-e19144aabde7d0e3dca6c30d240e2a1416d66a69ba4e5dcc277743f514432330.scope: Deactivated successfully.
Nov 29 03:55:49 np0005539550 podman[410307]: 2025-11-29 08:55:49.919909683 +0000 UTC m=+0.039652053 container create 96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:55:49 np0005539550 systemd[1]: Started libpod-conmon-96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5.scope.
Nov 29 03:55:49 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:55:49 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a5041410101b74856baf478bbfa77e1fe84eb3096b46ab1b80373f709cd190/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:49 np0005539550 podman[410307]: 2025-11-29 08:55:49.902803035 +0000 UTC m=+0.022545435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:55:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a5041410101b74856baf478bbfa77e1fe84eb3096b46ab1b80373f709cd190/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a5041410101b74856baf478bbfa77e1fe84eb3096b46ab1b80373f709cd190/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:50 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71a5041410101b74856baf478bbfa77e1fe84eb3096b46ab1b80373f709cd190/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:55:50 np0005539550 podman[410307]: 2025-11-29 08:55:50.013947917 +0000 UTC m=+0.133690307 container init 96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 03:55:50 np0005539550 podman[410307]: 2025-11-29 08:55:50.022758278 +0000 UTC m=+0.142500648 container start 96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:55:50 np0005539550 podman[410307]: 2025-11-29 08:55:50.026022239 +0000 UTC m=+0.145764609 container attach 96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]: {
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]:        "osd_id": 0,
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]:        "type": "bluestore"
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]:    }
Nov 29 03:55:50 np0005539550 competent_heisenberg[410323]: }
Nov 29 03:55:50 np0005539550 systemd[1]: libpod-96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5.scope: Deactivated successfully.
Nov 29 03:55:50 np0005539550 podman[410307]: 2025-11-29 08:55:50.882393955 +0000 UTC m=+1.002136325 container died 96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 29 03:55:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:50 np0005539550 systemd[1]: var-lib-containers-storage-overlay-71a5041410101b74856baf478bbfa77e1fe84eb3096b46ab1b80373f709cd190-merged.mount: Deactivated successfully.
Nov 29 03:55:50 np0005539550 podman[410307]: 2025-11-29 08:55:50.93855921 +0000 UTC m=+1.058301590 container remove 96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:55:50 np0005539550 systemd[1]: libpod-conmon-96dc29ff1233db36aa340dd5e64277f17555ef1322d163c59aa665b6a72e8dc5.scope: Deactivated successfully.
Nov 29 03:55:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:55:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:55:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 382cd1ea-9ec9-4f30-8e37-4e669094bf03 does not exist
Nov 29 03:55:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6171a63b-91bb-4179-9c0e-b36ceaf8d12f does not exist
Nov 29 03:55:51 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 21217f41-bca0-4636-a132-09d3888e9a44 does not exist
Nov 29 03:55:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:51.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:51.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:51 np0005539550 nova_compute[257631]: 2025-11-29 08:55:51.636 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:52 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:55:52 np0005539550 nova_compute[257631]: 2025-11-29 08:55:52.798 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:55:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:55.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:55.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:56 np0005539550 nova_compute[257631]: 2025-11-29 08:55:56.638 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:55:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:57.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:57.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:57 np0005539550 nova_compute[257631]: 2025-11-29 08:55:57.842 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:55:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:55:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:55:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:55:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:55:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:55:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:55:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:55:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:55:59.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:55:59
Nov 29 03:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr']
Nov 29 03:55:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
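The balancer run above prepared 0 of at most 10 upmap changes because the PGs are already balanced, and it only acts while misplaced data stays under the configured ceiling ("max misplaced 0.050000"). A hypothetical sketch of that kind of gate; the threshold check below is illustrative, not Ceph's actual balancer code:

```python
# Hypothetical illustration of the gate implied by "max misplaced 0.050000":
# skip new upmap optimization while too much data is already misplaced.
MAX_MISPLACED = 0.05

def may_optimize(misplaced: int, total: int) -> bool:
    return total == 0 or misplaced / total <= MAX_MISPLACED

print(may_optimize(0, 10_000))    # True: nothing misplaced, plan may proceed
print(may_optimize(900, 10_000))  # False: 9% misplaced exceeds the 5% cap
```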
Nov 29 03:56:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:00 np0005539550 nova_compute[257631]: 2025-11-29 08:56:00.939 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:00 np0005539550 nova_compute[257631]: 2025-11-29 08:56:00.940 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:56:00 np0005539550 nova_compute[257631]: 2025-11-29 08:56:00.940 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:56:00 np0005539550 nova_compute[257631]: 2025-11-29 08:56:00.960 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:56:00 np0005539550 nova_compute[257631]: 2025-11-29 08:56:00.960 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
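These periodic-task lines come from oslo.service's periodic task runner, which fires each ComputeManager task once its spacing has elapsed. A toy loop illustrating the pattern; the spacings are invented values and this is not oslo.service's real scheduler:

```python
import time

# Toy version of the pattern behind the "Running periodic task
# ComputeManager._..." lines above; spacings here are illustrative.
TASKS = {
    "_heal_instance_info_cache": 60.0,  # seconds between runs (invented)
    "_poll_volume_usage": 60.0,
}
last_run = {name: float("-inf") for name in TASKS}

def run_periodic_tasks(now: float) -> None:
    for name, spacing in TASKS.items():
        if now - last_run[name] >= spacing:
            print(f"Running periodic task ComputeManager.{name}")
            last_run[name] = now

run_periodic_tasks(time.monotonic())
```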
Nov 29 03:56:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:01 np0005539550 podman[410467]: 2025-11-29 08:56:01.440097482 +0000 UTC m=+0.165599286 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 03:56:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:01.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:01 np0005539550 nova_compute[257631]: 2025-11-29 08:56:01.679 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.980667) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406561980754, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 522, "num_deletes": 255, "total_data_size": 555613, "memory_usage": 565504, "flush_reason": "Manual Compaction"}
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406561989099, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 550047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71787, "largest_seqno": 72308, "table_properties": {"data_size": 547097, "index_size": 921, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6826, "raw_average_key_size": 18, "raw_value_size": 541200, "raw_average_value_size": 1478, "num_data_blocks": 40, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406532, "oldest_key_time": 1764406532, "file_creation_time": 1764406561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 8481 microseconds, and 4755 cpu microseconds.
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.989154) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 550047 bytes OK
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.989180) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.991178) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.991203) EVENT_LOG_v1 {"time_micros": 1764406561991195, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.991225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 552640, prev total WAL file size 552640, number of live WAL files 2.
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.991980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373635' seq:72057594037927935, type:22 .. '6C6F676D0033303136' seq:0, type:0; will stop at (end)
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(537KB)], [161(11MB)]
Nov 29 03:56:01 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406561992038, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12588292, "oldest_snapshot_seqno": -1}
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 10817 keys, 12447167 bytes, temperature: kUnknown
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562097352, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 12447167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12379215, "index_size": 39807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27077, "raw_key_size": 287279, "raw_average_key_size": 26, "raw_value_size": 12191434, "raw_average_value_size": 1127, "num_data_blocks": 1506, "num_entries": 10817, "num_filter_entries": 10817, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.097737) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 12447167 bytes
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.099425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.3 rd, 118.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(45.5) write-amplify(22.6) OK, records in: 11339, records dropped: 522 output_compression: NoCompression
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.099446) EVENT_LOG_v1 {"time_micros": 1764406562099436, "job": 100, "event": "compaction_finished", "compaction_time_micros": 105508, "compaction_time_cpu_micros": 55212, "output_level": 6, "num_output_files": 1, "total_output_size": 12447167, "num_input_records": 11339, "num_output_records": 10817, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562099877, "job": 100, "event": "table_file_deletion", "file_number": 163}
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406562102532, "job": 100, "event": "table_file_deletion", "file_number": 161}
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:01.991807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.102685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.102694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.102698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.102702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:56:02 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:56:02.102705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
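The amplification figures in the JOB 100 summary follow directly from the byte counts logged above: write amplification is output bytes over the newly flushed L0 input, and read-write amplification additionally counts everything read. A worked check with the numbers from JOB 99/100:

```python
# Byte counts copied from the JOB 99/100 events above; the two ratios
# reproduce the "write-amplify(22.6)" and "read-write-amplify(45.5)"
# figures in the compaction summary.
l0_input = 550047        # flushed L0 table #163 (file_size)
total_input = 12588292   # compaction_started input_data_size (L0 + L6)
output = 12447167        # generated L6 table #164 (file_size)

print(f"write-amplify      {output / l0_input:.1f}")
print(f"read-write-amplify {(total_input + output) / l0_input:.1f}")
```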
Nov 29 03:56:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:02 np0005539550 nova_compute[257631]: 2025-11-29 08:56:02.900 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:03.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:03 np0005539550 nova_compute[257631]: 2025-11-29 08:56:03.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:04 np0005539550 nova_compute[257631]: 2025-11-29 08:56:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:04 np0005539550 nova_compute[257631]: 2025-11-29 08:56:04.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:56:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:05.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:05.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:05 np0005539550 nova_compute[257631]: 2025-11-29 08:56:05.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.948 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.949 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:56:06 np0005539550 nova_compute[257631]: 2025-11-29 08:56:06.950 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/400603087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.390 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
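Nova's resource audit shells out to `ceph df` to size the RBD-backed disk pool, as the "Running cmd" and "returned: 0" lines show. A sketch doing the same by hand, assuming a reachable cluster, the same client id and conf path, and the usual `stats` totals in `ceph df`'s JSON output (verify the field names on your release):

```python
import json
import subprocess

# Same command nova runs above; the "stats" field names are assumptions
# based on the usual ceph df JSON layout.
out = subprocess.check_output(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]
print(f"avail: {stats['total_avail_bytes'] / 1024**3:.2f} GiB "
      f"of {stats['total_bytes'] / 1024**3:.2f} GiB")
```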
Nov 29 03:56:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:07.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.599 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.600 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3998MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
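The hypervisor resource view embeds the PCI device inventory as a JSON list. A small sketch summarizing such a list by vendor:product pair, seeded with two entries copied from the line above:

```python
import json
from collections import Counter

# Two entries copied verbatim from the pci_devices list in the line above.
pci_devices = json.loads("""[
  {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1",
   "product_id": "7010", "vendor_id": "8086", "numa_node": null,
   "label": "label_8086_7010", "dev_type": "type-PCI"},
  {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0",
   "product_id": "1050", "vendor_id": "1af4", "numa_node": null,
   "label": "label_1af4_1050", "dev_type": "type-PCI"}
]""")

counts = Counter(f"{d['vendor_id']}:{d['product_id']}" for d in pci_devices)
for ident, n in counts.most_common():
    print(ident, n)
```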
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.601 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.601 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.681 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.682 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.702 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.733 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.734 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.753 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.806 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.834 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:56:07 np0005539550 nova_compute[257631]: 2025-11-29 08:56:07.931 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:56:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824679768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:56:08 np0005539550 nova_compute[257631]: 2025-11-29 08:56:08.341 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:56:08 np0005539550 nova_compute[257631]: 2025-11-29 08:56:08.347 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:56:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:56:08 np0005539550 nova_compute[257631]: 2025-11-29 08:56:08.512 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
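Placement turns each inventory record into schedulable capacity as (total - reserved) * allocation_ratio, which is why the unchanged inventory above still matters to the scheduler. A worked check with the dict from the line above (VCPU 32, MEMORY_MB 7168, DISK_GB 17.1):

```python
# Inventory copied from the line above; capacity uses placement's formula
# (total - reserved) * allocation_ratio.
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 20, 'reserved': 1, 'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g} schedulable units")
```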
Nov 29 03:56:08 np0005539550 nova_compute[257631]: 2025-11-29 08:56:08.516 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:56:08 np0005539550 nova_compute[257631]: 2025-11-29 08:56:08.516 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:56:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:56:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:56:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:56:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:56:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:09.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:11.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:11 np0005539550 nova_compute[257631]: 2025-11-29 08:56:11.682 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:12 np0005539550 nova_compute[257631]: 2025-11-29 08:56:12.977 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:13.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:13.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:14 np0005539550 nova_compute[257631]: 2025-11-29 08:56:14.511 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:15.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:15.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:16 np0005539550 nova_compute[257631]: 2025-11-29 08:56:16.729 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:17.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:17.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:17 np0005539550 nova_compute[257631]: 2025-11-29 08:56:17.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:17 np0005539550 nova_compute[257631]: 2025-11-29 08:56:17.986 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:56:18.986 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:56:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:56:18.987 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:56:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:56:18.987 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:56:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:19.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539550 podman[410597]: 2025-11-29 08:56:19.383282207 +0000 UTC m=+0.101525432 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:56:19 np0005539550 podman[410598]: 2025-11-29 08:56:19.393014961 +0000 UTC m=+0.111797279 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
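The `health_status=healthy` events above are emitted when podman runs the healthcheck configured for each container (`'test': '/openstack/healthcheck'` in the config_data). The same check can be triggered on demand with `podman healthcheck run`; a sketch, where exit code 0 means healthy:

```python
import subprocess

# Trigger the same healthcheck podman ran for the events above; exit code 0
# means the configured test ('/openstack/healthcheck') reported healthy.
result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
print("healthy" if result.returncode == 0 else "unhealthy")
```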
Nov 29 03:56:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:19.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:19 np0005539550 nova_compute[257631]: 2025-11-29 08:56:19.732 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
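Across all eleven pools above, the logged pg target is exactly usage_fraction × bias × 300; the factor 300 is consistent with mon_target_pg_per_osd = 100 over 3 OSDs, though that decomposition is an assumption rather than something the log states. The raw target is then quantized to a power-of-two pg_num (here always equal to the current value, so no resize is triggered). A sketch that reproduces the printed numbers:

    def pg_target(usage_fraction, bias, total_target_pgs=300):
        """Raw (unquantized) PG target, matching the autoscaler lines above."""
        return usage_fraction * bias * total_target_pgs

    # Pool 'volumes': 0.0021614147124511445 * 1.0 * 300
    print(pg_target(0.0021614147124511445, 1.0))   # ~0.6484244137353433
    # Pool 'cephfs.cephfs.meta': bias 4.0 quadruples the metadata pool's share
    print(pg_target(1.4540294062907128e-06, 4.0))  # ~0.0017448352875488555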
Nov 29 03:56:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:21.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:21.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:21 np0005539550 nova_compute[257631]: 2025-11-29 08:56:21.731 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:22 np0005539550 nova_compute[257631]: 2025-11-29 08:56:22.987 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:23.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:23.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:25.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:26 np0005539550 nova_compute[257631]: 2025-11-29 08:56:26.733 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:27.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:27 np0005539550 nova_compute[257631]: 2025-11-29 08:56:27.989 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:29.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:31 np0005539550 nova_compute[257631]: 2025-11-29 08:56:31.734 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:32 np0005539550 podman[410640]: 2025-11-29 08:56:32.421123923 +0000 UTC m=+0.145931333 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 03:56:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:32 np0005539550 nova_compute[257631]: 2025-11-29 08:56:32.990 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 03:56:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 03:56:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:33.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:56:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:35.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:36 np0005539550 nova_compute[257631]: 2025-11-29 08:56:36.737 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 29 03:56:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:37.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 29 03:56:37 np0005539550 nova_compute[257631]: 2025-11-29 08:56:37.993 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 0 B/s wr, 27 op/s
Nov 29 03:56:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:39.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 62 op/s
Nov 29 03:56:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:41.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:41.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:41 np0005539550 nova_compute[257631]: 2025-11-29 08:56:41.741 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 87 KiB/s rd, 0 B/s wr, 144 op/s
Nov 29 03:56:43 np0005539550 nova_compute[257631]: 2025-11-29 08:56:43.060 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:43.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 95 KiB/s rd, 0 B/s wr, 158 op/s
Nov 29 03:56:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:45.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:45.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:46 np0005539550 nova_compute[257631]: 2025-11-29 08:56:46.742 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 102 KiB/s rd, 0 B/s wr, 170 op/s
Nov 29 03:56:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:47.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:47.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:48 np0005539550 nova_compute[257631]: 2025-11-29 08:56:48.063 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 104 KiB/s rd, 0 B/s wr, 173 op/s
Nov 29 03:56:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:49.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:50 np0005539550 podman[410726]: 2025-11-29 08:56:50.320345478 +0000 UTC m=+0.050571267 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 03:56:50 np0005539550 podman[410725]: 2025-11-29 08:56:50.346941684 +0000 UTC m=+0.083547433 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 03:56:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 0 B/s wr, 147 op/s
Nov 29 03:56:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:51.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:51.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:51 np0005539550 nova_compute[257631]: 2025-11-29 08:56:51.772 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:52 np0005539550 podman[410934]: 2025-11-29 08:56:52.554495648 +0000 UTC m=+0.072470855 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:56:52 np0005539550 podman[410934]: 2025-11-29 08:56:52.66126262 +0000 UTC m=+0.179237817 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:56:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 67 KiB/s rd, 0 B/s wr, 112 op/s
Nov 29 03:56:53 np0005539550 nova_compute[257631]: 2025-11-29 08:56:53.065 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:56:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:56:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:53.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:53 np0005539550 podman[411088]: 2025-11-29 08:56:53.377632021 +0000 UTC m=+0.091006969 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:56:53 np0005539550 podman[411088]: 2025-11-29 08:56:53.395405476 +0000 UTC m=+0.108780404 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 03:56:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:56:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:53.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:56:53 np0005539550 podman[411153]: 2025-11-29 08:56:53.679409275 +0000 UTC m=+0.059637204 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, distribution-scope=public, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Nov 29 03:56:53 np0005539550 podman[411153]: 2025-11-29 08:56:53.693247371 +0000 UTC m=+0.073475270 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vcs-type=git, architecture=x86_64, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Nov 29 03:56:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:56:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:56:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:56:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 02288229-9da8-45a0-b96d-73dc4ed3eb3a does not exist
Nov 29 03:56:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e5825da2-e55f-476e-b0cc-66aa24e3cd55 does not exist
Nov 29 03:56:55 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e620d660-dd31-46bb-a40c-d4e0729ca9ae does not exist
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:56:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:55.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:56:55 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:56:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:55.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:55 np0005539550 podman[411457]: 2025-11-29 08:56:55.697153388 +0000 UTC m=+0.037708935 container create d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:56:55 np0005539550 systemd[1]: Started libpod-conmon-d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903.scope.
Nov 29 03:56:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:56:55 np0005539550 podman[411457]: 2025-11-29 08:56:55.765836067 +0000 UTC m=+0.106391614 container init d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:56:55 np0005539550 podman[411457]: 2025-11-29 08:56:55.772517474 +0000 UTC m=+0.113073021 container start d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 03:56:55 np0005539550 podman[411457]: 2025-11-29 08:56:55.775277613 +0000 UTC m=+0.115833180 container attach d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:56:55 np0005539550 laughing_hamilton[411473]: 167 167
Nov 29 03:56:55 np0005539550 podman[411457]: 2025-11-29 08:56:55.680961103 +0000 UTC m=+0.021516680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:55 np0005539550 systemd[1]: libpod-d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903.scope: Deactivated successfully.
Nov 29 03:56:55 np0005539550 podman[411457]: 2025-11-29 08:56:55.777230222 +0000 UTC m=+0.117785769 container died d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:56:55 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2773e5a7f46bf77d92552dcf65fb6ca7491ec26d458530e63e0c74fab582cf80-merged.mount: Deactivated successfully.
Nov 29 03:56:55 np0005539550 podman[411457]: 2025-11-29 08:56:55.811257694 +0000 UTC m=+0.151813241 container remove d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:56:55 np0005539550 systemd[1]: libpod-conmon-d734240ebfc6468f76f575f882886c594326824772f5cdd59217ccb5852ad903.scope: Deactivated successfully.
Nov 29 03:56:55 np0005539550 podman[411497]: 2025-11-29 08:56:55.960645683 +0000 UTC m=+0.042362411 container create 56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shaw, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 29 03:56:55 np0005539550 systemd[1]: Started libpod-conmon-56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde.scope.
Nov 29 03:56:56 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:56:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9db1c484717cb57f6056d866627def23191a0e3bcf249f6715d50b8cdcfd45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9db1c484717cb57f6056d866627def23191a0e3bcf249f6715d50b8cdcfd45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9db1c484717cb57f6056d866627def23191a0e3bcf249f6715d50b8cdcfd45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9db1c484717cb57f6056d866627def23191a0e3bcf249f6715d50b8cdcfd45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:56 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe9db1c484717cb57f6056d866627def23191a0e3bcf249f6715d50b8cdcfd45/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:56 np0005539550 podman[411497]: 2025-11-29 08:56:56.029891756 +0000 UTC m=+0.111608514 container init 56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shaw, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:56:56 np0005539550 podman[411497]: 2025-11-29 08:56:55.941542875 +0000 UTC m=+0.023259623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:56 np0005539550 podman[411497]: 2025-11-29 08:56:56.042955613 +0000 UTC m=+0.124672341 container start 56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shaw, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 29 03:56:56 np0005539550 podman[411497]: 2025-11-29 08:56:56.047395435 +0000 UTC m=+0.129112163 container attach 56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shaw, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:56:56 np0005539550 nova_compute[257631]: 2025-11-29 08:56:56.774 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:56:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:56:56 np0005539550 condescending_shaw[411512]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:56:56 np0005539550 condescending_shaw[411512]: --> relative data size: 1.0
Nov 29 03:56:56 np0005539550 condescending_shaw[411512]: --> All data devices are unavailable
Nov 29 03:56:56 np0005539550 systemd[1]: libpod-56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde.scope: Deactivated successfully.
Nov 29 03:56:56 np0005539550 podman[411497]: 2025-11-29 08:56:56.92057605 +0000 UTC m=+1.002292868 container died 56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shaw, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:56:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.3 KiB/s rd, 0 B/s wr, 15 op/s
Nov 29 03:56:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-fe9db1c484717cb57f6056d866627def23191a0e3bcf249f6715d50b8cdcfd45-merged.mount: Deactivated successfully.
Nov 29 03:56:56 np0005539550 podman[411497]: 2025-11-29 08:56:56.980379257 +0000 UTC m=+1.062095975 container remove 56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_shaw, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:56:56 np0005539550 systemd[1]: libpod-conmon-56047d7dd8940a9e75d6bec69878fee4f57fbf4b9b3ca6ee71e3b7d9a4fc9dde.scope: Deactivated successfully.
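The two anonymous containers above (laughing_hamilton, condescending_shaw) each live for about a second: create, init, start, attach, died, remove. Together with the "passed data devices ... All data devices are unavailable" output, this pattern is consistent with cephadm launching disposable ceph-volume probes from the ceph image during its periodic device scan; the exact command is not shown in the log, so that reading is an inference. A small helper to reconstruct such lifecycles from journal lines:

    import re
    from collections import defaultdict

    EVENT = re.compile(r'container (?P<event>\w+) (?P<cid>[0-9a-f]{64})')

    def lifecycles(lines):
        """Group podman container events by container ID, in log order."""
        timeline = defaultdict(list)
        for line in lines:
            m = EVENT.search(line)
            if m:
                timeline[m.group('cid')].append(m.group('event'))
        return dict(timeline)

    # For the span above this yields, e.g.:
    # {'d734240e...': ['create', 'init', 'start', 'attach', 'died', 'remove'],
    #  '56047d7d...': ['create', 'init', 'start', 'attach', 'died', 'remove']}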
Nov 29 03:56:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:57.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:56:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:56:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:57.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
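The paired HEAD / requests above repeat every two seconds from alternating sources (192.168.122.100 and .102), consistent with external health probes against radosgw. A minimal sketch for splitting the beast access-log line into fields; the pattern is derived only from the lines in this log, and other radosgw configurations may emit more fields:

    import re

    # Fields: client IP, user, timestamp, request line, status, bytes, latency.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:08:56:57.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST.search(line)
    print(m.group("ip"), m.group("status"), m.group("lat"))
    # -> 192.168.122.100 200 0.001000025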
Nov 29 03:56:57 np0005539550 podman[411680]: 2025-11-29 08:56:57.616638323 +0000 UTC m=+0.043981432 container create 64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jackson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:56:57 np0005539550 systemd[1]: Started libpod-conmon-64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d.scope.
Nov 29 03:56:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:56:57 np0005539550 podman[411680]: 2025-11-29 08:56:57.593896264 +0000 UTC m=+0.021239393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:57 np0005539550 podman[411680]: 2025-11-29 08:56:57.702598655 +0000 UTC m=+0.129941794 container init 64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jackson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 29 03:56:57 np0005539550 podman[411680]: 2025-11-29 08:56:57.714993115 +0000 UTC m=+0.142336264 container start 64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jackson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 03:56:57 np0005539550 podman[411680]: 2025-11-29 08:56:57.719324233 +0000 UTC m=+0.146667422 container attach 64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jackson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:56:57 np0005539550 beautiful_jackson[411738]: 167 167
Nov 29 03:56:57 np0005539550 systemd[1]: libpod-64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d.scope: Deactivated successfully.
Nov 29 03:56:57 np0005539550 podman[411680]: 2025-11-29 08:56:57.721966009 +0000 UTC m=+0.149309148 container died 64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:56:57 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8059effd8772dd2cb398d352bf1f0d9265d96b39910bc53bb1799c083473bff8-merged.mount: Deactivated successfully.
Nov 29 03:56:57 np0005539550 podman[411680]: 2025-11-29 08:56:57.760249697 +0000 UTC m=+0.187592806 container remove 64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:56:57 np0005539550 systemd[1]: libpod-conmon-64fdf5de3e1cbc2042ef50b8432303739578a433d7bbeb05a3013eab28645f0d.scope: Deactivated successfully.
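The beautiful_jackson container above lived for a few milliseconds and printed only `167 167`, which looks like a uid/gid probe (167 is the ceph user and group inside the image); cephadm spawns short-lived containers like this to learn file ownership before touching host paths. A hypothetical reconstruction, not the literal command from the log:

    import subprocess

    # Illustration only: print the owner uid/gid of /var/lib/ceph inside the
    # image. The exact probe cephadm ran is not recorded in this log.
    IMG = ("quay.io/ceph/ceph@sha256:"
           "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMG,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())  # expected "167 167" per the log above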
Nov 29 03:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:57 np0005539550 podman[411767]: 2025-11-29 08:56:57.932670872 +0000 UTC m=+0.045342036 container create abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:56:57 np0005539550 systemd[1]: Started libpod-conmon-abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e.scope.
Nov 29 03:56:58 np0005539550 podman[411767]: 2025-11-29 08:56:57.91021688 +0000 UTC m=+0.022888114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:58 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:56:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8126140eea560c864f9b4130b926ebac7eee0a61f6d73c93ff3252439ff9deed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8126140eea560c864f9b4130b926ebac7eee0a61f6d73c93ff3252439ff9deed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8126140eea560c864f9b4130b926ebac7eee0a61f6d73c93ff3252439ff9deed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:56:58 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8126140eea560c864f9b4130b926ebac7eee0a61f6d73c93ff3252439ff9deed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
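The four XFS notices above are informational: the on-disk timestamp format for these overlay mounts caps at 0x7fffffff seconds past the Unix epoch. Checking that cutoff:

    from datetime import datetime, timezone

    # 0x7fffffff is the 32-bit signed time_t limit quoted in the log.
    limit = 0x7fffffff                      # 2,147,483,647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00, i.e. "supports timestamps until 2038"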
Nov 29 03:56:58 np0005539550 podman[411767]: 2025-11-29 08:56:58.03884019 +0000 UTC m=+0.151511394 container init abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:56:58 np0005539550 podman[411767]: 2025-11-29 08:56:58.054081021 +0000 UTC m=+0.166752215 container start abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_borg, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:56:58 np0005539550 podman[411767]: 2025-11-29 08:56:58.058909902 +0000 UTC m=+0.171581056 container attach abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 03:56:58 np0005539550 nova_compute[257631]: 2025-11-29 08:56:58.068 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:56:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:56:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]: {
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:    "0": [
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:        {
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "devices": [
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "/dev/loop3"
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            ],
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "lv_name": "ceph_lv0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "lv_size": "7511998464",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "name": "ceph_lv0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "tags": {
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.cluster_name": "ceph",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.crush_device_class": "",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.encrypted": "0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.osd_id": "0",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.type": "block",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:                "ceph.vdo": "0"
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            },
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "type": "block",
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:            "vg_name": "ceph_vg0"
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:        }
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]:    ]
Nov 29 03:56:58 np0005539550 wizardly_borg[411783]: }
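The JSON document printed by wizardly_borg is a per-OSD logical-volume report; its shape matches `ceph-volume lvm list --format json` (a dict keyed by OSD id, each value a list of LV entries carrying ceph.* tags), though the exact command is not shown in the log. A sketch, assuming that shape, that folds the report into an osd_id -> block-device map:

    import json

    # Hypothetical helper over the report shape shown above.
    def osd_block_devices(report: str) -> dict:
        out = {}
        for osd_id, lvs in json.loads(report).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    out[osd_id] = {
                        "lv_path": lv["lv_path"],
                        "devices": lv["devices"],
                        "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    }
        return out
    # For the log above: {"0": {"lv_path": "/dev/ceph_vg0/ceph_lv0",
    #                           "devices": ["/dev/loop3"], ...}}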
Nov 29 03:56:58 np0005539550 systemd[1]: libpod-abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e.scope: Deactivated successfully.
Nov 29 03:56:58 np0005539550 podman[411767]: 2025-11-29 08:56:58.827848109 +0000 UTC m=+0.940519283 container died abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_borg, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:56:58 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8126140eea560c864f9b4130b926ebac7eee0a61f6d73c93ff3252439ff9deed-merged.mount: Deactivated successfully.
Nov 29 03:56:58 np0005539550 podman[411767]: 2025-11-29 08:56:58.892312962 +0000 UTC m=+1.004984146 container remove abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:56:58 np0005539550 systemd[1]: libpod-conmon-abcecf25dfcdb8373daf409944ab5ed546e7874eb23eb60dea87a39bb3bb989e.scope: Deactivated successfully.
Nov 29 03:56:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Nov 29 03:56:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:56:59.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:56:59
Nov 29 03:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control']
Nov 29 03:56:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
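The balancer block above shows the mgr's upmap optimizer walking the listed pools under a 5% max-misplaced ceiling and preparing 0 of an allowed 10 changes, meaning placement groups are already balanced. The same state can be read back with the stock `ceph balancer status` mgr command; a subprocess sketch, with the generic --format json flag assumed to apply here as it does for other mon/mgr commands:

    import json, subprocess

    def balancer_status() -> dict:
        # Returns fields such as the active flag and the current mode.
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)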
Nov 29 03:56:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:56:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:56:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:56:59.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:56:59 np0005539550 podman[411948]: 2025-11-29 08:56:59.580122878 +0000 UTC m=+0.045294175 container create 6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bouman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 03:56:59 np0005539550 systemd[1]: Started libpod-conmon-6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4.scope.
Nov 29 03:56:59 np0005539550 podman[411948]: 2025-11-29 08:56:59.562557809 +0000 UTC m=+0.027729126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:56:59 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:56:59 np0005539550 podman[411948]: 2025-11-29 08:56:59.758432181 +0000 UTC m=+0.223603518 container init 6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bouman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:56:59 np0005539550 podman[411948]: 2025-11-29 08:56:59.764422361 +0000 UTC m=+0.229593668 container start 6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bouman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 03:56:59 np0005539550 podman[411948]: 2025-11-29 08:56:59.767726164 +0000 UTC m=+0.232897491 container attach 6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Nov 29 03:56:59 np0005539550 gallant_bouman[411964]: 167 167
Nov 29 03:56:59 np0005539550 systemd[1]: libpod-6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4.scope: Deactivated successfully.
Nov 29 03:56:59 np0005539550 podman[411948]: 2025-11-29 08:56:59.769837747 +0000 UTC m=+0.235009064 container died 6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 03:56:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7f4ac700c78243eea67080d8514f612ff8f71f935d34dbd8ff87c8149631a956-merged.mount: Deactivated successfully.
Nov 29 03:56:59 np0005539550 podman[411948]: 2025-11-29 08:56:59.812667619 +0000 UTC m=+0.277838946 container remove 6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:56:59 np0005539550 systemd[1]: libpod-conmon-6a177a7c93a8c59823bb639416b230bd420863f4078eead495ceed87485871f4.scope: Deactivated successfully.
Nov 29 03:56:59 np0005539550 podman[411988]: 2025-11-29 08:56:59.981972297 +0000 UTC m=+0.037771997 container create 2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:57:00 np0005539550 systemd[1]: Started libpod-conmon-2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2.scope.
Nov 29 03:57:00 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:57:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba69b29505e3a6d551f0e2e59acd8d04776ac78c31f909f1c608a1f67f5ad8f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba69b29505e3a6d551f0e2e59acd8d04776ac78c31f909f1c608a1f67f5ad8f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba69b29505e3a6d551f0e2e59acd8d04776ac78c31f909f1c608a1f67f5ad8f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:00 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba69b29505e3a6d551f0e2e59acd8d04776ac78c31f909f1c608a1f67f5ad8f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:57:00 np0005539550 podman[411988]: 2025-11-29 08:56:59.965507014 +0000 UTC m=+0.021306724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:57:00 np0005539550 podman[411988]: 2025-11-29 08:57:00.065900387 +0000 UTC m=+0.121700097 container init 2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:57:00 np0005539550 podman[411988]: 2025-11-29 08:57:00.072363789 +0000 UTC m=+0.128163479 container start 2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:57:00 np0005539550 podman[411988]: 2025-11-29 08:57:00.075363704 +0000 UTC m=+0.131163414 container attach 2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:57:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]: {
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]:        "osd_id": 0,
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]:        "type": "bluestore"
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]:    }
Nov 29 03:57:00 np0005539550 distracted_tesla[412005]: }
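distracted_tesla lists the same OSD again, this time by bluestore metadata keyed on osd_uuid. A small consistency-check sketch over the two parsed reports above (the wizardly_borg LV report and this one); the function name is illustrative:

    # lvm_report / raw_report are the two JSON documents above, parsed.
    def consistent(lvm_report: dict, raw_report: dict) -> bool:
        tags = lvm_report["0"][0]["tags"]
        raw = raw_report[tags["ceph.osd_fsid"]]
        return (raw["osd_uuid"] == tags["ceph.osd_fsid"]
                and raw["ceph_fsid"] == tags["ceph.cluster_fsid"]
                and raw["osd_id"] == int(tags["ceph.osd_id"]))
    # Both reports agree here: osd 0, fsid 5dd67027-..., cluster b66774a7-...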
Nov 29 03:57:00 np0005539550 systemd[1]: libpod-2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2.scope: Deactivated successfully.
Nov 29 03:57:00 np0005539550 podman[411988]: 2025-11-29 08:57:00.994657534 +0000 UTC m=+1.050457244 container died 2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:57:01 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ba69b29505e3a6d551f0e2e59acd8d04776ac78c31f909f1c608a1f67f5ad8f8-merged.mount: Deactivated successfully.
Nov 29 03:57:01 np0005539550 podman[411988]: 2025-11-29 08:57:01.061424325 +0000 UTC m=+1.117224025 container remove 2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 03:57:01 np0005539550 systemd[1]: libpod-conmon-2039ef6242269ed07ad85371294e9edf6de2d61a8cec0d435761cec72cf9b9a2.scope: Deactivated successfully.
Nov 29 03:57:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:57:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:57:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:01.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:01.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:01 np0005539550 nova_compute[257631]: 2025-11-29 08:57:01.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ca3799c2-3f1a-4403-9c7f-2819f266ad4c does not exist
Nov 29 03:57:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 66fa3012-74aa-4963-8e58-5090d70b6797 does not exist
Nov 29 03:57:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6b6e1d8d-bf7c-40a4-9a42-94f428dde295 does not exist
Nov 29 03:57:01 np0005539550 nova_compute[257631]: 2025-11-29 08:57:01.948 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:57:02 np0005539550 nova_compute[257631]: 2025-11-29 08:57:02.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:02 np0005539550 nova_compute[257631]: 2025-11-29 08:57:02.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 03:57:02 np0005539550 nova_compute[257631]: 2025-11-29 08:57:02.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 03:57:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:02 np0005539550 nova_compute[257631]: 2025-11-29 08:57:02.935 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 03:57:03 np0005539550 nova_compute[257631]: 2025-11-29 08:57:03.160 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:03.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:03 np0005539550 podman[412092]: 2025-11-29 08:57:03.424659736 +0000 UTC m=+0.150216791 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
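The podman line above is a periodic container healthcheck event: per the embedded config_data, ovn_controller's test runs the mounted /openstack/healthcheck script, and the result is recorded as health_status=healthy with a zero failing streak. The same check can be triggered by hand; a sketch using podman's stock healthcheck subcommand:

    import subprocess

    # `podman healthcheck run <name>` executes the container's configured
    # test and exits 0 when healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (exit {rc})")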
Nov 29 03:57:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:03.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:04 np0005539550 nova_compute[257631]: 2025-11-29 08:57:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:04 np0005539550 nova_compute[257631]: 2025-11-29 08:57:04.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:04 np0005539550 nova_compute[257631]: 2025-11-29 08:57:04.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 03:57:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:05.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:05 np0005539550 nova_compute[257631]: 2025-11-29 08:57:05.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:06 np0005539550 nova_compute[257631]: 2025-11-29 08:57:06.778 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:06 np0005539550 nova_compute[257631]: 2025-11-29 08:57:06.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:57:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:06 np0005539550 nova_compute[257631]: 2025-11-29 08:57:06.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:06 np0005539550 nova_compute[257631]: 2025-11-29 08:57:06.953 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:06 np0005539550 nova_compute[257631]: 2025-11-29 08:57:06.953 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:57:06 np0005539550 nova_compute[257631]: 2025-11-29 08:57:06.953 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 03:57:06 np0005539550 nova_compute[257631]: 2025-11-29 08:57:06.954 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:07.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3296981510' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.421 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
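The exchange above is nova's resource audit shelling out to `ceph df` under the openstack client identity, with the mon's audit channel logging the dispatch and the command returning in about half a second. A sketch of the same probe that reads cluster-wide free space; the command line is verbatim from the log, while the key names under "stats" follow current Ceph releases and are an assumption here:

    import json, subprocess

    def ceph_free_gib(conf="/etc/ceph/ceph.conf", user="openstack") -> float:
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["stats"]["total_avail_bytes"] / 2**30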
Nov 29 03:57:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:07.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.667 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.668 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4024MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.668 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.669 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.734 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.735 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 03:57:07 np0005539550 nova_compute[257631]: 2025-11-29 08:57:07.750 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 03:57:08 np0005539550 nova_compute[257631]: 2025-11-29 08:57:08.163 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:57:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:57:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109670277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:57:08 np0005539550 nova_compute[257631]: 2025-11-29 08:57:08.224 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 03:57:08 np0005539550 nova_compute[257631]: 2025-11-29 08:57:08.234 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:57:08 np0005539550 nova_compute[257631]: 2025-11-29 08:57:08.256 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 03:57:08 np0005539550 nova_compute[257631]: 2025-11-29 08:57:08.257 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:57:08 np0005539550 nova_compute[257631]: 2025-11-29 08:57:08.258 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
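The audit completes with an unchanged inventory, so nothing is pushed to placement. For reference, placement sizes each resource class as (total - reserved) * allocation_ratio; applying that to the inventory dict logged above reproduces the node's schedulable capacity:

    # Worked arithmetic on the inventory reported for provider a73c606e-....
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1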
Nov 29 03:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:57:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:57:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:57:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:57:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:57:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:57:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:57:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:09.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:09.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
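The anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and 192.168.122.102 recur on a two-second cadence for the rest of this window, always returning 200 with an empty body; that pattern is consistent with load-balancer health probes rather than user traffic (an inference from the cadence, not something the log states). The probe is easy to reproduce; host and port below are assumptions (beast's default frontend port is 7480, and http.client speaks HTTP/1.1 rather than the probe's HTTP/1.0, with the same HEAD semantics):

    # Sketch: send the same anonymous HEAD / probe radosgw is logging.
    import http.client

    conn = http.client.HTTPConnection('192.168.122.100', 7480, timeout=2)
    conn.request('HEAD', '/')
    resp = conn.getresponse()
    print(resp.status)  # expect 200, matching the http_status=200 entries
    conn.close()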
Nov 29 03:57:10 np0005539550 nova_compute[257631]: 2025-11-29 08:57:10.259 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:57:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:11.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:11 np0005539550 nova_compute[257631]: 2025-11-29 08:57:11.874 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:13 np0005539550 nova_compute[257631]: 2025-11-29 08:57:13.201 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:13.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:13 np0005539550 nova_compute[257631]: 2025-11-29 08:57:13.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:57:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:15.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:16 np0005539550 nova_compute[257631]: 2025-11-29 08:57:16.907 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:17.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:17.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:18 np0005539550 nova_compute[257631]: 2025-11-29 08:57:18.242 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:18 np0005539550 nova_compute[257631]: 2025-11-29 08:57:18.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:57:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:57:18.986 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:57:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:57:18.987 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:57:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:57:18.988 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:57:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:19.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
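Each pg_autoscaler pair above logs a pool's share of raw space, its bias, and the resulting pg target before quantization. The logged targets are consistent with usage_fraction * bias * 300, where 300 would be the default mon_target_pg_per_osd of 100 times three OSDs (an inference from the arithmetic; neither constant appears in the log). A quick check against three of the pools:

    # Sketch: recompute the pg targets the autoscaler logged above.
    # The factor 300 is inferred (mon_target_pg_per_osd=100 * 3 OSDs).
    pools = {
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'volumes':            (0.0021614147124511445, 1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * 300)
    # Each result matches the logged "pg target" to floating-point rounding,
    # before quantization toward the pool's current pg_num (1, 32, 16 here).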
Nov 29 03:57:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:21.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:21 np0005539550 podman[412223]: 2025-11-29 08:57:21.345145303 +0000 UTC m=+0.077541662 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 03:57:21 np0005539550 podman[412222]: 2025-11-29 08:57:21.353507722 +0000 UTC m=+0.092015244 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
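The podman health_status entries above are podman executing each container's configured healthcheck (the 'test': '/openstack/healthcheck' script mounted read-only at /openstack); health_status=healthy with health_failing_streak=0 means the last run passed and no failures are accumulating. The same check can be forced by hand; a sketch via subprocess, using a container name taken from the log:

    # Sketch: run the same healthcheck podman is logging.
    # `podman healthcheck run <name>` exits 0 when the check passes.
    import subprocess

    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
        capture_output=True, text=True)
    print('healthy' if result.returncode == 0 else 'unhealthy')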
Nov 29 03:57:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:21.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:21 np0005539550 nova_compute[257631]: 2025-11-29 08:57:21.962 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:23 np0005539550 nova_compute[257631]: 2025-11-29 08:57:23.245 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:23.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:23.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:25.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:26 np0005539550 nova_compute[257631]: 2025-11-29 08:57:26.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:57:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:27 np0005539550 nova_compute[257631]: 2025-11-29 08:57:27.009 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:27.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:27.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:28 np0005539550 nova_compute[257631]: 2025-11-29 08:57:28.247 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:29.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:29.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:31.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:31.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:32 np0005539550 nova_compute[257631]: 2025-11-29 08:57:32.044 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:33 np0005539550 nova_compute[257631]: 2025-11-29 08:57:33.287 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:33.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:33.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:34 np0005539550 podman[412270]: 2025-11-29 08:57:34.397220986 +0000 UTC m=+0.126154720 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 03:57:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:35.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:37 np0005539550 nova_compute[257631]: 2025-11-29 08:57:37.048 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:37.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:37.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:38 np0005539550 nova_compute[257631]: 2025-11-29 08:57:38.290 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:39.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:39.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:41.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:41.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:42 np0005539550 nova_compute[257631]: 2025-11-29 08:57:42.092 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:43.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:43 np0005539550 nova_compute[257631]: 2025-11-29 08:57:43.338 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:43.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:45.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:45.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:47 np0005539550 nova_compute[257631]: 2025-11-29 08:57:47.141 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:47.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:48 np0005539550 nova_compute[257631]: 2025-11-29 08:57:48.340 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:49.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:49.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:51.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:51.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:52 np0005539550 nova_compute[257631]: 2025-11-29 08:57:52.175 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:52 np0005539550 podman[412357]: 2025-11-29 08:57:52.369349675 +0000 UTC m=+0.087770502 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 03:57:52 np0005539550 podman[412356]: 2025-11-29 08:57:52.377329566 +0000 UTC m=+0.105152570 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:57:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:53 np0005539550 nova_compute[257631]: 2025-11-29 08:57:53.343 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:53.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:53.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:55.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:55.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:57:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:57 np0005539550 nova_compute[257631]: 2025-11-29 08:57:57.214 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:57.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:57:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:57:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:57.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:57:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:57:58 np0005539550 nova_compute[257631]: 2025-11-29 08:57:58.355 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:57:58 np0005539550 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1950343944
Nov 29 03:57:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:57:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:57:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:57:59.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:57:59
Nov 29 03:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Nov 29 03:57:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
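The balancer pass above evaluates all eleven pools in upmap mode with a 5% misplaced ceiling and prepares 0 of a maximum 10 changes: with all 305 PGs active+clean there is nothing to move, which is the expected steady state. Its view can be queried directly; a sketch via the ceph CLI from Python, assuming an identity with sufficient mgr caps (the flags mirror the client.openstack identity used earlier in this log):

    # Sketch: ask the balancer module for the state it just logged.
    import subprocess

    print(subprocess.run(
        ['ceph', 'balancer', 'status',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True).stdout)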
Nov 29 03:57:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:57:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:57:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:57:59.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.650259) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680650385, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1241, "num_deletes": 251, "total_data_size": 2076005, "memory_usage": 2099952, "flush_reason": "Manual Compaction"}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680667539, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 2040821, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72309, "largest_seqno": 73549, "table_properties": {"data_size": 2034972, "index_size": 3179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12429, "raw_average_key_size": 19, "raw_value_size": 2023247, "raw_average_value_size": 3242, "num_data_blocks": 141, "num_entries": 624, "num_filter_entries": 624, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406562, "oldest_key_time": 1764406562, "file_creation_time": 1764406680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 17317 microseconds, and 8773 cpu microseconds.
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.667579) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 2040821 bytes OK
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.667597) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.668842) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.668858) EVENT_LOG_v1 {"time_micros": 1764406680668849, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.668887) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 2070595, prev total WAL file size 2070595, number of live WAL files 2.
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.669667) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1992KB)], [164(11MB)]
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680669753, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 14487988, "oldest_snapshot_seqno": -1}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 10924 keys, 12556212 bytes, temperature: kUnknown
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680759656, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 12556212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12487673, "index_size": 40138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27333, "raw_key_size": 290138, "raw_average_key_size": 26, "raw_value_size": 12297982, "raw_average_value_size": 1125, "num_data_blocks": 1515, "num_entries": 10924, "num_filter_entries": 10924, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406680, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.760233) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 12556212 bytes
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.761972) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.8 rd, 139.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.9 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(13.3) write-amplify(6.2) OK, records in: 11441, records dropped: 517 output_compression: NoCompression
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.762009) EVENT_LOG_v1 {"time_micros": 1764406680761991, "job": 102, "event": "compaction_finished", "compaction_time_micros": 90084, "compaction_time_cpu_micros": 55046, "output_level": 6, "num_output_files": 1, "total_output_size": 12556212, "num_input_records": 11441, "num_output_records": 10924, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680763010, "job": 102, "event": "table_file_deletion", "file_number": 166}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406680767149, "job": 102, "event": "table_file_deletion", "file_number": 164}
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.669573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.767307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.767329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.767333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.767336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-08:58:00.767340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 03:58:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:01.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:02 np0005539550 nova_compute[257631]: 2025-11-29 08:58:02.257 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:58:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:03.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1774df37-2fb5-4070-b865-48967948a488 does not exist
Nov 29 03:58:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5da9efab-ab68-4a00-afff-e7834497b31f does not exist
Nov 29 03:58:03 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 09acc3a5-fb45-4915-808b-1771db1f1aec does not exist
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:58:03 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:58:03 np0005539550 nova_compute[257631]: 2025-11-29 08:58:03.399 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:03 np0005539550 nova_compute[257631]: 2025-11-29 08:58:03.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:03 np0005539550 nova_compute[257631]: 2025-11-29 08:58:03.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:58:03 np0005539550 nova_compute[257631]: 2025-11-29 08:58:03.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:58:03 np0005539550 nova_compute[257631]: 2025-11-29 08:58:03.934 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:58:03 np0005539550 nova_compute[257631]: 2025-11-29 08:58:03.934 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:58:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:04 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:58:04 np0005539550 podman[412721]: 2025-11-29 08:58:04.108441229 +0000 UTC m=+0.052642447 container create 6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 03:58:04 np0005539550 systemd[1]: Started libpod-conmon-6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba.scope.
Nov 29 03:58:04 np0005539550 podman[412721]: 2025-11-29 08:58:04.081293925 +0000 UTC m=+0.025495203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:58:04 np0005539550 podman[412721]: 2025-11-29 08:58:04.221214691 +0000 UTC m=+0.165415959 container init 6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mirzakhani, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:58:04 np0005539550 podman[412721]: 2025-11-29 08:58:04.232159226 +0000 UTC m=+0.176360414 container start 6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 03:58:04 np0005539550 podman[412721]: 2025-11-29 08:58:04.235394938 +0000 UTC m=+0.179596216 container attach 6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:58:04 np0005539550 systemd[1]: libpod-6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba.scope: Deactivated successfully.
Nov 29 03:58:04 np0005539550 serene_mirzakhani[412738]: 167 167
Nov 29 03:58:04 np0005539550 conmon[412738]: conmon 6ea5e06ba24229517a57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba.scope/container/memory.events
Nov 29 03:58:04 np0005539550 podman[412721]: 2025-11-29 08:58:04.245115533 +0000 UTC m=+0.189316761 container died 6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mirzakhani, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 03:58:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a33f82cb22526a9605207416438a7bf7119627f9b3008f76269e6683957af90a-merged.mount: Deactivated successfully.
Nov 29 03:58:04 np0005539550 podman[412721]: 2025-11-29 08:58:04.294478046 +0000 UTC m=+0.238679264 container remove 6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:58:04 np0005539550 systemd[1]: libpod-conmon-6ea5e06ba24229517a57a2200f802c013928e29a4b00c2f537a542219bd453ba.scope: Deactivated successfully.
Nov 29 03:58:04 np0005539550 podman[412762]: 2025-11-29 08:58:04.546191058 +0000 UTC m=+0.071463861 container create dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:58:04 np0005539550 systemd[1]: Started libpod-conmon-dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da.scope.
Nov 29 03:58:04 np0005539550 podman[412762]: 2025-11-29 08:58:04.518817569 +0000 UTC m=+0.044090382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:58:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d46c51692138021b4c62cd2aadd329afecff3058b0c445cb4d07d6dc4866b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d46c51692138021b4c62cd2aadd329afecff3058b0c445cb4d07d6dc4866b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d46c51692138021b4c62cd2aadd329afecff3058b0c445cb4d07d6dc4866b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d46c51692138021b4c62cd2aadd329afecff3058b0c445cb4d07d6dc4866b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032d46c51692138021b4c62cd2aadd329afecff3058b0c445cb4d07d6dc4866b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:04 np0005539550 podman[412762]: 2025-11-29 08:58:04.654431876 +0000 UTC m=+0.179704689 container init dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 29 03:58:04 np0005539550 podman[412762]: 2025-11-29 08:58:04.668763467 +0000 UTC m=+0.194036260 container start dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:58:04 np0005539550 podman[412762]: 2025-11-29 08:58:04.673145267 +0000 UTC m=+0.198418080 container attach dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_swartz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:58:04 np0005539550 podman[412776]: 2025-11-29 08:58:04.711644277 +0000 UTC m=+0.123541154 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 03:58:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:05 np0005539550 dazzling_swartz[412785]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:58:05 np0005539550 dazzling_swartz[412785]: --> relative data size: 1.0
Nov 29 03:58:05 np0005539550 dazzling_swartz[412785]: --> All data devices are unavailable
Nov 29 03:58:05 np0005539550 systemd[1]: libpod-dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da.scope: Deactivated successfully.
Nov 29 03:58:05 np0005539550 podman[412762]: 2025-11-29 08:58:05.492372868 +0000 UTC m=+1.017645721 container died dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 29 03:58:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-032d46c51692138021b4c62cd2aadd329afecff3058b0c445cb4d07d6dc4866b-merged.mount: Deactivated successfully.
Nov 29 03:58:05 np0005539550 podman[412762]: 2025-11-29 08:58:05.561991122 +0000 UTC m=+1.087263925 container remove dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 03:58:05 np0005539550 systemd[1]: libpod-conmon-dd88dcec2ba3297222944d394832bb162863a13cf02fa98cb99c516d1d1473da.scope: Deactivated successfully.
Nov 29 03:58:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:05.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:05 np0005539550 nova_compute[257631]: 2025-11-29 08:58:05.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:05 np0005539550 nova_compute[257631]: 2025-11-29 08:58:05.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:05 np0005539550 nova_compute[257631]: 2025-11-29 08:58:05.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:05 np0005539550 nova_compute[257631]: 2025-11-29 08:58:05.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:58:06 np0005539550 podman[412975]: 2025-11-29 08:58:06.277110518 +0000 UTC m=+0.054280208 container create 040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 03:58:06 np0005539550 systemd[1]: Started libpod-conmon-040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263.scope.
Nov 29 03:58:06 np0005539550 podman[412975]: 2025-11-29 08:58:06.254668773 +0000 UTC m=+0.031838473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:58:06 np0005539550 podman[412975]: 2025-11-29 08:58:06.374241895 +0000 UTC m=+0.151411585 container init 040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 29 03:58:06 np0005539550 podman[412975]: 2025-11-29 08:58:06.3867437 +0000 UTC m=+0.163913360 container start 040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:58:06 np0005539550 podman[412975]: 2025-11-29 08:58:06.390344741 +0000 UTC m=+0.167514431 container attach 040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:58:06 np0005539550 nervous_zhukovsky[412992]: 167 167
Nov 29 03:58:06 np0005539550 systemd[1]: libpod-040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263.scope: Deactivated successfully.
Nov 29 03:58:06 np0005539550 podman[412975]: 2025-11-29 08:58:06.395171793 +0000 UTC m=+0.172341453 container died 040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 03:58:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-21ee647974f20d8c07c5d4aa65c25225d393ed2b00f72a741831cb716c8668d0-merged.mount: Deactivated successfully.
Nov 29 03:58:06 np0005539550 podman[412975]: 2025-11-29 08:58:06.443138871 +0000 UTC m=+0.220308561 container remove 040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 29 03:58:06 np0005539550 systemd[1]: libpod-conmon-040d8ce63f6d8cf73c653a2ce92df131a51bf3782c28136ba2a2a977515e6263.scope: Deactivated successfully.
Nov 29 03:58:06 np0005539550 podman[413018]: 2025-11-29 08:58:06.628322167 +0000 UTC m=+0.045152228 container create 820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:58:06 np0005539550 systemd[1]: Started libpod-conmon-820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894.scope.
Nov 29 03:58:06 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:58:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c1a7df9f95c82358b36c3eb8b7d15f633514713180782d7016b9acebc3e34a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c1a7df9f95c82358b36c3eb8b7d15f633514713180782d7016b9acebc3e34a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c1a7df9f95c82358b36c3eb8b7d15f633514713180782d7016b9acebc3e34a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:06 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9c1a7df9f95c82358b36c3eb8b7d15f633514713180782d7016b9acebc3e34a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:06 np0005539550 podman[413018]: 2025-11-29 08:58:06.61135364 +0000 UTC m=+0.028183711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:06 np0005539550 podman[413018]: 2025-11-29 08:58:06.714693823 +0000 UTC m=+0.131523884 container init 820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:58:06 np0005539550 podman[413018]: 2025-11-29 08:58:06.727444804 +0000 UTC m=+0.144274875 container start 820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:58:06 np0005539550 podman[413018]: 2025-11-29 08:58:06.7308586 +0000 UTC m=+0.147688671 container attach 820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:58:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:06 np0005539550 nova_compute[257631]: 2025-11-29 08:58:06.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:58:06 np0005539550 nova_compute[257631]: 2025-11-29 08:58:06.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:58:06 np0005539550 nova_compute[257631]: 2025-11-29 08:58:06.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:58:06 np0005539550 nova_compute[257631]: 2025-11-29 08:58:06.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:58:06 np0005539550 nova_compute[257631]: 2025-11-29 08:58:06.952 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:58:06 np0005539550 nova_compute[257631]: 2025-11-29 08:58:06.953 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:58:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.258 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:07.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/865732149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.440 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]: {
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:    "0": [
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:        {
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "devices": [
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "/dev/loop3"
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            ],
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "lv_name": "ceph_lv0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "lv_size": "7511998464",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "name": "ceph_lv0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "tags": {
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.cluster_name": "ceph",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.crush_device_class": "",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.encrypted": "0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.osd_id": "0",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.type": "block",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:                "ceph.vdo": "0"
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            },
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "type": "block",
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:            "vg_name": "ceph_vg0"
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:        }
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]:    ]
Nov 29 03:58:07 np0005539550 serene_cartwright[413035]: }
Nov 29 03:58:07 np0005539550 systemd[1]: libpod-820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894.scope: Deactivated successfully.
Nov 29 03:58:07 np0005539550 podman[413018]: 2025-11-29 08:58:07.484118129 +0000 UTC m=+0.900948210 container died 820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:58:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a9c1a7df9f95c82358b36c3eb8b7d15f633514713180782d7016b9acebc3e34a-merged.mount: Deactivated successfully.
Nov 29 03:58:07 np0005539550 podman[413018]: 2025-11-29 08:58:07.554149403 +0000 UTC m=+0.970979464 container remove 820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_cartwright, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:58:07 np0005539550 systemd[1]: libpod-conmon-820d30eab8ce1ad28018b2f40f823c33327f6a0ff5e64b978fabae4af5aeb894.scope: Deactivated successfully.
Nov 29 03:58:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.650 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:58:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.652 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3996MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.652 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.652 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.833 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.834 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:58:07 np0005539550 nova_compute[257631]: 2025-11-29 08:58:07.963 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:58:08 np0005539550 nova_compute[257631]: 2025-11-29 08:58:08.401 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:08 np0005539550 podman[413238]: 2025-11-29 08:58:08.408244952 +0000 UTC m=+0.057032188 container create 55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 03:58:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:58:08 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4240691496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:58:08 np0005539550 nova_compute[257631]: 2025-11-29 08:58:08.428 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
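The pair of lines above shows nova-compute shelling out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` to size the RBD-backed disk pool (spawned at 08:58:07.963, returned 0 after 0.465s). A minimal sketch of the same probe, assuming the `client.openstack` keyring and ceph.conf visible in the logged command are present on the host:

    import json
    import subprocess

    def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
        """Run the probe nova-compute logged above and return parsed JSON."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        stats = ceph_df()
        # The payload carries cluster totals plus per-pool usage; the RBD
        # driver derives free_disk from the pool backing ephemeral disks.
        print(stats["stats"]["total_avail_bytes"] / 2**30, "GiB available")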
Nov 29 03:58:08 np0005539550 nova_compute[257631]: 2025-11-29 08:58:08.435 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 03:58:08 np0005539550 systemd[1]: Started libpod-conmon-55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627.scope.
Nov 29 03:58:08 np0005539550 nova_compute[257631]: 2025-11-29 08:58:08.476 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
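The inventory dict in the line above fixes how much capacity placement will let the scheduler consume: for each resource class the schedulable limit is (total − reserved) × allocation_ratio. A worked check using exactly the numbers nova reported:

    # Schedulable capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        limit = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {limit:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1

So an 8-vCPU host advertises 32 schedulable vCPUs at the 4.0 overcommit ratio, while disk is undercommitted (0.9) on top of the 1 GB reservation.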
Nov 29 03:58:08 np0005539550 nova_compute[257631]: 2025-11-29 08:58:08.479 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 03:58:08 np0005539550 nova_compute[257631]: 2025-11-29 08:58:08.479 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
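The acquire/release pair bracketing this update (waited 0.000s, held 0.827s) is oslo.concurrency's named-semaphore logging. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the decorator form below mirrors how the resource tracker serializes on the "compute_resources" name, though the exact nova internals are not shown in the log:

    from oslo_concurrency import lockutils

    # All mutations of the tracker serialize on one named, process-local
    # semaphore; lockutils emits the acquire/wait/held lines seen above.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # recompute usage while no other green thread can interleave

    # Equivalent context-manager form:
    with lockutils.lock("compute_resources"):
        pass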
Nov 29 03:58:08 np0005539550 podman[413238]: 2025-11-29 08:58:08.388573516 +0000 UTC m=+0.037360852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:58:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:58:08 np0005539550 podman[413238]: 2025-11-29 08:58:08.515085344 +0000 UTC m=+0.163872610 container init 55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:58:08 np0005539550 podman[413238]: 2025-11-29 08:58:08.526721217 +0000 UTC m=+0.175508453 container start 55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:58:08 np0005539550 podman[413238]: 2025-11-29 08:58:08.530279877 +0000 UTC m=+0.179067133 container attach 55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 03:58:08 np0005539550 great_merkle[413256]: 167 167
Nov 29 03:58:08 np0005539550 systemd[1]: libpod-55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627.scope: Deactivated successfully.
Nov 29 03:58:08 np0005539550 podman[413238]: 2025-11-29 08:58:08.532117973 +0000 UTC m=+0.180905209 container died 55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:58:08 np0005539550 systemd[1]: var-lib-containers-storage-overlay-87ba195abb3a6f8872b3173821d0b54678317aae99e596fba85e8ca7c7145677-merged.mount: Deactivated successfully.
Nov 29 03:58:08 np0005539550 podman[413238]: 2025-11-29 08:58:08.576253005 +0000 UTC m=+0.225040241 container remove 55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_merkle, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:58:08 np0005539550 systemd[1]: libpod-conmon-55d0cfe4919789b25f10ef409ef57bb7a77f1733139564971310b8aecbd09627.scope: Deactivated successfully.
Nov 29 03:58:08 np0005539550 podman[413282]: 2025-11-29 08:58:08.784516352 +0000 UTC m=+0.045598470 container create 38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 03:58:08 np0005539550 systemd[1]: Started libpod-conmon-38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9.scope.
Nov 29 03:58:08 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:58:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9376bbd9bcbac438c39b8c60d5823f4edebd6eff208948cea35eebc54b13a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9376bbd9bcbac438c39b8c60d5823f4edebd6eff208948cea35eebc54b13a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9376bbd9bcbac438c39b8c60d5823f4edebd6eff208948cea35eebc54b13a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:08 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e9376bbd9bcbac438c39b8c60d5823f4edebd6eff208948cea35eebc54b13a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:58:08 np0005539550 podman[413282]: 2025-11-29 08:58:08.768277703 +0000 UTC m=+0.029359841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:58:08 np0005539550 podman[413282]: 2025-11-29 08:58:08.875124285 +0000 UTC m=+0.136206413 container init 38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 03:58:08 np0005539550 podman[413282]: 2025-11-29 08:58:08.881458975 +0000 UTC m=+0.142541103 container start 38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 03:58:08 np0005539550 podman[413282]: 2025-11-29 08:58:08.888600965 +0000 UTC m=+0.149683083 container attach 38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:58:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:58:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:58:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:58:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:58:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:58:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:09.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:09.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
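The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and 192.168.122.102 (and repeating throughout this section) look like load-balancer health probes: each completes with 200 and an empty body. A sketch of an equivalent probe; the target host and port are assumptions, since the log records only the client addresses, not the listening endpoint:

    import http.client

    # Hypothetical endpoint: the log does not state where radosgw listens.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=2)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # 200 while the gateway is healthy
    conn.close()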
Nov 29 03:58:09 np0005539550 practical_bartik[413300]: {
Nov 29 03:58:09 np0005539550 practical_bartik[413300]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:58:09 np0005539550 practical_bartik[413300]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:58:09 np0005539550 practical_bartik[413300]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:58:09 np0005539550 practical_bartik[413300]:        "osd_id": 0,
Nov 29 03:58:09 np0005539550 practical_bartik[413300]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:58:09 np0005539550 practical_bartik[413300]:        "type": "bluestore"
Nov 29 03:58:09 np0005539550 practical_bartik[413300]:    }
Nov 29 03:58:09 np0005539550 practical_bartik[413300]: }
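The JSON that the short-lived `practical_bartik` container printed above is a per-OSD device map keyed by OSD UUID; its shape matches the ceph-volume listing output that cephadm gathers during host scans (an inference, since the container's command line is not logged). A sketch parsing exactly that payload:

    import json

    raw = """
    {
       "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
           "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
           "device": "/dev/mapper/ceph_vg0-ceph_lv0",
           "osd_id": 0,
           "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
           "type": "bluestore"
       }
    }
    """

    for osd_uuid, meta in json.loads(raw).items():
        print(f"osd.{meta['osd_id']} ({meta['type']}) on {meta['device']}")
    # -> osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0

The mgr then persists this under the mgr/cephadm/host.compute-0.devices.0 config-key, which is the mon_command seen a few lines below.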
Nov 29 03:58:09 np0005539550 systemd[1]: libpod-38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9.scope: Deactivated successfully.
Nov 29 03:58:09 np0005539550 podman[413321]: 2025-11-29 08:58:09.861334792 +0000 UTC m=+0.045886778 container died 38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 03:58:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-43e9376bbd9bcbac438c39b8c60d5823f4edebd6eff208948cea35eebc54b13a-merged.mount: Deactivated successfully.
Nov 29 03:58:09 np0005539550 podman[413321]: 2025-11-29 08:58:09.918612365 +0000 UTC m=+0.103164341 container remove 38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 03:58:09 np0005539550 systemd[1]: libpod-conmon-38f787d280d857b8bddf700e51183515dadfbbf75c08d42ff1e668dc6dde5ed9.scope: Deactivated successfully.
Nov 29 03:58:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:58:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:58:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 922f86aa-f46f-4d97-9714-8788b713af1d does not exist
Nov 29 03:58:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f8e0b260-5177-4f67-ba80-3466f51b6899 does not exist
Nov 29 03:58:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9457aac1-90fb-4f5a-a061-b6af1010b306 does not exist
Nov 29 03:58:10 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:10 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:58:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:11.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:11 np0005539550 nova_compute[257631]: 2025-11-29 08:58:11.480 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:11.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:12 np0005539550 nova_compute[257631]: 2025-11-29 08:58:12.262 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:13.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:13 np0005539550 nova_compute[257631]: 2025-11-29 08:58:13.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:13.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:15.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:15 np0005539550 nova_compute[257631]: 2025-11-29 08:58:15.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:17 np0005539550 nova_compute[257631]: 2025-11-29 08:58:17.315 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:17.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:18 np0005539550 nova_compute[257631]: 2025-11-29 08:58:18.467 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:58:18.988 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 03:58:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:58:18.989 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 03:58:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:58:18.989 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 03:58:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:19 np0005539550 nova_compute[257631]: 2025-11-29 08:58:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
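The autoscaler's arithmetic is recoverable from the numbers it logs: pg target ≈ capacity ratio × bias × total PG budget, then quantized to a power of two (and left at the pool's current pg_num when the change is below threshold, which is why every pool here stays put). The budget is not logged directly, but target / (ratio × bias) ≈ 300 on every line, consistent with a default of 100 PGs per OSD on a 3-OSD cluster; treat the 300 below as inferred, not stated:

    # Reconstructing the pg_autoscaler targets from the logged values.
    PG_BUDGET = 300  # inferred: target / (ratio * bias) ~= 300 for each pool

    pools = {
        ".mgr":               (2.0538165363856318e-05, 1.0),
        "volumes":            (0.0021614147124511445, 1.0),
        "images":             (0.0019031427391587568, 1.0),
        "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),
    }

    for name, (ratio, bias) in pools.items():
        target = ratio * bias * PG_BUDGET
        print(f"{name}: pg target {target:.6g}")
    # .mgr 0.00616145, volumes 0.648424, images 0.570943,
    # cephfs.cephfs.meta 0.00174484 -- matching the lines above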
Nov 29 03:58:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:21.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:21.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:22 np0005539550 nova_compute[257631]: 2025-11-29 08:58:22.318 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:23 np0005539550 podman[413443]: 2025-11-29 08:58:23.361325079 +0000 UTC m=+0.080300504 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 03:58:23 np0005539550 podman[413444]: 2025-11-29 08:58:23.370387857 +0000 UTC m=+0.090536192 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
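The `config_data` label in these health_status lines is a Python-literal dict rather than JSON (single quotes, `True` capitalized), so `ast.literal_eval` recovers it where `json.loads` would fail. A sketch extracting the healthcheck command from an abridged copy of the multipathd label above:

    import ast

    # Abridged from the config_data label logged above; a Python literal,
    # not JSON, hence ast.literal_eval.
    config_data = ("{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
                   "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
                   "'test': '/openstack/healthcheck'}, 'net': 'host', 'privileged': True}")

    cfg = ast.literal_eval(config_data)
    print(cfg["healthcheck"]["test"])   # -> /openstack/healthcheck

That `/openstack/healthcheck` script, bind-mounted read-only into the container, is what podman runs to produce the health_status=healthy result recorded here.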
Nov 29 03:58:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:23.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:23 np0005539550 nova_compute[257631]: 2025-11-29 08:58:23.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:23.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:25.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:27 np0005539550 nova_compute[257631]: 2025-11-29 08:58:27.320 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:27.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:27.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:28 np0005539550 nova_compute[257631]: 2025-11-29 08:58:28.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:29.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:29.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:30 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:31.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:31.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:32 np0005539550 nova_compute[257631]: 2025-11-29 08:58:32.342 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:32 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:33.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:33 np0005539550 nova_compute[257631]: 2025-11-29 08:58:33.542 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:33.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:34 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:35 np0005539550 podman[413486]: 2025-11-29 08:58:35.370307628 +0000 UTC m=+0.099179009 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Nov 29 03:58:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:35.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:36 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:37 np0005539550 nova_compute[257631]: 2025-11-29 08:58:37.344 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:37.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:37.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:38 np0005539550 nova_compute[257631]: 2025-11-29 08:58:38.587 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:38 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:39.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:39.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:40 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:41.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:41.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:42 np0005539550 nova_compute[257631]: 2025-11-29 08:58:42.345 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:42 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:43.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:43 np0005539550 nova_compute[257631]: 2025-11-29 08:58:43.629 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:43.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:44 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:45.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000050s ======
Nov 29 03:58:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:45.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Nov 29 03:58:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:46 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:47 np0005539550 nova_compute[257631]: 2025-11-29 08:58:47.402 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:47.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:47.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:48 np0005539550 nova_compute[257631]: 2025-11-29 08:58:48.669 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 03:58:48 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:49.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:49.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:50 np0005539550 nova_compute[257631]: 2025-11-29 08:58:50.459 257641 DEBUG oslo_concurrency.processutils [None req-ff64d45f-ac0e-4f0e-bb3f-545ad70e0b2f 07d8fdc1f04d4769b5744eeac3a6f5f4 313f5427e3624aa189013c3cc05bee02 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:58:50 np0005539550 nova_compute[257631]: 2025-11-29 08:58:50.497 257641 DEBUG oslo_concurrency.processutils [None req-ff64d45f-ac0e-4f0e-bb3f-545ad70e0b2f 07d8fdc1f04d4769b5744eeac3a6f5f4 313f5427e3624aa189013c3cc05bee02 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:58:50 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:58:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:51.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:58:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:51.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:52 np0005539550 nova_compute[257631]: 2025-11-29 08:58:52.403 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:52 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:53.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:53 np0005539550 nova_compute[257631]: 2025-11-29 08:58:53.672 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:53.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:54 np0005539550 podman[413575]: 2025-11-29 08:58:54.352524959 +0000 UTC m=+0.081441813 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 03:58:54 np0005539550 podman[413576]: 2025-11-29 08:58:54.391002889 +0000 UTC m=+0.105102539 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 03:58:54 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:55.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:58:56.451 158978 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ce:68:48', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '2e:9d:b2:f8:66:7c'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 03:58:56 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:58:56.452 158978 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 03:58:56 np0005539550 nova_compute[257631]: 2025-11-29 08:58:56.451 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:58:56 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:57 np0005539550 nova_compute[257631]: 2025-11-29 08:58:57.405 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:57.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:57.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:58:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:58:58 np0005539550 nova_compute[257631]: 2025-11-29 08:58:58.721 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:58:58 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:58:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:58:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:58:59.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:58:59
Nov 29 03:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'default.rgw.meta', 'images', 'default.rgw.control']
Nov 29 03:58:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 03:58:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:58:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:58:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:58:59.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:00 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:01.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:01.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:02 np0005539550 nova_compute[257631]: 2025-11-29 08:59:02.455 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:02 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:03.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:03.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:03 np0005539550 nova_compute[257631]: 2025-11-29 08:59:03.764 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:04 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:59:04.454 158978 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=a63f2f14-fdc7-4ca7-8f8c-b6069e1c40e8, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 03:59:04 np0005539550 nova_compute[257631]: 2025-11-29 08:59:04.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:04 np0005539550 nova_compute[257631]: 2025-11-29 08:59:04.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 03:59:04 np0005539550 nova_compute[257631]: 2025-11-29 08:59:04.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 03:59:04 np0005539550 nova_compute[257631]: 2025-11-29 08:59:04.934 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 03:59:04 np0005539550 nova_compute[257631]: 2025-11-29 08:59:04.935 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:04 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:05.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:05.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:05 np0005539550 nova_compute[257631]: 2025-11-29 08:59:05.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:06 np0005539550 podman[413669]: 2025-11-29 08:59:06.424406476 +0000 UTC m=+0.158396422 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 03:59:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:06 np0005539550 nova_compute[257631]: 2025-11-29 08:59:06.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:06 np0005539550 nova_compute[257631]: 2025-11-29 08:59:06.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:06 np0005539550 nova_compute[257631]: 2025-11-29 08:59:06.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 03:59:06 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:07.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:07 np0005539550 nova_compute[257631]: 2025-11-29 08:59:07.458 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:07.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 03:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:59:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:59:08 np0005539550 nova_compute[257631]: 2025-11-29 08:59:08.767 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:08 np0005539550 nova_compute[257631]: 2025-11-29 08:59:08.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:08 np0005539550 nova_compute[257631]: 2025-11-29 08:59:08.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:59:08 np0005539550 nova_compute[257631]: 2025-11-29 08:59:08.959 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:59:08 np0005539550 nova_compute[257631]: 2025-11-29 08:59:08.959 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:59:08 np0005539550 nova_compute[257631]: 2025-11-29 08:59:08.959 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 03:59:08 np0005539550 nova_compute[257631]: 2025-11-29 08:59:08.960 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:59:08 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 03:59:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 03:59:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 03:59:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 03:59:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 03:59:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1483933663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.393 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:59:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:09.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.565 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.566 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4047MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.567 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.567 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.660 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.660 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 03:59:09 np0005539550 nova_compute[257631]: 2025-11-29 08:59:09.738 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 03:59:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:09.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 03:59:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592998917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 03:59:10 np0005539550 nova_compute[257631]: 2025-11-29 08:59:10.176 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 03:59:10 np0005539550 nova_compute[257631]: 2025-11-29 08:59:10.184 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 03:59:10 np0005539550 nova_compute[257631]: 2025-11-29 08:59:10.206 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 03:59:10 np0005539550 nova_compute[257631]: 2025-11-29 08:59:10.208 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 03:59:10 np0005539550 nova_compute[257631]: 2025-11-29 08:59:10.208 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:59:10 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:11 np0005539550 nova_compute[257631]: 2025-11-29 08:59:11.209 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:11.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:11.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6e479e5b-40d1-4b88-85dc-05f630121caf does not exist
Nov 29 03:59:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev dc39e57e-59cd-4d5d-8433-96d6de5616eb does not exist
Nov 29 03:59:12 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 4aad41e6-2c81-4f62-92f3-1f236fd63a40 does not exist
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 03:59:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 03:59:12 np0005539550 nova_compute[257631]: 2025-11-29 08:59:12.460 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:12 np0005539550 podman[414136]: 2025-11-29 08:59:12.787986314 +0000 UTC m=+0.035027254 container create f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:59:12 np0005539550 systemd[1]: Started libpod-conmon-f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa.scope.
Nov 29 03:59:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:59:12 np0005539550 podman[414136]: 2025-11-29 08:59:12.867883767 +0000 UTC m=+0.114924727 container init f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:12 np0005539550 podman[414136]: 2025-11-29 08:59:12.773394736 +0000 UTC m=+0.020435696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:12 np0005539550 podman[414136]: 2025-11-29 08:59:12.876541705 +0000 UTC m=+0.123582635 container start f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:59:12 np0005539550 podman[414136]: 2025-11-29 08:59:12.879853438 +0000 UTC m=+0.126894388 container attach f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shannon, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 03:59:12 np0005539550 wonderful_shannon[414153]: 167 167
Nov 29 03:59:12 np0005539550 systemd[1]: libpod-f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa.scope: Deactivated successfully.
Nov 29 03:59:12 np0005539550 podman[414136]: 2025-11-29 08:59:12.883003608 +0000 UTC m=+0.130044538 container died f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shannon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 03:59:12 np0005539550 systemd[1]: var-lib-containers-storage-overlay-33890032f783b380e013400e85fd912e040c80f7e1ef2c6ff7cbaa080b6d2351-merged.mount: Deactivated successfully.
Nov 29 03:59:12 np0005539550 podman[414136]: 2025-11-29 08:59:12.915408854 +0000 UTC m=+0.162449804 container remove f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_shannon, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 29 03:59:12 np0005539550 systemd[1]: libpod-conmon-f50411b422fa2006ed16224bcca6ca83a3dba84a6b00322dfcdaf98e4dc910fa.scope: Deactivated successfully.
Nov 29 03:59:12 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 03:59:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:13 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 03:59:13 np0005539550 podman[414175]: 2025-11-29 08:59:13.066446149 +0000 UTC m=+0.042623784 container create 7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:59:13 np0005539550 systemd[1]: Started libpod-conmon-7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4.scope.
Nov 29 03:59:13 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:59:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc167641fe0aae4bbacaf18adf64ec8ab03495870331dac872aaf4fc949dd44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc167641fe0aae4bbacaf18adf64ec8ab03495870331dac872aaf4fc949dd44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc167641fe0aae4bbacaf18adf64ec8ab03495870331dac872aaf4fc949dd44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc167641fe0aae4bbacaf18adf64ec8ab03495870331dac872aaf4fc949dd44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:13 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc167641fe0aae4bbacaf18adf64ec8ab03495870331dac872aaf4fc949dd44/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:13 np0005539550 podman[414175]: 2025-11-29 08:59:13.1363394 +0000 UTC m=+0.112517055 container init 7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 03:59:13 np0005539550 podman[414175]: 2025-11-29 08:59:13.051417571 +0000 UTC m=+0.027595226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:13 np0005539550 podman[414175]: 2025-11-29 08:59:13.145245935 +0000 UTC m=+0.121423570 container start 7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 29 03:59:13 np0005539550 podman[414175]: 2025-11-29 08:59:13.147960363 +0000 UTC m=+0.124137998 container attach 7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:59:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:13.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:13.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
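The radosgw "beast" lines are the frontend's access log: request pointer, client address, user ("anonymous" for these unauthenticated HEAD probes, which arrive on a 2-second cadence from two fixed addresses and so look like load-balancer health checks), timestamp, request line, HTTP status, body bytes, and latency. A minimal sketch of pulling those fields apart; the regex is written against the exact line shape shown here, not any official parser:

    import re

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:08:59:13.754 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000026s')
    m = re.search(r'beast: (\S+): (\S+) - (\S+) \[(.*?)\] "(.*?)" (\d+) (\d+).*latency=(\S+)',
                  line)
    ptr, client, user, ts, request, status, size, latency = m.groups()
    print(client, request, status, latency)
    # -> 192.168.122.102 HEAD / HTTP/1.0 200 0.001000026s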
Nov 29 03:59:13 np0005539550 nova_compute[257631]: 2025-11-29 08:59:13.793 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:13 np0005539550 busy_fermi[414192]: --> passed data devices: 0 physical, 1 LVM
Nov 29 03:59:13 np0005539550 busy_fermi[414192]: --> relative data size: 1.0
Nov 29 03:59:13 np0005539550 busy_fermi[414192]: --> All data devices are unavailable
Nov 29 03:59:13 np0005539550 systemd[1]: libpod-7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4.scope: Deactivated successfully.
Nov 29 03:59:13 np0005539550 podman[414175]: 2025-11-29 08:59:13.899646971 +0000 UTC m=+0.875824606 container died 7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:59:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0fc167641fe0aae4bbacaf18adf64ec8ab03495870331dac872aaf4fc949dd44-merged.mount: Deactivated successfully.
Nov 29 03:59:13 np0005539550 podman[414175]: 2025-11-29 08:59:13.947542588 +0000 UTC m=+0.923720223 container remove 7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 29 03:59:13 np0005539550 systemd[1]: libpod-conmon-7c480f0f176009be5bcf5ba69c9b9a7ac803b2752df37b249bf09fe56a1ceca4.scope: Deactivated successfully.
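The create/init/start/attach/died/remove burst around busy_fermi (and the identical cycles for loving_nightingale, zealous_gould, friendly_saha, and clever_feynman below) is cephadm probing the host's storage: each probe is a throwaway ceph-volume container whose stdout becomes the busy_fermi[414192] lines, here reporting that the one LVM data device is already consumed by an existing OSD, so there is nothing new to deploy. A minimal sketch of that one-shot pattern, assuming the exact ceph-volume subcommand (cephadm's real invocation adds many bind mounts and flags):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: --rm makes podman emit the create/init/start/attach/
    # died/remove event sequence seen in the journal, all within about a second.
    result = subprocess.run(
        ["podman", "run", "--rm", IMAGE,
         "ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=False)
    print(result.stdout)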
Nov 29 03:59:14 np0005539550 podman[414359]: 2025-11-29 08:59:14.548806196 +0000 UTC m=+0.035190527 container create af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:59:14 np0005539550 systemd[1]: Started libpod-conmon-af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36.scope.
Nov 29 03:59:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:59:14 np0005539550 podman[414359]: 2025-11-29 08:59:14.626080413 +0000 UTC m=+0.112464774 container init af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:59:14 np0005539550 podman[414359]: 2025-11-29 08:59:14.532914436 +0000 UTC m=+0.019298787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:14 np0005539550 podman[414359]: 2025-11-29 08:59:14.632574397 +0000 UTC m=+0.118958728 container start af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:14 np0005539550 podman[414359]: 2025-11-29 08:59:14.635806668 +0000 UTC m=+0.122191009 container attach af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 03:59:14 np0005539550 loving_nightingale[414375]: 167 167
Nov 29 03:59:14 np0005539550 systemd[1]: libpod-af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36.scope: Deactivated successfully.
Nov 29 03:59:14 np0005539550 podman[414359]: 2025-11-29 08:59:14.63787423 +0000 UTC m=+0.124258571 container died af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 03:59:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b994c60c86ba071300d3969050ab23b79623f967775e2cc8505cec28e7eb1ede-merged.mount: Deactivated successfully.
Nov 29 03:59:14 np0005539550 podman[414359]: 2025-11-29 08:59:14.672048761 +0000 UTC m=+0.158433092 container remove af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:59:14 np0005539550 systemd[1]: libpod-conmon-af0d1cd4e25ef0fc2c96738c5c813ebf1a5105c1a346719156558faf9b1ebf36.scope: Deactivated successfully.
Nov 29 03:59:14 np0005539550 podman[414400]: 2025-11-29 08:59:14.835822628 +0000 UTC m=+0.051993941 container create f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 03:59:14 np0005539550 systemd[1]: Started libpod-conmon-f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340.scope.
Nov 29 03:59:14 np0005539550 podman[414400]: 2025-11-29 08:59:14.809142575 +0000 UTC m=+0.025313988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:59:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1019158f23b15582bf938b3df8a7034a649ab9d77a1116e04a99f5971052555f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1019158f23b15582bf938b3df8a7034a649ab9d77a1116e04a99f5971052555f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1019158f23b15582bf938b3df8a7034a649ab9d77a1116e04a99f5971052555f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:14 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1019158f23b15582bf938b3df8a7034a649ab9d77a1116e04a99f5971052555f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:14 np0005539550 podman[414400]: 2025-11-29 08:59:14.922964933 +0000 UTC m=+0.139136256 container init f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 03:59:14 np0005539550 podman[414400]: 2025-11-29 08:59:14.932191446 +0000 UTC m=+0.148362769 container start f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:14 np0005539550 podman[414400]: 2025-11-29 08:59:14.93592052 +0000 UTC m=+0.152091843 container attach f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 03:59:14 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:15 np0005539550 zealous_gould[414418]: {
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:    "0": [
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:        {
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "devices": [
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "/dev/loop3"
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            ],
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "lv_name": "ceph_lv0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "lv_size": "7511998464",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "name": "ceph_lv0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "tags": {
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.cephx_lockbox_secret": "",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.cluster_name": "ceph",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.crush_device_class": "",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.encrypted": "0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.osd_id": "0",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.type": "block",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:                "ceph.vdo": "0"
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            },
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "type": "block",
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:            "vg_name": "ceph_vg0"
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:        }
Nov 29 03:59:15 np0005539550 zealous_gould[414418]:    ]
Nov 29 03:59:15 np0005539550 zealous_gould[414418]: }
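The JSON printed by zealous_gould has the shape of ceph-volume lvm list --format json output: a dict keyed by OSD id, each value a list of logical volumes carrying the ceph.* LV tags that bind the LV to its cluster and OSD. A sketch of extracting the useful fields, with the payload trimmed down from the output above:

    import json

    # Trimmed copy of the zealous_gould output above.
    raw_json = '''{
      "0": [{
        "lv_path": "/dev/ceph_vg0/ceph_lv0",
        "devices": ["/dev/loop3"],
        "tags": {
          "ceph.osd_id": "0",
          "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
          "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870"
        }
      }]
    }'''

    for osd_id, lvs in json.loads(raw_json).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"], lv["devices"])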
Nov 29 03:59:15 np0005539550 systemd[1]: libpod-f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340.scope: Deactivated successfully.
Nov 29 03:59:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:15.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:15 np0005539550 podman[414427]: 2025-11-29 08:59:15.765979643 +0000 UTC m=+0.030928550 container died f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Nov 29 03:59:15 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1019158f23b15582bf938b3df8a7034a649ab9d77a1116e04a99f5971052555f-merged.mount: Deactivated successfully.
Nov 29 03:59:15 np0005539550 podman[414427]: 2025-11-29 08:59:15.821276576 +0000 UTC m=+0.086225433 container remove f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gould, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:15 np0005539550 systemd[1]: libpod-conmon-f3ddd224ca1d9953fed02bd2fb831cb0b694af7b15357c2dc6e0e9a2ffc8a340.scope: Deactivated successfully.
Nov 29 03:59:15 np0005539550 nova_compute[257631]: 2025-11-29 08:59:15.913 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:16 np0005539550 podman[414581]: 2025-11-29 08:59:16.392841467 +0000 UTC m=+0.038422589 container create 7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_saha, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:59:16 np0005539550 systemd[1]: Started libpod-conmon-7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f.scope.
Nov 29 03:59:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:59:16 np0005539550 podman[414581]: 2025-11-29 08:59:16.375093989 +0000 UTC m=+0.020675161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:16 np0005539550 podman[414581]: 2025-11-29 08:59:16.480536386 +0000 UTC m=+0.126117508 container init 7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_saha, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 29 03:59:16 np0005539550 podman[414581]: 2025-11-29 08:59:16.490421575 +0000 UTC m=+0.136002737 container start 7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 03:59:16 np0005539550 friendly_saha[414597]: 167 167
Nov 29 03:59:16 np0005539550 systemd[1]: libpod-7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f.scope: Deactivated successfully.
Nov 29 03:59:16 np0005539550 podman[414581]: 2025-11-29 08:59:16.494433526 +0000 UTC m=+0.140014648 container attach 7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_saha, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 03:59:16 np0005539550 podman[414581]: 2025-11-29 08:59:16.494745544 +0000 UTC m=+0.140326666 container died 7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 03:59:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-af4874d18859e140ce3a3949593b4079e9bb6e790dccb8178c3023e161f7ce2a-merged.mount: Deactivated successfully.
Nov 29 03:59:16 np0005539550 podman[414581]: 2025-11-29 08:59:16.529247013 +0000 UTC m=+0.174828135 container remove 7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 03:59:16 np0005539550 systemd[1]: libpod-conmon-7edf6106f41cdf86ff8faa56215e0dbce5b1778f4166eaf86a3184ab89fd061f.scope: Deactivated successfully.
Nov 29 03:59:16 np0005539550 podman[414621]: 2025-11-29 08:59:16.707807302 +0000 UTC m=+0.045050516 container create 29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feynman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 29 03:59:16 np0005539550 systemd[1]: Started libpod-conmon-29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a.scope.
Nov 29 03:59:16 np0005539550 systemd[1]: Started libcrun container.
Nov 29 03:59:16 np0005539550 podman[414621]: 2025-11-29 08:59:16.685038168 +0000 UTC m=+0.022281402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 03:59:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e3ce2033f0fb911f630bb6bdf2e8fa04981df871678fbf6a19fbb650f04c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e3ce2033f0fb911f630bb6bdf2e8fa04981df871678fbf6a19fbb650f04c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e3ce2033f0fb911f630bb6bdf2e8fa04981df871678fbf6a19fbb650f04c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:16 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f3e3ce2033f0fb911f630bb6bdf2e8fa04981df871678fbf6a19fbb650f04c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 03:59:16 np0005539550 podman[414621]: 2025-11-29 08:59:16.807682198 +0000 UTC m=+0.144925452 container init 29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 03:59:16 np0005539550 podman[414621]: 2025-11-29 08:59:16.821807854 +0000 UTC m=+0.159051078 container start 29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feynman, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:59:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:16 np0005539550 podman[414621]: 2025-11-29 08:59:16.826148954 +0000 UTC m=+0.163392148 container attach 29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 03:59:16 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:17.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:17 np0005539550 nova_compute[257631]: 2025-11-29 08:59:17.462 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:17 np0005539550 clever_feynman[414637]: {
Nov 29 03:59:17 np0005539550 clever_feynman[414637]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 03:59:17 np0005539550 clever_feynman[414637]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 03:59:17 np0005539550 clever_feynman[414637]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 03:59:17 np0005539550 clever_feynman[414637]:        "osd_id": 0,
Nov 29 03:59:17 np0005539550 clever_feynman[414637]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 03:59:17 np0005539550 clever_feynman[414637]:        "type": "bluestore"
Nov 29 03:59:17 np0005539550 clever_feynman[414637]:    }
Nov 29 03:59:17 np0005539550 clever_feynman[414637]: }
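clever_feynman prints the complementary view, keyed by OSD uuid rather than id: the same osd_fsid as the ceph.osd_fsid LV tag above, the device-mapper alias (/dev/mapper/ceph_vg0-ceph_lv0 is /dev/ceph_vg0/ceph_lv0 under its dm name), and the store type. A quick consistency check across the two listings, with the data copied from the log:

    # Every OSD uuid in the bluestore listing should also appear as a
    # ceph.osd_fsid tag in the LVM listing above.
    lvm_fsids = {"5dd67027-4f06-4800-93bd-47ed1a74c5e6"}      # from zealous_gould
    raw_listing = {                                           # from clever_feynman
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "type": "bluestore",
        },
    }
    for osd_uuid, info in raw_listing.items():
        assert osd_uuid in lvm_fsids, f"unknown OSD {osd_uuid}"
        print(info["osd_id"], info["device"], info["type"])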
Nov 29 03:59:17 np0005539550 systemd[1]: libpod-29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a.scope: Deactivated successfully.
Nov 29 03:59:17 np0005539550 podman[414621]: 2025-11-29 08:59:17.681403551 +0000 UTC m=+1.018646785 container died 29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 03:59:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4f3e3ce2033f0fb911f630bb6bdf2e8fa04981df871678fbf6a19fbb650f04c9-merged.mount: Deactivated successfully.
Nov 29 03:59:17 np0005539550 podman[414621]: 2025-11-29 08:59:17.746397438 +0000 UTC m=+1.083640612 container remove 29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 03:59:17 np0005539550 systemd[1]: libpod-conmon-29e93976c8e9788c5c4d93b4b5f960873e949d5f8e32dd88f7f0afdf569e067a.scope: Deactivated successfully.
Nov 29 03:59:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:17.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 03:59:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 03:59:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ddc0d84a-d6ca-4755-bfa2-749b7e967138 does not exist
Nov 29 03:59:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 7d254235-89c7-405c-b7d8-6e1705547d6e does not exist
Nov 29 03:59:17 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c553b137-3b86-4c0f-9fa1-d1a60a5377ca does not exist
Nov 29 03:59:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:18 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 03:59:18 np0005539550 nova_compute[257631]: 2025-11-29 08:59:18.796 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263

Nov 29 03:59:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:59:18.989 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 03:59:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:59:18.990 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 03:59:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 08:59:18.990 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 03:59:18 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:19.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:19.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
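The pg_autoscaler block follows one arithmetic rule that can be verified directly from the logged numbers: pg_target = capacity_ratio x bias x K, with K = 300 for this cluster, and the result then quantized to an allowed PG count (1, 16, or 32 here). K = 300 is inferred from the numbers themselves, since it divides out exactly for every pool; it is plausibly mon_target_pg_per_osd (default 100) times the three ~7 GiB OSDs behind the 21 GiB of raw space, but the log does not state that. A check in Python:

    # Recompute three of the logged pg targets; K=300 is inferred, not logged.
    K = 300
    pools = [
        # (name, capacity_ratio, bias, pg target as logged)
        (".mgr",               2.0538165363856318e-05, 1.0, 0.006161449609156895),
        ("volumes",            0.0021614147124511445,  1.0, 0.6484244137353433),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 0.0017448352875488555),
    ]
    for name, ratio, bias, logged in pools:
        computed = ratio * bias * K
        assert abs(computed - logged) < 1e-12, (name, computed, logged)
        print(f"{name}: target {computed:.6g}")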
Nov 29 03:59:20 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:21.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:21.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:21 np0005539550 nova_compute[257631]: 2025-11-29 08:59:21.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:22 np0005539550 nova_compute[257631]: 2025-11-29 08:59:22.463 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:22 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:23.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:23.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:23 np0005539550 nova_compute[257631]: 2025-11-29 08:59:23.827 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:24 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:25 np0005539550 podman[414778]: 2025-11-29 08:59:25.362324807 +0000 UTC m=+0.086422258 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:59:25 np0005539550 podman[414779]: 2025-11-29 08:59:25.387246145 +0000 UTC m=+0.111241684 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
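The two health_status entries above are the edpm_ansible-managed healthcheck timers firing; the config_data= label they carry is a Python-literal dict (note the single quotes and bare True), so it round-trips through ast.literal_eval once sliced out of the label string. A sketch with a trimmed literal from the multipathd entry:

    import ast

    # Trimmed from the config_data label on the multipathd entry above.
    config_data = ast.literal_eval(
        "{'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
        "'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
        "'test': '/openstack/healthcheck'}, "
        "'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', "
        "'net': 'host', 'privileged': True, 'restart': 'always'}"
    )
    print(config_data["healthcheck"]["test"])   # -> /openstack/healthcheck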
Nov 29 03:59:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:25.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:25.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:26 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:27.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:27 np0005539550 nova_compute[257631]: 2025-11-29 08:59:27.465 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:27.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:28 np0005539550 nova_compute[257631]: 2025-11-29 08:59:28.830 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:28 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:29.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:29.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:31.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:31.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:31 np0005539550 nova_compute[257631]: 2025-11-29 08:59:31.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 03:59:32 np0005539550 nova_compute[257631]: 2025-11-29 08:59:32.467 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:33.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:33.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:33 np0005539550 nova_compute[257631]: 2025-11-29 08:59:33.859 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:35.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:35.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:37 np0005539550 podman[414824]: 2025-11-29 08:59:37.335590404 +0000 UTC m=+0.080992762 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
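
podman records one of these health_status events each time the container's configured healthcheck fires; for ovn_controller the check is the /openstack/healthcheck script mounted into the container (see the healthcheck entry in config_data above). The same probe can be triggered by hand; a sketch assuming the ovn_controller container on this host:

    import subprocess

    # 'podman healthcheck run' executes the container's configured check;
    # exit status 0 means healthy.
    rc = subprocess.run(['podman', 'healthcheck', 'run',
                         'ovn_controller']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')
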
Nov 29 03:59:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:37.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:37 np0005539550 nova_compute[257631]: 2025-11-29 08:59:37.470 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:37.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:38 np0005539550 nova_compute[257631]: 2025-11-29 08:59:38.861 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:39.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:39.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:41.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:41.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:42 np0005539550 nova_compute[257631]: 2025-11-29 08:59:42.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:43.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:43.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:43 np0005539550 nova_compute[257631]: 2025-11-29 08:59:43.923 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:45.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:45.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 03:59:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:47.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 03:59:47 np0005539550 nova_compute[257631]: 2025-11-29 08:59:47.474 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:48 np0005539550 nova_compute[257631]: 2025-11-29 08:59:48.926 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:49.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:49.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:51.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:51.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:52 np0005539550 nova_compute[257631]: 2025-11-29 08:59:52.519 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:53.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:53.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:53 np0005539550 nova_compute[257631]: 2025-11-29 08:59:53.929 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:55.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:55.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:56 np0005539550 podman[414910]: 2025-11-29 08:59:56.333606912 +0000 UTC m=+0.066294111 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 03:59:56 np0005539550 podman[414909]: 2025-11-29 08:59:56.346744773 +0000 UTC m=+0.082172011 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 03:59:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 03:59:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:57.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:57 np0005539550 nova_compute[257631]: 2025-11-29 08:59:57.522 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 03:59:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:57.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 03:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 03:59:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 03:59:58 np0005539550 nova_compute[257631]: 2025-11-29 08:59:58.932 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 03:59:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 03:59:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:08:59:59.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 03:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_08:59:59
Nov 29 03:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 03:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 03:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Nov 29 03:59:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
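
The balancer pass above ran in upmap mode, capped at 5% misplaced objects, walked the eleven pools listed, and prepared 0 of a possible 10 changes: placement-group distribution was already even enough that no pg-upmap-items entries were needed. The module's current state can be queried the same way; a sketch assuming an admin keyring on this host:

    import json, subprocess

    out = subprocess.check_output(
        ['ceph', 'balancer', 'status', '--format', 'json'])
    status = json.loads(out)
    print(status.get('mode'), status.get('active'))
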
Nov 29 03:59:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 03:59:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 03:59:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:08:59:59.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:00 np0005539550 ceph-mon[74435]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 29 04:00:00 np0005539550 ceph-mon[74435]: overall HEALTH_OK
Nov 29 04:00:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:01.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:01.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:02 np0005539550 nova_compute[257631]: 2025-11-29 09:00:02.524 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:03.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:03 np0005539550 nova_compute[257631]: 2025-11-29 09:00:03.935 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:06 np0005539550 nova_compute[257631]: 2025-11-29 09:00:06.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:06 np0005539550 nova_compute[257631]: 2025-11-29 09:00:06.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:00:06 np0005539550 nova_compute[257631]: 2025-11-29 09:00:06.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:00:06 np0005539550 nova_compute[257631]: 2025-11-29 09:00:06.946 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:00:06 np0005539550 nova_compute[257631]: 2025-11-29 09:00:06.946 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:06 np0005539550 nova_compute[257631]: 2025-11-29 09:00:06.947 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
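
These "Running periodic task" lines come from oslo.service's periodic task runner: ComputeManager methods decorated as periodic tasks are collected by a PeriodicTasks base class and dispatched when their spacing elapses. A minimal sketch of the pattern with a hypothetical manager (nova's real ComputeManager declares its tasks the same way):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # Hypothetical task: eligible to run at most once per 60 s
        # whenever run_periodic_tasks() is called.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            print('healing info cache')

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)
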
Nov 29 04:00:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:07 np0005539550 nova_compute[257631]: 2025-11-29 09:00:07.525 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:07 np0005539550 podman[415008]: 2025-11-29 09:00:07.568665867 +0000 UTC m=+0.089670009 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:00:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:07 np0005539550 nova_compute[257631]: 2025-11-29 09:00:07.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:07 np0005539550 nova_compute[257631]: 2025-11-29 09:00:07.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:00:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:00:08 np0005539550 nova_compute[257631]: 2025-11-29 09:00:08.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:08 np0005539550 nova_compute[257631]: 2025-11-29 09:00:08.938 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:00:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:00:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:00:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:00:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
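
On each pass the rbd_support mgr module's MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler reload their per-pool schedules (vms, volumes, backups, images); the empty start_after just marks an unpaginated scan. The schedules they would find can be listed from the CLI; a sketch assuming an admin keyring (and that the installed rbd supports the recursive -R flag on schedule ls):

    import subprocess

    for cmd in (['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '-R'],
                ['rbd', 'trash', 'purge', 'schedule', 'ls', '-R']):
        print('$', ' '.join(cmd))
        subprocess.run(cmd, check=False)
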
Nov 29 04:00:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:09.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:10 np0005539550 nova_compute[257631]: 2025-11-29 09:00:10.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:10 np0005539550 nova_compute[257631]: 2025-11-29 09:00:10.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:10 np0005539550 nova_compute[257631]: 2025-11-29 09:00:10.950 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:00:10 np0005539550 nova_compute[257631]: 2025-11-29 09:00:10.950 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:00:10 np0005539550 nova_compute[257631]: 2025-11-29 09:00:10.950 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:00:10 np0005539550 nova_compute[257631]: 2025-11-29 09:00:10.950 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:00:10 np0005539550 nova_compute[257631]: 2025-11-29 09:00:10.951 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:00:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:11 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367948610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.390 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
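
The resource tracker audits Ceph-backed storage by shelling out to the exact command logged above; each run also shows up as the client.openstack "df" dispatch in the mon audit log. A sketch reproducing the probe and reading the cluster totals from its JSON (assumes the same client.openstack keyring and ceph.conf paths are present):

    import json, subprocess

    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    stats = json.loads(subprocess.check_output(cmd))['stats']
    print(f"{stats['total_avail_bytes'] / 2**30:.1f} GiB free of "
          f"{stats['total_bytes'] / 2**30:.1f} GiB raw")
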
Nov 29 04:00:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:11.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.576 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.577 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4048MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.578 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.578 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.640 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.640 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:00:11 np0005539550 nova_compute[257631]: 2025-11-29 09:00:11.676 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:00:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:11.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:00:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554192278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:00:12 np0005539550 nova_compute[257631]: 2025-11-29 09:00:12.112 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:00:12 np0005539550 nova_compute[257631]: 2025-11-29 09:00:12.121 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:00:12 np0005539550 nova_compute[257631]: 2025-11-29 09:00:12.141 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:00:12 np0005539550 nova_compute[257631]: 2025-11-29 09:00:12.145 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:00:12 np0005539550 nova_compute[257631]: 2025-11-29 09:00:12.145 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
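
The inventory dict above is what the scheduler actually budgets against: placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio. Working that through for the logged values:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {cap:g} schedulable')
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 17.1
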
Nov 29 04:00:12 np0005539550 nova_compute[257631]: 2025-11-29 09:00:12.564 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:13.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:13.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:13 np0005539550 nova_compute[257631]: 2025-11-29 09:00:13.985 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:15.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:15.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:17 np0005539550 nova_compute[257631]: 2025-11-29 09:00:17.140 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:17.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:17 np0005539550 nova_compute[257631]: 2025-11-29 09:00:17.600 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:17.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:18 np0005539550 nova_compute[257631]: 2025-11-29 09:00:18.988 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:00:18.989 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:00:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:00:18.989 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:00:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:00:18.989 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:00:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a1cfbabf-f23d-43ea-93e0-ff3bca3a1bc9 does not exist
Nov 29 04:00:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f9828e44-3a51-4241-8de6-148728ae27e3 does not exist
Nov 29 04:00:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ef2806be-bfa7-4534-bb5f-b95bef71f646 does not exist
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:00:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:00:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:19.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:19.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:19 np0005539550 podman[415406]: 2025-11-29 09:00:19.915638123 +0000 UTC m=+0.037056735 container create 0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_solomon, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:00:19 np0005539550 systemd[1]: Started libpod-conmon-0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c.scope.
Nov 29 04:00:19 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:00:19 np0005539550 podman[415406]: 2025-11-29 09:00:19.899513087 +0000 UTC m=+0.020931729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:19 np0005539550 podman[415406]: 2025-11-29 09:00:19.99607778 +0000 UTC m=+0.117496392 container init 0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_solomon, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:00:20 np0005539550 podman[415406]: 2025-11-29 09:00:20.002115232 +0000 UTC m=+0.123533864 container start 0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:00:20 np0005539550 podman[415406]: 2025-11-29 09:00:20.005344423 +0000 UTC m=+0.126763065 container attach 0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_solomon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 29 04:00:20 np0005539550 musing_solomon[415422]: 167 167
Nov 29 04:00:20 np0005539550 systemd[1]: libpod-0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c.scope: Deactivated successfully.
Nov 29 04:00:20 np0005539550 podman[415406]: 2025-11-29 09:00:20.008667637 +0000 UTC m=+0.130086249 container died 0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:00:20 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0f1e717a68ab954261f6fa971190a2e43294521ca66ced29ea9b78d5e43f4c37-merged.mount: Deactivated successfully.
Nov 29 04:00:20 np0005539550 podman[415406]: 2025-11-29 09:00:20.047796453 +0000 UTC m=+0.169215065 container remove 0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_solomon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:00:20 np0005539550 systemd[1]: libpod-conmon-0fbe07f250cf19bf149d663f81cc125e1d182474281ad0458ca33eaeac8e1a1c.scope: Deactivated successfully.
Nov 29 04:00:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:00:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:00:20 np0005539550 podman[415446]: 2025-11-29 09:00:20.194573531 +0000 UTC m=+0.037930117 container create 62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:00:20 np0005539550 systemd[1]: Started libpod-conmon-62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e.scope.
Nov 29 04:00:20 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:00:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f05de90e9830e5135c2104ded3cd5832ab2841f4599de4ba6ea6d5cceaaf8a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f05de90e9830e5135c2104ded3cd5832ab2841f4599de4ba6ea6d5cceaaf8a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f05de90e9830e5135c2104ded3cd5832ab2841f4599de4ba6ea6d5cceaaf8a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f05de90e9830e5135c2104ded3cd5832ab2841f4599de4ba6ea6d5cceaaf8a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:20 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f05de90e9830e5135c2104ded3cd5832ab2841f4599de4ba6ea6d5cceaaf8a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:20 np0005539550 podman[415446]: 2025-11-29 09:00:20.260096012 +0000 UTC m=+0.103452648 container init 62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:00:20 np0005539550 podman[415446]: 2025-11-29 09:00:20.269786326 +0000 UTC m=+0.113142912 container start 62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:00:20 np0005539550 podman[415446]: 2025-11-29 09:00:20.273445498 +0000 UTC m=+0.116802104 container attach 62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 04:00:20 np0005539550 podman[415446]: 2025-11-29 09:00:20.180803394 +0000 UTC m=+0.024160000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:00:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Nov 29 04:00:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:21 np0005539550 compassionate_chaplygin[415462]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:00:21 np0005539550 compassionate_chaplygin[415462]: --> relative data size: 1.0
Nov 29 04:00:21 np0005539550 compassionate_chaplygin[415462]: --> All data devices are unavailable
Nov 29 04:00:21 np0005539550 systemd[1]: libpod-62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e.scope: Deactivated successfully.
Nov 29 04:00:21 np0005539550 podman[415446]: 2025-11-29 09:00:21.107052831 +0000 UTC m=+0.950409417 container died 62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 04:00:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8f05de90e9830e5135c2104ded3cd5832ab2841f4599de4ba6ea6d5cceaaf8a1-merged.mount: Deactivated successfully.
Nov 29 04:00:21 np0005539550 podman[415446]: 2025-11-29 09:00:21.154366483 +0000 UTC m=+0.997723069 container remove 62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:00:21 np0005539550 systemd[1]: libpod-conmon-62721a2c9cc1bc8693bde2f1ff6c45a35807d3bbcdbae77fb63451866ce9e72e.scope: Deactivated successfully.
Nov 29 04:00:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:21 np0005539550 podman[415632]: 2025-11-29 09:00:21.749034335 +0000 UTC m=+0.060492125 container create d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_taussig, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 04:00:21 np0005539550 systemd[1]: Started libpod-conmon-d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559.scope.
Nov 29 04:00:21 np0005539550 podman[415632]: 2025-11-29 09:00:21.715024788 +0000 UTC m=+0.026482638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:21 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:00:21 np0005539550 podman[415632]: 2025-11-29 09:00:21.838435248 +0000 UTC m=+0.149893028 container init d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:00:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:21 np0005539550 podman[415632]: 2025-11-29 09:00:21.845914916 +0000 UTC m=+0.157372676 container start d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_taussig, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 29 04:00:21 np0005539550 podman[415632]: 2025-11-29 09:00:21.849148728 +0000 UTC m=+0.160606508 container attach d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:00:21 np0005539550 nostalgic_taussig[415648]: 167 167
Nov 29 04:00:21 np0005539550 systemd[1]: libpod-d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559.scope: Deactivated successfully.
Nov 29 04:00:21 np0005539550 podman[415632]: 2025-11-29 09:00:21.851726273 +0000 UTC m=+0.163184053 container died d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 29 04:00:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:21 np0005539550 systemd[1]: var-lib-containers-storage-overlay-11552510a931899753e1e7e27e62f14b0a07ce54499a4442c7460df112af0151-merged.mount: Deactivated successfully.
Nov 29 04:00:21 np0005539550 podman[415632]: 2025-11-29 09:00:21.89055135 +0000 UTC m=+0.202009150 container remove d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_taussig, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 04:00:21 np0005539550 systemd[1]: libpod-conmon-d7b7f62925452dafe97142fb2dd3405e9d59dd1ac14f076d5c105202a0a4b559.scope: Deactivated successfully.
Nov 29 04:00:22 np0005539550 podman[415673]: 2025-11-29 09:00:22.049630288 +0000 UTC m=+0.043080587 container create a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:00:22 np0005539550 systemd[1]: Started libpod-conmon-a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4.scope.
Nov 29 04:00:22 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:00:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc752fd3d019afe243ec7b349c15d0f552ace6b195d13216886eba596f57b09/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc752fd3d019afe243ec7b349c15d0f552ace6b195d13216886eba596f57b09/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc752fd3d019afe243ec7b349c15d0f552ace6b195d13216886eba596f57b09/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:22 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fc752fd3d019afe243ec7b349c15d0f552ace6b195d13216886eba596f57b09/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:22 np0005539550 podman[415673]: 2025-11-29 09:00:22.031341347 +0000 UTC m=+0.024791656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:22 np0005539550 podman[415673]: 2025-11-29 09:00:22.136548208 +0000 UTC m=+0.129998517 container init a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:00:22 np0005539550 podman[415673]: 2025-11-29 09:00:22.142655581 +0000 UTC m=+0.136105880 container start a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:00:22 np0005539550 podman[415673]: 2025-11-29 09:00:22.146402966 +0000 UTC m=+0.139853265 container attach a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:00:22 np0005539550 nova_compute[257631]: 2025-11-29 09:00:22.641 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]: {
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:    "0": [
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:        {
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "devices": [
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "/dev/loop3"
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            ],
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "lv_name": "ceph_lv0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "lv_size": "7511998464",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "name": "ceph_lv0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "tags": {
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.cluster_name": "ceph",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.crush_device_class": "",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.encrypted": "0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.osd_id": "0",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.type": "block",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:                "ceph.vdo": "0"
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            },
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "type": "block",
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:            "vg_name": "ceph_vg0"
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:        }
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]:    ]
Nov 29 04:00:22 np0005539550 jolly_pasteur[415689]: }
Nov 29 04:00:22 np0005539550 systemd[1]: libpod-a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4.scope: Deactivated successfully.
Nov 29 04:00:22 np0005539550 podman[415673]: 2025-11-29 09:00:22.910330123 +0000 UTC m=+0.903780422 container died a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:00:22 np0005539550 nova_compute[257631]: 2025-11-29 09:00:22.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:23 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1fc752fd3d019afe243ec7b349c15d0f552ace6b195d13216886eba596f57b09-merged.mount: Deactivated successfully.
Nov 29 04:00:23 np0005539550 podman[415673]: 2025-11-29 09:00:23.079169387 +0000 UTC m=+1.072619696 container remove a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pasteur, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:00:23 np0005539550 systemd[1]: libpod-conmon-a50df8702675e75de8ff2e0dbc37860a2692432082fc2c4a8c4a6eb7a2e381d4.scope: Deactivated successfully.
Nov 29 04:00:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:23.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:23 np0005539550 podman[415855]: 2025-11-29 09:00:23.918053183 +0000 UTC m=+0.046482003 container create fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hypatia, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:00:23 np0005539550 systemd[1]: Started libpod-conmon-fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9.scope.
Nov 29 04:00:23 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:00:23 np0005539550 podman[415855]: 2025-11-29 09:00:23.894205512 +0000 UTC m=+0.022634412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:23 np0005539550 nova_compute[257631]: 2025-11-29 09:00:23.990 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:23 np0005539550 podman[415855]: 2025-11-29 09:00:23.995409442 +0000 UTC m=+0.123838302 container init fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 04:00:24 np0005539550 podman[415855]: 2025-11-29 09:00:24.006036519 +0000 UTC m=+0.134465339 container start fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hypatia, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 29 04:00:24 np0005539550 inspiring_hypatia[415872]: 167 167
Nov 29 04:00:24 np0005539550 systemd[1]: libpod-fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9.scope: Deactivated successfully.
Nov 29 04:00:24 np0005539550 podman[415855]: 2025-11-29 09:00:24.010656796 +0000 UTC m=+0.139085636 container attach fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:00:24 np0005539550 podman[415855]: 2025-11-29 09:00:24.011114717 +0000 UTC m=+0.139543537 container died fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hypatia, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-79ab0413ed1b33a067e1cdcfc4aa491671abfebdf9bf1baaf9a4d01cf7cf41cb-merged.mount: Deactivated successfully.
Nov 29 04:00:24 np0005539550 podman[415855]: 2025-11-29 09:00:24.055724981 +0000 UTC m=+0.184153811 container remove fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 04:00:24 np0005539550 systemd[1]: libpod-conmon-fadf9053d8e367bbdf5f6d39c2176d627698706440aaf53917a03b6a73d2c4c9.scope: Deactivated successfully.
Nov 29 04:00:24 np0005539550 podman[415895]: 2025-11-29 09:00:24.23189569 +0000 UTC m=+0.060862855 container create 2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keller, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:00:24 np0005539550 systemd[1]: Started libpod-conmon-2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942.scope.
Nov 29 04:00:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:00:24 np0005539550 podman[415895]: 2025-11-29 09:00:24.209022483 +0000 UTC m=+0.037989728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:00:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4321c6074c14340af959a5d96a3cf0dced61464de6033795d8272ca2c95a4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4321c6074c14340af959a5d96a3cf0dced61464de6033795d8272ca2c95a4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4321c6074c14340af959a5d96a3cf0dced61464de6033795d8272ca2c95a4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b4321c6074c14340af959a5d96a3cf0dced61464de6033795d8272ca2c95a4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:00:24 np0005539550 podman[415895]: 2025-11-29 09:00:24.312622364 +0000 UTC m=+0.141589529 container init 2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:00:24 np0005539550 podman[415895]: 2025-11-29 09:00:24.327457737 +0000 UTC m=+0.156424912 container start 2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 29 04:00:24 np0005539550 podman[415895]: 2025-11-29 09:00:24.331015377 +0000 UTC m=+0.159982542 container attach 2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:00:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:25 np0005539550 naughty_keller[415912]: {
Nov 29 04:00:25 np0005539550 naughty_keller[415912]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:00:25 np0005539550 naughty_keller[415912]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:00:25 np0005539550 naughty_keller[415912]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:00:25 np0005539550 naughty_keller[415912]:        "osd_id": 0,
Nov 29 04:00:25 np0005539550 naughty_keller[415912]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:00:25 np0005539550 naughty_keller[415912]:        "type": "bluestore"
Nov 29 04:00:25 np0005539550 naughty_keller[415912]:    }
Nov 29 04:00:25 np0005539550 naughty_keller[415912]: }
Nov 29 04:00:25 np0005539550 systemd[1]: libpod-2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942.scope: Deactivated successfully.
Nov 29 04:00:25 np0005539550 podman[415895]: 2025-11-29 09:00:25.189570678 +0000 UTC m=+1.018537843 container died 2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 04:00:25 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0b4321c6074c14340af959a5d96a3cf0dced61464de6033795d8272ca2c95a4e-merged.mount: Deactivated successfully.
Nov 29 04:00:25 np0005539550 podman[415895]: 2025-11-29 09:00:25.235948317 +0000 UTC m=+1.064915492 container remove 2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:00:25 np0005539550 systemd[1]: libpod-conmon-2814671fcdeebfb2244198b47eb9f72cb523e44b08635fda73f7598a262a9942.scope: Deactivated successfully.
Nov 29 04:00:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:00:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:25 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:00:25 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0bbb4378-cc2a-4ff0-9b6f-74d45626529c does not exist
Nov 29 04:00:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f4417d1-3045-4629-9029-40bc082ca422 does not exist
Nov 29 04:00:25 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev bf50b164-643f-4570-b947-872bcb9599ab does not exist
Nov 29 04:00:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:25.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:26 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:00:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:27 np0005539550 podman[415997]: 2025-11-29 09:00:27.334998391 +0000 UTC m=+0.066686721 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 04:00:27 np0005539550 podman[415996]: 2025-11-29 09:00:27.353555469 +0000 UTC m=+0.082542311 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 04:00:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:27.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:27 np0005539550 nova_compute[257631]: 2025-11-29 09:00:27.643 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:27.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:29 np0005539550 nova_compute[257631]: 2025-11-29 09:00:29.028 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:29.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:29.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:31.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:31.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:32 np0005539550 nova_compute[257631]: 2025-11-29 09:00:32.680 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:33.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:33.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:34 np0005539550 nova_compute[257631]: 2025-11-29 09:00:34.033 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:35.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:37 np0005539550 nova_compute[257631]: 2025-11-29 09:00:37.720 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:37.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:38 np0005539550 podman[416041]: 2025-11-29 09:00:38.42772066 +0000 UTC m=+0.148571874 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:00:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:39 np0005539550 nova_compute[257631]: 2025-11-29 09:00:39.067 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:39.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:41.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:42 np0005539550 nova_compute[257631]: 2025-11-29 09:00:42.758 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:43.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:43.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:43 np0005539550 nova_compute[257631]: 2025-11-29 09:00:43.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:43 np0005539550 nova_compute[257631]: 2025-11-29 09:00:43.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 04:00:44 np0005539550 nova_compute[257631]: 2025-11-29 09:00:44.072 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:45.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:45.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:47.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:47 np0005539550 nova_compute[257631]: 2025-11-29 09:00:47.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:47.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:48 np0005539550 nova_compute[257631]: 2025-11-29 09:00:48.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:49 np0005539550 nova_compute[257631]: 2025-11-29 09:00:49.113 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:49.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:51.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:52 np0005539550 nova_compute[257631]: 2025-11-29 09:00:52.804 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:53.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:53.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:54 np0005539550 nova_compute[257631]: 2025-11-29 09:00:54.116 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:55.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:00:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:55.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:55 np0005539550 nova_compute[257631]: 2025-11-29 09:00:55.933 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:00:55 np0005539550 nova_compute[257631]: 2025-11-29 09:00:55.934 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 04:00:56 np0005539550 nova_compute[257631]: 2025-11-29 09:00:56.031 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 04:00:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:00:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:57.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:57 np0005539550 nova_compute[257631]: 2025-11-29 09:00:57.804 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:00:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:57.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:00:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:00:58 np0005539550 podman[416126]: 2025-11-29 09:00:58.350836386 +0000 UTC m=+0.086384308 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd)
Nov 29 04:00:58 np0005539550 podman[416127]: 2025-11-29 09:00:58.37879744 +0000 UTC m=+0.102247987 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 04:00:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:00:59 np0005539550 nova_compute[257631]: 2025-11-29 09:00:59.120 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:00:59
Nov 29 04:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images']
Nov 29 04:00:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:00:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:00:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:00:59.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:00:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:00:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:00:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:00:59.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:01.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:01.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:02 np0005539550 nova_compute[257631]: 2025-11-29 09:01:02.807 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:03.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:04 np0005539550 nova_compute[257631]: 2025-11-29 09:01:04.122 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:05.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:07.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:07 np0005539550 nova_compute[257631]: 2025-11-29 09:01:07.852 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:07.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:01:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.018 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.019 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.019 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.050 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.050 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.051 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.051 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.052 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.052 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:01:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:09 np0005539550 nova_compute[257631]: 2025-11-29 09:01:09.124 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:01:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:01:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:01:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:01:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:01:09 np0005539550 podman[416229]: 2025-11-29 09:01:09.377801284 +0000 UTC m=+0.110180677 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 04:01:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:09.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:11.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:11 np0005539550 nova_compute[257631]: 2025-11-29 09:01:11.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:11.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:11 np0005539550 nova_compute[257631]: 2025-11-29 09:01:11.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:01:11 np0005539550 nova_compute[257631]: 2025-11-29 09:01:11.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:01:11 np0005539550 nova_compute[257631]: 2025-11-29 09:01:11.950 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:01:11 np0005539550 nova_compute[257631]: 2025-11-29 09:01:11.950 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:01:11 np0005539550 nova_compute[257631]: 2025-11-29 09:01:11.950 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:01:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:12 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3558995057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.393 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.638 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.639 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4052MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.639 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.639 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.730 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.730 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.748 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.769 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.770 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
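The inventory payload repeated in the two lines above is what drives scheduling capacity: for each resource class, placement exposes (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic in Python, plugging in the values logged for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 (the helper itself is illustrative, not nova code):

    # Placement capacity formula: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 20,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1

So this 8-vCPU node can be scheduled up to 32 vCPUs of instances, consistent with the 4.0 CPU allocation ratio shown above.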
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.786 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.813 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.834 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:01:12 np0005539550 nova_compute[257631]: 2025-11-29 09:01:12.867 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:01:13 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666014450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:01:13 np0005539550 nova_compute[257631]: 2025-11-29 09:01:13.265 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
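The 0.431 s subprocess above is the compute service sampling Ceph capacity for the disk inventory; a standalone equivalent of the same probe, assuming the standard `ceph df --format=json` output schema:

    import json
    import subprocess

    # Same command line as the one logged by oslo_concurrency.processutils.
    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)['stats']
    # total_avail_bytes / total_bytes are part of the standard ceph df JSON.
    print(stats['total_avail_bytes'], stats['total_bytes'])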
Nov 29 04:01:13 np0005539550 nova_compute[257631]: 2025-11-29 09:01:13.275 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:01:13 np0005539550 nova_compute[257631]: 2025-11-29 09:01:13.308 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:01:13 np0005539550 nova_compute[257631]: 2025-11-29 09:01:13.311 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:01:13 np0005539550 nova_compute[257631]: 2025-11-29 09:01:13.311 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
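The Acquiring/acquired/released triad bracketing this update (waited 0.000s, held 0.672s) is oslo.concurrency's named internal lock; a minimal sketch of the same idiom, with a stand-in function rather than the real ResourceTracker method:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Runs with the named lock held; lockutils emits the same
        # Acquiring/acquired/released DEBUG lines seen above.
        pass

    update_available_resource()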
Nov 29 04:01:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:13.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 04:01:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:13.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
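The anonymous HEAD / requests from 192.168.122.100 and 192.168.122.102 recur every two seconds throughout this capture, the signature of load-balancer health checks rather than user traffic. A minimal equivalent probe (host and port are placeholders; the log does not record the beast frontend's listening port):

    import http.client

    # Placeholder endpoint: substitute the RGW beast frontend host:port.
    conn = http.client.HTTPConnection('rgw.example.com', 8080, timeout=2)
    conn.request('HEAD', '/')
    status = conn.getresponse().status
    conn.close()
    assert status == 200  # matches the http_status=200 responses logged here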
Nov 29 04:01:14 np0005539550 nova_compute[257631]: 2025-11-29 09:01:14.183 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:14 np0005539550 nova_compute[257631]: 2025-11-29 09:01:14.313 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:15.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:15 np0005539550 nova_compute[257631]: 2025-11-29 09:01:15.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:15.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:17 np0005539550 nova_compute[257631]: 2025-11-29 09:01:17.854 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:17.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:01:18.990 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:01:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:01:18.991 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:01:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:01:18.991 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:01:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:19 np0005539550 nova_compute[257631]: 2025-11-29 09:01:19.186 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:19.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:01:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
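Each pg_autoscaler pool line above reports the same computation: the pool's fraction of the 22535995392-byte raw capacity, times its bias, times the cluster-wide PG budget (OSD count x mon_target_pg_per_osd), then quantized to a power of two, with pg_num left at its current value when the change is too small to act on. A worked check against the 'volumes' line, assuming 3 OSDs and the default mon_target_pg_per_osd of 100 (both inferred, not logged):

    # Assumptions: 3 OSDs, mon_target_pg_per_osd = 100 (the Ceph default).
    usage_fraction = 0.0021614147124511445  # from the 'volumes' line above
    bias = 1.0
    raw_target = usage_fraction * bias * 3 * 100
    print(raw_target)  # 0.64842... matches "pg target 0.6484244137353433"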
Nov 29 04:01:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:21.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:22 np0005539550 nova_compute[257631]: 2025-11-29 09:01:22.877 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:23 np0005539550 nova_compute[257631]: 2025-11-29 09:01:23.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:23.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:24 np0005539550 nova_compute[257631]: 2025-11-29 09:01:24.229 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:25.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:25.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:27.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:27 np0005539550 nova_compute[257631]: 2025-11-29 09:01:27.919 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:27.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Nov 29 04:01:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Nov 29 04:01:28 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:28 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:01:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ae36863b-5603-4019-a797-5603b43bfc1d does not exist
Nov 29 04:01:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 82f0d14b-b448-476b-a4d8-be323bd8ff19 does not exist
Nov 29 04:01:29 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ac5abcf3-29d3-4903-9871-2ab5d84e70fe does not exist
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
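Each handle_command/dispatch pair above is a JSON mon command for which the ceph CLI has a direct spelling; equivalents for the commands the cephadm mgr dispatches here, sketched with subprocess (output handling is illustrative):

    import json
    import subprocess

    subprocess.run(['ceph', 'config', 'generate-minimal-conf'], check=True)
    subprocess.run(['ceph', 'auth', 'get', 'client.bootstrap-osd'], check=True)

    # {"prefix": "osd tree", "states": ["destroyed"], "format": "json"}
    out = subprocess.run(
        ['ceph', 'osd', 'tree', 'destroyed', '--format', 'json'],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out)['nodes'])  # 'nodes' carries the tree in JSON form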
Nov 29 04:01:29 np0005539550 nova_compute[257631]: 2025-11-29 09:01:29.232 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:29 np0005539550 podman[416517]: 2025-11-29 09:01:29.286260359 +0000 UTC m=+0.073043701 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 04:01:29 np0005539550 podman[416518]: 2025-11-29 09:01:29.329750885 +0000 UTC m=+0.109113280 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
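The two podman lines above are periodic container health_status events (both healthy, failing streak 0) for multipathd and ovn_metadata_agent. When auditing a capture like this one, the per-container status can be pulled out with a small filter; the regex below is illustrative and tied to this journal's event format:

    import re

    # Extract (container name, health status) from podman health_status events.
    pat = re.compile(r'container health_status .*?name=([^,]+),'
                     r'.*?health_status=([^,)]+)')

    def health_events(lines):
        for line in lines:
            m = pat.search(line)
            if m:
                yield m.group(1), m.group(2)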
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:29 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:01:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:29 np0005539550 podman[416672]: 2025-11-29 09:01:29.780275216 +0000 UTC m=+0.053874379 container create 1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:29 np0005539550 systemd[1]: Started libpod-conmon-1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f.scope.
Nov 29 04:01:29 np0005539550 podman[416672]: 2025-11-29 09:01:29.751018759 +0000 UTC m=+0.024618002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:29 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:01:29 np0005539550 podman[416672]: 2025-11-29 09:01:29.886634444 +0000 UTC m=+0.160233707 container init 1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:29 np0005539550 podman[416672]: 2025-11-29 09:01:29.899115259 +0000 UTC m=+0.172714452 container start 1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:01:29 np0005539550 podman[416672]: 2025-11-29 09:01:29.903432588 +0000 UTC m=+0.177031761 container attach 1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:01:29 np0005539550 adoring_zhukovsky[416688]: 167 167
Nov 29 04:01:29 np0005539550 systemd[1]: libpod-1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f.scope: Deactivated successfully.
Nov 29 04:01:29 np0005539550 podman[416672]: 2025-11-29 09:01:29.907263194 +0000 UTC m=+0.180862367 container died 1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-402c8f3484a6c68f2e5fce8a9852eebe0931fdf42b10cdfc41b4c50b7675bf11-merged.mount: Deactivated successfully.
Nov 29 04:01:29 np0005539550 podman[416672]: 2025-11-29 09:01:29.965522312 +0000 UTC m=+0.239121515 container remove 1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:01:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:29.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:29 np0005539550 systemd[1]: libpod-conmon-1c25d56d763e4d4d45b68063792bc9d299e50d5e2eee4e7e61e96db5455ab90f.scope: Deactivated successfully.
Nov 29 04:01:30 np0005539550 podman[416712]: 2025-11-29 09:01:30.229137574 +0000 UTC m=+0.104190536 container create eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:01:30 np0005539550 podman[416712]: 2025-11-29 09:01:30.156297559 +0000 UTC m=+0.031350531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:30 np0005539550 systemd[1]: Started libpod-conmon-eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb.scope.
Nov 29 04:01:30 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:01:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9234fcb0cf7157e9a589a422520942879a256703a1ef6b3a766dc334dd7355f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9234fcb0cf7157e9a589a422520942879a256703a1ef6b3a766dc334dd7355f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9234fcb0cf7157e9a589a422520942879a256703a1ef6b3a766dc334dd7355f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9234fcb0cf7157e9a589a422520942879a256703a1ef6b3a766dc334dd7355f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:30 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9234fcb0cf7157e9a589a422520942879a256703a1ef6b3a766dc334dd7355f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:30 np0005539550 podman[416712]: 2025-11-29 09:01:30.585467681 +0000 UTC m=+0.460520633 container init eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:01:30 np0005539550 podman[416712]: 2025-11-29 09:01:30.593420302 +0000 UTC m=+0.468473294 container start eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 29 04:01:30 np0005539550 podman[416712]: 2025-11-29 09:01:30.756397008 +0000 UTC m=+0.631450000 container attach eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:01:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:31 np0005539550 nifty_murdock[416728]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:01:31 np0005539550 nifty_murdock[416728]: --> relative data size: 1.0
Nov 29 04:01:31 np0005539550 nifty_murdock[416728]: --> All data devices are unavailable
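The nifty_murdock container above is one of cephadm's short-lived ceph-volume probes: its three '-->' lines say the only candidate data device is an LVM one and is unavailable, so no new OSD is prepared on this host. A small parser for that report style, matching the lines as journald captured them (field names are illustrative):

    # Illustrative parser for the '-->' report lines emitted above.
    def parse_report(lines):
        report = {}
        for line in lines:
            _, sep, body = line.partition('--> ')
            if not sep:
                continue
            if body.startswith('passed data devices:'):
                report['devices'] = body.split(':', 1)[1].strip()
            elif body.startswith('relative data size:'):
                report['relative_size'] = float(body.split(':', 1)[1])
            elif body == 'All data devices are unavailable':
                report['usable'] = False
        return report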
Nov 29 04:01:31 np0005539550 systemd[1]: libpod-eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb.scope: Deactivated successfully.
Nov 29 04:01:31 np0005539550 podman[416712]: 2025-11-29 09:01:31.500048424 +0000 UTC m=+1.375101386 container died eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:01:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:31.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:31.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c9234fcb0cf7157e9a589a422520942879a256703a1ef6b3a766dc334dd7355f-merged.mount: Deactivated successfully.
Nov 29 04:01:32 np0005539550 podman[416712]: 2025-11-29 09:01:32.29964166 +0000 UTC m=+2.174694632 container remove eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_murdock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 29 04:01:32 np0005539550 systemd[1]: libpod-conmon-eed0424e14785141eb8e8ba40508dcc47e6b867da55f0dc4ca2fec6417beb4eb.scope: Deactivated successfully.
Nov 29 04:01:32 np0005539550 podman[416901]: 2025-11-29 09:01:32.898664432 +0000 UTC m=+0.054064883 container create 3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:01:32 np0005539550 nova_compute[257631]: 2025-11-29 09:01:32.922 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:32 np0005539550 systemd[1]: Started libpod-conmon-3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d.scope.
Nov 29 04:01:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:01:32 np0005539550 podman[416901]: 2025-11-29 09:01:32.971376934 +0000 UTC m=+0.126777415 container init 3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:32 np0005539550 podman[416901]: 2025-11-29 09:01:32.881157791 +0000 UTC m=+0.036558282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:32 np0005539550 podman[416901]: 2025-11-29 09:01:32.981300544 +0000 UTC m=+0.136701005 container start 3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 04:01:32 np0005539550 podman[416901]: 2025-11-29 09:01:32.984655019 +0000 UTC m=+0.140055530 container attach 3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:01:32 np0005539550 affectionate_perlman[416917]: 167 167
Nov 29 04:01:32 np0005539550 systemd[1]: libpod-3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d.scope: Deactivated successfully.
Nov 29 04:01:32 np0005539550 podman[416901]: 2025-11-29 09:01:32.988822424 +0000 UTC m=+0.144222905 container died 3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 04:01:33 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d46e5a448ef4dc3afa515489bc82ef1813cf62120f8486267cb0df2035eafe51-merged.mount: Deactivated successfully.
Nov 29 04:01:33 np0005539550 podman[416901]: 2025-11-29 09:01:33.032991436 +0000 UTC m=+0.188391917 container remove 3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:01:33 np0005539550 systemd[1]: libpod-conmon-3c54cbafdc7448eadd6a2fb4da6814b87968a5508e9832d44d854599a61ea69d.scope: Deactivated successfully.
Nov 29 04:01:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:33 np0005539550 podman[416942]: 2025-11-29 09:01:33.267441843 +0000 UTC m=+0.068341762 container create 7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 29 04:01:33 np0005539550 systemd[1]: Started libpod-conmon-7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5.scope.
Nov 29 04:01:33 np0005539550 podman[416942]: 2025-11-29 09:01:33.241840968 +0000 UTC m=+0.042740867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:01:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c777286c4bccf76220af3dda19c4a9dbac82d54884aa0a02af73b0365d19efa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c777286c4bccf76220af3dda19c4a9dbac82d54884aa0a02af73b0365d19efa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c777286c4bccf76220af3dda19c4a9dbac82d54884aa0a02af73b0365d19efa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c777286c4bccf76220af3dda19c4a9dbac82d54884aa0a02af73b0365d19efa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
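The four kernel lines above are the stock xfs warning for filesystems whose inode timestamps are still 32-bit: 0x7fffffff is the largest signed 32-bit time_t. A minimal Python check of what that limit translates to (the datetime conversion is illustrative, not taken from this log):

    # 0x7fffffff is the largest 32-bit signed time_t, which is all the
    # kernel's "supports timestamps until 2038 (0x7fffffff)" note means.
    from datetime import datetime, timezone

    limit = 0x7fffffff  # 2147483647 seconds after the Unix epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00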
Nov 29 04:01:33 np0005539550 podman[416942]: 2025-11-29 09:01:33.364844547 +0000 UTC m=+0.165744466 container init 7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:01:33 np0005539550 podman[416942]: 2025-11-29 09:01:33.370984072 +0000 UTC m=+0.171883951 container start 7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:33 np0005539550 podman[416942]: 2025-11-29 09:01:33.374763177 +0000 UTC m=+0.175663106 container attach 7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:33.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:33.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:34 np0005539550 focused_tu[416959]: {
Nov 29 04:01:34 np0005539550 focused_tu[416959]:    "0": [
Nov 29 04:01:34 np0005539550 focused_tu[416959]:        {
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "devices": [
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "/dev/loop3"
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            ],
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "lv_name": "ceph_lv0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "lv_size": "7511998464",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "name": "ceph_lv0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "tags": {
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.cluster_name": "ceph",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.crush_device_class": "",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.encrypted": "0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.osd_id": "0",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.type": "block",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:                "ceph.vdo": "0"
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            },
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "type": "block",
Nov 29 04:01:34 np0005539550 focused_tu[416959]:            "vg_name": "ceph_vg0"
Nov 29 04:01:34 np0005539550 focused_tu[416959]:        }
Nov 29 04:01:34 np0005539550 focused_tu[416959]:    ]
Nov 29 04:01:34 np0005539550 focused_tu[416959]: }
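The JSON block that focused_tu printed above has the shape of ceph-volume lvm list --format json output; cephadm runs such inventory commands in short-lived containers, which matches the create/start/die/remove cycle around it. A minimal sketch of pulling the OSD-to-device mapping out of such a document; the report_text literal is a trimmed copy of the log output, not a live query:

    import json

    # Trimmed copy of the JSON logged by focused_tu above
    # (ceph-volume lvm list --format json style output).
    report_text = '''
    {
      "0": [
        {
          "devices": ["/dev/loop3"],
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "ceph.osd_id": "0",
            "ceph.type": "block"
          }
        }
      ]
    }
    '''

    for osd_id, lvs in json.loads(report_text).items():
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid {lv['tags']['ceph.osd_fsid']}, devices {lv['devices']})")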
Nov 29 04:01:34 np0005539550 systemd[1]: libpod-7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5.scope: Deactivated successfully.
Nov 29 04:01:34 np0005539550 podman[416942]: 2025-11-29 09:01:34.179630515 +0000 UTC m=+0.980530434 container died 7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:01:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-8c777286c4bccf76220af3dda19c4a9dbac82d54884aa0a02af73b0365d19efa-merged.mount: Deactivated successfully.
Nov 29 04:01:34 np0005539550 nova_compute[257631]: 2025-11-29 09:01:34.235 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:34 np0005539550 podman[416942]: 2025-11-29 09:01:34.26993438 +0000 UTC m=+1.070834279 container remove 7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_tu, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:01:34 np0005539550 systemd[1]: libpod-conmon-7a74e72b2a02da7bba1dfbfe9d63ecb71741fdf291fe5f6780d228ddd6874db5.scope: Deactivated successfully.
Nov 29 04:01:35 np0005539550 podman[417121]: 2025-11-29 09:01:35.030547534 +0000 UTC m=+0.069090202 container create 2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:35 np0005539550 podman[417121]: 2025-11-29 09:01:34.995648034 +0000 UTC m=+0.034190702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:35 np0005539550 systemd[1]: Started libpod-conmon-2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3.scope.
Nov 29 04:01:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:01:35 np0005539550 podman[417121]: 2025-11-29 09:01:35.198659559 +0000 UTC m=+0.237202237 container init 2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 29 04:01:35 np0005539550 podman[417121]: 2025-11-29 09:01:35.211528823 +0000 UTC m=+0.250071481 container start 2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 04:01:35 np0005539550 fervent_pascal[417137]: 167 167
Nov 29 04:01:35 np0005539550 systemd[1]: libpod-2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3.scope: Deactivated successfully.
Nov 29 04:01:35 np0005539550 conmon[417137]: conmon 2f2db1e9413f68bd6e02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3.scope/container/memory.events
Nov 29 04:01:35 np0005539550 podman[417121]: 2025-11-29 09:01:35.232397299 +0000 UTC m=+0.270939937 container attach 2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:01:35 np0005539550 podman[417121]: 2025-11-29 09:01:35.232895682 +0000 UTC m=+0.271438310 container died 2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 29 04:01:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-7ce07ff6a97469cad66eadaf8c33227cbe7be821caddc8efc0a685158d963875-merged.mount: Deactivated successfully.
Nov 29 04:01:35 np0005539550 podman[417121]: 2025-11-29 09:01:35.341956229 +0000 UTC m=+0.380498867 container remove 2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:01:35 np0005539550 systemd[1]: libpod-conmon-2f2db1e9413f68bd6e02411f4318a5d8a133aa28cf3260b513c77a45601fe6a3.scope: Deactivated successfully.
Nov 29 04:01:35 np0005539550 podman[417161]: 2025-11-29 09:01:35.552154155 +0000 UTC m=+0.078958690 container create 9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:01:35 np0005539550 podman[417161]: 2025-11-29 09:01:35.504513845 +0000 UTC m=+0.031318460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:01:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:35 np0005539550 systemd[1]: Started libpod-conmon-9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68.scope.
Nov 29 04:01:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:01:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b896d1b537af589be69e0335bc536db4f2f36b5de29de745c268ac376ebccc65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b896d1b537af589be69e0335bc536db4f2f36b5de29de745c268ac376ebccc65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b896d1b537af589be69e0335bc536db4f2f36b5de29de745c268ac376ebccc65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:35 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b896d1b537af589be69e0335bc536db4f2f36b5de29de745c268ac376ebccc65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:01:35 np0005539550 podman[417161]: 2025-11-29 09:01:35.674543139 +0000 UTC m=+0.201347664 container init 9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:01:35 np0005539550 podman[417161]: 2025-11-29 09:01:35.681201027 +0000 UTC m=+0.208005542 container start 9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:01:35 np0005539550 podman[417161]: 2025-11-29 09:01:35.702570505 +0000 UTC m=+0.229375030 container attach 9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:01:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]: {
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]:        "osd_id": 0,
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]:        "type": "bluestore"
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]:    }
Nov 29 04:01:36 np0005539550 upbeat_meninsky[417177]: }
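upbeat_meninsky's output above is keyed by OSD uuid rather than OSD id and reports the bluestore device, the shape of a ceph-volume raw list style report. A quick cross-check that it agrees with the lvm report logged a few seconds earlier; both literals are trimmed copies of the log output:

    # The lvm report (keyed by OSD id) and the raw report (keyed by OSD
    # uuid) should name the same osd_fsid and device-mapper path.
    lvm_fsids = {"0": "5dd67027-4f06-4800-93bd-47ed1a74c5e6"}
    raw_report = {
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "type": "bluestore",
        },
    }
    for osd_id, fsid in lvm_fsids.items():
        entry = raw_report[fsid]
        assert entry["osd_id"] == int(osd_id)
        print(f"osd.{osd_id} -> {entry['device']} ({entry['type']})")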
Nov 29 04:01:36 np0005539550 systemd[1]: libpod-9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68.scope: Deactivated successfully.
Nov 29 04:01:36 np0005539550 podman[417161]: 2025-11-29 09:01:36.544778005 +0000 UTC m=+1.071582530 container died 9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:01:36 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b896d1b537af589be69e0335bc536db4f2f36b5de29de745c268ac376ebccc65-merged.mount: Deactivated successfully.
Nov 29 04:01:36 np0005539550 podman[417161]: 2025-11-29 09:01:36.835875089 +0000 UTC m=+1.362679604 container remove 9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:01:36 np0005539550 nova_compute[257631]: 2025-11-29 09:01:36.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:01:36 np0005539550 systemd[1]: libpod-conmon-9fb852fb58c7b9108a6fe17e6de81708e360ea1c7be3d91f0ec2f84d5cd79a68.scope: Deactivated successfully.
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ebdedaf4-a0f7-487c-a97b-f2a7a98a08a8 does not exist
Nov 29 04:01:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 23381d4c-6f12-46ba-aabe-06fb8957feec does not exist
Nov 29 04:01:36 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a0e787e3-d276-4142-a480-9328dfb6cd12 does not exist
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:36.975150) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406896975226, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 2109, "num_deletes": 251, "total_data_size": 3929716, "memory_usage": 3992416, "flush_reason": "Manual Compaction"}
Nov 29 04:01:36 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897036199, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 3828938, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73550, "largest_seqno": 75658, "table_properties": {"data_size": 3819374, "index_size": 6057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19260, "raw_average_key_size": 20, "raw_value_size": 3800423, "raw_average_value_size": 4004, "num_data_blocks": 265, "num_entries": 949, "num_filter_entries": 949, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406681, "oldest_key_time": 1764406681, "file_creation_time": 1764406896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 61101 microseconds, and 7699 cpu microseconds.
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.036251) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 3828938 bytes OK
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.036274) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.051878) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.051915) EVENT_LOG_v1 {"time_micros": 1764406897051906, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.051937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3921242, prev total WAL file size 3921242, number of live WAL files 2.
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.053068) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(3739KB)], [167(11MB)]
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897053160, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 16385150, "oldest_snapshot_seqno": -1}
Nov 29 04:01:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 11354 keys, 14379509 bytes, temperature: kUnknown
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897292381, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 14379509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14306629, "index_size": 43391, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28421, "raw_key_size": 299720, "raw_average_key_size": 26, "raw_value_size": 14107962, "raw_average_value_size": 1242, "num_data_blocks": 1650, "num_entries": 11354, "num_filter_entries": 11354, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764406897, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.292653) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 14379509 bytes
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.308561) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.5 rd, 60.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.0 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11873, records dropped: 519 output_compression: NoCompression
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.308597) EVENT_LOG_v1 {"time_micros": 1764406897308581, "job": 104, "event": "compaction_finished", "compaction_time_micros": 239307, "compaction_time_cpu_micros": 36553, "output_level": 6, "num_output_files": 1, "total_output_size": 14379509, "num_input_records": 11873, "num_output_records": 11354, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897310379, "job": 104, "event": "table_file_deletion", "file_number": 169}
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764406897314628, "job": 104, "event": "table_file_deletion", "file_number": 167}
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.052938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.314767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.314772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.314773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.314774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:01:37 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:01:37.314776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
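The JOB 103/104 sequence above is one complete manual flush-and-compact cycle on the mon's RocksDB store: a 3.8 MB L0 flush, then a compaction of that file together with the existing 11 MB L6 file into a single new L6 table. The amplification figures in the summary line can be re-derived from the event-log byte counts; a worked check, using only numbers from the lines above:

    # JOB 104: input_data_size=16385150 (of which L0 file #169 is
    # 3828938 bytes), total_output_size=14379509.
    input_total = 16_385_150
    input_l0 = 3_828_938
    output = 14_379_509

    write_amp = output / input_l0               # ~3.8, matches write-amplify(3.8)
    rw_amp = (input_total + output) / input_l0  # ~8.0, matches read-write-amplify(8.0)
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")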
Nov 29 04:01:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:37.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:37 np0005539550 nova_compute[257631]: 2025-11-29 09:01:37.925 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:37.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:01:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:39 np0005539550 nova_compute[257631]: 2025-11-29 09:01:39.238 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:39.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:39.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
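The radosgw beast lines repeat in lockstep: an anonymous "HEAD / HTTP/1.0" from 192.168.122.100 and from 192.168.122.102 every two seconds, the signature of load-balancer health probes rather than client traffic. A minimal parser sketch for this fixed line shape; the sample line is copied from the log above:

    import re

    # Extract client, request, status and latency from a beast access line.
    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
            '[29/Nov/2025:09:01:39.605 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST.search(line)
    print(m['client'], m['request'], m['status'], float(m['latency']))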
Nov 29 04:01:40 np0005539550 podman[417274]: 2025-11-29 09:01:40.462011059 +0000 UTC m=+0.180634273 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
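The health_status=healthy field above comes from podman's periodic healthcheck of the ovn_controller container (the configured test is /openstack/healthcheck). On podman versions that populate container health state, the same value should be queryable directly; a sketch, assuming podman is on PATH and the container name matches this log:

    import subprocess

    # Query the last recorded health status of the ovn_controller container.
    result = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}",
         "ovn_controller"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # e.g. "healthy"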
Nov 29 04:01:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:41.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:42.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:42 np0005539550 nova_compute[257631]: 2025-11-29 09:01:42.961 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:43.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:44.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:44 np0005539550 nova_compute[257631]: 2025-11-29 09:01:44.264 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:45.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:46.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:47.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:48 np0005539550 nova_compute[257631]: 2025-11-29 09:01:48.006 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:48.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:49 np0005539550 nova_compute[257631]: 2025-11-29 09:01:49.267 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:01:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:49.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:01:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:50.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:51.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:52.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:53 np0005539550 nova_compute[257631]: 2025-11-29 09:01:53.036 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:01:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:53.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:01:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:54 np0005539550 nova_compute[257631]: 2025-11-29 09:01:54.269 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:56.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:01:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:01:58.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:01:58 np0005539550 nova_compute[257631]: 2025-11-29 09:01:58.038 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:01:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:01:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:01:59 np0005539550 nova_compute[257631]: 2025-11-29 09:01:59.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:01:59
Nov 29 04:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.log', '.rgw.root', 'backups', 'volumes']
Nov 29 04:01:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
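The balancer pass above found nothing to do: in upmap mode it prepared 0 of an allowed 10 changes, consistent with the pgmap lines showing all 305 PGs active+clean. The "max misplaced 0.050000" knob caps how many PGs it may put in motion at once; with this cluster's PG count that works out to roughly 15 PGs (simple arithmetic on figures from the log above):

    # max misplaced 0.05 against the 305 PGs reported in the pgmap lines.
    total_pgs = 305
    max_misplaced_ratio = 0.05
    print(int(total_pgs * max_misplaced_ratio))  # -> 15 PGs at most in motion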
Nov 29 04:01:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:01:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:01:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:01:59.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:00.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
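Each beast access-log line carries client, user, timestamp, request line, status, byte count, and latency in a fixed order. A sketch that pulls those fields out; the layout is inferred from the lines above, not from a documented format:

```python
# Sketch: extract client, status, and latency from radosgw "beast"
# access-log lines like the ones above. Field layout inferred from this
# log; treat it as an assumption rather than a documented format.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:09:02:00.028 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.001000025s')
m = BEAST.search(line)
print(m.group("client"), m.group("status"), float(m.group("latency")))
```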
Nov 29 04:02:00 np0005539550 podman[417347]: 2025-11-29 09:02:00.314821262 +0000 UTC m=+0.054057053 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 04:02:00 np0005539550 podman[417348]: 2025-11-29 09:02:00.337968605 +0000 UTC m=+0.073138844 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
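podman emits one of these health_status events each time a container's configured healthcheck fires; health_failing_streak=0 means the check has not been failing. The same check can be triggered on demand; a sketch, assuming `podman healthcheck run` (which exits 0 when the check passes) is available on this host:

```python
# Sketch: run a container's healthcheck on demand and report the result.
# Assumes "podman healthcheck run NAME", which exits 0 when the check
# passes. Container names are taken from the events above.
import subprocess

def container_healthy(name: str) -> bool:
    proc = subprocess.run(
        ["podman", "healthcheck", "run", name],
        capture_output=True, text=True,
    )
    return proc.returncode == 0

for name in ("multipathd", "ovn_metadata_agent"):
    print(name, "healthy" if container_healthy(name) else "unhealthy")
```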
Nov 29 04:02:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:02:01 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.0 total, 600.0 interval
Cumulative writes: 16K writes, 75K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1524 writes, 6813 keys, 1524 commit groups, 1.0 writes per commit group, ingest: 10.56 MB, 0.02 MB/s
Interval WAL: 1524 writes, 1524 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     39.6      2.58              0.34        52    0.050       0      0       0.0       0.0
  L6      1/0   13.71 MB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   5.6     47.2     40.8     13.89              1.72        51    0.272    414K    27K       0.0       0.0
 Sum      1/0   13.71 MB   0.0      0.6     0.1      0.5       0.7      0.1       0.0   6.6     39.8     40.7     16.47              2.06       103    0.160    414K    27K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.8     96.1     98.5      0.75              0.29        10    0.075     57K   2571       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   0.0     47.2     40.8     13.89              1.72        51    0.272    414K    27K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     39.7      2.57              0.34        51    0.050       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.7      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6600.0 total, 600.0 interval
Flush(GB): cumulative 0.100, interval 0.011
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.65 GB write, 0.10 MB/s write, 0.64 GB read, 0.10 MB/s read, 16.5 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55611ecc71f0#2 capacity: 304.00 MB usage: 71.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000609 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(4485,69.01 MB,22.7006%) FilterBlock(104,1.12 MB,0.368997%) IndexBlock(104,1.85 MB,0.608459%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Nov 29 04:02:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:01.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
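The _set_new_cache_sizes figures are byte counts: kv_alloc 318767104 B is exactly 304 MiB, which lines up with the 304.00 MB BinnedLRUCache capacity in the RocksDB stats dump above. A quick check:

```python
# Quick arithmetic check on the _set_new_cache_sizes values above: the
# allocations are byte counts, and kv_alloc matches the 304.00 MB
# RocksDB block-cache capacity reported in the stats dump.
MIB = 1 << 20
for name, val in [("cache_size", 1020054731),
                  ("inc_alloc", 343932928),
                  ("full_alloc", 348127232),
                  ("kv_alloc", 318767104)]:
    print(f"{name}: {val / MIB:.2f} MiB")
# kv_alloc -> 304.00 MiB, i.e. the BinnedLRUCache capacity above.
```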
Nov 29 04:02:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:03 np0005539550 nova_compute[257631]: 2025-11-29 09:02:03.090 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:03.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:04.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:04 np0005539550 nova_compute[257631]: 2025-11-29 09:02:04.277 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 04:02:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:05.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 04:02:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:06.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:07.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:08 np0005539550 nova_compute[257631]: 2025-11-29 09:02:08.147 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:02:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
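The rbd_support module periodically reloads mirror-snapshot schedules for each pool (an empty start_after= simply means it scans from the beginning). To see what is actually scheduled, something like the sketch below; it assumes the `rbd mirror snapshot schedule ls` subcommand (present in recent Ceph releases) and a usable client keyring:

```python
# Sketch: list mirror-snapshot schedules for the pools the mgr module
# reloads above. Assumes "rbd mirror snapshot schedule ls" exists on
# this Ceph release and that credentials are available.
import subprocess

for pool in ("vms", "volumes", "backups", "images"):
    out = subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls",
         "--pool", pool, "--recursive"],
        capture_output=True, text=True,
    )
    print(pool, out.stdout.strip() or "(no schedules)")
```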
Nov 29 04:02:08 np0005539550 nova_compute[257631]: 2025-11-29 09:02:08.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:08 np0005539550 nova_compute[257631]: 2025-11-29 09:02:08.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:02:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:02:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:02:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:02:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.319 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:09.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.938 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.938 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.939 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:09 np0005539550 nova_compute[257631]: 2025-11-29 09:02:09.939 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
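Every "Running periodic task ComputeManager._..." line comes from oslo.service's periodic-task machinery: methods decorated as periodic tasks are collected by a PeriodicTasks manager and run when their interval elapses. A minimal sketch of the same pattern (the task name echoes the log; the empty body and 60 s spacing are illustrative):

```python
# Minimal sketch of the oslo.service periodic-task pattern behind the
# "Running periodic task ..." lines above. Requires oslo.service and
# oslo.config; the task body and 60 s spacing are illustrative.
from oslo_config import cfg
from oslo_service import periodic_task


class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # Real nova rebuilds the instance network-info cache here.
        pass

    def run_once(self, context):
        # Runs any registered tasks whose interval has elapsed.
        self.run_periodic_tasks(context)


mgr = Manager(cfg.CONF)
mgr.run_once(context=None)
```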
Nov 29 04:02:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:10.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:11 np0005539550 podman[417440]: 2025-11-29 09:02:11.344027438 +0000 UTC m=+0.078670163 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:02:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:11.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:12.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:12 np0005539550 nova_compute[257631]: 2025-11-29 09:02:12.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:13 np0005539550 nova_compute[257631]: 2025-11-29 09:02:13.211 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:13.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:13 np0005539550 nova_compute[257631]: 2025-11-29 09:02:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:13 np0005539550 nova_compute[257631]: 2025-11-29 09:02:13.948 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:13 np0005539550 nova_compute[257631]: 2025-11-29 09:02:13.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:13 np0005539550 nova_compute[257631]: 2025-11-29 09:02:13.949 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:13 np0005539550 nova_compute[257631]: 2025-11-29 09:02:13.949 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:02:13 np0005539550 nova_compute[257631]: 2025-11-29 09:02:13.950 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:14.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.322 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1375322580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.497 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.686 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.687 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4077MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.687 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.687 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.766 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.766 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:02:14 np0005539550 nova_compute[257631]: 2025-11-29 09:02:14.790 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:02:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:02:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39176008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:02:15 np0005539550 nova_compute[257631]: 2025-11-29 09:02:15.279 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:02:15 np0005539550 nova_compute[257631]: 2025-11-29 09:02:15.285 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:02:15 np0005539550 nova_compute[257631]: 2025-11-29 09:02:15.307 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:02:15 np0005539550 nova_compute[257631]: 2025-11-29 09:02:15.309 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:02:15 np0005539550 nova_compute[257631]: 2025-11-29 09:02:15.310 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
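During update_available_resource the resource tracker shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (twice here, each taking ~0.5 s) to size the RBD-backed disk inventory. A sketch of the same call; the command line is taken verbatim from the log, while the top-level JSON keys are assumptions based on recent Ceph releases:

```python
# Sketch of the "ceph df" call the resource tracker logs above, parsing
# cluster-wide available bytes. The command is verbatim from the log;
# the "stats"/"total_avail_bytes" keys are assumptions for recent Ceph.
import json
import subprocess

cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
stats = json.loads(subprocess.check_output(cmd))["stats"]
avail_gib = stats["total_avail_bytes"] / (1 << 30)
print(f"available: {avail_gib:.1f} GiB")
```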
Nov 29 04:02:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:15.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:16.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:17.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:18.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:18 np0005539550 nova_compute[257631]: 2025-11-29 09:02:18.212 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:18 np0005539550 nova_compute[257631]: 2025-11-29 09:02:18.304 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:02:18.991 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:02:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:02:18.991 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:02:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:02:18.991 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:02:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:19 np0005539550 nova_compute[257631]: 2025-11-29 09:02:19.326 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:20.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:02:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
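The autoscaler's pg targets above are reproducible as usage_fraction x bias x 300, where 300 is consistent with mon_target_pg_per_osd (default 100) times this cluster's 3 OSDs; the result is then quantized, and pg_num is left at its current value when the difference is small. A quick check against the logged numbers:

```python
# Reproduce the pg_autoscaler targets logged above:
# pg_target = usage_fraction * bias * 300, where 300 is consistent with
# mon_target_pg_per_osd (default 100) times 3 OSDs. Whether a change is
# actually applied depends on thresholds not shown in the log.
pools = [
    (".mgr",               2.0538165363856318e-05, 1.0),
    ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0),
    ("volumes",            0.0021614147124511445,  1.0),
]
for name, usage, bias in pools:
    print(f"{name}: pg target {usage * bias * 300}")
# .mgr -> ~0.0061614496, matching the logged "pg target" above.
```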
Nov 29 04:02:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:22.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:23 np0005539550 nova_compute[257631]: 2025-11-29 09:02:23.214 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:24.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:24 np0005539550 nova_compute[257631]: 2025-11-29 09:02:24.362 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:25.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:25 np0005539550 nova_compute[257631]: 2025-11-29 09:02:25.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:02:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.002000051s ======
Nov 29 04:02:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:26.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Nov 29 04:02:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:27.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:28.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:28 np0005539550 nova_compute[257631]: 2025-11-29 09:02:28.216 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:29 np0005539550 nova_compute[257631]: 2025-11-29 09:02:29.365 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:30.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:30 np0005539550 podman[417572]: 2025-11-29 09:02:30.570384639 +0000 UTC m=+0.050231867 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 04:02:30 np0005539550 podman[417571]: 2025-11-29 09:02:30.572738348 +0000 UTC m=+0.057971851 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 04:02:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:31.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:32.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:33 np0005539550 nova_compute[257631]: 2025-11-29 09:02:33.258 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:33.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:34.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:34 np0005539550 nova_compute[257631]: 2025-11-29 09:02:34.368 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:02:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:35.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:36.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:37.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:38.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:38 np0005539550 nova_compute[257631]: 2025-11-29 09:02:38.269 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1ea48c90-7bf0-4f86-a39a-257b3e5bbb6b does not exist
Nov 29 04:02:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3460c612-3b7c-4709-83e8-bf35a7c59209 does not exist
Nov 29 04:02:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 2d6236fc-d0b0-414f-ba22-15aad944f6e1 does not exist
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:02:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
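The burst at 09:02:38 is the cephadm mgr module (entity mgr.compute-0.pdhsqi) driving the mon: each handle_command line is immediately followed by its audit-channel dispatch record, with read-only queries logged at DBG and key-reading commands like "auth get" at INF (as observed in the lines above). The audit lines embed the command as JSON, so they can be decoded directly; a minimal sketch under that assumption:

    import json, re

    FROM_RE = re.compile(r"from='(?P<who>[^']+)' entity='(?P<entity>[^']+)'")
    CMD_RE = re.compile(r'cmd=\[(?P<cmd>\{.*\})\]')

    def parse_audit(line):
        # Returns (entity, command prefix) for audit dispatch lines like
        # the ones above; returns None for handle_command or truncated lines.
        m_from, m_cmd = FROM_RE.search(line), CMD_RE.search(line)
        if not (m_from and m_cmd):
            return None
        cmd = json.loads(m_cmd.group("cmd"))
        return m_from.group("entity"), cmd.get("prefix")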
Nov 29 04:02:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:39 np0005539550 podman[417885]: 2025-11-29 09:02:39.275817137 +0000 UTC m=+0.028465718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:39 np0005539550 nova_compute[257631]: 2025-11-29 09:02:39.397 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:39 np0005539550 podman[417885]: 2025-11-29 09:02:39.780305967 +0000 UTC m=+0.532954508 container create 3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:02:40 np0005539550 systemd[1]: Started libpod-conmon-3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5.scope.
Nov 29 04:02:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:02:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:02:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:40 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:02:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:40.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:40 np0005539550 podman[417885]: 2025-11-29 09:02:40.207542761 +0000 UTC m=+0.960191322 container init 3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_northcutt, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 29 04:02:40 np0005539550 podman[417885]: 2025-11-29 09:02:40.216072716 +0000 UTC m=+0.968721257 container start 3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_northcutt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:02:40 np0005539550 epic_northcutt[417902]: 167 167
Nov 29 04:02:40 np0005539550 systemd[1]: libpod-3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5.scope: Deactivated successfully.
Nov 29 04:02:40 np0005539550 conmon[417902]: conmon 3682eafc8075283d4de1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5.scope/container/memory.events
Nov 29 04:02:40 np0005539550 podman[417885]: 2025-11-29 09:02:40.517711916 +0000 UTC m=+1.270360477 container attach 3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_northcutt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 29 04:02:40 np0005539550 podman[417885]: 2025-11-29 09:02:40.518848295 +0000 UTC m=+1.271496856 container died 3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:02:40 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d6c235331e94cbd317ed5e950e0f5942c8d8ed7b5bde3ec2a4d89e6fa905e414-merged.mount: Deactivated successfully.
Nov 29 04:02:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:41 np0005539550 podman[417885]: 2025-11-29 09:02:41.663627247 +0000 UTC m=+2.416275788 container remove 3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 29 04:02:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:41.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:41 np0005539550 systemd[1]: libpod-conmon-3682eafc8075283d4de105e374b6d9e36f6f9f9f62c77d21ba9d99cbc84671b5.scope: Deactivated successfully.
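The epic_northcutt sequence above (image pull, create, init, start, attach, died, remove, all within ~2 s) is cephadm's usual pattern of launching a throwaway ceph container to run a single probe command and tearing it down; the same shape repeats below for heuristic_shirley, bold_borg, pedantic_lamport, amazing_panini, and loving_brahmagupta. Folding the podman journal events into per-container lifecycles makes the churn legible; a sketch using the event names exactly as they appear in these lines:

    import re
    from collections import defaultdict

    EVENT_RE = re.compile(
        r'podman\[\d+\]: .*? container '
        r'(?P<event>create|init|start|attach|died|remove) '
        r'(?P<cid>[0-9a-f]{64})'
    )

    lifecycles = defaultdict(list)

    def feed(line):
        m = EVENT_RE.search(line)
        if m:
            lifecycles[m.group("cid")].append(m.group("event"))

    # After feeding the lines above, lifecycles["3682eafc..."] ==
    # ["create", "init", "start", "attach", "died", "remove"]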
Nov 29 04:02:41 np0005539550 podman[417974]: 2025-11-29 09:02:41.833182459 +0000 UTC m=+0.088767448 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 04:02:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:41 np0005539550 podman[418001]: 2025-11-29 09:02:41.850249809 +0000 UTC m=+0.029428823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:42.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:42 np0005539550 podman[418001]: 2025-11-29 09:02:42.293087895 +0000 UTC m=+0.472266919 container create f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shirley, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 04:02:42 np0005539550 systemd[1]: Started libpod-conmon-f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4.scope.
Nov 29 04:02:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:02:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fe0a0a1da601c30ee978e4b49bf1b013f373d73a8ba01a40dc2a7607c24707/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fe0a0a1da601c30ee978e4b49bf1b013f373d73a8ba01a40dc2a7607c24707/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fe0a0a1da601c30ee978e4b49bf1b013f373d73a8ba01a40dc2a7607c24707/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fe0a0a1da601c30ee978e4b49bf1b013f373d73a8ba01a40dc2a7607c24707/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2fe0a0a1da601c30ee978e4b49bf1b013f373d73a8ba01a40dc2a7607c24707/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:42 np0005539550 podman[418001]: 2025-11-29 09:02:42.687822291 +0000 UTC m=+0.867001355 container init f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 29 04:02:42 np0005539550 podman[418001]: 2025-11-29 09:02:42.703035124 +0000 UTC m=+0.882214108 container start f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:02:42 np0005539550 podman[418001]: 2025-11-29 09:02:42.706578313 +0000 UTC m=+0.885757317 container attach f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:02:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:43 np0005539550 nova_compute[257631]: 2025-11-29 09:02:43.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:43 np0005539550 heuristic_shirley[418022]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:02:43 np0005539550 heuristic_shirley[418022]: --> relative data size: 1.0
Nov 29 04:02:43 np0005539550 heuristic_shirley[418022]: --> All data devices are unavailable
Nov 29 04:02:43 np0005539550 systemd[1]: libpod-f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4.scope: Deactivated successfully.
Nov 29 04:02:43 np0005539550 podman[418001]: 2025-11-29 09:02:43.560392035 +0000 UTC m=+1.739571059 container died f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 29 04:02:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d2fe0a0a1da601c30ee978e4b49bf1b013f373d73a8ba01a40dc2a7607c24707-merged.mount: Deactivated successfully.
Nov 29 04:02:43 np0005539550 podman[418001]: 2025-11-29 09:02:43.63518322 +0000 UTC m=+1.814362204 container remove f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_shirley, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:02:43 np0005539550 systemd[1]: libpod-conmon-f8103c525a743fd1ef047b559c6e98c083fcf2aaaf442d2111a331891ed4cfc4.scope: Deactivated successfully.
Nov 29 04:02:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:43.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:44.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:44 np0005539550 nova_compute[257631]: 2025-11-29 09:02:44.399 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:44 np0005539550 podman[418190]: 2025-11-29 09:02:44.463603602 +0000 UTC m=+0.073083272 container create 32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_borg, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:02:44 np0005539550 systemd[1]: Started libpod-conmon-32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198.scope.
Nov 29 04:02:44 np0005539550 podman[418190]: 2025-11-29 09:02:44.43176593 +0000 UTC m=+0.041245640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:02:44 np0005539550 podman[418190]: 2025-11-29 09:02:44.551907747 +0000 UTC m=+0.161387477 container init 32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_borg, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:02:44 np0005539550 podman[418190]: 2025-11-29 09:02:44.557099608 +0000 UTC m=+0.166579238 container start 32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 04:02:44 np0005539550 podman[418190]: 2025-11-29 09:02:44.560078843 +0000 UTC m=+0.169558513 container attach 32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_borg, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 04:02:44 np0005539550 bold_borg[418206]: 167 167
Nov 29 04:02:44 np0005539550 systemd[1]: libpod-32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198.scope: Deactivated successfully.
Nov 29 04:02:44 np0005539550 conmon[418206]: conmon 32a8e2450682b5d86896 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198.scope/container/memory.events
Nov 29 04:02:44 np0005539550 podman[418211]: 2025-11-29 09:02:44.62781138 +0000 UTC m=+0.041584129 container died 32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 04:02:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-b8465d58cb5ea6c51e0c1e9bcf03d6c382602c9a4f319c2c8535ca420c66504e-merged.mount: Deactivated successfully.
Nov 29 04:02:44 np0005539550 podman[418211]: 2025-11-29 09:02:44.949416732 +0000 UTC m=+0.363189391 container remove 32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_borg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 29 04:02:44 np0005539550 systemd[1]: libpod-conmon-32a8e2450682b5d86896426f7d571363bb1e61342b890c93a4ba05ee4ebba198.scope: Deactivated successfully.
Nov 29 04:02:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:45 np0005539550 podman[418233]: 2025-11-29 09:02:45.177697793 +0000 UTC m=+0.055867749 container create 1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:02:45 np0005539550 systemd[1]: Started libpod-conmon-1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd.scope.
Nov 29 04:02:45 np0005539550 podman[418233]: 2025-11-29 09:02:45.153442072 +0000 UTC m=+0.031612098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:45 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:02:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a6123022a10d58776994528a483a10c375e1d62a49055cc036b759d7bba916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a6123022a10d58776994528a483a10c375e1d62a49055cc036b759d7bba916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a6123022a10d58776994528a483a10c375e1d62a49055cc036b759d7bba916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:45 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a6123022a10d58776994528a483a10c375e1d62a49055cc036b759d7bba916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:45 np0005539550 podman[418233]: 2025-11-29 09:02:45.282852722 +0000 UTC m=+0.161022688 container init 1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:02:45 np0005539550 podman[418233]: 2025-11-29 09:02:45.292400523 +0000 UTC m=+0.170570459 container start 1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 04:02:45 np0005539550 podman[418233]: 2025-11-29 09:02:45.29666648 +0000 UTC m=+0.174836426 container attach 1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 29 04:02:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:45.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]: {
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:    "0": [
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:        {
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "devices": [
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "/dev/loop3"
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            ],
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "lv_name": "ceph_lv0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "lv_size": "7511998464",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "name": "ceph_lv0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "tags": {
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.cluster_name": "ceph",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.crush_device_class": "",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.encrypted": "0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.osd_id": "0",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.type": "block",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:                "ceph.vdo": "0"
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            },
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "type": "block",
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:            "vg_name": "ceph_vg0"
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:        }
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]:    ]
Nov 29 04:02:46 np0005539550 pedantic_lamport[418249]: }
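The pedantic_lamport output above is a JSON report keyed by OSD id, one entry per logical volume, with the authoritative metadata duplicated between the raw lv_tags string and the parsed tags dict; the shape matches ceph-volume lvm list --format json, though the command itself is an inference from the output, not stated in the log. Reading it back out is straightforward; a sketch that only assumes the structure printed above:

    import json

    def summarize_lvm_list(report_text):
        report = json.loads(report_text)
        for osd_id, lvs in report.items():
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"(fsid={tags['ceph.osd_fsid']}, "
                      f"encrypted={tags['ceph.encrypted']})")

    # For the report above this prints:
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (fsid=5dd67027-..., encrypted=0)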
Nov 29 04:02:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:46.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:46 np0005539550 podman[418233]: 2025-11-29 09:02:46.117024589 +0000 UTC m=+0.995194565 container died 1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 29 04:02:46 np0005539550 systemd[1]: libpod-1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd.scope: Deactivated successfully.
Nov 29 04:02:46 np0005539550 systemd[1]: var-lib-containers-storage-overlay-26a6123022a10d58776994528a483a10c375e1d62a49055cc036b759d7bba916-merged.mount: Deactivated successfully.
Nov 29 04:02:46 np0005539550 podman[418233]: 2025-11-29 09:02:46.546620183 +0000 UTC m=+1.424790149 container remove 1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lamport, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:02:46 np0005539550 systemd[1]: libpod-conmon-1c2549a07c2823412107eee089636e4a4ee130a5da39343fc6bdb7461522f8dd.scope: Deactivated successfully.
Nov 29 04:02:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:47 np0005539550 podman[418413]: 2025-11-29 09:02:47.185305714 +0000 UTC m=+0.041023714 container create 2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:02:47 np0005539550 systemd[1]: Started libpod-conmon-2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b.scope.
Nov 29 04:02:47 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:02:47 np0005539550 podman[418413]: 2025-11-29 09:02:47.168197213 +0000 UTC m=+0.023915233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:47 np0005539550 podman[418413]: 2025-11-29 09:02:47.269536097 +0000 UTC m=+0.125254117 container init 2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:02:47 np0005539550 podman[418413]: 2025-11-29 09:02:47.275768764 +0000 UTC m=+0.131486764 container start 2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:02:47 np0005539550 amazing_panini[418428]: 167 167
Nov 29 04:02:47 np0005539550 podman[418413]: 2025-11-29 09:02:47.281942739 +0000 UTC m=+0.137660759 container attach 2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:02:47 np0005539550 systemd[1]: libpod-2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b.scope: Deactivated successfully.
Nov 29 04:02:47 np0005539550 conmon[418428]: conmon 2f0a56f4564d5630f9f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b.scope/container/memory.events
Nov 29 04:02:47 np0005539550 podman[418413]: 2025-11-29 09:02:47.283832957 +0000 UTC m=+0.139550957 container died 2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:02:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-752ccd342f19ea8dd685b987e84012341a163bec830a7943d07b7ecf9c85d060-merged.mount: Deactivated successfully.
Nov 29 04:02:47 np0005539550 podman[418413]: 2025-11-29 09:02:47.338494614 +0000 UTC m=+0.194212614 container remove 2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:02:47 np0005539550 systemd[1]: libpod-conmon-2f0a56f4564d5630f9f6725dc81757282acc1ee3e5886f4c7bcc0ce3a87c781b.scope: Deactivated successfully.
Nov 29 04:02:47 np0005539550 podman[418452]: 2025-11-29 09:02:47.498115736 +0000 UTC m=+0.033322271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:02:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:47.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:47 np0005539550 podman[418452]: 2025-11-29 09:02:47.731187349 +0000 UTC m=+0.266393904 container create 458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brahmagupta, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 29 04:02:48 np0005539550 systemd[1]: Started libpod-conmon-458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964.scope.
Nov 29 04:02:48 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:02:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3a8ee4a2733f0ffa0b1fdddd0c99473762944556da314ea95e844e7c77e453/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3a8ee4a2733f0ffa0b1fdddd0c99473762944556da314ea95e844e7c77e453/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3a8ee4a2733f0ffa0b1fdddd0c99473762944556da314ea95e844e7c77e453/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:48 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce3a8ee4a2733f0ffa0b1fdddd0c99473762944556da314ea95e844e7c77e453/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:02:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:48.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:48 np0005539550 podman[418452]: 2025-11-29 09:02:48.120193159 +0000 UTC m=+0.655399764 container init 458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:02:48 np0005539550 podman[418452]: 2025-11-29 09:02:48.127582135 +0000 UTC m=+0.662788650 container start 458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:02:48 np0005539550 podman[418452]: 2025-11-29 09:02:48.130839267 +0000 UTC m=+0.666045872 container attach 458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 29 04:02:48 np0005539550 nova_compute[257631]: 2025-11-29 09:02:48.272 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]: {
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]:        "osd_id": 0,
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]:        "type": "bluestore"
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]:    }
Nov 29 04:02:48 np0005539550 loving_brahmagupta[418468]: }
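Note: the JSON block just printed by the one-shot ceph container (loving_brahmagupta) is a ceph-volume style listing keyed by OSD UUID; it maps osd.0 to its bluestore logical volume. A minimal sketch (Python; assumes exactly the shape shown) that reduces it to an osd_id -> device map:

    import json

    # Verbatim structure from the container output above.
    raw = '''{
        "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
            "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "type": "bluestore"
        }
    }'''
    inventory = json.loads(raw)
    devices = {osd["osd_id"]: osd["device"] for osd in inventory.values()}
    print(devices)  # {0: '/dev/mapper/ceph_vg0-ceph_lv0'}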
Nov 29 04:02:48 np0005539550 systemd[1]: libpod-458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964.scope: Deactivated successfully.
Nov 29 04:02:48 np0005539550 podman[418452]: 2025-11-29 09:02:48.988521026 +0000 UTC m=+1.523727581 container died 458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 29 04:02:49 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ce3a8ee4a2733f0ffa0b1fdddd0c99473762944556da314ea95e844e7c77e453-merged.mount: Deactivated successfully.
Nov 29 04:02:49 np0005539550 podman[418452]: 2025-11-29 09:02:49.057613746 +0000 UTC m=+1.592820261 container remove 458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 04:02:49 np0005539550 systemd[1]: libpod-conmon-458383a8698aa5ef53b107f1e0ad9e13b075ca177b999847448877850e4e8964.scope: Deactivated successfully.
Nov 29 04:02:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:02:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:02:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ba1d6edd-d3c9-4c60-afd9-7d77c7af9f7c does not exist
Nov 29 04:02:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6e933e64-b76c-4495-8906-a388ffcb16d7 does not exist
Nov 29 04:02:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev ed59d42e-1807-4a89-ad9c-2ac537b2c6bc does not exist
Nov 29 04:02:49 np0005539550 nova_compute[257631]: 2025-11-29 09:02:49.400 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:49.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:50.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:02:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:52.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:53 np0005539550 nova_compute[257631]: 2025-11-29 09:02:53.274 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:02:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:54.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:02:54 np0005539550 nova_compute[257631]: 2025-11-29 09:02:54.402 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:02:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:56.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:02:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:57.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:02:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:02:58.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:02:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:02:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:02:58 np0005539550 nova_compute[257631]: 2025-11-29 09:02:58.276 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:02:59 np0005539550 nova_compute[257631]: 2025-11-29 09:02:59.404 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:02:59
Nov 29 04:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'backups', 'vms', 'volumes', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 29 04:02:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
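Note: the balancer pass above ran in upmap mode over all eleven pools and prepared 0/10 changes; with all 305 PGs active+clean there is nothing to move. To confirm from a client, something like the following sketch works (ceph balancer status is a real command; the JSON field names assumed here match its usual output):

    import json
    import subprocess

    # Ask the mgr balancer module for its state; --format json is machine-readable.
    out = subprocess.check_output(["ceph", "balancer", "status", "--format", "json"])
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))  # expected here: True upmap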
Nov 29 04:02:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:02:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:02:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:02:59.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:00.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:01 np0005539550 podman[418588]: 2025-11-29 09:03:01.209283212 +0000 UTC m=+0.061782597 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 04:03:01 np0005539550 podman[418587]: 2025-11-29 09:03:01.209404735 +0000 UTC m=+0.061508150 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:03:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:01.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:03 np0005539550 nova_compute[257631]: 2025-11-29 09:03:03.326 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:03.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:04.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:04 np0005539550 nova_compute[257631]: 2025-11-29 09:03:04.422 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:05.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:07.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:08.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:08 np0005539550 nova_compute[257631]: 2025-11-29 09:03:08.328 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:03:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:03:08 np0005539550 nova_compute[257631]: 2025-11-29 09:03:08.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:03:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:03:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:03:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:03:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:03:09 np0005539550 nova_compute[257631]: 2025-11-29 09:03:09.425 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:09.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:09 np0005539550 nova_compute[257631]: 2025-11-29 09:03:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:09 np0005539550 nova_compute[257631]: 2025-11-29 09:03:09.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:09 np0005539550 nova_compute[257631]: 2025-11-29 09:03:09.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:03:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:10 np0005539550 nova_compute[257631]: 2025-11-29 09:03:10.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:11.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:11 np0005539550 nova_compute[257631]: 2025-11-29 09:03:11.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:11 np0005539550 nova_compute[257631]: 2025-11-29 09:03:11.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:03:11 np0005539550 nova_compute[257631]: 2025-11-29 09:03:11.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:03:11 np0005539550 nova_compute[257631]: 2025-11-29 09:03:11.932 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:03:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:12.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:12 np0005539550 podman[418658]: 2025-11-29 09:03:12.379823029 +0000 UTC m=+0.102486973 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 04:03:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.331 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.944 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.945 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.945 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:03:13 np0005539550 nova_compute[257631]: 2025-11-29 09:03:13.945 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:03:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:14.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:14 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:14 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818656489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.368 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.427 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.549 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.550 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4042MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.550 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.550 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.734 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.735 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:03:14 np0005539550 nova_compute[257631]: 2025-11-29 09:03:14.806 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:03:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:15 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:03:15 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530509530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:03:15 np0005539550 nova_compute[257631]: 2025-11-29 09:03:15.257 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:03:15 np0005539550 nova_compute[257631]: 2025-11-29 09:03:15.263 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:03:15 np0005539550 nova_compute[257631]: 2025-11-29 09:03:15.327 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:03:15 np0005539550 nova_compute[257631]: 2025-11-29 09:03:15.329 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:03:15 np0005539550 nova_compute[257631]: 2025-11-29 09:03:15.330 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
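Note: the inventory dict logged just above is what the resource tracker reports to Placement; effective schedulable capacity per resource class is normally (total - reserved) * allocation_ratio. Worked through with the exact numbers shown (a sketch, not tied to any Nova internals):

    # Inventory as reported above: (total, reserved, allocation_ratio).
    inventory = {
        "VCPU":      (8,    0,   4.0),
        "MEMORY_MB": (7680, 512, 1.0),
        "DISK_GB":   (20,   1,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        # Placement's effective capacity: (total - reserved) * allocation_ratio
        print(rc, (total - reserved) * ratio)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 17.1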
Nov 29 04:03:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:15.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:16.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:17.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:18.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:18 np0005539550 nova_compute[257631]: 2025-11-29 09:03:18.380 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:03:18.993 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:03:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:03:18.993 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:03:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:03:18.993 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:03:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:19 np0005539550 nova_compute[257631]: 2025-11-29 09:03:19.326 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:19 np0005539550 nova_compute[257631]: 2025-11-29 09:03:19.471 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:19.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:20.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:03:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
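Note: every pg_autoscaler line above fits one formula: raw pg target = capacity_ratio * bias * T, where T is the cluster-wide PG budget, here evidently 300 (consistent with the default mon_target_pg_per_osd = 100 and three OSDs behind the 21 GiB shown in the pgmap lines; both are assumptions). The raw value is then quantized to a power of two and clamped by per-pool minimums, which is why the near-zero pools stay at their current 16/32. A sketch reproducing three of the logged targets:

    # Reproduce the autoscaler's raw pg target from the ratios logged above.
    # Assumption: mon_target_pg_per_osd = 100 (default) and 3 OSDs -> T = 300.
    T = 100 * 3
    pools = {  # name: (capacity_ratio, bias) as logged above
        ".mgr":               (2.0538165363856318e-05, 1.0),
        "volumes":            (0.0021614147124511445,  1.0),
        "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * T)
    # .mgr 0.006161..., volumes 0.648424..., cephfs.cephfs.meta 0.001744...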
Nov 29 04:03:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:22.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:23 np0005539550 nova_compute[257631]: 2025-11-29 09:03:23.382 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:24.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:24 np0005539550 nova_compute[257631]: 2025-11-29 09:03:24.472 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:25.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:26.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:27.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:27 np0005539550 nova_compute[257631]: 2025-11-29 09:03:27.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:28.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:28 np0005539550 nova_compute[257631]: 2025-11-29 09:03:28.385 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:29 np0005539550 nova_compute[257631]: 2025-11-29 09:03:29.475 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:30.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:31 np0005539550 podman[418789]: 2025-11-29 09:03:31.349149308 +0000 UTC m=+0.079014002 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 04:03:31 np0005539550 podman[418790]: 2025-11-29 09:03:31.359258363 +0000 UTC m=+0.078467008 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
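Each podman health_status record above embeds the full container definition as a config_data={...} value printed as a Python literal (single quotes, True). A minimal sketch for recovering it, where extract_config_data is a hypothetical helper and the balanced-brace scan assumes no unmatched braces inside string values:

import ast

def extract_config_data(line: str) -> dict:
    # Isolate the balanced {...} span after "config_data=", then parse it.
    # The journal prints it as a Python literal, so ast.literal_eval (not
    # json.loads) is the appropriate parser.
    start = line.index('config_data=') + len('config_data=')
    depth = 0
    for i in range(start, len(line)):
        if line[i] == '{':
            depth += 1
        elif line[i] == '}':
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
    raise ValueError('unbalanced config_data')

# Shortened sample in the same shape as the health_status lines above.
sample = ("container health_status 32abb6cb... (image=..., name=multipathd, "
          "config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, "
          "'healthcheck': {'test': '/openstack/healthcheck'}}, tcib_managed=true)")
print(extract_config_data(sample)['healthcheck'])   # {'test': '/openstack/healthcheck'}
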
Nov 29 04:03:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:31.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.118693) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012118919, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1183, "num_deletes": 259, "total_data_size": 1968711, "memory_usage": 1997584, "flush_reason": "Manual Compaction"}
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012142926, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1946407, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75659, "largest_seqno": 76841, "table_properties": {"data_size": 1940819, "index_size": 2982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11788, "raw_average_key_size": 19, "raw_value_size": 1929555, "raw_average_value_size": 3178, "num_data_blocks": 133, "num_entries": 607, "num_filter_entries": 607, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764406898, "oldest_key_time": 1764406898, "file_creation_time": 1764407012, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 24317 microseconds, and 9670 cpu microseconds.
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.142999) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1946407 bytes OK
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.143043) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.144647) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.144672) EVENT_LOG_v1 {"time_micros": 1764407012144664, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.144700) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1963471, prev total WAL file size 1963471, number of live WAL files 2.
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.145773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303135' seq:72057594037927935, type:22 .. '6C6F676D0033323730' seq:0, type:0; will stop at (end)
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1900KB)], [170(13MB)]
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012145824, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 16325916, "oldest_snapshot_seqno": -1}
Nov 29 04:03:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:32.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 11430 keys, 16184724 bytes, temperature: kUnknown
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012293263, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 16184724, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16109244, "index_size": 45799, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28613, "raw_key_size": 302227, "raw_average_key_size": 26, "raw_value_size": 15907146, "raw_average_value_size": 1391, "num_data_blocks": 1751, "num_entries": 11430, "num_filter_entries": 11430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764407012, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.294068) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 16184724 bytes
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.295617) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.6 rd, 109.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.7 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(16.7) write-amplify(8.3) OK, records in: 11961, records dropped: 531 output_compression: NoCompression
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.295664) EVENT_LOG_v1 {"time_micros": 1764407012295644, "job": 106, "event": "compaction_finished", "compaction_time_micros": 147557, "compaction_time_cpu_micros": 65889, "output_level": 6, "num_output_files": 1, "total_output_size": 16184724, "num_input_records": 11961, "num_output_records": 11430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012296412, "job": 106, "event": "table_file_deletion", "file_number": 172}
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407012301173, "job": 106, "event": "table_file_deletion", "file_number": 170}
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.145717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.301292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.301300) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.301303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.301307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:32 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:32.301310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
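The JOB 106 summary's throughput and amplification figures can be reproduced from the EVENT_LOG_v1 records above. A worked check, with every input copied from the log and MB taken as decimal megabytes:

l0_in    = 1_946_407        # flushed L0 table #172 (job 105 output)
total_in = 16_325_916       # job 106 input_data_size (L0 #172 + L6 #170)
out      = 16_184_724       # new L6 table #173
t        = 147_557 / 1e6    # compaction_time_micros, in seconds

print(round(out / l0_in, 1))               # 8.3   -> write-amplify(8.3)
print(round((total_in + out) / l0_in, 1))  # 16.7  -> read-write-amplify(16.7)
print(round(total_in / t / 1e6, 1))        # 110.6 -> MB/sec rd
print(round(out / t / 1e6, 1))             # 109.7 -> MB/sec wr
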
Nov 29 04:03:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:33 np0005539550 nova_compute[257631]: 2025-11-29 09:03:33.388 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:33.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:34.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:34 np0005539550 nova_compute[257631]: 2025-11-29 09:03:34.477 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:35.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:36.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:37.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:38.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:38 np0005539550 nova_compute[257631]: 2025-11-29 09:03:38.428 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:39 np0005539550 nova_compute[257631]: 2025-11-29 09:03:39.509 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:39.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:40.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:41.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:41 np0005539550 nova_compute[257631]: 2025-11-29 09:03:41.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:03:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:42.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:43 np0005539550 podman[418885]: 2025-11-29 09:03:43.342732191 +0000 UTC m=+0.079915754 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:03:43 np0005539550 nova_compute[257631]: 2025-11-29 09:03:43.431 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:43.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:44.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:44 np0005539550 nova_compute[257631]: 2025-11-29 09:03:44.510 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:45.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:46.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:46 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:47.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:48.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:48 np0005539550 nova_compute[257631]: 2025-11-29 09:03:48.432 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:49 np0005539550 nova_compute[257631]: 2025-11-29 09:03:49.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:49.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:50.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:03:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c3a2e52a-a862-4f27-8fb8-67fec5f21fdd does not exist
Nov 29 04:03:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3b4fa576-eac8-495f-9cda-80352a3ad463 does not exist
Nov 29 04:03:50 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e933ed1f-5bee-4f7e-af22-b00244ad3fca does not exist
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:03:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
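This audit burst is the mgr polling the monitor (the cephadm module, judging by the osd_remove_queue key above); most cmd=[...] payloads are JSON and can be recovered directly from the line. A hedged parser sketch shaped to the lines above; note that some entries, such as the config-key set command, are printed in a non-JSON form and would need separate handling:

import json, re

AUDIT_RE = re.compile(
    r"from='(?P<src>[^']+)' entity='(?P<entity>[^']+)' "
    r"cmd=(?P<cmd>\[.*\]): dispatch"
)

sample = ("log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' "
          "entity='mgr.compute-0.pdhsqi' "
          'cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch')

m = AUDIT_RE.search(sample)
for cmd in json.loads(m.group('cmd')):
    print(m.group('entity'), '->', cmd['prefix'])
# mgr.compute-0.pdhsqi -> config generate-minimal-conf
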
Nov 29 04:03:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:51 np0005539550 podman[419186]: 2025-11-29 09:03:51.182243925 +0000 UTC m=+0.049697604 container create 02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:03:51 np0005539550 systemd[1]: Started libpod-conmon-02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b.scope.
Nov 29 04:03:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:03:51 np0005539550 podman[419186]: 2025-11-29 09:03:51.161854481 +0000 UTC m=+0.029308200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:51 np0005539550 podman[419186]: 2025-11-29 09:03:51.266170299 +0000 UTC m=+0.133623988 container init 02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:03:51 np0005539550 podman[419186]: 2025-11-29 09:03:51.274456228 +0000 UTC m=+0.141909907 container start 02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_volhard, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 29 04:03:51 np0005539550 podman[419186]: 2025-11-29 09:03:51.27811137 +0000 UTC m=+0.145565039 container attach 02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_volhard, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 29 04:03:51 np0005539550 optimistic_volhard[419202]: 167 167
Nov 29 04:03:51 np0005539550 systemd[1]: libpod-02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b.scope: Deactivated successfully.
Nov 29 04:03:51 np0005539550 podman[419186]: 2025-11-29 09:03:51.282785278 +0000 UTC m=+0.150238957 container died 02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_volhard, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:03:51 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f9d23016fe616461f5f8674a4dc4a63315e897a73a99cb549f181e7a4c26a8ad-merged.mount: Deactivated successfully.
Nov 29 04:03:51 np0005539550 podman[419186]: 2025-11-29 09:03:51.321138234 +0000 UTC m=+0.188591913 container remove 02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_volhard, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:03:51 np0005539550 systemd[1]: libpod-conmon-02dba34ab99493c7eb19afd6884be6b28216f56d1ccfbd1568da893e7903d63b.scope: Deactivated successfully.
Nov 29 04:03:51 np0005539550 podman[419225]: 2025-11-29 09:03:51.507595222 +0000 UTC m=+0.046655067 container create 90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kowalevski, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:03:51 np0005539550 systemd[1]: Started libpod-conmon-90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343.scope.
Nov 29 04:03:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:03:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:03:51 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:03:51 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:03:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78652b198a2ed90a9003dc0adc5a0efe5977e675dd3f36a256d7b1037804f11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78652b198a2ed90a9003dc0adc5a0efe5977e675dd3f36a256d7b1037804f11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78652b198a2ed90a9003dc0adc5a0efe5977e675dd3f36a256d7b1037804f11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78652b198a2ed90a9003dc0adc5a0efe5977e675dd3f36a256d7b1037804f11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:51 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f78652b198a2ed90a9003dc0adc5a0efe5977e675dd3f36a256d7b1037804f11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
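The xfs notices above flag that this filesystem's inode timestamps are 32-bit and saturate at 0x7fffffff seconds after the Unix epoch. A one-line check of the date that corresponds to:

from datetime import datetime, timezone
print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00
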
Nov 29 04:03:51 np0005539550 podman[419225]: 2025-11-29 09:03:51.488175932 +0000 UTC m=+0.027235807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:51 np0005539550 podman[419225]: 2025-11-29 09:03:51.597461016 +0000 UTC m=+0.136520871 container init 90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:03:51 np0005539550 podman[419225]: 2025-11-29 09:03:51.604160935 +0000 UTC m=+0.143220780 container start 90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:03:51 np0005539550 podman[419225]: 2025-11-29 09:03:51.608012172 +0000 UTC m=+0.147072047 container attach 90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kowalevski, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:03:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:51.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.016094) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032016543, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 428, "num_deletes": 250, "total_data_size": 348421, "memory_usage": 355824, "flush_reason": "Manual Compaction"}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032022508, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 283158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 76842, "largest_seqno": 77269, "table_properties": {"data_size": 280773, "index_size": 484, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6618, "raw_average_key_size": 20, "raw_value_size": 275863, "raw_average_value_size": 854, "num_data_blocks": 22, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407012, "oldest_key_time": 1764407012, "file_creation_time": 1764407032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 6492 microseconds, and 3065 cpu microseconds.
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.022578) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 283158 bytes OK
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.022607) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.025632) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.025659) EVENT_LOG_v1 {"time_micros": 1764407032025651, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.025681) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 345799, prev total WAL file size 345799, number of live WAL files 2.
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.026971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373536' seq:72057594037927935, type:22 .. '6D6772737461740033303037' seq:0, type:0; will stop at (end)
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(276KB)], [173(15MB)]
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032027011, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 16467882, "oldest_snapshot_seqno": -1}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 11246 keys, 12694758 bytes, temperature: kUnknown
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032142325, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 12694758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12625151, "index_size": 40374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28165, "raw_key_size": 298605, "raw_average_key_size": 26, "raw_value_size": 12430872, "raw_average_value_size": 1105, "num_data_blocks": 1525, "num_entries": 11246, "num_filter_entries": 11246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764407032, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.142697) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 12694758 bytes
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.144653) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.7 rd, 110.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 15.4 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(103.0) write-amplify(44.8) OK, records in: 11753, records dropped: 507 output_compression: NoCompression
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.144685) EVENT_LOG_v1 {"time_micros": 1764407032144671, "job": 108, "event": "compaction_finished", "compaction_time_micros": 115411, "compaction_time_cpu_micros": 43087, "output_level": 6, "num_output_files": 1, "total_output_size": 12694758, "num_input_records": 11753, "num_output_records": 11246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032144964, "job": 108, "event": "table_file_deletion", "file_number": 175}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407032150084, "job": 108, "event": "table_file_deletion", "file_number": 173}
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.026846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.150121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.150125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.150127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.150129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:03:52.150130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:03:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:52.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:52 np0005539550 awesome_kowalevski[419242]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:03:52 np0005539550 awesome_kowalevski[419242]: --> relative data size: 1.0
Nov 29 04:03:52 np0005539550 awesome_kowalevski[419242]: --> All data devices are unavailable
Nov 29 04:03:52 np0005539550 systemd[1]: libpod-90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343.scope: Deactivated successfully.
Nov 29 04:03:52 np0005539550 podman[419225]: 2025-11-29 09:03:52.402193971 +0000 UTC m=+0.941253816 container died 90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 29 04:03:52 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f78652b198a2ed90a9003dc0adc5a0efe5977e675dd3f36a256d7b1037804f11-merged.mount: Deactivated successfully.
Nov 29 04:03:52 np0005539550 podman[419225]: 2025-11-29 09:03:52.518253935 +0000 UTC m=+1.057313780 container remove 90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:03:52 np0005539550 systemd[1]: libpod-conmon-90d66b7a628b7876b3d3ccaaf734d3316306c2cd2d080b4009a5e55714b48343.scope: Deactivated successfully.
Nov 29 04:03:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:53 np0005539550 podman[419414]: 2025-11-29 09:03:53.19599278 +0000 UTC m=+0.037172638 container create 08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Nov 29 04:03:53 np0005539550 systemd[1]: Started libpod-conmon-08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921.scope.
Nov 29 04:03:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:03:53 np0005539550 podman[419414]: 2025-11-29 09:03:53.258738191 +0000 UTC m=+0.099918069 container init 08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:03:53 np0005539550 podman[419414]: 2025-11-29 09:03:53.264549927 +0000 UTC m=+0.105729785 container start 08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:03:53 np0005539550 podman[419414]: 2025-11-29 09:03:53.26746103 +0000 UTC m=+0.108640888 container attach 08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:03:53 np0005539550 romantic_haslett[419431]: 167 167
Nov 29 04:03:53 np0005539550 systemd[1]: libpod-08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921.scope: Deactivated successfully.
Nov 29 04:03:53 np0005539550 podman[419414]: 2025-11-29 09:03:53.269592824 +0000 UTC m=+0.110772682 container died 08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 29 04:03:53 np0005539550 podman[419414]: 2025-11-29 09:03:53.180459418 +0000 UTC m=+0.021639306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:53 np0005539550 systemd[1]: var-lib-containers-storage-overlay-4ac75201db1738f7d7b3f0491b31dea05e7b6bf519396e1b434f9a5db99cba14-merged.mount: Deactivated successfully.
Nov 29 04:03:53 np0005539550 podman[419414]: 2025-11-29 09:03:53.320526407 +0000 UTC m=+0.161706305 container remove 08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 29 04:03:53 np0005539550 systemd[1]: libpod-conmon-08c68a14e811dbaf50135022af8044ca5042311ba6203cd6d01515bf56576921.scope: Deactivated successfully.
Nov 29 04:03:53 np0005539550 nova_compute[257631]: 2025-11-29 09:03:53.434 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:53 np0005539550 podman[419453]: 2025-11-29 09:03:53.540963071 +0000 UTC m=+0.054340300 container create cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:03:53 np0005539550 systemd[1]: Started libpod-conmon-cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8.scope.
Nov 29 04:03:53 np0005539550 podman[419453]: 2025-11-29 09:03:53.513864668 +0000 UTC m=+0.027241897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:53 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:03:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8ec5aec5bba4f0b6374f337da176aa8db54c3ed231462fdceadb7ec9fe378e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8ec5aec5bba4f0b6374f337da176aa8db54c3ed231462fdceadb7ec9fe378e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8ec5aec5bba4f0b6374f337da176aa8db54c3ed231462fdceadb7ec9fe378e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:53 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce8ec5aec5bba4f0b6374f337da176aa8db54c3ed231462fdceadb7ec9fe378e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:53 np0005539550 podman[419453]: 2025-11-29 09:03:53.642360956 +0000 UTC m=+0.155738235 container init cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:03:53 np0005539550 podman[419453]: 2025-11-29 09:03:53.653363053 +0000 UTC m=+0.166740242 container start cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kalam, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:03:53 np0005539550 podman[419453]: 2025-11-29 09:03:53.661632821 +0000 UTC m=+0.175010050 container attach cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:03:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:53.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:54.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]: {
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:    "0": [
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:        {
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "devices": [
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "/dev/loop3"
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            ],
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "lv_name": "ceph_lv0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "lv_size": "7511998464",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "name": "ceph_lv0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "tags": {
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.cluster_name": "ceph",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.crush_device_class": "",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.encrypted": "0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.osd_id": "0",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.type": "block",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:                "ceph.vdo": "0"
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            },
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "type": "block",
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:            "vg_name": "ceph_vg0"
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:        }
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]:    ]
Nov 29 04:03:54 np0005539550 wizardly_kalam[419469]: }
Nov 29 04:03:54 np0005539550 systemd[1]: libpod-cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8.scope: Deactivated successfully.
Nov 29 04:03:54 np0005539550 podman[419453]: 2025-11-29 09:03:54.376060771 +0000 UTC m=+0.889437980 container died cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 29 04:03:54 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ce8ec5aec5bba4f0b6374f337da176aa8db54c3ed231462fdceadb7ec9fe378e-merged.mount: Deactivated successfully.
Nov 29 04:03:54 np0005539550 podman[419453]: 2025-11-29 09:03:54.443975502 +0000 UTC m=+0.957352711 container remove cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 29 04:03:54 np0005539550 systemd[1]: libpod-conmon-cf0313e8f2197d2d4ef78720be858eec19723783ed9f4b2398acbfedb11370c8.scope: Deactivated successfully.
Nov 29 04:03:54 np0005539550 nova_compute[257631]: 2025-11-29 09:03:54.515 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:55 np0005539550 podman[419634]: 2025-11-29 09:03:55.155661013 +0000 UTC m=+0.080261333 container create 0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 29 04:03:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:55 np0005539550 podman[419634]: 2025-11-29 09:03:55.109236814 +0000 UTC m=+0.033837174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:55 np0005539550 systemd[1]: Started libpod-conmon-0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b.scope.
Nov 29 04:03:55 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:03:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:03:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:55.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:03:56 np0005539550 podman[419634]: 2025-11-29 09:03:56.009724381 +0000 UTC m=+0.934324791 container init 0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:03:56 np0005539550 podman[419634]: 2025-11-29 09:03:56.022323409 +0000 UTC m=+0.946923759 container start 0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 29 04:03:56 np0005539550 epic_matsumoto[419650]: 167 167
Nov 29 04:03:56 np0005539550 systemd[1]: libpod-0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b.scope: Deactivated successfully.
Nov 29 04:03:56 np0005539550 podman[419634]: 2025-11-29 09:03:56.187781337 +0000 UTC m=+1.112381677 container attach 0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 29 04:03:56 np0005539550 podman[419634]: 2025-11-29 09:03:56.188224269 +0000 UTC m=+1.112824589 container died 0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 04:03:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:56.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:56 np0005539550 systemd[1]: var-lib-containers-storage-overlay-027c95314c2a888391a8e9ec22f64353676c747d9f347a3d5149bd48549efcaa-merged.mount: Deactivated successfully.
Nov 29 04:03:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:03:56 np0005539550 podman[419634]: 2025-11-29 09:03:56.996597354 +0000 UTC m=+1.921197714 container remove 0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 29 04:03:57 np0005539550 systemd[1]: libpod-conmon-0d04020574396772578d315104f83c6ce5d09534fb53754e908cc1497f0d081b.scope: Deactivated successfully.
Nov 29 04:03:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:57 np0005539550 podman[419676]: 2025-11-29 09:03:57.200807989 +0000 UTC m=+0.047046426 container create c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 04:03:57 np0005539550 systemd[1]: Started libpod-conmon-c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4.scope.
Nov 29 04:03:57 np0005539550 podman[419676]: 2025-11-29 09:03:57.181765969 +0000 UTC m=+0.028004406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:03:57 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:03:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fa574af92c9fdb013e26a0e2c695777f8a1def9aaa4356836a32794a59c7aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fa574af92c9fdb013e26a0e2c695777f8a1def9aaa4356836a32794a59c7aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fa574af92c9fdb013e26a0e2c695777f8a1def9aaa4356836a32794a59c7aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:57 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fa574af92c9fdb013e26a0e2c695777f8a1def9aaa4356836a32794a59c7aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:03:57 np0005539550 podman[419676]: 2025-11-29 09:03:57.304167243 +0000 UTC m=+0.150405690 container init c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_johnson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Nov 29 04:03:57 np0005539550 podman[419676]: 2025-11-29 09:03:57.315769956 +0000 UTC m=+0.162008393 container start c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_johnson, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Nov 29 04:03:57 np0005539550 podman[419676]: 2025-11-29 09:03:57.321177812 +0000 UTC m=+0.167416219 container attach c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_johnson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:03:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:57.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]: {
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]:        "osd_id": 0,
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]:        "type": "bluestore"
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]:    }
Nov 29 04:03:58 np0005539550 infallible_johnson[419693]: }
Nov 29 04:03:58 np0005539550 systemd[1]: libpod-c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4.scope: Deactivated successfully.
Nov 29 04:03:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:03:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:03:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:03:58 np0005539550 podman[419714]: 2025-11-29 09:03:58.238102494 +0000 UTC m=+0.030535011 container died c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 29 04:03:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:03:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:03:58 np0005539550 nova_compute[257631]: 2025-11-29 09:03:58.435 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:59 np0005539550 systemd[1]: var-lib-containers-storage-overlay-94fa574af92c9fdb013e26a0e2c695777f8a1def9aaa4356836a32794a59c7aa-merged.mount: Deactivated successfully.
Nov 29 04:03:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:03:59 np0005539550 nova_compute[257631]: 2025-11-29 09:03:59.519 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:03:59
Nov 29 04:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'volumes', '.mgr', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'vms']
Nov 29 04:03:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:03:59 np0005539550 podman[419714]: 2025-11-29 09:03:59.621124159 +0000 UTC m=+1.413556666 container remove c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:03:59 np0005539550 systemd[1]: libpod-conmon-c2da1769f1fc6bd0b7bd86779762dc40f07cdf8824bbebf85214ee54136b2da4.scope: Deactivated successfully.
Nov 29 04:03:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:03:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:03:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:03:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:03:59.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:00.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi'
Nov 29 04:04:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:04:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi'
Nov 29 04:04:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 00509a9b-4840-46b8-980b-d0a16cc34eea does not exist
Nov 29 04:04:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev e42efa2d-cc44-4638-96a3-0f5ec6d874b3 does not exist
Nov 29 04:04:01 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9b56526e-8fae-4d1d-bd93-c7b5b2b4988d does not exist
Nov 29 04:04:01 np0005539550 podman[419755]: 2025-11-29 09:04:01.622138744 +0000 UTC m=+0.074891958 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:04:01 np0005539550 podman[419756]: 2025-11-29 09:04:01.631857258 +0000 UTC m=+0.079415941 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 04:04:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:01.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi'
Nov 29 04:04:02 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi'
Nov 29 04:04:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:02.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:03 np0005539550 nova_compute[257631]: 2025-11-29 09:04:03.438 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:04:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:03.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:04.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:04 np0005539550 nova_compute[257631]: 2025-11-29 09:04:04.523 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:04:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:05.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:06.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:07.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:08 np0005539550 nova_compute[257631]: 2025-11-29 09:04:08.441 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:04:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:04:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:04:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:04:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:04:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:04:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:04:09 np0005539550 nova_compute[257631]: 2025-11-29 09:04:09.526 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:04:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:09.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:09 np0005539550 nova_compute[257631]: 2025-11-29 09:04:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:04:09 np0005539550 nova_compute[257631]: 2025-11-29 09:04:09.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:04:09 np0005539550 nova_compute[257631]: 2025-11-29 09:04:09.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:04:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:10.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:10 np0005539550 nova_compute[257631]: 2025-11-29 09:04:10.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:04:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:11.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:11 np0005539550 nova_compute[257631]: 2025-11-29 09:04:11.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:04:11 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
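[annotation] The recurring `_set_new_cache_sizes` line is the monitor retuning its cache split; the names suggest incremental-osdmap, full-osdmap, and RocksDB (kv) caches, and the three allocations nearly partition the reported budget. A quick check of the logged figures (arithmetic only, no Ceph internals assumed):

```python
# Sanity-check the figures from the _set_new_cache_sizes line above.
cache_size = 1020054731   # total cache budget reported by the mon
inc_alloc  = 343932928    # incremental-map cache (name inferred)
full_alloc = 348127232    # full-map cache (name inferred)
kv_alloc   = 318767104    # RocksDB (kv) cache (name inferred)

total = inc_alloc + full_alloc + kv_alloc
print(total)                                      # 1010827264
print(f"{total / cache_size:.1%} of the budget")  # ~99.1%; the rest is slack
```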
Nov 29 04:04:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:12.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:13 np0005539550 nova_compute[257631]: 2025-11-29 09:04:13.441 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:13.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:13 np0005539550 nova_compute[257631]: 2025-11-29 09:04:13.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:13 np0005539550 nova_compute[257631]: 2025-11-29 09:04:13.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:04:13 np0005539550 nova_compute[257631]: 2025-11-29 09:04:13.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:04:13 np0005539550 nova_compute[257631]: 2025-11-29 09:04:13.944 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:04:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:14.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:14 np0005539550 podman[419876]: 2025-11-29 09:04:14.395471374 +0000 UTC m=+0.125776680 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
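[annotation] The podman entry above is a container health-check event; the fields worth watching (container name, health_status, health_failing_streak) sit inside one long attribute list. A small, purely illustrative sketch for pulling those fields out of such a line during log triage -- the field names are copied from the entry itself, everything else is a placeholder:

```python
# Extract the interesting fields from a podman "container health_status"
# journal line like the one above. Field names come from the log entry.
import re

FIELDS = ("name", "health_status", "health_failing_streak")

def parse_health_event(line: str) -> dict:
    out = {}
    for key in FIELDS:
        m = re.search(rf"\b{key}=([^,)]+)", line)
        if m:
            out[key] = m.group(1)
    return out

line = ("... container health_status a934e36cc85b... (image=..., "
        "name=ovn_controller, health_status=healthy, "
        "health_failing_streak=0, ...)")
print(parse_health_event(line))
# {'name': 'ovn_controller', 'health_status': 'healthy',
#  'health_failing_streak': '0'}
```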
Nov 29 04:04:14 np0005539550 nova_compute[257631]: 2025-11-29 09:04:14.527 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:14 np0005539550 nova_compute[257631]: 2025-11-29 09:04:14.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:15.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:15 np0005539550 nova_compute[257631]: 2025-11-29 09:04:15.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.073 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.073 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.074 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.074 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.074 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:16.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:16 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140661824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.539 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.712 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.713 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4024MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.713 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.713 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.927 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.927 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 04:04:16 np0005539550 nova_compute[257631]: 2025-11-29 09:04:16.942 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:04:16 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:04:17 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561967471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:04:17 np0005539550 nova_compute[257631]: 2025-11-29 09:04:17.409 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 04:04:17 np0005539550 nova_compute[257631]: 2025-11-29 09:04:17.416 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 04:04:17 np0005539550 nova_compute[257631]: 2025-11-29 09:04:17.442 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 04:04:17 np0005539550 nova_compute[257631]: 2025-11-29 09:04:17.443 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 04:04:17 np0005539550 nova_compute[257631]: 2025-11-29 09:04:17.444 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
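[annotation] The block from "Running periodic task ComputeManager.update_available_resource" down to the lock release above is nova's periodic resource audit: take the compute_resources lock, measure hypervisor capacity, shell out to `ceph df --format=json` for RBD-backed disk usage, then reconcile inventory with placement ("Inventory has not changed..."). A sketch of the disk-usage step, mirroring the exact command the tracker logs; the JSON keys read from the output are the usual `ceph df` layout and are an assumption here, since the log shows only the command and its exit status:

```python
# Sketch of the "ceph df" step from the audit above. The command line is
# verbatim from the log; the stats keys are assumed, not shown in the log.
import json
import subprocess

def ceph_free_gib(conf="/etc/ceph/ceph.conf", user="openstack"):
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", user, "--conf", conf]
    )
    stats = json.loads(out)["stats"]          # assumed key layout
    return stats["total_avail_bytes"] / 1024 ** 3

# The tracker then reports this as free_disk in the
# "Hypervisor/Node resource view" line that follows.
```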
Nov 29 04:04:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:17.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:18.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:18 np0005539550 nova_compute[257631]: 2025-11-29 09:04:18.444 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:04:18.994 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:04:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:04:18.995 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:04:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:04:18.995 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
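[annotation] Both nova and the OVN metadata agent emit these acquire/acquired/"released" triplets from oslo.concurrency's named locks. A minimal usage sketch; the lock name is taken from the log, while the decorated function body is a placeholder:

```python
# Minimal oslo.concurrency usage matching the lock lines above.
# "_check_child_processes" is the lock name from the log.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # critical section; entry and exit produce the
    # acquired / "released" DEBUG lines seen above
    pass
```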
Nov 29 04:04:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:19 np0005539550 nova_compute[257631]: 2025-11-29 09:04:19.529 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:19.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:20.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:04:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
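[annotation] Each pg_autoscaler line above can be reproduced from the numbers it prints: raw pg target = usage ratio x bias x (target PGs per OSD x number of OSDs), then quantized to a power of two. With the default mon_target_pg_per_osd of 100, the factor of 300 implied by the arithmetic suggests 3 OSDs -- an inference from the printed figures, not something stated in the log. Worked example for the 'volumes' pool:

```python
# Reproduce the 'volumes' line from the pg_autoscaler output above.
# 300 = mon_target_pg_per_osd (default 100) * 3 OSDs is inferred from
# the printed numbers, not read directly from this log.
usage_ratio = 0.0021614147124511445   # "using ... of space"
bias = 1.0
pg_budget = 300

raw_target = usage_ratio * bias * pg_budget
print(raw_target)   # 0.6484244137353433 -- matches "pg target" in the log

# Quantization rounds to a power of two, and pg_num is only changed when
# the ideal value differs from the current one by a large factor, so the
# pool stays "quantized to 32 (current 32)".
```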
Nov 29 04:04:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:21 np0005539550 nova_compute[257631]: 2025-11-29 09:04:21.439 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:22.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:23 np0005539550 nova_compute[257631]: 2025-11-29 09:04:23.445 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:23.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:24.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:24 np0005539550 nova_compute[257631]: 2025-11-29 09:04:24.531 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:25.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:26 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:27.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:27 np0005539550 nova_compute[257631]: 2025-11-29 09:04:27.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:28.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:28 np0005539550 nova_compute[257631]: 2025-11-29 09:04:28.447 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:29 np0005539550 nova_compute[257631]: 2025-11-29 09:04:29.533 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:29.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:30.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:31 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:32.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:32 np0005539550 podman[420007]: 2025-11-29 09:04:32.328656785 +0000 UTC m=+0.063770588 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 04:04:32 np0005539550 podman[420008]: 2025-11-29 09:04:32.349984532 +0000 UTC m=+0.081859293 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:04:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:33 np0005539550 nova_compute[257631]: 2025-11-29 09:04:33.496 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:04:33 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 6601.4 total, 600.0 interval
    Cumulative writes: 61K writes, 230K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
    Cumulative WAL: 61K writes, 23K syncs, 2.66 writes per sync, written: 0.22 GB, 0.03 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 702 writes, 1067 keys, 702 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
    Interval WAL: 702 writes, 350 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
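[annotation] This is RocksDB's periodic stats dump (the 600.0-second interval matches the stats-dump cadence); "Cumulative" counters span the whole uptime, "Interval" counters only the last window, and the derived figures are simple ratios of the logged counters:

```python
# Derive the "2.01 writes per sync" figure from the interval counters
# in the RocksDB stats dump above.
interval_wal_writes = 702
interval_wal_syncs = 350
print(round(interval_wal_writes / interval_wal_syncs, 2))  # 2.01
```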
Nov 29 04:04:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:33.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:34.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:34 np0005539550 nova_compute[257631]: 2025-11-29 09:04:34.573 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:35.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:36.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:36 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:37.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:38.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:38 np0005539550 nova_compute[257631]: 2025-11-29 09:04:38.497 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:39 np0005539550 nova_compute[257631]: 2025-11-29 09:04:39.575 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:39.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:40.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:41.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:41 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:42.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:43 np0005539550 nova_compute[257631]: 2025-11-29 09:04:43.499 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:43.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:44.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:44 np0005539550 nova_compute[257631]: 2025-11-29 09:04:44.578 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:45 np0005539550 podman[420098]: 2025-11-29 09:04:45.412962224 +0000 UTC m=+0.150302678 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 29 04:04:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:45.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:46.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:47 np0005539550 ceph-mgr[74726]: [devicehealth INFO root] Check health
Nov 29 04:04:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:47.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:48.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:48 np0005539550 nova_compute[257631]: 2025-11-29 09:04:48.502 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:49 np0005539550 nova_compute[257631]: 2025-11-29 09:04:49.581 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:04:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:49.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:04:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:50.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:51.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:52.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:53 np0005539550 nova_compute[257631]: 2025-11-29 09:04:53.504 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:53.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:54.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:54 np0005539550 nova_compute[257631]: 2025-11-29 09:04:54.638 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:55.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:04:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:04:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:57.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:04:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:04:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:04:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:04:58.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:04:58 np0005539550 nova_compute[257631]: 2025-11-29 09:04:58.506 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:04:59
Nov 29 04:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'default.rgw.control', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta']
Nov 29 04:04:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
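[annotation] The balancer pass above runs in upmap mode with a 5% max-misplaced budget across all eleven pools; "prepared 0/10 changes" means it considered up to ten candidate upmap changes and found none worth applying, i.e. PG placement is already even. A sketch of querying the same state by hand -- `ceph balancer status` is a standard Ceph command, but treating its output as JSON with these exact keys is an assumption:

```python
# Query the balancer for the information the log lines report.
# The command is standard Ceph CLI; the JSON keys are assumed here.
import json
import subprocess

status = json.loads(
    subprocess.check_output(["ceph", "balancer", "status", "--format=json"])
)
print(status.get("mode"))    # expect "upmap", as in the log
print(status.get("active"))
```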
Nov 29 04:04:59 np0005539550 nova_compute[257631]: 2025-11-29 09:04:59.640 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:04:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:04:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:04:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:04:59.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:00.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
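The pgmap heartbeat repeats every couple of seconds with the same summary shape. A minimal sketch that lifts the version, PG count, and capacity figures out of one of these lines; the regex mirrors the exact layout seen here:

import re

PGMAP = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; '
    r'(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, '
    r'(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail')

line = ('pgmap v3791: 305 pgs: 305 active+clean; 120 MiB data, '
        '1.6 GiB used, 19 GiB / 21 GiB avail')
print(PGMAP.search(line).groupdict())
# {'ver': '3791', 'pgs': '305', 'data': '120 MiB', 'used': '1.6 GiB', ...}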
Nov 29 04:05:01 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:01 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:01 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:01.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:02.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 21866593-0ab5-45be-a74d-3a4f132ea607 does not exist
Nov 29 04:05:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1d41c13a-e06a-4a2c-a6af-28a2aab563cc does not exist
Nov 29 04:05:02 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8b7641e6-d6bf-452a-b099-c4500b0b6ea1 does not exist
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:05:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
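The audit burst above is the cephadm mgr (mgr.compute-0.pdhsqi) refreshing per-host material: a minimal ceph.conf, the client.admin and client.bootstrap-osd keys, and the destroyed-OSD tree. The same mon commands can be replayed by hand; both subcommands are stock ceph CLI, run here with whatever credentials the local keyring provides:

import subprocess

minimal_conf = subprocess.check_output(
    ["ceph", "config", "generate-minimal-conf"], text=True)
admin_keyring = subprocess.check_output(
    ["ceph", "auth", "get", "client.admin"], text=True)
print(minimal_conf)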
Nov 29 04:05:03 np0005539550 podman[420338]: 2025-11-29 09:05:03.158032365 +0000 UTC m=+0.070590630 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 04:05:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:03 np0005539550 podman[420339]: 2025-11-29 09:05:03.196101704 +0000 UTC m=+0.091374683 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
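Both health_status=healthy events come from podman's periodic healthcheck timer running the mounted /openstack/healthcheck script. To query the same state on demand, a minimal sketch via podman inspect; the key name varies across podman versions ("Health" in Docker-compatible output, "Healthcheck" in some older releases), so both are probed:

import json, subprocess

def container_health(name: str) -> str:
    # podman inspect returns a JSON list with one entry per container.
    state = json.loads(subprocess.check_output(["podman", "inspect", name]))[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

print(container_health("multipathd"))  # e.g. "healthy"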
Nov 29 04:05:03 np0005539550 nova_compute[257631]: 2025-11-29 09:05:03.508 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:05:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:03 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:05:03 np0005539550 podman[420491]: 2025-11-29 09:05:03.666086535 +0000 UTC m=+0.048997715 container create f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 29 04:05:03 np0005539550 systemd[1]: Started libpod-conmon-f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb.scope.
Nov 29 04:05:03 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:05:03 np0005539550 podman[420491]: 2025-11-29 09:05:03.645416164 +0000 UTC m=+0.028327424 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:03 np0005539550 podman[420491]: 2025-11-29 09:05:03.756701238 +0000 UTC m=+0.139612428 container init f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:05:03 np0005539550 podman[420491]: 2025-11-29 09:05:03.763660714 +0000 UTC m=+0.146571884 container start f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:03 np0005539550 podman[420491]: 2025-11-29 09:05:03.76708804 +0000 UTC m=+0.149999210 container attach f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:05:03 np0005539550 vigilant_einstein[420508]: 167 167
Nov 29 04:05:03 np0005539550 systemd[1]: libpod-f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb.scope: Deactivated successfully.
Nov 29 04:05:03 np0005539550 podman[420491]: 2025-11-29 09:05:03.771894711 +0000 UTC m=+0.154805881 container died f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:05:03 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2d4929dcd09ee85ea4942751c6fa3ccd06d0005f14dc22e18dc7dfa529db4eb3-merged.mount: Deactivated successfully.
Nov 29 04:05:03 np0005539550 podman[420491]: 2025-11-29 09:05:03.818203068 +0000 UTC m=+0.201114238 container remove f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_einstein, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:05:03 np0005539550 systemd[1]: libpod-conmon-f9de8e0bb48580447e2e47c8ee55899722840a83a98a739c34855e2b30a88cfb.scope: Deactivated successfully.
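vigilant_einstein is the first of several throwaway cephadm probe containers in this section: each is created, started, prints a line or two (here "167 167", most likely the ceph user and group ids, which are 167 on these images), dies, and is removed within about a second. A minimal sketch that reconstructs such lifecycles by grouping the podman journal events on the 64-hex container id:

import re
from collections import defaultdict

EVENT = re.compile(r'container (?P<event>\w+) (?P<cid>[0-9a-f]{64})')

def lifecycles(journal_text: str) -> dict:
    # Map container id -> ordered list of podman events seen for it.
    seen = defaultdict(list)
    for line in journal_text.splitlines():
        m = EVENT.search(line)
        if m:
            seen[m.group('cid')].append(m.group('event'))
    return dict(seen)

# For f9de8e0bb485... above this yields
# ['create', 'init', 'start', 'attach', 'died', 'remove'].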
Nov 29 04:05:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:03.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:03 np0005539550 podman[420532]: 2025-11-29 09:05:03.970125625 +0000 UTC m=+0.040988403 container create bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Nov 29 04:05:04 np0005539550 systemd[1]: Started libpod-conmon-bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b.scope.
Nov 29 04:05:04 np0005539550 podman[420532]: 2025-11-29 09:05:03.952911332 +0000 UTC m=+0.023774130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:04 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:05:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7ff77776c931cf3dda3587888f8d730d253b23536e20bc2d4ae41a11109499/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7ff77776c931cf3dda3587888f8d730d253b23536e20bc2d4ae41a11109499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7ff77776c931cf3dda3587888f8d730d253b23536e20bc2d4ae41a11109499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7ff77776c931cf3dda3587888f8d730d253b23536e20bc2d4ae41a11109499/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:04 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e7ff77776c931cf3dda3587888f8d730d253b23536e20bc2d4ae41a11109499/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
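The 0x7fffffff in these xfs warnings is the 32-bit signed time_t ceiling; the kernel is noting that this filesystem's inode timestamps run out in 2038. Converting the constant shows the exact boundary:

from datetime import datetime, timezone
print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
# 2038-01-19 03:14:07+00:00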
Nov 29 04:05:04 np0005539550 podman[420532]: 2025-11-29 09:05:04.066004521 +0000 UTC m=+0.136867309 container init bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 29 04:05:04 np0005539550 podman[420532]: 2025-11-29 09:05:04.077017689 +0000 UTC m=+0.147880457 container start bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:05:04 np0005539550 podman[420532]: 2025-11-29 09:05:04.080405424 +0000 UTC m=+0.151268202 container attach bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 04:05:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:04.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:04 np0005539550 nova_compute[257631]: 2025-11-29 09:05:04.642 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:04 np0005539550 dreamy_greider[420549]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:05:04 np0005539550 dreamy_greider[420549]: --> relative data size: 1.0
Nov 29 04:05:04 np0005539550 dreamy_greider[420549]: --> All data devices are unavailable
Nov 29 04:05:04 np0005539550 systemd[1]: libpod-bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b.scope: Deactivated successfully.
Nov 29 04:05:04 np0005539550 podman[420532]: 2025-11-29 09:05:04.922253193 +0000 UTC m=+0.993115981 container died bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_greider, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:05:04 np0005539550 systemd[1]: var-lib-containers-storage-overlay-0e7ff77776c931cf3dda3587888f8d730d253b23536e20bc2d4ae41a11109499-merged.mount: Deactivated successfully.
Nov 29 04:05:04 np0005539550 podman[420532]: 2025-11-29 09:05:04.994681838 +0000 UTC m=+1.065544616 container remove bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:05 np0005539550 systemd[1]: libpod-conmon-bdb8127d0cd1129981c498d28f105250e61ebc2ebca3d79ade674578bd90680b.scope: Deactivated successfully.
Nov 29 04:05:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:05 np0005539550 podman[420719]: 2025-11-29 09:05:05.604227925 +0000 UTC m=+0.047591960 container create 53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_feynman, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 29 04:05:05 np0005539550 systemd[1]: Started libpod-conmon-53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f.scope.
Nov 29 04:05:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:05:05 np0005539550 podman[420719]: 2025-11-29 09:05:05.582156279 +0000 UTC m=+0.025520374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:05 np0005539550 podman[420719]: 2025-11-29 09:05:05.690787836 +0000 UTC m=+0.134151891 container init 53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:05:05 np0005539550 podman[420719]: 2025-11-29 09:05:05.69847983 +0000 UTC m=+0.141843865 container start 53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:05 np0005539550 podman[420719]: 2025-11-29 09:05:05.701693171 +0000 UTC m=+0.145057226 container attach 53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 29 04:05:05 np0005539550 xenodochial_feynman[420735]: 167 167
Nov 29 04:05:05 np0005539550 systemd[1]: libpod-53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f.scope: Deactivated successfully.
Nov 29 04:05:05 np0005539550 podman[420719]: 2025-11-29 09:05:05.706357729 +0000 UTC m=+0.149721774 container died 53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:05 np0005539550 systemd[1]: var-lib-containers-storage-overlay-1ca60b992ed7c457c45343d18bef2936dba647c383f20df549bb495efcc14f29-merged.mount: Deactivated successfully.
Nov 29 04:05:05 np0005539550 podman[420719]: 2025-11-29 09:05:05.744231183 +0000 UTC m=+0.187595218 container remove 53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:05 np0005539550 systemd[1]: libpod-conmon-53c924a4061ffcc6c50b6df820b7c89f7973b27286ea009d80f46730df17f56f.scope: Deactivated successfully.
Nov 29 04:05:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:05.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:05 np0005539550 podman[420762]: 2025-11-29 09:05:05.9132005 +0000 UTC m=+0.037747492 container create 4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 04:05:05 np0005539550 systemd[1]: Started libpod-conmon-4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2.scope.
Nov 29 04:05:05 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:05:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48402ed4c0251d12a2e6e75b66f5ea7ca47ec167820eb543d77d6968bd1409ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48402ed4c0251d12a2e6e75b66f5ea7ca47ec167820eb543d77d6968bd1409ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48402ed4c0251d12a2e6e75b66f5ea7ca47ec167820eb543d77d6968bd1409ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:05 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48402ed4c0251d12a2e6e75b66f5ea7ca47ec167820eb543d77d6968bd1409ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:05 np0005539550 podman[420762]: 2025-11-29 09:05:05.898575791 +0000 UTC m=+0.023122813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:05 np0005539550 podman[420762]: 2025-11-29 09:05:05.994490308 +0000 UTC m=+0.119037300 container init 4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_snyder, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:05:06 np0005539550 podman[420762]: 2025-11-29 09:05:06.00053158 +0000 UTC m=+0.125078582 container start 4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:06 np0005539550 podman[420762]: 2025-11-29 09:05:06.003657089 +0000 UTC m=+0.128204081 container attach 4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:06.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]: {
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:    "0": [
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:        {
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "devices": [
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "/dev/loop3"
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            ],
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "lv_name": "ceph_lv0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "lv_size": "7511998464",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "name": "ceph_lv0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "tags": {
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.cluster_name": "ceph",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.crush_device_class": "",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.encrypted": "0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.osd_id": "0",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.type": "block",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:                "ceph.vdo": "0"
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            },
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "type": "block",
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:            "vg_name": "ceph_vg0"
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:        }
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]:    ]
Nov 29 04:05:06 np0005539550 blissful_snyder[420779]: }
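The JSON that blissful_snyder just printed has the shape of ceph-volume lvm list --format json output (OSD id keyed to its logical volumes, with the ceph.* LV tags duplicated into a tags map); that command name is an inference from the fields, since the probe's argv is not logged. A minimal sketch for flattening it:

import json

def lvm_osds(text: str):
    # Yield one summary dict per logical volume in the listing.
    for osd_id, devs in json.loads(text).items():
        for dev in devs:
            tags = dev.get("tags", {})
            yield {"osd_id": osd_id,
                   "lv_path": dev.get("lv_path"),
                   "devices": dev.get("devices", []),
                   "osd_fsid": tags.get("ceph.osd_fsid"),
                   "cluster_fsid": tags.get("ceph.cluster_fsid")}

# For the record above: osd.0 on /dev/ceph_vg0/ceph_lv0, backed by
# /dev/loop3, osd_fsid 5dd67027-4f06-4800-93bd-47ed1a74c5e6.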
Nov 29 04:05:06 np0005539550 systemd[1]: libpod-4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2.scope: Deactivated successfully.
Nov 29 04:05:06 np0005539550 conmon[420779]: conmon 4b654e3a014e3a768e7e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2.scope/container/memory.events
Nov 29 04:05:06 np0005539550 podman[420788]: 2025-11-29 09:05:06.789388295 +0000 UTC m=+0.025109533 container died 4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:05:06 np0005539550 systemd[1]: var-lib-containers-storage-overlay-48402ed4c0251d12a2e6e75b66f5ea7ca47ec167820eb543d77d6968bd1409ba-merged.mount: Deactivated successfully.
Nov 29 04:05:06 np0005539550 podman[420788]: 2025-11-29 09:05:06.838256527 +0000 UTC m=+0.073977745 container remove 4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_snyder, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:06 np0005539550 systemd[1]: libpod-conmon-4b654e3a014e3a768e7e422c956922b512966bc8c92322eaa86476334790e0e2.scope: Deactivated successfully.
Nov 29 04:05:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:07 np0005539550 podman[420945]: 2025-11-29 09:05:07.413399257 +0000 UTC m=+0.037659880 container create 7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:05:07 np0005539550 systemd[1]: Started libpod-conmon-7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb.scope.
Nov 29 04:05:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:05:07 np0005539550 podman[420945]: 2025-11-29 09:05:07.490733896 +0000 UTC m=+0.114994519 container init 7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 29 04:05:07 np0005539550 podman[420945]: 2025-11-29 09:05:07.396286226 +0000 UTC m=+0.020546829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:07 np0005539550 podman[420945]: 2025-11-29 09:05:07.499302512 +0000 UTC m=+0.123563115 container start 7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:07 np0005539550 podman[420945]: 2025-11-29 09:05:07.502728358 +0000 UTC m=+0.126988981 container attach 7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 04:05:07 np0005539550 funny_dijkstra[420961]: 167 167
Nov 29 04:05:07 np0005539550 systemd[1]: libpod-7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb.scope: Deactivated successfully.
Nov 29 04:05:07 np0005539550 conmon[420961]: conmon 7dc4947a6eb3f98101fc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb.scope/container/memory.events
Nov 29 04:05:07 np0005539550 podman[420945]: 2025-11-29 09:05:07.505149359 +0000 UTC m=+0.129409962 container died 7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 29 04:05:07 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f8c4696738da1afe949b668f17586890d846e59cd053507216c682a3e8e7f455-merged.mount: Deactivated successfully.
Nov 29 04:05:07 np0005539550 podman[420945]: 2025-11-29 09:05:07.539003152 +0000 UTC m=+0.163263755 container remove 7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 29 04:05:07 np0005539550 systemd[1]: libpod-conmon-7dc4947a6eb3f98101fc0d1d438ce43d852df83651d76be00f4c6b9c1076becb.scope: Deactivated successfully.
Nov 29 04:05:07 np0005539550 podman[420985]: 2025-11-29 09:05:07.691258568 +0000 UTC m=+0.041676841 container create 78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:05:07 np0005539550 systemd[1]: Started libpod-conmon-78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3.scope.
Nov 29 04:05:07 np0005539550 podman[420985]: 2025-11-29 09:05:07.674498436 +0000 UTC m=+0.024916729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:05:07 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:05:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e921af119a7cb2526e9b826f63db663536ac24467d8282010ce1971f020849/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e921af119a7cb2526e9b826f63db663536ac24467d8282010ce1971f020849/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e921af119a7cb2526e9b826f63db663536ac24467d8282010ce1971f020849/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:07 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e921af119a7cb2526e9b826f63db663536ac24467d8282010ce1971f020849/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:05:07 np0005539550 podman[420985]: 2025-11-29 09:05:07.795210187 +0000 UTC m=+0.145628490 container init 78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:05:07 np0005539550 podman[420985]: 2025-11-29 09:05:07.812940164 +0000 UTC m=+0.163358447 container start 78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:05:07 np0005539550 podman[420985]: 2025-11-29 09:05:07.816964955 +0000 UTC m=+0.167383228 container attach 78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:05:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:07.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:08.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
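Each HEAD / probe produces the radosgw triplet above: a request-start marker, a request-done summary, and a beast access-log line whose fields are client address, user, timestamp, request line, HTTP status, response bytes, and latency. A minimal Python sketch for pulling those fields out of a beast line (a hypothetical parser inferred from the lines above, not part of the captured log):

    import re

    # Field layout inferred from the beast lines above:
    # beast: <handle>: <addr> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
    BEAST_RE = re.compile(
        r'beast: (?P<handle>0x[0-9a-f]+): (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:09:05:08.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group('addr'), m.group('status'), float(m.group('latency')))
    # -> 192.168.122.102 200 0.001000025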
Nov 29 04:05:08 np0005539550 nova_compute[257631]: 2025-11-29 09:05:08.513 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:05:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]: {
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]:        "osd_id": 0,
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]:        "type": "bluestore"
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]:    }
Nov 29 04:05:08 np0005539550 elated_dubinsky[421001]: }
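The JSON block the elated_dubinsky container printed above is an OSD listing: one entry per OSD UUID carrying its backing device, the cluster fsid, and the objectstore type. A short Python sketch for consuming such output, assuming it has been captured to a file named osd_list.json (a hypothetical name; in the log the JSON only goes to the container's stdout):

    import json

    with open('osd_list.json') as f:  # hypothetical capture of the stdout above
        osds = json.load(f)

    for osd_uuid, info in osds.items():
        # Keys observed in the log: ceph_fsid, device, osd_id, osd_uuid, type
        print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']}, "
              f"fsid {info['ceph_fsid']}")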
Nov 29 04:05:08 np0005539550 systemd[1]: libpod-78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3.scope: Deactivated successfully.
Nov 29 04:05:08 np0005539550 podman[420985]: 2025-11-29 09:05:08.809115701 +0000 UTC m=+1.159533984 container died 78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:05:08 np0005539550 systemd[1]: libpod-78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3.scope: Consumed 1.003s CPU time.
Nov 29 04:05:09 np0005539550 systemd[1]: var-lib-containers-storage-overlay-48e921af119a7cb2526e9b826f63db663536ac24467d8282010ce1971f020849-merged.mount: Deactivated successfully.
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:09 np0005539550 podman[420985]: 2025-11-29 09:05:09.203948529 +0000 UTC m=+1.554366842 container remove 78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:05:09 np0005539550 systemd[1]: libpod-conmon-78c3c13461fd1590462f042ea2de271bd77fe679c38eb98e7824eb97b221d8a3.scope: Deactivated successfully.
Nov 29 04:05:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:05:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:09 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:05:09 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev f5deb1ba-2e7e-4854-8440-4a4ffed7d390 does not exist
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 51e95819-f478-4af9-981e-55027995f985 does not exist
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 0a04c5e7-1e9c-4bba-951f-c182868bb577 does not exist
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:05:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:05:09 np0005539550 nova_compute[257631]: 2025-11-29 09:05:09.644 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:09.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:09 np0005539550 nova_compute[257631]: 2025-11-29 09:05:09.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:09 np0005539550 nova_compute[257631]: 2025-11-29 09:05:09.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:05:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:10.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:10 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:10 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:05:10 np0005539550 nova_compute[257631]: 2025-11-29 09:05:10.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:11.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:11 np0005539550 nova_compute[257631]: 2025-11-29 09:05:11.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:12.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:12 np0005539550 nova_compute[257631]: 2025-11-29 09:05:12.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:13 np0005539550 nova_compute[257631]: 2025-11-29 09:05:13.515 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:13 np0005539550 nova_compute[257631]: 2025-11-29 09:05:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:13 np0005539550 nova_compute[257631]: 2025-11-29 09:05:13.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:05:13 np0005539550 nova_compute[257631]: 2025-11-29 09:05:13.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:05:13 np0005539550 nova_compute[257631]: 2025-11-29 09:05:13.938 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:05:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:14.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:14 np0005539550 nova_compute[257631]: 2025-11-29 09:05:14.647 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:16.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:16 np0005539550 podman[421090]: 2025-11-29 09:05:16.345567919 +0000 UTC m=+0.087325271 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 04:05:16 np0005539550 nova_compute[257631]: 2025-11-29 09:05:16.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:17.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:17 np0005539550 nova_compute[257631]: 2025-11-29 09:05:17.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:17 np0005539550 nova_compute[257631]: 2025-11-29 09:05:17.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:05:17 np0005539550 nova_compute[257631]: 2025-11-29 09:05:17.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:05:17 np0005539550 nova_compute[257631]: 2025-11-29 09:05:17.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:05:17 np0005539550 nova_compute[257631]: 2025-11-29 09:05:17.953 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:05:17 np0005539550 nova_compute[257631]: 2025-11-29 09:05:17.953 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:05:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:18.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:18 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685726268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:18 np0005539550 nova_compute[257631]: 2025-11-29 09:05:18.386 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
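The Running cmd / CMD returned pair above is nova's resource tracker polling Ceph pool capacity through oslo.concurrency's subprocess helper. A minimal sketch of that call pattern, reusing the exact command line from the log (the surrounding setup is assumed, not nova's actual code path):

    from oslo_concurrency import processutils

    # Same invocation the resource tracker logs above; the --id and --conf
    # values come from that log line and would differ per deployment.
    stdout, stderr = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(stdout)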
Nov 29 04:05:18 np0005539550 nova_compute[257631]: 2025-11-29 09:05:18.516 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:18 np0005539550 nova_compute[257631]: 2025-11-29 09:05:18.565 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 04:05:18 np0005539550 nova_compute[257631]: 2025-11-29 09:05:18.566 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3994MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 04:05:18 np0005539550 nova_compute[257631]: 2025-11-29 09:05:18.566 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:05:18 np0005539550 nova_compute[257631]: 2025-11-29 09:05:18.567 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:05:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:05:18.995 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:05:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:05:18.995 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:05:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:05:18.996 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:05:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:19 np0005539550 nova_compute[257631]: 2025-11-29 09:05:19.555 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:05:19 np0005539550 nova_compute[257631]: 2025-11-29 09:05:19.555 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:05:19 np0005539550 nova_compute[257631]: 2025-11-29 09:05:19.573 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:05:19 np0005539550 nova_compute[257631]: 2025-11-29 09:05:19.648 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.740526) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119740657, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 996, "num_deletes": 251, "total_data_size": 1547594, "memory_usage": 1570048, "flush_reason": "Manual Compaction"}
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119791263, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1518411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77270, "largest_seqno": 78265, "table_properties": {"data_size": 1513656, "index_size": 2342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10532, "raw_average_key_size": 19, "raw_value_size": 1503997, "raw_average_value_size": 2811, "num_data_blocks": 104, "num_entries": 535, "num_filter_entries": 535, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407032, "oldest_key_time": 1764407032, "file_creation_time": 1764407119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 50806 microseconds, and 6644 cpu microseconds.
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.791329) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1518411 bytes OK
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.791352) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.793256) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.793277) EVENT_LOG_v1 {"time_micros": 1764407119793270, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.793299) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1543011, prev total WAL file size 1543011, number of live WAL files 2.
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.794163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1482KB)], [176(12MB)]
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119794278, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 14213169, "oldest_snapshot_seqno": -1}
Nov 29 04:05:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:19.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 11266 keys, 12138566 bytes, temperature: kUnknown
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119931010, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 12138566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12069641, "index_size": 39628, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 299675, "raw_average_key_size": 26, "raw_value_size": 11875776, "raw_average_value_size": 1054, "num_data_blocks": 1485, "num_entries": 11266, "num_filter_entries": 11266, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764407119, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.931371) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 12138566 bytes
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.938139) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.9 rd, 88.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(17.4) write-amplify(8.0) OK, records in: 11781, records dropped: 515 output_compression: NoCompression
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.938158) EVENT_LOG_v1 {"time_micros": 1764407119938149, "job": 110, "event": "compaction_finished", "compaction_time_micros": 136828, "compaction_time_cpu_micros": 40863, "output_level": 6, "num_output_files": 1, "total_output_size": 12138566, "num_input_records": 11781, "num_output_records": 11266, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119938589, "job": 110, "event": "table_file_deletion", "file_number": 178}
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407119940671, "job": 110, "event": "table_file_deletion", "file_number": 176}
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.794001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.940776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.940784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.940786) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.940789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:05:19 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:05:19.940792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
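The amplification figures in job 110's "compacted to" summary can be re-derived from the byte counts in the EVENT_LOG lines above; rocksdb appears to normalize both ratios by the newly flushed L0 bytes. A quick check in Python:

    # Byte counts taken from the JOB 109/110 event lines above.
    l0_in = 1_518_411      # flushed L0 table #178
    total_in = 14_213_169  # compaction input_data_size (tables #178 + #176)
    out = 12_138_566       # compacted L6 table #179

    print(round(out / l0_in, 1))               # 8.0  == write-amplify
    print(round((total_in + out) / l0_in, 1))  # 17.4 == read-write-amplify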
Nov 29 04:05:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:05:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310174967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.050 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.056 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.069 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.070 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.070 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.071 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.071 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.072 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.072 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.073 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.073 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.073 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.101 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.101 257641 WARNING nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.101 257641 WARNING nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.101 257641 INFO nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Removable base files: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488 /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.102 257641 INFO nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/f62ef5f82502d01c82174408aec7f3ac942e2488
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.102 257641 INFO nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6e1589dfec5abd76868fdc022175780e085b08de
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.102 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.103 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 29 04:05:20 np0005539550 nova_compute[257631]: 2025-11-29 09:05:20.103 257641 DEBUG nova.virt.libvirt.imagecache [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 29 04:05:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:20.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:05:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
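The pg_autoscaler lines above are internally consistent: for every pool, the logged pg target equals usage_ratio x bias x 300, where the factor 300 is inferred from the logged values themselves (plausibly mon_target_pg_per_osd = 100 across the cluster's 3 OSDs, though the log does not say so); the target is then quantized to a power of two no lower than the pool's minimum. A sketch reproducing three of the lines:

    # (usage_ratio, bias) copied from the pg_autoscaler lines above.
    pools = {
        '.mgr':               (2.0538165363856318e-05, 1.0),
        'volumes':            (0.0021614147124511445, 1.0),
        'cephfs.cephfs.meta': (1.4540294062907128e-06, 4.0),
    }
    SCALE = 300.0  # inferred: logged pg target / (usage_ratio * bias), same for all pools

    for name, (usage, bias) in pools.items():
        print(name, usage * bias * SCALE)
    # -> 0.006161..., 0.648424..., 0.001744..., matching the logged targets,
    #    which the autoscaler then quantizes (to 1, 32, and 16 respectively here).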
Nov 29 04:05:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:21.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:22.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:23 np0005539550 nova_compute[257631]: 2025-11-29 09:05:23.519 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:23.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:24 np0005539550 nova_compute[257631]: 2025-11-29 09:05:24.098 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:24 np0005539550 nova_compute[257631]: 2025-11-29 09:05:24.652 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:25.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:28.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:28 np0005539550 nova_compute[257631]: 2025-11-29 09:05:28.521 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:28 np0005539550 nova_compute[257631]: 2025-11-29 09:05:28.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:29 np0005539550 nova_compute[257631]: 2025-11-29 09:05:29.654 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:29.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:30.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:32.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:33 np0005539550 podman[421220]: 2025-11-29 09:05:33.369351845 +0000 UTC m=+0.090899331 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:05:33 np0005539550 podman[421219]: 2025-11-29 09:05:33.379206333 +0000 UTC m=+0.102383700 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 04:05:33 np0005539550 nova_compute[257631]: 2025-11-29 09:05:33.571 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:34.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:34 np0005539550 nova_compute[257631]: 2025-11-29 09:05:34.710 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:35.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:36.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:37.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:38.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:38 np0005539550 nova_compute[257631]: 2025-11-29 09:05:38.604 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:39 np0005539550 nova_compute[257631]: 2025-11-29 09:05:39.763 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:39.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:41.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:42.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:43 np0005539550 nova_compute[257631]: 2025-11-29 09:05:43.607 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:43.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:44.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:44 np0005539550 nova_compute[257631]: 2025-11-29 09:05:44.766 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:44 np0005539550 nova_compute[257631]: 2025-11-29 09:05:44.914 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:45.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:46.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:47 np0005539550 podman[421312]: 2025-11-29 09:05:47.412485899 +0000 UTC m=+0.136950892 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 04:05:47 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:47 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:47 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:47.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:48.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:48 np0005539550 nova_compute[257631]: 2025-11-29 09:05:48.609 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:49 np0005539550 nova_compute[257631]: 2025-11-29 09:05:49.769 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:49 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:49 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:49 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:49.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:50.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:50 np0005539550 nova_compute[257631]: 2025-11-29 09:05:50.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:50 np0005539550 nova_compute[257631]: 2025-11-29 09:05:50.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 04:05:50 np0005539550 nova_compute[257631]: 2025-11-29 09:05:50.935 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:05:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:51 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:51 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:51 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:51.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:05:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:52.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:05:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:53 np0005539550 nova_compute[257631]: 2025-11-29 09:05:53.610 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:53 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:53 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:53 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:53.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:05:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:54.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:05:54 np0005539550 nova_compute[257631]: 2025-11-29 09:05:54.773 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:55 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:55 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:55 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:56.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:05:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:57 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:57 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:57 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:57.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:05:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:05:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:05:58.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:05:58 np0005539550 nova_compute[257631]: 2025-11-29 09:05:58.613 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:05:59
Nov 29 04:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'backups', 'default.rgw.log', 'images', 'default.rgw.meta', '.rgw.root', 'default.rgw.control']
Nov 29 04:05:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Nov 29 04:05:59 np0005539550 nova_compute[257631]: 2025-11-29 09:05:59.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:05:59 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:05:59 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:05:59 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:05:59.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:00.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:01.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:02 np0005539550 nova_compute[257631]: 2025-11-29 09:06:02.947 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:02 np0005539550 nova_compute[257631]: 2025-11-29 09:06:02.947 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 04:06:02 np0005539550 nova_compute[257631]: 2025-11-29 09:06:02.971 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 04:06:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:03 np0005539550 nova_compute[257631]: 2025-11-29 09:06:03.616 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:03 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:03 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:03 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:03.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:04 np0005539550 podman[421397]: 2025-11-29 09:06:04.349708787 +0000 UTC m=+0.073125439 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:06:04 np0005539550 podman[421396]: 2025-11-29 09:06:04.369005643 +0000 UTC m=+0.097858088 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 04:06:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:04 np0005539550 nova_compute[257631]: 2025-11-29 09:06:04.776 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:05 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:05 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:05 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:05.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:07 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:07 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:07 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:07.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:08.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:06:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:06:08 np0005539550 nova_compute[257631]: 2025-11-29 09:06:08.618 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:06:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:06:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:06:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:06:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:06:09 np0005539550 nova_compute[257631]: 2025-11-29 09:06:09.777 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:09 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:09 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:09 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:09.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:10.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev c2c323b0-8360-402d-9854-a9b7e2e8524e does not exist
Nov 29 04:06:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8a1f0b66-12be-4310-815a-114de72ccce0 does not exist
Nov 29 04:06:10 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 92dcc6aa-0f45-4e8e-8dc0-64ade3847a3f does not exist
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:06:10 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:06:10 np0005539550 nova_compute[257631]: 2025-11-29 09:06:10.944 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:10 np0005539550 nova_compute[257631]: 2025-11-29 09:06:10.945 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 04:06:11 np0005539550 podman[421707]: 2025-11-29 09:06:11.214009244 +0000 UTC m=+0.064074707 container create e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:06:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:11 np0005539550 podman[421707]: 2025-11-29 09:06:11.171983726 +0000 UTC m=+0.022049209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:11 np0005539550 systemd[1]: Started libpod-conmon-e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c.scope.
Nov 29 04:06:11 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:06:11 np0005539550 podman[421707]: 2025-11-29 09:06:11.523147293 +0000 UTC m=+0.373212776 container init e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williamson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:06:11 np0005539550 podman[421707]: 2025-11-29 09:06:11.535469417 +0000 UTC m=+0.385534870 container start e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williamson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 29 04:06:11 np0005539550 sharp_williamson[421723]: 167 167
Nov 29 04:06:11 np0005539550 systemd[1]: libpod-e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c.scope: Deactivated successfully.
Nov 29 04:06:11 np0005539550 conmon[421723]: conmon e6f1288abc5367c02551 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c.scope/container/memory.events
Nov 29 04:06:11 np0005539550 podman[421707]: 2025-11-29 09:06:11.56566823 +0000 UTC m=+0.415733713 container attach e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williamson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 29 04:06:11 np0005539550 podman[421707]: 2025-11-29 09:06:11.56620957 +0000 UTC m=+0.416275053 container died e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williamson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 29 04:06:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:06:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:11 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:06:11 np0005539550 systemd[1]: var-lib-containers-storage-overlay-f0a8955a712be820256ee4836f64109a798614911a9b1bdae5e7f8431824f5d1-merged.mount: Deactivated successfully.
Nov 29 04:06:11 np0005539550 podman[421707]: 2025-11-29 09:06:11.654050388 +0000 UTC m=+0.504115871 container remove e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_williamson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 29 04:06:11 np0005539550 systemd[1]: libpod-conmon-e6f1288abc5367c02551f6f7d1fe5949246f7c1f1825761dc87fcb1e2a7de02c.scope: Deactivated successfully.
Nov 29 04:06:11 np0005539550 podman[421749]: 2025-11-29 09:06:11.890557289 +0000 UTC m=+0.076766260 container create 38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wozniak, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:06:11 np0005539550 nova_compute[257631]: 2025-11-29 09:06:11.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:11 np0005539550 podman[421749]: 2025-11-29 09:06:11.838202754 +0000 UTC m=+0.024411695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:11 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:11 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:11 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:11.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:12 np0005539550 systemd[1]: Started libpod-conmon-38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc.scope.
Nov 29 04:06:12 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:06:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b12a85ed242e0b576235d23c6cb4f6862ae6436cb681cabcf51a58346e43ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b12a85ed242e0b576235d23c6cb4f6862ae6436cb681cabcf51a58346e43ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b12a85ed242e0b576235d23c6cb4f6862ae6436cb681cabcf51a58346e43ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b12a85ed242e0b576235d23c6cb4f6862ae6436cb681cabcf51a58346e43ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:12 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33b12a85ed242e0b576235d23c6cb4f6862ae6436cb681cabcf51a58346e43ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:12 np0005539550 podman[421749]: 2025-11-29 09:06:12.11029968 +0000 UTC m=+0.296508661 container init 38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wozniak, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:06:12 np0005539550 podman[421749]: 2025-11-29 09:06:12.117328513 +0000 UTC m=+0.303537444 container start 38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wozniak, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:06:12 np0005539550 podman[421749]: 2025-11-29 09:06:12.121473392 +0000 UTC m=+0.307682353 container attach 38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wozniak, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:06:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000020s ======
Nov 29 04:06:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:12.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
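The radosgw triplets (`starting new request` / `req done` / `beast: ...`) recur throughout this section: anonymous `HEAD /` probes from 192.168.122.100 and .102 every couple of seconds, the usual signature of load-balancer health checks. A sketch that parses the `beast` access line; the field layout is inferred from the lines in this journal, not from a documented radosgw format:

```python
# Sketch: pull client IP, timestamp, request line, status and latency out of
# a radosgw "beast" access-log line. The regex is fitted to the lines in this
# journal, not to any documented radosgw format.
import re

BEAST_RE = re.compile(
    r"beast: \S+: (?P<ip>\S+) - (?P<user>\S+) "
    r"\[(?P<ts>[^\]]+)\] \"(?P<request>[^\"]+)\" "
    r"(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s"
)

line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
        '[29/Nov/2025:09:06:12.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000020s')
m = BEAST_RE.search(line)
print(m.group("ip"), m.group("request"), m.group("status"), m.group("latency"))
# -> 192.168.122.102 HEAD / HTTP/1.0 200 0.001000020
```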
Nov 29 04:06:12 np0005539550 nova_compute[257631]: 2025-11-29 09:06:12.918 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:12 np0005539550 laughing_wozniak[421765]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:06:12 np0005539550 laughing_wozniak[421765]: --> relative data size: 1.0
Nov 29 04:06:12 np0005539550 laughing_wozniak[421765]: --> All data devices are unavailable
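The `laughing_wozniak` lines are cephadm's drive-group evaluation (ceph-volume batch-style output): one LVM data device was passed in and rejected as unavailable, i.e. it already carries an OSD, so nothing new is created. A hypothetical minimal rendering of that filter; the device list and fields are invented for illustration:

```python
# Hypothetical illustration of the drive-group filter behind the
# laughing_wozniak output: a device that is already an LVM-backed OSD is not
# available for new OSD creation. Device list and fields are invented.
devices = [{"path": "/dev/loop3", "is_lvm": True}]

physical = sum(not d["is_lvm"] for d in devices)
lvm = sum(d["is_lvm"] for d in devices)
print(f"--> passed data devices: {physical} physical, {lvm} LVM")

available = [d for d in devices if not d["is_lvm"]]  # crude availability rule
if not available:
    print("--> All data devices are unavailable")
```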
Nov 29 04:06:13 np0005539550 systemd[1]: libpod-38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc.scope: Deactivated successfully.
Nov 29 04:06:13 np0005539550 podman[421749]: 2025-11-29 09:06:13.006080765 +0000 UTC m=+1.192289776 container died 38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:06:13 np0005539550 systemd[1]: var-lib-containers-storage-overlay-33b12a85ed242e0b576235d23c6cb4f6862ae6436cb681cabcf51a58346e43ed-merged.mount: Deactivated successfully.
Nov 29 04:06:13 np0005539550 podman[421749]: 2025-11-29 09:06:13.206203025 +0000 UTC m=+1.392411956 container remove 38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:13 np0005539550 systemd[1]: libpod-conmon-38413ed700af26200106d522778d9a7bcd5e7175f34abe3b49e5867934a24bdc.scope: Deactivated successfully.
Nov 29 04:06:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:13 np0005539550 nova_compute[257631]: 2025-11-29 09:06:13.620 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:13 np0005539550 nova_compute[257631]: 2025-11-29 09:06:13.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:13 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:13 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:13 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:14 np0005539550 podman[421934]: 2025-11-29 09:06:14.09245204 +0000 UTC m=+0.082135351 container create c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:14 np0005539550 podman[421934]: 2025-11-29 09:06:14.055151701 +0000 UTC m=+0.044835052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:14 np0005539550 systemd[1]: Started libpod-conmon-c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2.scope.
Nov 29 04:06:14 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:06:14 np0005539550 podman[421934]: 2025-11-29 09:06:14.413622277 +0000 UTC m=+0.403305618 container init c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:06:14 np0005539550 podman[421934]: 2025-11-29 09:06:14.423063066 +0000 UTC m=+0.412746367 container start c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:06:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:14.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:14 np0005539550 musing_bell[421950]: 167 167
Nov 29 04:06:14 np0005539550 systemd[1]: libpod-c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2.scope: Deactivated successfully.
Nov 29 04:06:14 np0005539550 podman[421934]: 2025-11-29 09:06:14.451442615 +0000 UTC m=+0.441125966 container attach c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:06:14 np0005539550 podman[421934]: 2025-11-29 09:06:14.452619118 +0000 UTC m=+0.442302449 container died c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 04:06:14 np0005539550 nova_compute[257631]: 2025-11-29 09:06:14.779 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:14 np0005539550 systemd[1]: var-lib-containers-storage-overlay-360a4c218271d8d7d164fd919942be8d968d6a784bc7664708624a678a04e455-merged.mount: Deactivated successfully.
Nov 29 04:06:14 np0005539550 podman[421934]: 2025-11-29 09:06:14.903075039 +0000 UTC m=+0.892758350 container remove c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:06:15 np0005539550 systemd[1]: libpod-conmon-c0255833cb7dc9b335de0ee1a19f6f327b957c4c1fb102a94123969555e275d2.scope: Deactivated successfully.
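The `musing_bell` container above lives for well under a second and prints only `167 167`, which is consistent with cephadm probing which uid/gid owns `/var/lib/ceph` inside the image (the `ceph` user and group are uid/gid 167 in upstream Ceph containers). A hedged reproduction of that probe; the exact command cephadm runs may differ:

```python
# Sketch: ask the Ceph image which uid/gid owns /var/lib/ceph, mimicking the
# "167 167" probe above. Assumes podman is installed and the image is pulled.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # expected: "167 167"
```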
Nov 29 04:06:15 np0005539550 podman[421976]: 2025-11-29 09:06:15.087812647 +0000 UTC m=+0.048189726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:15 np0005539550 podman[421976]: 2025-11-29 09:06:15.200963915 +0000 UTC m=+0.161341004 container create 0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:06:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:15 np0005539550 systemd[1]: Started libpod-conmon-0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098.scope.
Nov 29 04:06:15 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:06:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf89062d40debb34c994c7a54c406bc2ccbf280fcb792a035c89012e5e942c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf89062d40debb34c994c7a54c406bc2ccbf280fcb792a035c89012e5e942c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf89062d40debb34c994c7a54c406bc2ccbf280fcb792a035c89012e5e942c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:15 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf89062d40debb34c994c7a54c406bc2ccbf280fcb792a035c89012e5e942c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:15 np0005539550 podman[421976]: 2025-11-29 09:06:15.472393318 +0000 UTC m=+0.432770397 container init 0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:06:15 np0005539550 podman[421976]: 2025-11-29 09:06:15.485720201 +0000 UTC m=+0.446097270 container start 0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:15 np0005539550 podman[421976]: 2025-11-29 09:06:15.590093992 +0000 UTC m=+0.550471101 container attach 0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wu, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:06:15 np0005539550 nova_compute[257631]: 2025-11-29 09:06:15.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:15 np0005539550 nova_compute[257631]: 2025-11-29 09:06:15.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 04:06:15 np0005539550 nova_compute[257631]: 2025-11-29 09:06:15.920 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 04:06:15 np0005539550 nova_compute[257631]: 2025-11-29 09:06:15.945 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 04:06:15 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:15 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:15 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:16 np0005539550 trusting_wu[421993]: {
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:    "0": [
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:        {
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "devices": [
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "/dev/loop3"
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            ],
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "lv_name": "ceph_lv0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "lv_size": "7511998464",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "name": "ceph_lv0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "tags": {
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.cluster_name": "ceph",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.crush_device_class": "",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.encrypted": "0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.osd_id": "0",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.type": "block",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:                "ceph.vdo": "0"
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            },
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "type": "block",
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:            "vg_name": "ceph_vg0"
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:        }
Nov 29 04:06:16 np0005539550 trusting_wu[421993]:    ]
Nov 29 04:06:16 np0005539550 trusting_wu[421993]: }
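The JSON printed by `trusting_wu` has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the OSD's identity carried in LVM tags. A sketch pulling out the essentials, with the data trimmed down from the block above:

```python
# Sketch: pick the OSD id, fsid and backing device out of ceph-volume
# "lvm list"-style JSON, as printed by the trusting_wu container above.
import json

report = json.loads("""{
  "0": [{
    "devices": ["/dev/loop3"],
    "lv_path": "/dev/ceph_vg0/ceph_lv0",
    "tags": {
      "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
      "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
      "ceph.osd_id": "0",
      "ceph.type": "block"
    }
  }]
}""")

for osd_id, lvs in report.items():
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(fsid {tags['ceph.osd_fsid']}, devices {lv['devices']})")
```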
Nov 29 04:06:16 np0005539550 systemd[1]: libpod-0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098.scope: Deactivated successfully.
Nov 29 04:06:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:16 np0005539550 podman[422002]: 2025-11-29 09:06:16.447090221 +0000 UTC m=+0.056275569 container died 0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wu, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:06:16 np0005539550 systemd[1]: var-lib-containers-storage-overlay-bf89062d40debb34c994c7a54c406bc2ccbf280fcb792a035c89012e5e942c46-merged.mount: Deactivated successfully.
Nov 29 04:06:16 np0005539550 podman[422002]: 2025-11-29 09:06:16.512448622 +0000 UTC m=+0.121633930 container remove 0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_wu, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 04:06:16 np0005539550 systemd[1]: libpod-conmon-0a32bea5848cae9bd68033dbc639b0ae1648ee52ca1e4d723be27b8083d08098.scope: Deactivated successfully.
Nov 29 04:06:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:17 np0005539550 podman[422160]: 2025-11-29 09:06:17.403426567 +0000 UTC m=+0.065197258 container create 0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 29 04:06:17 np0005539550 systemd[1]: Started libpod-conmon-0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e.scope.
Nov 29 04:06:17 np0005539550 podman[422160]: 2025-11-29 09:06:17.367967484 +0000 UTC m=+0.029738225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:06:17 np0005539550 podman[422160]: 2025-11-29 09:06:17.527674326 +0000 UTC m=+0.189445007 container init 0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 29 04:06:17 np0005539550 podman[422160]: 2025-11-29 09:06:17.538187376 +0000 UTC m=+0.199958057 container start 0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 29 04:06:17 np0005539550 podman[422160]: 2025-11-29 09:06:17.543377004 +0000 UTC m=+0.205147685 container attach 0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 29 04:06:17 np0005539550 keen_pare[422178]: 167 167
Nov 29 04:06:17 np0005539550 systemd[1]: libpod-0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e.scope: Deactivated successfully.
Nov 29 04:06:17 np0005539550 podman[422160]: 2025-11-29 09:06:17.549527231 +0000 UTC m=+0.211297922 container died 0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:17 np0005539550 systemd[1]: var-lib-containers-storage-overlay-16269a7288c87c6470a38d44e2709684dc85a0224bd7a6c2d6de0531a8ba9aea-merged.mount: Deactivated successfully.
Nov 29 04:06:17 np0005539550 podman[422160]: 2025-11-29 09:06:17.608846837 +0000 UTC m=+0.270617498 container remove 0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_pare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 29 04:06:17 np0005539550 systemd[1]: libpod-conmon-0507653cf7e7e79e5fe41470c4899f92cc78b8ebc5b4486cb404fdd6e96d139e.scope: Deactivated successfully.
Nov 29 04:06:17 np0005539550 podman[422180]: 2025-11-29 09:06:17.643135388 +0000 UTC m=+0.160507588 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
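The `health_status=healthy` entry is a podman container-health event for `ovn_controller`; per the embedded `config_data`, the healthcheck runs `/openstack/healthcheck` from a bind mount. A sketch that watches for such events via `podman events`; the JSON field names are an assumption and may vary across podman versions:

```python
# Sketch: stream podman events as line-delimited JSON and surface container
# health results like the ovn_controller health_status event above.
# Field names ("Status", "Name", "HealthStatus") may vary by podman version.
import json
import subprocess

proc = subprocess.Popen(["podman", "events", "--format", "json"],
                        stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    ev = json.loads(line)
    if ev.get("Status") == "health_status":
        print(ev.get("Name"), ev.get("HealthStatus", ev))
```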
Nov 29 04:06:17 np0005539550 podman[422228]: 2025-11-29 09:06:17.819823163 +0000 UTC m=+0.056601516 container create 78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 29 04:06:17 np0005539550 systemd[1]: Started libpod-conmon-78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e.scope.
Nov 29 04:06:17 np0005539550 podman[422228]: 2025-11-29 09:06:17.791740529 +0000 UTC m=+0.028518902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:06:17 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:06:17 np0005539550 nova_compute[257631]: 2025-11-29 09:06:17.921 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d59643886ff4df84412cee025c85a07c3bb5088378f39b7591f41ab0344cff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d59643886ff4df84412cee025c85a07c3bb5088378f39b7591f41ab0344cff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d59643886ff4df84412cee025c85a07c3bb5088378f39b7591f41ab0344cff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:17 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d59643886ff4df84412cee025c85a07c3bb5088378f39b7591f41ab0344cff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:06:17 np0005539550 podman[422228]: 2025-11-29 09:06:17.949016665 +0000 UTC m=+0.185795108 container init 78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:06:17 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:17 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:17 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:17.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:17 np0005539550 podman[422228]: 2025-11-29 09:06:17.959762449 +0000 UTC m=+0.196540822 container start 78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 04:06:17 np0005539550 podman[422228]: 2025-11-29 09:06:17.963573042 +0000 UTC m=+0.200351725 container attach 78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:06:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:18.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:18 np0005539550 nova_compute[257631]: 2025-11-29 09:06:18.622 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:18 np0005539550 sweet_galois[422245]: {
Nov 29 04:06:18 np0005539550 sweet_galois[422245]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:06:18 np0005539550 sweet_galois[422245]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:06:18 np0005539550 sweet_galois[422245]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:06:18 np0005539550 sweet_galois[422245]:        "osd_id": 0,
Nov 29 04:06:18 np0005539550 sweet_galois[422245]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:06:18 np0005539550 sweet_galois[422245]:        "type": "bluestore"
Nov 29 04:06:18 np0005539550 sweet_galois[422245]:    }
Nov 29 04:06:18 np0005539550 sweet_galois[422245]: }
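This second report from `sweet_galois`, keyed by OSD fsid with `"type": "bluestore"`, reads like a ceph-volume device scan and agrees with the LVM tags listed earlier (same osd_id 0, same fsid). A small consistency check over the two blocks:

```python
# Sketch: cross-check the two ceph-volume reports above; the LVM tags and
# the fsid-keyed scan should describe the same OSD.
lvm_tags = {  # from the trusting_wu "lvm list" output
    "ceph.osd_id": "0",
    "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
}
scan = {  # from the sweet_galois output
    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "osd_id": 0,
        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
        "type": "bluestore",
    },
}
entry = scan[lvm_tags["ceph.osd_fsid"]]
assert entry["osd_id"] == int(lvm_tags["ceph.osd_id"])
assert entry["type"] == "bluestore"
print("osd.%d metadata is consistent across both reports" % entry["osd_id"])
```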
Nov 29 04:06:18 np0005539550 systemd[1]: libpod-78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e.scope: Deactivated successfully.
Nov 29 04:06:18 np0005539550 podman[422228]: 2025-11-29 09:06:18.989257044 +0000 UTC m=+1.226035417 container died 78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 04:06:18 np0005539550 systemd[1]: libpod-78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e.scope: Consumed 1.042s CPU time.
Nov 29 04:06:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:06:18.996 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:06:18.997 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:06:18.997 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:19 np0005539550 systemd[1]: var-lib-containers-storage-overlay-11d59643886ff4df84412cee025c85a07c3bb5088378f39b7591f41ab0344cff-merged.mount: Deactivated successfully.
Nov 29 04:06:19 np0005539550 podman[422228]: 2025-11-29 09:06:19.06700762 +0000 UTC m=+1.303785983 container remove 78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 29 04:06:19 np0005539550 systemd[1]: libpod-conmon-78a283ac06bbc279643085b3f11e9d3f18d5b4f75bb8b64505dfad3f2c15fd9e.scope: Deactivated successfully.
Nov 29 04:06:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:06:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:19 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:06:19 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 6ecd73b0-26a9-4e89-8a57-7447e4c6511f does not exist
Nov 29 04:06:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 5804bf71-090c-43bd-b614-f135abe8ca3b does not exist
Nov 29 04:06:19 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 77d4fbf7-58f9-47e4-aede-0c9d077a95b2 does not exist
Nov 29 04:06:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:19 np0005539550 nova_compute[257631]: 2025-11-29 09:06:19.784 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 04:06:19 np0005539550 nova_compute[257631]: 2025-11-29 09:06:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 04:06:19 np0005539550 nova_compute[257631]: 2025-11-29 09:06:19.957 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 04:06:19 np0005539550 nova_compute[257631]: 2025-11-29 09:06:19.959 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 04:06:19 np0005539550 nova_compute[257631]: 2025-11-29 09:06:19.960 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 04:06:19 np0005539550 nova_compute[257631]: 2025-11-29 09:06:19.960 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 04:06:19 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:19 np0005539550 nova_compute[257631]: 2025-11-29 09:06:19.961 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 04:06:19 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:19 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:19.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:20 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:06:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:20.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.512 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
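As part of the resource audit, nova shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` (0.551 s here) to size the RBD-backed pools. A sketch of the same probe; the JSON keys used (`stats`, `total_bytes`, `total_avail_bytes`) match current ceph releases but are assumptions here:

```python
# Sketch: reproduce nova's pool-size probe. Assumes a reachable cluster and
# an "openstack" cephx user; key names follow current `ceph df` JSON output.
import json
import subprocess

cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
df = json.loads(subprocess.run(cmd, capture_output=True, text=True,
                               check=True).stdout)

stats = df["stats"]
gib = 1024 ** 3
print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
      f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")
for pool in df["pools"]:
    print(pool["name"], pool["stats"])
```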
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:06:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
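Each pg_autoscaler line computes a raw PG target as capacity_ratio × bias × K, then quantizes it to a power of two without dropping below the pool's current/minimum count (hence `quantized to 32` even for near-zero targets). From the logged numbers K ≈ 300, which is what `mon_target_pg_per_osd = 100` on a 3-OSD cluster would give; that factor is inferred from the ratios, not stated in the log. A worked check:

```python
# Sketch: reproduce the pg_autoscaler arithmetic from the log lines above.
# K = target PGs per OSD (default 100) x number of OSDs (3 here) = 300 is an
# inference from the logged ratios, not something the log states directly.
K = 100 * 3

pools = [
    # (name, capacity_ratio, bias, logged pg target)
    (".mgr",               2.0538165363856318e-05, 1.0, 0.006161449609156895),
    ("volumes",            0.0021614147124511445,  1.0, 0.6484244137353433),
    ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 0.0017448352875488555),
]
for name, ratio, bias, logged in pools:
    computed = ratio * bias * K
    print(f"{name}: computed {computed:.9g} vs logged {logged:.9g}")
```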
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.756 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.758 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4023MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.759 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.759 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.850 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.850 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.872 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing inventories for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.903 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating ProviderTree inventory for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.903 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Updating inventory in ProviderTree for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
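The inventory dict logged twice above (once by the report client, once by the provider tree) is what placement stores per resource class, and the usable capacity it implies is (total - reserved) * allocation_ratio. A worked sketch with this host's numbers:

```python
# A minimal sketch (not nova code) of the capacity that falls out of the
# inventory logged above: capacity = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32 schedulable
# MEMORY_MB: 7168 schedulable
# DISK_GB: 17.1 schedulable
```

So 8 physical vCPUs are schedulable as 32 because of the 4.0 overcommit ratio, while disk is deliberately undercommitted at 0.9.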
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.924 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing aggregate associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.955 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Refreshing trait associations for resource provider a73c606e-2495-4af4-b703-8d4b3001fdf5, traits: COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NODE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_USB,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 04:06:20 np0005539550 nova_compute[257631]: 2025-11-29 09:06:20.969 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:06:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:06:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3357329831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:06:21 np0005539550 nova_compute[257631]: 2025-11-29 09:06:21.443 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
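The resource tracker's disk numbers on this Ceph-backed host come from shelling out to the ceph CLI, as the two processutils lines bracketing the mon's audit entry show. A hedged sketch of the same call outside nova (assumes the ceph client and the client.openstack keyring are present; the exact JSON layout can vary by Ceph release):

```python
import json
import subprocess

# The exact command logged by oslo_concurrency.processutils above.
out = subprocess.run(
    ["ceph", "df", "--format=json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True,
).stdout

df = json.loads(out)
# Cluster-wide totals live under "stats" in recent releases.
print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])
```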
Nov 29 04:06:21 np0005539550 nova_compute[257631]: 2025-11-29 09:06:21.452 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:06:21 np0005539550 nova_compute[257631]: 2025-11-29 09:06:21.470 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:06:21 np0005539550 nova_compute[257631]: 2025-11-29 09:06:21.472 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:06:21 np0005539550 nova_compute[257631]: 2025-11-29 09:06:21.472 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
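The acquire/release pair bracketing this whole update (waited 0.001s, held 0.713s) is oslo.concurrency's named-lock pattern: every mutation of the tracker's state runs under the "compute_resources" lock. A minimal sketch of the same pattern:

```python
from oslo_concurrency import lockutils

# Same pattern as the "compute_resources" acquire/release logged above:
# all mutations of shared tracker state serialize on one named lock.
@lockutils.synchronized("compute_resources")
def update_available_resource():
    # ... mutate tracked inventory/usage here ...
    pass

# The context-manager form is equivalent:
with lockutils.lock("compute_resources"):
    pass
```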
Nov 29 04:06:21 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:21 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:21 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:21.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
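Every radosgw request shows up as this three-line group: a start marker, a done marker with op/http status, and a beast access-log line. The anonymous "HEAD /" probes arriving from 192.168.122.100 and .102 every two seconds look like load-balancer health checks. A sketch of pulling the fields out of a beast line with a regex (the pattern is mine, not an RGW artifact):

```python
import re

# Field extraction for the beast access-log format seen above.
BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7fdb608746f0: 192.168.122.100 - anonymous '
        '[29/Nov/2025:09:06:21.963 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST.search(line)
print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))
# 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000
```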
Nov 29 04:06:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:22.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:23 np0005539550 nova_compute[257631]: 2025-11-29 09:06:23.623 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:23 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:23 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:23 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:23.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:24.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:24 np0005539550 nova_compute[257631]: 2025-11-29 09:06:24.469 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:24 np0005539550 nova_compute[257631]: 2025-11-29 09:06:24.785 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:25 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:25 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:25 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:25.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:26.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:27 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:27 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:27 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:27 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:28.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:28 np0005539550 nova_compute[257631]: 2025-11-29 09:06:28.626 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:29 np0005539550 nova_compute[257631]: 2025-11-29 09:06:29.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:29 np0005539550 nova_compute[257631]: 2025-11-29 09:06:29.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:29 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:29 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:29 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:29.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:30.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:31 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:31 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:31 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:31.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:32 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:32.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:33 np0005539550 nova_compute[257631]: 2025-11-29 09:06:33.670 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:33 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:33 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:33 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:33.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:34.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:34 np0005539550 nova_compute[257631]: 2025-11-29 09:06:34.733 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:06:34 np0005539550 nova_compute[257631]: 2025-11-29 09:06:34.896 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:06:35 np0005539550 podman[422433]: 2025-11-29 09:06:35.362348031 +0000 UTC m=+0.087971641 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 04:06:35 np0005539550 podman[422434]: 2025-11-29 09:06:35.362754599 +0000 UTC m=+0.088557393 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
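The podman events above are timer-driven container health checks; per the embedded config_data, each check simply executes /openstack/healthcheck inside the container, and health_status=healthy means it exited 0. A sketch of firing the same check by hand for one of the containers named above (assumes podman and these container names on the host):

```python
import subprocess

# Run the container's configured healthcheck once, on demand.
# Exit code 0 corresponds to the health_status=healthy events above.
result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_metadata_agent"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0 else "unhealthy")
```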
Nov 29 04:06:35 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:35 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:35 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:35.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:36.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:37 np0005539550 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 04:06:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Nov 29 04:06:37 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 29 04:06:37 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:37 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:37 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:37.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:38.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:38 np0005539550 nova_compute[257631]: 2025-11-29 09:06:38.672 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 10 op/s
Nov 29 04:06:39 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Nov 29 04:06:39 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 29 04:06:39 np0005539550 radosgw[93278]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 29 04:06:39 np0005539550 nova_compute[257631]: 2025-11-29 09:06:39.900 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:39 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:39 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:39 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:39.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:40.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 0 B/s wr, 57 op/s
Nov 29 04:06:41 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:41 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:41 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:41.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:42 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:42.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 66 KiB/s rd, 0 B/s wr, 109 op/s
Nov 29 04:06:43 np0005539550 nova_compute[257631]: 2025-11-29 09:06:43.675 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:43 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:43 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:43 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:43.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:44.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:44 np0005539550 nova_compute[257631]: 2025-11-29 09:06:44.903 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Nov 29 04:06:45 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:45 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:45 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:45.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:46.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:47 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 91 KiB/s rd, 0 B/s wr, 151 op/s
Nov 29 04:06:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:47.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:48 np0005539550 podman[422530]: 2025-11-29 09:06:48.376255624 +0000 UTC m=+0.119519380 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 29 04:06:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:48.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:48 np0005539550 nova_compute[257631]: 2025-11-29 09:06:48.677 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 150 op/s
Nov 29 04:06:49 np0005539550 nova_compute[257631]: 2025-11-29 09:06:49.904 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:50.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:50.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 140 op/s
Nov 29 04:06:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:52.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 57 KiB/s rd, 0 B/s wr, 94 op/s
Nov 29 04:06:53 np0005539550 nova_compute[257631]: 2025-11-29 09:06:53.678 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:54.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:54.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:54 np0005539550 nova_compute[257631]: 2025-11-29 09:06:54.906 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Nov 29 04:06:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:56.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:56.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:06:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 0 B/s wr, 19 op/s
Nov 29 04:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:06:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:06:58.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:06:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:06:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:06:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:06:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:06:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:06:58.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:06:58 np0005539550 nova_compute[257631]: 2025-11-29 09:06:58.739 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:06:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 04:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:06:59
Nov 29 04:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', 'backups', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Nov 29 04:06:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
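This balancer block is one upmap optimization pass: it walks the listed pools under the 0.05 max-misplaced cap, and "prepared 0/10 changes" appears to mean none of the (at most 10) candidate PG remaps were needed, i.e. the cluster is already balanced. A sketch of querying the same state from the CLI (assumes the ceph CLI and a keyring with mgr access):

```python
import subprocess

# Ask the mgr for the balancer's mode and last optimization result;
# output wording varies somewhat across Ceph releases.
print(subprocess.run(["ceph", "balancer", "status"],
                     capture_output=True, text=True, check=True).stdout)
```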
Nov 29 04:06:59 np0005539550 nova_compute[257631]: 2025-11-29 09:06:59.909 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:00.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:00.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 04:07:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:02.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Nov 29 04:07:03 np0005539550 nova_compute[257631]: 2025-11-29 09:07:03.741 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:04.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:04 np0005539550 nova_compute[257631]: 2025-11-29 09:07:04.911 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:06.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:06 np0005539550 podman[422616]: 2025-11-29 09:07:06.314064888 +0000 UTC m=+0.055776910 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 04:07:06 np0005539550 podman[422615]: 2025-11-29 09:07:06.338248896 +0000 UTC m=+0.083012086 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:07:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:06.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:08.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:08.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:07:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:07:08 np0005539550 nova_compute[257631]: 2025-11-29 09:07:08.743 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:07:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:07:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:07:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:07:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:07:09 np0005539550 nova_compute[257631]: 2025-11-29 09:07:09.912 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:10.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:10 np0005539550 nova_compute[257631]: 2025-11-29 09:07:10.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:10 np0005539550 nova_compute[257631]: 2025-11-29 09:07:10.919 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
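The pair of lines above is a periodic task that immediately short-circuits: with reclaim_instance_interval left non-positive (the default is 0), nova deletes instances outright instead of soft-deleting them, so there is never a queue to reclaim. A minimal sketch of that guard (illustrative, not nova's actual code):

```python
# Illustrative guard matching the "skipping..." line above: soft delete
# is only in effect when reclaim_instance_interval > 0 in nova.conf.
reclaim_instance_interval = 0  # default

def reclaim_queued_deletes():
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # ... otherwise find SOFT_DELETED instances older than the interval
    # and really delete them ...

reclaim_queued_deletes()
```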
Nov 29 04:07:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000020s ======
Nov 29 04:07:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:12.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000020s
Nov 29 04:07:12 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:12 np0005539550 nova_compute[257631]: 2025-11-29 09:07:12.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:13 np0005539550 nova_compute[257631]: 2025-11-29 09:07:13.745 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:13 np0005539550 nova_compute[257631]: 2025-11-29 09:07:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:13 np0005539550 nova_compute[257631]: 2025-11-29 09:07:13.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:14.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:14 np0005539550 nova_compute[257631]: 2025-11-29 09:07:14.914 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:16.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:17 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:17 np0005539550 nova_compute[257631]: 2025-11-29 09:07:17.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:17 np0005539550 nova_compute[257631]: 2025-11-29 09:07:17.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:07:17 np0005539550 nova_compute[257631]: 2025-11-29 09:07:17.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:07:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:18.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:18 np0005539550 nova_compute[257631]: 2025-11-29 09:07:18.350 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:07:18 np0005539550 nova_compute[257631]: 2025-11-29 09:07:18.350 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:18.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:18 np0005539550 nova_compute[257631]: 2025-11-29 09:07:18.747 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:07:18.996 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:07:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:07:18.997 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:07:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:07:18.997 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
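Annotation: the Acquiring/acquired/released triple above (with the waited/held timings) is oslo.concurrency's standard lock tracing. A minimal, illustrative equivalent of the kind of call site that emits it; the real body lives in neutron.agent.linux.external_process, not here:

    from oslo_concurrency import lockutils

    # Each entry/exit of a function wrapped like this produces the three
    # DEBUG lines seen above from lockutils.py:404/409/423.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # placeholder; the monitor's real logic is elided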
Nov 29 04:07:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:19 np0005539550 podman[422661]: 2025-11-29 09:07:19.387066145 +0000 UTC m=+0.125537344 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:07:19 np0005539550 nova_compute[257631]: 2025-11-29 09:07:19.916 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:19 np0005539550 nova_compute[257631]: 2025-11-29 09:07:19.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:19 np0005539550 nova_compute[257631]: 2025-11-29 09:07:19.957 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:07:19 np0005539550 nova_compute[257631]: 2025-11-29 09:07:19.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:07:19 np0005539550 nova_compute[257631]: 2025-11-29 09:07:19.958 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:07:19 np0005539550 nova_compute[257631]: 2025-11-29 09:07:19.958 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:07:19 np0005539550 nova_compute[257631]: 2025-11-29 09:07:19.958 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:07:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:20.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:20 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:20 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95934649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.438 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
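Annotation: the subprocess traced above is how nova's RBD storage backend sizes its Ceph-backed disk pool during the resource audit, and each run also shows up on the mon side as the client.openstack "df" audit entries. A minimal sketch (not nova's actual code) of the same command; the JSON key names "stats", "total_bytes", and "total_avail_bytes" are assumptions based on common Ceph releases, so verify against your cluster:

    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", user="openstack"):
        # Same invocation as the CMD logged above.
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf])
        stats = json.loads(out)["stats"]
        return stats["total_bytes"], stats["total_avail_bytes"]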
Nov 29 04:07:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:20.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.629 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.630 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4027MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.630 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.631 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.708 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.709 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:07:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
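Annotation: a pattern worth noting in the pg_autoscaler run above: every raw "pg target" equals the pool's share of raw capacity times its bias times a cluster PG budget of 300, which is consistent with the default mon_target_pg_per_osd of 100 across what are evidently 3 OSDs (an inference from the numbers; the OSD count is not stated in this log). A quick check against three of the pools, using the values exactly as logged:

    # pg_target ~= usage_ratio * bias * (mon_target_pg_per_osd * num_osds)
    PG_BUDGET = 100 * 3  # assumed: default 100 PGs/OSD, 3 OSDs
    for name, ratio, bias, target in [
        (".mgr",               2.0538165363856318e-05, 1.0, 0.006161449609156895),
        ("volumes",            0.0021614147124511445,  1.0, 0.6484244137353433),
        ("cephfs.cephfs.meta", 1.4540294062907128e-06, 4.0, 0.0017448352875488555),
    ]:
        assert abs(ratio * bias * PG_BUDGET - target) < 1e-9, name

The final "quantized to" values are then constrained to powers of two and to the module's change thresholds, which is why these nearly empty pools all stay at their current pg_num.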
Nov 29 04:07:20 np0005539550 nova_compute[257631]: 2025-11-29 09:07:20.732 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:07:20 np0005539550 podman[422882]: 2025-11-29 09:07:20.77743603 +0000 UTC m=+0.088484441 container exec 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:07:20 np0005539550 podman[422882]: 2025-11-29 09:07:20.922344061 +0000 UTC m=+0.233392442 container exec_died 7bc856b2ad589277ae4e979f16d0132b20688d1d13e69f4d37a96134c5d8f182 (image=quay.io/ceph/ceph:v18, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:07:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Nov 29 04:07:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Nov 29 04:07:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:07:21 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3352719956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:07:21 np0005539550 nova_compute[257631]: 2025-11-29 09:07:21.205 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:07:21 np0005539550 nova_compute[257631]: 2025-11-29 09:07:21.213 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:07:21 np0005539550 nova_compute[257631]: 2025-11-29 09:07:21.252 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:07:21 np0005539550 nova_compute[257631]: 2025-11-29 09:07:21.253 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:07:21 np0005539550 nova_compute[257631]: 2025-11-29 09:07:21.253 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
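Annotation: the inventory nova reports above is what placement turns into schedulable capacity, computed per resource class as (total - reserved) * allocation_ratio. Checking the arithmetic against the values from this run:

    # VCPU:      (8    - 0)   * 4.0 = 32 schedulable vCPUs
    # MEMORY_MB: (7680 - 512) * 1.0 = 7168 MB
    # DISK_GB:   (20   - 1)   * 0.9 = 17.1 GB
    inventory = {
        "VCPU": (8, 0, 4.0),
        "MEMORY_MB": (7680, 512, 1.0),
        "DISK_GB": (20, 1, 0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)

The 4.0 vCPU and 0.9 disk ratios explain why this 8-core, 20 GB node can be oversubscribed on CPU while slightly undersubscribed on disk.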
Nov 29 04:07:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:21 np0005539550 podman[423056]: 2025-11-29 09:07:21.595095393 +0000 UTC m=+0.068323148 container exec 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 04:07:21 np0005539550 podman[423056]: 2025-11-29 09:07:21.625624653 +0000 UTC m=+0.098852328 container exec_died 2f135342cc7a57491185f1abe9f112ca33f71dee1d7f695e7ec4552ba694dd1c (image=quay.io/ceph/haproxy:2.3, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-haproxy-rgw-default-compute-0-uyfjya)
Nov 29 04:07:21 np0005539550 podman[423122]: 2025-11-29 09:07:21.836398744 +0000 UTC m=+0.056983283 container exec 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, version=2.2.4, architecture=x86_64, release=1793, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph, name=keepalived, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph.)
Nov 29 04:07:21 np0005539550 podman[423122]: 2025-11-29 09:07:21.854969477 +0000 UTC m=+0.075554026 container exec_died 8ed5c5f5d99f85abb721f35e02d3bda00ae28f22ef86cb1dad367de2014d79e7 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-b66774a7-56d9-5535-bd8c-681234404870-keepalived-rgw-default-compute-0-jyvvou, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 04:07:21 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:07:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:22.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:07:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:22.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:22 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:23 np0005539550 nova_compute[257631]: 2025-11-29 09:07:23.768 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:23 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 3a60e1a9-ce7c-4a14-a95c-7937deab20f0 does not exist
Nov 29 04:07:23 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 50869bab-59b8-4ecb-b085-b92c22159dba does not exist
Nov 29 04:07:23 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d4cc3c9-af36-4023-a9d8-b8323338c68f does not exist
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:07:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:07:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:24 np0005539550 podman[423479]: 2025-11-29 09:07:24.475247891 +0000 UTC m=+0.055245569 container create aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:07:24 np0005539550 systemd[1]: Started libpod-conmon-aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999.scope.
Nov 29 04:07:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:24.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:24 np0005539550 podman[423479]: 2025-11-29 09:07:24.447448724 +0000 UTC m=+0.027446412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:07:24 np0005539550 podman[423479]: 2025-11-29 09:07:24.56577109 +0000 UTC m=+0.145768868 container init aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:07:24 np0005539550 podman[423479]: 2025-11-29 09:07:24.573294723 +0000 UTC m=+0.153292381 container start aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 29 04:07:24 np0005539550 podman[423479]: 2025-11-29 09:07:24.576998633 +0000 UTC m=+0.156996311 container attach aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 29 04:07:24 np0005539550 serene_montalcini[423495]: 167 167
Nov 29 04:07:24 np0005539550 podman[423479]: 2025-11-29 09:07:24.578816868 +0000 UTC m=+0.158814536 container died aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 29 04:07:24 np0005539550 systemd[1]: libpod-aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999.scope: Deactivated successfully.
Nov 29 04:07:24 np0005539550 systemd[1]: var-lib-containers-storage-overlay-840f29fdbab2a200ea8d88e295f3c71e98dc81fc01e6aded1832151a8732658b-merged.mount: Deactivated successfully.
Nov 29 04:07:24 np0005539550 podman[423479]: 2025-11-29 09:07:24.617836128 +0000 UTC m=+0.197833796 container remove aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 29 04:07:24 np0005539550 systemd[1]: libpod-conmon-aecaa86269ecbf3a0f1a3a66360ed3b7632fc55eeaf5e2e4b8092cfdb453f999.scope: Deactivated successfully.
Nov 29 04:07:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:07:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:24 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:07:24 np0005539550 podman[423519]: 2025-11-29 09:07:24.797671273 +0000 UTC m=+0.038493282 container create 4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:07:24 np0005539550 systemd[1]: Started libpod-conmon-4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af.scope.
Nov 29 04:07:24 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:07:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70367a4dbdefafe408b7d38f265f953fcde1517cd74c486714323e19ac5b89f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70367a4dbdefafe408b7d38f265f953fcde1517cd74c486714323e19ac5b89f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70367a4dbdefafe408b7d38f265f953fcde1517cd74c486714323e19ac5b89f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:24 np0005539550 podman[423519]: 2025-11-29 09:07:24.779904655 +0000 UTC m=+0.020726664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70367a4dbdefafe408b7d38f265f953fcde1517cd74c486714323e19ac5b89f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:24 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70367a4dbdefafe408b7d38f265f953fcde1517cd74c486714323e19ac5b89f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
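Annotation: the repeated kernel warning above is benign; 0x7fffffff is simply the 32-bit signed time_t ceiling that the older XFS inode timestamp format can represent:

    from datetime import datetime, timezone

    # 2**31 - 1 seconds after the Unix epoch -- the "year 2038" limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00

It fires once per bind-mount as podman assembles the throwaway cephadm container's rootfs.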
Nov 29 04:07:24 np0005539550 podman[423519]: 2025-11-29 09:07:24.887534769 +0000 UTC m=+0.128356788 container init 4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:07:24 np0005539550 podman[423519]: 2025-11-29 09:07:24.902600055 +0000 UTC m=+0.143422074 container start 4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:07:24 np0005539550 podman[423519]: 2025-11-29 09:07:24.906890706 +0000 UTC m=+0.147712715 container attach 4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 29 04:07:24 np0005539550 nova_compute[257631]: 2025-11-29 09:07:24.918 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:25 np0005539550 nova_compute[257631]: 2025-11-29 09:07:25.249 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:25 np0005539550 sweet_elgamal[423535]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:07:25 np0005539550 sweet_elgamal[423535]: --> relative data size: 1.0
Nov 29 04:07:25 np0005539550 sweet_elgamal[423535]: --> All data devices are unavailable
Nov 29 04:07:25 np0005539550 systemd[1]: libpod-4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af.scope: Deactivated successfully.
Nov 29 04:07:25 np0005539550 podman[423519]: 2025-11-29 09:07:25.78465385 +0000 UTC m=+1.025475889 container died 4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:07:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:26.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:26.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:28.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:28.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:28 np0005539550 nova_compute[257631]: 2025-11-29 09:07:28.771 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:29 np0005539550 systemd[1]: var-lib-containers-storage-overlay-70367a4dbdefafe408b7d38f265f953fcde1517cd74c486714323e19ac5b89f0-merged.mount: Deactivated successfully.
Nov 29 04:07:29 np0005539550 nova_compute[257631]: 2025-11-29 09:07:29.922 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:30.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:30.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:30 np0005539550 nova_compute[257631]: 2025-11-29 09:07:30.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:07:31 np0005539550 podman[423519]: 2025-11-29 09:07:31.081918538 +0000 UTC m=+6.322740567 container remove 4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_elgamal, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:07:31 np0005539550 systemd[1]: libpod-conmon-4c48fe77ae03de6c9b866f34a90a0f46339a4d802b6062de8c3fdbdd0c6a65af.scope: Deactivated successfully.
Nov 29 04:07:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:32 np0005539550 podman[423710]: 2025-11-29 09:07:31.936303768 +0000 UTC m=+0.039673004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:32 np0005539550 podman[423710]: 2025-11-29 09:07:32.255178882 +0000 UTC m=+0.358548028 container create a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gauss, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:07:32 np0005539550 systemd[1]: Started libpod-conmon-a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac.scope.
Nov 29 04:07:32 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:07:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:32.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:32 np0005539550 podman[423710]: 2025-11-29 09:07:32.794176625 +0000 UTC m=+0.897545881 container init a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gauss, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:07:32 np0005539550 podman[423710]: 2025-11-29 09:07:32.807182872 +0000 UTC m=+0.910552068 container start a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:07:32 np0005539550 eloquent_gauss[423726]: 167 167
Nov 29 04:07:32 np0005539550 systemd[1]: libpod-a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac.scope: Deactivated successfully.
Nov 29 04:07:32 np0005539550 podman[423710]: 2025-11-29 09:07:32.865008619 +0000 UTC m=+0.968377795 container attach a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gauss, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:07:32 np0005539550 podman[423710]: 2025-11-29 09:07:32.865419857 +0000 UTC m=+0.968789003 container died a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gauss, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 29 04:07:32 np0005539550 systemd[1]: var-lib-containers-storage-overlay-3eb23ebe7225d3e1f4fcfee64e2570ff038b5a74c0ca3b085cab65cb3c6368f8-merged.mount: Deactivated successfully.
Nov 29 04:07:33 np0005539550 podman[423710]: 2025-11-29 09:07:33.081784555 +0000 UTC m=+1.185153701 container remove a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_gauss, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:07:33 np0005539550 systemd[1]: libpod-conmon-a118fdde47e39863137303451b0dbfdb745512a1903e2c6912e73d9f7e7b2bac.scope: Deactivated successfully.
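
The eloquent_gauss sequence above (create, init, start, attach, died, remove, each bracketed by libpod/conmon systemd scopes) is the full lifecycle of a short-lived container, consistent with cephadm running one-shot `podman run --rm` commands. The same stream can be watched live with `podman events`; a sketch, assuming podman's documented JSON event attributes (Time, Status, Name):

    import json
    import subprocess

    # Follow container lifecycle events as podman reports them; the
    # filter here is illustrative.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--filter", "type=container"],
        stdout=subprocess.PIPE, text=True,
    )
    for raw in proc.stdout:
        ev = json.loads(raw)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))
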
Nov 29 04:07:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:33 np0005539550 podman[423753]: 2025-11-29 09:07:33.281666209 +0000 UTC m=+0.022750942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:33 np0005539550 podman[423753]: 2025-11-29 09:07:33.361047436 +0000 UTC m=+0.102132179 container create 2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:07:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
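
The mon's _set_new_cache_sizes lines report how its memory auto-tuner currently splits the cache. Converting the figures above to MiB (worked arithmetic only; the split itself comes from the auto-tuner):

    # cache_size and the three allocations from the log line, in bytes.
    MiB = 2 ** 20
    for name, b in [("cache_size", 1020054731), ("inc_alloc", 343932928),
                    ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
        print(f"{name:10s} {b / MiB:7.1f} MiB")
    # cache_size 972.8 MiB, inc_alloc 328.0, full_alloc 332.0, kv_alloc 304.0;
    # the three allocations sum to 964 MiB, just under cache_size.
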
Nov 29 04:07:33 np0005539550 systemd[1]: Started libpod-conmon-2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008.scope.
Nov 29 04:07:33 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:07:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae43975bdc6fd4808097cf88898751ae8ce8c84fb9a206c391a980c97c1be2ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae43975bdc6fd4808097cf88898751ae8ce8c84fb9a206c391a980c97c1be2ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae43975bdc6fd4808097cf88898751ae8ce8c84fb9a206c391a980c97c1be2ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:33 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae43975bdc6fd4808097cf88898751ae8ce8c84fb9a206c391a980c97c1be2ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
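
The kernel's "supports timestamps until 2038 (0x7fffffff)" notices appear on these container bind mounts because the underlying XFS filesystem was presumably created without the bigtime feature, capping inode timestamps at the largest signed 32-bit time_t. A quick check of that limit:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the classic year-2038 cutoff.
    limit = 0x7fffffff
    print(limit)                                       # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
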
Nov 29 04:07:33 np0005539550 podman[423753]: 2025-11-29 09:07:33.470170728 +0000 UTC m=+0.211255451 container init 2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 04:07:33 np0005539550 podman[423753]: 2025-11-29 09:07:33.488325273 +0000 UTC m=+0.229410006 container start 2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:07:33 np0005539550 podman[423753]: 2025-11-29 09:07:33.492808968 +0000 UTC m=+0.233893741 container attach 2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 29 04:07:33 np0005539550 nova_compute[257631]: 2025-11-29 09:07:33.775 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:34.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:34 np0005539550 angry_ellis[423769]: {
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:    "0": [
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:        {
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "devices": [
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "/dev/loop3"
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            ],
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "lv_name": "ceph_lv0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "lv_size": "7511998464",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "name": "ceph_lv0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "tags": {
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.cluster_name": "ceph",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.crush_device_class": "",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.encrypted": "0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.osd_id": "0",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.type": "block",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:                "ceph.vdo": "0"
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            },
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "type": "block",
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:            "vg_name": "ceph_vg0"
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:        }
Nov 29 04:07:34 np0005539550 angry_ellis[423769]:    ]
Nov 29 04:07:34 np0005539550 angry_ellis[423769]: }
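
The JSON that angry_ellis printed matches the shape of `ceph-volume lvm list --format json`: one key per OSD id, each with the backing logical volume and its ceph.* tags. A sketch of reading the OSD-to-device mapping back out (trimmed copy of the output above):

    import json

    # Trimmed to the fields used below; the full output is in the log above.
    raw_output = """
    {
      "0": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
            "ceph.osd_id": "0",
            "ceph.type": "block"
          }
        }
      ]
    }
    """

    for osd_id, lvs in json.loads(raw_output).items():
        for lv in lvs:
            print(f"osd.{osd_id} block={lv['lv_path']} "
                  f"fsid={lv['tags']['ceph.osd_fsid']}")
    # -> osd.0 block=/dev/ceph_vg0/ceph_lv0 fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6
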
Nov 29 04:07:34 np0005539550 systemd[1]: libpod-2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008.scope: Deactivated successfully.
Nov 29 04:07:34 np0005539550 podman[423778]: 2025-11-29 09:07:34.325050888 +0000 UTC m=+0.029360599 container died 2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:07:34 np0005539550 systemd[1]: var-lib-containers-storage-overlay-ae43975bdc6fd4808097cf88898751ae8ce8c84fb9a206c391a980c97c1be2ee-merged.mount: Deactivated successfully.
Nov 29 04:07:34 np0005539550 podman[423778]: 2025-11-29 09:07:34.388308709 +0000 UTC m=+0.092618330 container remove 2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:07:34 np0005539550 systemd[1]: libpod-conmon-2ee755b2f50fca660c96c62cc2ed887eae0584f85172cf8b58fc557b2960e008.scope: Deactivated successfully.
Nov 29 04:07:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:34 np0005539550 nova_compute[257631]: 2025-11-29 09:07:34.925 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:35 np0005539550 podman[423936]: 2025-11-29 09:07:35.175714987 +0000 UTC m=+0.053027938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:35 np0005539550 podman[423936]: 2025-11-29 09:07:35.377358505 +0000 UTC m=+0.254671446 container create 25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:07:35 np0005539550 systemd[1]: Started libpod-conmon-25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f.scope.
Nov 29 04:07:35 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:07:35 np0005539550 podman[423936]: 2025-11-29 09:07:35.489665847 +0000 UTC m=+0.366978808 container init 25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 04:07:35 np0005539550 podman[423936]: 2025-11-29 09:07:35.502702465 +0000 UTC m=+0.380015356 container start 25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 04:07:35 np0005539550 adoring_kapitsa[423952]: 167 167
Nov 29 04:07:35 np0005539550 systemd[1]: libpod-25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f.scope: Deactivated successfully.
Nov 29 04:07:35 np0005539550 podman[423936]: 2025-11-29 09:07:35.541488311 +0000 UTC m=+0.418801292 container attach 25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:07:35 np0005539550 podman[423936]: 2025-11-29 09:07:35.542181434 +0000 UTC m=+0.419494385 container died 25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:07:35 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d37ba2a7bbed936e36c416e20e0af5f713544b9b70e66c8a18f0c41f7f106d30-merged.mount: Deactivated successfully.
Nov 29 04:07:35 np0005539550 podman[423936]: 2025-11-29 09:07:35.772239582 +0000 UTC m=+0.649552523 container remove 25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 29 04:07:35 np0005539550 systemd[1]: libpod-conmon-25a944e2f3901f3ed2fd6cc958062ece8e687aad0e72638f8daaa33bdba1380f.scope: Deactivated successfully.
Nov 29 04:07:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:36.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:36 np0005539550 podman[423977]: 2025-11-29 09:07:36.001556845 +0000 UTC m=+0.048264167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:07:36 np0005539550 podman[423977]: 2025-11-29 09:07:36.071791989 +0000 UTC m=+0.118499241 container create 9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:07:36 np0005539550 systemd[1]: Started libpod-conmon-9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56.scope.
Nov 29 04:07:36 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:07:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7415b4aaffdb854e7ff640419ebe77c375f76cf857ef45130ebe8a5889f38cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7415b4aaffdb854e7ff640419ebe77c375f76cf857ef45130ebe8a5889f38cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7415b4aaffdb854e7ff640419ebe77c375f76cf857ef45130ebe8a5889f38cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:36 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7415b4aaffdb854e7ff640419ebe77c375f76cf857ef45130ebe8a5889f38cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:07:36 np0005539550 podman[423977]: 2025-11-29 09:07:36.172548591 +0000 UTC m=+0.219255863 container init 9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 29 04:07:36 np0005539550 podman[423977]: 2025-11-29 09:07:36.185462977 +0000 UTC m=+0.232170249 container start 9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:07:36 np0005539550 podman[423977]: 2025-11-29 09:07:36.311922147 +0000 UTC m=+0.358629389 container attach 9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:07:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:36.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]: {
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]:        "osd_id": 0,
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]:        "type": "bluestore"
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]:    }
Nov 29 04:07:37 np0005539550 gifted_driscoll[423993]: }
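
gifted_driscoll's output matches the shape of `ceph-volume raw list`: bluestore OSDs keyed by osd_uuid, with the block device resolved to its device-mapper path. A sketch reading it back (trimmed copy of the output above):

    import json

    raw_list = json.loads("""
    {
      "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "type": "bluestore"
      }
    }
    """)
    for osd_uuid, osd in raw_list.items():
        print(f"osd.{osd['osd_id']} ({osd['type']}) on {osd['device']}")
    # -> osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0
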
Nov 29 04:07:37 np0005539550 systemd[1]: libpod-9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56.scope: Deactivated successfully.
Nov 29 04:07:37 np0005539550 podman[423977]: 2025-11-29 09:07:37.033370524 +0000 UTC m=+1.080077776 container died 9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:07:37 np0005539550 systemd[1]: var-lib-containers-storage-overlay-c7415b4aaffdb854e7ff640419ebe77c375f76cf857ef45130ebe8a5889f38cc-merged.mount: Deactivated successfully.
Nov 29 04:07:37 np0005539550 podman[423977]: 2025-11-29 09:07:37.102218431 +0000 UTC m=+1.148925683 container remove 9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:07:37 np0005539550 systemd[1]: libpod-conmon-9fdfe4b0ff9517500baa7d1277141056b663d106a215f163dae90bb7c7623c56.scope: Deactivated successfully.
Nov 29 04:07:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:07:37 np0005539550 podman[424017]: 2025-11-29 09:07:37.16535692 +0000 UTC m=+0.085100737 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 04:07:37 np0005539550 podman[424025]: 2025-11-29 09:07:37.190416456 +0000 UTC m=+0.110004470 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
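
The config_data=... field embedded in these health_status events (and the ovn_controller one further down) is a Python dict repr (single quotes, bare True), not JSON, so json.loads() rejects it; ast.literal_eval parses it safely. A trimmed example from the multipathd event above:

    import ast

    # Trimmed to two keys; the real config_data carries the full
    # environment/healthcheck/volumes definition shown in the log.
    config_data = ast.literal_eval(
        "{'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', "
        "'test': '/openstack/healthcheck'}, 'privileged': True}"
    )
    print(config_data["healthcheck"]["test"])  # -> /openstack/healthcheck
    print(config_data["privileged"])           # -> True
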
Nov 29 04:07:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:37 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:07:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:37 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev b57372e3-6d6f-426d-9820-63c33e37ee18 does not exist
Nov 29 04:07:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 9601b033-5998-43a1-a812-42d2798e0536 does not exist
Nov 29 04:07:37 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev d91c6122-1e2b-461c-8fec-ff83ab9bcfa1 does not exist
Nov 29 04:07:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000019s ======
Nov 29 04:07:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Nov 29 04:07:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:38 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:07:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:38.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:38 np0005539550 nova_compute[257631]: 2025-11-29 09:07:38.780 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:39 np0005539550 nova_compute[257631]: 2025-11-29 09:07:39.928 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:07:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:40.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:07:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:07:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:42.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:07:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:43 np0005539550 nova_compute[257631]: 2025-11-29 09:07:43.781 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:44.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:07:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:44.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:07:44 np0005539550 nova_compute[257631]: 2025-11-29 09:07:44.969 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000024s ======
Nov 29 04:07:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:46.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Nov 29 04:07:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:46.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:48.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:48.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:48 np0005539550 nova_compute[257631]: 2025-11-29 09:07:48.785 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:48 np0005539550 nova_compute[257631]: 2025-11-29 09:07:48.915 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
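
Both ComputeManager entries above (_instance_usage_audit earlier, _sync_scheduler_instance_info here) are driven by oslo_service's periodic-task machinery; the periodic_task.py:210 frame in each line is run_periodic_tasks logging the dispatch. A minimal sketch of that wiring (class and task names here are illustrative, not nova's own):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _audit_something(self, context):
            # dispatched (and logged) by run_periodic_tasks()
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)  # normally driven by a timer loop
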
Nov 29 04:07:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:49 np0005539550 nova_compute[257631]: 2025-11-29 09:07:49.972 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:50.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:50 np0005539550 podman[424173]: 2025-11-29 09:07:50.371886128 +0000 UTC m=+0.102228249 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 04:07:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:50.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:07:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:52.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:07:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:52.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:53 np0005539550 nova_compute[257631]: 2025-11-29 09:07:53.786 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:54.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:54.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:55 np0005539550 nova_compute[257631]: 2025-11-29 09:07:55.014 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:56.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:07:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:07:58.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:07:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:07:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:07:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:07:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:07:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:07:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:07:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:07:58 np0005539550 nova_compute[257631]: 2025-11-29 09:07:58.788 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:07:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:07:59
Nov 29 04:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'backups', '.mgr', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Nov 29 04:07:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
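
This balancer pass ran in upmap mode over the listed pools and prepared 0 of an allowed 10 changes, meaning PG placement is already even. The same state can be queried with the standard `ceph balancer status` mgr command; a sketch (JSON field names can vary by release):

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "balancer", "status", "--format", "json"], text=True)
    status = json.loads(out)
    print(status.get("active"), status.get("mode"))  # e.g. True upmap
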
Nov 29 04:08:00 np0005539550 nova_compute[257631]: 2025-11-29 09:08:00.016 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:00.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:02.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:02.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:03 np0005539550 nova_compute[257631]: 2025-11-29 09:08:03.789 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:04.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:04.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:05 np0005539550 nova_compute[257631]: 2025-11-29 09:08:05.019 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:06.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:06.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:07 np0005539550 podman[424261]: 2025-11-29 09:08:07.334530498 +0000 UTC m=+0.068528692 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 04:08:07 np0005539550 podman[424260]: 2025-11-29 09:08:07.348512339 +0000 UTC m=+0.089926950 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
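The two podman lines above are periodic healthcheck events for the ovn_metadata_agent and multipathd containers: each runs the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/<name> and reports health_status=healthy with health_failing_streak=0. The same check can be triggered on demand; a sketch using the standard podman CLI:

# Trigger the same check podman's healthcheck timer ran above;
# `podman healthcheck run` exits 0 when the container is healthy.
import subprocess

for name in ("ovn_metadata_agent", "multipathd"):
    rc = subprocess.call(["podman", "healthcheck", "run", name])
    print(name, "healthy" if rc == 0 else "unhealthy")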
Nov 29 04:08:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:08.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:08 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 29 04:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:08:08 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 29 04:08:08 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:08 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:08 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:08.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:08 np0005539550 nova_compute[257631]: 2025-11-29 09:08:08.792 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:09 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 29 04:08:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 29 04:08:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 29 04:08:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 29 04:08:09 np0005539550 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
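Here the rbd_support mgr module reloads its mirror-snapshot and trash-purge schedules for each RBD pool (vms, volumes, backups, images); the empty start_after= simply marks a scan from the beginning of the key space. A sketch listing whatever schedules are configured, via the standard rbd CLI:

# List the mirror-snapshot schedules load_schedules reads, per pool
# (standard rbd CLI; prints nothing when no schedule is configured).
import subprocess

for pool in ("vms", "volumes", "backups", "images"):
    out = subprocess.run(
        ["rbd", "mirror", "snapshot", "schedule", "ls",
         "--pool", pool, "--recursive"],
        capture_output=True, text=True)
    print(pool, out.stdout.strip() or "(no schedules)")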
Nov 29 04:08:10 np0005539550 nova_compute[257631]: 2025-11-29 09:08:10.021 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:10.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:10 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:10 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:10 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:10.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:10 np0005539550 nova_compute[257631]: 2025-11-29 09:08:10.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:10 np0005539550 nova_compute[257631]: 2025-11-29 09:08:10.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
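The _reclaim_queued_deletes task exits immediately because reclaim_instance_interval is at its default of 0; soft-deleted instances are only purged when the operator sets the option to a positive number of seconds. An illustrative reduction of the guard (reclaim_instance_interval is the real nova.conf option; the function body is a sketch, not nova's implementation):

# Illustrative reduction of the guard behind the "skipping..." line.
reclaim_instance_interval = 0  # nova.conf [DEFAULT] default

def reclaim_queued_deletes():
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # otherwise: purge SOFT_DELETED instances older than the interval

reclaim_queued_deletes()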
Nov 29 04:08:11 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:12.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:12 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:12 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:12 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:12.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:12 np0005539550 nova_compute[257631]: 2025-11-29 09:08:12.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:13 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:13 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:13 np0005539550 nova_compute[257631]: 2025-11-29 09:08:13.794 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:14.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:14 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:14 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:14 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:14.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:14 np0005539550 nova_compute[257631]: 2025-11-29 09:08:14.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:14 np0005539550 nova_compute[257631]: 2025-11-29 09:08:14.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:15 np0005539550 nova_compute[257631]: 2025-11-29 09:08:15.022 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:15 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:16.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:16 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:16 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:16 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:16.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:17 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:18.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:18 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:18 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:18 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:18 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:18.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:18 np0005539550 nova_compute[257631]: 2025-11-29 09:08:18.798 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:08:18.997 158978 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:08:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:08:18.997 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:08:18 np0005539550 ovn_metadata_agent[158973]: 2025-11-29 09:08:18.997 158978 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
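This acquire/acquired/released triple is the standard oslo.concurrency DEBUG trace around a synchronized section; here ProcessMonitor._check_child_processes held the named lock for under a millisecond. The same instrumentation comes for free from the real oslo_concurrency API; a minimal sketch of the pattern:

# The acquire/acquired/released trace above is emitted by
# oslo_concurrency.lockutils whenever a synchronized section runs
# with DEBUG logging enabled.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    pass  # body runs with the named in-process lock held

check_child_processes()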
Nov 29 04:08:19 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:19 np0005539550 nova_compute[257631]: 2025-11-29 09:08:19.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:19 np0005539550 nova_compute[257631]: 2025-11-29 09:08:19.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 04:08:19 np0005539550 nova_compute[257631]: 2025-11-29 09:08:19.921 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 04:08:19 np0005539550 nova_compute[257631]: 2025-11-29 09:08:19.940 257641 DEBUG nova.compute.manager [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 04:08:19 np0005539550 nova_compute[257631]: 2025-11-29 09:08:19.941 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:20 np0005539550 nova_compute[257631]: 2025-11-29 09:08:20.024 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:20.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:20 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:20 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:20 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:20.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021614147124511445 of space, bias 1.0, pg target 0.6484244137353433 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Nov 29 04:08:20 np0005539550 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
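Each pool's raw pg target above is its used-space ratio times its bias times a cluster-wide PG budget; the factor works out to 300 for every pool, consistent with the default 100 PGs per OSD across this cluster's 3 OSDs (an inference from the numbers, not stated in the log). Because every raw target is far below the pool's current pg_num, each is quantized back to the current value and nothing changes. A check against three of the logged lines:

# Reproduce three of the raw pg targets logged above. The budget of
# 300 PGs is an inference (default 100 PGs per OSD x 3 OSDs); it is
# not stated in the log itself.
TOTAL_TARGET_PGS = 300

pools = {                      # name: (used-space ratio, bias), from the log
    ".mgr":               (2.0538165363856318e-05, 1.0),
    "volumes":            (0.0021614147124511445,  1.0),
    "cephfs.cephfs.meta": (1.4540294062907128e-06, 4.0),
}

for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * TOTAL_TARGET_PGS)
# -> 0.00616..., 0.64842..., 0.0017448..., matching the logged pg targets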
Nov 29 04:08:21 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:21 np0005539550 podman[424305]: 2025-11-29 09:08:21.370745451 +0000 UTC m=+0.108886016 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 04:08:21 np0005539550 nova_compute[257631]: 2025-11-29 09:08:21.920 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:21 np0005539550 nova_compute[257631]: 2025-11-29 09:08:21.952 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:08:21 np0005539550 nova_compute[257631]: 2025-11-29 09:08:21.953 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:08:21 np0005539550 nova_compute[257631]: 2025-11-29 09:08:21.954 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 04:08:21 np0005539550 nova_compute[257631]: 2025-11-29 09:08:21.955 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 04:08:21 np0005539550 nova_compute[257631]: 2025-11-29 09:08:21.955 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:08:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:22.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:22 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:22 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1250880137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:22 np0005539550 nova_compute[257631]: 2025-11-29 09:08:22.558 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:08:22 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:22 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:22 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:22.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:22 np0005539550 nova_compute[257631]: 2025-11-29 09:08:22.785 257641 WARNING nova.virt.libvirt.driver [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 04:08:22 np0005539550 nova_compute[257631]: 2025-11-29 09:08:22.787 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4055MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 04:08:22 np0005539550 nova_compute[257631]: 2025-11-29 09:08:22.787 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 04:08:22 np0005539550 nova_compute[257631]: 2025-11-29 09:08:22.788 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 04:08:22 np0005539550 nova_compute[257631]: 2025-11-29 09:08:22.875 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 04:08:22 np0005539550 nova_compute[257631]: 2025-11-29 09:08:22.875 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 04:08:23 np0005539550 nova_compute[257631]: 2025-11-29 09:08:23.171 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 04:08:23 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:23 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 29 04:08:23 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4022179795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 29 04:08:23 np0005539550 nova_compute[257631]: 2025-11-29 09:08:23.659 257641 DEBUG oslo_concurrency.processutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 04:08:23 np0005539550 nova_compute[257631]: 2025-11-29 09:08:23.667 257641 DEBUG nova.compute.provider_tree [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed in ProviderTree for provider: a73c606e-2495-4af4-b703-8d4b3001fdf5 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 04:08:23 np0005539550 nova_compute[257631]: 2025-11-29 09:08:23.691 257641 DEBUG nova.scheduler.client.report [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Inventory has not changed for provider a73c606e-2495-4af4-b703-8d4b3001fdf5 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 04:08:23 np0005539550 nova_compute[257631]: 2025-11-29 09:08:23.696 257641 DEBUG nova.compute.resource_tracker [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 04:08:23 np0005539550 nova_compute[257631]: 2025-11-29 09:08:23.697 257641 DEBUG oslo_concurrency.lockutils [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
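The audit that began at 09:08:21.920 ends here after holding compute_resources for 0.909s: nova ran ceph df twice for disk capacity, rebuilt the hypervisor view, and confirmed the placement inventory was unchanged. Placement treats (total - reserved) * allocation_ratio as the schedulable capacity per resource class; applying that to the inventory logged at 09:08:23.691:

# Usable capacity implied by the inventory logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~17.1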
Nov 29 04:08:23 np0005539550 nova_compute[257631]: 2025-11-29 09:08:23.799 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:24.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:24 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:24 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:24 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:24.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:25 np0005539550 nova_compute[257631]: 2025-11-29 09:08:25.026 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:25 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:26.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:26 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:26 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:26 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:26.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:26 np0005539550 nova_compute[257631]: 2025-11-29 09:08:26.693 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:27 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:27 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:28 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:28 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:28 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:28 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:28 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:28.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:28 np0005539550 nova_compute[257631]: 2025-11-29 09:08:28.801 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:29 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:30 np0005539550 nova_compute[257631]: 2025-11-29 09:08:30.029 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:30 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:30 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:30 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:30.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:31 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:31 np0005539550 nova_compute[257631]: 2025-11-29 09:08:31.919 257641 DEBUG oslo_service.periodic_task [None req-72746180-c18a-4012-8800-8d6a73ee4059 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 04:08:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:32.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:32 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:32 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:32 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:32.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:33 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:33 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:33 np0005539550 nova_compute[257631]: 2025-11-29 09:08:33.803 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:34.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:34 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:34 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:34 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:35 np0005539550 nova_compute[257631]: 2025-11-29 09:08:35.031 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:35 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:36.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:36 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:36 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:36 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:36.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:37 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:38 np0005539550 podman[424458]: 2025-11-29 09:08:38.056948399 +0000 UTC m=+0.064490401 container health_status 870dcc8833c68b8089a79dc91607bd5aee0b5e41d06a845e78ea8cd19c3ccd00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:08:38 np0005539550 podman[424457]: 2025-11-29 09:08:38.064504889 +0000 UTC m=+0.071555248 container health_status 32abb6cb4a8e2dd34d16593b2a1c037047051b086808c8f6a111c9b2279dad9f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 04:08:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:38 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:38 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:38 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:38.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 29 04:08:38 np0005539550 nova_compute[257631]: 2025-11-29 09:08:38.806 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 1aeeb324-5fd6-4bf5-af8f-b4f12d3985a9 does not exist
Nov 29 04:08:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a605b203-f3cd-4530-b2d4-4531e135e816 does not exist
Nov 29 04:08:38 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 17b6409e-df07-4f7a-abd7-cffb15c19061 does not exist
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:08:38 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:08:39 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:39 np0005539550 podman[424744]: 2025-11-29 09:08:39.65577173 +0000 UTC m=+0.046625792 container create fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 29 04:08:39 np0005539550 systemd[1]: Started libpod-conmon-fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61.scope.
Nov 29 04:08:39 np0005539550 podman[424744]: 2025-11-29 09:08:39.631592643 +0000 UTC m=+0.022446735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:39 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:08:39 np0005539550 podman[424744]: 2025-11-29 09:08:39.748389177 +0000 UTC m=+0.139243339 container init fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_darwin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:08:39 np0005539550 podman[424744]: 2025-11-29 09:08:39.756648094 +0000 UTC m=+0.147502166 container start fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:08:39 np0005539550 podman[424744]: 2025-11-29 09:08:39.759817304 +0000 UTC m=+0.150671406 container attach fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_darwin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 04:08:39 np0005539550 recursing_darwin[424760]: 167 167
Nov 29 04:08:39 np0005539550 systemd[1]: libpod-fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61.scope: Deactivated successfully.
Nov 29 04:08:39 np0005539550 podman[424744]: 2025-11-29 09:08:39.765010444 +0000 UTC m=+0.155864506 container died fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:08:39 np0005539550 systemd[1]: var-lib-containers-storage-overlay-a643d2e8fb23c47003801539a03e8853dad87d471bea107846171be18a290729-merged.mount: Deactivated successfully.
Nov 29 04:08:39 np0005539550 podman[424744]: 2025-11-29 09:08:39.805261745 +0000 UTC m=+0.196115807 container remove fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 29 04:08:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 29 04:08:39 np0005539550 systemd[1]: libpod-conmon-fef7f23ad2255a086b7e7e155606610179f6837b4112414aeb5c8e44f23a0e61.scope: Deactivated successfully.
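recursing_darwin above is one of cephadm's short-lived helper containers: created, started, attached, dead, and removed within about 150 ms, with "167 167" as its only output, the uid and gid of the ceph user baked into the image, which cephadm uses to chown host directories. A plausible manual equivalent (the exact command cephadm runs inside the container is an assumption):

    # print the owner uid/gid of /var/lib/ceph in the reef image, as cephadm does
    podman run --rm quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 \
        stat -c '%u %g' /var/lib/ceph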
Nov 29 04:08:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:39 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 29 04:08:39 np0005539550 podman[424786]: 2025-11-29 09:08:39.984724343 +0000 UTC m=+0.047768391 container create d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 29 04:08:40 np0005539550 systemd[1]: Started libpod-conmon-d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94.scope.
Nov 29 04:08:40 np0005539550 nova_compute[257631]: 2025-11-29 09:08:40.032 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:40 np0005539550 systemd-logind[788]: New session 78 of user zuul.
Nov 29 04:08:40 np0005539550 podman[424786]: 2025-11-29 09:08:39.96467962 +0000 UTC m=+0.027723748 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:40 np0005539550 systemd[1]: Started Session 78 of User zuul.
Nov 29 04:08:40 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:08:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8181765f6e26fea37707c837bb74fc85d3e03642fd9542928a47b826370177/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8181765f6e26fea37707c837bb74fc85d3e03642fd9542928a47b826370177/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8181765f6e26fea37707c837bb74fc85d3e03642fd9542928a47b826370177/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8181765f6e26fea37707c837bb74fc85d3e03642fd9542928a47b826370177/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:40 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8181765f6e26fea37707c837bb74fc85d3e03642fd9542928a47b826370177/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
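The xfs messages above fire each time podman bind-mounts host paths (ceph.conf, the log and crash directories, the bootstrap keyring) into a container: the backing filesystem was created without the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff). This is informational, not an error. Whether a given xfs filesystem has 64-bit timestamps can be checked with:

    # bigtime=1 means 64-bit timestamps; bigtime=0 matches the kernel warning above
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'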
Nov 29 04:08:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:40.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:40 np0005539550 podman[424786]: 2025-11-29 09:08:40.404333164 +0000 UTC m=+0.467377302 container init d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 29 04:08:40 np0005539550 podman[424786]: 2025-11-29 09:08:40.418609193 +0000 UTC m=+0.481653261 container start d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chandrasekhar, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 29 04:08:40 np0005539550 podman[424786]: 2025-11-29 09:08:40.470518916 +0000 UTC m=+0.533562984 container attach d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 29 04:08:40 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:40 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:40 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:40.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:41 np0005539550 tender_chandrasekhar[424805]: --> passed data devices: 0 physical, 1 LVM
Nov 29 04:08:41 np0005539550 tender_chandrasekhar[424805]: --> relative data size: 1.0
Nov 29 04:08:41 np0005539550 tender_chandrasekhar[424805]: --> All data devices are unavailable
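tender_chandrasekhar is cephadm dry-running ceph-volume against the drive group: one LVM data device was passed, and "All data devices are unavailable" because the LV is already consumed by osd.0 (see the inventory JSON further down). A plausible manual re-run of the same report, with the device argument inferred from the LV shown later in this log:

    # dry-run the batch call against the already-used LV; expect it to be reported unavailable
    cephadm ceph-volume -- lvm batch --report /dev/ceph_vg0/ceph_lv0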
Nov 29 04:08:41 np0005539550 podman[424786]: 2025-11-29 09:08:41.205444148 +0000 UTC m=+1.268488206 container died d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chandrasekhar, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:08:41 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:41 np0005539550 systemd[1]: libpod-d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94.scope: Deactivated successfully.
Nov 29 04:08:41 np0005539550 systemd[1]: var-lib-containers-storage-overlay-2a8181765f6e26fea37707c837bb74fc85d3e03642fd9542928a47b826370177-merged.mount: Deactivated successfully.
Nov 29 04:08:41 np0005539550 podman[424786]: 2025-11-29 09:08:41.337188487 +0000 UTC m=+1.400232535 container remove d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:08:41 np0005539550 systemd[1]: libpod-conmon-d90b6cd5ce2dcc8165b495b909995a719b3f0946edca23434541d6d910271b94.scope: Deactivated successfully.
Nov 29 04:08:41 np0005539550 podman[425072]: 2025-11-29 09:08:41.962788072 +0000 UTC m=+0.051841253 container create 3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 29 04:08:41 np0005539550 systemd[1]: Started libpod-conmon-3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0.scope.
Nov 29 04:08:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:08:42 np0005539550 podman[425072]: 2025-11-29 09:08:41.934583034 +0000 UTC m=+0.023636235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:42 np0005539550 podman[425072]: 2025-11-29 09:08:42.039806177 +0000 UTC m=+0.128859358 container init 3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 29 04:08:42 np0005539550 podman[425072]: 2025-11-29 09:08:42.048117616 +0000 UTC m=+0.137170797 container start 3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brattain, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 29 04:08:42 np0005539550 podman[425072]: 2025-11-29 09:08:42.051685885 +0000 UTC m=+0.140739086 container attach 3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:08:42 np0005539550 stupefied_brattain[425091]: 167 167
Nov 29 04:08:42 np0005539550 systemd[1]: libpod-3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0.scope: Deactivated successfully.
Nov 29 04:08:42 np0005539550 podman[425072]: 2025-11-29 09:08:42.058117157 +0000 UTC m=+0.147170348 container died 3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Nov 29 04:08:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:42.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:42 np0005539550 systemd[1]: var-lib-containers-storage-overlay-177313d19ed134dad5dcb30fdcd755885ffeb80bc37f6b7d0f20db0ccffebe1b-merged.mount: Deactivated successfully.
Nov 29 04:08:42 np0005539550 podman[425072]: 2025-11-29 09:08:42.40179098 +0000 UTC m=+0.490844161 container remove 3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 29 04:08:42 np0005539550 systemd[1]: libpod-conmon-3d38b80e674655987e8e0f11c3eede259fe7013e88a73199d40347fa748453b0.scope: Deactivated successfully.
Nov 29 04:08:42 np0005539550 podman[425168]: 2025-11-29 09:08:42.560299701 +0000 UTC m=+0.024261650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:42 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:42 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:42 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:42.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:42 np0005539550 podman[425168]: 2025-11-29 09:08:42.683889966 +0000 UTC m=+0.147851815 container create ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 29 04:08:42 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46637 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:42 np0005539550 systemd[1]: Started libpod-conmon-ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c.scope.
Nov 29 04:08:42 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:08:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9afc838092be0878a70f483cb6cf389832192b012990a3614cf23d25c48c1da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9afc838092be0878a70f483cb6cf389832192b012990a3614cf23d25c48c1da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9afc838092be0878a70f483cb6cf389832192b012990a3614cf23d25c48c1da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:42 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9afc838092be0878a70f483cb6cf389832192b012990a3614cf23d25c48c1da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:42 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38358 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:42 np0005539550 podman[425168]: 2025-11-29 09:08:42.890400094 +0000 UTC m=+0.354361993 container init ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:08:42 np0005539550 podman[425168]: 2025-11-29 09:08:42.898670501 +0000 UTC m=+0.362632340 container start ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 29 04:08:42 np0005539550 podman[425168]: 2025-11-29 09:08:42.907992115 +0000 UTC m=+0.371954044 container attach ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 04:08:43 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46643 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:43 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:43 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38364 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:43 np0005539550 epic_thompson[425203]: {
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:    "0": [
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:        {
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "devices": [
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "/dev/loop3"
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            ],
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "lv_name": "ceph_lv0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "lv_size": "7511998464",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=b66774a7-56d9-5535-bd8c-681234404870,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5dd67027-4f06-4800-93bd-47ed1a74c5e6,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "lv_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "name": "ceph_lv0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "tags": {
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.block_uuid": "Kee7OW-tgUY-IeIC-c28l-Xvhj-F0mE-z3PS1V",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.cephx_lockbox_secret": "",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.cluster_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.cluster_name": "ceph",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.crush_device_class": "",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.encrypted": "0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.osd_fsid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.osd_id": "0",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.type": "block",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:                "ceph.vdo": "0"
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            },
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "type": "block",
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:            "vg_name": "ceph_vg0"
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:        }
Nov 29 04:08:43 np0005539550 epic_thompson[425203]:    ]
Nov 29 04:08:43 np0005539550 epic_thompson[425203]: }
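The JSON above, emitted by the epic_thompson helper container, is ceph-volume's LVM inventory: a single logical volume ceph_vg0/ceph_lv0 backed by /dev/loop3, carrying the block device of osd.0 in cluster b66774a7-56d9-5535-bd8c-681234404870, with the OSD metadata mirrored into LV tags. The same dump can be produced directly on the host:

    # list ceph-managed LVs keyed by OSD id, as in the output above
    cephadm ceph-volume -- lvm list --format json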
Nov 29 04:08:43 np0005539550 systemd[1]: libpod-ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c.scope: Deactivated successfully.
Nov 29 04:08:43 np0005539550 podman[425168]: 2025-11-29 09:08:43.735092501 +0000 UTC m=+1.199054380 container died ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 29 04:08:43 np0005539550 systemd[1]: var-lib-containers-storage-overlay-d9afc838092be0878a70f483cb6cf389832192b012990a3614cf23d25c48c1da-merged.mount: Deactivated successfully.
Nov 29 04:08:43 np0005539550 podman[425168]: 2025-11-29 09:08:43.802166496 +0000 UTC m=+1.266128345 container remove ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_thompson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 29 04:08:43 np0005539550 systemd[1]: libpod-conmon-ccb92ef35d042838e8392fcf4566aa18be4e3f3ed6804f932260554db472ce4c.scope: Deactivated successfully.
Nov 29 04:08:43 np0005539550 nova_compute[257631]: 2025-11-29 09:08:43.809 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:43 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 29 04:08:43 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4072365032' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 29 04:08:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:44.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:44 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46252 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:44 np0005539550 podman[425496]: 2025-11-29 09:08:44.466594636 +0000 UTC m=+0.044707544 container create 8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:08:44 np0005539550 systemd[1]: Started libpod-conmon-8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078.scope.
Nov 29 04:08:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:08:44 np0005539550 podman[425496]: 2025-11-29 09:08:44.448135143 +0000 UTC m=+0.026248071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:44 np0005539550 podman[425496]: 2025-11-29 09:08:44.557336766 +0000 UTC m=+0.135449674 container init 8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:08:44 np0005539550 podman[425496]: 2025-11-29 09:08:44.563159402 +0000 UTC m=+0.141272320 container start 8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 29 04:08:44 np0005539550 podman[425496]: 2025-11-29 09:08:44.566196078 +0000 UTC m=+0.144308986 container attach 8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 29 04:08:44 np0005539550 dazzling_sanderson[425512]: 167 167
Nov 29 04:08:44 np0005539550 systemd[1]: libpod-8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078.scope: Deactivated successfully.
Nov 29 04:08:44 np0005539550 podman[425496]: 2025-11-29 09:08:44.570590639 +0000 UTC m=+0.148703557 container died 8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 29 04:08:44 np0005539550 systemd[1]: var-lib-containers-storage-overlay-259c25bfb311bcbf9babc1141541e243c7b3835b1a362891296e9a7d959cc7bf-merged.mount: Deactivated successfully.
Nov 29 04:08:44 np0005539550 podman[425496]: 2025-11-29 09:08:44.607577098 +0000 UTC m=+0.185689996 container remove 8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_sanderson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 04:08:44 np0005539550 systemd[1]: libpod-conmon-8da1978185c561135012c8fe4e14eff4f0aa6414940d998b8b64f68a0d24f078.scope: Deactivated successfully.
Nov 29 04:08:44 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:44 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:44 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:44.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:44 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46258 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:44 np0005539550 podman[425539]: 2025-11-29 09:08:44.762324475 +0000 UTC m=+0.042910509 container create 37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 29 04:08:44 np0005539550 systemd[1]: Started libpod-conmon-37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20.scope.
Nov 29 04:08:44 np0005539550 systemd[1]: Started libcrun container.
Nov 29 04:08:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e92293aa525361125d09f262bc2492e8ebd0ef7664840eba4db202b728d561/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e92293aa525361125d09f262bc2492e8ebd0ef7664840eba4db202b728d561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e92293aa525361125d09f262bc2492e8ebd0ef7664840eba4db202b728d561/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:44 np0005539550 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84e92293aa525361125d09f262bc2492e8ebd0ef7664840eba4db202b728d561/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 29 04:08:44 np0005539550 podman[425539]: 2025-11-29 09:08:44.743086552 +0000 UTC m=+0.023672606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 29 04:08:44 np0005539550 podman[425539]: 2025-11-29 09:08:44.846010687 +0000 UTC m=+0.126596751 container init 37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 29 04:08:44 np0005539550 podman[425539]: 2025-11-29 09:08:44.852653084 +0000 UTC m=+0.133239118 container start 37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Nov 29 04:08:44 np0005539550 podman[425539]: 2025-11-29 09:08:44.85647024 +0000 UTC m=+0.137056304 container attach 37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 29 04:08:45 np0005539550 nova_compute[257631]: 2025-11-29 09:08:45.035 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:45 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:45 np0005539550 serene_tharp[425556]: {
Nov 29 04:08:45 np0005539550 serene_tharp[425556]:    "5dd67027-4f06-4800-93bd-47ed1a74c5e6": {
Nov 29 04:08:45 np0005539550 serene_tharp[425556]:        "ceph_fsid": "b66774a7-56d9-5535-bd8c-681234404870",
Nov 29 04:08:45 np0005539550 serene_tharp[425556]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 29 04:08:45 np0005539550 serene_tharp[425556]:        "osd_id": 0,
Nov 29 04:08:45 np0005539550 serene_tharp[425556]:        "osd_uuid": "5dd67027-4f06-4800-93bd-47ed1a74c5e6",
Nov 29 04:08:45 np0005539550 serene_tharp[425556]:        "type": "bluestore"
Nov 29 04:08:45 np0005539550 serene_tharp[425556]:    }
Nov 29 04:08:45 np0005539550 serene_tharp[425556]: }
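serene_tharp's JSON is the companion bluestore view: keyed by osd_uuid rather than OSD id, it confirms /dev/mapper/ceph_vg0-ceph_lv0 holds a bluestore OSD with osd_id 0 in the same cluster fsid. Manual equivalent (raw list prints JSON by default):

    # scan bluestore labels, keyed by osd_fsid as shown above
    cephadm ceph-volume -- raw list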
Nov 29 04:08:45 np0005539550 systemd[1]: libpod-37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20.scope: Deactivated successfully.
Nov 29 04:08:45 np0005539550 podman[425581]: 2025-11-29 09:08:45.808761851 +0000 UTC m=+0.031251456 container died 37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tharp, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 29 04:08:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.004000100s ======
Nov 29 04:08:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:46.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000100s
Nov 29 04:08:46 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:46 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:46 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:46.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:46 np0005539550 ovs-vsctl[425623]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
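The ovs-vsctl ERR above is benign: a caller queried other_config:dpdk-init on the root Open_vSwitch record, and the key has never been set on this non-DPDK node, so the get fails loudly. It reproduces, and can be made quiet, like this:

    # reproduces the ERR line above while the key is unset
    ovs-vsctl get Open_vSwitch . other_config:dpdk-init
    # same query, but prints nothing and exits 0 when the key is absent
    ovs-vsctl --if-exists get Open_vSwitch . other_config:dpdk-init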
Nov 29 04:08:47 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:47 np0005539550 systemd[1]: var-lib-containers-storage-overlay-84e92293aa525361125d09f262bc2492e8ebd0ef7664840eba4db202b728d561-merged.mount: Deactivated successfully.
Nov 29 04:08:48 np0005539550 podman[425581]: 2025-11-29 09:08:48.141771615 +0000 UTC m=+2.364261210 container remove 37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_tharp, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 29 04:08:48 np0005539550 systemd[1]: libpod-conmon-37851f38bcae63b65e488e2f264b25fb51489cbe0aa19601281ed74864304b20.scope: Deactivated successfully.
Nov 29 04:08:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:48.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 29 04:08:48 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46655 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:48 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:48 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 29 04:08:48 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:48 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:48 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:48.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:08:48 np0005539550 nova_compute[257631]: 2025-11-29 09:08:48.809 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:48 np0005539550 virtqemud[256287]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 04:08:49 np0005539550 virtqemud[256287]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 04:08:49 np0005539550 virtqemud[256287]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 29 04:08:49 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46664 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:49 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:08:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:08:49 np0005539550 ceph-mon[74435]: log_channel(audit) log [INF] : from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 17a0a425-d187-492e-af40-aa2307ab1d73 does not exist
Nov 29 04:08:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev 83c3ff4a-54bd-4527-8222-8cae9ec1783b does not exist
Nov 29 04:08:49 np0005539550 ceph-mgr[74726]: [progress WARNING root] complete: ev a205dd24-a7e2-4d55-9e9b-340671f32bf7 does not exist
Nov 29 04:08:49 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:49 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: cache status {prefix=cache status} (starting...)
Nov 29 04:08:49 np0005539550 lvm[425997]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 29 04:08:49 np0005539550 lvm[425997]: VG ceph_vg0 finished
Nov 29 04:08:49 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: client ls {prefix=client ls} (starting...)
Nov 29 04:08:50 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46273 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:50 np0005539550 nova_compute[257631]: 2025-11-29 09:08:50.038 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:50.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:08:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:08:50 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46685 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:50 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:50.298+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:50 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:50 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46285 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:50 np0005539550 ceph-mon[74435]: from='mgr.14128 192.168.122.100:0/2436334350' entity='mgr.compute-0.pdhsqi' 
Nov 29 04:08:50 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38385 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:50 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: damage ls {prefix=damage ls} (starting...)
Nov 29 04:08:50 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:50 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:50 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:50.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:50 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump loads {prefix=dump loads} (starting...)
Nov 29 04:08:50 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 29 04:08:50 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203379054' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 29 04:08:50 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38397 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:50 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 29 04:08:51 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46312 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:51 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:51.151+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 29 04:08:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947625235' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 29 04:08:51 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:51 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46730 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38418 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:51 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:51.586+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:51 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 29 04:08:51 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 29 04:08:51 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2228642152' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 29 04:08:51 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46748 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:51 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46342 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:52 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: ops {prefix=ops} (starting...)
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062484839' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784177504' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 29 04:08:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:52 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46357 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:52 np0005539550 podman[426329]: 2025-11-29 09:08:52.388093921 +0000 UTC m=+0.123246567 container health_status a934e36cc85bc026d5a0180c71204ca94bdb3a8ac2c47df8793a50961350acd6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058163353' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:08:52 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:52 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:52 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:52.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:52 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: session ls {prefix=session ls} (starting...)
Nov 29 04:08:52 np0005539550 ceph-mds[93677]: mds.cephfs.compute-0.qcwnhf asok_command: status {prefix=status} (starting...)
Nov 29 04:08:52 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46769 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:08:52 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457128612' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:08:53 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38475 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:53 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:08:53 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46402 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:53 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:53.526+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:08:53 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:08:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 29 04:08:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995453598' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 29 04:08:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 29 04:08:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1004244981' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 29 04:08:53 np0005539550 nova_compute[257631]: 2025-11-29 09:08:53.813 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:53 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:08:53 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1365981784' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:08:53 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46820 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:53 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:53.991+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:08:53 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:08:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:54.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2152507495' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064492046' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 29 04:08:54 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38529 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:54 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:54.542+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:08:54 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 29 04:08:54 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:54 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000026s ======
Nov 29 04:08:54 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:54.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1745423051' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 29 04:08:54 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398043330' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 29 04:08:54 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46862 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46456 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:55 np0005539550 nova_compute[257631]: 2025-11-29 09:08:55.040 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38559 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46874 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:08:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2955263987' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 29 04:08:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770865317' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46886 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38574 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46468 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 29 04:08:55 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/61183061' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 29 04:08:55 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46901 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38586 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:56.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46480 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 385 heartbeat osd_stat(store_statfs(0x1a5815000/0x0/0x1bfc00000, data 0x4244ece/0x4439000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.682620049s of 15.909779549s, submitted: 44
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 377806848 unmapped: 63176704 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2dec9a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a5014000/0x0/0x1bfc00000, data 0x4a44ef1/0x4c3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dec0800 session 0x556d2ab88d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 62627840 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9e2b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab35400 session 0x556d2ab8d0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2cfef860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378355712 unmapped: 62627840 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdd400 session 0x556d2cfef2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2f943400 session 0x556d2c72ba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1021e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab35400 session 0x556d2cfd6960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378421248 unmapped: 62562304 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4399445 data_alloc: 234881024 data_used: 33529856
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 378486784 unmapped: 62496768 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a3ce0000/0x0/0x1bfc00000, data 0x5963cf8/0x5b5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 379281408 unmapped: 61702144 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d2905a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 383811584 unmapped: 57171968 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdd400 session 0x556d2b50e780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 384851968 unmapped: 56131584 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2dec9c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2b8cd680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2ca31c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389365760 unmapped: 51617792 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a3c95000/0x0/0x1bfc00000, data 0x59aecf8/0x5ba9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,3,0,1,0,15,2])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2cfee960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4456051 data_alloc: 251658240 data_used: 42823680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386326528 unmapped: 54657024 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.058428764s of 10.070032120s, submitted: 99
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d900960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdd400 session 0x556d2ab8d0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386351104 unmapped: 54632448 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2db51400 session 0x556d2b8cc000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2de8e000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2acaaf00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393306112 unmapped: 47677440 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393338880 unmapped: 47644672 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dbbe000 session 0x556d2cfd7680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2cfef680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 42377216 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2cad6780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d0bc3c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2cfef860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a31e0000/0x0/0x1bfc00000, data 0x6462d21/0x665e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4690952 data_alloc: 251658240 data_used: 53731328
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393846784 unmapped: 47136768 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691244 data_alloc: 251658240 data_used: 53731328
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a2555000/0x0/0x1bfc00000, data 0x70edd5a/0x72e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2db51400 session 0x556d2d9045a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dbbe000 session 0x556d2d0bde00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393863168 unmapped: 47120384 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.364130020s of 12.096626282s, submitted: 100
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b626000 session 0x556d2de9f680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2cfd7e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396918784 unmapped: 44064768 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1b6e000/0x0/0x1bfc00000, data 0x7ad2d8d/0x7cd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396943360 unmapped: 44040192 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4787403 data_alloc: 268435456 data_used: 55635968
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 397819904 unmapped: 43163648 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400220160 unmapped: 40763392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1aed000/0x0/0x1bfc00000, data 0x7b53d8d/0x7d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400220160 unmapped: 40763392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1aed000/0x0/0x1bfc00000, data 0x7b53d8d/0x7d51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400220160 unmapped: 40763392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400228352 unmapped: 40755200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d30619c00 session 0x556d2df88960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2debc000 session 0x556d2b50e3c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4814324 data_alloc: 268435456 data_used: 59850752
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2dec7800 session 0x556d2c72ba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400228352 unmapped: 40755200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ca87400 session 0x556d2ca31e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 400228352 unmapped: 40755200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d30619c00 session 0x556d2b4d90e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2debc000 session 0x556d2cbfb860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401080320 unmapped: 39903232 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.222202778s of 10.293131828s, submitted: 128
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403972096 unmapped: 37011456 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a1acc000/0x0/0x1bfc00000, data 0x7b73def/0x7d72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,2,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 31662080 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4911206 data_alloc: 285212672 data_used: 72708096
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409346048 unmapped: 31637504 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 31629312 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409370624 unmapped: 31612928 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab33000 session 0x556d2ab88d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2ab35400 session 0x556d2d904780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 heartbeat osd_stat(store_statfs(0x1a16e9000/0x0/0x1bfc00000, data 0x7f48def/0x8147000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410607616 unmapped: 30375936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 ms_handle_reset con 0x556d2b34a800 session 0x556d2cfefc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc9860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 30834688 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ca87400 session 0x556d2dabef00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ab35400 session 0x556d2dabfa40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a16ed000/0x0/0x1bfc00000, data 0x7f4eb74/0x814f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [3,2,3,1,0,0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4968211 data_alloc: 285212672 data_used: 78028800
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 422912000 unmapped: 18071552 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416800768 unmapped: 24182784 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d30619c00 session 0x556d2cfd74a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2debc000 session 0x556d2d9054a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418021376 unmapped: 22962176 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabef00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 heartbeat osd_stat(store_statfs(0x1a0a6e000/0x0/0x1bfc00000, data 0x8bccbd6/0x8dce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.058282375s of 10.044799805s, submitted: 101
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 422068224 unmapped: 18915328 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 387 handle_osd_map epochs [388,388], i have 388, src has [1,388]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 388 ms_handle_reset con 0x556d2ab35400 session 0x556d2cfefc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 388 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dec83c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 388 ms_handle_reset con 0x556d2db53000 session 0x556d2ac19a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421019648 unmapped: 19963904 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 388 heartbeat osd_stat(store_statfs(0x19f85f000/0x0/0x1bfc00000, data 0x9ddd904/0x9fdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5217874 data_alloc: 285212672 data_used: 84238336
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421052416 unmapped: 19931136 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2ab33000 session 0x556d2df881e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2b34a800 session 0x556d2c9e23c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 389 heartbeat osd_stat(store_statfs(0x19fdec000/0x0/0x1bfc00000, data 0x8ee2643/0x90e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2ab35400 session 0x556d2de8f4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424468480 unmapped: 16515072 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424943616 unmapped: 16039936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 389 ms_handle_reset con 0x556d2db53000 session 0x556d2cfeeb40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424943616 unmapped: 16039936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424943616 unmapped: 16039936 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5068775 data_alloc: 285212672 data_used: 74317824
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 16007168 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 16007168 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 390 heartbeat osd_stat(store_statfs(0x1a04b1000/0x0/0x1bfc00000, data 0x9189275/0x938c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2debc000 session 0x556d2da685a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d2905a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2db42400 session 0x556d2d1cdc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ca87400 session 0x556d2cfef860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411475968 unmapped: 29507584 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ab35400 session 0x556d2da69860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.420536995s of 10.264038086s, submitted: 352
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411475968 unmapped: 29507584 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2b34a800 session 0x556d2b454d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411484160 unmapped: 29499392 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a1a01000/0x0/0x1bfc00000, data 0x7c3af7a/0x7e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4799495 data_alloc: 251658240 data_used: 51978240
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2dec85a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ad5d000 session 0x556d2de9fe00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 ms_handle_reset con 0x556d2ca87400 session 0x556d2d0bde00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4820199 data_alloc: 268435456 data_used: 55132160
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411492352 unmapped: 29491200 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 heartbeat osd_stat(store_statfs(0x1a19fe000/0x0/0x1bfc00000, data 0x7c3df7a/0x7e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 handle_osd_map epochs [392,392], i have 392, src has [1,392]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 391 handle_osd_map epochs [392,392], i have 392, src has [1,392]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 29483008 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 392 heartbeat osd_stat(store_statfs(0x1a19fa000/0x0/0x1bfc00000, data 0x7c3fb83/0x7e43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 411500544 unmapped: 29483008 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 392 ms_handle_reset con 0x556d2db42400 session 0x556d2d904000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 392 ms_handle_reset con 0x556d2db53000 session 0x556d2c9821e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.621694565s of 10.127067566s, submitted: 42
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419889152 unmapped: 21094400 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 392 ms_handle_reset con 0x556d2db53000 session 0x556d2b5583c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419897344 unmapped: 21086208 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4944626 data_alloc: 268435456 data_used: 68145152
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419897344 unmapped: 21086208 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418865152 unmapped: 22118400 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419962880 unmapped: 21020672 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a0942000/0x0/0x1bfc00000, data 0x8cf1be5/0x8ef6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 393 ms_handle_reset con 0x556d2ca87400 session 0x556d2d9043c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 19865600 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 19865600 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4933270 data_alloc: 268435456 data_used: 62816256
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421117952 unmapped: 19865600 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421126144 unmapped: 19857408 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a1536000/0x0/0x1bfc00000, data 0x81038fa/0x8308000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421134336 unmapped: 19849216 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a1536000/0x0/0x1bfc00000, data 0x81038fa/0x8308000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421134336 unmapped: 19849216 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.330789566s of 10.588541031s, submitted: 151
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4927694 data_alloc: 268435456 data_used: 62820352
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 393 heartbeat osd_stat(store_statfs(0x1a1515000/0x0/0x1bfc00000, data 0x81248fa/0x8329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1515000/0x0/0x1bfc00000, data 0x81248fa/0x8329000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421273600 unmapped: 19709952 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4938804 data_alloc: 268435456 data_used: 63123456
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421289984 unmapped: 19693568 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1511000/0x0/0x1bfc00000, data 0x8126503/0x832c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4938964 data_alloc: 268435456 data_used: 63127552
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.563270569s of 13.047464371s, submitted: 30
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a150f000/0x0/0x1bfc00000, data 0x8129503/0x832f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4937964 data_alloc: 268435456 data_used: 63139840
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d0bd4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421298176 unmapped: 19685376 heap: 440983552 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a14f9000/0x0/0x1bfc00000, data 0x813f503/0x8345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dd45c00 session 0x556d2d291680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dec3000 session 0x556d2cbfb860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a14f9000/0x0/0x1bfc00000, data 0x813f503/0x8345000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cbfa780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5008471 data_alloc: 268435456 data_used: 63135744
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2b5132c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421683200 unmapped: 23502848 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ca87400 session 0x556d2de9fc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.587997437s of 10.050396919s, submitted: 37
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db53000 session 0x556d2c72b0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421707776 unmapped: 23478272 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a0c51000/0x0/0x1bfc00000, data 0x89e84f3/0x8bed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421707776 unmapped: 23478272 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dd45c00 session 0x556d2ab892c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 421715968 unmapped: 23470080 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5018024 data_alloc: 268435456 data_used: 64372736
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 422305792 unmapped: 22880256 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a0c4c000/0x0/0x1bfc00000, data 0x89ed4f3/0x8bf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 423370752 unmapped: 21815296 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db42400 session 0x556d2d10f680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db53000 session 0x556d2de9e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a15ca000/0x0/0x1bfc00000, data 0x806f491/0x8273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4943682 data_alloc: 268435456 data_used: 64090112
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab35400 session 0x556d2dec94a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab33000 session 0x556d2c982000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a15cb000/0x0/0x1bfc00000, data 0x806f491/0x8273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 28262400 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.307431221s of 10.017060280s, submitted: 61
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2b34a000 session 0x556d2dabf4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab32400 session 0x556d2d900960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 30056448 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a15cb000/0x0/0x1bfc00000, data 0x806f491/0x8273000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab33000 session 0x556d2d904000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415137792 unmapped: 30048256 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2ab35400 session 0x556d2dec85a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 30040064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4837114 data_alloc: 268435456 data_used: 60567552
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 30040064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415145984 unmapped: 30040064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a20f8000/0x0/0x1bfc00000, data 0x75443bd/0x7745000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 27942912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412614656 unmapped: 32571392 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1aff000/0x0/0x1bfc00000, data 0x7b3e3bd/0x7d3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 32129024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4890160 data_alloc: 268435456 data_used: 61841408
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1a7c000/0x0/0x1bfc00000, data 0x7bc13bd/0x7dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413057024 unmapped: 32129024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a1a7c000/0x0/0x1bfc00000, data 0x7bc13bd/0x7dc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d300ddc00 session 0x556d2b9f1c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2db42400 session 0x556d2b562000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.311465263s of 11.517497063s, submitted: 143
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 heartbeat osd_stat(store_statfs(0x1a21a1000/0x0/0x1bfc00000, data 0x749c3bd/0x769d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4826815 data_alloc: 268435456 data_used: 59699200
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2cfef680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 412991488 unmapped: 32194560 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 395 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabe960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 395 ms_handle_reset con 0x556d2ab32400 session 0x556d2c9e3a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 32178176 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 395 ms_handle_reset con 0x556d2ab35400 session 0x556d2dce4f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2179000/0x0/0x1bfc00000, data 0x74c10e0/0x76c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 32178176 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 ms_handle_reset con 0x556d300ddc00 session 0x556d2c9e3680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 32169984 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 ms_handle_reset con 0x556d2ab32400 session 0x556d2dec92c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2175000/0x0/0x1bfc00000, data 0x74c2e57/0x76c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4832126 data_alloc: 268435456 data_used: 59707392
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 32169984 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc9e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2172000/0x0/0x1bfc00000, data 0x74c8df5/0x76cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4829554 data_alloc: 268435456 data_used: 59707392
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 heartbeat osd_stat(store_statfs(0x1a2171000/0x0/0x1bfc00000, data 0x74c9df5/0x76cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46919 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.398031235s of 11.863055229s, submitted: 72
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2dabeb40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 32153600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413040640 unmapped: 32145408 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db53000 session 0x556d2da69e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dd45800 session 0x556d2cfef860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2da69860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab33000 session 0x556d2d102000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413040640 unmapped: 32145408 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2d9041e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2dce4d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db53000 session 0x556d2de9e3c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2dabda40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab33000 session 0x556d2b513c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2dabe780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4905470 data_alloc: 268435456 data_used: 59715584
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a1931000/0x0/0x1bfc00000, data 0x7d099fe/0x7f0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2d900d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcdcc00 session 0x556d2c872780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a1931000/0x0/0x1bfc00000, data 0x7d099fe/0x7f0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2dec8f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413564928 unmapped: 31621120 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce43c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a1931000/0x0/0x1bfc00000, data 0x7d099fe/0x7f0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2b8cc000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415973376 unmapped: 29212672 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dec7800 session 0x556d2d9001e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dd45800 session 0x556d2d905e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2de9eb40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2b626000 session 0x556d2dd33e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2de9f860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414081024 unmapped: 31105024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a168a000/0x0/0x1bfc00000, data 0x7fb09fe/0x81b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4932596 data_alloc: 268435456 data_used: 59719680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414081024 unmapped: 31105024 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 29368320 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db51400 session 0x556d2cfd7860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.765192986s of 11.799226761s, submitted: 81
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dcdc800 session 0x556d2d1ccd20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 29368320 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2b56ad20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415817728 unmapped: 29368320 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a168b000/0x0/0x1bfc00000, data 0x7fb09cb/0x81b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 29270016 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 4989851 data_alloc: 268435456 data_used: 64364544
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416342016 unmapped: 28844032 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2b626000 session 0x556d2b5585a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 413138944 unmapped: 32047104 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db51400 session 0x556d2c9e2f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2dec7800 session 0x556d2b50e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409034752 unmapped: 36151296 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab35400 session 0x556d2da69680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2b626000 session 0x556d2d901860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2db51400 session 0x556d2cfd6960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 ms_handle_reset con 0x556d2ab32400 session 0x556d2de8f4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a44e7000/0x0/0x1bfc00000, data 0x5155969/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4485045 data_alloc: 251658240 data_used: 43249664
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 39542784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 heartbeat osd_stat(store_statfs(0x1a44e7000/0x0/0x1bfc00000, data 0x5155969/0x5356000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.329918861s of 10.000563622s, submitted: 145
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 398 ms_handle_reset con 0x556d2dec7800 session 0x556d2dfc9860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 405528576 unmapped: 39657472 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a445b000/0x0/0x1bfc00000, data 0x51e06bd/0x53e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x163bf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406577152 unmapped: 38608896 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a3fa3000/0x0/0x1bfc00000, data 0x52876bd/0x5488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4510129 data_alloc: 251658240 data_used: 36737024
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 398 heartbeat osd_stat(store_statfs(0x1a3fa3000/0x0/0x1bfc00000, data 0x52876bd/0x5488000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407232512 unmapped: 37953536 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 398 ms_handle_reset con 0x556d38d0dc00 session 0x556d2ab8d0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406831104 unmapped: 38354944 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4544035 data_alloc: 251658240 data_used: 36745216
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a39f4000/0x0/0x1bfc00000, data 0x58372c6/0x5a39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406831104 unmapped: 38354944 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406831104 unmapped: 38354944 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406839296 unmapped: 38346752 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.460285187s of 10.763207436s, submitted: 123
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2b513c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 38002688 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab32400 session 0x556d2dce4b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 38002688 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4580727 data_alloc: 251658240 data_used: 36823040
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 38002688 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3647000/0x0/0x1bfc00000, data 0x5be42d6/0x5de7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407166976 unmapped: 38019072 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2c9830e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2d1ea5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db51400 session 0x556d2de9ed20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2b512b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d1eb680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2cfd70e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab32400 session 0x556d2d9001e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4928000/0x0/0x1bfc00000, data 0x45c42d6/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4382559 data_alloc: 234881024 data_used: 33337344
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396419072 unmapped: 48766976 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2df88780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2de9f2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394223616 unmapped: 50962432 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394223616 unmapped: 50962432 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.770096779s of 10.134560585s, submitted: 58
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394199040 unmapped: 50987008 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394199040 unmapped: 50987008 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2de8e780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dabcd20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4333232 data_alloc: 234881024 data_used: 31100928
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386596864 unmapped: 58589184 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d9041e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a51c7000/0x0/0x1bfc00000, data 0x40642d6/0x4267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6111000/0x0/0x1bfc00000, data 0x311a2d6/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4152146 data_alloc: 234881024 data_used: 21069824
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6111000/0x0/0x1bfc00000, data 0x311a2d6/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 386613248 unmapped: 58572800 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6111000/0x0/0x1bfc00000, data 0x311a2d6/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.270173073s of 12.335928917s, submitted: 25
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4190044 data_alloc: 234881024 data_used: 21090304
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391307264 unmapped: 53878784 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 53542912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a57c5000/0x0/0x1bfc00000, data 0x3a662d6/0x3c69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 53542912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391643136 unmapped: 53542912 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dcd7c00 session 0x556d2dec85a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391528448 unmapped: 53657600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5724000/0x0/0x1bfc00000, data 0x3b072d6/0x3d0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4241134 data_alloc: 234881024 data_used: 22396928
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391528448 unmapped: 53657600 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d904000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d900960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2dabf4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2c982000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2dec94a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2de9f2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2df88780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2d9001e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2d1eb680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 53592064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2de9ed20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2d1ea5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391593984 unmapped: 53592064 heap: 445186048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2c9830e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2dce4b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2b513c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec7800 session 0x556d2b50e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2acaad20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a49ac000/0x0/0x1bfc00000, data 0x487c3aa/0x4a82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x167cf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 56664064 heap: 448864256 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d1ccd20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2de9f860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2dd33e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2de9eb40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2da69680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b558960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392560640 unmapped: 57925632 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2da692c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2de8fa40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2dd32f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dabde00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2dfc8d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2dfc81e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444547 data_alloc: 234881024 data_used: 22429696
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 58441728 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2d0b4780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.393129349s of 11.138072968s, submitted: 184
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2cc42000 session 0x556d2de9e1e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392044544 unmapped: 58441728 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392060928 unmapped: 58425344 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392060928 unmapped: 58425344 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e64000/0x0/0x1bfc00000, data 0x54023dd/0x560a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392069120 unmapped: 58417152 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4471720 data_alloc: 234881024 data_used: 25657344
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392069120 unmapped: 58417152 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2de9fa40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4e59000/0x0/0x1bfc00000, data 0x540d3dd/0x5615000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392077312 unmapped: 58408960 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2d1cde00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2dabf680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4f800 session 0x556d2b56b2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 56664064 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ca87400 session 0x556d2cbfa5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393822208 unmapped: 56664064 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2c9e2780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390135808 unmapped: 60350464 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4452843 data_alloc: 234881024 data_used: 32940032
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391036928 unmapped: 59449344 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.089821815s of 10.173573494s, submitted: 29
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393633792 unmapped: 56852480 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a580e000/0x0/0x1bfc00000, data 0x4a5038b/0x4c58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393699328 unmapped: 56786944 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394764288 unmapped: 55721984 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395567104 unmapped: 54919168 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4563269 data_alloc: 234881024 data_used: 37036032
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d291680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2d9054a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395583488 unmapped: 54902784 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4f800 session 0x556d2dfc94a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395599872 unmapped: 54886400 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395608064 unmapped: 54878208 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a51b9000/0x0/0x1bfc00000, data 0x49c92f6/0x4bce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395608064 unmapped: 54878208 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395608064 unmapped: 54878208 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2d5e3400 session 0x556d2d0bd4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2b50f680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4428925 data_alloc: 234881024 data_used: 33460224
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2d291e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c48000/0x0/0x1bfc00000, data 0x46212f6/0x4826000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,1,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396869632 unmapped: 53616640 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.676429749s of 10.251730919s, submitted: 194
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398491648 unmapped: 51994624 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398499840 unmapped: 51986432 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2d1ea000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a538b000/0x0/0x1bfc00000, data 0x4ede2f6/0x50e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [0,0,0,0,0,2,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398196736 unmapped: 52289536 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dec81e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab32400 session 0x556d2ab95860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce4b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393633792 unmapped: 56852480 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4418939 data_alloc: 234881024 data_used: 29736960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 56811520 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a592d000/0x0/0x1bfc00000, data 0x45c22f6/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 56811520 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a592d000/0x0/0x1bfc00000, data 0x45c22f6/0x47c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 56811520 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2dd32f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2dd32780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5ca8000/0x0/0x1bfc00000, data 0x45c22e6/0x47c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4415291 data_alloc: 234881024 data_used: 29761536
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5c87000/0x0/0x1bfc00000, data 0x45e32e6/0x47e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393543680 unmapped: 56942592 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2b5132c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.973278999s of 13.644609451s, submitted: 82
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230151 data_alloc: 234881024 data_used: 21344256
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d32e98000 session 0x556d2da68960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6cde000/0x0/0x1bfc00000, data 0x358c2e6/0x3790000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4230151 data_alloc: 234881024 data_used: 21344256
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 387997696 unmapped: 62488576 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389046272 unmapped: 61440000 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389046272 unmapped: 61440000 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6cd8000/0x0/0x1bfc00000, data 0x35922e6/0x3796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389046272 unmapped: 61440000 heap: 450486272 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.106501579s of 10.134304047s, submitted: 13
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4283103 data_alloc: 234881024 data_used: 21344256
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc8780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389128192 unmapped: 65028096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389128192 unmapped: 65028096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620e000/0x0/0x1bfc00000, data 0x405c2e6/0x4260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306839 data_alloc: 234881024 data_used: 21344256
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389136384 unmapped: 65019904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2c8730e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389152768 unmapped: 65003520 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620e000/0x0/0x1bfc00000, data 0x405c2e6/0x4260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620e000/0x0/0x1bfc00000, data 0x405c2e6/0x4260000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 389152768 unmapped: 65003520 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4386532 data_alloc: 234881024 data_used: 28667904
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.490860939s of 10.591947556s, submitted: 13
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a620c000/0x0/0x1bfc00000, data 0x405d2e6/0x4261000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4387392 data_alloc: 234881024 data_used: 28667904
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a6207000/0x0/0x1bfc00000, data 0x40632e6/0x4267000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2ac183c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2c9832c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390258688 unmapped: 63897600 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390922240 unmapped: 63234048 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4465576 data_alloc: 234881024 data_used: 28741632
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5806000/0x0/0x1bfc00000, data 0x4a63348/0x4c68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390922240 unmapped: 63234048 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a5803000/0x0/0x1bfc00000, data 0x4a66348/0x4c6b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4f800 session 0x556d2dfc9e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2d291a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2de9e000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390922240 unmapped: 63234048 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b563a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.726837158s of 11.113820076s, submitted: 39
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2dabf680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2aafd800 session 0x556d2dce5680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabf0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2de9fe00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2dfc83c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391143424 unmapped: 63012864 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2b5134a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391143424 unmapped: 63012864 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 391143424 unmapped: 63012864 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489706 data_alloc: 234881024 data_used: 28745728
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2cfef0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a55a6000/0x0/0x1bfc00000, data 0x4cc137b/0x4ec8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2c9e21e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a55a6000/0x0/0x1bfc00000, data 0x4cc137b/0x4ec8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2b559a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b626000 session 0x556d2cad70e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392200192 unmapped: 61956096 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2dce5a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392208384 unmapped: 61947904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad53400 session 0x556d2ac183c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2c8730e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4346826 data_alloc: 234881024 data_used: 26660864
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392208384 unmapped: 61947904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392208384 unmapped: 61947904 heap: 454156288 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a644c000/0x0/0x1bfc00000, data 0x3e1b37b/0x4022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2dd32780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2dd32f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2ab95860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.578354836s of 10.390926361s, submitted: 44
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2b627c00 session 0x556d2dec81e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1ea000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393510912 unmapped: 64847872 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 64684032 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d300dd400 session 0x556d2d1021e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dec2000 session 0x556d2d9003c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 393674752 unmapped: 64684032 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4366596 data_alloc: 234881024 data_used: 25120768
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cbfba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 390529024 unmapped: 67829760 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394231808 unmapped: 64126976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392298496 unmapped: 66060288 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4912000/0x0/0x1bfc00000, data 0x47b735b/0x49bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a4912000/0x0/0x1bfc00000, data 0x47b735b/0x49bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 66125824 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392232960 unmapped: 66125824 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a48eb000/0x0/0x1bfc00000, data 0x47dd35b/0x49e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4462371 data_alloc: 234881024 data_used: 25239552
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db41c00 session 0x556d2dec94a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a48eb000/0x0/0x1bfc00000, data 0x47dd35b/0x49e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 65929216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 65929216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 392429568 unmapped: 65929216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394657792 unmapped: 63700992 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.439727783s of 12.403476715s, submitted: 96
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 394657792 unmapped: 63700992 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4607603 data_alloc: 234881024 data_used: 35815424
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396345344 unmapped: 62013440 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3a8b000/0x0/0x1bfc00000, data 0x563e35b/0x5843000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,6])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2de9e3c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 62930944 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395427840 unmapped: 62930944 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 395681792 unmapped: 62676992 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4653122 data_alloc: 234881024 data_used: 36085760
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a37fc000/0x0/0x1bfc00000, data 0x58cd35b/0x5ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a37fc000/0x0/0x1bfc00000, data 0x58cd35b/0x5ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a37fc000/0x0/0x1bfc00000, data 0x58cd35b/0x5ad2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396001280 unmapped: 62357504 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 396009472 unmapped: 62349312 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.237027168s of 10.552100182s, submitted: 74
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d300dd400 session 0x556d2cbfb4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2da69c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2debf000 session 0x556d2c9e3c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4656518 data_alloc: 234881024 data_used: 36139008
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 397377536 unmapped: 60981248 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d3b587400 session 0x556d2da690e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398327808 unmapped: 60030976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3277000/0x0/0x1bfc00000, data 0x5e4a35b/0x604f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398327808 unmapped: 60030976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a3277000/0x0/0x1bfc00000, data 0x5e4a35b/0x604f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,12])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401965056 unmapped: 56393728 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2ecc000/0x0/0x1bfc00000, data 0x61f436b/0x63fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398450688 unmapped: 59908096 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a2e93000/0x0/0x1bfc00000, data 0x623536b/0x643b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,2])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4727187 data_alloc: 234881024 data_used: 36171776
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 398458880 unmapped: 59899904 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406519808 unmapped: 51838976 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2de9eb40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2dabf0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2debf000 session 0x556d2b454d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 59138048 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399237120 unmapped: 59121664 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399245312 unmapped: 59113472 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d300dd400 session 0x556d2b5132c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4b800 session 0x556d2d0bda40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4795751 data_alloc: 234881024 data_used: 36540416
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2db4b800 session 0x556d2d904960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.026745796s of 10.590575218s, submitted: 91
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 59105280 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
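This RocksDB line is worth flagging: nine immutable memtables waiting behind a newly created one means flushes are falling behind ingest, and RocksDB throttles or stalls writes once the count reaches its configured limit (the exact threshold depends on BlueStore's max_write_buffer_number setting, not shown here). A sketch to surface such backlogs when scanning a log; the warn_at threshold is an arbitrary illustrative choice:

    # Sketch: flag memtable flush backlog from lines like the one above.
    import re

    def memtable_backlog(line, warn_at=4):  # warn_at is an assumed threshold
        m = re.search(r"Immutable memtables: (\d+)", line)
        if m and int(m.group(1)) >= warn_at:
            return int(m.group(1))
        return None

    print(memtable_backlog("New memtable created with log file: #53. "
                           "Immutable memtables: 9."))  # -> 9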
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a25b6000/0x0/0x1bfc00000, data 0x6b1036b/0x6d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2ab33000 session 0x556d2cfd65a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a25b6000/0x0/0x1bfc00000, data 0x6b1036b/0x6d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1692f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401416192 unmapped: 56942592 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dd44c00 session 0x556d2dce5c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 56934400 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2debf000 session 0x556d2dabf4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 56934400 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 401424384 unmapped: 56934400 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4840481 data_alloc: 251658240 data_used: 41115648
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403660800 unmapped: 54697984 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a1384000/0x0/0x1bfc00000, data 0x6ba337b/0x6daa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403660800 unmapped: 54697984 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 54673408 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 54673408 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 heartbeat osd_stat(store_statfs(0x1a1146000/0x0/0x1bfc00000, data 0x6ddf57b/0x6fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 ms_handle_reset con 0x556d2dbbe400 session 0x556d2c9e30e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403685376 unmapped: 54673408 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4857271 data_alloc: 251658240 data_used: 41127936
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403693568 unmapped: 54665216 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.675286293s of 10.106954575s, submitted: 72
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 399 handle_osd_map epochs [400,400], i have 400, src has [1,400]
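The handle_osd_map lines show the OSD catching up on cluster maps: "epochs [399,400], i have 399, src has [1,400]" means the sender offered maps 399-400, this OSD's newest was 399, and the source holds the full history up to 400; the follow-up line confirms "i have 400", and the later messages walk the OSD on to epoch 403. A parsing sketch for that bookkeeping (regex and field names are illustrative):

    # Sketch: track osdmap catch-up progress from handle_osd_map lines.
    import re

    PAT = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+), "
                     r"src has \[(\d+),(\d+)\]")

    def map_progress(line):
        m = PAT.search(line)
        if not m:
            return None
        first, last, have, src_lo, src_hi = map(int, m.groups())
        return {"offered": (first, last), "have": have, "behind": src_hi - have}

    print(map_progress("handle_osd_map epochs [399,400], i have 399, "
                       "src has [1,400]"))  # behind == 1, caught up next line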
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 400 ms_handle_reset con 0x556d2ab33000 session 0x556d2df88d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403701760 unmapped: 54657024 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403701760 unmapped: 54657024 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 400 ms_handle_reset con 0x556d2db4b800 session 0x556d2b562d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 403718144 unmapped: 54640640 heap: 458358784 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2dbbe400 session 0x556d2b563860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2dd44c00 session 0x556d2d81af00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2b34a800 session 0x556d2b522f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2debf000 session 0x556d2c9e3e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 heartbeat osd_stat(store_statfs(0x1a1132000/0x0/0x1bfc00000, data 0x6df2300/0x6ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2db11000 session 0x556d2df89e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 ms_handle_reset con 0x556d2ab33000 session 0x556d2c8732c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428711936 unmapped: 41689088 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 402 ms_handle_reset con 0x556d2b34a800 session 0x556d2acaba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 402 ms_handle_reset con 0x556d2db4b800 session 0x556d2d81a780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5198280 data_alloc: 268435456 data_used: 60989440
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428744704 unmapped: 41656320 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2d901c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db4b800 session 0x556d2ad30f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425000960 unmapped: 45400064 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2b4d3860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2df88000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2dc08800 session 0x556d2d81a1e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2dabf4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2dce5c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d0bda40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 heartbeat osd_stat(store_statfs(0x19e7fc000/0x0/0x1bfc00000, data 0x9721bad/0x9932000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 44204032 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2dfc85a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db4b800 session 0x556d2dabf0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2da69c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 heartbeat osd_stat(store_statfs(0x19e7c6000/0x0/0x1bfc00000, data 0x9756bbd/0x9968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5256784 data_alloc: 268435456 data_used: 62124032
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.167340279s of 10.341951370s, submitted: 175
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 44195840 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5401.4 total, 600.0 interval
Cumulative writes: 56K writes, 211K keys, 56K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.04 MB/s
Cumulative WAL: 56K writes, 20K syncs, 2.70 writes per sync, written: 0.20 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8979 writes, 32K keys, 8979 commit groups, 1.0 writes per commit group, ingest: 34.71 MB, 0.06 MB/s
Interval WAL: 8979 writes, 3547 syncs, 2.53 writes per sync, written: 0.03 GB, 0.06 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2dec94a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2d9003c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d1021e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 heartbeat osd_stat(store_statfs(0x19e7c6000/0x0/0x1bfc00000, data 0x9756bbd/0x9968000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db40400 session 0x556d2d1ea000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2cad6f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2dd32f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2ac183c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d905e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 44179456 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db4d000 session 0x556d2cbfaf00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ab33000 session 0x556d2df885a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2ad52400 session 0x556d2d0b4b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db11c00 session 0x556d2cfd63c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 44179456 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 ms_handle_reset con 0x556d2db42000 session 0x556d2d291e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a400 session 0x556d2b4d30e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2db4d000 session 0x556d2b56a780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a400 session 0x556d2b5221e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ab33000 session 0x556d2dfc9e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad52400 session 0x556d2dce5e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 44171264 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5260674 data_alloc: 268435456 data_used: 62132224
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 44171264 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b4543c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2dec2000 session 0x556d2dabd0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad52400 session 0x556d2ca30000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ab33000 session 0x556d2d0b4000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19e7be000/0x0/0x1bfc00000, data 0x975a838/0x996f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 44154880 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2cfd63c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d300dd400 session 0x556d2c982000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2cc42c00 session 0x556d2ad31680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ab33000 session 0x556d2ca303c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426287104 unmapped: 44113920 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2d0b4b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad52400 session 0x556d2ca310e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2dec2000 session 0x556d2d1030e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fbf0000/0x0/0x1bfc00000, data 0x832a828/0x853e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2ad5d000 session 0x556d2b50f680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426303488 unmapped: 44097536 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2cc42c00 session 0x556d2a0225a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fbf1000/0x0/0x1bfc00000, data 0x832a818/0x853d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a400 session 0x556d2d900b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426311680 unmapped: 44089344 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5075474 data_alloc: 268435456 data_used: 58626048
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426319872 unmapped: 44081152 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc ms_handle_reset ms_handle_reset con 0x556d2ad5c400
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1950343944
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc handle_mgr_configure stats_period=5
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2c8d9400 session 0x556d2de9fa40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426459136 unmapped: 43941888 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.647686958s of 10.896045685s, submitted: 84
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2db42000 session 0x556d2ac183c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426860544 unmapped: 43540480 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b7000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5047087 data_alloc: 268435456 data_used: 68329472
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d68a800 session 0x556d2c872d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b7000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5047087 data_alloc: 268435456 data_used: 68329472
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b7000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428564480 unmapped: 41836544 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.866680145s of 10.913626671s, submitted: 22
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 432676864 unmapped: 37724160 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a08b8000/0x0/0x1bfc00000, data 0x7663805/0x7876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5116831 data_alloc: 285212672 data_used: 74747904
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a05ed000/0x0/0x1bfc00000, data 0x792e805/0x7b41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433020928 unmapped: 37380096 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x1a05ed000/0x0/0x1bfc00000, data 0x792e805/0x7b41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2c8d9400 session 0x556d2dabf4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2cc42c00 session 0x556d2acaba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5202650 data_alloc: 285212672 data_used: 74747904
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4b000/0x0/0x1bfc00000, data 0x82cf867/0x84e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4b000/0x0/0x1bfc00000, data 0x82cf867/0x84e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4b000/0x0/0x1bfc00000, data 0x82cf867/0x84e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433913856 unmapped: 36487168 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5202650 data_alloc: 285212672 data_used: 74747904
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.437170029s of 12.675600052s, submitted: 60
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19fc4a000/0x0/0x1bfc00000, data 0x82d0867/0x84e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5198846 data_alloc: 285212672 data_used: 74752000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433586176 unmapped: 36814848 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2d5e3800 session 0x556d2d81af00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 433676288 unmapped: 36724736 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 436666368 unmapped: 33734656 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 437846016 unmapped: 32555008 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19f36d000/0x0/0x1bfc00000, data 0x8bad867/0x8dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438755328 unmapped: 31645696 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5335234 data_alloc: 285212672 data_used: 84652032
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438755328 unmapped: 31645696 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438763520 unmapped: 31637504 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 heartbeat osd_stat(store_statfs(0x19f36d000/0x0/0x1bfc00000, data 0x8bad867/0x8dc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17acf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 31113216 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 439287808 unmapped: 31113216 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 ms_handle_reset con 0x556d2db47000 session 0x556d2b50e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.948060989s of 14.110429764s, submitted: 28
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442073088 unmapped: 28327936 heap: 470401024 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2d68b400 session 0x556d2da69680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2c8d9400 session 0x556d2dec8b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2d68b400 session 0x556d2df89a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5402262 data_alloc: 301989888 data_used: 93487104
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 405 ms_handle_reset con 0x556d2cc42c00 session 0x556d2c9e25a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 451633152 unmapped: 27172864 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 406 ms_handle_reset con 0x556d2d5e3800 session 0x556d2dabcf00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 406 heartbeat osd_stat(store_statfs(0x19d350000/0x0/0x1bfc00000, data 0xa7b6311/0xa9cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 451756032 unmapped: 27049984 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2db47000 session 0x556d2ca30960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 452952064 unmapped: 25853952 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 455901184 unmapped: 22904832 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2c8d9400 session 0x556d2de8ef00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2cc42c00 session 0x556d2de8f0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 407 ms_handle_reset con 0x556d2d5e3800 session 0x556d2de8fc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 456187904 unmapped: 22618112 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ca43000/0x0/0x1bfc00000, data 0xb0c00b2/0xb2d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5688051 data_alloc: 301989888 data_used: 97107968
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 407 heartbeat osd_stat(store_statfs(0x19ca3b000/0x0/0x1bfc00000, data 0xb0ca050/0xb2e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 458309632 unmapped: 20496384 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2db47000 session 0x556d2df883c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2d68b400 session 0x556d2de8e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 18333696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460472320 unmapped: 18333696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2ab33000 session 0x556d2d1025a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 408 ms_handle_reset con 0x556d2ad52400 session 0x556d2cbfaf00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2db4d000 session 0x556d2d904d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2db11c00 session 0x556d2b9f10e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460488704 unmapped: 18317312 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2c8d9400 session 0x556d2d1ea780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 heartbeat osd_stat(store_statfs(0x19e63f000/0x0/0x1bfc00000, data 0x94c6dd3/0x96df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [0,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2cc42c00 session 0x556d2dce52c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 460578816 unmapped: 18227200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.057971
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1157627904 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5467115 data_alloc: 301989888 data_used: 96972800
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.246646881s of 11.109315872s, submitted: 228
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 461651968 unmapped: 17154048 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 461733888 unmapped: 17072128 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 heartbeat osd_stat(store_statfs(0x19e6ae000/0x0/0x1bfc00000, data 0x9453976/0x966a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2ad52400 session 0x556d2b8cd680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 409 ms_handle_reset con 0x556d2c8d9400 session 0x556d2c8730e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 462348288 unmapped: 16457728 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 410 ms_handle_reset con 0x556d2ab33000 session 0x556d2a0225a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 19800064 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 459005952 unmapped: 19800064 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 412 heartbeat osd_stat(store_statfs(0x19f778000/0x0/0x1bfc00000, data 0x838f24e/0x85a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2db11c00 session 0x556d2d1032c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0571429
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1174405120 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1107296256 meta_used: 5091980 data_alloc: 268435456 data_used: 70852608
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2db41c00 session 0x556d2dce4b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2ab33000 session 0x556d2da68f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 450420736 unmapped: 28385280 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 412 ms_handle_reset con 0x556d2c8d9400 session 0x556d2dabfc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a1f29000/0x0/0x1bfc00000, data 0x5bdff53/0x5df5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 413 ms_handle_reset con 0x556d2ad52400 session 0x556d2cfd6960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 442015744 unmapped: 36790272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2b626000 session 0x556d2dfc8780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4835556 data_alloc: 251658240 data_used: 53211136
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.235727310s of 10.002073288s, submitted: 231
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2ab33000 session 0x556d2d81b2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426352640 unmapped: 52453376 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2c8d7800 session 0x556d2da69a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2f943000 session 0x556d2acaad20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d300dd000 session 0x556d2b5472c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2dec1400 session 0x556d2c9821e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 ms_handle_reset con 0x556d2ab33000 session 0x556d2df88000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427401216 unmapped: 51404800 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a2db5000/0x0/0x1bfc00000, data 0x4d52917/0x4f69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427401216 unmapped: 51404800 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427409408 unmapped: 51396608 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427409408 unmapped: 51396608 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dfc8b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4713492 data_alloc: 251658240 data_used: 37957632
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2b5623c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2f943000 session 0x556d2d1eb0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d300dd000 session 0x556d2d1eba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3696000/0x0/0x1bfc00000, data 0x44712f6/0x4687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ab33000 session 0x556d2b9f0000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dec9680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2dabf0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3696000/0x0/0x1bfc00000, data 0x44712f6/0x4687000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2f943000 session 0x556d2b9f0000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416563200 unmapped: 62242816 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4534601 data_alloc: 234881024 data_used: 26701824
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416571392 unmapped: 62234624 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416530432 unmapped: 62275584 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416538624 unmapped: 62267392 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416538624 unmapped: 62267392 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416538624 unmapped: 62267392 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4570921 data_alloc: 234881024 data_used: 29847552
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.859339714s of 15.267506599s, submitted: 73
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4571209 data_alloc: 234881024 data_used: 29855744
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3695000/0x0/0x1bfc00000, data 0x4471328/0x4689000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416546816 unmapped: 62259200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 60022784 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418783232 unmapped: 60022784 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4628243 data_alloc: 234881024 data_used: 29999104
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a2faa000/0x0/0x1bfc00000, data 0x4b5c328/0x4d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a2faa000/0x0/0x1bfc00000, data 0x4b5c328/0x4d74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4628243 data_alloc: 234881024 data_used: 29999104
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dec0400 session 0x556d2c975860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2b5623c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418824192 unmapped: 59981824 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.229384422s of 15.399053574s, submitted: 41
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2cfd7860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ab33000 session 0x556d2da69a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2da68f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3464000/0x0/0x1bfc00000, data 0x46a2358/0x48b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418553856 unmapped: 60252160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4562666 data_alloc: 234881024 data_used: 24821760
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2d68a400 session 0x556d2b522f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ab33000 session 0x556d2a0225a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3464000/0x0/0x1bfc00000, data 0x46a2358/0x48b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d2f6/0x3673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363470 data_alloc: 218103808 data_used: 17207296
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407224320 unmapped: 71581696 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.333456993s of 11.540062904s, submitted: 69
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407166976 unmapped: 71639040 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d2f6/0x3673000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2cfd70e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b9f10e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416818 data_alloc: 234881024 data_used: 24432640
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46a9000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416818 data_alloc: 234881024 data_used: 24432640
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407183360 unmapped: 71622656 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.415445328s of 10.419466972s, submitted: 1
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407166976 unmapped: 71639040 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407199744 unmapped: 71606272 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416466 data_alloc: 234881024 data_used: 24432640
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407257088 unmapped: 71548928 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416466 data_alloc: 234881024 data_used: 24432640
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2d1eb680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416466 data_alloc: 234881024 data_used: 24432640
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2f943000 session 0x556d2d81b2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2a80b000 session 0x556d2dabed20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 71524352 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46aa000/0x0/0x1bfc00000, data 0x345d306/0x3674000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b4d90e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.516037941s of 14.591504097s, submitted: 348
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2d0bda40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 71516160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 71516160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 71516160 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a46a9000/0x0/0x1bfc00000, data 0x345d316/0x3675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4418944 data_alloc: 234881024 data_used: 24449024
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410017792 unmapped: 68788224 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2dce5a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406306816 unmapped: 72499200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd84400 session 0x556d2ca30000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd84000 session 0x556d2dce4780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d0b4000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2df894a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2ac09e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 406306816 unmapped: 72499200 heap: 478806016 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd84400 session 0x556d2d9041e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ca86800 session 0x556d2dfc8b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a3f44000/0x0/0x1bfc00000, data 0x378d316/0x39a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396515 data_alloc: 218103808 data_used: 17211392
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2de9e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2b627400 session 0x556d2d9003c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 408150016 unmapped: 74850304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ca86800 session 0x556d2c8732c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407314432 unmapped: 75685888 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437a000/0x0/0x1bfc00000, data 0x378d306/0x39a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 75702272 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b5221e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.527547836s of 11.853938103s, submitted: 113
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d8000 session 0x556d2dabd2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 75735040 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4392030 data_alloc: 218103808 data_used: 17207296
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437b000/0x0/0x1bfc00000, data 0x378d2f6/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 75735040 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 75735040 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 75726848 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437b000/0x0/0x1bfc00000, data 0x378d2f6/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407126016 unmapped: 75874304 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 75841536 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4466270 data_alloc: 234881024 data_used: 27590656
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 407158784 unmapped: 75841536 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d904f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2ca86800 session 0x556d2cbfbc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a437b000/0x0/0x1bfc00000, data 0x378d2f6/0x39a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409640960 unmapped: 73359360 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4489310 data_alloc: 234881024 data_used: 34934784
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.280815125s of 11.295031548s, submitted: 2
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2dd70c00 session 0x556d2dabfc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409903104 unmapped: 73097216 heap: 483000320 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d1eb0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409911296 unmapped: 76767232 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd84400 session 0x556d2df88000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409919488 unmapped: 76759040 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38b1000/0x0/0x1bfc00000, data 0x42572f6/0x446d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a38ac000/0x0/0x1bfc00000, data 0x425907b/0x4471000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 409919488 unmapped: 76759040 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 410853376 unmapped: 75825152 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4638340 data_alloc: 234881024 data_used: 36003840
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414646272 unmapped: 72032256 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3085000/0x0/0x1bfc00000, data 0x4a8107b/0x4c99000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d1ccd20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2b50e3c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4657302 data_alloc: 234881024 data_used: 36233216
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415105024 unmapped: 71573504 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.646044731s of 11.046380997s, submitted: 123
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415121408 unmapped: 71557120 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3084000/0x0/0x1bfc00000, data 0x4a8108a/0x4c9a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a3060000/0x0/0x1bfc00000, data 0x4aa508a/0x4cbe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651338 data_alloc: 234881024 data_used: 36237312
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2ca86800 session 0x556d2d9043c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 71548928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2ab8c960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2db10c00 session 0x556d2d10e000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415891456 unmapped: 70787072 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2db10c00 session 0x556d2d10fc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d10e960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2dec9c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2ca86800 session 0x556d2dec8000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2dec9a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2c983860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2c983a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2acaaf00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418250752 unmapped: 68427776 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2ca86800 session 0x556d2b5234a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a282e000/0x0/0x1bfc00000, data 0x52d510f/0x54f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd70c00 session 0x556d2d904960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 66748416 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d904780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4834861 data_alloc: 251658240 data_used: 47263744
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419930112 unmapped: 66748416 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 heartbeat osd_stat(store_statfs(0x1a23cf000/0x0/0x1bfc00000, data 0x57330fc/0x594e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419938304 unmapped: 66740224 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4030570316' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.567632675s of 10.894236565s, submitted: 71
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2b4d2000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420085760 unmapped: 66592768 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2dce54a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420085760 unmapped: 66592768 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dc09800 session 0x556d2de8f0e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2a80b400 session 0x556d2dce4000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2a80b400 session 0x556d2c8732c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 62185472 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dabe960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dc09800 session 0x556d2d81a780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 ms_handle_reset con 0x556d2dd71800 session 0x556d2c9e3e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4899776 data_alloc: 268435456 data_used: 55554048
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d81a960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 61874176 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a238a000/0x0/0x1bfc00000, data 0x577b08b/0x5994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2b558960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424984576 unmapped: 61693952 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2e51000/0x0/0x1bfc00000, data 0x4cb2e02/0x4ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2e51000/0x0/0x1bfc00000, data 0x4cb2e02/0x4ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a2e51000/0x0/0x1bfc00000, data 0x4cb2e02/0x4ecd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4780724 data_alloc: 251658240 data_used: 48349184
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 61497344 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2a80b400 session 0x556d2acaba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dfc9680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 59924480 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 59924480 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.362530708s of 11.454614639s, submitted: 34
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426754048 unmapped: 59924480 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a2e46000/0x0/0x1bfc00000, data 0x4cbba0b/0x4ed7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x17edf9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426819584 unmapped: 59858944 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4783006 data_alloc: 251658240 data_used: 48893952
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2dc09800 session 0x556d2de9e000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2dd71800 session 0x556d2b56ad20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429252608 unmapped: 57425920 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a23d0000/0x0/0x1bfc00000, data 0x531aa0b/0x5536000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429047808 unmapped: 57630720 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429064192 unmapped: 57614336 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a2101000/0x0/0x1bfc00000, data 0x55f1a0b/0x580d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431521792 unmapped: 55156736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 55148544 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4912544 data_alloc: 251658240 data_used: 49897472
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431529984 unmapped: 55148544 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431538176 unmapped: 55140352 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a1e08000/0x0/0x1bfc00000, data 0x58eaa0b/0x5b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.472081184s of 10.912376404s, submitted: 166
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908188 data_alloc: 251658240 data_used: 49905664
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2debd000 session 0x556d2ac08960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2b951c00 session 0x556d2d1021e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431677440 unmapped: 55001088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2a80b400 session 0x556d2dec83c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a2183000/0x0/0x1bfc00000, data 0x4f6c9fb/0x5187000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2db10c00 session 0x556d2de9e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 ms_handle_reset con 0x556d2ca86800 session 0x556d2b562d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427573248 unmapped: 59105280 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4787244 data_alloc: 251658240 data_used: 44290048
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2a80b400 session 0x556d2b5583c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427581440 unmapped: 59097088 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2b951c00 session 0x556d2d901680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2b627400 session 0x556d2ac192c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 419 ms_handle_reset con 0x556d2ca86800 session 0x556d2dce4f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 420 ms_handle_reset con 0x556d2debd000 session 0x556d2d1cdc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438558720 unmapped: 48119808 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 420 ms_handle_reset con 0x556d2db10c00 session 0x556d2b5234a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 438566912 unmapped: 48111616 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a2f66000/0x0/0x1bfc00000, data 0x43d7433/0x45f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 420 handle_osd_map epochs [421,421], i have 421, src has [1,421]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 421 ms_handle_reset con 0x556d2a80b400 session 0x556d2d10e960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431218688 unmapped: 55459840 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a32ed000/0x0/0x1bfc00000, data 0x4403172/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431218688 unmapped: 55459840 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a32ed000/0x0/0x1bfc00000, data 0x4403172/0x4620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4649164 data_alloc: 251658240 data_used: 39026688
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 55451648 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431226880 unmapped: 55451648 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.133255005s of 12.484390259s, submitted: 105
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420233216 unmapped: 66445312 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 ms_handle_reset con 0x556d2ca86800 session 0x556d2b5623c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4479290 data_alloc: 234881024 data_used: 25804800
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a3d6f000/0x0/0x1bfc00000, data 0x35d5d19/0x37f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420249600 unmapped: 66428928 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420257792 unmapped: 66420736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420257792 unmapped: 66420736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a3d6f000/0x0/0x1bfc00000, data 0x35d5d19/0x37f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4479290 data_alloc: 234881024 data_used: 25804800
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a3d6f000/0x0/0x1bfc00000, data 0x35d5d19/0x37f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 420257792 unmapped: 66420736 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 423444480 unmapped: 63234048 heap: 486678528 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 ms_handle_reset con 0x556d2c8d7800 session 0x556d2df89680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.028622627s of 10.275989532s, submitted: 80
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2ab894a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431849472 unmapped: 58810368 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424091648 unmapped: 66568192 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x4a99d19/0x4cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424091648 unmapped: 66568192 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4670546 data_alloc: 234881024 data_used: 28876800
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424091648 unmapped: 66568192 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a2c57000/0x0/0x1bfc00000, data 0x4a99d19/0x4cb7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 423 ms_handle_reset con 0x556d2c8d9c00 session 0x556d2d9052c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417521664 unmapped: 73138176 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417538048 unmapped: 73121792 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4644654 data_alloc: 234881024 data_used: 28934144
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a3325000/0x0/0x1bfc00000, data 0x43c9a90/0x45e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 72990720 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.459264755s of 10.672084808s, submitted: 53
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418717696 unmapped: 71942144 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4647720 data_alloc: 234881024 data_used: 28934144
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3317000/0x0/0x1bfc00000, data 0x43d6699/0x45f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418799616 unmapped: 71860224 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4651880 data_alloc: 234881024 data_used: 29356032
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3317000/0x0/0x1bfc00000, data 0x43d6699/0x45f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3315000/0x0/0x1bfc00000, data 0x43d9699/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 418807808 unmapped: 71852032 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3315000/0x0/0x1bfc00000, data 0x43d9699/0x45f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 424 ms_handle_reset con 0x556d2a80b400 session 0x556d2b9f0000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4655828 data_alloc: 234881024 data_used: 30093312
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.640201569s of 12.666808128s, submitted: 20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 424 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d81be00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 75866112 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414793728 unmapped: 75866112 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ca86800 session 0x556d2dfc9e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414810112 unmapped: 75849728 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 75841536 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414818304 unmapped: 75841536 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3e30000/0x0/0x1bfc00000, data 0x38ba580/0x3add000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540438 data_alloc: 234881024 data_used: 21798912
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2db10c00 session 0x556d2d9054a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 76259328 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 76259328 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 414400512 unmapped: 76259328 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2c6661e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2dabd2c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dec1000 session 0x556d2cbfad20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416079872 unmapped: 74579968 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2ca31e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2e1a2800 session 0x556d2dec8d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd84800 session 0x556d2d904d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2ca301e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2cbfa3c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415916032 unmapped: 74743808 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2c9e30e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601573 data_alloc: 234881024 data_used: 24948736
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 74735616 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3a85000/0x0/0x1bfc00000, data 0x404d625/0x3e89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 74735616 heap: 490659840 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.550464630s of 11.660453796s, submitted: 28
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd42000 session 0x556d2acab860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2dec8b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 78389248 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2d10f860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a3a85000/0x0/0x1bfc00000, data 0x404d625/0x3e89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 78381056 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd84800 session 0x556d2d10e780
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 78381056 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4721983 data_alloc: 234881024 data_used: 28655616
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2db51000 session 0x556d2dce45a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 78381056 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dc08000 session 0x556d2dabe000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2dec9860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 78028800 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 78028800 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2de2000/0x0/0x1bfc00000, data 0x4cf15c3/0x4b2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416309248 unmapped: 78028800 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4726105 data_alloc: 234881024 data_used: 28737536
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2de2000/0x0/0x1bfc00000, data 0x4cf15c3/0x4b2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 416317440 unmapped: 78020608 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.725426674s of 11.887782097s, submitted: 30
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419667968 unmapped: 74670080 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770169 data_alloc: 234881024 data_used: 28729344
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 74661888 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2839000/0x0/0x1bfc00000, data 0x52925c3/0x50cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 419676160 unmapped: 74661888 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 68239360 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 68239360 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 68231168 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4864877 data_alloc: 251658240 data_used: 41721856
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a283f000/0x0/0x1bfc00000, data 0x52945c3/0x50cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 68902912 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a283f000/0x0/0x1bfc00000, data 0x52945c3/0x50cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.929699898s of 10.120367050s, submitted: 52
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427270144 unmapped: 67067904 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4908561 data_alloc: 251658240 data_used: 44994560
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d1eb680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dec1000 session 0x556d2b513c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427270144 unmapped: 67067904 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ad52800 session 0x556d2b50e5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2638000/0x0/0x1bfc00000, data 0x549b5c3/0x52d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 70303744 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 70303744 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2688000/0x0/0x1bfc00000, data 0x5445590/0x527e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886082 data_alloc: 251658240 data_used: 41627648
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2686000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.330945969s of 10.648733139s, submitted: 88
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4886098 data_alloc: 251658240 data_used: 41627648
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a2686000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 427548672 unmapped: 66789376 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2dd84800 session 0x556d2df89e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426876928 unmapped: 67461120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426876928 unmapped: 67461120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a268e000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4880610 data_alloc: 251658240 data_used: 41627648
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426876928 unmapped: 67461120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2de8f680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c915c00 session 0x556d2b5581e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2db51000 session 0x556d2de8ef00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a268e000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426885120 unmapped: 67452928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.472830772s of 10.495430946s, submitted: 4
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ad52800 session 0x556d2dce4b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4872896 data_alloc: 251658240 data_used: 41492480
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a268e000/0x0/0x1bfc00000, data 0x5447590/0x5280000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426893312 unmapped: 67444736 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426893312 unmapped: 67444736 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2c8d7800 session 0x556d2d291680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2a80b400 session 0x556d2cbfa1e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 ms_handle_reset con 0x556d2ad52800 session 0x556d2d81a960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28c0000/0x0/0x1bfc00000, data 0x5216580/0x504e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4851682 data_alloc: 251658240 data_used: 41312256
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a28c0000/0x0/0x1bfc00000, data 0x5216580/0x504e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426901504 unmapped: 67436544 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2b627400 session 0x556d2d10fc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2b951c00 session 0x556d2c9e2b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2c915c00 session 0x556d2d81af00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426909696 unmapped: 67428352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a28eb000/0x0/0x1bfc00000, data 0x4dff295/0x5022000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426909696 unmapped: 67428352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2a80b400 session 0x556d2dd345a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 69795840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 69795840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4697398 data_alloc: 251658240 data_used: 36945920
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 69795840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.024978638s of 11.152261734s, submitted: 36
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 426 ms_handle_reset con 0x556d2b627400 session 0x556d2d904960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424706048 unmapped: 69632000 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 427 ms_handle_reset con 0x556d2ad52800 session 0x556d2b4d30e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424714240 unmapped: 69623808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33e5000/0x0/0x1bfc00000, data 0x4302c3d/0x4527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33e5000/0x0/0x1bfc00000, data 0x4302c3d/0x4527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4708353 data_alloc: 251658240 data_used: 37019648
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424722432 unmapped: 69615616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 428 heartbeat osd_stat(store_statfs(0x1a33e5000/0x0/0x1bfc00000, data 0x4302c3d/0x4527000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 428 handle_osd_map epochs [429,429], i have 429, src has [1,429]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424730624 unmapped: 69607424 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424730624 unmapped: 69607424 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a33e3000/0x0/0x1bfc00000, data 0x4304846/0x452a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424730624 unmapped: 69607424 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4710655 data_alloc: 251658240 data_used: 37019648
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dec1000 session 0x556d2d1ea5a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a33e3000/0x0/0x1bfc00000, data 0x4304846/0x452a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dd70800 session 0x556d2ca30d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.204053879s of 11.287176132s, submitted: 45
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2a80b400 session 0x556d2df892c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a33e2000/0x0/0x1bfc00000, data 0x43048a8/0x452b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424738816 unmapped: 69599232 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424747008 unmapped: 69591040 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2ad52800 session 0x556d2d1ccf00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4719096 data_alloc: 251658240 data_used: 38354944
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b627400 session 0x556d2cbfb860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 69156864 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791514 data_alloc: 251658240 data_used: 38965248
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425279488 unmapped: 69058560 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425279488 unmapped: 69058560 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4791514 data_alloc: 251658240 data_used: 38965248
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.591960907s of 13.724922180s, submitted: 31
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dd70800 session 0x556d2ac19c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dec1000 session 0x556d2dec94a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2a80b400 session 0x556d2cad70e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2ad52800 session 0x556d2da68960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 69050368 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4799802 data_alloc: 251658240 data_used: 40140800
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4852746 data_alloc: 251658240 data_used: 47648768
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 68796416 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b951c00 session 0x556d2cfd70e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.680100441s of 11.751850128s, submitted: 4
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2db51000 session 0x556d2ab892c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b6e000/0x0/0x1bfc00000, data 0x4b7a846/0x4da0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2dd70800 session 0x556d2dce50e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 68534272 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2a80b400 session 0x556d2b4d3680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a2b92000/0x0/0x1bfc00000, data 0x4b56846/0x4d7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [1,1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4850717 data_alloc: 251658240 data_used: 48275456
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2ad52800 session 0x556d2dabcd20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a40f9000/0x0/0x1bfc00000, data 0x35ef846/0x3815000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429367296 unmapped: 64970752 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430522368 unmapped: 63815680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d2000/0x0/0x1bfc00000, data 0x3f0e846/0x4134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4680866 data_alloc: 234881024 data_used: 34750464
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d2000/0x0/0x1bfc00000, data 0x3f0e846/0x4134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d2000/0x0/0x1bfc00000, data 0x3f0e846/0x4134000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430538752 unmapped: 63799296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.299698830s of 12.514649391s, submitted: 127
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4670962 data_alloc: 234881024 data_used: 34750464
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429735936 unmapped: 64602112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429744128 unmapped: 64593920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b627400 session 0x556d2da69e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429744128 unmapped: 64593920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429744128 unmapped: 64593920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b951c00 session 0x556d2cfef680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429752320 unmapped: 64585728 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37d6000/0x0/0x1bfc00000, data 0x3f12846/0x4138000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4671350 data_alloc: 234881024 data_used: 34754560
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429760512 unmapped: 64577536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.399600983s of 21.432668686s, submitted: 8
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2db51000 session 0x556d2de8fe00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429924352 unmapped: 64413696 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429924352 unmapped: 64413696 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429924352 unmapped: 64413696 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4676511 data_alloc: 234881024 data_used: 34836480
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4676511 data_alloc: 234881024 data_used: 34836480
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429940736 unmapped: 64397312 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429948928 unmapped: 64389120 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.514173508s of 13.534637451s, submitted: 5
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4678591 data_alloc: 234881024 data_used: 34877440
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693807 data_alloc: 234881024 data_used: 36077568
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429957120 unmapped: 64380928 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693983 data_alloc: 234881024 data_used: 36073472
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.544389725s of 11.950169563s, submitted: 8
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4693167 data_alloc: 234881024 data_used: 36057088
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431005696 unmapped: 63332352 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4692815 data_alloc: 234881024 data_used: 36057088
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4692815 data_alloc: 234881024 data_used: 36057088
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b627400 session 0x556d2dec83c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431013888 unmapped: 63324160 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 ms_handle_reset con 0x556d2b951c00 session 0x556d2d9045a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429801472 unmapped: 64536576 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 heartbeat osd_stat(store_statfs(0x1a37b1000/0x0/0x1bfc00000, data 0x3f36869/0x415d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429809664 unmapped: 64528384 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
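
handle_osd_map records the OSD consuming newly published OSDMap epochs: the bracketed range is what the message carries, "i have" is the OSD's current epoch, and "src has" is the sender's full range. Over this section osd.0 advances from epoch 429 to 438, one or two epochs at a time. Decoding one line (regex and names illustrative):

    import re

    LINE = "osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]"

    m = re.search(r"epochs \[(\d+),(\d+)\], i have (\d+), src has \[(\d+),(\d+)\]",
                  LINE)
    first, last, have, src_lo, src_hi = map(int, m.groups())
    print(f"message carries epochs {first}..{last}; "
          f"osd.0 is {src_hi - have} epoch(s) behind the sender")
    # -> message carries epochs 429..430; osd.0 is 1 epoch(s) behind the sender
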
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.238044739s of 18.349649429s, submitted: 8
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4698909 data_alloc: 234881024 data_used: 35725312
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429826048 unmapped: 64512000 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429826048 unmapped: 64512000 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2dcc5c00 session 0x556d2b547860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a37a8000/0x0/0x1bfc00000, data 0x3fee650/0x4166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4712557 data_alloc: 234881024 data_used: 35729408
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2c8d9800 session 0x556d2b546f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2db51c00 session 0x556d2b56b680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a37a8000/0x0/0x1bfc00000, data 0x3fee650/0x4166000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429834240 unmapped: 64503808 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429842432 unmapped: 64495616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2b627400 session 0x556d2c6661e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429842432 unmapped: 64495616 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430612480 unmapped: 63725568 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2b951c00 session 0x556d2dd33e00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 ms_handle_reset con 0x556d2c8d9800 session 0x556d2acab860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4777565 data_alloc: 234881024 data_used: 35729408
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f38000/0x0/0x1bfc00000, data 0x485d6b2/0x49d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a2f38000/0x0/0x1bfc00000, data 0x485d6b2/0x49d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430161920 unmapped: 64176128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430170112 unmapped: 64167936 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4777565 data_alloc: 234881024 data_used: 35729408
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.582045555s of 15.460376740s, submitted: 42
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430170112 unmapped: 64167936 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430170112 unmapped: 64167936 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a2f34000/0x0/0x1bfc00000, data 0x485f3d5/0x49d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 432 ms_handle_reset con 0x556d2dcc5c00 session 0x556d2b4d2000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430194688 unmapped: 64143360 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 433 ms_handle_reset con 0x556d2dcd6400 session 0x556d2ca30960
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2dd0000/0x0/0x1bfc00000, data 0x49beeed/0x4b3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x182ef9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429522944 unmapped: 64815104 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2c946800 session 0x556d2b513c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2ad5c000 session 0x556d2dabed20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 66068480 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4810492 data_alloc: 234881024 data_used: 35737600
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2b627400 session 0x556d2dce4f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 66068480 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428269568 unmapped: 66068480 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 434 ms_handle_reset con 0x556d2c8d9800 session 0x556d2d1032c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428040192 unmapped: 66297856 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2dcc5c00 session 0x556d2de8f4a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b951c00 session 0x556d2de9e000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1928000/0x0/0x1bfc00000, data 0x4cc6c8e/0x4e46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428048384 unmapped: 66289664 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428064768 unmapped: 66273280 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863765 data_alloc: 234881024 data_used: 35762176
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863765 data_alloc: 234881024 data_used: 35762176
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4863765 data_alloc: 234881024 data_used: 35762176
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428081152 unmapped: 66256896 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.074008942s of 20.379779816s, submitted: 88
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1524000/0x0/0x1bfc00000, data 0x50c89e9/0x5249000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1361000/0x0/0x1bfc00000, data 0x528c9e9/0x540d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4898372 data_alloc: 251658240 data_used: 37662720
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1361000/0x0/0x1bfc00000, data 0x528c9e9/0x540d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428408832 unmapped: 65929216 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b627400 session 0x556d2d102d20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429244416 unmapped: 65093632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2c8d9800 session 0x556d2d1030e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2c946800 session 0x556d2dfc9680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2deba000 session 0x556d2d9005a0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429244416 unmapped: 65093632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 429244416 unmapped: 65093632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a128e000/0x0/0x1bfc00000, data 0x535ea0c/0x54e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [1])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430055424 unmapped: 64282624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964757 data_alloc: 251658240 data_used: 45170688
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.676950455s of 12.754966736s, submitted: 19
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a128c000/0x0/0x1bfc00000, data 0x5360a0c/0x54e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964989 data_alloc: 251658240 data_used: 45170688
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a128c000/0x0/0x1bfc00000, data 0x5360a0c/0x54e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430596096 unmapped: 63741952 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430604288 unmapped: 63733760 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4964989 data_alloc: 251658240 data_used: 45170688
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 430268416 unmapped: 64069632 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a113c000/0x0/0x1bfc00000, data 0x57a8a0c/0x562a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 5004793 data_alloc: 251658240 data_used: 45223936
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a113c000/0x0/0x1bfc00000, data 0x57a8a0c/0x562a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.769065857s of 14.865639687s, submitted: 37
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b627400 session 0x556d2acaba40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2b951c00 session 0x556d2d81b680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1144000/0x0/0x1bfc00000, data 0x57a8a0c/0x562a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2ad5c000 session 0x556d2de9ed20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4999833 data_alloc: 251658240 data_used: 45346816
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2c8d9800 session 0x556d2ad30000
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431546368 unmapped: 62791680 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 heartbeat osd_stat(store_statfs(0x1a1145000/0x0/0x1bfc00000, data 0x57a89e9/0x5629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 431562752 unmapped: 62775296 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 ms_handle_reset con 0x556d2dd44400 session 0x556d2d0bc1e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d300ddc00 session 0x556d2de9ed20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2c946800 session 0x556d2ca310e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2dd44400 session 0x556d2d901a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2d75b400 session 0x556d2b559a40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2b627400 session 0x556d2c9e21e0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2ad5c000 session 0x556d2d81b680
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 428113920 unmapped: 66224128 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 ms_handle_reset con 0x556d2b627400 session 0x556d2d900b40
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 heartbeat osd_stat(store_statfs(0x1a251e000/0x0/0x1bfc00000, data 0x3ff977e/0x4177000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4770683 data_alloc: 251658240 data_used: 37744640
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425951232 unmapped: 68386816 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 437 ms_handle_reset con 0x556d2a80b400 session 0x556d2dd35860
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 437 ms_handle_reset con 0x556d2ad52800 session 0x556d2b512f00
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.628050804s of 10.047689438s, submitted: 132
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 437 ms_handle_reset con 0x556d2c946800 session 0x556d2cbfa3c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4760989 data_alloc: 251658240 data_used: 37629952
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 437 heartbeat osd_stat(store_statfs(0x1a261c000/0x0/0x1bfc00000, data 0x3f2148c/0x4150000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425959424 unmapped: 68378624 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 ms_handle_reset con 0x556d2a80b400 session 0x556d2de8fc20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 68771840 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 68763648 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 68755456 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 68747264 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 68739072 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 68730880 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 68722688 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 68722688 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 68722688 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 68714496 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 68706304 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 68698112 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 68689920 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 68673536 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 68665344 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 68853760 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config show' '{prefix=config show}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425099264 unmapped: 69238784 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424894464 unmapped: 69443584 heap: 494338048 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424894464 unmapped: 80486400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'perf dump' '{prefix=perf dump}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'perf schema' '{prefix=perf schema}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
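The `do_command` entries above record a batch of admin-socket commands (`config diff`, `config show`, `counter dump`, `counter schema`, `log dump`, `perf dump`, `perf histogram dump`, `perf schema`), the kind of sweep a metrics collector typically issues. A minimal sketch issuing the same commands with the `ceph daemon` CLI, assuming it runs on the OSD host with access to the daemon's `.asok` socket:

```python
# Minimal sketch: issue the same admin-socket commands seen in the log.
import json
import subprocess

def admin_socket(daemon: str, *cmd: str) -> dict:
    return json.loads(subprocess.check_output(["ceph", "daemon", daemon, *cmd]))

perf = admin_socket("osd.0", "perf", "dump")
# Print just the BlueStore counter section, if present.
print(json.dumps(perf.get("bluestore", {}), indent=2)[:400])
```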
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424378368 unmapped: 81002496 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424378368 unmapped: 81002496 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424378368 unmapped: 81002496 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424378368 unmapped: 81002496 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6001.4 total, 600.0 interval
Cumulative writes: 61K writes, 229K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.04 MB/s
Cumulative WAL: 61K writes, 22K syncs, 2.67 writes per sync, written: 0.22 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 4840 writes, 17K keys, 4840 commit groups, 1.0 writes per commit group, ingest: 15.96 MB, 0.03 MB/s
Interval WAL: 4840 writes, 2054 syncs, 2.36 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.60              0.00         1    0.600       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.6 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556d2923f610#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6001.4 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 me
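RocksDB emits this periodic stats dump as a single multi-line message; syslog-style transports encode the embedded newlines as the octal escape `#012`, so in a raw journal export the tables arrive flattened onto one (often truncated) line. A minimal unescaping sketch for such exports:

```python
# Minimal sketch: expand syslog octal escapes (#012 is "\n") so the
# flattened RocksDB stats tables become readable again. Note that any
# literal "#" followed by three octal digits is rewritten as well.
import re

def unescape_syslog(record: str) -> str:
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), record)

raw = "** DB Stats **#012Uptime(secs): 6001.4 total, 600.0 interval"
print(unescape_syslog(raw))
```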
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424378368 unmapped: 81002496 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424386560 unmapped: 80994304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424394752 unmapped: 80986112 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424394752 unmapped: 80986112 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424394752 unmapped: 80986112 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424402944 unmapped: 80977920 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 80969728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 80969728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 80969728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 80969728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 80969728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 80969728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424411136 unmapped: 80969728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424419328 unmapped: 80961536 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424419328 unmapped: 80961536 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424427520 unmapped: 80953344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424427520 unmapped: 80953344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424427520 unmapped: 80953344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424427520 unmapped: 80953344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424427520 unmapped: 80953344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424427520 unmapped: 80953344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424443904 unmapped: 80936960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424443904 unmapped: 80936960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424443904 unmapped: 80936960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424443904 unmapped: 80936960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424452096 unmapped: 80928768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424460288 unmapped: 80920576 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424460288 unmapped: 80920576 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424460288 unmapped: 80920576 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424468480 unmapped: 80912384 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424468480 unmapped: 80912384 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424468480 unmapped: 80912384 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424468480 unmapped: 80912384 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424476672 unmapped: 80904192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424476672 unmapped: 80904192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424476672 unmapped: 80904192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424476672 unmapped: 80904192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 80887808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 80887808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 80887808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 80887808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 80887808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 80887808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424493056 unmapped: 80887808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424501248 unmapped: 80879616 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424509440 unmapped: 80871424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424509440 unmapped: 80871424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424509440 unmapped: 80871424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424509440 unmapped: 80871424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424509440 unmapped: 80871424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424509440 unmapped: 80871424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424509440 unmapped: 80871424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424525824 unmapped: 80855040 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424534016 unmapped: 80846848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424534016 unmapped: 80846848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 80838656 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 80838656 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424542208 unmapped: 80838656 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424550400 unmapped: 80830464 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424550400 unmapped: 80830464 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424550400 unmapped: 80830464 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424558592 unmapped: 80822272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424574976 unmapped: 80805888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424574976 unmapped: 80805888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424574976 unmapped: 80805888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424574976 unmapped: 80805888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424574976 unmapped: 80805888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424574976 unmapped: 80805888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424574976 unmapped: 80805888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 80797696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 80797696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 80797696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 80797696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 80797696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424591360 unmapped: 80789504 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424591360 unmapped: 80789504 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424591360 unmapped: 80789504 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 80773120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 80773120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 80773120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 80773120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 80773120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424607744 unmapped: 80773120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 80764928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 80764928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 80764928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 80764928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 80764928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 80764928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424615936 unmapped: 80764928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b3000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424624128 unmapped: 80756736 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4525599 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 217.830841064s of 218.043731689s, submitted: 43
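[annotation] The single _kv_sync_thread line above is the one message in this window that quantifies actual work done. Straightforward arithmetic on the reported figures:

```python
# Utilization of the BlueStore kv sync thread over the reported interval.
idle, total, submitted = 217.830841064, 218.043731689, 43
busy = total - idle  # 0.212890625 s of work in ~218 s
print(f"busy {busy:.3f} s of {total:.1f} s "
      f"({100 * busy / total:.3f}% utilization), "
      f"{submitted} transactions ~ {submitted / total:.2f} tx/s")
# -> roughly 0.098% busy: the commit path is essentially idle here.
```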
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424624128 unmapped: 80756736 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a37b4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1948f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424632320 unmapped: 80748544 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424665088 unmapped: 80715776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424730624 unmapped: 80650240 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424747008 unmapped: 80633856 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424763392 unmapped: 80617472 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424771584 unmapped: 80609280 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424796160 unmapped: 80584704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424804352 unmapped: 80576512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424812544 unmapped: 80568320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424812544 unmapped: 80568320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424812544 unmapped: 80568320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424812544 unmapped: 80568320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424820736 unmapped: 80560128 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 80551936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 80551936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 80551936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 80551936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 80551936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 80551936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424828928 unmapped: 80551936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424837120 unmapped: 80543744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424845312 unmapped: 80535552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424845312 unmapped: 80535552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424845312 unmapped: 80535552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424845312 unmapped: 80535552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424845312 unmapped: 80535552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424845312 unmapped: 80535552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424845312 unmapped: 80535552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424853504 unmapped: 80527360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424853504 unmapped: 80527360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424853504 unmapped: 80527360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424853504 unmapped: 80527360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424853504 unmapped: 80527360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424861696 unmapped: 80519168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424861696 unmapped: 80519168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424861696 unmapped: 80519168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424861696 unmapped: 80519168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424861696 unmapped: 80519168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424861696 unmapped: 80519168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424861696 unmapped: 80519168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424878080 unmapped: 80502784 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424878080 unmapped: 80502784 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424878080 unmapped: 80502784 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424878080 unmapped: 80502784 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424878080 unmapped: 80502784 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424886272 unmapped: 80494592 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424886272 unmapped: 80494592 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424886272 unmapped: 80494592 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424886272 unmapped: 80494592 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424894464 unmapped: 80486400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424894464 unmapped: 80486400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424894464 unmapped: 80486400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424902656 unmapped: 80478208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424902656 unmapped: 80478208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424902656 unmapped: 80478208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424902656 unmapped: 80478208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424902656 unmapped: 80478208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424910848 unmapped: 80470016 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424919040 unmapped: 80461824 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424927232 unmapped: 80453632 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424935424 unmapped: 80445440 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424943616 unmapped: 80437248 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424951808 unmapped: 80429056 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424960000 unmapped: 80420864 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424968192 unmapped: 80412672 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424976384 unmapped: 80404480 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424984576 unmapped: 80396288 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 424992768 unmapped: 80388096 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425000960 unmapped: 80379904 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425009152 unmapped: 80371712 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425017344 unmapped: 80363520 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425025536 unmapped: 80355328 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425033728 unmapped: 80347136 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425041920 unmapped: 80338944 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425050112 unmapped: 80330752 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425058304 unmapped: 80322560 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425066496 unmapped: 80314368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc ms_handle_reset ms_handle_reset con 0x556d300dd400
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1950343944
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: mgrc handle_mgr_configure stats_period=5
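This four-line burst is the OSD's mgr client riding out a ceph-mgr restart or failover: the connection is reset, the old session to v2:192.168.122.100:6800 is torn down, a new session is negotiated against the v2/v1 address pair, and the mgr immediately re-sends its configuration, here telling the OSD to report stats every 5 seconds (stats_period=5). The bracketed address list is easy to pull apart when correlating with mgr logs; a small sketch, again operating only on the log text:

    import re

    line = ("mgrc reconnect Starting new session with "
            "[v2:192.168.122.100:6800/1950343944,v1:192.168.122.100:6801/1950343944]")

    # Each endpoint is protocol:ip:port/nonce.
    for proto, ip, port, nonce in re.findall(r"(v[12]):([\d.]+):(\d+)/(\d+)", line):
        print(proto, ip, port, "nonce", nonce)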
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425074688 unmapped: 80306176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 ms_handle_reset con 0x556d2ad5d000 session 0x556d2da69c20
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425082880 unmapped: 80297984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38601 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
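The lone ceph-mgr line interleaved here is the audit channel recording an orchestrator query: client.admin ran the equivalent of "ceph orch ls --export", and the mgr logged the dispatched command verbatim at DBG level. The cmd payload is plain JSON, so audit entries like this can be filtered mechanically; a sketch assuming only what is printed above:

    import json

    cmd = '[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]'
    for c in json.loads(cmd):
        print(c["prefix"], {k: v for k, v in c.items() if k not in ("prefix", "target")})

which prints "orch ls {'export': True}": a read-only service listing, not a change to the cluster.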
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425091072 unmapped: 80289792 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425099264 unmapped: 80281600 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425107456 unmapped: 80273408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425115648 unmapped: 80265216 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425123840 unmapped: 80257024 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425132032 unmapped: 80248832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425148416 unmapped: 80232448 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425156608 unmapped: 80224256 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425164800 unmapped: 80216064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425172992 unmapped: 80207872 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 80199680 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 80199680 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425181184 unmapped: 80199680 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 80191488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425189376 unmapped: 80191488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 80183296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 80183296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 80183296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 80183296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425197568 unmapped: 80183296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425205760 unmapped: 80175104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425205760 unmapped: 80175104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425205760 unmapped: 80175104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425213952 unmapped: 80166912 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425213952 unmapped: 80166912 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 80158720 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 80158720 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425222144 unmapped: 80158720 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 80150528 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425230336 unmapped: 80150528 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425238528 unmapped: 80142336 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425238528 unmapped: 80142336 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 80134144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 80134144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 80134144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 80134144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425246720 unmapped: 80134144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425254912 unmapped: 80125952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425263104 unmapped: 80117760 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425263104 unmapped: 80117760 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425271296 unmapped: 80109568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425271296 unmapped: 80109568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425271296 unmapped: 80109568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425279488 unmapped: 80101376 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425279488 unmapped: 80101376 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 ms_handle_reset con 0x556d2ab33000 session 0x556d2df883c0
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425279488 unmapped: 80101376 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425287680 unmapped: 80093184 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425295872 unmapped: 80084992 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425295872 unmapped: 80084992 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425295872 unmapped: 80084992 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425295872 unmapped: 80084992 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425295872 unmapped: 80084992 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425304064 unmapped: 80076800 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425304064 unmapped: 80076800 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425312256 unmapped: 80068608 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425312256 unmapped: 80068608 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425320448 unmapped: 80060416 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425320448 unmapped: 80060416 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425328640 unmapped: 80052224 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425336832 unmapped: 80044032 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425336832 unmapped: 80044032 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425336832 unmapped: 80044032 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 80035840 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 80035840 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425345024 unmapped: 80035840 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 80027648 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 80027648 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425353216 unmapped: 80027648 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425361408 unmapped: 80019456 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425377792 unmapped: 80003072 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425377792 unmapped: 80003072 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425377792 unmapped: 80003072 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425377792 unmapped: 80003072 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425385984 unmapped: 79994880 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425385984 unmapped: 79994880 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425394176 unmapped: 79986688 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425394176 unmapped: 79986688 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425402368 unmapped: 79978496 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425410560 unmapped: 79970304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425410560 unmapped: 79970304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425410560 unmapped: 79970304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425410560 unmapped: 79970304 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425426944 unmapped: 79953920 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425426944 unmapped: 79953920 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425426944 unmapped: 79953920 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425426944 unmapped: 79953920 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 79945728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 79945728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425435136 unmapped: 79945728 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425443328 unmapped: 79937536 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425443328 unmapped: 79937536 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425443328 unmapped: 79937536 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425443328 unmapped: 79937536 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425443328 unmapped: 79937536 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425451520 unmapped: 79929344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425451520 unmapped: 79929344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425451520 unmapped: 79929344 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 79912960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 79912960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 79912960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425467904 unmapped: 79912960 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425476096 unmapped: 79904768 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 79896576 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 79896576 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425484288 unmapped: 79896576 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425492480 unmapped: 79888384 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 79880192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 79880192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 79880192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 79880192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 79880192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 79880192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425500672 unmapped: 79880192 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425508864 unmapped: 79872000 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 79863808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 79863808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 79863808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425517056 unmapped: 79863808 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 79855616 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425525248 unmapped: 79855616 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425533440 unmapped: 79847424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425533440 unmapped: 79847424 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 234881024 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425541632 unmapped: 79839232 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425558016 unmapped: 79822848 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 79814656 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425566208 unmapped: 79814656 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 79806464 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425574400 unmapped: 79806464 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 79798272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 79798272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 79798272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425582592 unmapped: 79798272 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 79790080 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 79790080 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425590784 unmapped: 79790080 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 79781888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 79781888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425598976 unmapped: 79781888 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 79773696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 79773696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 79773696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425607168 unmapped: 79773696 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 79765504 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425615360 unmapped: 79765504 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 79757312 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 79757312 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 79757312 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 79757312 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 79757312 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 79757312 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425623552 unmapped: 79757312 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 79749120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 79749120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 79749120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425631744 unmapped: 79749120 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6601.4 total, 600.0 interval
Cumulative writes: 61K writes, 230K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
Cumulative WAL: 61K writes, 23K syncs, 2.66 writes per sync, written: 0.22 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 702 writes, 1067 keys, 702 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
Interval WAL: 702 writes, 350 syncs, 2.01 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
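The stats dump above reached the journal as a single line, with the embedded newlines replaced by #012 escapes (rsyslog-style three-digit octal control-character escaping); it has been expanded back into its original layout. A generic unescaper, under that octal-escape assumption:

import re

# Hedged sketch: undo rsyslog-style #NNN octal escapes (#012 = newline,
# #011 = tab) so multi-line daemon output reads as emitted.
def unescape_syslog(msg: str) -> str:
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

raw = "rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6601.4 total, 600.0 interval"
print(unescape_syslog(raw))

The interval figures are self-consistent: 702 writes over 350 syncs is 2.0057, matching the printed 2.01 writes per sync, and 0.35 MB over the 600-second interval is about 0.0006 MB/s, which rounds to the 0.00 MB/s shown.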
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425639936 unmapped: 79740928 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 79732736 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 79732736 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 79732736 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425648128 unmapped: 79732736 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 79716352 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425664512 unmapped: 79716352 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 79708160 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 79708160 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 79708160 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425672704 unmapped: 79708160 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425680896 unmapped: 79699968 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425680896 unmapped: 79699968 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425689088 unmapped: 79691776 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425697280 unmapped: 79683584 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425705472 unmapped: 79675392 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425705472 unmapped: 79675392 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425713664 unmapped: 79667200 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425721856 unmapped: 79659008 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425738240 unmapped: 79642624 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425738240 unmapped: 79642624 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425738240 unmapped: 79642624 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425738240 unmapped: 79642624 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425738240 unmapped: 79642624 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425746432 unmapped: 79634432 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425746432 unmapped: 79634432 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425746432 unmapped: 79634432 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425754624 unmapped: 79626240 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425754624 unmapped: 79626240 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425754624 unmapped: 79626240 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425762816 unmapped: 79618048 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425762816 unmapped: 79618048 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425762816 unmapped: 79618048 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425762816 unmapped: 79618048 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425762816 unmapped: 79618048 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425779200 unmapped: 79601664 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425779200 unmapped: 79601664 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425779200 unmapped: 79601664 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425779200 unmapped: 79601664 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425779200 unmapped: 79601664 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425787392 unmapped: 79593472 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425787392 unmapped: 79593472 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425787392 unmapped: 79593472 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425795584 unmapped: 79585280 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425795584 unmapped: 79585280 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425795584 unmapped: 79585280 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425795584 unmapped: 79585280 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425795584 unmapped: 79585280 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 79577088 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 79577088 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425803776 unmapped: 79577088 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425811968 unmapped: 79568896 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425811968 unmapped: 79568896 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425811968 unmapped: 79568896 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 79560704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425820160 unmapped: 79560704 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 79552512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 79552512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 79552512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 79552512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 79552512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 79552512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425828352 unmapped: 79552512 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425836544 unmapped: 79544320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425836544 unmapped: 79544320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425836544 unmapped: 79544320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425836544 unmapped: 79544320 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425852928 unmapped: 79527936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425852928 unmapped: 79527936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425852928 unmapped: 79527936 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425861120 unmapped: 79519744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425861120 unmapped: 79519744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425861120 unmapped: 79519744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425861120 unmapped: 79519744 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425869312 unmapped: 79511552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425869312 unmapped: 79511552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425869312 unmapped: 79511552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425869312 unmapped: 79511552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425869312 unmapped: 79511552 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425877504 unmapped: 79503360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425877504 unmapped: 79503360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425877504 unmapped: 79503360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425877504 unmapped: 79503360 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 79495168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 79495168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 79495168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425885696 unmapped: 79495168 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 79486976 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 79486976 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425893888 unmapped: 79486976 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425902080 unmapped: 79478784 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425918464 unmapped: 79462400 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 79454208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 79454208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 79454208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425926656 unmapped: 79454208 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 79446016 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 598.946594238s of 600.224914551s, submitted: 352
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 79446016 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 79446016 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [0,0,0,0,1,0,0,2])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524878 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425934848 unmapped: 79446016 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425943040 unmapped: 79437824 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 425984000 unmapped: 79396864 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426049536 unmapped: 79331328 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426082304 unmapped: 79298560 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426090496 unmapped: 79290368 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426098688 unmapped: 79282176 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 79273984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 79273984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 79273984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 79273984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 79273984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 79273984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426106880 unmapped: 79273984 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426115072 unmapped: 79265792 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426115072 unmapped: 79265792 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426115072 unmapped: 79265792 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426123264 unmapped: 79257600 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426123264 unmapped: 79257600 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426123264 unmapped: 79257600 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426123264 unmapped: 79257600 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426123264 unmapped: 79257600 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426123264 unmapped: 79257600 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426131456 unmapped: 79249408 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426139648 unmapped: 79241216 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426139648 unmapped: 79241216 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 79233024 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 79233024 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 79233024 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 79233024 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 79233024 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426147840 unmapped: 79233024 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 79224832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 79224832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 79224832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 79224832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 79224832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 79224832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426156032 unmapped: 79224832 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 79216640 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 79216640 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 79216640 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 79216640 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426164224 unmapped: 79216640 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426180608 unmapped: 79200256 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426180608 unmapped: 79200256 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426180608 unmapped: 79200256 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426180608 unmapped: 79200256 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426188800 unmapped: 79192064 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 79183872 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 79183872 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 79183872 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 79183872 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 79183872 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426196992 unmapped: 79183872 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 79175680 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426205184 unmapped: 79175680 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426213376 unmapped: 79167488 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 79159296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 79159296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426221568 unmapped: 79159296 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 79151104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 79151104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 79151104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 79151104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426229760 unmapped: 79151104 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 79134720 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46492 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 79134720 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 79134720 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426246144 unmapped: 79134720 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426254336 unmapped: 79126528 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426254336 unmapped: 79126528 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426270720 unmapped: 79110144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426270720 unmapped: 79110144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426270720 unmapped: 79110144 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426278912 unmapped: 79101952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426278912 unmapped: 79101952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426278912 unmapped: 79101952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426278912 unmapped: 79101952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426278912 unmapped: 79101952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426278912 unmapped: 79101952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426278912 unmapped: 79101952 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426295296 unmapped: 79085568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426295296 unmapped: 79085568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426295296 unmapped: 79085568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426295296 unmapped: 79085568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426295296 unmapped: 79085568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426295296 unmapped: 79085568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426295296 unmapped: 79085568 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: bluestore.MempoolThread(0x556d2931db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4524719 data_alloc: 218103808 data_used: 23678976
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426442752 unmapped: 78938112 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1989f9c6), peers [1,2] op hist [])
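
Each heartbeat line above carries a store_statfs triple in raw hex. Reading it as free / internally reserved / total bytes (an assumption, inferred from how these values line up with the pgmap capacity lines) gives roughly 6.5 GiB free of 7.0 GiB on osd.0; three such OSDs match the cluster-wide "19 GiB / 21 GiB avail". A sketch of that conversion:

    import re

    # Field order free / internally-reserved / total is an assumption inferred
    # from how these values track the pgmap capacity lines in this log.
    STATFS_RE = re.compile(r"store_statfs\((0x[0-9a-f]+)/(0x[0-9a-f]+)/(0x[0-9a-f]+)")

    line = ("osd.0 438 heartbeat osd_stat(store_statfs(0x1a33a4000/0x0/0x1bfc00000, "
            "data 0x2d8a0cd/0x2fba000, compress 0x0/0x0/0x0, omap 0x63a, "
            "meta 0x1989f9c6), peers [1,2] op hist [])")
    free, reserved, total = (int(v, 16) for v in STATFS_RE.search(line).groups())
    print(f"osd.0: {free / 2**30:.2f} GiB free of {total / 2**30:.2f} GiB; "
          "three such OSDs match the pgmap '19 GiB / 21 GiB avail'")
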
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config diff' '{prefix=config diff}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config show' '{prefix=config show}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter dump' '{prefix=counter dump}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter schema' '{prefix=counter schema}'
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
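
The do_command pairs above are the OSD answering admin-socket requests ('config diff', 'config show', 'counter dump', 'counter schema'). The same commands can be issued from the host with the ceph daemon CLI; a sketch, assuming direct access to the daemon's admin socket (under cephadm this usually means running inside `cephadm shell` first):

    import json
    import subprocess

    # Issues the same admin-socket commands shown in the do_command lines above.
    def osd_daemon_command(osd_id, *cmd):
        out = subprocess.run(["ceph", "daemon", f"osd.{osd_id}", *cmd],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # Example mirroring the 'config show' call above (needs a live daemon):
    # cfg = osd_daemon_command(0, "config", "show")
    # print(cfg["osd_memory_target"])   # the 4 GiB target seen in tune_memory
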
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426344448 unmapped: 79036416 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: prioritycache tune_memory target: 4294967296 mapped: 426082304 unmapped: 79298560 heap: 505380864 old mem: 2845415832 new mem: 2845415832
Nov 29 04:08:56 np0005539550 ceph-osd[84753]: do_command 'log dump' '{prefix=log dump}'
Nov 29 04:08:56 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:56 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:08:56 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
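
The beast lines above are the radosgw access log: client address, user, timestamp, request line, HTTP status, body bytes, and measured latency (here anonymous HEAD / probes from 192.168.122.100 and .102, most likely load-balancer health checks). A regex sketch that matches this exact format:

    import re

    # Matches the beast access-log lines above: address, user, timestamp,
    # request line, HTTP status, body bytes, and latency.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7fdb608746f0: 192.168.122.102 - anonymous '
            '[29/Nov/2025:09:08:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000025s')
    m = BEAST_RE.search(line)
    print(m.group("addr"), m.group("req"), m.group("status"),
          f'{float(m.group("latency")) * 1000:.3f} ms')
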
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46931 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38613 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:56 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46516 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
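
The audit lines above record every CLI command dispatched to the mgr, with the command itself embedded as a JSON array after cmd=. Counting prefixes is a quick way to see what is polling the orchestrator; a sketch over abridged copies of two of the lines:

    import json
    import re
    from collections import Counter

    # Audit lines embed the dispatched command as a JSON array after cmd=.
    CMD_RE = re.compile(r"cmd=(\[.*\]): dispatch")

    lines = [  # abridged copies of two audit lines above
        'cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch',
        'cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch',
    ]
    counts = Counter(json.loads(CMD_RE.search(l).group(1))[0]["prefix"]
                     for l in lines)
    print(counts.most_common())
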
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46949 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 29 04:08:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955093459' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38625 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46534 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46958 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 29 04:08:57 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340157090' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38637 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46546 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:57 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46970 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38649 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46558 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:08:58.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 29 04:08:58 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224592684' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38658 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:58 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
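
The mon's _set_new_cache_sizes line splits a ~0.95 GiB cache budget into incremental-map, full-map, and kv allocations; the three figures sum to slightly under cache_size, plausibly because each is rounded to allocator-friendly chunks (that explanation is an assumption; the arithmetic is from the line itself):

    # Figures copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104
    allocated = inc_alloc + full_alloc + kv_alloc
    print(allocated, f"{allocated / cache_size:.3f}")   # ~99% of the budget
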
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46573 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:58 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:08:58 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:08:58 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:08:58.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47006 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:58 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:58.733+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:58 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:58 np0005539550 nova_compute[257631]: 2025-11-29 09:08:58.816 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:08:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 29 04:08:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859725612' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46612 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:59 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:59.270+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38688 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:08:59 np0005539550 ceph-b66774a7-56d9-5535-bd8c-681234404870-mgr-compute-0-pdhsqi[74722]: 2025-11-29T09:08:59.295+0000 7f31e0735640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
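
The repeated "(95) Operation not supported" replies above show a poller calling 'healthcheck history ls', a command served by the mgr prometheus module; the error text itself names the remedy. A sketch that checks and applies it via the same 'mgr module ls' command dispatched at 09:09:01 below (assumes a reachable cluster, admin credentials, and that the JSON carries an enabled_modules key, as in recent Ceph releases):

    import json
    import subprocess

    def ceph_json(*args):
        # Assumes the ceph CLI and admin credentials are available on this host.
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # Same command the mon dispatches at 09:09:01 below.
    modules = ceph_json("mgr", "module", "ls")
    if "prometheus" not in set(modules.get("enabled_modules", [])):
        # Remedy quoted verbatim in the error message above.
        subprocess.run(["ceph", "mgr", "module", "enable", "prometheus"], check=True)
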
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2025-11-29_09:08:59
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] do_upmap
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'volumes', 'vms', 'backups', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.rgw.root']
Nov 29 04:08:59 np0005539550 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
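
The balancer round above ran in upmap mode against all eleven pools and prepared 0/10 changes, which reads as zero of an allowed ten optimizations per round; nothing needed moving with every PG active+clean. With max misplaced 0.05 it would in any case touch at most about 15 of the 305 PGs:

    # From the balancer lines above: max misplaced 0.05 over 305 PGs, and a
    # per-round cap that reads as ten candidate changes ("prepared 0/10").
    max_misplaced = 0.05
    pg_total = 305
    print(int(max_misplaced * pg_total))   # at most ~15 PGs remapped per round
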
Nov 29 04:08:59 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 29 04:08:59 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531553137' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/564753204' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 29 04:09:00 np0005539550 nova_compute[257631]: 2025-11-29 09:09:00.041 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:09:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:00.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2911475188' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317702421' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3160562955' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 29 04:09:00 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:00 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:00 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 29 04:09:00 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844945064' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2313399006' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 29 04:09:01 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047680289' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3961109704' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 29 04:09:01 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169594011' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 29 04:09:01 np0005539550 systemd[1]: Starting Hostname Service...
Nov 29 04:09:01 np0005539550 systemd[1]: Started Hostname Service.
Nov 29 04:09:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:02.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:02 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 29 04:09:02 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081357443' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 29 04:09:02 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:02 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:09:02 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:02.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:09:02 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47144 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:02 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47153 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38799 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47159 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47165 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38805 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.555989) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343556351, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2272, "num_deletes": 251, "total_data_size": 4021378, "memory_usage": 4091944, "flush_reason": "Manual Compaction"}
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343589597, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 3941526, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78266, "largest_seqno": 80537, "table_properties": {"data_size": 3930997, "index_size": 6638, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23583, "raw_average_key_size": 21, "raw_value_size": 3909351, "raw_average_value_size": 3518, "num_data_blocks": 288, "num_entries": 1111, "num_filter_entries": 1111, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764407120, "oldest_key_time": 1764407120, "file_creation_time": 1764407343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 33776 microseconds, and 14530 cpu microseconds.
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.589802) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 3941526 bytes OK
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.589902) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.591772) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.591787) EVENT_LOG_v1 {"time_micros": 1764407343591782, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.591813) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4011775, prev total WAL file size 4011775, number of live WAL files 2.
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.593180) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(3849KB)], [179(11MB)]
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343593421, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 16080092, "oldest_snapshot_seqno": -1}
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38811 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47180 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46744 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:03 np0005539550 nova_compute[257631]: 2025-11-29 09:09:03.817 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 11860 keys, 14092137 bytes, temperature: kUnknown
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343887767, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 14092137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14017788, "index_size": 43580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29701, "raw_key_size": 313688, "raw_average_key_size": 26, "raw_value_size": 13812115, "raw_average_value_size": 1164, "num_data_blocks": 1647, "num_entries": 11860, "num_filter_entries": 11860, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764400316, "oldest_key_time": 0, "file_creation_time": 1764407343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ff4fe731-8e89-4164-b95b-761a6503bf52", "db_session_id": "YJG4JH9P5UPLR9LZG3U4", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.888180) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 14092137 bytes
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.899207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.6 rd, 47.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.6 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 12377, records dropped: 517 output_compression: NoCompression
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.899271) EVENT_LOG_v1 {"time_micros": 1764407343899245, "job": 112, "event": "compaction_finished", "compaction_time_micros": 294477, "compaction_time_cpu_micros": 50013, "output_level": 6, "num_output_files": 1, "total_output_size": 14092137, "num_input_records": 12377, "num_output_records": 11860, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343900434, "job": 112, "event": "table_file_deletion", "file_number": 181}
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764407343902624, "job": 112, "event": "table_file_deletion", "file_number": 179}
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.593049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.902673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.902680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.902682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.902684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 29 04:09:03 np0005539550 ceph-mon[74435]: rocksdb: (Original Log Time 2025/11/29-09:09:03.902686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
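
The compaction summary above reports read-write-amplify(7.7) and write-amplify(3.6); both figures can be reproduced from the EVENT_LOG_v1 payloads, which are plain JSON after the marker, plus the size of the Level-0 flush table #181. A sketch using abridged copies of the job-112 lines:

    import json
    import re

    # EVENT_LOG_v1 payloads are plain JSON after the marker; abridged copies of
    # the job-112 lines above are enough to reproduce the amplification figures.
    def event(line):
        return json.loads(re.search(r"EVENT_LOG_v1 (\{.*\})", line).group(1))

    started = event('rocksdb: EVENT_LOG_v1 {"job": 112, '
                    '"event": "compaction_started", "input_data_size": 16080092}')
    finished = event('rocksdb: EVENT_LOG_v1 {"job": 112, '
                     '"event": "compaction_finished", "total_output_size": 14092137}')
    l0_input = 3941526                      # table #181, the Level-0 flush above
    out = finished["total_output_size"]
    print(f"write-amplify {out / l0_input:.1f}, read-write-amplify "
          f"{(started['input_data_size'] + out) / l0_input:.1f}")   # 3.6 and 7.7
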
Nov 29 04:09:03 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38823 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38829 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47195 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46765 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:09:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:04.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46771 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47210 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38838 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:04 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.001000025s ======
Nov 29 04:09:04 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:04.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46795 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 29 04:09:04 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4261607124' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47225 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:04 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38847 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539550 nova_compute[257631]: 2025-11-29 09:09:05.043 257641 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 04:09:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46810 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47240 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 29 04:09:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810539754' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38862 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:09:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46825 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46837 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 29 04:09:05 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1757069918' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 29 04:09:05 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46849 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190167422' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 29 04:09:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.100 - anonymous [29/Nov/2025:09:09:06.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:09:06 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46879 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 29 04:09:06 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47300 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:06 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46897 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 29 04:09:06 np0005539550 radosgw[93278]: ====== starting new request req=0x7fdb608746f0 =====
Nov 29 04:09:06 np0005539550 radosgw[93278]: ====== req done req=0x7fdb608746f0 op status=0 http_status=200 latency=0.000000000s ======
Nov 29 04:09:06 np0005539550 radosgw[93278]: beast: 0x7fdb608746f0: 192.168.122.102 - anonymous [29/Nov/2025:09:09:06.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 29 04:09:06 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1238092084' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 29 04:09:07 np0005539550 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Nov 29 04:09:07 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.38949 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:07 np0005539550 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.46936 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 29 04:09:07 np0005539550 ceph-mon[74435]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 29 04:09:07 np0005539550 ceph-mon[74435]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/153233698' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
